Semantic Interpretation Using KL-ONE¹

Norman K. Sondheimer, USC/Information Sciences Institute, Marina del Rey, California 90292 USA
Ralph M. Weischedel, Dept. of Computer & Information Sciences, University of Delaware, Newark, Delaware 19716 USA
Robert J. Bobrow, Bolt Beranek and Newman, Inc., Cambridge, Massachusetts 02238 USA

Abstract

This paper presents extensions to the work of Bobrow and Webber [Bobrow&Webber 80a, Bobrow&Webber 80b] on semantic interpretation using KL-ONE to represent knowledge. The approach is based on an extended case frame formalism applicable to all types of phrases, not just clauses. The frames are used to recognize semantically acceptable phrases, identify their structure, and relate them to their meaning representation through translation rules. Approaches are presented for generating KL-ONE structures as the meaning of a sentence, for capturing semantic generalizations through abstract case frames, and for handling pronouns and relative clauses.

1. Introduction

Semantic interpretation is the process of relating the syntactic analysis of an utterance to its meaning representation. Syntactic analyses associate immediate constituents with their syntactic function in a matrix constituent; e.g., the sentence "Send him the message that arrived yesterday." has a syntactic analysis in RUS [Bobrow 78] as shown in Figure 1.² The elements of the meaning representation are the objects, events, and states of affairs perceived by the speaker. The relationships between these entities will be called semantic functions.

The basis for our semantic processing scheme is a familiar one, based on the case frames used to describe clause structure [Bruce 75]. Our case frames are used for all phrase types: clauses, noun phrases, prepositional phrases, etc. We choose to represent both the syntactic and semantic analyses in the knowledge representation language KL-ONE [Brachman&Schmolze 82, Schmolze&Lipkis 83, Moser 83]. The essential properties of the meaning representations constructed are that each concept represents a semantic constituent and each of its roles identifies the semantic function of one of its immediate constituents. Figure 2³ gives an analysis of the example sentence above. We have picked a constituent structure and names for semantic functions fitting the computer mail application of the Consul project at USC/Information Sciences Institute [Kaczmarek 83]. The exact details of the analysis are not critical; the essential point is that semantic interpretation relates a phrase's analysis based on syntactic criteria to one based on semantic criteria.

¹This material is based upon work supported in part by the Defense Advanced Research Projects Agency under Contract Numbers MDA 903-81-C-0335, ARPA Order No. 2223, and N00014-77-C-0378, ARPA Order No. 3414. Views and conclusions contained in this paper are the authors' and should not be interpreted as representing the official policies of DARPA, the U.S. Government, or any person or agency connected with them.

²We use this sentence to illustrate many of the points in this paper. Assume that "yesterday" modifies "arrived".

³All of the KL-ONE diagrams in this paper are simplified for expository purposes.

Clause
  Head: Send
  Indirect Object: Noun Phrase
    Head: Him
  Direct Object: Noun Phrase
    Head: Message
    Article: The
    Relative: Clause
      Head: Arrive
      Subject: That
      Time: Yesterday

Figure 1: Syntactic Analysis of "Send him the message that arrived yesterday." Simplifications in tense, determiners, and number are for the sake of presentation.
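Read as a data structure, the analysis in Figure 1 is simply a nesting of constituents labeled with their syntactic functions. The following sketch (ours, for concreteness only; RUS does not use this representation) renders it as a Python structure:

```python
# Illustrative rendering of the Figure 1 analysis: each constituent pairs
# its immediate constituents with their syntactic functions.
syntactic_analysis = {
    "type": "Clause",
    "Head": "send",
    "Indirect Object": {"type": "Noun Phrase", "Head": "him"},
    "Direct Object": {
        "type": "Noun Phrase",
        "Head": "message",
        "Article": "the",
        "Relative": {
            "type": "Clause",
            "Head": "arrive",
            "Subject": "that",      # the relative pronoun
            "Time": "yesterday",
        },
    },
}
```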
Figure 2: Meaning Representation of "Send him the message that arrived yesterday." Simplifications on determiners and the further-constraints structure are for the sake of presentation.

Our framework does not assume that a syntactic analysis of a complete sentence is found before semantic interpretation begins. Rather, the implemented semantic interpreter proceeds incrementally as the grammar proposes the syntactic function of an immediate constituent; this model of communication between syntax and semantics has been termed a cascade [Woods 80, Bobrow&Webber 80b].

To achieve semantic interpretation, some well-known types of knowledge need to be employed, e.g., selection restriction information (often represented using semantic features), structural information (often encoded in case frames), and translation information (often defined with various kinds of projection rules). Some of the difficulties in representing and applying this knowledge include the following:

1. Translation rules (projection rules) for generating correct meaning representations must be defined. We have been able to define modular projection rules that make use of the inheritance properties of KL-ONE.

2. Since much of the knowledge for a particular application is necessarily domain specific, it is important to organize it in a way that eases extension of the knowledge base and eases moving to a new domain.

3. Since distributional restrictions require specific semantic features, pronouns and other semantically neutral terms not necessarily having those features must be accepted wherever they are consistent with the expected type of noun phrase.

4. The inter-constituent relationships arising in relative clauses must be consistent with all selection restrictions and be represented in the resulting meaning representation.

This paper addresses each of these issues in turn. We are building on techniques presented by Bobrow and Webber [Bobrow&Webber 80a, Bobrow&Webber 80b]. This paper describes the system currently in use at USC/Information Sciences Institute. The basic framework is reviewed in Section 2. Section 3 presents the translation mechanism [Sondheimer 84]. Capturing semantic generalizations is the topic of Section 4. Sections 5 and 6 discuss issues regarding pronouns and relative clauses, respectively. Related work is identified in Section 7. The final section summarizes the results and identifies further work. A very brief introduction to KL-ONE is provided in an appendix.

2. Background

The framework being developed uses a frame for each semantically distinguishable type of phrase. Thus, a frame will be required for each class of phrase having a unique combination of:

- semantic distribution,
- selection restrictions on constituents making up the phrase, and
- assignment of semantic relations to syntactic functions.

It is likely that the frames will reflect the natural categories of descriptions of objects, events, actions, and states of affairs in any particular application. For example, in the computer mail domain, the following are some frames that have been useful:

- Clauses describing the sending of messages: SEND-CLAUSE
- Clauses describing message arrival: ARRIVE-CLAUSE
- Noun phrases describing messages: MESSAGE-NP
- Noun phrases describing senders and recipients: USER-NP

In the framework developed by Bobrow and Webber [Bobrow&Webber 80a, Bobrow&Webber 80b], for each frame, each possible immediate constituent is associated by syntactic function with a case or slot. The clause frames have slots identified as head, subject,⁴ direct object, indirect object, etc. Noun phrase frames have slots for the head, adjective modifiers, article, etc. Each slot specifies the fillers that are semantically acceptable, whether it is required or optional, and the number of times it may be filled in a phrase. The constraints on fillers of frames' slots are stated in terms of other frames, e.g., the direct object of a SEND-CLAUSE must be a MESSAGE-NP, or in terms of word senses and categories of those senses. Some example word sense categories are:

- Message description nouns, such as "message" or "letter": MESSAGE-NOUN
- Information transmission verbs, such as "send" or "forward": TRANSMISSION-VERB

In our domain the constraint on the subject of an ARRIVE-CLAUSE is that it satisfy the MESSAGE-NP frame. A constraint on the head of the MESSAGE-NP frame is that it be a word sense in the category MESSAGE-NOUN.

Frames are represented as KL-ONE concepts. Case slots appear as roles of concepts.⁵ Semantic constraints on what can fill a case slot are encoded as the value restrictions of roles. These value restrictions are concepts representing frames, word senses, or word sense categories. Number restrictions on roles show the number of times the syntactic function may be realized. A required slot is marked by the number restriction on its role having a minimum of 1; an optional slot has a number restriction with a minimum of 0 and a maximum greater than 0. A phrase is said to instantiate a given frame X if and only if its immediate constituents satisfy the appropriate value and number restrictions of all of X's roles.⁶ The collection of frames and word-sense information is called a Syntaxonomy (for syntactic taxonomy), since it encodes knowledge regarding semantic interpretation in a hierarchy of syntactic classes.

⁴Subject, object, etc. refer to logical roles rather than surface syntactic ones.

⁵It is possible to associate roles with semantically defined subsets of other roles, e.g., to assign separate roles to uses of color adjectives, size adjectives, etc. This is an important convenience in constructing frames but not crucial to our discussion.

⁶A recognition algorithm for this representation has been presented [Bobrow&Webber 80b] and several others have been developed since then. These will be presented in separate reports.
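To make the frame machinery concrete, here is a minimal sketch, in Python rather than KL-ONE, of slots carrying value and number restrictions and the instantiation test just defined. All names and the dictionary layout are invented for illustration; in the actual system these are KL-ONE concepts, roles, value restrictions, and number restrictions.

```python
from dataclasses import dataclass, field

@dataclass
class Slot:
    value_restriction: str   # name of a frame or word-sense category
    min_fillers: int         # minimum of 1 marks a required slot
    max_fillers: int

@dataclass
class Frame:
    name: str
    slots: dict = field(default_factory=dict)

SEND_CLAUSE = Frame("SEND-CLAUSE", {
    "head":            Slot("TRANSMISSION-VERB", 1, 1),
    "direct object":   Slot("MESSAGE-NP",        1, 1),
    "indirect object": Slot("USER-NP",           0, 1),   # optional
})

def instantiates(frame, constituents, satisfies):
    """constituents maps a slot name to its list of fillers; satisfies
    (filler, restriction) consults the syntaxonomy. A phrase instantiates
    the frame iff every role's value and number restrictions hold."""
    for name, slot in frame.slots.items():
        fillers = constituents.get(name, [])
        if not slot.min_fillers <= len(fillers) <= slot.max_fillers:
            return False
        if not all(satisfies(f, slot.value_restriction) for f in fillers):
            return False
    return True
```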
3. Translation Rules

To achieve the mapping from syntactic analysis to meaning representation, translation rules are associated with individual frames. Though the rules we give generate KL-ONE structures as the meaning representation, other translation rules could be developed for generating forms in a different target representation language. Any KL-ONE concept C representing a frame has an associated concept C' representing the main predicate of the translation. For example, the translation of SEND-CLAUSE is the concept Send-mail. Translations are stored in data attached to the frame; we label this data TRANSLATION. The translation rules themselves can be associated with individual case slots. When inheritance results in more than one translation rule for a case slot, the one originating from the most specific frame in the hierarchy is selected.⁷

Suppose we are building the translation C' of a matched frame C. One common translation rule that could appear at a role R of C is (Paraphrase-as R'). This establishes the translation of the filler of R as the filler of R' at concept C'. For example, the indirect object slot of SEND-CLAUSE has the rule "(Paraphrase-as addressee)" to map the translation of the noun phrase in the indirect object position into the addressee role of the Send-mail. Another rule form, (Attach-SD sf), takes a semantic function sf as an argument and attaches the translation of the constituent filling R as the filler F of sf. An example of its use in the processing of relative clauses is described in Section 6. Attach-SD differs from Paraphrase-as by having facilities to establish a role from F to C'. This automatic feature is essentially the opposite of Paraphrase-as, in that a semantic function runs from the embedded constituent to its matrix phrase. Another rule form is not a translation rule per se, but stores data with the syntactic concept representing the syntactic analysis of the phrase. The data can be checked by other (conditional) translation rules.

Underlying these forms, and available for more complex types of translation, is a general mechanism having the form "source ==> goal". The source identifies the structure that is to be placed at the location identified by the goal. The formalism for the source allows reference to arbitrary constants and concepts and to a path through the concepts, roles, and attached data of a KL-ONE network. The goal formalism also shows a path through a network and may specify establishment of additional roles.

A separate test may be associated with a translation rule to state conditions on the applicability of the rule. If the test is false, the rule does not apply, and no translation corresponding to that role is generated. The most common type of condition is (Realized-Function? role), which is true if and only if some immediate constituent fills that role in the analysis. It can be used as an explicit statement that an optional role is translated only if filled, or as a way of stating that one constituent's translation depends on the presence of another role. Additional conditions are (EMPTY-ROLE? role), which checks that role is not filled, and (ROLE-FILLER? role class), which checks that the filler of role is of type class. Since all three take a role name as argument, they may be used to state cross-dependencies among roles.

Figure 3 contains some of the frames that allow for the analysis of our example. The treatment of the pronoun and relative clause in the example sentence of Section 1 will be explained in Sections 5 and 6.

⁷There is also an escape mechanism that allows inheritance of all rules not indexed to any role.
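A hedged sketch of how the two commonest rule forms might be applied when a frame's translation is built. (Realized-Function? role) and (Paraphrase-as role) follow the text; the dictionary-based concepts and the rule table are our illustrative scaffolding, not the system's KL-ONE representation.

```python
def realized_function(analysis, role):
    # (Realized-Function? role): true iff some immediate constituent
    # fills that role in the syntactic analysis.
    return analysis.get(role) is not None

def apply_rules(head_concept, rules, analysis, translate):
    """Build the frame's translation C'. rules maps a role to a
    ("paraphrase-as", target-role) pair; translate(filler) returns the
    filler's own, recursively computed, translation."""
    translation = {"concept": head_concept}          # e.g. "Send-mail"
    for role, (form, target) in rules.items():
        if not realized_function(analysis, role):    # unfilled optional role
            continue
        if form == "paraphrase-as":
            translation[target] = translate(analysis[role])
    return translation

# The SEND-CLAUSE rules of Figure 3, in this toy notation:
SEND_RULES = {"indirect object": ("paraphrase-as", "addressee"),
              "direct object":   ("paraphrase-as", "message")}
```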
4. Capturing Semantic Generalizations via Abstract Case Frames

Verbs can be grouped with respect to the cases they accept [Simmons 73, Celce-Murcia 76, Gawron 83]; likewise, groups exist for nouns. A KL-ONE syntaxonomy allows straightforward statement of common properties, as well as individually distinct properties of group members. Abstract case frames are semantic generalizations applicable across a set of the familiar sort of concrete frames. Properties common to the generalization can be defined at the abstract frames and related to the concrete frames through inheritance.

The use of time modification in "that arrived yesterday" is the same as that of other verbs describing completion of an activity, e.g., "come", "reach", and "finish". A general frame for clauses with these verbs can show this role. The concrete frames for clauses with verbs in this group are subconcepts and thereby accept the time modifier (see Figure 4). The concrete frames can restrict both the number and type of time modifiers, if necessary. Translation rules associated with this time role can also be restricted at the concrete frames.

Some modifiers dramatically affect the translation of entire phrases, as does the partitive modifier "half of". A description of "half of" some individual entity (as opposed to a set of entities) may not have the same distribution. For example, "Delete this message from my directory." makes sense, but "Delete half of this message from my directory." does not. This can easily be stated through an abstract frame for the basic message description specialized by two concrete frames (see Figure 5).

A related case is "toy X". The translation of "toy X" is certainly different from that of X, and their distributions may differ as well. This may be handled in a way similar to the partitive example.⁸

This class of examples points out the limits of case frame systems. Other modifiers, such as "model" and "fake", are easily recognizable. However, more complex modifiers also make the same distinctions, e.g., "The gun that was a fake was John's." and "The gun that was made of soap was John's." Viewing our semantic interpretation system as a special-purpose inference system, it seems prudent to leave the recognition of the type of these "guns" to more general-purpose reasoners.

⁸An interesting alternative is to show the toy modifier as an optional role on an abstract frame for object descriptions. Underneath it could be an abstract frame distinguished only by requiring the toy modification role. All appropriate inferences associated with descriptions of toys could be associated with this concept. Frames for the basic descriptions of specific object types could be placed underneath the object description frame. These could recognize "toy X". Our systems invoke the KL-ONE classifier after the recognition of each phrase [Schmolze&Lipkis 83]. In this case, classification will result in identification of the phrase as a kind of both X description and toy description, allowing translation to show what is known about both without creating a "toy X" frame by hand. We have not completely analyzed the effect of this strategy on the translation system.

Figure 3: Some frames used for "Send him the message that arrived yesterday." (The diagram cannot be fully reproduced here; the recoverable roles, number restrictions, and translation rules are as follows.)
- SEND-CLAUSE (TRANSLATION: Send-mail): Indirect Object -- Translation Rule: If (Realized-Function? Indirect Object) then (Paraphrase-as addressee); Direct Object -- Translation Rule: (Paraphrase-as message).
- ARRIVE-CLAUSE (TRANSLATION: Arrival-mail): Subject (Min: 0, Max: 1) -- Translation Rule: If (Realized-Function? Subject) then (Paraphrase-as message); Time (Min: 0, Max: 1) -- Translation Rule: If (Realized-Function? Time) then (Paraphrase-as completion-time-interval).
- MESSAGE-NP: Head (Min: 1, Max: 1); Determiner (Min: 1); Relative (Min: 0, Max: ∞) -- Translation Rule: If (Realized-Function? Relative) then (Attach-SD further-constraint).

Figure 4: A fragment of the syntaxonomy. Double arrows are subc relationships, i.e., essentially "is-a" arcs. Not all roles are shown. (Diagram omitted.)

Figure 5: Syntaxonomy for partitives: an abstract message description frame specialized by two concrete frames, one with a partitive role of Min: 0, Max: 0 and the other with Min: 1, Max: 1.
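The effect of abstract case frames on rule selection (as in Figure 4) can be sketched as follows: walk the chain of frames from the root of the syntaxonomy down to the concrete frame, letting more specific frames override inherited translation rules, as stated at the start of Section 3. The frame names and tables here are invented for the example.

```python
def inherited_rules(frame, parents, local_rules):
    """parents maps frame -> parent frame; local_rules maps frame ->
    {slot: rule}. Rules from more specific frames override inherited ones."""
    chain = [frame]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    rules = {}
    for f in reversed(chain):        # root first, the frame itself last
        rules.update(local_rules.get(f, {}))
    return rules

parents = {"ARRIVE-CLAUSE": "COMPLETION-CLAUSE",
           "COMPLETION-CLAUSE": "CLAUSE"}
local_rules = {
    "COMPLETION-CLAUSE": {"time": "(Paraphrase-as completion-time-interval)"},
    "ARRIVE-CLAUSE":     {"subject": "(Paraphrase-as message)"},
}
# inherited_rules("ARRIVE-CLAUSE", parents, local_rules) yields both rules;
# an ARRIVE-CLAUSE-level entry for "time" would take precedence if added.
```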
Abstract case frames have significantly eased the development and expansion of semantic coverage within our application by helping us to focus on issues of generality and specificity. The new frames we add have many slots established by inheritance; consistency has been easier to maintain; and the structure of the resulting syntaxonomy has helped in debugging.

5. Semantically Neutral Terms

Case frames are an attempt to characterize semantically coherent phrases, for instance, by selection restrictions. In computational linguistics, selection restrictions have been applied to the constituents that are possible fillers rather than to what the constituents denote. For example, the restriction on the direct object of a SEND-CLAUSE is MESSAGE-NP, rather than messages. Problems with using such approximations in parsing are discussed in [Ritchie 83]. For many natural language interfaces, a noun phrase's internal structure gives enough information to determine whether it satisfies a restriction.⁹ However, there are forms whose semantic interpretation does not provide enough information to guarantee the satisfaction of a constraint and yet need to be allowed as fillers for slots. These include pronouns, some elliptical forms, such as "the last three", and other neutral noun phrase forms, such as "the thing" and "the gift". This also includes some nonlexical gestural forms like the input from a display that shows where the user pointed (literally or via a mouse). We refer to all of these as semantically neutral terms.

A semantic interpretation system should accept such forms without giving up restrictions on acceptable semantic categories. However, these forms cannot, in general, appear everywhere. In discussing computer mail, "I sent him" should be considered nonsense. Bobrow and Webber [Bobrow&Webber 80b] propose a general strategy for testing the compatibility of a constituent as a slot filler based on non-incompatibility. The current system at USC/ISI takes a conservative view of this proposal, developing the idea for neutral reference forms only. All noun phrase types displaying neutral reference are defined as instances of the concept NeutralReference-NP. Furthermore, disjointness relations are marked between the various subclasses of neutral references and those classes of explicit descriptions which have nonintersecting sets of potential referents. During interpretation, when such a NeutralReference-NP is proposed as a slot filler and that concept is not disjoint from the value restriction on the slot, it is accepted.

In addition, since the slot restriction and the filler each have meaning of their own, e.g., "he" describes a human male in the computer mail domain, the translation should show the contribution of both the neutral term and the constraint on the slot. When the neutral form is qualified as a constituent by the system, both the neutral form and the selection constraint are remembered. When it is time to produce the translation, the translation rule for the slot applies to a concept which is the conjunction of the translations of the neutral reference form and the restriction. Part of the network that supports the translation of "he" in the example of Section 1 is shown in Figure 6.

⁹Clearly, misreference also interferes with this method [Goodman 83], as does personification, metonymy, and synecdoche. We propose other methods for these last phenomena in [Weischedel 84, Weischedel 83].
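A minimal sketch of the acceptance and translation of neutral terms as just described. The disjointness pairs and concept names are invented; the system marks disjointness between KL-ONE concepts and conjoins the two translations in the network.

```python
# Pairs of neutral-reference subclasses and explicit-description classes
# with nonintersecting potential referents (invented examples).
DISJOINT = {("HE-NP", "MESSAGE-NP"), ("IT-NP", "USER-NP")}

def acceptable_neutral_filler(neutral_class, value_restriction):
    # Accept the neutral form wherever it is not marked disjoint from
    # the slot's value restriction.
    return (neutral_class, value_restriction) not in DISJOINT

def translate_neutral(neutral_translation, restriction_translation):
    # Both contribute: conjoin the neutral term's content (e.g. "human
    # male" for "he") with the slot constraint's content ("computer user").
    return {"and": [neutral_translation, restriction_translation]}
```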
Referring to Figures 2 and 3, the effect of a reference to a male where a reference to a computer-user was expected can be seen.

Figure 6: Network for "he". (The diagram, simplified here, shows a noun phrase frame with a head role (Min: 1, Max: 1) and an attached TRANSLATION whose concept carries a sex role. Note that Computer-User is a subconcept of Person.)

6. Inter-Constituent Relationships: Relative Clauses

In relative clauses, the constraint on the slot filled by the relative pronoun or the trace¹⁰ must be satisfied by the noun phrase that the relative clause modifies. In addition, the translation of the noun phrase must reflect the contribution of the use of the pronoun or trace in the relative clause. For example, in "Send him the message that arrived yesterday", the constraint on the subject of "arrive" must be satisfied by the noun phrase of which it is a part. Further, translation must result in co-reference within the meaning representation between the value of the message role of the Arrival-mail concept and the value of the message role of the Send-mail concept (see Figure 2). This is a form of inter-constituent relationship.

Our system processes relative clauses by treating the relative pronouns and trace elements as neutral reference forms (just as in the pronominal cases discussed in Section 5) and by storing the constraints on the head of the relative clause until they can be employed directly. In our example, the noun phrase "that" is seen as a Trace-NP, a kind of NeutralReference-NP. The structure assigned "that" is compatible with MESSAGE-NP and hence acceptable. On translation, the Trace-NP is treated like a neutral reference, but the role and unchecked constraint are recorded as attached data on the instantiated case frame that results from parsing the arrival clause. In the example, the facts that a Trace-NP is in the subject role and that a MESSAGE-NP is required are stored. That constraint is tested against the classification of the matrix noun phrase when the clause is proposed as a relative clause modifier.¹¹

If that constraint is satisfied, the fact that the relative pronoun and noun phrase co-refer is recorded. When the entire noun phrase is processed successfully, the appropriate co-references are established by performing (Attach-SD further-constraint) and by retrieving the translation associated with the role filled by the Trace-NP. This establishes co-reference between the concept attached by the translation rule and the translation of the entire noun phrase. In our example, the translation of the noun phrase is made the value of the message role of the Arrival-mail.

¹⁰The RUS parser which we employ supplies a "trace" to establish a syntactic place holder with reduced relatives.

¹¹If the use of the relative pronoun or trace is inside a phrase inside the relative clause, as in "the town from which I come", the role and constraint will be passed upward twice.
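The bookkeeping just described can be sketched as follows. This is a toy rendering under an invented data layout; the real system attaches this data to the instantiated KL-ONE case frame and retrieves the semantic role via the slot's translation rule (e.g., subject maps to message), which we abbreviate here.

```python
def record_trace(clause, role, constraint):
    # e.g. role="subject", constraint="MESSAGE-NP", stored on the
    # instantiated ARRIVE-CLAUSE frame until it can be employed directly.
    clause["pending-trace"] = (role, constraint)

def propose_relative(matrix_np, clause, satisfies, semantic_role):
    """Test the stored constraint against the matrix NP's classification;
    on success, perform (Attach-SD further-constraint) and establish the
    co-reference. semantic_role(role) stands in for the slot's
    translation rule (subject -> message in the example)."""
    role, constraint = clause["pending-trace"]
    if not satisfies(matrix_np["class"], constraint):
        return False                       # not an acceptable modifier
    matrix_np["translation"]["further-constraint"] = clause["translation"]
    # Co-reference: the trace's role in the clause translation is filled
    # by the translation of the entire noun phrase.
    clause["translation"][semantic_role(role)] = matrix_np["translation"]
    return True
```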
7. Related Work

Our technique uses properties of KL-ONE to build a simplified, special-purpose inference engine for semantic interpretation. The semantic processor is separate from both syntactic and pragmatic processing, though it is designed to maintain well-defined interaction with those components through Woods's cascade model of natural language processing [Woods 80]. Uniform methods include logic grammars [Pereira 83, Palmer 83] and semantic grammars [Burton 77, Hendrix 78, Wilensky 80]. Logic grammars employ a Horn-clause theorem prover for both syntactic and semantic processing. Semantic grammars collapse syntactic and semantic analysis into an essentially domain-specific grammar. Semantic interpretation is handled through unification in some evolving systems, such as PATR-II [Robinson 83].

Several recent systems have separate semantic interpretation components. Hirst [Hirst 83] uses a Montague-inspired approach to produce statements in a frame language. He uses individual mapping rules tied to the meaning-affecting rules of a grammar. Boguraev [Boguraev 79] presents a semantic interpreter based on patterns very similar to those of our case frames. The meaning representation it produces is very similar to the structure of our case frames.

8. Conclusion

We have presented approaches to typical difficulties in building semantic interpreters. These have included a sketch of a translation system that maps from the matched frames to KL-ONE meaning representations. The idea of abstract case frames and applications of them were introduced. Finally, ways of accepting neutral references and allowing for the inter-constituent constraints imposed by relative clauses were presented. Our experience indicates that KL-ONE is effective as a means of building and employing a library of case frames. The basic approach is being used in research computer systems at both USC/Information Sciences Institute and Bolt Beranek and Newman, Inc.

Of course, many problems remain to be solved. Problems currently under investigation include:

- Robust response to input that appears semantically ill-formed, such as input using an unknown word,
- A general treatment of quantification,
- Treatment of conjunction,
- Feedback from the pragmatic component to guide semantic interpretation,
- Generation of error messages (in English) based on the case frames if the request seems beyond the system's capabilities,
- Understanding classes of metonymy, such as "Send this window to Jones," and
- Provision for meaningful use of nonsense phrases, such as "Can I send a package over the ARPAnet?"

I. Brief Description of KL-ONE

KL-ONE offers a rigorous means of specifying terms (concepts) and basic relationships among them, such as subset/superset, disjointness, exhaustive cover, and relational structure. Concepts are denoted graphically as ovals. Concepts are structured objects whose structure is indicated by named relations (roles) between concepts. Roles are drawn as arcs containing a circle and square. The concepts at the end of the role arcs are said to be value restrictions. In addition, roles have maximum and minimum restrictions on the number of concepts that can be related by the role to the concept at the origin of the arc. Concepts can also have data attached to them, stored as a property list. Finally, the set of concepts is organized into an inheritance hierarchy through subc relations, drawn with double-line arrows from the subconcept to the superconcept.

All of the KL-ONE diagrams in the text are incomplete; for instance, Figures 3 and 5 focus on different aspects of what is one KL-ONE structure. In Figure 3, the diagram for SEND-CLAUSE specifies the concepts of "send" clauses. They have exactly one head, which must be the lexical concept "send". They must have a direct object which is a MESSAGE-NP, and they optionally have an indirect object which is a USER-NP. Figure 5 shows that SEND-CLAUSE's are MESSAGE-TRANSMISSION-CLAUSE's, which are a type of CLAUSE. The meaning representation, Figure 2, generated for "Send him the message that arrived yesterday" consists of the concept Send-mail, having an addressee which is a Computer-User and a message which is ComputerMail.

References
Bobrow, "The RUS System," in B.L. Webber, R. Bobrow (eds.), Research in Natura/ Language Understanding, Bolt, Beranek, and Newman, Inc., Cambridge, MA, 1978. BBN Technical Report 3878. [Bobrow&Webber 80a] Robert Bobrow and Bonnie Webber, "PSI-KLONE: Parsing and Semantic Interpretation in the BBN Natural Language Understanding System," in Proceedings of the 1980 Conference of the Canadian Society for Computationa/ Studies of/nte//igence, CSCSI/SCEIO, May 1980. 106 [Bobrow&Webber 80b] Robert Bobrow and Bonnie Webber, "Knowledge Representation for Syntactic/Semantic Processing," in Proceedings of the National Conference on Artificial Intelligence, AAAI, August 1980. [Boguraev 79] Branimir K. Boguraev, Automatic Resolution of Linguistic Ambiguities, Computer Laboratory, University of Cambridge, Cambridge, U.K., Technical Report NO. 11, August 1979. [Brachman&Schmolze 82] James Schmolze and Ronald Brachman (eds.), Proceedings of the 1981 KL-ONE Workshop, Fairchild, Technical Report No. 618, May 1982. [Bruce 75] B. Bruce, "Case Systems for Natural Language," Artificial Intelligence 6,(4), 1975, 327-360. [Burton 77] R.R. Burton, J.S. Brown, Semantic Grammar: A technique for constructing natural language interface to instructional systems, Bolt, Beranek, and Newman, Inc., BBN Report 3587, May 1977. Cambridge, MA [Celce-Murcia 76] M. Celce-Murcia, "Verb Paradigms for Sentence Recognition," American Journal of Computational Linguistics, 1976. Microfiche 38. [Gawron 83] J. M. Gawron, Lexical Representation and the Semantics of Complementation, Ph.D. thesis, Univ. of California, Berkeley, Linguistics Dept., 1983. [Goodman 83] Bradley A. Goodman, "Repairing Miscommunication: Relaxation in Reference," in AAAI-83, Proceedings of the National Conference on Artificial Intelligence, pp. 134-138, AAAI, Washington, D.C., August 1983. [Hendrix 78] Gary Hendrix, et al., "Developing a Natural Language Interface to Complex Data," ACM Transactions on Database Systems 3, (2), 1978, 105-147. [Hirst 83] G. Hirst, "A Foundation for Semantic Interpretation," in Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pp. 64-73, Association for Computational Linguistics, June 1983. [Kaczmarek 83] T. Kaczmarek, W. Mark, and N. Sondheimer, "The Consul/CUE Interface: An Integrated Interactive Environment," in Proceedings of CHI '83 Human Factors in Computing Systems, pp. 98.102, ACM, December 1983. [Moser 83] M.G. Moser, "An Overview of NIKL, the New Implementation of KL-ONE," in Research in Natural Language Understanding, B01t, Beranek, and Newman, Inc., Cambridge, MA, 1983. BBN Technical Report 5421. [Palmer 83] Martha Stone Palmer, "Inference.Driven Semantic Analysis," in AAAI-83, Proceedings of the National Conference on Artificial Intelligence, pp. 310-313, AAAI, Washington, D.C., August 1983. • [Pereira 83] Fernando C. N. Pereira and David H. D. Warren, "Parsing as Deduction," in Proceedings of the 21th Annual Meeting of the Association for Computational Linguistics, pp. 137-144, Association for Computational Linguistics, Cambridge, Massachusetts, June 1983. [Ritchie 83] G. Ritchie, "Semantics in Parsing," in Margaret J. King (ed.), Parsing Natural Language, pp. 199-217,, 1963. [Robinson 83] Jane Robinson et at.._=, Personal Communication, 1983 [Schmolze&Lipkis 83] James Schmolze, Thomas Lipkis, "Classification in the KL-ONE Knowledge Representation System," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, IJCAI, 1983. [Simmons 73] R. F. 
Simmons, "Semantic Networks: Their Computation and Use for Understanding English Sentences," in R. Schank and K. Colby (eds.), Computer Models of Thought and Language, pp. 63-113, W. H. Freeman and Company, San Francisco, 1973. [Sondheimer 84] Norman K. Sondheimer, Consul Note 23: "Translating to User Model", 1984. [Weischedel 83] Ralph M. Weischedel and Norman K. S0ndheimer, "Meta-Rules as a Basis for Processing Ill- Formed Input," American Journal of Computational Linguistics 9, (3-4), 1983. [Weischede184] Ralph M. Weischedel and Norman K. Sondheimer, Consul Note 22: "Relaxing Constraints in MIFIKL ", 1984. [Wilensky 80] Wilensky, Robert and Yigal Arens, "PHRAN .. A Knowledge-Based Natural Language Understander," in Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics and Parasession on Topics in Interactive Discourse, pp. 117-121, Association for Computational Linguistics, Philadelphia, PA, June 1980. [Woods 80] W.A. Woods, "Cascaded ATN Grammars," American Journal of Computational Linguistics 6, (1), 1980, 1-12. 107
TWO THEORIES FOR COMPUTING THE LOGICAL FORM OF MASS EXPRESSIONS

Francis Jeffry Pelletier, Lenhart K. Schubert, Dept. Computing Science, University of Alberta, Edmonton, Alberta T6G 2E1 Canada*

ABSTRACT

There are various difficulties in accommodating the traditional mass/count distinction into a grammar for English which has as a goal the production of "logical form" semantic translations of the initial English sentences. The present paper surveys some of these difficulties. One puzzle is whether the distinction is a syntactic one or a semantic one, i.e., whether it is a well-formedness constraint or whether it is a description of the semantic translations produced. Another puzzle is whether it should be applied to simple words (as they occur in the lexicon) or whether it should apply only to longer units (such as entire NPs). Of the wide variety of possible theories, only two seem to produce the required results (having to do with plausible inferences and intuitively satisfying semantic representations). These two theories are developed and compared.

According to Montague (Thomason 1974), Gazdar (Gazdar et al 1984), and a rapidly growing number of linguists, philosophers, and AI researchers, the logical forms underlying sentences of a natural language are systematically--and simply--determined by the syntactic form of those sentences. This view is in contrast with a tacit assumption often made in AI, that computation of logical translations requires throngs of more or less arbitrary rules operating upon syntactic forms. The following are a few grammar rules in approximately the style of Gazdar's Generalized Phrase Structure Grammar (GPSG). They differ from Gazdar's primarily in that they are designed to produce more or less "conventional" logical translations, rather than the intensional ones of Montague and Gazdar (for details see Schubert & Pelletier 1982). Each rule consists of a rule number, a phrase structure rule, and a semantic (logical translation) rule.

1. S → NP VP, VP'(NP')
2. VP → [V +be] PRED, PRED'
3. PRED → N, N'    (N ∈ {water, wine, food, furniture, ...})

Parsing and translating in accordance with such rules is a fairly straightforward matter. Since the syntactic rules are context free, standard context-free parsing methods can be employed, except that allowance must be made for the propagation of features, with due regard for concord. Applying the rules of translation is even simpler. In essence, all that is needed is a mechanism for arranging logical expressions into larger expressions in conformity with the semantic rules. (For examples of parsers see Thompson 1981, Schubert & Pelletier 1982, Gawron et al 1982, Rosenschein & Shieber 1982.)

*The work reported herein was partially supported by NSERC grants A5525 (FJP) and A8818 (LKS). We also wish to thank Matthew Dryer, David Justice, Bernard Linsky, and other members of the Univ. Alberta Logical Grammar Study Group for discussions on these topics.

The topic of mass terms and predicates has a substantial literature within both linguistics and philosophical logic, with much of the recent research deriving inspiration from Montague Grammar (e.g., see Pelletier 1979, ter Meulen 1980, Bunt 1981, Chierchia 1982). There are three views on the mass/count distinction, namely that the distinction is (a) syntactic, (b) semantic, and (c) pragmatic. Orthogonal to these views we have the further possibilities (i) that the mass/count distinction is lexical, and (ii) that it is determined by the context in which the expression occurs.
We shall present arguments in the full paper to eliminate position (c), leaving us with four possible kinds of theories: (1) a syntactic expression (lexical) approach, (2) a syntactic occurrence approach, (3) a semantic expression approach, and (4) a semantic occurrence approach. This raises the question of what the difference is between syntactic approaches generally and semantic approaches generally. A syntactic approach treats +mass and +count as syntactic classifications or features, that is, as features to be used by the syntactic rules in determining whether some longer stretch of words is well-formed. Central to the semantic approach is the claim that +count and +mass are not syntactic features or categories, but rather are a description of the semantic representation of the expression. In this approach, no syntactic rules refer to +count or +mass (since these are not syntactic objects). Rather, in sentences like Mary put apple in the salad vs. Mary put an apple in the salad, the semantic approaches allow us to say that it was a mass or count semantic representation of apple only after inspecting the kind of thing that apple is true of in the sentences.

There are reasons for rejecting options (2) and (3), thus leaving us with only a syntactic expression approach and a semantic occurrence approach. (The reasons are given in Pelletier & Schubert 1985.) These are the two theories of mass expressions that are to be discussed in the paper. They seem to us to be the most plausible candidates for an adequate theory of the logical form of sentences involving mass expressions.

The fragment of English that the two theories of mass expressions are concerned with is roughly those sentences with a copular verb and either a mass or count expression as predicate, and whose subjects are either bare noun phrases or quantified noun phrases. A sentence is a noun phrase and a verb phrase. A verb phrase is a copula followed by a PRED, which in turn is either a bare noun (as in Claret is wine or This puddle is man--the latter said after an application of the universal grinder)² or an a followed by a noun (as in John is a man or Claret is a wine) or is an entire noun phrase (as in John is the man most likely to succeed or Claret is my favourite red wine). A noun phrase is either a bare noun (as in Claret is a dry red wine or Dogs are barking outside) or else is a quantified term (as in All men are mortal or Sm red wine is tasty--we include as determiners this, all, some, sm, much, little, each, every, and the numeral quantifiers). Nouns may themselves be either an adjective-phrase noun combination, or just a noun. We consider here two cases of adjective modification: intersective and non-intersective. For the former we have in mind such adjectives as red, while for the latter we think of such adjectives as fake. The rules which give alternatives, such as 3p vs. 3s, are those rules which are different for the two theories of mass terms. The p-rules are for the semantic occurrence approach while the s-rules are for the syntactic expression approach.

The ontological underpinnings of these theories are that "reality" contains two sorts of items: (1) "ordinary objects" such as rings, sofas, puddles (and including here what many theorists have called "quantities of matter"), and (2) "kinds", that is, "varieties", "substances", etc. We have in mind here such items as wine, claret, red wine, and the like, and also servings of such items.

²The universal grinder (Pelletier 1975) takes objects corresponding to any count noun, grinds them up, and spews the result from the other end. Put a table into it and after a few minutes there is sm table on the floor. (We regularly represent the unstressed some by sm.)
We wish to make no special metaphysical claims about the relationships that might hold between "ordinary objects" and "kinds"--instead we content ourselves with describing how such an ontology leads to a simple and natural description of various of the facts concerning mass (and possibly plural) expressions. Linguistically, that is semantically, we take there to be three distinct types of predicates: (a) those which apply only to "kinds", e.g., is a substance, is scarce, is a kind of wine, is abundant; (b) those which apply only to "objects", e.g., is a quantity of gold, is a puddle; and (c) those which can apply to both "kinds" and "objects". In this last group we have in mind mass predicates such as is wine, is furniture, is food, and is computer software.

Both of these theories take it that is wine is true of the (abstract) kind claret in addition to an individual quantity such as the contents of this glass. Moreover, they take is wine to be true of an object such as a drop or puddle of wine, occupying the same region as some quantity of wine. (This ring is gold or This hamburger is food are clearer examples of the application of mass predicates to objects.) Generally speaking, the theories view the kinds of M as forming an upper semilattice of kinds with M at the top. This is a "formal" semilattice in that the union of any two elements of it is a member of the semilattice, and we view is wine as being true of any of these formal kinds. So a sentence like Cheap wine is wine will be true, since cheap wine names an element of the semilattice. Predicates like is a wine are true of conventionally recognized kinds (Claret is a wine is true) but not of every "formal" kind, since, e.g., Cheap wine is a wine is not true. (Sauterne mixed with claret is a wine is also not true, showing that is a wine is not true of unions of elements of the semilattice.) These predicates are not only true of the conventional kinds but also of conventional servings, such as the bottle of wine on the table or the 250ml in this glass. Note that these can again be abstract entities: rather than potentially being abstract conventional kinds of wine, they can be abstract conventional kinds of servings of wine. Finally, such predicates are true of individual quantities--as when we say we have ordered four wines, all of the same kind and size.

When a bare mass noun phrase (or indeed other bare noun phrases, although we shall not dwell on them here) is used as a subject (or object, but again we shall not consider that here), it is taken to name the kind. So in Cheap wine is wine, the subject cheap wine names a kind; and since the sentence is true it must name a "formal kind" so that is wine can be predicated of it. But since Cheap wine is a wine is not true, the formal kind cannot be a conventionally recognized kind (nor, for that matter, a conventional serving nor an individual quantity).

Both theories hold that mass CN's should be translated into the semantics as predicates. Strictly this is not required, for all we have given direct evidence for is that mass VP's be translated as predicates with a mixed object/kind extension. It could be the case that mass CN's are quite different, yet in the formation of a mass VP the entire VP gets assigned a mixed, predicate denotation. Still, it would be simple, and in keeping with much philosophical and linguistic analysis, to assume coincidence of CN and "is CN" denotations (at least when tense is ignored, as here).
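As a toy model (ours, not the authors' formalization) of the "formal" semilattice, represent a kind as a set of basic varieties, so that the union of two kinds is again a kind; is wine is then true of every formal kind, while is a wine holds only of the conventionally recognized ones:

```python
from itertools import combinations

BASIC = ["claret", "sauterne", "riesling"]

# Every nonempty union of basic varieties is a formal kind, so the set of
# formal kinds is closed under union: an upper semilattice with "wine"
# (the union of everything) at the top.
formal_kinds = {frozenset(c) for n in range(1, len(BASIC) + 1)
                for c in combinations(BASIC, n)}
conventional_kinds = {frozenset([v]) for v in BASIC}

def is_wine(kind):    return kind in formal_kinds        # comprehensive
def is_a_wine(kind):  return kind in conventional_kinds  # conventional only

mixture = frozenset(["claret", "sauterne"])   # sauterne mixed with claret
assert is_wine(mixture) and not is_a_wine(mixture)   # matches the text
```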
With just this much of the theory sketched, we can overcome various of the difficulties that plagued other theories. For example, it is most unclear that any other theory can adequately translate sentences like

Tap water is water
This puddle is water

Consider also sentences like

All wine is wine

wherein the subject all wine seems to quantify over both kinds of wine and quantities of wine, entailing both White wine is wine and The litre of wine in this bottle is wine, for example. It seems to us that no other theory allows this comprehensiveness. An even clearer example of such comprehensive denotation is (a), from which both of (b) and (c) follow, given that rice is edible and this sandwich is edible. (Note also the comprehensive denotation of edible.) No other theory we know of can account for the validity of these two arguments.

a. Everything edible is food
b. Rice is food
c. This sandwich is food

Both of these theories will want to be able, in the semantics, to form predicates which are true of kinds, or of servings, or of individuals, given a predicate which has comprehensive extension. So, for example, from the predicate water', which is assumed to be true of quantities, servings, and kinds, we shall want to be able to form (k water'), which is true of conventional kinds of water; to form (p water'), which is true of conventional portions (and kinds of portions) of water; and to form (q water'), which is true of quantities of water. Conversely, if we have a predicate which is true of individuals and kinds, we shall want to form a predicate true of all the entities that mass predicates are true of--quantities of stuff, kinds of stuff, and objects coincident with quantities of stuff. For example, if man' is a predicate true of objects and kinds, then (s man') is the mass predicate formed therefrom. Also, we shall want to be able to form the name of a kind from a predicate: (# water') is the name of the kind water and (# (cheap'(wine'))) is the name of the kind cheap wine.

The rules for the relevant portion of our two theories are (λ is our symbol for lambda abstraction):

1.  S → NP VP, VP'(NP')
2.  VP → [V +be] PRED, PRED'
3p. PRED → N, N'
3s. PRED → [N +MASS], N'
4p. PRED → [DET +a] N, (λx)[(k N')(x) ∨ (p N')(x)]
4s. PRED → [DET +a] [N +COUNT], N'
5.  PRED → NP, (λx)(x = NP')
6.  PRED → ADJP, ADJP'
7p. NP → N, (# N')
7s. NP → [N +MASS], (# N')
8.  NP → DET N, DET'(N')
9.  [N +ADJP] → [ADJP +INTERSECT] N, (λx)[ADJP'(x) & N'(x)]
10. [N +ADJP] → [ADJP -INTERSECT] N, ADJP'(N')

The s-theory distinguishes in the lexicon mass from count nouns. And it has what might be called "lexical extension" rules to give us the "stretched" meanings of nouns that we have earlier talked about. For example, it has

[N +COUNT] → sofa, man, substance, ...
[N +MASS] → wine, water, ...
[N +COUNT] → [N +MASS], (k N')
[N +COUNT] → [N +MASS], (p N')
[N +MASS] → [N +COUNT], (s N')
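To see how the phrase structure rules and their translation rules cooperate, here is a sketch (ours) in which each rule is a function combining daughter translations into the mother's logical form; strings stand in for logical expressions, which a real implementation would of course build as structured terms:

```python
def rule_S(np_tr, vp_tr):            # 1. S -> NP VP,  VP'(NP')
    return f"{vp_tr}({np_tr})"

def rule_VP_be(pred_tr):             # 2. VP -> [V +be] PRED,  PRED'
    return pred_tr

def rule_PRED_mass_N(n_tr):          # 3s. PRED -> [N +MASS],  N'
    return n_tr

def rule_NP_bare_N(n_tr):            # 7p/7s. NP -> N,  (# N')  (kind-naming)
    return f"(# {n_tr})"

# "Claret is wine": the bare subject names the kind, the predicate is the
# mass predicate with its comprehensive extension.
print(rule_S(rule_NP_bare_N("claret'"),
             rule_VP_be(rule_PRED_mass_N("wine'"))))
# -> wine'((# claret'))
```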
Now, both of these theories can give the correct semantic representation to a wide range of sentences involving mass terms, given certain meaning postulates. (The two theories do it slightly differently, as might be expected, since they have somewhat different semantic understandings of the lexical nouns. For example, the s-theory takes man to be true of individual men and of kinds of men, while the p-theory takes it also to be true of the stuff of which men are made. In the p-theory, when a sentence uses a--as in a man--then the semantic operators convert this "basic" meaning into one that is true of individual men and of kinds of men. The s-theory rather has a lexical extension rule which will convert the lexical count noun man into one which is a mass noun and is true of the stuff of which men are made. They will also take a different tack on what quantified terms designate, although that has been hidden in rule 8 above by assigning the same logical form to both theories. Nonetheless, the meaning postulates of the two theories will differ for these.)

In addition to the sorts of examples stated above, both these theories can generate and give the correct logical form to such sentences as

Wine is wine (two readings, both analytic)
Wine is a wine (false)
All wine is wine (analytic)
Claret is a wine (true)
Cheap wine is a wine (false)
*All wine is a wine (semantically anomalous)
Water is dripping from the faucet (entails: sm water is dripping from the faucet)
Water is a liquid (entails: water is liquid)

Both theories make the following six inferences valid:

1. Claret is a wine, wine is a liquid, so claret is a liquid
2. Claret is a wine, wine is a liquid, so claret is liquid
3. Claret is a wine, wine is liquid, so claret is a liquid
4. Claret is a wine, wine is liquid, so claret is liquid
5. Claret is wine, wine is a liquid, so claret is liquid
6. Claret is wine, wine is liquid, so claret is liquid

And they both make these two inferences invalid:

7. Claret is wine, wine is a liquid, so claret is a liquid
8. Claret is wine, wine is liquid, so claret is a liquid

We know of no other theories which can do all these things. Yet the two theories are radically different: one has a mass/count distinction in the syntax and the other doesn't, and they have different extensions assigned to the lexical items. So the question naturally arises--which is better? What can be said against the two theories? There is not space in a paper of this size to go into this in detail, so we shall content ourselves with just hurling the main charge that each one directs against the other.

Briefly, the p-theory charges the s-theory with pretending to use syntactic features +mass and +count but allowing them to do no syntactic work. For every sentence which has a mass term in a given location, there is another sentence which has a count term in that position. No constructions are ruled out; the only use of the +mass/+count features is in directing the semantic translation process. And that suggests that the features should all along have been semantic. The s-theory charges the p-theory with being unable to give coherent meaning postulates because of its commitment to a comprehensive extension for the lexical terms. For example, suppose one wanted to give as a meaning (or factual) postulate that A lamb has fur. The s-theory can do this without difficulty: lamb' is true of individual lambs, and the meaning postulate says of each of them that it has fur. But the p-theory cannot easily do this: lamb' is true of stuff, so the predicate must be converted to one which is true of individuals. But there is no provision in the p-theory for doing this--the closest that it could come is with a predicate that is true of both conventional kinds and "conventional portions" (i.e., ordinary lambs).
Given the above rules (augmented with additional features such as number and person agreement features in rule 1) we are able to extend the capabilities of our parsers (Schubert & Pelletier 1982) so that they deliver logical form translations of sentences involving mass expressions. These translations have the desired semantic properties and, with an extension of the inference mechanisms to allow for predicate modification and λ-abstraction, allow the above valid arguments to be duplicated.

So, which theory is to be preferred? That is a topic for further research. The time for studies of mass expressions with only casual reference to the syntax and semantics of language is past. Only systematic attempts to account for large classes of mass expressions within formal syntactic-semantic-pragmatic frameworks can hope to resolve the remaining issues.

WORKS CITED

Bunt, H.C. (1981) The Formal Semantics of Mass Terms. Dissertation, University of Amsterdam.
Chierchia, G. (1982) "Bare Plurals, Mass Nouns and Nominalization" in D. Flickinger, M. Macken & N. Wiegand (eds) Proceedings of the First West Coast Conference on Formal Linguistics, 243-255.
Gawron, J., J. King, J. Lamping, E. Loebner, A. Paulson, G. Pullum, I. Sag, & T. Wasow (1982) "The GPSG Linguistics System" Proc. 20th Annual Meeting of the Association for Computational Linguistics, 74-81.
Gazdar, G., E. Klein, G. Pullum, I. Sag (1984) English Syntax (forthcoming).
Pelletier, F.J. (1975) "Non-Singular Reference: Some Preliminaries" Philosophia 5. Reprinted in Pelletier (1979), 1-14. Page references to the reprint.
Pelletier, F.J. (ed.) (1979) Mass Terms: Some Philosophical Problems (Reidel: Dordrecht).
Pelletier, F.J. & L.K. Schubert (1985) "Mass Expressions" to appear in D. Gabbay & F. Guenthner (eds) Handbook of Philosophical Logic, Vol. 4 (Reidel: Dordrecht).
Rosenschein, S. & S. Shieber (1982) "Translating English into Logical Form" Proc. 20th Annual Meeting of the Association for Computational Linguistics.
Schubert, L.K. & F.J. Pelletier (1982) "From English to Logic: Context-Free Computation of 'Conventional' Logical Translation" American Journal of Computational Linguistics 8, 26-44.
ter Meulen, A. (1980) Substances, Quantities and Individuals. Ph.D. Dissertation, Stanford University. Available through Indiana University Linguistics Club.
Thomason, R. (1974) Formal Philosophy: Writings of Richard Montague (Yale UP: New Haven).
Thompson, H. (1981) "Chart Parsing and Rule Schemata in PSG" Proc. 19th Annual Meeting of the Association for Computational Linguistics, 167-172.
SYNTACTIC AND SEMANTIC PARSABILITY

Geoffrey K. Pullum, Syntax Research Center, Cowell College, UCSC, Santa Cruz, CA 95064, and Center for the Study of Language and Information, Stanford, CA 94305

ABSTRACT

This paper surveys some issues that arise in the study of the syntax and semantics of natural languages (NL's) and have potential relevance to the automatic recognition, parsing, and translation of NL's. An attempt is made to take into account the fact that parsing is scarcely ever thought about with reference to syntax alone; semantic ulterior motives always underlie the assignment of a syntactic structure to a sentence. First I consider the state of the art with respect to arguments about the language-theoretic complexity of NL's: whether NL's are regular sets, deterministic CFL's, CFL's, or whatever. While English still appears to be a CFL as far as I can tell, new arguments (some not yet published) appear to show for the first time that some languages are not CFL's. Next I consider the question of how semantic filtering affects the power of grammars. Then I turn to a brief consideration of some syntactic proposals that employ more or less modest extensions of the power of context-free grammars.

1. INTRODUCTION

Parsing as standardly defined is a purely syntactic matter. Dictionaries describe parsing as analysing a sentence into its elements, or exhibiting the parts of speech composing the sentence and their relation to each other in terms of government and agreement. But in practice, as soon as parsing a natural language (NL) is under discussion, people ask for much more than that. Let us distinguish three kinds of algorithm operating on strings of words:

recognition -- output: a decision concerning whether the string is a member of the language or not

parsing -- output: a syntactic analysis of the string (or an error message if the string is not in the language)

translation -- output: a translation (or set of translations) of the string into some language of semantic representation (or an error message if the string is not in the language)

Much potential confusion will be avoided if we are careful to use these terms as defined. However, further refinement is needed. What constitutes a "syntactic analysis of the string" in the definition of parsing? In applications development work and when modeling the whole of the native speaker's knowledge of the relevant part of the language, we want ambiguous sentences to be represented as such, and we want Time flies like an arrow to be mapped onto a whole list of different structures. For rapid access to a database or other back-end system in an actual application, or for modeling a speaker's performance in a conversational context, we will prefer a program that yields one syntactic description in response to a given string presentation. Thus we need to refer to two kinds of algorithm:

all-paths parser -- output: a list of all structural descriptions of the string that the grammar defines (or an error message if the string is not in the language)

one-path parser -- output: one structural description that the grammar defines for the string (or an error message if the string is not in the language)

By analogy, we will occasionally want to talk of all-paths or one-path recognizers and translators as well.

There is a crucial connection between the theory of parsing and the theory of languages. There is no parsing without a definition of the language to be parsed. This should be clear enough from the literature on the definition and parsing of programming languages, but for some reason it is occasionally denied in the context of the much larger and richer multi-purpose languages spoken by humans. I frankly cannot discern a sensible interpretation of the claims made by some artificial intelligence researchers about parsing a NL without having a defined syntax for it. Assume that some program P produces finite, meaningful responses to sentences from some NL L over some terminal vocabulary T, producing error messages of some sort in response to nonsentences. It seems to me that automatically we have a generative grammar for L. Moreover, since L is clearly recursive, we can even enumerate the sentences of L in canonical order. One algorithm to do this simply enumerates the strings over the terminal vocabulary in order of increasing length and in alphabetical order within a given string-length, and for each one, tests it for grammaticality using P, and adds it to the output if no error message is returned.
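The enumeration algorithm just described is short enough to state directly; this sketch (ours) assumes only a membership-deciding program P and a finite vocabulary T:

```python
from itertools import count, product

def enumerate_language(T, P):
    """Yield the sentences of the language decided by P in canonical
    order: by increasing length, and alphabetically within each length.
    T is the finite terminal vocabulary; P(string) is True exactly when
    P signals no error message for the string."""
    for n in count(1):
        for words in product(sorted(T), repeat=n):
            s = " ".join(words)
            if P(s):
                yield s
```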
This should be clear enough from the literature on the definition and parsing of pro- gramming languages, but for some reason it is occa- sionally denied in the context of the much larger and richer multi-purpose languages spoken by humans. I frankly cannot discern a sensible interpretation of the claims made by some artifi- cial intelligence researchers about parsing a NL without having a defined syntax for it. Assume that some program P produces finite, meaningful responses to sentences from some NL ~ over some terminal vocabulary T, producing error messages of some sort in response to nonsentences. It seems to me that automatically we have a generative grammar for 2. Moreover, since ~ is clearly recursive, we can even enumerate the sentences of L in canonical order. One algorithm to do this simply enumerates the strings over the terminal vocabulary in order of increasing length and in alphabetical order within a given string-length, and for each one, tests it for grammaticality using P, and adds it to the output if no error message is returned. Given that parsability is thus connected to definability, it has become standard not only for parser-designers to pay attention to the grammar for the language they are trying to parse, but also 112 for linguists to give some thought to the parsabil- ity claims entailed by their linguistic theory. This is all to the good, since it would hardly be sensible for the study of NL's to proceed for ever in isolation from the study of ways in which they can be used by finite organisms. Since 1978, following suggestions by Stanley Peters, Aravind Joshi, and others, developed most notably in the work of Gerald Gazdar, there has been a strong resurgence of the idea that context- free phrase structure grammars could be used for the description of NL's. A significant motivation for the original suggestions was the existence of already known high-efficiency algorithms (recogni- tion in deterministic time proportional to the cube of the string length) for recognizing and parsing context-free languages (CFL's). This was not, however, the motivation for the interest that signficant numbers of linguists began to show in context-free phrase structure grammars (CF-PSG's) from early 1979. Their motivation was in nearly all cases an interest sparked by the elegant solutions to purely linguistic problems that Gazdar and others began to put forward in various articles, initially unpublished working papers. We have now seen nearly half a decade of work using CF-PSG to successfully tackle problems in linguistic description (the Coordinate Structure Constraint (Gazdar 1981e), the English auxiliary system (Gazdar et al. 1982), etc.) that had proved somewhat recalcitrant even for the grossly more powerful transformational theories of grn~---r that had formerly dominated linguistics. The influence of the parsing argument on linguists has probably been overestimated. 
It seems to me that when Gazdar (1981b, 267) says

    our grammars can be shown to be formally equivalent to what are known as the context-free phrase structure grammars [which] has the effect of making potentially relevant to natural language grammars a whole literature of mathematical results on the parsability and learnability of context-free phrase structure grammars

he is making a point exactly analogous to the one made by Robert Nozick in his book Anarchy, State and Utopia, when he says of a proposed social organization (1974, 302):

    We seem to have a realization of the economists' model of a competitive market. This is most welcome, for it gives us immediate access to a powerful, elaborate, and sophisticated body of theory and analysis.

We are surely not to conclude from this remark of Nozick's that his libertarian utopia of interest groups competing for members is motivated solely by a desire to have a society that functions like a competitive market. The point is one of serendipity: if a useful theory turns out to be equivalent to one that enjoys a rich technical literature, that is very fortunate, because we may be able to make use of some of the results therein.

The idea of returning to CF-PSG as a theory of NL's looks retrogressive until one realizes that the arguments that had led linguists to consign CF-PSG's to the scrap-heap of history can be shown to be fallacious (cf. especially Pullum and Gazdar (1982)). In view of that development, I think it would be reasonable for someone to ask whether we could not return all the way to finite-state grammars, which would give us even more efficient parsing (guaranteed deterministic linear time). It may therefore be useful if I briefly reconsider this question, first dealt with by Chomsky nearly thirty years ago.

2. COULD NL'S BE REGULAR SETS?

Chomsky's negative answer to this question was the correct one. Although his original argument in Syntactic Structures (1957) for the non-regular character of English was not given in anything like a valid form (cf. Daly 1974 for a critique), others can be given. Consider the following, patterned after a suggestion by Brandt Corstius (see Levelt 1974, 25-26). The set (1):

(1) {a white male (whom a white male)^n (hired)^n hired another white male | n >= 0}

is the intersection of English with the regular set "a white male (whom a white male)* hired* another white male". But (1) is not regular, yet the regular sets are closed under intersection; hence English is not regular. Q.E.D.

It is perfectly possible that some NL's happen not to present the inherently self-embedding configurations that make a language non-regular. Languages in which parataxis is used much more than hypotaxis (i.e. languages in which separate clauses are strung out linearly rather than embedded) are not at all uncommon. However, it should not be thought that non-regular configurations will be found to be rare in languages of the world. There are likely to be many languages that furnish better arguments for non-regular character than English does; for example, according to Hagège (1976), center-embedding seems to be commoner and more acceptable in several Central Sudanic languages than it is in English. In Moru, we find examples such as this (slightly simplified from Hagège (1976, 200); ri is the possession marker for nonhuman nouns, and ro is the equivalent for human nouns):

(2) kokyE [toko [odrupi [ma ro] ro] ri] drate
     1      2      3          3   2   1
    dog    wife   brother  me-of  of  of  is-dead
    "My brother's chief wife's black dog is dead."
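The counting at issue in (1) can be made concrete. In this sketch (an illustration of mine, not part of the argument itself), a regular expression checks the regular skeleton, while membership in the intersection (1) additionally requires the counts to match; it is exactly this unbounded count-matching that no finite automaton can perform.

    import re

    PATTERN = re.compile(
        r"a white male (?:whom a white male )*(?:hired )*another white male$")

    def in_intersection(s):
        """Membership in English intersected with the regular set,
        i.e. in (1): the regular skeleton plus matched counts."""
        if not PATTERN.match(s):
            return False
        return s.count("whom") + 1 == s.count("hired")

    print(in_intersection(
        "a white male whom a white male hired hired another white male"))  # True
    print(in_intersection(
        "a white male whom a white male hired another white male"))        # False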
The center-embedding word order here is the only one allowed; the alternative right-branching order ("dog chief-wife-of brother-of me-of"), which a regular grammar could handle, is ungrammatical. Presumably, the intersection of odrupi* ma ro* drate with Moru is {odrupi^n ma ro^n drate | n > 0} (an infinite set of sentences with meanings like "my brother's brother's brother is dead" where n = 3). This is clearly non-regular, hence so is Moru.

The fact that NL's are not regular does not necessarily mean that techniques for parsing regular languages are irrelevant to NL parsing. Langendoen (1975) and Church (1980) have both, in rather different ways, proposed that hearers process sentences as if they were finite automata (or as if they were pushdown automata with a finite stack depth limit, which is weakly equivalent) rather than showing the behavior that would be characteristic of a more powerful device. To the extent that progress along these lines casts light on the human parsing ability, the theory of regular grammars and finite automata will continue to be important in the study of natural languages even though they are not regular sets.

The fact that NL's are not regular sets is both surprising and disappointing from the standpoint of parsability. It is surprising because there is no simpler way to obtain infinite languages than to admit union, concatenation, and Kleene closure on finite vocabularies, and there is no apparent a priori reason why humans could not have been well served by regular languages. Expressibility considerations, for example, do not appear to be relevant: there is no reason why a regular language could not express any proposition expressible by a sentence of any language whose strings are of finite length. Indeed, many languages provide ways of expressing sentences with self-embedding structure in non-self-embedding ways as well. In an SOV language like Korean, for example, sentences with the center-embedding tree structure shown schematically in (3a) are also expressible with left-branching tree structure as shown in (3b).

(3) a. [S NP [S NP [S ... ] V] V]   (center-embedding)
    b. [S [S [S ... ] NP V] NP V]   (left-branching)

Clearly such structural rearrangement will not alter the capacity of a language to express propositions, any more than an optimizing compiler makes certain programs inexpressible when it irons out true recursion into tail recursion wherever possible.

If NL's were regular sets, we know we could recognize them in deterministic linear time using the fastest and simplest abstract computing devices of all, finite state machines. However, there are much larger classes of languages that have linear time recognition. One such class is the deterministic context-free languages (DCFL's). It might be reasonable, therefore, to raise the question dealt with in the following section.

3. COULD NL'S BE DCFL'S?

To the best of my knowledge, this question has never previously been raised, much less answered, in the literature of linguistics or computer science. Rich (1983) is not atypical in dismissing the entire literature on DCFL's without a glance on the basis of an invalid argument which is supposed to show that English is not even a CFL, hence a fortiori not a DCFL.

I should make it clear that the DCFL's are not just those CFL's for which someone has written a parser that is in some way deterministic. They are the CFL's that are accepted by some deterministic pushdown stack automaton. The term "deterministic parsing" is used in many different ways (cf.
Marcus (1980) for an attempt to motivate a definition of determinism specifically for the parsing of NL's). For example, a translator system with a post-processor to rank quantifier-scope ambiguities for plausibility and output only the highest-ranked translation might be described as deterministic, but there is no reason why the language it recognizes should be a DCFL; it might be any recursive language. The parser currently being implemented by the natural language team at HP Labs (in particular, by Derek Proudian and Dan Flickinger) introduces an interesting compromise between determinism and nondeterminism in that it ranks paths through the rule system so as to make some structural possibilities highly unlikely ones, and there is a toggle that can be set to force the output to contain only likely parses. When this option is selected, the parser runs faster, but can still show ambiguities when both readings are defined as likely. This is an intriguing development, but again is irrelevant to the language-theoretic question about DCFL status that I am raising.

It would be an easy slip to assume that NL's cannot be DCFL's on the grounds that English is well known to be ambiguous. We need to distinguish carefully between ambiguity and inherent ambiguity. An inherently ambiguous language is one such that all of the grammars that weakly generate it are ambiguous. LR grammars are never ambiguous; but the LR grammars characterize exactly the set of DCFL's, hence no inherently ambiguous language is a DCFL. But it has never been argued, as far as I know, that English as a stringset is inherently ambiguous. Rather, it has been argued that a descriptively adequate grammar for it should, to account for semantic intuitions, be ambiguous. But obviously, a DCFL can have an ambiguous grammar. In fact, all languages have ambiguous grammars. (The proof is trivial. Let w be a string in a language L generated by a grammar G with initial symbol S and production set P. Let B be a nonterminal not used by G. Construct a new grammar G' with production set P' = P U {S --> B, B --> w}. G' is an ambiguous grammar that assigns two structural descriptions to w.)

The relevance of this becomes clear when we observe that in natural language processing applications it is often taken to be desirable that a parser or translator should yield just a single analysis of an input sentence. One can imagine an implemented natural language processing system in which the language accepted is described by an ambiguous CF-PSG but is nonetheless (weakly) a DCFL. When access to all possible analyses of an input is desired (say, in development work, or when one wants to take no risks in using a database front end), an all-paths parser/translator is used, but when quick-and-dirty responses are required, at the risk of missing certain potential parses of ambiguous strings, this is replaced by a deterministic one-path parser. Despite the difference in results, the language analyzed and the grammar used could be the same.

The idea of a deterministic parser with an ambiguous grammar, which arises directly out of what has been done for programming languages in, for example, the Yacc system (Johnson 1978), is explored for natural languages in work by Fernando Pereira and Stuart Shieber. Shieber (1983) describes an implementation of a parser which uses an ambiguous grammar but parses deterministically. The parser uses shift-reduce scheduling in the manner proposed by Pereira (1984).
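Returning for a moment to the ambiguity proof above: the construction is mechanical enough to write down. In this sketch, a grammar is a set of (lhs, rhs) productions, a representation chosen here purely for illustration.

    def ambiguate(productions, start, w):
        """Return G' = G plus {S -> B, B -> w}, B a fresh nonterminal
        (assumed not used by G)."""
        B = "B'"
        return productions | {(start, (B,)), (B, tuple(w))}

    G = {("S", ("a", "S", "b")), ("S", ("a", "b"))}
    G2 = ambiguate(G, "S", ["a", "b"])
    # G2 assigns "a b" two structural descriptions: S -> a b directly,
    # and S -> B' -> a b, so G2 is ambiguous while generating the same
    # language as G.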
Shieber (1983, 116) gives two rules for resolving conflicts between parsing actions:

(I) Resolve shift-reduce conflicts by shifting.
(II) Resolve reduce-reduce conflicts by performing the longer reduction.

The first of these is exactly the same as the one given for Yacc by Johnson (1978, 13). The second is more principled than the corresponding Yacc rule, which simply says that a rule listed earlier in the grammar should take precedence over a rule listed later to resolve a reduce-reduce conflict. But it is particularly interesting that the two are in practice equivalent in all sensible cases, for reasons I will briefly explain.

A reduce-reduce conflict arises when a string of categories on the stack appears on the right hand side of two different rules in the grammar. If one of the reducible sequences is longer than the other, it must properly include the other. But in that case the prior application of the properly including rule is mandated by an extension into parsing theory of the familiar rule interaction principle of Proper Inclusion Precedence, due originally to the ancient Indian grammarian Panini (see Pullum 1979, 81-86 for discussion and references). Thus, if a rule NP --> NP PP were ordered before a rule VP --> V NP PP in the list accessed by the parser, it would be impossible for the sequence "NP PP" ever to appear in a VP, since it would always be reduced to NP by the earlier rule; the VP rule is useless, and could have been left out of the grammar. But if the rule with the properly including expansion "V NP PP" is ordered first, the NP rule is not useless. A string "V NP PP PP", for example, could in principle be reduced to "V NP PP" by the NP rule and then to "VP" by the VP rule. Under a principle of rule interaction made explicit in the practice of linguists, therefore, the proposal made by Pereira and Shieber can be seen to be largely equivalent to the cruder Yacc resolution procedure for deterministic parsing with ambiguous grammars.

Techniques straight out of programming language and compiler design may, therefore, be of considerable interest in the context of natural language processing applications. Indeed, Shieber goes so far as to suggest psycholinguistic implications. He considers the class of "garden-path sentences" such as those in (4).

(4) a. The diners hurried through their meal were annoyed.
    b. That shaggy-looking sheep should be sheared is important.

On these, his parser fails. Strictly speaking, therefore, they indicate that the language parsed is not the same under the one-path and the all-paths parsers. But interestingly, human beings are prone to fail just as badly as Shieber's parser on sentences such as these. The trouble with these cases is that they violate the prefix property: each has an initial proper substring which is itself a sentence. (From this we know that English does not have an LR(0) grammar, incidentally.) English speakers tend to mis-parse the prefix as a sentence, and baulk at the remaining portion of the string. We might think of characterizing the notion "garden-path sentence" in a rigorous and non-psychological way in terms of an all-paths parser and a deterministic one-path parser for the given language: the garden path sentences are just those that parse under the former but fail under the latter.
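Rules (I) and (II) amount to a small scheduling function over the conflict set an LR parser presents at a given point. The sketch below renders just that function; the surrounding parser (states, action table) is assumed rather than shown, and the action encodings are mine, not Shieber's.

    def resolve(actions):
        """Pick one action from a conflict set.
        (I)  shift-reduce conflicts are resolved by shifting;
        (II) reduce-reduce conflicts by the longer reduction."""
        if "shift" in actions:                        # rule (I)
            return "shift"
        reduces = [a for a in actions if a[0] == "reduce"]
        return max(reduces, key=lambda a: len(a[2]))  # rule (II)

    # The reduce-reduce conflict discussed above: rule (II) picks the
    # properly including expansion, mirroring Proper Inclusion Precedence.
    conflict = [("reduce", "NP", ("NP", "PP")),
                ("reduce", "VP", ("V", "NP", "PP"))]
    print(resolve(conflict))   # -> ('reduce', 'VP', ('V', 'NP', 'PP'))
    print(resolve(["shift", ("reduce", "NP", ("NP", "PP"))]))  # -> 'shift'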
To say that there might be an appropriate deterministic parser for English that fails on certain sentences, thus defining them as garden-path sentences, is not to deny the existence of a deterministic pushdown automaton that accepts the whole of English, garden-path sentences included. It is an open question, as far as I can see, whether English as a whole is weakly a DCFL. The likelihood that the answer is positive is increased by the results of Bermudez (1984) concerning the remarkable power and richness of many classes of deterministic parsers for subsets of the CFL's.

If the answer were indeed positive, we would have some interesting corollaries. To take just one example, one would want the intersection between two dialects of English that were both DCFL's to be a DCFL itself. This seems right: if your dialect and mine share enough for us to communicate without hindrance, and both our dialects are DCFL's, it would be peculiar indeed if our shared set of mutually agreed-upon sentences was not a DCFL. (It must be conceded that this is a desideratum rather than a theorem: the DCFL's, like the CFL's, are not in fact closed under arbitrary intersection, though they are closed under complementation and under intersection with regular sets.) Claiming merely that English dialects are CFL's would likewise not rule out the strange situation of having a pair of dialects, both CFL's, such that the intersection is not a CFL.

4. ARE ALL NL'S CFL'S?

More than a quarter-century of mistaken efforts have attempted to show that not all NL's are CFL's. This history is carefully reviewed by Pullum and Gazdar (1982). But there is no reason why future attempts should continue this record of failure. It is perfectly clear what sorts of data from a NL would show it to be outside the class of CFL's. For example, an infinite intersection with a regular set having the form of a triple-counting language or a string matching language (Pullum 1983) would suffice. However, the new arguments for non-context-freeness of English that have appeared between 1982 and the present all seem to be quite wide of the mark.

Manaster-Ramer (1983) points to the contemptuous reduplication pattern of Yiddish-influenced English, and suggests that it instantiates an infinite string matching language. But does our ability to construct phrases like Manaster-Ramer Schmanaster-Ramer (and analogously for any other word or phrase) really indicate that the syntax of English constrains the process? I do not think so. Manaster-Ramer is missing the distinction between the structure of a language and the culture of verbal play associated with it. I can speak in rhyming couplets, or with adjacent word-pairs deliberately Spoonerized, or solely in sentences having an even number of words, if I wish. The structure of my language allows for such games, but does not legislate regarding them.

Higginbotham (1984) presents a complex pumping-lemma argument on the basis of the alleged fact that sentences containing the construction a N such that S always contain an anaphoric pronoun within the clause S that is in syntactic agreement with the noun N. But his claim is false. Consider a phrase like any society such that more people get divorced than get married in an average year. This is perfectly grammatical, but has no overt anaphoric pronoun in the such that clause. (A similar example is concealed elsewhere in the text of this paper.)
Langendoen and Postal (1984) consider sentences like Joe was talking about some bourbon-lover, but WHICH bourbon-lover is unknown, and argue that a compound noun of any length can replace the first occurrence of bourbon-lover provided the same string is substituted for the second occurrence as well. They claim that this yields an infinite string matching language extractable from English through intersection with a regular set. But this argument presupposes that the ellipsis in WHICH bourbon-lover [Joe was talking about] must find its antecedent in the current sentence. This is not so. Linguistic accounts of anaphora have often been overly fixated on the intrasentential syntactic conditions on antecedent-anaphor pairings. Artificial intelligence researchers, on the other hand, have concentrated more on the resolution of anaphora within the larger context of the discourse. The latter emphasis is more likely to bring to our attention that ellipsis in one sentence can have its resolution through material in a preceding one. Consider the following exchange:

(5) A: It looks like they're going to appoint another bourbon-hater as Chair of the Liquor Purchasing Committee.
    B: Yes--even though Joe nominated some bourbon-lovers; but WHICH bourbon-hater is still unknown.

It is possible for the expression WHICH bourbon-hater in B's utterance to be understood as WHICH bourbon-hater [they're going to appoint] despite the presence in the same sentence of a mention of bourbon-lovers. There is thus no reason to believe that Langendoen and Postal's crucial example type is syntactically constrained to take its antecedent from within its own sentence, even though that is the only interpretation that would occur to the reader when judging the sentence in isolation.

Nothing known to me so far, therefore, suggests that English is syntactically other than a CFL; indeed, I know of no reason to think it is not a deterministic CFL. As far as engineering is concerned, this means that workers in natural language processing and artificial intelligence should not overlook (as they generally do at the moment) the possibilities inherent in the technology that has been independently developed for the computer processing of CFL's, or the mathematical results concerning their structures and properties.

From the theoretical standpoint, however, a different issue arises: is the context-freeness of English just an accident, much like the accident it would be if we found that Chinese was regular? Are there other languages that genuinely show non-context-free properties? I devote the next section to this question, because some very important results bearing on it have been reported recently. Since these results have not yet been published, I will have to summarize them rather abstractly, and cite forthcoming or in-preparation papers for further details.

5. NON-CONTEXT-FREENESS IN NATURAL LANGUAGES

Some remarkable facts recently reported by Christopher Culy suggest that the African language Bambara (Mande family, spoken in Senegal, Mali, and Upper Volta by over a million speakers) may be a non-CFL. Culy notes that Bambara forms from noun stems compound words of the form "Noun-o-Noun" with the meaning "whatever N". Thus, given that wulu means "dog", wulu-o-wulu means "whatever dog." He then observes that Bambara also forms compound noun stems of arbitrary length; wulu-filela means "dog-watcher," wulu-nyinila means "dog-hunter," wulu-filela-nyinila means "dog-watcher-hunter," and so on.
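The pattern is easy to render executably. In the sketch below, the morpheme spellings follow Culy's examples as quoted above, but the functions and the bound on compounding are mine; the last function makes plain that recognizing the construction amounts to checking an exact string copy.

    STEMS = ["wulu"]                  # "dog"
    SUFFIXES = ["filela", "nyinila"]  # "watcher", "hunter"

    def compounds(max_suffixes):
        """Enumerate compound noun stems like wulu-filela-nyinila."""
        out, frontier = [], STEMS
        for _ in range(max_suffixes + 1):
            out.extend(frontier)
            frontier = [c + "-" + s for c in frontier for s in SUFFIXES]
        return out

    def whatever(stem):
        """Form 'whatever N' by copying the whole stem around -o-."""
        return stem + "-o-" + stem

    def is_whatever(word):
        """Recognition here is a check for an exact copy, which is the
        string-matching property that takes us outside the CFLs."""
        left, sep, right = word.partition("-o-")
        return bool(sep) and left == right

    print(compounds(1))                          # three stems
    print(whatever("wulu-filela-nyinila"))       # whatever dog-watcher-hunter
    print(is_whatever("wulu-o-wulu"))            # True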
From this it is clear that arbitrarily long words like wulu-filela-nyinila-o-wulu-filela-nyinila "whatever dog-watcher-hunter" will be in the language. This is a realization of a hypothetical situation sketched by Langendoen (1981), in which reduplication applies to a class of stems that have no upper length bound. Culy (forthcoming) attempts to provide a formal demonstration that this phenomenon renders Bambara non-context-free.

If Bambara turns out to have a reduplication rule defined on strings of potentially unbounded length, then so might other languages. It would be reasonable, therefore, to investigate the case of Engenni (another African language, in the Kwa family, spoken in Rivers State, Nigeria by about 12,000 people). Carlson (1983), citing Thomas (1978), notes that Engenni is reported to have a phrasal reduplication construction: the final phrase of the clause is reduplicated to indicate "secondary aspect." Carlson is correct in noting that if there is no grammatical upper bound to the length of a phrase that may be reduplicated, there is a strong possibility that Engenni could be shown to be a non-CFL.

But it is not only African languages in which relevant evidence is being turned up. Swiss German may be another case. In Swiss German, there is evidence of a pattern of word order in subordinate infinitival clauses that is very similar to that observed in Dutch. Dutch shows a pattern in which an arbitrary number of noun phrases (NP's) may be followed by a finite verb and an arbitrary number of nonfinite verbs, and the semantic relations between them exhibit a crossed serial pattern---i.e. verbs further to the right in the string of verbs take as their objects NP's further to the right in the string of NP's. Bresnan et al. (1982) have shown that a CF-PSG could not assign such a set of dependencies syntactically, but as Pullum and Gazdar (1982, section 5) show, this does not make the stringset non-context-free. It is a semantic problem rather than a syntactic one. In Swiss German, however, there is a wrinkle that renders the phenomenon syntactic: certain verbs demand dative rather than accusative case on their objects, as a matter of pure syntax. This pattern will in general not be one that a CF-PSG can describe. For example, if there are two verbs v and v' and two nouns n and n', the set

    {xy | x is in (n, n')*, y is in (v, v')*, x and y are of equal length, and for all i, if the i-th member of x is n then the i-th member of y is v}

is not a CFL. Shieber (1984) has gathered data from Swiss German to support a rigorously formulated argument along these lines that the language is indeed not a CFL because of this construction.

It is possible that other languages will have properties that render them non-context-free. One case discussed in 1981 in unpublished work by Elisabet Engdahl and Annie Zaenen concerns Swedish. In Swedish, there are three grammatical genders, and adjectives agree in gender with the noun they describe. Consider the possibility of a "respectively"-sentence with a meaning like "the N1, N2, and N3 are respectively A1, A2, and A3," where N1, N2, and N3 have different genders and A1, A2, and A3 are required to agree with their corresponding nouns in gender. If the gender agreement were truly a syntactic matter (contra Pullum and Gazdar (1982, 500-501, note 12)), there could be an argument to be made that Swedish (or any language with facts of this sort) was not a CFL.
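For concreteness, here is one way (my own formalization, with the matching taken in both directions and lengths required to agree) to state the checking problem that the Swiss German case-marking pattern poses.

    def cross_serial_ok(x, y):
        """x over {"n","n'"} and y over {"v","v'"}; accept iff they are
        the same length and n lines up with v positionwise, i.e. the
        i-th verb's case demand matches the i-th noun cross-serially."""
        if len(x) != len(y):
            return False
        return all((xi == "n") == (yi == "v") for xi, yi in zip(x, y))

    print(cross_serial_ok(["n", "n'", "n"], ["v", "v'", "v"]))   # True
    print(cross_serial_ok(["n", "n'", "n"], ["v", "v", "v"]))    # False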
It is worth noting that arguments based on the above sets of facts have not yet been published for general scholarly scrutiny. Nonetheless, what I have seen convinces me that it is now very likely that we shall soon see a sound published demonstration that some natural language is non-context-free. It is time to consider carefully what the implications are if this is true.

6. CONTEXT-FREE GRAMMARS AND SEMANTIC FILTERING

What sort of expressive power do we obtain by allowing the definition of a language to be given jointly by the syntax and the semantics rather than just by the syntax, so that the syntactic rules can generate strings judged ill-formed by native speakers provided that the semantic rules are unable to assign interpretations to them?

This idea may seem to have a long history, in view of the fact that generative grammarians engaged in much feuding in the seventies over the rival merits of grammars that let "semantic" factors constrain syntactic rules and grammars that disallowed this but allowed "interpretive rules" to filter the output of the syntax. But in fact, the sterile disputes of those days were based on a use of the term "semantic" that bore little relation to its original or current senses. Rules that operated purely on representations of sentence structure were called "semantic" virtually at whim, despite matching perfectly the normal definition of "syntactic" in that they concerned relations holding among linguistic signs. The disputes were really about differently ornamented models of syntax.

What I mean by semantic filtering may be illustrated by reference to the analysis of expletive NP's like there in Sag (1982). It is generally taken to be a matter of syntax that the dummy pronoun subject there can appear as the subject in sentences like There are some knives in the drawer but not in strings like *There broke all existing records. Sag simply allows the syntax to generate structures for strings like the latter. He characterizes them as deviant by assigning to there a denotation (namely, an identity function on propositions) that does not allow it to combine with the translation of ordinary VP's like broke all existing records. The VP are some knives in the drawer is assigned by the semantic rules a denotation the same as that of the sentence Some knives are in the drawer, so there combines with it and a sentence meaning is obtained. But broke all existing records translates as a property, and no sentence meaning is obtained if it is given there as its subject. This is the sort of move that I will refer to as semantic filtering.

A question that seems never to have been considered carefully before is what kind of languages can be defined by providing a CF-PSG plus a set of semantic rules that leave some syntactically generated sentences without a sentence meaning as their denotation. For instance, in a system with a CF-PSG and a denotational semantics, can the set of sentences that get assigned sentence denotations be non-CF? I am grateful to Len Schubert for pointing out to me that the answer is yes, and providing the following example. Consider the following grammar, composed of syntactic rules paired with semantic translation schemata.

(6) S --> L R    F(L')(R')
    L --> C      C'
    R --> C      C'
    C --> a      a'
    C --> b      b'
    C --> aC     G(C')
    C --> bC     H(C')

Assume that there are two basic semantic types, A and B, and that a' and b' are constants denoting entities of types A and B respectively. F, G, and H are cross-categorial operators.
F(X) has the category of functions from X-type things to B-type things, G(X) has the category of functions from A-type things to X-type things, and H(X) has the category of functions from B-type things to X-type things. Given the semantic translation schemata, every different C constituent has a unique semantic category; the structure of the string is coded into the structure of its translation. But the first rule only yields a meaning for the S constituent if L' and R' are of the same category. Whatever semantic category may have been built up for an instance of L', the F operator applies to produce a function from things of that type to things of type B, and the rule says that this function must be applied to the translation of R'. Clearly, if R' has exactly the same semantic category as L' this will succeed in yielding a B-type denotation for S, and under all other circumstances S will fail to be assigned a denotation. The set of strings of category S that are assigned denotations under these rules is thus {xx | x is in (a, b)+}, which is a non-CF language.

We know, therefore, that it is possible for semantic filtering of a set of syntactic rules to alter expressive power significantly. We know, in fact, that it would be possible to handle Bambara noun stems in this way and design a set of translation principles that would only allow a string "Noun-o-Noun" to be assigned a denotation if the two instances of N were stringwise identical. What we do not know is how to formulate with clarity a principle of linguistic theory that adjudicates on the question of whether the resultant description, with its infinite number of distinct semantic categories, is permissible. Despite the efforts of Barbara Hall Partee and other scholars who have written on constraining the Montague semantics framework over the past ten years, questions about permissible power in semantic apparatus are still not very well explored.

One thing that is clear is that Gazdar and others who have claimed or assumed that NL's are context-free never intended to suggest that the entire mechanism of associating a sentence with a meaning could be carried out by a system equivalent to a pushdown automaton. Even if we take the notion "associating a sentence with a meaning" to be fully clear, which is granting a lot in the way of separating out pragmatic and discourse-related factors, it is obvious that operations beyond the power of a CF-PSG to define are involved. Things like identifying representations to which lambda-conversion can apply, determining whether all variables are bound, checking that every indexed anaphoric element has an antecedent with the same index, verifying that a structure contains no vacuous quantification, and so on, are obviously of non-CF character when regarded as language recognition problems. Indeed, in one case, that of disallowing vacuous quantifiers, it has been conjectured (Partee and Marsh 1984), though not yet proved, that even an indexed grammar does not have the requisite power.

It therefore should not be regarded as surprising that mechanisms devised to handle the sort of tasks involved in assigning meanings to sentences can come to the rescue in cases where a given syntactic framework has insufficient expressive power. Nor should it be surprising that those syntactic theories that build into the syntax a power that amply suffices to achieve a suitable syntax-to-semantics mapping have no trouble accommodating all new sets of facts that turn up.
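Schubert's construction (6) is small enough to run. In the sketch below, semantic categories are encoded as nested tuples, an encoding of mine; what it demonstrates is that each C constituent's category codes up its terminal string, so an S is assigned a denotation exactly when its two halves match.

    def category(w):
        """Semantic category of a C constituent with terminal string w."""
        assert w and set(w) <= {"a", "b"}
        if len(w) == 1:
            return "A" if w == "a" else "B"        # a' : A,  b' : B
        op = "G" if w[0] == "a" else "H"           # C -> aC / C -> bC
        return (op, category(w[1:]))

    def denotes(s):
        """Does syntax-generated s = L R get an S denotation? Only if
        F(L') can apply to R', i.e. the two categories coincide."""
        return any(category(s[:i]) == category(s[i:])
                   for i in range(1, len(s)))

    print(denotes("abab"))   # True:  ab . ab
    print(denotes("abba"))   # False: no split into equal categories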
The moment we adopt any mechanisms with greater than, say, context-free power, our problem is that we are faced with a multiplicity of ways to handle almost any descriptive problem.

7. GRAMMARS WITH INFINITE NONTERMINAL VOCABULARIES

Suppose we decide we want to reject the idea of letting a souped-up semantic rule system do part of the job of defining the membership of the language. What syntactic options are reasonable ones, given the kind of non-context-free languages we think we might have to describe?

There is a large range of theories of grammar definable if we relax the standard requirement that the set N of nonterminal vocabulary of the grammar should be finite. Since a finite parser for such a grammar cannot contain an infinite list of nonterminals, if the infinite majority of the nonterminals are not to be useless symbols, the parser must be equipped with some way of parsing representations of nonterminals, i.e. to test arbitrary objects for membership in N. If the tests do not guarantee results in finite time, then clearly the device may be of Turing-machine power, and may define an undecidable language. Two particularly interesting types of grammar that do not have this property are the following:

Indexed grammars. If members of N are built up using sequences of indices affixed to members of a finite set of basic nonterminals, and rules in P are able to add or remove sequence-initial indices attached to a given basic nonterminal, the expressive power achieved is that of the indexed grammars of Aho (1968). These have an automata-theoretic characterization in terms of a stack automaton that can build stacks inside other stacks but can only empty a stack after all the stacks within it have been emptied. The time complexity of the parsing problem is exponential.

Unification grammars. If members of N have internal hierarchical structure and parsing operations are permitted to match hierarchical representations one with another globally to determine whether they unify (roughly, whether there is a minimal consistent representation that includes the distinctive properties of both), and if the number of parses for a given sentence is kept to a finite number by requiring that we do not have A ==> A for any A, then the expressive power seems to be weakly equivalent to the grammars that Joan Bresnan and Ron Kaplan have developed under the name lexical-functional grammar (LFG; see Bresnan, ed., 1982; cf. also the work of Martin Kay on unification grammars). The LFG languages include some non-indexed languages (Kelly Roach, unpublished work), and apparently have an NP-complete parsing problem (Ron Kaplan, personal communication).

Systems of this sort have an undeniable interest in connection with the study of natural language. Both theories of language structure and computational implementations of grammars can be usefully explored in such terms. My criticism of them would be that it seems to me that the expressive power of these systems is too extreme. Linguistically they are insufficiently restrictive, and computationally they are implausibly wasteful of resources. However, rather than attempt to support this vague prejudice with specific criticisms, I would prefer to use my space here to outline an alternative that seems to me extremely promising.

8. HEAD GRAMMARS AND NATURAL LANGUAGES

In his recent doctoral dissertation, Carl Pollard (1984) has given a detailed exposition and motivation for a class of grammars he terms head grammars.
Roach (1984) has proved that the languages generated by head grammars constitute a full AFL, showing all the significant closure properties that characterize the class of CFL's. Head grammars have a greater expressive power, in terms of weak and strong generative capacity, than the CF-PSG's, but only to a very limited extent, as shown by some subtle and surprising results due to Roach (1984). For example, there is a head grammar for {a^n b^n c^n a^n | n >= 0} but not for {a^n b^n c^n d^n a^n | n >= 0}, and there is a head grammar for {ww | w is in (a, b)*} but not for {www | w is in (a, b)*}.

The time complexity of the recognition problem for head grammars is also known: a time bound proportional to the seventh power of the length of the input is sufficient to allow for recognition in the worst case on a deterministic Turing machine (Pollard 1984). This clearly places head grammars in the realm of tractable linguistic formalisms.

The extension Pollard makes in CF-PSG to obtain the head grammars is in essence fairly simple. First, he treats the notion "head" as a primitive. The strings of terminals his syntactic rules define are headed strings, which means they are associated with an indication of a designated element to be known as the head. Second, he adds eight new "wrapping" operations to the standard concatenation operation on strings that a CF-PSG can define. For a given ordered pair <B,C> of headed strings there are twelve ways in which strings B and C can be combined to make a constituent A. I give here the descriptions of just two of them which I will use below:

LC1(B,C): concatenate C onto the end of B; the first argument (B) is head of the result. Mnemonic: Left Concatenation with 1st as new head.

LL2(B,C): wrap B around C, with the head of B to the left of C; C is head of the result. Mnemonic: Left wrapping with head to the Left and 2nd as new head.

The full set of operations is given in the chart in figure 1.

Figure 1: combinatory operations in head grammar. [The chart classifies the twelve operations LC1, LC2, RC1, RC2, LL1, LL2, LR1, LR2, RL1, RL2, RR1, and RR2 by three parameters: leftward or rightward; concatenation, left wrapping, or right wrapping; and whether the 1st or 2nd argument is head of the result.]

A simple and linguistically motivated head grammar can be given for the Swiss German situation mentioned earlier. I will not deal with it here, because in the first place it would take considerable space, and in the second place it is very simple to read off the needed account from Pollard's (1984) treatment of the corresponding situation in Dutch, making the required change in the syntax of case-marking. In the next section I apply head grammar to cases like that of Bambara noun reduplication.

9. THE RIDDLE OF REDUPLICATION

I have shown in section 6 that the set of Bambara complex nouns of the form "Noun-o-Noun" could be described using semantic filtering of a context-free grammar. Consider now how a head grammar could achieve a description of the same facts. Assume, to simplify the situation, just two noun stems in Bambara, represented here as a and b.
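Before giving the grammar, it will help to render the two operations described above in executable form. Modeling a headed string as a pair of a word list and a head index is a representation choice of mine, not Pollard's notation.

    def LC1(B, C):
        """Concatenate C onto the end of B; B's head heads the result."""
        (bw, bh), (cw, ch) = B, C
        return (bw + cw, bh)

    def LL2(B, C):
        """Wrap B around C, head of B immediately to the left of C;
        C's head heads the result."""
        (bw, bh), (cw, ch) = B, C
        return (bw[:bh + 1] + cw + bw[bh + 1:], bh + 1 + ch)

    x = LL2((["b", "b"], 0), (["a"], 0))   # b [a] b
    print(x)                                # (['b', 'a', 'b'], 1)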
The following head grammar generates the language {x-o-x | x is in (a, b)+}:

(7) Syntax            Lexicon
    S --> LC1(M, A)   A --> a
    S --> LC1(N, B)   B --> b
    M --> LL2(X, O)   O --> o
    N --> LL2(Y, O)   Z --> e
    X --> LL2(Z, A)
    Y --> LL2(Z, B)
    Z --> LC1(X, A)
    Z --> LC1(Y, B)

The structure this grammar assigns to the string ba-o-ba is shown in figure 2 in the form of a tree with crossing branches, using asterisks to indicate heads (or strictly, nodes through which the path from a label to the head of its terminal string passes).

Figure 2: structure of the string "ba-o-ba" according to the grammar in (7). [A tree with crossing branches over the terminal string b a o b a; asterisks mark, for each constituent, the daughter containing its head.]
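Using the operations sketched above, the derivation of ba-o-ba under grammar (7) can be replayed step by step. Representing the empty headed string produced by Z --> e as an empty list with head index -1 is an encoding choice of mine.

    def LC1(B, C):                   # repeated from the sketch above
        return (B[0] + C[0], B[1])

    def LL2(B, C):
        bw, bh = B
        return (bw[:bh + 1] + C[0] + bw[bh + 1:], bh + 1 + C[1])

    A, B, O, Z = (["a"], 0), (["b"], 0), (["o"], 0), ([], -1)

    Y  = LL2(Z, B)    # Y --> LL2(Z, B)  : b
    Z1 = LC1(Y, B)    # Z --> LC1(Y, B)  : b b     (head = first b)
    X  = LL2(Z1, A)   # X --> LL2(Z, A)  : b a b   (head = a)
    M  = LL2(X, O)    # M --> LL2(X, O)  : b a o b (head = o)
    S  = LC1(M, A)    # S --> LC1(M, A)  : b a o b a

    print("-".join(S[0]))   # b-a-o-b-a, i.e. ba-o-ba give or take hyphens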
We know, therefore, that there are at least two options available to us when we consider how a case like Bambara may be described in rigorous and computationally tractable terms: semantic filtering of a CF-PSG, or the use of head grammars. However, I would like to point to certain considerations suggesting that although both of these options are useful as existence proofs and mathematical benchmarks, neither is the right answer for the Bambara case.

The semantic filtering account of Bambara complex nouns would imply that every complex noun stem in Bambara was of a different semantic category, for the encoding of the exact repetition of the terminal string of the noun stem would have to be in terms of a unique compositional structure. This seems inherently implausible; "dog-catcher-catcher-catcher" should have the same semantic category as "dog-catcher-catcher" (both should denote properties, I would assume). And the head grammar account of the same facts has two peculiarities. First, it predicts a peculiar structure of word-internal crossing syntactic dependencies (for example, that in dog-catcher-o-dog-catcher, one constituent is dog-dog and another is dog-catcher-o-dog) that seem unmotivated and counter-intuitive. Second, the grammar for the set of complex nouns is profligate in the sense of Pullum (1983): there are inherently and necessarily more nonterminals involved than terminals---and thus more different ad hoc syntactic categories than there are noun stems. Again, this seems abhorrent.

What is the correct description? My analytical intuition (which of course, I do not ask others to accept unquestioningly) is that we need a direct reference to the reduplication of the surface string, and this is missing in both accounts. Somehow I think the grammatical rules should reflect the notion "repeat the morpheme-string" directly, and by the same token the parsing process should directly recognize the reduplication of the noun stem rather than happen indirectly to guarantee it.

I even think there is evidence from English that offers support for such an idea. There is a construction illustrated by phrases like Tracy hit it and hit it and hit it that was discussed by Browne (1964), an unpublished paper that is summarized by Lakoff and Peters (1969, 121-122, note 8). It involves reduplication of a constituent (here, a verb phrase). One of the curious features of this construction is that if the reduplicated phrase is an adjective phrase in the comparative degree, the expression of the comparative degree must be identical throughout, down to the morphological and phonological level:

(8) a. Kim got lonelier and lonelier and lonelier.
    b. Kim got more and more and more lonely.
    c. *Kim got lonelier and more lonely and lonelier.

This is a problem even under transformational conceptions of grammar, since at the levels where syntactic transformations apply, lonelier and more lonely are generally agreed to be indistinguishable. The symmetry must be preserved at the phonological level. I suggest that again a primitive syntactic operation "repeat the morpheme-string" is called for. I have no idea at this stage how it would be appropriate to formalize such an operation and give it a place in syntactic theory.

10. CONCLUSION

The arguments originally given at the start of the era of generative grammar were correct in their conclusion that NL's cannot be treated as simply regular sets of strings, as some early information-theoretic models of language users would have had it. However, questions of whether NL's were CFL's were dismissed rather too hastily; English was never shown to be outside the class of CFL's or even the DCFL's (the latter question never even having been raised), and for other languages the first apparently valid arguments for non-CFL status are only now being framed. If we are going to employ supra-CFL mechanisms in the characterizing and processing of NL's, there are a host of items in the catalog for us to choose among. I have shown that semantic filtering is capable of enhancing the power of a CF-PSG, and so, in many different ways, is relaxing the finiteness condition on the nonterminal vocabulary. Both of these moves are likely to inflate expressive power quite dramatically, it seems to me. One of the most modest extensions of CF-PSG being explored is Pollard's head grammar, which has enough expressive power to handle the cases that seem likely to arise, but I have suggested that even so, it does not seem to be the right formalism to cover the case of the complex nouns in the lexicon of Bambara. Something different is needed, and it is not quite clear what.

This is a familiar situation in linguistics. Description of facts gets easier as the expressive power of one's mechanisms is enhanced, but choosing among alternatives, of course, gets harder. What I would offer as a closing suggestion is that until we are able to encode different theoretical proposals (head grammar, string transformations, LFG, unification grammar, definite clause grammar, indexed grammars, semantic filtering) in a single, implemented, well-understood formalism, our efforts to be sure we have shown one proposal to be better than another will be, in Gerald Gazdar's scathing phrase, "about as sensible as claims to the effect that Turing machines which employ narrow grey tape are less powerful than ones employing wide orange tape" (1982, 131). In this connection, the aims of the PATR project at SRI International seem particularly helpful. If the designers of PATR can demonstrate that it has enough flexibility to encode rival descriptions of NL's like English, Bambara, Engenni, Dutch, Swedish, and Swiss German, and to do this in a neutral way, there may be some hope in the future (as there has not been in the past, as far as I can see) of evaluating alternative linguistic theories and descriptions as rigorously as computer scientists evaluate alternative sorting algorithms or LISP implementations.

REFERENCES

Bermudez, Manuel (1984) Regular Lookahead and Lookback in LR Parsers. PhD thesis, University of California, Santa Cruz.

Bresnan, Joan W., ed. (1982) The Mental Representation of Grammatical Relations. MIT Press, Cambridge, MA.
Browne, Wayles (1964) "On adjectival comparison and reduplication in English." Unpublished paper.

Carlson, Greg (1983) "Marking constituents," in Frank Heny, ed., Linguistic Categories: Auxiliaries and Related Puzzles, vol. 1: Categories, 69-98. D. Reidel, Dordrecht.

Chomsky, Noam (1957) Syntactic Structures. Mouton, The Hague.

Church, Kenneth (1980) On Memory Limitations in Natural Language Processing. M.Sc. thesis, MIT. Published by Indiana University Linguistics Club, Bloomington IN.

Culy, Christopher (forthcoming) "The complexity of the vocabulary of Bambara."

Daly, R. T. (1974) Applications of the Mathematical Theory of Linguistics. Mouton, The Hague.

Gazdar, Gerald (1981a) "Unbounded dependencies and coordinate structure," Linguistic Inquiry 12, 155-184.

Gazdar, Gerald (1981b) "On syntactic categories," Philosophical Transactions of the Royal Society (Series B) 295, 267-283.

Gazdar, Gerald (1982) "Phrase structure grammar," in Jacobson and Pullum, eds., 131-186.

Gazdar, Gerald; Pullum, Geoffrey K.; and Sag, Ivan A. (1982) "Auxiliaries and related phenomena in a restrictive theory of grammar," Language 58, 591-638.

Hagège, Claude (1976) "Relative clause center-embedding and comprehensibility," Linguistic Inquiry 7, 198-201.

Higginbotham, James (1984) "English is not a context-free language," Linguistic Inquiry 15, 225-234.

Jacobson, Pauline, and Pullum, Geoffrey K., eds. (1982) The Nature of Syntactic Representation. D. Reidel, Dordrecht, Holland.

Lakoff, George, and Peters, Stanley (1969) "Phrasal conjunction and symmetric predicates," in David A. Reibel and Sanford A. Schane, eds., Modern Studies in English. Prentice-Hall, Englewood Cliffs.

Langendoen, D. Terence (1975) "Finite-state parsing of phrase-structure languages and the status of readjustment rules in grammar," Linguistic Inquiry 6, 533-554.

Langendoen, D. Terence (1981) "The generative capacity of word-formation components," Linguistic Inquiry 12, 320-322.

Langendoen, D. Terence, and Postal, Paul M. (1984) "English and the class of context-free languages," unpublished paper.

Levelt, W. J. M. (1974) Formal Grammars in Linguistics and Psycholinguistics (vol. II): Applications in Linguistic Theory. Mouton, The Hague.

Manaster-Ramer, Alexis (1983) "The soft formal underbelly of theoretical syntax," in Papers from the Nineteenth Regional Meeting, Chicago Linguistic Society, Chicago IL.

Marcus, Mitchell (1980) A Theory of Syntactic Recognition for Natural Language. MIT Press, Cambridge MA.

Nozick, Robert (1974) Anarchy, State, and Utopia. Basic Books, New York.

Partee, Barbara, and William Marsh (1984) "How non-context-free is variable binding?" Presented at the Third West Coast Conference on Formal Linguistics, University of California, Santa Cruz.

Pereira, Fernando (1984) "A new characterization of attachment preferences," in D. R. Dowty, L. Karttunen, and A. M. Zwicky, eds., Natural Language Processing: Psycholinguistic, Computational and Theoretical Perspectives. Cambridge University Press, New York NY.

Pollard, Carl J. (1984) Generalized Phrase Structure Grammars, Head Grammars, and Natural Languages. Ph.D. thesis, Stanford University.

Pullum, Geoffrey K. (1979) Rule Interaction and the Organization of a Grammar. Garland, New York.

Pullum, Geoffrey K. (1983) "Context-freeness and the computer processing of human languages," in 21st Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, 1-6. ACL, Menlo Park CA.
Pullum, Geoffrey K., and Gerald Gazdar (1982) "Natural languages and context-free languages," Linguistics and Philosophy 4, 471-504.

Rich, Elaine (1983) Artificial Intelligence. McGraw-Hill, New York NY.

Roach, Kelly (1984) "Formal properties of head grammars." Unpublished paper, Xerox Palo Alto Research Center, Palo Alto CA.

Sag, Ivan A. (1982) "A semantic analysis of 'NP-movement' dependencies in English," in Jacobson and Pullum, eds., 427-466.

Shieber, Stuart (1983) "Sentence disambiguation by a shift-reduce parsing technique," in 21st Annual Meeting of the Association for Computational Linguistics: Proceedings of the Conference, 113-118. ACL, Menlo Park CA.

Shieber, Stuart (1984) "Evidence against the context-freeness of natural language." Unpublished paper. SRI International, Menlo Park CA, and Center for the Study of Language and Information, Stanford CA.

Thomas, E. (1978) A Grammatical Description of the Engenni Language. SIL Publication no. 60. Summer Institute of Linguistics, Arlington TX.
1984
26
The Semantics of Grammar Formalisms Seen as Computer Languages

Fernando C. N. Pereira and Stuart M. Shieber
Artificial Intelligence Center, SRI International
and Center for the Study of Language and Information, Stanford University

Abstract

The design, implementation, and use of grammar formalisms for natural language have constituted a major branch of computational linguistics throughout its development. By viewing grammar formalisms as just a special case of computer languages, we can take advantage of the machinery of denotational semantics to provide a precise specification of their meaning. Using Dana Scott's domain theory, we elucidate the nature of the feature systems used in augmented phrase-structure grammar formalisms, in particular those of recent versions of generalized phrase structure grammar, lexical functional grammar and PATR-II, and provide a denotational semantics for a simple grammar formalism. We find that the mathematical structures developed for this purpose contain an operation of feature generalization, not available in those grammar formalisms, that can be used to give a partial account of the effect of coordination on syntactic features.

1. Introduction [1]

The design, implementation, and use of grammar formalisms for natural language have constituted a major branch of computational linguistics throughout its development. However, notwithstanding the obvious superficial similarity between designing a grammar formalism and designing a programming language, the design techniques used for grammar formalisms have almost always fallen short with respect to those now available for programming language design.

Formal and computational linguists most often explain the effect of a grammar formalism construct either by example or through its actual operation in a particular implementation. Such practices are frowned upon by most programming-language designers; they become even more dubious if one considers that most grammar formalisms in use are based either on a context-free skeleton with augmentations or on some closely related device (such as ATNs), consequently making them obvious candidates for a declarative semantics [2] extended in the natural way from the declarative semantics of context-free grammars.

The last point deserves amplification. Context-free grammars possess an obvious declarative semantics in which nonterminals represent sets of strings and rules represent n-ary relations over strings. This is brought out by the reinterpretation familiar from formal language theory of context-free grammars as polynomials over concatenation and set union. The grammar formalisms developed from the definite-clause subset of first-order logic are the only others used in natural-language analysis that have been accorded a rigorous declarative semantics--in this case derived from the declarative semantics of logic programs [3,12,11].

Much confusion, wasted effort, and dissension have resulted from this state of affairs. In the absence of a rigorous semantics for a given grammar formalism, the user, critic, or implementer of the formalism risks misunderstanding the intended interpretation of a construct, and is in a poor position to compare it to alternatives. Likewise, the inventor of a new formalism can never be sure of how it compares with existing ones.

[1] The research reported in this paper has been made possible by a gift from the System Development Foundation.

[2] This use of the term "semantics" should not be confused with the more common usage denoting that portion of a grammar concerned with the meaning of object sentences. Here we are concerned with the meaning of the metalanguage.
As an example of these difficulties, two simple changes in the implementation of the ATN formalism, the addition of a well-formed substring table and the use of a bottom-up parsing strategy, required a rather subtle and unanticipated reinterpretation of the register-testing and -setting actions, thereby imparting a different meaning to grammars that had been developed for the initial top-down backtrack implementation [22].

Rigorous definitions of grammar formalisms can and should be made available. Looking at grammar formalisms as just a special case of computer languages, we can take advantage of the machinery of denotational semantics [20] to provide a precise specification of their meaning. This approach can elucidate the structure of the data objects manipulated by a formalism and the mathematical relationships among various formalisms, suggest new possibilities for linguistic analysis (the subject matter of the formalisms), and establish connections between grammar formalisms and such other fields of research as programming-language design and theories of abstract data types. This last point is particularly interesting because it opens up several possibilities--among them that of imposing a type discipline on the use of a formalism, with all the attendant advantages of compile-time error checking, modularity, and optimized compilation techniques for grammar rules, and that of relating grammar formalisms to other knowledge representation languages [1].

As a specific contribution of this study, we elucidate the nature of the feature systems used in augmented phrase-structure grammar formalisms, in particular those of recent versions of generalized phrase structure grammar (GPSG) [5,15], lexical functional grammar (LFG) [2] and PATR-II [18,17]; we find that the mathematical structures developed for this purpose contain an operation of feature generalization, not available in those grammar formalisms, that can be used to give a partial account of the effect of coordination on syntactic features.

Just as studies in the semantics of programming languages start by giving semantics for simple languages, so we will start with simple grammar formalisms that capture the essence of the method without an excess of obscuring detail.

The present enterprise should be contrasted with studies of the generative capacity of formalisms using the techniques of formal language theory. First, a precise definition of the semantics of a formalism is a prerequisite for such generative-capacity studies, and this is precisely what we are trying to provide. Second, generative capacity is a very coarse gauge: in particular, it does not distinguish among different formalisms with the same generative capacity that may, however, have very different semantic accounts. Finally, the tools of formal language theory are inadequate to describe at a sufficiently abstract level formalisms that are based on the simultaneous solution of sets of constraints [9,10]. An abstract analysis of those formalisms requires a notion of partial information that is precisely captured by the constructs of denotational semantics.

2. Denotational Semantics
In broad terms, denotational semantics is the study of the connection between programs and mathematical entities that represent their input-output relations. For such an account to be useful, it must be compositional, in the sense that the meaning of a program is developed from the meanings of its parts by a fixed set of mathematical operations that correspond directly to the ways in which the parts participate in the whole.

For the purposes of the present work, denotational semantics will mean the semantic domain theory initiated by Scott and Strachey [20]. In accordance with this approach, the meanings of programming language constructs are certain partial mappings between objects that represent partially specified data objects or partially defined states of computation. The essential idea is that the meaning of a construct describes what information it adds to a partial description of a data object or of a state of computation. Partial descriptions are used because computations in general may not terminate and may therefore never produce a fully defined output, although each individual step may be adding more and more information to a partial description of the undeliverable output.

Domain theory is a mathematical theory of considerable complexity. Potential nontermination and the use of functions as "first-class citizens" in computer languages account for a substantial fraction of that complexity. If, as is the case in the present work, neither of those two aspects comes into play, one may be justified in asking why such a complex apparatus is used. Indeed, both the semantics of context-free grammars mentioned earlier and the semantics of logic grammars in general can be formulated using elementary set theory [7,21]. However, using the more complex machinery may be beneficial for the following reasons:

• Inherent partiality: many grammar formalisms operate in terms of constraints between elements that do not fully specify all the possible features of an element.

• Technical economy: results that require laborious constructions without utilizing domain theory can be reached trivially by using standard results of the theory.

• Suggestiveness: domain theory brings with it a rich mathematical structure that suggests useful operations one might add to a grammar formalism.

• Extensibility: unlike a domain-theoretic account, a specialized semantic account, say in terms of sets, may not be easily extended as new constructs are added to the formalism.

3. The Domain of Feature Structures

We will start with an abstract denotational description of a simple feature system which bears a close resemblance to the feature systems of GPSG, LFG and PATR-II, although this similarity, because of its abstractness, may not be apparent at first glance. Such feature systems tend to use data structures or mathematical objects that are more or less isomorphic to directed graphs of one sort or another, or, as they are sometimes described, partial functions. Just what the relation is between these two ways of viewing things will be explained later. In general, these graph structures are used to encode linguistic information in the form of attribute-value pairs. Most importantly, partial information is critical to the use of such systems--for instance, in the variables of definite clause grammars [12] and in the GPSG analysis of coordination [15].
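The directed-graph, partial-function view can be pictured with a small executable sketch. Nested dictionaries stand in for finite partial functions from labels to values; the example bundle and the helper below are mine, for illustration only.

    f = {                      # agreement bundle for a hypothetical verb
        "cat": "V",
        "agr": {"person": "3rd", "number": "sg"},
    }

    def value(node, path):
        """Follow a path of labels; None models partiality (no value)."""
        for label in path:
            if not isinstance(node, dict) or label not in node:
                return None
            node = node[label]
        return node

    print(value(f, ["agr", "number"]))   # 'sg'
    print(value(f, ["agr", "gender"]))   # None: f is partial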
That is, the elements of the feature systems, called feature structures (alternatively, feature bundles, f-structures [2], or terms) can be partial in some sense. The partial descriptions, being in a domain of attributes and complex values, tend to be equational in nature: some feature's value is equated with some other value. Partial descriptions can be understood in one of two ways: either the descriptions represent sets of fully specified elements of an underlying domain or they are regarded as participating in a relationship of partiality with respect to each other. We will hold to the latter view here.

What are feature structures from this perspective? They are repositories of information about linguistic entities. In domain-theoretic terms, the underlying domain of feature structures F is a recursive domain of partial functions from a set of labels L (features, attribute names, attributes) to complex values or primitive atomic values taken from a set C of constants. Expressed formally, we have the domain equation

    F = [L → F] + C

The solution of this domain equation can be understood as a set of trees (finite or infinite) with branches labeled by elements of L, and with other trees or constants as nodes. The branches l1, ..., lm from a node n point to the values n(l1), ..., n(lm) for which the node, as a partial function, is defined.

4. The Domain of Descriptions

What the grammar formalism does is to talk about F, not in F. That is, the grammar formalism uses a domain of descriptions of elements of F. From an intuitive standpoint, this is because, for any given phrase, we may know facts about it that cannot be encoded in the partial function associated with it. A partial description of an element x of F will be a set of equations that constrain the values of x on certain labels. In general, to describe an element x ∈ F we have equations of the following forms:

    (...(x(l_i1))...)(l_im) = (...(x(l_j1))...)(l_jn)
    (...(x(l_i1))...)(l_im) = c_k

which we prefer to write as

    ⟨l_i1 ... l_im⟩ = ⟨l_j1 ... l_jn⟩
    ⟨l_i1 ... l_im⟩ = c_k

with x implicit. The terms of such equations are constants c ∈ C or paths ⟨l1 ... lm⟩, which we identify in what follows with strings in L*. Taken together, constants and paths comprise the descriptors.

Using Scott's information systems approach to domain construction [16], we can now build directly a characterization of feature structures in terms of information-bearing elements, equations, that engender a system complete with notions of compatibility and partiality of information. The information system D describing the elements of F is defined, following Scott, as the tuple

    D = (P, Δ, Con, ⊢)

where P is a set of propositions, Con is a set of finite subsets of P, the consistent subsets, ⊢ is an entailment relation between elements of Con and elements of P, and Δ is a special least informative element that gives no information at all. We say that a subset S of P is deductively closed if every proposition entailed by a consistent subset of S is in S. The deductive closure S̄ of S ⊆ P is the smallest deductively closed subset of P that contains S.

The descriptor equations discussed earlier are the propositions of the information system for feature structure descriptions. Equations express constraints among feature values in a feature structure, and the entailment relation encodes the reflexivity, symmetry, transitivity and substitutivity of equality.
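To fix intuitions, the domain equation and the equational descriptions can be mimicked in a few lines of Python. This is purely an illustrative encoding (nested dictionaries standing in for partial functions, tuples for paths), not part of the formalism itself:

    # Illustrative encoding only. A path is a tuple of labels; a descriptor
    # is a path or an atomic constant. A description is a set of equations,
    # each equation a pair of descriptors.
    eqs = {
        (("agr", "num"), "sg"),            # <agr num> = sg
        (("subj", "agr"), ("agr",)),       # <subj agr> = <agr>  (path = path)
    }

    # A feature structure (an element of F) is a nested dict from labels to
    # sub-structures or constants, i.e. a finite partial function [L -> F] + C.
    fs = {"agr": {"num": "sg", "per": "3"}}

    def lookup(f, path):
        """Follow a path through a feature structure; None if undefined."""
        for label in path:
            if not isinstance(f, dict) or label not in f:
                return None
            f = f[label]
        return f

    print(lookup(fs, ("agr", "num")))   # -> sg
    print(lookup(fs, ("agr", "case")))  # -> None (partiality)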
More precisely, we say that a finite set of equations E entails an equation e if

• Membership: e ∈ E
• Reflexivity: e is Δ or d = d for some descriptor d
• Symmetry: e is d1 = d2 and d2 = d1 is in E
• Transitivity: e is d1 = d2 and there is a descriptor d such that d1 = d and d = d2 are in E
• Substitutivity: e is d1 = p1 · d2 and both p1 = p2 and d1 = p2 · d2 are in E
• Iteration: there is E' ⊆ E such that E' ⊢ e and for all e' ∈ E', E ⊢ e'

With this notion of entailment, the most natural definition of the set Con is that a finite subset E of P is consistent if and only if it does not entail an inconsistent equation, which has the form c1 = c2, with c1 and c2 distinct constants. An arbitrary subset of P is consistent if and only if all its finite subsets are consistent in the way defined above.

The consistent and deductively closed subsets of P ordered by inclusion form a complete partial order or domain D, our domain of descriptions of feature structures. Deductive closure is used to define the elements of D so that elements defined by equivalent sets of equations are the same. In the rest of this paper, we will specify elements of D by convenient sets of equations, leaving the equations in the closure implicit.

The inclusion order ⊑ in D provides the notion of a description being more or less specific than another. The least-upper-bound operation ⊔ combines two descriptions into the least instantiated description that satisfies the equations in both descriptions, their unification. The greatest-lower-bound operation ⊓ gives the most instantiated description containing all the equations common to two descriptions, their generalization.

The foregoing definition of consistency may seem very natural, but it has the technical disadvantage that, in general, the union of two consistent sets is not itself a consistent set; therefore, the corresponding operation of unification may not be defined on certain pairs of inputs. Although this does not cause problems at this stage, it fails to deal with the fact that failure to unify is not the same as lack of definition, and causes technical difficulties when providing rule denotations. We therefore need a slightly less natural definition. First we add another statement to the specification of the entailment relation:

• Falsity: if e is inconsistent, {e} entails every element of P.

That is, falsity entails anything. Next we define Con to be simply the set of all finite subsets of P. The set Con no longer corresponds to sets of equations that are consistent in the usual equational sense. With the new definitions of Con and ⊢, the deductive closure of a set containing an inconsistent equation is the whole of P. The partial order D is now a lattice with top element ⊤ = P, and the unification operation ⊔ is always defined and returns ⊤ on unification failure.
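Unification on the tree-shaped fragment of F can be sketched directly on the nested-dictionary encoding used above. The following toy unifier is an illustrative simplification rather than the construction just defined: it ignores reentrancy (path = path links) and models ⊤ as an exception:

    class Top(Exception):
        """Unification failure: the inconsistent description (top element)."""

    def unify(a, b):
        """Least upper bound of two feature structures encoded as nested
        dicts (complex values) and strings (constants). Tree-shaped only:
        path=path links are not modelled in this sketch."""
        if isinstance(a, dict) and isinstance(b, dict):
            out = dict(a)
            for label, value in b.items():
                out[label] = unify(out[label], value) if label in out else value
            return out
        if a == b:
            return a
        raise Top()  # two distinct constants equated: no consistent merge

    np = {"agr": {"num": "sg", "per": "3"}}
    vp = {"agr": {"num": "sg"}, "tense": "pres"}
    print(unify(np, vp))  # {'agr': {'num': 'sg', 'per': '3'}, 'tense': 'pres'}
    try:
        unify({"agr": {"num": "sg"}}, {"agr": {"num": "pl"}})
    except Top:
        print("unification failure -> TOP")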
We can now define the description mapping δ : D → F that relates descriptions to the described feature structures. The idea is that, in proceeding from a description d ∈ D to a feature structure f ∈ F, we keep only definite information about values and discard information that only states value constraints, but does not specify the values themselves. More precisely, seeing d as a set of equations, we consider only the subset ⌊d⌋ of d with elements of the form

    ⟨l1 ... lm⟩ = c_k

Each e ∈ ⌊d⌋ defines an element f(e) of F by the equations

    f(e)(l1) = f1
    f1(l2) = f2
    ...
    f_{m-1}(lm) = c_k

with each of the f_i undefined for all other labels. Then, we can define δ(d) as

    δ(d) = ⊔_{e ∈ ⌊d⌋} f(e)

This description mapping can be shown to be continuous in the sense of domain theory; that is, it has the properties that increasing information in a description leads to nondecreasing information in the described structures (monotonicity) and that if a sequence of descriptions approximates another description, the same condition holds for the described structures.

Note that δ may map several elements of D on to one element of F. For example, the elements given by the two sets of equations

    {⟨f h⟩ = ⟨g i⟩, ⟨f h⟩ = c}    and    {⟨f h⟩ = c, ⟨g i⟩ = c}

describe the same structure, because the description mapping ignores the link between ⟨f h⟩ and ⟨g i⟩ in the first description. Such links are useful only when unifying with further descriptive elements, not in the completed feature structure, which merely provides feature-value assignments.

Informally, we can think of elements of D as directed rooted graphs and of elements of F as their unfoldings as trees, the unfolding being given by the mapping δ. It is worth noting that if a description is cyclic - that is, if it has cycles when viewed as a directed graph - then the resulting feature tree will be infinite (more precisely, a rational tree, that is, a tree with a finite number of distinct subtrees). Stated more precisely, an element f of a domain is finite if, for any ascending sequence {d_i} such that f ⊑ ⊔_i d_i, there is an i such that f ⊑ d_i. Then the cyclic elements of D are those finite elements that are mapped by δ into nonfinite elements of F.

5. Providing a Denotation for a Grammar

We now move on to the question of how the domain D is used to provide a denotational semantics for a grammar formalism. We take a simple grammar formalism with rules consisting of a context-free part over a nonterminal vocabulary N = {N1, ..., Nk} and a set of equations over paths in ([0..∞] · L*) ∪ C. A sample rule might be

    S → NP VP
    ⟨0 subj⟩ = ⟨1⟩
    ⟨0 predicate⟩ = ⟨2⟩
    ⟨1 agr⟩ = ⟨2 agr⟩

This is a simplification of the rule format used in the PATR-II formalism [18,17]. The rule can be read as "an S is an NP followed by a VP, where the subject of the S is the NP, its predicate the VP, and the agreement of the NP the same as the agreement of the VP".
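A rule in this format is plain data. The following Python transcription of the sample rule is an invented concrete syntax (tuples as paths, with the integer 0 indexing the parent and 1..m the children), not actual PATR-II notation:

    # Illustrative encoding of the sample rule above (not PATR-II syntax).
    rule = {
        "lhs": "S",
        "rhs": ["NP", "VP"],
        "equations": [
            ((0, "subj"), (1,)),        # <0 subj> = <1>
            ((0, "predicate"), (2,)),   # <0 predicate> = <2>
            ((1, "agr"), (2, "agr")),   # <1 agr> = <2 agr>
        ],
    }

    for lhs, rhs in rule["equations"]:
        print(lhs, "=", rhs)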
More formally, a grammar is a quintuple G = (N, S, L, C, R), where

• N is a finite, nonempty set of nonterminals N1, ..., Nk
• S is the set of strings over some alphabet (a flat domain with an ancillary continuous function concatenation, notated with the symbol ·)
• L and C are, as before, the sets of labels and constants
• R is a set of pairs r = (N_r0 → N_r1 ... N_rm, E_r), where E_r is a set of equations between elements of ([0..m] · L*) ∪ C.

As with context-free grammars, local ambiguity of a grammar means that in general there are several ways of assembling the same subphrases into phrases. Thus, the semantics of context-free grammars is given in terms of sets of strings. The situation is somewhat more complicated in our sample formalism. The objects specified by the grammar are pairs of a string and a partial description. Because of partiality, the appropriate construction cannot be given in terms of sets of string-description pairs, but rather in terms of the related domain construction of powerdomains [14,19,16]. We will use the Hoare powerdomain P = P_M(S × D) of the domain S × D of string-description pairs. Each element of P is an approximation of a transduction relation, which is an association between strings and their possible descriptions.

We can get a feeling for what the domain P is doing by examining our notion of lexicon. A lexicon will be an element of the domain P^k, associating with each of the k nonterminals N_i, 1 ≤ i ≤ k, a transduction relation from the corresponding coordinate of P^k. Thus, for each nonterminal, the lexicon tells us what phrases are under that nonterminal and what possible descriptions each such phrase has. Here is a sample lexicon:

    NP: {("Uther", {⟨agr num⟩ = sg, ⟨agr per⟩ = 3}),
         ("many knights", {⟨agr num⟩ = pl, ⟨agr per⟩ = 3})}
    VP: {("storms Cornwall", {⟨agr num⟩ = sg}),
         ("sit at the Round Table", {⟨agr num⟩ = pl})}
    S:  {}

By decomposing the effect of a rule into appropriate steps, we can associate with each rule r a denotation ⟦r⟧ : P^k → P^k that combines string-description pairs by concatenation and unification to build new string-description pairs for the nonterminal on the left-hand side of the rule, leaving all other nonterminals untouched. By taking the union of the denotations of the rules in a grammar (which is a well-defined and continuous powerdomain operation), we get a mapping

    T_G(e) = ∪_{r ∈ R} ⟦r⟧(e)

from P^k to P^k that represents a one-step application of all the rules of G "in parallel." We can now provide a denotation for the entire grammar as a mapping that completes a lexicon with all the derived phrases and their descriptions. The denotation of a grammar is the function that maps each lexicon e into the smallest fixed point of T_G containing e. The fixed point is defined by

    ⟦G⟧(e) = ⊔_{i≥0} T_G^i(e)

as T_G is continuous.

It remains to describe the decomposition of a rule's effect into elementary steps. The main technicality to keep in mind is that rules state constraints among several descriptions (associated with the parent and each child), whereas a set of equations in D constrains but a single description. This mismatch is solved by embedding the tuple (d0, ..., dm) of descriptions in a single larger description, as expressed by ⟨i⟩ = d_i, 0 ≤ i ≤ m, and only then applying the rule constraints - now viewed as constraining parts of a single description. This is done by the indexing and combination steps described below. The rest of the work of applying a rule, extracting the result, is done by the projection and deindexing steps. The four steps for applying a rule r = (N_r0 → N_r1 ... N_rm, E_r) to string-description pairs (s1, d1), ..., (sm, dm) are as follows. First, we index each d_j into d'_j by replacing every path p in any of its equations with the path j · p. We then combine these indexed descriptions with the rule by unifying the deductive closure of E_r with all the indexed descriptions:

    d = Ē_r ⊔ ⊔_{j=1}^{m} d'_j

We can now project d by removing from it all equations with paths that do not start with 0. It is clearly evident that the result d^0 is still deductively closed. Finally, d^0 is deindexed into d_r0 by removing 0 from the front of all paths 0 · p in its equations. The pair associated with N_r0 is then (s1 ⋯ sm, d_r0).

It is not difficult to show that the above operations can be lifted into operations over elements of P^k that leave untouched the coordinates not mentioned in the rule and that the lifted operations are continuous mappings. With a slight abuse of notation, we can summarize the foregoing discussion with the equation

    ⟦r⟧ = deindex ∘ project ∘ combine_r ∘ index_r

In the case of the sample lexicon and one-rule grammar presented earlier, ⟦G⟧(e) would be

    NP: {... as before ...}
    VP: {... as before ...}
    S:  {("Uther storms Cornwall", {⟨subj agr num⟩ = sg, ...}),
         ("many knights sit at the Round Table", {⟨subj agr num⟩ = pl, ...}),
         ("many knights storms Cornwall", ⊤)}
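The four steps can be emulated over the equation-set encoding used earlier. In the sketch below, closure performs only a partial deductive closure (prefix rewriting of path = constant equations across path = path links), which suffices for this acyclic example; a serious implementation would use union-find. Everything here is illustrative:

    TOP = "TOP"  # the lattice's top element: inconsistent description

    def closure(eqs, max_rounds=10):
        """Partial deductive closure: propagate path=constant equations
        across path=path links by prefix rewriting. Enough for the acyclic
        example below; a real implementation would use union-find."""
        eqs = set(eqs)
        links = {(l, r) for l, r in eqs
                 if isinstance(l, tuple) and isinstance(r, tuple)}
        links |= {(r, l) for l, r in links}
        for _ in range(max_rounds):
            new = set()
            for l, r in eqs:
                if isinstance(l, tuple) and not isinstance(r, tuple):
                    for a, b in links:
                        if l[:len(a)] == a:
                            new.add((b + l[len(a):], r))
            if new <= eqs:
                break
            eqs |= new
        values = {}
        for l, r in eqs:
            if isinstance(l, tuple) and not isinstance(r, tuple):
                if values.setdefault(l, r) != r:
                    return TOP  # two distinct constants on one path
        return eqs

    def index(d, i):
        """Step 1: prefix every path in description d with index i."""
        pre = lambda t: (i,) + t if isinstance(t, tuple) else t
        return {(pre(l), pre(r)) for l, r in d}

    def combine(rule_eqs, indexed):
        """Step 2: unify the rule's equations with the indexed children."""
        d = set(rule_eqs)
        for di in indexed:
            d |= di
        return closure(d)

    def project(d):
        """Step 3: keep only equations whose paths start with 0."""
        ok = lambda t: not isinstance(t, tuple) or t[0] == 0
        return {(l, r) for l, r in d if ok(l) and ok(r)}

    def deindex(d):
        """Step 4: strip the leading 0 from every path."""
        strip = lambda t: t[1:] if isinstance(t, tuple) else t
        return {(strip(l), strip(r)) for l, r in d}

    def apply_rule(rule_eqs, children):
        d = combine(rule_eqs, [index(c, i + 1) for i, c in enumerate(children)])
        return d if d == TOP else deindex(project(d))

    E_r = {((0, "subj"), (1,)), ((0, "predicate"), (2,)),
           ((1, "agr"), (2, "agr"))}
    uther = {(("agr", "num"), "sg"), (("agr", "per"), "3")}
    knights = {(("agr", "num"), "pl"), (("agr", "per"), "3")}
    storms = {(("agr", "num"), "sg")}

    print(sorted(apply_rule(E_r, [uther, storms])))
    # includes (('subj', 'agr', 'num'), 'sg'), etc.
    print(apply_rule(E_r, [knights, storms]))  # TOP, as in the text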
}) ("many knights sit at the Round Table", {(sub 1 agr hum) = pl .... }) ("many knights storms Cornwall", T) 6. Applications We have used the techniques discussed here to analyze the feature systems of GPSG [15], LFG [2] and PATR-II [17]. All of them turn out to be specializations of our do- main D of descriptions. Figure 1 provides a summary of two of the most critical formal properties of context-free-based grammar formalisms, the domains of their feature systems (full F~ finite elements of F, or elements of F based on nonrecursive domain equations) and whether the context- free skeletons of grammars are constrained to be off-line paraeable [13] thereby guaranteeing decidability. 127 DCG-II a PATR-II LFG GPSG b FEATURE SYSTEM full finite finite nonrec. CF SKELETON full full off-line full aDCGs based on Prolog-lI which allows cyclic terms. bHPSG, the current Hewlett-Packard implementation derived from GPSG, would come more accurately under the PATR-II classification. Figure 1: Summary of Grammar System Properties Though notational differences and some grammatical devices are glossed over here, the comparison is useful as a first step in unifying the various formalisms under one semantic umbrella. Furthermore, this analysis elicits the need to distinguish carefully between the domain of fea- ture structures F and that of descriptions. This distinction is not clear in the published accounts of GPSG and LFG, which imprecision is responsible for a number of uncertain- ties in the interpretation of operators and conventions in those formalisms. In addition to formal insights, linguistic insights have also been gleaned from this work. First of all, we note 'that while the systems make crucial use of unification, gen- eralization is also a well-defined notion therein and might indeed be quite useful. In fact, it was this availability of the generalization operation that suggested a simplified account of coordination facts in English now being used in GPSG [15] and in an extension of PATR-II [8]. Though the issues of coordination and agreement are discussed in greater de- tail in these two works, we present here a simplified view of the use of generalization in a GPSG coordination analysis. Circa 1982 GPSG [6] analyzed coordination by using a special principle, the conjunct realization principle (CRP), to achieve partial instantiation of head features {including agreement} on the parent category. This principle, together with the head feature convention (HFC) and control agree- ment principle {CAP), guaranteed agreement between the head noun of a subject and the head verb of a predicate in English sentences. The HFC, in particular, can be stated in our notation as (0 head) = (n head) for n the head of 0. A more recent analysis [4,15] replaced the conjunct re- alization principle with a modified head feature conven- tion that required a head to be more instantiated than the parent, that is: (0 head) E (n head) for all constituents n which are heads of 0. Making coordinates heads of their parent achieved the effect of the CRP. Unfortunately, since the HFC no longer forced identity of agreement, a new principle--the nominal completeness principle (NCP), which required that NP's be fully instantiated--was re- quired to guarantee that the appropriate agreements were maintained. Making use of the order structure of the domains we have just built, we can achieve straightforwardly the effect of the CRP and the old HFC without any notion of the NCP. 
7. Conclusion

We have approached the problem of analyzing the meaning of grammar formalisms by applying the techniques of denotational semantics taken from work on the semantics of computer languages. This has enabled us to

• account rigorously for intrinsically partial descriptions,
• derive directly notions of unification, instantiation and generalization,
• relate feature systems in linguistics with type systems in computer science,
• show that feature systems in GPSG, LFG and PATR-II are special cases of a single construction,
• give semantics to a variety of mechanisms in grammar formalisms, and
• introduce operations for modeling linguistic phenomena that have not previously been considered.

We plan to develop the approach further to give accounts of negative and disjunctive constraints [8], besides the simple equational constraints discussed here.

On the basis of these insights alone, it should be clear that the view of grammar formalisms as programming languages offers considerable potential for investigation. But, even more importantly, the linguistic discipline enforced by a rigorous approach to the design and analysis of grammar formalisms may make possible a hitherto unachievable standard of research in this area.

References

[1] Ait-Kaci, H. "A New Model of Computation Based on a Calculus of Type Subsumption." Dept. of Computer and Information Science, University of Pennsylvania (November 1983).

[2] Bresnan, J. and R. Kaplan. "Lexical-Functional Grammar: A Formal System for Grammatical Representation." In J. Bresnan, Ed., The Mental Representation of Grammatical Relations, MIT Press, Cambridge, Massachusetts (1982), pp. 173-281.

[3] Colmerauer, A. "Metamorphosis Grammars." In L. Bolc, Ed., Natural Language Communication with Computers, Springer-Verlag, Berlin (1978). First appeared as "Les Grammaires de Métamorphose," Groupe d'Intelligence Artificielle, Université de Marseille II (November 1975).

[4] Farkas, D., D.P. Flickinger, G. Gazdar, W.A. Ladusaw, A. Ojeda, J. Pinkham, G.K. Pullum, and P. Sells. "Some Revisions to the Theory of Features and Feature Instantiation." Unpublished manuscript (August 1983).

[5] Gazdar, Gerald and G. Pullum. "Generalized Phrase Structure Grammar: A Theoretical Synopsis." Indiana University Linguistics Club, Bloomington, Indiana (1982).

[6] Gazdar, G., E. Klein, G.K. Pullum, and I.A. Sag. "Coordinate Structure and Unbounded Dependencies." In M. Barlow, D. P. Flickinger and I. A. Sag, eds., Developments in Generalized Phrase Structure Grammar. Indiana University Linguistics Club, Bloomington, Indiana (1982).

[7] Harrison, M. Introduction to Formal Language Theory. Addison-Wesley, Reading, Massachusetts (1978).

[8] Karttunen, Lauri. "Features and Values."
Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California (4-7 July, 1984).

[9] Kay, M. "Functional Grammar." Proceedings of the Fifth Annual Meeting of the Berkeley Linguistic Society, Berkeley Linguistic Society, Berkeley, California (February 17-19, 1979), pp. 142-158.

[10] Marcus, M., D. Hindle and M. Fleck. "D-Theory: Talking about Talking about Trees." Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Boston, Massachusetts (15-17 June, 1983).

[11] Pereira, F. "Extraposition Grammars." American Journal of Computational Linguistics 7, 4 (October-December 1981), 243-256.

[12] Pereira, F. and D. H. D. Warren. "Definite Clause Grammars for Language Analysis - a Survey of the Formalism and a Comparison with Augmented Transition Networks." Artificial Intelligence 13 (1980), 231-278.

[13] Pereira, F. C. N., and David H. D. Warren. "Parsing as Deduction." Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, Boston, Massachusetts (15-17 June, 1983), pp. 137-144.

[14] Plotkin, G. "A Powerdomain Construction." SIAM Journal of Computing 5 (1976), 452-487.

[15] Sag, I., G. Gazdar, T. Wasow and S. Weisler. "Coordination and How to Distinguish Categories." Report No. CSLI-84-3, Center for the Study of Language and Information, Stanford University, Stanford, California (June, 1984).

[16] Scott, D. "Domains for Denotational Semantics." In ICALP 82, Springer-Verlag, Heidelberg (1982).

[17] Shieber, Stuart. "The Design of a Computer Language for Linguistic Information." Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California (4-7 July, 1984).

[18] Shieber, S., H. Uszkoreit, F. Pereira, J. Robinson and M. Tyson. "The Formalism and Implementation of PATR-II." In Research on Interactive Acquisition and Use of Knowledge, SRI Final Report 1894. SRI International, Menlo Park, California (1983).

[19] Smyth, M. "Power Domains." Journal of Computer and System Sciences 16 (1978), 23-36.

[20] Stoy, J. Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory. MIT Press, Cambridge, Massachusetts (1977).

[21] van Emden, M. and R. A. Kowalski. "The Semantics of Predicate Logic as a Programming Language." Journal of the ACM 23, 4 (October 1976), 733-742.

[22] Woods, W. et al. "Speech Understanding Systems: Final Report." BBN Report 3438, Bolt Beranek and Newman, Cambridge, Massachusetts (1976).
1984
27
THE RESOLUTION OF QUANTIFICATIONAL AMBIGUITY IN THE TENDUM SYSTEM

Harry Bunt
Computational Linguistics Research Unit
Dept. of Language and Literature, Tilburg University
P.O. Box 90153, 5000 LE Tilburg
The Netherlands

ABSTRACT

A method is described for handling the ambiguity and vagueness that is often found in quantifications - the semantically complex relations between nominal and verbal constituents. In natural language certain aspects of quantification are often left open; it is argued that the analysis of quantification in a model-theoretic framework should use semantic representations in which this may also be done. This paper shows a form for such a representation and how "ambiguous" representations are used in an elegant and efficient procedure for semantic analysis, incorporated in the TENDUM dialogue system.

The quantification ambiguity explosion problem

Quantification is a complex phenomenon that occurs whenever a nominal and a verbal constituent are combined in such a way that the denotation of the verbal constituent is predicated of arguments supplied by the (denotation of the) nominal constituent. This gives rise to a number of questions such as

(1) What objects serve as predicate arguments?
(2) Of how many objects is the predicate true?
(3) How many objects are considered as potential arguments of the predicate?

When we consider these questions for a sentence with a few noun phrases, we readily see that the sentence has a multitude of possible interpretations. Even a sentence with only one NP such as

(1) Five boats were lifted

has a variety of possible readings, depending on whether the boats were lifted individually, collectively, or in groups of five, and on whether the total number of boats involved is exactly five or at least five. For a sentence with two numerically quantified NPs, such as 'Three Russians visited five Frenchmen', Partee (1975) distinguished 8 readings depending on whether the Russians and the Frenchmen visited each other individually or collectively and on the relative scopes of the quantifiers. Partee's analysis is in fact still rather crude; a somewhat more refined analysis, which distinguishes group readings and readings with equally wide scope of the quantifiers, leads to 30 interpretations (Bunt, in press).

This presents a problem for any attempt at a precise and systematic description of semantic structures in natural language. On the one hand an articulate analysis of quantification is needed for obtaining the desired interpretations of every sentence, while on the other hand we do not want to end up with dozens of interpretations for every sentence. To some extent this "ambiguity explosion problem" is an artefact of the usual method of formal semantic analysis. In this method sentences are translated into formulae of a logical language, the truth conditions of which are determined by model-theoretic interpretation rules. Now one might want to consider a sentence like (1) not as ambiguous, but only as saying that five boats were lifted, without specifying how they were lifted. But translation of the sentence into a logical representation forces one to be specific. That is, the logical representation language requires distinction between such interpretations as represented by (2) (individual reading) and (3) (group reading):

(2) #({x ∈ BOATS: LIFTED(x)}) = 5
(3) ∃x ∈ {y ⊆ BOATS: #(y) = 5}: LIFTED(x)

In other words, the analysis framework forces us to make distinctions which we might not always want to make.
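The difference between (2) and (3) can be made concrete by evaluating both readings against a small finite model. The following sketch is mine, for illustration only: it invents a model in which five boats were lifted one by one and no group lifting occurred, and shows the two readings coming apart:

    from itertools import combinations

    BOATS = {"b1", "b2", "b3", "b4", "b5", "b6"}
    # Invented toy model: five individual liftings, no group liftings.
    lifted_individuals = {"b1", "b2", "b3", "b4", "b5"}
    lifted_groups = set()          # frozensets of boats lifted in one go

    def reading_2():
        """(2): #({x in BOATS: LIFTED(x)}) = 5 - the individual reading."""
        return len({x for x in BOATS if x in lifted_individuals}) == 5

    def reading_3():
        """(3): some 5-element subset of BOATS was lifted - the group reading."""
        return any(frozenset(g) in lifted_groups
                   for g in combinations(sorted(BOATS), 5))

    print(reading_2(), reading_3())   # True False: the readings come apart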
To tackle this problem, I have devised a method of representing quantified expressions in a logical language with the possibility of leaving certain quantification aspects open. This method has been implemented in the TENDUM dialogue system, developed jointly at the Institute for Perception Research in Eindhoven and the Computational Linguistics Research Unit at Tilburg University, Department of Linguistics (Bunt, 1982; 1983; Bunt & thoe Schwartzenberg, 1982). This method is not only of theoretical interest, but also provides a computationally efficient treatment of quantification.

Ambiguity resolution

In a semantic analysis system which translates natural language expressions into formal representations, all disambiguation takes place during this translation. This applies both to purely lexical ambiguities and to structural ambiguities. For lexical disambiguation this means that a lexical item has several translations in the representation language (RL), which are all produced by a dictionary lookup at the beginning of the analysis. The generation of semantic representations for sentences that display both lexical and structural ambiguity thus takes place as depicted in Fig. 1:

[Fig. 1: diagram showing NL expressions mapped by dictionary lookup to several RL translations, to which the grammar rules are then applied, followed by interpretation in the model. Longer arrows indicate a larger amount of processing.]

Since the lexical ambiguities considered here are purely semantic, the same grammar rules will be applicable to all the lexical interpretations (assuming that the grammar does not contain world knowledge to filter out those interpretations that are meaningless in the discourse domain under consideration). Since the amount of processing involved in the application of grammar rules is very large compared to that of translating a lexical item to its RL instances, this set-up is not very efficient. In the PHLIQA1 question-answering system (Bronnenberg et al., 1980) the syntactic/semantic and lexical processing stages were therefore reversed, so that disambiguation takes place as depicted in Fig. 2:

[Fig. 2: diagram showing the grammar rules applied first, followed by dictionary lookup and interpretation in the model. Longer arrows indicate a larger amount of processing.]

In this setup an intermediate representation language is used which is identical to RL except that it has an ambiguous constant for every content word of the natural language. It turns out that semantic analysis along these lines can be formulated entirely in terms of the traditional model-theoretic framework (Bunt, in press); therefore this method is appropriately called two-level model-theoretic semantics. This method has been implemented in the TENDUM system, with an intermediate representation language that contains ambiguous constants corresponding to quantification aspects, in addition to ambiguous constants corresponding to nouns, verbs, etc.
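The architecture of Fig. 2 can be caricatured in a few lines: the expensive grammar rules apply once, over ambiguous constants, and instantiation multiplies out readings only afterwards. The names and the mock parser below are invented, and type filtering against the discourse domain is omitted:

    instances = {                     # ambiguous constant -> RL instances
        "BANK": ["bank_institution", "bank_riverside"],
        "OPEN": ["open_branch", "open_door"],
    }

    def parse(words):
        """Stand-in for the (expensive) application of grammar rules:
        runs once, over ambiguous constants."""
        return tuple(words)           # a mock 'analysis'

    def instantiate(analysis):
        """Expand ambiguous constants into all RL readings afterwards
        (type filtering against the discourse domain omitted here)."""
        readings = [[]]
        for c in analysis:
            readings = [r + [i] for r in readings
                        for i in instances.get(c, [c])]
        return readings

    analysis = parse(["BANK", "OPEN"])    # grammar applied once, not per reading
    print(instantiate(analysis))          # four candidate RL readings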
Quantification aspects

The different aspects of quantification are closely related to the semantic functions of determiners. These functions depend on their syntactic position in a determiner sequence. A full-fledged basic noun phrase has the layout:

(4) predeterminer + central determiner + postdeterminer + head noun

(see Quirk et al., 1972, p. 146). For example, in the NP

(5) All my four children

the central determiner 'my' restricts the range of reference of the head noun 'children' to the set of my children; the predeterminer 'all' indicates that a predicate, combined with the noun phrase to form a proposition, is associated with all the members of that set, and the postdeterminer 'four' expresses the presupposition that the set consists of four elements. This set is determined by the central determiner plus the denotation of the head noun; I will call it the source of the quantification. In the case of an NP without central determiner the source is the denotation of the head noun. For the indication of the quantity or fraction of that part of the source that is involved in a predication I will use the term source involvement. Quantification owes its name to the fact that source involvement is often made explicit by means of quantitative (pre-)determiners like 'five', 'many', 'all', or 'two liters of'. Obviously, source involvement is a central aspect of quantification.

Another important aspect of quantification is illustrated by the following sentences:

(6a) The chairs were lifted by all the boys
(6b) The chairs were lifted by each of the boys

These sentences differ in that (6b) says unambiguously that every one of the boys lifted the chairs, whereas (6a) is unspecific as to what each individual boy did: it only says that the chairs were lifted and that all the boys were involved in the lifting, but it does not specify, for instance, whether every one of the boys lifted the chairs or all the boys together lifted the chairs. The quantifiers 'all' and 'each (of)' thus both indicate complete involvement of the source, but differ in their determination of how a predicate ('lifted the chairs') is applied to the source. 'Each' indicates that the predicate is applied to the individual members of the source; 'all' leaves open whether the predicate is applied to individual members, to groups of members, or to the source as a whole. To designate the way in which a predicate is applied to, or "distributed over", the source of a quantification, I use the term distribution.

A way of expressing the distribution of a quantification is by specifying the class of objects that the predicate is applied to, and how this class is related to the source. In the distributive case this class is precisely the source; in the collective case it is the set having the source as its only element. I will refer to the class of objects that the predicate is applied to as the domain of the quantification. The distribution of a quantification over an NP denotation can be viewed as specifying how the domain can be computed from the source. Where domain = source I will speak of individual distribution, where domain = {source} of collective distribution.

Individual and collective are not the only possible distributions. Consider the sentence

(7) All these machines assemble 12 parts.

This sentence may describe a situation in which certain machines assemble sets of twelve parts, i.e. a relation between individual machines and groups of twelve parts. If PARTS is the set denoted by 'parts', the direct object quantification domain is ℘12(PARTS), the subset of ℘(PARTS) containing only those subsets of PARTS that have twelve members. I call this type of distribution group distribution. In this case the numerical quantifier indicates group size.
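Each distribution then corresponds to a function from a source (plus, for group distribution, a group size) to a domain. The following direct transcription is a sketch of mine, with frozensets encoding groups (an assumption of the sketch, not of the paper):

    from itertools import combinations

    def subsets(s, size=None):
        sizes = [size] if size is not None else range(1, len(s) + 1)
        return {frozenset(c) for n in sizes for c in combinations(s, n)}

    def individual(source):          # domain = source
        return set(source)

    def collective(source):          # domain = {source}
        return {frozenset(source)}

    def group(source, k):            # domain = the k-element subsets
        return subsets(source, k)

    def unspecific(source):          # the elements plus all plural subsets
        return set(source) | {g for g in subsets(source) if len(g) > 1}

    PARTS = {"p%d" % i for i in range(1, 14)}
    print(len(group(PARTS, 12)))     # C(13,12) = 13 candidate groups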
A slightly different form of "group quantification" is found in the sentence

(8) Twelve men conspired.

In view of the collective nature of conspiring, it would seem that 'twelve' should again be interpreted as indicating group size, so that the sentence may be represented by

(9) ∃x ∈ ℘12(MEN): CONSPIRE(x)

However, as the existential quantifier brings out clearly, this interpretation would leave open the possibility that several groups of 12 men conspired, which is probably not what was intended. The more plausible interpretation, where exactly one group of 12 men conspired, I will call the strong group reading of the sentence, and the other one the weak group reading. On the strong group reading the quantifier 'twelve' has a double function: it indicates both source involvement and group size.

In a sentence like

(10) The crane lifted the tubes

there is no indication as to whether the tubes were lifted one by one (individual distribution), two by two (weak group distribution with group size 2), one-or-two by one-or-two (weak group distribution with group size 1-2), ..., or all in one go (collective distribution). The quantification is unspecific in this respect. In such a case I will say that the distribution is unspecific. If S is the source of the quantification, the domain is in this case the set consisting of the elements of S and the plural subsets of S.

Distribution and source involvement are the two central aspects of quantification that I will focus on here.

Quantification in two-level model-theoretic semantics

Consider a non-intensional verb, denoting a one-place predicate P (a function from individuals to truth values), which is combined with a noun phrase with associated source S (a set of individuals). The quantification then predicates the source involvement of the set of those elements of the quantification domain, defined by S and the distribution, for which P is true. This can be represented by a formula of the following form:

(11) S-INVOLVEMENT({x ∈ QUANT.DOMAIN: P(x)})

For example, consider the representation of the readings of sentence (1) 'Five boats were lifted', with individual, collective, and weak and strong group distribution:

(12a) (λz: #(z)=5)({x ∈ BOATS: LIFTED(x)})
(12b) (λz: #(z)≥1)({x ∈ ℘5(BOATS): LIFTED(x)})
(12c) (λz: #(z)=1)({x ∈ ℘5(BOATS): LIFTED(x)})
(12d) (λz: #(z)=5)(U_BOATS({x ∈ BOATS ∪ ℘+(BOATS): LIFTED(x)}))

where ℘+(S) denotes the set of plural subsets of S. The notation U_S(D) is used to represent the set of those members of S occurring in D; the precise definition is:

(13) U_S(D) = {x ∈ S: x ∈ D ∨ (∃y ∈ D: x ∈ y)}

Note that in all cases the quantification domain is closely related to the source in a way determined by the distribution. I have claimed above that the distribution can be construed as a function that computes the quantification domain, given the source. Indeed, this can be accomplished by means of a function of two arguments, one being the source and the other the group size, in the case of a group distribution. A little bit of formula manipulation readily shows that all the formulas (12a-d) can be cast in the form

(14) (λz: N(U_S(z)))({x ∈ d(k,S): P(x)})

where S represents the quantification source, (λz: N(U_S(z))) the source involvement, k the group size, and d the "distribution function" computing the quantification domain. (For technical details of this representation see Bunt, in press.)
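Schema (14) is directly executable once a distribution function is supplied. The sketch below (toy model and frozenset encoding as before; only the individual distribution is defined inline) implements U_S as in (13) and checks reading (12a):

    def individual(source):          # individual distribution: domain = source
        return set(source)

    def U(S, D):
        """(13): the members of S occurring in D, on their own or in a group."""
        return {x for x in S
                if x in D or any(x in y for y in D if isinstance(y, frozenset))}

    def quantify(N, k, d, S, P):
        """(14): (lambda z: N(U_S(z))) ({x in d(k,S): P(x)})."""
        domain = d(S, k) if k is not None else d(S)
        return N(U(S, {x for x in domain if P(x)}))

    BOATS = {"b1", "b2", "b3", "b4", "b5", "b6"}
    lifted = {"b1", "b2", "b3", "b4", "b5"}          # invented toy model
    P = lambda x: x <= lifted if isinstance(x, frozenset) else x in lifted

    # The individual reading (12a): exactly five boats were lifted.
    print(quantify(lambda z: len(z) == 5, None, individual, BOATS, P))   # True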
The most interesting point to note about this representation is that the distribution of the quantification, which in other treatments is always reflected in the syntactic structure of the representation, corresponds to a term of the representation language here. For this term we substitute expressions like (λk,S: ℘k(S)) to obtain a particular interpretation. I will now indicate how representations of the form (14) are constructed in the TENDUM system.

The construction of quantification representations in the TENDUM system

The TENDUM system uses a grammar consisting of phrase-structure rules augmented with semantic rules that construct a representation of a rewritten phrase from those of its constituents (see Bunt, 1983). For the sentence 'Five boats were lifted' this works as follows. The number 'five' is represented in the lexicon as an item of syntactic category 'number' with representation '5'. To this item, a rule applies that constructs a syntactic structure of category 'numeral' with representation (λy: #(y)=5), which I abbreviate as FIVE. To this structure a rule applies that constructs a syntactic structure of category 'determiner' with representation

(15) (λX: (λP: FIVE(U_X({x ∈ d(FIVE,X): P(x)}))))

A rule constructing a syntactic structure of category 'noun phrase' from a determiner and a nominal (in the simplest case: a noun) applies to 'five' and 'boats', combining their representations by applying (15) as a function to the noun representation BOATS. After λ-conversion, this results in

(16) (λP: FIVE(U_BOATS({x ∈ d(FIVE, BOATS): P(x)})))

A rule constructing a sentence from a noun phrase and a verb applies to 'five boats' and 'were lifted', combining their representations by applying (16) as a function to the verb representation LIFTED. After λ-conversion, this results in (17):

(17) FIVE(U_BOATS({x ∈ d(FIVE, BOATS): LIFTED(x)}))

Now suppose the sentence is interpreted relative to a domain of discourse where we have such boats and lifting facilities that it is impossible for more than one boat to be lifted at the same time. This is reflected in the fact that the RL predicate LIFTED_r is of such a type that it can only apply to individual boats. Assuming that the ambiguous constant BOATS has the single instance BOATS_r and that LIFTED has the single instance (λz: LIFTED_r(z)), the instantiation rules, constrained by the type restrictions of RL, will produce the representation:

(18) FIVE(U_BOATS_r({x ∈ BOATS_r: LIFTED_r(x)}))

(For the instantiation process see Bunt, in press, chapter 7.) This is readily seen to be equivalent to the more familiar form:

(19) #({x ∈ BOATS_r: LIFTED_r(x)}) = 5

If, in addition to, or instead of, the distributive reading we want to generate another reading of the sentence, then we extend or modify the instantiation function for LIFTED accordingly. This shows how the analysis method generates the representations of only those interpretations which are relevant in a given domain of discourse, and does so without generating intermediate representations as artefacts of the use of a logical representation language.
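The effect of type-constrained instantiation can be suggested by a toy filter that keeps only those candidate instances of the distribution term whose argument type matches the verb instance. The type tags below are invented for illustration and do not reflect TENDUM's actual type system:

    # Candidate instances for the ambiguous distribution term d, tagged
    # (hypothetically) with the type of object the predicate must accept.
    candidates = {
        "individual": "boat",            # d(FIVE, BOATS) = BOATS
        "collective": "set-of-boats",
        "group":      "set-of-boats",
    }

    # In the discourse domain at hand, LIFTED_r applies to individual boats.
    LIFTED_r_argtype = "boat"

    instantiations = [name for name, argtype in candidates.items()
                      if argtype == LIFTED_r_argtype]
    print(instantiations)   # ['individual']: only reading (18) survives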
References

Bronnenberg, W.J., Bunt, H.C., Landsbergen, S.P.J., Scha, R.J.H., Schoenmakers, W.J., van Utteren, E.P.C. (1979) The question answering system PHLIQA1. In L. Bolc (ed.), Natural communication with computers, Macmillan, London; Hanser Verlag, München.

Bunt, H.C. (1982) The IPO Dialogue Project. SIGART Newsletter 80.

Bunt, H.C. (1983) A grammar formalism with augmented phrase-construction rules. IPO Annual Progress Report 18.

Bunt, H.C. (in press) Mass terms and model-theoretic semantics. Cambridge University Press.

Bunt, H.C. and thoe Schwartzenberg, G.O. (1982) Syntactic, semantic and pragmatic parsing for a natural language dialogue system. IPO Annual Progress Report 17.

Partee, B. (1975) Comments on C.J. Fillmore's and N. Chomsky's papers. In: D. Austerlitz (ed.) The scope of American linguistics. De Ridder Press, Lisse.

Quirk, R., Greenbaum, S., Leech, G., and Svartvik, J. (1972) A grammar of contemporary English. Longman, London.
1984
28
Preventing False Inferences1

Aravind Joshi and Bonnie Webber
Department of Computer and Information Science
Moore School/D2
University of Pennsylvania
Philadelphia PA 19104

Ralph M. Weischedel2
Department of Computer & Information Sciences
University of Delaware
Newark DE 19716

ABSTRACT

I Introduction

In cooperative man-machine interaction, it is taken as necessary that a system truthfully and informatively respond to a user's question. It is not, however, sufficient. In particular, if the system has reason to believe that its planned response might lead the user to draw an inference that it knows to be false, then it must block it by modifying or adding to its response. The problem is that a system neither can nor should explore all conclusions a user might possibly draw: its reasoning must be constrained in some systematic and well-motivated way.

Such cooperative behavior was investigated in [5], in which a modification of Grice's Maxim of Quality is proposed:

Grice's Maxim of Quality - Do not say what you believe to be false or for which you lack adequate evidence.

Joshi's Revised Maxim of Quality - If you, the speaker, plan to say anything which may imply for the hearer something that you believe to be false, then provide further information to block it.

This behavior was studied in the context of interpreting certain definite noun phrases. In this paper, we investigate this revised principle as applied to question answering. In particular the goals of the research described here are to:

1. characterize tractable cases in which the system as respondent (R) can anticipate the possibility of the user/questioner (Q) drawing false conclusions from its response and can hence alter or expand its response so as to prevent it happening;

2. develop a formal method for computing the projected inferences that Q may draw from a particular response, identifying those factors whose presence or absence catalyzes the inferences;

3. enable the system to generate modifications of its response that can defuse possible false inferences and that may provide additional useful information as well.

Before we begin, it is important to see how this work differs from our related work on responding when the system notices a discrepancy between its beliefs and those of its user [7, 8, 9, 18]. For example, if a user asks "How many French students failed CSE121 last term?", he shows that he believes inter alia that the set of French students is non-empty, that there is a course CSE121, and that it was given last term. If the system simply answers "None", he will assume the system concurs with these beliefs since the answer is consistent with them. Furthermore, he may conclude that French students do rather well in a difficult course. But this may be a false conclusion if the system doesn't hold to all of those beliefs (e.g., it doesn't know of any French students). Thus while the system's assertion "No French students failed CSE121 last term" is true, it has misled the user (1) into believing it concurs with the user's beliefs and (2) into drawing additional false conclusions from its response.3

The differences between this related work and the current enterprise are that:

1This work is partially supported by NSF Grants MCS 81-07290, MCS 83-05221, and IST 83-11100.

2At present visiting the Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104.

3It is a feature of Kaplan's CO-OP system [7] that it points out the discrepancy by saying "I don't know of any French students".
1. It is not assumed in the current enterprise that there is any overt indication that the domain beliefs of the user are in any way at odds with those of the system.

2. In our related work, the user draws a false conclusion from what is said because the presuppositions of the response are not in accord with the system's beliefs (following a nice analysis in [10]). In the current enterprise, the user draws a false conclusion from what is said because the system's response behavior is not in accord with the user's expectations. It may or may not also involve false domain beliefs that the system attributes to the user.

In this paper, we describe two kinds of false conclusions we are attempting to block by modifying an otherwise true response:

• false conclusions drawn by standard default reasoning - i.e., by the user/listener concluding (incorrectly) that there is nothing special about this case

• false conclusions drawn in a task-oriented context on the basis of the user's expectations about the way a cooperative expert will respond.

In Section II, we discuss examples of the first type, where the respondent (R) can reason that the questioner (Q) may inappropriately apply a default rule to the (true) information conveyed in R's response and hence draw a false conclusion. We characterize appropriate information for R to include in his response to block it. In Section III, we describe examples of the second type. Finally, in Section IV, we discuss our claim regarding the primary constraint posed here on limiting R's responsibilities with respect to anticipating false conclusions that Q may draw from its response: that is, it is only that part of R's knowledge base that is already in focus (given the interaction up to that point, including R's formulating a direct answer to Q's query) that will be involved in anticipating the conclusions that Q may draw from R's response.

II Blocking Potential Misapplication of Default Rules

Default reasoning is usually studied in the context of a logical system in its own right or an agent who reasons about the world from partial information and hence may draw conclusions unsupported by traditional logic. However, one can also look at it in the context of interacting agents. An agent's reasoning depends not only on his perceptions of the world but also on the information he receives in interacting with other agents. This information is partial, in that another agent neither will nor can make everything explicit. Knowing this, the first agent (Q) will seek to derive information implicit in the interaction, in part by contrasting what the other agent (R) has made explicit with what Q assumes would have been made explicit, were something else the case. Because of this, R must be careful to forestall inappropriate derivations that Q might draw.

The question is on what basis R should reason that Q may assume some piece of information (P) would have been made explicit in the interaction, were it the case. One basis, we contend, is the likelihood that Q will apply some standard default rule of the type discussed by Reiter [15] if R doesn't make it explicit that the rule is not applicable. Reiter introduced the idea of default rules in the stand-alone context of an agent or logical system filling in its own partial information.
Most standard default rules embody the sense that "given no reason to suspect otherwise, there's nothing special about the current case". For example, for a bird what would be special is that it can't fly - i.e., "Most birds fly". Knowing only that Tweety is a bird and no reason to suspect otherwise, an agent may conclude by default that there's nothing special about Tweety and so he can fly. This kind of default reasoning can lead to false conclusions in a stand-alone situation, but also in an interaction. That is, in a question-answer interaction, if the respondent (R) has reason for knowing or suspecting that the situation goes counter to the standard default, it seems to be common practice to convey this information to the questioner (Q), to block his potentially assuming the default. To see this, consider the following two examples. (The first is very much like the "Tweety" case above, while the second seems more general.)

A. Example 1

Suppose it's the case that most associate professors are tenured and most of them have Ph.Ds. Consider the following interchange:

Q: Is Sam an associate professor?
R: Yes, but he doesn't have tenure.

There are two things to account for here: (1) Given the information was not requested, why did R include the "but" clause, and (2) why this clause and not another one? We claim that the answer to the second question has to do with that part of R's knowledge base that is currently in focus. This we discuss more in Section IV. In the meantime, we will just refer to this subset as "RBc". Assume RBc contains at least the following information:

(a) Sam is an associate professor.
(b) Most associate professors are tenured.
(c) Sam is not tenured.

(b) may be in RBc because the question of tenure may be in context. Based on RBc, R's direct response is clearly "Yes". This direct response however could lead Q to conclude falsely, by default reasoning, that Sam is tenured. That is, R can reason that, given just (b) and his planned response "Yes" (i.e., if (c) is not in Q's knowledge base), Q could infer by default reasoning that Sam is tenured, which R knows with respect to RBc is false. Hence, R will modify that planned response to block this false inference, as in the response above.

In general, we can represent R's reasoning about Q's reaction to a simple direct response "Yes, B(c)", given Q believes "Most Bs F", in terms of the following default schema, using the notation introduced in [15]:
R~ Since you h:tve read all your specified messages, you can just type a carriage return. In all cases, you (':ill got ()lit by typing QHT. Here tile prohh,m is to account for all that part of R's response beyond the simple truthful statement "You can type a carriage return." A general statement of this probh,m is a.s follows: Agent Q is in one situation (Sl) and wants to be in another ($2). There is a general procedure P for achieving $2 from any of several situations including Sl. There is a special prodecure P* (i.e., shorter, faster, simpler, etc.) for achieving $2 frolu Sl. Q doesn't know how to achieve $2, but R does (including proced,res P and P*). Q asks R how to achieve $2. If R knows.i~lat Q is in situation SI and truthfully responds to Q's request by simply telling him P*, Q may falsely conclude that P* is a general procedure for achieving $2. That is, as in the Tweety and Sam examples, if Q has no reason to suspect anything special about SI (such that P* only applies to it), then there is nothing special about it. Therefore P* is adequate for achieving $2, whatever situation Q is in. 4 Later when Q tries to apply P* in a different situation to achieve $2, he may find that it doesn't work. As a particular examl)le of this, consider the mail case again. In this ca.se~ SI = Q has read all his messages $2 = Q is out of the mail system P ~--- typing QUIT P* -- typing a carriage return ~Lssume RBc contains at least the following informa.tion: (a) Sl (b) want(Q,S2) (c) ¥s6S. P(s) = S2 (d) P*(Sl) = s2 (e) Sl6r (f) simpler(P*,P) (g) VsE,~. "-{s = SI) =* -~(P*ls) = $21 where 17 is some set of states which includes SI and P(s) indicates action P applied to state S. Based on RBc, R's direct response would be "You can exit the mail system by typing carriage return'. (It is &ssumed that an expert will always respond with the "best" procedure according to some metric, unle..~ he explicitly indicates otherwise - of. Section lIl, case 2}. However, this could lead Q to conclude falsely,-by default, something along tile lines of Vs . P*(s) ---- $2. 5 Thus R will modify his planned response to call attention to SI {in particular, how to recognize it) and the limited applicability of P* to SI alone. The other modification to R's response ('In all cages, you can get out by typing QUIT'), we would ascribe simply to R's adhering to Grice's Alaxim of Quantity - "Make your contribution ,~s informative as is required for tile current purposes of tile exchange" given R's assumption of what is required of him in his role as expert/teacher. HI Blocking False Conclusions in Expert Interactions Tile situations we are concerned with here are ones in which the system is explicitly tasked with providing help and expertise to the user. In such circumstances, the user has a strong expectation that the system has both the experience and motivation to provide the most appropriate help towards achieving the user's goals. The user does not expect behavior like: Q: How can I get to Camden? R: You can't. As many studies have shown Ill, what an advice seeker (Q) expects is that an expert (R) will attempt to recognize what plan Q is attempting to follow in pursuit of what goal and respond to Q's question accordingly. Further studies [11, 12, 13] show that Q may also expect that R will respond in terms of a better plan if the recognized one is either sub-optimal or unsuitable for attaining Q's perceived goal. 
Thus because of this principle of "expert cooperative behavior', Q may expect a response to a more general question than the one he has actually asked. That is, in asking an expert • flow do 1 do X?" or "Can I do X?', Q is anticipating a response to "How can I achieve my goal?" 4Moreover if Q (falsely) believes that R doesn't know Q is in SI, Q will certainly assume that P* is a general procedure. However, this isn't necessary to the default reasoning behavior we are investigating. 5Clearly , this is only for some subset of states, ones corresponding to being in the mail system. 136 Con',id,.r a slud,.ut ((,~) :+skhig th,' foll,+,+i.g que+thm, near the end of the term. Q'. Can I dr~q, C1~,-,77? Since it is already too late to drop a course, ti~e o~.!y dire,'t answer the ,x~*~rt (R) can give is "No'. Of course, part of :,:, expert's knowledge concerns the typical states users get into and the possible actions that permit transitions between them. Moreover it is al~o part of this expertise to infer such states from the current state of the inlrerac(.ion, Q's query, some shared knowledge of Q's goals and Pxpectali,ns and the shared assmnption that an expert is expected to attend to these higher goals. How the system should go about in"erring these states is a difficult task that others are exami,iug [2, 12, 13]. We assume that such an inference has been made. We al,~o assume for simplicity that the states are uniquely det.ermined. For example, we assume that the system has inferred that Q i.,: in state Sb (student is doing badly in the course} and wants to be in a state Sg {student is in a position to do better in this course or another one later), and that the a~tion a (diopping the course) will take him f:om Sb to Sg. Given this, the response in (2) may lead Q to draw some conclusiuns that I/. knows to be false. For example, R can reason that since a principle of cooperative behavior for an expert is to tell Q the best way to go from Sb to Sg, Q is likely to conclude from R's response that there is no way to go from Sb to Sg. This con+:lusion however would be false if R knows some other ways of going from Sb to Sg. To avoid potenlially misleading Q, R must provide additional information, such as R: No, bul you can take an incomplete and ask for more time to finish the work. As we noted earlier, an important question is how much reasoning R should do to block fals~ conclusions on Q's part. Again. we assume that R should only concern itself with those false conclusions that Q is likely to draw that involve that part of R's knowledge base currently in focus (RBc}, including of course that subset it nc~ds in order to answer the query in the first place. We will make this a little more precise by considering several cases corresponding to the different states of R's knowledge base with r~peet to Sb, Sg. and tran~iti,m~ between them. For convenie,,.e, ~,: ~ill give an appropriate re~p~mse in terms of Sb, Sg and the actions. Clearly, it should be given in terms of descriptions of ~lat,.s and actions understandable to Q. (Moreover, by making further assumptions about Q's beliefs, R may be able to validly trim some of its respond.) 1. Suppose that it is possible to go from Sb to Sg by dropping the course aml that. this is the only action that will take one from Sb to Sg. Sb Sg In this ca.se, the respon~ is R: Yes. ct is t h~ only action that will take you fr,,m Sb to St. 2. 
2. Suppose that, in addition to going from Sb to Sg by dropping the course, there is a better way, say β, of doing so.⁶ In this case, the response is

R: Yes, but there is a better action β that will take you from Sb to Sg.

3. Suppose that dropping the course does not take you from Sb to Sg, but another action β will. This is the situation we considered in our earlier discussion. In this case the response is

R: No, but there is an action β that will take you from Sb to Sg.

4. Suppose that there is no action that will take one from Sb to Sg. In this case the response is

R: No. There is no action that will take you from Sb to Sg.

⁶ "Betterness" is yet another area for future research.

Of course, other situations are possible. The point, however, is that the additional information that R provides to prevent Q from drawing false conclusions is limited to just that part of R's knowledge base that R is focussed on in answering Q's query.

IV Constraining the Respondent's Obligations

As many people have observed - from studies across a range of linguistic phenomena, including co-referring expressions [3, 4, 16], left dislocations [14], epitomization [17], etc. - a speaker (R) normally focuses on a particular part of its knowledge base. What he focuses on depends in part on (1) context, (2) R's partial knowledge of Q's overall goals, as well as what Q knows already as a result of the interaction up to that point, and (3) Q's particular query, etc. The precise nature of how these various factors affect focusing is complex and is receiving much attention [3, 4, 16]. However, no matter how these various factors contribute to focusing, we can certainly assume that R comes to focus on a subset of its knowledge base in order to provide a direct answer to Q's query (at some level of interpretation). Let us call this subset RBc, for "R's current beliefs". Our claim is that one important constraint on cooperative behavior is that it is determined by RBc only. Clearly the information needed for a direct response is contained in RBc, as is the information needed for many types of helpful responses. In other words, RBc - that part of R's knowledge base that R decides to focus on in order to give a direct response to Q's query - also has the information needed to generate several classes of helpful responses.

The simplest case is presupposition failure [7], as in the following:

Q: How many A's were given in CIS 500?

where Q presumes that CIS 500 was offered. In trying to formulate a direct response, R will have to ascertain that CIS 500 was offered. If it was (Q's presumption is true), then R can go ahead and give a direct response. If not, then R can indicate that CIS 500 was not offered and thereby avoid misleading Q. All of this is straightforward. The point here is that the information needed to provide this extra response is already there in that part of R's knowledge base which R had to look up anyway in order to try to give the direct response.

In the above example, it is clear how the response can be localized to RBc. We would like to claim that this approach has a wider applicability: that RBc alone is the basis for responses that anticipate and attempt to block interactional defaults as well. Since RBc contains the information for a direct response, R can plan one (r). From r, R can reason whether it is possible for Q to infer some conclusion (g) which R knows to be false because ¬g is in RBc. If so, then R should modify r so as to eliminate this possibility.
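The case analysis of Section III can be made concrete with a minimal sketch (the paper itself gives no implementation; the encoding of RBc as transition facts, the names, and the sample data below are all invented for illustration, and the "betterness" judgment of case 2 is deliberately left as a stub, since the paper defers it to future research):

import itertools  # not required; standard library only

# Hypothetical encoding of R's focused beliefs RBc as
# (state, action, state) transition facts.
RBc = {
    ("Sb", "take-incomplete", "Sg"),   # another action R knows about
    # ("Sb", "drop-course", "Sg") is absent: dropping no longer works
}

def respond(asked_action, start, goal):
    """Choose among the four responses of Section III, blocking the
    false default conclusion a bare direct answer would invite."""
    works = (start, asked_action, goal) in RBc
    others = sorted(a for (s, a, g) in RBc
                    if s == start and g == goal and a != asked_action)
    if works and not others:
        return f"Yes. {asked_action} is the only action from {start} to {goal}."
    if works and others:
        # Case 2: report a better alternative ("betterness" metric left open).
        return f"Yes, but a better action {others[0]} takes you from {start} to {goal}."
    if not works and others:
        return f"No, but action {others[0]} takes you from {start} to {goal}."
    return f"No. There is no action that will take you from {start} to {goal}."

print(respond("drop-course", "Sb", "Sg"))
# -> No, but action take-incomplete takes you from Sb to Sg.

Note that the extra clause in cases 2 and 3 is exactly the information that blocks Q's likely false inference, and that it is computed from the same facts R had to consult for the direct answer.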
The point is that the only false inferences that R will attempt to block are those whose falsity can be checked in RBc. There may be other false inferences that Q may draw, whose falsity cannot be determined solely with respect to RBc (although it might be possible with respect to R's entire knowledge base). While intuitively this may not seem enough of a constraint on the amount of anticipatory reasoning that Joshi's revised maxim imposes on R, it does constrain things a lot by only considering a (relatively small) subset of the knowledge base. Factors such as context may further delimit R's responses, but they will all be relative to RBc.

V Conclusion

There are many gaps in the current work and several aspects not discussed here. In particular:

1. We are developing a formalism for accommodating the system's reasoning based on a type of HOLDS predicate whose two arguments are a proposition and a state; see [6].

2. We are working on more examples, especially more problematic cases in which, for example, a direct answer to Q's query would be "yes" (for the requested procedure) BUT a response to Q's higher goals would be "no", or "no" plus a warning - e.g.,

Q: Can I buy a 50K savings bond?
S: Yes, but you could get the same security on other investments with higher returns.

3. We need to be more precise in specifying RBc, if we are to assume that all the information needed to account for R's cooperative behavior is contained there. This may in turn reflect on how the user's knowledge base must be structured.

4. We need to be more precise in specifying how default rules play a role in causing R to modify his direct response, in recognition of Q's likelihood of drawing what seems like a generalized "script" default - if there is no reason to assume that there is anything special about the current case, don't.

REFERENCES

[1] Allen, J. Recognizing Intentions from Natural Language Utterances. In M. Brady (editor), Computational Models of Discourse. MIT Press, Cambridge MA, 1982.
[2] Carberry, S. Tracking User Goals in an Information-Seeking Environment. In Proceedings of the National Conference on Artificial Intelligence, pages 59-63. AAAI, 1983.
[3] Grosz, B. The Representation and Use of Focus in Dialogue Understanding. Technical Report 151, SRI International, Menlo Park CA, 1977.
[4] Grosz, B., Joshi, A.K. & Weinstein, S. Providing a Unified Account of Definite Noun Phrases in Discourse. In Proc. 21st Annual Meeting, pages 44-50. Assoc. for Computational Linguistics, Cambridge MA, June 1983.
[5] Joshi, A.K. Mutual Beliefs in Question Answering Systems. In N. Smith (editor), Mutual Belief. Academic Press, New York, 1982.
[6] Joshi, A., Webber, B. & Weischedel, R. Living Up to Expectations: Computing Expert Responses. In Proceedings of AAAI-84. Austin TX, August 1984.
[7] Kaplan, J. Cooperative Responses from a Portable Natural Language Database Query System. In M. Brady (editor), Computational Models of Discourse. MIT Press, Cambridge MA, 1982.
[8] Mays, E. Failures in Natural Language Systems: Application to Data Base Query Systems. In Proc. First National Conference on Artificial Intelligence (AAAI). Stanford CA, August 1980.
[9] McCoy, K. Correcting Misconceptions: What to Say. In CHI'83 Conference on Human Factors in Computing Systems. Cambridge MA, December 1983.
[10] Mercer, R. & Rosenberg, R. Generating Corrective Answers by Computing Presuppositions of Answers, not of Questions. In Proceedings of the 1984 Conference, pages 16-19.
Canadian Society for Computational Studies of Intelligence, University of Western Ontario, London, Ontario, May 1984.
[11] Pollack, M., Hirschberg, J. & Webber, B. User Participation in the Reasoning Processes of Expert Systems. In Proc. AAAI-82. CMU, Pittsburgh PA, August 1982. A longer version appears as Technical Report CIS-82-9, Dept. of Computer and Information Science, University of Pennsylvania, July 1982.
[12] Pollack, Martha E. Goal Inference in Expert Systems. Technical Report MS-CIS-84-07, University of Pennsylvania, 1984. Doctoral dissertation proposal.
[13] Pollack, M. Good Answers to Bad Questions. In Proc. Canadian Society for Computational Studies of Intelligence (CSCSI), Univ. of Western Ontario, Waterloo, Canada, May 1984.
[14] Prince, E. Topicalization, Focus Movement and Yiddish Movement: A Pragmatic Differentiation. In D. Alford et al. (editors), Proceedings of the 7th Annual Meeting, pages 249-264. Berkeley Linguistics Society, February 1981.
[15] Reiter, R. A Logic for Default Reasoning. Artificial Intelligence 13:81-132, 1980.
[16] Sidner, C. L. Focusing in the Comprehension of Definite Anaphora. In M. Brady (editor), Computational Models of Discourse. MIT Press, Cambridge MA, 1982.
[17] Ward, G. A Pragmatic Analysis of Epitomization: Topicalization It's Not. In Proceedings of the Summer Meeting 1982. LSA, College Park MD, August 1982. Also in Papers in Linguistics 17.
[18] Webber, B. & Mays, E. Varieties of User Misconceptions: Detection and Correction. In Proc. IJCAI-83. Karlsruhe, Germany, August 1983.
TRANSFORMING ENGLISH INTERFACES TO OTHER NATURAL LANGUAGES: AN EXPERIMENT WITH PORTUGUESE

GABRIEL PEREIRA LOPES (1)
Departamento de Matemática, Instituto Superior de Agronomia, Tapada da Ajuda - 1399 Lisboa Codex, Portugal

ABSTRACT

Nowadays it is common to construct English understanding systems (interfaces) that sooner or later have to be re-used, adapted, and converted to other natural languages. This is not an easy task, and in many cases the problems that arise are quite complex. In this paper an experiment carried out for the Portuguese language is reported and some conclusions are explicitly stated. A knowledge information processing system, known as SSIPA, with natural language comprehension capabilities, which interacts with users in Portuguese through a Portuguese interface, LUSO, was built. Logic was used as a mental aid and as a practical tool.

1. INTRODUCTION

The CHAT-80 program for English (Warren & Pereira, 1981; Pereira, 1983) was transformed and adapted to Portuguese. Logic programming as a mental aid, and Prolog (Coelho, 1983; Clocksin & Mellish, 1981) and Extraposition Grammars (Pereira, 1983) as practical tools, were adopted to implement a natural language interface for Portuguese. The interface reported here, called LUSO, was then coupled to a knowledge base for geography, an extension of the CHAT-80 knowledge base. In a later experiment, LUSO's dictionary was augmented with new vocabulary and LUSO was coupled to other modules that considerably augmented the expertise capabilities of SSIPA (Sistema Simulador de um Interlocutor Português Automático (2)). SSIPA is a complex knowledge information processing system with natural language comprehension and synthesis capabilities that interacts with users in Portuguese thanks to the linguistic knowledge that is logically organized and codified in the above-mentioned SSIPA interface called LUSO. After the first step of its development, SSIPA was able to answer questions about geography and could agree or disagree with the opinions stated by users about its geographical knowledge. After the second step of its development, SSIPA became more powerful and intelligent because it could also perform actions that traditionally were attributes of computer monitors (Lopes & Viccari, 1984). As a matter of fact, SSIPA can create and delete files, fill them, change their names, and list and change their contents; SSIPA receives, keeps, and sends messages; it answers questions not only about geography but also about the knowledge SSIPA represents; it agrees or disagrees with the opinions stated by users about the knowledge context behind dialogues; it reacts when users try to cheat it; and, as a rule, SSIPA behaves as a helpful, diligent and cooperative interlocutor willing to serve human users, changing from one topic of conversation to another and developing intelligent clarification dialogues (Lopes, 1984). All these features require a very powerful Portuguese language interface, whose main morpho-syntactic features are pointed out in this paper.

(1) Present address: Centro de Informática, Laboratório Nacional de Engenharia Civil, 101, Av. do Brasil, 1799 Lisboa Codex, Portugal.
(2) Simulating System of a Portuguese Automatic Interlocutor.

2. FORMALIZATION OF NATURAL LANGUAGE CONSTRUCTS

Natural languages are complex structured systems that are difficult to formalize.
Formalization can be understood as a step-by-step construction of a theory whose ultimate goal is an axiomatic definition of natural language constructs. If this descriptive theory can also function as the linguistic structured knowledge necessary to simulate a human native using his mother language, then the formalization effort has gained a new insight. While representing a natural language system, it may represent a native's competence in his mother language and, simultaneously, it may perform the role of a native using that competence. This dual unity, incorporating a description of linguistic knowledge and incorporating the same linguistic knowledge ready to be activated, is central to this work. This unification, in the same unit, of two apparently conflicting and contradictory aspects of natural languages is possible due to the usage of logic as a mental and a practical tool. SSIPA encapsulates both views of natural language.

Practice demonstrates that, for the construction of complex models, it is better to begin with simple model versions to represent the system one intends to simulate. This practical conclusion seems reasonable because knowledge about a system and about its representation keeps on growing as empirical investigation progresses towards the validation of the simulating model (Klir, 1975). However, one must be aware that while knowledge about a real system keeps on growing, so does the complexity that one can unwillingly introduce into the model. Having all this in mind, if we want to formalize linguistic knowledge about natural language we must be prepared to use powerful formal languages suited to the description of complex systems and able to be used as programming languages. Here it is assumed that computers are tools adapted to deal with complexity, considerably augmenting human capabilities to handle highly complex representational systems.

3. LUSO

LUSO's input subsystem is a device that transforms a sequence of words that is morphologically, syntactically and semantically significant into a Logical Form. A Logical Form is here understood as a sequence of predicates, envelopes for knowledge transportation from users to SSIPA's central processing unit (the EVENT DRIVER) and from this unit to users. These predicates generalize and augment the potentialities of Pereira's equivalent predicates (Pereira, 1983). They can also be compared with the lexical functions of Bresnan (1981). However, we do not use case classification. In Portuguese, prepositions associated with noun semantic features seem to be enough to identify and differentiate the meanings of verbal, noun, adjectival and even prepositional form functions (Lopes, 1984).

LUSO is a natural language interface that concentrates linguistic expert knowledge about the Portuguese language. LUSO's input subsystem works sequentially. In a first step it performs the syntactic analysis of an input Portuguese sequence of words. Depending on the task LUSO has been committed to perform, a lexically filled syntagmatic marker or a failure is the result of LUSO's attempt to prove the above-mentioned input sequence of words to be a syntactically correct yes-no question, wh-question, imperative or declarative sentence, or a syntactically correct noun phrase or prepositional phrase. When a lexically filled syntagmatic marker is obtained, it is translated to a logical form.
Finally, this form is planned and simplified according to the methodology described by Pereira (1983) and Warren (1981). The design of LUSO's input subsystem reflects the following hypotheses:

- morphological analysis of Portuguese constructs is syntactically driven;
- linguistic semantic analysis of Portuguese constructs is lexically (functionally) driven, in a quasi-Bresnanian sense (Bresnan, 1981; Pereira, 1983; Lopes, 1984);
- cognitive semantic analysis of Portuguese constructs depends on the syntactic and linguistic semantic analyses previously achieved for Portuguese constructs.

This suggests SSIPA as a formal system that already theorizes some aspects of the Portuguese language, while LUSO specifies the form of formal functions whose cognitive content and formal aptitude for transforming the system state are defined at the semantic level of the formal system. To complete the formal role we wanted SSIPA to play, LUSO's output subsystem synthesizes Portuguese noun phrases, prepositional phrases or sentences whenever it receives corresponding requests to output such constructs. To achieve that goal LUSO transforms any previously lexically filled syntagmatic marker into a sequence of Portuguese words in their final forms, ready to be sent to a user.

4. MORPHO-SYNTACTICAL ANALYSIS AND SYNTHESIS OF PORTUGUESE LANGUAGE CONSTRUCTS

The morpho-syntactical analysis of Portuguese language constructs is application-independent and is based on the various concepts developed by Chomsky and followers in the framework of the Extended Standard Theory of Generative Grammar (Chomsky, 1980, 1981a, 1981b; Rouveret, 1983; and many others). As was already mentioned in this paper, one of the crucial hypotheses behind LUSO's design reflects the idea that morphological analysis of Portuguese constructs is syntactically driven. This means that when the syntactic parser is waiting for a specific grammatical category, it takes the next word to be analysed from the input sequence of words and searches the dictionary for that category, trying to find the input word. If the input word does not match any dictionary entry for that particular category, all possible input word endings, one after another, starting from the longest towards the shortest, are matched against the ending entries for that category until a successful match occurs. If no such match succeeds, this means that the input word does not belong to the foreseen grammatical category. As a consequence, a failure occurs and the Prolog mechanism for backtracking is automatically activated. When one of the input word's possible endings matches an ending entry for the syntactically predicted category, a basic form for the input word is coined. The newly coined basic form for that input word is then checked against the subdictionary entries for the foreseen grammatical category. A process of successes and/or failures proceeds. A syntagmatic marker for each input Portuguese construct is filled with word basic forms and the corresponding syntactic feature information (person, gender and number for noun phrases; tense, mode, aspect, voice and negation for verbs; etc.). The basic form for a verb is its infinitive form; for a noun it is its singular form; for a pronoun, article or adjective it is its singular masculine form. (A small sketch of this ending-driven lookup appears below.)
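The following is an illustrative sketch of the ending-driven, syntactically predicted dictionary lookup just described, written in Python rather than the Prolog of the actual system; the tiny dictionary, the ending table, and all names are invented for the example, not taken from LUSO.

DICTIONARY = {"noun": {"rio"}, "verb": {"falar"}}                 # basic forms
ENDINGS = {"noun": {"s": ""}, "verb": {"amos": "ar", "o": "ar"}}  # ending -> basic-form suffix

def lookup(word, category):
    """Return the basic form of `word` if it can belong to `category`."""
    if word in DICTIONARY.get(category, ()):
        return word                        # direct match on a basic form
    # Try endings, longest first, mimicking the parser's search order.
    for ending in sorted(ENDINGS.get(category, ()), key=len, reverse=True):
        if word.endswith(ending) and len(word) > len(ending):
            basic = word[: len(word) - len(ending)] + ENDINGS[category][ending]
            if basic in DICTIONARY[category]:
                return basic               # coined basic form is confirmed
    return None                            # failure: backtracking would follow

print(lookup("falamos", "verb"))   # -> "falar"
print(lookup("rios", "noun"))      # -> "rio"
print(lookup("rios", "verb"))      # -> None (wrong category predicted)

In the real system the failure case triggers Prolog backtracking, so the parser can retract its category prediction and try another analysis.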
The morphological synthesis of Portuguese constructs is also syntactically driven. This means that, departing from a syntagmatic marker lexically filled with the basic forms of Portuguese words, and using the syntactic features explicitly recorded in that marker, LUSO's output subsystem coins the corresponding sequence of Portuguese words in its final output form, ready to be sent to the user with whom the system is interacting. For this purpose most of the rules that were designed to consult LUSO's dictionary were reordered. Departing from the basic forms of words, their final forms are obtained by a process nearly the inverse of the process used for input.

Extraposition grammars, the formalism developed by Pereira (1983), were used to implement the analyser and the synthesizer for Portuguese. It is worth noting that this formalism proved to be quite adequate for the description of the move-alpha rule (Chomsky, 1981b) in complex syntactic environments such as those that frequently occur in Portuguese. As a matter of fact, phrase constituent order in Portuguese sentences is quite free. LUSO takes into account the same type of problems handled by the CHAT-80 program. Additionally, it analyses syntactic structures involving prepositional phrases and verb-headed sentences where there is reordering of noun phrase constituents inside those sentences due to the heading process. Problems related to common nouns followed by the proper nouns they refer to, in the context where they appear, are also handled.

5. CONCLUSIONS

It is wiser to concentrate efforts on obtaining more and more powerful morpho-syntactic analysers, linguistic semantic analysers and cognitive semantic interpreters for the natural language we are working in. Constructing replicas of application-directed interfaces from scratch is unproductive. By constructing more and more powerful interfaces as the number of applications naturally grows, the natural language analyser, planned to be application-independent, is always under improvement, because it is always incorporating more linguistic knowledge. At the same time one is freed from consideration of basic morphological and syntactic problems, and so one can shift one's attention to more subtle problems related to tense, modality and others, and concentrate on the way concepts related to words are defined. As a consequence, the implementation task can be organized by areas of specialization.

When one has to construct an interface for a specific language it is reasonable to look for interfaces implemented for other languages where the syntactic and morphological problems faced have a similar degree of complexity. Having this in mind, the Portuguese language seriously competes with English because it raises quite important syntactic, semantic and pragmatic problems similar to those raised by Latin, Slavonic and Germanic languages.

6. ACKNOWLEDGEMENTS

I would like to thank Helder Coelho for his insightful comments and suggestions throughout this research and the writing of this paper.

7. REFERENCES

BRESNAN, J., "The passive in lexical theory", Occasional Paper 7, Center for Cognitive Science, MIT, 1981.
CHOMSKY, N., "On binding", Linguistic Inquiry, vol. 11, no. 1, 1-46, 1980.
CHOMSKY, N., "On the representation of form and function", The Linguistic Review, vol. 1, no. 1, 3-40, 1981a.
CHOMSKY, N., "Lectures on government and binding", Foris Publications, Dordrecht, Holland, 1981b.
COELHO, H., "The art of knowledge engineering with Prolog", INFOLOG PROJ, Faculdade de Ciências, Universidade Clássica de Lisboa, 1983.
KLIR, G., "On the representation of activity arrays", Int. J. General Systems, 2, 149-168, 1975.
LOPES, G., "Implementing dialogues in a knowledge information system", paper submitted to the International Workshop on Natural Language Understanding and Logic Programming, Rennes, France, 1984.
LOPES, G. and VICCARI, R., "An intelligent monitor interacting in Portuguese language", short paper accepted for ECAI-84, Pisa.
PEREIRA, F., "Logic for natural language analysis", Technical Note 275, SRI International, 1983.
ROUVERET, A., unpublished lectures delivered in Lisbon, 1983.
WARREN, D., "Efficient processing of interactive relational data base queries expressed in logic", Dept. of Artificial Intelligence, Univ. of Edinburgh, 1981.
WARREN, D. and PEREIRA, F., "An efficient easily adaptable system for interpreting natural language queries", DAI Research Paper no. 155, Univ. of Edinburgh, 1981.
PROBLEM LOCALIZATION STRATEGIES FOR PRAGMATICS PROCESSING IN NATURAL-LANGUAGE FRONT ENDS*

Lance A. Ramshaw & Ralph M. Weischedel
Department of Computer and Information Sciences
University of Delaware
Newark, Delaware 19716 USA

ABSTRACT

Problem localization is the identification of the most significant failures in the AND-OR tree resulting from an unsuccessful attempt to achieve a goal, for instance, in planning, backward-chaining inference, or top-down parsing. We examine heuristics and strategies for problem localization in the context of using a planner to check for pragmatic failures in natural language input to computer systems, such as a cooperative natural language interface to Unix**. Our heuristics call for selecting the most hopeful branch at ORs, but the most problematic one at ANDs. Surprise scores and special-purpose rules are the main strategies suggested to determine this.

I PRAGMATIC OVERSHOOT AND PROBLEM LOCALIZATION

Even if the syntactic and semantic content of a request is correct, so that a natural language front end can derive a coherent representation of its meaning, its pragmatic content or the structure of the underlying system may make any direct response to the request impossible or misleading. According to Sondheimer and Weischedel (Sondheimer, 1980), an input exhibits pragmatic overshoot if the representation of its meaning is beyond the capabilities of the underlying system. Kaplan (1979), Mays (1980a), and Carberry (1984) have each worked on strategies for dealing with particular classes of such pragmatic failures. This paper addresses the problem of identifying the most significant reason that a plan to achieve a user goal cannot be carried out.

The approach to pragmatic failure taken in this paper is to use a planner to verify the presumptions in a request. The presumptions behind a request become the subgoals of a plan to fulfill the request. Using Mays' (1980a) example, the query "Which faculty members take courses?" is here handled as an instance of an IDENTIFY-SET-MEMBERS goal, and the pragmatics of the query are checked by looking for a plan to achieve that goal. Determining both that faculty members and courses exist and that faculty members can take courses are subgoals within that plan. A presuppositional failure is noted if the planner is unable to complete a plan for the goal. Furthermore, information for recovery processing or explanatory responses can be derived directly from the failed plan by identifying whatever blocked goal in the planning tree of subgoals is most significant. Thus, in the example above, if the planner failed because it was unable to show that faculty can take courses, the helpful response would be to explain this presumption failure. We concentrate here on identifying the significant blocks rather than on generating natural language responses.

* This material is based upon work supported by the National Science Foundation under grants IST-8009673 and IST-8311400.
** Unix is a trademark of Bell Laboratories.

The examples in this paper will be drawn from a planning system intended to function as the pragmatic overshoot component of a cooperative natural language interface to the Unix operating system. We chose Unix, much as Wilensky (1982) did for his Unix Consultant, as a familiar domain that is still complex enough to require interesting planning. In this system, the pragmatics of a user request are tested by building a tree of plan structures whose leaves are elementary facts available to the operating system.
For instance, the following planning tree is built in response to the request to print a file:

(PRINT-FILE ?user ?file ?device)
  & (IS-TEXT-FILE ?file)
  & (UP-AND-RUNNING ?device)
  & (READ-PERM ?user ?file)
    | (WORLD-READ-PERM-BIT-SET ?file)
    | (READ-PERM-USER ?user ?file)
      & (IS-OWNER ?user ?file)
      & (USER-READ-PERM-BIT-SET ?file)
    | (READ-PERM-GROUP ?user ?file)
      & (SAME-GROUP ?user ?file)
      & (GROUP-READ-PERM-BIT-SET ?file)
    | (READ-PERM-SUPER-USER ?user)
      & (AUTHORIZED-SUPER-USER ?user)
      & (SUPER-USER-PASSWORD-GIVEN ?user)

(The children of AND nodes are preceded by ampersands, and OR children by vertical bars. Initial question marks precede plan variables.) If a single node in this planning tree fails, say (IS-TEXT-FILE ?file), that information can be used in explaining the failure to the user.

The failure of certain nodes could also trigger recovery processing, as in the following example, where the failure of (UP-AND-RUNNING ?device) triggers the suggestion of an alternative device:

User: Please send the file to the laser printer.
System: The laser printer is down. Is the line printer satisfactory?

This planning scheme offers a way of recognizing and responding to such temporarily unfulfillable requests, as well as to other pragmatic failures from requests unfulfillable in context, which is an important, though largely untouched, problem.

A difficulty arises, however, when more than one of the planning tree precondition nodes fail. Even in a tree made up entirely of AND nodes, multiple failures would require either a list of responses or else some way of choosing which of the failures is most meaningful to report. In a plan tree containing OR nodes, where there are often many alternative ways, all of which have failed, of achieving particular goals, it becomes even more important that the system be able to identify which of the failures is most significant. This process of identifying the significant failures is called "problem localization", and this paper describes heuristics and strategies that can be used for problem localization in failed planning trees.

II HEURISTICS FOR PROBLEM LOCALIZATION

The basic heuristics for problem localization can be derived by considering how a human expert would respond to someone who was pursuing an impossible goal. Not finding any successful plan, the expert tries to explain the block by showing that every plan must fail. Thus, if more than one branch of an AND node in a plan fails, the most significant one to report is the one that the user is least likely to be able to change, since it makes the strongest case. (The planner must check all the branches of an AND node, even after one fails, to know which is most significant to report.) For instance, if all three of the children of PRINT-FILE in our example fail, (IS-TEXT-FILE ?file) is the one that should be reported, since it is least likely that the user can affect that node. If the READ-PERM failure were reported first, the user would waste time changing the read permission of a non-text file. Unix's actual behavior, which reports the first problem that it happens to discover in trying to execute the command, is often frustrating for exactly that reason. This heuristic of reporting the most serious failure at an AND node is closely related to ABSTRIPS' use of "criticality" numbers to divide a planner into levels of abstraction, so that the most critical features are dealt with first (Sacerdoti, 1974). (A sketch of this tree representation, including the exhaustive evaluation of AND children, follows.)
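The following is a rough rendering of the AND-OR plan trees above in Python (the actual system was written in Interlisp, and the class and field names here are invented for illustration). Note that every child is evaluated even after an AND child fails, so that the most significant failure can be chosen later.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PlanNode:
    goal: str
    kind: str = "LEAF"                 # "AND", "OR", or "LEAF"
    children: List["PlanNode"] = field(default_factory=list)

def evaluate(node: PlanNode, facts: set) -> bool:
    """A leaf succeeds if its goal is an elementary fact known to the
    operating system; AND/OR nodes combine their children. All children
    are evaluated, even after one child of an AND node has failed."""
    if node.kind == "LEAF":
        return node.goal in facts
    results = [evaluate(c, facts) for c in node.children]
    return all(results) if node.kind == "AND" else any(results)

read_perm = PlanNode("READ-PERM", "OR", [
    PlanNode("WORLD-READ-PERM-BIT-SET"),
    PlanNode("READ-PERM-USER", "AND",
             [PlanNode("IS-OWNER"), PlanNode("USER-READ-PERM-BIT-SET")]),
])
print_file = PlanNode("PRINT-FILE", "AND",
                      [PlanNode("IS-TEXT-FILE"), PlanNode("UP-AND-RUNNING"),
                       read_perm])

facts = {"IS-TEXT-FILE", "UP-AND-RUNNING", "IS-OWNER",
         "USER-READ-PERM-BIT-SET"}
print(evaluate(print_file, facts))     # -> True, via the READ-PERM-USER branch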
The situation is different at OR nodes, where only a single child has to succeed. Here the most serious failure can safely be ignored, as long as some other branch can be repaired. Thus the most significant branch at an OR node should be the one the user is most likely to be able to affect. In our example, READ-PERM-USER should usually be reported rather than READ-PERM-SUPER-USER, if both have failed, since most users have more hope of changing the former than the latter. There is a duality here between the AND and OR node heuristics that is like the duality in the minimax evaluation of a move in a game tree, where one picks the best score at nodes where the choice is one's own, and the worst score at nodes where the opponent gets to choose.

III STRATEGIES FOR PROBLEM LOCALIZATION

Identification of the most significant failure requires the addition to the planner of knowledge about significance to be used in problem localization. Many mechanisms are possible, ranging from fixed, pre-set ordering of the children of nodes up through complex knowledge-based mechanisms that include knowledge about the user's probable goals. In this paper, we suggest a combination of statistical "surprise scores" and special-purpose rules.

A. Statistical Strategies Using Surprise Scores

This strategy relies on statistics that the system keeps dynamically on the number of times that each branch of each plan has succeeded or failed. These are used to define a success ratio for each branch. For example, the PRINT-FILE plan might be annotated as follows:

(PRINT-FILE ?user ?file ?device)      SUCCESSES  FAILURES  RATIO
  & (IS-TEXT-FILE ?file)                  235        3      0.99
  & (UP-AND-RUNNING ?device)              185       53      0.78
  & (READ-PERM ?user ?file)               228       10      0.96

From these ratios, we derive surprise scores to provide some measure of how usual or unusual it is for a particular node to have succeeded or failed in the context of the goal giving rise to the node. The surprise score of a successful node is defined as 1.0 minus the success ratio, so that the success of a node like IS-TEXT-FILE, which almost always succeeds, is less surprising than the success of UP-AND-RUNNING. Failed nodes get negative surprise scores, with the absolute value of the score again reflecting the amount of surprise. The surprise score of a failed node is set to the negative of the success ratio, so that the failure of IS-TEXT-FILE would be more surprising than that of UP-AND-RUNNING, and that would be reflected by a more strongly negative score. Here is an example of our PRINT-FILE plan instantiated for an unlucky user who has failed on all but two preconditions, with surprise scores added:

                                          SUCCESS/   SURPRISE
(PRINT-FILE Ann File1 laser)              FAILURE    SCORE
  & (IS-TEXT-FILE File1)                     F        -.99
  & (UP-AND-RUNNING laser)                   F        -.78
  & (READ-PERM Ann File1)                    F        -.96
    | (WORLD-READ-PERM-BIT-SET File1)        F        -.02
    | (READ-PERM-USER Ann File1)             F        -.87
      & (IS-OWNER Ann File1)                 F        -.87
      & (USER-READ-PERM-BIT-SET File1)       S        +.01
    | (READ-PERM-GROUP Ann File1)            F        -.55
      & (SAME-GROUP Ann File1)               S        +.05
      & (GROUP-READ-PERM-BIT-SET File1)      F        -.58
    | (READ-PERM-SUPER-USER Ann)             F        -.02
      & (AUTHORIZED-SUPER-USER Ann)          F        -.03
      & (SUPER-USER-PASSWORD-GIVEN Ann)      F        -.02

Note that the success of USER-READ-PERM-BIT-SET is not very surprising, since that node almost always succeeds; the failure of a node like READ-PERM-SUPER-USER, which seldom succeeds, is much less surprising than the failure of UP-AND-RUNNING.

We suggest keeping statistics and deriving surprise scores because we believe that they provide a useful if imperfect handle on judging the significance of failed nodes. Regarding OR nodes, strongly negative surprise scores identify branches that in the past experience of the system have usually succeeded, and these are the best guesses to be likely to succeed again. Thus READ-PERM-USER, the child of READ-PERM with the most strongly negative score, turns out to be the most likely to be tractable. The negative surprise scores at a failed OR node give a profile of the typical success ratios; to select the nodes that are generally most likely to succeed, we pick the most surprising failures, those with the most strongly negative surprise scores.

At AND nodes, on the other hand, the goal is to identify the branch that is most critical, that is, least likely to succeed. Surprisingly, we find that the most critical branch tends in this case also to be the most surprising failure. In our example, IS-TEXT-FILE, which the user can do nothing about, is the most surprising failure under PRINT-FILE; READ-PERM is next most surprising; and UP-AND-RUNNING, for which simply waiting often works, comes last. Therefore at AND nodes, as at OR nodes, we will report the child with the most negative surprise score; at AND nodes, this tends to identify the most critical failures, while at OR nodes, it tends to select the most hopeful. Note that the combined effect of the AND and OR strategies is to choose from among all the failed nodes those that were statistically most likely to succeed.

The main advantage of the statistical surprise score strategy is its low cost, both to design and to execute. Another nice feature is the self-adjusting character of the surprise scores, based as they are on success statistics that the system updates on an ongoing basis. For example, the likelihood of GROUP-READ-PERM being reported would depend on how often that feature was used at a particular site. The main difficulty is that surprise scores are only a rough guide to the actual significance of a failed node. The true significance of a failure in the context of a particular command may depend on world knowledge that is beyond the grasp of the planning system (e.g., the laser printer is down for days this time rather than hours), or even on a part of the planning context itself that is not reflected in the statistical averages (e.g., READ-PERM-SUPER-USER is much more likely to succeed when READ-PERM is called as part of a system dump command than when it is called as part of PRINT-FILE). To get a more accurate grasp on the significance of particular failures, more knowledge-intensive strategies must be employed.

B. Special-Purpose Problem Localization Rules

As a mechanism for adding extra knowledge, we propose supplementing the surprise scores with condition-action rules attached to particular nodes in the planning tree. The conditions in these rules can test the success or failure of other nodes in the tree or determine the higher-level planning context, while the actions alter the problem localization result by changing the surprise scores attached to the nodes. The special-purpose rules which we have found useful so far add information about the criticality of particular nodes. Consider the following planning tree, which is somewhat more successful than the previous one:

                                          SUCCESS/   SURPRISE
(PRINT-FILE Ann File2 laser)              FAILURE    SCORE
  & (IS-TEXT-FILE File2)                     S        +.01
  & (UP-AND-RUNNING laser)                   S        +.22
  & (READ-PERM Ann File2)                    F        -.96
    | (WORLD-READ-PERM-BIT-SET File2)        F        -.02
    | (READ-PERM-USER Ann File2)             F        -.87
      & (IS-OWNER Ann File2)                 F        -.87
      & (USER-READ-PERM-BIT-SET File2)       S        +.01
    | (READ-PERM-GROUP Ann File2)            F        -.55
      & (SAME-GROUP Ann File2)               S        +.05
      & (GROUP-READ-PERM-BIT-SET File2)      F        -.58
    | (READ-PERM-SUPER-USER Ann)             F        -.02
      & (AUTHORIZED-SUPER-USER Ann)          S        +.97
      & (SUPER-USER-PASSWORD-GIVEN Ann)      F        -.02

Relying on surprise scores alone, the most significant child of READ-PERM would be READ-PERM-USER, since its score is most strongly negative. However, since IS-OWNER has failed, a node which most users are powerless to change, it is clearly not helpful to choose READ-PERM-USER as the path to report. This is an example of the general rule that if we know that one child of an AND node is critical, we should include a rule to suppress that AND node whenever that child fails. Thus we attach the following rule to READ-PERM-USER:

IF (FAILED-CHILD (IS-OWNER ?user ?file))
THEN (SUPPRESS-SCORE 0.8)

In our current formulation, the numeric argument to SUPPRESS-SCORE gives the factor (i.e., percentage) by which the score should be reduced. The rule's effect is to change READ-PERM-USER's score to -.17, which prevents it from being selected.

With READ-PERM-USER suppressed, the surprise scores would then select READ-PERM-GROUP, which is a reasonable choice, but probably not the best one. While the failure of IS-OWNER makes us less interested in READ-PERM-USER, the very surprising success of AUTHORIZED-SUPER-USER should draw the system's attention to the READ-PERM-SUPER-USER branch. We can arrange for this by attaching to READ-PERM-SUPER-USER a rule that states:

IF (SUCCESSFUL-CHILD (AUTHORIZED-SUPER-USER ?user))
THEN (ENHANCE-SCORE 0.8)

This rule would change READ-PERM-SUPER-USER's score from -.02 to -.79, and thus cause it to be the branch of READ-PERM selected for reporting.

While our current rules are all in these two forms, either suppressing or enhancing a parent's score on the basis of a critical child's failure or success, the mechanism of special-purpose rules could be expanded to handle more complex forms of deduction. For example, it might be useful to add rules that calculate a criticality score for each node, working upward from preassigned scores assigned to the leaves. If the rules could access information about the state of the system, they could also use that in judging criticality, so that an UP-AND-RUNNING failure would be more critical if the device was expected to be down for a long time.
Hand- coding is an alternative to surprise scores for providing an initial comparative ranking of the children at each node, but it also would need sup- plementingwlth a strategy that can take account of unusual situations, such as our specisi-purpose rules. It might be possible to improve the parfor~- mance of a surprise score System without adding the complexity of special-purpose rules by using a for- mula that allows the surprising success or failure of a child to Inarease or decrease the chances o£ its parent being reported. While such a formula could perhaps do much of the work now done by special-purpose rules, it seams a harder approach to control, and one more likely to be sensitive to inaccuracies in the surprise scores themselves. Proper Level p..~Deta.4.1 One final question concerns identifying the proper level of detail for helpful responses. The strategies discussed so far have all focused on choosing which of multiple blocked children to report, so that they identify a path frem the root to a leaf. Yet the leaves of the planning tree may well be too detailed to represent helpful responses. A selection strategy could report the node containing the appropriate level of detail for a given user. Modeling the expertise o£ a user and using that to select an appropriate description of the problem are significant problems in natural • language generation which we have not addressed. IV RELATED APPLICATION ARE~ While developed here in the context of a prag- matice planner, strategies for problem localization could have wide applicability. For instance, the MYCIN-llke "How?" and "why?" questions (Shortllffe, 1976) used in the explanation components of many expert systems already use either the already-built successful proof tree or the portion currently being explored as a source of explanation~ Swat- tout (1983) adds extra knowledge that allows the system to Justify its answers in the user's terms, but the user must still direct the exploration. An effective problem localization facility would allow the System to answer the question "Why not?e; that is, the user could ask why a certain goal was not substantiated, and the System would reply by iden- tifying the surprising nodes that are likely to be the slgnlflcant causes of the failure. Such "Why not? n questions could be useful not only in expla- nation but also in debugEin~ / In the same way, since the execution of a PRO- LCQ progr-m can be seen as the exploration of and AND-OR tree, effective problem localization tech- niques could be useful in debugging the failed trees that result frem incorrect logic programs. Another example is recovery processing in top-down paralng, such as using au~nented transi- tion networks (Woods, 1970). When an ATN fails to parse a sentence, the blocked parse tree is quite similar to a blocked planning tree. Weischedel (1983) suEaests an approach to understanding ill- formed input that makes use of meta-rules to relax some of' the constraints on ATN arcs that blocked the original parse. Recovery processing in that model requires searching the blocked parse tree for nodes to which meta-rules can be applied. A prob- lem localization strategy could be used to sort the 142 llst of blocked nodes, so that the most llkely can- didatea would be tested first. The statistics of success ratios here would describe likely paths through the grammar. 
Nodes that exhibit surprising failure would be prime candidates for mets-rule processiag~ Before problem lor~alization can be applied in these related areas, further work needs to be done to see how many of the heuristics and strategies that apply to problem localization in the planning context can be carried over. The larger and more complex trees of an ATN or PROLO~. program may well require development of further strategies. Ho~- ever, the nature of the problem is such that even an imperfect result is likely to be useful. V IMPLEMENTATION DE~CRIPTION The examples in this paper are taken frem an Interlisp implementation of a planner which does prs~atics checking for a limited set of Unix- do, sin requests. The problem localization c~- ponent uses a combination of surprise scores and special purpose rules, as desoA'ibed. The statis- tics were derived by running the planner on a test set of commands in a simulated Unix environment. VI CONCLUSIONS In planning-based pra~matlcs processing, prob- lem localization addresses the largely untouched problem of providing helpful responses to requests unfulfillable in context. Problem localization in the planning context requires identifying the most hopeful and tractable choice at OR nodes, but the most critical and problematic one at AND nodes. Statistical surprise scores provide a cheap but effective base strategy for problem localization, and condition-action rules are an appropriate mechanism for adding further sophistlcatio~ Further work should address (1) applying recovery strategies to the localized problem, if any recovery is appropriate; (2) investigating other applications, such as expert systems, back~ard-chnining inference, and top-down parsing; and (3) exploring natural language generation to report a block at an appropriate level of detail. VII REFER~ CE -~ Carberry, E $andra. "Understanding Pragmatically Ill-Formed Input. • ~ of ~he Intern~ 1984. Kaplan, Samuel J. ~ ~ From a Portable Natural ~ Data ~ase Ouerv System. PbD. Dissertation, Computer and Information Sci- ence Dept., University of Pennsylvania, 1979. Mays, Eric. "Correcting Misconceptions About Data Base Structure. " ~ of the ~ of the Canadian Society for ~ Studies of ~ . Victoria, British Col,~bla, Canada, May 1980, 123-128. Maya, Eric. WFailtmes in Natural Language Systems: Applications to Data Base Query Systems. • ~ of t~e Ltnn~ Ammal aa~Aonal Conre~ ence on ~ ~ (AAA~-~0~. Stan- ford, California, August 1980, 3~-330. Sacerdoti, F~ D. =Planning in a Hierarchy of Abstraction Spaces." ~ ~ (197~l), 115-135. $hortllffe, F. ~ Comvuter Based Medical Cons~t~- ~ons: ~ (North-Holland, 1976). Sondheimer, N. and R. ~t Weischedel. "A Rule-Based Approach to Ill-Formed Input. • ~ of the 8th ~ ~ on ~ ~ , 1 980. Swartout, Willlam R. "IPLA~: A System for Creat- ing and Explaining Expert Consultlng Programs. • ~ 21 (1983), 285-325. Weischedel, Ralph ~ and Norman K. Sondheimer. • Meta-Rules as a Basis for ProcessinE Ill-Formed Input. = AmeriQan Journal of .~.Ji~JI.~4ZJ~ ~ (1983) , to appear. Wilensk~, Robert. "Talking to UNIX in English: An Overview of UC." ~ of the 1982 National Co~e~nae of ~ ~ (AA~-~), 103-106. Woods, Willi am A. "Transition Network Grammars for Natural Language Analysis." ~.dm~g£.i,Q/,,~l~ of the ~ 1.~ (Oct. 1970), 591-606. 143
A CONNECTIONIST MODEL OF SOME ASPECTS OF ANAPHOR RESOLUTION

Ronan G. Reilly
Educational Research Centre
St Patrick's College, Drumcondra
Dublin 9, Ireland

ABSTRACT

This paper describes some recent developments in language processing involving computational models which more closely resemble the brain in both structure and function. These models employ a large number of interconnected parallel computational units which communicate via weighted levels of excitation and inhibition. A specific model is described which uses this approach to process some fragments of connected discourse.

I CONNECTIONIST MODELS

The human brain consists of about 100,000 million neuronal units with between 1,000 and 10,000 connections each. The two main classes of cells in the cortex are the stellate and pyramidal cells. The pyramidal cells are generally large and heavily arborized. They are the main output cells of a region of cortex, and they mediate connections between one region and the next. The stellate cells are smaller and act more locally. The neural circuitry of the cortex is, apart from some minor variations, remarkably consistent. Its dominant characteristics are its parallelism, its large number of processing units, and the extensive interconnection of these units. This is a fundamentally different structure from the traditional von Neumann model. Those in favor of adopting a connectionist approach to modelling human cognition argue that the structure of the human nervous system is so different from the structure implicit in current information-processing models that the standard approach cannot ultimately be successful. They argue that even at an abstract level, removed from immediate neural considerations, the fundamental structure of the human nervous system has a pervasive effect.

Connectionist models form a class of spreading activation or active semantic network model. Each primitive computing unit in the network can be thought of as a stylized neuron. Its output is a function of a vector of inputs from neighbouring units and a current level of excitation. The inputs can be both excitatory and inhibitory. The output of each unit has a restricted range (in the case of the model described here, it can have a value between 1 and 10). Associated with each unit are a number of computational functions. At each input site there are functions which determine how the inputs are to be summarized. A potential function determines the relationship between the summarized site inputs and the unit's overall potential. Finally, an output function determines the relationship between a unit's potential and the value that it transmits to its neighbours. (A minimal sketch of such a unit is given at the end of this section.)

There are a number of constraints inherent in a neurally based model. One of the most significant is that the coinage of the brain is frequency of firing. This means that the inputs and outputs cannot carry more than a few bits of information. There are not enough bits in firing frequency to allow symbol passing between individual units. This is perhaps the single biggest difference between this approach and that of standard information-processing models. Another important constraint is that decisions in the network are completely distributed: each unit computes its output solely on the basis of its inputs; it cannot "look around" to see what others are doing, and no central controller gives it instructions.

A number of language-related applications have been developed using this type of approach. The most notable of these is the model of McClelland and Rumelhart (1981). They demonstrated that a model based on connectionist principles could reproduce many of the characteristics of the so-called word-superiority effect. This is an effect in which letters in briefly presented words and pseudo-words are more easily identifiable than letters in non-words. At a higher level in the processing hierarchy, connectionist schemes have been proposed for modelling word-sense disambiguation (Cottrell & Small, 1983) and for sentence parsing in general (Small, Cottrell, & Shastri, 1982).
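The stylized unit described above can be suggested with a minimal Python sketch; the particular function choices (summation at each site, decay of the old potential, clipping to the 1-10 range) are illustrative assumptions, not the paper's exact definitions.

def site_sum(inputs):
    """Site function: summarize the signed inputs arriving at one site."""
    return sum(inputs)

def potential(old_potential, site_values, decay=0.1):
    """Potential function: decayed old potential plus summarized input."""
    return (1 - decay) * old_potential + sum(site_values)

def output(p):
    """Output function: restrict the transmitted value to the range 1-10."""
    return max(1.0, min(10.0, p))

# One update step for a unit with an excitatory and an inhibitory site.
excitatory = [2.5, 1.0]        # weighted outputs of excitatory neighbours
inhibitory = [-1.5]            # weighted output of an inhibiting neighbour
p = potential(4.0, [site_sum(excitatory), site_sum(inhibitory)])
print(output(p))               # -> 5.6

Because the transmitted value carries only a few bits, as noted above, all communication happens through these scalar outputs; no symbolic message passing between units is involved.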
The most notable of these is the model of McClelland and Rumelhart (1981). They demonstrated that a model based on connectionist principles could reproduce many of the characteristcs of the so-called word-superiority effect. This is an effect in which letters in briefly presented words and pseudo-words are more easily identifiable than letters in non-words. At a higher level in the processing hierarchy, connectionist schemes have been proposed for modelling wOr~.sense disambiguation (Cottrell & Small, 1983), and for sentence parsing in general (Small, Cottrell, & Shastrl, 1982). 144 The model described in this paper is basically an extension of the work of Cottrell and Small (1983), and of Small (1982). It extends their sentence-centred model to deal with connected text, or discourse, and specifically with anaphorlc resolution in discourse. The model is not proposed as definitive in any way. It merely sets out to illustrate the properties of connectlonlst models, and to show how such models might be extended beyond simple word recognition applications. IT ANAPHORA The term anaphor derives from the Greek for "pointing back". What is pointed to is often referred to as the antecedent of the anaphor. However, the precise definition of an antecedent is problematic. Superflclally, it might be thought of as a preceding text element. However, as Sidner (1983) pointed out words do not refer to other words; people use words to refer to objects, and anaphora are used to refer to objects which have already been mentioned in a discourse. Sidner also maintains that the concept of co-reference is inadequate to explain the relationship between anaphor and antecedent. Co-reference means that anaphor and antecedent both refer to the same object. This explanation suffices for a sentence llke: (i) I think green apples are best and they make the best cooking apples too. where both the~ and green apples refer to the same object. However, it is inadequate when dealing with the following discourse: (2) My neighbour has an Irish Wolfhound. The~ are really huge, but friendly dogs. In this case they refers to the class of Irish Wolfhounds, but the antecedent phrase refers to a member of that set. Therefore, the anaphor and antecedent cannot be said to co-refer. Sidner introduces the concept of specification and co-speclflcetlon to get around this problem. Tnstead of referring to objects in the real world, the anaphor and its antecedent specify a cognitive element in the hearerls mind. Even though the same element is not co-speclfled one specification may be used generate the other. This is not possible with co-reference because, as Sidner puts it: Co-speclflcatlon, unlike co-reference, allows one to construct abstract representations and define relationships between them which can be studied in a computational framework. With coreference, no such use is posslble, since the object referred to exists in the world and is not available for examination by the computational process. (Sidner, 1983; p. 269). Sidner proposes two major sources of constraint on what can become the co-speclflcatlon of an anaphorlc reference. One is the shared knowledge of speaker and hearer, and the other is the concept of focus. At any given time the focus of a discourse is that discourse element which is currently being elaborated upon, and on which the speakers have centered their attention. This concept of focus will be Implemented in the model to be described, though differently from the way Sidner (1983) has envisaged it. 
In her model possible focuses are examined serlally, and a decision is not made until a sentence has been completely analyzed. In the model proposed here, the focus is arrived at on-llne, and the process used is a parallel one. Ill THE SIMULATOR The model described here was constructed using an interactive eonnectionist simulator written in Salford LISP and based on the design for the University of Rochester's ISCON simulator (Small, Shastri, Brucks, Kaufman, Cottrell, & Addanki, 1983). The simulator allows the user to design different types of units. These can have any number of input sites, each with an associated site function. Units also have an associated potential and output function. As well as unit types, ISCON allows the user to design different types of weighted llnk. A network is constructed by generating units of various types and connecting them up. Processln E is initiated by activating designated input units. The simulator is implemented on a Prime 550. A network of about 50 units and 300 links takes approximately 30 CPU seconds per iteration. As the number of units increases the simulator takes exponentially longer, making it very unwieldy for networks of more than 100 units. One solution to the speed problem is to compile the networks so that they can be executed faster. A more radical solution, and one which we are currently working on, is to develop a progra--,ing language which has as its basic unit a network. This language would involve a batch system rather than an interactive one. There would, therefore, be a trade-off between the ease of use of an interactive system and the speed and power of a batch approach. Although ISCON is an excellent medium for the construction of networks, it is inadequate for any form of sophisticated execution of networks. The proposed Network Programming Language (NPL) would permit the definition and construction of networks in much the same way as ISCON. However, with N-PL it will also be possible to selectively activate sections of a particular network, to create new networks by combining separate sub-networks, to calculate summary indices of any network, and to use these indices in guiding the flow of control in the 145 program. NPL will have a number of modern flow of control facilities (for example, FOR and WHILE loops). Unfortunately, thls language is still at the design stage and is not available for use. IV THE MODEL The model consists of five main components which interact in the manner illustrated in Figure i. The llnes ending in filled circles indicate inhibitory connections, the ordinary lines, excitatory ones. Each component consists of sets of neuron-llke units which can either excite or inhibit neighbouring nodes, and nodes in connected components. A successful parsing of a sentence is deemed to have taken place if~ during the processing of the discourse, the focus is accurately followed, and if at its end there is a stable coalition of only those units central to the discourse. A set of units is deemed a stable coalition if their level of activity is above threshold and non-decreasing. CASE SCHEMA i/ SENSE l Figure I. The main components of the model. A. Lexical Level There is one unit at the lexical level for every word in the model's lexicon. Most of the units are connected to the word sense level by unidirectional links, and after activation they decay rapidly. Units which do not have a word sense representation, such as function words and pronouns, are connected by unidirectional llnk to the case and schema levels. 
IV THE MODEL

The model consists of five main components which interact in the manner illustrated in Figure 1. The lines ending in filled circles indicate inhibitory connections, the ordinary lines, excitatory ones. Each component consists of sets of neuron-like units which can either excite or inhibit neighbouring nodes, and nodes in connected components. A successful parsing of a sentence is deemed to have taken place if, during the processing of the discourse, the focus is accurately followed, and if at its end there is a stable coalition of only those units central to the discourse. A set of units is deemed a stable coalition if their level of activity is above threshold and non-decreasing.

Figure 1. The main components of the model (the lexical, word sense, case, schema, and focus levels).

A. Lexical Level

There is one unit at the lexical level for every word in the model's lexicon. Most of the units are connected to the word sense level by unidirectional links, and after activation they decay rapidly. Units which do not have a word sense representation, such as function words and pronouns, are connected by unidirectional links to the case and schema levels. A lexical unit is connected to all the possible senses of the word. These connections are weighted according to the frequency of occurrence of the senses. To simulate hearing or reading a sentence the lexical units are activated one after another from left to right, in the order they occur in the sentence.

B. Word Sense Level

The units at this level represent the "meaning" of the morphemes in the sentence. Ambiguous words are connected to all their possible meaning units, which are connected to each other by inhibitory links. As Cottrell and Small (1983) have shown, this arrangement provides an accurate model of the processes involved in word sense disambiguation. Grammatical morphemes, function words, and pronouns do not have explicit representations at this level; rather they connect directly to the case and schema levels.

C. Focus Level

The units at this level represent possible focuses of the discourse in the sense that Sidner (1983) intends. The focus with the strongest activation inhibits competing focuses. At any one time there is a single dominant focus, though it may shift as the discourse progresses. A shift in focus occurs when evidence for the new focus pushes its level of activation above that of the old one. In keeping with Sidner's (1983) position there are two types of focus used in this model, an actor focus and a discourse focus. The actor focus represents the animate object in the agent case in the most recent sentence. The discourse focus is, as its name suggests, the central theme of the discourse. The actor focus and discourse focus can be one and the same.

D. Case Level

This model employs what Cottrell and Small (1982) call an "exploded case" representation. Instead of general cases such as Agent, Object, Patient, and so on, more specific case categories are used. For instance, the sentence John kicked the ball would activate the specific cases Kick-agent and Kick-object. The units at this level only fire when there is evidence from the predicate and at least one filler. Their output then goes to the appropriate units at the focus level. In the example above, the predicate for Kick-agent is kick, and its filler is John. The unit Kick-agent then activates the actor focus unit for John.

E. Schema Level

This model employs a partial implementation of Small's (1982) proposal for an exploded system of schemas. The schema level consists of a hierarchy of ever more abstract schemas. At the bottom of the hierarchy there are schemas which are so specific that the number of possible options for filling their slots is highly constrained, and the activation of each schema serves, in turn, to activate all its slot fillers. Levels further up in the hierarchy contain more general schema details, and the connections between slots and their potential fillers are less strong.
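How the lexical and word sense levels interact can be sketched in the same spirit. The fragment below is illustrative Python with invented words and weights, abstracting away from the unit machinery above: a lexical unit spreads frequency-weighted activation to its candidate senses, and the mutually inhibitory senses compete until one suppresses the other:

# Sketch of word-sense competition: a lexical unit excites all senses of
# its word (weights ~ sense frequency); ambiguous senses inhibit each other.

SENSES = {"ball": {"ball/sphere": 0.7, "ball/dance": 0.3}}
INHIBITION = 0.5      # strength of the inhibitory links between senses
THRESHOLD = 0.05

def disambiguate(word, steps=20):
    act = dict(SENSES[word])              # initial, frequency-weighted
    for _ in range(steps):
        act = {s: max(0.0, 1.1 * a - INHIBITION *
                      sum(v for r, v in act.items() if r != s))
               for s, a in act.items()}
    return {s: round(a, 2) for s, a in act.items() if a > THRESHOLD}

print(disambiguate("ball"))   # the more frequent sense suppresses the other

The same competitive arrangement, realized by the inhibitory links between focus units, is what keeps a single dominant discourse focus at the focus level.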
V THE MODEL'S PERFORMANCE

At its current stage of development the model can handle discourse involving pronoun anaphora in which the discourse focus is made to shift. It can resolve the type of reference involved in the following two discourse examples (based on examples by Sidner, 1983; p. 276):

D1-1: I've arranged a meeting with Mick and Peter.
   2: It should be in the afternoon.
   3: We can meet in my office.
   4: Invite Pat to come too.

D2-1: I've arranged a meeting with Mick, Peter, and Pat.
   2: It should be in the afternoon.
   3: We can meet in my office.
   4: It's kind of small,
   5: but we'll only need it for an hour.

In discourse D1, the focus throughout is the meeting mentioned in D1-1. The it in D1-2 can be seen to co-specify the focus. In order to determine this a human listener must use their knowledge that meetings have times, among other things. Although no mention is made of the meeting in D1-3 to D1-4, human listeners can interpret the sentences as being consistent with a meeting focus. In the discourse D2 the initial focus is the meeting, but at D2-4 the focus has clearly shifted to my office, and remains there until the end of the discourse.

The network which handles this discourse does not parse it in its entirety. The aim is not for completeness, but to illustrate the operation of the schema level of the model, and to show how it aids in determining the focus of the discourse. Initially, in analyzing D1 the word meeting activates the schema WORK_PLACE_MEETING. This schema gets activated, rather than any other meeting schema, because the overall context of the discourse is that of an office memo. Below is a representation of the schema. On the left are its component slots, and on the right are all the possible fillers for these slots.

WORK_PLACE_MEETING schema
  WPM_location:     library, tom_office, my_office
  WPM_time:         morning, afternoon
  WPM_participants: tom, vincent, patricia, mick, peter, me

When this schema is activated the slots become active, and generate a low level of subthreshold activity in their potential fillers. When one or more fillers become active, as they do when the words Mick and Peter are encountered at the end of D1-1, the slot forms a feedback loop with the fillers which lasts until the activity of the sense representation of meeting declines below a threshold. A slot can only be active if the word activating the schema is active, which in this case is meeting. When a number of fillers can fill a slot, as is the case with the WPM_participants slot, a form of regulated sub-network is used. On the other hand, when there can only be one filler for a slot, as with the WPM_location slot, a winner-take-all network is used (both these types of sub-network are described in Feldman and Ballard, 1982).

Associated with each unit at the sense level is a focus unit. A focus unit is connected to its corresponding sense unit by a bidirectional excitatory link, and to other focus units by inhibitory links. As mentioned above, there are two separate networks of focus units, corresponding to actor focuses and discourse focuses, respectively. Actors are animate objects which can serve as agents for verbs. An actor focus unit can only become active if its associated sense level unit is a filler for an agent case slot. The discourse focus and actor focus can be, but need not be, one and the same. The distinction between the two types of focus is in line with a similar distinction made by Sidner (1983). The structure of the focus level network ensures that there can only be one discourse focus and one actor focus at a given time.

In discourses D1 and D2 the actor focus throughout is the speaker. At the end of the sentence D1-1 the WORK_PLACE_MEETING schema is in a stable coalition with the sense units representing Mick and Peter. The focus units active at this stage are those representing the speaker of the discourse (the actor focus), and the meeting (the discourse focus). When the sentence D1-2 is encountered the system must determine the co-specification of it. The lexical unit it is connected to all focus units of inanimate objects. It serves to boost the potential of all the focus units active at the time.
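This boost-and-settle behaviour can be sketched as follows (Python; the numbers and function names are invented, and the real network uses continuous unit dynamics rather than these discrete updates): schema slots prime their potential fillers below threshold, the pronoun it boosts every active inanimate focus unit, and later content words settle the competition:

# Sketch: schema slots prime fillers; "it" boosts all active inanimate
# focus units; mutual inhibition then settles on a single discourse focus.

WORK_PLACE_MEETING = {
    "WPM_location": ["library", "tom_office", "my_office"],
    "WPM_time": ["morning", "afternoon"],
    "WPM_participants": ["tom", "vincent", "patricia", "mick", "peter", "me"],
}

focus = {"meeting": 0.6, "my_office": 0.2}     # inanimate focus candidates
prime = {f: 0.0 for f in ["library", "tom_office", "my_office",
                          "morning", "afternoon"]}

def activate_schema(schema, level=0.05):
    # sub-threshold priming of every potential slot filler
    for fillers in schema.values():
        for f in fillers:
            if f in prime:
                prime[f] += level

def hear_it(boost=0.2):
    # the pronoun cannot choose: it boosts every active inanimate focus
    for f in focus:
        focus[f] += boost

def hear_word(word, support=0.4, inhibition=0.3):
    if word in focus:
        focus[word] += support + prime.get(word, 0.0)
    for f in focus:                    # competitors lose activation
        if f != word:
            focus[f] = max(0.0, focus[f] - inhibition)

activate_schema(WORK_PLACE_MEETING)
hear_it()                  # D2-4: "It's kind of small" -- both boosted
hear_word("my_office")     # "small" supports the MY_OFFICE coalition
print(max(focus, key=focus.get))       # -> my_office: the focus has shifted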
At this stage, if there are a number of competitors for co-specification, a number of focus units will be activated. However, by the end of the sentence, if the discourse is coherent, one or other of the focuses should have received sufficient activation to suppress the activation of its competitors. In the case of D1 there is no competitor for the focus, so the it serves to further activate the meeting focus, and does so right from the beginning of the sentence. The sentence D1-3 serves to fill the WPM_location slot. The stable coalition is then enlarged to include the sense unit my office. The activation of my office activates a schema, which might look like this:

MY_OFFICE schema
  MO_location: Prefab 1
  MO_size:     small
  MO_windows:  two

It is not strictly correct to call the above structure a schema. Being so specific, there are only single fillers for any of its slots. It is really a representation of the properties of a specific office, rather than predictions concerning offices in general. However, in the context of this type of model, with the emphasis on highly specific rather than general structures, the difference between the two schemas presented above is not a clearcut one.

When my office is activated, its focus unit also receives some activation. This is not enough to switch the focus away from meeting. However, it is enough to make it a candidate, which would permit a switch in focus in the very next sentence. If a switch does not take place, the candidate's level of activity rapidly decays. This is what happens in D1-4, where the sentence specifies another participant, and the focus stays with meeting. The final result of the analysis of discourse D1 is a stable coalition of the elements of the WORK_PLACE_MEETING frame, and the various participants, times, and locations mentioned in the discourse. The final actor focus is the speaker, and the final discourse focus is the meeting.

The analysis of discourse D2 proceeds identically up to D2-4, where the focus shifts from meeting to my office. At the beginning of D2-4 there are two candidates for the discourse focus, meeting and my office. The occurrence of the word it then causes both these focuses to become equally active. This situation reflects our intuitions that at this stage in the sentence the co-specifier of it is ambiguous. However, the occurrence of the word small causes a stable coalition to form with the MY_OFFICE schema, and gives the my office focus the extra activation it needs to overcome the competing meeting focus. Thus, by the end of the sentence, the focus has shifted from meeting to my office. By the time the it in the final sentence is encountered, there is no competing focus, and the anaphor is resolved immediately.

There are a number of fairly obvious drawbacks with the above model. The most important of these is the specificity of the schema representations. There is no obvious way of implementing a system of variable binding, where a general schema can be used, and various fillers can be bound to, and unbound from, the slots. It is not possible to have such symbol passing in a connectionist network. Instead, all possible slot fillers must be already bound to their slots, and selectively activated when needed. To make this selective activation less unwieldy, a logical step is to use a large number of very specific schemas, rather than a few general ones.
Another drawback of the model proposed here is that there is no obvious way of showing how new schemas might be developed, or how existing ones might be modified. One of the basic rules in building connectionist models is that the connections themselves cannot be modified, although their associated weights can be. This means that any new knowledge must be incorporated in an old structure by changing the weights on the connections between the old structure and the new knowledge. This also implies that the new and old elements must already be connected up. In spite of the apparent oversupply of neuronal elements in the human cortex, to have everything connected to virtually everything else seems to be profligate.

Another problem with connectionist models is their potential "brittleness". When trying to program a network to behave in a particular way, it is difficult to resist the urge to patch in arbitrary fixes here and there. There are, as yet, no equivalents of structured programming techniques for networks. However, there are some hopeful signs that researchers are identifying basic network types whose behavior is robust over a range of conditions. In particular, there are the winner-take-all and regulated networks. The latter type permits the specification of upper and lower bounds on the activity of a sub-network, which allows the designer to avoid the twin perils of total saturation of the network on the one hand, and total silence on the other. A reliable taxonomy of sub-networks would greatly aid the designer in building robust networks.

VI CONCLUSION

This paper briefly described the connectionist approach to cognitive modelling, and showed how it might be applied to language processing. A connectionist model of language processing was outlined, which employed schemas and focusing techniques to analyse fragments of discourse. The paper described how the model was successfully able to resolve simple it anaphora. A tape of the simulator used in this paper, along with a specification of the network used to analyze the sample discourses, is available from the author at the above address, upon receipt of a blank tape.

VII REFERENCES

Cottrell, G.W., & Small, S.L. (1983). A connectionist scheme for modelling word sense disambiguation. Cognition and Brain Theory, 6, 89-120.
Feldman, J.A., & Ballard, D.H. (1982). Connectionist models and their properties. Cognitive Science, 6, 205-254.
McClelland, J.L., & Rumelhart, D.E. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88, 375-407.
Sidner, C.L. (1983). Focussing in the comprehension of definite anaphora. In M. Brady & R.C. Berwick (Eds.), Computational models of discourse. Cambridge, Massachusetts: MIT Press.
Small, S.L. (1982). Exploded connections: Unchunking schematic knowledge. In Proceedings of the Fourth Annual Conference of the Cognitive Science Society, Ann Arbor, Michigan.
Small, S.L., Cottrell, G.W., & Shastri, L. (1982). Toward connectionist parsing. In Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, Pennsylvania.
Small, S.L., Shastri, L., Brucks, M.L., Kaufman, S.G., Cottrell, G.W., & Addanki, S. (1983). ISCON: A network construction aid and simulator for connectionist models. TR109, Department of Computer Science, University of Rochester.
CONCURRENT PARSING IN PROGRAMMABLE LOGIC ARRAY (PLA-) NETS
PROBLEMS AND PROPOSALS

Helmut Schnelle
Ruhr-Universität Bochum, Sprachwissenschaftliches Institut
D-4630 Bochum 1, West Germany

ABSTRACT

This contribution attempts a conceptual and practical introduction to the principles of wiring or constructing special machines for language processing tasks instead of programming a universal machine. Construction would in principle provide higher descriptive adequacy in computationally based linguistics. After all, our heads do not apply programs to stored symbol arrays but are appropriately wired for understanding or producing language.

Introductory Remarks

1. For me, computational linguistics is not primarily a technical discipline implementing performance processes for independently defined formal structures of linguistic competence. Computational linguistics should be a foundational discipline: It should be related to process-oriented linguistics as the theory of logical calculi is to formal linguistics (e.g. generative linguistics, Montague grammars, etc.).

2. As it stands, computational linguistics does not yet meet the requirements for a foundational discipline. Searle's arguments against the claims of artificial intelligence apply fully to computational linguistics: Programmed solutions of tasks may execute the task satisfactorily without giving a model of its execution in the organism. Our intentional linguistic acts are caused by and realized in complicated concurrent processes occurring in networks of neurons and are experienced as spontaneous. This also applies to special cases such as the recognition of syntactic structure (parsing). These processes are not controlled and executed by central processor units.

3. Computational linguistics must meet the challenge to satisfy the double criterion of descriptive adequacy: adequacy in the description of what human beings do (e.g. parsing) and adequacy in the description of how they do it (namely by spontaneous concurrent processes corresponding to unconscious intuitive understanding). It must try to meet the challenge to provide the foundations for a descriptively and explanatorily adequate process-oriented linguistics, even when it is clear that the presently available conceptual means for describing complicated concurrent processes - mainly the elements of computer architecture - are far less understood than programming theory and programming technique.

4. Note: It does not stand to question that there is any problem which, in principle, could not be solved by programming. It is simply the case that almost all solutions are descriptively inadequate for representing and understanding what goes on in human beings, even where they provide an adequate representation of input-output relations - and would thus pass Turing's test.

5. In my opinion, the main features to be realized in more adequate computational systems are
- concurrency of localized operations (instead of centrally controlled sequential processes), and
- signal processing (instead of symbol manipulation).
These features cannot be represented by a program on an ordinary von Neumann machine since this type of machine is by definition a sequential, centrally controlled symbol manipulator. This does not exclude that programs may simulate concurrent processes. For instance, programs for testing gate array designs are of this kind.
But simulating programs must clearly separate the features they simulate from the features which are only specific to their sequential operation. Electronic worksheet programs (in particular those used for planning and testing of gate arrays) are appropriate simulators of this type since their display on the monitor shows the network and signal flow whereas the specifics of program execution are concealed from the user.

6. How should computational linguistics be developed to meet the challenge? I think that the general method has already been specified by von Neumann and Burks in their attempt to compare behavior and structure in computers and brains in terms of cellular automata. They have shown in this context that we always have two alternatives: Solutions for tasks can be realized by programs to be executed on a universal centrally controlled (von Neumann) machine, or they can be realized by constructing a machine. Since ordinary - i.e. non-cellular - von Neumann machines are sequential, realization of concurrent processes can only be approached by constructing (or describing the construction of) such a system, e.g. the brain.

My Approach

7. In view of this, I have developed theoretical net-linguistics on the basis of neurological insights. My primary intention was to gain insights into the principles of construction and functioning (or structure and behavior) more than to arrive at a very detailed descriptive neurological adequacy (as e.g. in H. Gigley's approach, cp. her contribution to this conference).

8. The method which to me seemed the most fruitful one for principled analysis is the one applied in systematic architecture for processor construction. In setting up idealized architectures we should proceed in steps:
- select appropriate operational primitives,
- build basic network modules and define their properties,
- construct complex networks from modules showing a behavior which is typical for the field to be described.
A possible choice is the following:
- take logical operators of digital switching networks as primitives (and show how they are related to models of neurons),
- take AND-planes and OR-planes (the constituents of programmable array logic - PLA) together with certain simple configurations such as shift registers,
- show how linguistic processes (such as generators and parsers for CF grammars) could be defined as a combination of basic modules.

9. The method is described and applied in Mead/Conway (1980). They show how logical operators can be realized. Their combination into a combinational logic module presents three types of design problems (cp. ibid. p. 77), the first two being simple, the third being related to our problem: "a complex function must be implemented for which no direct mapping into a regular structure is known" (ibid. p. 79). "Fortunately, there is a way to map irregular combinational functions onto regular structures, using the programmable logic array (PLA) ... This technique of implementing combinational functions has a great advantage: functions may be significantly changed without requiring major changes in either the design or layout of the PLA structure. [Figure 1] illustrates the overall structure of a PLA. The diagram includes the input and output registers, in order to show how easily these are integrated into the PLA design. The inputs stored during [clock signal] φ1 in the input register are run vertically through a matrix of circuit elements called the AND plane. The AND plane generates specific logic combinations of the inputs. The outputs of the AND plane leave at right angles to its input and run horizontally through another matrix called the OR plane. The outputs of the OR plane then run vertically and are stored in the output register during [clock signal] φ2" (ibid. p. 80).

Figure 1: Overall structure of the PLA (cf. Mead/Conway, 1980, p. 81).

"There is a very straightforward way to implement finite state machines in integrated systems: we use the PLA form of combinational logic and feedback some of the outputs to inputs ... The circuit's structure is topologically regular, has a reasonable topological interface as a subsystem, and is of a shape and size which are functions of the appropriate parameters. The function of this circuit is determined by the 'programming' of its PLA logic" (ibid. p. 84).
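The PLA idea can be made concrete with a small sketch (ours, in Python, not Mead and Conway's notation): the AND plane is a list of product terms over the inputs, the OR plane ORs selected product terms into outputs, and a finite state machine falls out of feeding the outputs back to the inputs through the clocked register:

# Sketch of a PLA: the AND plane forms product terms over the inputs,
# the OR plane ORs selected product terms into the outputs.
# An AND-plane row maps input index -> required value (1 or 0).

def pla(and_plane, or_plane, inputs):
    products = [all(inputs[i] == v for i, v in row.items())
                for row in and_plane]
    return [int(any(products[t] for t in terms)) for terms in or_plane]

# A 2-bit counter as a PLA finite state machine: outputs are fed back
# to the inputs through the clocked register (phi-1 in, phi-2 out).
AND_PLANE = [{0: 0},            # term 0:  b0 = 0
             {0: 1, 1: 0},      # term 1:  b0 = 1 and b1 = 0
             {0: 0, 1: 1}]      # term 2:  b0 = 0 and b1 = 1
OR_PLANE = [[0],                # next b0 = NOT b0
            [1, 2]]             # next b1 = b0 XOR b1

state = [0, 0]
for tick in range(5):
    print(tick, state)
    state = pla(AND_PLANE, OR_PLANE, state)   # one clock cycle

The "programming" of the machine lives entirely in the two plane tables, which is what makes a mechanical derivation of such planes from a grammar plausible.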
10. As a first example of the application of these methods, it has been shown in Schnelle (forthcoming) how a complex PLA network composed from AND-planes, OR-planes, ordinary registers, and shift registers can be derived by a general and formal method from any CF-grammar, such that the network generates a sequence of control signals, triggering the production of a corresponding terminal symbol (or of a string of terminal symbols). The structure derived is a set of units, one for each non-terminal occurring in the grammar and one for each terminal symbol. Before presenting the network realizing simple units of this type, we give an informal indication of its functioning. A unit for a nonterminal symbol occurring to the left of an arrow in the CF grammar to be realized, which allows m rule alternatives and occurs at n places to the right of the rule arrow, has the form of figure 2a. A unit for a terminal symbol - say "A" - occurring at n places to the right of an arrow has the form of figure 2b. The "STORE" units can be realized by OR-planes, the "READ" units by AND-planes. The flip-flops (FF) are simple register units and the shift register is a simple PLA network of well known structure. The reader should note that notions such as "store", "read" and "address" are metaphorical and chosen only to indicate the functioning: The boxes are not subprograms or rules but circuits. There are neither addresses nor acts of selection, nor storing or reading of symbols.

Figure 2a: General form of a unit realizing a nonterminal symbol of the grammar.

Figure 2b: General form of a unit realizing a terminal symbol of the grammar (the symbol "A" in this case).

11. The complex networks definable by a general method from CF-grammar specifications, as shown in Schnelle (forthcoming), can be easily extended into a predictive top-to-bottom, left-to-right parser such that the prediction paths are generated in parallel by concurrent signal flows (as will be illustrated below). At the realizations of a terminal symbol a TEST PREDICTION "a" is included, as indicated in figure 2b. However, a detailed analysis of this system shows that in more complicated cases the signal flow cannot be properly organized by a schematic adaptation of the system realized for production. I am therefore planning to investigate realizations of concurrent signal flows for bottom-up processors. At the moment I do not yet have a general method for specifying bottom-up processors in terms of networks.

12. In order to illustrate concurrent information flow during parsing let me present two simple examples. The first example provides details by an extremely simple wiring diagram, figure 3, which realizes the "grammar" S → …, S → AC.

Figure 3: Wiring diagram realizing the first example grammar.

It illustrates the general type of wiring where the hyphenated units must be multiplied into n storage units, whenever there are n inputs. The box for PRINT "a" or TEST PREDICTION "a" shows a multiplicity of 2 storage units marked 3 and 4 for the case of two input and output lines. For the details of PLA construction of such networks the reader is referred to Schnelle (forthcoming).

13. We shall now illustrate the signal flow occurring in a PLA realization of the grammar: S → Ac, S → aD, A → a, A → ab, D → bd, D → d. A grammatically perspicuous topology of the network is shown in figure 4. The double lines are wires, the boxes have an internal structure as explained above. For a parse of the string abd the wiring realizes the following concurrent signal flow on the wires corresponding to the numbers indicated in figure 4.

Figure 4: Network topology for the grammar S → Ac, S → aD, A → a, A → ab, D → bd, D → d. (Whenever a signal reaches a TEST PREDICTION "x" box via a line numbered y we write y(x); "Ai" means: the i-th rule alternative at A.)

Time   Active lines           Parse information
(1)    1, 2(a)
(2)    3(a), 4(a)
(3)    Read "a"
(4)    5, 6(b), 7              A1
(5)    10(c), 8(b), 14(d)
(6)    Read "b"
(7)    9, 12(d)                A2
(8)    10(c)
(9)    Read "d"
(10)   13                      D1
(11)   16                      S2

Since the only possible generation derivable from this parse information is S2, D1, the structure is [a[bd]D]S, whereas the informations A1 and A2 remain unused, i.e. not confirmed, by the complete parse.
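The concurrent prediction flow tabulated above can be mimicked in software. The following Python sketch is ours and simulates the parallel pursuit of rule alternatives, not Schnelle's wiring: it expands all alternatives of the point-13 grammar in parallel and kills every prediction path that the next input symbol fails to confirm:

# Sketch: all rule alternatives are pursued in parallel, the way the
# network propagates prediction signals; reading an input symbol kills
# every path that does not predict it.

GRAMMAR = {"S": [["A", "c"], ["a", "D"]],
           "A": [["a"], ["a", "b"]],
           "D": [["b", "d"], ["d"]]}

def expand(symbols, used):
    """Yield (terminal string, rule alternatives used) per derivation."""
    if not symbols:
        yield "", used
        return
    head, rest = symbols[0], symbols[1:]
    if head in GRAMMAR:
        for i, alt in enumerate(GRAMMAR[head], 1):
            yield from expand(alt + rest, used + ["%s%d" % (head, i)])
    else:
        for tail, u in expand(rest, used):
            yield head + tail, u

def parse(word):
    paths = list(expand(["S"], []))       # all predictions, concurrently
    for n in range(1, len(word) + 1):
        paths = [(w, u) for w, u in paths if w.startswith(word[:n])]
        print("read %r -> %d live paths" % (word[n - 1], len(paths)))
    return [u for w, u in paths if w == word]

print(parse("abd"))   # -> [['S2', 'D1']]: only S -> aD, D -> bd survives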
14. We have presented only very simple illustrations of concurrent information flow and their realizations in integrated circuits. Much more research will be necessary. Our contribution tried to illustrate (together with Schnelle forthcoming) how current VLSI design methods - and simulation programs used in the context of such designs - could be applied. It is hoped that several years of experience with designs of such types may lead to fruitful foundational concepts for process-oriented linguistics, which solves its tasks by constructing descriptively adequate special machines instead of programming universal von Neumann machines.

References

C. Mead, L. Conway (1980) Introduction to VLSI Design, Reading, Mass.: Addison-Wesley.
H. Schnelle (forthcoming) Array logic for syntactic production processors - An exercise in structured net-linguistics. In: E. Hajičová, J. Mey (eds.), Petr Sgall Festschrift.
A Case Analysis Method Cooperating with ATNG and Its Application to Machine Translation

Hitoshi IIDA, Kentaro OGURA and Hirosato NOMURA
Musashino Electrical Communication Laboratory, N.T.T.
Musashino-shi, Tokyo, 180, Japan

Abstract

This paper presents a new method for parsing English sentences. The parser, called the LUTE-EJ parser, combines case analysis with ATNG-based analysis. The LUTE-EJ parser has two interesting mechanical characteristics. One is a structured buffer, the Structured Constituent Buffer, which holds previous fillers for a case structure, instead of case registers, before a verb appears in a sentence. The other is an extended HOLD mechanism (in ATN), by which an embedded clause, especially a "be-deleted" clause, is recursively analyzed by case analysis. This parser's features are (1) extracting a case filler, basically as a noun phrase, by ATNG-based analysis, including recursive case analysis, and (2) mixing syntactic and semantic analysis by using case frames in case analysis.

1. Introduction

In much natural language processing, including machine translation, ATNG-based analysis is the usual method, while case analysis is commonly employed for Japanese language processing. The parser described in this paper consists of two major parts. One is ATNG-based analysis for getting case elements and the other is case analysis for getting a semantic clause analysis.

The LUTE-EJ parser has been implemented on an experimental machine translation system LUTE (Language Understander, Translator & Editor) which can translate English into Japanese and vice versa. LUTE-EJ is the English-to-Japanese version of LUTE.

In case analysis, two ways are generally used for parsing. One way analyzes a sentence from left to right, by using case registers. Case fillers which fill the case registers are major participants of constituents, for example SUBJECT, OBJECT, PPs (Prepositional Phrases) and so on, in a sentence. In particular, before a verb appears, at least one participant (the subject) will be registered, for example, in the AGENT register. The other method has two phases in the analysis processing. In the first phase, phrases are extracted as case elements in order to fill the slots of a case frame. The second is to choose the adequate case element among the extracted phrases for a certain case slot and to continue this process for the other phrases and the other case slots. In this method, there are no special actions, i.e. no registering, before a verb appears (Winograd [83]).

The English question-answering system PLANES (Waltz [78]) uses a special kind of case frames, "concept case frames". By using them, phrases in a sentence, which are described by using particular "subnets" and semantic features (for a plane type and so on), are gathered and an action of a requirement (a sentence) is constructed.

2. LUTE-EJ Parser

2.1. LUTE-EJ Parser's Domain

The domain treated by the LUTE-EJ parser is what might be called a set of "complex sentences and compound sentences". Let S be an element of this set and let CLAUSE be a simple sentence (which might include an embedded sentence). Now, if MAJOR-CL and MINOR-CL are a principal clause and a subordinate clause, respectively, S can be written as follows (in BNF):

(R1) <S> ::= (<MINOR-CL>) <MAJOR-CL> (<MINOR-CL>)
(R2) <MAJOR-CL> ::= <CLAUSE> / <S>
(R3) <MINOR-CL> ::= <CONJUNCTION> <CLAUSE>

The syntactic and semantic structure of a CLAUSE is basically expressed by a case structure. In this expression, the structure can be described by using case frames. The described structure implies the semantic structure intended by a CLAUSE and depends mainly on verb lexical information. Case elements in a CLAUSE are Noun Phrases, object NPs of PPs, or some kinds of ADVerbs relating to times and locations. The NP structure is described as follows:

(R4) <NP> ::= (<NHD>) {<NP>/NOUN} (<NMP>)
            / <Gerund-PH>
            / <To-infinitive-PH>
            / That <CLAUSE>
The described structure implies the semantic structure intended by a CLAUSE and mainly depending on verb lexical information. Case elements in a CLAUSE are Noun Phrases, object NPs of PPs or some kinds of ADVerbs with relation to times and locations. The NP structure is described as follows, (R4) <NP> :: = (<NHD >){ < NP>/NOUN}( < NMP >) / < Gerund-PH > / < To-infmitive~PH > /That < CLAUSE > 154 where NHD(Noun HeaDer) is ~premodification" and NMP(Noun Modifier Phrase) is "postmodification'. Thus, NMP is a set including various kinds of embedded finite clauses, relative or be-deleted relative finite clauses. 2.2. LUTE-EJ Parser Overview After morphological analysis with looking up words for an input sentence in the dictionary, an input sentence analysis is begun from left to right. Thus, after a verb has been seen, it makes progress to analyze a CLAUSE by referring to the case frame corresponding to the verb, as each slot in the case frame is filled with an NP or an object of PP. A case slot consists of three elements: one semantic filler condition slot and two syntactic and semantic marker slots. Here, a preposition is directly used as a syntactic marker. Furthermore, four pseudo markers, ~subject", "object", ~indirect-object" and ~complement", are used. As a semantic marker, a so- called deep case is used (now, 41 ready for this case system). Then, LUTE-EJ Parser extracts the semantic structure implied in a sentence (S or CLAUSE) as an event or state instance created from a case frame, which is a class or a prototype. An NP is parsed by the ATNG-based analysis in order to decide a case slot filler {now, 81 nodes on this ATNG). Next, the reason why the case analysis and ATNG-based analysis are merged will be stated. It has two main points. One point is about the depth of embedded structures. For example, the investigation on the degree of a CLAUSE complexity resulted in the necessity to handle a high degree of complexity with efficiency. The NMP structure is also more complex. In particular, embedded VPs or ADJPHs appear recursively. Therefore, a recursive process for analyzing NP is needed. The other point is about the representation of grammatical structures. Grammar descriptions should be easy to read and write. Representations by using case frames make rules of any kind for NMP very simple, describing no NMP contents. In order to deal with the above two points, combining the case analysis with ATNG-based analysis solves those problems. Verbal NMP(VTYPE-NMP)s are dealt with by reeursive case-analyzing 2.3. Structured Constituent Buffer As mentioned above, syntactic and semantic structures are basically derived from a sentence by analyzing a CLAUSE. Analysis control depends on the case frame, when the verb has been just appearing in a CLAUSE. However until seeing the verb, all of the phrases, which may be noun phrases with embedded clauses, PPs or ADVs before the verb, must be held in certain registers or buffers. Here, a new buffer, STRuctured CONstituent Buffer(STRCONB), is introduced to hold these phrases. This buffer has surface constituents structure, and consists of specific slots. There are two slot types. One is a register to control English analysis and the other is a buffer to hold some mentioned-above constituents. The first type has two slots ; one is similar to a blackboard and registers the names of unfilled-slots. The other stacks the names of filled-slots in order of phrase appearance and is used for backtracking in the analysis. 
The second slot type involves several kinds of procedures. One of the main procedures, "getphrase", extracts some candidates for the slot filler from the left side of a CLAUSE. It fills the slot with these candidates. This procedure takes one argument, which is a constituent marker, "prepositional-phrase", "noun-phrase" and so on (in practice, using each abbreviation). For example, when the following sentence is given, the evaluation of "(getphrase 'preph)" in LISP returns one symbol generated for the head prepositional phrase, "In the machine language", and determines the slot filler.

(s1) "In the machine language each basic machine operation is represented by the numerical code that invokes it in the computer, and ....."

However, if the argument is "verb", this procedure only tells that the top word of the unprocessed CLAUSE is a verb. At that moment, the process of filling the slots in STRCONB ends. Then case analysis starts.
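A rough rendering of this control regime may help (Python for illustration, although the original is in LISP; the token format, the toy case frame for "send", and all slot names are our own): constituents before the verb are buffered in a STRCONB-like structure, and once the verb is seen its case frame consumes both the buffered and the remaining phrases:

# Sketch of STRCONB-style control: phrases before the verb are buffered;
# the verb's case frame then drives slot filling. A token is a
# (phrase, constituent-marker) pair; the case frame lists
# (syntactic marker, deep case, semantic filler condition) triples.

SEND_FRAME = [("subject", "*agent", "human"),
              ("object", "*object", "message")]

def getphrase(tokens, kind):
    if kind == "verb":                  # only *tells* whether a verb is next
        return tokens and tokens[0][1] == "verb"
    phrase, rest = tokens[0], tokens[1:]
    return (phrase if phrase[1] == kind else None), rest

def analyze(tokens):
    strconb = {"filled": [], "unfilled": ["subject"]}  # blackboard + stack
    while not getphrase(tokens, "verb"):    # buffer constituents pre-verb
        phrase, tokens = getphrase(tokens, "np")
        strconb["filled"].append(("subject", phrase))
    verb, tokens = tokens[0], tokens[1:]
    instance = {"*predicate": verb[0]}
    for marker, case, semcond in SEND_FRAME:  # frame consumes phrases
        buffered = dict(strconb["filled"]).get(marker)
        if buffered is None:
            buffered, tokens = getphrase(tokens, "np")
        instance[case] = (buffered[0], semcond)  # semcond tested here
    return instance

tokens = [("the user", "np"), ("sends", "verb"), ("the message", "np")]
print(analyze(tokens))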
(R5) < NMP > : : = <PP> i <PResent-Participle-PHrase> / <PaSt-Participle-PH > / <ADJective-PH> / <INFinitive-PH > / <RELative-PH > / <CARDINAL> <UNIT> <ADJ> If an NMP is represented by any kind of VP or ADJ-PH, it is described in a case structure by using a case frame. That is, VTYPE-NMPs are parsed in the same way as CLAUSEs. However, a VTYPE-NMP has one (or more) structural missing element (a hole) compared with a CLAUSE. T h e r e f o r e , complementing them is needed by restoring a reduced form to the complete CLAUSE. Extending "HOLD'- manipulation in ATN makes it possible. This extension deals with not only relative clauses but also VTYPE-NMPs. That is, the phrases with a "whiz- deletion" in Transformational Grammar can be treated. ADJ-PHs can also be treated. For example, the following phrase is discussed. (s2) '~I know an actor suitable for the part." In the above case, the deletion of the words, "who is", results in the complete sentence being the above representation. The extending HOLD-mm~ipulation holds the antecedent of a CLAUSE with a VTYPE- NMP. Calling the case analysis recursively, the VTYPE-NMP is parsed by it. Each VTYPE-NMP has a specific type, PRP-PH, PSP-PH, INF-PH or ADJ- PH. Each of them looks for an antecedent, as the object or the subject: so that each is treated according to the procedure to decide the role of the antecedent and the omitting grammatical relation. Therefore, it is necessary to introduce one "context" representing VTYPE-NMP. The present extension demands the context with the antecedent and calls the case analysis. The following structured representation describes a NOUN, as stated above. (NOUN (*TYPE ($value (instance))) (*CATEGORY ($value Csemantic-category'))} (*SELF ($value ("entry-name'))) (*POS ($value (noun))) (*MEANING ($value ("each-meaning-frame-list"))) (*NUMBER ($value ("singular-or-plural"))) (*MODIFIERS ($value CNHD-or-NMP-instance-list"))) (*MODIFYING ($value Cmodificand"))) (*APPOSITION($value (" appositional-phrase-instance"))) (*PRE ($value Cprepositional-phrase-instance"))) (*COORD ($value ("coordinate-phrase")))) Each word with prefix "*" describes a slot name such as a case frame has. However many slots are prepared for holding pointers to represent a syntactic structure of an NP. The value for VTYPE-NMPs *MODIFIERS is a pair of VTYPE-NMPs and an individual verbal symbol, for example, "(PRP-PH verb*l)". 156 Complementing NP's structure, an appositional structure is introduced. It is described in *APPOSITION-slot and treated in the same way as NMPs. Those phrases are discriminated from another NMP by a pair of a delimiter ~," and a phrase terminal symbol, or, in particular, by proper nouns. A Coordinate conjunction is another important structure for an NP. There are three kinds of coordinates in the present NP rule. The first is between NPs, the second is NHDs, and the third is NMPs. The NP representation with that conjunction is described by an individual coordinate structure. That is, the conjunction looks like a predicate with any NPs as parameters, for example, (and NP1 NP2 ..... NPi). Therfore, the coordinate structure has "*COORDINATE-OBJECTS" and "*OBJ-CAT'" slot, each of which is filled with any instanciated NP/NHD/NMP symbol or any coordinate type, respectively. Some linguistic heuristics are needed to parse NPs, along with extracting as few inadequate NP structures as possible. Several heuristics are introduced into LUTE-EJ parser. They are shown as follows. 
(1) Heuristics for a compound NP "Getphrase" function value for an NP is the list of candidates for an adequate NP structure. The function first extracts the longest NP candidate from an input. In this analysis, its end word is separated from the remainder of the input by some heuristics, (a) The top word in the remainder is a personal pronoun. (b) Its end word has a plural form. (c) Its top is a determiner. These heuristics prevent the value from having abundant non-semantical structures. (2) I-Ieuristics by using contexts When NP analysis is called when filling a case slot, the case-marker's value for it is delivered to N'P analysis. This value is called "syntactic local context". It is useful in rejecting pronouns, which are ungrsmmatically inflected, by testing the agreement with the syntactic local context and the subject or the object. Another context usage is shown below. Assume that a phrase containing a coordinate conjunction '~and", for example, is in a context which is an object or a complement, and the word next to the conjunction is a pronoun. If the pronoun is a subjective case, the conjunction is determined to be one between CLAUSEs. To the contrary, the pronoun being a objective case determines the conjunction to connect an NP with it. (3) Apposition Many various kinds of appositions are used in texts. Most of them are shown by N. Sager [80]. The preceding appositional structures are used. 3. LUTE-EJ Parser Merits 3.1. A Merit of Using Case Analysis In two sentences, each having different syntactic structures, there is a problem involved in identifying each case by extracting semantic relations between a predicate and arguments (NPs, or NPs having prepositional marks). LUTE-EJ case analysis has solved this problem by introducing a new case slot with three components (Section 2.2.). For case frames in LUTE-EJ analysis containing the slots, an analysis result has two features at the same time. One is a surface syntactic structure and the other is a semantic structure in two slots. Therefore, many case frames are prepared according to predicate meanings and case frames are prepared according to predicate meanings and syntactic sentence patterns, depending on one predicate (verb). An analysis example is shown for the same semantic structure, according to which there are three different syntactic structures. These three sentences are as follow (from Marcus [80] ). (s3) "The judge presented the prize to the boy." (s4) ~The judge presented the boy with the prize." (s5) "The judge presented the boy the prize." Three individual structures are obtained for each sentence and their meaning equivalence for each slot is proved by matching the fillers of case-instances and by doing the same for case-names. Incidentally, a sentence containing another meaning of "present" is as follows. It means "to show or to offer to the sight", for example, in a sentence, (s6) ~l~ney presented the tickets at the gate." In this case, the "present" frame must prepare the obligatory "at" case slot. 3.2. An Effect of Combining Case Analysis with ATNG-based Analysis The next section shows one application of the LUTE-EJ parser, which is a machine translation system. So, taking the translated sample sentence in Section 4., effective points in parsing are shown in this section. The sample sentence is as follows. 
3. LUTE-EJ Parser Merits

3.1. A Merit of Using Case Analysis

Given two sentences with different syntactic structures, there is a problem in identifying each case by extracting semantic relations between a predicate and its arguments (NPs, or NPs with prepositional marks). LUTE-EJ case analysis has solved this problem by introducing a new case slot with three components (Section 2.2). For case frames containing such slots, an analysis result has two features at the same time: one is a surface syntactic structure and the other is a semantic structure, carried in the two marker slots. Therefore, many case frames are prepared according to predicate meanings and syntactic sentence patterns, depending on one predicate (verb).

An analysis example is shown for the same semantic structure, which corresponds to three different syntactic structures. These three sentences are as follows (from Marcus [80]):

(s3) "The judge presented the prize to the boy."
(s4) "The judge presented the boy with the prize."
(s5) "The judge presented the boy the prize."

Three individual structures are obtained for these sentences, and their meaning equivalence for each slot is proved by matching the fillers of case instances and by doing the same for case names. Incidentally, a sentence containing another meaning of "present" is as follows. It means "to show or to offer to the sight", for example, in the sentence:

(s6) "They presented the tickets at the gate."

In this case, the "present" frame must prepare the obligatory "at" case slot.

3.2. An Effect of Combining Case Analysis with ATNG-based Analysis

The next section shows one application of the LUTE-EJ parser, which is a machine translation system. So, taking the translated sample sentence in Section 4, effective points in parsing are shown in this section. The sample sentence is as follows:

(s7) "In the higher-level programming languages the instructions are complex statements, each equivalent to several machine-language instructions, and they refer to memory locations by names called variables."

One point is the NMP analysis method by recursive calling of case frame analysis. In the example, two NMP phrases are seen:
(a) the phrase which is an adjective phrase and modifies "each", appositive to the preceding "statements";
(b) the phrase which is a past participle phrase and modifies "names".
These phrases are analyzed by the same case frame analysis, except for the phrase deletion types (depending on the VTYPE-NMP) appearing in them. The deleted phrases are the subject part and the object part respectively. From the standpoint of the parsing mechanism, the extended HOLD manipulation transports the deleted phrases, "each" and "names", with their contexts to the case frame analysis. The other point is holding undecided case elements in STRCONB. The head PP and the subject in the sentence, for example, are buffered until the main verb is seen.

4. An Application to Machine Translation

One of the effective applications can be shown by considering the NMP analysis with embedded phrases. These NMPs are represented by instances of actions, i.e. individual case frames which may have an unfilled case slot. In applying the LUTE-EJ parser to an automatic machine translation system, there may be a small problem of lacking case slot information. The reason is that the lacking information can be thought of as indispensable for a semantic structure in one language, for example the target language Japanese, in spite of being absent in another language, for example the source language English. The problem is the difference in how a head noun is modified by an NMP or an embedded clause. In Japanese, a NOUN is often modified by an embedded clause in the following pattern:

"<predicate's arguments>* <predicate> NOUN" (* representing recursive applications)

Therefore, in Japanese, an NMP phrase represented by a case frame corresponds to an embedded clause and the verb of the frame corresponds to the predicate. A translation example is shown in Fig. 2.

Fig. 2 An Example of LUTE Translation Results on the Display (from English to Japanese); the screen shows the original English text, the generated internal representation, and a processes window.
References

Marcus, Mitchell P., "A Theory of Syntactic Recognition for Natural Language", MIT Press, 1980.
Sager, Naomi, "Natural Language Information Processing", Addison-Wesley, 1981.
Waltz, David L., "An English Language Question-Answering System for a Large Relational Data Base", CACM Vol. 21, 1978.
Winograd, Terry, "Language as a Cognitive Process", Vol. 1, Addison-Wesley, 1983.
A PROPER TREATMENT OF SYNTAX AND SEMANTICS IN MACHINE TRANSLATION

Yoshihiko Nitta, Atsushi Okajima, Hiroyuki Kaji, Youichi Hidano, Koichiro Ishihara
Systems Development Laboratory, Hitachi, Ltd.
1099 Ohzenji Asao-ku, Kawasaki-shi, 215 JAPAN

ABSTRACT

A proper treatment of syntax and semantics in machine translation is introduced and discussed from the empirical viewpoint. For English-Japanese machine translation, the syntax directed approach is effective, where the Heuristic Parsing Model (HPM) and the Syntactic Role System play important roles. For Japanese-English translation, the semantics directed approach is powerful, where the Conceptual Dependency Diagram (CDD) and the Augmented Case Marker System (which is a kind of Semantic Role System) play essential roles. Some examples of the differences between Japanese sentence structure and English sentence structure, which are vital to machine translation, are also discussed together with various interesting ambiguities.

I INTRODUCTION

We have been studying machine translation between Japanese and English for several years. Experience gained in systems development and in linguistic data investigation suggests that the essential point in constructing a practical machine translation system is the appropriate blending of syntax directed processing and semantics directed processing.

In order to clarify this suggestion, let us compare the characteristics of the syntax directed approach with those of the semantics directed approach. The advantages of the syntax directed approach are as follows:
(1) It is not so difficult to construct the necessary linguistic data for syntax directed processors, because the majority of these data can be reconstructed from already established and well-structured lexical items such as verb pattern codes and parts of speech codes, which are overflowingly abundant in popular lexicons.
(2) The total number of grammatical rules necessary for syntactic processing usually stays within a controllable range.
(3) The essential aspects of syntactic processing are already well-known, apart from efficiency problems.
The disadvantage of the syntax directed approach is its insufficient ability to resolve various ambiguities inherent in natural languages.

On the other hand, the advantages of the semantics directed approach are as follows:
(1) The meaning of sentences or texts can be grasped in a unified form without being affected by syntactic variety.
(2) Semantic representation can play a pivotal role in language transformation and can provide a basis for constructing a transparent machine translation system, because semantic representation is fairly independent of the differences between language classes.
(3) Consequently, semantics directed internal representation can produce accurate translations.

The disadvantages of the semantics directed approach are as follows:
(1) It is not easy to construct a semantic lexicon which covers real world phenomena of a reasonably wide range. The main reason for this difficulty is that a well-established and widely-accepted method of describing semantics does not exist. (For strongly restricted statements or topics, of course, there exist well-elaborated methods such as Montague grammar [2], Script and MOP (Memory Organization Packet) theory [13], Procedural Semantics [14], and Semantic Interlingual Representation [15].)
(2) The second but intractable problem is that, even if you could devise a fairly acceptable method to describe semantics, the total number of semantic rule descriptions may expand beyond all manageable limits.

Therefore, we think that it is necessary to seek proper combinations of syntactic processing and semantic processing so as to compensate for the disadvantages of each. The purpose of this paper is to propose a proper treatment of syntax and semantics in machine translation systems from a heuristic viewpoint, together with persuasive examples obtained through operating experience. A sub-language approach, which would put some moderate restrictions on the syntax and semantics of the source language, is also discussed.

II SYNTAX AND SEMANTICS

It is not entirely possible to distinguish a syntax directed approach from a semantics directed approach, because syntax and semantics always perform their linguistic functions reciprocally. As Wilks [16] points out, it is plausible but a great mistake to identify syntactic processing with superficial processing, or to identify semantic processing with deep processing. The terms "superficial" and "deep" only reflect the intuitive distance from the language representation in (superficial) character strings or from the language representation in our (deep) minds. Needless to say, machine translation inevitably has something to do with superficial processing.

In various aspects of natural language processing, it is quite common to segment a superficial sentence into a collection of phrases. A phrase itself is a collection of words. In order to restructure the collection of phrases, the processor must first of all attach some sorts of labels to the phrases. If these labels are something like subject, object, complement, etc., then we will call this processor a syntax directed processor, and if these labels are something like agent, object, instrument, etc., or animate, inanimate, concrete, abstract, human, etc., then we will call this processor a semantics directed processor. The above definition is oversimplified and of course incomplete, but it is still enough for the arguments in this paper.

III SYNTAX DIRECTED APPROACH: A PROTOTYPE ENGLISH-JAPANESE MACHINE TRANSLATION SYSTEM

So far we have developed two prototype machine translation systems; one is for English-Japanese translation [6] and the other is for Japanese-English translation. The prototype model system for English-Japanese translation (Figure 1) is constructed as a syntax directed processor using a phrase structure type internal representation called HPM (Heuristic Parsing Model), where semantics is utilized to disambiguate dependency relationships. The somewhat new name HPM reflects the parsing strategy by which the machine translation tries to simulate the heuristic way in which humans actually translate language. The essential features of heuristic translation are summarized in the following three steps:
(1) To segment an input sentence into phrasal elements (PE) and clausal elements (CE).
(2) To assign syntactic roles to PE's and CE's, and restructure the segmented elements into tree-forms by the governing relation, and into link-forms by the modifying relation.
(3) To permute the segmented elements, and to assign appropriate Japanese equivalents with necessary case suffixes and postpositions.

Noteworthy findings from operational experience and efforts to improve the prototype model are as follows:
Figure 1 Configuration of Machine Translation System: ATHENE [6] (Input English Sentence → Lexicon Retrieval → Morphological Analysis → Syntactic Analysis based on HPM → Tree/Link Transformation → Sentence Generation → Morphological Synthesis, with adjustment of tense and mode and assignment of postpositions → Post-editing Support, with solutions to manifold meanings → Output Japanese Sentence; the lexicons [7] hold entries for words, phrases, idioms, etc., described by attributes, Japanese equivalents, and controlling marks for analysis, transformation and generation.)

Figure 2 An Example of Phrase Structure Type Representation (the sample sentence, segmented into phrasal elements: TWith some helpTfrom overseasT,Tthe JapaneseTare beginningTa 10-year R&D effortTintendedTto yieldTa fifth generation systemT.; WE: Word Element, PE: Phrasal Element, CE: Clausal Element, SE: Sentence. This sample English sentence is taken from Datamation, Jan. 1982.)

(1) The essential structure of English sentences should be grasped by phrase structure type representations. An example of phrase structure type representation, which we call HPM (Heuristic Parsing Model), is illustrated in Figure 2. In Figure 2, a parsed tree is composed of two substructures. One is a "tree", representing a compulsory dependency relation, and the other is a "link", representing an optional dependency relation. Each node corresponds to a certain constituent of the sentence. The most important constituent is a "phrasal element (PE)", which is composed of one or more word element(s) and carries a part of the sentential meaning in the smallest possible form. PE's are mutually exclusive. In Figure 2, PE's are shown by using the segmenting marker (T), such as TWith some help (ADVL)T, Tfrom overseas (ADJV)T, Tthe Japanese (SUBJ)T and Tare beginning (GOV)T, where the terminologies in parentheses are the syntactic roles which will be discussed later.

A "clausal element (CE)" is composed of one or more PE(s) which carries a part of the sentential meaning in a nexus-like form. A CE roughly corresponds to a Japanese simple sentence such as: "%{wa/ga/wo/no/ni} ~{suru/dearu} [koto]". CE's allow mutual intersection. Typical examples are the underlined parts in the following: "It is important for you to do so." "... intended to yield a fifth generation system."

One interesting example in Figure 2 may be the part "With some help from overseas", which is treated as only two consecutive phrasal elements. This is the typical result of a syntax directed parser. In the case of a semantics directed parser, the above-mentioned part would be treated as a clausal element, because the meaning of this part is "(by) getting some help from overseas" or the like, which is clausal rather than phrasal.

(2) Syntax directed processors are effective and powerful for getting phrase structure type parsed trees. Our HPM parser operates both in a top-down way globally and in a bottom-up way locally. An example of top-down operation would be the segmentation of an input sentence (i.e. the sequence of word elements (WE's)) to get phrasal elements (PE), and an example of bottom-up operation would be the construction of tree-forms or link-forms to get clausal elements (CE) or a sentence (SE). These operations are supported by syntax directed grammatical data such as verb dependency type codes (cf. Table 1, which is a simplified version of Hornby's classification [5]), syntactic role codes (Table 2) and some production rule type grammars (Table 3 & Table 4). It may be permissible to say that all these syntactic data are fairly compact and the kernel parts are already well-elaborated (cp. [1], [8], [11], [12]).
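What role assignment over PE's amounts to can be shown with a toy fragment (Python; the role inventory follows Table 2 below, but the tags on the later PE's of the Figure 2 sentence are our own guesses, not the system's output):

# Toy sketch of role assignment: each phrasal element (PE) of the
# Figure 2 sentence is tagged with a syntactic role code from Table 2.

ROLES = {"SUBJ": "Subject", "GOV": "Governing Verb", "OBJ": "Object",
         "ADVL": "Adverbial", "ADJV": "Adjectival",
         "ENADJ": "Adjectival in Past Participle Form",
         "TOGOV": "Governing Verb in To-infinitive Form"}

PES = [("With some help", "ADVL"), ("from overseas", "ADJV"),
       ("the Japanese", "SUBJ"), ("are beginning", "GOV"),
       ("a 10-year R&D effort", "OBJ"), ("intended", "ENADJ"),
       ("to yield", "TOGOV"), ("a fifth generation system", "OBJ")]

for phrase, code in PES:
    print("%-28s %s (%s)" % (phrase, code, ROLES[code]))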
It may be permissible to say that all these syntactic data are fairly compact and that the kernel parts are already well elaborated (cf. [1], [8], [11], [12]).

Table 1: Dependency Pattern of Verb
Code | Verb Pattern | Examples
V1 | Be + ... | be
V2 | Vi (≠ Be) + Complement; It/There + Vi + ... | get, look
V3 | Vi [+ Adverbial Modifier] | rise, walk
V6 | Vt + To-infinitive | intend
V7 | Vt + Object | begin, yield
V8 | Vt + that + ... | agree, think
V14 | Vt + Object [+ not] + To-infinitive | know, bring

Table 2: Syntactic Roles
SUBJ Subject; OBJ Object; TOOBJ Object in To-infinitive Form; NAPP Noun in Apposition; GOV Governing Verb; TOGOV Governing Verb in To-infinitive Form; ENGOV Governing Verb in Past Participle Form; ADJV Adjectival; ENADJ Adjectival in Past Participle Form; ADVL Adverbial; SENT Sentence

(3) The weak point of syntax directed processors is their insufficient ability to disambiguate, i.e. the ability to identify the dependency types of verb phrases and the ability to determine the heads of prepositional phrase modifiers.

(4) In order to boost the aforementioned disambiguation power, it is useful to apply semantic filters that impose selectional restrictions on linking a verb with nominals and on linking a modifier with its head. A typical operation of the semantic filter is illustrated in Figure 3. The semantic filter may operate along with selectional restriction rules such as:

N22 (Animal) + with + N753 (Accessory) → Plausible ["N22 is equipped with N753"]
V21 (Watching-Action) + with + N541 (Watching Instrument) → OK ["V21 by using N541 as an instrument"]

The semantic filter is not complete, especially for metaphorical expressions. A bird could also use binoculars.

Table 3: Rules for Assigning Syntactic Roles to Phrasal Elements (patterns to be scanned and the new patterns to be generated, e.g. TOGOV + OBJ; notation: *: focus, --: not mentioned, φ: empty, [...]: optional).

Table 4: Rules for Constructing Clausal Elements (patterns to be scanned and the new elements to be generated, e.g. SENT).

Figure 3: A typical operation of the semantic filter on "He saw a bird with a ribbon." vs. "He saw a bird with binoculars." Of the candidate attachments, (a) and (d) are plausible (X ← Y means that X is modified by Y).

(5) The aforementioned semantic filters are compatible with syntax directed processors; i.e. there is no need to reconstruct the processors or to modify the internal representations. It is only necessary to add filtering programs to the syntax directed processor. One noteworthy point is that the thesaurus controlling the semantic fields or semantic features of words should be constructed in an appropriate form (such as a word hierarchy) so as to avoid the so-called combinatorial explosion of the number of selectional restriction rules.

(6) For the Japanese sentence generating process, it may be necessary to devise a very complicated semantic processor if a system producing natural, idiomatic Japanese sentences is desired. But the majority of Japanese users may tolerate an awkward word-by-word translation and still understand its meaning. Thus we have concluded that our research efforts should give priority to the syntax directed analysis of English sentences. The semantics directed generation of Japanese sentences might not be an urgent issue; rather, it should be treated as a kind of profound basic science to be studied without haste.
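Findings (4) and (5) suggest a direct realization: a filter procedure, added on top of the parser, that checks each candidate modifier-head link against selectional restriction rules stated over the classes of a word hierarchy. The sketch below is hypothetical; the semantic codes (N22, N541, N500, ...), the tiny thesaurus, and the rule set are invented stand-ins for the real thesaurus and rule base.

# Hypothetical semantic filter in the spirit of finding (4); all codes invented.
THESAURUS = {                 # word -> semantic class
    "bird": "N22_Animal",
    "ribbon": "N753_Accessory",
    "binoculars": "N541_WatchingInstrument",
    "saw": "V21_WatchingAction",
}
PARENT = {"N541_WatchingInstrument": "N500_Instrument"}  # class hierarchy

RULES = {
    # (head class, preposition, dependent class) -> plausibility verdict
    ("N22_Animal", "with", "N753_Accessory"): "plausible",
    ("V21_WatchingAction", "with", "N500_Instrument"): "ok",
}

def classes_of(word):
    """All semantic classes of a word, walking up the hierarchy."""
    c = THESAURUS.get(word)
    while c is not None:
        yield c
        c = PARENT.get(c)

def filter_attachment(head, prep, dependent):
    """Return a plausibility verdict for attaching 'prep + dependent' to 'head'."""
    for hc in classes_of(head):
        for dc in classes_of(dependent):
            verdict = RULES.get((hc, prep, dc))
            if verdict:
                return verdict
    return "implausible"

# "He saw a bird with a ribbon." vs. "He saw a bird with binoculars."
print(filter_attachment("bird", "with", "ribbon"))      # plausible (noun head)
print(filter_attachment("saw", "with", "binoculars"))   # ok (verb head)
print(filter_attachment("bird", "with", "binoculars"))  # implausible

Stating the restrictions over classes rather than individual words (here one rule at the N500 level covers every watching instrument) is exactly what keeps the rule set from exploding combinatorially, as finding (5) requires.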
(7) Even though the output Japanese translation may be an awkward word-by-word translation, it should be composed of pertinent function words and proper equivalents for content words. Otherwise it could not express the proper meaning of the input English sentences.

(8) In order to select proper equivalents, semantic filters can be applied fairly effectively to test the agreement among the semantic codes assigned to words (or phrases). Again, the semantic filter is not always complete. For example, in Figure 2, the verb "yield" has at least two different meanings (and consequently at least two different Japanese equivalents): "yield" → "produce" (= umidasu) or "concede" (= yuzuru). But it is neither easy nor certain how to devise a filter to distinguish these two meanings mechanically. Thus we need some human aids such as post-editing and inter-editing.

(9) As for the pertinent selection of function words such as postpositions, there are no formal computational rules to perform it. So we must find and store heuristic rules empirically and then make proper use of them. Some heuristic rules for selecting appropriate Japanese postpositions are shown in Table 5.

Table 5: Heuristic Rules for Selecting Postpositions for "in + N"
Semantic category of N | Japanese postpositions (ADVL / ADJV) | English examples
in + N1 (N1 = Place) | N1+de / N1+niokeru | in California
in + N3 (N3 = Time) | N3+ni / N3+no | in Spring
in + N3 & N4 (N4 = Quantity) | -- / N3&N4+go-ni | in two days
in + N6 (N6 = Abstract Concept) | N6+dewa / N6+no | in my opinion
in + N8 (N8 = Means) | N8+de / N8+niyoru | in Z-method
no rules | +de / +no | (speak) in English
a kind of idiom [7], retrieved directly from the lexicon | +wo-kite / +wo-kita; +wo-kakete / +wo-kaketa | in uniform; in spectacles

(10) To return to the previous findings (1) and (2), the heuristic approach was also found to be effective in segmenting the input English sentence into a sequence of phrasal elements, and in structuring them into a tree-like dependency diagram (cf. Figure 2).

(11) A practical machine translation should be considered from a kind of heuristic viewpoint rather than from a purely rigid analytical linguistic viewpoint. One persuasive reason for this is the fact that humans, even foreign language learners, can translate fairly difficult English sentences without going into the details of parsing problems.

IV SEMANTICS DIRECTED APPROACH: A PROTOTYPE JAPANESE-ENGLISH MACHINE TRANSLATION SYSTEM

The prototype model system for Japanese-English translation is constructed as a semantics directed processor using a conceptual dependency diagram as the internal representation. Noteworthy findings through operational experience and efforts to improve on the prototype model are as follows:

(1) Considering some of the characteristics of the Japanese language, such as flexible word ordering and ambiguous usage of function words, it is not advantageous to adopt a syntax directed representation for the internal base of language transformation. For example, the following five Japanese sentences have almost the same meaning except for word ordering and a subtle nuance (lowercase letters represent function words):

Boku (I) wa Fude (brush) de (with) Tegami (letter) wo Kaku (write).
Boku wa Tegami wo Fude de Kaku.
Fude de Boku wa Tegami wo Kaku.
Tegami wa Boku wa Fude de Kaku.
Boku wa Tegami wa Fude de Kaku.

(2) Therefore we have decided to adopt the conceptual dependency diagram (CDD) as a compact and powerful semantics directed internal representation.
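Since all five orderings share one predicate-argument structure, the CDD can be represented so that the order of a predicate's dependants simply does not matter, which is the convention adopted in finding (3) below. A minimal sketch, assuming case labels A (agent), O (object) and I (instrument) in the style of Table 6; the representation itself is illustrative, not the system's actual one.

# Sketch: an order-insensitive CDD node (illustrative representation).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class CDD:
    predicate: str                                       # concept code of the PPN
    deps: frozenset = field(default_factory=frozenset)   # {(case, NPN), ...}

def cdd(predicate, **deps):
    return CDD(predicate, frozenset(deps.items()))

# All five surface orderings of "Boku wa Fude de Tegami wo Kaku." map here:
s1 = cdd("Kaku", A="Boku", O="Tegami", I="Fude")
s2 = cdd("Kaku", I="Fude", O="Tegami", A="Boku")   # different argument order
print(s1 == s2)   # True: dependant order is neglected by construction

Any permutation of the dependants constructs the same value, so the five surface sentences map to a single CDD.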
Our idea of the CDD is similar to the well-known dependency grammar defined by Hays [4] and Robinson [9][10], except for the augmented case markers, which play essentially semantic roles.

(3) The conceptual dependency diagram for Japanese sentences is composed of predicate phrase nodes (PPNs for short) and nominal phrase nodes (NPNs for short). Each PPN governs a few NPNs as its dependants. Even among PPNs there exist some governor-dependant relationships. Examples of the formal CDD description are:

PPN (NPN1, NPN2, ..., NPNn),
Kaku (Boku, Tegami, Fude),
Write (I, Letter, Brush),

where an underlined word represents the concept code corresponding to the superficial word, and the augmented case markers are omitted. In this description, the order of the dependants NPN1, NPN2, ..., NPNn is to be neglected. For example, PPN (NPNn, ..., NPN2, NPN1) is identical to the first formula above. This convention may differ from the one defined by Hays [4]; it was introduced to cope with the above-mentioned flexible word ordering of Japanese sentences.

(4) The aforementioned dependency relationships can be represented as a linking topology, where each link has one governor node and one dependant node as its top and bottom terminal points (Figure 4).

(5) The links are labeled with case markers. Our case marker system is obtained by augmenting traditional case marker systems such as Fillmore's [3] from the standpoint of machine translation. For a PPN-NPN link, the label usually represents agent, object, goal, location, topic, etc. For a PPN-PPN link, the label usually represents causality, temporality, restrictiveness, etc. (cf. Figure 4).

Figure 4: Examples of a conceptual dependency diagram (CDD). The PPN Kaku/Write governs the NPNs Boku/I, Tegami/Letter and Fude/Brush through links labeled with case markers (C1, C2, ...); a higher PPN' may in turn govern the PPN through a labeled link.

(6) As for the total number of case markers, our current conclusion is that the number of compulsory case markers representing predicative dominance should be small, say around 20, and that the number of optional case markers representing adjectival or adverbial modification should be large, say from 50 to 70 (Table 6).

(7) The reason for the large number of optional case markers is that the detailed classification of optional cases is very useful for making an appropriate selection of prepositions and participles (Table 7).

(8) Each NPN is to be labeled with properly selected semantic features, which are under the control of a thesaurus type lexicon. Semantic features are effective for disambiguating predicative dependency so as to produce an appropriate English verb phrase.

(9) The essential difference between a Japanese sentence and the equivalent English sentence can be grasped as a difference in the mode of PPN selection, viewed in terms of the conceptual dependency diagram (Figure 5). Once an appropriate PPN selection is made, it is rather simple and mechanical to determine the rest of the dependency topology.

(10) Thus the essential task of Japanese-English translation can be reduced to the task of constructing rules for transforming the dependency topology by changing PPNs, while preserving the meaning of the original dependency topology (cf. Figure 5).

(11) All the aforementioned findings have something to do with the semantics directed approach. Once the English-oriented conceptual dependency diagram is obtained, the rest of the translation process is rather syntactic. That is, the phrase structure generation can easily be handled with somewhat traditional syntax directed processors.
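Findings (9) and (10) amount to a meaning-preserving rewrite of the diagram around a new predicate node. The fragment below sketches one such rule in the spirit of Figure 5 (a Naru (=Become)-type diagram becoming an enable-type one); the case assignments (I for the niyori-phrase, T for the theme) and the rule content are invented simplifications, not the system's actual transformation rules.

# Sketch of a PPN-changing transformation (finding (10)); rule content invented.
def transform_naru_to_enable(diagram):
    """Rewrite a Japanese Naru (=Become)-type CDD into an English
    enable-type CDD: the instrumental dependant is promoted to agent,
    and the theme ('... ga Kanou ni Naru') becomes the object."""
    assert diagram["ppn"] == "Naru"
    deps = diagram["deps"]
    return {
        "ppn": "enable",
        "deps": {
            "A": deps["I"],   # means/instrument -> agent of 'enable'
            "O": deps["T"],   # theme -> object of 'enable'
        },
    }

japanese = {
    "ppn": "Naru",
    "deps": {"I": "Kasoukioku-Akusesu-Hou",              # storage access method
             "T": "Kouritsu no Yoi Nyushutsuryoku"},     # efficient input-output
}
print(transform_naru_to_enable(japanese))
# {'ppn': 'enable', 'deps': {'A': ..., 'O': ...}}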
(12) As is well known, the Japanese language has a very high degree of complexity and ambiguity, mainly caused by frequent ellipsis and functional multiplicity, which creates serious obstacles to a totally automatic treatment of "raw" Japanese sentences.

(ex. 1) "Sakana wa Taberu." (fish) (eat) has at least two different interpretations:
"[Somebody] can eat a fish."
"The fish may eat [something]."

(ex. 2) "Kawaii Ningyou wo Motteiru Onnanoko." (lovely) (doll) (carry) (girl) also has two different interpretations:
"The lovely girl who carries a doll with her."
"The girl who carries a lovely doll with her."

Table 6: Case Markers for CDD (subset only)
Predicative dominance (compulsory): A Agent; O Object; C Complement; R Recipient; AC Agent in Causative; T Theme, Topic (Mental Subject); P Partner; Q Quote; RI Range of Interest; RQ Range of Qualification; RM Range of Mention; I Instrument; E Element.
Adverbial modification (optional): CT Goal in Abstract Collection; CF Source in Abstract Collection; TP Point in Time; ...
Adjectival modification (optional): ET Embedding Sentence Type Modifier whose gapping is Theme; EA whose gapping is Agent; EO whose gapping is Object.
Conjunction (optional): linking through "AND"; BT Conjunction through "BUT"; ...

(13) Thus we have judged that some sub-Japanese language should be constructed so as to restrict the input Japanese sentences to a range of clear, tractable structures. The essential restrictions imposed by the sub-language concern the usage of function words and sentential embeddings.

Table 7: Detailed Classification of Optional Case Markers for Modification (subset only)
Phase code and most likely prepositions or participles: F from; T to, till; D during; P at; I in, inside; O out, outside; V over, above; U under, below; S beside; B before, in front of; A after, behind; AL along; H through; AB over, superior to; SE apart from; WI within.
A case marker is composed as Body Code + Phase Code, where the Body Code is T (= Time), S (= Space) or C (= Collection).

Figure 5: Difference between Japanese and English grasped through CDD. The Japanese sentence "Kasoukioku-Akusesu-Hou niyori, Daiyouryou-Deitasetto eno Kouritsu no Yoi Nyushutsuryoku ga Kanou ni Naru." (a Naru (=Become)-type CDD) is analysed, transformed and generated as "The virtual storage access method enables the efficient input-output processing to a large capacity data set." (a Suru (=Make)-type CDD).

(14) A sub-language approach will not fetter the users if a Japanese-English translation system is used as an English sentence composing aid for Japanese people.

V CONCLUSION

We have found that there are some proper approaches to the treatment of syntax and semantics from the viewpoint of machine translation. Our conclusions are as follows:

(1) In order to construct a practical English-Japanese machine translation system, it is advantageous to take the syntax directed approach, in which a syntactic role system plays a central role, together with a phrase structure type internal representation (which we call HPM).

(2) In English-Japanese machine translation, syntax should be treated in a heuristic manner based on actual human translation methods. Semantics plays an assistant role in disambiguating the dependency among phrases.
(3) In English-Japanese machine translation, an output Japanese sentence can be obtained directly from the internal phrase structure representation (HPM), which is essentially a structured set of syntactic roles. Output sentences obtained in this way are, of course, a kind of literal translation in a stilted style, but no doubt they are understandable enough for practical use.

(4) In order to construct a practical Japanese-English machine translation system, it is advantageous to take the approach in which semantics plays a central role, together with a conceptual dependency type internal representation (which we call CDD).

(5) In Japanese-English machine translation, augmented case markers play a powerful semantic role.

(6) In Japanese-English machine translation, the essential part of the language transformation between Japanese and English can be performed in terms of changing dependency diagrams (CDD), which involves predicate replacements.

One further problem concerns establishing a practical method of compensating a machine translation system for the mistakes or limitations caused by the intractable complexities inherent in natural languages. This problem may be solved through the concepts of sublanguage, pre-editing and post-editing to modify the source/target languages. The sub-Japanese language approach in particular seems to be effective for Japanese-English machine translation. One of our current interests is a proper treatment of syntax and semantics in the sublanguage approach.

ACKNOWLEDGEMENTS

We would like to thank Prof. M. Nagao of Kyoto University and Prof. H. Tanaka of Tokyo Institute of Technology for their kind and stimulating discussions on various aspects of machine translation. Thanks are also due to Dr. J. Kawasaki, Dr. T. Mitsumaki and Dr. S. Mitsumori of SDL, Hitachi Ltd. for their constant encouragement of this work, and to Mr. F. Yamano and Mr. A. Hirai for their enthusiastic assistance in programming.

REFERENCES

[1] Chomsky, N., Aspects of the Theory of Syntax (MIT Press, Cambridge, MA, 1965).
[2] Dowty, D.R. et al., Introduction to Montague Semantics (D. Reidel, Dordrecht/Boston/London, 1981).
[3] Fillmore, C.J., The Case for Case, in: Bach and Harms (eds.), Universals in Linguistic Theory (Holt, Rinehart and Winston, 1968) 1-90.
[4] Hays, D.G., Dependency Theory: A Formalism and Some Observations, Language, vol. 40, no. 4 (1964) 511-525.
[5] Hornby, A.S., Guide to Patterns and Usage in English, second edition (Oxford University Press, London, 1975).
[6] Nitta, Y., Okajima, A. et al., A Heuristic Approach to English-into-Japanese Machine Translation, COLING-82, Prague (1982) 283-288.
[7] Okajima, A., Nitta, Y. et al., Lexicon Structure for Machine Translation, ICTP-83, Tokyo (1983) 252-255.
[8] Quirk, R. et al., A Grammar of Contemporary English (Longman, London; Seminar Press, New York, 1972).
[9] Robinson, J.J., Case, Category and Configuration, Journal of Linguistics, vol. 6, no. 1 (1970) 57-80.
[10] Robinson, J.J., Dependency Structures and Transformational Rules, Language, vol. 46, no. 2 (1970) 259-285.
[11] Robinson, J.J., DIAGRAM: A Grammar for Dialogues, Comm. ACM, vol. 25, no. 1 (1982) 27-47.
[12] Sager, N., Natural Language Information Processing (Addison-Wesley, Reading, MA, 1981).
[13] Schank, R.C., Reminding and Memory Organization: An Introduction to MOPs, in: Lehnert, W.C. and Ringle, M.H. (eds.), Strategies for Natural Language Processing (Lawrence Erlbaum Associates, Hillsdale, NJ, 1982) 455-493.
[14] Wilks, Y., Some Thoughts on Procedural Semantics, in: ibid., 495-521.
[15] Wilks, Y., An Artificial Intelligence Approach to Machine Translation, in: Schank, R.C. and Colby, K.M. (eds.), Computer Models of Thought and Language (W.H. Freeman, San Francisco, 1973) 114-151.
[16] Wilks, Y., Deep and Superficial Parsing, in: King, M. (ed.), Parsing Natural Language (Academic Press, London, 1983) 219-246.
A CONSIDERATION ON THE CONCEPTS STRUCTURE AND LANGUAGE IN RELATION TO SELECTIONS OF TRANSLATION EQUIVALENTS OF VERBS IN MACHINE TRANSLATION SYSTEMS

Sho Yoshida
Department of Electronics, Kyushu University 36, Fukuoka 812, Japan

ABSTRACT

To give appropriate translation equivalents for target words is one of the most fundamental problems in machine translation systems, especially when the MT systems handle languages with completely different structures, like Japanese and the European languages, as source and target languages. In this report, we discuss a data structure that enables appropriate selections of translation equivalents for verbs in the target language. This structure is based on a concepts structure with associated information relating source and target languages. The discussion is conducted from the standpoint of the realizability of the structure (e.g. ease of data collection and arrangement, ease of realization, and compactness of the required storage space).

1. Selection of Translation Equivalents

Selection of a translation equivalent of a verb becomes necessary when (1) the verb has multiple meanings, or (2) the meaning of the verb is modified under different contexts (though this cannot be regarded as multiple meanings). For example, words such as suru ('do'), hiku ('play an instrument'), and several others are selectively used as translation equivalents of the English verb 'play' according to its context:

1. play tennis : tenisu wo suru
2. play in the ground
3. The children were playing ball (with each other)
4. play piano : piano wo hiku
5. Lightning played across the sky as the storm began

In the above examples, cases 1 to 3 are not essentially due to multiple meanings of 'play', but still require different translation equivalents according to differences of context; cases 4 and 5 are due to multiple meanings.

A typical idea for selecting translation equivalents so far has been the following. Let us take the verb 'play'. If the object words of the verb belong to a category C_obj(play:suru), we give the verb suru ('do') as its appropriate translation equivalent. If the object words belong to a category C_obj(play:hiku), we give hiku as the appropriate translation equivalent of 'play'. Thus we categorize words (in the target language) that are the agent, object, ... of a given verb (in the source language) according to differences among its appropriate translation equivalents. In other words, these words are categorized according to whether "such an expression as the verb with its case filled by these words is acceptable in the target language or not", and are by no means categorized by their concepts (meaning) alone. For example, for tennis, baseball, ... ∈ C_obj(play:suru) = {tennis, baseball, card, ...}, translations of 'play' are given as follows:

play tennis : tenisu wo suru
play baseball : yakyuu wo suru
play card : kaado wo suru

To the words belonging to C_obj(play:hiku) = {piano, violin, harp, ...}, the translation equivalent of 'play' is given as follows:

play piano : piano wo hiku
play violin : baiorin wo hiku
play harp : haapu wo hiku

Categories given in this way have the problem that not a few of them fail to coincide with natural categories of concepts. For example, the members tenisu (tennis) and yakyuu (baseball) of such a category belong to the natural category of concepts kyuugi (ball games), but kaado (card) does not; instead it belongs to the conceptual category yuugi (games in general), of which kyuugi is considered a sub-category.
Therefore, if we regard C_obj(play:suru) as yuugi (games in general), then tenisu (tennis), kaado (card), futtobooru (football), gorufu (golf), ... can be members of it; but go and shogi, which also belong to the conceptual category yuugi, are not appropriate as members of C_obj(play:suru): 'play go : go wo suru' and 'play shogi : shogi wo suru' are not appropriate; instead we say 'play go : go wo utsu' and 'play shogi : shogi wo sasu'. Therefore, C_obj(play:suru) would have to be divided into two categories.

The problem here is that such divisions of categories do not necessarily coincide with natural divisions of conceptual categories. For example, the translation equivalent suru cannot be assigned to the verb 'play' when its object word is chesu (chess), a game similar to go or shogi. Moreover, if the verb differs from 'play', then the corresponding structure of noun categories also differs from that for 'play'. Thus we would have to prepare a different structure of categories for each verb. This is by no means preferable, from considerations both of space size and of realizability on actual data, because we would have to check all the combinations of several tens of thousands of nouns with each verb.

2. Concepts Structure with Associated Information

So we change our standpoint: we take the natural categories of nouns (concepts) as a base, and associate with it, through case relations, pairs of a verb and its translation equivalent. Let a structure of natural categories of nouns be given (independently of verbs). A part of this categories (concepts) structure and its associated information (such as verb/translation-equivalent pairs attached through case relations) is given in Fig. 1. In Fig. 1, the associated verbs are limited to a few, such as Do (obj = musical instrument) and Play (obj = musical instrument), because from the definition of a musical instrument, "an object which is played to give musical sound (such as a piano, a horn, etc.)", we can easily recall the verb 'play' as the most closely related verb in this case.

It can generally be said that the closer a noun's relation to humans, and the lower the noun's level of abstraction, the larger the number of verbs that are closely related to it (and therefore have to be associated with it) becomes, and the larger the number of associated idioms or idiom-like phrases becomes. Therefore, the division of categories must be carried further.

The process of constructing this data structure is as follows:

(1) Find a pair of a verb and an associated translation equivalent (such as a common equivalent for Do and Play) that can be associated in common with a part of the structure of the categories, as in Fig. 1, and then find appropriate translation equivalents in detail at the lower level categories.
(2) For each verb found in the process of the association, consult an ordinary dictionary of translation equivalents and of verb usage, and obtain the set of all the translation equivalents for the verb.
(3) Then find the nouns (categories) related through case relations to each translation equivalent verb thus obtained, by consulting the word usage dictionary. Then check all the nouns belonging to nearby categories in the given concepts structure and find the group of nouns with which the translation equivalent should be associated.

In this manner, we can find pairs of a verb and its translation equivalent for any noun belonging to a given category. To summarize the advantages of the latter method, (1) to (4) follow.

(1) Only one natural conceptual categories structure needs to be given as the basis of this data structure.
This categories structure is stable: it will not change in essentials, and it is constructed independently of verbs, in other words independently of target language expressions.

(2) For each noun in a given conceptual category, the number of associated pairs of a verb and its translation equivalent is generally small, and the pairs can easily be found.

(3) The association of a verb/translation-equivalent pair through a case relation should be attached to the one category for which the association holds in common for every member. In Fig. 1, the conceptual category C_obj(play:hiku) is created from the two categories 'keyboard instrument' and 'string instrument' for this purpose. Specific pairs of a verb and its translation equivalent are then associated, through case relations, with the exceptional nouns in the category.

(4) From (1) to (3) it follows that this data structure needs considerably less space, and is more practical to construct, than the former method (Chapter 1).

3. Concluding Remarks

We have proposed a data structure, based on a concepts structure with pairs of a verb and its translation equivalent associated through case relations, to enable appropriate selections of translation equivalents of verbs in MT systems.

Additional information that should be associated with this data structure for the selection of translation equivalents is idioms or idiom-like phrases. Their association process is similar to the association process of Chapter 2.

Only the selection of translation equivalents for English-into-Japanese MT has been discussed, on the assumption that the translation equivalents for nouns were given. Though the selection of translation equivalents for nouns is also important, the effect of application domain dependence is so great that we rely strongly on that property under the present circumstances. There are also cases where the translation equivalents of verbs and nouns determine each other, so the problem of translation equivalent selection needs to be studied from this point of view as well.

Fig. 1: A part of the concepts structure with associated information. Under the concept 'things', the concept 'musical instrument' branches into keyboard instruments (piano, organ, ...), string instruments (violin, ...), wind instruments (flute, oboe, ...) and percussion instruments (drum, ...); associated verbs (Do, Play in English) with their appropriate Japanese translation equivalents (e.g. hiku for the category covering keyboard and string instruments) are attached to the appropriate categories through the obj case relation.
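To make the proposal concrete, here is a small sketch of the lookup that the concepts structure supports, encoding the fragment of Fig. 1. The hierarchy, the category-level association of (verb → equivalent) pairs, and the generic fallback are all illustrative stand-ins for the real data.

# Illustrative encoding of the Fig. 1 fragment (invented details).
PARENT = {
    "piano": "keyboard instrument", "organ": "keyboard instrument",
    "violin": "string instrument",  "harp": "string instrument",
    "flute": "wind instrument",     "drum": "percussion instrument",
    "keyboard instrument": "musical instrument",
    "string instrument": "musical instrument",
    "wind instrument": "musical instrument",
    "percussion instrument": "musical instrument",
    "musical instrument": "things",
}

# (category, source verb, case) -> translation equivalent; each pair is
# stored at the highest category for which it holds in common.
ASSOC = {
    ("keyboard instrument", "play", "obj"): "hiku",
    ("string instrument", "play", "obj"): "hiku",
}

def equivalent(verb, case, noun, default="suru"):
    """Walk up from the noun's category until an associated pair is found."""
    node = noun
    while node is not None:
        pair = ASSOC.get((node, verb, case))
        if pair:
            return pair
        node = PARENT.get(node)
    return default   # purely illustrative generic fallback

print(equivalent("play", "obj", "piano"))   # hiku
print(equivalent("play", "obj", "violin"))  # hiku
print(equivalent("play", "obj", "drum"))    # suru (toy fallback)

Because a pair is stored once, at the highest category for which it holds in common (advantage (3)), exceptions such as go wo utsu can simply be entered at a lower node and will be found first by the upward walk.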
DETECTING PATTERNS IN A LEXICAL DATA BASE

Nicoletta Calzolari
Dipartimento di Linguistica - Università di Pisa
Istituto di Linguistica Computazionale del CNR
Via della Faggiola 32, 56100 Pisa - Italy

ABSTRACT

In a well-structured Lexical Data Base, a number of relations among lexical entries can be interactively evidenced. The present article examines hyponymy, as an example of a paradigmatic relation, and the "restriction" relation, as a syntagmatic relation. The theoretical results of their implementation are illustrated.

I INTRODUCTION

In previous papers it has been pointed out that in a well-structured Lexical Data Base it becomes possible to detect automatically, and to evidence through interactive queries, a number of morphological, syntactic, or semantic relationships between lexical entries, such as synonymy, hyponymy, hyperonymy, derivation, case-argument, lexical field, etc.

The present article examines hyponymy, as an example of a paradigmatic relation, and what can be called the "restriction or modification" relation, as a syntagmatic relation. By restriction or modification relation, I mean that part of a so-called "aristotelian" definition which has the function of linking the "genus" and the "differentia specifica".

When evidenced in a lexicon, the hyponymy relation produces hierarchical trees partitioning the lexicon into many semantically coherent subsets. These trees are not created once and for all; it is important that they are procedurally activated at the moment of the query.

While evidencing the second relation considered, one can investigate whether it is possible to discover any correlation between lexical or grammatical features in definitions and particular kinds of "definienda", and thus try to answer questions such as the following: "Are there any connections between these restriction relations and the fundamental ways of definition, i.e. the criterial parameters by which people define things?"

For both relations, the paper presents the procedures by which they are automatically recognized and extracted from the natural language definitions, the degree of reliability of their automatic labeling, the use of these labels in interactive queries on the lexical data base, and finally the theoretical results of their implementation in a Machine-Dictionary.

II THE LANGUAGE OF DEFINITIONS AS A SUBLANGUAGE

I am trying to develop and exploit the idea of considering the language of dictionary definitions as a particular sublanguage within natural language. This perspective cannot obviously be adopted with respect to subject matter restrictions in definitions, but only with respect to the purpose of the text, i.e. its specific communicative goal. From this restriction on the purpose of the text, certain lexico-grammatical restrictions result, which prove to be very useful.

The restrictions on the lexical richness of definitions are not due to their relating to a specific domain of discourse, but only to the property of closure (although not satisfied at 100%) that the defining vocabulary should in principle be simpler and more restricted than the defined set of lemmas, i.e. the former should be a proper subset of the latter. This kind of quantitative restriction on the vocabulary of definitions would not be of any interest in itself, if it were not accompanied by other kinds of constraints on both a) the lexical and b) the grammatical side.

a) From the frequency list of the words used in definitions (about 800,000 word-occurrences and 75,000 word-types), it appears in fact that some words have a much greater importance than in normal language, as evidenced by a comparison with the data of the Lessico di Frequenza della Lingua Italiana Contemporanea (Bortolini et al., 1971). These are the defining generic terms traditionally used by lexicographers, such as ACT, EFFECT, PERSON, OBJECT, WHO, PROCESS, CAUSE, etc. It is not by chance that these same concepts are of relevance in many Artificial Intelligence systems.

b) Not only single words, or classes of words, are particularly relevant in the defining sublanguage. There are also lexical patterns and syntactic patterns which occur with great frequency, and which play a very special role in defining sentences.

The combination of these constraints can be, and actually is, very useful when trying to exploit the information contained in definitions, and when transforming an archive of natural language definitions into a knowledge base structured as a network. Some important parts of this knowledge are in fact already retrievable in interactive mode from the Italian Lexical Data Base, which has recently been restructured. Analyses of large corpora of definitions, carried out on many dictionaries (Amsler, 1980; Calzolari, 1983a, 1983b; Michiels, Noël, 1982), have in fact shown that the definitions sublanguage displays several regularities of lexical and syntactic occurrences and patterns. These general lexical classes and the classes of recurrent patterns can be more or less easily captured, for instance by pattern-matching rules, and if possible characterized with formal rules.

III HYPONYMY RELATION

Hyponymy is the most important relation to be evidenced in a lexicon. Owing to its taxonomic nature, it gives the lexicon, when implemented, a particular hierarchical structure: its result is obviously not a single tree, but many tangled hierarchies (Amsler, 1980). Instead of evidencing and labelling this relation by hand, I have tried to characterize it procedurally. The procedure which automatically coded true superordinates in all the definitions (approx. 185,000 definitions for 103,000 lemmas, with a precision of more than 90% calculated on a random sample of 2,000 definitions) was based almost exclusively on the position of the "genus" term at the beginning of the definitional phrases, giving Nouns, Verbs, and Adjectives as superordinates of defined entries of the same lexical category. Ad hoc subroutines solved exceptional cases where a) quantifiers or other modifiers preceded the genus term (e.g. aletta ---> piccolo gruppo di penne dietro l'angolo dell'ala), or b) more than one genus was present in the definition (e.g. assordare ---> attutire, smorzarsi detto di suono), or c) a prepositional phrase, usually of locative type, stood at the beginning of the phrase (e.g. piazzato ---> nel rugby, calcio al pallone collocato sul terreno).

Even though the first immediate purpose of this procedure is classificational in nature, the ultimate goal is the extraction and formalization of the most relevant relationship between lexical items that is implicitly stored in any standard printed dictionary. It is in fact now possible to retrieve in the lexical data base not only all the definitions in which any possible word-form appears, together with the defined lemmas (e.g. SUONO appears in 328 definitions), but also to retrieve on-line, if desired, only the definitions in which the given word-form is used as a superordinate, and thereby the list of its hyponyms (e.g. the same word SUONO is used as a superordinate of only 65 words, i.e. of a subset of the preceding set, containing MUSICA, RUMORE, SQUILLO, SUSSURRO, etc.).

The query language so far implemented for the lexical data base therefore permits the retrieval of information on this hierarchical relation, identifying on-line the allowable interconnections within the entire lexicon. The links produced can be analyzed, evaluated, and, if necessary, interactively corrected.

From explorations of the trees thus obtained, we can also try to set up classes and subclasses of superordinates, on the basis of the upper nodes to which many other nodes are connected as descendants. As an example, the identification criterion for the noun class "SET-OF", containing INSIEME, GRUPPO, COLLEZIONE, COMPLESSO, AGGREGATO, etc., among the set of noun superordinates, is the fact that they are linked one to the other in the tree which results from querying the data base. Their hyponyms will obviously be for the most part collective nouns. The identification of word classes like this one leads to the next step in the formalization of the hyponymy relation, which will consist in attaching a label indicating a semantic class to these sets of superordinates. It will thus be possible to retrieve, for example, all the nouns generically definable as "SET-OF", independently of the particular word denoting a set used in the definitions.

Since it is already possible to trace these chains of hyponyms going upwards or downwards for more than one level, one can immediately ask whether, for example, MASSERIA belongs to the set of collectives even if it is defined as MANDRIA, because MANDRIA is defined as BRANCO, which is in turn defined as INSIEME, which finally is one of the nouns belonging to the class "SET-OF".
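The two facilities just described, positional extraction of the genus term and the upward tracing of hyponym chains, can be sketched together as follows. This is only an illustration of the idea, with a toy modifier list and a four-entry dictionary; it is not the procedure actually run on the 185,000 definitions.

# Toy sketch of the superordinate-coding procedure and chain queries.
# Word lists and the mini-dictionary are invented for illustration.
MODIFIERS = {"piccolo", "piccola", "grande"}      # quantifiers to skip (case a)

def genus(definition):
    """First content word of the definition, with two of the ad hoc fixes:
    skip a leading locative PP up to the comma (case c) and skip leading
    quantifiers/modifiers (case a)."""
    text = definition.lower()
    if text.startswith("nel ") or text.startswith("nella "):
        text = text.split(",", 1)[1].strip()      # drop the locative PP
    words = text.split()
    while words and words[0] in MODIFIERS:
        words.pop(0)
    return words[0] if words else None

DEFINITIONS = {   # lemma -> definition (toy data)
    "masseria": "mandria di animali",
    "mandria": "branco di animali grossi",
    "branco": "insieme di animali",
    "aletta": "piccolo gruppo di penne dietro l'angolo dell'ala",
}
SUPER = {lemma: genus(d) for lemma, d in DEFINITIONS.items()}

def chain(lemma):
    """Trace the superordinate chain upwards (MASSERIA -> ... -> INSIEME)."""
    seen = []
    while lemma in SUPER and lemma not in seen:
        seen.append(lemma)
        lemma = SUPER[lemma]
    return seen + [lemma]

SET_OF = {"insieme", "gruppo", "collezione", "complesso", "aggregato"}
print(SUPER["aletta"])                           # gruppo (quantifier skipped)
print(chain("masseria"))                         # masseria, mandria, branco, insieme
print(bool(SET_OF & set(chain("masseria"))))     # True: a collective noun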
in all Information Retrieval system on definitions, like, the one actually implemented, on th,: entir., corpus, for the taxonomic part of the |exical structure. But these formatted re I ationa ] structures can also be used as starting points for a computationally exploitable reorgnnizat~on of the definitional content. (me, of the characteristics of the definitional sublanguage, i.e. the presence of recurrent patterns ( ,%uch as proprio di, relotivo o, prodotro do, originorio di, etc.), enables, at least in certain cases, to produce a constant mapplng from certain variable types of more frequently detected definitional phrases no constant underlying relationa! structures. Using rather simple pattern-matching procedures some classes and subclasse~ of definitions can be separated, and a small number of simpler types of definitions have already been converted into a formalized coded format also with regard to this restriction relation. A new virtual Relation is thus added to the original data base. The distinguished elements of a number of simple natural language patterns are mapped into some general structured information formats. Up to now, some of the definitions displaying the following restriction relations have been treated: REL.FORM (e.g. o formo di) REL.PROV (e.g. provvisto di) REL.APT (e.g. otto o) and the corresponding relational links generated. Among the lexical variants of REL.PROV there are fornito di, dototo di, munito di, pieno di, rlcco di, etc.; while REL.FORM groups the following variants of a different type: in [ormo di, che ha (la) forma (di), di formo, di formo simile a (quella di), $otto forma dl, avente formo di, etc, It is thus possible, for example, to retrieve, among the 1271 definitions in which the word FORHA appears, only those defining something as "having the shape of something else". The implementation of these links allows to produce another kind of partitioning within the lexical system, and permits to better investigate the internal structure of words. A procedure of the kind exemplified above, based on pattern-matching, is possible for a good number of definition types; for example, with a different formaL, for many adjectives: def , NP = Adj .... >> REL.X : VP : where several groups of definitions are found to share a common underlying structure in terms of the restriction relation involved, in spite of other lexical and syntactic differences. V FUTURE PERSPECTIVES A comparison with the definitional corpora of other dictionaries, also of other languages, will certainly prove to be useful in establishing the set of the most general or primitive Relations, used for definition in lexicographieal practice, often overlapping with the primitive Relations stated in many AI systems. These relations, mapped into a formal link in the data base, can then be paraphrased in each language, in the standard language. The data base structure envisaged does permit both to maintain at a lower level (the starting level), and to eliminate at an upper level, many peculiarities and variations in the linguistic 172 expression of the same or of similar concepts or relations; their effect is to facilitate the comprehension by the users of the printed dictionary, inhibiting however immediate comprehension by procedural routines in the mechanical processing of dictionary data. 
By applying similar methods of automatic conversion and mapping into suitable formats, as extensively as possible throughout the lexicon, many definitional expressions can be submitted to an attempt of standardization, thus achieving major precision, which gives a considerable improvement when performing, for example, information retrieval operations on the content of a dictionary. This more structured, but, in another sense. simplified version of definitions, which also accounts for their relational nature, provides an excellent basis for testing and studying the "knowledge of the world" which underlies the structure of a dictionary. Vl REFERENCES Alinei, M., La Struttura del l,essico, Bologna: Ii Hulino, 1974. Amsler, R.A., The Structure of the Herriam-Webster Pocket Dictionary, Ph.D, Thesis, Department of Computer Science~. University of Texas, Austin, Texas, 1')80. Bortolini, U., Tag]iavini, C., Zampolli, A.. Lessico di Frequenza de] la Lingua I ta] ian,J Contemporanea, Hilano: Garzanti. 1972. Calzolari, N. , "Towards the organization of lexical definitions or. a data bus,' structure , COLING82 Abstracts, ed. by" E. Haji~ov~, Prague: Charles University, 1982, 61-64. Calzolari, N., "Lexiual definitions in a computerized dictionary'", Computers and Artificial Intelligence, II(1983a~3, 225-233. Calzolari, N. , "Semantic links and the dictionary", in Proceedings of the ~tl ! International Conference on Computers and the Humanities, ed. by S.K.Burton, D.D.ShorL, Rockville (Haryland): Computer Science Press, 1983b, 47-50. Calzolari, N., Ceccotti, H.L., "Organizing a large scale lexica] database dictionary", Acres du Con~r~s Informatique et Sciences Humaines, Li&ge: L.A.S.L.A., 1981, 155-163. Clark, E.V., Clark, H.H., "When nouns surface as verbs", Language, 55(1979)4, 767-811. Evens, M.W., Litowitz, B.E., Harkowitz, J.A., Smith, R.N., Werner, O., Lexical-Semantic Relations: a Comparative Survey, Edmonton, Alberta: Linguistic Research Inc., 1980. Findler, N.V. (ed.), Associative Networks, New York: Academic Press, 1979. Hendrix, G.G., "Natural-language interface", Proceedings of the Workshop 'Applied Computational Linguistics in Perspective', American Journal of Computational Linguistics, 8(198-)-, 56-61. Michiels, A., M~llenders, J., No~l, J., "Exploiting a large data base by Longman", COLING80: Proceedings of the 8th International Conference on Computational Linguistics, Tokyo, 1980, 374-382. Hichiels, A., Noel, J., "Approaches to thesaurus production", COLING82: Proceedings of the Ninth International Conference on Computational Linguistics. ed. by J.]lorecky', Amsterdam: North-}lo]land, 1982, 227-232. Nagao, M., Tsujii, J., t;eda, Y., Takiyama, M., "An attempt to computerize dictionary dale bases", COLING80: Proceedings of tht: ~th International Confermme on Computational Linguistics, Tokyo, ]qSO, 534-542. Quillian, H.R. , "Semantic memory'", in Semantic Information Processing, ed. by .~I..~li:*s ky, Cambridge (.~lass.): }liT Press. 1!)68, -,,°°'--;0."" Smith, R.N., "On defining adjectives: part II]" Dictionaries, the Journal of the Dictionary Society of North America, Winter, {lq~l)5. 28-38. Smith, R.N., ,Haxwell, E., "An English diction-ry for computerized syntactic and semantic process lug", in Comput at i one ] ar, d Hathematica] Linguistics, ed. by A.Zampo]li, N.Calzolari, Firenze: Olschki, 1977, 303-322. Walker, D.E., Amsler, R.A., Proposal to the National Science Foundation on alJ Invitational Workshop on Machine-Readahl~ Dictionaries, SRI, 1982 (mimeo). 
Zingarelli, N., Vocabolario della ital~99a, Bologna: Zanichelli, 1971. lingua 173
LINGUISTIC PROBLEMS IN MULTILINGUAL MORPHOLOGICAL DECOMPOSITION

G. Thurmair
Siemens AG ZT ZTI
Otto-Hahn-Ring 6, Munich 83, West Germany

ABSTRACT

An algorithm for the morphological decomposition of words into morphemes is presented. The application area is information retrieval, and the purpose is to find morphologically related terms for a given search term. First, the parsing framework is presented; then several linguistic decisions are discussed: morpheme selection and segmentation, morpheme classes, the morpheme grammar, allomorph handling, etc. Since the system works in several languages, language-specific phenomena are mentioned.

I BACKGROUND

1. Application domain

In Information Retrieval (document retrieval), the usual way of searching documents is to use key words (descriptors). In most of the existing systems, these descriptors are extracted from the texts automatically and are by no means standardised; this means that the searcher must know the exact shape of the descriptor (plural form, compound word, etc.), which he doesn't; therefore the search results are often poor and meager. To improve them, we have developed several analysis systems based on linguistic techniques. One of them is the Morphological Analysis for Retrieval Support (MARS). It expands the terms of search questions morphologically and returns an expanded set of tokens containing the same root as the search term. This is done through analysis of a set of documents: each word in a document is decomposed into its morphemes, the roots are extracted, allomorphs are brought to the same morpheme representation, and the morphemes are inverted in such a way that they are connected to all the words they occur in (see Fig. 1). Retrieval is done by evaluating these inverted files. As a result, the searcher is independent of the morphological shape of the term he wants to search with.

From a purely linguistic point of view, the aim is to find the morphological structure of each word as well as the information about which morpheme occurs in which word.

The system has been developed for several languages: we took 36,000 English tokens (from the Food Science Technology Abstracts document files), 53,000 German tokens (from the German Patent Office document files) and 35,000 Spanish tokens (from several kinds of texts: short stories, telephone maintenance, newspapers, etc.). For 95-97% of the tokens, the correct roots were extracted; the retrieval results could be improved by overall 70% (for the English version; the German version is currently being tested). Since the kernel of the system consists of a morphological decomposition algorithm, it can also be used for the handling of other phenomena below the lexical level, e.g. the handling of lexical gaps.

2. The decomposition algorithm

The parser works essentially language independently (see below for language-specific points), using a morpheme list and a morphological grammar of the language in question. First of all, a preprocessing step transforms the input string in order to eliminate some kinds of allomorphs (see below); its operations are just grapheme insertion, deletion and changing, so it could be developed language-independently. The transformation conditions and contents, of course, differ between the languages. Then the transformed string is transferred to the parser. The decomposition works in a left-to-right, breadth-first manner. It builds up a network graph consisting of possible morphemes.
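Before the details of the successor search described next, a toy sketch of this graph construction may help. The five-entry morpheme list and the two-level class grammar below are invented; the real system reads both from language-specific files, as explained in the following sections.

# Sketch of breadth-first morpheme decomposition over a chart.
# Toy morpheme list and class-sequence grammar (invented data).
LEXICON = {
    "build": "LEX_UNBOUND", "building": "LEX_UNBOUND",
    "ing": "SUFFIX", "s": "INFLECTION",
}
FOLLOWS = {                       # which class may follow which (the "grammar")
    "START": {"LEX_UNBOUND"},
    "LEX_UNBOUND": {"SUFFIX", "INFLECTION", "LEX_UNBOUND", "END"},
    "SUFFIX": {"INFLECTION", "END"},
    "INFLECTION": {"END"},
}

def decompose(word):
    """Return all complete decompositions as lists of (morph, class)."""
    agenda = [(0, "START", [])]           # (position, last class, path so far)
    results = []
    while agenda:                          # breadth-first over chart states
        pos, prev, path = agenda.pop(0)
        if pos == len(word) and "END" in FOLLOWS.get(prev, set()):
            results.append(path)
            continue
        for end in range(pos + 1, len(word) + 1):
            morph = word[pos:end]
            cls = LEXICON.get(morph)
            if cls and cls in FOLLOWS.get(prev, set()):
                agenda.append((end, cls, path + [(morph, cls)]))
    return results

for r in decompose("buildings"):
    print(r)
# [('building', 'LEX_UNBOUND'), ('s', 'INFLECTION')]
# [('build', 'LEX_UNBOUND'), ('ing', 'SUFFIX'), ('s', 'INFLECTION')]

The BUILDING vs. BUILD-ING ambiguity that the selection strategies discussed later must resolve already shows up in this toy run.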
At a certain state in the graph, the algorithm searches for possible successors: It Fig.l: System structure Morphemel ( ~ IAllomorph i Lexicon| ~ List l ~I Decomposition ~' Inverte~ Parsin~ ~ ~ Scoring I~ ~I Root fil~ Grammar J _ 174 identifies possible morphemes by looking them up in the morpheme list, and checks if they can hold their current position in the word; this is done by means of a morpheme grammar. The morpheme gr~mm~r contains the possible sequences of morpheme classes and is represented as a state- transition automaton. If the new morpheme is accepted it is stored in a morpheme chart and connected with the network graph. If the whole input string is processed, the graph is evaluated and put out. Since the morpheme list and the morpheme grammar are language-specific, they are separated from the parser and stored in files; so the decomposition itself to a large extent is language independent. In a number of cases (owing both to the morpheme gr~mm~r and to true ambiguities in natural language), the parser produces more than one possible result; this means that additional strategies have to be applied to select the most plausible decomposition. The system scores the best result highest and puts it on the top of the decomposition list; nevertheless it keeps the others, because different decompositions may be correct, e.g. for different parts of speech; but this goes beyond pure morphology. The scored decomposition results are used to extract the root(s) and to disambiguate some morphs. The roots are inverted in such a way that they point to the set of tokens they belong to. Allomorphes of the same morphemes are inverted to the same set of tokens, which means that in searching with any allomorph of a word the system nevertheless will come up with the correct set of tokens. II LINGUISTIC ISSUES IN DECOMPOSITION Dealing with large amounts of data, some linguistic problems arise which not only influence the claim that the system should be language independent, but also concern pure morphology. Some of them are presented in the following sections. I. Morpheme definition The first problem is to set up rules to define possible morphemes. One crucial point is the morpheme selection: What about proper names (BAGDAD, TOKYO)? What about special terminology (e.g. chemical terms which need special suffix- ation rules)? What about foreign words and morphemes, which are used quite frequently and have to be considered if the language is seen as a synchronic system? As a result, a pure single- language morphology is highly artificial from the language system point of view, and there is some arbitrariness in selecting morphemes for the morpheme lexicon. We decided not to deal with proper names and to pick up the morphemes which are quite frequent (with respect to the number of tokens they occur in) and which have many different derivations. So, the morphology of one language (e.g. German) has to be mixed up with the morphology of other languages (Latin, Greek, English) in order to cover a broad variety of the synchronic language system. In addition to this, it has been found that the resulting morpheme lists differ depending on what topic the documents we analyse deal with: Some special vocabulary has to be added to the morpheme list, e.g. with respect to food science. The vocabulary which is considered as basic to all topics consists of approx. 4-5000 morphemes, the rest is special vocabulary. 
With this morpheme selection, we got error rates (words which could not be decomposed) of 4-8%; most of them were proper names or typing errors. Another crucial point is morpheme segmentation. As the analysis should be syn- chronic, no diachronic segmentation is done. Diachronic issues sometimes occur but are not taken into account. But there are two criteria that lead to different segmentations: Purely dis- tributional based segmentation reduces semant- ically quite different morphemes to the same root (e.g. English ABROAD vs. BROAD, German VERZUG vs. ZUG) and sometimes creates artificial over- lappings (e.g. Spanish SOL-O vs. SOL vs. SOL- AR); on the other hand some clear derivations are not recoghised because of gaps in the distribution of the lexical material (e.g. -LICH in German OEFFENTLICH). On the other hand, semantically oriented segmentation sometimes leads to a loss of morphological information, e.g. in German prefix- ation: If the prefixes (VER-LUST, VER-ZUG) are taken as a part of the root, which is correct from a semantic point of view, some information about derivational behaviour gets lost. We decided to base morpheme segmentation on the semantic criterion to distinguish the meanings of the roots as far as possible, and to segment the morphemes according to their distribution as far as possible: We take the longest possible string which is common to all its derivations even if it contains affixes from a diachronical (and partly derivational) point of view. Since there is some intuition used with respect to which derivations basically carry the same meaning and which affixes should belong to the root (i.e. should be part of a morpheme), the morpheme list is not very elegant from a theoretical point of view; but it must be stated that the language data often don't fit the criteria of morphologists. These problems are common to the three languages and sometimes lead to irregularities in the morpheme list. The resulting lists consist of 8000 to 10000 morphemes. 2. Morpheme cate~orisation Every morpheme has some categorial information attached to it, namely morpheme class, morpheme part of speech, allomorph information, morphological idiosyncrasies and inflectional behaviour. 175 All this information is language dependent: In English, some morphemes duplicate their last consonant in derivation (INFER - INFERRING), and there seems to be phonological but no morphological regularity in that behaviour, so this information has to be explicitly stored. Get-man and Spanish need quite more inflectional information than English does, etc. All this information can be stored within the same data structure for the different languages, but the interpretation of these data has to be programmed separately. The morpheme classes also depend on the language. Affix classes don't differ very much: Prefixes, suffixes an inflections are common to all the languages considered; but there are fillers in German compound words that don't exist in Spanish, and there are doubleing consonants in English. More differences are found within the lexical morphemes: The three languages have in common the basic distinction between bound and unbound morphemes, but there are special subcategories e.g. within the bound ones: Some need suffixes (THEOR-), some need inflections (the bound version of Spanish SOL-, German SPRACH-), some need doubleing consonants. 
With respect to this, information about possible succeeding morphemes is stored; but to be able to analyse new or unknown derivations, no additional information should be used: an unbound morpheme can or cannot take a suffix, it can or cannot form a compound word, etc. 3. The morpheme ~ranm~ar In this automaton, the sequences of possible morphemes are fixed. Fop each language it specifies which morpheme class may be followed by which other one: E.g. a prefix may be followed by a bound or unbound lexical morpheme or by another prefix, but not by a suffix or an inflection. The grammar automaton is stored in a file and interpreted by the parser; so the parser can handle different languages. The automaton restricts the number of morphemes that can occur in a given input word (see fig. 2). Nevertheless, the most effective constraints work on the subclass level: A prefix can be followed by another one, but not every combination is allowed. An unbound morpheme can be followed by an inflection, but the inflection must fit to the inflectional properties of the morpheme (e.g. verb endings to a noun). All these constraints are written in procedures attached to the transitions between possible morpheme grammar states; these procedures are highly language- specific. In fact, this is the biggest problem when talking about language-independency. 4. Allomorph handlin~ There are several kinds of allomorphes: Some are quite regular and can be eliminated in a preprocessing step: English RELY vs. RELIES, Spanish CUENTA vs. CONTAR are transformed before the decomposition goes on; this is pure string transformation, which can be performed by a quite similar procedure in each language. Other allomorphes can not be handled automatically; so we attach the allomorph information to the lexical entry of the morphem in question. This is done with strong verbs, with German derivatlonal and inflectional vowel mutations, with some kinds of Greek and Latin morphemes (eg. ABSORB- ING vs. ABSORP-TION, which in fact is regular but ineffective to deal with automatically), etc. Different spellings of morphemes (CENTRE vs. CENTER) also have to be handled as allomorphes. In our system, the allomorph stems point to the same set of words they occur in, so that the user searching with FOOD will also find words with FEED or FED. On the other hand, artificial overlappings (Spanish SOL vs. SOL-O, English PIN vs. PINE) should point to different sets of words in order to disambiguate these morphemes; this can be done by looking at the morphological context of the morph in question; but this is not always sufficient for disambiguation. These kinds of overlappings are very common in Spanish, less frequent in English and rather seldom in German. 5. Selection strategies In 55% of all cases, the decomposition comes up with only one possible result. This, in over 99% of the cases, is a correct result. In over 40%, however, the result is ambiguous: From a morphological point of view, several decom- positions of a word are acceptable. Since the system has no syntactical or semantic knowledge, it cannot find out the correct one (e.g. German DIEN-ST is correct for a verb, DIENST for a noun; similar English BUILD-ING (verb) vs. BUILDING (noun)). We decided not to integrate a scoring algorithm into the decomposition itself but to compare ambiguous results and try to find the most plausible decomposition. 
In comparing candidates, we apply several strategies. First, we compare the sequences of morpheme classes: suffixation is more frequent than compounding, so the compound reading LINGUISTIC-ALLY is less plausible than the suffixation LINGUISTIC-AL-LY. The morpheme-class sequence information can partly be collected statistically (by evaluating the decompositions with one correct result); nevertheless it has to be optimised manually by evaluating several thousands of decomposition results. (The statistics partly depend on the type of text considered.) This strategy works with different results for the different languages. If the affixes of a language are very ambiguous (as in German), this strategy is too weak and has to be supported by several others we are currently developing. In English and Spanish, however, the results are quite satisfactory: the first 10 morpheme-class sequences in English cover 60%, the first 50 over 80% of the tokens. If the morpheme-class sequence strategy falls below a threshold (which mostly happens with long compounds), the strategy is switched to longest matching: the decomposition with the fewest morphemes is scored best. As a result, the disambiguation returns correct roots in 90-94% of the cases; in German, the ambiguous affixes do not influence the root extraction, although the decompositions as a whole are correct only in 85% of the tokens. Together with the decompositions with only one correct result, the whole system works correctly on about 96% of the input words.

III. LIMITATIONS

Although the morphological decomposition works quite well and is useful with respect to information retrieval problems, there are some problems concerning the integration of such an algorithm into a complete natural language system. The reason is that some of the information needed for this is not easily available: it is information which goes beyond morphology and is based on the interpretation of decomposition results. Two examples should be mentioned here.

1. Parts of speech

It is not easy to derive the part of speech of a word from its decomposition. In German, the prefix VER- forms verb derivations, but the derivation VER-TRAUEN from the verb TRAUEN is also a noun, whereas the same derivation VER-TRETEN from the verb TRETEN is not, and the derivation VER-LEGEN (from the verb LEGEN) is also an adjective. The past participle GE-FALLEN (from the verb FALLEN) is also a noun; the same derivation from LAUFEN (GE-LAUFEN) is not. This fact is due to the diachronic development of the language, which led to a structure of the vocabulary that followed the needs of usage rather than morphological consistency.

2. Semantics

There is some evidence that the meaning of a word cannot be predicted from its morphemes. Words which are morphologically related, like German ZUG vs. BEZUG vs. VERZUG, LUST vs. VERLUST, DAMM vs. VERDAMMEN, are completely different from a semantic point of view. This could mean that the semantic formation rules do not correspond to the morphological ones. But considering large amounts of data, up to now no certain rules can be given for how word meaning could be derived from the "basic units of meaning" (which is what morphemes claim to be). Semantically and even syntactically regular behaviour can be observed at the level of words rather than morphemes. The result of our research on morphemes tends to support those who stress the status of the word as the basic unit of linguistic theory.

ACKNOWLEDGEMENTS

The work described here was done by A. Baumer, M. Streit, G. Thurmair (German), I. Buettel, G. Th. Niedermair, Ph. Hoole (English) and M. Meya (Spanish).
A GENERAL COMPUTATIONAL MODEL FOR WORD-FORM RECOGNITION AND PRODUCTION

Kimmo Koskenniemi
Department of General Linguistics
University of Helsinki
Hallituskatu 11-13, Helsinki 10, Finland

ABSTRACT

A language-independent model for the recognition and production of word forms is presented. This "two-level model" is based on a new way of describing morphological alternations. All rules describing the morphophonological variations are parallel and relatively independent of each other. Individual rules are implemented as finite state automata, as in an earlier model due to Martin Kay and Ron Kaplan. The two-level model has been implemented as an operational computer program in several places. A number of operational two-level descriptions have been written or are in progress (Finnish, English, Japanese, Rumanian, French, Swedish, Old Church Slavonic, Greek, Lappish, Arabic, Icelandic). The model is bidirectional: it is capable of both analyzing and synthesizing word-forms.

The work described in this paper is a part of project 593 sponsored by the Academy of Finland.

1. Generative phonology

The formalism of generative phonology has been widely used since its introduction in the 1960's. The morphology of any language may be described with the formalism by constructing a set of rewriting rules. The rules start from an underlying lexical representation, and transform it step by step until the surface representation is reached. The generative formalism is unidirectional, and it has proven to be computationally difficult; it has therefore found little use in practical morphological programs.

2. The model of Kay and Kaplan

Martin Kay and Ron Kaplan from Xerox PARC noticed that each of the generative rewriting rules can be represented by a finite state automaton (or transducer) (Kay 1982). Such an automaton would compare two successive levels of the generative framework: the level immediately before application of the rule, and the level after application of the rule. The whole morphological grammar would then be a cascade of such levels and automata:

    lexical representation
         |  FSA 1
    after 1st rule
         |  FSA 2
    after 2nd rule
         |
        ...
    after (n-1)st rule
         |  FSA n
    surface representation

A cascade of automata is not operational as such, but Kay and Kaplan noted that the automata could be merged into a single, larger automaton by using the techniques of automata theory. The large automaton would be functionally identical to the cascade, although single rules could no longer be identified within it. The merged automaton would be operational, efficient and bidirectional. Given a lexical representation, it would produce the surface form, and, vice versa, given a surface form it would guide lexical search and locate the appropriate endings in the lexicon. In principle, the approach seems ideal. But there is one vital problem: the size of the merged automaton. Descriptions of languages with complex morphology, such as Finnish, seem to result in very large merged automata. Although there are no conclusive numerical estimates yet, it seems probable that the size may grow prohibitively large.

3. The two-level approach

My approach is computationally close to that of Kay and Kaplan, but it is based on a different morphological theory. Instead of abstract phonology, I follow the lines of concrete or natural morphology (e.g. Linell, Jackendoff, Zager, Dressler, Wurzel).
Using this alternative orientation I arrive at a theory where there is no need for merging the automata in order to reach an operational system. The two-level model rejects abstract lexical representations, i.e. there need not always be a single invariant underlying representation. Some variations are considered suppletion-like and are not described with rules. The role of rules is restricted to one-segment variations, which are fairly natural. Alternations which affect more than one segment, or where the alternating segments are unrelated, are considered suppletion-like and handled by the lexicon system.

4. Two-level rules

There are only two representations in the two-level model: the lexical representation and the surface representation. No intermediate stages "exist", even in principle. To demonstrate this, we take an example from Finnish morphology. The noun lasi 'glass' represents the productive and most common type of nouns ending in i. The lexical representation of the partitive plural form consists of the stem lasi, the plural morpheme I, and the partitive ending A. In the two-level framework we write the lexical representation lasiIA above the surface form laseja:

    Lexical representation:  l a s i I A
    Surface representation:  l a s e j a

This configuration exhibits three morphophonological variations:

a) Stem-final i is realized as e in front of typical plural forms, i.e. when I follows on the lexical level, schematically:

    i -> e / ___ I            (1)

b) The plural I itself is realized as j if it occurs between vowels on the surface, schematically:

    I -> j / V ___ V          (2)

c) The partitive ending, like other endings, agrees with the stem with respect to vowel harmony. An archiphoneme A is used instead of two distinct partitive endings. It is realized as ä or a according to the harmonic value of the stem, schematically:

    A -> a / back-V ... ___   (3)

The task of the two-level rules is to specify how lexical and surface representations may correspond to each other. For each lexical segment one must define the various possible surface realizations. The rule component should state the necessary and sufficient conditions for each alternative. A rule formalism has been designed for expressing such statements. A typical two-level rule states that a lexical segment may be realized in a certain way if and only if a context condition is met. The alternation (1) in the above example can be expressed as the following two-level rule:

    i
       <=>  ___  I            (1')
    e            =

This rule states that a lexical i may be realized as an e only if it is followed by a plural I, and if we have a lexical i in such an environment, it must be realized as e (and as nothing else). Both statements are needed: the former to exclude i-e correspondences occurring elsewhere, and the latter to prevent the default i-i correspondence in this context. Rule (1') referred to a lexical segment I, and it did not matter what the surface character corresponding to it was (thus the pair I-=). The following rule governs the realization of I:

    I
       <=>  V  ___  V         (2')
    j

This rule requires that the plural I must be between vowels on the surface. Because certain stem-final vowels are realized as zero in front of plural I, generative phonology orders the rule for plural I to be applied after the rules for stem-final vowels. In the two-level framework there is no such ordering. The rules only state a static correspondence relation, and they are nondirectional and parallel.
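To make the parallel checking concrete, here is a minimal sketch (illustrative only, not the author's formalism; it encodes just rules (1') and (2') as predicates, ignoring vowel harmony) that verifies a lexical/surface pair sequence against both constraints simultaneously:

    # Each rule inspects the whole sequence of character pairs at one
    # position; all rules must hold at every position.
    pairs = list(zip("lasiIA", "laseja"))   # [('l','l'), ..., ('A','a')]

    def rule_i_e(ps, k):
        lex, sur = ps[k]
        follows_I = k + 1 < len(ps) and ps[k + 1][0] == "I"
        if (lex, sur) == ("i", "e"):
            return follows_I                 # i:e only before plural I
        if lex == "i":
            return not follows_I             # and obligatorily there
        return True

    def rule_I_j(ps, k):
        lex, sur = ps[k]
        if lex != "I":
            return True
        vowels = set("aeiouäöy")
        return (sur == "j" and 0 < k < len(ps) - 1
                and ps[k - 1][1] in vowels and ps[k + 1][1] in vowels)

    rules = [rule_i_e, rule_I_j]
    ok = all(r(pairs, k) for r in rules for k in range(len(pairs)))
    print(ok)   # True: lasiIA / laseja is accepted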
5. Rules as automata

In the following we construct an automaton which performs the checking needed for the i-e alternation discussed above. Instead of single characters, the automaton accepts character pairs. This automaton (and the automata for the other rules) must accept the following sequence of pairs:

    l-l, a-a, s-s, i-e, I-j, A-a

The task of the rule automaton is to permit the pair i-e if and only if the plural I follows. An automaton with three states (1, 2, 3) performs this; its transition diagram (1'') is rendered in tabular form below. State 1 is the initial state of the automaton. If the automaton receives pairs without lexical i it will remain in state 1 (the symbol =-= denotes "any other pair"). Receiving a pair i-e causes a transition to state 3. States 1 and 2 are final states (denoted by double circles in the diagram), i.e. if the automaton is in one of them at the end of the input, the automaton accepts the input. State 3 is, however, a nonfinal state, and the automaton should leave it before the input ends (or else the input is rejected). If the next character pair has plural I as its lexical character (which is denoted by I-=), the automaton returns to state 1. Any other pair will cause the input to be rejected, because there is no appropriate transition arc. This part of the automaton accomplishes the "only if" part of the correspondence: the pair i-e is allowed only if it is followed by the plural I. State 2 is needed for the "if" part. If a lexical i is followed by plural I, we must have the correspondence i-e. Thus, if we encounter a correspondence of lexical i other than i-e (i-=), it must not be followed by the plural I. Anything else (=-=) will return the automaton to state 1.

Each rule of a two-level description corresponds to a finite state automaton, as in the model of Kay and Kaplan. In the two-level model, however, the rules, or the automata, operate in parallel instead of being cascaded:

    lexical representation
      ||  (all rule automata in parallel)
    surface representation

The rule automata compare the two representations, and a configuration must be accepted by each of them in order to be valid. The two-level model (and the program) operates in both directions: the same description is utilized as such for producing surface word-forms from lexical representations, and for analyzing surface forms. As it stands now, two-level programs read the rules as tabular automata; e.g. the automaton (1'') is coded as:

    "i - e in front of plural I" 3 4
         i  i  I  =
         e  =  =  =
    1:   3  2  1  1
    2:   3  2  0  1
    3.   0  0  1  0

This entry format is, in fact, more practical than the state transition diagrams. The tabular representation remains more readable even when there are half a dozen states or more. It has also proven to be quite feasible even for those who are linguists rather than computer professionals. Although it is feasible to write morphological descriptions directly as automata, this is far from ideal. The two-level rule formalism is a much more readable way of documenting two-level descriptions, even if hand-compiled automata are used in the actual implementation. A compiler which would accept rules directly in some two-level rule formalism would be of great value. The compiler could automatically transform the rules into finite state automata, and thus facilitate the creation of new descriptions and further development of existing ones.
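The following minimal sketch (illustrative only, not the published PASCAL or LISP programs; the table digits follow the reconstruction given above) interprets the tabular automaton over a pair sequence, with 0 denoting rejection:

    TABLE = {                  # state -> next state per column
        1: (3, 2, 1, 1),       # columns: i:e, i:other, I:any, other
        2: (3, 2, 0, 1),
        3: (0, 0, 1, 0),
    }
    FINAL = {1, 2}

    def column(lex, sur):
        if lex == "i":
            return 0 if sur == "e" else 1
        if lex == "I":
            return 2
        return 3

    def accepts(pairs):
        state = 1
        for lex, sur in pairs:
            state = TABLE[state][column(lex, sur)]
            if state == 0:
                return False
        return state in FINAL

    print(accepts(zip("lasiIA", "laseja")))   # True
    print(accepts(zip("lasi", "lase")))       # False: i:e without plural I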
6. Two-level lexicon system

Single two-level rules are at least as powerful as single rules of generative phonology. The two-level rule component as a whole (at least in practical descriptions) appears to be less powerful, because of the lack of extrinsic rule ordering. Variations affecting longer sequences of phonemes, or where the relation between the alternatives is phonologically otherwise nonnatural, are described by giving distinct lexical representations. Generalizations are not lost, since insofar as the variation pertains to many lexemes, the alternatives are given as a minilexicon referred to by all entries possessing the same alternation.

The alternations in words of the following types are described using the minilexicon method:

    hevonen - hevosen  'horse'
    vapaus - vapautena - vapauksia  'freedom'

The lexical entries of such words give only the nonvarying part of the stem and refer to a common alternation pattern nen/S or s-t-ks/S:

    hevo  nen/S     "Horse S";
    vapau s-t-ks/S  "Freedom S";

The minilexicons for the alternation patterns list the alternative lexical representations and associate them with the appropriate sets of endings:

    LEXICON nen/S          LEXICON s-t-ks/S
      nen  S0    "" ;        s    S0   "" ;
      sE   S123  "" ;        TE   S13  "" ;
                             ksE  S2   "" ;
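A minimal sketch of the minilexicon mechanism (illustrative only: the class names and expansions follow the reconstruction of the listing above, and the real system also stores glosses and further data):

    # A lexical entry names an alternation pattern; the pattern lists
    # alternative stem-final strings with the ending classes they take.
    PATTERNS = {
        "nen/S":    [("nen", {"S0"}), ("sE", {"S1", "S2", "S3"})],
        "s-t-ks/S": [("s", {"S0"}), ("TE", {"S1", "S3"}), ("ksE", {"S2"})],
    }

    def lexical_stems(invariant, pattern):
        """Expand an entry like ('hevo', 'nen/S') into its stem variants."""
        return [(invariant + alt, classes)
                for alt, classes in PATTERNS[pattern]]

    for stem, classes in lexical_stems("hevo", "nen/S"):
        print(stem, sorted(classes))
    # hevonen ['S0']             -- nominative hevonen
    # hevosE ['S1', 'S2', 'S3']  -- oblique hevose-n etc. (E an archiphoneme)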
7. Current status

The two-level program was first implemented in PASCAL, and it runs at least on Burroughs B7800, DEC-20, and large IBM systems. The program is fully operational and reasonably fast (about 0.05 CPU seconds per word, although hardly any effort has been spent on optimizing the execution speed). It could be run on 128 kB microcomputers as well. Lauri Karttunen and his students at the University of Texas have implemented the model in INTERLISP (Karttunen 1983, Gajek et al. 1983, Khan et al. 1983). The execution speed of their version is comparable to that of the PASCAL version. The two-level model has also been rewritten in Zetalisp (Ken Church at Bell) and in NIL (Hank Bromley in Helsinki and Umeå).

The model has been tested by writing a comprehensive description of Finnish morphology covering all types of nominal and verbal inflection, including compounding (Koskenniemi 1983a,b). Karttunen and his students have made two-level descriptions of Japanese, Rumanian, English and French (see the articles in TLF 22). At the University of Helsinki, two comprehensive descriptions have been completed: one of Swedish by Olli Blåberg (1984) and one of Old Church Slavonic by Jouko Lindstedt (forthcoming). Further work is in progress in Helsinki on descriptions of Arabic (Jaakko Hämeen-Anttila) and Modern Greek (Martti Nyman). The system is also used at the University of Oulu, where a description of Lappish is in progress (Pekka Sammallahti), in Uppsala, where a more comprehensive French description is in progress (Anette Östling), and in Gothenburg.

The two-level model could be part of any natural language processing system. Especially the ability both to analyze and to generate is useful. Systems dealing with many languages, such as machine translation systems, could benefit from the uniform, language-independent formalism. The accuracy of information retrieval systems can be enhanced by using the two-level model for discarding hits which are not true inflected forms of the search key. The algorithm could also be used for detecting spelling errors.

ACKNOWLEDGEMENTS

My sincere thanks are due to my instructor, Professor Fred Karlsson, and to Martin Kay, Ron Kaplan and Lauri Karttunen for fruitful ideas and for acquainting me with their research.

REFERENCES

Alam, Y., 1983. A Two-Level Morphological Analysis of Japanese. In TLF 22.
Blåberg, O., 1984. Svensk böjningsmorfologi: en tillämpning av tvånivåmodellen. Unpublished seminar paper. Department of General Linguistics, University of Helsinki.
Gajek, O., H. Beck, D. Elder, and G. Whittemore, 1983. KIMMO: LISP Implementation. In TLF 22.
Karlsson, F. & Koskenniemi, K., forthcoming. A process model of morphology and lexicon. Folia Linguistica.
Karttunen, L., 1983. KIMMO: A General Morphological Processor. In TLF 22.
Karttunen, L. & Root, R. & Uszkoreit, H., 1981. TEXFIN: Morphological analysis of Finnish by computer. A paper read at the 71st Meeting of the SASS, Albuquerque, New Mexico.
Karttunen, L. & Wittenburg, K., 1983. A Two-Level Morphological Description of English. In TLF 22.
Kay, M., 1982. When meta-rules are not meta-rules. In Sparck-Jones & Wilks (eds.), Automatic natural language processing. University of Essex, Cognitive Studies Centre. (CSM-10.)
Khan, R., 1983. A Two-Level Morphological Analysis of Rumanian. In TLF 22.
Khan, R. & Liu, J. & Ito, T. & Shuldberg, K., 1983. KIMMO User's Manual. In TLF 22.
Koskenniemi, K., 1983a. Two-level Model for Morphological Analysis. Proceedings of IJCAI-83, pp. 683-685.
Koskenniemi, K., 1983b. Two-level Morphology: A General Computational Model for Word-Form Recognition and Production. University of Helsinki, Dept. of General Linguistics, Publications, No. 11.
Lindstedt, J., forthcoming. A two-level description of Old Church Slavonic morphology. Scando-Slavica.
Lun, S., 1983. A Two-Level Analysis of French. In TLF 22.
TLF: Texas Linguistic Forum. Department of Linguistics, University of Texas, Austin, TX 78712.
PANEL NATURAL LANGUAGE AND DATABASES, AGAIN Karen Sparck Jones Computer Laboratory, University of Cambridge Corn Exchange Street, Cambridge CB2 3QG, England INTRODUCTION Natural Language and Databases has been a common panel topic for some years, partly because it has been an active area of work, but more importantly, because it has been widely assumed that database access is a good test environment for language research. I thought the time had come to look again at this assumption, and that it would be useful, for COLING 84, to do this. I therefore invited the members of the Panel to speak to the proposition (developed below) that database query is no longer a good, let alone the best, test environment for language processing research, because it is insufficiently demanding in its linguistic aspects and too idiosyncratically demanding in its non-linguistic ones; and to propose better task environments for language understanding research, without the disadvantages of database query, but with its crucial advantage of an independent evaluation test. DATABASES: PROS, CONS, AND WHAT INSTEAD? Database query has a long and honourable history as a vehicle for natural language research. Its value for this purpose was restated, for example, by Bonnie Webber at IJCAI-83 (Webber 1983). I nevertheless think it is now time to question the value of database query as a continuing vehicle for language research. Database query has two major points in its favour. The task is relatively restricted, so success in building a front end does not depend on solving all the problems of language and knowledge processing at once. More importantly, the task provides a hard, rather than soft, test environment for a language processor: the processor's performance is independently evaluated via its output formal search query. Natural language research has profited in the past from the restrictions on the database task: its limited linguistic functions and world references have allowed concentration on, and hence progress in dealing with, obvious problems of language and knowledge processing. But I believe that database query is reaching the end of its utility for fundamental research on natural language understanding, for two reasons. The first is that current database systems are too impoverished to call for some important language-processing capabilities in their front ends, so work on these capabilities is discouraged. Obvious examples of the expressive poverty of typical database systems include their lack of resources for handling, at all properly, such important components of text meaning as qualifying concepts like negation and a variety of quantifiers; intensional concepts including meta description, modality, presupposition, different semantic relations, and constraints of all sorts; and the full range of linguistic functions subsumable under the heading of speech acts. More generally, the nature of the task means that many typical requirements of language understanding, e.g. the determination of the domain of discourse and hence senses of words, and many typical forms of language use, e.g. interactive dialogue, are never investigated. (Though attempts may be made, forced by the way natural language is actually used in input, to handle some of these phenomena via superimposed knowledge bases, this does not undermine my general point: the additional resources are merely devices for reducing the richness of natural language expressions to obtain sensible database mappings.) 
The second reason for doubting the continuing utility of database query as a field for natural language research is that the autonomous characteristics of database systems impose idiosyncratic constraints on the language processor that are of no wider interest for natural language understanding in general. Most of the problems listed by Robert Moore at ACL-82 (Moore 1982) fall into this class, as do many of those identified by, for example, Templeton and Burger (1983). The examples include database-specific quantifier interpretation, quantity determination, procedures for mapping to compound attributes, techniques for dealing with open value word sets, and ripping apart complex queries. Further, even more database-oriented, problems include, for instance, path optimisation, parallel (coroutine based) query evaluation, and null values. These problems can be very intractable for individual data models or databases, and as the solutions tend to be ad hoc and specialised, the issues are essentially diversions from research on more pervasive language phenomena and functions, and hence on generally relevant language understanding procedures. This is of course not to deny that database access presents many perfectly 'ordinary' language interpretation problems. The crux is whether the central interpretive process, mapping from language concepts onto database ones, is sufficiently like the interpretation procedures required for other natural language using functions, for it to be an appropriate study model for these. I believe that much of the attraction of the database case comes from the stimulus to logic-based meaning representation provided by the formal database query languages into which natural language questions are usually ultimately mapped. The database application naturally appeals to those who believe that the meanings of natural language texts should be expressed in something like first order logic. But current data languages, however logical, are very limited. More importantly, they are geared to data models expressing properties of databases that are manifestly artificial, and are not properties of the real worlds with which natural language is concerned. Third normal form is a property of this kind. I do not believe that third normal form has got anything to do with the meaning of natural language expressions. But the ultimate consequence of working with present data models is behaving as if it does. This is clearly unsatisfactory. I am of course not attacking the idea of logical meaning representations. What I am claiming is that the database application is an inadequate test environment for natural language understanding systems. One argument for continuing with database query processing must therefore be that those mainstream language handling problems which do arise have not been fully resolved, so it is legitimate to concentrate on these, in what is a convenient test environment, and defer an attack on other language processing tasks. The second is that there are ill-understood knowledge handling operations triggered by and interacting with language processing that are not specialised to one contemporary computational task, but are sufficiently typical of a whole range of other knowledge processing tasks to justify further study in the exemplary database case.
Without wishing to imply that the database query function is all wrapped up (or doubting the need for much further system engineering), I do not think these arguments are strong, simply because it is impossible to disentangle general language problems from database ones, and database problems from current highly restricted data models and implementations. Moore's example of time and tense illustrates this very well. Time information determination problems arise in database questions; but because of the database domain context, they are typically only an arbitrary subset of those ordinarily occurring, and require interpretive responses biassed to the particular time concepts of the database. It may be that finding anything out about time interpretation, even in a limited context, is of some use. But it is surely better to consider time interpretation in the more motivated way allowed by a richer environment involving a fuller range, or at least a less arbitrarily selected set, of temporal concepts than those of current databases. My point is that to make progress in natural language research in the next five to ten years we need the stimulus of a new application context. This must meet the following criteria: it must be more 'central' to language understanding than database query; it must be harder, without overwhelming us with its difficulty; and we should preferably be able to make a start on it by exploiting what we have learnt from the database application. But most importantly, the new task must have built-in evaluation criteria for the performance of language processors. This is more difficult to achieve with systems whose entire function is language processing, like translation, than with systems where natural language processing is required for the system's external world interface; but it is still possible to evaluate translation, for example, or summarising, reasonably objectively: the problem is the sheer effort involved. Some candidate applications meeting these criteria are:

- natural language interfaces to conventional computing systems (e.g. operating systems, numerical packages, etc.)
- natural language interfaces to expert systems
- natural language interfaces to robots
- natural language interfaces to teaching systems

All of these meet the evaluation requirement; what requires examination is the extent to which non-trivial back end systems (e.g. a robot more interesting than SHRDLU) would be too severe a challenge for language processing. It is not necessary, in this context of principle, to base choices on potential market interest: expert systems would score here, presumably. However it is necessary to consider the expected 'technological' plausibility of the requirement for a natural language interface, e.g. to a robot. These candidates are for interface systems. Should we instead be renewing the attack on language systems, e.g. for translation or summarising, or upgrading semi-linguistic systems like those for document retrieval?

REFERENCES

Webber, B.L. 'Pragmatics and database question answering', Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83), 1983, 204-205.
Moore, R.C. 'Natural-language access to databases - theoretical/technical issues', Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, 1982, 44-45.
Templeton, M. and Burger, J. 'Problems in natural-language interface to DBMS with examples from EUFID', Proceedings of the Conference on Applied Natural Language Processing, 1983, 3-16.
A MULTIDIMENSIONAL TOOL FOR DISCOURSE ANALYSIS

J. CHAUCHE
Laboratoire de Traitement de l'Information, I.U.T. LE HAVRE, Place Robert Schuman - 76610 LE HAVRE, FRANCE
& C.E.L.T.A., 23, Boulevard Albert 1er - 54000 NANCY, FRANCE

ABSTRACT: The automatic processing of discourse presupposes algorithmic and computational treatment. Several methods make it possible to approach this aspect. The use of a general programming language (for example PL/I) or a more specialized one (for example LISP) represents the first approach. At the opposite end, the use of specialized software avoids the algorithmic study required in the first case and concentrates that study on the aspects genuinely specific to this processing. The choices that led to the definition of the SYGMART system are presented here. The multidimensional aspect is analyzed from a conceptual point of view, which makes it possible to situate this realization with respect to the various existing systems.

INTRODUCTION: Specialized software for the automatic processing of discourse comprises several elements. First, the description of the objects manipulated defines the working universe of the implementer. Second, the way these objects are manipulated determines the potential for building various applications. It is first necessary to define the nature of the underlying model with respect to existing theories. This article therefore presents, in turn, an approach to the theoretical model, a description of the objects manipulated, and finally the manipulation tools. The example of the SYGMART system shows a concrete realization of the choices presented.

The transformational model.

From a formal point of view, the tools used for the automatic processing of natural languages can be divided into two broad categories:

- The generative model, which defines a formal process generating a language. Analysis then consists in recovering the deductive process leading to the sentence or text under study. It is within this framework that most current systems have been built. The most important example is undoubtedly the definition of phrase-structure grammars and their associated parsers. A realization can be sketched by the following diagram:

    phrase-structure grammar -> associated parsing algorithm
    text -> generative structure of the text

Many objections can be raised against this approach. The main difficulties are:

- Does a complete grammar of the texts to be processed exist?
- Which parsing algorithm should be used if the formal restrictions are too constraining?
- In the case of natural language processing, is the algorithm used flexible enough to allow constant adaptation?

- The transformational model, which defines a function from one space (textual) into another space (relational), or a function from the relational space onto itself. The diagram is then the following:

    definition of the transformational model -> algorithm simulating the model
    text -> image structure

The main questions are then the following:

- Analysis: how should an acceptor for a given language be defined?
- Proof that the transformational function is everywhere defined.
- Does an acceptable transformational algorithm exist, and how should it be described?

Systems have already been built along these formal lines, notably the Q-systems, CETA and later ROBRA.
The aim of the present article is to describe an evolution of this approach, and in particular the multirelational or multidimensional approach.

The separation of structure and label, or structure and meaning.

When a model is used for a given application, a meaning is projected onto a formal object. For this reason each element of the structure is assigned a label carrying a particular meaning. Example: [tree GN -> ART + SUB over the words "le livre" ('the book'), with the tree shape on one side and the labels on the other.]

This approach has the drawback of conflating two elements that are distinct in nature and in meaning: the structure and the labels. Without this separation each point has a single identity, and the structure must then serve at least two purposes:

- the syntactic links or relations;
- the qualitative links or relations.

In the first case we have GN defining the noun phrase composed of an article and a substantive (GN -> ART + SUB); in the second case, ART defining the article as definite and singular (ART -> DEF + SING). Most transformational models have been defined with multi-labelling: [tree GN -> ART(DEF, SING, MAS): "le" + SUB(MAS): "livre".]

This important choice determines the objects that are manipulated abstractly (in theory) or concretely (in a program). Thus the Q-systems, for example, operate on Q-graphs, each branch of which is labelled by a simply labelled tree. The CETA system operates on multi-labelled trees. In both cases, discourse analysis consists in seeking a structure which then represents the system's understanding of the text; exploiting this structure then defines the application. A deeper study leads to defining as the basic object a triple: structure, multi-labels, association function.

    [structure: a tree over points 1..5]
    [multi-labels: A: GN;  B: ART DEF SING MAS;  C: le;  D: SUB MAS;  E: livre]
    [association function: 1 -> A, 2 -> B, 3 -> C, 4 -> D, 5 -> E]

The association function is not necessarily injective. This property makes it possible to dissociate structure and content more cleanly. Example: "Le grand et le petit arbre" ('the big and the small tree'). [Tree over fifteen points, with labels including A: COORD, GN nodes, DEF, GA and SUB nodes, and the words "le", "grand", "petit", "arbre"; in the association function, two distinct points (12 and 15) are mapped onto the same label N: arbre.]

The ellipsis of the word "arbre" does not exist in the structure; it exists only through the definition of the labelling function. This corresponds schematically to a graph in which "le grand" and "le petit" are both linked to the single node "arbre". The preceding definition makes it possible to define simple and efficient processing algorithms, whereas for this last type of graph the processing would require complex algorithms.

Structured elements.

A structured element is by definition a multidimensional or multi-field object. The structure above arises from the syntactic study of texts. It makes it possible to define an elaborated form of the text and to access its various components according to their functions. For natural language processing it is of course obvious that this analysis is not sufficient. This does not mean that all the problems linked to this analysis are solved, but that removing the obstacles, whether of syntactic analysis or otherwise, requires a deeper study.
When a realization uses the same definitional space to represent meaning and form, the problems mentioned above concerning the confusion of structure and label multiply and move up to the structural level. How can two structures of a given text be represented as trees if these two trees are contradictory? This problem has no solution within the classical tree framework. One can of course define several types of analysis and obtain several trees for the same text; in that case the link between these different trees is very difficult, if not impossible, to formalize and to implement. It is therefore necessary to have a representation model that allows several structures to be defined over the same set of points, each of these points being associated with a multi-label by an arbitrary function. This definition corresponds to the definition of structured elements, whose formal statement is the following:

A structured element is defined by a quadruple (P, S, E, F) where:

- P is a finite set of points;
- S is a finite set of tree structures over the points of P, such that every point of P belongs to at least one structure of S;
- E is a finite set of multi-labels;
- F is a surjective mapping from P onto E.

Example: [an element with eight points, two tree structures over them, the multi-labels {E1, E2, E3, E4}, and the mapping {1 -> E4, 2 -> E1, 3 -> E1, 4 -> E4, 5 -> E3, 6 -> E2, 7 -> E1, 8 -> E3}.]

The graphical representation of such an object is easier when one looks at a single structure (a single dimension or field). [The graphical synthesis of this example, superimposing both structures, is omitted.]

The classical problem of text analysis (defining a phrase-structure grammar generating a language) is thereby transformed: it becomes that of defining, for each element of the language, an associated structured element. The problem that then arises is similar to the one encountered with phrase-structure grammars: does the definition of the structural image cover the whole language? Note that phrase-structure grammars are a special case of this approach: the association then assigns to each element of the language generated by the grammar the syntactic structure of that element. The present approach allows a more complex association through the multiplicity of structures associated with the same set of points. Each text is thus associated with its syntactic, semantic, logical, and other structures. In practice the number of fields or dimensions is limited (for example 16 in the case of the SYGMART system).
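To make the quadruple concrete, here is a minimal sketch in Python (not part of SYGMART, whose implementations are in PL/I and C; all names and the parent-map representation of trees are illustrative assumptions) of a structured element with two fields sharing one set of points:

    from dataclasses import dataclass

    @dataclass
    class StructuredElement:
        points: set      # P: finite set of point ids
        trees: dict      # S: field name -> {child: parent} tree map
        labels: dict     # E: label id -> tuple of features
        assign: dict     # F: point id -> label id (surjective)

    elem = StructuredElement(
        points={1, 2, 3, 4, 5},
        trees={
            "syntax":    {2: 1, 3: 1, 4: 3, 5: 3},
            "semantics": {3: 1, 5: 3, 2: 5, 4: 5},
        },
        labels={"A": ("GN",), "B": ("ART", "DEF", "SING", "MAS"),
                "C": ("le",), "D": ("SUB", "MAS"), "E": ("livre",)},
        assign={1: "A", 2: "B", 3: "C", 4: "D", 5: "E"},
    )

    # The same points carry different structures in different fields:
    print(elem.trees["syntax"][4], elem.trees["semantics"][4])  # 3 5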
Transformational network.

A formal object is interesting only insofar as there is a means of manipulating it. This algorithmic aspect is necessary for any realization and limits the complexity of the objects defined. The operational model for the structured elements defined above is realized by a transformational network. Each node of the network consists of a transformational grammar, and each arc leaving a node of the network is labelled with a condition based on the presence of a schema. [Example: a small network of grammars G1, G2, ...; diagram omitted.] The result of applying the transformational network is defined by the structured element obtained after traversing the network from an entry point E to an exit point S. The network thus defines a mapping from the set of structured elements into itself.

The traversal of this network can be simple or recursive, depending on the nature of the rules applied in the elementary grammars. An elementary transformational grammar thus serves to define a transformation of the structured element. This transformation is realized by an ordered set of transformational rules. Each rule defines a replacement pattern allowing the modification of an arbitrary structured element. A rule can be simple or recursive, and in the latter case it calls the network for its execution. The central component of an elementary grammar is therefore the elementary rule. An elementary rule is defined by a set of tree transformations, each of which must apply to one field simultaneously with the transformations of the other fields. Constraints corresponding to points shared across fields can be defined. Note that the CETA system constitutes, within this framework, a special case of processing on a single field. The transformation within a field is an extension of the tree-transformation definitions given by Gladkij and Mel'cuk [7]. An elementary grammar also has an application mode that limits the applicability of the rules, in order to define a finite transformational process. The set of rules of an elementary grammar is ordered and defines a Markov algorithm [8] extended to structured elements. Defining a recognition model proceeds by a process analogous to the search for a program defining a given function. The objects processed are not classical programming objects, and modifications of these objects are not carried out by traversing the object being processed, but by defining transformations or modifications of sub-objects.

Consider, for example, the definition of the analysis of a sentence by Wang Huilin [9]; sentence: "sur ces données, l'ordinateur doit effectuer certains calculs suivant un programme déterminé." ('from these data, the computer must perform certain calculations according to a specified program.') [Target structure: tree omitted.] By convention the text is projected into the structured-element form closest to the text. Writing the grammar network then defines a transformation process yielding the desired structure. For obvious reasons we have simplified the presentation of this example by showing for each point only part of the set of values of the associated label, and by considering only one field. The first grammar must allow a distinction between sentences, in case the text contains several (and of course also in the case where sentence-by-sentence analysis has been chosen). This is done in three steps: an initialization rule, a generic rule, and a final rule [diagrams omitted]. The desired structure is derived from the syntactic structure, which in this case is as follows: [diagram omitted]. The following rule (rgnfl in [9]) is used to obtain the GN groupings: [diagram omitted]. Applied to the preceding text, this rule groups, for example, "l'" and "ordinateur" under a single GN node. This example uses two chained grammar networks, the first corresponding to the search for the syntactic structure, the second to the construction of the chosen structure (grammars FI2 and FI3 in [9]).
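The ordered-rule control regime can be illustrated by the following minimal sketch (illustrative only: it rewrites flat lists rather than SYGMART's structured elements, and the GN-grouping rule only loosely mirrors the rgnfl example above):

    def group_gn(xs):
        # find "le" immediately followed by a plain word; wrap into a GN node
        for i in range(len(xs) - 1):
            if xs[i] == "le" and isinstance(xs[i + 1], str):
                return xs[:i] + [("GN", "le", xs[i + 1])] + xs[i + 2:]
        return None   # rule not applicable

    def apply_grammar(rules, obj, limit=100):
        for _ in range(limit):        # guard keeping the process finite
            for rule in rules:        # ordered: first applicable rule wins
                new = rule(obj)
                if new is not None:
                    obj = new
                    break
            else:
                return obj            # no rule applies: the grammar halts
        return obj

    print(apply_grammar([group_gn], ["le", "ordinateur", "calcule"]))
    # [('GN', 'le', 'ordinateur'), 'calcule']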
The separation of structure and label induces an important property with respect to the definitional power of a rule: the generality of the transformations can be defined in two stages, a structural definition and a semantic definition. The structural definition is very general and the semantic definition very specific. A rule is then applicable if the semantic definition, fitted to the structural definition, corresponds to an effective realization in the structured element being processed. We have the following functional schema:

    knowledge base + structural definition -> produced rule

If, for example, one wants to define the transformation "apprendre quelque chose à quelqu'un" -> "enseigner quelque chose à quelqu'un" ('teach something to someone'), the knowledge base will state: apprendre -> enseigner, together with a structural rule whose left and right tree patterns are identical [diagram omitted]: in this precise case there is no structural modification, but the structure is nevertheless necessary. With the same structural rule we can have in the knowledge base the transformation offrir à -> donner à, yielding the transformation "offrir quelque chose à quelqu'un" -> "donner quelque chose à quelqu'un" ('offer/give something to someone'). With a single structural rule we have thus defined two potentially applicable rules. The advantage of such a definition is obvious: factorization of the rules, independence of the grammar from the lexicon, and the possibility of defining a specific behaviour for each element of the lexicon without having to define an excessively large grammar of structural transformations.

The SYGMART system.

The SYGMART system is an operational system simulating a transformational model of structured elements. It is composed of three subsystems, OPALE, TELESI and AGATE, each corresponding to one of the essential functions of text processing:

- OPALE performs the transition from text to structured element.
- TELESI performs the transformation of structured elements.
- AGATE performs the transition from structured element to text.

The general form of the application of a subsystem is the following:

    program data -> compilation -> compiled data
    text -> simulation (using the compiled data) -> image
L'entr4e du sys- t~me est compos~ soit du r~sultat du sous-syst~me OPALE soit du r~sultat de l'application de ce sous-syst~me lui-m~me. Le dictionnaire associ4 au sous-syst~me TELESI d~finit la base de con- naissances h associer auX r~gles de transforma- tions. Cette application du contenu du dictionnai- re par rapport aux r~gles de transformations, s'effectue de mani~re dynamique. Le sous-syst~me AGATE : Ce dernier sous-syst~me d~finit la transfor- mation ~l~ment structur~ texte. Cette transfor- mation est n~cessaire dans beaucoup d'application et s'effectue par le parcours canonique d'une arborescence d'un champ d~termin~. Chaque ~tiquet- te associ~e ~ un point de ce parcours permet de d~finir un mot ~ l'alde d'un automate d'~tats finis de synth~se, mirolr du sous-syst~me OPALE. La forme g~n~rale de l'application du syst~me SYGMART est la suivante : '~TELESI OPALE . ~l&nent AGATE ) texte texte structur~ Du point de rue pratique, le syst~me SYGMART existe en trois versions. Deux versions PL/I et une version C. Les versions PL/I sont d~flnies sous les syst~nes IBM OS/MVS et Honeywell Multics. La version C est d~finie sous le syst~me UNIX et fonctionne sons un syst~me ~ base du microproces- seur MC680OO. Une r~alisatlon sur une traduction automatique Espagnol-Frangals effectu~e au CELTA avec le syst~me SYGMART donne un exemple du temps d'ex~cution n~cessaire : la traduction d'un texte de 800 mots trait~s ensembles (et non phrase par phrase, ce qui implique la manipulation d'arbo- rescences et d'~l~ments structures de plus d'un millier de points) a ~t~ r~alis~e sur un Amdahl 470/V7 en 33 mn 38 s (soit 14 106 op~rations/mots) La version micro-ordinateur n~cessite une m~moire d'au moins 756 Ko et un dlsque dur d'au moins 20 Mo. Les trois exemples sulvants sont extraits de trois r~alisations distlnctes et repr4sentent des parties de gra*mnaires TELESI : 1) extrait de la grammaire d'analyse de l'espa- gnol C. VIGROUX CELTA France. 2) extrait de la grammaire d'analyse du Chinois WANG HUIN LIN Institut de Linguistique Pekin Chine. 3) extrait de la grammaire d'analyse du N~erlandais P. ROLF Universit~ Catholique de Nim~gue Hollande. --~ --m--= --~= --=--= --= --= -~=- REFERENCES : [ 1 ] : BOITET C., GUILLAUME P., QUEZEL-AMBRUNAZ M Manipulation d'arborescences et parall~lis- me : syst~me ROBRA, COLING 1978. [ 2 U : ~UORE 3. Transducteurs et arborescences Th~se, Grenoble 1975. [ 3 ] : c CHE j Le Syst~me SYGMART Document privisoire, Le Havre 1980. [ 4 ] : CHAUCHE J., CHEBOLDAEFF V., JATTEAU M., LESCOEUR R. Specification d'un syst~me de traduction assist~e par ordinateur. [ 5 ] : COU'~E~UER A. Les syst~mes Q, Universit~ de Montreal 1970. [ 6 ] : n.a~ A, BOURQUIN Me, ATTALI A., I~COMTE J. Les probl~mes li~s au passage de la structure de surface vers la structure d'interface. CELTA Nancy, 1981. [ 7 ] : GLADKIJ A.V., MEL'CUK I.A. Tree grammars, Linguistics Mouton 1975. [ 8 ] : MENDELSON Introduction to mathematical logic VAN NOSTRAND 1964 [9] : WANG H. La place de la modalit~ dans un syst~me de traduction automatique trilingue Fran~ais-Anglals-Chinois. Thase, NANCY 1983 15
THERE STILL IS GOLD IN THE DATABASE MINE

Madeleine Bates
BBN Laboratories
10 Moulton Street
Cambridge, MA 02238

Let me state clearly at the outset that I disagree with the premise that the problem of interfacing to database systems has outlived its usefulness as a productive environment for NL research. But I can take this stand strongly only by being very liberal in defining both "natural language interface" and "database systems".

Instead of assuming that the problem is one of using typed English to access and/or update a file or files in a single database system, let us define a spectrum of potential natural language interfaces (limiting that phrase, for the moment, to mean typed English sentences) to various kinds of information systems. At one end of this spectrum is simple, single database query, in which the translation from NL to the db system is quite direct. This problem has been addressed by serious researchers for several years, and, if one is to measure productivity in terms of volume, has proved its worth by the number of papers published and panels held on the subject. Indeed, it has been so deeply mined that the thought "Oh, no! Not another panel on natural language interfaces to databases!" has resulted in this panel, which is supposed to debate the necessity of continuing work in this area rather than to debate technical issues in the area.

And yet if this problem has been solved, where is the solution? Where are the applications of this research? True, commercial natural language access interfaces for some database systems have been available for several years, and new ones are being advertised every month. Yet these systems are, now, not very capable. For example, one of these systems carried on the following sequence of exchanges with me:

User: Are all the vice presidents male?
System: Yes.
User: Are any of the vice presidents female?
System: Yes.
User: Are any of the male vice presidents female?
System: Yes.

Nothing was unusual about either this database or the corporate officers represented in it. The system merely made no distinction between "all" and "any", and interpreted the final query to mean the same as "Are there any vice presidents who are either male or female?". This same system, when asked for all the Michigan doctors and Pennsylvania dentists, produced a list of all the people who were either doctors or dentists and who lived in either Michigan or Pennsylvania. This is the state of our art?
Extending the notion of database access to one of knowledge-base access where information may be manipulated in more complex ways, it is easy to generate natural examples of counterfactual conditionals ("If I hadn't sold my IBM stock and had invested my savings in that health spa for cats, what would my net worth be now?"), word sense ambiguity (the word "yield" is ambiguous if there is both financial and productivity data in the knowledge base), and other complex linguistic phenomena. Let us go on to define the other end of the spectrum I began to explicate above. At this end lles a conversational system for query, display, update, and interaction in which the system acts like a helpful, intelligent, knowledgeable assistant. In this situation, the user carries on a dialogue (perhaps using speech) using language in exactly the same way s/he would interact with a human assistant. The system being interfaced to would, in this case, be much more complex than a 184 single database; it might include a number of different types of databases, an "expert system" or two, fancy display capabilities, and other goodies. In this environment, the user will quite naturally employ a wider variety of linguistic forms and speech acts than when interfacing to a simple db system. One criticism of the simple db interfaces is that the interpretive process of mapping from language concepts onto database concepts is sufficiently unlike the interpretation procedures for other uses of natural language that the db domain is an inappropriate model for study. But not all of the db interfaces, simple or more complex, perform such a direct translation. There is a strong argument to be made for understanding language in a fairly uniform way, with little or no influence from the fact that the activity to be performed after understanding is db access as opposed to some other kind of activity. The point of the spectrum is that there is a continuum from "database" to "knowledge base", and that the supposed limitations of one arise from the application of techniques that are not powerful enough to generalize to the other. The fault lies in the inadequate theories, not in the problem environment, and radically changing the problem environment will not guarantee the development of better theories. By relaxing one constraint at a time (in the direction of access to update, one database system to many, a database system to a knowledge-based system, simple presentation of answers to more complex resonses, static databases to dynamic ones, etc.), the research environment can be enriched while still providing both a base to build on and a way to evaluate results based on what has been done before. ~9_~ Research ~ Related to Databases Here are a few of the areas which can be considered extensions of the current interest in database interfaces and in which considerable research is needed. Large, shiny nuggets of theory are waiting to be discovered by enterprising computational linguists! I. Speech input. Interest in speech input to systems is undergoing a revival in both research and applications. Several "voice typewriters" are likely to be marketed soon, and will probably have less capability than the typed natural language interfaces have today. But, technical and theoretical problems of speech recognition aside, natural spoken language is different linguistically from natural written language, and there remains a lot of work to be done to understand the exact nature of these differences and to develop ways to handle them. 2. 
"Real language". or spoken) language ungrammaticalities, telegraphic compression, By which is meant (written complete with errors, Jargon, abbreviations, etc. Research in these areas has been going on for some time and shows no sign of running dry. 3. Generating language. An intelligent database interface assistant should be able to interject comments as appropriate, in addition to displaying retrieved data. 4. Extended dialogues. What do we really know about handling more than a few sentences of context? How can a natural conversation be carried on when only one of the conversants produces language? If able to generate language as well as to understand it, a database assistant could carry on a natural conversation with the user. 5. Different types of data bases and data. By extending the notion of a static, probably relational, database to one that changes in real time, contains large amounts of textual data, or is more of a knowledge base than a data base, one can manipulate the kind of language that a user would "naturally" use to access such a system, for example, complex tense, time, and modality expressions are almost entirely absent from simple database query, but this need not be the case. All of this is not to say that all the research problems in computational linguistics can be carried on even in the extended context of database access. It is rather a plea for careful individual evaluation of problems, with a bias toward building on work that has already been done. This environment is a rich one. We can choose to strip it carelessly of the easy-to-gather nuggets near the surface and then go on to another environment, or we can choose to mine it as deeply as we can for as long as it is productive. Which will our future colleagues thank us for? 185
Is There Natural Language after Data Bases?

Jaime G. Carbonell
Computer Science Department
Carnegie-Mellon University
Pittsburgh, PA 15213

1. Why Not Data Base Query?

The undisputed favorite application for natural language interfaces has been data base query. Why? The reasons range from the relative simplicity of the task, including shallow semantic processing, to the potential real-world utility of the resultant system. Because of such reasons, the data base query task was an excellent paradigmatic problem for computational linguistics, and for the very same reasons it is now time for the field to abandon its protective cocoon and progress beyond this rather limiting task. But, one may ask, what task shall then become the new paradigmatic problem? Alas, such a question presupposes that a single, universally acceptable, syntactically and semantically challenging task exists. I will argue that better progress can be made by diversification and focusing on different theoretically meaningful problems, with some research groups opting to investigate issues arising from the development of integrated multi-purpose systems.

2. But I Still Like Data Bases...

Well, then, have I got the natural language interface task for you! Data base update presents many unsolved problems not present in pure query systems. "Aha," the data base adherents(1) would say, "just a minor extension to our work!" Not at all; there is nothing minor about such an extension [4]. Consider, for example, the following update request to an employee-record data base:

"Smith should work with the marketing team and Jones with sales"

First, the internal ellipsis in the coordinate structure is typical of such requests, but is mostly absent from most DB queries. However, let us assume that such constructions present no insurmountable problems, so that we can address an equally fundamental issue: What action should the system take? Should Smith be deleted from sales and added to marketing (and vice versa for Jones)? Or, should Smith and Jones remain fixed points while all other sales and marketing employees are swapped? As Kaplan and Davidson [3] point out, one can postulate heuristics to ameliorate the problem. They proposed a minimal mutilation criterion, whereby the action entailing the smallest change to the data base is preferred. However, their bag of tricks fails miserably when confronted with examples such as:

"The sales building should house the marketing people and vice versa"

Applying the above heuristic, the bewildered system will prefer to uproot the two buildings, swap them, and lay them on each other's foundations. Then, only two DB records need to be changed. Such absurdities can only be forestalled if a semantic model of the underlying domain is built and queried, one that models actions, including their preconditions and consequences, and knows about objects, relations, and entailments. So, data base update presents many difficult issues not apparent in the simpler data base query problem. Why not, then, select this as the paradigmatic task?

(1) I must confess that I would have to include myself in any group claiming adherence to data base query as a unifying task. I am still actively working in the area, and to some extent expect to continue doing so. The practical applications are immense, but theoretical breakthroughs require fresh ideas and more challenging problems. Hence I advocate a switch based on scientific research criteria, rather than practical applicability or engineering significance.
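Before turning to that question, it is worth noting that the minimal mutilation criterion itself fits in a few lines. The sketch below (invented record layout, not Kaplan and Davidson's actual implementation) shows both the heuristic and why it endorses the absurd building swap: swapping the buildings touches two records, while moving four employees touches four.

    # A sketch of the minimal-mutilation heuristic: among candidate update
    # plans, prefer the one that changes the fewest database records.
    def minimal_mutilation(candidates):
        # each candidate is a list of (table, key, field, new_value) changes
        return min(candidates, key=len)

    # "The sales building should house the marketing people and vice versa"
    swap_people = [("employee", e, "building", b)
                   for e, b in [("e1", "sales-bldg"), ("e2", "sales-bldg"),
                                ("e3", "mktg-bldg"), ("e4", "mktg-bldg")]]
    swap_buildings = [("building", "sales-bldg", "site", "mktg-site"),
                      ("building", "mktg-bldg", "site", "sales-site")]

    # The heuristic happily "uproots the buildings": 2 changes beat 4.
    print(minimal_mutilation([swap_people, swap_buildings]) is swap_buildings)  # True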
My only objection is to the definite article the--I advocate data base update as one of several theoretically significant tasks with major practical utility that should be selected. Other tasks highlight additional problems of an equally meaningful and difficult nature.

3. How Should I Select A Good Task Domain?

At the risk of offending a number of researchers in computational linguistics, I propose some selection criteria illustrated both by tasks that fail to meet them, and later by a much better set of tasks designed to satisfy these criteria for theoretical significance and computational tractability.

1. The task should, if possible, be able to build upon past work, rather than addressing a completely disjoint set of problems. This quality enhances communication with other researchers, and enables a much shorter ramp-up period before meaningful results can be obtained. For instance, an automated poetry comprehension device fails to meet this criterion.

2. The task should be computationally tractable and grounded in an external validation test. Interfaces to as yet non-existent systems, or ones that must wait for radically new hardware (e.g., connectionist machines) before they can be implemented, fail to meet this criterion. However, data base query interfaces met this criterion admirably.

3. The task should motivate investigation of a set of language phenomena of recognizable theoretical significance that can be addressed from a computational standpoint. Ideally, the task should focus on restricted instances of a general and difficult phenomenon to encourage progress towards initial solutions that may be extended to (or may suggest) solutions to the general problem. Data base query has been thoroughly mined for such phenomena; hence it is time to go prospecting on virgin land.

4. The task should be of practical import, or should be a major step towards a task of practical import. Aside from very real if mundane concerns of securing funding, one desires a large, eager, potential user community as an inexhaustible source of examples, needs, encouragement, and empirical motivation and validation. A parser for Sumerian cuneiform tablets or a dialog engine built around the arbitrary rules of a talk-show game such as "You don't say" would completely fail on this criterion.

4. What Then Are Some Other Paradigmatic Tasks?

Armed with the four criteria above, let us examine some tasks that promise to be quite fruitful both as vehicles for research and as means of providing significant and practical natural language interfaces.

• Command Interfaces to Operating Systems -- Imperative command dialogs differ from data base queries in many important ways beyond the obvious differences in surface syntactic structure. But, much of the research on limited-domain semantics, ambiguity resolution, ellipsis and anaphora resolution can be exploited, extended and implemented in such domains. Moreover, there is no question as to the practical import and readily-available user community for such systems. What new linguistic phenomena do they highlight? More than one would expect. In our preliminary work leading up to the PLUME interface to the VMS operating system, we have found intersentential meta-language utterances, cross-party ellipsis and anaphora, and dynamic language redefinition, to name a few. An instance of intersentential meta-language typical to this domain would be:

USER: Copy foo.bar to my directory.
SYST: File copied to [carbonell]foo.bar.
USER: Oops, I meant to copy lure.bar.
There is no "oops command", nor any act for editing, re- executing, and undoing the effects of a prior utterance in the discourse. This is a phenomenon not heretofore analyzed, but one whose presence and significance was highlighted by the choice of application domain. See[2] for additional discussion of this topic. • Interfaces to expert systems -- There is little question about the necessity, practicality and complexity of such a task. One can view expert systems as reactive, super data bases that require deduction in addition to simple retrieval. As such, the task of interpreting commands and providing answers is merely an extension of the familiar data-base retrieval scenario. However, much of the interesting human computer interaction with expert systems, as we discovered in our XCALIBUR interface[I], goes beyond this simple interaction. To wit, expert system interfaces require: o Mixed-initiative communication, where the system must take the initiative in order to gather needed information from the user in a focused manner. o Explanation generation, where the system must justify its conclusion in human-comprehensible terms, requiring user modelling and comparative analysis of multiple viable deduction paths. o Knowledge acquisition, where information supplied in natural language must be translated and integrated into the internal workings of the system. • Unified multi-function interfaces -- Ideally one would desire communication with multiple "back ends" (expert systems, data bases, operating systems, utility packages, electronic mail systems, etc.) through a single uniform natural language interface. The integration of multiple discourse goals and need to transfer information across contexts and subtasks present an additional layer of problems .- mostly at the dialog structure level -- that are absent from interfaces to single-task, single-function backends. The possible applications meeting the criteria have not by any means been enumerated exhaustively above. However, these reflect an initial set, most of which have received some attention of late from the computational linguistics community, and all appear to define theoretically and practically fruitful areas of research. 5. References 1. CarbonelL J.G., Boggs, W.M., Mauldin, M.L. and Anick, P.G., "The XCALIBUR Project, A Natural Language Interface to Expert Systems," Proceedings of the Eighth International Joint Conference Dn Artificial Intelligence. 1983. 2. Carbonell, J. G.. "Meta-Language Utterances in Purposive Discourse," Tech. report, Carnegie.Mellon University, Computer Science Department, 1982. 3. Kaplan. S.J. and Davidson, J., "Interpreting Natural Language Data Base Updates," Proceedings of the 19th Meeting of the Association for Computational Linguistics. 1981. 4. Salvater. S., "Natural Language Data Ba,s~ Update," Tech. report 84/001, Boston University, 1984. 187
Panel on Natural Language and Databases

Daniel P. Flickinger
Computer Research Center
Hewlett-Packard Company
1501 Page Mill Road
Palo Alto, California 94304 USA

While I disagree with the proposition that database query has outlived its usefulness as a test environment for natural language processing (for reasons that I give below), I believe there are other reasonable tasks which can also spur new research in NL processing. In particular, I will suggest that the task of providing a natural language interface to a rich programming environment offers a convenient yet challenging extension of work already being done with database query.

First I recite some of the merits of continuing research on natural language within the confines of constructing an interface for ordinary databases. One advantage is that the speed of processing is not of overwhelming importance in this application, since one who requests information from a database can expect the retrieval to take time, with or without a natural language interface. Of course speed is desirable, and waiting for answers to apparently simple requests will be irritating, but some delay will be tolerable. This tolerance on the part of the user will, I suggest, disappear in applications where an attempt is made to engage a system in dialogue with the user, as would be the case in some expert systems, or in teaching systems. Assuming that natural language systems will not by themselves get faster as they are made to cope with larger fragments of a natural language, it will be useful to continue with database query while we wait for miracles of technology to fill our demands for ever greater processing speed.

A second reason for not yet abandoning the database query as a test environment is that a great deal of important natural language processing research remains to be done in generalizing systems to cope with more than one natural language. Work on language universals gives reason to believe that some significant part of a natural language system for English should be recyclable in constructing a system for some other language. How much these cross-linguistic concerns ought to affect the construction of a particular system is itself one of the questions deserving of attention, but our experience to date suggests that it pays to avoid language-particular solutions in an implementation which aspires to treatment of any sizable fragment of a language, even a single language like English. The degree of language-independence that a natural language system can boast may also prove to be one useful metric for evaluating and comparing such systems. It seems clear that even the task of answering database queries will provide a more than adequate supply of linguistically interesting problems for this line of research.

Finally, it has simply not been our experience at Hewlett-Packard that there is any shortage of theoretically interesting problems to solve in constructing a natural language interface for databases. For example, in building such an interface, we have recently designed and implemented a hierarchically structured lexicon for a fragment of English, together with a set of lexical rules that can be run either when loading the system, or when parsing, to greatly expand the size of the lexicon actually used in the system.
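A minimal sketch of that design choice follows, with invented entries rather than HP's actual lexicon: features are inherited down a hierarchy, and a lexical rule such as third-singular formation can be applied eagerly when the lexicon is loaded, or lazily during parsing.

    # A sketch of a hierarchically structured lexicon plus one lexical rule.
    LEXICON = {
        "verb":       {"cat": "V"},
        "transitive": {"parent": "verb", "subcat": ["NP"]},
        "send":       {"parent": "transitive", "form": "send"},
    }

    def entry(word):
        # collect features by inheritance up the hierarchy
        feats, node = {}, LEXICON.get(word)
        while node:
            feats = {**node, **feats}          # lower entries override parents
            node = LEXICON.get(node.get("parent"))
        feats.pop("parent", None)
        return feats

    def third_singular(feats):
        # a lexical rule deriving the inflected -s form from a base entry
        return {**feats, "form": feats["form"] + "s", "agr": "3sg"}

    base = entry("send")
    # load-time expansion; the same rule could instead be run only when an
    # inflected form is actually encountered during parsing
    loaded = {e["form"]: e for e in (base, third_singular(base))}
    print(loaded["sends"]["subcat"])   # ['NP'] -- inherited through "transitive"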
Several questions of theoretical interest that arose in that process remain unanswered; at least some can be answered by experimenting with our present system, functioning simply as an interface to an ordinary relational database.

Having argued that significant work remains to be done in natural language processing as an interface to databases, I nonetheless believe that it would be fruitful to expand the scope of a natural language interface, to permit some manipulation of a programming environment, allowing not only the retrieval of information describing the state of the system, but also some modification of the system via natural language. Of course, such a task would be facilitated by having the information about the environment stored in a manner similar to that of a database, so that our attention could be devoted to the new range of linguistic issues raised, rather than to details of how the whole programming environment is structured and maintained. I will not offer an account of how such a merging of database and general programming environment might be accomplished, but instead will offer some motivation for stretching natural language research in this direction.

It seems clear, first of all, that such an interface would be useful, given that even a common programming environment provides a wide array of tools, not all of which are familiar to any one user. While it is usually the case that one who is accustomed to a given facility would be hampered by having to employ only a natural language interface to accomplish familiar tasks (e.g., imagine typing "Move down to the beginning of the next line" every time a carriage-return was required), such an interface would be invaluable when trying to utilize an unfamiliar part of the system. A related benefit would be the ability of a user new to a programming environment to customize that environment without any detailed knowledge of it. This indirect access to the multitude of parameters that determine the behavior of a complex environment would also be convenient for an experienced user attempting to alter some rarely-changed aspect of the environment. Such a natural language interface might also cope with "how to" questions, at least serving as another link to on-line documentation.

The linguistically interesting issues that such an extended interface would raise include a greater need for some language production capability (where the ordinary database query system can get by with only language understanding), and a greater need for some discourse representation. I suspect that some new syntactic constructions might also appear, rare in a database application but more common in programming applications. Using an extended interface of this kind, some dialogue between the user and the system would be useful, especially in cases where a request was too vague, and the system (like an expert system) could present a series of choice points to the user in order to reduce the original request to a manageable one. Presenting these choices would provide a convenient forum for research in language production, while suffering the disadvantage mentioned above of forcing us to worry more about the speed with which the system performs.
Issues concerning discourse representation could be studied with this kind of task in a fairly natural way, since questions about a programming environment would have to do in part with the changes taking place during a session, so that the system would want to keep track of at least some history of a session, both previous events and previous discourse. In addition to providing a testbed for discourse-related research, a system like this would also offer a good setting for study of tense and aspect issues which are not so readily raised in a simple database query application.

A final advantage of extending a natural language interface to include the programming environment is that if the interface were being developed in such an environment, one could use natural language to develop the natural language system itself, a property that would be not only useful but also elegant.
Natural Language for Expert Systems: Comparisons with Database Systems

Kathleen R. McKeown
Department of Computer Science
Columbia University
New York, N.Y. 10027

1 Introduction

Do natural language database systems still provide a valuable environment for further work on natural language processing? Are there other systems which provide the same hard environment for testing, but allow us to explore more interesting natural language questions? In order to answer no to the first question and yes to the second (the position taken by our panel's chair), there must be an interesting language problem which is more naturally studied in some other system than in the database system. We are currently working on natural language for expert systems at Columbia and thus, expert systems provide a natural alternative environment to compare against the database system. The relatively recent success of expert systems in commercial environments (e.g. Stolfo and Vesonder 83, McDermott 81) indicates that they meet the criteria of a hard test environment.

In our work, we are particularly interested in developing the ability to generate explanations that are tailored to the user of the system based on the previous discourse. In order to do this in an interesting way, we assume that explanation will be part of natural language dialog with the system, allowing the user maximum flexibility in interacting with the system and allowing the system maximum opportunity to provide different explanations. The influence of the discourse situation on the meaning of an utterance and the choice of response falls into the category of pragmatics, one of the areas of natural language research which has only recently begun to receive much attention. Given this interesting and relatively new area in natural language research, my goals for the paper are to explore whether the expert system or database system better supports study of the effect of previous discourse on current responses and in what ways.(1)

2 Pragmatics and Databases

There have already been a number of efforts which investigate pragmatics in the database environment. These fall into two classes: those that are based on Gricean principles of conversation and those that make use of a model of possible user plans. The first category revolves around the ability to make use of all that is known in the database and principles that dictate what kind of inferences will be drawn from a statement in order to avoid creating false implicatures in a response. Kaplan (79) first applied this technique to detect failed presuppositions in questions when the response would otherwise be negative and to generate responses that correct the presupposition instead.(2) Kaplan's work has only scratched the surface as there have followed a number of efforts looking at different types of implicatures, the most recent being Hirschberg's (83) work on scalar implicature. She identifies a variety of orderings in the underlying knowledge base and shows how these can interact with conversational principles both to allow inferences to be drawn from a given utterance and to form responses carrying sufficient information to avoid creating false implicatures.(3) Webber (83) has indicated how this work can be incorporated as part of a database interface.

(1) The work described in this paper is partially supported by ONR grant N00014-82-K-0256.

(2) Kaplan's oft-quoted example of this occurs in the following sequence. If response (B) were generated, the false implicature that CSE110 was given in Spring '77 would be created. (C) corrects this false presupposition and entails (B) at the same time.
A: How many students failed CSE110 in Spring '77?
B: None.
C: CSE110 wasn't given in Spring '77.

(3) For example, knowledge about set membership allows the inference that not all the Bennets were invited to be drawn from response (E) to question (D):
D: Did you invite the Bennets?
E: I invited Elizabeth.
The second class of work on pragmatics and language for information systems was initiated by Allen and Perrault (80), and Cohen (78) and involves maintaining a formal model of possible domain plans, of speech acts as plans, and of plausible inference rules which together can be used to derive a speaker's intended meaning from a question. Their work was done within the context of a railroad information system, a type of database. As with the Gricean-based work, their approach is being carried on by others in the field. An example is the work of Carberry (83) who is developing a system which will track a user's plans and uses this information to resolve pragmatic overshoot. While this work has not been done within a traditional database system, it would be possible to incorporate it if the database were supplemented with a knowledge base of plans.

All of these efforts make use of system knowledge (whether database contents or possible plans), the user's question, and a set of rules relating system knowledge to the question (whether conversational principles or plausible inference rules) to meet the user's needs for the current question. That this work is relatively recent and that there is promising ongoing work on related topics indicates that the database continues to provide a good environment for research issues of this sort.

3 Extended Discourse

What the database work does not address is the influence of previous discourse on response generation. That is, given what has been said in the discourse so far, how does this affect what should be said in response to the current question?(4) Our work addresses these questions in the context of a student advisor expert system.(5) To handle these questions, we first note that being able to generate an explanation (the type of response that is required in the expert system) that is tailored to a user requires that the system be capable of generating different explanations for the same piece of advice. We have identified 4 dimensions of explanation which can each be varied in an individual response: point of view, level of detail, discourse strategy, and surface choice. For example, in the student advisor domain, there are a number of different points of view the student can adopt of the process of choosing courses to take. It can be viewed as a state model process (i.e., "what should be completed at each state in the process?"), as a semester scheduling process (i.e., "how can courses fit into schedule slots?"), as a process of meeting requirements (i.e., "how do courses tie in with requirement sequencing?"), or as a process of achieving a balanced workload.

(4) Note that some natural language database systems do maintain a discourse history, but in most cases this is used for ellipsis and anaphora resolution and thus, plays a role in the interpretation of questions and not in the generation of responses.

(5) This system was developed by a seminar class under the direction of Salvatore Stolfo. We are currently working on expanding the capabilities and knowledge of this system to bring it closer to a general problem solving system (Matthews 84).
Given these different points of view, a number of different explanations of the same piece of advice (i.e., yes) can be generated in response to the question, "Should I take both discrete math and data structures next semester?":

• State Model: Yes, you usually take them both first semester sophomore year.
• Semester Scheduling: Yes, they're offered next semester, but not in the spring and you need to get them out of the way as soon as possible.
• Requirements: Yes, data structures is a requirement for all later Computer Science courses and discrete math is a co-requisite for data structures.
• Workload: Yes, they complement each other and while data structures requires a lot of programming, discrete does not.

To show that the expert system environment allows us to study this kind of problem, we first must consider what the obvious natural language interface for an expert system should look like. Here it is necessary to examine the full range of interaction, including both interpretation and response generation, in order to determine what kind of discourse will be possible and how it can influence any single explanation. A typical expert system does problem-solving by gathering information relevant to the problem and making deductions based on that information. In some cases, that information is gathered from a system environment, while in others, the information is gathered interactively from a user. This paper will be limited to backward chaining systems that gather information interactively as these provide a more suitable environment for natural language (in fact, it is unclear how natural language would be used at all in other systems, except to provide explanations after the system has produced its advice). In a backward chaining system, the expert system begins by pursuing a goal (for example, to diagnose the patient as having myocardia). To ascertain whether the goal holds or not, the system gathers information from the user, often using multiple choice questions to do so. Depending on the answer given to a single question, the system forms a partial hypothesis and asks other questions based on that hypothesis.

If natural language were used in place of such a menu-like interface, the interaction might look somewhat different. Instead of the system beginning by asking questions, the user might initiate interaction by suggesting a plausible goal (for example, a likely diagnosis), supporting it with several justifications. The system could use this information to decide which goal to pursue first and to fill in information which the system would otherwise have had to request using menu questions. Alternatively, if the system has several top-level problem solving capabilities (e.g., perform a diagnosis or recommend drug treatment), the user's initial question may indicate which of these problem solving capabilities is being called on. Again, the system can use this information to avoid asking a question it would normally have to ask. The use of natural language as an "overlay" on an underlying menu system to allow the user to directly state his/her goals, to skip irrelevant questions, and to provide information to a sequence of menu questions in a single utterance is an issue we are currently exploring at Columbia.
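A sketch of the "overlay" idea follows, with invented slot names and toy pattern matching standing in for real interpretation: the menu questions remain the backbone of the backward chainer's information gathering, but any slot the user's opening utterance already fills is skipped.

    # A sketch of menu questions overlaid with natural language input.
    QUESTIONS = [
        ("capability", "Do you want to (a) plan a schedule or (b) get course information?"),
        ("year",       "What year are you?"),
        ("taken",      "What courses have you already taken?"),
        ("preference", "Are there any courses you'd particularly like to take?"),
    ]

    def extract_slots(utterance):
        # stand-in for real interpretation of the user's first utterance
        text = utterance.lower()
        slots = {}
        if "sophomore" in text:
            slots["year"] = "sophomore"
        if "finished data structures" in text:
            slots["taken"] = ["data structures"]
        if "what courses should i take" in text:
            slots["capability"] = "plan-schedule"
        return slots

    def gather(utterance):
        slots = extract_slots(utterance)
        for slot, question in QUESTIONS:
            if slot not in slots:          # skip anything the user volunteered
                slots[slot] = input(question + " ")
        return slots

On the sample dialogue, gather("I'm a sophomore and just finished data structures. What courses should I take this spring?") asks only the discrete-math and preference questions, exactly the three-question saving noted below.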
For example, the underlying expert system in the student advisor domain has two possible problem solving capabilities: it can help the student plan next semester's schedule or it can provide information about university courses. Using the menu interface, the system would first query the user to ask which of these is desired. If the student chose to plan the next semester schedule, the system next asks a series of questions to determine the student's year, what courses have already been taken, and what courses the student wants to take. A hypothetical natural language interaction (towards which we are working) is shown below. Note that the system can avoid asking three questions in this case (the required problem solving capability, the student's year, and the courses already taken) as these are all indicated in the first user utterance. In the last system response, the system provides its advice with justification:

1) User: I'm a sophomore and just finished data structures. What courses should I take this spring?
2) System: Have you taken discrete math?
3) User: Yes.
4) System: Are there any courses you'd particularly like to take?
5) User: I'd like to take less programming courses.
6) System: I suggest you take fundamental algorithms and finite math which are both offered next semester. You should have finished both courses by the end of your sophomore year and only fundamental algorithms requires programming.

There are a number of ways in which this type of discourse allows us to address our objectives of taking previous discourse into account to generate tailored responses. This discourse segment is clearly concerned with a single purpose which is stated by the user at the beginning of the session.(6) This is the goal that the expert system must pursue and the ensuing discourse is directed at gathering information and defining criteria that are pertinent to this goal. Since the system must ask the user for information to solve the problem, the user is given the opportunity to provide additional relevant information. Even if this information is not strictly necessary for the problem-solving activity, it provides information about the user's plans and concerns and allows the system to select information in its justification which is aimed at those concerns. Thus, in the above example, the system can use the volunteered information that the user is a sophomore and wants to take less programming courses to tailor its justification to just those concerns, leaving out other potentially relevant information.

Is this type of extended discourse, revolving around an underlying goal, possible in the database domain? First, note that extended discourse in a natural language database system would consist of a sequence of questions related to the same underlying goal. Second, note that the domain of the database has a strong influence on whether or not the user is likely to have an underlying goal requiring a related sequence of questions. In domains such as the standard suppliers and parts database (Codd 78), it is hard to imagine what such an underlying goal might be. In domains such as IBM's TQA town planning database (Petrick 82), on the other hand, a user is more likely to ask a series of related questions. Even in domains where such goals are feasible, however, the sequence of questions is only implicitly related to a given goal. For example, suppose our system were a student advisor database in place of an expert system.
As in any database system, the user is allowed to ask questions and will receive answers. Extended discourse in this environment would be a sequence of questions which gather the information the user needs in order to solve his/her problem. Suppose the user again has the goal of determining which courses to take next semester. S/he might ask the following sequence of questions to gather the information needed to make the decision:

1. What courses are offered next semester?
2. What are the pre-requisites?
3. Which of those courses are sophomore level courses?
4. What is the programming load in each course?

Although these questions are all aimed at solving the same problem, the problem is never clearly stated. The system must do quite a bit of work in inferring what the user's goal is as well as the criteria which the user has for how the goal is to be satisfied. Furthermore, the user has the responsibility for determining what information is needed to solve the problem and for producing the final solution. In contrast, in the expert system environment, the underlying expert system has responsibility for coming up with a solution to the given problem and thus, the natural language system is aware of information needed to solve that goal. It can use that information to take the responsibility for directing the discourse towards the solution of the goal (see Matthews 84). Moreover, the goal itself is made clear in the course of the discourse. Such discourse is likely to be segmented into discernable topics revolving around the current problem being solved. Note that one task for the natural language system is determining where the discourse is segmented and this is not necessarily an easy task. When previous discourse is related to the current question being asked, it is possible to use it in shaping the current answer. Thus, the expert system does provide a better environment in which to explore issues of user modeling based on previous discourse.

(6) Over a longer sequence of discourse, more than a single user goal is likely to surface. I am concerned here with discourse segments which deal with a single or related set of goals.

4 Conclusions

The question of whether natural language database systems still provide a valuable environment for natural language research is not a simple one. As evidenced by the growing body of work on Gricean implicature and user modelling of plans, the database environment is still a good one for some unsolved natural language problems. Nevertheless, there are interesting natural language problems which cannot be properly addressed in the database environment. One of these is the problem of tailoring responses to a given user based on previous discourse and for this problem, the expert system provides a more suitable testbed.

References

(Allen and Perrault 80). Allen, J.F. and C.R. Perrault, "Analyzing intention in utterances," Artificial Intelligence 15, 3, 1980.
(Carberry 83). Carberry, S., "Tracking user goals in an information-seeking environment," in Proceedings of the National Conference on Artificial Intelligence, Washington D.C., August 1983, pp. 59-63.
(Codd 78). Codd, E. F., et al., Rendezvous Version 1: An Experimental English-Language Query Formulation System for Casual Users of Relational Databases, IBM Research Laboratory, San Jose, Ca., Technical Report RJ2144(29407), 1978.
(Cohen 78). Cohen, P., On Knowing What to Say: Planning Speech Acts, Technical Report No. 118, University of Toronto, Toronto, 1978.
Grice, H P., "Logic and conversation," in P. Cole and J. L Morgan (eds) Syntax and Semantics: Speech Acts, Vol. 3, Academic Press, N.Y., 1975. (Hirschberg 83). Hirschberg, J., Scalar quantity implicature: A strategy for processing scalar utterances. Technical Report MS-CIS-83-10, Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, Pa., 1983. (Kaplan 79). Kaplan, S. J., Cooperative responses from a portable natural language database query system Ph. D. dissertation, Univ. of Pennsylvania,Philadelphia, Pa., 1979. (Matthew 84). Matthews, K. and K. McKeown, "Taking the initiative in problem solving discourse," Technical Report, Department of Computer Science, Columbia University, 1984. (McDermott 81). McDermott, J., "Rl: The formative years," A/ Magazine 2:21-9, 1981. (Petrick 82). Petrick, S., "Theoretical /Technical Issues in Natural Language Access to Databases," in Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, Toronto, Ontario, 1982 pp. 51-6. (Stolfo and Vesonder 82). Stolfo, S. and G. Vesonder, "ACE: An expert system supporting analysis and management decision making," Technical Report, Department of Computer Science, Columbia University, 198~, to appear in Bell Systems Technical Journal. (Webber 83). "Pragmatics and database question answering," in Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Karlsruhe, Germany, August 1983, pp. 1204-5. 193
REPRESENTING KNOWLEDGE ABOUT KNOWLEDGE AND MUTUAL KNOWLEDGE

Said Soulhi
Equipe de Comprehension du Raisonnement Naturel
LSI - UPS
118 route de Narbonne
31062 Toulouse - FRANCE

ABSTRACT

In order to represent speech acts in a multi-agent context, we choose a knowledge representation based on the modal logic of knowledge KT4 which is defined by Sato. Such a formalism allows us to reason about knowledge and represent knowledge about knowledge, the notions of truth value and of definite reference.

I INTRODUCTION

Speech act representation and language planning require that the system can reason about intensional concepts like knowledge and belief. A problem resolver must understand the concept of knowledge and know for example what knowledge it needs to achieve specific goals. Our assumption is that a theory of language is part of a theory of action (Austin [4]).

Reasoning about knowledge encounters the problem of intensionality. One aspect of this problem is the indirect reference introduced by Frege [7] during the last century. Mc Carthy [15] presents this problem by giving the following example. Let the two phrases be: Pat knows Mike's telephone number (1) and Pat dialled Mike's telephone number (2). The meaning of the proposition "Mike's telephone number" in (1) is the concept of the telephone number, whereas its meaning in (2) is the number itself. Then if we have: "Mary's telephone number = Mike's telephone number", we can deduce that "Pat dialled Mary's telephone number" but we cannot deduce that "Pat knows Mary's telephone number", because Pat may not have known the equality mentioned above. Thus there are verbs like "to know", "to believe" and "to want" that create an "opaque" context.

For Frege a sentence is a name, the reference of a sentence is its truth value, and the sense of a sentence is the proposition. In an oblique context, the reference becomes the proposition. For example the referent of the sentence p in the indirect context "A knows that p" is a proposition and no longer a truth value.

Mc Carthy [15] and Konolige [11] have adopted Frege's approach. They consider the concepts like objects of a first-order language. Thus one term will denote Mike's telephone number and another will denote the concept of Mike's telephone number. The problem of replacing equalities by equalities is then avoided because the concept of Mike's telephone number and the number itself are different entities. Mc Carthy's distinction concept/object corresponds to Frege's sense/reference or to modern logicians' intension/extension.

Maida and Shapiro [13] adopt the same approach but use propositional semantic networks that are labelled graphs, and that only represent intensions and not extensions, that is to say individual concepts and propositions and not referents and truth values. We bear in mind that a semantic network is a graph whose nodes represent individuals and whose oriented arcs represent binary relations.

Cohen [6], being interested in speech act planning, proposes the formalism of partitioned semantic networks as a data base to represent an agent's beliefs. A partitioned semantic network is a labelled graph whose nodes and arcs are distributed into spaces. Every node or space is identified by its own label. Hendrix [9] introduced it to represent the situations requiring the delimitation of information sub-sets. In this way Cohen succeeds in avoiding the problems raised by the data base approach.
These problems are clearly identified by Moore [17,18]. For example to represent 'A does not believe P', Cohen asserts ~Believe(A,P) in a global data base, entirely separated from any agent's knowledge base. But as Appelt [2] notes, this solution raises problems when one needs to combine facts from a particular data base with global facts to prove a single assertion. For example, from the assertion ~know(John,Q) & know(John, P ⊃ Q), where P ⊃ Q is in John's data base and ~know(John,Q) is in the global data base, it should be possible to conclude ~know(John,P), but a good strategy must be found!

In a nutshell, in this first approach, which we will call a syntactical one, an agent's beliefs are identified with formulas in a first-order language, and propositional attitudes are modelled as relations between an agent and a formula in the object language; but Montague showed that modalities cannot consistently be treated as predicates applying to nouns of propositions.

The other approach no longer considers the intension as an object but as a function from possible worlds to entities. For instance the intension of a predicate P is the function which to each possible world W (or more generally a point of reference, see Scott [23]) associates the extension of P in W. This approach is the one that Moore [17,18] adopted. He gave a first-order axiomatization of Kripke's possible worlds semantics [12] for Hintikka's modal logic of knowledge [10].

The fundamental assumption that makes this translation possible is that an attribution of any propositional attitude like "to know", "to believe", "to remember", "to strive" entails a division of the set of possible worlds into two classes: the possible worlds that go with the propositional attitude that is considered, and those that are incompatible with it. Thus "A knows that P" is equivalent to "P is true in every world compatible with what A knows".

We think that possible worlds language is complicated and unintuitive, since, rather than reasoning directly about facts that someone knows, we reason about the possible worlds compatible with what he knows. This translation also presents some problems for planning. For instance to establish that A knows that P, we must make P true in every world which is compatible with A's knowledge. This set of worlds is a potentially infinite set. The most important advantage of Moore's approach [17,18] is that it gives a smart axiomatization of the interaction between knowledge and action.

II PRESENTATION OF OUR APPROACH

Our approach is comprised in the general framework of the second approach, but instead of encoding Hintikka's modal logic of knowledge in a first-order language, we consider the logic of knowledge proposed by Mc Carthy, the decidability of which was proved by Sato [21], and we propose a prover for this logic, based on natural deduction. We bear in mind that the idea of using the modal logic of knowledge in A.I. was proposed for the first time by Mc Carthy and Hayes [14].

A. Languages

A language L is a triple (Pr, Sp, T) where:
- Pr is the set of propositional variables,
- Sp is the set of persons,
- T is the set of positive integers.

The language of classical propositional calculus is L = (Pr, ∅, ∅). S0 ∈ Sp will also be denoted by 0 and will be called "FOOL".

B. Well Formed Formulas

The set of well formed formulas is defined to be the least set Wff such that:

(W1) Pr ⊆ Wff
(W2) a, b ∈ Wff implies a ⊃ b ∈ Wff
(W3) S ∈ Sp, t ∈ T, a ∈ Wff implies (St)a ∈ Wff

The symbol ⊃ denotes "implication".
(St)a means "S knows a at time t".
<St>a (= ~(St)~a) means "a is possible for S at time t".
{St}a (= (St)a V (St)~a) means "S knows whether a at time t".
Well Formed Formulas The set of well formed formulas is defined to be the least set Wff such as : (W|) PrC Wff (W 2) a,b-~ Wff implies aD b eWff (W 3) S6_Sp,t 6.T,aeWff implles(St)a~_Wff The symbol D denotes "implication". (St)a means "S knows a at time t" <St>a (= % (St) ~ a) means "a is pos- sible for S at time t". {St}a (= (St)a V (St) % a) means "S knows whether a at time t". 195 C. Hilbert-type System KT4 The axiom schemata for KT4 are : At. Axioms of ordinary propositional lo- gic A2. (St)a • a A3. (Ot)a ~ (Or) (St)a A4. (St) (a D b) ~ ((Su)a D(Su)b), where t 6 u A5. (St)a ~ (St) (St)a A6. If a is an axiom, then (St)a is an axiom. Now, we give the meaning of axioms : (A2) says that what is known is true, that is to say that it is impossible to have false knowledge. If P is false, we cannot say : "John knows that P" but we can say "John believes that P". This axiom is the main difference between knowledge and be- lief. This distinction is important for plan- ning because when an agent achieves his goals, the beliefs on which he bases his actions must generally be true. (A3) says that what FOOL knows at time t, FOOL knows at time t that anyone knows it at time t. FOOL's knowledge represents universal knowledge, that is to say all agents knowledge. (A4) says that what is known will remain true and that every agent can apply modus ponens, that is, he knows all the logical consequences of his knowledge. (A5) says that if someone knows something then he knows that he knows it. This a- xiom is often required to reason about plans composed of several steps. It will be referred to as the positive introspec- tive axiom. (A6) is the rule of inference. D. Representation of the notion of truth va- lue. We give a great importance to the repre- sentation of the notion of truth value of a proposition, for example the utterance : John knows whether he is taller than Bill (I) can be considered as an assertion that mentions the truth value of the proposition P = John is taller than Bill, without taking a position as to whether the latter is true or false. In our formalism (I) is represented by : {John} P This disjunctive solution is also adopted by Allen and Perrault D]" Maida and Sha- piro [13] represent this notion by a node because the truth value is a concept (an object of thought). The representation of the notion of truth value is useful to plan questions : A speaker can ask a hearer whether a cer- tain proposition is true, if the latter knows whether this proposition is true. E. Representing definite descriptions in conversational systems : Let us consider a dialogue between two participants : A speaker S and a hea- rer H. The language is then reduced to : Sp = (O,H,S} and T = {l} Let P stand for the proposition : "The description D in the context C is unique- ly satisfied by E". Clark and Marshall [5] give examples that show that for S to refer to H to some en- tity E using some description D in a con- text C, it is sufficient that P is a mu- tual knowledge; this condition is tanta- mount to (O)P is provable. Perrault and Cohen [20] show that this condition is too strong. They claim that an infinite number of conjuncts are necessary for suc- cessful reference : (S) P& (S)(H) e& (S)(H)(S) e & ... with only a finite number of false conjuncts. 
Finally, Nadathur and Joshi ~9] give the following expression as sufficient condition for using D to refer to E : (S) BD (S)(H) P & ~ ((S) BO(S)~(O)P) where B is the conjunction of the set of sentences that form the core knowledge of S and ~ is the inference symbole. III SCHOTTE - TYPE SYSTEM KT4' Gentzen's goal was to build a forma- lism reflecting most of the logical rea- sonings that are really used in mathemati- 196 cal proofs• He is the inventor of natural de- duction (for classical and intultionistic lo- gics). Sato ~|] defines Gentzen - type sys- men GT4 which is equivalent to KT4. We consi- der here, schStte-type system KT4' [22] which is a generalization of S4 and equivalent to GT4 (and thus to KT4), in order to avoid the thinning rule of the system GT4 (which intro- duces a cumbersome combinatory). Firstly, we are going to give some difinitions to intro- duce KT4'. A. Inductive definition of positive and ne- gative parts of a formula F Logical symbols are ~ and V. a. F is a positive part of F. b. If % A is a positive part of F, then A is a negative part of F. c. If ~ A is a negative part of F, then A is a positive part of F. d. If A V B is a positive part of F, then A and B are positive parts of F. Positive parts or negative parts which do not contain any other positive parts or negative parts are called minimal parts. B. Semantic property The truth of a positive part implies the truth of the formula which contains this posi- tive part. The falsehood of a negative part implies the truth of the formula which contains this negative part. C. Notation F[A+] is a formula which contains A as a positive part F[A-] is a formula which contains A as a negative part. F[A+,B-] is a formula which contains A as a positive part and B as a negative part where A and B are disjoined (i. e, o~e is not a subformula of the o- ther). D. Inductive definition of F [.j From a formula F [A], we build another formula or the empty formula F [.] by dele- ting A : a. If F [A 3 ° A, then F[.] is the empty formula. c. If F G[A V BJ or = G V AJ then . = G [BJ. E. Axiom An axiom is any formula of the form F[P+,P-] where P is a propositional varia- ble. F. Inference rules (R!) F[(A V B)j V ~ A, FI(A V B) ] v ~ B ~ FL(A V B) J -- (R2) F[(St)A 3 V~A ~ FT(st)A~ (PO) ~(Su)A 1V ... V ~(Su)Am V ~(Ou)B. V ... V ~(Ou)Bn V C where (Su)A I ..... (Su)Am, (Ou)B I , ..., (Ou) B6 must appear as neg6- tire parts in the conclusion, and uK t 51c 9, F2[C-] F, v F2[J (cut) G. Cut-elimlnation theorem (Hauptsatz) Any KT4' proof-figure can be trans- formed into a KT4' proof-figure with the same conclusion and without any cut as a rule of inference (hence, the rule (R4) is superfluous. The proof of this theo- rem is an extension of Sch~tte's one for $4'. This theorem allows derivations "without detour"• IV DECISION PROCEDURE A logical axiom is a formula of the form F[P+,P-]. A proof is an single-roo- ted tree of formulas all of whose leaves are logical axioms. It is grown upwards from the root, the rules (RI), (R2) and (R3) must be applied in a reverse sense. These reversal rules will be used as "production rules"• The meaning of each production expressed in terms of the pro- granting language PROLOG is an implication• It can be shown [24J that the following strategy is a complete proof procedure : • The formula to prove is at the star- 197 ring node; • Queue the minimal parts in the given for- mula; • Grow the tree by using the rule (R|) in priority , followed by the rule (R2), then by the rule (R3). 
The choice of the rule to apply can be done intelligently. In general, the choice of (R1) then (R2) increases the likelihood of finding a proof because these (reversed) rules give more complex formulas. In the case where (R3) does not lead to a loss of formulas, it is more efficient to choose it first. The following example is given to illustrate this strategy.

Example

Take (A4) as an example and let F0 denote its equivalent version in our language (F0 is at the start node):

F0 = ~(St)(~a V b) V ~(Su)a V (Su)b, where t ≤ u

P+ denotes positive parts and P- denotes negative parts:
P0+ = {~(St)(~a V b), ~(Su)a, (Su)b}
P0- = {(St)(~a V b), (Su)a}

By (R3) we have (no losses of formulas):
F1 = ~(St)(~a V b) V ~(Su)a V b
P1+ = {~(St)(~a V b), ~(Su)a, b}
P1- = {(St)(~a V b), (Su)a}

By (R2) we have:
F2 = F1 V ~(~a V b)
P2+ = P1+ ∪ {~(~a V b)}
P2- = P1- ∪ {~a V b}

By (R1) we have:
F3 = F2 V ~~a
P3+ = P2+ ∪ {~~a, a}
P3- = P2- ∪ {~a}

F4 = F2 V ~b
P4+ = P2+ ∪ {~b}
P4- = P2- ∪ {b}

F4 is a logical axiom because P4+ ∩ P4- = {b}.

Finally, we have to apply (R2) to the last but one node:
F5 = F3 V ~a
P5+ = P3+ ∪ {~a}
P5- = P3- ∪ {a}

F5 is a logical axiom because P5+ ∩ P5- = {a}.

The generated derivation tree is then:

    F5, P5+, P5-  (axiom)
         |
    F3, P3+, P3-     F4, P4+, P4-  (axiom)
          \             /
           F2, P2+, P2-
                |
           F1, P1+, P1-
                |
           F0, P0+, P0-

    Derivation tree
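The strategy above is compact enough to sketch in code. The following Python fragment is our illustration, not the authors' PROLOG formulation, and it implements only a simplified, single-agent, time-free core of the procedure: (St)a is collapsed to a one-place operator K, formulas are nested tuples, derivability is posed as a one-sided sequent (a set read disjunctively), and the reversed rules are tried with the modal rule (R3) before (R2), as in the worked example. The agent and time indices, the queueing of minimal parts, and some reuse of negative parts are omitted.

    # A sketch of proof search for the propositional, single-agent, time-free
    # fragment (essentially S4). Formulas: atoms as strings, ('not', a),
    # ('or', a, b), ('K', a) standing in for (St)a.
    def prove(delta, depth=12):
        """Try to derive the one-sided sequent |- delta (read disjunctively)."""
        delta = frozenset(delta)
        if any(isinstance(f, str) and ('not', f) in delta for f in delta):
            return True                                   # axiom F[P+, P-]
        if depth == 0:
            return False
        for f in delta:                                   # invertible rules first
            if isinstance(f, tuple) and f[0] == 'or':
                return prove(delta - {f} | {f[1], f[2]}, depth - 1)
            if isinstance(f, tuple) and f[0] == 'not' and isinstance(f[1], tuple):
                g = f[1]
                if g[0] == 'not':                         # double negation
                    return prove(delta - {f} | {g[1]}, depth - 1)
                if g[0] == 'or':                          # reversed (R1): branch
                    return (prove(delta - {f} | {('not', g[1])}, depth - 1)
                            and prove(delta - {f} | {('not', g[2])}, depth - 1))
        boxes = frozenset(f for f in delta
                          if isinstance(f, tuple) and f[0] == 'not'
                          and isinstance(f[1], tuple) and f[1][0] == 'K')
        for f in delta:                                   # reversed (R3), simplified:
            if isinstance(f, tuple) and f[0] == 'K':      # keep the ~K parts, open K
                if prove(boxes | {f[1]}, depth - 1):
                    return True
        for f in boxes:                                   # reversed (R2): KA implies A;
            if prove(delta - {f} | {('not', f[1][1])}, depth - 1):
                return True                               # dropping f is a simplification
        return False

    # Time-free version of (A4): ~K(~a V b) V ~Ka V Kb
    A4 = ('or', ('or', ('not', ('K', ('or', ('not', 'a'), 'b'))),
                       ('not', ('K', 'a'))),
                ('K', 'b'))
    print(prove({A4}))   # True

Running it on the time-free version of (A4) reproduces, in miniature, the derivation shown in the example above.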
Machine Intelligence 9, ]979. Montague R. Syntactical treatments of moda- l6 lity with corollaries on reflexion princi- ples and finite axiomatizahility. Acta Phi- losophica Fennica, Vol.16, 1963. Moore R.C. Reasoning about knowledge and ac- 17 tion. IJCAI-5, 1977. Moore R.C. Reasoning about knowledge and ac- 18 tion. Artificial Intelligence Center, Tech- nical Note n°]91, Menlo Park : SRI Interna- tional, J980. Nadathur G., Joshi A.K. Mutual beliefs in con- 19 versational systems : their role in refer- ring expressions. IJCAI-8, ]983. Perrault C.R., Cohen P.R. 'It's for your own 20 good : a note on Inaccurate Reference', in Elements of Discourse Understanding (eds. A.K. Joshi, B.L. Webber, and I.A. Sag), Cam- bridge University Press., 1981. Sato M. A study of Kripke-type models for so- 21 me modallogics by Gentzen's sequential me- thod. Research Institute for Mathematical Sciences, Kyoto University, Japan, ]977. Schutte K. yollstandige systeme modaler und 22 intuitlonistischer logik. Erg. d. Mathem. und ihrer brenzgebiete, Band 42, Springer- Verlag, Berlin, ]968. Scott D. Advice on modal logic, in Philoso- 23 phical problems in logic, ed. K. Lambert, Reidel (Jean Largeault's French traduc- tion, UTM, Unpublished memo), 1968. Soulhi S. A decision procedure for knowledge 24 l ogle KT4, Technical Report, LSI; ECRN, ]983. 199
UNDERSTANDING PRAGMATICALLY ILL-FORMED INPUT

Sandra Carberry
Department of Computer Science
University of Delaware
Newark, Delaware 19711 USA

* This material is based upon work supported by the National Science Foundation under grants IST-8009673 and IST-8311400.

ABSTRACT

An utterance may be syntactically and semantically well-formed yet violate the pragmatic rules of the world model. This paper presents a context-based strategy for constructing a cooperative but limited response to pragmatically ill-formed queries. Suggestion heuristics use a context model of the speaker's task inferred from the preceding dialogue to propose revisions to the speaker's ill-formed query. Selection heuristics then evaluate these suggestions based upon semantic and relevance criteria.

I INTRODUCTION

An utterance may be syntactically and semantically well-formed yet violate the pragmatic rules of the world model. The system will therefore view it as "ill-formed" even if a native speaker finds it perfectly normal. This phenomenon has been termed "pragmatic overshoot" [Sondheimer and Weischedel, 1980] and may be divided into three classes:

[1] User-specified relationships that do not exist in the world model.

EXAMPLE: "Which apartments are for sale?"

In a real estate model, single apartments are rented, not sold. However apartment buildings, condominiums, townhouses, and houses are for sale.

[2] User-specified restrictions on the relationships which can never be satisfied, even with new entries.

EXAMPLE: "Which lower-level English courses have a maximum enrollment of at most 25 students?"

In a University world model, it may be the case that the maximum enrollments of lower-level English courses are constrained to have values larger than 25 but that such constraints do not apply to the current enrollments of courses, the maximum enrollments of upper-level English courses, and the maximum enrollments of lower-level courses in other departments. The sample utterance is pragmatically ill-formed since world model constraints prohibit the restricted relations specified by the user.

[3] User-specified relationships which result in a query that is irrelevant to the user's underlying task.

EXAMPLE: "What is Dr. Smith's home address?"

The home addresses of faculty at a university may be available. However if a student wants to obtain special permission to take a course, a query requesting the instructor's home address is inappropriate; the speaker should request the instructor's office address or phone. Although such utterances do not violate the underlying domain world model, they are a variation of pragmatic overshoot in that they violate the listener's model of the speaker's underlying task.

A cooperative participant uses the information exchanged during a dialogue and his knowledge of the domain to hypothesize the speaker's goals and plans for achieving those goals. This context model of goals and plans provides clues for interpreting utterances and formulating cooperative responses. When pragmatic overshoot occurs, a human listener can modify the speaker's ill-formed query to form a similar query X that is both meaningful and relevant. For example, the query

"What is the area of the special weapons magazine of the Alamo?"

erroneously presumes that storage locations have an AREA attribute in the REL database of ships [Thompson, 1980]; this is an instance of the first class of pragmatic overshoot. Depending upon the speaker's underlying task, a listener might infer that the speaker wants to know the REMAINING-CAPACITY, TOTAL-CAPACITY, or perhaps even the LOCATION (if "area" is interpreted as referring to "place") of the Alamo's Special Weapons Magazine. In each case, a cooperative participant uses the preceding dialogue and his knowledge of the
Depending upon the speaker's underlying task, a listener might infer that the speaker wants to know the REMAINING-CAPACITY, TOTAL-CAPACITY, or perhaps even the LOCATION (if "area" is interpreted as referring to "place") of the Alamo's Special Weapons Magazine. In each case, a cooperative participant uses the preceding dialogue and his knowledge of the speaker to formulate a response that might provide the desired information.

This paper presents a method for handling this first class of pragmatic overshoot by formulating a modified query X that satisfies the speaker's needs. Future research may extend this technique to handle other pragmatic overshoot classes.

Our work on pragmatic overshoot processing is part of an on-going project to develop a robust natural language interface [Weischedel and Sondheimer, 1983]. Mays [1980], Webber and Mays [1983], and Ramshaw and Weischedel [1984] have suggested mechanisms for detecting the occurrence of pragmatic overshoot and identifying its causes. The main contribution of our work is a context-based strategy for constructing a cooperative but limited response to pragmatically ill-formed queries. This response satisfies the user's perceived needs, inferred both from the preceding dialogue and the ill-formed utterance. In particular,

[1] A context model of the user's goals and plans provides expectations about utterances, expectations that may be used to model the user's goals. We use a context mechanism [Carberry, 1983] to build the speaker's underlying task-related plan as the dialogue progresses and differentiate between local and global contexts.

[2] Only alternative queries which might represent the user's intent or at least satisfy his needs are considered. Our hypothesis is that the user's inferred plan, represented by the context model, suggests a substitution for the proposition causing the overshoot.

II KNOWLEDGE REPRESENTATION

Our system requires a representation for each of the following:

[1] the set of domain-dependent plans and goals
[2] the speaker's plan inferred from the preceding dialogue
[3] the existing relationships among attributes and entity sets in the underlying world model
[4] the semantic difference of attributes, relations, entity sets, and functions

Plans are represented using an extended STRIPS [Fikes and Nilsson, 1971] formalism. A plan can contain subgoals and actions that have associated plans. We use a context tree [Carberry, 1983] to represent the speaker's inferred plan as constructed from the preceding dialogue. Nodes within this tree represent goals and actions which the speaker has investigated; these nodes are descendants of parent nodes representing higher-level goals whose associated plans contain these lower-level actions. The context tree represents the global context or overall plan inferred for the speaker. The focused plan is a subtree of the context tree and represents the local context or particular aspect of the plan upon which the speaker's attention is currently focused. This focused plan produces the strongest expectations for future utterances.

An entity-relationship model states the possible primitive relationships among entity sets. Our world model includes a generalization hierarchy of entity sets, attributes, relations, and functions and also specifies the types of attributes and the domains of functions.
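To make these structures concrete, here is a minimal sketch of how the plans and the context tree might be represented. It is illustrative only: the class and field names (Plan, ContextTreeNode, ContextModel) are hypothetical, not the paper's formalism.

```python
# Minimal sketch of the knowledge structures described above.  All names
# are hypothetical illustrations, not the paper's actual notation.

class Plan:
    """A STRIPS-style plan: preconditions, effects, and a body of
    subgoals/actions, each of which may carry an associated plan."""
    def __init__(self, name, preconditions=(), effects=(), body=()):
        self.name = name
        self.preconditions = list(preconditions)   # propositions
        self.effects = list(effects)               # propositions
        self.body = list(body)                     # subgoals and actions

class ContextTreeNode:
    """A goal or action the speaker has investigated, linked to the
    higher-level goal whose associated plan contains it."""
    def __init__(self, goal, parent=None):
        self.goal = goal
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

class ContextModel:
    """Global context = the whole context tree; local context = the
    focused subtree receiving the speaker's current attention."""
    def __init__(self, root):
        self.root = root        # overall inferred plan (global context)
        self.focused = root     # currently focused plan (local context)

    def active_plans(self):
        """Stack of increasingly general plans above the focused plan;
        the selection heuristics of Section IV consult this stack."""
        node, stack = self.focused, []
        while node is not None:
            stack.append(node)
            node = node.parent
        return stack
```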
III CONSTRUCTING THE CONTEXT MODEL

The plan construction component is described in [Carberry, 1983]. It hypothesizes and tracks the changing task-level goals of a speaker during the course of a dialogue. Our approach is to infer a lower-level task-related goal from the speaker's explicitly communicated goal, relate it to potential higher-level plans, and build the complete plan context as the dialogue progresses. The context mechanism distinguishes local and global contexts and uses these to predict new speaker goals from the current utterance.

IV PRAGMATIC OVERSHOOT PROCESSING

Once pragmatic overshoot has been detected, the system formulates a revised query QR requesting the information needed by the user. Our hypothesis is that the user's inferred plan, represented by the context model, suggests a substitution for the proposition that caused the pragmatic overshoot. The system then selects from amongst these suggestions using the criteria of relevance to the current dialogue, semantic difference from the proposition in the user's query, and the type of revision operation applied to this proposition.

A. Suggestion

The suggestion mechanism examines the current context model and possible expansions of its constituent goals and actions, proposing substitutions for the proposition causing the pragmatic overshoot. This erroneous proposition represents either a non-existent attribute or entity set relationship or a function applied to an inappropriate set of attribute values.

The suggestion mechanism applies two classes of rules. The first class proposes a simple substitution for an attribute, entity set, relation, or function appearing in the erroneous proposition. The second class proposes a conjunction of propositions representing an expanded relationship path as a substitution for the user-specified proposition. These two classes of rules may be used together to propose both an expanded relationship path and an attribute or entity set substitution.

1. Simple-Substitution Rules

Suppose a student wants to pursue an independent study project; such projects can be directed by full-time or part-time faculty but not by faculty who are "extension" or "on sabbatical". The student might erroneously enter the query

"What is the classification of Dr. Smith?"

Only students have classification attributes (such as Arts&Science-1985, Engineering-1987); faculty have attributes such as rank, status, age, and title. Pursuing an independent study project under the direction of Dr. Smith requires that Dr. Smith's status be "full-time" or "part-time". If the listener knows the student wants to pursue independent study, then he might infer that the student needs the value of this status attribute and answer the revised query "What is the status of Dr. Smith?"

The suggestion mechanism contains five simple substitution rules for handling such erroneous queries. One such rule proposes a substitution for the user-specified attribute in the erroneous proposition. Intuitively, a listener anticipates that the speaker will need to know each entity and attribute value in the speaker's plan inferred from the domain and the preceding dialogue. Suppose this inferred plan contains an attribute ATT1 for a member of ENTITY-SET1, namely ATT1(ENTITY-SET1, attribute-value), and that the speaker erroneously requests the value of attribute ATTU for a member ent1 of ENTITY-SET1. Then a cooperative listener might infer that the value of ATT1 for entity ent1 will satisfy the speaker's needs, especially if attributes ATT1 and ATTU are closely related.

The substitution mechanism searches the user's inferred plan and its possible expansions for propositions whose arguments unify with the arguments in the erroneous proposition causing the pragmatic overshoot. The above rule then suggests substituting the attribute from the plan's proposition for the attribute specified in the user's query. This substitution produces a query relevant to the current dialogue and may capture the speaker's intent or at least satisfy his needs.
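A minimal sketch of this rule follows, assuming propositions are simple (attribute, entity, value) tuples; the names are hypothetical, and the argument-unification step is approximated by matching the entity argument.

```python
# Sketch of the attribute-substitution rule described above (hypothetical
# names; propositions are (attribute, entity, value) tuples).

def attribute_substitutions(erroneous, plan_propositions):
    """Given an erroneous proposition ATTU(ent1, v), propose ATT1(ent1, v)
    for each plan proposition ATT1(ent1, value) over the same entity."""
    attu, ent1, value = erroneous
    return [(att1, ent1, value)
            for att1, entity, _ in plan_propositions
            if entity == ent1 and att1 != attu]

# "What is the classification of Dr. Smith?" against a plan that requires
# knowing Dr. Smith's status:
plan = [("Status", "Dr.Smith", "full-time-or-part-time")]
print(attribute_substitutions(("Classification", "Dr.Smith", "?"), plan))
# -> [('Status', 'Dr.Smith', '?')]  i.e. "What is the status of Dr. Smith?"
```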
2. Expanded Path Rules

Suppose a student wants to contact Dr. Smith to discuss the appropriate background for a new seminar course. Then the student might enter the query

"What is Dr. Smith's phone number?"

Phone numbers are associated with homes, offices, and departmental offices. Course discussions with professors may be handled in person or by phone; contacting a professor by phone requires that the student dial the phone number of Dr. Smith's office. Thus the listener might infer that the student needs the phone number of the office occupied by Dr. Smith.

The second class of rules handles such "missing logical joins". (This is somewhat related to the philosophical concept of "deferred ostension" [Quine, 1969].) These rules apply when the entity sets are not directly related by the user-specified relation RU but there is a path R in the entity-relationship model between the entity sets. We call this path expansion since by finding the missing joins between entity sets, we are constructing an expanded relational path. Suppose the inferred plan for the speaker includes a sequence of relations

R1(ENTITY-SET1, ENTITY-SETA)
R2(ENTITY-SETA, ENTITY-SETB)
R3(ENTITY-SETB, ENTITY-SET2);

then the listener anticipates that the speaker will need to know those members of ENTITY-SET1 that are related by the composition of relations R1, R2, R3 to a member of ENTITY-SET2. If the speaker erroneously requests those members of ENTITY-SET1 that are related by R2 (or alternatively R1 or R3) to members of ENTITY-SET2, then perhaps the speaker really meant the expanded path R1*R2*R3. The path expansion rules suggest substituting this expanded path for the user-specified relation.

We employ a user model to constrain path expansion. This model represents the speaker's beliefs about membership in entity sets. If pragmatic overshoot occurs because the speaker misused a relation R(ENTITY-SET1, ENTITY-SET2) by specifying an argument that is not a member of the correct entity set for the relation, then path expansion is permitted only if the user model indicates that the speaker may believe the erroneous argument is not a member of that entity set.

EXAMPLE: "Which bed is Dr. Brown assigned?"

Suppose beds are assigned to patients in a hospital model. If Dr. Brown is a doctor and doctors cannot simultaneously be patients, then path expansion is permitted if our user model indicates that the speaker may recognize that Dr. Brown is not a patient. In this case, our expanded path expression may retrieve the beds assigned to patients of Dr. Brown, if this is suggested by the inferred task-related plan.
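Path expansion can be pictured as a search for a chain of relations connecting the two entity sets in the entity-relationship model. The following breadth-first sketch uses hypothetical names and, for brevity, omits the user-model gate just described and the single-plan-path constraint introduced next.

```python
from collections import deque

# Sketch of path expansion (hypothetical names).  The E-R model is a list
# of relations R(ENTITY-SETA, ENTITY-SETB); we search for chains
# R1*R2*...*Rn linking the two entity sets, shortest chains first.

def expand_path(er_model, set1, set2, max_len=3):
    """Return relation chains from set1 to set2, in breadth-first order."""
    queue, paths = deque([(set1, [])]), []
    while queue:
        here, chain = queue.popleft()
        if here == set2 and chain:
            paths.append(chain)
            continue
        if len(chain) >= max_len:
            continue
        for rel, a, b in er_model:
            if a == here:
                queue.append((b, chain + [rel]))
    return paths

er = [("Teach", "FACULTY", "SECTION"), ("Meet-Time", "SECTION", "TIME")]
print(expand_path(er, "FACULTY", "TIME"))    # -> [['Teach', 'Meet-Time']]
```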
To limit the components of path expressions to those relations which can be meaningfully combined in a given context, we make a strong assumption: that the relations comprising the relevant expansion appear on a single path within the context tree representing the speaker's inferred plan. For example, suppose the speaker's inferred plan is to take CS105. Expansion of this plan will contain the two actions

Learn-From-Teacher-In-Class(SPEAKER, section, faculty)
   such that Teach(faculty, section)
Obtain-Necessary-Extra-Help(SPEAKER, section, teaching-assistant)
   such that Assists(teaching-assistant, section)

The associated plans for these two actions specify respectively that the speaker attend class at the time the section meets and that the speaker meet with the section's teaching assistant at the time of his office hours. Now consider the utterance

"When are teaching assistants available?"

A direct relationship between teaching assistants and time does not exist. The constraint that all components of a path expression appear on a single path in the inferred task-related plan prohibits composing Assists(teaching-assistant, section) and Meet-Time(section, time) to suggest a reply consisting of the times that the CS105 sections meet.

B. Selection Mechanism

The substitution and path expansion rules propose substitutions for the erroneous proposition that caused the pragmatic overshoot. Three criteria are used to select from the proposed substitutions the revised query, if any, that is most likely to satisfy the speaker's intent in making the utterance.

First, the relevance of the revised query to the speaker's plans and goals is measured by three factors:

[1] A revised query that interrogates an aspect of the current focused plan is most relevant to the current dialogue.

[2] The set of higher level plans whose expansions led to the current focused plan form a stack of increasingly more general, and therefore less immediately relevant, active plans to which the user may return. A revised query which interrogates an aspect of an active plan closer to the top of this stack is more expected than a query which reverts back to a more general active plan.

[3] Within a given active plan, a revised query that investigates the single-level expansion of an action is more expected, and therefore more relevant, than a revised query that investigates details at a much deeper level of expansion.

Second, we can classify the substitution T --> V which produced the revised query into four categories, each of which represents a more significant, and therefore less preferable, alteration of the user's query (Figure 1). Category 1 contains expanded relational paths R1*R2*...*Rn such that the user-specified attribute or relation appears in the path expression. For example, the expanded path

Treats(Dr.Brown, patient) * Is-Assigned(patient, room)

is a Category 1 substitution for the user-specified proposition

Is-Assigned(Dr.Brown, room)

contained in the semantic representation of the query "Which bed is Dr. Brown assigned?"

Figure 1. Classification of Query Revision Operations
   Category 1: T = expanded relational path including the user-specified attribute or relation; V = user-specified attribute or relation.
   Category 2: T = attribute, relation, entity set, or function semantically similar to that specified by the user; V = user-specified attribute, relation, entity set, or function.
   Category 3: T = expanded relational path, including an attribute or relation semantically similar to that specified by the user; V = user-specified attribute or relation.
   Category 4: T = double substitution: entity set and relation semantically similar to a user-specified entity set and relation; V = user-specified entity set and relation.

Category 2 contains simple substitutions that are semantically similar to the attribute, relation, entity set, or function specified by the speaker.
An example of Category 2 is the previously discussed substitution of attribute "status" for the user-specified attribute "classification" in the query "What is the classification of Dr. Smith?" Categories 3 and 4 contain substitutions that are formed by either a Category 1 path expansion followed by a Category 2 substitution or by two Category 2 substitutions.

Third, the semantic difference between the revised query and the original query is measured in two ways. First, if the revised query is an expanded path, we count the number of relations comprising that path; shorter paths are more desirable than longer ones. Second, if the revised query contains an attribute, relation, function, or entity set substitution, we use a generalization hierarchy to semantically compare substitutions with the items for which they are substituted. Our difference measure is the distance from the item for which the substitution is being made to the closest common ancestor of it and the substituted item; small difference measures are preferred. In particular, each attribute, relation, function, and entity set ATTRFENT is assigned to a primitive semantic class:

PRIM-CLASS(ATTRFENT, CLASSA)

Each semantic class is assigned at most one immediate superclass of which it is a proper subset:

SUPER(CLASSA, CLASSB)

We define the function f such that

f(ATTRFENT, i+1) = CLASS
   if PRIM-CLASS(ATTRFENT, CLASSa1) and SUPER(CLASSa1, CLASSa2)
      and SUPER(CLASSa2, CLASSa3) and ... and SUPER(CLASSai, CLASS)

If a revised query proposes substituting ATTRFENTnew for ATTRFENTold, then

semantic-difference(ATTRFENTnew, ATTRFENTold)
   = NIL, if there do not exist j, k such that
        f(ATTRFENTnew, j) = f(ATTRFENTold, k)
   = min k such that there exists j such that
        f(ATTRFENTnew, j) = f(ATTRFENTold, k), otherwise

An initial set is constructed consisting of those suggested revised queries that interrogate an aspect of the current focused plan in the context model. These revised queries are particularly relevant to the current local context of the dialogue. Members of this set whose difference measure is small and whose revision operation consists of a path expansion or simple substitution are considered, and the most relevant of these are selected by measuring the depth within the focused plan of the component that suggested each revised query. If none of these revised queries meets a predetermined acceptance level, the same selection criteria are applied to a newly constructed set of revised queries suggested by a higher level active plan whose expansion led to the current focused plan, and a less stringent set of selection criteria are applied to the original revised query set. (The revised queries in this new set are not immediately relevant to the current local dialogue context but are relevant to the global context.) As we consider revised queries suggested by higher level plans in the stack of active plans representing the global context, the acceptance level for previously considered queries is decreased. Thus revised queries which were not rated highly enough to terminate processing when first suggested may eventually be accepted after less relevant aspects of the dialogue have been investigated. This relaxation and query set expansion is repeated until either an acceptable revised query is produced or all potential revised queries have been considered.
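The difference measure lends itself to a direct rendering. In the sketch below (hypothetical names and a toy hierarchy), chain enumerates f(x, 1), f(x, 2), ... and the first index k at which the two chains meet is returned, exactly as in the definition above.

```python
# Sketch of the semantic-difference measure (hypothetical names).
# PRIM-CLASS and SUPER are encoded as dictionaries; chain(x) enumerates
# f(x, 1), f(x, 2), ... by following SUPER links upward.

PRIM_CLASS = {"status": "faculty-attr", "classification": "student-attr"}
SUPER = {"faculty-attr": "person-attr", "student-attr": "person-attr"}

def chain(attrfent):
    cls = PRIM_CLASS[attrfent]        # f(attrfent, 1)
    out = [cls]
    while cls in SUPER:               # f(attrfent, i+1) via SUPER links
        cls = SUPER[cls]
        out.append(cls)
    return out

def semantic_difference(new, old):
    """min k such that f(new, j) = f(old, k) for some j; None (NIL) if
    the two chains never meet."""
    new_classes = set(chain(new))
    for k, cls in enumerate(chain(old), start=1):
        if cls in new_classes:
            return k
    return None

print(semantic_difference("status", "classification"))   # -> 2
```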
V EXAMPLES

Several examples are provided to illustrate the suggestion and selection strategies.

[1] Relation or Entity Set Substitution

"Which apartments are for sale?"

In a real-estate model, single apartments are rented, not sold. However, apartment buildings, condominiums, townhouses, and houses are for sale. Thus the speaker's utterance contains the erroneous proposition

For-Sale(apartment)

where apartment is a member of entity set APARTMENT. If the preceding dialogue indicates that the speaker is seeking temporary living arrangements, then expansion of the context model representing the speaker's inferred plan will contain the possible action

Rent(SPEAKER, apartment) such that For-Rent(apartment)

The substitution rules propose substituting relation For-Rent from this plan in place of relation For-Sale in the speaker's utterance. On the other hand, if the preceding dialogue indicates that the speaker represents a real estate investment trust interested in expanding its holdings, an expansion of the context model representing the speaker's inferred plan will contain the possible action

Purchase(SPEAKER, apartment-building)

where apartment-building is a member of entity set APARTMENT-BUILDING. Purchasing an apartment building necessitates that the building be for sale or that one convince the owner to sell it. Thus one expansion of this Purchase plan includes the precondition

For-Sale(apartment-building)

The substitution rules propose substituting entity set APARTMENT-BUILDING from this plan for the entity set APARTMENT in the speaker's utterance.

[2] Function Substitution

"What is the average rank of CS faculty?"

The function AVERAGE cannot be applied to non-numeric elements such as "professor". The speaker's utterance contains the erroneous proposition

AVERAGE(rank, fn-value)
   such that Department-Of(faculty, CS) and Rank(faculty, rank)

If the preceding dialogue indicates that the speaker is evaluating the CS department, then an expansion of the context model representing the speaker's inferred plan will contain the possible action

Evaluate-Faculty(SPEAKER, CS)

The plan for Evaluate-Faculty contains the action

Evaluate(SPEAKER, ave-rank)
   such that ORDERED-AVE(rank, ave-rank)
      and Department-Of(faculty, CS) and Rank(faculty, rank)

If a domain D of non-numeric elements has an explicit ordering, then we can associate with each of the n domain elements an index number between 0 and n-1 specifying its position in the sorted domain. The function ORDERED-AVE appearing in the speaker's plan operates upon non-numeric elements of such domains by calculating the average of the index numbers associated with each element instead of attempting to calculate the average of the elements themselves (a small sketch of ORDERED-AVE follows these examples). The substitution rules propose substituting the function ORDERED-AVE from the speaker's inferred plan for the function AVERAGE in the speaker's utterance. ORDERED-AVE and AVERAGE are semantically similar functions, so the difference measure for the resultant revised query will be small.

[3] Expanded Relational Path

"When does Mitchel meet?"

A university model does not contain a relation MEET between FACULTY and TIMES. However, faculty teach courses, present seminars, chair committees, etc., and courses, seminars, and committees meet at scheduled times. The speaker's utterance contains the erroneous proposition

Meet-Time(Dr.Mitchel, time)

If the preceding dialogue indicates that the speaker is considering taking CS105, then an expansion of the context model representing the speaker's inferred plan will contain the action

Earn-Credit-In-Section(SPEAKER, section)
   such that Is-Section-Of(section, CS105)

Expansion of the plan for Earn-Credit-In-Section contains the action

Learn-From-Teacher-In-Class(SPEAKER, section, faculty)
   such that Teach(faculty, section)

and the plan for this action contains the action

Attend-Class(SPEAKER, place, time)
   such that Meet-Place(section, place) and Meet-Time(section, time)

The two relations Teach(Dr.Mitchel, section) and Meet-Time(section, time) appear on the same path in the context model. Therefore the path expansion heuristics suggest the expanded relational path

Teach(Dr.Mitchel, section) * Meet-Time(section, time)

as a substitution for the relation Meet-Time(Dr.Mitchel, time) in the user's utterance. Only one arc is added to produce the expanded relational path and it contains the user-specified relation Meet-Time, so the difference measure for this revised query is small.
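As noted in example [2], ORDERED-AVE admits a direct sketch; the rank ordering shown below is a hypothetical illustration rather than data from the paper.

```python
# Sketch of ORDERED-AVE over an explicitly ordered non-numeric domain.
# The RANKS ordering is hypothetical; each element of an ordered domain
# of n elements gets an index 0..n-1, and we average the indices.

RANKS = ["instructor", "assistant professor",
         "associate professor", "professor"]

def ordered_ave(domain, values):
    index = {elem: i for i, elem in enumerate(domain)}
    return sum(index[v] for v in values) / len(values)

cs_ranks = ["professor", "assistant professor", "associate professor"]
print(ordered_ave(RANKS, cs_ranks))
# -> 2.0, i.e. about "associate professor" on the ordered scale
```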
VI RELATED WORK

Erik Mays [1980] discusses the recognition of pragmatic overshoot and proposes a response containing a list of those entity sets that are related by the user-specified relation and a list of those relations that connect the user-specified entity sets. However, he does not use a model of whether these possibilities are applicable to the user's underlying task. In a large database, such responses will be too lengthy and include too many irrelevant alternatives.

Kaplan [1979], Chang [1978], and Sowa [1976] have investigated the problem of missing joins between entity sets. Kaplan proposes using the shortest relational path connecting the entity sets; Chang proposes an algorithm based on minimal spanning trees, using an a priori weighting of the arcs; Sowa uses a conceptual graph (semantic net) for constructing the expanded relation. None of these presents a model of whether the proposed path is relevant to the speaker's intentions.

VII LIMITATIONS AND FUTURE WORK

Pragmatic overshoot processing has been implemented for a domain consisting of a subset of the courses, requirements, and policies for students at a university. Our system assumes that the relations comprising a meaningful and relevant path expansion will appear on a single path within the context tree representing the speaker's inferred plan. This restricts such expansions to those communicated via the speaker's underlying inferred task-related plan. However, this plan may fail to capture some associations, such as that between a person's Social Security Number and his name. This problem of producing precisely the set of path expansions that are meaningful and relevant must be investigated further.

Other areas for future work include:

[1] Extensions to handle relationships among more than two entity sets

[2] Extensions to the other classes of pragmatic overshoot mentioned in the introduction

[3] Extensions to detect and respond to queries which exceed the knowledge represented in the underlying world model. We are currently assuming that the system can provide the information needed by the speaker.

VIII CONCLUSIONS

The main contribution of our work is a context-based strategy for constructing a cooperative but limited response to pragmatically ill-formed queries. This response satisfies the speaker's perceived needs, inferred both from the preceding dialogue and the ill-formed utterance.
Our hypothesis is that the speaker's inferred task-related plan, represented by the context model, suggests a substitution for the proposition causing the pragmatic overshoot, and that such suggestions must then be evaluated on the basis of relevance and semantic criteria.

ACKNOWLEDGMENTS

I would like to thank Ralph Weischedel for his encouragement and direction in this research and for his suggestions on the style and content of this paper, and Lance Ramshaw for many helpful discussions.

REFERENCES

1. Carberry, S., "Tracking User Goals in an Information-Seeking Environment", Proc. Nat. Conf. on Artificial Intelligence, Washington, D.C., 1983.
2. Chang, C. L., "Finding Missing Joins for Incomplete Queries in Relational Data Bases", IBM Res. Lab., RJ2145, San Jose, Ca., 1978.
3. Fikes, R. E. and N. J. Nilsson, "STRIPS: A New Approach to the Application of Theorem Proving to Problem Solving", Artificial Intelligence 2, 1971.
4. Kaplan, S. J., "Cooperative Responses from a Portable Natural Language Data Base Query System", Ph.D. Diss., Univ. of Pennsylvania, 1979.
5. Mays, E., "Failures in Natural Language Query Systems: Applications to Data Base Query Systems", Proc. Nat. Conf. on Artificial Intelligence, Stanford, 1980.
6. Quine, W. V., "Ontological Relativity", in Ontological Relativity and Other Essays, Columbia University Press, New York, 1969.
7. Ramshaw, L. A. and R. M. Weischedel, "Problem Localization Strategies for Pragmatic Processing in Natural Language Front Ends", Proc. of 10th Int. Conf. on Computational Linguistics, 1984.
8. Sondheimer, N. K. and R. M. Weischedel, "A Rule-Based Approach to Ill-Formed Input", Proc. 8th Int. Conf. on Computational Linguistics, 1980.
9. Sowa, J. F., "Conceptual Graphs for a Data Base Interface", IBM Journal of Research and Development, July 1976.
10. Thompson, B. H., "Linguistic Analysis of Natural Language Communication with Computers", Proc. 8th Int. Conf. on Computational Linguistics, 1980.
11. Webber, B. L. and E. Mays, "Varieties of User Misconceptions: Detection and Correction", Proc. 8th Int. Joint Conf. on Artificial Intelligence, Karlsruhe, West Germany, August 1983.
12. Weischedel, R. M. and N. K. Sondheimer, "Meta-Rules as a Basis for Processing Ill-Formed Input", American Journal of Computational Linguistics, Vol. 9, No. 3, 1983.
Referring as Requesting

Philip R. Cohen
Artificial Intelligence Center, SRI International
and
Center for the Study of Language and Information, Stanford University

1. Introduction1

Searle [14] has argued forcefully that referring is a speech act; that people refer, not just expressions. This paper considers what kind of speech act referring might be. I propose a generalization of Searle's "propositional" act of referring that treats it as an illocutionary act, a request, and argue that the propositional act of referring is unnecessary. The essence of the argument is as follows: First, I consider Searle's definition of the propositional act of referring (which I term the PAA, for Propositional Act Account). This definition is found inadequate to deal with various utterances in discourse used for the sole purpose of referring. Although the relevance of such utterances to the propositional act has been defined away by Searle, it is clear that any comprehensive account of referring should treat them. I develop an account of their use in terms of a speaker's requesting the act of referent identification, which is to be understood in a perceptual sense. This illocutionary act analysis (IAA) is shown to satisfy Searle's conditions for referring yet captures utterances that the PAA cannot. The converse position is then examined: Can the IAA capture the same uses of referring expressions as the PAA? If one extends the perceptually-based notion of referent identification to include Searle's concept of identification, then by associating a complex propositional attitude with one use of the definite determiner, a request can be derived. The IAA thus handles the referring use of definite noun phrases with independently motivated rules. Referring becomes a kind of requesting. Hence, the propositional act of referring is unnecessary.

1The research reported in this paper was supported initially by the Fairchild Camera and Instrument Corp. Its subsequent development has been made possible by a gift from the System Development Foundation. I have benefitted from many discussions with Hector Levesque and Ray Perrault.

2. Referring as a propositional speech act

Revising Austin's [2] locutionary/illocutionary dichotomy, Searle distinguishes between illocutionary acts (IAs) and propositional acts (PAs) of referring and predicating. Both kinds of acts are performed in making an utterance, but propositional acts can only be performed in the course of performing some illocutionary act.

Let us consider Searle's rules for referring, which I term the "propositional act analysis", or PAA. A speaker, S, "successfully and non-defectively performs the speech act of singular identifying reference" in uttering a referring expression, R, in the presence of hearer, H, in a context, C, if and only if:

1. Normal input and output conditions obtain.

2. The utterance of R occurs as part of the utterance of some sentence (or similar stretch of discourse) T.

3. The utterance of T is the (purported) performance of an illocutionary act.

4. There exists some object X such that either R contains an identifying description of X or S is able to supplement R with an identifying description of X.

5. S intends that the utterance of R will pick out or identify X to H.

6. S intends that the utterance of R will identify X to H by means of H's recognition of S's intention to identify X, and he intends this recognition to be achieved by means of H's knowledge of the rules governing R and his awareness of C.
7. The semantical rules governing R are such that it is correctly uttered in T in C if and only if conditions 1-6 obtain." ([14], pp. 94-95.)

Conditions 2 and 3 are justified as follows:

Propositional acts cannot occur alone; that is, one cannot just [emphasis in original -- PRC] refer and predicate without making an assertion or asking a question or performing some other illocutionary act ... One only refers as part of the performance of an illocutionary act, and the grammatical clothing of an illocutionary act is the complete sentence. An utterance of a referring expression only counts as referring if one says something. (Ibid, p. 25.)

The essence of Conditions 4 and 5 is that the speaker needs to utter an "identifying description". For Searle, "identification" means "... there should no longer be any doubt what exactly is being talked about". (Ibid, p. 85.) Furthermore, not only should the description be an identifying one (one that would pick out an object), but the speaker should intend it to do so uniquely (Condition 5). Moreover, the speaker's intention is supposed to be recognized by the hearer (Condition 6). This latter Gricean [7] condition is needed to distinguish having the hearer pick out an object by referring to it versus, for example, hitting him in the back with it.

3. Problems for the Propositional Act Account

In a recent experiment [3], it was shown that in giving instructions over a telephone, speakers, but not users of keyboards, often made separate utterances for reference and for predication. Frequently, these "referential utterances" took the form of existential sentences, such as "Now, there's a black O-ring". Occasionally, speakers used question noun phrases: "OK, now, the smallest of the red pieces?" The data present two problems for the PAA.

3.1. Referring as a Sentential Phenomenon
Thus, the require- ment that the act of reference b~" j~,intly located with some predication in a sentence or ilh~cutiuuary act is t~o restrictive -- the 9oals involved with reference and predication can be satisfied separately and contex- tually. The point of this paper is to bring such goals to the fore. 3.2. Referring without a Propositional Act Tile second pr(llqi'ni is that Inost of tile separate utterances issued to secure reference were declarative sentences whose logical form was 3 z P(z). For example, "there is a little yellow piece of rubber", and "it's got a plug in it". However, Searle claims that these utterances contain no referring act. (lbid, p. 29.) flow then can speakers use them to refer? The answer inwdves an analysis of indirect speech acts. Although such declarative utterances can be issued just to be informative, they are also issued as requests that the hearer identify the referent. : The analysis of these utterances as requests depends on our positing an action of referent identification. 4. Identification as a Requested Action In Searle's account, speakers identify referents for hearers. ! re- vise this notion slightly and treat identification as an act performed by the hearer 131 . I use the term "identify" in a very. narrow, though important and basi-, sense -- one that intimately involves perception. Thus, the analysis is not intended to be general; it applies only to the case when the referents are perceptually accessible to the hearer, and when the hearer is intended to use perceptual means to pick them out. For the time b~'iug. I am explicitly not concerned with a hearer's mentally "identifying" some entity satisfying a description, or discov- ering a coreferring descripticm. Th, perceptual use of "identification" would appear to be a ~pe('ial case of .%arb"s use of the term, and thus Searle's condition,s sh,.uhl apply to, it. Referent identifica~:ion in this perceptual sense requires an agent and a description. The essence of the act is that the agent pick out the thingorth!nffs satisfying the description. The agent need not '2The classification of these utterances as identification requests was done by two coders who were trained by the author, but who worked independently. The reliability of their ¢odings were high -- over 90 per cent for such existential ~tatements. be the speaker of the description, and indeed, the description need not be communicated linguistically, or even conlmunicated at all. A crucial component of referent identification is the act of perceptually searching for something that satisfies the description. The description is decomposed by the agent into a plan of action for identifying the referent. The intended and expected physical, sensory, and cognitive actions may be signalled by tile speaker's choice of predicates. For example, a speaker uttering the phrase "the magnetic screwdriver", may intend for the hearer to place various screwdrivers against some piece of iron to determine which is magnetic. Speakers know that hearers map (at least some) predicates, onto actions that determine their extensions, and thus, using a model of the heater's capabilities and the causal connections among people, their senses, and physical objects, design their expression, D, so that hearers can successfully execute those actions in the context of the overall plan. Not only does a speaker plan for a hearer to identify the referent of a description, but he often indicates his intention that the hearer do so. 
According to Searle, one possible way to do this is to use a definite determiner. Of course, not all definite NP's are used to refer: for example, in the sentence "the last piece is the nozzle', the referent of the first NP is intended to be identified, whereas the referent of the second NP is not. The attributive use of definite noun phrases [6] is a case in which the speaker has no intention that the hearer identify a referent. Yet other nonanaphoric u~es of definite noun phrases include labeling an object, correcting a referential miscommunicatiou, having the hearer wait while the speaker identifies the referent, etc. '~ To respond appropriately, a hearer decide~ when identi[icatiou is the act he is supposed to perform on a description, what part this act " will play in the speaker's and hearer's plan~, and how aud when to perform the act. If perceptually identifying a feb'rent L~ represented as an action in the speaker's plan, hearers can reas,,u ab<,ut it just as any other act. thereby allowing them to infer the speaker's iat;'ntions behind indirect identification re.quests. In snmmary, referent identification shall mean l.lie conducting =,f a perceptual search process for the referent <,f a description. Tile verb "pick out" should be taken as synonymous. The following is a sketchy definition of tlw ceil,rent identification action, in which the description is formed fr~mi "a/lh, • y such that D(y)". 4 3 X [PERCEPTUALLY- ACCESSIBLE(X. Agt) & D(X) & IDENTIFIADLE(Agt, D) D RESULT(Agt, IDENTIFY-REFERENT {Agt. D). IDENTIFIED- REF E I'HDNT (Agt. D. X)I The formula follows the usual axi,ml;ttizati,m of acti,)n~ in dy- namic logic: P D fAct]Q; that is, if P, aft,'r d.iug Act. Q. ["..ll.,'*ing Moore's [9] possible worlds semantics for actbJn, tit," nv,,,lal operau,r RESULT is taken to be true of an agent, an acti.n, an,I a formula, iff in the world resulting from the agent's perf,~rmfltg that actitm, tl,e formula is true. s The antecedent includes three conditions. The first is a -per- 3See also {15, IO] for discu-:~ion of sl)~,akors' goal~ toward~ th,. irC~r i r,.,a'i,,n of descriptions. 4Thls definition is not particularly ilhlullnating, t, ut it is ng!. ;lily ,:a'.:u,.r thza Searle's. The point of giving it is that if a definition can be given in this form (i.e., as an action characterizable in a dynamic logic), the illocuti~nary analysis applies. SActually, Moore characterizes R.ESULT as taking an event and a fornmla as arguments, and an agent's doing an action denotes an event. This di~ereace is not critical for what follows. 208 ceptual accessibility" condition to guarantee that the IDENTIFY- REFERENT action is applicable. This should guarantee that, for example, a speaker does not intend someone to pick out the referent of "3 ~, "democracy", or "the first man to land on Mars ~. The condition is satisfied in the experimental task since it rapidly becomes mutual knowledge that the task requires communication about the objects in front of the hearer. The second condition states that X fulfills the description D. Here, 1 am ignoring cases in which the description is not literally true of the intended referent, including metonymy, irony, and the like (but see [12]). Lastly, D should be a description that is identifiable to this particular Agt. It should use descriptors whose extension the agent already knows or can discover via action. I am assuming that we can know that a combination of descriptors is identifiable without having formed a plan for identifying the referent. 
To give a name to the state of knowledge we are in after having identified the referent of D, we will use {IDENTIFIED- REFERENT Agt D X). That is, Agt has identified the referent of D to be X. Of course, what has been notoriously difficult to specify is just what Agt has to know about X to say he has identified it as the reforent of D. At a minimum, the notion of identification needs to be made relative to a purpose, which, perhaps, could be derived from the bodily acti.ns that someone (in the context} is intended to per- form upon X. Clearly, "knowing who the D is" 18, 9], is no substitute for having identified a referent. After having picked out the referent of a doseription D, we may still not not know who the D is. On the other hand, we may know who or what the description denotes, for example, hy knowing ~ome "standard name" for it, and yet be unable to use that kn.wledge to pick out the object. For example, if we ask "Which is tho Seattle train?" and receive the reply "It's train number 11689", we may ~till not be able to pick out and board the train if its serial number is not plainly in view. Finally, athough not stated in this definition, the means by which the act is performed is some function mapping D to some procedure that, when executed by Agt, enables Agt to discover the X that is the referent of D. 4.1. Requesting Con.~ider what it takes to make a request. Hector Levesque and I [4, 5] argu~, that requests and other illocutionary acts can be defined in terms of interacting plans -- i.e., as beliefs about the conversants' shared knowledge of the speaker's goals and the causal consequences of achieving those goals. In this formalism, illocutionacy acts are no longer conceptually primitive, but rather amount to theorems that can be proven about a state of affairs. The proof requires an axiomatiza- tion of agents' beliefs, expectations, goals, plans, actions, and a cur* relation of utterance mood with certain propositional attitudes. The important point here is that the definition of a request is not merely stipulated, but is derived from an independently motivated theory of action. Any act that brings about the right effects in the right way satisfies the request theorem. Briefly, a request is an action (or collection of actions) that makes it ( 1 ) shared knowledge that the speaker's goal is that the hearer thinks the speaker wants the hearer to adopt the goal of doing a particular act, thereby making it (2) shared knowledge between the speaker and hearer that the speaker wants the hearer to do that act. This inference requires an additional "gating" condition that it be shared knowledge that the speaker is both sincere and can perform the requested act (i.e., he knows how, and the preconditions of that act are true). The processing of an utterance is assumed to begin by applying the propositional attitude correlated with its mood to the proposi- tional content associated with its literal interpretation. Thus, corre- lated with imperatives and interrogatives is the attitude above {corre- spending to goal (I)): 6 (MUTUAL-BELIEF Hearer Speaker (GOAL Speaker (BEL Hearer (GOAL Speaker (GOAL Hearer (DONE Hearer Act P)))))} . (DONE Hearer Act P) is true if Hearer has done act Act and has brought about P. For yes/no interrogatives, Act would be an INFORMIF [11]; for imperatives, it would be the act mentioned in the sentence. Declaratives would be correlated with a different propositional attitude. 
Beginning with the utterance-correlated atti- tudes, a derivation process that constitutes plan-recogniti.n reasoning determines what the speaker meant [7]. Thus, for example, what the speaker meant could be classified as a request if the derivation included making (2) true by making the above formula true. An act may simultaneousJy achieve the goals constituting more than one illocutionary act. This ability underlies the analysis of indi- rect speech acts. Formalisms have been developed [5, 11] that describe when we can conclude, from a speaker's wanting the hearer to want the precondition of some act to hold, (or wanting the hearer to be- lieve the precondition does hold), that the speaker wants the hearer to adopt the goal of performing the act. The conditi.ns licensing this inference are that it be mutually known that the act (or its effect) is an expected goal of the speaker, and that it be mutually known that: the hearer can perform the act. is cooperative, and does not want not to do it. Returning to the troublesome existential sentences, this pattern of reasoning, which I term the "illoculionary act analysis" {[AAI, can be used to derive a request for reh'rent identification. The reasoning is similar to that needed to infer a request to open the door on hearing a speaker, with two arm-loads of groceries, say "the door i~ closed". The general form of this reasoning involves the a.~serti.n o! an action's precondition when the effect of the action is an expected goal of the speaker. In the case at hand, the speaker's existential assertion causes the hearer to believe the existential precondition of the referent iden- tification act, since speaker and hearer both think they are talking about objects in front of the h~'arer, and because the description is identifiable. Ilence, the hearer concludes he is intended to pick out its referent. The hearer may go on to infer that he is intended to perform other acts, such as to pick up the object. This inference process also indicates when the indirect request interpretation is not iutended, for example, if it is mutually known that the description is not identifi- able, or if it is mutually known that the hearer would not want to identify the referent. I argue that this kind of reasoning underlies the propositional act account. First, I show that Searle's conditions on referring are a special case of the conditions for requesting referent identification. Then, I show that if one extends the definition of IDENTIFY-REFERENT to cover Searle's more general concept of identificatiou, the IAA is applicable in the same circumstances a~s Searle's analysis. Because the IAA is independently motivated and covers more cases, it should be preferred. 5. Accounting for Searle's Conditions on Referring Assume Searle's Condition 1, the "normal i/O conditions." For the reasons outlined above, do not assume Conditions 2 and 3. Now, clearly, a speaker's planning of a request that the hearer identify the referent of some description should comply with the rules for request- ing, namely: the speaker is trying to achieve one of the effects of the CThe justification for this formula can be found in [5]. 209 requested action (i.e., IDENTIFIED-REFERENT} by way of commu- nicating (in the (;ricean sense) his intent that the hearer perform the action, provided that it is shared knowledge that the hearer can do the acti.n. The last condition is true if it is shared knowledge that the the prec.ndili.n to the action hohts, which includes Seacle's existen- tial Condition 4. 
Scarle's Condition 5 states that the speaker intends to identify Ihe referent to the hearer. This condition is captured in the IAA by its bee.ruing umtual knowledge that the speaker intends to achieve the effect of the referent identification act, IDENTIFIED- IH'H"I';RENT. Finally, Searle's Gricean intent recognition Condition [G) takes h-ld in the same way that it does for other illocutionary acts. namely in virtue of a "feature" of the utterance (e.g., utter- ance mood, or a definite determiner} that is correlated with a complex prop.sitional attitude. This attitude becomes the basis for subsequent reasoning about the speaker's plans. In summary, Searle's conditions can be acconnted for by simply positing an action that the speaker requests aml that the hearer reasons about and performs. So far, the IAA and PAA are complementary. They each account for different aspects of referring. The IAA characterizes utterances who~e sole p.rpose is to secure referent identification, and the PAA characterizes the use of referring phrases within an illocutionary act. I now precede to show how the IAA can subsume the PAA. Searle argues that tree use of the definite article in uttering an NP is to indicate the speaker's intention to refer uniquely. Moreover, from Condition 5, this intention is supposed to be recognized by the hearer. We can get this effect by correlating the following expression with the delinite determiner: A D [(MUTUAL-BELIEF Hearer Speaker (GOAL Speaker (BEL Hearer 3 ! X (GOAL Speaker (GOAL Hearer (DONE Hearer IDENTIFY-REFERENT (Hearer, D), IDENTIFIED-REFERENT (Hearer, D, X))))))] Think -f tiffs expression as being a pragmatic "feature" of a syn- tactic constituent, as in current linguistic formalisms. When this ex- pression is applied to a descriptor (supplied from the semantics of the NP} we have a complete formula that becomes the seed for deriving a request. Namely, if it is mutually believed the speaker is sincere, 7 then it is mutually believed there is a unique object that speaker wants the hearer to want to pick out. If it is mutually believed the hearer can do it {i.e.. the prec~mditions to the referent identification act hold, and the hearer kn.ws how to do it by decomposing the description into a plan c~f action), it is mutually believed of some object that the speaker's goal is that the hearer actually pick it out. llence, a request. * Thus. for the perceptual ease, the IAA subsumes the PAA. 5.1. Extending the Analysis Assume that instead of just considering the act of identification in its perceptual sense, we adopt Searle's concept -- namely that *... there should no longer be any doubt what exactly is being talked about." Identification in this sense is primarily a process of establish- ing a coreferential link between the description in question and some other whose referent is in some way known to the hearer. However, we again regard identification as an act that the hearer performs, not something the speaker does to/for a hearer. If an analysis of this "rSincerity can be dispensed with at no significant loss of generality SThat is, I am suggesting that the interpretation of how the speaker intends the noun phrase to be interpreted le.g., referentially, attributively, etc.} begins with such a propositional attitude. If the referential reading is unsuccessful, the hearer needs to make other inferences to derive the intended reading. 
5.1. Extending the Analysis

Assume that instead of just considering the act of identification in its perceptual sense, we adopt Searle's concept -- namely, that "... there should no longer be any doubt what exactly is being talked about." Identification in this sense is primarily a process of establishing a coreferential link between the description in question and some other whose referent is in some way known to the hearer. However, we again regard identification as an act that the hearer performs, not something the speaker does to/for a hearer. If an analysis of this extended notion can be made similar in form to the analysis of the perceptual identification act, then the IAA completely subsumes the PAA. Because both accounts are equally vague on what constitutes identification (as are, for that matter, all other accounts of which I am aware), the choice between them must rest on other grounds. The grounds favoring the identification request analysis include the use of separate utterances and illocutionary acts for referring, and the independently motivated satisfaction of Searle's conditions on referring.

5.2. Searle vs. Russell

Using the propositional act of referring, Searle argues against Russell's [13] theory of descriptions, which holds that the uttering of an expression "the φ" is equivalent to the assertion of a uniquely existential proposition "there is a unique φ". Thus, when reference fails, it is because the uniquely existential proposition is not true. Searle claims instead that the existence of the referent is a precondition to the action of referring. In referring to X, we do not assert that X exists any more than we do in hitting X (Ibid, p. 160); however, the precondition is necessary for successful performance. Searle's argument against this theory essentially comes down to:

... It [Russell's theory] presents the propositional act of definite reference, when performed with definite descriptions ... as equivalent to the illocutionary act of asserting a uniquely existential proposition, and there is no coherent way to integrate such a theory into a theory of illocutionary acts. Under no condition is a propositional act identical with the illocutionary act of assertion, for a propositional act can only occur as part of some illocutionary act, never simply by itself. (Ibid, p. 15.)

There are two difficulties with this argument. First, the requirement that acts of referring be part of an illocutionary act was shown to be unnecessarily restrictive. Second, there is a way to assimilate the assertion of an existential proposition -- an act that Searle claims does not contain a referring act -- into an analysis of illocutionary acts, namely as an indirect request for referent identification. However, because an assertion of a uniquely existential proposition may fail to convey an indirect request for referent identification (just as uttering "It's cold in here" may fail to convey an indirect request), Searle's argument, though weakened, still stands.

6. Summary

There are a number of advantages to treating referent identification as an action that speakers request, and thus to treating the speech act of referring as a request. The analysis not only accounts for data that Searle's account cannot, but it also predicts each of Searle's conditions for performing the act of singular identifying reference, yet it allows for appropriate extension into a planning process. If we extend the perceptual use of referent identification to Searle's more general concept of identification, and we correlate a certain (Gricean) propositional attitude with the use of definite determiners in a noun phrase, then Searle's analysis is subsumed by the act of requesting referent identification. The propositional act of referring is therefore unnecessary.

The promissory note introduced by this approach is to show how the same kind of plan-based reasoning used in analyzing indirect speech acts can take hold when a hearer realizes he cannot, and was not intended to, identify the referent of a description.
That is, plan-based reasoning should explain how a hearer might decide that the speaker's intention cannot be what it appears to be (based on the intent correlated with the use of a definite determiner), leading him, for example, to decide to treat a description attributively [6]. Moreover, such reasoning should be useful in determining intended referents, as Ortony [10] has argued.

To keep this promise, we need to be specific about speaker intentions for other uses of noun phrases. This will be no easy task. One difficulty will be to capture the distinction between achieving effects on a hearer, and doing so communicatively (i.e., in the Gricean way). Thus, for example, a hearer cannot comply with the illocutionary force of "Quick, don't think of an elephant" because there seems to be an "automatic" process of "concept activation" [1]. Achieving effects non-communicatively, i.e., without the recognition of intent, may be central to some kinds of reference. In such cases, speakers would be able to identify referents for a hearer. If this held for singular identifying reference, then there could be grounds for a propositional act. However, we might have to give up the Gricean condition (6), which I suspect Searle would not want to do.

Finally, there are obviously many aspects of reference that need to be accounted for by any comprehensive theory. I make no claims (yet) about the utility of the present approach for dealing with them. Rather, I hope to have opened the door to a formal pragmatics for one aspect of referring.

7. References

1. Appelt, D. Planning natural language utterances to satisfy multiple goals. Ph.D. Th., Stanford University, Stanford, California, December 1981.
2. Austin, J. L. How to do things with words. Oxford University Press, London, 1962.
3. Cohen, P. R. Pragmatics, speaker-reference, and the modality of communication. To appear in Computational Linguistics, 1984.
4. Cohen, P. R., & Levesque, H. J. Speech acts and the recognition of shared plans. Proc. of the Third Biennial Conference, Canadian Society for Computational Studies of Intelligence, Victoria, B.C., May 1980, 263-271.
5. Cohen, P. R., & Levesque, H. J. Speech acts as summaries of shared plans. In preparation.
6. Donnellan, K. Reference and definite descriptions. The Philosophical Review 75, 1966, pp. 281-304.
7. Grice, H. P. Meaning. Philosophical Review 66, 1957, pp. 377-388.
8. Hintikka, J. Semantics for propositional attitudes. In Philosophical logic, D. Reidel Publishing Co., Dordrecht-Holland, 1969.
9. Moore, R. C. Reasoning about knowledge and action. Technical Note 191, Artificial Intelligence Center, SRI International, October 1980.
10. Ortony, A. Some psycholinguistic constraints on the construction and interpretation of definite descriptions. Proceedings of the Second Conference on Theoretical Issues in Natural Language Processing, Urbana, Illinois, 1978, 73-78.
11. Perrault, C. R., & Allen, J. F. A plan-based analysis of indirect speech acts. American Journal of Computational Linguistics 6, 3, 1980, pp. 167-182.
12. Perrault, C. R., & Cohen, P. R. It's for your own good: A note on inaccurate reference. In Elements of Discourse Understanding, Joshi, A., Sag, I., & Webber, B. (Eds.), Cambridge University Press, Cambridge, Mass., 1981.
13. Russell, B. On denoting. Mind 14, 1905, pp. 479-492.
14. Searle, J. R. Speech acts: An essay in the philosophy of language. Cambridge University Press, Cambridge, 1969.
The pragmatics of non-anaphoric noun phrases. In Research in Knowledge Representation for Natural Language Understanding: Annual Report, 9/1/82 - 8/31/83, Bolt Beranek and Newman, Inc., Cambridge, Mass., 1983.
16. Wilkes-Gibbs, D. How to do things with reference: The function of goals in determining referential choice. Unpublished ms.
Entity-Oriented Parsing

Philip J. Hayes
Computer Science Department, Carnegie-Mellon University
Pittsburgh, PA 15213, USA

Abstract 1

An entity-oriented approach to restricted-domain parsing is proposed. In this approach, the definitions of the structure and surface representation of domain entities are grouped together. Like semantic grammar, this allows easy exploitation of limited domain semantics. In addition, it facilitates fragmentary recognition and the use of multiple parsing strategies, and so is particularly useful for robust recognition of extragrammatical input. Several advantages from the point of view of language definition are also noted. Representative samples from an entity-oriented language definition are presented, along with a control structure for an entity-oriented parser, some parsing strategies that use the control structure, and worked examples of parses. A parser incorporating the control structure and the parsing strategies is currently under implementation.

1. Introduction

The task of typical natural language interface systems is much simpler than the general problem of natural language understanding. The simplifications arise because:
1. the systems operate within a highly restricted domain of discourse, so that a precise set of object types can be established, and many of the ambiguities that come up in more general natural language processing can be ignored or constrained away;
2. even within the restricted domain of discourse, a natural language interface system only needs to recognize a limited subset of all the things that could be said -- the subset that its back-end can respond to.

The most commonly used technique to exploit these limited domain constraints is semantic grammar [1, 2, 9], in which semantically defined categories (such as <ship> or <ship-attribute>) are used in a grammar (usually ATN based) in place of syntactic categories (such as <noun> or <adjective>). While semantic grammar has been very successful in exploiting limited domain constraints to reduce ambiguities and eliminate spurious parses of grammatical input, it still suffers from the fragility in the face of extragrammatical input characteristic of parsing based on transition nets [4]. Also, the task of restricted-domain language definition is typically difficult in interfaces based on semantic grammar, in part because the grammar definition formalism is not well integrated with the method of defining the objects and actions of the domain of discourse (though see [6]).

1 This research was sponsored by the Air Force Office of Scientific Research under Contract AFOSR-82-0219.

This paper proposes an alternative approach to restricted-domain language recognition called entity-oriented parsing. Entity-oriented parsing uses the same notion of semantically-defined categories as semantic grammar, but does not embed these categories in a grammatical structure designed for syntactic recognition. Instead, a scheme more reminiscent of conceptual or case-frame parsers [3, 10, 11] is employed. An entity-oriented parser operates from a collection of definitions of the various entities (objects, events, commands, states, etc.) that a particular interface system needs to recognize. These definitions contain information about the internal structure of the entities, about the way the entities will be manifested in the natural language input, and about the correspondence between the internal structure and surface representation.
This arrangement provides a good framework for exploiting the simplifications possible in restricted-domain natural language recognition because:
1. the entities form a natural set of types through which to constrain the recognition semantically; the types also form a natural basis for the structural definitions of entities.
2. the set of things that the back-end can respond to corresponds to a subset of the domain entities (remember that entities can be events or commands as well as objects), so the goal of an entity-oriented system will normally be to recognize one of a "top-level" class of entities. This is analogous to the set of basic message patterns that the machine translation system of Wilks [11] aimed to recognize in any input.

In addition to providing a good general basis for restricted-domain natural language recognition, we claim that the entity-oriented approach also facilitates robustness in the face of extragrammatical input and ease of language definition for restricted-domain languages. Entity-oriented parsing has the potential to provide better parsing robustness than more traditional semantic grammar techniques for two major reasons:
- The individual definition of all domain entities facilitates their independent recognition. Assuming there is appropriate indexing of entities through lexical atoms that might appear in a surface description of them, this recognition can be done bottom-up, thus making possible recognition of elliptical, fragmentary, or partially incomprehensible input. The same definitions can also be used in a more efficient top-down manner when the input conforms to the system's expectations.
- Recent work [5, 8] has suggested the usefulness of multiple construction-specific recognition strategies for restricted-domain parsing, particularly for dealing with extragrammatical input. The individual entity definitions form an ideal framework around which to organize the multiple strategies. In particular, each definition can specify which strategies are applicable to recognizing it. Of course, this only provides a framework for robust recognition; the robustness achieved still depends on the quality of the actual recognition strategies used.

The advantages of entity-oriented parsing for language definition include:
- All information relating to an entity is grouped in one place, so that a language definer will be able to see more clearly whether a definition is complete and what would be the consequences of any addition or change to the definition.
- Since surface (syntactic) and structural information about an entity is grouped together, the surface information can refer to the structure in a clear and coherent way. In particular, this allows hierarchical surface information to use the natural hierarchy defined by the structural information, leading to greater consistency of coverage in the surface language.
- Since entity definitions are independent, the information necessary to drive recognition by the multiple construction-specific strategies mentioned above can be represented directly in the form most useful to each strategy, thus removing the need for any kind of "grammar compilation" step and allowing more rapid grammar development.
In the remainder of the paper, we make these arguments more concrete by looking at some fragments of an entity-oriented language definition, by outlining the control structure of a robust restricted-domain parser driven by such definitions, and by tracing through some worked examples of the parser in operation. These examples also describe some specific parsing strategies that exploit the control structure. A parser incorporating the control structure and the parsing strategies is currently under implementation. Its design embodies our experience with a pilot entity-oriented parser that has already been implemented, but is not described here.

2. Example Entity Definitions

This section presents some example entity and language definitions suitable for use in entity-oriented parsing. The examples are drawn from the domain of an interface to a database of college courses. Here is the (partial) definition of a course.

[EntityName: CollegeCourse
 Type: Structured
 Components: (
   [ComponentName: CourseNumber
    Type: Integer
    GreaterThan: 99
    LessThan: 1000]
   [ComponentName: CourseDepartment
    Type: CollegeDepartment]
   [ComponentName: CourseClass
    Type: CollegeClass]
   [ComponentName: CourseInstructor
    Type: CollegeProfessor]
 )
 SurfaceRepresentation:
   [SyntaxType: NounPhrase
    Head: (course | seminar | $CourseDepartment $CourseNumber | ...)
    AdjectivalComponents: (CourseDepartment ...)
    Adjectives: (
      [AdjectivalPhrase: (new | most recent)
       Component: CourseSemester
       Value: CurrentSemester]
    )
    PostNominalCases: (
      [Preposition: (?intended for | directed to | ...)
       Component: CourseClass]
      [Preposition: (?taught by | ...)
       Component: CourseInstructor]
    )
   ]
]

For reasons of space, we cannot explain all the details of this language. In essence, a course is defined as a structured object with components: number, department, instructor, etc. (square brackets denote attribute/value lists, and round brackets ordinary lists). This definition is kept separate from the surface representation of a course, which is defined to be a noun phrase with adjectives, postnominal cases, etc. At a more detailed level, note that the special purpose way of specifying a course by its department juxtaposed with its number (e.g. Computer Science 101) is handled by an alternate pattern for the head of the noun phrase (dollar signs refer back to the components). This allows the user to say (redundantly) phrases like "CS 101 taught by Smith". Note also that the way the department of a course can appear in the surface representation of a course is specified in terms of the CourseDepartment component (and hence in terms of its type, CollegeDepartment) rather than directly as an explicit surface representation. This ensures consistency throughout the language in what will be recognized as a description of a department. Coupled with the ability to use general syntactic descriptors (like NounPhrase in the description of a SurfaceRepresentation), this can prevent the kind of patchy coverage prevalent with standard semantic grammar language definitions.

Subsidiary objects like CollegeDepartment are defined in similar fashion.

[EntityName: CollegeDepartment
 Type: Enumeration
 EnumeratedValues: (
   ComputerScienceDepartment
   MathematicsDepartment
   HistoryDepartment
   ...
 )
 SurfaceRepresentation:
   [SyntaxType: PatternSet
    Patterns: (
      [Pattern: (CS | Computer Science | Comp Sci | ...)
       Value: ComputerScienceDepartment]
    )
   ]
]

CollegeCourse will also be involved in higher-level entities of our restricted domain, such as a command to the database system to enrol a student in a course.

[EntityName: EnrolCommand
 Type: Structured
 Components: (
   [ComponentName: Enrollee
    Type: CollegeStudent]
   [ComponentName: EnrolIn
    Type: CollegeCourse]
 )
 SurfaceRepresentation:
   [SyntaxType: ImperativeCaseFrame
    Head: (enrol | register | include | ...)
    DirectObject: ($Enrollee)
    Cases: (
      [Preposition: (in | into | ...)
       Component: EnrolIn]
    )
   ]
]

These examples also show how all information about an entity, concerning both fundamental structure and surface representation, is grouped together and integrated. This supports the claim that entity-oriented language definition makes it easier to determine whether a language definition is complete.
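To make the shape of these definitions concrete, the following is a minimal sketch -- our illustration, not Hayes's formalism or code -- of how the CollegeCourse and CollegeDepartment definitions above might be encoded as plain data structures. The Python encoding and the abridged field set are assumptions for exposition only.

```python
# Hypothetical encoding of the entity definitions above as Python data.
# Field names mirror the paper's notation; the encoding itself is ours.

COLLEGE_DEPARTMENT = {
    "EntityName": "CollegeDepartment",
    "Type": "Enumeration",
    "EnumeratedValues": ["ComputerScienceDepartment",
                         "MathematicsDepartment",
                         "HistoryDepartment"],
    "SurfaceRepresentation": {
        "SyntaxType": "PatternSet",
        "Patterns": [
            {"Pattern": ["CS", "Computer Science", "Comp Sci"],
             "Value": "ComputerScienceDepartment"},
        ],
    },
}

COLLEGE_COURSE = {
    "EntityName": "CollegeCourse",
    "Type": "Structured",
    "Components": [
        {"ComponentName": "CourseNumber", "Type": "Integer",
         "GreaterThan": 99, "LessThan": 1000},
        {"ComponentName": "CourseDepartment", "Type": "CollegeDepartment"},
        {"ComponentName": "CourseClass", "Type": "CollegeClass"},
        {"ComponentName": "CourseInstructor", "Type": "CollegeProfessor"},
    ],
    "SurfaceRepresentation": {
        "SyntaxType": "NounPhrase",
        "Head": ["course", "seminar", "$CourseDepartment $CourseNumber"],
        "PostNominalCases": [
            {"Preposition": ["intended for", "directed to"],
             "Component": "CourseClass"},
            {"Preposition": ["taught by"],
             "Component": "CourseInstructor"},
        ],
    },
}
```

Because structure and surface information live in one record, a recognition strategy can look up both without a separate compiled grammar, which is the point the paper makes about rapid grammar development.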
3. Control Structure for a Robust Entity-Oriented Parser

The potential advantages of an entity-oriented approach from the point of view of robustness in the face of ungrammatical input were outlined in the introduction. To exploit this potential while maintaining efficiency in parsing grammatical input, special attention must be paid to the control structure of the parser used. Desirable characteristics for the control structure of any parser capable of handling ungrammatical as well as grammatical input include:
- the control structure allows grammatical input to be parsed straightforwardly without considering any of the possible grammatical deviations that could occur;
- the control structure enables progressively higher degrees of grammatical deviation to be considered when the input does not satisfy grammatical expectations;
- the control structure allows simpler deviations to be considered before more complex deviations.

The first two points are self-evident, but the third may require some explanation. The problem it addresses arises particularly when there are several alternative parses under consideration. In such cases, it is important to prevent the parser from considering drastic deviations in one branch of the parse before considering simple ones in the other. For instance, the parser should not start hypothesizing missing words in one branch when a simple spelling correction in another branch would allow the parse to go through.

We have designed a parser control structure for use in entity-oriented parsing which has all of the characteristics listed above. This control structure operates through an agenda mechanism. Each item of the agenda represents a different continuation of the parse, i.e. a partial parse plus a specification of what to do next to continue that partial parse. With each continuation is associated an integer flexibility level that represents the degree of grammatical deviation implied by the continuation. That is, the flexibility level represents the degree of grammatical deviation in the input if the continuation were to produce a complete parse without finding any more deviation. Continuations with a lower flexibility level are run before continuations with a higher flexibility level. Once a complete parse has been obtained, continuations with a flexibility level higher than that of the continuation which resulted in the parse are abandoned. This means that the agenda mechanism never activates any continuations with a flexibility level higher than the level representing the lowest level of grammatical deviation necessary to account for the input. Thus effort is not wasted exploring more exotic grammatical deviations when the input can be accounted for by simpler ones. This shows that the parser has the first two of the characteristics listed above.

In addition to taking care of alternatives at different flexibility levels, this control structure also handles the more usual kind of alternatives faced by parsers -- those representing alternative parses due to local ambiguity in the input. Whenever such an ambiguity arises, the control structure duplicates the relevant continuation as many times as there are ambiguous alternatives, giving each of the duplicated continuations the same flexibility level. From there on, the same agenda mechanism used for the various flexibility levels will keep each of the ambiguous alternatives separate and ensure that all are investigated (as long as their flexibility level is not too high). Integrating the treatment of the normal kind of ambiguities with the treatment of alternative ways of handling grammatical deviations ensures that the level of grammatical deviation under consideration can be kept the same in locally ambiguous branches of a parse. This fulfills the third characteristic listed above.

Flexibility levels are additive, i.e. if some grammatical deviation has already been found in the input, then finding a new one will raise the flexibility level of the continuation concerned to the sum of the flexibility levels involved. This ensures a relatively high flexibility level, and thus a relatively low likelihood of activation, for continuations in which combinations of deviations are being postulated to account for the input.

Since space is limited, we cannot go into the implementation of this control structure. However, it is possible to give a brief description of the control structure primitives used in programming the parser. Recall first that the kind of entity-oriented parser we have been discussing consists of a collection of recognition strategies. The more specific strategies exploit the idiosyncratic features of the entities/construction types they are specific to, while the more general strategies apply to wider classes of entities and depend on more universal characteristics. In either case, the strategies are pieces of (Lisp) program rather than more abstract rules or networks. Integration of such strategies with the general scheme of flexibility levels described above is made straightforward through a special split function which the control structure supports as a primitive. This split function allows the programmer of a strategy to specify one or more alternative continuations from any point in the strategy and to associate a different flexibility increment with each of them. The implementation of this statement takes care of restarting each of the alternative continuations at the appropriate time and with the appropriate local context.

Some examples should make this account of the control structure much clearer. The examples will also present some specific parsing strategies and show how they use the split function described above. These strategies are designed to effect robust recognition of extragrammatical input and efficient recognition of grammatical input by exploiting entity-oriented language definitions like those in the previous section. A rough sketch of such an agenda appears below.
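The following is a minimal sketch, under our own assumptions, of the agenda mechanism just described: continuations are ranked by cumulative flexibility level, a split primitive posts alternatives with flexibility increments, and continuations more deviant than a completed parse are abandoned. Hayes's parser was written in Lisp and its primitives are not published in this form; the Python rendering and all names in it are illustrative only.

```python
import heapq

class Agenda:
    """Best-first agenda over parser continuations, ordered by
    cumulative flexibility level (degree of grammatical deviation)."""

    def __init__(self):
        self._queue = []        # entries: (flexibility, seq, thunk)
        self._seq = 0           # tie-breaker: FIFO within one level
        self.best_parse = None
        self.best_level = None

    def split(self, level, alternatives):
        """Post alternative continuations from within a strategy.
        `alternatives` is a list of (flex_increment, thunk) pairs;
        increments are additive with the current level."""
        for inc, thunk in alternatives:
            heapq.heappush(self._queue, (level + inc, self._seq, thunk))
            self._seq += 1

    def run(self):
        while self._queue:
            level, _, thunk = heapq.heappop(self._queue)
            # Abandon continuations more deviant than a found parse.
            if self.best_level is not None and level > self.best_level:
                break
            result = thunk(level, self)   # a thunk may call split() again
            if result is not None and self.best_parse is None:
                self.best_parse, self.best_level = result, level
        return self.best_parse

# Toy usage: an exact-match continuation (increment 0) competing with
# a spelling-correction continuation (increment 2).
def exact(level, agenda):
    return None                           # fails: word not in lexicon

def corrected(level, agenda):
    return "parse-with-correction"

agenda = Agenda()
agenda.split(0, [(0, exact), (2, corrected)])
print(agenda.run())                       # -> parse-with-correction
```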
4. Example Parses

Let us examine first how a simple database command like:

Enrol Susan Smith in CS 101

might be parsed with the control structure and language definitions presented in the two previous sections. We start off with the top-level parsing strategy, RecognizeAnyEntity. This strategy first tries to identify a top-level domain entity (in this case a database command) that might account for the entire input. It does this in a bottom-up manner by indexing from words in the input to those entities that they could appear in. In this case, the best indexer is the first word, 'enrol', which indexes EnrolCommand. In general, however, the best indexer need not be the first word of the input and we need to consider all words, thus raising the potential of indexing more than one entity. In our example, we would also index CollegeStudent, CollegeCourse, and CollegeDepartment. However, these are not top-level domain entities and are subsumed by EnrolCommand, and so can be ignored in favour of it.

Once EnrolCommand has been identified as an entity that might account for the input, RecognizeAnyEntity initiates an attempt to recognize it. Since EnrolCommand is listed as an imperative case frame, this task is handled by the ImperativeCaseFrame recognizer strategy. In contrast to the bottom-up approach of RecognizeAnyEntity, this strategy tackles its more specific task in a top-down manner using the case frame recognition algorithm developed for the CASPAR parser [8]. In particular, the strategy will match the case frame header and the preposition 'in', and initiate recognitions of fillers of its direct object case and its case marked by 'in'. These subgoals are to recognize a CollegeStudent to fill the Enrollee case on the input segment "Susan Smith" and a CollegeCourse to fill the EnrolIn case on the segment "CS 101". Both of these recognitions will be successful, hence causing the ImperativeCaseFrame recognizer to succeed and hence the entire recognition. The resulting parse would be:

[InstanceOf: EnrolCommand
 Enrollee: [InstanceOf: CollegeStudent
            FirstNames: (Susan)
            Surname: Smith]
 EnrolIn: [InstanceOf: CollegeCourse
           CourseDepartment: ComputerScienceDepartment
           CourseNumber: 101]
]

Note how this parse result is expressed in terms of the underlying structural representation used in the entity definitions, without the need for a separate semantic interpretation step.

The last example was completely grammatical and so did not require any flexibility. After an initial bottom-up step to find a dominant entity, that entity was recognized in a highly efficient top-down manner. For an example involving input that is ungrammatical (as far as the parser is concerned), consider:

Place Susan Smith in computer science for freshmen

There are two problems here: we assume that the user intended 'place' as a synonym for 'enrol', but that it happens not to be in the system's vocabulary; the user has also shortened the grammatically acceptable phrase, 'the computer science course for freshmen', to an equivalent phrase not covered by the surface representation for CollegeCourse as defined earlier.

Since 'place' is not a synonym for 'enrol' in the language as presently defined, the RecognizeAnyEntity strategy cannot index EnrolCommand from it and hence cannot (as it did in the previous example) initiate a top-down recognition of the entire input.
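Both examples begin with the bottom-up indexing step performed by RecognizeAnyEntity. The sketch below is our rough reconstruction of such a word-to-entity index; the helper functions it takes as parameters (`lexical_items_of`, `component_types_of`) are assumptions for illustration, not functions from the paper.

```python
# Hypothetical sketch of bottom-up entity indexing: map surface words to
# the entities whose definitions mention them, then keep entities not
# subsumed (as components) by another indexed entity.

from collections import defaultdict

def build_index(entity_names, lexical_items_of):
    """lexical_items_of(e) is assumed to yield the surface words found
    in e's SurfaceRepresentation (heads, patterns, prepositions...)."""
    index = defaultdict(set)
    for entity in entity_names:
        for word in lexical_items_of(entity):
            index[word.lower()].add(entity)
    return index

def candidate_entities(index, words, component_types_of):
    indexed = set().union(*(index.get(w.lower(), set()) for w in words))
    # Drop entities that occur as components of another indexed entity,
    # e.g. CollegeCourse is subsumed when EnrolCommand is indexed.
    subsumed = {c for e in indexed for c in component_types_of(e)}
    return (indexed - subsumed) or indexed
```

When a single top-level entity survives, a top-down recognition of it is attempted; when none does, as with "Place Susan Smith ...", the fragmentary bottom-up path described next takes over.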
To deal with such eventualities, RecognizeAnyEntity executes a split statement specifying two continuations immediately after it has found all the entities indexed by the input. The first continuation has a zero flexibility level increment. It looks at the indexed entities to see if one subsumes all the others. If it finds one, it attempts a top-down recognition as described in the previous example. If it cannot find one, or if it does and the top-down recognition fails, then the continuation itself fails. The second continuation has a positive flexibility increment and follows a more robust bottom-up approach described below. This second continuation was established in the previous example too, but was never activated since a complete parse was found at the zero flexibility level, so we did not mention it.

In the present example, the first continuation fails since there is no subsuming entity, and so the second continuation gets a chance to run. Instead of insisting on identifying a single top-level entity, this second continuation attempts to recognize all of the entities that are indexed, in the hope of later being able to piece together the various fragmentary recognitions that result. The entities directly indexed are CollegeStudent by "Susan" and "Smith",2 CollegeDepartment by "computer" and "science", and CollegeClass by "freshmen". So a top-down attempt is made to recognize each of these entities. We can assume these goals are fulfilled by simple top-down strategies, appropriate to the SurfaceRepresentation of the corresponding entities, and operating with no flexibility level increment.

Having recognized the low-level fragments, the second continuation of RecognizeAnyEntity now attempts to unify them into larger fragments, with the ultimate goal of unifying them into a description of a single entity that spans the whole input. To do this, it takes adjacent fragments pairwise and looks for entities of which they are both components, and then tries to recognize the subsuming entity in the spanning segment. The two pairs here are CollegeStudent and CollegeDepartment (subsumed by CollegeStudent) and CollegeDepartment and CollegeClass (subsumed by CollegeCourse). To investigate the second of these pairings, RecognizeAnyEntity would try to recognize a CollegeCourse in the spanning segment 'computer science for freshmen' using an elevated level of flexibility. This goal would be handled, just like all recognitions of CollegeCourse, by the NominalCaseFrame recognizer. With no flexibility increment, this strategy fails because the head noun is missing. However, with another flexibility increment, the recognition can go through, with the CollegeDepartment being treated as an adjective and the CollegeClass being treated as a postnominal case -- it has the right case marker, "for", and the adjective and post-nominal are in the right order.

This successful fragment unification leaves two fragments to unify -- the old CollegeStudent and the newly derived CollegeCourse. There are several ways of unifying a CollegeStudent and a CollegeCourse -- either could subsume the other, or they could form the parameters to one of three database modification commands: EnrolCommand, WithdrawCommand, and TransferCommand (with the obvious interpretations). Since the commands are higher-level entities than CollegeStudent and CollegeCourse, they would be preferred as top-level fragment unifiers. We can also rule out TransferCommand in favour of the first two because it requires two courses and we only have one. In addition, a recognition of EnrolCommand would succeed at a lower flexibility increment than WithdrawCommand,3 since the preposition 'in' that marks the CollegeCourse in the input is the correct marker of the EnrolIn case of EnrolCommand, but is not the appropriate marker for WithdrawFrom, the course-containing case of WithdrawCommand. Thus a fragment unification based on EnrolCommand would be preferred. Also, the alternate path of fragment amalgamation -- combining CollegeStudent and CollegeDepartment into CollegeStudent and then combining CollegeStudent and CollegeCourse -- that we left pending above cannot lead to a complete instantiation of a top-level database command. So RecognizeAnyEntity will be in a position to assume that the user really intended the EnrolCommand. Since this recognition involved several significant assumptions, we would need to use focused interaction techniques [7] to present the interpretation to the user for approval before acting on it. Note that if the user does approve it, it should be possible (with further approval) to add 'place' to the vocabulary as a synonym for 'enrol', since 'place' was an unrecognized word in the surface position where 'enrol' should have been.

For a final example, let us examine an extragrammatical input that involves continuations at several different flexibility levels:

Transfer Smith from Compter Science 101 Economics 203

The problems here are that 'Computer' has been misspelt and the preposition 'to' is missing from before 'Economics'. The example is similar to the first one in that RecognizeAnyEntity is able to identify a top-level entity to be recognized top-down, in this case TransferCommand. Like EnrolCommand, TransferCommand is an imperative case frame, and so the task of recognizing it is handled by the ImperativeCaseFrame strategy. This strategy can find the preposition 'from', and so can initiate the appropriate recognitions for fillers of the OutOfCourse and Student cases. The recognition for the Student case succeeds without trouble, but the recognition for the OutOfCourse case requires a spelling correction.

2 We assume we have a complete listing of students and so can index from their names.

Whenever a top-down parsing strategy fails to verify that an input word is in a specific lexical class, there is the possibility that the word that failed is a misspelling of a word that would have succeeded. In such cases, the lexical lookup mechanism executes a split statement.4 A zero increment branch fails immediately, but a second branch with a small positive increment tries spelling correction against the words in the predicted lexical class. If the correction fails, this second branch fails, but if the correction succeeds, the branch succeeds also. In our example, the continuation involving the second branch of the lexical lookup is highest on the agenda after the primary branch has failed. In particular, it is higher than the second branch of RecognizeAnyEntity described in the previous example, since the flexibility level increment for spelling correction is small. This means that the lexical lookup is continued with a spelling correction, thus resolving the problem. Note also that since the spelling correction is only attempted within the context of recognizing a CollegeCourse -- the filler of OutOfCourse -- the target words are limited to course names. This means spelling correction is much more accurate and efficient than if correction were attempted against the whole dictionary.

After the OutOfCourse and Student cases have been successfully filled, the ImperativeCaseFrame strategy can do no more without a flexibility level increment. But it has not filled all the required cases of TransferCommand, and it has not used up all the input it was given, so it splits and fails at the zero-level flexibility increment. However, in a continuation with a positive flexibility level increment, it is able to attempt recognition of cases without their marking prepositions. Assuming the sum of this increment and the spelling correction increment is still less than the increment associated with the second branch of RecognizeAnyEntity, this continuation would be the next one run. In this continuation, the ImperativeCaseFrame recognizer attempts to match unparsed segments of the input against unfilled cases. There is only one of each, and the resulting attempt to recognize 'Economics 203' as the filler of IntoCourse succeeds straightforwardly. Now all required cases are filled and all input is accounted for, so the ImperativeCaseFrame strategy and hence the whole parse succeeds with the correct result.

For the example just presented, obtaining the ideal behaviour depends on careful choice of the flexibility level increments. There is a danger here that the performance of the parser as a whole will be dependent on iterative tuning of these increments, and may become unstable with even small changes in the increments. It is too early yet to say how easy it will be to manage this problem, but we plan to pay close attention to it as the parser comes into operation.

3 This relatively fine distinction between EnrolCommand and WithdrawCommand, based on the appropriateness of the preposition 'in', is problematical in that it assumes that the flexibility level would be incremented in very fine-grained steps. If that was impractical, the final outcome of the parse would be ambiguous between an EnrolCommand and a WithdrawCommand, and the user would have to be asked to make the discrimination.

4 If this causes too many splits, an alternative is only to do the split when the input word in question is not in the system's lexicon at all.
]he agenda-style control structure we plan to use in this imptementath)~ is described above, along wilh some parsing sbateGies it will employ and some worked examples of the sbategies and control structure in action. Acknowler.igements I-he ideas in this paper benefited cousiderably from discussions with other membr~rs of the Multipar group at Carnegie-Mellon Cnraputer Science Department, parlicu!arly Jaimo CarbonelL Jill Fain, ..rod Ste,~e F4inton. Steva Minton was a co-dc~si§ner o! the. control stru<;tu+e ;~resented att)ov.~:, and also founrl :m efficient w:w to iruplement the split function de.'..cribed in coa+~ec+tion with that control structure. References 1. Brown, J. S. and Bt;rton. R. I::l. Multiple Representations of "Q~owl~dgo for I utoriai Reasoning. In Repf(~s,'~nt;ttion and Uod~-:rstan'.'.'mrj, Bubr,,w, D. ,.G. and Collins, A., Ed.,Academic Press, New York, 1975, pp. ,311-349. 2. Burton, R. R. Semantic Grammar: An Engineering Technique for Ccnstructing Natural I.ai%luae, ~ Understanding Systems. BBN Reporl 3453, Bolt, Beranek, and Newman, Inc., Cambridge, Mass., December, 1976. 3. Carbonell, J. G., Boggs, W. M., Mau]din, M. L., and Anick, P. G. The ×CAI.tBUR Project: A Natural Lan{luage Interface ~o Expert Systems. Prt;c. Eighth Int. Jt. Conf. on Artificial Intelligence, Karl.'~ruhe, August, 1983. 4. Carbonell, J. G+ and Hayes, P.J. "Recovery Strategies for Parsing Extragrammatical Language." Com~utational Linguistics 10 (t 984). 5. Carbonell, J. G. and 14ayes, P. J. Robust Parsing Using Multiple Construction-Specific Strategies. In Natural Language Pcrsing Systems, L. Bole, Ed.,Springer-Verlag, 1984. 6. Grosz, B. J. TEAM: A Transport[~ble Nalural Language Interface System. Prec. Conf. on Applie(I Natural L:~n~tuage Processing, S'mta Monica, February, 198,3. 7. Hayes P. J. A Construction Specific Approach to Focused h,teraction in Flexible Parsing. Prec. of 19th Annual Nl~-.,~ting of the Assoc. for Comp~Jt. ling.. Stanford University, June, 1981, pp. 149-152. 8. Hi:yes, P. J. and Ca~t:onell, J. G. lvtulti-Strategy P~r,~i+~g ~;nd its Role in [~'obust Man. I~,tachin÷.~ Cnmmunicatio'.~. Carnegie-Mellon IJ~iversity Computer Sc~olJce Department. ,May, 1981. 9. I'lendrix, G. G. Hum~.n Engine+;ring for At)plied Natural Language Processi~;g. Prec. Fifth Int. Jt. Conf. on Arlificial Into!l;genc,~., t,.;; r. 1077, pp. 183. ! 91. IO. i:hes;)e,.;;~. C. K. ao,-I Sch~-nk. R.C. Comprehension by C'ompuLr~r: Expectation.[lase, l An;.tly:,;3 el S~nteac+~G irt Context. rech. Ru'pL 7~5, C, omputc;r Science Dept., Y£1e Uoiveruity, 1976. 1 I. W~lks, ?. A. Prefere:-,ce Semantics. In F-ormal Semantics of IV~tural L~.ngu:zge , Keer;an, k(I..Can}bridge University Press, 1975. 217
Combining Functionality and Object-Orientedness for Natural Language Processing

Toyoaki Nishida 1 and Shuji Doshita
Department of Information Science, Kyoto University
Sakyo-ku, Kyoto 606, JAPAN

Abstract

This paper proposes a method for organizing linguistic knowledge in both systematic and flexible fashion. We introduce a purely applicative language (PAL) as an intermediate representation and an object-oriented computation mechanism for its interpretation. PAL enables the establishment of a principled and well-constrained method of interaction among lexicon-oriented linguistic modules. The object-oriented computation mechanism provides a flexible means of abstracting modules and sharing common knowledge.

1. Introduction

The goal of this paper is to elaborate a domain-independent way of organizing linguistic knowledge, as a step towards a cognitive processor consisting of two components: a linguistic component and a memory component. In this paper we assume the existence of the latter component, meeting the requirements described in [Schank 82]. Thus the memory component attempts to understand the input in terms of its empirical knowledge, predict what happens next, and reorganize its knowledge based on new observations. Additionally, we assume that the memory component can judge whether a given observation is plausible or not, by consulting its empirical knowledge.

The role of the linguistic component, on the other hand, is to supply "stimulus" to the memory component. More specifically, the linguistic component attempts to determine the propositional content, to supply missing constituents for elliptical expressions, to resolve references, to identify the focus, to infer the intention of the speaker, etc. In short, the role of the linguistic component is to "translate" the input into an internal representation. For example, the output of the linguistic component for an input:

When did you go to New York?

is something like the following 2:

There is an event e specified by a set of predicates: isa(e)=going AND past(e) AND agent(e)=the_hearer AND destination(e)=New_York. The speaker is asking the hearer for the time when the event e took place. The hearer presupposes that the event e actually took place at some time in the past.

1 Currently visiting Department of Computer Science, Yale University, New Haven, Connecticut 06520, USA.

If the presupposition contradicts what the memory component knows, then the memory component will recognize the input as a loaded question [Kaplan 82]. As a result, the memory component may change its content or execute a plan to inform the user that the input is inconsistent with what it knows.

The primary concern of this paper is with the linguistic component. The approach we take in this paper is to combine the notion of compositionality 3 and an object-oriented computational mechanism to explore a principled and flexible way of organizing linguistic knowledge.

2. Intermediate Representation and Computational Device for Interpretation

2.1 PAL (Purely Applicative Language)

Effective use of intermediate representations is useful. We propose the use of a language which we call PAL (Purely Applicative Language). In PAL, new composite expressions are constructed only with a binary form of function application. Thus, if x and y are well-formed formulas of PAL, so is a form x(y).
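As a concrete illustration of this binary-application format -- our sketch, not part of the paper -- PAL terms can be rendered as a tiny datatype in which a constant is a string and the only composite form is an application node. The noun-phrase and clause examples it prints are worked out in the paragraphs that follow.

```python
# Hypothetical sketch of PAL terms: a constant is just a string, and the
# only composite form is binary application App(functor, argument).

class App:
    def __init__(self, functor, argument):
        self.functor, self.argument = functor, argument

def show(t):
    """Render a PAL term, parenthesizing composite functors."""
    if isinstance(t, str):
        return t
    f, a = show(t.functor), show(t.argument)
    if isinstance(t.functor, App):
        f = f"({f})"
    return f"{f}({a})"

# "very big apple"  =>  (very(big))(apple)
print(show(App(App("very", "big"), "apple")))

# "he eats it"  =>  (*subject(he))((*object(it))(eats))
print(show(App(App("*subject", "he"),
               App(App("*object", "it"), "eats"))))
```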
Expressions of PAL are related to expressions of natural language as follows: Generally, when a phrase consists of its immediate descendants, say x and y, a PAL expression for the phrase is one of the following forms:

<x>(<y>) or <y>(<x>)

where <a> stands for a PAL expression for a phrase a. Which expression is the case depends on which phrase modifies which. If a phrase x modifies y, then the PAL expression for x takes the functor position, i.e., the form is <x>(<y>). Simple examples are:

big apple => big(apple) ; adjectives modify nouns
very big => very(big) ; adverbs modify adjectives
very big apple => (very(big))(apple) ; recursive composition

2 As illustrated in this example, we assume a predicate notation as an output of the linguistic component. But this choice is only for descriptive purposes and is not significant.

3 We prefer the term "functionality" to "compositionality", reflecting a procedural view rather than a purely mathematical view.

How about other cases? In principle, this work is based on Montague's observations [Montague 74]. Thus we take the position that noun phrases modify (are functions of, to be more precise) verb phrases. But unlike Montague grammar we do not use lambda expressions to bind case elements. Instead we use special functors standing for case markers. For example,

he runs => (*subject(he))(runs)
he eats it => (*subject(he))((*object(it))(eats))

Another example, involving a determiner, is illustrated below:

a big apple => a(big(apple)) ; determiners modify nouns

Sometimes we assume "null" words or items corresponding to morphemes, such as role indicators, nominalizers, null NPs, etc.

apple which he eats => (which((*subject(he))((*object(*null))(eats))))(apple)
; restrictive relative clauses modify nouns,
; relativizers modify sentences to make adjectives

In the discussion above, the notion of modify is crucial. What do we mean when we say x modifies y? In the case of Montague grammar, this question is answered based on a predetermined set-theoretical model. For example, a noun is interpreted as a set of entities; the noun "penguin", for instance, is interpreted as the set of all penguins. An adjective, on the other hand, is interpreted as a function from sets of entities to sets of entities; the adjective "small", for instance, is interpreted as a selector function which takes such a set of entities (the interpretation of a noun) and picks out from it a set of "small" entities. Note that this is a simplified discussion; intension is neglected. Note also that a different conception may lead to a different definition of the relation modify, which will in turn lead to intermediate representations with different function-argument relationships. After all, the choice of semantic representation is relative to the underlying model and how it is interpreted. A good choice of a semantic representation - interpretation pair leads to a less complicated system and makes it easier to realize. The next section discusses a computational device for interpreting PAL expressions.

2.2 Object-Oriented Domain

The notion of object-orientedness is widely used in computer science. We employ the notion in LOOPS [Bobrow 81]. The general idea is as follows: We have a number of objects. Objects can be viewed as both data and procedures. They are data in the sense that they have a place (called a local variable) to store information. At the same time, they are procedures in that they can manipulate data. An object can only update local variables belonging to itself. When data belongs to another object, a message must be sent to request the update. A message consists of a label and its value. In order to send a message, the agent has to know the name of the receiver. There is no other means for manipulating data. Objects can be classified into classes and instances. A class defines a procedure (called a method) for handling incoming messages of its instances. A class inherits methods of its superclasses.

3. Interpretation of PAL Expressions in Object-Oriented Domain

A class is defined for each constant of PAL. A class object for a lexical item contains linguistic knowledge in a procedural form. In other words, a class contains information as to how a corresponding lexical item is mapped into memory structures. A PAL expression is interpreted by evaluating the form which results from replacing each constant of a given PAL expression by an instance of an object whose class name is the same as the label of the constant. The evaluation is done by repeating the following cycle:
- an object in argument position sends to an object in functor position a message whose label is "argument" and whose value is the object itself;
- a corresponding method is invoked and an object is returned as a result of application; usually one object causes another object to modify its content and the result is a modified version of either the functor or the argument.

Note that objects can interact only in a constrained way. This is a stronger claim than one allowing arbitrary communication. The more principled and constrained the way modules of the linguistic component interact, the less complicated will be the system and therefore the better perspective we can obtain for writing a large grammar.

3.1 A Simple Example

Let's start by seeing how our simple example for the sentence "he runs" is interpreted in our framework. A PAL expression for this sentence is:

(*subject(he))(runs)

Class definitions for related objects are shown in Figure 3.1. The interpretation process goes as follows:
- Instantiating '*subject': let's call the new instance *subject0.
- Instantiating 'he': a referent is looked for from the memory. The referent (let's call this i0) is set to the local variable den, which stands for 'denotation'. Let the new instance be he0.
- Evaluating '*subject0(he0)': a message whose label is 'case' and whose value is 'subject' is sent to the object he0. As a result, he0's variable case has the value 'subject'. The value of the evaluation is a modified version of he0, which we call he1 to indicate a different version.
- Instantiating 'runs': let's call the new instance runs0. An event node (of the memory component) is created and its reference (let's call this e0) is set to the local variable den. Then a new proposition 'takes_place(e0)' is asserted to the memory component.
- Evaluating 'he1(runs0)': a message whose label is 'subject' and whose value is he1 is sent to runs0, which causes a new proposition 'agent(e0)=i0' to be asserted in the memory component. The final result of the evaluation is a new version of the object runs0, say runs1.

class *subject:
  argument: send[message, case:subject]; return[self].
    ; if a message with label 'argument' comes, this method will send to the object pointed to by the variable message a message whose label is 'case' and whose value is 'subject'.
    ; the variable message holds the value of an incoming message and the variable self points to the object itself.

class he:
  if instantiated then den <- 'look for referent'.
    ; when a new instance is created, the referent is looked for and the value is set to the local variable den.
  case: case <- message; return[self].
    ; when a message comes which is labeled 'case', the local variable case will be assigned the value the incoming message contains. The value of this method is the object itself.
  argument: return[send[message, case:self]].
    ; when this instance is applied to another object, this object will send a message whose label is the value of the local variable case and whose value field is the object itself. The value of the message processing is the value of this application.

class runs:
  if instantiated then den <- create['event:run']; assert[takes_place(den)].
    ; when a new instance of class 'runs' is instantiated, a new event will be asserted to the memory component. The reference to the new event is set to the local variable den.
  subject: assert['agent(den)=message.den']; return[self].
    ; when a message with label 'subject' comes, a new proposition is asserted to the memory component. The value of this message handling is this object itself.

Figure 3.1: Definitions of Sample Objects

The above discussion is overly simplified for the purpose of explanation. The following sections discuss a number of other issues.

3.2 Sharing Common Knowledge

Object-oriented systems use the notion of hierarchy to share common procedures. Lexical items with similar characteristics can be grouped together as a class; we may, for example, have a class 'noun' as a superclass of lexical items 'boy', 'girl', 'computer' and so forth. When a difference is recognized among objects of a class, the class may be subdivided; we may subcategorize a verb into static verbs, action verbs, achievement verbs, etc. Common properties can be shared at the superclass. This offers a flexible way of writing a large grammar; one may start by defining both the most general classes and the least general classes. The more observations are obtained, the richer will be the class-superclass network. Additionally, mechanisms for supporting a multiple hierarchy and for borrowing a method are useful in coping with the sophistication of linguistic knowledge, e.g., the introduction of more than one subcategorization.

3.3 Linking Case Elements

One of the basic tasks of the linguistic component is to find out which constituent is linked explicitly or implicitly to which constituent. From the example shown in section 3.1, the reader can see at least three possibilities:

Case linking by sending messages. Using conventional terms of case grammar, we can say that the "governor" receives a message whose label is a surface case and whose value is the "dependant". This implementation leads us to the notion of abstraction, to be discussed in section 3.4.

Lexicon-driven methods of determining deep case. Surface case is converted into deep case by a method defined for each governor. This makes it possible to handle this hard problem without being concerned with how many different meanings each function word has. Governors which have the same characteristics in this respect can be grouped together as a superclass. This enables us to avoid duplication of knowledge by means of hierarchy. The latter issue is discussed in section 3.2.

The use of implicit case markers. We call items such as *subject or *object implicit, as they do not appear in the surface form, as opposed to prepositions, which are explicit (surface) markers. The introduction of implicit case markers seems to be reasonable if we look at a language like Japanese, in which surface case is explicitly indicated by postpositions. Thus we can assign to the translation of our sample sentence a PAL expression with the same structure as its English version:

KARE GA HASHIRU => (GA(KARE))(HASHIRU)

where "KARE" means "he", "GA" is a postposition indicating surface subject, and "HASHIRU" means "run".

3.4 Abstraction

By attaching a sort of message controller in front of an object, we can have a new version of the object whose linguistic knowledge is essentially the same as the original one but whose input/output specification is different. As a typical example we can show how a passivizer *en is dealt with. An object *en can have an embedded object as the value of its local variable embedded. If an instance of *en receives a message with label '*subject', then it will send to the object pointed to by embedded the message with its label replaced by '*object'; if it receives a message with label 'by', then it will transfer the message to the "embedded" object with the label field replaced by '*subject'. Thus the object *en coupled with a transitive verb can be viewed as if they were a single intransitive verb. This offers an abstracted way of handling linguistic objects. The effect can be seen by tracing how a PAL expression:

(*subject(this(sentence)))((by(a(computer)))(*en(understand)))

"This sentence is understood by a computer." is interpreted.4

4 Notice how the method for the transitive verb "understand" is defined, by extending the definition for the intransitive verb "run".

3.5 Implicit Case Linking

We can use a very similar mechanism to deal with case linking by causative verbs. Consider the following sentence:

x wants y to do z.

This sentence implies that the subject of the infinitive is the grammatical object of the main verb "wants". Such a property can be shared by a number of other verbs such as "allow", "cause", "let", "make", etc. In the object-oriented implementation, this can be handled by letting the object defined for this class transfer a message from its subject to the infinitive. Note that the object for these verbs must pass the message from its subject to the infinitive when its grammatical object is missing.

Another example of implicit case linking can be seen in relative clauses. In an object-oriented implementation, a relativizer transfers a message containing a pointer to the head noun to a null NP occupying the gap in the relative clause. Intermediate objects serve as re-transmitting nodes, as in computer networks.

3.6 Obligatory Case versus Non-Obligatory Case

In building a practical system, the problem of distinguishing obligatory case from non-obligatory case is always controversial. The notion of hierarchy is useful in dealing with this problem in a "lazy" fashion. What we mean by this is as follows: In a procedural approach, the distinction we make between obligatory and non-obligatory cases seems to be based on economical reasons. To put this another way, we do not want to let each lexical item have cases such as locative, instrumental, temporal, etc. This would merely mean useless duplication of knowledge. We can use the notion of hierarchy to share methods for these cases. Any exceptional method can be attached to lower-level items. For example, we can define a class "action verb" which has methods for instrumental cases, while its superclass "verb" may not. This is useful not only for reflecting linguistic generalization but also for offering a grammar designer a flexible means of designing a knowledge base.

4. A Few Remarks

As is often pointed out, there are a lot of relationships which can be determined purely by examining linguistic structure, for example, presupposition, intra-sentential reference, focus, surface speech acts, etc. This eventually means that the linguistic component itself is domain independent. However, other issues, such as resolving ambiguity, resolving task-dependent reference, filling task-dependent ellipsis, or inferring the speaker's intention, cannot be solved solely by the linguistic component [Schank 80]. They require interaction with the memory component. Thus the domain-dependent information must be stored in the memory component. To go beyond the semantics-on-top-of-syntax paradigm, we must allow rich interaction between the memory and linguistic components. In particular, the memory component must be able to predict a structure, to guide the parsing process, or to give a low rating to a partial structure which is not plausible based on its experience, while the linguistic component must be able to explain what is going on and what it tries to see. To do this, the notion of object-orientedness provides a fairly flexible method of interaction.

Finally, we would like to mention how this framework differs from the authors' previous work on machine translation [Nishida 83], which could be viewed as an instantiation of this framework. The difference is that in the previous work the notion of lambda binding is used for linking cases. We directly used the intensional logic of Montague grammar as an intermediate language. Though it brought some advantages, this scheme caused a number of technical problems. First, using lambda forms causes difficulty in procedural interpretation. In the case of Montague grammar this is not so, because the amount of computation does not cause any theoretical problem in a mathematical theory. Second, though lambda expressions give an explicit form of representing some linguistic relations, other relations remain implicit. Some sort of additional mechanism should be introduced to cope with those implicit relations. Such a mechanism, however, may spoil the clarity or explicitness of lambda forms. This paper has proposed an alternative to address these problems.

Acknowledgements

We appreciate the useful comments made by Margot Flowers and Lawrence Birnbaum of Yale University, Department of Computer Science.

References

[Bobrow 81] Bobrow, D. G. and Stefik, M. The LOOPS Manual. Technical Report KB-VLSI-81-13, Xerox PARC, 1981.
[Kaplan 82] Kaplan, S. J. Cooperative Responses from a Portable Natural Language Query System. Artificial Intelligence 19(1982):165-187, 1982.
[Montague 74] Montague, R. Proper Treatment of Quantification in Ordinary English. In Thomason (editor), Formal Philosophy, pages 247-270. Yale University Press, 1974.
[Nishida 83] Nishida, T. Studies on the Application of Formal Semantics to English-Japanese Machine Translation. Doctoral Thesis, Kyoto University, 1983.
[Schank 80] Schank, R. C. and Birnbaum, L. Memory, Meaning, and Syntax. Technical Report 189, Yale University, Department of Computer Science, 1980.
[Schank 82] Schank, R. C. Dynamic Memory: A Theory of Reminding and Learning in Computers and People. Cambridge University Press, 1982.
USE OF H~ru'RISTIC KN~L~EDGE IN CHINF-.SELANGUAGEANALYSIS Yiming Yang, Toyoaki Nishida and Shuji Doshita Department of Information Science, Kyoto University, Sakyo-ku, Kyoto 606, JAPAN ABSTRACT This paper describes an analysis method which uses heuristic knowledge to find local syntactic structures of Chinese sentences. We call it a preprocessing, because we use it before we do global syntactic structure analysisCl]of the input sentence. Our purpose is to guide the global analysis through the search space, to avoid unnecessary computation. To realize this, we use a set of special words that appear in commonly used patterns in Chinese. We call them "characteristic words" . They enable us to pick out fragments that might figure in the syntactic structure of the sentence. Knowledge concerning the use of characteristic words enables us to rate alternative fragments, according to pattern statistics, fragment length, distance between characteristic words, and so on. The prepro- cessing system proposes to the global analysis level a most "likely" partial structure. In case this choice is rejected, backtracking looks for a second choice, and so on. For our system, we use 200 characteristic words. Their rules are written by 101 automata. We tested them against 120 sentences taken from a Chinese physics text book. For this limited set, correct partial structures were proposed as first choice for 94% of sentences. Allowing a 2nd choice_, the score is 98%, with a 3rd choice, the score is 100%. I. THE PROBLEM OF CHINESE LANGUAGE ANALYSIS Being a language in which only characters ( ideograns ) are used, Chinese language has specific problems. Compared to languages such as English, there are few formal inflections to indicate the grammatical category of a word, and the few inflections that do exist are often omitted. In English, postfixes are often used to distinguish syntactical categories (e.g. transla- tion, translate; difficul!, dificulty), but in Chinese it is very common to use the same word (characters) for a verb, a noun, an adjective, etc.. So the ambiguity of syntactic category of words is a big problem in Chinese analysis. In another exa~ole, in English, "-ing" is used to indicate a participle, or "-ed" can be used to distinguish passive mode from active. In Chinese, there is nothing to indicate participle, and although there is aword, "~ " , whose function is to indicate passive mode, it is often omitted. Thus for a verb occurring in a sentence, there is often no w~y of telling if it transitive or intransitive, active or passive, participle or predicate of the main sentence, so there may be many ambiguities in deciding the structure it occurs in. If we attempt Chinese language analysis using a conputer, and try to perform the syntactic analysis in a straightforward way, we run into a combinatorial explosion due to such ambiguities. What is lacking, therefore, is a simple method to decide syntactic structure. 2. REDUCING AMBIGUITIES USING CHARACTERISTIC WORDS In the Chinese language, there is a kind of word (such as preposition, auxiliary verb, modifier verb, adverbial noun, etc..), that is used as an independant word (not an affix). They usually have key functions, they are not so numerous, their use is very frequent, and so they may be used to reduce anbiguities. Here we shall call them "characteristic words". Several hundreds of these words have been collected by linguists[2],and they are often used to distinguish the detailed meaning in each part of a Chinese sentence. 
with a preposition such as "□", "□", "□", and finish on a characteristic word belonging to a subset of adverbial nouns that are often used to express position, direction, etc. When such characteristic words are spotted in a sentence, they serve to forecast a prepositional phrase. Another example is the pattern "... □ ... □", used a little like "... is to ..." in English; when we find it, we may predict a verbal phrase spanning the two markers, which is in addition the predicate VP of the sentence. These forecasts make it more likely for the subsequent analysis system to find the correct phrase early.

c) Role deciding: The preceding rules are rather simple rules like a human might use. With a computer it is possible to use more complex rules (such as rules involving many exceptions or providing partial knowledge) with the same efficiency. For example, a rule cannot usually decide with certainty whether a given verb is the predicate of a sentence, but we know that a predicate is not likely to precede a characteristic word such as "□" or "□", or to follow a word like "□", "□" or "□". We use this kind of rule to reduce the range of possible predicates. This knowledge can be used in turn to predict the partial structure of a sentence, because the verbal proposition begins with the predicate and ends at the end of the sentence.

In the example shown in Fig. 1, fragments f3 and f4 are obtained through step (a) above, f1 through (b), and f2 and f5 through (c). The symbol "o" shows a possible predicate, and "x" means that the possibility has been ruled out. Out of 7 possibilities, only 2 remained.

3. RESOLVING CONFLICT

The rules we mentioned above are written for each characteristic word independently. They are not absolute rules, so when they are applied to a sentence, several fragments may overlap and thus be incompatible. Several combinations of compatible fragments may exist, and from these we must choose the most "likely" one. Instead of attempting to evaluate the likelihood of every combination, we use a scheme that gives different priority scores to each fragment, and thus constructs directly the "best" combination. If this combination (partial structure) is rejected by subsequent analysis, back-tracking occurs and searches for the next possibility, and so on. Fig. 2 shows an example involving conflicting fragments. We select f3 first because it has the highest priority.
We find that f2, f4 and f5 collide with f3, so only f1 is then selected next. The resulting combination (f1, f3) is correct. Fig. 3 shows the parsing result obtained by computer in our preprocessing subsystem.

[Fig. 2: An example of conflicting fragments. Example sentence: "In the perfect situation without friction the object will keep moving with constant speed." The figure shows overlapping fragments f1-f5 with their patterns (e.g. PP, V3); "V/N" marks a word which is either a verb or a noun, undetermined at this stage.]

[Fig. 3: An example of the analysis result obtained by the preprocessing subsystem for the same sentence. f1 and f3 are the fragments shown in Fig. 2; the omitted parts of the resultant structure tree are elided.]

4. PRIORITY

In the preprocessing, we determine all the possible fragments that might occur in the sentence and involve the characteristic words. Then we give each one a measure of priority. This measure is a complex function, determined largely by trial and error. It is calculated by the following principles:

a) Kind of fragment: Some kinds of fragments, for example compound verbs involving "□", occur more often than others and are accordingly given higher priority (Fig. 4). We distinguish 26 kinds of fragments.

[Fig. 4: An example of fragment priority, contrasting a fragment given the higher priority with one given the lower priority for the reading "had processed".]

b) Preciseness: We call "precise" a pattern that contains recognizable characteristic words or subpatterns, and imprecise a pattern that contains words we cannot recognize at this stage. For example, f3 of Fig. 2 is more precise than f1, f2 or f4. We put the more precise patterns on a higher priority level.

c) Fragment length: Length is a useful parameter, but its effect on priority depends on the kind of fragment. Accordingly, a longer fragment gets higher priority in some cases and lower priority in other cases. The actual rules are rather complex to state explicitly. At present we use 7 levels of priority.

5. PREPROCESSING EFFICIENCY

The preprocessing system for the Chinese language mentioned in this paper is in the course of development and is partly completed. The inputs are sentences separated into words (not consecutive sequences of characters). We use 200 characteristic words and have written the rules as 101 automata. As a preliminary evaluation, we tested the system (partly by hand) against 120 sentences taken from a Chinese physics textbook. From these, 369 fragments were obtained, of which 122 were in conflict. The result of preprocessing was correct at first choice (no back-tracking) in 94% of sentences. Allowing one back-tracking yielded 98%; two back-trackings gave 100% correctness. In this limited set, few conflicting prepositional phrases appeared. To test the performance of our preprocessing in this case, we tried the method on a set of more complex sentences. From the same textbook, out of 800 sentences containing prepositional phrases, 80 contained conflicts, involving 209 phrases. Of these conflicts, in our test 83% were resolved at first choice, 90% at second choice, and 98% at third choice.

6. SUMMARY

In this paper, we outlined a preprocessing technique for Chinese language analysis. Heuristic knowledge rules involving a limited set of characteristic words are used to forecast partial syntactic structure of sentences before global analysis, thus restricting the path through the search space in syntactic analysis. Comparative processing using knowledge about priority is introduced to resolve fragment conflict, so that we can obtain the correct result as early as possible. In conclusion, we expect this scheme to be useful for efficient analysis of a language such as Chinese that contains a lot of syntactic ambiguities.

ACKNOWLEDGMENTS

We wish to thank the members of our laboratory for their help and fruitful discussions, and Dr. Alain de Cheveigne for help with the English.

REFERENCES

[1] Yiming Yang: A Study of a System for Analyzing Chinese Sentences, masters dissertation, 1982.
[2] Shuxiang Lu: 800 Mandarin Chinese Words, Beijing, 1980.
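The conflict-resolution scheme of Sections 3-4 (pick the highest-priority fragment, discard everything that overlaps it, and back-track to the next combination on rejection) can be made concrete with a small sketch. The following Python fragment is illustrative only, not the authors' implementation: the `Fragment` record, the numeric priorities, and the `overlaps` test are simplified stand-ins for the paper's 26 fragment kinds and 7 priority levels.

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    name: str
    start: int      # index of first word covered
    end: int        # index one past the last word covered
    priority: int   # 1 (highest) .. 7 (lowest), as in the paper

def overlaps(a, b):
    return a.start < b.end and b.start < a.end

def combinations(fragments):
    """Yield compatible fragment sets, best-priority-first.

    Seats the best remaining fragment first, then fills in the rest;
    later yields serve as the 2nd, 3rd, ... choices used when
    back-tracking from a rejected partial structure.
    """
    pending = sorted(fragments, key=lambda f: f.priority)

    def extend(chosen, rest):
        rest = [f for f in rest if not any(overlaps(f, c) for c in chosen)]
        if not rest:
            yield chosen
            return
        for i, f in enumerate(rest):        # trying f earlier = preferring it
            yield from extend(chosen + [f], rest[i + 1:])

    yield from extend([], pending)

# Rough analogue of the Fig. 2 situation: f3 outranks and overlaps f2, f4, f5.
frags = [Fragment("f1", 0, 2, 3), Fragment("f2", 1, 4, 4),
         Fragment("f3", 3, 7, 1), Fragment("f4", 5, 8, 5),
         Fragment("f5", 6, 9, 4)]
first_choice = next(combinations(frags))
print([f.name for f in first_choice])       # ['f3', 'f1'], as in the paper
```

Because the generator enumerates combinations in priority order, back-tracking after a rejection is just a matter of asking it for the next yield.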
A STOCHASTIC APPROACH TO SENTENCE PARSING

Tetsunosuke Fujisaki
Science Institute, IBM Japan, Ltd.
No. 36 Kowa Building, 5-19 Sanbancho, Chiyoda-ku, Tokyo 102, Japan

ABSTRACT

A description will be given of a procedure to assign the most likely probabilities to each of the rules of a given context-free grammar. The grammar developed by S. Kuno at Harvard University was picked as the basis and was successfully augmented with rule probabilities. A brief exposition of the method, with some preliminary results when used as a device for disambiguating the parsing of English texts picked from a natural corpus, will be given.

I. INTRODUCTION

To prepare a grammar which can parse arbitrary sentences taken from a natural corpus is a difficult task. One of the most serious problems is the potentially unbounded number of ambiguities. Pure syntactic analysis with an imprudent grammar will sometimes result in hundreds of parses. With prepositional phrase attachments and conjunctions, for example, it is known that the actual growth of ambiguities can be approximated by a Catalan number [Knuth], the number of ways to insert parentheses into a formula of M terms: 1, 2, 5, 14, 42, 132, 429, 1430, 4862, ... The five ambiguities in the following sentence with three ambiguous constructions can be well explained with this number.

    I saw a man in a park with a scope.

This Catalan number is essentially exponential, and [Martin] reported a syntactically ambiguous sentence with 455 parses:

    List the sales of products produced in 1973 with the products produced in 1972.

On the other hand, throughout the long history of natural language understanding work, semantic and pragmatic constraints are known to be indispensable and are recommended to be represented in some formal way and to be referred to during or after the syntactic analysis process. However, representing semantic and pragmatic constraints (which are usually domain sensitive) in a well-formed way is a very difficult and expensive task. A lot of effort in that direction has been expended, especially in Artificial Intelligence, using semantic networks, frame theory, etc. However, to our knowledge no one has ever succeeded in preparing them except in relatively small restricted domains [Winograd, Sibuya].

Faced with this situation, we propose in this paper to use statistics as a device for reducing ambiguities. In other words, we propose a scheme for grammatical inference as defined by [Fu], a stochastic augmentation of a given grammar; furthermore, we propose to use the resultant statistics as a device for semantic and pragmatic constraints. Within this stochastic framework, semantic and pragmatic constraints are expected to be coded implicitly in the statistics. A simple bottom-up parse referring to the grammar rules as well as the statistics will assign relative probabilities among ambiguous derivations. These relative probabilities should be useful for filtering meaningless garbage parses, because high probabilities will be assigned to the parse trees corresponding to meaningful interpretations and low probabilities, hopefully 0.0, to parse trees which are grammatically correct but not meaningful.
Most importantly, the stochastic augmentation of a grammar will be done automatically by feeding in a set of sentences as samples from the relevant domain in which we are interested, while the preparation of semantic and pragmatic constraints in the form of the usual semantic network, for example, must be done by human experts for each specific domain. This paper first introduces the basic ideas of the automatic training of statistics from given example sentences, and then shows how it works with experimental results.

II. GRAMMATICAL INFERENCE OF A STOCHASTIC GRAMMAR

A. Estimation of Markov parameters from sample texts

Assume a Markov source model as a collection of states connected to one another by transitions which produce symbols from a finite alphabet. To each transition t from a state s is associated a probability q(s,t), which is the probability that t will be chosen next when s is reached. When output sentences {B(i)} from this Markov model are observed, we can estimate the transition probabilities {q(s,t)} through an iteration process in the following way:

1. Make an initial guess of {q(s,t)}.

2. Parse each output sentence B(i). Let d(i,j) be the j-th derivation of the i-th output sentence B(i).

3. Then the probability p(d(i,j)) of each derivation d(i,j) can be defined in the following way: p(d(i,j)) is the product of the probabilities of all the transitions q(s,t) which contribute to that derivation d(i,j).

4. From this p(d(i,j)), the Bayes a posteriori estimate of the count c(s,t,i,j), i.e. how many times the transition t from state s is used in the derivation d(i,j), can be estimated as follows:

    c(s,t,i,j) = n(s,t,i,j) x p(d(i,j)) / Σ_j p(d(i,j))

where n(s,t,i,j) is the number of times the transition t from state s is used in the derivation d(i,j). Obviously, c(s,t,i,j) becomes n(s,t,i,j) in an unambiguous case.

5. From this c(s,t,i,j), a new estimate of the probabilities q'(s,t) can be calculated:

    q'(s,t) = Σ_{i,j} c(s,t,i,j) / Σ_{i,j,t} c(s,t,i,j)

6. Replace {q(s,t)} with this new estimate {q'(s,t)} and repeat from step 2.

Through this process, asymptotic convergence will hold in the entropy of {q(s,t)}, which is defined as

    Entropy = Σ_{s,t} -q(s,t) x log(q(s,t))

and the {q(s,t)} will approach the real transition probabilities [Baum 1970, 1972]. Further optimized versions of this algorithm can be found in [Bahl 1983] and have been successfully used for estimating parameters of various Markov models which approximate speech processes [Bahl 1978, 1980].

B. Extension to context-free grammars

This procedure for automatically estimating Markov source parameters can easily be extended to context-free grammars in the following manner. Assume that each state in the Markov model corresponds to a possible sentential form based on a given context-free grammar. Then each transition corresponds to the application of a context-free production rule to the previous state, i.e. the previous sentential form. For example, the state NP.VP can be reached from the state S by applying the rule S->NP VP; the state ART.NOUN.VP can be reached from the state NP.VP by applying the rule NP->ART NOUN to the first NP of the state NP.VP; and so on. Since the derivations correspond to sequences of state transitions among the states defined above, parsing over the set of sentences given as training data will enable us to count how many times each transition is fired for the given sample sentences.
For example, transitions from the state S to the state NP.VP may occur for almost every sentence, because the corresponding rule, S->NP VP, must be used to derive the most frequent declarative sentences; the transition from state ART.NOUN.VP to the state 'every'.NOUN.VP may happen 103 times; and so on. If we associate each grammar rule with an a priori probability as an initial guess, then the Bayes a posteriori estimate of the number of times each transition will be traversed can be calculated from the initial probabilities and the actual counts observed, as described above.

Since each production is expected to occur independently of the context, the new estimate of the probability for a rule is calculated at each iteration step by masking the contexts. That is, the Bayes estimate counts from all of the transitions which correspond to a single context-free rule are tied together to get the new probability estimate of the corresponding rule: all transitions between states like xxx.A.yyy and xxx.B.C.yyy correspond to the production rule A->B C, regardless of the contents of xxx and yyy. Renewing the probabilities of the rules with the new estimates, the same steps are repeated until they converge.

III. EXPERIMENTATION

A. Base Grammar

As the basis of this research, the grammar developed by Prof. S. Kuno in the 1960's for the machine translation project at Harvard University [Kuno 1963, 1966] was chosen, with few modifications. The grammar specifications, which are in Greibach normal form, were translated into a form which is favorable to our method: 2118 of the original rules were rewritten as 5241 rules in Chomsky normal form.

B. Parser

A bottom-up context-free parser based on the Cocke-Kasami-Younger algorithm was developed especially for this purpose. Special emphasis was put on the design of the parser to get better performance in highly ambiguous cases. That is, alternative links are introduced to reduce the number of intermediate substructures as far as possible. [Figure: a shared substructure reached by alternative (dotted) links; detail lost in reproduction.]
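The re-estimation loop of Section II can be summarized in a short sketch. The code below is schematic, not the author's implementation: it assumes the parser has already enumerated each sentence's derivations, represents a derivation simply as the multiset of rules it uses, and updates rule probabilities exactly as in steps 3-6 above (in effect an EM iteration).

```python
from collections import defaultdict

def prod(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

def reestimate(q, corpus, iterations=10):
    """q: dict rule -> probability, where a rule is a (lhs, rhs) pair.
    corpus: one entry per sentence; each entry lists the sentence's
    parses, each parse being a dict rule -> number of times used."""
    for _ in range(iterations):
        counts = defaultdict(float)              # expected counts c(s,t,i,j), summed
        for parses in corpus:
            # step 3: p(d) is the product of the probabilities of the rules d uses
            p = [prod(q[r] ** n for r, n in d.items()) for d in parses]
            z = sum(p) or 1.0                    # normalizer over the ambiguity
            # step 4: split each sentence's unit mass among its parses
            for d, pd in zip(parses, p):
                for r, n in d.items():
                    counts[r] += n * pd / z
        # step 5: renormalize per left-hand side to obtain the new q;
        # rules never used in the corpus simply drop out
        lhs_total = defaultdict(float)
        for (lhs, _), c in counts.items():
            lhs_total[lhs] += c
        q = {r: c / lhs_total[r[0]] for r, c in counts.items()}
    return q
```

On each pass, an unambiguous sentence contributes its rule counts outright, while an ambiguous one splits its counts in proportion to the current parse probabilities, which is exactly the behavior of the formulas in Section II.A.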
(c) means that the non-terminal "SE (sentence)" will generate the sequence, "PRN (pro- noun)", "VX (predicate)" and "PD (period or post sententlal modifiers followed by period)" with the probability 0.28754. (d) means that "SE" will gener- ate the sequence, "AAA(artlcle, adjective, etc.)" , "4X (subject noun phrase)", "VX" and "PD" with the probability 0.25530. The remaining lines are to be interpreted similarly. E. Parse Trees with Probabilities Parse trees were printed as shown below including relative probabilities of each parse. WE DO NOT UTILIZE OUTSIDE ART SERVICES DIRECTLY . ** total ambiguity is : 3 *: SENTENCE *: PRONOUN 'we' *: PREDICATE *: AUXILIARY 'do' *: INFINITE VERB PHRASE * ADVERB TYPE1 'not' A: 0.356 INFINITE VERB PHRASE I*: VERB TYPE ITl'utilize' [*: OBJECT [ *: NOUN 'outside' ] *: ADJ CLAUSE [ *: NOUN 'art' [ *: PRED. WITH NO OBJECT [ *: VERB TYPE VT1 'services' B: 0.003 INFINITE VERB PHRASE [*: VERB TYPE ITl'utillze' [*: OBJECT I *: PREPOSITION 'outside' [ *: NOUN OBJECT [ *: NOUN ' art ' [ *: OBJECT [ *: NOUN 'services' C: 0. 641 INFINITE VERB PHRASE [*: VERB TYPE ITl'utilize' [*: OBJECT ] *:. NOUN 'outside' [ *: OBJECT MASTER [ *: NOUN ' art' [ *: OBJECT MASTER ] * NOUN 'services' *: PERIOD *: ADVERB TYPE1 'directly' *: PRD w ! This example shows that the sentence 'We do not uti- lize outside art services directly.' was parsed in three different ways. The differences are shown as the difference of the sub-trees identified by A, B and C in the figure. The numbers following the identifiers are the rela- tive probabilities. As shown in this case, the cor- rect parse, the third one, got the highest relatlve probability, as was expected. F. Result 63 ambiguous sentences from magazine corpus and 21 ambiguous sentences from IBM correspondence were chosen at random from the sample sentences and their parse trees with probabilities were manually exam- ined as shown in the table below: 18 a• b. C. d. e. f. Corpus Magazine 63 Number of sentences checked manually Number of sentences 4 with no correct parse I ~umber of sentences 54 which got highest prob. on most natural parse Number of sentences 5 which did not get the highest prob. on the most natural parse Success ratio d/(d+e) .915 IBM 21 18 • 947 Taking into consideration that the grammar is not tailored for this experiment in any way, the result is quite satisfactory. The only erroneous case of the IBM corpus is due to a grammar problem. That is, in this grammar, such modifier phrases as TO-infinltives, prepositional phrases, adverbials, etc. after the main verb will be derived from the 'end marker' of the sentence, i.e. period, rather then from the relevant constitu- ent being modified. The parse tree in the previous figure is a typical example, that is, the adverb 'DIRECTLY' is derived from the 'PERIOD' rather then from the verb 'UTILIZE '. This simplified handling of dependencies will not keep information between modifying and modified phrases end as a result, will cause problems where the dependencies have crucial roles in the analysis. This error occurred in a sen- tenoe ' ... is going ~o work out', where the two interpretations for the phrase '%o work' exist: '~0 work' modifies 'period' as: 1. A TO-infinitlve phrase 2. A prepositional phrase Ignoring the relationship to the previous context 'Is going', the second interpretation got the higher probability because prepositionalphrases occur more frequently then TO-infinltivephrases if the context is not taken into account. IV. 
IV. CONCLUSION

The result of the trials suggests the strong potential of this method, and also suggests application possibilities such as refining, minimizing, and optimizing a given context-free grammar. It will also be useful for giving a disambiguation capability to a given ambiguous context-free grammar.

In this experiment, an existing grammar was picked with few modifications; therefore, only statistics reflecting the syntactic differences of the substructured units were gathered. Applying this method to the collection of statistics which relate more to semantics should be investigated as the next step of this project. Introduction into the grammar of dependency relationships among substructured units, semantically categorized parts-of-speech, head-word inheritance among substructured units, etc., might be essential for this purpose. More investigation should be done in this direction.

V. ACKNOWLEDGEMENTS

This work was carried out when the author was in the Computer Science Department of the IBM Thomas J. Watson Research Center. The author would like to thank Dr. John Cocke, Dr. F. Jelinek, Dr. R. Mercer, and Dr. L. Bahl of the IBM Thomas J. Watson Research Center, and Prof. S. Kuno of Harvard University for their encouragement and valuable technical suggestions. The author is also indebted to Mr. E. Black, Mr. B. Green and Mr. J. Lutz for their assistance and discussions.

VI. REFERENCES

- Bahl, L., Jelinek, F., and Mercer, R., A Maximum Likelihood Approach to Continuous Speech Recognition, IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. PAMI-5, No. 2, 1983.
- Bahl, L., et al., Automatic Recognition of Continuously Spoken Sentences from a Finite State Grammar, Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Tulsa, OK, Apr. 1978.
- Bahl, L., et al., Further Results on the Recognition of a Continuously Read Natural Corpus, Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Denver, CO, Apr. 1980.
- Baum, L. E., A Maximization Technique Occurring in the Statistical Analysis of Probabilistic Functions of Markov Chains, The Annals of Mathematical Statistics, Vol. 41, No. 1, 1970.
- Baum, L. E., An Inequality and Associated Maximization Technique in Statistical Estimation for Probabilistic Functions of Markov Processes, Inequalities, Vol. 3, Academic Press, 1972.
- Fu, K. S., Syntactic Methods in Pattern Recognition, Vol. 112, Mathematics in Science and Engineering, Academic Press, 1974.
- Knuth, D., Fundamental Algorithms, Vol. 1 of The Art of Computer Programming, Addison-Wesley, 1975.
- Kuno, S., The Augmented Predictive Analyzer for Context-free Languages: Its Relative Efficiency, CACM, Vol. 9, No. 11, 1966.
- Kuno, S., Oettinger, A. G., Syntactic Structure and Ambiguity of English, Proc. FJCC, AFIPS, 1963.
- Martin, W., et al., Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results, MIT LCS Report TR-261, MIT, 1981.
- Sibuya, M., Fujisaki, T., and Takao, Y., Noun-Phrase Model and Natural Query Language, IBM J. Res. Dev., Vol. 22, No. 5, 1978.
- Winograd, T., Understanding Natural Language, Academic Press, 1972.
- Woods, W., The Lunar Sciences Natural Language Information System, BBN Report No. 2378, Bolt, Beranek and Newman, 1972.
THE DESIGN OF THE KERNEL ARCHITECTURE FOR THE EUROTRA* SOFTWARE

R.L. Johnson**, U.M.I.S.T., P.O. Box 88, Manchester M60 1QD, U.K.
S. Krauwer, Rijksuniversiteit, Trans 14, 3512 JK Utrecht, Holland
M.A. Rosner, ISSCO, University of Geneva, 1211 Geneve 4, Switzerland
G.B. Varile, Commission of the European Communities, P.O. Box 1907, Luxembourg

ABSTRACT

Starting from the assumption that machine translation (MT) should be based on theoretically sound grounds, we argue that, given the state of the art, the only viable solution for the designer of software tools for MT is to provide the linguists building the MT system with a generator of highly specialized, problem-oriented systems. We propose that such theory-sensitive systems be generated automatically by supplying a set of definitions to a kernel software, of which we give an informal description in this paper. We give a formal functional definition of its architecture and briefly explain how a prototype system was built.

I. INTRODUCTION

A. Specialized vs generic software tools for MT

Developing the software for a specific task or class of tasks requires that one know the structure of the tasks involved. In the case of Machine Translation (MT) this structure is not a priori known. Yet it has been envisaged in the planning of the Eurotra project that the software development take place before a general MT theory is present. This approach has both advantages and disadvantages. It is an advantage that the presence of a software framework will provide a formal language for expressing the MT theory, either explicitly or implicitly. On the other hand, this places a heavy responsibility on the shoulders of the software designers, since they will have to provide a language without knowing what this language will have to express.

* We are grateful to the Commission of the European Communities for continuing support for the Eurotra Machine Translation project and for permission to publish this paper; and also to our colleagues in Eurotra for many interesting and stimulating discussions.
** order not significant

There are several ways open to the software designer. One would be to create a framework that is sufficiently general to accommodate any theory. This is not very attractive, not only because this could trivially be achieved by selecting any existing programming language, but also because this would not be of any help for the people doing the linguistic work. Another, equally unattractive alternative would be to produce a very specific and specialized formalism and offer this to the linguistic community. Unfortunately there is no way to decide in a sensible way in which directions this formalism should be specialized, and hence it would be a mere accident if the device turned out to be adequate. What is worse, the user of the formalism would spend a considerable amount of his time trying to overcome its deficiencies. In other words, the difficulty that faces the designer of such a software system is that it is the user of the system, in our case the linguist, who knows the structure of the problem domain, but he is very often unable to articulate it until the language for the transfer of domain knowledge has been established. Although the provision of such a language gives the user the ability to express himself, it normally comes after fundamental decisions regarding the meaning of the language have been frozen into the system architecture.
At this point, it is too late to do anything about it: the architecture will embody a certain theoretical commitment which delimits both what can be said to the system and how the system can handle what it is told. This problem is particularly severe when there is not one user but several, each of whom may have a different approach to the problem that, in their own terms, is the best one. This requires a considerable amount of flexibility to be built into the system, not only within a specific instance of the system but also across instances, since it is to be expected that during the construction phase of an MT system a wide variety of theories will be tried (and rejected) as possible candidates.

In order to best suit these apparently conflicting requirements, we have taken the following design decisions:

1. On the one hand, the software to be designed will be oriented towards a class of abstract systems (see below) rather than one specific system. This class should be so restricted that the decisions to be taken during the linguistic development of the end-user system have direct relevance to the linguistic problem domain, while powerful enough to accommodate a variety of linguistic strategies.

2. On the other hand, just specifying a class of systems would be insufficient, given our expectation that the highly experimental nature of the linguistic development phase will give rise to a vast number of experimental instantiations of the system, which should not lead to continuously creating completely new versions of the system. What is needed is a coherent set of software tools that enable the system developers to adapt the system to changes with a minimal amount of effort, i.e. a system generator.

Thus, we reject the view that the architecture should achieve this flexibility by simply evading theoretical commitment. Instead it should be capable of displaying a whole range of highly specialized behaviours, and therefore be capable of a high degree of internal reconfiguration according to externally supplied specifications. In other words, we aim at a system which is theory sensitive. In our philosophy the reconfiguration of the system should be achieved by supplying the new specifications to the system, rather than to a team in charge of redesigning the system whenever new needs of the user arise. Therefore the part of the system that is visible to the linguistic user will be a system generator, rather than an instance of an MT system.

B. Computational Paradigm for MT Software

The computational paradigm we have chosen for the systems to be generated is that of expert systems, because the design of software for an MT system of the scope of Eurotra has much in common with the design of a very large expert system. In both cases successful operation relies as much on the ease with which the specialist knowledge of experts in the problem domain can be communicated to and used by the system as on the programming skill of the software designers and implementers. Typically, the designers of expert systems accommodate the need to incorporate large amounts of specialist knowledge in a flexible way by attempting to build into the system design a separation between knowledge of a domain and the way in which that knowledge is applied. The characteristic architecture of an expert system is that of a Production System (PS) (cf. Davis & King 1977).
A programming scheme is conventionally pictured as having two aspects ("Algorithms + Data = Programs", cf. Wirth 1976); a production system has three: a data base, a set of rules (sometimes called 'productions', hence the name), and an interpreter. Input to the computation is the initial state of the data base. Rules consist, explicitly or implicitly, of two parts: a pattern and an action. Computation proceeds by progressive modifications to the data base as the interpreter searches the data base and attempts to match patterns in rules, applying the corresponding actions in the event of a successful match. The process halts either when the interpreter attempts to apply a halting action or when no more rules can be applied (a minimal sketch of this cycle appears below).

This kind of organisation is clearly attractive for knowledge-based computations. The data base can be set up to model objects in the problem domain. The rules represent small, modular items of knowledge, whose syntax can be adjusted to reflect formalisms with which expert users are familiar. And the interpreter embodies a general principle about the appropriate way to apply the expert knowledge coded into the rules. Given an appropriate problem domain, a good expert system design can make it appear as if the statement of expert knowledge is entirely declarative, the ideal situation from the user's point of view. A major aim in designing Eurotra has been to adapt the essential declarative spirit of production systems to the requirements of a system for large-scale machine translation.

The reason for adapting the architecture of classical expert systems to our special needs was that the simple production system scheme is likely to be inadequate for our purposes. In fact, the success of a classical PS model in a given domain requires that a number of assumptions be satisfied, namely:

1. that the knowledge required can be appropriately expressed in the form of production rules;
2. that there exists a single, uniform principle for applying that knowledge;
3. finally, that the principle of application is compatible with the natural expression of such knowledge by an expert user.

In machine translation, the domain of knowledge with which we are primarily concerned is that of language. With respect to assumption (1), we think automatically of rewrite rules as an obvious way of expressing linguistic knowledge. Some caution is necessary, however. First of all, rewrite rules take on a number of different forms and interpretations depending on the linguistic theory with which they are associated. In the simplest case, they are merely criteria of the well-formedness of strings, and a collection of such rules is simply equivalent to a recognition device. Usually, however, they are also understood as describing pieces of tree structure, although in some cases (phrase structure rules in particular) no tree structure may be explicitly mentioned in the rule: a set of such rules then corresponds to some kind of transducer rather than a simple accepting automaton. The point is that rules which look the same may mean different things according to what is implicit in the formalism. When such rules are used to drive a computation, everything which is implicit becomes the responsibility of the interpreter. This has two consequences:
a. if there are different interpretations of rules according to the task which they are supposed to perform, then we need different interpreters to interpret them, which is contrary to assumption (2); an obvious case is the same set of phrase structure rules used to drive a builder of phrase structure trees given a string as input, and to drive an integrity checker given a set of possibly well-formed trees;

b. alternatively, in some cases, information which is implicit for one type of interpreter may need to be made explicit for another, causing violation of assumption (3); an obvious case here is the fact that a phrase structure analyser can be written in terms of transductions on trees for a general rewrite interpreter, but at considerable cost in clarity and security.

Secondly, it is not evident that 'rules', in either the pattern-action or the rewrite sense, are necessarily the most appropriate representation for all linguistic description. Examples where other styles of expression may well be more fitting are the description of morphological paradigms for highly inflected languages and the formulation of judgements of relative semantic or pragmatic acceptability.

The organisational complexity of Eurotra also poses problems for software design. Quite separate strategies for analysis and synthesis will be developed independently by language groups working in their own countries, although the results of this decentralised and distributed development will ultimately have to be combinable into one integrated translation system. What is more, new languages or sublanguages may be added at any time, requiring new strategies and modes of description. Finally, the Eurotra software is intended not only as the basis for a single, large MT system, but as a general-purpose facility for researchers in MT and computational linguistics in general. These extra considerations impose requirements of complexity, modularity, extensibility and transparency not commonly expected of today's expert systems.

The conclusion we have drawn from these and similar observations is that the inflexible, monolithic nature of a simple PS is far too rigid to accommodate the variety of diverse tasks involved in machine translation. The problem, however, is one of size and complexity rather than of the basic spirit of production systems. The above considerations have led us to adopt the principle of a controlled production system, that is, a PS enhanced with a control language (Georgeff 1982). The elements of the vocabulary of a control language are names of PSs, and the well-formed strings of the language define just those sequences of PS applications which are allowed. The user supplies a control 'grammar', which, in a concise and perspicuous way, specifies the class of allowable application sequences. Our proposal for Eurotra supports an enhanced context-free control language, in which names of user-defined processes act as non-terminal symbols. Since the language is context-free, process definitions may refer recursively to other processes, as well as to grammars, whose names are the terminal symbols of the control language. A grammar specifies a primitive task to be performed. Like a production system, it consists of a collection of declarative statements about the data for the task, plus details of the interpretation scheme used to apply the declarative information to the data base.
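To ground the discussion, the basic production-system cycle described above can be written down in a few lines. This is an illustrative sketch only, not Eurotra code: the rule representation (a match predicate plus an action on the data base) and the first-match application policy are simplifying assumptions.

```python
HALT = object()                              # sentinel for an explicit halting action

def run_production_system(database, rules):
    """Apply pattern/action rules to the data base until quiescence.

    rules is a list of (pattern, action) pairs: pattern(db) returns a match
    (or None), and action(db, match) returns the modified data base or HALT.
    """
    while True:
        for pattern, action in rules:
            match = pattern(database)
            if match is not None:
                result = action(database, match)
                if result is HALT:           # the interpreter applied a halting action
                    return database
                database = result
                break                        # restart the search over the rules
        else:                                # no rule matched: computation halts
            return database
```

A controlled production system, in these terms, replaces the single rule set by several named ones and lets the control grammar dictate which of them may be applied next.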
Again, as in a production system, it is important that the information in the declarative part should be homogeneous, and that there should be a single method of application for the whole grammar. We depart somewhat from conventional production-system philosophy in that our commitment is to declarative expression rather than to production rules.

The device of using a control language to define the organisation of a collection of grammars provides the user with a powerful tool for simulating the procedural knowledge inherent in constructing and testing strategies, without departing radically from an essentially declarative framework. An important feature of our design methodology is the commitment to interaction with potential users in order to delineate the class of tasks which users themselves feel to be necessary. In this way, we aim to avoid the error which has often been made in the past, of presenting users with a fixed set of tools, selected generally on computational grounds, which they, the users, must adjust to their own requirements as best they can.

II. OVERVIEW OF THE SYSTEM GENERATOR

The task of our users is to design the problem-oriented machine here called "eurotra". Our contribution to this task is to provide them with a machine(1) in terms of which they can express their perception of solutions to the problem (bearing in mind also that we may need to accommodate in the future not only modifications to users' perception of the solution but also to their perception of the problem itself). It is clearly unreasonable to expect users to express themselves directly in terms of some computer, especially given the characteristics of the conventional von Neumann computers which we can expect to be available in the immediate future. The normal strategy, which we adopt, is to design a problem-oriented language which then becomes the users' interface to a special-purpose virtual machine, mediated by a compiler which transforms solutions expressed in the problem-oriented language into programs which can be run directly on the appropriate computer.

(1) We use the term "computer" to refer to a physical object implemented in hardware, while a "machine" is the object with which a programmer communicates. The essence of the task of designing software tools is to transform a computer into a machine which corresponds as closely as possible to the terms of the problem domain of the user for whom the tools are written.

Functionally, we can express this in the following way:
Following well-established software engineering practice, we can compensate for this difficulty by using a compiler generator to generate appropriate compilers rather than building a complete new compiler from scratch. Apart from making the actual process of compiler construction more rapid, we observe that use of a compiler generator has important beneficial side effects. Firstly, it enables us to concentrate on the central issue of language design rather than secondary questions of compiler implementation. Secondly, if we choose a well-designed compiler generator, it turns out that the description of the user language which is input to the generator may be very close to an abstract specification of the language, and hence in an importance sense a description of the potential of the user machine. 2 For the remainder of this section we shall use the notation x:y-~ z with the informal meaning of "application of x to y yields result z", or "execution of x with input y gives output z". 229 After the introduction of a compiler generator the picture of our architecture looks llke this (uld stands for "user language defintion"; CGstands for "compiler generator"): usd uld ---~ CG ---9 COMPILER source text -- ~ COMPUTER -- target text Fig. 2 In symbols : ((CG : uld) : usd) : text -~ text For many software engineering projects this might be an entirely adequate architecture to support the design of problem oriented systems. In OUr case, however, an architecture of this kind only offers a partial resolution of the two important issues already raised above : incomplete knowledge of the problem domain, and complexity of the semantics of any possible solution space. The use of a compiler generator certainly helps us to separate the problem of defining a good user language from that of implementing it. It also gives us the very important insight that the use of generators as design tools means that in optimal cases input to the generator and formal specification of the machine to be generated may be very close or even identical. However, we really only are addressing the question of finding an appropriate syntax in which users can formulate solutions in some problem domain; the issue of defining the semantics underlying that syntax, of stating formally what a particular solution means is still open. We can perhaps make the point more explicitly by considering the conventional decomposition of a compiler into a parser and a code generator (cf, for example, Richards and Whitby-Strevens 1979). The function of the parser is to transform a text of a programming language into a formal object such as a parse tree which is syntactically uniform and easy to describe; this object is then transformed by the code generator into a semantically equivalent text in the language of the target machine. Within this approach, it is possible to contemplate an organisatlon which, in large measure, separates the manipulation of the syntax of a language from computation of its meaning. Since the syntactic manipulation of programming languages is by now well understood, we can take advantage of this separation to arrive at formal definitions of language syntax which can be used directly to generate the syntactic component of a compiler. The process of automatically computing the meaning of a program is, unfortunately much more obscure. 
Our task is rendered doubly difficult by the fact that there is no obvious relation between the kind of user program we can expect to have to treat and the string of von Neumann instructions which even the most advanced semantically oriented compiler generator is likely to be tuned to produce. We can gain some insight into a way round this difficulty by considering strategies like the one described for BCPL (Richards and Whitby-Strevens, clt). In this two-stage compiler, the input program is first translated into the language of a pseudo-machine, known as O-code. The implementer then has the choice of implementing an O-code machine directly as an interpreter or of writing a second stage compiler which translates an O-code program into an equivalent program which is runnable directly on target machine. This technique, which is relatively well established, is normally used as a means Of constructing easily portable compilers, since only the second-stage intermediate code to target code translation need be changed, a job which is rendered much easier by the fact that the input language to the translation is invariant over all compilers in the family. Clearly we cannot adopt this model directly, since O-code in order to be optimally portable is designed as the language of a generic stack-oriented yon Neumann machine, and we have made the point repeatedly that yon Neumann architectures are not the appropriate point of reference for the semantics of MT definitions. However, we can also see the same organisation in a different light, namely as a device for allowlng us to build a compiler for languages whose semantics are not necessarily fully determined, or at least subject to change and redefinition at short notice. In other words, we want to be able to construct compilers which can compile code for a class of machines, so as to concentrate attention on finding the most appropriate member of the class for the task in hand. we now have a system architecture in which user solutions are translated into a syntactically simple but semantically rather empty intermediate language rather than the native code of a real computer. We want to be able easily to change the behaviour of the associated virtual machine, preferably by adding or changing external definitions of its functions. We choose to represent this machine as an interpreter for a functional language; there are many reasons for this choice, in particular we observe here that such machines are characterised by a very simple evaluator which can even accept external redefinitions of itself and apply 230 them dynamically, if necessary; they typically have a very simple syntax - normally composed only of atoms and tuples - which is simple for a compiler to generate; and the function definitions have, in programming terms, a very tractable semantics which we can exploit in identifying an instance of an experimental implementation with a formal system definition. With the addition of the interpreter slmulating the abstract machine, our informal picture now looks like this : uld--~ CG--~ source text --3 usd COMPILER INTERPRETER --~ COMPUTER target text Fig. 3 or in symbols : (INTERPRETER : ((CG:uld) : usd)) : text -~ text We now turn to the kind of definitions which we shall want to introduce into this system. We decompose the function of the machine notionally into control functions and data manipulation functions (this decomposition is important because of the great importance of pattern-directed computations in ~rr). 
Informally, in deference to the internal organisation of more conventional machines, we sometimes refer to the functionality of these two parts with the terms CPU and MMU, respectively. What we want to do is to make the "empty" kernel machine into a complete and effective computing device by the addition of a set of definitions which : allow the kernel interpreter to distinguish between control operations and data operations in an input language construct; define the complete set of control operations; define the domain of legal data configurations and operations on them. With these additions, the complete architecture has the form : FP : controldef --~ REL : datadef --~ languages usd LR(k) : uld -~ CG --~ COMPILER inner prog. CPU l I I ~ w ! ! KERNEL I $ COMPUTER Fig. 4 or symbolically, writing "adder" for the name of the function which adds definitions: (((adder : controldef,datadef ) : KERNEL) : ((CG : uld) : usd)) : text -~ text Capitalized symbols denote components which are part of the system generator, while lower case symbols denote definitions to generate a system instance. An alternative way of describing Fig 4. is to see the system generator as consisting of a set of generators (languages and programs). The languages of the generator are : a. an LR(k) language for defining the user language syntax (cf Knuth 1965); b. a functional programming (FP) language for defining the semantics of the user supplied control (for FP cf Backus 1978); c. a relatlonal language (REL) for defining the semantics of user defined pattern descriptions; d. the definition of the inner program syntax (see APPENDIX). The programme of the system, which, supplied with the appropriate definitions, will generate system instances, are : e. a compiler-compiler defined functionally by a. and d. in such a way that for each token of user language syntax definition and each token of user program expressed in this syntax it will generate a unique token of inner program. f. a CPU, which is essentially an FP system, to be complemented with the definitions of point b. The CPU is responsible for interpreting the scheduling (control) parts of 231 the user program. It can pass control to the MMU at defined points. g. a MMU to be complemented with the definitions of point c. The MMU is responsible for manipulating the data upon request of the CPU. Given the above scheme, a token of a problem oriented system for processing user programs is obtained by supplying the definition of : - the user language syntax; - the semantics of the control descriptions; - the semantics of the data pattern descriptions; - the expansion of certain nonterminal symbols of the inner program syntax. Note that a primitive (rule-)execution scheme (i.e. a grammar), is obtained recursively in the same way, modulo the modification necessary given the different meaning of the control definition. III. FORMAL DEFINITION OF THE SYSTEM GENERATOR'S ARCHITECTURE This section presupposes some knowledge of FP and FFP (cf. Backus cit, Williams 1982). Readers unfamiliar with these formalisms may skip this section. We now give a formal definition of the generator's architecture by functionally defining a monitor M for the machine depicted in Fig. 4. We will do so by defining M as an FFP functional (i.e. higher order function) (cf. Backus cit, Williams cit). An FP system has a set of functions which is fully determined by a set of primitive functions, a set of functional forms, and a set of definitions. 
The main difference between FP systems and FFP systems is that in the latter objects (e.g. sequences) are used to represent functions, which has as a consequence that in FFP one can create new functionals. The monitor M is just the definition of one such'functional. Sequences in FFP represent functionals in the following way : there is a representation function D (which belongs to the representation system of FFP, not to FFP itself) which associates objects and the functions they represent. The association between objects and functions is given by the following rule (metacomposition) : (p <xl ..... xn>) : y = (o xl) :~xl ..... xn> ,y> The formal definition of the overall architecture of the system is obtained by the following FFP definition of its monitor M : D ~M, uld, cd, dd~ : usd = ~M):<<pM, uld, cd, dd >, usd> with : M E apply.[capply,l'[applyl" [apply2"[ yapply,2-1] , 23, apply.[~(3-1), 'CD3 , apply'[~(4"l),'DDT]] where : M is the name of the system monitor uld is the user language definition in BNF cd is the control definition (controldef in Fig 4.) dd is the data definition (datadef in Fig 4.) usd is the user solution definition The meaning of the definition is as follows : M is defined to be the application of capply to the internal programe ip apply : <capply, ip.> capply is the semantic definition of the machine's CPU (see below). ip is obtained in the following way : applyl : ~apply2 : ~yapply,uld>, usd> Where apply2 : Cyapply, uld~ yields the COMPILER which is then applied to the usd. For a definition of applyl, apply2, yapply see the section on the implementation. apply" [;(3-1), 'CD] and apply" [4(4"1), 'DD ] just add definitions to the control, reap. data definition stores of the CPU and the MMU respectively. is the 'store' functional of FFP. A. Semantic Definition of the CPU As mentioned earlier, the bare CPU consists essentially of the semantic definition of an FP-type application mechanism, the set of primitive functions and functionals being the ones defined in standard FP. 232 The application mechanism of the CPU is called capply, and its definition is as follows : p(x) = x s A ~ •; x = <xl ..... xn> ~ (~xl ..... ~xn > ; • = (y:z) (yeA & (~:DD) = T ~ mapply:~y,z> ; yea & (~:CD) = # ->~((py) (~z)); yaA & (~:CD) = w ->~(w:z); y = <yl ..... yn)~(yl:<y,z> ); ~(~y:z)); being the FFP semantic function defining the meaning of objects and expressions (which belongs to the descriptive system of FFP, not to FFP itself, (cf Backus cit)). The functionality of ~ is : Expression -> Object that is, ~ associates to each FFP expression an object which is its meaning. It is defined in the following way : x is an object -> ~x = x e = <el ..... en> is an expression -> ~ef~el ..... pen> if x,y are objects -> ~(x:y) = ~(~:y) where OX is the function represented by the object x. is the FFP functional 'fetch' DD is the definition store of the MMU CD is the definition store of the CPU # is the result of an unsuccesful search mapply is the apply mechanism of the MMU The execution of a primitive (i.e. a granuuar) represents a recursive call to the monitor M, modulo the different function of the control interpreter (the CPU). For the rest, as far as the user language definition is concerned things remain unchanged (remember that if approprlate,the language for expressing knowledge inside a gratmuar as well as the data structure can be redefined for different primitives). 
The recursive call of M is caused by capply, whose definition has to be augmented by inserting, after line 6 of the definition given above, the following condition:

    y = applyprim -> <M, uld, cd, dd> : x

where x is the specification of the primitive (e.g. the rule set).

IV. EXPERIMENTAL IMPLEMENTATION

An experimental implementation of the architecture described above has to accommodate two distinct aims. First, it must reflect the proposed functionality, which is to say, roughly, that the parts out of which it is made correspond in content, function and interrelationship to those laid down in the design. Second, it must, when supplied with a set of definitions, generate a system instance that is both correct and sufficiently robust to be released into the user community to serve as an experimental tool.

The entire implementation runs under, and is partly defined in terms of, the Unix* operating system. The main reason for this choice is that, from the start, Unix has been conceived as a functional architecture. What the user sees is externally defined, being the result of applying the Unix kernel to a shell program. Furthermore, the standard shell, or csh, itself provides us with a language which can both describe and construct a complex system, essentially by having the vocabulary and the constructs to express the decomposition of the whole into more primitive parts. We shall see some examples of this below. Another reason for the choice of Unix is the availability of suitable, ready-made software that has turned out to be sufficient, in large measure, to construct a respectable first approximation to the system. Finally, the decentralised nature of our project demands that experimental implementations should be maximally distributable over a potentially large number of different hardware configurations. At present, Unix is the only practical choice.

* UNIX is a trademark of the Bell Laboratories.

A. System Components

The system consists of 4 main parts, these being:

a. a user language compiler generator;
b. a control definition generator;
c. a kernel CPU;
d. a data definition generator.

These modules, together with a user language description, a control description, and a data description, are sufficient to specify an instance of the system.

1. User Language Compiler Generator

YACC. After reviewing a number of compiler-compilers, it was decided to use YACC (Johnson 1975). Quite apart from its availability under Unix, YACC accepts an LALR(1) grammar, a development of LR(k) grammars (Knuth cit.; Aho & Johnson 1974). LALR (Look Ahead LR) parsers give considerably smaller parsing tables than canonical LR parsers. The reader is referred to Aho & Ullman (1977), which gives details of how to derive LALR parsing tables from LR ones.

LEX. LEX (Lesk 1975) generates lexical analysers, and is designed to be used in conjunction with YACC. LEX accepts a specification of lexical rules in the form of regular expressions. Arbitrary actions may be performed when certain strings are recognised, although in our case the value of the recognised token is passed, and an entry in the symbol table created.

2. Control Generator

A user program presupposes, and an inner program contains, a number of control constructs for organising the scheduling of processes and the performance of complex database manipulations. The meaning that these constructs shall have is determined by the definitions present in the control store of the kernel. The language in which we have chosen to define such constructs is FP (Backus cit.). It follows that the generator must provide compilations of these definitions in the language of the kernel machine. The implementation of the control generator is an adaptation of Baden's (1982) FP interpreter. This is a stand-alone program that essentially translates FP definitions into kernel language ones.

3. Kernel CPU

We are currently using the Unix Lisp interpreter (Foderaro & Sklower 1982) to stand in for FFP, although an efficient interpreter for the latter is under development. Notice that an FFP (or Lisp) system is necessary to implement the applicative schema described in section III, since these systems have the power to describe their own evaluation mechanisms; FP itself does not.

4. Data Definition Generator

Unfortunately, we know of no language as suitable for the description of data as FP is for the description of control. The reason is that at this moment we are insufficiently confident of the basic character of data in this domain to make any definitive claims about the nature of an ideal data description language. We have therefore chosen to express data definitions in the precise, but over-general, terms of first-order logic, which are then embedded with very little syntactic transformation into the database of a standard Prolog implementation (Pereira & Byrd 1982). The augmented interpreter then constitutes the MMU referred to above. The data definition for the current experiment presents the user with a database consisting of an ordered collection of trees, over which he may define arbitrary transductions.
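The MMU's job, as just described, is to apply user-defined transductions to an ordered collection of trees. A minimal illustration of what such a pattern-directed tree transduction might look like is sketched below; the tree encoding and the single rewrite rule are invented for the example and are not the Eurotra data definition language.

```python
# Trees as (label, [children]); a transduction maps a matching subtree
# to a rewritten one, applied top-down throughout the database.

def transduce(tree, rule):
    label, children = tree
    rewritten = rule(tree)
    if rewritten is not None:
        label, children = rewritten
    return (label, [transduce(c, rule) for c in children])

# Hypothetical rule: flatten NP over NP, the kind of local restructuring
# a user might state declaratively to the MMU.
def flatten_np(tree):
    label, children = tree
    if label == "NP" and any(c[0] == "NP" for c in children):
        flat = []
        for c in children:
            flat.extend(c[1] if c[0] == "NP" else [c])
        return ("NP", flat)
    return None

db = [("NP", [("NP", [("N", [])]), ("PP", [])])]
print([transduce(t, flatten_np) for t in db])
# [('NP', [('N', []), ('PP', [])])]
```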
2. Control Generator

A user program presupposes, and an inner program contains, a number of control constructs for organising the scheduling of processes, and the performance of complex database manipulations. The meaning that these constructs shall have is determined by the definitions present in the control store of the kernel. The language in which we have chosen to define such constructs is FP (Backus cit.). It follows that the generator must provide compilations of these definitions in the language of the kernel machine. The implementation of the control generator is an adaptation of Baden's (1982) FP interpreter. This is a stand-alone program that essentially translates FP definitions into kernel language ones.

3. Kernel CPU

We are currently using the Unix Lisp interpreter (Foderaro & Sklower 1982) to stand in for FFP, although an efficient interpreter for the latter is under development. Notice that an FFP (or Lisp) system is necessary to implement the applicative schema described in section III, since these systems have the power to describe their own evaluation mechanisms; FP itself does not.

4. Data Definition Generator

Unfortunately, we know of no language as suitable for the description of data as FP is for the description of control. The reason is that at this moment, we are insufficiently confident of the basic character of data in this domain to make any definitive claims about the nature of an ideal data description language. We have therefore chosen to express data definitions in the precise, but over-general terms of first order logic, which are then embedded with very little syntactic transformation into the database of a standard Prolog implementation (Pereira & Byrd 1982). The augmented interpreter then constitutes the MMU referred to above. The data definition for the current experiment presents the user with a database consisting of an ordered collection of trees, over which he may define arbitrary transductions.

The CPU and MMU run in parallel, and communicate with each other through a pair of Unix pipelines using a defined protocol that minimises the quantity of information passed. A small bootstrap program initialises the MMU and sets up the pipelines.

B. Constructing the System

The decomposition of a system instance into parts can be largely described within the shell language. Figure 5 below summarises the organisation using the convention that a module preceded by a colon is constructed by executing the shell commands on the next line. The runnable version of Figure 5 (which contains rather more odd symbols) conforms to the input requirements of the Unix 'make' program.

targettext : ((cpu < bootstrap) < eurotra) < sourcetext > targettext   /* capply */
eurotra    : compiler < usd > eurotra                                  /* apply1 */
COMPILER   : yacc < uld | cc > compiler                                /* apply2 */
controldef : fpcomp < cd > controldef
MMU        : echo 'save(mmu)' | prolog dd
CPU        : echo '(dumplisp cpu)' | lisp < controldef

Fig. 5
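The CPU/MMU coupling just described can be pictured with a small Python sketch using a pipe between two processes; the one-message query protocol, the toy database and all names are invented here, standing in for the paper's (unspecified) minimised protocol.

from multiprocessing import Process, Pipe

def mmu(conn):
    # Stand-in MMU: answers database queries until told to stop. The
    # database and the query format are invented for the illustration.
    db = {'trees': ['t1', 't2']}
    while True:
        msg = conn.recv()
        if msg == 'quit':
            break
        conn.send(db.get(msg, '#'))     # '#' = unsuccessful search

if __name__ == '__main__':
    cpu_end, mmu_end = Pipe()           # the pair of pipelines
    p = Process(target=mmu, args=(mmu_end,))
    p.start()                           # bootstrap: start the MMU
    cpu_end.send('trees')               # the CPU consults the MMU ...
    print(cpu_end.recv())               # ... and gets ['t1', 't2']
    cpu_end.send('quit')
    p.join()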
V. CONCLUSION

We have argued for the need of theory-specific software for computational linguistics. In cases where, as in MT, such a theory is not available from the beginning of a project, but rather is expected as a result of it, we have argued for the need of a problem-oriented system generator. We have proposed a solution by which, starting from the notion of a compiler generator driven by an external definition, one arrives at a way of building runnable, problem-oriented systems which are almost entirely externally defined. In our view, this approach has the advantage, for a domain where the class of problems to be solved is underdetermined, that the semantics of the underlying machine can be redefined rapidly in a clean and elegant way. By a careful choice of definition languages, we can use the definitions simultaneously as input to a generator for experimental prototype implementations and as the central part of a formal specification of a particular application-oriented machine.

VI. REFERENCES

Aho, A.V. & Johnson, S.C. (1974) - LR parsing. Computing Surveys 6:2.
Aho, A.V. & Ullman, J.D. (1977) - Principles of Compiler Design. Addison-Wesley.
Backus, J. (1978) - Can programming be liberated from the von Neumann style? Comm. ACM 21:8.
Baden, S. (1982) - Berkeley FP User's Manual, rev 4.1. Department of Computer Science, University of California, Berkeley.
Davis, R. & King, J.J. (1977) - An overview of production systems, in: Elcock, E.W. & Michie, D. (eds) - Machine Intelligence 8: Machine representation of knowledge, Ellis Horwood.
Foderaro, J.K. & Sklower, K. (1982) - The Franz Lisp Manual. University of California.
Georgeff, M.P. (1982) - Procedural control in production systems. Artificial Intelligence 18:2.
Johnson, S.C. (1975) - Yacc: Yet Another Compiler-Compiler, Computing Science Technical Report No. 32, Bell Laboratories, NJ.
Knuth, D.E. (1965) - On the translation of languages from left to right. Information and Control 8:6.
Lesk, M.E. (1975) - Lex: a Lexical Analyzer Generator, Computing Science Technical Report No. 39, Bell Laboratories, NJ.
Pereira & Byrd (1982) - C-Prolog, Ed CAAD, Department of Architecture, University of Edinburgh.
Richards, M. & Whitby-Strevens, C. (1979) - BCPL: The language and its compiler, Cambridge University Press.
Williams, J.H. (1982) - Notes on the FP functional style of programming, in: Darlington, J., Henderson, P. and Turner, D.A. (eds), Functional Programming and its Applications, CUP.
Wirth, N. (1976) - Algorithms + Data Structures = Programs, Prentice Hall, Englewood Cliffs, New Jersey.

VII. APPENDIX

Below we give a BNF definition of the inner program syntax. Capitalized symbols denote non-terminal symbols, lower case symbols denote terminals.

PROC       ::= <QUINT>
QUINT      ::= <NAME EXPECTN FOCUS BODY GOALL>
NAME       ::= IDENTIFIER
IDENTIFIER ::= **
EXPECTN    ::= PAT | nil
FOCUS      ::= VARPAIR
VARPAIR    ::= <ARG ARG>
VAR        ::= VARID
VARID      ::= **
BODY       ::= <nonprim CEXP> | <prim PRIMSP>
CEXP       ::= COMPLEX | SIMPLEX
COMPLEX    ::= <CONTRLTYP CEXP+>
SIMPLEX    ::= NAME
CONTRLTYP  ::= serial | parallel | iterate
PRIMSP     ::= <RULE+>
RULE       ::= <PAT PAT>
GOALL      ::= <PAT*>
PAT        ::= <SYMBTAB ASSERT>
SYMBTAB    ::= ARGL
ARGL       ::= <ARG+>
ASSERT     ::= <& ASSRT ASSRT> | <v ASSRT ASSRT> | <~ ASSRT>
ASSRT      ::= SIMPLASSRT | ASSERT
SIMPLASSRT ::= <RELNAM TERML>
RELNAM     ::= > | < | = | * | IDENTIFIER | prec | dom | prefix | suffix | infix
TERML      ::= <TERM+>
TERM       ::= ARG | <FUNC TERML>
ARG        ::= <TYP VAR> | LITERAL | null
LITERAL    ::= **
FUNC       ::= IDENTIFIER | length
TYP        ::= node | tree | chain | bound

For each instance of the system, there is an instance of the inner program syntax which differs from the bare inner program syntax in that certain symbols are expanded differently depending on other definitions supplied to the system.

** trivial expansions omitted here.
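One way to picture the quintuples defined by this BNF is as plain records. The following Python sketch is only a transliteration of the grammar's field names into a data structure; the class names, the example process and the abstract treatment of patterns are invented for the illustration.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Pattern = Tuple[list, list]            # <SYMBTAB ASSERT>, kept abstract here

@dataclass
class Quint:                           # QUINT ::= <NAME EXPECTN FOCUS BODY GOALL>
    name: str
    expectation: Optional[Pattern]     # EXPECTN ::= PAT | nil
    focus: Tuple[str, str]             # FOCUS ::= <ARG ARG>
    body: Tuple[str, object]           # <nonprim CEXP> or <prim PRIMSP>
    goals: List[Pattern] = field(default_factory=list)

# A non-primitive process whose body runs two sub-processes in sequence.
proc = Quint(name='analyse',
             expectation=None,
             focus=('in', 'out'),
             body=('nonprim', ('serial', ['segment', 'tag'])),
             goals=[])
print(proc)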
1984
50
MACHINE TRANSLATION : WHAT TYPE OF POST-EDITING ON WHAT TYPE OF DOCUMENTS FOR WHAT TYPE OF USERS

Anne-Marie LAURIAN
Centre National de la Recherche Scientifique
Université de la Sorbonne Nouvelle - Paris III
19 rue des Bernardins, 75005 Paris (France)

ABSTRACT

Various typologies of technical and scientific texts have already been proposed by authors involved in multilingual transfer problems. They were usually aimed at a better knowledge of the criteria for deciding if a document has to be, or can be, machine translated. Such a typology could also lead to a better knowledge of the typical errors occurring, and so lead to more appropriate post-editing, as well as to improvements in the system. Raw translations being usable, as they quite often are, for rapid information needs, it is important to draw the limits between a style adequate for rapid information, and an elegant, high quality style such as required for large dissemination of information. Style could be given a new definition through a linguistic analysis based on machine translation, on communication situations and on the users' requirements and satisfaction.

I. MACHINE TRANSLATION AND POST-EDITING, A EUROPEAN EXAMPLE

Machine translation is often considered as a project, an experimental process, if not an impossible dream. Translation theoreticians would say no machine can understand the meaning of a text and re-express it in another language, so no machine can translate. The debate is about the necessity of a deep semantic understanding for translating, opposed to a knowledge of language structure being sufficient to produce a translation. The usual debate is thus about the ideal concept each one has of what a translation should be. Translation can only be defined in particular situations, regarding particular documents. And machine translation is only to be used for certain types of documents to be handled a certain way.

My observations are based on several studies I carried out on the SYSTRAN output produced in Luxembourg within the Commission of the European Communities. In Luxembourg the amount of documents to be translated is not only very big, it is also growing very fast. The European rule is that all official documents have to be translated into the seven official languages; technical documents needed for conferences or experts' meetings are sometimes translated only into three or four languages (English, French, German, Italian). The delay available is often very short. That led the C.E.C. General Direction for Multilingual Transfers to promote machine translation. When they started it, some six years ago, SYSTRAN was the only system ready to produce translations. This system, originated in the U.S., has then been developed for the proper use of the Commission. The output was far from being perfect, far from being usable as it was. Post-editing was being done. Even with the huge progress of the output quality, post-editing is still necessary. It will, in fact, always be necessary because as people get used to their translation being done by a computer, their requirements are becoming more precise. The errors one would admit at an experimental stage are no longer acceptable at a productive stage. Post-editing is thus becoming a new specialization within the numerous fields related to translation.

II - A TYPOLOGY OF DOCUMENTS BASED ON M.T. ERRORS

All documents are not suitable for machine translation. Lots of negative reactions against M.T. have been induced by a wrong use of M.T.
Aware of the necessity of differentiating the documents, people responsible for translation proposed several types of typologies. They were mainly based on the subject field of the text, on its function, on its structure, on the sentence and paragraph length and complexity, and on the use of particular terminologies. The aim was to enable the chief of a translation division to choose which texts were to be sent to a human translator, and which could be processed by M.T. My study of the errors remaining in the raw translations led me to propose a strictly linguistic typology.(1)

There are three major types of errors:
1. errors on isolated words,
2. errors on the expression of relations,
3. errors on the structure and on the information display.

These errors are classified in three tables:
1.1 vocabulary, terminology,
1.2 proper names and abbreviations,
1.3 relators: - in nominal groups, - in verbal groups,
1.4 noun determinants, verbal modificators;
2.5 verb forms (tense),
2.6 verb forms (passive/active) and personalization (passive/non personal),
2.7 expression of modality or not,
2.8 negation;
3.9 logical relations, phrase introducers,
3.10 words order,
3.11 general problems of incidence.

The relative frequency of these errors can be read in my tables. These tables can be used to evaluate the probable quantity and location of errors existing after M.T., i.e. the probable quantity, location and type of post-editing. With a short training in linguistics, anyone could get trained to use these tables. By a rapid reading of the documents to be translated on the basis of these features, and according to the relative frequency of one category of probable errors or the other, one could then easily evaluate if a document should be translated by a translator or is suitable for M.T.

III - TYPES OF POST-EDITING

The system used in Luxembourg is still being developed. That means that errors are getting fewer. For instance three years ago verb forms were translated "form to form"; now new rules have been introduced in order to get a past tense for a present tense (or reverse), a passive form for an impersonal one (or reverse), a.s.o.

(1) cf. A.M. Loffler-Laurian, Pour une typologie des erreurs dans la traduction automatique, in MULTILINGUA, 2-2 (1983), 65-78.

But at the same time the variety of documents machine translated is growing. That means new sources of errors (mainly vocabulary, but also modalities, structures, a.s.o.). Post-editing is always necessary. Until now post-editing has been done by translators who are willing to do it. The amount of post-editing to do is increasing every day; it becomes obvious that post-editing can't be done just according to somebody's feeling of language and style. There have to be some rules. Post-editing is not revision, nor correction, nor rewriting. It is a new way of considering a text, a new way of working on it, for a new aim. In order to define the characteristics of post-editing, I carried out a study on the two major types of post-editing as they appear in the C.E.C.(2)

1. The conventional post-editing (C.P.E.) is supposed to produce a text as similar as possible to what a human translation would have been, that means a high quality text.
2. The rapid post-editing (R.P.E.) is supposed to produce a correct text (on the language level as well as on the level of the meaning) but without taking care of the style.

In the experiment I carried out, time required for post-editing was the only criterion used to differentiate these two methods.
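The two decisions discussed here, whether to machine-translate at all and which post-editing mode to budget for, can be caricatured in a few lines of Python. The per-category weights and thresholds below are invented placeholders, since the paper's actual frequency tables are not reproduced in this text.

# Error categories 1.1-3.11 from the typology above; the weights and the
# routing thresholds are invented for the sake of the illustration.
WEIGHTS = {'1.1': 1, '1.2': 1, '1.3': 2, '1.4': 2,
           '2.5': 3, '2.6': 3, '2.7': 3, '2.8': 3,
           '3.9': 5, '3.10': 5, '3.11': 5}

def route(predicted_freq):
    """predicted_freq: estimated errors per 100 words, per category."""
    score = sum(WEIGHTS[c] * f for c, f in predicted_freq.items())
    if score > 200:                      # too error-prone for M.T.
        return 'human translator'
    if score < 50:                       # raw output close to usable
        return 'M.T. + rapid post-editing'
    return 'M.T. + conventional post-editing'

print(route({'1.1': 4.0, '3.10': 0.5}))  # -> 'M.T. + rapid post-editing'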
It appeared that special linguistic attitudes were induced by time limitation. A statistical survey of C.P.E. and R.P.E. shows the limits between:
1. necessary post-editing,
2. possible post-editing,
3. superfluous post-editing.

The first group includes all post-editing that has to be done to make the text understandable, clear, readable, exact. The second group includes some research in style focused on the adaptation to the communication situation, to the author and to the presumed reader. The third group is post-editing done by people who didn't want to admit that perfection was not the aim, and that a document that will be read quickly and thrown away immediately does not require the same style as a document that will be published and largely distributed. These people usually could not give out their R.P.E. in the limited time allowed for it.

(2) cf. A.M.L.L., Post-édition conventionnelle et post-édition rapide : vers une méthodologie de la post-édition, to be published.

In rapid post-editing one has to focus on the central information, and is naturally kept from the temptation of rewriting the sentences where errors occur. Then the post-editor finds the shortest solution, which is usually the right one. By staying very close to the raw translation, post-editors succeed in giving a good and acceptable translation. Those who, after having post-edited according to the minimal requirements, try to make the text fit better the usual style they know, give us indications to point out the difference between:
- a text that is correct according to standard language rules,
- a text that obeys the usage rules in use on that level of documents or level of language (some "sub-rules" specific to some specialized fields, authors, situations).

IV - STYLE, SITUATIONS AND USERS

Style in literature is usually defined as the specific way an author writes. Do technical and scientific documents have a specific style? Many people would agree on the idea that these documents have no style - or have a neutral style. In terms of linguistic features, they can be described as well as any other writing. However the non-apparent aspect of style in informative documents is an important component of their ability to be machine translated. In a novel, the style of the author would be its main value, whereas in an informative document the transparency of style, its leaving the reader unaware of it, would be essential. Even more: if style were to be felt, the information would most probably lose some of its accuracy and credibility. In every translation situation the author has some information to transmit to a user. Let it be a technical or a political information, a scientific or a social information, the goal may be double: have the reader know more about a question (that relates to didactics), and have the reader react in a specific way to the text. Regarding this second goal, the best style, the most adequate, would be the one that would bring the reader to the point the author wanted. The neutrality of a computerized system is quite fitted to that situation. And minimal post-editing often creates the best style. The users' satisfaction should be the ultimate criterion to evaluate the adequacy of a style. Are readers getting used to some new style based on machine translation? Some people fear for the future of their language: it could evolve uncontrolled because of a new kind of users getting used to some new variety of language induced by a new tool for translation.
They fear a loss of some linguistic property. Languages have always been exposed to multiple influences (wars, invasions, economical trends, cultural exchanges, a.s.o.). They are now exposed to technical influences. Machine translation is already used by translation services. It will certainly soon be used by private translators (various systems are developed or under development in several countries). It could be used with great profit by linguists and professors to help them think about their own use of language, about the varieties of specialized uses of language, and about the future programmes that could be built up for new generations of students.

REFERENCES

- MULTILINGUA, a journal of interlanguage communication, Mouton publishers, see:
G. Van Slype, 1-4 (1982), 221-237
A.M. Loffler-Laurian, 2-2 (1983), 65-78
I.M. Pigott, 2-3 (1983), 149-156
- CONTRASTES, a journal of contrastive linguistics, ADEC publisher, see:
J. Humbley, N° 7, Nov. 1983, 35-47
M. King, N° A3, 1983, 53-59
A.M. Loffler-Laurian, S. Krauwer & L. Des Tombe, M.C. Bourquin-Launey, X. Huang, G. Bourquin, J.L. Vidalenc, R. Johnson, J.M. Zemb, N° A4 ("Traduction automatique - aspects européens"), 1984, 167 pp.
1984
51
Simplifying Deterministic Parsing

Alan W. Carter(1)
Department of Computer Science
University of British Columbia
Vancouver, B.C. V6T 1W5

Michael J. Freiling(2)
Department of Computer Science
Oregon State University
Corvallis, OR 97331

ABSTRACT

This paper presents a model for deterministic parsing which was designed to simplify the task of writing and understanding a deterministic grammar. While retaining structures and operations similar to those of Marcus's PARSIFAL parser [Marcus 80] the grammar language incorporates the following changes. (1) The use of productions operating in parallel has essentially been eliminated and instead the productions are organized into sequences. Not only does this improve the understandability of the grammar, it is felt that this organization corresponds more closely to the task of performing the sequence of buffer transformations and attachments required to parse the most common constituent types. (2) A general method for interfacing between the parser and a semantic representation system is introduced. This interface is independent of the particular semantic representation used and hides all details of the semantic processing from the grammar writer. (3) The interface also provides a general method for dealing with syntactic ambiguities which arise from the attachment of optional modifiers such as prepositional phrases. This frees the grammar writer from determining each point at which such ambiguities can occur.

1. INTRODUCTION

Marcus has effectively described the advantages of a deterministic parsing model as is embodied in his PARSIFAL system. Unfortunately a hindrance to the usability of PARSIFAL is the complexity of its grammar. The popularity of Woods' ATN parsing model [Woods 70] demonstrates that the ease with which a grammar can be written and understood is one of the greatest factors contributing to its usability. This paper describes DPARSER (Deterministic PARSER) which is an implementation of an alternate deterministic parsing model intended to reduce the complexity of deterministic grammars.

DPARSER has been implemented and a small grammar written. In developing the grammar the focus has been on dealing with the syntactic ambiguities between the attachment of phrases and thus it can currently handle only simple noun and verb phrases.

2. CONSTITUENT BUFFER

DPARSER maintains a constituent buffer which is manipulated by the grammar to derive the constituent structure of the input sentence. Each node of the buffer contains a constituent consisting of a set of feature-type, feature-value pairs, and a set of subconstituents. When parsing begins the constituent buffer contains a single node with an associated subgrammar for parsing sentence constituents. As the subgrammar of the sentence node examines the buffer positions to its right, words are brought in from the input sentence to fill the empty positions. When the grammar discovers a subconstituent phrase to be parsed, it performs a PUSH operation specifying a subgrammar for parsing the constituent and the position of the rightmost word in the constituent phrase. The PUSH operation inserts a new node into the buffer immediately preceding the constituent phrase and begins executing the specified subgrammar. This subgrammar may of course perform its own PUSH operations and the same process will be repeated.

(1) Supported in part by an I.W. Killam Predoctoral Fellowship.
(2) Supported in part by the Blum-Kovler Foundation, Chicago, Ill.
Once the subconstituent is complete, control returns to the sentence node and the buffer will contain the parsed constituent in place of those which made up the constituent phrase. When all the subconstituents of the sentence node have been attached the parsing is complete.

To familiarize the reader with the form of the constituent buffer we consider the processing of the sentence Jones teaches the course, as the final NP is about to be parsed. Figure 1 shows the current state of each buffer node giving its position, state of execution, essential syntactic features, and the phrase which it dominates so far. Following the terminology of Marcus we refer to the nodes which have associated subgrammars as active nodes and the one currently executing is called the current active node. All buffer positions are given relative to the current active node whose position is labeled "*". The buffer in its current state contains two active nodes: the original sentence node and a new node which was created to parse the sentence predicate (i.e. verb phrase and its complements).

Figure 1.
POSITION -1  active  SYNCLASS S  SENT-TYPE DECL  (Jones)
POSITION *  current active  SYNCLASS PRED  VTYPE ONE-OBJ  (teaches)
UNSEEN WORDS: the course.
before pushing to parse the NP

The next modification of the buffer takes place when the subgrammar for the predicate node examines its first position, causing the word the to be inserted in that position. At this point a bottom-up parsing mechanism recognizes that this is the beginning of a noun phrase and a PUSH is performed to parse it; this leaves the buffer in the state shown in Figure 2.

Figure 2.
POSITION -2  active  SYNCLASS S  SENT-TYPE DECL  (Jones)
POSITION -1  active  SYNCLASS PRED  VTYPE ONE-OBJ  (teaches)
POSITION *  current active  SYNCLASS NP  ()
POSITION 1  not active  SYNCLASS DET  WORD THE  EXT DEF  (the)
UNSEEN WORDS: course.
parsing the noun phrase

The subgrammar for the noun phrase now executes and attaches the words the and course. It then examines the buffer for modifiers of the simple NP, which causes the final punctuation, ".", to be inserted into the buffer. Since the period can not be part of a noun phrase, the subgrammar ends its execution, the PUSH is completed, and the predicate node again becomes the current active node. The resulting state of the buffer is shown in Figure 3; the words the and course have been replaced by the noun phrase constituent which dominates them.

Figure 3.
POSITION -1  active  SYNCLASS S  SENT-TYPE DECL  (Jones)
POSITION *  current active  SYNCLASS PRED  VTYPE ONE-OBJ  (teaches)
POSITION 1  not active  SYNCLASS NP  NVFORM N3PS  (the course)
POSITION 2  not active  SYNCLASS FINAL-PUNCT  WORD .  (.)
after the push is completed

Aside from PUSH and ATTACH, the following three operations are commonly used by the grammar to manipulate the constituent buffer.

LABEL   label a constituent with a syntactic feature
MOVE    move a constituent from one position to another
INSERT  insert a word into a specified position

Examples of these actions are presented in the following section. The differences between the data structures maintained by PARSIFAL and DPARSER are for the most part conceptual. PARSIFAL's active nodes are stored in an active node stack which is separate from the constituent buffer. To allow active nodes to parse constituent phrases which are not at the front of the buffer, an offset into the buffer can be associated with an active node. The control of which active node is currently executing is effected through operations which explicitly manipulate the active node stack.
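For readers who find the buffer transcripts easier to follow as code, here is a toy Python rendering of the operations just listed. The class and method names are invented and feature handling is reduced to a dictionary; it is a sketch of the buffer discipline, not of DPARSER itself.

class Node:
    def __init__(self, **features):
        self.features = features          # e.g. SYNCLASS, SENT-TYPE, ...
        self.children = []                # attached subconstituents

class Buffer:
    def __init__(self, words):
        self.nodes = [Node(WORD=w) for w in words]

    def push(self, subgrammar, at=0):     # PUSH: create a new active node
        node = Node(SYNCLASS=subgrammar)
        self.nodes.insert(at, node)
        return node

    def attach(self, parent, pos):        # ATTACH: consume a constituent
        parent.children.append(self.nodes.pop(pos))

    def label(self, pos, **features):     # LABEL
        self.nodes[pos].features.update(features)

    def move(self, src, dst):             # MOVE
        self.nodes.insert(dst, self.nodes.pop(src))

    def insert_word(self, pos, word):     # INSERT
        self.nodes.insert(pos, Node(WORD=word))

b = Buffer(['the', 'course'])
np = b.push('NP')                         # push a node to parse the NP
b.attach(np, 1)                           # attach 'the'
b.attach(np, 1)                           # attach 'course'
print([c.features['WORD'] for c in np.children])   # ['the', 'course']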
Church's deterministic parser, YAP [Church 80], uses a constituent buffer consisting of two halves: an upper buffer and a lower buffer. The grammar rules try to attach nodes from the lower buffer to those in the upper buffer. While this structure is similar to PARSIFAL's, it does not draw such a rigid distinction between active and inactive nodes. There are no separate subgrammars associated with the nodes which constituents are being attached to, and nodes may be moved freely from one buffer to the other, allowing them to be attached before they are complete. While our constituent structure does maintain active nodes with separate subgrammars, the control of the parsing process is similar to that used by Church in that it is possible for incomplete nodes to be attached. As will be seen in a later section this is an essential feature of DPARSER's constituent buffer.

3. SEQUENCES

In DPARSER each constituent is assigned a sequence. Each sequence consists of a list of steps which are applied to the buffer in the order specified by the sequence. A step operator indicates how many times each step can apply: steps marked with "+" need never apply, those marked by "=" must apply once, and those marked by "*" can apply any number of times. A step may call another sequence, which has the effect of inserting, immediately following that step, the steps of the named sequence. Each step consists of a list of rules where the priority of the rules is made explicit by their ordering in the list. Each rule is of the form

[p1] [p2] ... [pn] --> (a1) (a2) ... (am)

Each precondition, pi, tests a buffer node for the presence or absence of specified feature-type, feature-value pairs. When a rule is applied each action, ai, is evaluated in the specified order. In attempting to apply a step, each of the step's rules is tested in order; the first one whose preconditions match the current buffer state is performed.

In order to recognize certain constituent types bottom-up, sequences may be associated with a bottom-up precondition. When the parser encounters a node which matches such a precondition, a PUSH to the sequence is performed. This mechanism is equivalent to PARSIFAL's attention shifting rules and is used primarily for parsing noun phrases.

In order to clarify the form of a sequence, the example sequence TRANS-MAJOR-S shown in Figure 4 is discussed in detail. This sequence is associated with the initial sentence node of every input sentence. It performs the operations necessary to reduce the task of parsing an input sentence to that of parsing a normal sentence constituent as would occur in a relative clause or a sentence complement. While this sequence will misanalyze certain sentences it does handle a large number through a small set of rules.
STEP: 1 +
[1 WORD WHICH] --> (LABEL 1 {SYNCLASS DET} {EXT WH})
[1 WORD WHO] --> (LABEL 1 {SYNCLASS NP} {EXT WH})
STEP: 2 =
[1 EXT WH] --> (LABEL * {SENT-TYPE QUEST} {QUEST-TYPE NP})
[1 SYNCLASS NP] --> (LABEL * {SENT-TYPE DECL})
[1 ROOT HAVE][2 SYNCLASS NP][3 TENSE TENSELESS] --> (LABEL * {SENT-TYPE IMPER})
[1 VTYPE AUXVERB] --> (LABEL * {SENT-TYPE QUEST} {QUEST-TYPE YN})
[1 TENSE TENSELESS] --> (LABEL * {SENT-TYPE IMPER})
STEP: 3 +
[1 EXT WH][2 VTYPE AUXVERB][3 SYNCLASS NP][4 NOT PTYPE FINAL] --> (MOVE 1 WH-COMP)
STEP: 4 +
[1 QUEST-TYPE (YN NP-QUEST)] --> (MOVE 2 1)
[1 STYPE IMPER] --> (INSERT 1 you)

Figure 4. SEQUENCE TRANS-MAJOR-S

STEP 1 handles the words which and who, which behave differently when they appear at the beginning of a sentence. The first rule determines if which is the first word; if it is, then it labels it as a determiner. The second rule handles who, which it labels as an NP.

STEP 2 examines the initial constituents of the sentence to determine whether the sentence is imperative, interrogative, declarative, etc. Since each sentence must be analyzed as one of these types, the step is modified by the "=" operator, indicating that one of the step's rules must apply. The first rule tests whether the initial constituent of the sentence is a WH type NP; NPs like who, which professor, what time, etc. fall into this category. If this precondition succeeds then the sentence is labeled as a question whose focus is a noun phrase. The second rule tests for a leading NP and, if it is found, the sentence is labeled as declarative. Note that this rule will not be tested if the first rule is successful, and the step depends on this feature of step evaluation. The following rule tries to determine if have, appearing as the first word in a sentence, is a displaced auxiliary or is the main verb in an imperative sentence. If the rule succeeds then the sentence is labeled as imperative; otherwise the following rule will label any sentence beginning with an auxiliary as a yes/no type question. The final rule of the step labels sentences which begin with a tenseless verb as imperatives.

STEP 3 picks up a constituent which has been displaced to the front of the sentence and places it in the special WH-COMP register. Generally a constituent must have been displaced if it is a WH type NP followed by an auxiliary followed by another NP; however, an exception to this is sentences like Who is the professor? in which the entire sentence consists of these three constituents.

STEP 4 undoes any interrogative or imperative transformations. The first rule moves a displaced auxiliary around the NP in sentences like Has Jones taught Lisp? and When did Jones teach Lisp?. Note that for the latter sentence the previous step would have picked up when and hence did would be at the front of the buffer. The second rule of this step inserts you into the buffer in front of imperative sentences.

Like DPARSER, PARSIFAL's grammar language is composed of a large set of production rules. The major difference between the two languages is how the rules are organized. PARSIFAL's rules are divided into packets several of which may be active at once. At any point in the parsing each of the rules in each active packet may execute if its precondition is matched. In contrast to this organization, DPARSER's sequences impose a much stronger control on the order of execution of the productions. Aside from the bottom-up parsing mechanism the only competition between rules is between those in the individual steps. The purpose of constraining the order of execution of the productions is to reflect the fact that the parsing of a particular constituent type is essentially a sequential process.
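The sequence formalism itself is easy to mock up. In the Python sketch below a step is an (operator, rules) pair and the applicator enforces the "+", "=" and "*" operators; the matching and action machinery of the real grammar is reduced to callables over a toy feature dictionary, and every name is invented.

def apply_step(rules, buffer):
    for test, act in rules:                   # first matching rule wins
        if test(buffer):
            act(buffer)
            return True
    return False

def run_sequence(steps, buffer):
    """steps: list of (op, rules); op is '+', '=' or '*'."""
    for op, rules in steps:
        if op == '*':                         # apply as often as possible
            while apply_step(rules, buffer):
                pass
        else:                                 # '+' and '=' apply at most once
            applied = apply_step(rules, buffer)
            if op == '=' and not applied:
                raise ValueError('obligatory step did not apply')

# e.g. the heart of STEP 2 above, with toy tests and actions:
step2 = ('=', [
    (lambda b: b.get('EXT') == 'WH',
     lambda b: b.update({'SENT-TYPE': 'QUEST', 'QUEST-TYPE': 'NP'})),
    (lambda b: b.get('SYNCLASS') == 'NP',
     lambda b: b.update({'SENT-TYPE': 'DECL'})),
])
buf = {'SYNCLASS': 'NP'}
run_sequence([step2], buf)
print(buf['SENT-TYPE'])                       # -> DECL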
Most of the rules involved in the parsing of a constituent can only apply at a particular point in the parsing process. This is particularly true of transformational rules and rules which attach constituents. Those rules which can apply at various points in the parsing may be repeated within the sequence so that they will only be tested when it is possible for them to apply and they will not be allowed to apply at points where they should not. Clearly the necessity to repeat rules at different points in a sequence can increase the size of the grammar; however, it is felt that a grammar which clearly specifies the possible set of actions at each point can be more easily understood and modified.

4. SEMANTIC PROCESSING

While semantic processing was outside Marcus's central concern, a semantic system was developed which operates in parallel with PARSIFAL, constructing the semantic representation as its subconstituents were attached. In order to deal with syntactic ambiguities the action part of rules can contain semantic tests which compare the semantic well-formedness of interpretations resulting from a set of possible attachments. Such comparative tests can choose between one or more constituents to attach in a particular syntactic role; for example a rule for attaching a direct object can use such a test to choose whether to attach a displaced constituent or the next constituent in the buffer. Comparative tests can also be used to decide whether to attach an optional modifier (such as a prepositional phrase) or leave it because it better modifies a higher level node. Unfortunately this latter class of tests requires each rule which attaches an optional modifier to determine each node to which it is syntactically possible to attach the modifier. Once this set of syntactically possible nodes is found, semantics must be called to determine which is the best semantic choice. Such tests complicate the grammar by destroying the modularity between the subgrammars which parse different constituent types.

For the LUNAR system [Woods 73] Woods added an experimental facility to the basic ATN framework which allowed an ATN to perform such comparative tests without requiring them to be explicitly coded in the grammar. The Selective Modifier Placement mechanism was invoked upon completion of an optional modifier such as a PP. It then collected all the constituents which could attach the modifier and performed the attachment it determined to be the best semantic fit. A mechanism similar to this is incorporated as a central part of DPARSER and is intended to be used whenever an attachment is locally optional. Before giving the details of this mechanism we discuss the semantic interface in general.

In DPARSER a narrow interface is maintained between syntax and semantics which alleviates the grammar writer of any responsibility for semantic processing. The interface consists of the ATTACH action, which immediately performs the specified attachment, and the IF-ATTACH test, which only succeeds if the attachment can be performed in light of the other constituents which may want to attach it. Both ATTACH and IF-ATTACH have the same parameters: the buffer position of the constituent to be attached and a label identifying the syntactic relationship between the constituent and its parent. Such a label is equivalent to a "functional label" of the BUS system [Bobrow & Webber 80].
When an attachment is performed the semantic system is passed the parameters of the attachment, which it then uses to recompute the interpretation of the current active node. IF-ATTACH tests are included as the final precondition of those grammar rules which wish to attach a trailing modifier; the test returns true if it is syntactically possible for the modifier to be attached and the modifier best semantically modifies that node. If the test is true then the attachment is performed as a side effect of the test.

To the grammar writer the IF-ATTACH test has the prescient capability to foresee which active node should be allowed to attach the modifier and immediately returns true or false. However, the implementation requires that when an IF-ATTACH test is performed, the current active node must be suspended and the node which pushed to it restarted. This node can then execute normally with the suspended active node appearing like any other node in the buffer. The node continues executing until it either completes, in which case the process continues with the next higher active node, or it encounters the IF-ATTACHed node. If, at this point, the active node issues another IF-ATTACH then this new request is recorded with the previous ones and the process continues with the next higher active node. This sequence of suspensions will end if an active node becomes blocked because it expects a different constituent type than the one in the position of the IF-ATTACHed node. When this occurs the interpretations which would result from each of the pending IF-ATTACH tests are computed and the attachment whose interpretation the semantic system considers to be the most plausible is performed. Alternately, a sequence of suspensions may be terminated when an active node ATTACHes the node that the suspended active nodes had tried to IF-ATTACH. Such a situation, an example of which occurs in the parsing of the sentence Is the block in the box?, indicates that the pending IF-ATTACH requests are syntactically impossible and so must fail.

The following example shows how the IF-ATTACH mechanism is used to handle sentences where the attachment of a prepositional phrase is in question. We consider the parsing of the sentence Jones teaches the course in Lisp. We start the example immediately following the parsing of the PP (Figure 5).

Figure 5.
POSITION -2  active  SYNCLASS S  SENT-TYPE DECL  (Jones)
POSITION -1  active  SYNCLASS PRED  VTYPE ONE-OBJ  (teaches)
POSITION *  current active  SYNCLASS NP  NVFORM N3PS  (the course)
POSITION 1  not active  SYNCLASS PP  (in Lisp)
UNSEEN WORDS: .
after the completion of 'in Lisp'

At this point the sequence for the noun phrase is about to apply the rule shown in Figure 6, which tries to attach PP modifiers. Since the precondition preceding the IF-ATTACH test is true the IF-ATTACH test is made. This causes the current active node to be suspended until it can be decided whether the attachment can be performed (Figure 7). Control now returns to the predicate node which attaches the suspended NP as the object of the verb. As normally occurs after an attachment, the NP node is removed from the buffer; however, because the node will eventually be restarted it retains a virtual buffer position. The sequence for parsing the predicate now applies the same IF-ATTACH rule (Figure 6) to attach any prepositional phrase modifiers. Again since the PP is the first constituent in the buffer the IF-ATTACH test is performed and the predicate node is suspended, returning control to the sentence active node (Figure 8). When the sentence node restarts its execution, it attaches the predicate of the sentence leaving the buffer as shown in Figure 9.

[1 SYNCLASS PP][IF-ATTACH 1 PP-MOD] -->
Figure 6. rule for attaching prepositional phrases
Again since the PP is the first constituent in the buffer the IF-ATTACH test is performed and the predicate node is suspended returning control to the sentence active node (Figure 8). When the sentence node restarts it execution, it attaches the predicate of the sentence leaving the buffer as shown in Figure 9. [I SYNCLASS PP]~F-ATTACH I PP-MOD] -~ Figure 6. rule for attaehin?~ prepositional phrases 241 Fisure 7. POSITION -I active SYNCLASS S SENT-TYPE DECL (Jones/ POSITION * current active SYNCLASS PRED VTYPE ONE-OBJ (teaches) POSITION 1 suspended active SYNCLASS NP NVFORM N3PS (the co,~rse) POSITION 2 not active SYNCLASS PP (in Lisp) POSITION 3 not active SYNCLASS I:'INAL-PUNCT WORD . f3 after the NP has tried to attach the PP Figure 8. POSITION * active SYNCLASS S SENT-TYPE DECL (Jo.ss) POSITION 1 suspended active SYNCLASS PRED VTYPE ONE-OBJ (teaches) DELETED suspended active SYNCLASS NP NW'FORM N3PS (the course) POSITION 2 not active SYNCLASS PP 6a Limp) POSITION 3 not active SYNCLASS FINAL-PUNCT WORD. (3 after the PRED node has tried to attach the PP POSITION * current active SYNCLASS S SENT-TYPE DECL (Jones teaches the course) DELETED suspended active SYNCLASS PRED VTYPE ONE-OBJ (teaches the course) DELETED suspended active SYNCLASS NP NVFORM N3PS (the course) POSITION 1 not active SYNCLASS PP 6n L"p) POSITION 2 not active SYNCLASS FINAL-PUNCT WORD. (3 Figure0. after the subject and predicate have been attached Ilaving found a complete sentence the sentence node executes a final step which expects to find the final punctuation; since there is none the step fails. This failure triggers the arbitration of the set of pending IF-ATTACH requests for the attachment of the PP. In this case the semantic system determines that the PP should modify the NP. The parser then restarts the NP node at the point where it i~ued the IF-ATTACH and allows it to make the attachment (Fig- ure 10). The NP node then tries again to attach a PP but seeing only the period it realizes that its constituent is complete and ter- minates. Next the monitor restarts the predicate active node but does not allow it to make the attachment. This results in the node eventually terminating without performing any more actions. At this point each of the IF-ATTACH requests have been processed and the step whose failure caused the processing of the requests is retried. This time it is successful in finding the final punctuation and attaches it. The parse is now complete {Figure 11). Aside from prepositional phrase attachment there are many other situations where optional modifiers can arise. For example in POSITION -I active SYNCLASS S SENT-TYPE DECL (Jones teaches the course in lisp) DELETED suspended active SYNCLASS PRED VTYPE ONE-OBJ (teaches the course in Lisp) DELETED * current active SYNCLASS NP NVFORM N3PS (the co,Jrse in Lisp) POSITION I not active SYNCLA3S FINAL-PUNCT WORD. (3 Figure 10. after the PP is attached POSITION * current active SYNCLASS S SENT-TYPE DECL (Jones teaches the course ia Lisp .) Fisure It. the sentence ! saw the boy using the telescope the phrase using the telescope may modify boy as a relative clause where the relative pro- noun has been deleted, or it may modify saw where the preposition by has been deleted. Another example is the sentence Is the block in the boz?. In this sentence the PP in the b0z must, for syntactic rea- sons, complement the verb; however, in the local context of parsing the NP the block, it is possible for the PP to modify it. 
IF-ATTACH can easily be extended to attach optional pre-modifiers; it could then be used to derive the internal structure of such complex noun phrases as the Lisp course programming assignment. The IF-ATTACH test is proposed as a mechanism to solve this general class of problems without requiring the grammar writer to explicitly list all constituents to which an unattached constituent can be attached. Instead, it is sufficient to indicate that a trailing modifier is optional and the monitor does the work in determining whether the attachment should be made.

5. CONCLUSION

A grammar language for deterministic parsing has been outlined which is designed to improve the understandability of the grammar. Instead of allowing a large set of rules to be active at once, the grammar language requires that rules be organized into sequences of steps where each step contains only a small number of rules. Such an organization corresponds to the essentially sequential nature of language processing and greatly improves the perspicuity of the grammar. The grammar is further simplified by means of a general method of interfacing between syntactic and semantic processing. This interface provides a general mechanism for dealing with syntactic ambiguities which arise from optional post-modifiers.

REFERENCES

Bobrow, R.J. and B.L. Webber [1980] "PSI-KLONE: Parsing and Semantic Interpretation in the BBN Natural Language Understanding System", in Proceedings of the CSCSI/SCEIO Conference 1980.
Carter, A.W. [1983] "DPARSER -- A Deterministic Parser", Masters Thesis, Oregon State University.
Church, K.W. [1980] On Memory Limitations in Natural Language Processing, MIT/LCS Technical Report #245, Cambridge, Mass.
Marcus, M.P. [1976] "A Design for a Parser for English", in Proceedings of the ACM Conference 1978.
Marcus, M.P. [1980] A Theory of Syntactic Recognition for Natural Language, The MIT Press, Cambridge, Mass.
Rustin, R. [1973] Natural Language Processing, Algorithmics Press, New York.
Woods, W.A. [1970] "Transition Network Grammars for Natural Language Analysis", in Communications of the ACM 13:591.
Woods, W.A. [1973] "An Experimental Parsing System for Transition Network Grammars", in [Rustin 73].
1984
52
DEALING WITH CONJUNCTIONS IN A MACHINE TRANSLATION ENVIRONMENT Xiumlng HUANG Institute of Linguistics Chinese Academy of Social Sciences BeiJing, China* ABSTRACT The paper presents an algorithm, written in PROLOG, for processing English sentences which contain either Gapping, Right Node Raising (RNR) or Reduced Conjunction (RC). The DCG (Definite Clause Grammar) formalism (Pereira & Warren 80) is adopted. The algorithm is highly efficient and capable of processing a full range of coordinate constructions containing any number of coordinate conjunctions ('and', 'or', and 'but'). The algorithm is part of an English-Chinese machine translation system which is in the course of construction. 0 INTRODUCTION Theoretical linguists have made a considerable investigation into coordinate constructions (Ross 67a, Hankamer 73, Schachter 77, Sag 77, Gazdar 81 and Sobin 82, to name a few), giving descriptions of the phenomena from various perspectives. Some of the descriptions are stimulating or convincing. Computational linguists, on the other hand, have achieved less than their theoretical counterparts. (Woods 73)'s SYSCONJ, to my knowledge, is the first and the most often referenced facility designed specifically for coordinate construction processing. It can get the correct analysis for RC sentences like (i) John drove his car through and completely demolished a plate glass window but only after trying and failing an indefinite number of times, due to its highly non- deterministic nature. (Church 79) claims '~ome impressive initial progress" processing conjunctions with his NL parser YAP. Using a Marcus-type attention shift mechanism, YAP can parse many conjunction constructions including some cases of Gapping. It doesn't offer a complete solution to conjunction processing though: the Gapping sentences YAP deals with are only those wlth two NP remnants in a Gapped conjunct. * Mailing address: Cognitive Studies Centre, University of Essex, Colchester C04 3SQ, England. (McCord 80) proposes a "more straightforward and more controllable" way of parsing sentences like (I) within a Slot Grammar framework. He treats "drove his car through and completely demolished" as a conjoined VP, which doesn't seem quite valid. (Boguraev 83) suggests that when "and" is encountered, a new ATN arc be dynamlcally constructed which seeks to recognise a right hand constituent categorlally similar to the left hand one just completed or being currently processed. The problem is that the left-hand conjunct may not be the current or most recent constituent hut the constituent of which that former one is a part. (Berwlck 83) parses successfully Gapped sentences like (2) Max gave Sally a nickel yesterday, and a dime today using an extended Marcus-type deterministic parser. It is not clear, though, how his parser would treat RC sentences llke (I) where the fi~t conjunct is not a complete clause. The present work attacks the coordinate construction problem along the lines of DCG. Its coverage is wider than the existing systems: both Gapping, RNR and RC, as well as ordinary cases of coordinate sentences, are taken into consideration. The work is a major development of (Huang 83)'s CASSEX package, which in turn was based on (Boguraev 79)'s work, a system for resolving linguistic ambiguities which combined ATN grammars (Woods 73) and Preference Semantics (Wilks 75). In the first section of the paper, problems raised for Natural Language Processing by Gapping, RNR and RC are investigated. 
Section 2 gives a grouping of sentences containing coordinate conjunctions. Finally, the algorithm is described in Section 3. I GAPPING, RIGHT NODE RAISING AND REDUCED CONJUNCTION I.I Gapping Gapping is the case where the verb or the verb together with some other elements in the non-leftmost conjuncts is deleted from a sentence: (3) Bob saw Bill and Sue [saw] Mary. 243 (4) Max wants to try to begin to write a novel, and Alex [wants to try to begin to write] a play. Linguists have described rules for generating Gapping, though none of them has made any effort to formulate a rule for detecting Gapping. (Ross 67b) is the first who suggested a rule for Gapping. The formalisation of the rule is due to (Hankamer 73): Gap pl ng NP X A Z and NP X B Z --> NP X A Z and NP B where A and B are nonidentical major constituents*. (Sag 76) pointed out that there were cases where the left peripheral in the right conjunct might be a non-NP, as in (5) At our house, we play poker, and at Betsy's house, bridge. It should be noted that the two NPs in the Gapping rule must not be the same, otherwise (7) would be derived from (6): (6) Bob saw Bill and Bob saw Mary. (7) Bob saw Bill and Bob Mary. whereas people actually say (8) Bob saw Bill and Mary. When processing (8), we treat it as a simplex containing a compound object ("Bill and Mary") functioning as a unit ("unit interpretation"), although as a rule we treat sentence containing conjunction as derived from a "complex", a sentence consisting of more than one clause, in this case "Bob saw Bill and Bob saw Mary" ("sentence coordination interpretation"). The reason for analysing (8) as a simplex is first, for the purpose of translation, unit interpretation is adequate (the ambiguity, if any, will be "transferred" to the target language); secondly, it is easier to process. Another fact worth noticing is that in the above Gapping rule, B in the second conjunct could be anything, but not empty. E.g., the (a)s in the following sentences are Gapping examples, but the (b)s are not: (9) (a) Max spoke fluently, and Albert haltingly. *(b) Max spoke fluently, and Albert. (I0) (a) Max wrote a novel, and Alex a play. *(b) Max wrote a novel, and Alex. (II) (a) Bob saw Bill, and Sue Mary. (b) Bob saw Bill, and Sue. Before trying to draw a rule for detecting * According to the dependency grammar we adopt, we define a major constituent of a given sentence S as a constituent immediately dominated by the main verb of S. Gapping, we will observe the difference between (12) and (13) on one hand, and (14) on the other: (12) Bob met Sue and Mar k in London. (13) I knew the man with the telescope and the woman with the umbrella. (14) Bob met Sue in Paris and Mary in London. As we stated above, (12) is not a case of Gapping; instead, we take "Sue and Mary" as a coordinate NP. Nor is (13) a case of Gapping. (14), however, cannot be treated as phrasal coordination because the PP in the left conjunct ("in Paris") is directly dominated by the main verb so that "Mary" is prevented from being conjoined to "Sue". Now, the Gapping Detecting Rule: The structure "NPI V A X and NP2 B" where the left conjunct is a complete clause, A and B are major constituents, and X is either NIL or a constituent not dominated by A, is a case of Gapping if (OR (AND (X = NIL) (B = NP)) (AND (V = 3-valency verb)* (OR (B = NP) (B = to NP))) (AND (X /= NP) (X /= NIL)))** 1.2 Right Node Raising (RNR) RNR is the case where the object in the non- rightmost conjunct is missing. (15) John struck and kicked the boy. 
1.2 Right Node Raising (RNR)

RNR is the case where the object in the non-rightmost conjunct is missing:

(15) John struck and kicked the boy.
(16) Bob looked at and Bill took the jar.

RNR raises less serious problems than Gapping does. All we need to do is to parse the right conjunct first, then copy the object over to the left conjunct so that a representation for the left clause can be constructed. Then we combine the two to get a representation for the sentence. Sentences like the following may raise difficulty for parsing:

(17) I ate and you drank everything they brought. (cf. Church 79)

(17) can be analysed either as a complex of two full clauses, or RNR, according to whether we treat "ate" as transitive or intransitive.

1.3 Reduced Conjunction

Reduced Conjunction is the case where the conjoined surface strings are not well-formed constituents, as in

(18) John drove his car through and completely demolished a plate glass window.

where the conjoined surface strings "drove his car through" and "completely demolished" are not well-formed constituents. The problem will not be as serious as might have seemed, given our understanding of Gapping and RNR. After we process the left conjunct, we know that an object is still needed (assuming that "through" is a preposition). Then we parse the right conjunct, copying over the subject from the left; finally, we copy the object from the right conjunct to the left to complete the left clause.
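The "hole" trick that the DCG implementation uses (Section III) can be imitated outside Prolog with a mutable placeholder. This Python sketch, with invented names, shows the left conjunct of (18) being built around an unfilled object slot that the right conjunct later fills.

class Hole:                       # stands in for a Prolog logical variable
    def __init__(self):
        self.value = None
    def fill(self, value):
        assert self.value is None, 'a hole can only be filled once'
        self.value = value
    def __repr__(self):
        return repr(self.value) if self.value is not None else '_'

obj = Hole()
left = ('drove', 'John', 'his car', ('through', obj))   # object still missing
right = ('demolished', 'John', obj)                     # shares the same hole
obj.fill('a plate glass window')        # filled when the right conjunct parses
print(left)
print(right)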
III THE ALGORITHM The following algorithm, implemented in PROLOG Version 3.3 (shown here in much abridged form), produces correct syntactlco-semantic representations for all the sentences given in Section 2. We show here some of the essential clauses* of the algorithm: "sentence', "rest sentencel" and "sentence conjunction'. The top-most clause "sentence" parses sentences consisting of one or more conjuncts. In the body of "sentence', we have as sub-goals the disjunction of "noun_phrase" and 'noun phrasel', for getting the sentence subject; the disjunction of "[W], Is verb" and 'verbl', plus 'rest verb', for treating the verb of the sentence; the disjunction of 'rest sentence" and "rest sentence1" for handling The object, preposltlonaT phrases, etc; and finally "sentence conJunctlon', for handling coordinate conjunctlon~ The Gapping, RNR and RC sentences In Section II contain deletions from either left or right conjuncts or both. Deleted subjects in right conjuncts are handled by 'noun phrasel' in our program; deleted verbs in right conjuncts by 'verbl'. The most difficult deletions to handle (for previous systems) are those from the left conjuncts, ie. the deleted objects of RNR (Group BI) and the deleted preposition objects of RC (Group B2), because when the left conJuncts are being parsed, the deleted parts are not avallabl~ This is dealt with neatly in PROLOG DCG by using logical variables which stand for the deleted parts, are "holes" In the structures built, and get filled later by unification as the parsing proceeds. sentence(Stn, P Sub j, P Subj Head Noun, P Verb, P V Type, P Contentverb, P Tense, P~Ob-j, PObJH~dNoun)--> % P means "possible": P arguments only % ~ve values if "sentenCe' is called by % 'sentence_conjunctlon' to parsea second % (right) conjunct. Those values will be % carried over from the left conjunct. (noun phrase(Sub J, HeadNoun); noun phrasel (P Sub J, P SubJ Head Noun, Sub J, HeadNoun) ), % "noun_phrasel" copies over the subject % from the left conjunct. adve rblal_phrase (Adv), ([w], % W is the next lexlcal item. is_verb(W, Verb, Tense) ; % Is W a verb? verbl(P_Verb, Verb, PContentverb, Contentverb, P Tense, Tense, P_VType, VType)), "verb1" copies over the verb from the % left conjunct. * A "clause" in our DCG comprises a head (a single goal) and a body (a sequence of zero or more goal s ). 245 rest verb(Verb ,Tense,Verbl,Tensel), 'rest verb" checks whether Verb is an % auxi~ary. (rest sentence(dcl,Subj,Head Noun,Verbl, VType, Co~tentverb,Tensel ,Obj, O~j_.Head_Noun, P__ObJ, P Obj Head Noun, Indobj, S); % "rest sentence" handles all cases but RC. rest sentence I (d cl, SubJ, HeadNoun, Verb I, VType, C~ntentverb,Tensel, Obj, Obj_Head_Noun, P ObJ, P_.Obj_.Head._Noun, Indobj, S)), "rest sentencel" handles RC. sentence_.co~junctlon(S, Stn, Sub j, HeadNoun, Verbl, V_Type, Contentverb, Tensel, Obj, ObjHeadNoun ) • rest sentence I (Type, Sub j, Head_Noun, Verbl, VType, ~ontentver5, Tense, Prep ObJ,Prep ObJHead Noun, P_Obj, P ObJ Head Noun, Indobj, s(type(Type), tense(Tense), v(Verb sense, agent(Subj), object(Obj), pos t--ve rb_ mods(prep(Prep), pre~obj(Prep_Obj)))Y --> % Here Prep ObJ is a logical variable which %will be Instantlated later when the % right conjunct has been parsed. 
    {verb_type(Verb, V_Type)},
    complement(V_Type, Verb, Contentverb, Subj, Head_Noun,
               Obj, Obj_Head_Noun, P_Obj, P_Obj_Head_Noun,
               v(Verb_sense, agent(Subj), object(Obj),
                 post_verb_mods(prep(W), prep_obj(Prep_Obj)))),
    % The sentence object is processed and the verb structure
    % built here.
    [W], {preposition(W)}.

sentence_conjunction(S, s(conj(W), S, Sconj), Subj, Head_Noun,
                     Verb1, V_Type, Verb2, Tense, Obj,
                     Obj_Head_Noun) -->
    ([','], [W]; [W]), {conj(W)},
    % Checks whether W is a conjunction.
    sentence(Sconj, Subj, Head_Noun, Verb1, V_Type, Verb2,
             Tense, Obj, Obj_Head_Noun).
    % 'sentence' is called recursively to parse right conjuncts.

sentence_conjunction(S, S, _, _, _, _, _, _, _, _) --> [].
    % Boundary condition.

For sentence (36) ("John drove his car through and completely demolished a plate glass window"), for instance, when parsing the left conjunct, 'rest_sentence1' will be called eventually. The following verb structure will be built:

v(drove1, agent(np(pronoun(John))),
  object(np(det(his), pre_mods([]), n(car1), post_mods([]))),
  post_verb_mods(prep_mods(prep(through), prep_obj(Prep_Obj))))

where the logical variable Prep_Obj will be unified later with the argument standing for the object in the right conjunct (i.e. "a plate glass window"). When 'sentence' is called via the sub-goal 'sentence_conjunction' to process the right conjunct, the deleted subject "John" will be copied over via 'noun_phrase1'. Finally a structure is built which is a combination of two complete clauses.

During the processing little effort is wasted. The backward deleted constituents ("a plate glass window" here) are recovered by using logical variables; the forward deleted ones ("John" here) by passing over values (via unification) from the conjunct already processed. Moreover, the "try-and-fail" procedure is carried out in a controlled and intelligent way. Thus a high efficiency lacking in many other systems is achieved (space prevents us from providing a detailed discussion of this issue here).

ACKNOWLEDGEMENTS

I would like to thank Y. Wilks, D. Arnold, D. Fass and C. Grover for their comments and instructive discussions. Any errors are mine.

BIBLIOGRAPHY

Berwick, R. C. (1983) "A deterministic parser with broad coverage." Bundy, A. (ed), Proceedings of IJCAI 83, William Kaufman, Inc.
Boguraev, B. K. (1979) Automatic Resolution of Linguistic Ambiguities. Technical Report No. 11, University of Cambridge Computer Laboratory, Cambridge.
Boguraev, B. K. (1983) "Recognising conjunctions within the ATN framework." Sparck-Jones, K. and Wilks, Y. (eds), Automatic Natural Language Parsing, Ellis Horwood.
Church, K. W. (1980) On Memory Limitations in Natural Language Processing. MIT. Reproduced by Indiana Univ. Ling. Club, Bloomington, 1982.
Gazdar, G. (1981) "Unbounded dependencies and coordinate structure," Linguistic Inquiry, 12: 155-184.
Hankamer, J. (1973) "Unacceptable ambiguity," Linguistic Inquiry, 4: 17-68.
Huang, X-M. (1983) "Dealing with conjunctions in a machine translation environment," Proceedings of the Association for Computational Linguistics European Chapter Meeting, Pisa.
McCord, M. C. (1980) "Slot grammars," American Journal of Computational Linguistics, 6:1, 31-43.
Pereira, F. & Warren, D. (1980) "Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks," Artificial Intelligence, 13: 231-278.
Ross, J. R. (1967a) Constraints on Variables in Syntax. Doctoral Dissertation, MIT, Cambridge, Massachusetts.
Reproduced by Indiana Univ. Ling. Club, Bloomington, 1968.
Ross, J. R. (1967b) "Gapping and the order of constituents," Indiana Univ. Ling. Club, Bloomington. Also in Bierwisch, M. and K. Heidolph (eds), Recent Developments in Linguistics, Mouton, The Hague, 1971.
Sag, I. A. (1976) Deletion and Logical Form. Ph.D. thesis, MIT, Cambridge, Mass.
Schachter, P. (1977) "Constraints on coordination," Language, 53: 86-103.
Sobin, N. (1982) "On gapping and discontinuous constituent structure," Linguistics, 20: 727-745.
Wilks, Y. A. (1975) "Preference Semantics," Keenan (ed), Formal Semantics of Natural Language, Cambridge Univ. Press, London.
Woods, W. A. (1973) "An experimental parsing system for Transition Network Grammars," Rustin, R. (ed), Natural Language Processing, Algorithmics Press, N.Y.
1984
53
ON PARSING PREFERENCES

Lenhart K. Schubert
Department of Computing Science
University of Alberta, Edmonton

Abstract. It is argued that syntactic preference principles such as Right Association and Minimal Attachment are unsatisfactory as usually formulated. Among the difficulties are: (1) dependence on ill-specified or implausible principles of parser operation; (2) dependence on questionable assumptions about syntax; (3) lack of provision, even in principle, for integration with semantic and pragmatic preference principles; and (4) apparent counterexamples, even when discounting (1)-(3). A possible approach to a solution is sketched.

1. Some preference principles

The following are some standard kinds of sentences illustrating the role of syntactic preferences.

(1) John bought the book which I had selected for Mary
(2) John promised to visit frequently
(3) The girl in the chair with the spindly legs looks bored
(4) John carried the groceries for Mary
(5) She wanted the dress on that rack
(6) The horse raced past the barn fell
(7) The boy got fat melted

(1)-(3) illustrate Right Association of PP's and adverbs, i.e., the preferred association of these modifiers with the rightmost verb (phrase) or noun (phrase) they can modify (Kimball 1973). Some variants of Right Association (also characterized as Late Closure or Low Attachment) which have been proposed are Final Arguments (Ford et al. 1982) and Shifting Preference (Shieber 1983); the former is roughly Late Closure restricted to the last obligatory constituent and any following optional constituents of verb phrases, while the latter is Late Closure within the context of an LR(1) shift-reduce parser.

Regarding (4), it would seem that according to Right Association the PP for Mary should be preferred as postmodifier of groceries rather than carried; yet the opposite is the case. Frazier & Fodor's (1979) explanation is based on the assumed phrase structure rules VP -> V NP PP, and NP -> NP PP: attachment of the PP into the VP minimizes the resultant number of nodes. This principle of Minimal Attachment is assumed to take precedence over Right Association. Ford et al.'s (1982) variant is Invoked Attachment, and Shieber's (1983) variant is Maximal Reduction; roughly speaking, the former amounts to early closure of non-final constituents, while the latter chooses the longest reduction among those possible reductions whose initial constituent is "strongest" (e.g., reducing V NP PP to VP is preferred to reducing NP PP to NP).

In (5), Minimal Attachment would predict association of the PP on that rack with wanted, while the actual preference is for association with dress. Both Ford et al. and Shieber account for this fact by appeal to lexical preferences: for Ford et al., the strongest form of want takes an NP complement only, so that Final Arguments prevails; for Shieber, the NP the dress is stronger than wanted, viewed as a V requiring NP and PP complements, so that the shorter reduction prevails.

Sentence (6) leads most people "down the garden path", a fact explainable in terms of Minimal Attachment or its variants. The explanation also works for (7) (in the case of Ford et al. with appeal to the additional principle that re-analysis of complete phrases requiring re-categorization of lexical constituents is not possible). Purportedly, this is an advantage over Marcus' (1980) parsing model, whose three-phrase buffer should allow trouble-free parsing of (7).
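To make the node-counting intuition behind Minimal Attachment concrete, the following sketch (a hypothetical Python illustration; the tuple encodings of the two analyses of (4) are our own simplification, not the grammar assumed by any of the models above) counts the nodes of each candidate analysis and prefers the smaller tree:

    def count_nodes(tree):
        # A tree is either a lexical leaf (a string) or (label, child, ...).
        if isinstance(tree, str):
            return 1
        label, *children = tree
        return 1 + sum(count_nodes(c) for c in children)

    # Two analyses of "carried the groceries for Mary" (sentence (4)):
    vp_attach = ("VP", "carried", ("NP", "the groceries"), ("PP", "for Mary"))
    np_attach = ("VP", "carried",
                 ("NP", ("NP", "the groceries"), ("PP", "for Mary")))

    # count_nodes(vp_attach) = 6 and count_nodes(np_attach) = 7,
    # so Minimal Attachment selects the VP attachment.
    preferred = min([vp_attach, np_attach], key=count_nodes)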
2. Problems with the preference principles

2.1 Dependence on ill-specified or implausible principles of parser operation.

Frazier & Fodor's (1979) model does not completely specify what structures are built as each new word is accommodated. Consequently it is hard to tell exactly what the effects of their preference principles are. Shieber's (1983) shift-reduce parser is well-defined. However, it postulates complete phrases only, whereas human parsing appears to involve integration of completely analyzed phrases into larger, incomplete phrases. Consider for example the following sentence beginnings:

(8) So I says to the ...
(9) The man reconciled herself to the ...
(10) The news announced on the ...
(11) The reporter announced on the ...
(12) John beat a rather hasty and undignified ...

People presented with complete, spoken sentences beginning like (8) and (9) are able to signal detection of the errors about two or three syllables after their occurrence. Thus agreement features appear to propagate upward from incomplete constituents. (10) and (11) suggest that even semantic features (logical translations?) are propagated before phrase completion. The "premature" recognition of the idiom in (12) provides further evidence for early integration of partial structures.

These considerations appear to favour a "full-paths" parser which integrates each successive word (in possibly more ways than one) into a comprehensive parse tree (with overlaid alternatives) spanning all of the text processed. Ford et al.'s (1982) parser does develop complete top-down paths, but the nodes on these paths dominate no text. Nodes postulated bottom-up extend only one level above complete nodes.

2.2 Dependence on questionable assumptions about syntax

The successful prediction of observed preferences in (4) depended on an assumption that PP postmodifiers are added to carried via the rule VP -> V NP PP and to groceries via the rule NP -> NP PP. However, these rules fail to do justice to certain systematic similarities between verb phrases and noun phrases, evident in such pairs as

(13) John loudly quarreled with Mary in the kitchen
(14) John's loud quarrel with Mary in the kitchen

When the analyses are aligned by postulating two levels of postmodification for both verbs and nouns, the accounts of many examples that supposedly involve Minimal Attachment (or Maximal Reduction) are spoiled. These include (4) as well as standard examples involving non-preferred relative clauses, such as

(15) John told the girl that he loved the story
(16) Is the block sitting in the box?

2.3 Lack of provision for integration with semantic/pragmatic preference principles

Right Association and Minimal Attachment (and their variants) are typically presented as principles which prescribe particular parser choices. As such, they are simply wrong, since the choices often do not coincide with human choices for text which is semantically or pragmatically biased. For example, there are conceivable contexts in which the PP in (4) associates with the verb, or in which (7) is trouble-free. (For the latter, imagine a story in which a young worker in a shortening factory toils long hours melting down hog fat in clarifying vats.) Indeed, even isolated sentences demonstrate the effect of semantics:

(17) John met the girl that he married at a dance
(18) John saw the bird with the yellow wings
(19) She wanted the gun on her night table
(20) This lens gets light focused

These sentences should be contrasted with (1), (4), (5), and (7) respectively.
While the reversal of choices by semantic and pragmatic factors is regularly acknowledged, these factors are rarely assigned any explicit role in the theory (however, see Crain & Steedman 1981). Two views that seem to underlie some discussions of this issue are (a) that syntactic preferences are "defaults" that come into effect only in the absence of semantic/pragmatic preferences; or (b) that alternatives are tried in order of syntactic preference, with semantic tests serving to reject incoherent combinations. Evidence against both positions is found in sentences in which syntactic preferences prevail over much more coherent alternatives:

(21) Mary saw the man who had lived with her while on maternity leave.
(22) John met the tall, slim, auburn-haired girl from Montreal that he married at a dance
(23) John was named after his twin sister

What we apparently need is not hard and fast decision rules, but some way of trading off syntactic and non-syntactic preferences of various strengths against each other.

2.4 Apparent counterexamples.

There appear to be straightforward counterexamples to the syntactic preference principles which have been proposed, even if we discount evidence for integration of incomplete structures, accept the syntactic assumptions made, and restrict ourselves to cases where none of the alternatives show any semantic anomaly. The following are apparent counterexamples to Right Association (and Shifting Preference, etc.):

(24) John stopped speaking frequently
(25) John discussed the girl that he met with his mother
(26) John was alarmed by the disappearance of the administrator from head office
(27) The deranged inventor announced that he had perfected his design of a clip car shoe (shoe car clip, clip shoe car, shoe clip car, etc.)
(28) Lee and Kim or Sandy departed
(29) a. John removed all of the fat and some of the bones from the roast
     b. John removed all of the fat and sinewy pieces of meat

The point of (24)-(26) should be clear. (27) and (28) show the lack of right-associative tendencies in compound nouns and coordinated phrases. (29a) illustrates the non-occurrence of a garden path predicted by Right Association (at least by Shieber's version); note the possible adjectival reading of fat and ..., as illustrated in (29b).

The following are apparent counterexamples to Minimal Attachment (or Maximal Reduction):

(30) John abandoned the attempt to please Mary
(31) Kim overheard John and Mary's quarrel with Sue
(32) John carried the umbrella, the transistor radio, the bundle of old magazines, and the groceries for Mary
(33) The boy got fat spattered on his arm

While the account of (30) and (31) can be rescued by distinguishing subcategorized and non-subcategorized noun postmodifiers, such a move would lead to the failures already mentioned in section 2.2. Ford et al. (1982) would have no trouble with (30) or (31), but they, too, pay a price: they would erroneously predict association of the PP with the object NP in

(34) Sue had difficulties with the teachers
(35) Sue wanted the dress for Mary
(36) Sue returned the dress for Mary

(32) is the sort of example which motivated Frazier & Fodor's (1979) Local Attachment principle, but their parsing model remains too sketchy for the implications of the principle to be clear. Concerning (33), a small-scale experiment indicates that this is not a garden path. This result appears to invalidate the accounts of (7) based on irreversible closure at fat.
Moreover, the difference between (7) and (33) cannot be explained in terms of one-word lookahead, since a further experiment has indicated that

(37) The boy got fat spattered.

is quite as difficult to understand as (7).

3. Towards an account of preference trade-offs

My main objective has been to point out deficiencies in current theories of parsing preferences, and hence to spur their revision. I conclude with my own rather speculative proposals, which represent work in progress.

In summary, the proposed model involves (1) a full-paths parser that schedules tree pruning decisions so as to limit the number of ambiguous constituents to three; and (2) a system of numerical "potentials" as a way of implementing preference trade-offs. These potentials (or "levels of activation") are assigned to nodes as a function of their syntactic/semantic/pragmatic structure, and the preferred structures are those which lead to a globally high potential. The total potential of a node consists of (a) a negative rule potential, (b) a positive semantic potential, (c) positive expectation potentials contributed by all daughters following the head (where these decay with distance from the head lexeme), and (d) transmitted potentials passed on from the daughters to the mother.

I have already argued for a full-paths approach in which not only complete phrases but also all incomplete phrases are fully integrated into (overlaid) parse trees dominating all of the text seen so far. Thus features and partial logical translations can be propagated and checked for consistency as early as possible, and alternatives chosen or discarded on the basis of all of the available information.

The rule potential is a negative increment contributed by a phrase structure rule to any node which instantiates that rule. Rule potentials lead to a minimal-attachment tendency: they "inhibit" the use of rules, so that a parse tree using few rules will generally be preferred to one using many. Lexical preferences can be captured by making the rule potential more negative for the more unusual rules (e.g., for N --> fat, and for V --> time).

Each "expected" daughter of a node which follows the node's head lexeme contributes a non-negative expectation potential to the total potential of the node. The expectation potential contributed by a daughter is maximal if the daughter immediately follows the mother's head lexeme, and decreases as the distance (in words) of the daughter from the head lexeme increases. The decay of expectation potentials with distance evidently results in a right-associative tendency.

The maximal expectation potentials of the daughters of a node are fixed parameters of the rule instantiated by the node. They can be thought of as encoding the "affinity" of the head daughter for the remaining constituents, with "strongly expected" constituents having relatively large expectation potentials. For example, I would assume that verbs have a generally stronger affinity for (certain kinds of) PP adjuncts than do nouns. This assumption can explain PP-association with the verb in examples like (4), even if the rules governing verb and noun postmodification are taken to be structurally analogous. Similarly the scheme allows for counterexamples to Right Association like (24), where the affinity of the first verb (stop) for the frequency adverbial may be assumed to be sufficiently great compared to that of the second (speak) to overpower a weak right-associative effect resulting from the decay of expectation potentials with distance.
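The way these contributions might combine can be sketched as follows (a hypothetical Python illustration: the exponential decay function and all numeric values are invented for exposition, since the model deliberately leaves such parameters open):

    def node_potential(rule_potential, semantic_potential,
                       expectations, daughter_potentials, decay=0.8):
        # expectations: (maximal_expectation, distance_in_words) pairs for
        # the daughters following the head lexeme; the contribution decays
        # with distance, yielding the right-associative tendency.
        expectation = sum(e * decay ** d for e, d in expectations)
        # transmitted potential: the sum of the daughters' own potentials.
        transmitted = sum(daughter_potentials)
        return -rule_potential + semantic_potential + expectation + transmitted

    # "carried the groceries for Mary": the verb's strong affinity for the
    # PP (3.0, decayed over two intervening words) still beats the noun's
    # weaker affinity (1.5, with the PP adjacent), so the PP associates
    # with the verb.
    verb_attach = node_potential(1.0, 2.0, [(3.0, 2)], [])   # = 2.92
    noun_attach = node_potential(1.0, 2.0, [(1.5, 0)], [])   # = 2.5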
I suggest that the effect of semantics and pragmatics can in principle be captured through a semantic potential contributed to each node potential by semantic/pragmatic processing of the node. The semantic potential of a terminal node (i.e., a lexical node with a particular choice of word sense for the word it dominates) is high to the extent that the associated word sense refers to a familiar (highly consolidated) and contextually salient concept (entity, predicate, or function). For example, a noun node dominating star, with a translation expressing the astronomical sense of the word, presumably has a higher semantic potential than a similar node for the show-business sense of the word, when an astronomical context (but no show-business context) has been established; and vice versa. Possibly a spreading activation mechanism could account for the context-dependent part of the semantic potential (cf. Quillian 1968, Collins & Loftus 1975, Charniak 1983).

The semantic potential of a nonterminal node is high to the extent that its logical translation (obtained by suitably combining the logical translations of the daughters) is easily transformed and elaborated into a description of a familiar and contextually relevant kind of object or situation. (My assumption is that an unambiguous meaning representation of a phrase is computed on the basis of its initial logical form by context-dependent pragmatic processes; see Schubert & Pelletier 1982.) For example, the sentences Time flies, The years pass swiftly, The minutes creep by, etc., are instances of the familiar pattern of predication <predicate of locomotion>(<time term>), and as such are easily transformable into certain commonplace (and unambiguous) assertions about one's personal sense of progression through time. Thus they are likely to be assigned high semantic potentials, and so will not easily admit any alternative analysis. Similarly the phrases met [someone] at a dance (versus married [someone] at a dance) in sentence (17), and bird with the yellow wings (versus saw [something] with the yellow wings) in (18) are easily interpreted as descriptions of familiar kinds of objects and situations, and as such contribute semantic potentials that help to edge out competing analyses.

Crain & Steedman's (1981) very interesting suggestion that readings with few new presuppositions are preferred has a possible place in the proposed scheme: the mapping from logical form to unambiguous meaning representation may often be relatively simple when few presuppositions need to be added to the context. However, their more general plausibility principle appears to fail for examples like (21)-(23).

Note that the above pattern of temporal predication may well be considered to violate a selectional restriction, in that predicates of locomotion cannot literally apply to times. Thus the nodes with the highest semantic potential are not necessarily those conforming most fully with selectional restrictions. This leads to some departures from Wilks' theory of semantic preferences (e.g., 1976), although I suppose that normally the most easily interpretable nodes, and hence those with the highest semantic potential, are indeed the ones that conform with selectional restrictions.

The difference between such pairs of sentences as (17) and (22) can now be explained in terms of semantic/syntactic potential trade-offs. In both sentences the semantic potential of the reading which associates the PP with the first verb is relatively high.
However, only in (17) is the PP close enough to the first verb for this effect to overpower the right-associative tendency inherent in the decay of expectation potentials.

The final contribution to the potential of a node is the transmitted potential, i.e., the sum of potentials of the daughters. Thus the total potential at a node reflects the syntactic/semantic/pragmatic properties of the entire tree it dominates.

A crucial question that remains concerns the scheduling of decisions to discard globally weak hypotheses. Examples like (33) have convinced me that Marcus (1980) was essentially correct in positing a three-phrase limit on successive ambiguous constituents. (In the context of a full-paths parser, ambiguous constituents can be defined in terms of "upward or-forks" in phrase structure trees.) Thus I propose to discard the globally weakest alternative at the latest when it is not possible to proceed rightward without creating a fourth ambiguous constituent. Very weak alternatives (relative to the others) may be discarded earlier, and this assumption can account for early disambiguation in cases like (10) and (11).

Although these proposals are not fully worked out (especially with regard to the definition of semantic potential), preliminary investigation suggests that they can do justice to examples like (1)-(37). Schubert & Pelletier 1982 briefly described a full-paths parser which chains upward from the current word to current "expectations" by "left-corner stack-ups" of rules. However, this parser searched alternatives by backtracking only and did not handle gaps or coordination. A new version designed to handle most aspects of Generalized Phrase Structure Grammar (see Gazdar et al., to appear) is currently being implemented.

Acknowledgements

I thank my unpaid informants who patiently answered strange questions about strange sentences. I have also benefited from discussions with members of the Logical Grammar Study Group at the University of Alberta, especially Matthew Dryer, who suggested some relevant references. The research was supported by the Natural Sciences and Engineering Research Council of Canada under Operating Grant A8818.

References

Charniak, E. (1983). Passing markers: a theory of contextual influence in language comprehension. Cognitive Science 7, pp. 171-190.
Collins, A. M. & Loftus, E. F. (1975). A spreading activation theory of semantic processing. Psychological Review 82, pp. 407-428.
Crain, S. & Steedman, M. (1981). The use of context by the Psychological Parser. Paper presented at the Symposium on Modelling Human Parsing Strategies, Center for Cognitive Science, Univ. of Texas, Austin.
Ford, M., Bresnan, J. & Kaplan, R. (1981). A competence-based theory of syntactic closure. In Bresnan, J. (ed.), The Mental Representation of Grammatical Relations, MIT Press, Cambridge, MA.
Frazier, L. & Fodor, J. (1979). The Sausage Machine: a new two-stage parsing model. Cognition 6, pp. 191-325.
Gazdar, G., Klein, E., Pullum, G. K. & Sag, I. A. (to appear). Generalized Phrase Structure Grammar: A Study in English Syntax.
Kimball, J. (1973). Seven principles of surface structure parsing in natural language. Cognition 2, pp. 15-47.
Marcus, M. (1980). A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge, MA.
Quillian, M. R. (1968). Semantic memory. In Minsky, M. (ed.), Semantic Information Processing, MIT Press, Cambridge, MA, pp. 227-270.
Schubert, L. K. & Pelletier, F. J. (1982).
From English to logic: context-free computation of 'conventional' logical translations. Am. J. of Computational Linguistics 8, pp. 26-44.
Shieber, S. M. (1983). Sentence disambiguation by a shift-reduce parsing technique. Proc. 8th Int. Conf. on Artificial Intelligence, Aug. 8-12, Karlsruhe, W. Germany, pp. 699-703. Also in Proc. of the 21st Ann. Meet. of the Assoc. for Computational Linguistics, June 15-17, MIT, Cambridge, MA, pp. 113-118.
Wilks, Y. (1976). Parsing English II. In Charniak, E. & Wilks, Y. (eds.), Computational Semantics, North-Holland, Amsterdam, pp. 155-184.
1984
54
A COMPUTATIONAL THEORY OF THE FUNCTION OF CLUE WORDS IN ARGUMENT UNDERSTANDING

Robin Cohen
Department of Computer Science
University of Toronto
Toronto, CANADA M5S 1A4

ABSTRACT

This paper examines the use of clue words in argument dialogues. These are special words and phrases directly indicating the structure of the argument to the hearer. Two main conclusions are drawn: 1) clue words can occur in conjunction with coherent transmissions, to reduce processing of the hearer 2) clue words must occur with more complex forms of transmission, to facilitate recognition of the argument structure. Interpretation rules to process clues are proposed. In addition, a relationship between use of clues and complexity of processing is suggested for the case of exceptional transmission strategies.

I Overview

In argument dialogues, one often encounters words which serve to indicate overall structure - phrases that link individual propositions to form one coherent presentation. Other researchers in language understanding have acknowledged the existence of these "clue words". Birnbaum [Birnbaum 82] states that in order to recognize argument structures it would be useful to identify typical signals of each form.

In [Cohen 83] we develop a computational model for argument analysis. The setting is a dialogue where the speaker tries to convince the hearer of a particular point of view; as a first step, the hearer tries to construct a representation for the structure of the argument, indicating the underlying claim and evidence relations between propositions. Within this framework, a theory of linguistic clues is developed which categorizes the function of different phrases, presenting interpretation rules. What we have done is develop a model for argument analysis which is sufficiently well-defined in terms of algorithms, with measurable complexity, to allow convenient study of the effect of clue words on processing. Two important observations are made: (1) clue words cut processing of the hearer in recognizing coherent transmissions (2) clue words are used to allow the recognition of transmissions which would be incoherent (too complex to reconstruct) in the absence of clues.

Considering arguments as goal-oriented dialogues, the use of clue words by the speaker can be construed as attempts to facilitate the hearer's plan reconstruction process. Thus, there exist words and even entire statements with the sole function of indicating structure (vs. content) in the argument. The importance of structure to argument understanding is first of all a by-product of our imposed pragmatic approach to analysis. To understand the argument intended by the speaker, the hearer must determine, for each proposition uttered, both where it fits with respect to the dialogue so far and how, in particular, it relates to some prior statement. In addition, it is precisely the expected form of arguments which can be used to control the analysis (since content can't be stereotyped as in the case of stories). It is this importance of form which necessitates clue words and presents the research problem of specifying their function precisely.

II Background

To understand the role of clue words in facilitating analysis, some detail on the overall argument understanding model is required. (For further reference, see [Cohen 80], [Cohen 81], [Cohen 83]). Each proposition of the argument is analyzed, in turn, with respect to the argument so far.
A proposition is interpreted by determining the claim and evidence relations it shares with the rest of the argument's propositions. Leaving the verification of evidence to an oracle, the main analysis task is determining where a current proposition fits. To understand the examples introduced in this paper, it is useful to present the starting definition of evidence, as used in the model. A proposition P is evidence for a proposition Q if there is some rule of inference such that P is premise to Q's conclusion. The rule most often observed is modus ponens, with missing major premise - i.e. P, Q are given and one must fill P --> Q to recognize the support intended from P to Q. More detail on the definition of evidence is presented in [Cohen 83].

Determining an interpretation for a proposition is restricted to a computationally reasonable task by characterizing possible coherent transmission
Many arguments which re-address earlier claims assist the hearer by specifically including a clue of re-direction as in EX2 below. EX2: 1)The city is a mess 2)The parks are a disaster 3)The playground area is all run down 4)The swings are broken 5)The sandboxes are dirty 6)Returning to city problems, the highway system needs revamping Here, the search up the right border of the tree (from 5, 3, 2 to I) for a possible claim to the current proposition b is cut short and the correct father (I) indicated directly. One can hypothesize a general reduction on processing complexity from linear to real-time, if clues are consistently used by the speaker to re-direct the hearer with chains that are sufficiently long. Connectives are another type of clue word, used extensively. Hobbs ([Hobbs 76J) attempts a characterization with respect to his coherence relations for a couple of words. Reichman ([Reichman 81]) associates certain expressions with particular conversational moves, but there is no unified attempt at classification. We develop a taxonomy so that clues of the same semantic function are grouped to assign one interpretation rule for the dominated proposition within the claim and evidence framework. Consider the following example: EX3: 1)The city needs help 2)All the roads are ruined 3)The buildings are crumbling 4)As a result, we are asking for federal support with the representation: 2/I ~ 3 The connective in 4, "as a result", suggests that some prior proposition connects to 4 and that this proposition acts as evidence for 4. 'lhe relation of the prior proposition is set out b.elow according the the interpretation rule for the category that "as a result" belongs to in the taxonomy. The particular evidence connection advocated here is of the form: "If our city needs help, then we will ask for federal aid". [Note: Whether I is evidence for 4 is tested by trying a modus ponens major premise of the form: "(For all cities) if a 252 city needs help, then it can ask for federal aid", and then using "our city" as the specific case]. The taxonomy (drawn from [Quirk 72]) is intended to cover the class of connectives and presents default interpretation rules. (P indicates prior proposition; S has the clue) CATEGORY RELATION:P to S EXAMPLE parallel brother in addition detail father in particular inference son as a result summary multiple sons in SL~n reformulation father and son in other words contrast father or brother conversely Note that the classification of connectives provides a reduction in processing for the hearer. For example, in EX3 with a casual connective, the analysis for the proposition 4 is restricted to a search for a son. In short, connective interpretation rules help specify the type of relation between propositions; re-direction clues help determine which prior proposition is related to the current one. All together, clue words function to reduce overall processing operations. See Appendix II for more examples of relations of the taxonomy. IV Clues to support complex transmissions (Necessity) C%ue words also exist in conjunction with transmissions which violate the constraints of the hybrid model of expected coherent structure. The claim is that clues provide a necessary reduction in complexity, to enable the hearer to recognize the intended structure. 
Consider the following examples:

EX4:
1) The city is a mess
2) The parks are run down
3) The highways need revamping
4) The buildings are crumbling
5) The sandbox area is a mess

EX5:
1) The city is a mess
2) The parks are run down
3) The highways need revamping
4) The buildings are crumbling
5) With regard to parks, the sandboxes are a mess
6) As for the highways, the gravel is shot
7) And as for the buildings, the bricks are rotting

The initial tree for the argument is as follows:

[Tree: 1 dominates 2, 3, and 4.]

In EX4, the last proposition cannot be interpreted as desired; the probable intended father proposition (2) is not an eligible candidate to relate to the current proposition (5) according to the hybrid specifications. In EX5, however, a parallel construction is specifically indicated through clue words, so that the connections can be recognized by the hearer and the appropriate representation constructed as below:

[Tree: 1 dominates 2, 3, and 4; 5 attaches below 2, 6 below 3, and 7 below 4.]

It now becomes important to provide a framework for accommodating "extended" transmission strategies in the model. First, the complexity of processing without clues is a good measure for determining whether a strategy should be considered exceptional. Then, to be acceptable in the model the proposed transmission must have some characterizable algorithm - i.e. still reflect a coherent plan of the speaker. Further, exceptional transmission strategies must be clearly marked by the speaker, using clues, in cases where the transmission can be assigned an alternate reading according to the basic processing strategy. The hearer should be expected to expend the minimum computational effort, so that the onus is on the speaker to make exceptional readings explicit.

In brief, we propose developing a "clue interpretation module" for the analysis model, which would be called by the basic proposition analyzer to handle extended transmissions in the presence of clues. Then, complexity of processing should be used as a guide for determining the preferred analysis. To illustrate, consider another acceptable extended transmission strategy - mixed-mode sub-arguments, where evidence both precedes and follows a claim.

EX6:
1) The grass is rotting
2) The roads are dusty
3) The city is a mess
4) In particular, the parks are a ruin

Preferred rep: [tree: 3 dominates 1, 2, and 4]
Other possible rep: [tree: 3 dominates 4; 4 dominates 1 and 2]

Here, it is preferable to keep 1 and 2 as evidence for 3, because this requires less computational effort than the re-attachment of sons which takes place to construct the other possible representation. In other words, computational effort is a good guide for the specification of processing strategies. Finally, it is worth noting that the specific clue word used may influence the processing for these extended transmissions. In EX6, if the last proposition (4) was introduced by the clue word "in addition", then the alternate tree would not be an eligible reading. This is because "in addition" forces 4 to find a brother among the earlier propositions, according to the interpretation rule for the "parallel" class of the taxonomy of
We demand clue words to facilitate the analysis and we begin to suggest how to accommodate uses of these exceptional cases in the overall analysis model. V Related Topics A. Nature of clues The exact specification of a clue is a topic for further research. Since it is hypothesized that clues are necessary to admit exceptional transmissions, what constitutes a clue is a key issue. Within Quirk's classification of connectives ([Quirk 72]) both special words and connecting phrases ("integrated markers") are possible. For example, one may say "in conclusion" or "I will conc].ude by saying". Quirk also discusses several mechanisms for indicating connectives which need to be examined more closely as candidates for clue words. These comstructions are all "indirect" indications. a) lexical equivalence: This includes the case where synonyms are used to suggest a connection to a previous clause. For example: "The monkey learned to use a tractor. By age 9, he could work solo on the vehicle." In searching for evidence relations, the hearer may faciltate his analysis by recognizing this type of connective device. But it unclear that the construction should be considered an additional "clue". b) substitution, reference, comparison, ellipsis: Here, the "abbreviated" nature of the constructions may be significant enough to provide an extra signal to the hearer. For now, we do not consider these devices as clues, but examine the relations between the use of anaphors and clues in the next section. Even w!thin the classification of connectives, there is a question of level of explicitness of the clues. Consider the example: EX7: 1)The city is dangerous 2a)I will now tel! you why this is so 2b)The reason for the danger is... 2c)The reason is... 2d)The problem is ... 2a) is an explicit indication of evidence; b) and c) have a phrase indicating a causal connection, but c) requires a kind of referent resolution as well; d) requires recognizing "the problem" as an indication of cause. The problem addressed in this example is similar to the one faced by Allen ([Allen 79]): handling a variety of surface forms which all convey the same intention. In our case, the "intention" is that one proposition act as evidence for another. Finally, there are different kinds of special phrases used to influence the credibilty of the hearer: I) attitudinal expressions reflecting the speaker's beliefs and 2) expressions of emphasis. Since our model focuses on the first step in processing of recognizing structural connections, these clues have not be examined more closely. However, examples of these expressions are listed in Appendix III, along with phrases indicating structure. B. Relation to reference resolution and focus There are some important similarities between our approach to reconstructing argument structure and the problem of representing focus for referent resolution addressed in [Sidner 79] and [Grosz 77J. For both tasks, a particular kind of semantic re]ation between parts of a dialogue must be found and verified. In both cases, a hierarchical representation is constructed to hold structural information and is searched in some restricted fashion. Orosz's hierarchical model of focus spaces, with visibility constraints imposed by the task domain, is maintained in a fashion similar to our tree model. Information on which of the focus spaces is "active" and which are "open" (possible to shift to) is kept; open spaces are determined by the active space and the visibilty constraints. 
Analysis for a problem such as resolving definite noun phrase referents can be limited by choosing only those items "in focus". In [Sidner 79] focus is introduced to determine eligible candidates for a co-specification. But the ultimate choice rests with verification by the hearer, using inferencing, that the focus element relates to the anaphor. This is parallel to our approach of narrowing the search for a proposition's intepretation, but requiring testing of possible relations in order to establish the desired link. To set the focus, Sidner suggests either: I) using special words to signal the hearer or 2) relying on shared knowledge to establish an unstated connection. This is analogous to our cases of processing with and without clues. In Sidner's theory there is also a clear distinction between returning to an element previously in focus (one from the focus stack) or choosing a completely "new topic" from prior elements (using the alternate focus list). We distinguish returning to some ancestor of the last proposition (a choice of eligible proposition) from the case of re-addressing a "closed" proposition. 254 In this latter case, we require a clue word to re-direct. What we have tried to do is clearly separate eligible relatives from exceptional cases and connect the required use of clues to the exceptional category. Grosz and Sidner both allow "focus shifts" and Sidner explicitly discusses uses of "special phrases", but we have tried to study the connections between clues and exceptions more closely. Finally, it is worth noting that the problem of reference resolution is similar to that of evidence determination, but still distinct. In the example below, constraints suggested by referent resolution theories should not be violated by our restricted processing suggestions: Exa: 1)The city is a mess 2)The park is ruined 3)The highway is run down 4)Every 3 miles, you find a pothole in it In 4, "it" is resolved as referring to "the highway" in 3; this proposition is eligible and the closer connection is preferred. But clue interpretation is not equivalent to referent resolution. The clue "for example" may be expressed as "one example for this is" but could also be presented as "one example for this problem is". Since the search for a referent may differ according to the surface form ([Sidner 79]) there is no clear mapping from processing propositions with clues to those with referents. For our model, surface form may vary widely, but the search is restricted according to interpretation rules for a taxonomy - according to the semantics of the clue - and the solution is dictated by the structure of the argument so far. C. Necessity in the base case The main points raised in this paper are that clues can be used with a basic transmission strategy to cut processing and must be used in more complex transmissions. The question of whether certain basic transmissions still require clues is worth investigating further. In particular, it has been suggested (personal communication with psychologists) that deep stacks require clues to remind the hearer, due to "space" limitations. It may be productive to examine the computational properties of this situation more closely. Further, clues are often used to delineate sub-arguments when shifting topics. Again, some memory limitations for the hearer may be in effect here. VI Conclusion In conclusion, this paper outlines one crucial component of the computational model for argument analysis described in [Cohen 83]. 
It presents a first attempt at a solid framework for clue interpretation within argument understanding. The approach of studying goal-based dialogue and structure reconstruction also allows us to comment on the function of clue words within analysis. The theory of clue interpretation gives insight into a known construction within sample dialogues; examining the computational properties provides a framework for design of the analysis model. It is important to note that there has been no effort to date to study the use of clue words extensively, distinguishing cases where they occur and suggesting when clues are necessary.

The clue theory presented here also has possible implications for other application areas. For example, in resolving referents Sidner ([Sidner 79]) has suggested that clues will occur whenever the alternate focus list is consulted, beyond the focus stack default. Our claim is that the necessity for clues is closely tied to the complexity of processing and the reduction in processing operations afforded by the additional structural information provided by the clue words.

REFERENCES

[Allen 79] Allen, J.; "A Plan Based Approach to Speech Act Recognition"; University of Toronto Department of Computer Science Technical Report No. 131
[Birnbaum 82] Birnbaum, L.; "Argument Molecules: A Functional Representation of Argument Structure"; Proceedings of AAAI 82
[Cohen 80] Cohen, R.; "Understanding Arguments"; Proceedings of CSCSI 80
[Cohen 81] Cohen, R.; "Investigation of Processing Strategies for the Structural Analysis of Arguments"; Proceedings of ACL 81
[Cohen 83] Cohen, R.; A Computational Model for the Analysis of Arguments; University of Toronto Department of Computer Science Ph.D. thesis (University of Toronto Computer Systems Research Group Technical Report No. 151)
[Grosz 77] Grosz, B.; "The Representation and Use of Focus in Dialogue Understanding"; SRI Technical Note No. 151
[Hobbs 76] Hobbs, J.; "A Computational Approach to Discourse Analysis"; Department of Computer Sciences, CUNY Research Report No. 76-2
[Quirk 72] Quirk, R. et al.; A Grammar of Contemporary English; Longmans Co., London
[Reichman 81] Reichman, R.; "Plain Speaking: A Theory and Grammar of Spontaneous Discourse"; BBN Report No. 4681
[Sadock 77] Sadock, J.; "Modus Brevis: The Truncated Argument"; in Papers from the 13th Regional Meeting, Chicago Linguistics Society
[Sidner 79] Sidner, C.; "Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse"; MIT AI Lab Report TR-537

Appendix I: Coherent Transmission Strategies

Coherent transmissions are illustrated and the reception algorithms required to recognize them are outlined. This material is first introduced in [Cohen 81].

a) PRE-ORDER: state claim, then present evidence

EXA1:
1) Jones would make a good president
2) He has lots of experience
3) He's been on the board for 10 years
4) And he's honest
5) He refused bribes while on the force

[Tree: 1 dominates 2 and 4; 2 dominates 3; 4 dominates 5.]

In the above example, each claim consistently precedes its evidence in the stream of propositions.

b) POST-ORDER: present evidence, then state claim

EXA2:
1) Jones has been on the board 10 years
2) He has lots of experience
3) And he's refused bribes
4) So he's honest
5) He would really make a good president

[Tree: 5 dominates 2 and 4; 2 dominates 1; 4 dominates 3.]

Here, the comparable example in post-order (where evidence precedes claim in the stream) is still coherent. The hearer can construct particular reception algorithms to recognize either of the transmission strategies. To interpret a current proposition in the case of pre-order transmission, the hearer must simply look for a father: in fact, the test is performed only on the last proposition and its ancestors, up the right border of the tree. In post-order, the algorithm makes use of a stack to hold potential sons to the current proposition; the test is to be father to the top of the stack; if the test succeeds, all sons are popped and the resulting tree pushed onto the stack; if the test fails, the current proposition is added to the top of the stack.

c) HYBRID: any sub-argument may be in pre- or post-order

EXA3:
1) Jones would make a good president
2) He has lots of experience
3) He's been on the board 10 years
4) And he's refused bribes
5) So he's honest

[Tree: 1 dominates 2 and 5; 2 dominates 3; 5 dominates 4.]

The above example illustrates a coherent hybrid transmission. The hybrid reception algorithm is then a good approximation to a general processing strategy used by the speaker. Essentially, the algorithm combines techniques from pre- and post-order reception algorithms, where both a father and sons for a current proposition must be found. The search is still restricted, as certain propositions are closed off as eligible relatives to the current one, according to the specifications of the hybrid transmission. There is an additional problem, due to the fact that evidence is treated as a transitive relation. Sons are to be attached to their immediate father; so, it may be necessary to relocate sons that have been attached initially to a higher ancestor. This situation is illustrated below:

[Tree: propositions 4 and 5 attach initially to 1 and later relocate as sons of 6.]

Here, 4 and 5 would succeed as evidence for 1 (since they are evidence for 6 and 6 is evidence for 1); they will initially attach to 1 and relocate as sons to 6 when 6 attaches as son to 1. Here is an outline of the proposed hybrid reception algorithm. It makes use of a dummy root node, for which all nodes are evidence. L is a pointer into the tree, representing the lowest node that can receive more evidence.
To interpret a current proposition in the case of pro-order transmission, the hearer must simply look for a father: in fact, the test is performed only on the last proposition and its ancestors, up the right border of the tree. In post-order, the algorithm makes use of a stack to hold potential sons to the current proposition; the test is to be father to the top of the stack; if the test succeeds, all sons are popped and the resulting tree pushed onto the stack: if the test fails, the current proposition is added to the top of the stsck. c)HYBRID: any sub-argument may be in pre- or post- order EXA3: 1)Jones would make a good president I 2)He has lots of experience /~ 3)He's been on the board 10 years 2 5 4)And he's refused bribes / 5)So he's honest 3 4 The above exgmple illustrates a coherent hybrid transmission. The hybrid reception algorithm is then a good approximation to a general processing strategy used by the speaker. Essentially, the algorithm combines techniques from pro- and post- order reception algorithms, where both a father and sons for a current proposition must be found. The search is still restricted, as certain propositions are closed off as eligible relatives to the current one, according to the specifications of the hybrid transmission. There is an additional problem, due to the fact that evidence is treated as a transitive relation. Sons are to be attached to their immediate father; so, it may be necessary to relocate sons that have been attached initially to a higher ancestor. This situation is illustrated below: Here, 4 any 5 would succeed as evidence for I (since they are evidence for 6 and 6 is evidence for I); they will initially attach to I and relocate as sons to 6 when 6 attaches as son to I. Here is an outline of the proposed hybrid reception algorithm. It makes uses of a dummy root node, for which all nodes are evidence. L is a pointer into the tree, representing the lowest node that can receive more evidence. For every node NEW in the input stream: forever do: if NEW evidence for L then if no sons of L are evidence for NEW then /* just test lastson for evidence*/ attach NEW below L set L to NEW exit forever loop else attach all sons of L which are evidence for L below NEW /* attach lastson; bump ptr. to lastson */ /* back I and keep testing for evidence */ attach NEW below L exit forever loop else set L to father (L) end forever loop APPENDIX II: Examples of Taxonomic Relations [Cohen 81] first suggests using common interpretation rules for connectives in one category of a taxonomy. Various examples presented in that paper are included here as additional background. In the discussion below, S refers to the proposition with the clue; P refers to the prior proposition which connects to S. 1)Parallel: This category includes the most basic connectors like "in addition" as well as lists of clues (e.g. "First, secondly, thirdly..."). P must be brother to S. Finding a brother involves locating the common father when testing evidence relations. E~4: 1)The city is in serious trouble /I\ 2)There are some fires going 2 4 3)Three separate blazes have broken out ~3 4)In addition, a tornado is passing through 256 The parallel category has additional rules for cases where lists of clues are present. Then, propositions with clues from the same list must relate. But note that it is not always a brother relation between these specific propositions. In fact, the brothers are the propositions which serve as claims in each sub-argument controlled by a list clue. 
EXA5:
1) The city is awful
2) First, no one cleans the parks
3) So the parks are ugly
4) Then the roads are a mess
5) There's always garbage there

[Tree: 1 dominates 3 and 4; 3 dominates 2; 4 dominates 5.]

Here, 2 and 4 contain the clues; 3 and 4 are brothers.

2) Inference: There are clues like "therefore" which directly indicate inferences being drawn. The classification of "result" covers cause and effect relations which are of the form: if cause true then (most likely) effect true. Clues of this type are also included in the inference category. P will be son for S.

EXA6:
1) The fire destroyed half the city
2) People are homeless
3) As a result, streets are crowded

[Tree: 3 dominates 2; 2 dominates 1.]

3) Detail: Included in this category are clues of example and particularization, where S lends partial support to P. Here, P will be father to S.

EXA7:
1) Sharks are not likeable
2) They are unfriendly to humans
3) In particular, they eat people

[Tree: 1 dominates 2; 2 dominates 3.]

4) Summary: Ordinarily, summary suggests that a set of sons are to be found. S is father to a set of P's.

EXA8:
1) The benches are broken
2) The trails are choppy
3) The trees are dying
4) In sum, the park is a mess

[Tree: 4 dominates 1, 2, and 3.]

5) Reformulation: The taxonomy rule suggests looking for a prior proposition to be both father and son to the one with the clue. To represent this relation our tree model is inadequate. However, reformulations are often seen as additional evidence, adding detail and emphasis, and could then be recorded simply as sons to the prior statement. The example below suggests that interpretation:

EXA9:
1) We need more money
2) In other words, we are broke

Note that additional discussion of the role of reformulation is included in [Cohen 83].
3: Reformulation
43 namely; 44 in other words; 45 that is to say; 46 alternately.

4: Detail
47 for example; 48 for instance; 49 another instance is; 50 in particular.

5: Inference
51 that is; 52 accordingly; 53 consequently; 54 hence; 55 as a consequence; 56 as a result; 57 if so; 58 if not; 59 That implies; 60 I deduce from that; 61 You can conclude from that.
[Note that 57 and 58 operate between clauses within one sentence; 60 and 61 are whole phrases.]

6: Contrast
62 otherwise; 63 conversely; 64 on the contrary; 65 in contrast; 66 by comparison; 67 however; 68 nonetheless; 69 though; 70 yet; 71 in any case; 72 at any rate; 73 after all; 74 in spite of that; 75 meanwhile; 76 rather than; 77 I would rather say; 78 The alternative is.
[Note that 77 and 78 are whole phrases.]

II Attitudinal expressions
These adverbs indicate a degree of belief of the speaker: primarily, principally, especially, chiefly, largely, mainly, mostly, notably, actually; certainly, clearly, definitely, indeed, obviously, plainly, really, surely, for certain, for sure, of course; frankly, honestly, literally, simply; kind of, sort of, more or less, mildly, moderately, partially, slightly, somewhat, in part, in some respects, to some extent; scarcely, hardly, barely, a bit, a little, in the least, in the slightest; almost, nearly, virtually, practically, approximately, briefly, broadly, roughly; admittedly, decidedly, definitely, doubtless, possibly, reportedly; amazingly, remarkably, naturally, fortunately, tragically, unfortunately, delightfully, annoyingly, thankfully, correctly, justly.

III Emphasis: indicate and defend a claim
to be sure; it is true; there is little doubt; I admit; it cannot be denied; the truth is; in fact; in actual fact.

IV Transitions (re-directing structure)
let us now turn to; speaking of; that reminds me.

Note that this appendix is not intended to list all possible clue words, but merely gives the reader an indication of the existing forms and possible categories.
CONTROL STRUCTURES AND THEORIES OF INTERACTION IN SPEECH UNDERSTANDING SYSTEMS

E.J. Briscoe and B.K. Boguraev
University of Cambridge, Computer Laboratory
Corn Exchange Street, Cambridge CB2 3QG, England

ABSTRACT

In this paper, we approach the problem of organisation and control in automatic speech understanding systems firstly, by presenting a theory of the non-serial interactions necessary between two processors in the system, namely the morphosyntactic and the prosodic, and secondly, by showing how, when generalised, this theory allows one to specify a highly efficient architecture for a speech understanding system with a simple control structure and genuinely independent components. The theory of non-serial interactions we present predicts that speech is temporally organised in a very specific way; that is, the system would not function effectively if the temporal distribution of various types of information in speech were different. The architecture we propose is developed from a study of the task of speech understanding and, furthermore, is specific to this task. Consequently, the paper argues that general problem solving methods are unnecessary for speech understanding.

I INTRODUCTION

It is generally accepted that the control structures of speech understanding systems (SUSs) must allow for non-serial interactions between different knowledge sources or components within the system. By non-serial interaction (NSI) we refer to communication which extends beyond the normal, serial flow of information entailed by the tasks undertaken by each component. For example, the output of the word recognition system will provide the input to morphosyntactic analysis, almost by definition; however, the operation of the morphosyntactic analyser should be constrained on some occasions by prosodic cues: say, that her is accented and followed by a "pause", whilst dog is not, in (1) Max gave her dog biscuits. Similarly, the output of the morphosyntactic analyser will provide the input to semantic analysis, but on occasion, the operation of the morphosyntactic analyser will be more efficient if it has access to information about the discourse: say, that the horse has no unique referent in (2) The horse raced past the barn fell, because this information will facilitate the reduced relative interpretation (see Crain & Steedman, in press). Thus, NSIs will be required between components which occur both before and after the morphosyntactic analyser in the serial chain of processors which constitute the complete SUS.

NSIs can be captured in a strictly serial, hierarchical model, in which the flow of information is always "upwards", by computing every possibility compatible with the input at each level of processing. However, this will involve much unnecessary computation within each separate component which could be avoided by utilising information already temporally available in the signal or context of utterance, but not part of the input to that level. An alternative architecture is the heterarchical system; this avoids such inefficiency, in principle, by allowing each component to communicate with all other components in the system. However, controlling the flow of information and specifying the interfaces between components in such systems has proved very difficult (Reddy & Erman, 1975). The most sophisticated SUS architecture to date is the blackboard model (Erman et al., 1980).
The model provides a means for common representation and a global database for communication between components, and allows control of the system to be centralised and relatively independent of individual components. The four essential elements of the model - blackboard entries, knowledge sources, the blackboard and an intelligent control mechanism - interact to emulate a problem solving style that is characteristically incremental and opportunistic. NSIs are thus allowed to occur, in principle, when they will be of greatest value for preventing unnecessary computation. What is striking about these system architectures is that they place no limits on the kinds of interaction which occur between components; that is, none of them are based on any theory of what kind of interactions and communication will be needed in a SUS. The designers of the Hearsay-II system were explicit about this, arguing that what was required was an architecture capable of supporting any form of interaction, but which was still relatively efficient (Erman & Lesser, 1975:484).

There appear to be at least two problems with such an approach. Firstly, the designer of an individual component must still take into account which other components should be activated by its outputs, as well as who provides its inputs, precisely because no principles of interaction are provided by the model. This entails, even within the loosely structured aggregation hierarchy of the blackboard, some commitment to decisions about inter-component traffic in information - rational answers to these decisions cannot be provided without a theory of interaction between individual components in a SUS. Secondly, a considerable amount of effort has gone into specifying global scheduling heuristics for maintaining an agenda of knowledge source activation records in blackboard systems, and this has sometimes led to treating the control problem as a distinct issue independent of the domain under consideration, localising it on a separate, scheduling, blackboard (Balzer, Erman and London, 1980; Hayes-Roth, 1983a). Once again, this is because the blackboard framework, as it is defined, provides no inherent constraints on interactions (Hayes-Roth, 1983b). While this means that the model is powerful enough to replicate control strategies used in qualitatively different AI systems, as well as generalise to problem-solving in multiple domains (Hayes-Roth, 1983a), the blackboard method of control still fails to provide a complete answer to the scheduling problem. It is intended predominantly for solving problems whose solution depends on heuristics which must cope with large volumes of noisy data. In the context of a blackboard-based SUS, where the assumption that the formation of the "correct" interpretation of an input signal will, inevitably, be accompanied by the generation of many competing (partial) interpretations is implicit in the redundancy encoded in the individual knowledge sources, the only real and practical answer to the control problem remains the development of global strategies to keep unnecessary computation within practical limits. These strategies are developed by tuning the system on the basis of performance criteria: this tuning appears to limit interactions to just
those optimal cases which are likely to yield successful analyses. However, insofar as the final system might claim to embody a theory about which interactions are useful, this will never be represented in an explicit form in the loosely structured system components, but only implicitly in the run-time behaviour of the whole system, and therefore is unlikely to be recoverable (see the analogous criticism in Hayes-Roth, 1983a:55).

II INTERACTIVE DETERMINISM: A THEORY OF NON-SERIAL INTERACTION

In this section, we concentrate on the study of NSI between morphosyntactic and prosodic information in speech, largely from the perspective of morphosyntactic analysis. This interaction occurs between two of the better understood components of a SUS and therefore seems an appropriate starting point for the development of a theory of NSIs. Lea (1980) argues that prosodic information will be of use for morphosyntactic processing. This discussion is based on the observation (see Cooper & Paccia-Cooper, 1980; Cooper & Sorenson, 1981) that there is a strong correlation between some syntactic boundaries and prosodic effects such as lengthening, step up in fundamental frequency, changes of amplitude and, sometimes, pausing. However, many of these effects are probably irrelevant to morphosyntactic analysis, being, for example, side effects of production, such as planning, hesitation, afterthoughts, false starts, and so forth. If prosody is to be utilised effectively to facilitate morphosyntactic analysis, then we require a theory capable of indicating when an ambiguous prosodic cue such as lengthening is a consequence of syntactic environment and, therefore, relevant to morphosyntactic analysis. None of Lea's proposals make this distinction. In order to develop such a theory, we require a precise account of morphosyntactic analysis embedded in a model of a SUS which specifies the nature of the NSIs available to the morphosyntactic analyser.

Consider a simple modular architecture of a SUS in which most information flows upwards through each level of processing, as in the serial, hierarchical model. This information is passed without delay, so any operation performed by a processor will be passed up to its successor in the chain of processors immediately (see Fig. 1). Furthermore, we constrain the model as follows: at least from the point of word recognition upwards, only one interpretation is computed at each level. That is, word recognition returns a series of unique, correct words, then morphosyntactic analysis provides the unique, correct grammatical description of these words, and so forth. In order to implement such a constraint on the processing, the model includes, in addition to the primary flow of information, secondary channels of communication which provide for the NSIs (represented by single arrows in the diagram). These interactive channels are bidirectional, allowing one component to request certain highly restricted kinds of information from another component and, in principle, can connect any pair of processors in a SUS.

[Fig. 1. A serial chain of SUS processors - PROSODY, WORDS, PARSE, SEMANTICS, DISCOURSE - with the primary upward flow of information alongside secondary, bidirectional interactive channels. This diagram is not intended to be complete and is only included to illustrate the two different types of communication proposed in this paper.]
Imagine a morphosyntactic analyser which builds a unique structure without backtracking and employs no, or very little, look-ahead. Such a parser will face a choice point, irresolvable morphosyntactically, almost every time it encounters a structural ambiguity, whether local or global. Further, suppose that this parser seeks to apply some general strategies to resolve such choices, that is, to select a particular grammatical interpretation when faced with ambiguity. If such a parser is to be able to operate deterministically, and still return the correct analysis without error in cases when a general strategy would yield the wrong analysis, then it will require interactive channels for transmitting a signal capable of blocking the application of the strategy and forcing the correct analysis. These are the secondary channels of communication posited in the model of the SUS above. A theory of NSIs should specify when, in terms of the operation of any individual processor, interaction will be necessary; interactive channels for this parser must be capable of providing this information at the onset of any given morphosyntactic ambiguity, which is defined as the point at which the parser will have to apply its resolution strategy.

In order to make the concept of onset of ambiguity precise, a model of the morphosyntactic component of a SUS was designed and implemented. This analyser (henceforth the LEXical-CATegorial parser, LEXICAT - because it employs an Extended Categorial Grammar (e.g. Ades & Steedman, 1982) representing morphosyntactic information as an extension of the lexicon) makes specific predictions about the temporal availability of non-morphosyntactic information crucial to the theory of NSIs presented here. LEXICAT's strategy for resolution of ambiguities is approximately a combination of late closure (Frazier, 1979) and right association (Kimball, 1973). LEXICAT is a species of shift-reduce parser which employs the same stack for the storage and analysis of input and inspects the top three cells of the stack before each parsing operation. Reduction, however, never involves more than two cells, so the top cell of the stack acts as a very restricted one word look-ahead buffer. In general, LEXICAT reduces the items in cells two and three provided that reduction between cells one and two is not grammatically possible (this is not completely accurate; see Briscoe, 1984: Ch3 for a full description of LEXICAT). When LEXICAT encounters ambiguity, in the majority of situations this surfaces as a choice between shifting and reducing. When a shift-reduce choice arises between either cells one and two or two and three, reduction will be preferred by default; although, of course, a set of interactive requests will be generated at the point when this choice arises, and these may provide information which blocks the preferred strategy. The approximate effect of the preference for reduction is that incoming material is attached to the constituent currently under analysis which is "lowest" in the phrase structure tree. LEXICAT is similar to recent proposals by Church (1980), Pereira (in press) and Shieber (1983), in that it employs general strategies, stated in terms of the parser's basic operations, in order to parse deterministically with an ambiguous grammar. A sketch of this parsing loop, under simplifying assumptions, is given below.
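As a rough illustration only - not the actual LEXICAT implementation - the following sketch shows a shift-reduce loop of the kind just described. The predicate can_reduce, the combinator reduce_fn and the channel functions are assumed black boxes, and the sketch simplifies in at least one respect: it polls the channels before every default reduction, whereas LEXICAT generates interactive requests only at recognised choice points.

    def lexicat_like_parse(words, can_reduce, reduce_fn, channels):
        """Loose approximation of the LEXICAT loop: reduce cells three and
        two unless cells two and one are themselves reducible; any positive
        channel response blocks the default reduction."""
        stack = []
        for word in words:
            stack.append(word)             # shift; top cell = look-ahead buffer
            while len(stack) >= 2:
                if len(stack) >= 3 and can_reduce(stack[-3], stack[-2]) \
                        and not can_reduce(stack[-2], stack[-1]):
                    if any(ask(stack) for ask in channels):
                        break              # an NSI response forces a shift instead
                    stack[-3:-1] = [reduce_fn(stack[-3], stack[-2])]
                elif can_reduce(stack[-2], stack[-1]):
                    if any(ask(stack) for ask in channels):
                        break
                    stack[-2:] = [reduce_fn(stack[-2], stack[-1])]
                else:
                    break
        return stack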
A theory of NSIs should also specify how interaction occurs. When LEXICAT recognises a choice point, it makes a request for non-morphosyntactic information relevant to this choice on all of the interactive channels to which it is connected; if any of these channels returns a positive response, the default interpretation is overridden. The parser is therefore agnostic concerning which channel might provide the relevant information; for example, analysing (3) Before the King rides his horse it's usually groomed, the onset of this morphosyntactic ambiguity arises when the horse has been analysed as a noun phrase. LEXICAT must decide at this point whether rides is to be treated as transitive or intransitive: the transitive reading is preferred given the resolution strategy outlined above. Therefore, an interactive request will be generated requesting information concerning the relationship between these two constituents. A simple yes/no response is all that is needed along this interactive channel: "yes" to prevent application of the strategy, "no" if the processor concerned finds nothing relevant to the decision. (Possibly these responses should be represented as confidence ratings rather than a discrete choice. In this case levels of certainty concerning the presence/absence of relevant events could be represented. However, for the rest of this paper we assume binary channels will suffice.)

In relation to this example, consider the channel to the prosodic analyser which monitors for prosodic "breaks" (defined in terms of vowel lengthening, change of fundamental frequency and so forth): when the request is received the prosodic analyser returns a positive response if such a break is present in the appropriate part of the speech signal. In (3) none of these cues is likely to occur, since the relevant boundary is syntactically weak (see Cooper & Paccia-Cooper, 1980), so the interactive request will not result in a positive response, the default resolution strategy will apply and his horse will be interpreted as direct object of rides. In (4) Before the King rides his horse is usually groomed, on the other hand, an interactive request will be generated at the same point, but the interactive channel between the prosodic and morphosyntactic components is likely to produce a positive response since the boundary between rides and his horse is syntactically stronger. Thus, attachment will be blocked, closing the subordinate clause, and thereby forcing the correct interpretation.

NSI, then, is restricted to a set of yes/no responses over the interactive channels at the explicit request of the processor connected to those channels, where a positive response on one interactive channel suffices to override the unmarked choice which would be made in the absence of such a signal. This highly restricted form of interaction is sufficient to guarantee that LEXICAT will produce the correct analysis even in cases of severe multiple ambiguity; for example, analysing the noun compound in (5) Boron epoxy rocket motor chambers (from Marcus, 1980:253), there are fourteen licit morphosyntactic interpretations (corresponding to the Catalan numbers; see Martin et al., 1982), assuming standard grammatical analyses (e.g. Selkirk, 1983). However, if this example were spoken and we assume that it would have the prosodic structure predicted by Cooper & Paccia-Cooper's (1980) algorithm for deriving prosody from syntactic structure, LEXICAT could produce the correct analysis without error, just through interaction with the prosodic analyser. As each noun enters the analyser, reduction will be blocked by the general strategy but, because LEXICAT will recognise the existence of ambiguity, an interactive request will be generated before each shift. The prosodic break channel will then prevent reduction after epoxy and after motor, forcing the correct analysis ((boron epoxy) ((rocket motor) chambers)), as opposed to the default right-branching structure.
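The yes/no protocol itself is simple enough to sketch directly. In the following hypothetical fragment, the channel functions and the observed_breaks lookup are stand-ins for the prosodic analyser's real decision procedure; none of the names come from the paper.

    def resolve(boundary, channels, default="attach"):
        """Poll every channel at a choice point; any positive response
        overrides the default (attachment by reduction)."""
        for ask in channels:          # e.g. prosodic break, unique referent, ...
            if ask(boundary):
                return "close"        # block attachment, close the constituent
        return default

    # Stand-in prosodic channel: "yes" iff the acoustic analysis found
    # lengthening / an F0 step-up at the queried word boundary.
    observed_breaks = {("(4)", "rides | his horse")}

    def prosodic_break(boundary):
        return boundary in observed_breaks

    print(resolve(("(4)", "rides | his horse"), [prosodic_break]))  # -> "close"
    print(resolve(("(3)", "rides | his horse"), [prosodic_break]))  # -> "attach"

The two calls mirror examples (4) and (3) above: the same word boundary yields a closed subordinate clause when a break is observed, and direct-object attachment when it is not.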
Thus, NSI between the morphosyntactic and prosodic components can be captured by a bistable, bidirectional link capable of transmitting a request and signalling a binary response, either blocking or allowing the application of the relevant strategy according to the presence or absence of a prosodic break. Given the simplicity of this interaction, the prosodic analyser requires no more information from the parser than that a decision is requested concerning a particular boundary. Nor need the prosodic analyser decide, prior to an interactive request on this channel, whether a particular occurrence of, say, lengthening is signalling the presence of a prosodic break rather than, for instance, stress, since the request itself will help resolve the interpretation of the cue. Moreover, we have a simple generalisation about when interactive requests will be made, since this account of NSIs predicts that prosodic information will only be relevant to morphosyntactic analysis at the onset of a morphosyntactic ambiguity.

If we assume (boldly) that this account of NSI between the morphosyntactic and prosodic analysers will generalise to a complete model of SUS, then such a model makes a set of predictions concerning the temporal availability of interactive information in the speech signal and representation of the context of utterance. In effect, it claims that the SUS architecture simply presupposes that language is organised in the appropriate fashion, since the model will not function if it is not. We call this strong prediction about the temporal organisation of the speech signal the Interactive Determinism (ID) Hypothesis, since it is essentially an extension of Marcus' (1980) Determinism Hypothesis.

III TESTING THE INTERACTIVE DETERMINISM HYPOTHESIS

The ID hypothesis predicts that speech and the representation of context is organised in such a way that information will be available, when needed, via NSI to resolve a choice in any individual component at the point when that choice arises. Thus, in the case of prosodic interaction with morphosyntactic analysis, the theory predicts that a prosodic break should be present in speech at the onset of a morphosyntactic
The technique gives a good indicatio~ of whether the cues associated with a prosodic break are present at the appropriate points in the speech signal, and their cons,,stency across different speakers. Returning to examples (3) and (4) above, we noted that a prosodic break would be required in (4), but not (3), to prevent attachment of rides and hzs horse. Warren found exactly this pattern of results; the duration of rides (and similar items in this position) is an average 51% longer in (4) and the fall in fundamental frequency is almost twice as great with a corresponding step up to horse, as compared to a smooth declination across this boundary in (3). Similarly, analysing (6) 7he company awarded the contract [to/was] the highest bidcler. I,E),qCAT prefers attachment of The company to awarded, treating awarded as the main verb. In the case where awarded must be treated as the beginning cf a reduced relative, Warren found that the duration of the final syllable of company is lengthened and that the same pattern of fall and step up in fundamental frequency occurs. Perhaps the mo'~t interesting cases are ambiguous constituent questmns; Church (19g0,117) argued that it is probably impossible to parse these dcterministieally by employing look-ahead: "The really hard problem with wh-movement is finding the "gap" where the wh-element originated. This is not particularly difficult for a non-deterministic competence theory, but it is (probably) impossible for a deterministic processing model." LEXICAT predicts that in a sentence such as (7) ~Vho did you want to give the presents to 5~.e? the potential point of attachment of Who as direct object of want will bc ignored by default in preference for the immediate attachment of to give. Thus there is a prediction that the sentence, when spoken, should contain a prosodic break at this point. Warren has found some evidence for this prediction, i.e. want is lengthened as compared to examples where this is not the correct point of attachment of the prcposed phrase, such as (8) Who did you want t.~ give the presents to? but the prosodic cues, although consistent, are comparatively weak, and it is not clear that listeners are utilising them in the manner predicted by the theory (see Briscoe, 1984:Ch4). A different kind of support is provided by sentences such as (9) Before the I~ng rides a servant grooms his horse. which exhibit the same local ambiguity as (3) and (,t) but where the semantic interpretation of the noun phrase makes the direct object reading implausible, in this case it is likely that an interactive channel between the semantic and morphosyntactlc analysers would block the incorrect interpretation. So there is a prediction that the functional load on prosodic information will decrease and, therefore, that the prosodic cues to the break may be less marked. This prediction was again corroborated by Warren who found that the prosodic break in examples such as (9) was significantly less rnarked acoustically than for c~arnplcs such as (4)*. In general then, these experimental results support the ID hypothesis. Ill CONTROl, STRUCI'URE AND ORGANISATION In a SU~J based on the ID model, the main flow of information will be defined by the tasks of each component, and their medium of communication, will be a natural consequence of these tasks; as for the serial, hierarchical model. 
However, in the ID model, unlike the hierarchical model, there are fewer overheads because unnecessary computation at any level of processing will be eliminated by the NSIs between components. These interactions will, of course, require a large number of interactive channels; but these do not imply a common representation language, because the information which passes along them is representation-independent and restricted to a minimal request and a binary response. Each channel in the full SUS will be dedicated to a specific interaction between components; so the morphosyntactic component will require a prosodic break channel and a unique referent channel (see examples (1) and (2)), and so forth. Thus, a complete model of SUS will implement a theory of the types of NSI required between all components. Finally, the ID model will not require that any individual processor has knowledge of the nature of the operations of another processor; that is, the morphosyntactic analyser need not know what is being computed at the other end of the prosodic break channel, or how; nor need the prosodic analyser know why it is computing the presence or absence of a prosodic break. Rather, the knowledge that this information is potentially important is expressed by the existence of this particular interactive channel.

The control structure of this model is straightforward: after each separate operation of each individual component, the results of this operation will be passed to the next component in the serial chain of processors. An interactive request will be made by any component only when faced with an indeterminism irresolvable in terms of the input available to it. No further scheduling or centralised control of processing will be required. Furthermore, although each individual component determines when NSIs will occur, because of the restricted nature of this interaction each component can still be developed as a completely independent knowledge source. The deterministic nature of the individual components of this SUS eliminates the need for any global heuristics to be brought into the analysis of the speech signal. Thus we have dispensed neatly with the requirement for an over-powerful and over-general problem-solving framework, such as the blackboard, and replaced it with a theory specific to the domain under consideration, namely language. The theory of NSIs offers a satisfactory specific method for speech understanding which allows the separate specialist component procedures of a SUS to be "algorithmetized" and compiled. As Erman et al. (1980) suggest: "In such a case the flexibility of a system like Hearsay-II may no longer be needed". The restrictions on the nature and directionality of NSI channels in a SUS, and the situations in which they need to be activated, allow a modular system whose control structure is not much more complex than that of the hierarchical model, and yet, via the network of interactive channels, achieves the efficiency sought by the heterarchical and blackboard models, without the concomitant problems of common knowledge representations and complex communications protocols between separate knowledge sources. Thus, the ID model
dispenses with the overhead costs of data-directed activation of knowledge sources and the need for opportunistic scheduling or a complex focus-of-control mechanism.

V CONCLUSION

In this paper we have proposed a very idealised model of a SUS with a simple organisation and control structure. Clearly, the ID model assumes a greater level of understanding of many aspects of speech processing than is current. For example, we have assumed that the word recognition component is capable of returning a series of unique, correct lexical items; even with interaction of the kind envisaged, it is doubtful that our current understanding of acoustic-phonetic analysis is good enough for it to be possible to build such a component now. Nevertheless, the experimental work reported by Marslen-Wilson & Tyler (1980) and Cole & Jakimik (1980), for example, suggests that listeners are capable of accessing a unique lexical item on the basis of the acoustic signal and interactive feedback from the developing analysis of the utterance and its context (often before the acoustic signal is complete). More seriously, from the perspective of interactive determinism, little has been said about the many other interactive channels which will be required for speech understanding and, in particular, whether these channels can be as restricted as the prosodic break channel. For example, consider the channel which will be required to capture the interaction in example (9); this will need to be sensitive to something like semantic "anomaly". However, semantic anomaly is an inherently vague concept, particularly by comparison with that of a prosodic break. Similarly, as we noted above, the morphosyntactic analyser will require an interactive channel to the discourse analyser which indicates whether a noun phrase followed by a potential relative clause, such as the horse in (2), has a unique referent. However, since this channel would only seem to be relevant to ambiguities involving relative clauses, it appears to cast doubt on the claim that interactive requests are generated automatically on every channel each time any type of ambiguity is encountered. This, in turn, suggests that the control structure proposed in the last section is oversimplified. Nevertheless, by studying these tasks in terms of far more restricted and potentially more computationally efficient models, we are more likely to uncover restrictions on language which, once discovered, will take us a step closer to tractable solutions to the task of speech understanding.

Thus, the work reported here suggests that language is organised in such a manner that morphosyntactic analysis can proceed deterministically on the basis of a very restricted parsing algorithm, because non-structural information necessary to resolve ambiguities will be available in the speech signal (or representation of the context of utterance) at the point when the choice arises during morphosyntactic analysis. The account of morphosyntactic analysis that this constraint allows is more elegant, parsimonious and empirically adequate than employing look-ahead (Marcus, 1980). Firstly, an account based on look-ahead is forced to claim that local and global ambiguities are resolved by different mechanisms (since the latter, by definition, cannot be resolved by the use of morphosyntactic information further downstream in the signal), whilst the ID model requires only one mechanism.
Secondly, restricted look-ahead fails to delimit accurately the class of so-called garden path sentences (Milne, 1982; Briscoe, 1983), whilst the ID account correctly predicts their "interactive" nature (Briscoe, 1982, 1984; Crain & Steedman, in press). Thirdly, look-ahead involves delaying decisions, a strategy which is made implausible, at least in the context of speech understanding, by the body of experimental results summarised by Tyler (1981), which suggest that morphosyntactic analysis is extremely rapid. The generalisation of these results to a complete model of SUS represents commitment to a research programme which sets as its goal the discovery of constraints on language which allow the associated processing tasks to be implemented in an efficient and tractable manner. What is advocated here, therefore, is the development of a computational theory of language processing derived through the study of language from the perspective of these processing tasks, much in the same way in which Marr (1982) developed his computational theory of vision.

Acknowledgements: We would like to thank David Carter, Jane Robinson, Karen Sparck Jones and John Tait for their helpful comments. Mistakes remain our own.

VI REFERENCES

Ades, A. and Steedman, M. (1982) 'On the Order of Words', Linguistics and Philosophy, vol. 5, 320-363.
Balzer, R., Erman, L., London, P. and Williams, C. (1980) 'HEARSAY-III: A Domain-Independent Framework for Expert Systems', Proceedings of the AAAI(1), Stanford, CA, pp. 108-110.
Briscoe, E. (1982) 'Garden Path Sentences or Garden Path Utterances?', Cambridge Papers in Phonetics and Experimental Linguistics, vol. 1, 1-9.
Briscoe, E. (1983) 'Determinism and its Implementation in Parsifal' in Sparck Jones, K. and Wilks, Y. (eds.), Automatic Natural Language Parsing, Ellis Horwood, Chichester, pp. 61-68.
Briscoe, E. (1984) Towards an Understanding of Spoken Sentence Comprehension: The Interactive Determinism Hypothesis, Doctoral Thesis, Cambridge University.
Church, K. (1980) On Memory Limitations in Natural Language Processing, MIT/LCS/TR-245.
Cole, R. and Jakimik, J. (1980) 'A Model of Speech Perception' in Cole, R. (ed.), Perception and Production of Fluent Speech, Lawrence Erlbaum, New Jersey.
Cooper, W. and Paccia-Cooper, J. (1980) Syntax and Speech, Harvard University Press, Cambridge, Mass.
Cooper, W. and Sorenson, J. (1981) Fundamental Frequency in Sentence Production, Springer Verlag, New York.
Crain, S. and Steedman, M. (in press) 'On Not Being Led Up the Garden Path: the Use of Context by the Psychological Parser' in Dowty, D., Karttunen, L. and Zwicky, A. (eds.), Natural Language Processing, Cambridge University Press, Cambridge.
Erman, L., Hayes-Roth, F., Lesser, V. and Reddy, R. (1980) 'The Hearsay-II Speech Understanding System: Integrating Knowledge to Resolve Uncertainty', Computing Surveys, vol. 12, 213-253.
Erman, L. and Lesser, V. (1975) 'A Multi-Level Organisation for Problem Solving Using Many, Diverse, Cooperating Sources of Knowledge', Proceedings of the 4th IJCAI, Tbilisi, Georgia, pp. 483-490.
Frazier, L. (1979) On Comprehending Sentences: Syntactic Parsing Strategies, IULC, Bloomington, Indiana.
Hayes-Roth, B. (1983a) A Blackboard Model of Control, Report No. HPP-83-38, Department of Computer Science, Stanford University.
Hayes-Roth, B. (1983b) The Blackboard Architecture: A General Framework for Problem Solving?, Report No. HPP-83-30, Department of Computer Science, Stanford University.
Kimball, J. (1973) 'Seven Principles of Surface Structure Parsing in Natural Language', Cognition, vol. 2, 15-47.
Lea, W. (1980) 'Prosodic Aids to Speech Recognition' in Lea, W. (ed.), Trends in Speech Recognition, Prentice Hall, New Jersey, pp. 166-205.
Marcus, M. (1980) A Theory of Syntactic Recognition for Natural Language, MIT Press, Cambridge, Mass.
Marr, D. (1982) Vision, W.H. Freeman and Co., San Francisco.
Marslen-Wilson, W. and Tyler, L. (1980) 'The Temporal Structure of Spoken Language Understanding: the Perception of Sentences and Words in Sentences', Cognition, vol. 8, 1-74.
Martin, W., Church, K. and Patil, R. (1982) Preliminary Analysis of a Breadth-First Parsing Algorithm: Theoretical and Experimental Results, MIT/LCS/TR-261.
Milne, R. (1982) 'Predicting Garden Path Sentences', Cognitive Science, vol. 6, 349-373.
Pereira, F. (in press) 'A New Characterization of Attachment Preferences' in Dowty, D., Karttunen, L. and Zwicky, A. (eds.), Natural Language Processing, Cambridge University Press, Cambridge.
Reddy, R. and Erman, L. (1975) 'Tutorial on System Organisation for Speech Understanding' in Reddy, R. (ed.), Speech Recognition: Invited Papers of the IEEE Symposium, Academic Press, New York, pp. 457-479.
Selkirk, E. (1983) The Syntax of Words, MIT Press, Cambridge, Mass.
Shieber, S. (1983) 'Sentence Disambiguation by a Shift-Reduce Parsing Technique', Proceedings of the 21st Annual Meeting of ACL, Cambridge, Mass., pp. 113-118.
Tyler, L. (1981) 'Serial and Interactive-Parallel Theories of Sentence Processing', Theoretical Linguistics, vol. 8, 29-65.
Warren, P. (1983) 'Temporal and Non-Temporal Cues to Sentence Structure', Cambridge Papers in Phonetics and Experimental Linguistics, vol. 2.
Warren, P. (in prep.) Durational Factors in Speech Processing, Doctoral Thesis, Cambridge University.
Analysis Grammar of Japanese in the Mu-Project
- A Procedural Approach to Analysis Grammar -

Jun-ichi TSUJII, Jun-ichi NAKAMURA and Makoto NAGAO
Department of Electrical Engineering
Kyoto University, Kyoto, JAPAN

Abstract

The analysis grammar of Japanese in the Mu-project is presented. It is emphasized that rules expressing constraints on single linguistic structures and rules for selecting the most preferable readings are completely different in nature, and that rules for selecting preferable readings should be utilized in analysis grammars of practical MT systems. It is also claimed that procedural control is essential in integrating such rules into a unified grammar. Some sample rules are given to make the points of discussion clear and concrete.

1. Introduction

The Mu-project is a Japanese national project supported by grants from the Special Coordination Funds for Promoting Science & Technology of STA (Science and Technology Agency), which aims to develop Japanese-English and English-Japanese machine translation systems. We currently restrict the domain of translation to abstracts of scientific and technological papers. The systems are based on the transfer approach [1], and consist of three phases: analysis, transfer and generation. In this paper, we focus on the analysis grammar of Japanese in the Japanese-English system. The grammar has been developed by using GRADE, which is a programming language specially designed for this project [2]. The grammar now consists of about 900 GRADE rules. The experiments so far show that the grammar works very well and is comprehensive enough to treat various linguistic phenomena in abstracts. In this paper we will discuss some of the basic design principles of the grammar together with its detailed construction. Some examples of grammar rules and analysis results will be shown to make the points of our discussion clear and concrete.

2. Procedural Grammar

There has been a prominent tendency in recent computational linguistics to re-evaluate CFG and use it directly or augment it to analyze sentences [3, 4, 5]. In these systems (frameworks), CFG rules independently describe constraints on single linguistic structures, and a universal rule application mechanism automatically produces a set of possible structures which satisfy the given constraints. It is well-known, however, that such sets of possible structures often become unmanageably large. Because two separate rules such as

    NP ----> NP PREP-P
    VP ----> VP PREP-P

are usually prepared in CFG grammars in order to analyze noun and verb phrases modified by prepositional phrases, CFG grammars provide two syntactic analyses for

    She was given flowers by her uncle.

Furthermore, the ambiguity of the sentence is doubled by the lexical ambiguity of "by", which can be read as either a locative or an agentive preposition. Since the two syntactic structures are recognized by completely independent rules, and the semantic interpretations of "by" are given by independent processes in the later stages, it is difficult to compare these four readings during the analysis to give a preference to one of them. A rule such as "If a sentence is passive and there is a "by"-prepositional phrase, it is often the case that the prepositional phrase fills the deep agentive case (try this analysis first)" seems reasonable and quite useful for choosing the most preferable interpretation, but it cannot be expressed by refining the ordinary CFG rules. This kind of rule is quite different in nature from a CFG rule.
It is not a rule of constraint on a single linguistic structure (in fact, the above four readings are all linguistically possible), but a "heuristic" rule concerned with preference of readings, which compares several alternative analysis paths and chooses the most feasible one. Human translators (or humans in general) have many such preference rules based on various sorts of cues such as morphological forms of words, collocations of words, text styles, word semantics, etc. These heuristic rules are quite useful not only for increasing efficiency but also for preventing proliferation of analysis results. As Wilks [6] pointed out, we cannot use semantic information as constraints on single linguistic structures, but just as preference cues to choose the most feasible interpretations among linguistically possible interpretations. We claim that many sorts of preference cues other than semantic ones exist in real texts which cannot be captured by CFG rules. We will show in this paper that, by utilizing various sorts of preference cues, our analysis grammar of Japanese can work almost deterministically to give the most preferable interpretation as the first output, without any extensive semantic processing (note that even "semantic" processing cannot disambiguate the above sentence: the four readings are all semantically possible; disambiguation requires deep understanding of contexts or situations, which we cannot expect in a practical MT system).

In order to integrate heuristic rules based on various levels of cues into a unified analysis grammar, we have developed a programming language, GRADE. GRADE provides us with the following facilities.

- Explicit Control of Rule Applications: Heuristic rules can be ordered according to their strength (see 4-2).
- Multiple Relation Representation: Various levels of information, including morphological, syntactic, semantic, logical etc., are expressed in a single annotated tree and can be manipulated at any time during the analysis. This is required not only because many heuristic rules are based on heterogeneous levels of cues, but also because the analysis grammar should perform semantic/logical interpretation of sentences at the same time, and the rules for these phases should be written in the same framework as the syntactic analysis rules (see 4-2, 4-4).
- Lexicon Driven Processing: We can write heuristic rules specific to a single word or a limited number of words, such as rules concerned with collocations among words. These rules are strong in the sense that they almost always succeed. They are stored in the lexicon and invoked at appropriate times during the analysis without decreasing efficiency (see 4-1).
- Explicit Definition of Analysis Strategies: The whole analysis phase can be divided into steps. This makes the whole grammar efficient, natural and easy to read. Furthermore, strategic consideration plays an essential role in preventing undesirable interpretations from being generated (see 4-3).

3. Organization of Grammar

In this section, we give the organization of the grammar necessary for understanding the discussion in the following sections. The main components of the grammar are as follows.

(1) Post-Morphological Analysis
(2) Determination of Scopes
(3) Analysis of Simple Noun Phrases
(4) Analysis of Simple Sentences
(5) Analysis of Embedded Sentences (Relative Clauses)
(6) Analysis of Relationships of Sentences
(7) Analysis of Outer Cases
(8) Contextual Processing (Processing of Omitted Case Elements, Interpretation of 'Ha', etc.)
(9) Reduction of Structures for Transfer Phase
(9) Reduction of Structures for Transfer Phase Each component conststs of from 60 to 120 GRADE rules. 47 morpho-syntacttc categories are provtded for Japanese analysts, each of whtch has tts own lextcal description format. 12.000 lextcal entrtes have already been prepared according to the formats. In thts classification. Japanese nouns are categorized |nto 8 sub-classes according to thetr morpho-syntacttc behavtour, and 53 semanttc markers are used to characterize thetr semanttc behaviour. Each verb has a set of case frame descriptions (CFD) whtch correspond to different usages of the verb. A CFD g|ves mapping rules between surface case markers (SCN - postpostttonal case particles are used as SCN's tn Japanese) and thetr deep case interpretations (DCZ 33 deep cases are used). DC! of an SCM often depends on verbs so that the mapping rules are given %o CFD's of Individual verbs. A CFO also gtves a normal collocation between the verb and SCM's(postpositonal case particles). Oetatled lextcal descriptions are gtven and discussed tn another paper[7]. The analysts results are dependency trees whtch show the semanttc relationships among tnput words. 4. Typtcal Steps of Analysts Grammar In the following, we w111 take some sample rules to Illustrate our points of discussion. 4-; Relative Clauses Relative clause constructions in Japanese express several different relationships between modifying clauses (relative clauses) and thelr antecedents. Some relattve clause constructions 268 cannot be translated as relative clauses tn Engltsh. Me classified Japanese relattve clauses Into the followtn 9 four types, according to the relationships between clauses and their antecedents. (1) Type 1 : Gaps In Cases One of the case elements of the relattve clause ts deleted and the antecedent fills the gap. (2) Type 2 : Gaps In Case Elements The antecedent modifies a case element tn the clause. That ts. a gap exists tn a noun phrase tn the clause. (3) Type 3 : Apposition The clause describes the content of the antecedent as the Engltsh "that"-clause tn 'the tdea that the earth ts round'. (4) Type 4 : Partlal Apposltlon The antecedent and the clause are related by certain semantic/pragmatic relationships. The relative clause of thts type doesn't have any gaps. This type cannot be translated dtrectly lnto English relative clauses. Me have to Interpolate In English appropriate phrases or clauses whtch are Implicit tn Japanese. tn order to express the semantic/pragmatic relationships between the antecedents and relative clauses explicitly. In other words, gaps extst tn the Interpolated phrases or clauses. Because the above four types of relattve clauses have the same surface forms fn Japanese ......... (verb) (noun). RelattvefClause Antecedent careful processing ts requtred to d|sttngutsh them (note that the "antecedents' -modified nouns- ape located after the relat|ve clauses tn Japanese). A sophisticated analysis procedure has already been developed, which fully ut|ltzes vartous levels of heuristic cues as follows. (Rule 1) There are a 11mtted number of nouns whtch are often used as antecedents of Type 3 clauses. (Rule 2) Vhen nouns with certa|n semanttc markers appear tn the relattve clauses and those nouns are followed by one of spectflc postpostttonal case part4cles, there ts a htgh possibility that the relattve clauses are Type 2. In the following example, the word "SHORISOKUDO"(processtn 9 speed) has the semanttc marker AO (attribute). 
[ex-1] [Type 2] "SHORZSOKUDO" "GA" (processing speed) (case particle: subject I case) RelattvetClause "HAYA[" "KEISANK[" (htgh) I (computer) I /t Antecedent -->(English Translation) A computer whose processing speed ts htgh (Rule 3) Nouns such as "MOKUTEKZ"(puPpose). "GEN ZN"(reason), "SHUDAN"(method) etc. express deep case relationships by themselves, and. when these nouns appear as antecedents. |t is often the case that they ft11 the gaps of the corresponding deep cases tn the relattve clauses. [ex-2] [Type 1] "KONO" "SOUCHI" "O" "TSUKAT" "TA" "MOKUTEK[" (th,s)l(dev,c. (c.. ICpurpos.) |part,cle:h /,ormat,ve: I J I / °bJect l / pest) l /case) ~ / RelattvetClause Antecedent --> (English Translation) The purpose for wh|ch (someone) used thts devtce The purpose of ustn9 thts devtce (Rule 4) There ts a 11mtted number of nouns whtch are often used as antecedents In Type 4 relattve clauses. Each of such nouns requtres a specific phrase or clause to be Interpolated tn Engltsh. [ex-3] [Type 4] "KONO" "SOUCHI" "0" "TSUKAT"-- "TA" "KEKKA" (th,s),(devlce)/~case e.~. (to use)/~tense ~'...(;esult) ...l fformat,ve:h J 1 ,object , Ipast) I 1 [ I case) l Rel at tve ~ Clause Antecedent --> (Engllsh Translation) The result which was obtatned by ustng thts dev|ce In the above example, the clause "the result whtch someone obtatned (the result : gap)" ts onmitted tn Japanese. whtch relates the antecedent "KEKKA"(result) and the relattve clause "KONO SOUCHI 0 TSUKAT_TA"(someone used thts devtce). 269 A set of lextcal rules ts defined for "KEKKA"(resulL). which basically works as follows : tt examines first whether the deep object case has already been filled by a noun phrase tn the relattve clause. If so, the relattve clause ts taken as type 4 and an appropriate phrase ts Interpolated as tn [ex-3]. If not, the relattve clause ts taken as type 1 as tn the following example where the noun *KEKKA" (result) ftlls the gap of object case tn the relattve clause. [ex-4] [Type 1] "KONO" "JIKKEN • / •GA". "TSUKAT• J"TA" l "KEKKA" (thts)J(expertment)//(case~(to use)~(tense (r~ult) rParticle~ iformsttve:]l IsubJect I I past)| I [ _ll case) l / I Relattve Clause Antecedent -->(English Translation) The result whtch thts experiment used Such lextcal rules are Invoked at the beginning of the relattve clause analysts by a rule tn the math flow of processing. The noun "KEKKA • (result) is given a mark as a lexlcal property which Indicates the noun has special rules to be Invoked when tt appears as an antecedent of a relatlve clause. A11 the nouns which requlre speclal treatments In the relative clause analysts are given the same marker. The rule tn the matn flow only checks thts mark and Invokes the lextcal rules defined tn the lextcon. (Rule 5) Only the cases marked by postpostttonal case particles 'GA'. 'WO" and 'NI" can be deleted tn Type 1 relattve clauses, when the antecedents are ordtnary nouns. Gaps tn Type 1 relative clauses can have other surface case marks, only when the antecedents are spectal nouns such as described tn Rule (3). 4-2 ConJuncted Noun Phrases ConJuncted noun phrases often appear in abstracts of scientific and technological papers. It ts Important to analyze them correctly. especially to determine scopes of conjunctions correctly, because they often lead to proliferation of analysis results. The particle "TO" plays almost the same role as the Engllsh "and" to conjunct noun phrases. There are several heuristic rules based on various levels of information to determine the scopes. 
<Scope Decision Rules of ConJuncted Noun Phrases by Partlcle 'TO'> (Rule 1) Stnce parttcle "TO" ts also used as a case particle, tf It appears tn the position: Noun 'TO" verb Noun, Noun 'TO' adjective Noun. there are two posstble Interpretations. one tn whlch "TO" Is a case parttcle and "noun TO adjective(verb)' forms a relattve clause that modifies the second noun. and the other one tn which "TO" ts a conjunctive particle to form a conJuncted noun phrase. However. it ts very 11kely that the parttcle 'TO' ts not 8 conjunctive parttcle but a post-positional case particle, if the adjective (verb) ts one of adjectives (verbs) which requtre case elements wtth surface case mark "TO' and there are no extra words between "TO • end the adjective (verb). In the following example. "KOTONARU(to be different)" ts an adjective which ts often collocated wtth a noun phrase followed by case particle "TO". [ex-5] YOSOKU-CHI "TO" KOTONARU ATAI (predicted value) (to be different) (value) [dominant interpretation] IYOSOKU-CHI "TO" KOTONARU ATIAI relattve~clause ant/cedent • the value which ts different from the predicted value [less domtnant Interpretation] YOSOKU-CHI "TO" KOTONARU ATAI Me N~ I I conJuncte~ noun phrase = the predicted value and the different value (Rule 2) If two "TO* particles appear tn the position: Noun-1 'TO' . ......... Noun-2 'TO' 'NO" NOUN-3 the right boundary of the scope of the conJuctton ts almost always Noun-2. The second 'TO" plays a role of a delimiter which deltmtts the right boundary of the conjunction. Thts 'TO" tS optional, but tn real texts one often places tt to make the scope unambiguous, especially when the second conjunct IS a long noun phrase and the scope is highly ambiguous without tt. Because the second 'TO' can be Interpreted as a case parttcle (not as a delimiter of the conjunction) and 'NO' following a case parttcle turns the preceding phrase to a 270 modlfter of s noun. on Interpretation tn whtch "NOUN-2 TO NO" ts taken as o modtrter of NOUN-3 and NOUN-3 ts token as the hood noun of the second conJunt ts also linguistically possible. However, In most cases, when two 'TO" particles appear tn the above position, the second "TO' Is Just a delimiter of the scope(see [ex-6]). [ex-6] YOSOKU-CHI TO JIKKEN DE.NO JISSOKU-CHI TO 60 SA (predtctedl'~expertment~'~case'~(octual valu~ I value) J ~orttcle~ (dtt'ference) t pl°c°) ] [dominant Interpretation] YOSOKU-CHI TO J[KKEN DE 60 O[$$OKU-CH] TO NO SA NP NP 1 I ConJuncted HP I NP • the difference between the predicted value and the actual value tn the experiment [less domtnant tnterpnetattons] (A) YOSOKU-CHI TO JIKKEN DE NO JISSOKU-CHI TO NO $A NP NP I I ConJuncted NP - the difference wtth the actual value tn the predicted value and the experiment (e) YOS~KU-CH] .p ~p l I ConJun~ted NP TO J[KKEN DE NO JZSSOKU-CH[ TO NO SA "l "" I • the predicted value and the difference wtth the actual value tn the experiment (Rule 3) If a spectal noun whtch ts often collocated wtth conjunctive noun phrases appear tn the position: Noun-1 'TO' . ....... Noun-2 "NO'<spectal-noun>, the rtght boundary of the conjunction ts almost always Noun-2. Such spectal nouns are marked tn the lextcon. [n the following example. "KANKEI" ts such a spectal noun. [ex-7] JISSOKU-CHI~O" (actual value) I RIRON-DE E-TA YOSOKU-CHI. NO, KANKE[__ 1(theory ]( ( to~( prod tcted~ (l:e lot ton~ " Iobtatn)l value) // shtp)J II spectal noun [dominant Interpretation] JISSOKU-CH! "TO" . ...... 
4-2 Conjuncted Noun Phrases

Conjuncted noun phrases often appear in abstracts of scientific and technological papers. It is important to analyze them correctly, and especially to determine the scopes of conjunctions correctly, because they often lead to proliferation of analysis results. The particle 'TO' plays almost the same role as the English "and" in conjuncting noun phrases. There are several heuristic rules based on various levels of information to determine the scopes.

<Scope Decision Rules of Conjuncted Noun Phrases by Particle 'TO'>

(Rule 1) Since the particle 'TO' is also used as a case particle, if it appears in the position

    Noun 'TO' verb Noun,  or  Noun 'TO' adjective Noun,

there are two possible interpretations: one in which 'TO' is a case particle and "noun TO adjective (verb)" forms a relative clause that modifies the second noun, and the other in which 'TO' is a conjunctive particle forming a conjuncted noun phrase. However, it is very likely that the particle 'TO' is not a conjunctive particle but a postpositional case particle if the adjective (verb) is one of the adjectives (verbs) which require case elements with the surface case mark 'TO', and there are no extra words between 'TO' and the adjective (verb). In the following example, "KOTONARU" (to be different) is an adjective which is often collocated with a noun phrase followed by the case particle 'TO'.

[ex-5] YOSOKU-CHI (predicted value) 'TO' KOTONARU (to be different) ATAI (value)
[dominant interpretation] "YOSOKU-CHI TO KOTONARU" is a relative clause with antecedent "ATAI" = the value which is different from the predicted value
[less dominant interpretation] conjuncted NP = the predicted value and the different value

(Rule 2) If two 'TO' particles appear in the position

    Noun-1 'TO' .......... Noun-2 'TO' 'NO' Noun-3,

the right boundary of the scope of the conjunction is almost always Noun-2. The second 'TO' plays the role of a delimiter which marks the right boundary of the conjunction. This 'TO' is optional, but in real texts one often places it to make the scope unambiguous, especially when the second conjunct is a long noun phrase and the scope is highly ambiguous without it. Because the second 'TO' can be interpreted as a case particle (not as a delimiter of the conjunction), and 'NO' following a case particle turns the preceding phrase into a modifier of a noun, an interpretation in which "Noun-2 TO NO" is taken as a modifier of Noun-3 and Noun-3 is taken as the head noun of the second conjunct is also linguistically possible. However, in most cases, when two 'TO' particles appear in the above position, the second 'TO' is just a delimiter of the scope (see [ex-6]).

[ex-6] YOSOKU-CHI (predicted value) TO JIKKEN (experiment) DE (case particle: place) NO JISSOKU-CHI (actual value) TO NO SA (difference)
[dominant interpretation] conjuncted NP "YOSOKU-CHI TO ... JISSOKU-CHI" = the difference between the predicted value and the actual value in the experiment
[less dominant interpretations] (A) = the difference from the actual value, in the predicted value and the experiment; (B) = the predicted value, and the difference from the actual value in the experiment

(Rule 3) If a special noun which is often collocated with conjunctive noun phrases appears in the position

    Noun-1 'TO' ........ Noun-2 'NO' <special-noun>,

the right boundary of the conjunction is almost always Noun-2. Such special nouns are marked in the lexicon. In the following example, "KANKEI" (relationship) is such a special noun.

[ex-7] JISSOKU-CHI (actual value) 'TO' RIRON-DE (by the theory) E-TA (obtained) YOSOKU-CHI (predicted value) NO KANKEI (relationship: special noun)
[dominant interpretation] conjuncted NP "JISSOKU-CHI TO ... YOSOKU-CHI" = the relationship between the actual value and the predicted value obtained by the theory
[less dominant interpretations] (A) = the relationship of the predicted value which was obtained by the actual value and the theory; (B) = the actual value, and the relationship of the predicted value which was obtained by the theory

(Rule 4) In Noun-1 'TO' ...... Noun-2, if Noun-1 and Noun-2 are the same nouns, the right boundary of the conjunction is almost always Noun-2.

(Rule 5) In Noun-1 'TO' ...... Noun-2, if Noun-1 and Noun-2 are not exactly the same but are nouns with the same morphemes, the right boundary is often Noun-2. In [ex-7] above, both of the head nouns of the conjuncts, JISSOKU-CHI (actual value) and YOSOKU-CHI (predicted value), have the same morpheme "CHI" (which means "value"). Thus, this rule can correctly determine the scope even if the special word "KANKEI" (relationship) is not present.

(Rule 6) If certain special words (like 'SONO', 'SORE-NO', etc., which roughly correspond to 'the', 'its' in English) appear in the position

    [phrases which modify noun phrases] Noun-1 'TO' <special word> Noun-2,

the modifiers preceding Noun-1 modify only Noun-1, but not the whole conjuncted noun phrase.

(Rule 7) In ...... Noun-1 'TO' ........... Noun-2, if Noun-1 and Noun-2 belong to the same specific semantic categories, like action nouns, abstract nouns etc., the right boundary is often Noun-2.

(Rule 8) In most conjuncted noun phrases, the structures of the conjuncts are well-balanced. Therefore, if a relative clause precedes the first conjunct and the length of the second conjunct (the number of words between 'TO' and Noun-2) is short, as in

    [relative clause] Noun-1 'TO' ....... Noun-2,

the relative clause modifies both conjuncts; that is, the antecedent of the relative clause is the whole conjuncted phrase.

These heuristic rules are based on different levels of information (some are based on surface lexical items, some on the morphemes of words, some on semantic information) and may lead to different decisions about scopes. However, we can distinguish strong heuristic rules (i.e. rules which almost always give correct scopes when they are applied) from others. In fact, there exists an ordering of heuristic rules according to their strength. Rules (1), (2), (3), (4) and (6), for example, almost always succeed, while rules like (7) and (8) often lead to wrong decisions. Rules like (7) and (8) should be treated as default rules which are applied only when the other, stronger rules cannot decide the scopes. We can define in GRADE an arbitrary ordering of rule applications (a schematic sketch of such strength-ordered application is given below). This capability of controlling the sequences of rule applications is essential in integrating heuristic rules based on heterogeneous levels of information into a unified set of rules. Note that most of these rules cannot be naturally expressed by ordinary CFG rules. Rule (2), for example, is a rule which blocks the application of an ordinary CFG rule such as

    NP ---> NP <case-particle> NO N

when the <case-particle> is 'TO' and a conjunctive particle 'TO' precedes this sequence of words.
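The strength-ordered application referenced above can be sketched as follows. The rule functions are assumed to return a right-boundary position or None when they do not apply; the names and the caricatured rules are illustrative, not actual GRADE rules.

    def decide_scope(phrase, strong_rules, default_rules):
        for rule in strong_rules:          # e.g. rules (1), (2), (3), (4), (6)
            boundary = rule(phrase)
            if boundary is not None:
                return boundary            # a strong rule settles the scope
        for rule in default_rules:         # e.g. rules (7), (8)
            boundary = rule(phrase)
            if boundary is not None:
                return boundary            # weak rules apply only as defaults
        return None                        # leave the scope undecided

    # Caricatures for illustration: a second 'TO' delimiter (rule (2)) wins
    # over a semantic-category default (rule (7)).
    strong = [lambda p: p.get("second_TO_position")]
    default = [lambda p: p.get("same_category_position")]
    print(decide_scope({"second_TO_position": 2}, strong, default))  # -> 2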
4-3 Determination of Scopes

Scopes of conjuncted noun phrases often overlap with scopes of relative clauses, which makes the problem of scope determination more complicated. For a surface sequence of phrases like
    NP-1 "TO" NP-2 <case-particle> ... <verb> NP-3
there are two possible relationships between the scope of the conjuncted noun phrase and the relative clause:

(1) [[NP-1 "TO" NP-2] <case-particle> ... <verb>] NP-3
    - NP-1 and NP-2 form a conjuncted noun phrase inside a relative clause whose antecedent is NP-3

(2) NP-1 "TO" [[NP-2 <case-particle> ... <verb>] NP-3]
    - the relative clause modifies NP-3, and NP-1 is conjuncted with the resulting noun phrase

This ambiguity, together with the genuine ambiguities in scopes of conjuncted noun phrases discussed in 4-2, produces combinatorial interpretations in CFG grammars, most of which are linguistically possible but practically unthinkable. It is not only inefficient but also almost impossible to compare such an enormous number of linguistically possible structures after they have been generated. In our analysis grammar, a set of scope decision rules is applied in the early stages of processing in order to block the generation of combinatorial interpretations. In fact, structure (2), in which a relative clause exists within the scope of a conjuncted noun phrase, is relatively rare in real texts, especially when the relative clause is rather long; such constructions with long relative clauses are a kind of garden path sentence. Therefore, unless strong heuristic rules like (2), (3) and (4) in 4-2 suggest structure (2), structure (1) is adopted as the first choice (note that, in [ex-7] in 4-2, the strong heuristic rule (3) suggests structure (2)). Since the result of such a decision is explicitly expressed in the tree by a SCOPE-OF-CONJUNCTION node, and the grammar rules in the later stages of processing work on this structure, the other interpretations of scopes will not be tried unless the first choice fails at a later stage for some reason or alternative interpretations are explicitly requested by a human operator. Note that a structure like
    [[... <verb>] NP-2] "TO" [[... <verb>] NP-3]
in which each conjunct of the conjuncted noun phrase contains its own relative clause (with NP-2 and NP-3 as antecedents), which is linguistically possible but extremely rare in real texts, is naturally blocked.

4-4 Sentence Relationships and Outer Case Analysis

Corresponding to English subordinators and coordinators like "although", "in order to", "and" etc., we have several different syntactic constructions, as follows:

(1) [S1 ... (verb with a specific inflection form)] [S2 ...]
(2) [S1 ... (verb)] (a postpositional particle) [S2 ...]
(3) [S1 ... (verb)] (a conjunctive noun) [S2 ...]

Construction (1) roughly corresponds to English coordinate constructions, and (2) and (3) to English subordinate constructions. However, the correspondence between the forms of Japanese and English sentence connections is not so straightforward. Some postpositional particles in (2), for example, are used to express several different semantic relationships between sentences, and therefore should be translated into different subordinators in English according to the semantic relationships. The postpositional particle "TAME" expresses either "purpose-action" relationships or "cause-effect" relationships. In order to disambiguate the semantic relationships expressed by "TAME", a set of lexical rules is defined in the dictionary entry of "TAME". The rules are roughly as follows:

(1) If S1 expresses a completed action or a stative assertion, the relationship is "cause-effect".
(2) If S1 expresses neither a completed event nor a stative assertion, and S2 expresses a controllable action, the relationship is "purpose-action".

[ex-8]
(A) S1: TOKYO-NI IT-TEITA (Tokyo / to go / aspect formative) TAME
    S2: KAIGI-NI SHUSSEKI DEKINAKAT-TA (meeting / to attend / cannot / tense formative: past)
    S1: completed action (the aspect formative "TEITA" means completion of an action)
    ---> [cause-effect]
    = Because I was in Tokyo, I couldn't attend the meeting.

(B) S1: TOKYO-NI IKU (Tokyo / to go) TAME
    S2: KAIGI-NI SHUSSEKI DEKINAI (meeting / to attend / cannot)
    S1: neither a completed action nor a stative assertion
    S2: "whether I can attend the meeting or not" is not controllable
    ---> [cause-effect]
    = Because I go to Tokyo, I cannot attend the meeting.

(C) S1: TOKYO-NI IKU (Tokyo / to go) TAME
    S2: KIPPU-O KAT-TA (ticket / to buy / tense formative: past)
    S1: neither a completed action nor a stative assertion
    S2: volitional action
    ---> [purpose-action]
    = In order to go to Tokyo, I bought a ticket.

Note that whether S1 expresses a completed action or not is determined in the preceding phases by rules which utilize the aspectual features of verbs described in the dictionary and the aspect formatives following the verbs (the classification of Japanese verbs based on their aspectual features, and related topics, are discussed in [8]). We have already written rules (some of which are heuristic ones) for 57 postpositional particles for conjunctions of sentences, like "TAME".

Postpositional particles for cases, which follow noun phrases and express case relationships, are also very ambiguous in the sense that they express several different deep cases. While the interpretation of inner case elements is directly given in the verb dictionary in the form of a mapping between surface case particles and their deep case interpretations, the outer case elements have to be semantically interpreted by referring to the semantic categories of noun phrases and the properties of verbs. Lexical rules for 62 case particles have also been implemented and tested. (A small sketch of this style of lexicon-driven disambiguation is given below.)
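To convey the flavor of these lexical rules, here is a small illustrative sketch, not the GRADE rule notation itself; the feature names (aspect, volitional, etc.) are invented for the example and stand in for the aspectual information computed by the earlier phases of the grammar.

    # Illustrative sketch of the two "TAME" rules above.
    def completed_action(s1):
        return s1.get("aspect") == "TEITA"     # completion formative present

    def stative(s1):
        return s1.get("category") == "stative"

    def controllable(s2):
        return s2.get("volitional", False)

    def interpret_tame(s1, s2):
        """Disambiguate 'S1 TAME S2' into a deep sentence relationship."""
        if completed_action(s1) or stative(s1):
            return "cause-effect"               # rule (1)
        if controllable(s2):
            return "purpose-action"             # rule (2)
        return "cause-effect"                   # default, as in [ex-8](B)

    # [ex-8](C): S1 is not completed, S2 is volitional.
    print(interpret_tame({"aspect": None}, {"volitional": True}))
    # -> "purpose-action"

The same pattern, a small decision procedure attached to an individual lexical item, carries over to the lexical rules for the 62 case particles.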
5 Conclusions

The analysis grammar of Japanese in the Mu-project has been discussed in this paper. By integrating various levels of heuristic information, the grammar can work very efficiently to produce the most natural and preferable reading as the first output result, without any extensive semantic processing.

The concept of procedural grammars was originally proposed by Winograd [9] and independently pursued by other research groups [10]. However, their claims have not been well appreciated by other researchers (or even by themselves). One often argues against procedural grammars, saying that the linguistic facts Winograd's grammar captures can also be expressed by ATN, and that the expressive power of ATN is equivalent to that of the augmented CFG; therefore, procedural grammars have no advantages over the augmented CFG and just make whole grammars complicated and hard to maintain. This argument, however, misses an important point and confuses procedural grammar with the representation of grammars in the form of programs (as shown in Winograd [9]). We showed in this paper that the rules which give structural constraints on final analysis results and the rules which choose the most preferable linguistic structures (or which block "garden path" structures) are different in nature. In order to integrate the latter type of rules into a unified analysis grammar, it is essential to control the sequence of rule applications explicitly and to introduce strategic knowledge into grammar organizations. Furthermore, the introduction of control specifications does not necessarily lead to a grammar in the form of programs. Our grammar writing system GRADE allows a rule-based specification of grammar, and a grammar developed by using GRADE is easy to maintain.

We also discussed the usefulness of lexicon-driven processing in treating idiosyncratic phenomena in natural languages. Lexicon-driven processing is extremely useful in the transfer phase of machine translation systems, because the transfer of lexical items (the selection of appropriate target lexical items) is highly dependent on each lexical item [11].

The current version of our analysis grammar works quite well on 1,000 sample sentences in real abstracts without any pre-editing.

Acknowledgements

Appreciation goes to the members of the Mu-project, especially to the members of the Japanese analysis group [Mr. E. Sumita (Japan IBM), Mr. M. Kato (Sord Co.), Mr. S. Taniguchi (Kyocera Co.), Mr. A. Kosaka (NEC Co.), Mr. H. Sakamoto (Oki Electric Co.), Miss H. Kume (JCS), Mr. N. Ishikawa (Kyoto Univ.)] who are engaged in implementing the comprehensive Japanese analysis grammar, and also to Dr. B. Vauquois, Dr. C. Boitet (Grenoble Univ., France) and Dr. P. Sabatier (CNRS, France) for their fruitful discussions and comments.

References

[1] B. Vauquois: La Traduction Automatique à Grenoble, Documents de Linguistique Quantitative, No. 24, Paris, Dunod, 1975.
[2] J. Nakamura et al.: Grammar Writing System (GRADE) of Mu-Machine Translation Project and its Characteristics, Proc. of COLING 84, 1984.
[3] J. Slocum: A Status Report on the LRC Machine Translation System, Working Paper LRC-82-3, Linguistic Research Center, Univ. of Texas, 1982.
[4] F. Pereira et al.: Definite Clause Grammars for Natural Language Analysis, Artificial Intelligence, Vol. 13, 1980.
[5] G. Gazdar: Phrase Structure Grammars and Natural Languages, Proc. of 8th IJCAI, 1983.
[6] Y. Wilks: Preference Semantics, in The Formal Semantics of Natural Language (ed. E. L. Keenan), Cambridge University Press, 1975.
[7] Y. Sakamoto et al.: Lexicon Features for Japanese Syntactic Analysis in Mu-Project-JE, Proc. of COLING 84, 1984.
[8] J. Tsujii: The Transfer Phase in an English-Japanese Translation System, Proc. of COLING 82, 1982.
[9] T. Winograd: Understanding Natural Language, Academic Press, 1975.
[10] C. Boitet et al.: Recent Developments in Russian-French Machine Translation at Grenoble, Linguistics, Vol. 19, 1981.
[11] M. Nagao et al.: Dealing with Incompleteness of Linguistic Knowledge in Language Translation, Proc. of COLING 84, 1984.
LEXICON-GRAMMAR AND THE SYNTACTIC ANALYSIS OF FRENCH

Maurice Gross
Laboratoire d'Automatique Documentaire et Linguistique <1>
University of Paris 7
2 place Jussieu
75251 Paris CEDEX 05, France

ABSTRACT

A lexicon-grammar is constituted of the elementary sentences of a language. Instead of considering words as basic syntactic units to which grammatical information is attached, we use simple sentences (subject-verb-objects) as dictionary entries. Hence, a full dictionary item is a simple sentence with a description of the corresponding distributional and transformational properties. The systematic study of French has led to an organization of its lexicon-grammar based on three main components:
- the lexicon-grammar of free sentences, that is, of sentences whose verb imposes selectional restrictions on its subject and complements (e.g. to fall, to eat, to watch),
- the lexicon-grammar of frozen or idiomatic expressions (e.g. N takes N into account, N raises a question),
- the lexicon-grammar of support verbs. These verbs do not have the common selectional restrictions, but more complex dependencies between subject and complement (e.g. to have, to make in N has an impact on N, N makes a certain impression on N).
These three components interact in specific ways. We present the structure of the lexicon-grammar built for French and we discuss its algorithmic implications for parsing.

The construction of a lexicon-grammar of French has led to an accumulation of linguistic information that should significantly bear on the procedures of automatic analysis of natural languages. We shall present the structure of a lexicon-grammar built for French <2> and will discuss its main algorithmic implications.

1. VERBS

The syntactic properties of French verbs have been limited in terms of the size of sentences, that is, by restricting the type of complements to object complements. We considered 3 main types of objects: direct, and with the prepositions à and de. Verbs have been selected from current dictionaries according to the reproducibility of the syntactic judgments carried out on them by a team of linguists. A set of about 10,000 verbs has thus been studied.

<1> E.R.A. 247 of the C.N.R.S., affiliated to the Universities Paris 7 and Paris VIII.
<2> Publication of the lexicon-grammar is under way. The main segments available are: Boons, Guillet, Leclère 1976a, 1976b and Gross 1975 for French verbs; Giry-Schneider 1978, A. Meunier 1981, de Négroni 1978 for nominalizations.

The properties systematically studied for each verb are the standard ones:
- distributional properties, such as human or non-human nouns, and their pronominal shapes (definite, relative, interrogative pronouns <3>, clitics), the possibility of sentential subjects and complements que S (that S), si S (whether S, if S), or reduced infinitive forms noted V Comp;
- transformational properties, such as passive, extraposition, cliticization, etc.

Altogether, 500 properties have been checked against the 10,000 verbs <4>. More precisely, each property can be viewed as a sentence form. Consider for example the transitive structure

(1) N0 V N1

We are using Z. S. Harris' notation for sentence structure: noun phrases are indexed by numerical subscripts, starting with the subject indexed by 0. We can note the property "human subject" in the following equivalent ways:

(2) Nhum V N1  or  N0 (:: Nhum) V N1

where the symbol :: is used to specify a structure.
A passive structure will be noted

(3) N1 be V-ed by N0

A transformation is a relation between two structures, noted "=": (1) = (3) corresponds to the Passive rule.

The syntactic information attached to simple sentences can thus be represented in a uniform way by means of a binary matrix (Table 1). Each row of the matrix corresponds to a verb, each column to a sentence form. When a verb enters into a sentence form, a "+" sign is placed at the intersection of the corresponding row and column; if not, a "-" sign.

The description of the French verbs does not have the shape of a 10,000 x 500 matrix. Because of its redundancy (cf. note 4), the matrix has been broken down into about 50 submatrices whose size is 200 x 40 on the average. It is such a system of submatrices that we call a lexicon-grammar.

<3> Actually, the shape of interrogative pronouns, qui (who) vs. que-quoi (what), has been used to define a formal notion of object.
<4> Not all properties are relevant to each of the 10,000 verbs. For example, the properties of clitics associated with object complements are irrelevant to intransitive verbs.

[Table 1: Intransitive verbs (from Boons, Guillet, Leclère 1976a); a fragment of the binary matrix of verbs (rows) against sentence forms (columns) whose original layout cannot be reproduced here.]

Although the 3 prepositions "zero", à and de are felt and described as the basic ones by traditional grammarians, the descriptions have never received any objective basis. The lexicon-grammar we have constructed provides a general picture of the shapes of objects in French. The numerical distribution of object patterns is given in Table 2, according to their number in a sentence and to their prepositional shape.

Table 2: Distribution of objects

    N0 V                  1,800
    N0 V N1               3,700
    N0 V à N1               350
    N0 V de N1              500
    N0 V N1 N2              150
    N0 V N1 à N2          1,600
    N0 V N1 de N2         1,900
    N0 V à N1 à N2            3
    N0 V à N1 de N2          10
    N0 V de N1 de N2          1

As can be seen in Table 2, direct objects are the most numerous in the lexicon. Also, we have not observed a single example of a verb with 3 objects according to our definition. In 2. and 3. we will make more precise the lexical nature of the N1's attached to the verbs.

The signs in a row of the matrix provide the syntactic paradigm of a verb, that is, the sentence forms into which the verb may enter. The lexicon-grammar is in computer form. Thus, by sorting the rows of signs, one can construct equivalence classes for verbs: two verbs are in the same class if their two rows of signs are identical. We have obtained the following result: for 10,000 verbs there are about 8,000 classes. On the average, each class contains 1.25 verbs. This statistical result can easily be strengthened: when one studies the classes that contain more than one verb, it is always possible to find syntactic properties not yet in the matrix that will separate the verbs. Hence, if our description were extended, each verb would have a unique syntactic paradigm. Thus, the correspondence between a verb morpheme and the set of sentence forms where it may occur is one-to-one. (A schematic illustration of the row-sorting construction is given below.)
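The row-sorting construction is easy to picture. The following sketch is illustrative only; the rows shown are invented toy data, not the published matrix:

    # Illustrative sketch: rows of the binary matrix keyed by verb.
    # Each character marks whether the verb enters a given sentence form.
    matrix = {
        "tomber":   "+-+--",      # toy rows, not the actual LADL data
        "manger":   "++-+-",
        "regarder": "++-+-",
    }

    classes = {}
    for verb, row in matrix.items():
        classes.setdefault(row, []).append(verb)   # identical rows share a class

    # Here "manger" and "regarder" fall into one class, and would remain
    # together only until a further column is added that distinguishes them.
    print(sorted(classes.values()))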
Another way of stating this result is by saying that structures depend on individual lexical elements, which leads to the following representation of structures:

    N0 eat N1
    N0 owe N1 to N2

We still use class symbols to describe noun phrases, but specific verbs must appear in each structure. Class symbols of verbs are no longer used, since they cannot determine the syntactic behaviour of individual verbs.

The nature of the lexicon-grammar should then become clearer. An entry of the lexicon-grammar of verbs is a simple sentence form with an explicit verb appearing in a row. In general, the declarative sentence is taken as the representative element of the equivalence class of structures corresponding to the "+" signs of a row.

The lexicon-grammar suggests a new component for parsing algorithms. This component is limited to elementary sentences. It includes the following steps:
- (A) Verbs are morphologically recognized in the input string.
- (B) The dictionary is looked up; that is, the space of the lexicon-grammar that contains the verbs is searched for the input verbs.
- (C) A verb being located in the matrix, its row of signs provides a set of sentence forms. These dictionary forms are matched with the input string.

This algorithm is incomplete in several respects:
- In step (C), matching one of the dictionary shapes with the input string may involve another component of the grammar. The structures represented in the lexicon-grammar are elementary structures, subject only to "unary" transformations, in the sense of Harris' transformations or of early generative grammar (Chomsky 1955). Binary or generalized transformations apply to elementary sentences and may change their appearance in the sentence under analysis (e.g. conjunction reduction). As a consequence, their effect may have to be taken into account in the matching process.
- Looking up the matrix dictionary may result in the finding of several entries with the same form (homographs) or of several uses of a given entry. We will see that these situations are quite common.
- In general, more than one pattern may match the input; multiple paths of analysis are thus generated and require book-keeping.

We will come back to these aspects of syntactic computation. (A schematic rendering of steps (A)-(C) follows.)
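As a rough illustration of steps (A)-(C), consider the following sketch. The morphological recognition and the matching of sentence forms are drastically simplified, and all entries and names are invented for the example:

    # Illustrative sketch of the elementary-sentence parsing component.
    LEXICON_GRAMMAR = {
        # verb -> sentence forms marked '+' in its row (toy entries)
        "owe": ["N0 V N1 to N2", "N1 be V-ed to N2 by N0"],
        "eat": ["N0 V N1", "N0 V", "N1 be V-ed by N0"],
    }

    def recognize_verbs(tokens):
        """Step (A): trivial stand-in for morphological recognition."""
        return [t for t in tokens if t in LEXICON_GRAMMAR]

    def matches(form, tokens):
        """Step (C) placeholder: a real matcher must also undo binary
        transformations (e.g. conjunction reduction) before comparing."""
        return len(tokens) >= len(form.split()) - 1   # crude length check

    def analyze(tokens):
        analyses = []
        for verb in recognize_verbs(tokens):          # step (A)
            for form in LEXICON_GRAMMAR[verb]:        # step (B): row look-up
                if matches(form, tokens):             # step (C): matching
                    analyses.append((verb, form))
        return analyses                               # multiple paths are kept

    print(analyze("Bob may owe money to Jo".split()))

The book-keeping problem mentioned above shows up directly: analyze returns every surviving (verb, sentence-form) pair rather than a single analysis.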
We now present two other components of the lexicon-grammar of simple sentences.

2. IDIOMS

The sentences we just described can be called free sentences, for the lexical choices of nouns in each noun phrase Ni have certain degrees of freedom. We use this distributional feature to separate free from frozen sentences, that is, from sentences with an idiomatic part. The main difference between free and frozen sentences can be stated in terms of the distributions of nouns:
- in a frozen nominal position, a change of noun either changes the meaning of the expression to an unrelated expression, as in
    to lay down one's arms vs. to lay down one's feet
or else the variant noun does not introduce any difference in meaning (up to stylistic differences), as in
    to put someone off the (scent, track, trail)
or else an idiomatic noun appears at the same level as ordinary nouns of the distribution, and the general meaning of the (free) expression is preserved, as in
    to miss (an opportunity, the bus);
- in a free position, a change of noun introduces a change of meaning that does not affect the general meaning of the whole sentence. For example, the two sentences
    The boy ate the apple
    My sister ate the pie
which differ by distributional changes in subject and object positions, have the same general meaning: the changes can be considered to be localized to the arguments of the predicate or function with constant meaning EAT.

We have systematically described the idiomatic sentences of French, making use of the framework developed for the free sentences. Sentential idioms have been classified according to the nature (frozen or not) of their arguments (subject and complements). With respect to the structures of Table 2, a new classificatory feature has been introduced: the possibility for a frozen noun or noun phrase to accept a free noun complement. Thus, for example, we built two classes CP1 and CPN corresponding to the two types of constructions:

    N0 V Prep C1 :: Jo plays on words
    N0 V Prep Nhum's C1 :: Jo got on Bob's nerves

The symbol C refers to a frozen nominal position and Prep stands for preposition. Although frozen structures tend to undergo fewer transformations than the free forms, we found that every transformation that applies to a free structure also applies to some frozen structures. There is no qualitative difference between free and frozen structures from the syntactic point of view. As a consequence, we can use the same type of representation: a matrix where each idiomatic combination of words appears in a row and each sentence shape in a column (cf. Tables 3 and 4).

[Table 3: Frozen adverbs; a fragment of the matrix pairing verbs (e.g. VENIR, PARTIR, DIRE, TRICHER, CUIRE) with frozen adverbial complements (e.g. à l'amiable, à l'arraché, à toute allure, à l'aveuglette, à la broche); the original column layout cannot be reproduced here.]

We have systematically classified 15,000 idiomatic sentences. When one compares this figure with those of Table 2, one must conclude that frozen sentences constitute one of the most important components of the lexicon-grammar.

An important lexical feature of frozen sentences should be stressed. There are examples such as
    They went astray
where a word such as astray cannot be found in any other syntactically unrelated sentence; notice that the causative sentence They led them astray is considered as syntactically related. In this case, the expression can be directly recognized by dictionary look-up. But such examples are rare. In general, a frozen expression is a compound of words that are also used in free expressions with unrelated meanings. Hence, frozen sentences are in general ambiguous, having an idiomatic meaning and a literal meaning. However, the literal meanings are almost always incongruous in the context where the idiomatic meaning is intended (unless of course the author of the utterance played on words).
Thus, when a word combination that constitutes an idiom is encountered in a text, one is practically ensured that the corresponding meaning is the idiomatic one.

[Table 4: Frozen sentences; a fragment of the matrix whose rows pair verbs (e.g. CONNAITRE, DETENIR, FAIRE, FORMER, FRANCHIR) with frozen complements (e.g. LE TRUC, LA VERITE, HARA-KIRI, JURISPRUDENCE, DET CAP); the original column layout cannot be reproduced here.]

Returning to the algorithm sketched in 1., we see that we have to modify steps (A) and (B) in order to recognize frozen expressions:
- Not only verbs, but nouns have to be immediately located in the input string.
- The verb and noun columns of the lexicon-grammar of frozen expressions have to be looked up for combinations of words.

It is interesting to note that there is no ground for stating a priority such as "look up verbs before nouns" or the reverse. Rather, the nature of frozen forms suggests simultaneous searches for the composing words, as in the sketch below.
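A minimal sketch of such a simultaneous search might look as follows; the idiom table and the input are invented for the illustration:

    # Illustrative sketch: frozen expressions indexed by their word set,
    # so verbs and nouns are searched simultaneously, with no priority.
    FROZEN = {
        frozenset(["kick", "bucket"]): "N0 kick the bucket",
        frozenset(["take", "bull", "horns"]): "N0 take the bull by the horns",
    }

    def find_idioms(tokens):
        content = set(tokens)
        hits = []
        for words, entry in FROZEN.items():
            if words <= content:       # all composing words are present
                hits.append(entry)     # the literal reading is then unlikely
        return hits

    # Assuming the input has already been lemmatized by step (A):
    print(find_idioms(["Jo", "kick", "the", "bucket"]))

Indexing on word sets rather than on a head word is one way to honor the observation that neither the verb nor the noun has priority in identifying the idiom.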
About the difference between free and frozen sentences, we have observed that many free sentences (if not all) have highly restricted nominal positions. Consider for example the entry N0 smoke N1 in
    Jo smokes the finest tobacco
In the direct object complement, one will find few other nouns: nouns of other smoking material, objects made of smoking material such as cigarette, cigar, pipe, and brand names for these objects. This is a common situation with technical verbs. Such examples suggest that, semantically at least, the nominal arguments are limited to one noun, which comes close to having the status of a frozen expression. Thus, to smoke would have here one complement, perhaps tobacco, and all other nouns occurring in its place would be brought in by syntactic operations. We consider that this situation is quite general, although not always transparent.

Our analysis of free elementary sentences has shown that when subjects and objects allow wide variations for their nouns, then well-defined syntactic operations account for the variation:
- separation of entries: For example, there is another verb N0 smoke N1, as in They smoke meat, and a third one, N0 smoke N1 out, as in They smoked the room out; or consider the verb to eat in
    Rust ate both rear wings of my car
This verb will constitute an entry different from the one in to eat lamb;
- various zeroings: The following sentence pairs will be related by different deletions:
    Bob ate a nice preparation = Bob ate a nice preparation of lamb
    Bob ate a whole bakery = Bob ate a whole bakery of apple pies
- Other operations introduce nouns in syntactic positions where they are foreign to the semantic distributions; among them are raising operations, which induce distributional differences such as
    I imagined the situation
    I imagined the bridge destroyed
where situation is the "natural" direct object of to imagine, while bridge is derived;
- other restructuration operations (Guillet, Leclère 1981), as between the two sentences
    This confirmed Bob's opinion of Jo
    This confirmed Bob in his opinion of Jo

Although the full lexicon of French has not yet been analyzed from this point of view, we can plausibly assert that a large class of nominal distributions could be made semantically regular by using Z. S. Harris' account of elementary distributions, namely, by determining a basic form for each meaning, for example
    A person eats food
with undetermined human subject and characteristic object, and by introducing classificatory sentences that describe the semantic universe:
    (The boy, My sister) is a person, etc.
    (A pie, This cake) is food, etc.
For example, the with-complement that can be occupied by an internal noun in the proper meaning can be omitted: Jo tilled the turkey with • certain filling = Jo filled the turkey 5 It is doubtful that actual nouns such as food will be available in the language for each distribution of each entry, but then, expressions such as smoking stuff can be used {in the object of to smoke), again avoiding the use ot abstract features. iThis is not the case in the figurative meaning: *Jo filled hie report How to represent (1) and (2) is a problem in terms of number of entries. On the one hand, the two constructions have common syntactic and semantic features, on the other, they ere significantly different in form and content. Setting up two entries is • solution, but not a satisfactory one, since both entries are left unrelated. A possible solution in the framework of lexicon-grammars is to consider having just one entry: N O fill N 1 with N 2 and to specify N t lexJcally by means of columns of the matrix. For example N 1 =: food N t =: text 11~en, the content of N 2 is largely determined end has to be roughly of the type N 2 =: stuffing N 2 =: eubtext An inclusion relation <6> holds between the two complements. We can write for this relation N 2 is in N 1 But now, in our parsing procedure, we have to compensate for the tact that in the lexicon-grammar, the nouns that are represented in the free positions ere not the ones that in general occur in the input sentences. In consequence, occurrences of nouns will have to undergo a complex process of identification that will determine whether they have been introduced by syntactic operations (e.g. restructuration), or by chains of substitutions defined by classificatory sentences, or by both processes. 3. SUPPORT AND OPERATOR VERB8 We have alluded to the tact that only • certain class of contences could be reduced to entries of the lexicon-gremmr as presented in 1. and 2. We will now give examples of simple sentences that have structures different of the structures of free and frozen sentences, in sentences such as (1) Her remarks made no difference (2) Her remarks have some (importance for, influence) on Jo (3) Her remarks ere in contradiction with your plan it is difficult to argue that the verbs to make, to have and to be in semantically select their subjects end complement& Rather, these verbs should be considered as auxiliaries. The predicative element is here the nominal form in complement position. This intuition can be given a formal basis. Let us look at nominalizationa as being relations between two simple sentences (Z.S. Harris 1964), as in 6 This relation is an extension of the Vaup relations of 3. To fill could be considered as a (causative) Vop. 279 Max walked : Max look a walk Her remarks are important for Jo = Her remarks are of a certain importance for Jo = Her remarks have s certain importance for Jo Jo resembles Max : Jo has a certain resemblance with Max = Jo (bears. carries) a certain resemblance with Max -- There is a certain resemblance between Jo and Max It is then clear that the roots walk, important and resemble select the other noun phrases. We call support verbs (Vsup) the verbs in such sentences that have no selectional function, Some support verbs are semantically neutral, others introduce modal or aspectual meanings, as for example in Bob loves Jo = Bob Is in love with Jo = Bob fell in love with Jo = Bob has a deep love for Jo to tall, as other motion verbs do, introduces an inchoative meaning. 
In this example, the mare semantm relation holds between Bob and love, and the support verbs simply add their meaning to the relation. If we use s dependency tree to schematize the relations in simple sentences, we can oppose ordinary verbs with one obleCt and support verbs of superficially identical structures such as in figure 1: described Ma~x love Bo b ' s ~ ~ Jo Two problems arise in connection with the distribution of support verbs: - s noun or a nommalized verb accepts a certain set of support verbs and this set varies with each nominal; not every verb is a support verb; thus in the sentence (4) Max described Bob'a love for Jo to describe is not a Vsup. The question is then to delimit the set of Vaups, if such a set can be isolated, or else to provide general conditions under which s verb acts as a Vaup, One of the structural features that separates support verbs from other verbs is the possibility of clefting noun complements. For example, for Jo is a noun complement of the same type in both structures, but we observe *If is for Jo that Max described Bob'a love It is for Jo that Bob has a deep love The main semantic difference between the two constructions lies in the cyclic structure of the graph. This cyclic structure is also found in more complex sentences such as (5) This note put her remarks in contradiction with your plan (6) Bob gave a certain importance to her remarks Both verbs fo put and to give have two complements, exactly as in sentences such as (7) Bob put (the book) 1 (in the drawe~| 2 (8) Bob gave (e book) t (to Jo) 2 Whde in (7) and (8), there is no evidence of any formal relation between both complements, in (5) and (6) we find dependencies already observed on support verbs (cf. figure 2). gave B ° ~ m s r k s has BJ ove put The notre ~ her remarks, in contra~ctmn \ with your plan Figure I Figure 2 280 The verbs to put and to give are semantically minimal, for they only introduce s causative and/or an agentive argument with respect to the sentence with Vsup. We call such verbs operator verbs (Vop). There are other operator verbs that add various modaltties to the minimal meanings, as in The note introduced a contradiction between her remarks and your plan Bob attributed a certain importance to her remarks Other syntactic shapes are lound: Bob credsted her remarks with a certain importance Again, the set of nouns (supported by o Vsup) to which the Vops apply vary from verb to verb. As a consequence, we have to represent the distributions of Vsups and Vops with respect to nominals by means of a matrix such as the one in Table 4'. In each row, we place a noun and each column contains a support verb or an operator verb. A preliminary classification of Ns (and V-ns) has been made in terms of a few elementary support verbs (e.g. to have, to be Prep). In a sense, this representation is symmetrical with the representation of free sentences. With free sentences, the verb is taken as the central item of the sentence. Varying then the nouns allowed with the verb does not change fundamentally the meaning of the corresponding sentences. With support verbs, the central item is a noun. Varying then the support verbs only introduces a distributional-like change in meaning. The recognition procedure has to be modified, in order to account for this component of the language: - first, the took-up procedure must determine whether s verb is an ordinary verb (i.e. 
an entry found in a row of the lexicon-grammar) or a Vaup or a Vop, which are to be found in columns; - simultaneously, nouns have to be looked up in order to cheek their combination with support verbs. 4. CONCLUSION We have shown that simple sentence structures were of varied types. At the same time, we have seen that their representation in terms of the entries of traditional "linear" dictionaries, that is, In terms of words alphabetically or otherwise ordered, is inadequate. An improvement appears to involve the look-up of two-dimensional patterns, for example the matrices we proposed for frozen sentences and their generalization to support verbs and operator verbs. More generally, syntactic structures are determined by combinat|ons of a verb morpheme with one or more noun morpheme(s). Hence, the general way to access the lexicon will have to be through the selectional matrix of Tables 3 and 4, In practice, syntactic computations are context-free computations in natural language processing. Context-free algorithms have been studied in many respects by computer scientists, theoreticians and speciahsts ot programming languages. The principles of these algorithms are clearly understood and currently in use, even for natural languages where new problems arise because of the numerous ambiguities and the various terminologies attached to each theoretical viewpoint. The tact that context-free recognition is a mastered technique has certainly contributed to the shaping of the grammars used in automatic parsing. The numerous sample grammars presented so far are practically all context-tree. There is also a deep linguistic reason for building context-free grammars: natural languages use embedding processes and tend to avoid discontinuous structures. Much less attention has been peJd to the complex syntactic phenomena occurring Jn simple sentences and to the organization of the lexicon. The tact that we could not separate the syntactic properties of verbs from their lexical features has led us to construct a representation for linguistic phenomena which is more specJhc than the current context-free models. A context-free component will still be useful in the parsing procesS, but it will be relevant only to embedded structures found in complex sentences, with not much incidence on meaning, To summarize, the syntactic patterns are determined by pairs (verb, noun): - the frozen sentence N O k~ck the bucket Js thus entirely specified, while the pair (take, bull) needs to be disambiguated by the second complement by the horns, requiring thus a more complex device to be identified; (take, walk) and (take, food) are support sentences, so are (have, faith) and (have, food); the verbs have, kick and take together with concrete obiect select ordinary sentence forms. But the selectional process for structures may not be direct. The words in the previously discussed pairs may not appear in the input text. Words appearing in the input are then related to the words in the selectJonal matrix by: cfassifJcatlonal relations: food classifies cake, soup, etc. concrete obiect classifies ball, chair, etc. - relations between support sentences, such as Jo (had, took,threw out) some food Jo (took, was out for, went out for) a walk Jo (has, keeps, looses) faith in Bob relations between support and operator sentences: Thie gave to Jo faith in Bob All these relations in fact add a third dimension to the selectional matrix. The complete selectional device is now a complex network of relations that cross-relates the entries. 
It will have to be organized in order to optimize the speed of parsing algorithms. 281 REFERENCES Boons, J.-P, 1971. Metaphore et balsse de la redondance, Langue tran~a/se 11, ParDs: Larousse, pp. 15-t6, Boons, J., GuHlet, A. and Lecl~re, Ch. 1976a. La structure des phrases slmples en trancals. Constructions intrans/hvea, Droz, Geneva, 377 p. Boons, J., Gutllet, A. and Lecl~re, Ch. 1976b. La structure des phrases simplea en franFals. Clas~ea de constructions transitives, Rapport de recherches NO 6, Paris: University Paris 7, L.A.D.L., t43 p. Freckleton, P. 1984. A Systemahc Classlhcation of Frozen Expressions in English° Doctoral Thesis, University of Paris 7, L.A.D.L. Glry-Schnelder, J. 1978. Lea nommahsations en franFala. L'op~rateur FAIRE, Geneva: Droz, 414 p. Gross. M. 1975. M#thodes en ayntaxe, Paris: Hermann, 414 p. Gross, Maunce 1982. Une classificatmn des phrases tig~es du fran|:a=s, Revue qudb#coise de hngulstlque, Vol. 11, No 2, Montreal : Presses de I'Universitb du Quebec & Montreal, pp. 151-18,5. Gulllet, A. and Leclbre. Ch. 1981. Restructuratlon du groupe nom0nal, Langagea, Par=s : Larousse, pp, 99-125. Harris, Z.S. 1964. The elementary Tranformations, Transformations and Discourse Analysis Papers 54, m Harris, Zeltig 5. 1970, Papers m Structural and Transformational Linguratics, Reldel, Dordrecht. pp. 482-532. Harris, Zeltig 1983. A Grammar of Enghsh on Mathematical Principles, New York : Wiley Intersc=ence,429 p. Meumer, A. 1'377. Sur les bases syntaxlques de la morphologle dGrlvatlonnelle, Lingv;stlcae Investlgatlones 1:2, John Benlamms B.V., Amsterdam, pp. 287-331. i'l(~g ron=-Peyre, D. 1978. Nommalisations par ETRE EN et r~flexJvatlon, Lingvlstlcae Investlgationea I1:1, John Benlamms B.V., Amsterdam, pp, 127-163. 282
Building a Large Knowledge Base for a Natural Language System

Jerry R. Hobbs
Artificial Intelligence Center, SRI International
and
Center for the Study of Language and Information, Stanford University

Abstract

A sophisticated natural language system requires a large knowledge base. A methodology is described for constructing one in a principled way. Facts are selected for the knowledge base by determining what facts are linguistically presupposed by a text in the domain of interest. The facts are sorted into clusters, and within each cluster they are organized according to their logical dependencies. Finally, the facts are encoded as predicate calculus axioms.

1. The Problem <1>

<1> I am indebted to Bob Amsler and Don Walker for discussions concerning this work. This research was supported by NIH Grant LM03611 from the National Library of Medicine, by Grant IST-8209346 from the National Science Foundation, and by a gift from the Systems Development Foundation.

It is well known that the interpretation of natural language discourse can require arbitrarily detailed world knowledge and that a sophisticated natural language system must have a large knowledge base. But heretofore, the knowledge bases in natural language systems have either encoded only a few kinds of knowledge - e.g., sort hierarchies - or facts in only very narrow domains. The aim of this paper is to present a methodology for constructing an intermediate-size knowledge base for a natural language system, which constitutes a manageable and principled midway point between these simple knowledge bases and the impossibly detailed knowledge bases that people seem to use.

The work described in this paper has been carried out as part of a project to build a system for natural language access to a computerized medical textbook on hepatitis. The user asks a question in English, and rather than attempting to answer it, the system returns the passages in the text relevant to the question. The English query is translated into a logical form by a syntactic and semantic translation component [Grosz et al., 1982]. The textbook is represented by a "text structure", consisting, among other things, of summaries of the contents of individual passages, expressed in a logical language. Inference procedures, making use of a knowledge base, seek to match the logical form
One way to build the knowledge base would have been to analyze the queries in some target dialogs we collected to determine what facts they seem to require, and to put just these facts into our knowledge base. llowever, we are interested in discovering gcneral principles of selection and structuring of such intermediate-sized knowledge bases, principles that would give us reason to believe our knowl- edge base would be useful for unanticipated queries. Thus we have developed a three-stage methodology: I. Select the facts that should be in the knowledge base by determining what facts are linguistically presul)posed by the medical textbook. This gives us a very good indication of what knowledge of the domain the user is expected to bring to the textbook and would bring to the system. 2. Organize the facts into clusters and organize the facts within each cluster according to the logical dependencies among the concepts they involve. 3. Encode the facts as predicate calculus axioms, regu- larizing the concepts, or predicates, as necessary. These stages are discussed in the next three sections. 2. Selecting the Facts To be useful, a natural language system nmst have a 283 large vocabulary. Moreover, when one sets out to axioma- tize a domain, unl-ss one haz a rich set of predicates and facts to be respousible f-r, a sense of coherence in the ax- iomatizatio, i~ hard to achieve. One's efforts seem ad hoe. So the first step in building the knowledge base is to make up an extensive list of words, or predicates, or concepts (the three terms will be used interchangeably here), and an extensiv,~ list of rebwant facts about these predicates. We chose about 350 w,,rds from our target dialogs and headings in the textl,-ok :,ld encoded the relevant facts involving these con~'epts. Because there are dozens of facts one could state involving any one of these predicates, we were faced with the problem of determining those facts that would be most pertinent for natural language understanding in this domain. Our principal tool at this stage was a full-sentence con- cordance of the textbook, displaying the contexts in which the words were used. Our method was to examine these contexts and to ask what facts about each concept were required to justify each of these uses, what did their uses linguist ically presuppose. The three principal linguistic phenomena we looked at were predicate-argument relations, compound nominals, and conjoined phrases. As an example of the first, consider two uses of the word "data". The phrase "extensive data on histocoml)atibility antigens" points to the fact about data that it. is a set (justifying "extensive") of particular facts abo~d some subjecl (justifying the "on" argument). The phrase "the data do not consistently show ..." points to the fact that data is mssembled to support some conclu- sion. To arrive at the facts, we ask questions like "What is data that it can be extensive or that it can show some- thing?" For coml)ound nominals we ask, "What general facts about the two nouns underlie the implicit relation?" So for "casual contact circumstances" we posit that contact is a concomitant of activities, and the phrase "contact mode of transmission" leads us to the fact that contact possibly leads to transmi.-:sion of an agent. Conjoined noun phrases indicate the existence of a superordinate in a sort hierar- chy covering all the conjoined concepts. 
Thus, the phrase "epidemiolo~, clinical aspects, pathology, diagnosis, and marmgement" tells us to encode the facts that all of these are aspects of a disease. As an ill,stration of the method, let us examine various uses of the word "disease" to see what facts it suggests: • "destructive liver disease": A disease has a harmful effect on one or more body parts. • "hepatitis A virus plays a role in chronic liver dis- ease": A disease may be caused by an agent. • "the clinical manifestations of a disease": A disease is detectable by signs and symptoms. • "the course of a disease" : A disease goes through sev- eral stages in time. • "infectious disease": A disease can be transmitted. • '% notifiable disease": A disease has patterns in the population that can be traced by the medical com- munity. We emphasize that this is not a mechanical procedure but a method of discovery that relies on our informed in- tuitions. Since it is largely background knowledge we are after, we can not expect to get it directly by interviewing experts. Our method is a way of extracting it frorn the presuppositions behind linguistic use. The first thing our method gives us is a great deal of selectivity in the facts we encode. Consider the word "an- imal". There are hundreds of facts that we know about animals. However, in this domain there are only two facts we need. Animals are used in experiments, ms seen in tl-e compound nominal "laboratory animal", and animals can have a disease, and thus transmit it, a.s seen in the phrase "animals implicated in hepatitis". Similarly, the only rele- vant fact about "water" is that it may be a medium for the transmission of disease. Secondly, the method points us toward generalizations we might otherwise miss, when we see a number of uses that seem to fall within the same class. For example, the uses of the word "laboratory" seem to be of two kinds: 1. "laboratory animals", "laboratory spores", "labora- tory contamination", "laboratory nwthods". 2. "a study by a research laboratory", "laboratory test- ing", "laboratory abnormalities", ':laboratory charac- teristics of hepatitis A', "laboratory picture". The first of these rests on the fact that experiments involv- ing certain events and entities take place in laboratories. The second rests on the fact that information is acquired there. A classical issue in lexical semantics that arises at this stage is the problem of polysemy. Should we consider a word, or predicate, as ambiguous, or should we try to find a very general characterization of its meaning that abstracts away from its use in various contexts? The concordance method suggests a solution. The rule of thumb we have followed is this: if the uses fall into two or three distinct, large classes, the word is treated a.s having separate senses ........ whereas if the uses seem to be spread all over the map, we try to find a general characterization that covers them all. The word "derive" is an example of the first case. A deriva- tion is either of information from an investigative activity, as in "epidemiologic patterns derived from historical stud- ies", or of chemicals from body parts, as in "enzymes de- rived from intestinal mucosa". 
By contr,~st, the word "pro- duce" (and the word "product"} can be used in a variety of ways: a disease can produce a condition, a virus can pro- duce a disease or a viral particle, something can produce a virus ("the amount of virus produced in the carrier state"), intestinal flora can produce compounds, and something can produce chemicals from blood ("blood products"). All of this suggests that we want to encode only the fact that if x produces y, then x causes y to come into existence. At this stage in our method, we aimed at only infor- mal, English statements of the facts. We ended up with approximately 1000 facts for the knowledge base. "- 284 3. Organizing the Knowledge Base The next step is to sort the facts into natural "clusters" (cf. [Hayes, 1984]). For example, the fact "If x produces y, then x causes y to exist" is a fact about causality. The fact "The replication of a virus requires components of a cell of an organism" is a fact about viruses. The fact "A household is an environment with a high rate of intimate contact, thus a high risk of transmission" is in the cluster of facts about people and their activities. The fact "If bilirubin is not secreted by the liver, it may indicate injury to the liver tissues" is in the medical practice cluster. It is useful to distinguish between clusters of "core knowledge" that is common to most domains and "domain- specific knowledge". Among the clusters of core knowledge are space, time, belief, and goal-directed behavior. The domain-specific knowledge includes clusters of facts about viruses, imnmnology, physiology, disease, and medical prac- tice. The cluster of facts about people and their activities lies somewhere in between these two. We are taking a rather novel approach to the axiom- atization of core knowledge. Much of our knowledge and language seems to be based on an underlying "topology", which is then instantiated in many other areas, like space, time, belief, social organizations, and so on. We have be- gun by axiomatizing this fundamental topology. At its base is set theory, axiomatized along traditional lines. Next is a theory of granularity, in which the key concept is "x is indistinguishable from y with respect to grain g'. A the- ory of scalar concepts combines granularity and partial or- ders. The concept of change of state and the interactions of containment and causality are given (perhaps overly sim- ple) axiomatizations. Finally there is a cluster centered around the notion of a "system", which is defined as a set of entities and a set of relations among them. In the "sys- tem" cluster we provide an interrelated set of predicates enabling one to characterize the "structure" of a system, producer-consumer relations among the components, the ':function" of a component of a system as a relation be- tween the component's behavior and the behavior of the system as a whole, notions of normality, and distributions of properties among the elements of a system. The appli- cability of the notion of "system" is very wide; among the entities that can be viewed as systems are viruses, organs, activities, populations, and scientific disciplines. Other general commonsense knowledge is built on top of this naive topolog'y. The domain of time is seen as a particular kind of scale defined by change of state, and the axiomatization builds toward such predicates as "regular" and "persist". 
The domain of belief has three principal subclusters in this application: learning, which includes such predicates as "find", "test" and "manifest"; reasoning, explicating predicates such as "leads-to" and "consistent"; and classifying, with such predicates as "distinguish", "dif- ferentiate" and "identify". The domain of modalities expli- cates such concepts as necessity, possibility, and likelihood. Finally, in the domain of goal-directed behavior, we char- acterize such predicates as "help", "care" and "risk". In the lowest-level domain-specific clusters - viruses, immunology, physiology, and people and their activities - we begin by specifying their ontology (the different sorts of entities and classes of entities in the cluster), tile inclu- sion relations among the classes, the behaviors of entities in the clusters and their interactions with other entities. The "Disease" cluster is axiomatized primarily in terms of a temporal schema of the progress of an infection. The cluster of "Medical Practice", or medical intervention in the natural course of the disease, can be axiomatized as a plan, in the AI sense, for maintaining or achieving a state of health in the patient, where different branches of the plan correspond to where in the temporal schema for disease the physician intervenes and to the mode of intervention. Most of the content of the domain-specific cluster.~ is specific to medicine, but the general principles along which it was constructed are relevant to many applications. Fre- quently the best way to proceed is first to identify the enti- ties and classification schemes in several clusters, state the relationships among the entities, and encode axioms artic- ulating clusters with higher- and lower-level clusters. Often one then wants to specify temporal schemas involving inter- actions of entities from several domains and goal-directed intervention in the natural course of these schemas. The concordance method of the second stage is quite useful in ferreting out the relevant facts, but it leaves some lacunae, or gaps, that become apparent when we look at the knowledge base as a whole. The gaps are especially fre- quent in commonsense knowledge. The general pl'inciple we follow in encoding this lowest level of the knowledge base is to aim for a vocabulary of predicates that is minimally ad- equate for expressing the higher-level, medical facts and to encode the obvious connections among them. One heuristic has proved useful: If the axioms in higher-level domains are especially complicated to express, this indicates that some underlying domain has not been sufficienlly explicated and axiomatized. For example, this consideration h:~s h~d to a fuller elaboration of the "systems" domain. Another ex- ample concerns the predicates '~parenteral", "needle" and "bite", appearing in the domain of "disease transmission". Initial attempts to axiomatize them in4icated the need for axioms, in the "naive topology" domain, about m(unbranes and the penetration of membranes allowing substances to move from one side of the membrane to the other. Within each cluster, concepts and facts seem Ic, fall into small groups that need to be defined together. I%r cxample. the predicates "clean" and "contaminate" need to be de- fined in tandem. There is a larger example in the "Disease Transmission" cluster. 
The predicate "transmit" is funda- mental, and once it has been characterized ,~s the motion of an infectious agent from a person or animal to a person via some medium, the predicates "source", "route", "mech- anism", "mode", "vehicle" and "expose" can be de, fined in terms of its schema. In addition, relevant facts about body fluids, food, water, contamination, needles, bites, propaga- tion, and epidemiology rest on an under~tanding of "trans- mit". In each domain there tends to be a core of central • predicates whose nature must be explicated with some care. A large number of other predicates can then be character- ized fairly easily in terms of these. 285 4. Encoding the Facts in Predicate Calculus Encoding world knowledge in a logical language is of- ten tal,:en to be a very hard problem. It is my belief that the di|licultics result from attempts to devise representa- tions that lend themselves in obvious ways to efficient de- duction algorithms and that adhere to stringent ontological scruples. I h;~.ve abandoned the latter constraint altogether (see [llobbs, 198.1], for arguments} and believe the former concern should be postponed until we have a better idea of precisely what sort of deductions need to be optimized. Under these ground rules, translating individual facts into predicate calculus is usually fairly straightforward. There are still considerable difficulties in making the axioms mesh well together. A predicate should not be used in some higher-level cluster unless it has been elucidated in that or some lower-level cluster. This necessarily restricts one's vocabulnry. For example, the predicate "in" does a lot of work. There are facts about viruses in tissues, chemicals in body fluids, infections in patient's bodies, and so on, and a direct translation of some of these axioms back into English is somewhat awkward. One has the feeling th~tt subtle .~hades of meaning have been lost. But this is inevitable in a knowledge base whose size is intended to be intermediate rather than exhaustive. Jersey. Hobbs, J. 1984. Ontological promiscuity. Manuscript. Walker, D. and J. Hobbs, 1981. Natural language access to med- ical text. SRI International Technical Note 240. March 1981. 5. Summary Much of this paper has been written almost as a case study. It would be useful for me to highlight the new and general principles and results that come out of this project. Tile method of using linguistic presuppositions as a "fore- lug function" fi~r the underlying knowledge is fairly gen- erally apl)licable in any domain for which there is a large body of text to exploit. It has been used in ethnography :rod discourse anal.vsis, but to my knowledge it has not been previously used in the construction of an AI knowl- edge base. The core knowledge has been encoded in ways Ihat are indel)endent of domain and hence should be useful for any natural language application. Of particular interest here is the klentification and axiomatization of the topologi- cal .~ubstructure of language The domain-specific knowledge will not of course carry over to other applications, but, as mentioned above, certain general principles of axiomatizing coml)lex domains have emerged. References Grosz, B., N. llaas, G. tlendrix, J. llobbs, P. Martin, R. Moore, .1. Robinson, and S. l~osensehein, 1982. DIALOGIC: A core natural lan,ouiage processing system. Proceedings of the Ninth Internation(d Conference on Computational Linguis- tics. 95-1/~i,. Prno~w, Czechoslovakia. llayes, P., 1981. The second naive physics manifesto. 
5. Summary

Much of this paper has been written almost as a case study. It would be useful for me to highlight the new and general principles and results that come out of this project. The method of using linguistic presuppositions as a "forcing function" for the underlying knowledge is fairly generally applicable in any domain for which there is a large body of text to exploit. It has been used in ethnography and discourse analysis, but to my knowledge it has not been previously used in the construction of an AI knowledge base. The core knowledge has been encoded in ways that are independent of domain and hence should be useful for any natural language application. Of particular interest here is the identification and axiomatization of the topological substructure of language. The domain-specific knowledge will not of course carry over to other applications, but, as mentioned above, certain general principles of axiomatizing complex domains have emerged.

References

Grosz, B., N. Haas, G. Hendrix, J. Hobbs, P. Martin, R. Moore, J. Robinson, and S. Rosenschein, 1982. DIALOGIC: A core natural language processing system. Proceedings of the Ninth International Conference on Computational Linguistics, 95-100, Prague, Czechoslovakia.

Hayes, P., 1984. The second naive physics manifesto. In Hobbs, J. and R. Moore (Eds.), Formal Theories of the Commonsense World. Ablex Publishing Company, Norwood, New Jersey.

Hobbs, J., 1984. Ontological promiscuity. Manuscript.

Walker, D. and J. Hobbs, 1981. Natural language access to medical text. SRI International Technical Note 240, March 1981.
BOUNDED CONTEXT PARSING AND EASY LEARNABILITY

Robert C. Berwick
Room 820, MIT Artificial Intelligence Lab
Cambridge, MA 02139

ABSTRACT

Natural languages are often assumed to be constrained so that they are either easily learnable or parsable, but few studies have investigated the connection between these two "functional" demands. Without a formal model of parsability or learnability, it is difficult to determine which is more "dominant" in fixing the properties of natural languages. In this paper we show that if we adopt one precise model of "easy" parsability, namely, that of bounded context parsability, and a precise model of "easy" learnability, namely, that of degree 2 learnability, then we can show that certain families of grammars that meet the bounded context parsability condition will also be degree 2 learnable. Some implications of this result for learning in other subsystems of linguistic knowledge are suggested.(1)

I INTRODUCTION

Natural languages are usually assumed to be constrained so that they are both learnable and parsable. But how are these two functional demands related computationally? With some exceptions,(2) there has been little or no work connecting these two key constraints on natural languages, even though linguistic researchers conventionally assume that learnability somehow plays a dominant role in "shaping" language, while computationalists usually assume that efficient processability is dominant. Can these two functional demands be reconciled?

There is in fact no a priori reason to believe that the demands of learnability and parsability are necessarily compatible. After all, learnability has to do with the scattering of possible grammars with respect to evidence input to a learning procedure. This is a property of a family of grammars. Efficient parsability, on the other hand, is a property of a single grammar. A family of grammars could be easily learnable but not easily parsable, or vice-versa. It is easy to provide examples of both sorts. For example, there are finite collections of grammars generating non-recursive languages that are easily learnable (just use a disjoint vocabulary as triggering evidence to distinguish among them). Yet by definition these languages cannot be easily parsable. On the other hand, as is well known, even the class of all finite languages plus the universal infinite language covering them all is not learnable from just positive evidence (Gold 1967). Yet each of these languages is finite state and hence efficiently analyzable.

1. This work has been carried out at the MIT Artificial Intelligence Laboratory. Support for the Laboratory's artificial intelligence research is provided in part by the Defense Advanced Research Projects Agency.
2. See Berwick 1980 for a sketch of the connections between learnability and parsability.

This paper establishes the first known results formally linking efficient parsability to efficient learnability. It connects a particular model of efficient parsing, namely, bounded context parsing with lookahead as developed by Marcus 1980, to a particular model of language acquisition, the Bounded Degree of Error (BDE) model of Wexler and Culicover 1980. The key result: bounded context parsability implies "easy" learnability. Here, "easily learnable" means "learnable from simple, positive (grammatical) sentences of bounded degree of embedding."
In this case then, the constraints required to guarantee easy parsability, as enforced by the bounded context constraint, are at least as strong as those required for easy learnability. This means that if we have a language and associated grammar that is known to be parsable by a Marcus-type machine, then we already know that it meets the constraints of bounded degree learning, as defined by Wexler and Culicover.

A number of extensions to the learnability-parsability connection are also suggested. One is to apply the result to other linguistic subsystems, notably, morphological and phonological rule systems. Although these subsystems are finite state, this does not automatically imply easy learnability, as Gold (1967) shows. In fact, identification is still computationally intractable -- it is NP-hard (Gold 1978), taking an amount of evidence exponentially proportional to the number of states in the target finite state system. Since a given natural language could have a morphological system of a few hundred or even a few thousand states (Kimmo 1983, for Finnish), this is a serious problem. Thus we must find additional constraints to make natural morphological systems tractably learnable. An analog of the bounded context model for morphological systems may suffice. If we require that such systems be k-reversible, as defined by Angluin (in press), then an efficient polynomial time induction algorithm exists.

To summarize, what is the importance of this result for computational linguistics?

o It shows for the first time that parsability is a stronger constraint than learnability, at least given this particular way of defining the comparison. Thus computationalists may have been right in focusing on efficient parsability as a metric for comparing theories.

o It provides an explicit criterion for learnability. This criterion can be tied to known grammar and language class results. For example, we can say that the language a^n b^n c^n will be easily learnable, since it is bounded context parsable (in an extended sense).

o It formally connects the Marcus model for parsing to a model of acquisition. It pinpoints the relationship of the Marcus parser to the LR(k) and bounded context parsing models.

o It suggests criteria for the learnability of phonological and morphological systems. In particular, the notion of k-reversibility, the analog of bounded context parsability for finite state systems, may play a key role here. The reversibility constraint thus lends learnability support to computational frameworks that propose "reversible" rules (such as that of Koskenniemi 1983) versus those that do not (such as standard generative approaches).

This paper is organized as follows. Section 1 reviews the basic definitions of the bounded context model for parsing and the bounded degree of error model for learning. Section 2 sketches the main result, leaving aside the details of certain lemmas. Section 3 extends the bounded context--bounded degree of error model to morphological and phonological systems, and advances the notion of k-reversibility as the analog of bounded context parsability for such finite state systems.

II BOUNDED CONTEXT PARSABILITY AND BOUNDED DEGREE OF ERROR LEARNING

To begin, we define the models of parsing and learning that will be used in the sequel. The parsing model is a variant of the Marcus parser. The learning theory is the Degree 2 theory of Wexler and Culicover (1980).
The Marcus parser defines a class of languages (and associated grammars) that are easily parsable; Degree 2 theory, a class of languages (and associated grammars) that is easily learnable. To begin our comparison, we must say what class of "easily learnable" languages Degree 2 theory defines. The aim of the theory is to define constraints such that a family of transformational grammars will be learnable from "simple" data; the learning procedure can get positive (grammatical) example sentences of depth of embedding of two or less (sentences up to two embedded sentences, but no more). The key property of the transformational family that establishes learnability is dubbed Bounded Degree of Error. Roughly and intuitively, BDE is a property related to the "separability" of languages and grammars given simple data: if there is a way for the learner to tell that a currently hypothesized language (and grammar) is incorrect, then there must be some simple sentence that reveals this -- all languages in the family must be separable by simple sentences.

The way that the learner can tell that a currently hypothesized grammar is wrong given some sample sentence is by trying to see whether the current grammar can map from a deep structure for the sentence to the observed sample sentence. That is, we imagine the learner being fed with a series of base (deep structure)-surface sentence (denoted "b, s") pairs. (See Wexler and Culicover 1980 for details and justification of this approach, as well as a weakening of the requirement that base structures be available; see Berwick 1980, 1982 for an independently developed computational version.) If the learner's current transformational component, T, can map from b to s, then all is well. If not, and T(b) does not equal s, then a detectable error has been uncovered. With this background we can provide a precise definition of the BDE property:

A family of transformationally-generated languages L possesses the BDE property iff for any base grammar B (for languages in L) there exists a finite integer U, such that for any possible adult transformational component A and learner component C, if A and C disagree on any phrase-marker b generated by B, then they disagree on some phrase-marker b' generated by B, with b' of degree at most U. (Wexler and Culicover 1980, page 108)

If we substitute 2 for U in the theorem, we get the Degree 2 constraint. Once BDE is established for some family of languages, then convergence of a learning procedure is easy to prove. Wexler and Culicover 1980 have the details, but the key insight is that the number of possible errors is now bounded from above.

The BDE property can be defined in any grammatical framework, and this is what we shall do here. We retain the idea of mapping from some underlying "base" structure to the surface sentence. (If we are parsing, we must map from the surface sentence to this underlying structure.) The mapping is not necessarily transformational, however; for example, a set of context-free rules could carry it out. In this paper we assume that the mapping from surface sentences to underlying structures is carried out by a Marcus-type parser. The mapping from structure to sentence is then defined by the inverse of the operation of this machine. This fixes one possible target language. (The full version of this paper defines this mapping in full.)
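Schematically (our own paraphrase of the definition quoted above, writing degree(b) for the depth of embedding of a phrase-marker b, and A(b), C(b) for the surface strings the two components map b to):

    BDE holds for a family iff there is a finite U such that, for every
    adult component A and learner component C:
        if A(b) differs from C(b) for some base phrase-marker b,
        then A(b') differs from C(b') for some b' with degree(b') <= U.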
Note further that the BDE property is defined not just with respect to possible adult target languages, but also with respect to the distribution of the learner's possible guesses. So for example, even if there were just ten target languages (defining 10 underlying grammars), the BDE property must hold with respect to those languages and any intervening learner languages (grammars). So we must also define a family of languages to be acquired. This is done in the next section. BDE, then, is our criterial property for easy learnability. Just those families of grammars that possess the BDE property (with respect to a learner's guesses) are easily learnable.

Now let us turn to bounded context parsability (BCP). The definition of BCP used here is an extension of the standard definition as in Aho and Ullman 1972, p. 427. Intuitively, a grammar is BCP if it is "backwards deterministic" given a radius of k tokens around every parsing decision. That is, it is possible to find deterministically the production that applied at a given step in a derivation by examining just a bounded number of tokens (fixed in advance) to the left and right at that point in the derivation. Following Aho and Ullman we have this definition for bounded right-context grammars. G is bounded right-context if the following four conditions:

(1) S =>* aAw => abw and
(2) S =>* gBx => gdx = a'by
are rightmost derivations in the grammar;
(3) the length of x is less than or equal to the length of y; and
(4) the last m symbols of a and a' coincide, and the first n symbols of w and y coincide

imply that A = B, a' = g, and y = x. (Here a, a', b, g, d stand for strings over the grammar's vocabulary, and w, x, y for terminal strings.)

We will use the term "bounded context" instead of "bounded right-context." To extend the definition we drop the requirement that the derivation is rightmost and use instead non-canonical derivation sequences as defined by Szymanski and Williams (1976). This model corresponds to Marcus's (1980) use of attention shifts to postpone parsing decisions until more right context is examined. The effect is to have a lookahead that can include nonterminal names like NP or VP. For example, in order to successfully parse "Have the students take the exam", the Marcus parser must delay analyzing "have" until the full NP "the students" is processed. Thus a canonical (rightmost) parse is not produced, and the lookahead for the parser includes the sequence NP--take, successfully distinguishing this parse from the NP--taken sequence for a yes-no question. This extension was first proposed by Knuth (1965) and developed by Szymanski and Williams (1976). In this model we can postpone a canonical rightmost derivation some fixed number of times t. This corresponds to building t complete subtrees and making these part of the lookahead before we return to the postponed analysis.

The Marcus machine (and the model we adopt here) is not as general as an LR(k) type parser in one key respect. An LR(k) parser can use the entire left context in making its parsing decisions. (It also uses a bounded right context, its lookahead.) The LR(k) machine can do this because the entire left context can be stored as a regular set in the finite control of the parsing machine (see Knuth 1965). That is, LR(k) parsers make use of an encoding of the left context in order to keep track of what to do. The Marcus machine is much more limited than this. Local parsing decisions are made by examining strictly literal contexts around the current locus of parsing contexts. A finite state encoding of left context is not permitted.
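To make the "backwards determinism" intuition concrete, here is a minimal brute-force sketch (ours, not part of the paper's formal machinery; the toy grammar, the context widths and the sampling depth are all assumptions). It samples rightmost derivations of a small context-free grammar and checks whether two expansion steps ever present the same handle inside identical (m, n) contexts while rewriting different nonterminals, which is the kind of conflict the bounded right-context conditions exclude:

    from collections import defaultdict

    # Toy grammar for the language a^n b^n; terminals are lower-case strings.
    GRAMMAR = {"S": [("a", "S", "b"), ("a", "b")]}
    M, N, DEPTH = 1, 1, 6   # context widths and derivation depth to sample

    def rightmost_steps(form=("S",), depth=DEPTH):
        """Yield (lhs, rhs, left_ctx, right_ctx) for every expansion step of
        every rightmost derivation from `form`, down to the given depth."""
        if depth == 0:
            return
        # locate the rightmost nonterminal, if any
        for i in range(len(form) - 1, -1, -1):
            if form[i] in GRAMMAR:
                break
        else:
            return
        for rhs in GRAMMAR[form[i]]:
            left = form[max(0, i - M):i]    # last M symbols left of the handle
            right = form[i + 1:i + 1 + N]   # first N symbols right of the handle
            yield (form[i], rhs, left, right)
            yield from rightmost_steps(form[:i] + rhs + form[i + 1:], depth - 1)

    seen = defaultdict(set)
    for lhs, rhs, left, right in rightmost_steps():
        seen[(rhs, left, right)].add(lhs)
    conflicts = {key: lhss for key, lhss in seen.items() if len(lhss) > 1}
    print(conflicts or "no (1,1)-context conflicts found to this depth")

For the a^n b^n grammar shown, no conflict is reported at any sampled depth; a grammar in which two different nonterminals rewrite to the same right-hand side in indistinguishable contexts would report the clash.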
The BCP class also makes sense as a proxy for "efficiently parsable" because all its members are analyzable in time linear in the length of their input sentences, at least if the associated grammars are context-free. If the grammars are not context-free, then BCP members are parsable in at worst quadratic (n squared) time. (See Szymanski and Williams 1976 for proofs of these results.)

III CONNECTING PARSABILITY AND LEARNABILITY

We can now at least formalize our problem of comparing learnability and parsability. The question now becomes: What is the relationship between the BDE property and the BCP property? Intuitively, a grammar is BCP if we can always tell which of two rules applied in a given bounded context. Also intuitively, a family of grammars is BDE if, given any two grammars in the family G and G' with different rules R and R', say, we can tell which rule is the correct one by looking at two derivations of bounded degree, with R applying in one and yielding surface string s, and R' applying in the other yielding surface string s', with s not equal to s'. This property must hold with respect to all possible adult and learner grammars. So a space of possible target grammars must be considered. The way we do this is by considering some "fixed" grammar G and possible variants of G formed by substituting the production rules in G with hypothesized alternatives.

The theorem we want to now prove is: If the grammars formed by augmenting G with possible hypothesized grammar rules are BCP, then that family is also BDE.

The theorem is established by using the BCP property to directly construct a small-degree phrase marker that meets the BDE condition. We select two grammars G, G' from the family of grammars. Both are BCP, by definition. By assumption, there is a detectable error that distinguishes G with rule R from G' with rule R'. Let us say that rule R is of the form A -> a; R' is B -> a'. Since R' determines a detectable error, there must be a derivation with a common sentential form F, such that R applies to F and eventually derives sentence s, while R' applies to F and eventually derives s' different from s. The number of steps in the derivation of the two sentences may be arbitrary, however. What we must show is that there are two derivations bounded in advance by some constant that yield two different sentences.

The BCP conditions state that identical (m,n) contexts imply that A and B are equal. Taking the contrapositive, if A and B are unequal, then the (m,n) context must be nonidentical. This establishes that BCP implies (m,n) context error detectability.(3)

We are not yet done though. An (m,n) context detectable error could consist of terminal and nonterminal elements, not just terminals (words) as required by the detectable error condition. We must show that we can extend such a detectable error to a surface sentence detectable error with an underlying structure of bounded degree. An easy lemma establishes this. If R' is an (m,n) context detectable error, then R' is bounded degree of error detectable. The proof (by induction) is omitted; only a sketch will be given here. Intuitively, the reason is that we can extend any nonterminals in the error-detectable (m,n) context to some valid surface sentence and bound this derivation by some constant fixed in advance and depending only on the grammar.
This is because unbounded derivations are possible only by the repetition of nonterminals via recursion; since there are only a finite number of distinct nonterminals, it is only via recursion that we can obtain a derivation chain that is arbitrarily deep. But, as is well known (compare the proof of the pumping lemma for context-free grammars), any such arbitrarily deep derivation producing a valid surface sentence also has an associated truncated derivation, bounded by a constant dependent on the grammar, that yields a valid sentence of the language. Thus we can convert any (m,n) context detectable error to a bounded degree of error sentence. This proves the basic result.

As an application, consider the strictly context-sensitive language a^n b^n c^n. This language has a grammar that is BCP in the extended sense (Szymanski and Williams 1976). The family of grammars obtained by replacing the rules of this BCP grammar by alternative rules that are also BCP (including the original grammar) meets the BDE condition. This result was established independently by Wexler 1982.

IV EXTENSIONS OF THE BASIC RESULT

In the domain of syntax, we have seen that constraints ensuring efficient parsability also guarantee easy learnability. This result suggests an extension to other domains of linguistic knowledge. Consider morphological rule systems. Several recent models suggest finite state transducers as a way to pair lexical (surface) and underlying forms of words (Koskenniemi 1983; Kaplan and Kay 1983). While such systems may well be efficiently analyzable, it is not so well known that easy learnability does not follow directly from this adopted formalism. To learn even a finite state system one must examine all possible state-transition combinations. This is combinatorially explosive, as Gold 1978 proves. Without additional constraints, finite transducer induction is intractable.

What is needed is some way to localize errors; this is what the bounded degree of error condition does. Is there an analog of the BCP condition for finite state systems that also implies easy learnability? The answer is yes. The essence of BCP is that derivations are backwards and forwards deterministic within local (m,n) contexts. But this is precisely the notion of k-reversibility, as defined by Angluin (in press). Angluin shows that k-reversible automata have polynomial time induction algorithms, in contrast to the result for general finite state automata. It then becomes important to see if k-reversibility holds for current theories of morphological rule systems. The full paper analyzes both "classical" generative theories (that do not seem to meet the test of reversibility) and recent transducer theories.

Since k-reversibility is a sufficient, but evidently not a necessary constraint for learnability, there could be other conditions guaranteeing the learnability of finite state systems. For instance, one of these, the strict cycle condition in phonology, is also examined in the full paper. We show that the strict cycle also suffices to meet the BDE condition.

In short, it appears that, at least in terms of one framework in which a formal comparison can be made, the same constraints that forge efficient parsability also ensure easy learnability.

V REFERENCES

Aho, A. and Ullman, J. 1972. The Theory of Parsing, Translation, and Compiling, vol. 1. Englewood Cliffs, NJ: Prentice-Hall.
Angluin, D. 1982. Induction of k-reversible languages. In press, JACM.
Berwick, R. 1980. Computational analogs of constraints on grammars. Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics.
Berwick, R. 1982. Locality Principles and the Acquisition of Syntactic Knowledge. PhD dissertation, MIT Department of Electrical Engineering and Computer Science.
Gold, E. 1967. Language identification in the limit. Information and Control, 10.
Gold, E. 1978. On the complexity of minimum inference of regular sets. Information and Control, 39, 337-350.
Kaplan, R. and Kay, M. 1983. Word recognition. Xerox Palo Alto Research Center.
Koskenniemi, K. 1983. Two-Level Morphology: A General Computational Model for Word Form Recognition and Production. PhD dissertation, University of Helsinki.
Knuth, D. 1965. On the translation of languages from left to right. Information and Control, 8.
Marcus, M. 1980. A Model of Syntactic Recognition for Natural Language. Cambridge, MA: MIT Press.
Szymanski, T. and Williams, J. 1976. Noncanonical extensions of bottom-up parsing techniques. SIAM J. Computing, 5.
Wexler, K. 1982. Some issues in the formal theory of learnability. In C. Baker and J. McCarthy (eds.), The Logical Problem of Language Acquisition.
Wexler, K. and P. Culicover 1980. Formal Principles of Language Acquisition. Cambridge, MA: MIT Press.

3. One of the other three BCP conditions could also be violated, but these are assumed to hold here. We assume the existence of derivations meeting conditions (1) and (2) in the extended sense, as well as condition (3).
LINGUISTICALLY MOTIVATED DESCRIPTIVE TERM SELECTION

K. Sparck Jones and J.I. Tait*
Computer Laboratory, University of Cambridge
Corn Exchange Street, Cambridge CB2 3QG, U.K.

ABSTRACT

A linguistically motivated approach to indexing, that is the provision of descriptive terms for texts of any kind, is presented and illustrated. The approach is designed to achieve good, i.e. accurate and flexible, indexing by identifying index term sources in the meaning representations built by a powerful general purpose analyser, and providing a range of text expressions constituting semantic and syntactic variants for each term concept. Indexing is seen as a legitimate form of shallow text processing, but one requiring serious semantically based language processing, particularly to obtain well-founded complex terms, which is the main objective of the project described. The type of indexing strategy described is further seen as having utility in a range of applications environments.

I INDEXING NEEDS

Indexing terms are required for a variety of purposes, in a variety of contexts. Much effort has gone into indexing, and more especially automatic indexing, for conventional document retrieval; but the extension of automation, e.g. in the area of office systems, implies a wider need for effective indexing, and preferably for effective automatic indexing. Providing index descriptions for access to documents is not necessarily, moreover, a poor substitute for fully understanding documents and incorporating their contents into knowledge bases. Indexing has its own proper function and hence utility, and can be successfully done without deep understanding of the texts being processed.

Insofar as access to documents is by way of an explicit textual representation of a user's information need, i.e. a request, this has also to be indexed, and the retrieval problem is selecting relevant documents when matching request and document term descriptions. Though retrieval experiments hitherto have shown that better indexing (on some criterion of descriptive quality) does not lead to really large improvements in average retrieval performance, careful and sophisticated indexing, especially of the search request, does promote effective retrieval.

Sophisticated indexing here means conceptually discriminating, linguistically motivated indexing, i.e. indexing in which terms are linguistically well motivated because they are accurate indicators of complex concepts. Though indexing concepts may in some cases be adequately expressed in single words, the concepts being indexed frequently have an internal structure requiring expression as a so-called 'precoordinate' term, i.e. a linguistically well-defined multi-word unit. Earlier attempts to obtain such precoordinate terms automatically were not particularly successful, mainly because the text analysis procedures used were primarily syntactic, and even shallowly and crudely syntactic. Further, adopting source text units as terms, when they are only minimally characterised, limits indexing to one particular expression of the underlying concept, and does not allow for alternatives: requests and documents may therefore not match. (Stemming helps somewhat but, for example, does not change word order.)

* Current address: Acorn Computers Ltd, Fulbourn Road, Cherry Hinton, Cambridge CB1 4JN, U.K. This work was supported by the British Library Research and Development Department.
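The word-order limitation is easy to see mechanically. In the toy fragment below (ours; the crude suffix-stripper is merely a stand-in for a real stemming algorithm), 'circuit details' and 'details of circuits' reduce to the same content stems, yet the stemmed strings still fail to match because the order, and the function word, survive:

    def stem(word):
        """A crude stand-in for a real stemmer: strip one common suffix."""
        for suffix in ("s", "al", "ing"):
            if word.endswith(suffix) and len(word) > len(suffix) + 2:
                return word[:-len(suffix)]
        return word

    a = "circuit details".split()
    b = "details of circuits".split()
    print([stem(w) for w in a])   # ['circuit', 'detail']
    print([stem(w) for w in b])   # ['detail', 'of', 'circuit']
    print([stem(w) for w in a] == [stem(w) for w in b])   # False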
The research reported below was thus designed to test a more radical approach to indexing, using an AI-type language analyser exploiting a powerful syntactico-semantic apparatus to analyse texts, and specifically request texts; a term extractor to identify indexing concepts in the resulting text meaning representation and construct their semantic variants; and a language generator to produce a range of alternative syntactic expressions for all the forms of each concept, constituting the terms' variant sets for searching the document file.

The major operation is the identification of indexing concepts, or term sources, in text meaning representations. If both user requests and stored documents could be processed, there would be no need for lexical expressions of these concepts, since matching would be conducted at the representational level (cf Hobbs et al 1982 or, earlier, Syntol (Bely et al 1970)). However there are many reasons, stemming both from the current state of automatic natural language processing and from naked economics, why full document processing is not feasible, though request processing should be. The generation of alternative text expressions of concepts, for use in searching stored texts, is therefore necessary. We indeed believe that text searching is an important facility for many practical purposes. The provision of indexing descriptions is thus a direct operation only on requests, but the provision of alternative well-founded expressions of request concepts constitutes an indirect indexing of documents aimed at improving request document matching.

There would nevertheless appear to be a major problem with this type of application of AI language analysers. In general, successful 'deep' language analysis programs have been those working within very limited domains; and the world of ordinary document collections, for example those consisting of tens or hundreds of thousands of scientific papers, is not so limited. Programs like FRUMP (DeJong 1979), on the other hand, though less domain specialised, achieve only partial text analysis. They in any case, like 'deep' analysers, imply an effort in providing an analysis system which can hardly be envisaged for language processing related to large bodies of heterogeneous text. The challenge for the project was therefore whether sophisticated language analysis techniques could be applied in a sufficiently discriminating way, without backup from a large non-linguistic knowledge base, given that only a partial interpretation of texts is required. The partial interpretation must nevertheless be sufficient to generate good, i.e. accurate and significant, index terms; and the important point is therefore that the partial interpretation process has to be a flexible one, driven bottom up from the given text rather than top down by scripts or frames. Thus the crucial issue was whether the desired result could be obtained through a powerful and rich enough general, i.e. non domain-specific, semantics.

II REQUEST ANALYSIS

To test the proposition that the desired result could be obtained, we exploited Boguraev's analyser (Boguraev and Sparck Jones, in press), which applies primitive-based semantic pattern matching in conjunction with conventional syntactic analysis, to obtain a request meaning representation in the form of a case labelled dependency tree relating word senses characterised by primitive formulae.
Thus a primary objective was to see whether the type of word and message meaning characterisation allowed by the general semantic primitives used by the analyser could suffice for the interpretation of technical text for the purpose in hand. There is an early limit to the refinement of lexical characterisation which can be achieved with about 100 general-purpose primitives like THING and WHERE for a vocabulary containing words like "transistor", "oscillator" and "circuit"; and with semantic lexical entries for individual word senses at the level of 'oscillator: THING', structural disambiguation of the sentence as a whole may be difficult to attain. In this situation, the analyser is unlikely to be able to achieve comprehensive ambiguity resolution; but the project belief was that lower-level sentence components could be fairly unequivocally identified, which may be adequate for indexing, since it is not clear how far comprehensive higher-level structural links should be reflected in terms. A modest level of lexical resolution may also be sufficient as long as some trace of the input word is preserved to use for output variant generation (which may of course include synonym generation). The fact that the semantic apparatus supporting Boguraev's analyser is rich and robust enough to tolerate some 'degradation' or 'relaxation' was one reason for using this analyser.

The second was the nature of the meaning representations it delivers. The output case-labelled dependency tree provides a clear, semantically characterised representation of the essential propositional structure of the input text. This should in principle facilitate the identification of tree components as term sources, according to more or less comprehensive scope criteria, as suggested by the needs of request-document matching.

The third reason for adopting Boguraev's analyser was the fact that it has been used for a concurrent project on a query interpretation front end for accessing formatted databases, and hence was viewed as an analyser capable of supporting an integrated information inquiry system. The principle underlying the projects taken together was that it should be recognised that information systems consist of a range of different types of information source, which it should be possible to reach from a single input user question. That is, the user should be able to express an information need, and the system should be able to couch this in the different forms appropriate to seeking response items of different sorts from the range of available information source types. Thus a question could be treated both as a query addressed to a formatted database, and as a request addressed to a document collection, without presuppositions as to what type of information should be sought, in order to maximise the chances of finding something germane. In other projects, e.g. LUNAR (Woods et al 1972), treating questions as document requests was either triggered by specific words like "papers", or by a failure to process the question as a database query. We regard the treatment of the user's question in various styles at once as a normal requirement of a true integrated information system.

In the event, Boguraev's analyser had to be extended significantly for the document retrieval project, primarily to handle compound nouns.
These are a very common feature of technical prose, so some means of processing them during analysis, and some way of representing them in the analyser's output, is required, even if they cannot be fully interpreted without, for example, inference calling on pragmatic (domain) knowledge. The necessarily somewhat minimal procedure adopted was to represent compounds as a string of modifiers plus head noun without requiring an explicit bracketing or reconstruction of implicit semantic relations. (Sense selection on modifiers thus cannot in general be expected.) In general, such a strategy implies that little term variation can be achieved; however, as detailed below, some follows from limited semantic inference. The type of meaning representation provided by the analyser for a typical request is illustrated (in a simplified form) in Figure 1a.

III TERM EXTRACTION

From the indexing point of view, the most important operation is the selection of elements of the analyser's output meaning representation(s) as term sources. Subject to the way the representation defines well-formed units, the criteria for term source selection must stem ultimately from the empirical requirements mainly of request-document matching, but also, since index descriptions can have other functions than pure matching, from the requirements for descriptions which are, for example, comprehensible and indicative to the quickly scanning human reader.
At the surface text level this is reflected in (on average) larger or smaller word strings, corresponding to more or less elaborately modified concepts, or more or less extensively linked concepts. Given the type of propositional structure defined by the analyser's dependency trees, it was natural to define term sources by a scale count exploiting case constructions. In the simplest case the scale count is effectively applied to a verb and its case role filler nouns. Thus a count of 3 takes a verb and any pair of its role-filling nouns, a count of 2 takes the verb and any one of its nouns, while a count of I takes just verb or noun. A structure with a verb and three noun case fillers will therefore produce three scale 3 terms, three scale 2, and 4 scale I sources. Figure Ib shows sources of scale 2 extracted from the dependency structure representing the concept 'oscillator use transistor' for the example request. It should be emphasised that some types of linguistic construction, e.g. states, are represented in a verb-based way, and that other dependency tree structures are handled in an analogous manner. Equally, the definition of scale count is in fact more complicated, to take account of modifiers on nouns like quantifiers. Moreover an important part of the term source selection process is the elimination of 'unhelpful' parts of the sentence representation, for example those derived from the input text string "Give me papers on". This elimination is achieved by 'stop structures' tied to individual word senses, and can be quite discriminating, e.g. distinguishing significant from non-significant uses of "paper". Term sources are then derived from the resulting 'partial' sentence structures. (In Figure la this is the structure bounded by < >.) Overall, the effect of the term source derivation procedure is a list of source structures, representing propositions or subpropositions, which overlap through the presence of common individual conceptual elements, namely word senses. It is indeed important that the indexing of a text is 'redundant' in this way. If this conceptual indexing were to be carried out on both requests and stored documents, such lists would be the base for searching and matching. The fragmentation characteristic of indexing suggests that considerable mileage could be got simply from the lists of extracted term sources, without extensive 'inferential' processing either to generate additional sources or to support complex matching in the style advocated by Hobbs et al. However the objectives of indexing are unlikely to be achieved by restricting indexing concepts to the precise detailed forms they have in the analyaer's meaning representation. In general one is interested in the essential concept, rather than in its fine detail: for instance, in most cases it is immaterial whether singular or plural, definite or indefinite, apply to nominals. Indexing only at the conceptual level would simply throw such information away, to emerge with a 'reduced' or 'normalised' version of the concept, though one which conveys more specific structural information than the 'association' or 'coordination' ordinarily used in indexing. However if searching is to be at the text level, proper bases for the text expressions involved must be retained. Moreover 'paring down' representations may lead to the lack of precision in term characterisation which it is the aim of the whole enterprise to avoid, so an alternative strategy, allowing for more control, is required. 
The one we adopted was to define a set of permitted semantic variations, for example deriving plural and/or indefinite nominals from a given single definite construction. Such semantic variants are easily obtained. Compound nouns present more interesting problems, and we have adopted a semantic variant strategy for these which may be described as embodying a very crude form of linguistic inference. Variants on given compounds are created by applying, in reverse, the semantic patterns designed to interpret and attach prepositional phrases in text input. That is, if the semantic formulae for a pair of nouns in a compound satisfy the requirements for linking these with some (sense of a) preposition, the preposition sense, which embodies a case relationship, is supplied explicitly. Figure 1c shows some inferred variants for the example request. Clearly this technique (to be described in detail in the full paper) could be extended to the linking of nouns in a compound by verbs.

But further, indexing strategies involve more than choices of term source and semantic variant types. Indexing implies coverage of text content, and it may in practice be the case that text content is not fully covered if indexing is confined to terms of a certain type, and specifically those of a more exigent, higher scale. Thus an exclusive indexing strategy may be restricted in coverage, where a relaxed one accepts terms of lower scale if ones of the preferred higher scale are not available, and so increases coverage. Moreover it may be desirable, to increase matching chances, to index with an inclusive strategy, with subcomponent terms of lower scale as well as their parents of higher scale, treating subcomponents as variants. The relative merits of these alternatives can only be established by experiment.

IV VARIANT EXPRESSION

More importantly, indexing cannot in practice stop at the level of term sources and their semantic variants, i.e. operate with the components of text meaning representations. The volumes of material to be scanned imply searching for request-document matches at the textual rather than the underlying conceptual level. This is not only a matter of the limited capacity for full text (or even abstract) processing of current language processing systems. It can be argued that text level scanning without proper meaning interpretation is a valid activity in its own right, for example as a precursor to deeper processing. The final stage of request processing is therefore the generation of text equivalents for the given term sources (i.e. for all the variants of each source). This includes the generation of syntactic variants, exploiting further the power given by explicit descriptions of linguistic constructs: though relations between words are implicit in word strings pulled out of texts, they cannot be accessed to produce alternative forms. What constitutes a syntactic as opposed to a semantic variant is ultimately arbitrary; in the implemented generator it includes, for example, variations on aspect. This generator, a replacement of Boguraev's original, builds a surface syntactic tree from a meaning representation fragment, from which the output word string is derived. The process includes the listing (if these are available) of lexical variants, i.e. words which are sense synonymous with the input ones.
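To give a concrete feel for the size of these variant sets, the fragment below (our own illustration, with invented function names, not the implemented generator) simply crosses determiner and number choices for a two-noun term source; for 'detail about circuit' it yields exactly the 25 surface strings of the corresponding variant set in Figure 1d:

    from itertools import product

    def noun_variants(noun):
        """Enumerate determiner/number combinations for a noun; the crude
        's'-pluralisation is adequate for the running example."""
        singular, plural = noun, noun + "s"
        return ["a " + singular, "the " + singular, singular,
                "the " + plural, plural]   # no indefinite article with a plural

    def term_variants(head, preposition, dependent):
        return sorted(h + " " + preposition + " " + d
                      for h, d in product(noun_variants(head),
                                          noun_variants(dependent)))

    variants = term_variants("detail", "about", "circuit")
    print(len(variants))   # 25, as in Figure 1d
    print(variants[:2])    # ['a detail about a circuit', 'a detail about circuits']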
The final step in the production of the search formulation for the input request is the packaging of the sets of variants derived from the request's constituent concepts into a Boolean expression, with the variants in the set for each source linked by 'or' and the sets, representing terms, linked by 'and'. This stage includes merging the results of alternative analyses of the input request. Figure 1d illustrates some of the text expressions of semantic and syntactic variants for the example request.

From the retrieval point of view, our tests have been very limited. As noted, text searching is extremely costly, and requires a highly optimised program. Our initial experiment was therefore in the nature of a feasibility study, aimed at showing that real requests could be processed, and the output query specifications searched against real abstract texts. We matched 10 requests against 11429 abstracts, in the area of electronics, using terms of scales 3, 2, and 1, and also 2 with compound noun inference, and the exclusive strategy. The strategies performed identically, but it has to be said that otherwise the results, especially for the higher scales, were not impressive. However, as retrieval testing over the past twenty years has demonstrated, the request sample is too small to support any valid performance conclusions about the merits of the indexing methods studied: a much larger sample is needed. Moreover much more work is needed on the best ways of forming search specifications from the mass of term material available: this is currently fairly ad hoc.

V CONCLUSION

The work described represents a first study of the systematic use of a powerful language processing tool for indexing purposes. It could in principle be used to manipulate terms at the meaning representation level, which would have the advantage of permitting more flexible matches between requests and documents differing at the detailed text level (e.g. "retrieval of information" and "retrieval of relevant information"). More practically, the indexing is extended to provide alternative text expressions of indexing concepts, for text matching. The claim for the approach is that useful indexing can be achieved by general semantic rather than domain-specific knowledge, though much more testing, including tests with different indexing applications, is needed.

VI ACKNOWLEDGEMENT

We are grateful to Dr. B. K. Boguraev for his advice and assistance throughout the project.

VII REFERENCES

Bely, N. et al, Procedures d'Analyse Semantiques Appliquees a la Documentation Scientifique, Paris: Gauthier-Villars, 1970.
Boguraev, B. and Sparck Jones, K. 'A natural language front end to databases with evaluative feedback' in New Applications of Databases (ed Gardarin and Gelenbe), London: Academic Press (in press).
DeJong, G. Skimming Stories in Real Time, Report 158, Department of Computer Science, Yale University, 1979.
Hobbs, J.R. et al, 'Natural language access to structured texts' in COLING 82 (ed Horecky), Amsterdam: North-Holland, 1982.
Woods, W.A. et al, The LUNAR Sciences Natural Language Information System, Report 2378, Bolt Beranek and Newman Inc., Cambridge MA, 1972.
INFERENCING ON LINGUISTICALLY BASED SEMANTIC STRUCTURES

Eva Hajičová, Milena Hnátková
Department of Applied Mathematics
Faculty of Mathematics and Physics
Charles University
Malostranské n. 25
118 00 Praha 1, Czechoslovakia

ABSTRACT

The paper characterizes natural language inferencing in the TIBAQ method of question-answering, focussing on three aspects: (i) specification of the structures on which the inference rules operate, (ii) classification of the rules that have been formulated and implemented up to now, according to the kind of modification of the input structure the rules invoke, and (iii) discussion of some points in which a properly designed inference procedure may help the search of the answer, and vice versa.

I SPECIFICATION OF THE INPUT STRUCTURES FOR INFERENCING

A. Outline of the TIBAQ Method

When the TIBAQ (Text-and-Inference Based Answering of Questions) project was designed, main emphasis was laid on the automatic build-up of the stock of knowledge from the (non-pre-edited) input text. The experimental system based on this method converts automatically the natural language input (both the questions and new pieces of information, i.e. Czech sentences in their usual form) into the representations of meaning (tectogrammatical representations, TR's); these TR's serve as input structures for the inference procedure that enriches the set of TR's selected by the system itself as possibly relevant for an answer to the input question. In this enriched set suitable TR's for direct and indirect answers to the given question are retrieved, and then transferred by a synthesis procedure into the output (surface) form of sentences (for an outline of the method as such, see Hajičová, 1976; Hajičová and Sgall, 1981; Sgall, 1982).

B. What Kind of Structure Inferences Should Be Based on

To decide what kind of structures the inference procedure should operate on, one has to take into account several criteria, some of which seemingly contradict each other: the structures should be as simple and transparent as possible, so that inferencing can be performed in a well-defined way, and at the same time, these structures should be as "expressive" as the natural language sentences are, not to lose any piece of information captured by the text.

Natural language has a major drawback in its ambiguity: when a listener is told that the criticism of the Polish delegate was fully justified, one does not know (unless indicated by the context or situation) whether s/he should infer that someone criticized the Polish delegate, or whether the Polish delegate criticized someone/something. On the other hand, there are means in natural language that are not preserved by most languages that logicians have used for drawing consequences, but that are critical for the latter to be drawn correctly: when a listener is told that Russian is spoken in SIBERIA, s/he draws conclusions partly different from those when s/he is told that in Siberia, RUSSIAN is spoken (capitals denoting the intonation center); or, to borrow one of the widely discussed examples in linguistic writings, if one hears that John called Mary a REPUBLICAN and that then she insulted HIM, one should infer that the speaker considers "being a Republican" an insult; this is not the case, if the speaker said that then she INSULTED him.

These and similar considerations have led the authors of TIBAQ to a strong conviction that the structures representing knowledge and serving as the base for inferencing in a question-answering system with a natural language interface should be linguistically based: they should be deprived of all ambiguities of natural language and at the same time they should preserve all the information relevant for drawing conclusions that the natural language sentences encompass. The experimental system based on TIBAQ, which was carried out by the group of formal linguistics at Charles University, Prague (implemented on an EC 1040 computer, compatible with IBM 360), works with representations of meaning (tectogrammatical representations, TR's) worked out in the framework of functional generative description, or FGD (for the linguistic background of this approach we refer to Sgall, 1964; Sgall et al., 1969;
These and similar considerations have led the authors of TIDAn to a stronc con- viction that the structures representing F.nowledge and serving as the base for in- ferencing in a q-uestion-answerin[~ system with a natural language interface should be linguistically based: they should be de- prived of all ambiguities of natural lang- uage and at the same til:ie they should pre- serve all the information relevant for drawing conclusions that the natural lanci- uage sentences encompass. The exr.erir,~ental syster~, based on TI~A(:, which was carried out by the group of formal linauistics at Charles University, Prague [implemented on ~C 1040 c~:n?11ter, compatible with 15::4 360) works with representations of :~eaning (te- ctogrammatical representations, fR's2 worked nut in the framework of functional generahive descrintion, or ~GD (for the linguistic background of this aopro~ch we refer to Sgall, 1964; ~;~all et ai.,1959; 291 Haji~ov~ and Jgall, 19:~O ). C. l ectocrar.~n~tical ~eor:_'sentations One of the b~sic tenets of VGD is the articulation of the sc'~antic relation, i.e. th_- relation bet.:een sound and r,~ean- ing, into a hierarchy o[ levels, connected with the relativiz~tion o[ the rel~tion of form" an~ 'function' a:~ known from the • ~;ritings of Prague &chool sc'nolar,3. This relativizatio~ .iakes it i~ossibl.., to di'~t- ingui.~h t::o levels of se:,tence structure: the level of surface syntax and that of t~e underlying or tectogramomatical struct- ure of sentences. As for a forn~al specification of the comolex unit oF- this lev,;l, that is the T!~., the [)re~{ent version (see :'l.<ite]-, Sgall an/ qgall, in }~ress) w~rks ::ith the notion of basic .]e})endency structure (5DR) ,;hich is defined a~ ] structure over the aloha- bet A (corres\~onding to tne labels of no~l- es) and the set of sy~,~ools C (corres~ond- ing.to the labels of e'lqes). 'i'he set of 5Dr- s is the sec of the tectogra:unatical representations of sentences containing no coordinated structures. 'fi%e ~-]Dq s are generated by the gra:,~.~ar G = (V.,V ,5,q), where V = A ka C, A = {(a ~, ,~)], a is in- T terpreted as a lexical unit, g is a vari- aole standing for t and f (contextually bound and non-bound, res~ectively] an., ~ is internreted as a set of <Ira,~,~aten~es be- longing to a; C is a '~et of com~)lementat- ions (c ~ C, where c is an inter;or denot- ing a certain type of comi~ler.'entation, called a functor),C" lenotes the set [<, >, %, >c~ for uvery C ~ C. %'o reuresent coordination, the form- al a~paratus for sentence generation is to be complemented by another aluhabet Q, ..,here q ~ e is interpreted as tynes of coordination (conjun~ive, disjunctive, ad- versative, ..., ap}9osition) , .Ind by ~ ne',,! kinu of brackets denotinq the boundary of coordinated structures; .3"={[ , ~, ] for every q ~ ~. The structures generated oy the grammar are then called comT~lex '.]e:gend- ency str~ctures (CD~). Coming back to the notions of elem- entary and com~!ex units of the tecto- gra~c, atical level, we can say that the comnlex unit of the TR is the com?lex de- pendency structure as briefly characteriz- ed above, while the ele.nentary units are the symbol~ of ti~e shaoes a, g, c, q, the ele[:ents of 3"~, and the ~arentheses. 'i'he lexical units a are conceiv..,<~ of as elem- entary rather th~n zom:_~lex, since for the time being we .1o not work with anv kind of lexical d~co.,;>osition. ,'.very le:~ical unit is assig~le] V.n~: [eat:/re conte.':tually bound" or 'non-bound" . 
The set of gra.'nmat- e~,~zs GR cov:_'rs a :;ide ranme o£ [}henomena; they can be classifie,i into two groups. Grammatemes representing morphological rleanin C in the narrow sense are specific for different (semantic) word classes: for nouns, w~ distinguish grammatemes of num- ber an~ of delimitation (indefinite, def- inite, specifying):for adjectives and ad- verbs, grammate~es of degree, for verbs, we work with grammatemes of aspect (pro- cessual, complex, resultative), iterative- hess (iterative, non-iterative), tense (simultaneous, anterior, posterior), im- :nediateness (immediate, non--immediate), predicate modality (indicative, Dossibil- itive, necessitive, voluntative), assert- ive modality (affirmative, negative), and sentential modality (ieclarative, inter- rogative, imperative). The other group o~ gr~mmatemes is not - with some exceptions - %~ord-class specific and similarly as the set of the types of complementations is closely connected with the kinds of the dependency relations between the governor and the dependent node; thus the Locative is accom}~anied by one member of the set {in, on, under, between .... ]. %'he dependency relations are very rich and varied, and it is no wonder that there were many efforts to classify them. In FGD, a ,lear boundary is being made be- tween -~tJcipants (deep cases) and(free) modifications: participants are those com- !~lementations that can occur with the same verb token only once and that have to be sr~uci~ied for each verb (and similarly for each noun, adjective, etc.), while free modifications are those comolementations that may appear more than once with the same verb token and that can be listed for all the verbs once for all; for a ~ore detaile:i discussion and the use of operat- ional criteria for this classification, see ?anevov~ 1974; 1980; Eaji~ov~ and Panevov~, in press; Haji~ov~, 1977; 1983. Doth ;~articipants and modifications can be (semantically) optional or obligatory; ~oth optional and obligatory oarticiDants are to be stated in the case frames of verbs, while modificatiors belong there only with such verbs with which they are obligatory. In the nresent version of FGD, the following five participants are disting- uished: actor/bearer, patient (objective), addressee, origin, an~ effect. The list o4 ~odifications is by far richer and more differentiated; a good starting ~oint for tills differentiation can be found in Czech gram~lars (esp. ~milauer, 1947). %'bus one can arrive at the following grou~?ings: (a) local: where, lirection, "~lhich ~:ray, (b) tem~3oral: when, since when, till when, how long, for ho%J long, luring, (c) causal: cause, condition real and un- rdal, aim, concession, consequence, (d) manner: manner, regard, extent, norm (criterion) , substitution, accompani- ment, means (instrument), difference, 292 benefit, comparison. In our discussion on types of complementat- ions we have up to now concentrated on comp- lementations of verbs; with Zhe FGD frame- work, however, all word classes have their frames. Specific to nouns (cf. Pi[ha, 1980), there is the partitive participant (a glass of water) and the free modifications of appurtenance (a leg of the table], of gen- eral relationship (nice weather), of ident- ity (the city of Prague] and of a descript- ive attribute (golden Prague). To illustrate the structure of the re- presentation on the tectogrammatical level of FG;), we present in Fi~. 
To illustrate the structure of the representation on the tectogrammatical level of FGD, we present in Fig. 1 a complex dependency structure of one of the readings of the sentence "Before the war began, Charles lived in PRAGUE and Jane in BERLIN" (which it has in common with "Before the beginning of the war, Charles lived in PRAGUE and Jane lived in BERLIN"); to make the graph easier to survey, we omit there the values of the grammatemes.

[Fig. 1: the tree-form graphic is omitted; the linearized form, with grammateme values restored, reads:
<(war_t, {sing, def})>_Act (begin_t, {anter, compl, noniter, nonimmed, indic, affirm, before})>_when (<(Charles_t, {sing, def})>_Act (live_t, {anter, compl, noniter, nonimmed, declar, indic, affirm}) <_where (Prague_f, {sing, def, in})> <(Jane_t, {sing, def})>_Act (live_t, {anter, compl, noniter, nonimmed, declar, indic, affirm}) <_where (Berlin_f, {sing, def, in})>)_AND]

II INFERENCE TYPES

A. Means of Implementation

The inference rules are programmed in Q-language (Colmerauer, 1982), which provides rules that carry out transformations of oriented graphs. Since the structures accepted by the rules must not contain complex labels, every complex symbol labelling a node in TRs has the form of a whole subtree in the Q-language notation (in a Q-tree).

The set of TRs constitutes a semantic network, in which the individual TRs are connected into a complex whole by means of pointers between the occurrences of lexical units and the corresponding entries in the lexicon. (Questions of different objects of the same kind referred to in different TRs will be handled only in the future experiments.)

The following procedures operate on TRs:
(i) the extraction of (possibly) relevant pieces of information from the stock of knowledge;
(ii) the application of inference rules on the relevant pieces of information;
(iii) the retrieval of the answer(s).

The extraction of the so-called relevant pieces of information is based on matching the TR of the input question with the lexicon and extracting those TRs that intersect with the TR of the given question in at least one specific lexical value (i.e. other than the general Actor, e.g. 'one', the copula, etc.); the rest of the trees (supposed to be irrelevant for the given question) are then deleted. The set of relevant TRs is then enriched by the rules of inference. If a rule of inference has been applied, the source TR as well as the derived TR constitute a part of the stock of knowledge and can serve as source TRs for further processing. In order to avoid infinite cycles, the whole procedure of inferencing is divided into several Q-systems (notice that rules within a single Q-system are applied as long as the conditions for their application are fulfilled, i.e. there is no ordering of the rules).

B. Types of Inference Rules

1. Rules operating on a single TR:
(i) the structure of the tree is preserved; the transformation concerns only (a) part(s) of the complex symbol of some node of the CDS (i.e. label(s) of some node(s) in the Q-tree of the TR):
(a) change of a grammateme:
V_perform-Possib (N_device-Act) (X-Pat) ... == V_perform-Indic (N_device-Act) (X-Pat) ...
Note: In our highly simplified and schematic shapes of the rules we quote only those labels of the nodes that are relevant for the rule in question; the sign == stands for "rewrite as"; N_device stands for any noun with the semantic feature of 'device', V_perform for a verb with the semantic feature of action verbs; Possib and Indic denote the grammatemes of predicate modality.
Ex.: An amplifier can activate a passive network to form an active analogue. == An amplifier activates a passive network to form an active analogue.

(b) change of a functor (type of complementation):
V-use (N_i-Pat) (N_j-Accomp) ... == V-use (N_i-Regard) (N_j-Pat) ...
Ex.: Operational amplifier is used with negative feedback. == With operational amplifier negative feedback is used.
V_perform (N_i-Act) (N_j-Pat) ... == V_perform (D_gen-Act) (N_i-Instr) (N_j-Pat) ...
Ex.: Operational amplifiers perform mathematical operations. == Mathematical operations are performed by means of operational amplifiers.
Note: Act, Pat, Instr, Accomp, Regard stand for the functors of Actor, Patient, Instrument, Accompaniment and Regard, respectively; D_gen denotes a general participant.

(c) change of the lexical part of the complex symbol accompanied by a change of some grammateme or functor:
V_i-Possib ((few)N_i) (V-use (N_k-Accomp_neg) ...) ... == V_i-Necess ((most)N_i) (V-use (N_k-Accomp_posit) ...) ...
Ex.: With few high-performance operational amplifiers it is possible to maintain a linear relationship between input and output without employing negative feedback. == With most high-performance operational amplifiers it is necessary to maintain ... employing negative feedback.

(ii) a whole subtree is replaced by another subtree:
Ex.: a negative feedback == a negative feedback circuit

(iii) extraction of a subtree to create an independent TR:
- relative clause in the topic part of the TR
V_i (V_j-Gener-L (...)) ... == V_j-Gener-L (...)
Ex.: An operational amplifier, which activates a passive network to form an active analogue, is an unusually versatile device. == An operational amplifier activates a passive network to form an active analogue.
Note: L stands for the grammateme 'contextually bound', R for 'non-bound', Gener for the functor of general relationship.
- causal clause in TRs with affirmative modality
V_i-Affirm (V_j-Cause (...)) ... == V_j (...)
Ex.: Since an operational amplifier is designed to perform mathematical operations, such basic operations as ... are performed readily. == An operational amplifier is designed to perform mathematical operations.
- deletion of an attribute in the focus part of a TR
V_i (N_j-R (X-Gener-R)) ... == V_i (N_j-R) ...
Ex.: Operational amplifiers are used as regulators ... to minimize loading of reference diodes permitting full exploitation of the diode's precision temperature stability. == Operational amplifiers are used as regulators ... to minimize loading of reference diodes.

(iv) the transformation gives rise to two TRs:
- distributivity of conjunction and disjunction (under certain conditions: e.g. for the distributivity of disjunction to hold, the grammateme of Indic with the main verb is replaced by the grammateme of Possib)
Ex.: Operational amplifiers are used in active filter networks to provide gain and frequency selectivity. == Operational amplifiers are used in active filter networks to provide gain. Operational amplifiers are used in active filter networks to provide frequency selectivity.
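As a concrete rendering of rule type 1(i)(a), here is a minimal runnable sketch of applying a grammateme-change rule to a simple dict encoding of a TR; the matching conditions and the feature lists are our own simplification of what a Q-system rule would state, not the system's actual code.

```python
def possib_to_indic(tr: dict):
    """Rule 1(i)(a): V_perform-Possib (N_device-Act) ... == V_perform-Indic ...
    TRs are nested dicts: {'lex': ..., 'gr': set of grammatemes,
    'deps': [(functor, subtree), ...]}. Returns the rewrite, or None if
    the rule does not match."""
    ACTION_VERBS = {"activate", "perform"}    # stand-in semantic features
    DEVICE_NOUNS = {"amplifier", "network"}
    device_actor = any(f == "Act" and d["lex"] in DEVICE_NOUNS
                       for f, d in tr["deps"])
    if tr["lex"] in ACTION_VERBS and "Possib" in tr["gr"] and device_actor:
        return {**tr, "gr": (tr["gr"] - {"Possib"}) | {"Indic"}}
    return None

# "An amplifier can activate a passive network." == "An amplifier activates ..."
tr = {"lex": "activate", "gr": {"Possib"},
      "deps": [("Act", {"lex": "amplifier", "gr": {"sing"}, "deps": []})]}
print(possib_to_indic(tr)["gr"])   # {'Indic'}
```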
2. Rules operating (simultaneously) on two TRs (the left-hand side of the rule refers to two TRs):
- conjoining of TRs with the same Actor
Ex.: An operational amplifier activates a passive network to form an active analogue. An operational amplifier performs mathematical operations. == An operational amplifier activates ... and performs ...
- use of definitions: the rule is triggered by the presence of an assertion of the form "X is called Y" and substitutes all occurrences of the lexical label X in all TRs by the lexical label Y.

III EFFECTIVE LINKS BETWEEN INFERENCING AND ANSWER RETRIEVAL

A. The Retrieval Procedure

The retrieval of an answer in the enriched set of assertions (TRs) is performed in the following steps:
(a) first it is checked whether the lexical value of the root of the TR is identical with that of the TR of the question; if the question has the form "What is performed (done, carried out) by X?", then the TR from the enriched set must include an action verb as a label of its root;
(b) the path leading from the root to the wh-word is checked (yes-no questions are excluded from the first stage of our experiments); the rightmost path in the relevant TR must coincide with the wh-path in its lexical labels, contextual boundness, grammatemes and functors (with some possible deviations determined by conditions of substitutability: Singular - Plural, Manner - Accompaniment, etc.); the wh-word in the question must be matched by a lexical unit of the potential answer, where the latter may be further expanded;
(c) if also the rest of the two compared TRs meet the conditions of identity or substitutability, the relevant TR is marked as a full answer to the given question; if this is not the case but at least one of the nodes depending on a node included in the wh-path meets these conditions, then the relevant TR is marked as an indirect (partial) answer.

B. Towards an Effective Application of Inference Rules

In the course of the experiments it soon became clear that even with a very limited number of inference rules the memory space was rapidly exceeded. It was then necessary to find a way to achieve an effective application of the inference rules and at the same time not to restrict the choice of relevant answers. Among other things, the following issues should be taken into consideration:

The rules substituting subtrees for subtrees are used rather frequently, as are those substituting only a label of one node (in the Q-tree, i.e. one element of the complex symbol in the CDS), preserving the overall structure of the tree untouched. These rules operate in both directions, so it appears useful to use in such cases a similar strategy as with synonymous expressions, i.e. to decide on a single representation both in the TR of the question and in that included in the stock of knowledge; this would lead to an important decrease of the number of TRs that undergo further inference transformations.

Only those TRs are selected for the final steps of the retrieval of the answer (see point (a) in III.A) that coincide with the TR of the question in the lexical label of the root, i.e. the main verb. If the inference rules are ordered in such a way that the rules changing an element of the label of the root are applied before the rest of the rules, then the first step of the retrieval procedure can be made before the application of other inference rules.
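A minimal sketch of retrieval step (a); the function names and the small feature lexicon are hypothetical illustrations, and steps (b)-(c), the wh-path and substitutability checks, are deliberately left out.

```python
def root_matches(answer_root: str, question_root: str) -> bool:
    """Step (a): a candidate TR must agree with the question at the root;
    for 'What is performed (done, carried out) by X?' any action verb passes."""
    GENERAL_PERFORM = {"perform", "do", "carry out"}    # illustrative list
    ACTION_VERBS = {"activate", "perform", "maintain"}  # stand-in feature lexicon
    if question_root in GENERAL_PERFORM:
        return answer_root in ACTION_VERBS
    return answer_root == question_root

def candidates(question_root: str, knowledge_roots: list) -> list:
    """First retrieval step over the enriched stock of TRs; steps (b)-(c)
    would then classify survivors as full or indirect (partial) answers."""
    return [r for r in knowledge_roots if root_matches(r, question_root)]

print(candidates("perform", ["activate", "be", "maintain"]))  # ['activate', 'maintain']
```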
This again leads to a considerable reduction of the number of TRs on which the rest of the inference rules are applied; only such TRs are left in the stock of relevant TRs
(i) that agree with the TR of the question in the label of the root (its lexical label may belong to superordinated or subordinated lexical values: device - amplifier, etc.),
(ii) that include the lexical label of the root of the question in some other place than at the root of the relevant TR,
(iii) if the question has the form "Which N ...?" (i.e. the wh-node depends on its head in the relation of general relationship), then also those TRs are preserved that contain an identical N node (noun) on any level of the tree.

The use of Q-language brings about one difficulty, namely that the rules have to be formulated for each level of the tree separately. It is possible to avoid this complication by a simple temporary rearrangement of the Q-tree, which results in a tree in which all nodes with lexical labels are on the same level; the rules for a substitution of the lexical labels can then be applied in one step, after which the tree is "returned" into its original shape.

These and similar considerations have led us to the following ordering of the individual steps of the inference and retrieval procedure:
1. application of rules transforming the input structure to such an extent that the lexical label of the root of the tree is not preserved in the tree of a potential answer;
2. a partial retrieval of the answer according to the root of the tree;
3. application of rules substituting other labels pertinent to the root of the tree;
4. partial retrieval of the answer according to the root of the tree;
5. application of inference rules operating on a single tree;
6. application of inference rules operating on two trees;
7. the steps (b) and (c) from the retrieval of the answer (see III.A above).

REFERENCES

Colmerauer A., 1982, Les systemes Q ou un formalisme pour analyser et synthétiser des phrases sur ordinateur, mimeo; German transl. in: Prague Bulletin of Mathematical Linguistics 38, 1982, 45-74.
Hajičová E., 1976, Question and Answer in Linguistics and in Man-Machine Communication, SMIL, No. 1, 36-46.
Hajičová E., 1979, Agentive or Actor/Bearer, Theoretical Linguistics 6, 173-190.
Hajičová E., 1983, Remarks on the Meaning of Cases, in Prague Studies in Mathematical Linguistics 8, 149-157.
Hajičová E. and J. Panevová, in press, Valency (Case) Frames of Verbs, in Sgall, in press.
Hajičová E. and P. Sgall, 1980, Linguistic Meaning and Knowledge Representation in Automatic Understanding of Natural Language, in COLING 80 - Proceedings, Tokyo, 67-75; reprinted in Prague Bulletin of Mathematical Linguistics 34, 5-21.
Hajičová E. and P. Sgall, 1981, Towards Automatic Understanding of Technical Texts, Prague Bulletin of Mathematical Linguistics 36, 5-23.
Panevová J., 1974, On Verbal Frames in Functional Generative Description, Part I, Prague Bulletin of Mathematical Linguistics 22, 3-40; Part II, PBML 23, 1975, 17-52.
Panevová J., 1980, Formy a funkce ve stavbě české věty /Forms and Functions in the Structure of Czech Sentence/, Prague.
Piťha P., 1980, Case Frames for Nouns, in Linguistic Studies Offered to B. Siertsema, ed. by D.J. v. Alkemade, Amsterdam, 91-99.
Plátek M., Sgall J. and P. Sgall, in press, A Dependency Base for a Linguistic Description, to appear in Sgall, in press.
Sgall P., 1964, Zur Frage der Ebenen im Sprachsystem, Travaux linguistiques de Prague 1, 95-106.
Sgall P.
, 1982, Natural Language Understanding and the Perspectives of Question Answering, in COLING 82, ed. by J. Horecký, 357-364.
Sgall P., ed., in press, Contributions to Functional Syntax, Semantics and Language Comprehension, to appear in Amsterdam and Prague.
Sgall P., Nebeský L., Goralčíková A. and E. Hajičová, 1969, A Functional Approach to Syntax, New York.
Šmilauer V., 1947, Novočeská skladba /A Present-Day Czech Syntax/, Prague.
1984
61
SEMANTIC RELEVANCE AND ASPECT DEPENDENCY IN A GIVEN SUBJECT DOMAIN
Contents-driven algorithmic processing of fuzzy wordmeanings to form dynamic stereotype representations
Burghard B. Rieger
Arbeitsgruppe für mathematisch-empirische Systemforschung (MESY)
German Department, Technical University of Aachen, Aachen, West Germany

ABSTRACT
Cognitive principles underlying the (re-)construction of word meaning and/or world knowledge structures are poorly understood yet. In a rather sharp departure from more orthodox lines of introspective acquisition of structural data on meaning and knowledge representation in cognitive science, an empirical approach is explored that analyses natural language data statistically, represents its numerical findings fuzzy-set theoretically, and interprets its intermediate constructs (stereotype meaning points) topologically as elements of semantic space. As connotative meaning representations, these elements allow an aspect-controlled, contents-driven algorithm to operate which reorganizes them dynamically in dispositional dependency structures (DDS-trees) which constitute a procedurally defined meaning representation format.

0. Introduction
Modelling system structures of word meanings and/or world knowledge is to face the problem of their mutual and complex relatedness. As the cognitive principles underlying these structures are poorly understood yet, the work of psychologists, AI-researchers, and linguists active in that field appears to be determined by the respective discipline's general line of approach rather than by consequences drawn from these approaches' intersecting results in their common field of interest. In linguistic semantics, cognitive psychology, and knowledge representation most of the necessary data concerning lexical, semantic and/or external world information is still provided introspectively. Researchers are exploring (or make test-persons explore) their own linguistic/cognitive capacities and memory structures to depict their findings (or let hypotheses about them be tested) in various representational formats (lists, arrays, trees, nets, active networks, etc.). It is widely accepted that these modelstructures do have a more or less ad hoc character and tend to be confined to their limited theoretical or operational performances within a specified approach, subject domain or implemented system. Basically interpretative approaches like these, however, lack the most salient characteristics of more constructive modelstructures that can be developed along the lines of an entity-relationship approach (CHEN 1980). Their properties of flexibility and dynamics are needed for automatic meaning representation from input texts to build up and/or modify the realm and scope of their own knowledge, however baseline and vague that may appear compared to human understanding.

In a rather sharp departure from those more orthodox lines of introspective data acquisition in meaning and knowledge representation research, the present approach (1) has been based on the algorithmic analysis of discourse that real speakers/writers produce in actual situations of performed or intended communication on a certain subject domain, and (2) the approach makes essential use of the word-usage/entity-relationship paradigm in combination with procedural means to map fuzzy word meanings and their connotative interrelations in a format of stereotypes.
Their dynamic dependencies (3) constitute semantic dispositions that render only those conceptual interrelations accessible to automatic processing which can - under differing aspects differently - be considered relevant. Such dispositional dependency structures (DDS) would seem to be an operational prerequisite to and a promising candidate for the simulation of contents-driven (analogically-associative), instead of formal (logically-deductive), inferences in semantic processing.

1. The approach
The empirical analysis of discourse and the formal representation of vague word meanings in natural language texts as a system of interrelated concepts (RIEGER 1980) is based on a WITTGENSTEINian assumption according to which a great number of texts analysed for any of the employed terms' usage regularities will reveal essential parts of the concepts and hence the meanings conveyed.

It has been shown elsewhere (RIEGER 1980) that in a sufficiently large sample of pragmatically homogeneous texts, called corpus, only a restricted vocabulary, i.e. a limited number of lexical items, will be used by the interlocutors however comprehensive their personal vocabularies in general might be. Consequently, the lexical items employed to convey information on a certain subject domain under consideration in the discourse concerned will be distributed according to their conventionalized communicative properties, constituting semantic regularities which may be detected empirically from the texts.

For the quantitative analysis not of propositional strings but of their elements, namely words in natural language texts, rather simple statistics serve the basically descriptive purpose. Developed from and centred around a correlational measure to specify intensities of co-occurring lexical items used in natural language discourse, these analysing algorithms allow for the systematic modelling of a fragment of the lexical structure constituted by the vocabulary employed in the texts as part of the concomitantly conveyed world knowledge.

A correlation coefficient appropriately modified for the purpose has been used as a mapping function (RIEGER 1981a). It allows the relational interdependency of any two lexical items to be computed from their textual frequencies. Those items which frequently co-occur in a number of texts will be positively correlated and hence called affined; those of which only one (and not the other) frequently occurs in a number of texts will be negatively correlated and hence called repugnant. Different degrees of word-repugnancy and word-affinity may thus be ascertained without recurring to an investigator's or his test-persons' word and/or world knowledge (semantic competence), but can instead solely be based upon the usage regularities of lexical items observed in a corpus of pragmatically homogeneous texts, spoken or written by real speakers/hearers in actual or intended acts of communication (communicative performance).

2. The semantic space structure
Following a system-theoretic approach and taking each word employed as a potential descriptor to characterize any other word's virtual meaning, the modified correlation coefficient can be used to map each lexical item into fuzzy subsets (ZADEH 1981) of the vocabulary according to its numerically specified usage regularities.
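The paper does not reproduce the modified coefficient itself (it is given in RIEGER 1981a); purely as an illustration of the affinity/repugnancy idea, the following sketch correlates two items' frequency profiles across a toy corpus, yielding positive values for affined and negative values for repugnant pairs.

```python
import math

def affinity(freqs_x: list, freqs_y: list) -> float:
    """Correlate the per-text frequencies of two lexical items.
    > 0: affined (they tend to co-occur); < 0: repugnant.
    A stand-in for Rieger's modified coefficient, not a reconstruction of it."""
    n = len(freqs_x)
    mx, my = sum(freqs_x) / n, sum(freqs_y) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(freqs_x, freqs_y))
    var_x = sum((x - mx) ** 2 for x in freqs_x)
    var_y = sum((y - my) ** 2 for y in freqs_y)
    return cov / math.sqrt(var_x * var_y) if var_x and var_y else 0.0

# frequencies of two items across four texts of a (toy) corpus
print(affinity([3, 0, 2, 1], [4, 0, 3, 1]))   # close to +1: affined
print(affinity([3, 0, 2, 0], [0, 2, 0, 3]))   # negative: repugnant
```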
Measuring the differences of any one lexical item's usages, represented as fuzzy subsets of the vocabulary, against those of all others allows for a consecutive mapping of items onto another abstract entity of the theoretical construct. These new operationally defined entities - called an item's meanings - may verbally be characterized as a function of all the differences of all regularities any one item is used with compared to any other item in the same corpus of discourse.

Table 1: Topological environment E<UNTERNEHM>
UNTERNEHM/enterpr 0.000
SYSTEM/system 2.035
LEIT/guide 2.113
ELEKTR/electron 2.195
COMPUTER 2.208
DIPLOM/diploma 2.288
VERBAND/assoc 2.299
INDUSTR/industry 2.538
STELLE/position 2.620
SUCHE/search 2.772
SCHREIB/write 2.791
SCHUL/school 2.922
AUFTRAG/order 3.058
FOLGE/consequ 3.135
BERUF/professn 3.477
ERFAHR/experienc 3.485
UNTERR/instruct 3.586
ORGANISAT/organis 3.846
VERWALT/administ 3.952
GEBIET/area 4.055
WUNSCH/wish/desir 4.081
...

The resulting system of sets of fuzzy subsets constitutes the semantic space. As a distance-relational datastructure of stereotypically formatted meaning representations it may be interpreted topologically as a hyperspace with a natural metric. Its linguistically labelled elements represent meaning points, and their mutual distances represent meaning differences.

The position of a meaning point may be described by its semantic environment. Tab. 1 shows the topological environment E<UNTERNEHM>, i.e. those adjacent points being situated within the hypersphere of a certain diameter around its center meaning point UNTERNEHM/enterprise, as computed from a corpus of German newspaper texts comprising some 8000 tokens of 360 types in 175 texts from the 1964 editions of the daily DIE WELT.

Having checked a great number of environments, it was ascertained that they do in fact assemble meaning points of a certain semantic affinity. Further investigation revealed (RIEGER 1983) that there are regions of higher point density in the semantic space, forming clouds and clusters. These were detected by multivariate and cluster-analyzing methods which showed, however, that both the paradigmatically and the syntagmatically related items formed what may be named connotative clouds rather than what is known to be called semantic fields. Although its internal relations appeared to be unspecifiable in terms of any logically deductive or concept hierarchical system, their elements' positions showed a high degree of stable structures which suggested a regular form of contents-dependent associative connectedness (RIEGER 1981b).

3. The dispositional dependency
Following a more semiotic understanding of meaning constitution, the present semantic space model may become part of a word meaning/world knowledge representation system which separates the format of a basic (stereotype) meaning representation from its latent (dependency) relational organization. Whereas the former is a rather static, topologically structured (associative) memory representing the data that text analysing algorithms provide, the latter can be characterized as a collection of dynamic and flexible structuring processes to reorganize these data under various principles (RIEGER 1981b).
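The static, associative memory just mentioned is exactly the distance-relational structure of Section 2; its two basic operations - the metric on meaning points and the environment E<x> of Table 1 - can be sketched as follows. The data layout (coordinate dicts) and the Euclidean metric are illustrative assumptions, not Rieger's actual definitions.

```python
import math

def distance(p: dict, q: dict) -> float:
    """Metric on semantic space: meaning points as coordinate vectors
    indexed by the vocabulary (one usage-difference value per descriptor)."""
    keys = set(p) | set(q)
    return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys))

def environment(center: str, space: dict, radius: float) -> list:
    """E<center>: all points within a hypersphere of the given radius,
    listed with their distances in increasing order (cf. Table 1)."""
    c = space[center]
    pairs = [(label, distance(c, p)) for label, p in space.items()]
    return sorted([(l, d) for l, d in pairs if d <= radius], key=lambda t: t[1])
```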
Other than declarative knowledge that can be represented in pre-defined semantic network structures, meaning relations of lexical relevance and semantic dispositions which are heavily dependent on context and domain of knowledge concerned will more adequately be defined procedurally, i.e. by generative algorithms that induce them on changing data only and whenever necessary. This is achieved by a recursively defined procedure that produces hierarchies of meaning points, structured under given aspects according to and in dependence of their meanings' relevancy (RIEGER 1984b).

Corroborating ideas expressed within the theories of spreading activation and the process of priming studied in cognitive psychology (LORCH 1982), a new algorithm has been developed which operates on the semantic space data and generates - other than in RIEGER (1982) - dispositional dependency structures (DDS) in the format of n-ary trees. Given one meaning point's position as a start, the algorithm of least distances (LD) will first list all its neighbouring points and stack them by increasing distances, second prime the starting point as head node or root of the DDS-tree to be generated, before, third, the algorithm's generic procedure takes over. It will take the first entry from the stack, generate a list of its neighbours, determine from it the least distant one that has already been primed, and identify it as the ancestor-node to which the new point is linked as descendant-node to be primed next. Repeated successively for each of the meaning points stacked and in turn primed in accordance with this procedure, the algorithm will select a particular fragment of the relational structure latently inherent in the semantic space data, depending on the aspect, i.e. the initially primed meaning point the algorithm is started with. Working its way through and consuming all labeled points in the space structure - unless stopped under conditions of given target nodes, number of nodes to be processed, or threshold of maximum distance - the algorithm transforms prevailing similarities of meanings as represented by adjacent points to establish a binary, non-symmetric, and transitive relation of semantic relevance between them. This relation allows for the hierarchical re-organization of meaning points as nodes under a primed head in an n-ary DDS-tree (RIEGER 1984a).

Without introducing the algorithms formally, some of their operative characteristics can well be illustrated in the sequel by a few simplified examples. Beginning with the schema of a distance-like data structure as shown in the two-dimensional configuration of 11 points, labeled a to k (Fig. 1.1), the stimulation of e.g. points a or c will start the procedure and produce two specific selections of distances activated among these 11 points (Fig. 1.2). The order of how these particular distances are selected can be represented either by step-lists (Fig. 1.3), or n-ary tree-structures (Fig. 1.4), or their binary transformations (Fig. 1.5). It is apparent that stimulation of other points within the same configuration of basic data points will result in similar but nevertheless differing trees, depending on the aspect under which the structure is accessed, i.e. the point initially stimulated to start the algorithm with.
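A minimal runnable sketch of the LD procedure as just described; the coordinate-dict representation and the Euclidean metric are our own stand-ins, and the stopping conditions mentioned in the text are reduced to exhausting the points.

```python
import math

def dds_tree(start: str, space: dict) -> dict:
    """Least-distances (LD) procedure: prime the start point as root, stack
    all other points by increasing distance from it, then attach each point
    to its least distant already-primed neighbour. Returns child -> parent."""
    def d(p, q):
        keys = set(p) | set(q)
        return math.sqrt(sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 for k in keys))

    stack = sorted((l for l in space if l != start),
                   key=lambda l: d(space[start], space[l]))
    primed, parent = [start], {start: None}
    for point in stack:
        # least distant already-primed neighbour becomes the ancestor-node
        anc = min(primed, key=lambda n: d(space[point], space[n]))
        parent[point] = anc
        primed.append(point)   # the new descendant is primed next
    return parent
```

Starting the same data from different points yields differing trees, which is exactly the aspect dependency the text emphasizes: `dds_tree("a", space)` and `dds_tree("c", space)` reorganize one and the same distance structure under two aspects.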
Applied to the semantic space data of 360 defined meaning points calculated from the textcorpus of the 1964 editions of the German newspaper DIE WELT, the Dispositional Dependency Structure (DDS) of UNTERNEHM/enterprise is given in Fig. 2 as generated by the procedure described.

Besides giving distances between nodes in the DDS-tree, a numerical measure has been devised which describes any node's degree of relevance according to that tree structure. As a numerical measure, a node's criteriality is to be calculated with respect to its root or aspect and has been defined as a function of both its distance values and its level in the tree concerned. For a wide range of purposes in processing DDS-trees, different criterialities of nodes can be used to estimate which paths are more likely being taken against others being followed less likely under priming of certain meaning points. Source-oriented, contents-driven search and retrieval procedures may thus be performed effectively on the semantic space structure, allowing for the activation of dependency paths. These are to trace those intermediate nodes which determine the associative transitions of any target node under any specifiable aspect.

[Fig. 1.1: two-dimensional configuration of 11 points labeled a to k; Fig. 1.2: the two selections of distances activated by stimulating a and c; Fig. 1.3: the corresponding step-lists (for a: a->a, e->a, b->a, c->b, f->e, g->a, d->b, h->g, i->h, k->b, j->c; for c: c->c, j->c, i->c, b->c, h->i, k->b, a->b, g->h, d->b, e->a, f->e); Figs. 1.4, 1.5: the n-ary trees and their binary transformations. Graphics omitted.]

[Fig. 2: DDS-tree of UNTERNEHMEN/enterprise generated from the semantic space data; each node (SYSTEM, ELEKTRO, COMPUTER, DIPLOM, SCHULE, etc.) carries its distance and criteriality values, e.g. UNTERNEHMEN 0.000/1.00, SYSTEM 2.035/.329, DIPLOM 0.115/.865. Graphic omitted.]

Using these tracing capabilities within DDS-trees proved particularly promising in an analogical, contents-driven form of automatic inferencing which - as opposed to logical deduction - has operationally been described in RIEGER (1984c) and simulated by way of parallel processing of two (or more) dependency-trees.

REFERENCES
Chen, P.P. (1980)(Ed.): Proceedings of the 1st Intern. Conference on Entity-Relationship Approach to Systems Analysis and Design (UCLA), Amsterdam/NewYork (North Holland) 1980
Lorch, R.F. (1982): Priming and Search Processes in Semantic Memory: A Test of Three Models of Spreading Activation. Journal of Verbal Learning and Verbal Behavior 21 (1982) 468-492
Rieger, B. (1980): Fuzzy Word Meaning Analysis and Representation. Proceedings of COLING 80, Tokyo 1980, 76-84
Rieger, B. (1981a): Feasible Fuzzy Semantics. in: Eikmeyer/Rieser (Eds.): Words, Worlds, and Contexts. New Approaches to Word Semantics, Berlin/NewYork (deGruyter) 1981, 193-209
Rieger, B. (1981b): Connotative Dependency Structures in Semantic Space.
in: Rieger (Ed.): Empirical Semantics II, Bochum (Brockmeyer) 1981, 622-711
Rieger, B. (1982): Procedural Meaning Representation. in: Horecký (Ed.): COLING 82. Proceedings of the 9th Intern. Conference on Computational Linguistics, Amsterdam/New York (North Holland) 1982, 319-324
Rieger, B. (1983): Clusters in Semantic Space. in: Delatte (Ed.): Actes du Congrès International Informatique et Sciences Humaines, Université de Liège (LASLA), 1983, 805-814
Rieger, B. (1984a): Semantische Dispositionen. Prozedurale Wissensstrukturen mit stereotypisch repraesentierten Wortbedeutungen. in: Rieger (Ed.): Dynamik in der Bedeutungskonstitution, Hamburg (Buske) 1983 (in print)
Rieger, B. (1984b): Inducing a Relevance Relation in a Distance-like Data Structure of Fuzzy Word Meaning Representation. in: Allen, R.F. (Ed.): Data Bases in the Humanities and Social Sciences (ICDBHSS/83), Rutgers University, N.J., Amsterdam/NewYork (North Holland) 1984 (in print)
Rieger, B. (1984c): Lexical Relevance and Semantic Disposition. in: Hoppenbrouwers/Seuren/Weijters (Eds.): Meaning and the Lexicon. Nijmegen University (M.I.S. Press) 1984 (in print)
Zadeh, L.A. (1981): Test-Score Semantics for Natural Languages and Meaning Representation via PRUF. in: Rieger (Ed.): Empirical Semantics I, Bochum (Brockmeyer) 1981, 281-349
1984
62
A Plan Recognition Model for Clarification Subdialogues
Diane J. Litman and James F. Allen
Department of Computer Science
University of Rochester, Rochester, NY 14627

Abstract
One of the promising approaches to analyzing task-oriented dialogues has involved modeling the plans of the speakers in the task domain. In general, these models work well as long as the topic follows the task structure closely, but they have difficulty in accounting for clarification subdialogues and topic change. We have developed a model based on a hierarchy of plans and metaplans that accounts for the clarification subdialogues while maintaining the advantages of the plan-based approach.

1. Introduction
One of the promising approaches to analyzing task-oriented dialogues has involved modeling the plans of the speakers in the task domain. The earliest work in this area involved tracking the topic of a dialogue by tracking the progress of the plan in the task domain [Grosz, 1977], as well as explicitly incorporating speech acts into a planning framework [Cohen and Perrault, 1979; Allen and Perrault, 1980]. A good example of the current status of these approaches can be found in [Carberry, 1983]. In general, these models work well as long as the topic follows the task structure closely, but they have difficulty in accounting for clarification subdialogues and topic change. Sidner and Israel [1981] suggest a solution to a class of clarification subdialogues that correspond to debugging the plan in the task domain. They allow utterances to talk about the task plan, rather than always being a step in the plan. Using their suggestions, as well as our early work [Allen et al., 1982; Litman, 1983], we have developed a model based on a hierarchy of plans and metaplans that accounts for the debugging subdialogues they discussed, as well as other forms of clarification and topic shift.

(This work was supported in part by the National Science Foundation under Grant IST-8210564, the Office of Naval Research under Grant N00014-80-C-1097, and the Defense Advanced Research Projects Agency under Grant N00014-82-K-0193.)

Reichman [1981] has a structural model of discourse that addresses clarification subdialogues and topic switch in unconstrained spontaneous discourse. Unfortunately, there is a large gap between her abstract model and the actual processing of utterances. Although not the focus of this paper, we claim that our new plan recognition model provides the link from the processing of actual input to its abstract discourse structure. Even more important, this allows us to use the linguistic results from such work to guide and be guided by our plan recognition. For example, consider the following two dialogue fragments. The first was collected at an information booth in a train station in Toronto [Horrigan, 1977], while the second is a scenario developed from protocols in a graphics command and control system that displays network structures [Sidner and Bates, 1983].

1) Passenger: The eight-fifty to Montreal?
2) Clerk: Eight-fifty to Montreal. Gate seven.
3) Passenger: Where is it?
4) Clerk: Down this way to the left. Second one on the left.
5) Passenger: OK. Thank you.
Dialogue 1

6) User: Show me the generic concept called "employee."
7) System: OK. <system displays network>
8) User: I can't fit a new IC below it. Can you move it up?
9) System: Yes. <system displays network>
10) User: OK, now make an individual employee concept whose first name is "Sam" and whose last name is "Jones."
The Social Security number is 234-56-7899.
11) System: OK.
Dialogue 2

While still "task-oriented," these dialogues illustrate phenomena characteristic of spontaneous conversation. That is, subdialogues correspond not only to subtasks (utterances (6)-(7) and (10)-(11)), but also to clarifications ((3)-(4)), debugging of task execution ((8)-(9)), and other types of topic switch and resumption. Furthermore, since these are extended discourses rather than unrelated question/answer exchanges, participants need to use the information provided by previous utterances. For example, (3) would be difficult to understand without the discourse context of (1) and (2). Finally, these dialogues illustrate the following of conversational conventions such as terminating dialogues (utterance (5)) and answering questions appropriately. For example, in response to (1), the clerk could have conveyed much the same information with "The departure location of train 537 is gate seven," which would not have been as appropriate.

To address these issues, we are developing a plan-based natural language system that incorporates knowledge of both task and discourse structure. In particular, we develop a new model of plan recognition that accounts for the recursive nature of plan suspensions and resumptions. Section 2 presents this model, followed in Section 3 by a brief description of the discourse analysis performed and the task and discourse interactions. Section 4 then traces the processing of Dialogue 1 in detail, and then this work is compared to previous work in Section 5.

2. Task Analysis
2.1 The Plan Structures
In addition to the standard domain-dependent knowledge of task plans, we introduce some knowledge about the planning process itself. These are domain-independent plans that refer to the state of other plans. During a dialogue, we shall build a stack of such plans, each plan on the stack referring to the plan below it, with the domain-dependent task plan at the bottom. As an example, a clarification subdialogue is modeled by a plan structure that refers to the plan that is the topic of the clarification. As we shall see, the manipulation of this stack of plans is similar to the manipulation of topic hierarchies that arise in discourse models.

To allow plans about plans, i.e., metaplans, we need a vocabulary for referring to and describing plans. Developing a fully adequate formal model would be a large research effort in its own right. Our development so far is meant to be suggestive of what is needed, and is specific enough for our preliminary implementation. We are also, for the purpose of this paper, ignoring all temporal qualifications (e.g., the constraints need to be temporally qualified), and all issues involving beliefs of agents. All plans constructed in this paper should be considered mutually known by the speaker and hearer.

We consider plans to be networks of actions and states connected by links indicating causality and subpart relationships. Every plan has a header, a parameterized action description that names the plan. The parameters of a plan are the parameters in the header. Associated with each plan is a set of constraints, which are assertions about the plan and its terms and parameters. The use of constraints will be made clear with examples. As usual, plans may also contain prerequisites, effects, and a decomposition. Decompositions may be sequences of actions, sequences of subgoals to be achieved, or a mixture of both.
We will ignore most prerequisites and effects throughout this paper, except when needed in examples. For example, the first plan in Figure 1 summarizes a simple plan schema with a header "BOARD (agent, train)," with parameters "agent" and "train," and with the constraint "depart-station (train) = Toronto." This constraint captures the knowledge that the information booth is in the Toronto station. The plan consists of the steps shown.

HEADER: BOARD (agent, train)
STEPS: do BUY-TICKET (agent, train)
       do GOTO (agent, depart-location (train), depart-time (train))
       do GETON (agent, train)
CONSTRAINTS: depart-station (train) = Toronto

HEADER: GOTO (agent, location, time)
EFFECT: AT (agent, location, time)

HEADER: MEET (agent, train)
STEPS: do GOTO (agent, arrive-location (train), arrive-time (train))
CONSTRAINTS: arrive-station (train) = Toronto

Figure 1: Domain Plans

The second plan indicates a primitive action and its effect. Other plans needed in this domain would include plans to meet trains, plans to buy tickets, etc.

We must also discuss the way terms are described, for some descriptions of a term are not informative enough to allow a plan to be executed. What counts as an informative description varies from plan to plan. We define the predicate KNOWREF (agent, term, plan) to mean that the agent has a description of the specified term that is informative enough to execute the specified plan, all other things being equal. Throughout this paper we assume a typed logic that will be implicit from the naming of variables. Thus, in the above formula, agent is restricted to entities capable of agency, term is a description of some object, and plan is restricted to objects that are plans.

Plans about plans, or metaplans, deal with specifying parts of plans, debugging plans, abandoning plans, etc. To talk about the structure of plans we will assume the predicate IS-PARAMETER-OF (parameter, plan), which asserts that the specified parameter is a parameter of the specified plan. More formally, parameters are skolem functions dependent on the plan. Other than the fact that they refer to other plans, metaplans are identical in structure to domain plans. Two examples of metaplans are given in Figure 2. The first one, SEEK-ID-PARAMETER, is a plan schema to find out a suitable description of the parameter that would allow the plan to be executed. It has one step in this version, namely to achieve KNOWREF (agent, parameter, plan), and it has two constraints that capture the relationship between the metaplan and the plan it concerns, namely that "parameter" must be a parameter of the specified plan, and that its value must be presently unknown. The second metaplan, ASK, involves achieving KNOWREF (agent, term, plan) by asking a question and receiving back an answer. Another way to achieve KNOWREF goals would be to look up the answer in a reference source. At the train station, for example, one can find departure times and locations from a schedule.

We are assuming suitable definitions of the speech acts, as in Allen and Perrault [1980]. The only deviation from that treatment involves adding an extra argument onto each (nonsurface) speech act, namely a plan parameter that provides the context for the speech act.
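A minimal sketch of how such plan schemas and the KNOWREF/IS-PARAMETER-OF vocabulary might be encoded; the representation (strings for headers and steps, callables for constraints) is our own simplification for illustration, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PlanSchema:
    """A plan or metaplan: a header names it, steps decompose it, and
    constraints are assertions that must be satisfiable to adopt it."""
    header: str
    parameters: list
    steps: list = field(default_factory=list)
    constraints: list = field(default_factory=list)   # callables over a binding
    effects: list = field(default_factory=list)

BOARD = PlanSchema(
    header="BOARD(agent, train)",
    parameters=["agent", "train"],
    steps=["BUY-TICKET(agent, train)",
           "GOTO(agent, depart-location(train), depart-time(train))",
           "GETON(agent, train)"],
    constraints=[lambda b: b["train"]["depart-station"] == "Toronto"],
)

# A metaplan differs only in that its 'plan' parameter ranges over plans:
SEEK_ID_PARAMETER = PlanSchema(
    header="SEEK-ID-PARAMETER(agent, parameter, plan)",
    parameters=["agent", "parameter", "plan"],
    steps=["achieve KNOWREF(agent, parameter, plan)"],
    constraints=[lambda b: b["parameter"] in b["plan"].parameters,  # IS-PARAMETER-OF
                 lambda b: not b.get("knowref", False)],            # ~KNOWREF as yet
)
```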
HEADER: SEEK-ID-PARAMETER (agent, parameter, plan)
STEPS: achieve KNOWREF (agent, parameter, plan)
CONSTRAINTS: IS-PARAMETER-OF (parameter, plan)
             ~KNOWREF (agent, parameter, plan)

HEADER: ASK (agent, term, plan)
STEPS: do REQUEST (agent, agent2, INFORMREF (agent2, agent, term, plan), plan)
       do INFORMREF (agent2, agent, term, plan)
EFFECTS: KNOWREF (agent, term, plan)
CONSTRAINTS: ~KNOWREF (agent, term, plan)

Figure 2: Metaplans

For example, the action INFORMREF (agent, hearer, term, plan) consists of the agent informing the hearer of a description of the term with the effect that KNOWREF (hearer, term, plan). Similarly, the action REQUEST (agent, hearer, act, plan) consists of the agent requesting the hearer to do the act as a step in the specified plan. This argument allows us to express constraints on the plans suitable for various speech acts. There are obviously many more metaplans concerning plan debugging, plan specification, etc. Also, as discussed later, many conventional indirect speech acts can be accounted for using a metaplan for each form.

2.2 Plan Recognition
The plan recognizer attempts to recognize the plan(s) that led to the production of the input utterance. Typically, an utterance either extends an existing plan on the stack or introduces a metaplan to a plan on the stack. If either of these is not possible for some reason, the recognizer attempts to construct a plausible plan using any plan schemas it knows about. At the beginning of a dialogue, a disjunction of the general expectations from the task domain is used to guide the plan recognizer. More specifically, the plan recognizer attempts to incorporate the observed action into a plan according to the following preferences:
1) by a direct match with a step in an existing plan on the stack;
This will be shown in the examples. For all of the preference classes, once a plan or set of plans is recognized, it is expanded by adding the definitions of all steps and substeps until there is no unique expansion for any of the remaining substeps. If there are multiple interpretations remaining at the end of this process, multiple versions of the stack are created to record each possibility. There are then several ways in which one might be chosen over the others. For example, if it is the hearer's turn in the dialogue (i.e., no additional utterance is expected from the speaker), then the hearer must initiate a clarification subdialogue. If it is still the speaker's turn, the hearer may wait for further dialogue to distinguish between the possibilities. 3. Communicative Analysis and Interaction with Task Analysis Much research in recent years has studied largely domain-independent linguistic issues. Since our work concentrates on incorporating the results of such work into our framework, rather than on a new investigation of these issues, we will first present the relevant results and then explain our work in those terms. Grosz [1977] noted that in task-oriented dialogues the task structure could be used to guide the discourse structure. She developed the notion of global focus of attention to represent the influence of the discourse structure; this proved useful for the resolution of definite noun phrases. Immediate focus [Grosz, 1977; Sidner, 1983] represented the influence of the linguistic form of the utterance and proved useful for understanding ellipsis, definite noun phrases, pronominalization, "this" and "that." Reichman [1981] developed the context space theory, in which the non- linear structure underlying a dialogue was reflected by the use of surface phenomena such as mode of reference and clue words. Clue words signaled a boundary shift between context spaces (the discourse units hierarchically structured) as well as the kind of shift, e.g., the clue word "now" indicated the start of a new context space which further developed the currently active space. However, Reichman's model was not limited to task-oriented dialogues; she accounted for a much wider range of discourse popping (e.g., topic switch), but used no task knowledge. Sacks et ai. [1974] present the systematics of the turn-taking system for conversation and present the notion of adjacency pairs. That is, one way conversation is interactively governed is when speakers take turns completing such conventional, paired forms as question/answer. Our communicative analysis is a step toward incorporating these results, with some modification, into a whole system. As in Grosz [1977], the task structure guides the focus mechanism, which marks the currently executing subtask as focused. Grosz, however, assumed an initial complete model of the task structure, as well as the mapping from an utterance to a given subtask in this 305 structure. Plan recognizers obviously cannot make such assumptions. Carberry [1983] provided explicit rules for tracking shifts in the task structure. From an utterance, she recognized part of the task plan, which was then used as an expectation structure for future plan recognition. For example, upon completion of a subtask, execution of the next subtask was the most salient expectation. Similarly, our focus mechanism updates the current focus by knowing what kind of plan structure traversals correspond to coherent topic continuation. These in turn provide expectations for the plan recognizer. 
As in Grosz [1977] and Reichman [1981], we also use surface linguistic phenomena to help determine focus shifts. For example, clue words often explicitly mark what would be an otherwise incoherent or unexpected focus switch. Our metaplans and stack mechanism capture Reichman's manipulation of the context space hierarchies for topic suspension and resumption. Clue words become explicit markers of meta-acts. In particular, the stack manipulations can be viewed as corresponding to the following discourse situations. If the plan is already on the stack, then the speaker is continuing the current topic, or is resuming a previous (stacked) topic. If the plan is a metaplan to a stacked plan, then the speaker is commenting on the current topic, or on a previous topic that is implicitly resumed. Finally, in other cases, the speaker is introducing a new topic.

Conceptually, the communicative and task analysis work in parallel, although the parallelism is constrained by synchronization requirements. For example, when the task structure is used to guide the discourse structure [Grosz, 1977], plan recognition (production of the task structure) must be performed first. However, suppose the user suddenly changes task plans. Communicative analysis could pick up any clue words signalling this unexpected topic shift, indicating the expectation changes to the plan recognizer. What is important is that such a strategy is dynamically chosen depending on the utterance, in contrast to any a priori sequential (or even cascaded [Bolt, Beranek and Newman, Inc., 1979]) ordering. The example below illustrates the necessity of such a model of interaction.

4. Example
This section illustrates the system's task and communicative processing of Dialogue 1. As above, we will concentrate on the task analysis; some discourse analysis will be briefly presented to give a feel for the complete system. We will take the role of the clerk, thus concentrating on understanding the passenger's utterances. Currently, our system performs the plan recognition outlined here and is driven by the output of a parser using a semantic grammar for the train domain. The incorporation of the discourse mechanism is under development. The system at present does not generate natural language responses.

The following analysis of "The eight-fifty to Montreal?" is output from the parser:

S-REQUEST (Person1, Clerk1, INFORMREF (Clerk1, Person1, ?fn (train1), ?plan))   (R1)
with constraints:
  IS-PARAMETER-OF (?fn (train1), ?plan)
  arrive-station (train1) = Montreal
  depart-time (train1) = eight-fifty

In other words, Person1 is querying the clerk about some (as yet unspecified) piece of information regarding train1. In the knowledge representation, objects have a set of distinguished roles that capture their properties relevant to the domain. The notation "?fn (train1)" indicates one of these roles of train1. Throughout, the "?" notation is used to indicate skolem variables that need to be identified. S-REQUEST is a surface request, as described in Allen and Perrault [1980].

Since the stack is empty, the plan recognizer can only construct an analysis in class (4), where an entire plan stack is constructed based on the domain-specific expectations that the speaker will try to BOARD or MEET a train. From the S-REQUEST, via REQUEST, it recognizes the ASK plan and then postulates the SEEK-ID-PARAMETER plan, i.e., ASK is the only known plan for which the utterance is a step.
Since its effect does not hold and its constraint is satisfied, SEEK-ID-PARAMETER can then be similarly postulated. In a more complex example, at this stage there would be competing interpretations that would need to be eliminated by the plan recognition heuristics discussed above.

In satisfying the IS-PARAMETER-OF constraint of SEEK-ID-PARAMETER, a second plan is introduced that must contain a property of a train as its parameter. This new plan will be placed on the stack before the SEEK-ID-PARAMETER plan and should satisfy one of the domain-specific expectations. An eligible domain plan is the GOTO plan, with the ?fn being either a time or a location. Since there are no plans for which SEEK-ID-PARAMETER is a step, chaining stops. The state of the stack after this plan recognition process is as follows:

PLAN2
  SEEK-ID-PARAMETER (Person1, ?fn (train1), PLAN1)
    ASK (Person1, ?fn (train1), PLAN1)
      REQUEST (Person1, Clerk1, INFORMREF (Clerk1, Person1, ?fn (train1), PLAN1))
        S-REQUEST (Person1, Clerk1, INFORMREF (Clerk1, Person1, ?fn (train1), PLAN1))
  CONSTRAINT: ?fn is location or time role of trains

PLAN1: GOTO (?agent, ?location, ?time)

Since SEEK-ID-PARAMETER is a metaplan, the algorithm then performs a recursive recognition on PLAN1. This selects the BOARD plan; the MEET plan is eliminated due to constraint violation, since the arrive-station is not Toronto. Recognition of the BOARD plan also constrains ?fn to be depart-time or depart-location. The constraint on the ASK plan indicated that the speaker does not know the ?fn property of the train. Since the depart-time was known from the utterance, depart-time can be eliminated as a possibility. Thus, ?fn has been constrained to be the depart-location. Also, since the expected agent of the BOARD plan is the speaker, ?agent is set equal to Person1. Once the recursive call is completed, plan recognition ends and all postulated plans are expanded to include the rest of their steps. The state of the stack is now as shown in Figure 3. As desired, we have constructed an entire plan stack based on the original domain-specific expectations to BOARD or MEET a train.

Recall that in parallel with the above, communicative analysis is also taking place. Once the task structure is recognized, the global focus (the executing step) in each plan structure is noted. These are the S-REQUEST in the metaplan and the GOTO in the task plan. Furthermore, since R1 has been completed, the focus tracking mechanism updates the foci to the next coherent moves (the next possible steps in the task structures). These are the INFORMREF or a metaplan to the SEEK-ID-PARAMETER.

PLAN2
  SEEK-ID-PARAMETER (Person1, depart-loc (train1), PLAN1)
    ASK (Person1, depart-loc (train1), PLAN1)
      REQUEST (Person1, Clerk1, INFORMREF (Clerk1, Person1, depart-loc (train1), PLAN1))
      INFORMREF (Clerk1, Person1, depart-loc (train1), PLAN1)

PLAN1
  BOARD (Person1, train1)
    BUY-TICKET (Person1, train1)
    GOTO (Person1, depart-loc (train1), depart-time (train1))
    GET-ON (Person1, train1)

Figure 3: The Plan Stack after the First Utterance
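A sketch of the recognizer's preference cascade (Section 2.2) as it applies in this example. The helper functions stand for machinery the paper describes only in prose; the trivial stand-ins below are ours and merely keep the sketch runnable.

```python
def recognize(utterance, stack, expectations):
    """Preference order: (1) direct match with an expected step on the stack,
    (2) subplan of a stacked plan via bottom-up chaining, (3) metaplan to a
    stacked plan (popping down to it), (4) construct a stack from the
    domain-specific expectations."""
    for plan in stack:                                   # preference 1
        if utterance in plan.get("expected_steps", []):
            return stack
    for plan in stack:                                   # preference 2
        sub = chain_to_subplan(utterance, plan)
        if sub is not None:
            plan.setdefault("subplans", []).append(sub)
            return stack
    for i, plan in enumerate(stack):                     # preference 3
        meta = metaplan_for(utterance, plan)
        if meta is not None:
            return [meta] + stack[i:]                    # pop down, then push
    return construct_stack(utterance, expectations)      # preference 4

# Hypothetical stand-ins for the chaining and heuristic machinery:
def chain_to_subplan(utterance, plan): return None
def metaplan_for(utterance, plan):
    if utterance.startswith("S-REQUEST"):
        return {"header": f"SEEK-ID-PARAMETER(... {plan['header']})"}
    return None
def construct_stack(utterance, expectations): return list(expectations)

stack = [{"header": "SEEK-ID-PARAMETER(Person1, depart-loc(train1), PLAN1)"},
         {"header": "BOARD(Person1, train1)"}]
print(recognize("S-REQUEST(... loc(Gate7) ...)", stack, []))  # metaplan pushed on top
```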
The global focus then corresponds to the executed INFORMREF plan step; moreover, since this step was completed, the focus can be updated to the next likely task moves, a metaplan relative to the SEEK-ID-PARAMETER or a pop back to the stacked BOARD plan. Also note that this updating provides expectations for the clerk's upcoming plan recognition task.

The passenger then asks "Where is it?", i.e.,

    S-REQUEST (Person1, Clerk1,
               INFORMREF (Clerk1, Person1, loc(Gate7), ?plan))

(assuming the appropriate resolution of "it" by the immediate focus mechanism of the communicative analysis). The plan recognizer now attempts to incorporate this utterance using the preferences described above. The first two preferences fail since the S-REQUEST does not match, directly or by chaining, any of the steps on the stack expected for execution. The third preference succeeds and the utterance is recognized as part of a new SEEK-ID-PARAMETER referring to the old one. This process is basically analogous to the process discussed in detail above, with the exception that the plan to which the SEEK-ID-PARAMETER refers is found in the stack rather than constructed. Also note that recognition of this metaplan satisfies one of our expectations. The other expectation involving popping the stack is not possible, for the utterance cannot be seen as a step of the BOARD plan. With the exception of the resolution of the pronoun, communicative analysis is also analogous to the above. The final results of the task and communicative analysis are shown in Figure 4. Note the inclusion of the S-INFORM, the clerk's actual realization of the INFORMREF.

    PLAN3
        SEEK-ID-PARAMETER (Person1, loc(Gate7), PLAN2)
            ASK (Person1, loc(Gate7), PLAN2)
                S-REQUEST (Person1, Clerk1,
                           INFORMREF (Clerk1, Person1, loc(Gate7), PLAN2))
                INFORMREF (Clerk1, Person1, loc(Gate7), PLAN2)

    PLAN2
        SEEK-ID-PARAMETER (Person1, depart-loc(train1), PLAN1)
            ASK (Person1, depart-loc(train1), PLAN1)
                REQUEST (Person1, Clerk1,
                         INFORMREF (Clerk1, Person1,
                                    depart-loc(train1), PLAN1))
                INFORMREF (Clerk1, Person1, depart-loc(train1), PLAN1)
                    S-INFORM (Clerk1, Person1,
                              equal (depart-loc(train1), loc(Gate7)))

    PLAN1
        BOARD (Person1, train1)
            BUY-TICKET (Person1, train1)
            GET-ON (Person1, train1)
                GOTO (Person1, depart-loc(train1), depart-time(train1))

    Figure 4: The Plan Stack after the Third Utterance

After the clerk replies with the INFORMREF in PLAN3, corresponding to "Down this way to the left -- second one on the left," the focus updates the expected possible moves to include a metaplan to the top SEEK-ID-PARAMETER (e.g., "Second wharf") or a pop. The pop allows a metaplan to the stacked SEEK-ID-PARAMETER of PLAN2 ("What's a gate?") or a pop, which allows a metaplan to the original domain plan ("It's from Toronto?"). Since the original domain plan involved no communication, there are no utterances that can be a continuation of the domain plan itself. The dialogue concludes with the passenger's "OK. Thank you." The "OK" is an example of a clue word [Reichman, 1981], words correlated with specific manipulations to the discourse structure. In particular, "OK" may indicate a pop [Grosz, 1977], eliminating the first of the possible expectations. All but the last are then eliminated by "thank you," a discourse convention indicating termination of the dialogue. Note that unlike before, what is going on with respect to the task plan is determined via communicative analysis.
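A minimal sketch of the preference ordering applied throughout this example (match an expected step directly, match by chaining, start a metaplan to a stacked plan, or build a new stack) is given below; the helper predicates are hypothetical stand-ins for the recognizer's actual tests:

    def incorporate(utterance, stack, matches_step, chains_to_step,
                    as_metaplan, build_new_stack):
        # Preference 1: the utterance directly matches an expected step.
        for plan in reversed(stack):
            if matches_step(utterance, plan):
                return ("direct", plan)
        # Preference 2: the utterance chains to an expected step.
        for plan in reversed(stack):
            if chains_to_step(utterance, plan):
                return ("chained", plan)
        # Preference 3: the utterance begins a metaplan referring to a
        # stacked plan (e.g., "Where is it?" above).
        for plan in reversed(stack):
            meta = as_metaplan(utterance, plan)
            if meta is not None:
                return ("metaplan", meta)
        # Otherwise: construct an entire new stack from the
        # domain-specific expectations, as for the first utterance.
        return ("new-stack", build_new_stack(utterance))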
5. Comparisons with Other Work

5.1 Recognizing Speech Acts

The major difference between our present approach and previous plan recognition approaches to speech acts (e.g., [Allen and Perrault, 1980]) is that we have a hierarchy of plans, whereas all the actions in Allen and Perrault were contained in a single plan. By doing so, we have simplified the notion of what a plan is and have solved a puzzle that arose in the one-plan systems. In such systems, plans were networks of action and state descriptions linked by causality and subpart relationships, plus a set of knowledge-based relationships. This latter class could not be categorized as either a causal or a subpart relationship and so needed a special mechanism. The problem was that these relationships were not part of any plan itself, but a relationship between plans. In our system, this is explicit. The "knowref", "know-pos" and "know-neg" relations are modeled as constraints between a plan and a metaplan, i.e., the plan to perform the task and the plan to obtain the knowledge necessary to perform the task.

Besides simplifying what counts as a plan, the multiplan approach provides some insight into how much of the user's intentions must be recognized in order to respond appropriately. We suggest that the top plan on the stack must be connected to a discourse goal. The lower plans may be only partially specified, and be filled in by later utterances. An example of this appears in considering Dialogue 2 from the first section, but there is no space to discuss this here (see [Litman and Allen, forthcoming]).

The knowledge-based relationships were crucial to the analysis of indirect speech acts (ISA) in Allen and Perrault [1980]. Following the argument above, this means that the indirect speech act analysis will always occur in a metaplan to the task plan. This makes sense since the ISA analysis is a communicative phenomenon. As far as the task is concerned, whether a request was indirect or direct is irrelevant. In our present system we have a set of metaplans that correspond to the common conventional ISA. These plans are abstractions of inference paths that can be derived from first principles as in Allen and Perrault. Similar "compilation" of ISA can be found in Sidner and Israel [1981] and Carberry [1983]. It is not clear in those systems, however, whether the literal interpretation of such utterances could ever be recognized. In their systems, the ISA analysis is performed before the plan recognition phase. In our system, the presence of "compiled" metaplans for ISA allows indirect forms to be considered easily, but they are just one more option to the plan recognizer. The literal interpretation is still available and will be recognized in appropriate contexts. For example, if we set up a plan to ask about someone's knowledge (say, by an initial utterance of "I need to know where the schedule is incomplete"), then the utterance "Do you know when the Windsor train leaves?" is interpreted literally as a yes/no question because that is the interpretation explicitly expected from the analysis of the initial utterance.

Sidner and Israel [1981] outlined an approach that extended Allen and Perrault in the direction we have done as well. They allowed for multiple plans to be recognized but did not appear to relate the plans in any systematic way. Much of what we have done builds on their suggestions and outlines specific aspects that were left unexplored in their paper.
In the longer version of this paper [Litman and Allen, forthcoming], our analysis of the dialogue from their paper is shown in detail. Grosz [1979], Levy [1979], and Appelt [1981] extended the planning framework to incorporate multiple perspectives, for example both communicative and task goal analysis; however, they did not present details for extended dialogues. ARGOT [Allen et al., 1982] was an attempt to fill this gap and led to the development of what has been presented here. Pollack [1984] is extending plan recognition for understanding in the domain of dialogues with experts; she abandons the assumption that people always know what they really need to know in order to achieve their goals. In our work we have implicitly assumed appropriate queries and have not yet addressed this issue.

Wilensky's use of meta-planning knowledge [1983] enables his planner to deal with goal interaction. For example, he has meta-goals such as resolving goal conflicts and eliminating circular goals. This treatment is similar to ours except for a matter of emphasis. His meta-knowledge is concerned with his planning mechanism, whereas our metaplans are concerned with acquiring knowledge about plans and interacting with other agents. The two approaches are also similar in that they use the same planning and recognition processes for both plans and metaplans.

5.2 Discourse

Although both Sidner and Israel [1981] and Carberry [1983] have extended the Allen and Perrault paradigm to deal with task plan recognition in extended dialogues, neither system currently performs any explicit discourse analysis. As described earlier, Carberry does have a (non-discourse) tracking mechanism similar to that used in [Grosz, 1977]; however, the mechanism cannot handle topic switches and resumptions, nor use surface linguistic phenomena to decrease the search space. Yet Carberry is concerned with tracking goals in an information-seeking domain, one in which a user seeks information in order to formulate a plan which will not be executed during the dialogue. (This is similar to what happens in our train domain.) Thus, her recognition procedure is also not as tied to the task structure. Supplementing our model with metaplans provided a unifying (and cleaner) framework for understanding in both task-execution and information-seeking domains.

Reichman [1981] and Grosz [1977] used a dialogue's discourse structure and surface phenomena to mutually account for and track one another. Grosz concentrated on task-oriented dialogues with subdialogues corresponding only to subtasks. Reichman was concerned with a model underlying all discourse genres. However, although she distinguished communicative goals from speaker intent, her research was not concerned with either speaker intent or any interactions. Since our system incorporates both types of analysis, we have not found it necessary to perform complex communicative goal recognition as advocated by Reichman. Knowledge of plans and metaplans, linguistic surface phenomena, and simple discourse conventions have so far sufficed. This approach appears to be more tractable than the use of rhetorical predicates advocated by Reichman and others such as Mann et al. [1977] and McKeown [1982]. Carbonell [1982] suggests that any comprehensive theory of discourse must address issues of meta-language communication, as well as integrate the results with other discourse and domain knowledge, but does not outline a specific framework.
We have presented a computational model which addresses many of these issues for an important class of dialogues.

6. References

Allen, J.F., A.M. Frisch, and D.J. Litman, "ARGOT: The Rochester Dialogue System," Proc., Nat'l. Conf. on Artificial Intelligence, Pittsburgh, PA, August 1982.
Allen, J.F. and C.R. Perrault, "Analyzing intention in utterances," TR 50, Computer Science Dept., U. Rochester, 1979; Artificial Intell. 15, 3, Dec. 1980.
Appelt, D.E., "Planning natural language utterances to satisfy multiple goals," Ph.D. thesis, Stanford U., 1981.
Bolt, Beranek and Newman, Inc., "Research in natural language understanding," Report 4274 (Annual Report), September 1978 - August 1979.
Carberry, S., "Tracking user goals in an information seeking environment," Proc., Nat'l. Conf. on Artificial Intelligence, 1983.
Carbonell, J.G., "Meta-language utterances in purposive discourse," TR 125, Computer Science Dept., Carnegie-Mellon U., June 1982.
Cohen, P.R. and C.R. Perrault, "Elements of a plan-based theory of speech acts," Cognitive Science 3, 3, 1979.
Grosz, B.J., "The representation and use of focus in dialogue understanding," TN 151, SRI, July 1977.
Grosz, B.J., "Utterance and objective: Issues in natural language communication," Proc., IJCAI, 1979.
Horrigan, M.K., "Modelling simple dialogs," Master's Thesis, TR 108, U. Toronto, May 1977.
Levy, D., "Communicative goals and strategies: Between discourse and syntax," in T. Givon (ed). Syntax and Semantics (vol. 12). New York: Academic Press, 1979.
Litman, D.J., "Discourse and problem solving," Report 5338, Bolt Beranek and Newman, July 1983; TR 130, Computer Science Dept., U. Rochester, Sept. 1983.
Litman, D.J. and J.F. Allen, "A plan recognition model for clarification subdialogues," forthcoming TR, Computer Science Dept., U. Rochester, expected 1984.
Mann, W.C., J.A. Moore, and J.A. Levin, "A comprehension model for human dialogue," Proc., 5th IJCAI, MIT, 1977.
McKeown, K.R., "Generating natural language text in response to questions about database structure," Ph.D. thesis, U. Pennsylvania, 1982.
Pollack, M.E., "Goal inference in expert systems," Ph.D. thesis proposal, U. Penn., January 1984.
Reichman, R., "Plain speaking: A theory and grammar of spontaneous discourse," Report 4681, Bolt, Beranek and Newman, Inc., 1981.
Sacks, H., E.A. Schegloff, and G. Jefferson, "A simplest systematics for the organization of turn-taking for conversation," Language 50, 4, Part 1, December 1974.
Sidner, C.L., "Focusing in the comprehension of definite anaphora," in M. Brady (ed). Computational Models of Discourse. Cambridge, MA: MIT Press, 1983.
Sidner, C.L. and M. Bates, "Requirements for natural language understanding in a system with graphic displays," Report 5242, Bolt Beranek and Newman, Inc., 1983.
Sidner, C.L. and D. Israel, "Recognizing intended meaning and speakers' plans," Proc., 7th IJCAI, Vancouver, B.C., August 1981.
Wilensky, R. Planning and Understanding. Addison-Wesley, 1983.
A COMPUTATIONAL THEORY OF DISPOSITIONS

Lotfi A. Zadeh
Computer Science Division
University of California, Berkeley, California 94720, U.S.A.

ABSTRACT

Informally, a disposition is a proposition which is preponderantly, but not necessarily always, true. For example, birds can fly is a disposition, as are the propositions Swedes are blond and Spaniards are dark. An idea which underlies the theory described in this paper is that a disposition may be viewed as a proposition with implicit fuzzy quantifiers which are approximations to all and always, e.g., almost all, almost always, most, frequently, etc. For example, birds can fly may be interpreted as the result of suppressing the fuzzy quantifier most in the proposition most birds can fly. Similarly, young men like young women may be read as most young men like mostly young women. The process of transforming a disposition into a proposition is referred to as explicitation or restoration. Explicitation sets the stage for representing the meaning of a proposition through the use of test-score semantics (Zadeh, 1978, 1982). In this approach to semantics, the meaning of a proposition, p, is represented as a procedure which tests, scores and aggregates the elastic constraints which are induced by p. The paper closes with a description of an approach to reasoning with dispositions which is based on the concept of a fuzzy syllogism. Syllogistic reasoning with dispositions has an important bearing on commonsense reasoning as well as on the management of uncertainty in expert systems. As a simple application of the techniques described in this paper, we formulate a definition of typicality -- a concept which plays an important role in human cognition and is of relevance to default reasoning.

1. Introduction

Informally, a disposition is a proposition which is preponderantly, but not necessarily always, true. Simple examples of dispositions are: smoking is addictive, exercise is good for your health, long sentences are more difficult to parse than short sentences, overeating causes obesity, Trudi is always right, etc. Dispositions play a central role in human reasoning, since much of human knowledge and, especially, commonsense knowledge, may be viewed as a collection of dispositions.

The concept of a disposition gives rise to a number of related concepts among which is the concept of a dispositional predicate. Familiar examples of unary predicates of this type are: healthy, honest, optimist, safe, etc., with binary dispositional predicates exemplified by: taller than in Swedes are taller than Frenchmen, like in Italians are like Spaniards, like in young men like young women, and smokes in Ron smokes cigarettes. Another related concept is that of a dispositional command (or imperative) which is exemplified by proceed with caution, avoid overexertion, keep under refrigeration, be frank, etc.

To Professor Nancy Cartwright. Research supported in part by NASA Grant NCC2-275 and NSF Grant IST-8320416.

The basic idea underlying the approach described in this paper is that a disposition may be viewed as a proposition with suppressed, or, more generally, implicit fuzzy quantifiers such as most, almost all, almost always, usually, rarely, much of the time, etc. To illustrate, the disposition overeating causes obesity may be viewed as the result of suppression of the fuzzy quantifier most in the proposition most of those who overeat are obese.
Similarly, the disposition young men like young women may be interpreted as most young men like mostly young women. It should be stressed, however, that restoration (or explicitation) -- viewed as the inverse of suppression -- is an interpretation-dependent process in the sense that, in general, a disposition may be interpreted in different ways depending on the manner in which the fuzzy quantifiers are restored and defined.

The implicit presence of fuzzy quantifiers stands in the way of representing the meaning of dispositional concepts through the use of conventional methods based on truth-conditional, possible-world or model-theoretic semantics (Cresswell, 1973; McCawley, 1981; Miller and Johnson-Laird, 1970). In the computational approach which is described in this paper, a fuzzy quantifier is manipulated as a fuzzy number. This idea serves two purposes. First, it provides a basis for representing the meaning of dispositions; and second, it opens a way of reasoning with dispositions through the use of a collection of syllogisms. This aspect of the concept of a disposition is of relevance to default reasoning and non-monotonic logic (McCarthy, 1980; McDermott and Doyle, 1980; McDermott, 1982; Reiter, 1983).

To illustrate the manner in which fuzzy quantifiers may be manipulated as fuzzy numbers, assume that, after restoration, two dispositions d1 and d2 may be expressed as propositions of the form

    p1 ≜ Q1 A's are B's                          (1.1)
    p2 ≜ Q2 B's are C's,                         (1.2)

in which Q1 and Q2 are fuzzy quantifiers, and A, B and C are fuzzy predicates. For example,

    p1 ≜ most students are undergraduates        (1.3)
    p2 ≜ most undergraduates are young.

By treating p1 and p2 as the major and minor premises in a syllogism, the following chaining syllogism may be established if B ⊂ A (Zadeh, 1983):1

    Q1 A's are B's                               (1.4)
    Q2 B's are C's
    ------------------------------
    ≥ (Q1 ⊗ Q2) A's are C's,

in which Q1 ⊗ Q2 represents the product of the fuzzy numbers Q1 and Q2, and ≥ (Q1 ⊗ Q2) should be read as "at least Q1 ⊗ Q2."

[Figure 1: Multiplication of fuzzy quantifiers -- the membership functions of Q1, Q2 and Q1 ⊗ Q2 plotted over the proportion axis.]

As shown in Figure 1, Q1 and Q2 are defined by their respective possibility distributions, which means that if the value of Q1 at the point u is a, then a represents the possibility that the proportion of B's in A's is u. In the special case where p1 and p2 are expressed by (1.3), the chaining syllogism yields

    most students are undergraduates
    most undergraduates are young
    ------------------------------
    most² students are young,

where most² represents the product of the fuzzy number most with itself.

[Figure 2: Representation of most and most² as membership functions over the proportion axis.]

1. In the literature of linguistics, logic and philosophy of language, fuzzy quantifiers are usually referred to as vague or generalized quantifiers (Barwise and Cooper, 1981; Peterson, 1979). In the approach described in this paper, a fuzzy quantifier is interpreted as a fuzzy number which provides an approximate characterization of absolute or relative cardinality.
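To make the arithmetic of Q1 ⊗ Q2 concrete, the following Python sketch computes the product of two fuzzy numbers by interval arithmetic on their alpha-cuts, assuming triangular membership functions; the particular parameters chosen here for most are illustrative, not prescribed by the theory:

    def alpha_cut(a, b, c, alpha):
        # Triangular fuzzy number (a, b, c): support [a, c], peak at b.
        return (a + alpha * (b - a), c - alpha * (c - b))

    def product_cut(q1, q2, alpha):
        # Since all endpoints lie in [0, 1], the product interval is
        # simply the product of the endpoints.
        l1, u1 = alpha_cut(*q1, alpha)
        l2, u2 = alpha_cut(*q2, alpha)
        return (l1 * l2, u1 * u2)

    MOST = (0.6, 0.8, 1.0)          # an assumed rendering of "most"
    for alpha in (0.0, 0.5, 1.0):
        print(alpha, product_cut(MOST, MOST, alpha))
    # At alpha = 1 the cut degenerates to (0.64, 0.64), the peak of
    # most squared -- the quantifier of "most² students are young".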
2. Meaning Representation and Test-Score Semantics

To represent the meaning of a disposition, d, we employ a two-stage process. First, the suppressed fuzzy quantifiers in d are restored, resulting in a fuzzily quantified proposition p. Then, the meaning of p is represented -- through the use of test-score semantics (Zadeh, 1978, 1982) -- as a procedure which acts on a collection of relations in an explanatory database and returns a test score which represents the degree of compatibility of p with the database. In effect, this implies that p may be viewed as a collection of elastic constraints which are tested, scored and aggregated by the meaning-representation procedure. In test-score semantics, these elastic constraints play a role which is analogous to that of truth-conditions in truth-conditional semantics (Cresswell, 1973).

As a simple illustration, consider the familiar example

    d ≜ snow is white,

which we interpret as a disposition whose intended meaning is the proposition

    p ≜ usually snow is white.

To represent the meaning of p, we assume that the explanatory database, EDF (Zadeh, 1982), consists of the following relations whose meaning is presumed to be known:

    EDF ≜ WHITE[Sample; μ] + USUALLY[Proportion; μ],

in which + should be read as and. The ith row in WHITE is a tuple (Si, ri), i = 1,...,m, in which Si is the ith sample of snow, and ri is the degree to which the color of Si matches white. Thus, ri may be interpreted as the test score for the constraint on the color of Si induced by the elastic constraint WHITE. Similarly, the relation USUALLY may be interpreted as an elastic constraint on the variable Proportion, with μ representing the test score associated with a numerical value of Proportion.

The steps in the procedure which represents the meaning of p may be described as follows:

1. Find the proportion of samples whose color is white:

    ρ = (r1 + ... + rm) / m,

in which the proportion is expressed as the arithmetic average of the test scores.

2. Compute the degree to which ρ satisfies the constraint induced by USUALLY:

    τ ≜ μUSUALLY[Proportion = ρ],

in which τ is the overall test score, i.e., the degree of compatibility of p with the explanatory database, and the notation μR[X = a] means: set the variable X in the relation R equal to a and read the resulting value of the variable μ.

More generally, to represent the meaning of a disposition it is necessary to define the cardinality of a fuzzy set. Specifically, if A is a subset of a finite universe of discourse U = {u1,...,un}, then the sigma-count of A is defined as

    ΣCount(A) = Σi μA(ui),                       (2.1)

in which μA(ui), i = 1,...,n, is the grade of membership of ui in A (Zadeh, 1983a), and it is understood that the sum may be rounded, if need be, to the nearest integer. Furthermore, one may stipulate that the terms whose grade of membership falls below a specified threshold be excluded from the summation. The purpose of such an exclusion is to avoid a situation in which a large number of terms with low grades of membership become count-equivalent to a small number of terms with high membership.

The relative sigma-count, denoted by ΣCount(B/A), may be interpreted as the proportion of elements of B in A. More explicitly,

    ΣCount(B/A) = ΣCount(A ∩ B) / ΣCount(A),     (2.2)

where A ∩ B, the intersection of B and A, is defined by

    μB∩A(u) = μB(u) ∧ μA(u),  u ∈ U,

where ∧ denotes the min operator in infix form. Thus, in terms of the membership functions of B and A, the relative sigma-count of B in A is given by

    ΣCount(B/A) = Σi (μB(ui) ∧ μA(ui)) / Σi μA(ui).    (2.3)
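A minimal Python sketch of (2.1)-(2.3), applied to a usuality test of the kind just described; the membership values and the piecewise-linear rendering of MOST are invented for illustration:

    def sigma_count(mu):
        # (2.1): sigma-count of a fuzzy set given as a list of grades.
        return sum(mu)

    def rel_sigma_count(mu_b, mu_a):
        # (2.3): proportion of B's in A's, with min as intersection.
        return sum(min(b, a) for b, a in zip(mu_b, mu_a)) / sum(mu_a)

    # Degrees to which five individuals overeat / are obese (invented):
    overeat = [0.9, 0.8, 0.1, 1.0, 0.6]
    obese   = [0.8, 0.9, 0.2, 0.7, 0.1]
    rho = rel_sigma_count(obese, overeat)

    def mu_most(p):
        # An assumed piecewise-linear MOST: 0 below 0.5, 1 above 0.8.
        return min(1.0, max(0.0, (p - 0.5) / 0.3))

    tau = mu_most(rho)   # overall test score for the disposition
    print(round(rho, 3), round(tau, 3))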
As an illustration, consider the disposition

    d ≜ overeating causes obesity,               (2.4)

which after restoration is assumed to read2

    p ≜ most of those who overeat are obese.     (2.5)

To represent the meaning of p, we shall employ an explanatory database whose constituent relations are:

    EDF ≜ POPULATION[Name; Overeat; Obese] + MOST[Proportion; μ].

The relation POPULATION is a list of names of individuals, with the variables Overeat and Obese representing, respectively, the degrees to which Name overeats and is obese. In MOST, μ is the degree to which a numerical value of Proportion fits the intended meaning of MOST. The test procedure which represents the meaning of p involves the following steps.

1. Let Namei, i = 1,...,m, be the name of the ith individual in POPULATION. For each Name, find the degrees to which Namei overeats and is obese:

    αi ≜ μOVEREAT(Namei) ≜ Overeat POPULATION[Name = Namei]
    βi ≜ μOBESE(Namei) ≜ Obese POPULATION[Name = Namei].

2. Compute the relative sigma-count of OBESE in OVEREAT:

    ρ ≜ ΣCount(OBESE/OVEREAT) = Σi (αi ∧ βi) / Σi αi.

3. Compute the test score for the constraint induced by MOST:

    τ = μMOST[Proportion = ρ].

This test score represents the compatibility of p with the explanatory database.

3. The Scope of a Fuzzy Quantifier

In dealing with the conventional quantifiers all and some in first-order logic, the scope of a quantifier plays an essential role in defining its meaning. In the case of a fuzzy quantifier which is characterized by a relative sigma-count, what matters is the identity of the sets which enter into the relative count. Thus, if the sigma-count is of the form ΣCount(B/A), which should be read as the proportion of B's in A's, then B and A will be referred to as the n-set (with n standing for numerator) and b-set (with b standing for base), respectively. The ordered pair {n-set, b-set}, then, may be viewed as a generalization of the concept of the scope of a quantifier. Note, however, that, in this sense, the scope of a fuzzy quantifier is a semantic rather than syntactic concept.

As a simple illustration, consider the proposition p ≜ most students are undergraduates. In this case, the n-set of most is undergraduates, the b-set is students, and the scope of most is the pair {undergraduates, students}.

2. It should be understood that (2.5) is just one of many possible interpretations of (2.4), with no implication that it constitutes a prescriptive interpretation of causality. See Suppes (1970).

As an additional illustration of the interaction between scope and meaning, consider the disposition

    d ≜ young men like young women.              (3.1)

Among the possible interpretations of this disposition, we shall focus our attention on the following (the symbol rd denotes a restoration of a disposition):

    rd1 ≜ most young men like most young women
    rd2 ≜ most young men like mostly young women.

To place in evidence the difference between rd1 and rd2, it is expedient to express them in the form

    rd1 ≜ most young men P1
    rd2 ≜ most young men P2,

where P1 and P2 are the fuzzy predicates

    P1 ≜ likes most young women

and

    P2 ≜ likes mostly young women,

with the understanding that, for grammatical correctness, likes in P1 and P2 should be replaced by like when P1 and P2 act as constituents of rd1 and rd2. In more explicit terms, P1 and P2 may be expressed as

    P1 ≜ P1[Name; μ]                             (3.2)
    P2 ≜ P2[Name; μ],

in which Name is the name of a male person and μ is the degree to which the person in question satisfies the predicate. (Equivalently, μ is the grade of membership of the person in the fuzzy set which represents the denotation or, equivalently, the extension of the predicate.)
To represent the meaning of P1 and P2 through the use of test-score semantics, we assume that the explanatory database consists of the following relations (Zadeh, 1983b):

    EDF ≜ POPULATION[Name; Age; Sex]
          + LIKE[Name1; Name2; μ]
          + YOUNG[Age; μ]
          + MOST[Proportion; μ].

In LIKE, μ is the degree to which Name1 likes Name2; and in YOUNG, μ is the degree to which a person whose age is Age is young.

First, we shall represent the meaning of P1 by the following test procedure.

1. Divide POPULATION into the population of males, M.POPULATION, and the population of females, F.POPULATION:

    M.POPULATION ≜ Name,Age POPULATION[Sex = Male]
    F.POPULATION ≜ Name,Age POPULATION[Sex = Female],

where Name,Age POPULATION denotes the projection of POPULATION on the attributes Name and Age.

2. For each Namej, j = 1,...,L, in F.POPULATION, find the age of Namej:

    Aj ≜ Age F.POPULATION[Name = Namej].

3. For each Namej, find the degree to which Namej is young:

    αj ≜ μYOUNG[Age = Aj],

where αj may be interpreted as the grade of membership of Namej in the fuzzy set, YW, of young women.

4. For each Namei, i = 1,...,K, in M.POPULATION, find the age of Namei:

    Bi ≜ Age M.POPULATION[Name = Namei].

5. For each Namei, find the degree to which Namei likes Namej:

    βij ≜ μLIKE[Name1 = Namei; Name2 = Namej],

with the understanding that βij may be interpreted as the grade of membership of Namej in the fuzzy set, WLi, of women whom Namei likes.

6. For each Namej, find the degree to which Namei likes Namej and Namej is young:

    γij ≜ αj ∧ βij.

Note: As in previous examples, we employ the aggregation operator min (∧) to represent the meaning of conjunction. In effect, γij is the grade of membership of Namej in the intersection of the fuzzy sets WLi and YW.

7. Compute the relative sigma-count of women whom Namei likes among young women:

    ρi ≜ ΣCount(WLi/YW)                          (3.4)
       = ΣCount(WLi ∩ YW) / ΣCount(YW)
       = Σj γij / Σj αj.

8. Compute the test score for the constraint induced by MOST:

    τi = μMOST[Proportion = ρi].                 (3.5)

This test score may be interpreted as the degree to which Namei satisfies P1, i.e.,

    τi = μP1[Name = Namei].

The test procedure described above represents the meaning of P1. In effect, it tests the constraint expressed by the proposition

    ΣCount(WLi/YW) is MOST

and implies that the n-set and the b-set for the quantifier most in P1 are given by:

    n-set = WLi = Name2 LIKE[Name1 = Namei] ∩ F.POPULATION

and

    b-set = YW = YOUNG ∩ F.POPULATION.

By contrast, in the case of P2, the identities of the n-set and the b-set are interchanged, i.e.,

    n-set = YW  and  b-set = WLi,

which implies that the constraint which defines P2 is expressed by

    ΣCount(YW/WLi) is MOST.

Thus, whereas the scope of the quantifier most in P1 is {WLi, YW}, the scope of mostly in P2 is {YW, WLi}.

Having represented the meaning of P1 and P2, it becomes a simple matter to represent the meaning of rd1 and rd2. Taking rd1, for example, we have to add the following steps to the test procedure which defines P1.

9. For each Namei, find the degree to which Namei is young:

    δi ≜ μYOUNG[Age = Bi],

where δi may be interpreted as the grade of membership of Namei in the fuzzy set, YM, of young men.

10. Compute the relative sigma-count of men who have property P1 among young men:

    ρ ≜ ΣCount(P1/YM)
      = ΣCount(P1 ∩ YM) / ΣCount(YM)
      = Σi (τi ∧ δi) / Σi δi.

11. Test the constraint induced by MOST:

    τ = μMOST[Proportion = ρ].                   (3.6)

The test score expressed by (3.6) represents the overall test score for the disposition d ≜ young men like young women if d is interpreted as rd1. If d is interpreted as rd2, which is a more likely interpretation, then the procedure is unchanged except that τi in (3.5) should be replaced by

    τi = μMOST[Proportion = σi],

where

    σi ≜ ΣCount(YW/WLi).
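The nested use of MOST in rd2 -- an inner relative sigma-count per man, an outer one over young men -- can be illustrated as follows. The grades and the MOST function are invented for the example; the code is a sketch of the procedure, not the system itself:

    def rel_sigma_count(mu_b, mu_a):
        return sum(min(b, a) for b, a in zip(mu_b, mu_a)) / sum(mu_a)

    def mu_most(p):
        return min(1.0, max(0.0, (p - 0.5) / 0.3))

    young_w = [0.9, 0.3, 1.0]            # alpha_j: women's youth grades
    likes   = {"m1": [0.8, 0.9, 0.2],    # beta_ij: whom each man likes
               "m2": [0.7, 0.1, 0.9]}
    young_m = {"m1": 0.95, "m2": 0.4}    # delta_i: men's youth grades

    # Inner quantifier: for each man, the proportion of young women
    # among the women he likes, scored against MOST (the rd2 reading).
    tau = {m: mu_most(rel_sigma_count(young_w, wl))
           for m, wl in likes.items()}
    # Outer quantifier: proportion of men satisfying P2 among young men.
    rho = sum(min(tau[m], young_m[m]) for m in tau) / sum(young_m.values())
    print(round(mu_most(rho), 3))        # overall test score for rd2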
4. Representation of Dispositional Commands and Concepts

The approach described in the preceding sections can be applied not only to the representation of the meaning of dispositions and dispositional predicates, but, more generally, to various types of semantic entities as well as dispositional concepts. As an illustration of its application to the representation of the meaning of dispositional commands, consider

    dc ≜ stay away from bald men,                (4.1)

whose explicit representation will be assumed to be the command

    c ≜ stay away from most bald men.            (4.2)

The meaning of c is defined by its compliance criterion (Zadeh, 1982) or, equivalently, its propositional content (Searle, 1979), which may be expressed as

    cc ≜ staying away from most bald men.

To represent the meaning of cc through the use of test-score semantics, we shall employ the explanatory database

    EDF ≜ RECORD[Name; μBald; Action] + MOST[Proportion; μ].

The relation RECORD may be interpreted as a diary -- kept during the period of interest -- in which Name is the name of a man; μBald is the degree to which he is bald; and Action describes whether the man in question was stayed away from (Action = 1) or not (Action = 0). The test procedure which defines the meaning of dc may be described as follows:

1. For each Namei, i = 1,...,n, find (a) the degree to which Namei is bald, and (b) the action taken:

    μBaldi ≜ μBald RECORD[Name = Namei]
    Actioni ≜ Action RECORD[Name = Namei].

2. Compute the relative sigma-count of compliance:

    ρ = Σi (μBaldi ∧ Actioni) / Σi μBaldi.       (4.3)

3. Test the constraint induced by MOST:

    τ = μMOST[Proportion = ρ].                   (4.4)

The computed test score expressed by (4.4) represents the degree of compliance with c, while the procedure which leads to τ represents the meaning of dc.

The concept of dispositionality applies not only to semantic entities such as propositions, predicates, commands, etc., but, more generally, to concepts and their definitions. As an illustration, we shall consider the concept of typicality -- a concept which plays a basic role in human reasoning, especially in default reasoning (Reiter, 1983), concept formation (Smith and Medin, 1981), and pattern recognition (Zadeh, 1977).

Let U be a universe of discourse and let A be a fuzzy set in U (e.g., U ≜ cars and A ≜ station wagons). The definition of a typical element of A may be expressed in verbal terms as follows:

    t is a typical element of A if and only if   (4.5)
    (a) t has a high grade of membership in A, and
    (b) most elements of A are similar to t.

It should be remarked that this definition should be viewed as a dispositional definition, that is, as a definition which may fail, in some cases, to reflect our intuitive perception of the meaning of typicality.

To put the verbal definition expressed by (4.5) into a more precise form, we can employ test-score semantics to represent the meaning of (a) and (b). Specifically, let S be a similarity relation defined on U which associates with each element u in U the degree to which u is similar to t.3 Furthermore, let S(t) be the similarity class of t, i.e., the fuzzy set of elements of U which are similar to t. What this means is that the grade of membership of u in S(t) is equal to μS(t, u), the degree to which u is similar to t (Zadeh, 1971). Let HIGH denote the fuzzy subset of the unit interval which is the extension of the fuzzy predicate high. Then, the verbal definition (4.5) may be expressed more precisely in the form:

    t is a typical element of A if and only if   (4.6)
    (a) μA(t) is HIGH
    (b) ΣCount(S(t)/A) is MOST.

The fuzzy predicate high may be characterized by its membership function μHIGH or, equivalently, as the fuzzy relation HIGH[Grade; μ], in which Grade is a number in the interval [0,1] and μ is the degree to which the value of Grade fits the intended meaning of high.

An important implication of this definition is that typicality is a matter of degree. Thus, it follows at once from (4.6) that the degree, τ, to which t is typical or, equivalently, the grade of membership of t in the fuzzy set of typical elements of A, is given by

    τ = μHIGH[Grade = μA(t)] ∧ μMOST[Proportion = ΣCount(S(t)/A)].   (4.7)

In terms of the membership functions of HIGH, MOST, S and A, (4.7) may be written as

    τ = μHIGH(μA(t)) ∧ μMOST( Σu (μS(t, u) ∧ μA(u)) / Σu μA(u) ),    (4.8)

where μHIGH, μMOST, μS and μA are the membership functions of HIGH, MOST, S and A, respectively, and the summation Σu extends over the elements of U.

It is of interest to observe that if

    μA(t) = 1  and  μS(t, u) = μA(u),            (4.9)

that is, the grade of membership of u in A is equal to the degree of similarity of u to t, then the degree of typicality of t is unity. This is reminiscent of definitions of prototypicality (Rosch, 1978) in which the grade of membership of an object in a category is assumed to be inversely related to its "distance" from the prototype.

In a definition of prototypicality which we gave in Zadeh (1982), a prototype is interpreted as a so-called α-summary. In relation to the definition of typicality expressed by (4.5), we may say that a prototype is an α-summary of typical elements of A. In this sense, a prototype is not, in general, an element of U whereas a typical element of A is, by definition, an element of U. As a simple illustration of this difference, assume that U is a collection of movies, and A is the fuzzy set of Western movies. A prototype of A is a summary of the summaries (i.e., plots) of Western movies, and thus is not a movie. A typical Western movie, on the other hand, is a movie and thus is an element of U.

3. For consistency with the definition of A, S must be such that if u and u' have a high degree of similarity, then their grades of membership in A should be close in magnitude.
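A small Python sketch of the typicality degree (4.7)-(4.8); U, A, S, HIGH and MOST below are invented stand-ins, chosen so that the limiting case (4.9) is visible:

    def typicality(t, U, mu_A, sim, mu_high, mu_most):
        num = sum(min(sim(t, u), mu_A(u)) for u in U)
        den = sum(mu_A(u) for u in U)
        return min(mu_high(mu_A(t)), mu_most(num / den))

    U = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]             # objects coded as numbers
    mu_A = lambda u: u                             # grade of membership in A
    sim = lambda t, u: max(0.0, 1.0 - abs(t - u))  # similarity relation S
    mu_high = lambda g: g ** 2
    mu_most = lambda p: min(1.0, max(0.0, (p - 0.5) / 0.3))

    # For t = 1.0, mu_A(t) = 1 and sim(t, u) = mu_A(u) on this U, so the
    # degree of typicality is unity, as in (4.9).
    print(round(typicality(1.0, U, mu_A, sim, mu_high, mu_most), 3))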
5. Fuzzy Syllogisms

A concept which plays an essential role in reasoning with dispositions is that of a fuzzy syllogism (Zadeh, 1983c). As a general inference schema, a fuzzy syllogism may be expressed in the form

    Q1 A's are B's                               (5.1)
    Q2 C's are D's
    ------------------------------
    Q3 E's are F's,

where Q1 and Q2 are given fuzzy quantifiers, Q3 is a fuzzy quantifier which is to be determined, and A, B, C, D, E and F are interrelated fuzzy predicates. In what follows, we shall present a brief discussion of two basic types of fuzzy syllogisms. A more detailed description of these and other fuzzy syllogisms may be found in Zadeh (1983c, 1984).

The intersection/product syllogism may be viewed as an instance of (5.1) in which

    C ≜ A and B
    E ≜ A
    F ≜ B and D,

and Q3 = Q1 ⊗ Q2, i.e., Q3 is the product of Q1 and Q2 in fuzzy arithmetic.
Thus, we have as the statement of the syllogism:

    Q1 A's are B's                               (5.2)
    Q2 (A and B)'s are C's
    ------------------------------
    (Q1 ⊗ Q2) A's are (B and C)'s.

In particular, if B is contained in A, i.e., μB ≤ μA, where μA and μB are the membership functions of A and B, respectively, then A and B = B, and (5.2) becomes

    Q1 A's are B's                               (5.3)
    Q2 B's are C's
    ------------------------------
    (Q1 ⊗ Q2) A's are (B and C)'s.

Since B and C implies C, it follows at once from (5.3) that

    Q1 A's are B's                               (5.4)
    Q2 B's are C's
    ------------------------------
    ≥ (Q1 ⊗ Q2) A's are C's,

which is the chaining syllogism expressed by (1.4). Furthermore, if the quantifiers Q1 and Q2 are monotonic, i.e., ≥ Q1 = Q1 and ≥ Q2 = Q2, then (5.4) becomes the product syllogism

    Q1 A's are B's                               (5.5)
    Q2 B's are C's
    ------------------------------
    (Q1 ⊗ Q2) A's are C's.

In the case of the consequent conjunction syllogism, we have

    C ≜ A
    E ≜ A
    F ≜ B and D.

In this case, the statement of the syllogism is:

    Q1 A's are B's                               (5.6)
    Q2 A's are C's
    ------------------------------
    Q A's are (B and C)'s,

where Q is a fuzzy number (or interval) defined by the inequalities

    0 ⊔ (Q1 ⊕ Q2 ⊖ 1) ≤ Q ≤ Q1 ⊓ Q2,             (5.7)

where ⊕, ⊖, ⊓ and ⊔ are the operations of addition, subtraction, min and max in fuzzy arithmetic.

As a simple illustration, consider the dispositions

    d1 ≜ students are young
    d2 ≜ students are single.

Upon restoration, these dispositions become the propositions

    p1 ≜ most students are young
    p2 ≜ most students are single.

Then, applying the consequent conjunction syllogism to p1 and p2, we can infer that

    Q students are single and young,

where

    2 most ⊖ 1 ≤ Q ≤ most.                       (5.8)

Thus, from the dispositions in question we can infer the disposition

    d ≜ students are single and young

on the understanding that the implicit fuzzy quantifier in d is expressed by (5.8).

6. Negation of Dispositions

In dealing with dispositions, it is natural to raise the question: What happens when a disposition is acted upon with an operator, T, where T might be the operation of negation, active-to-passive transformation, etc.? More generally, the same question may be asked when T is an operator which is defined on pairs or n-tuples of dispositions.

As an illustration, we shall focus our attention on the operation of negation. More specifically, the question which we shall consider briefly is the following: Given a disposition, d, what can be said about the negation of d, not d? For example, what can be said about not (birds can fly) or not (young men like young women)?

For simplicity, assume that, after restoration, d may be expressed in the form

    rd ≜ Q A's are B's.                          (6.1)

Then,

    not d = not (Q A's are B's).                 (6.2)

Now, using the semantic equivalence established in Zadeh (1978), we may write

    not (Q A's are B's) ↔ (not Q) A's are B's,   (6.3)

where not Q is the complement of the fuzzy quantifier Q in the sense that the membership function of not Q is given by

    μnot Q(u) = 1 - μQ(u),  0 ≤ u ≤ 1.           (6.4)

Furthermore, the following inference rule can readily be established (Zadeh, 1983a):

    Q A's are B's                                (6.5)
    ------------------------------
    ≥ (ant Q) A's are not B's,

where ant Q denotes the antonym of Q, defined by

    μant Q(u) = μQ(1 - u),  0 ≤ u ≤ 1.           (6.6)

On combining (6.3) and (6.5), we are led to the following result:

    not (Q A's are B's) = ≥ (ant (not Q)) A's are not B's,   (6.7)

which reduces to

    not (Q A's are B's) = (ant (not Q)) A's are not B's      (6.8)

if Q is monotonic (e.g., Q ≜ most). As an illustration, if d ≜ birds can fly and Q ≜ most, then (6.8) yields

    not (birds can fly) = (ant (not most)) birds cannot fly. (6.9)

It should be observed that if Q is an approximation to all, then ant (not Q) is an approximation to some. For the right-hand member of (6.9) to be a disposition, most must be an approximation to at least a half. In this case ant (not most) will be an approximation to most, and consequently the right-hand member of (6.9) may be expressed -- upon the suppression of most -- as the disposition birds cannot fly.
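The quantifier transformations of this section reduce to simple operations on membership functions over the proportion axis. A Python sketch, with an assumed piecewise-linear rendering of most:

    def not_q(mu_q):
        return lambda u: 1.0 - mu_q(u)        # complement, (6.4)

    def ant_q(mu_q):
        return lambda u: mu_q(1.0 - u)        # antonym, (6.6)

    # An assumed monotone "most", meaning roughly "at least about half":
    mu_most = lambda u: min(1.0, max(0.0, (u - 0.5) / 0.3))

    mu_result = ant_q(not_q(mu_most))         # ant(not most)
    for u in (0.1, 0.3, 0.7):
        print(u, round(mu_result(u), 2))
    # For this monotone "most", ant(not most) is again high only for
    # proportions above about a half, i.e., an approximation to most,
    # matching the closing remark above; for Q approximating "all",
    # the same construction yields an approximation to "some".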
REFERENCES AND RELATED PUBLICATIONS

Barwise, J. and Cooper, R., Generalized quantifiers and natural language, Linguistics and Philosophy 4 (1981) 159-219.
Bellman, R.E. and Zadeh, L.A., Local and fuzzy logics, in: Modern Uses of Multiple-Valued Logic, Epstein, G., (ed.). Dordrecht: Reidel, 103-165, 1977.
Brachman, R.J. and Smith, B.C., Special Issue on Knowledge Representation, SIGART 70, 1980.
Cresswell, M.J., Logic and Languages. London: Methuen, 1973.
Cushing, S., Quantifier Meanings -- A Study in the Dimensions of Semantic Competence. Amsterdam: North-Holland, 1982.
Dubois, D. and Prade, H., Fuzzy Sets and Systems: Theory and Applications. New York: Academic Press, 1980.
Goguen, J.A., The logic of inexact concepts, Synthese 19 (1969) 325-373.
Keenan, E.L., Quantifier structures in English, Foundations of Language 7 (1971) 255-336.
Mamdani, E.H. and Gaines, B.R., Fuzzy Reasoning and its Applications. London: Academic Press, 1981.
McCarthy, J., Circumscription: A non-monotonic inference rule, Artificial Intelligence 13 (1980) 27-40.
McCawley, J.D., Everything that Linguists have Always Wanted to Know about Logic. Chicago: University of Chicago Press, 1981.
McDermott, D.V. and Doyle, J., Non-monotonic logic, I, Artificial Intelligence 13 (1980) 41-72.
McDermott, D.V., Non-monotonic logic, II: non-monotonic modal theories, J. Assoc. Comp. Mach. 29 (1982) 33-57.
Miller, G.A. and Johnson-Laird, P.N., Language and Perception. Cambridge: Harvard University Press, 1970.
Peterson, P., On the logic of few, many and most, Notre Dame J. Formal Logic 20 (1979) 155-179.
Reiter, R. and Criscuolo, G., Some representational issues in default reasoning, Computers and Mathematics 9 (1983) 15-28.
Rescher, N., Plausible Reasoning. Amsterdam: Van Gorcum, 1976.
Rosch, E., Principles of categorization, in: Cognition and Categorization, Rosch, E. and Lloyd, B.B., (eds.). Hillsdale, NJ: Erlbaum, 1978.
Searle, J., Expression and Meaning. Cambridge: Cambridge University Press, 1979.
Smith, E. and Medin, D.L., Categories and Concepts. Cambridge: Harvard University Press, 1981.
Suppes, P., A Probabilistic Theory of Causality. Amsterdam: North-Holland, 1970.
Yager, R.R., Quantified propositions in a linguistic logic, in: Proceedings of the 2nd International Seminar on Fuzzy Set Theory, Klement, E.P., (ed.). Johannes Kepler University, Linz, Austria, 1980.
Zadeh, L.A., Similarity relations and fuzzy orderings, Information Sciences 3 (1971) 177-200.
Zadeh, L.A., Fuzzy sets and their application to pattern classification and clustering analysis, in: Classification and Clustering, Ryzin, J., (ed.). New York: Academic Press, 251-299, 1977.
Zadeh, L.A., PRUF -- A meaning representation language for natural languages, Inter. J. Man-Machine Studies 10 (1978) 395-400.
Zadeh, L.A., A note on prototype theory and fuzzy sets, Cognition 12 (1982) 291-297.
Zadeh, L.A., Test-score semantics for natural languages and meaning-representation via PRUF, Proc. COLING 82, Prague, 425-430, 1982. Full text in: Empirical Semantics, Rieger, B.B., (ed.). Bochum: Brockmeyer, 281-349, 1982.
Zadeh, L.A., A computational approach to fuzzy quantifiers in natural languages, Computers and Mathematics 9 (1983a) 149-184.
Zadeh, L.A., Linguistic variables, approximate reasoning and dispositions, Medical Informatics 8 (1983b) 173-186.
Zadeh, L.A., Fuzzy logic as a basis for the management of uncertainty in expert systems, Fuzzy Sets and Systems 11 (1983c) 199-227.
Zadeh, L.A., A theory of commonsense knowledge, in: Aspects of Vagueness, Skala, H.J., Termini, S. and Trillas, E., (eds.). Dordrecht: Reidel, 1984.
Using Focus to Generate Complex and Simple Sentences

Marcia A. Derr
AT&T Bell Laboratories, Murray Hill, NJ 07974 USA
and Department of Computer Science, Columbia University

Kathleen R. McKeown
Department of Computer Science, Columbia University
New York, NY 10027 USA

Abstract

One problem for the generation of natural language text is determining when to use a sequence of simple sentences and when a single complex one is more appropriate. In this paper, we show how focus of attention is one factor that influences this decision and describe its implementation in a system that generates explanations for a student advisor expert system. The implementation uses tests on functional information such as focus of attention within the Prolog definite clause grammar formalism to determine when to use complex sentences, resulting in an efficient generator that has the same benefits as a functional grammar system.

1. Introduction

Two problems in natural language generation are deciding what to say and how to say it. This paper addresses issues in the second of these tasks, that of surface generation. Given a semantic representation of what to say, a surface generator must construct an appropriate surface structure taking into consideration a wide variety of alternatives. When a generator is used to produce text and not just single sentences, one decision it must make is whether to use a sequence of simple sentences or a single complex one. We show how focus of attention can be used as the basis on which this decision can be made.

A second goal of this paper is to introduce a formalism for surface generation that uses aspects of Kay's functional grammar (Kay, 1979) within a Prolog definite clause grammar (Pereira and Warren, 1980). This formalism was used to implement a surface generator that makes choices about sentence complexity based on shifts in focus of attention. The implementation was done as part of an explanation facility for a student advisor expert system being developed at Columbia University.

2. Language Generation Model

In our model of natural language generation, we assume that the task of generating a response can be divided into two stages: determining the semantic content of the response and choosing a surface structure.1 One component makes decisions about which information to include in the response and passes this information to a surface generator. For example, an expert system explanation facility may select part of the goal tree, a particular goal and its antecedent subgoals, to explain a behavior of the system. In the advisor system, the output of this component consists of one or more logical propositions where each proposition consists of a predicate relating a group of arguments. The output includes functional information, such as focus, and some syntactic features, such as number and tense, for convenience. Other information, such as the relationships between propositions, is implicit in the organizational structure of the output.

The output of the semantic component is passed on to another component, the surface generator. The job of the generator is to use whatever syntactic and lexical information is needed to translate the logical propositions into English. The generator must be able to make choices concerning various alternatives, such as whether to use active or passive voice, or when to pronominalize. While we have found the explanation facility for the advisor system to be a valuable testbed for the surface generator, the generator is an independent module that can be transported to other domains by changing only the vocabulary.

3. Choosing Surface Structure

Given a set of propositions, one decision a surface generator must make is whether to produce a simple sentence for each proposition or whether to combine propositions to form complex sentences. As an example, consider propositions 1 and 2 below. These may be expressed as two simple sentences (sequence 1) or as one sentence containing a subordinate clause (sentence 2). The sentences in 1 and 2 also show that a generation system should be able to choose between definite and indefinite reference and decide when to pronominalize. Another decision is what syntactic structure to use, such as whether to use the active or the passive voice.

1. In order to concentrate on the task of surface generation, these two stages are totally separate in our system, but we don't dispute the value of interaction between the two (Appelt, 1983).
While we have found the explanation facility for the advisor system to be a valuable testbed for the surface generator, the generator is an independent module that can be transported to other domains by changing only the vocabulary. 3. Choosing Surface Structure Given a set of propositions, one decision a surface generator must make is whether to produce a simple sentence for each proposition or whether to combine propositions to form complex sentences. As an example, consider propositions 1 and 2 below. These may be expressed as two simple sentences (sequence l) or as one sentence containing a subordinate clause (sentence 2). The sentences in 1 and 2 also show that a generation system should be able to choose between definite and indefinite reference and decide when to pronominalize. Another decision is what syntactic structure to use, such as 1. In order to concentrate on the task of surface generation, these two stages are totally separate in our system, but we doN't dispute the value of interaction between the two (Appclt, 1983). 319 whether to use the active or the passive voice. Thus, proposition l may be expressed as any of the sentences shown in 3-5. proposition 1: 1. 2. 3. 4. 5. predicate=give protagonist~John goal '~ book beneficiary~Mary John gave Mary a book. Mary needed the book. proposition 2: predicate-need protagonist -Mary goal = book John gave Mary a book that she needed. John gave Mary a book. Mary was given a book by John. A book was given to Mary by John. Given that there are multiple ways to express the same underlying message, how does a text generator choose a surface structure that is appropriate? What are some mechanisms for guiding the various choices? Previous research has identified focus of attention as one choice mechanism. McKeown (1982) demonstrated how focus can be used to select sentence voice, and to determine whether pronominalization is called for. In this paper, we will show how focus can also be used as the basis on which to combine propositions. 3.1 Linguistic Background Grosz (1977) distinguished between two types of focus: global and immediate. Immediate focus refers to how a speaker's center of attention shifts or remains constant over two consecutive sentences, while global focus describes the effect of a speaker's center of attention throughout a sequence of discourse utterances on succeeding utterances. In this paper, when we refer to "focus of attention," we are referring to immediate focus. Phenomena reflecting immediate focus in text have been studied by several linguists. Terminology and definitions for these vary widely; some of the names that have emerged include topic/comment, given/new, and theme/rheme. These linguistic concepts describe distinctions between functional roles elements play in a sentence. In brief, they can be defined as follows: • Topic: Constituents in the sentence that represent what the speaker is talking about. Comment labels constituents that represent what s/he has to say about that topic (see Sgall, Hajicova, and Benesova, 1973; Lyons, 1968; Reinhart, 1981). • Given: Information that is assumed by the speaker to be derivable from context where context may mean either the preceding discourse or shared world knowledge. New labels information that cannot be derived (see Halliday, 1967; Prince, 1979; and Chafe, 1976). • Theme: The Prague School of linguists (see Firbas, 1966; Firbas, 1974) define the theme of a sentence as elements providing common ground for the conversants. 
Rheme refers to elements that function in conveying the information to be imparted. In sentences containing elements that are contextually dependent, the contextually dependent elements always function as theme. Thus, the Prague School version is close to the given/new distinction with the exception that a sentence always contains a theme, while it need not always contain given information 2. What is important here is that each of these concepts, at one time or another, has been associated with the selection of various syntactic structures. For example, it has been suggested that new information and rheme usually occur toward the end of a sentence (e.g., Halliday, 1967; Lyons, 1968; Sgall et al., 1973; Firbas, 1974). To place this information in its proper position in the sentence, structures other than the unmarked active sentence may be required (for example, the passive). Structures such as it-extraposition, there-insertion, 3 topicalization, and left-dislocation have been shown to function in the introduction of new information into discourse (Sidner, 1979; Prince, 1979), often with the assumption that it will be talked about for a period of time (Joshi and Weinstein, 1981). Pronominalization is another linguistic device associated with these distinctions (see Akmajian, 1973; Sidner, 1979). One major difference between linguistic concepts and immediate focus is that focusing describes an active process on the part of speaker and listener. However, the speaker's immediate focus influences the surfacing of each of the linguistic concepts in the text. It influences topic (and Halliday's theme) in that it specifies what the speaker is focusing on (i.e., talking about) now. But it also influences given information in that immediate focus is linked to something that has been mentioned in the previous utterance and thus, is already present in the reader's consciousness. Since immediate focus is intimately related to the linguistic definitions of functional information, the influence of functional information on the surface structure of the sentence can be extended to immediate focus as well. 2. Halliday also discusses theme (Halliday, 1967), but he defines theme as that which the speaker is talking about now, as opposed to given, that which the speaker was talking about. Thus, his notion of theme is closer to the concept of topic/comment articulation. Furthermore, Halliday always ascribes the term theme to the element occurring first in the sentence. 3. Some examples of these constructions are: I. It was Sam who left the door open. (it-extraposition) 2. There are 3 blocks on the table. (there-insertion) 3. Sam, I like him. (left-dislocation) 4. Sam I like. (topicalization) 320 3.2 Focus and Complex Sentences While previous research in both linguistics and computer science has identified focus as a basis for choosing sentence voice and for deciding when to pronominalize, its influence on selecting complex sentence structure over several simple sentences has for the most part gone unnoticed. If a speaker wants to focus on a single concept over a sequence of utterances, s/he may need to present information about a second concept. In such a case, a temporary digression must be made to the second concept, but the speaker will immediately continue to focus on the first. To signal that s/he is not shifting focus, the speaker can use subordinate sentence structure in describing the second concept. Suppose that, in the previous example, focus is on John in proposition 1 and book in proposition 2. 
If a third proposition follows with focus returning to John, then the surface generator can signal that the shift to book is only temporary by combining the two propositions using subordination as in sentence 2. A textual sequence illustrating this possibility is shown in 6 below. On the other hand, if the third proposition continues to focus on book, then it is more appropriate to generate the first and second propositions as two separate sentences as in sentence 1 above. It may even be possible to combine the second and third propositions using coordination as in the textual sequence shown in 7 below. 6. John gave Mary a book that she needed. He had seen it in the Columbia bookstore. 7. John gave Mary a book. Mary needed the book and had been planning on buying it herself. Argument identity can also serve with focus as a basis for combining propositions as shown in the example below. In proposition 3, the values of predicate, protagonist, and focus match the values of the corresponding arguments in proposition 4. Thus, the two propositions can be joined by deleting the protagonist and predicate of the second proposition and using conjunction to combine the two goals as in sentence 8. Note that if the focus arguments were different, the propositions could not be combined on this basis. Propositions 5 and 6, with matching values for focus can also be combined by using coordination and deleting the focused protagonist in the second proposition (sentence 9). proposition 3: predicate = buy protagonist - John goal -" book focus - John proposition 5: predicate -- read protagonist = Mary goal = book focus = Mary proposition 4: predicate = buy protagonist -- John goal -- cassette focus "~ John proposition 6: predicate - play protagonist = Mary goal = cassette focus = Mary 8. John bought a book and a cassette. 9. Mary read the book and played the cassette. 4. A Formalism for Surface Generation In this section we discuss the Prolog definite clause grammar (DCG) formalism (Pereira and Warren, 1980) and how it can be used for language generation, as well as recognition. We then review the functional grammar formalism (Kay, 1979) that has been used in other generation systems (e.g., McKeown, 1982; Appelt, 1983). Finally, we describe how aspects of a functional grammar can be encoded in a DCG to produce a generator with the best features of both formalisms. 4.1 Definite Clause Grammars The DCG formalism (Pereira and Warren, 1980) is based on a method for for expressing grammar rules as clauses of first-order predicate logic (Colmerauer, 1978; Kowalski, 1980). As discussed by Pereira and Warren, DCGs extend context-free grammars in several ways that make them suitable for describing and analyzing natural language. DCGs allow nonterminals to have arguments that can be used to hold the string being analyzed, build and pass structures that represent the parse tree, and carry and test contextual information (such as number or person). DCGs also allow extra conditions (enclosed within brackets '{' and '}') to be included in the rules, providing another mechanism for encoding tests. A simple sentence grammar is shown in Figure 1. Viewed as a set of grammar rules, a DCG functions as a declarative description of a language. Viewed as a set of logic clauses, it functions as an executable program for analyzing strings of the language. 
In particular, a DCG can be executed by Prolog, a logic programming language that implements an efficient resolution proof procedure using a depth-first search strategy with backtracking and a matching algorithm based on unification (Robinson, 1965). To analyze a sentence, the sentence is encoded as an argument to a Prolog goal. Prolog attempts to prove this goal by matching it against the set of grammar clauses. If the proof succeeds, the sentence is valid and a second argument is instantiated to the parse tree structure. A recognition goal and its resulting parse tree are shown in Figure 1. More extensive examples can be found in Pereira and Warren (1980).

sentence(s(NP,VP)) --> n_phrase(NP,Num), v_phrase(VP,Num).
n_phrase(np(Noun),Num) --> noun(Noun,Num).
noun(n(Root,Num),Num) --> [the], [Word], {is_noun(Root,Word,Num)}.
v_phrase(vp(Verb,NP),Num) --> verb(Verb,Num), n_phrase(NP,N2).
verb(v(Root,Num,Tense),Num) --> [Word], {is_verb(Root,Word,Num,Tense)}.

is_noun(student,student,singular).
is_noun(student,students,plural).
is_noun(answer,answer,singular).
is_noun(answer,answers,plural).
is_verb(give,gives,singular,pres).
is_verb(give,give,plural,pres).
is_verb(give,gave,_,past).

Recognition Goal:
sentence(T,[the,student,gave,the,answer],[]).
Result:
T = s(np(n(student,singular)),
      vp(v(give,singular,past),
         np(n(answer,singular))))

Generation Goal:
sentence(s(np(n(student,singular)),
           vp(v(give,singular,past),
              np(n(answer,singular)))),S,[]).
Result:
S = [the,student,gave,the,answer]

Figure 1. Language recognition and generation using a DCG

While Pereira and Warren concentrate on describing the DCG formalism for language recognition, they also note its use for language generation, which is similar to its use for language recognition. The main difference is in the specification of input in the goal arguments. In recognition, the input argument specifies a surface string that is analyzed and returned as a parse tree structure in another argument. In generation, the input goal argument specifies a deep structure and a resulting surface string is returned in the second argument. Though not always practical, grammar rules can be designed to work in both directions (as were the rules in Figure 1). A generation goal and the sentence it produces are shown in Figure 1.

4.2 Functional Grammars

Another formalism that has been used in previous generation systems (McKeown, 1982; Appelt, 1983) is the functional grammar formalism (Kay, 1979).4 In a functional grammar, functional information such as focus and protagonist is treated in the same manner as syntactic and grammatical information such as subject and NP. By using functional information, input to the generator is simplified as it need not completely specify all the syntactic details. Instead, tests on functional information, which select between alternative surface structures, can be encoded in the grammar to arrive at the complete syntactic structure from which the string is generated. This formalism is consistent with the assumption that is part of our generation model: that one generation component produces a semantic specification that feeds into another component for selecting the final surface structure. In the functional grammar formalism, both the underlying message and the grammar are specified as functional descriptions, lists of attribute-value pairs, that are unified5 to produce a single complete surface structure description.
The text is then derived by linearizing the complete surface structure description. As an example, consider the proposition encoded as a functional description below. When unified with a sentence grammar that contains tests on focus to determine voice and to order constituents, sentence 12 is generated. If FOCUS were <GOAL> instead, sentence 13 would result.

CAT   = S
PRED  = [LEX = give]
TENSE = PAST
PROT  = [LEX = student]
GOAL  = [LEX = answer]
BENEF = NONE
FOCUS = <PROT>

12. The student gave the answer.
13. The answer was given by the student.

Previous implementations of functional grammars have been concerned with the efficiency of the functional grammar unification algorithm. Straightforward implementations of the algorithm have proved too time-consuming (McKeown, 1982) and efforts have been made to alter the algorithm to improve efficiency (Appelt, 1983). Efficiency continues to be a problem, and a functional grammar generator that can be used practically has yet to be developed.

4.3 Combining the Formalisms

We have implemented a surface generator based on both the DCG formalism and the functional grammar formalism. The result is a generator with the best features of both grammars: simplification of input by using functional information and efficiency of execution through Prolog. Functional information, supplied as part of the generation goal's input argument, is used by the grammar rules to select an appropriate surface structure. The extra conditions and context arguments allowed by the DCG formalism provide the mechanism for testing functional information. Figure 2 shows a proposition encoded as the input argument to a DCG goal. The proposition specifies, in order, a predicate, protagonist, goal, beneficiary, and focus. In this example, the focus argument is the same as the protagonist. While the proposition also includes tense and number information, less syntactic information is specified compared to the input argument of the generation goal in Figure 1. In particular, no information regarding constituent order is specified. Also shown in Figure 2 are some DCG rules for choosing syntactic structure based on focus. The rules test for number agreement and tense as well. The sentence rule selects the focused argument as the subject noun phrase. The vp rule determines that focus is on the protagonist, selects active voice, and puts the goal into a noun phrase followed by the beneficiary in a to prepositional phrase. Thus, the order of constituents in the generated sentence is not explicitly stated in the input goal, but is determined during the generation process. The sentence that results from the given proposition is shown at the bottom of Figure 2.

4. Functional grammar has also been referred to as unification grammar (Appelt, 1983).
5. The functional grammar unification operation is similar to set union. A description of the algorithm is given in Appelt (1983). It is not to be confused with the unification process used in resolution theorem proving, though a similarity has been noted by Pereira and Warren (1983).

Generation Goal:
sentence(prop(pred(give, past),
              arg(student, singular),
              arg(answer, singular),
              arg(nil, _),
              arg(student, singular)), S, []).

Rules:
sentence(prop(Pred,Prot,Goal,Bene,Foc)) -->
    np(Foc),
    vp(Pred,Prot,Goal,Bene,Foc).

vp(pred(Verb,Tense),Prot,Goal,Bene,Prot) -->
    {getnum(Prot,Num)},
    verb(Verb,Tense,Num,active),
    np(Goal),
    pp(to,Bene).

Result:
S = [the,student,gave,the,answer]

Figure 2. DCG rules that use focus to select syntactic structure
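The helper predicates omitted from Figure 2 are easy to supply. The completion below is only a sketch under our own assumptions: the paper does not list np, pp, verb or getnum, so these clauses (including the convention that an arg(nil,_) argument surfaces as nothing) are invented for illustration.

% Figure 2's two rules restated, plus invented helper clauses.
sentence(prop(Pred,Prot,Goal,Bene,Foc)) -->
    np(Foc),
    vp(Pred,Prot,Goal,Bene,Foc).

vp(pred(Verb,Tense),Prot,Goal,Bene,Prot) -->   % focus = protagonist: active voice
    {getnum(Prot,Num)},
    verb(Verb,Tense,Num,active),
    np(Goal),
    pp(to,Bene).

np(arg(nil,_))   --> [].                       % empty argument: no words emitted
np(arg(Noun,_))  --> [the,Noun].

pp(_,arg(nil,_)) --> [].                       % no beneficiary: no to-phrase
pp(Prep,NP)      --> [Prep], np(NP).

verb(give,past,_,active) --> [gave].           % toy verb lexicon

getnum(arg(_,Num),Num).                        % number is carried on the argument

With these clauses loaded, the generation goal of Figure 2 succeeds and binds S to [the,student,gave,the,answer]; the constituent order is fixed entirely by the rules, not by the input proposition.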
5. Surface Generator Implementation

A surface generator, with mechanisms for selecting surface structure and, in particular, combining propositions, was implemented as part of an explanation facility for a student advisor expert system which is implemented in Prolog. One component of the advisor system, the planner, determines a student's schedule of courses for a particular semester. An explanation of the results of the planning process can be derived from a trace of the Prolog goals that were invoked during planning (Davis and Lenat, 1982). Each element of the trace is a proposition that corresponds to a goal. The propositions are organized hierarchically, with propositions toward the top of the hierarchy corresponding to higher level goals. Relationships between propositions are implicit in this organization. For example, satisfying a higher level goal is conditional on satisfying its subgoals. This provides a rich testbed on which to experiment with techniques for combining propositions. Because the expert system does not yet automatically generate a trace of its execution, the propositions that served as input to the surface generator were hand-encoded from the results of several system executions. In the current implementation, the grammar is limited to handling input propositions structured as a list of antecedents (subgoals) followed by a single consequence (goal). A grammar for automatically generating explanations was implemented using the formalism described in the previous section. The grammar encodes several tests for combining propositions. Based on temporary focus shift, it forms complex sentences using subordination. Based on focus and argument identities, it uses coordination and identity deletion to combine propositions. The grammar also includes tests on focus for determining active/passive sentence voice, but does not currently pronominalize on the basis of focus. The generator determines that subordination is necessary by checking whether focus shifts over a sequence of three propositions. A simplified example of a DCG rule, foc_shift, that tests for this is shown in Figure 3. The left-hand side of this rule contains three input propositions and an output proposition. Each proposition has five arguments: verb, protagonist, goal, beneficiary, and focus. If the first proposition focuses on Foc1 and mentions an unfocused argument Goal1, and if the second proposition specifies Goal1 as its focus,6 but in the third proposition the focus returns to Foc1, then the first and second propositions can be combined using subordination. The combined propositions are returned as a single proposition in the fourth argument; the third proposition is returned, unchanged, in the third argument. Both can be tested for further combination with other propositions. A sample text produced using this rule is shown in 14 below.

6. The right-hand side of the rule contains a test to check that the focus of the second proposition is different from the focus of the first.

foc_shift( prop(Verb1, Prot1, Goal1, Ben1, Foc1),
           prop(Verb2, Prot2, Goal2, Ben2, Goal1),
           prop(Verb3, Prot3, Goal3, Ben3, Foc1),
           prop(Verb1, Prot1,
                np(Goal1, prop(Verb2, Prot2, Goal2, Ben2, Goal1)),
                Ben1, Foc1) ) -->
    { Goal1 \== Foc1 }.

14. Assembly Language has a prerequisite that was taken. Assembly Language does not conflict.

Figure 3. Combining propositions using subordination

Other tests for combining propositions look for identities among the arguments of propositions.
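To make the Figure 3 rule concrete before turning to those identity-based tests, consider a hypothetical query; the proposition data is our own (modelled on the John/book example of Section 3.2, not on the advisor domain), and phrase/2 is used because foc_shift is written as a DCG nonterminal.

?- phrase(foc_shift(prop(give, john, book, mary, john),
                    prop(need, mary, book, nil,  book),
                    prop(see,  john, book, nil,  john),
                    Combined),
          []).
Combined = prop(give, john,
                np(book, prop(need, mary, book, nil, book)),
                mary, john).

The embedded np/2 term is what later surfaces as the relative clause of "John gave Mary a book that she needed"; the identity-based tests discussed next follow the same rule format.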
Simplified examples of these rules are id_del and foc_del in Figure 4. According to id_del, if the first and second propositions differ only by the arguments Goal1 and Goal2, these arguments are combined into one Goal and returned in the third proposition. The result is a single sentence containing a noun phrase conjunction, as sentence 15 illustrates. The other rule, foc_del, specifies that if two propositions have the same focus, Foc, and in the second proposition the focus specifies the protagonist, then the two propositions can form a coordinate sentence, deleting the focused protagonist of the second proposition. Instead of returning a proposition, foc_del, in its right-hand side, invokes rules for generating a compound sentence. Sample text generated by this rule is shown in 16.

id_del( prop(Verb, Prot, Goal1, Ben, Foc),
        prop(Verb, Prot, Goal2, Ben, Foc),
        prop(Verb, Prot, Goal, Ben, Foc) ) -->
    { Goal1 \== Goal2,
      append(Goal1, Goal2, Goal) }.

foc_del( prop(Verb1, Prot1, Goal1, Ben1, Foc),
         prop(Verb2, Prot2, Goal2, Ben2, Foc) ) -->
    sentence(prop(Verb1, Prot1, Goal1, Ben1, Foc)),
    [and],
    verb_phrase(Verb2, Prot2, Goal2, Ben2, Foc).

15. Analysis of Algorithms requires Data Structures and Discrete Math.
16. Introduction to Computer Programming does not have prerequisites and does not conflict.

Figure 4. Combining propositions using coordination and identity deletion

The generator uses the organization of the input to show causal connections. Recall that the input to the generator is a set of propositions divided into a list of antecedents and a single consequence that was derived by the expert system. The generator can identify the consequence for the reader by using a causal connective. An explanation for why a particular course was not scheduled is shown in 17. The antecedents are presented in the first part of the explanation; the consequence, introduced by therefore, follows.

17. Modeling and Analysis of Operating Systems requires Fundamental Algorithms, Computability and Formal Languages, and Probability. Fundamental Algorithms and Computability and Formal Languages were taken. Probability was not taken. Therefore, Modeling and Analysis of Operating Systems was not added.

6. Related Work in Generation

There are two basic classes of related work in generation. The first class of systems makes use of functional information in constructing the surface structure of the text and has relatively little to say about how and when to produce complex sentences. The second class of work has addressed the problem of producing complex sentences but does not incorporate functional information as part of this decision making process. Of the systems which make use of functional information, three (Kay, 1979; McKeown, 1982; Appelt, 1983) have already been mentioned. Kay's work provides the basis for McKeown's and Appelt's and emphasizes the development of a formalism and grammar for generation that allows for the use of functional information. Both McKeown and Appelt make direct use of Kay's formalism, with McKeown's emphasis being on the influence of focus information on syntax and Appelt's emphasis being on the development of a facility that allows interaction between the grammar and an underlying planning component. Nigel (Mann, 1983) is a fourth system that makes use of functional information and is based on systemic grammar (Hudson, 1974).
A systemic grammar contains choice points that query the environment to decide between alternatives (the environment may include functional, discourse, semantic, or contextual information). Mann's emphasis, so far, has been on the development of the system, on the development of a large linguistically justified grammar, and on the influence of underlying semantics on choices. The influence of functional information on syntactic choice as well as the generation of complex propositions are issues he has not yet addressed within the systemic grammar framework. Of previous systems that are able to combine simple clauses to produce complex sentences, Davey's (1978) is probably the most sophisticated. Davey's system is able to recognize underlying semantic and rhetorical relations between propositions to combine phrases using textual connectives, also an important basis for combining propositions. His emphasis is on the identification of contrastive relations that could be specified by connectives such as although, but, or however. While Davey uses a systemic grammar in his generator, he does not exploit the influence of functional information on generating complex sentences. Several other systems also touch on the generation of complex sentences although it is not their main focus. MUMBLE (McDonald, 1983) can produce complex sentences if directed to do so. It is capable of ignoring these directions when it is syntactically inappropriate to produce complex sentences, but it cannot decide when to combine propositions. KDS (Mann, 1981) uses heuristics that sometimes dictate that a complex sentence is appropriate, but the heuristics are not based on general linguistic principles. Ana (Kukich, 1983) can also combine propositions, although, like Davey, the decision is based on rhetorical relations rather than functional information. In sum, those systems that are capable of generating complex sentences tend to rely on rhetorical, semantic, or syntactic information to make their decisions. Those systems that make use of functional information have not investigated the general problem of choosing between complex and simple sentences.

7. Future Directions

The current implementation can be extended in a variety of ways to produce better connected text. Additional research is required to determine how and when to use other textual connectives for combining propositions. For example, the second and third sentences of 17 might be better expressed as 18.

18. Although Fundamental Algorithms and Computability and Formal Languages were taken, Probability was not taken.

The question of how to organize propositions and how to design the grammar to handle various organizations deserves further attention. In the current implementation, the grammar is limited to handling input propositions structured as a list of antecedents and a single consequence. If propositions were organized in trees rather than lists, as in more complex explanations, the use of additional connectives would be necessary. The grammar can also be extended to include tests for other kinds of surface choice such as definite/indefinite reference, pronominalization, and lexical choice. As the grammar grows larger and more complex, the task of specifying rules becomes unwieldy. Further work is needed to devise a method for automatically generating DCG rules.

8. Conclusions

We have shown how focus of attention can be used as the basis for a language generator to decide when to combine propositions.
By encoding tests on functional information within the DCG formalism, we have implemented an efficient generator that has the same benefits as a functional grammar: input is simplified and surface structure can be determined based on constituents' function within the sentence. In addition to producing natural language explanations for the student advisor application, this formalism provides a useful research tool for experimenting with techniques for automatic text generation. We plan to use it to investigate additional criteria for determining surface choice.

Acknowledgments

We would like to thank Mitch Marcus for his comments on an earlier version of this paper.

References

Akmajian, A. (1973), "The role of focus in the interpretation of anaphoric expressions," in Anderson and Kiparsky (Eds.), Festschrift for Morris Halle, Holt, Rinehart, and Winston, New York, NY, 1973.
Appelt, Douglas E. (1983), "Telegram: a grammar formalism for language planning," Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 74-78, 1983.
Chafe, W. L. (1976), "Givenness, contrastiveness, definiteness, subjects, topics, and points of view," in Li, C. N. (Ed.), Subject and Topic, Academic Press, New York, 1976.
Colmerauer, A. (1978), "Metamorphosis grammars," in Bolc, L. (Ed.), Natural Language Communication with Computers, Springer, Berlin, 1978.
Davey, Anthony (1978), Discourse Production, Edinburgh University Press, 1978.
Davis, Randall and Lenat, Douglas B. (1982), Knowledge-Based Systems in Artificial Intelligence, McGraw-Hill, New York, 1982.
Firbas, J. (1966), "On defining the theme in functional sentence analysis," Travaux Linguistiques de Prague 1, University of Alabama Press, 1966.
Firbas, J. (1974), "Some aspects of the Czechoslovak approach to problems of functional sentence perspective," Papers on Functional Sentence Perspective, Academia, Prague, 1974.
Grosz, B. J. (1977), The Representation and Use of Focus in Dialogue Understanding, Technical note 151, Stanford Research Institute, Menlo Park, CA, 1977.
Halliday, M. A. K. (1967), "Notes on transitivity and theme in English," Journal of Linguistics, 3, 1967.
Hudson, R. A. (1974), "Systemic generative grammar," Linguistics, 139, 5-42, 1974.
Joshi, A. and Weinstein, S. (1981), "Control of inference: role of some aspects of discourse structure - centering," Proceedings of the 7th International Joint Conference on Artificial Intelligence, 1981.
Kay, Martin (1979), "Functional grammar," Proceedings of the 5th Annual Meeting of the Berkeley Linguistic Society, 1979.
Kowalski, R. A. (1980), Logic for Problem Solving, North Holland, New York, NY, 1980.
Kukich, Karen (1983), "Design of a knowledge-based report generator," Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 145-150, 1983.
Lyons, J. (1968), Introduction to Theoretical Linguistics, Cambridge University Press, London, 1968.
Mann, W. A. and Moore, J. A. (1981), "Computer generation of multiparagraph English text," American Journal of Computational Linguistics, 7 (1), 17-29, 1981.
Mann, William C. (1983), "An overview of the Nigel text generation grammar," Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 79-84, 1983.
McDonald, David D. (1983), "Description directed control: its implications for natural language generation," Computers and Mathematics with Applications, 9 (1), 111-129, 1983.
McKeown, Kathleen R. (1982), Generating Natural Language Text in Response to Questions about Database Structure, Ph.D. dissertation, University of Pennsylvania, 1982.
Pereira, Fernando C. N. and Warren, David H. D. (1980), "Definite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transition networks," Artificial Intelligence, 13, 231-278, 1980.
Pereira, Fernando C. N. and Warren, David H. D. (1983), "Parsing as deduction," Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 137-144, 1983.
Prince, E. (1979), "On the given/new distinction," CLS, 15, 1979.
Reinhart, T. (1981), "Pragmatics and linguistics: an analysis of sentence topics," Philosophica, 1981.
Robinson, J. A. (1965), "A machine-oriented logic based on the resolution principle," Journal of the ACM, 12 (1), 23-41, 1965.
Sgall, P., Hajicova, E., and Benesova, E. (1973), Focus and Generative Semantics, Scriptor Verlag, Democratic Republic of Germany, 1973.
Sidner, C. L. (1979), Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse, Ph.D. dissertation, MIT, Cambridge, MA, 1979.
1984
65
A RATIONAL RECONSTRUCTION OF THE PROTEUS SENTENCE PLANNER

Graeme Ritchie
Department of Artificial Intelligence
University of Edinburgh, Hope Park Square
Edinburgh EH8 9NW

ABSTRACT

A revised and more structured version of Davey's discourse generation program has been implemented, which constructs the underlying forms for sentences and clauses by using rules which annotate and segment the initial sequence of events in various ways.

1. The Proteus Program

The text generation program designed and implemented by Davey (1974, 1978) achieved a high level of fluency in the generation of small paragraphs of English describing events in a limited domain (games of "tic-tac-toe"/"noughts-and-crosses"). Although that work was completed ten years ago, the performance is still impressive by current standards. The program could play a game of noughts-and-crosses with a user, then produce a fluent summary of what had happened during the game (whether or not the game was complete). For example:

The game began with your taking a corner, and I took the middle of an adjacent edge. If you had taken the corner opposite the one which you had just taken, you would have threatened me, but you took the one adjacent to the square which I had just taken. The game hasn't finished yet.

As well as heuristics for actually playing a game, the program contained rules for text generation, which could be regarded as having the following components (this is not a decomposition used by Davey, but an organisation imposed here in order to clarify the processing):

(a) Sentence planner
(b) Description constructor
(c) Systems network

The third (syntactic) component is a major part of the original Proteus program, and Davey included a very detailed systemic grammar (in the style of Hudson (1971)) for the area of English he was concerned with; consequently the written accounts (Davey (1974, 1978)) deal mainly with these grammatical aspects. However, much of the fluency of the discourses produced by Proteus seems to derive from the crucial computations performed by components (a) and (b), since the syntactic system is largely set up to convert deep representations into surface tokens, without too much regard for global contextual factors. Unfortunately, the written accounts give only a rough informal outline of how these components operated. A completely revised version of Proteus has been implemented in Prolog on a DEC System 10, and this paper describes the working of its sentence planner. The system outlined below is not an exact replication of Davey's program, but is a "rational reconstruction"; that is, an attempt to present a slightly cleaner, more general method, based on Davey's ideas and performing the same specific task as Proteus. Paradoxically, this cleaning up process may lead to minor losses of fluency, where particular effects were gained in Proteus by slightly ad hoc measures. (This research was supported by SERC grants GR/B/9874.6 and GR/C/8845.1.)

2. The Sentence Planner

The module which creates the overall clausal structure of each sentence works on a list of numbers representing the course of a game (complete or unfinished), where each square is represented by a number between 1 and 9. The processing carried out by the sentence planner can be seen as occurring in three logical phases:

1. move annotation
2. sentence segmentation
3. case-frame linking

Although these stages are logically distinct, they need not occur wholly in temporal sequence. However, the abstract model is clearer if viewed in separate stages.

2.1 Move Annotation

The system has a set of heuristic rules which enable it to play noughts-and-crosses to a reasonable standard. (A non-optimal set of rules helps to introduce some variety into the play.) It uses these move-generating rules to work through the history of the game, computing at each position which move it would have made for that situation and which move-generating rule gives rise to the move actually made at that point. This allows it to mark the actual move in the given history with certain tactical details, using the implicit assumption that whoever made the moves had the same knowledge of the game as the system itself does. The five move-generators are totally ordered to reflect a "priority" or "significance" with respect to the game, and each move-generator is labelled with one of three categories - "defensive" (e.g. blocking the third square in an opponent's near-complete line), "offensive" (e.g. creating a near-complete line, which thus threatens the opponent) or "neutral" (e.g. taking a square to start the game). In addition to basic organisational entries (square taken, name of player, pointer to preceding move, pointer to following move), the annotation of the moves contains the following information:

(a) generating heuristic(s) - there is a list, in priority order, of the heuristics which could have given rise to that move.
(b) tactically equivalent alternatives - for each heuristic listed in (a), there is a list of the other squares which could also have resulted from that heuristic.
(c) lines involved - for each square mentioned in the various entries, there is a note of which lines (if any) were (or would have been) tactically involved in that move.
(d) better move - if there is a higher priority heuristic that would give rise to a different choice of square, an annotated description of that "better" move is attached.

For example, the game described by the discourse in Section 1 above would initially be just a sequence of square-numbers, together with the name of the first player:

user 1 2 3

After annotation, the third move (square 3) would have the following information attached:

square : 3
heuristics/alternatives : take [9 8 7 6 5 4]
better move :
  square : 9 (1 5 9)
  heuristics/alternatives : threaten [7 (1 4 7), 5 (1 5 9), 4 (1 4 7)]

2.2 Sentence Segmentation

The sentence segmentation process involves grouping the annotated moves into clusters so that each cluster contains an appropriate amount of information for one sentence. This uses the following guidelines, in the following order, to determine the number of moves within a sentence (a schematic rendering in Prolog is given after the list):

1. If there is just one move left in the sequence, that must be a single sentence.
2. If there are just two moves left, they form a single sentence.
3. If a move is a "mistake" (i.e. there is a tactically better alternative) then start a new sentence to describe it. This is quite a dominant principle, in that the system will perform "look-ahead" of two (actual) moves in the annotated chain to check if there is a mistake looming up.
4. If a move is a combined attack and defence, give it a sentence to itself.
5. If this move is an attack, and the next move successfully thwarts that attack, then put these two moves into a sentence on their own.
6. Put the next three moves in a sentence. (No more than three moves may occur in a single sentence structure.)
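The revised system is written in Prolog, but its code is not given in the paper; the fragment below is only our own schematic encoding of the six guidelines, with an invented move(Square, Player, Tags) representation and with guideline 3's two-move look-ahead omitted for brevity.

% segment(+Moves, -Sentences): group annotated moves into
% sentence-sized clusters, trying the guidelines in priority order.
segment([], []).
segment([M], [[M]]) :- !.                                 % guideline 1
segment([M1, M2], [[M1, M2]]) :- !.                       % guideline 2
segment([M | Moves], [[M] | Rest]) :-                     % guideline 3
    mistake(M), !,
    segment(Moves, Rest).
segment([M | Moves], [[M] | Rest]) :-                     % guideline 4
    attack_and_defence(M), !,
    segment(Moves, Rest).
segment([M1, M2 | Moves], [[M1, M2] | Rest]) :-           % guideline 5
    attack(M1), thwarts(M2, M1), !,
    segment(Moves, Rest).
segment([M1, M2, M3 | Moves], [[M1, M2, M3] | Rest]) :-   % guideline 6
    segment(Moves, Rest).

% These tests would inspect the annotations of Section 2.1; the tag
% names (better(_), offensive, defensive) are placeholders of ours.
mistake(move(_, _, Tags))            :- member(better(_), Tags).
attack(move(_, _, Tags))             :- member(offensive, Tags).
attack_and_defence(move(_, _, Tags)) :- member(offensive, Tags),
                                        member(defensive, Tags).
thwarts(move(_, _, Tags), _Attack)   :- member(defensive, Tags).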
As well as segmenting the moves, this module attaches to each move a tag indicating its overall tactical relationship to the preceding moves. This is a gross summary of some of the tactical information provided by the annotator, and encodes much of the information needed by the next stage (case-frame linking). There are four tag-values used - "consequence" (the move is a result of the preceding one), "thwart" (the move prevents an attack by the preceding one), "mistake" (the move is a failure to make the best possible move), and "null" (an all-purpose default).

2.3 Case-frame Linking

Once the moves have been annotated, grouped and tagged, their descriptions can be constructed and linked together, to form the internal structure of the sentence. In this process, various case-frame structures are computed from the information attached to each move, and are placed in order, linked by various relationships. There may be, within a sentence, several descriptions associated with a single move, since it is possible for more than one aspect of a move to be mentioned. In each case-frame structure, the other roles will contain suitable fillers - e.g. the square taken (for a "take" description), or the other player (for a "threat") - which are computable from the annotations. Each such case-frame description will eventually give rise to a full tensed clause. In addition, some of these case-frames will have, embedded within them on the "method" case-role, further simple case-frames which will eventually give rise to adjuncts to the tensed clause in the form of verb phrases (e.g. "...by taking a corner.."). Hence the linking process involves selecting those descriptive structures (from the annotations) which are to be expressed linguistically, formulating these as filled case-frames, and labelling the relationships between these descriptions. Relationships between case-frame descriptions are indicated by attaching to each case-frame a "link" symbol indicating its relation to the surrounding discourse (either within that sentence, or across the preceding sentence boundary). This process is non-deterministic in the sense that there are usually several equally good ways of expressing a given move or sequence of moves within a sentence. The program contains rules for all such possibilities, and works through all the possible combinations using a simple depth-first search. The case-frame construction also determines the clausal structure of the sentence, in that the nesting or conjoining of clauses is fixed at this stage. The clausal structure does not allow recursive levels - there are, for example, no verbs with sentential complements. The case-frame construction and tagging depends on the links inserted by the sentence-segmenter, together with three items of information from the annotations on the moves - whether the move has two aspects, defensive and offensive; any "better" move that has been attached; and whether the tactic-name uniquely defines, within the context, which square must have been taken. The case-frame construction and linking proceeds according to certain guidelines:

1. if the move is a "mistake", indicate that by describing both the better move and the actual move.
2. if a move has two possible descriptions, one "offensive" and the other "defensive", describe both aspects.
3. if a move has two possible descriptions which have the same classification within the set {neutral, offensive, defensive}, then choose the most significant (as determined by the priority ordering of tactics).
4. if two consecutive (actual) moves are such that the second one prevents an attack made by the first, then select the tactics corresponding to these aspects to describe them.
5. if there are no "offensive" or "defensive" aspects listed, use the simple "take" form.

The following rule is also applied to all moves described: if the square taken is not uniquely determined by the tactic-name, and the tactic-name is not "take", then create a "take" case-frame describing the move, and either make it into a separate conjoined clause (if the move has a sentence to itself) or attach it to the main case-frame as the "method". Since the aim of the current project is to use this discourse domain as a "back-end" for experimenting with functional unification grammar (Kay (1979)), the sentence planner has to produce "functional descriptions" to indicate the underlying grammatical form for each sentence. The linked case-frames are therefore reformulated into functional descriptions, with the links attached to the front of each clause determining two aspects of the syntactic structure - the lexical item (if any) to be used as "binder" or "connective" at the front of the clause (again, a non-deterministic choice), and the grammatical features (e.g. modality, aspect) to be added to the clause in addition to those default settings programmed into the system. The ten possible "links", with their possible surface realisations, are:

hypothetical        -
altho               although
condante            if
condconse           -
sequence            -
external-contrast   however
internal-contrast   but
conjunction         and
internal-result     and so
external-result     consequently, as a result

In addition, the first four of the above links cause the clause to have perfect aspect, "hypothetical" and "altho" cause the presence of the modality "can", and "condconse" results in the modality "will". (Notice that "could" is regarded as the past tense of "can", and "would" as the past tense of "will".)

3. Possible Generalisations

After establishing a suitably implementation-independent description of the processing necessary to achieve the behaviour of Proteus, the next step should be to try to extract some general notion of how to describe a sequence of events. The domain used here (tic-tac-toe) has the unusually convenient feature that there is a basic canonical form for representing (in a relatively neutral, primitive form) what the sequence of events was. That is, the original list of moves is a non-grammatical representation of the world events to be described. It is not realistic to make such an assumption in general, so a more abstract model may have to take up the planning process at a slightly later stage, when moves already have some form of "descriptions".

REFERENCES

Davey, Anthony (1974) The Formalisation of Discourse Production. Ph.D. Thesis, University of Edinburgh.
Davey, Anthony (1978) Discourse Production. Edinburgh: Edinburgh University Press.
Hudson, Richard (1971) English Complex Sentences. Amsterdam: North Holland.
Kay, Martin (1979) Functional Grammar. Pp. 142-158 in Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society. Berkeley, CA: University of California.
1984
66
SOFTWARE TOOLS FOR THE ENVIRONMENT OF A COMPUTER AIDED TRANSLATION SYSTEM 1

Daniel BACHUT                        Nelson VERASTEGUI
IFCI                                 GETA
INPG, 46, av. Félix-Viallet          Université de Grenoble
38031 Grenoble Cédex                 38402 Saint-Martin-d'Hères
FRANCE                               FRANCE

1 Work supported by ADI contract number 83/175 and by DRET contract number 81/164.

ABSTRACT

In this paper we will present three systems, ATLAS, THAM and VISULEX, which have been designed and implemented at GETA (Study Group for Machine Translation) in collaboration with IFCI (Institut de Formation et de Conseil en Informatique) as tools operating around the ARIANE-78 system. We will describe in turn the basic characteristics of each system, their possibilities, actual use, and performance.

I - INTRODUCTION

ARIANE-78 is a computer system designed to offer an adequate environment for constructing machine translation programs, for running them, and for (humanly) revising the rough translations produced by the computer. It has been used for a number of applications (Russian and Japanese, English to French and Malay, Portuguese to English) and has constantly been amended to meet the needs of the users [Ch. BOITET et al., 1982]. In this paper, we will present three software tools for this environment which have been requested by the system's users.

II - ATLAS

ATLAS is an aid to the linguist for introducing new words and their associated codes into a coded dictionary of a Computer Aided Translation (CAT) application. Previously, linguists used indexing manuals when adding new words to dictionaries. These manuals contained indexing charts, sorts of graphs enabling the search for the linguistic code associated with a given lexical unit in a particular linguistic application. The choice of one path in a chart is the result of successive choices made at each node. This may be represented by associating questions to each node and the possible answers to the arcs coming from a node; the leaves of the tree bear the name of the code and an example. A language to write the "indexing charts" is provided to the linguist. An ATLAS session begins with an optional compilation phase. Then, the system functions in a conversational way in order to execute commands. The main functions of ATLAS are the following:

- Editing and updating of indexing charts: compilation of an external form of the chart, and modification of the internal form through interaction with the user, with the possibility of returning a new external form.
- Interpretation of these charts, in order to obtain the linguistic codes and the indexing of dictionaries. A chart is interpreted like a menu, so that the user can traverse the charts answering the questions. He can also view the code found, or any other code, by request, and examine and update the dictionary by writing the code in the correct field of the current record.
- Visualisation of charts in a tree-like form in order to build the indexing manuals.

In the case of interpretation, the screen is handled as a whole by the system: it manages several fields such as the dictionary field, the chart field and the command field. The system is written in PASCAL, with a small routine in assembler for screen-handling. Below, we give two examples:

- The first is a piece of tree built by the system based on an indexing chart.
- The second is a screen such as the user sees it in the interpretation phase.

[Figure 1: fragment of a noun-indexing chart displayed as a tree; questions such as "is the noun invariable?" and "is the singular ambiguous?" lead to leaves carrying codes (INVN, N1RG, INVNZ, N1RR1, N1RR2, ...) with examples such as "leaf/leaves" and "mouse/mice".]
: :~biguous? ! : : : ! : : : no ! : ÷ ....... ----21NVN: mouse :no :NIRR: there are t ! + . . . . . . . . -I 2 bases Co be ! : indexed ! ! t yes : + ......... -~INVNZ : leaves! i : . ! :plural :NIRR2: is the ! i + .......... ~plural ! :ambiguous? ! : no ! ! + ......... "~ I ~,'N : mice ! I Work supported by ADI contract number 83/175 and by DRET contract number 81/164. 330 + .......................................................... ! -- INTERPRETEUR DE MENUS -- !NREG(q) : 'what is the noun type ?'; ! -- type | -- plural with S ! -- type 2 == plural with ES ! -- type 3 -- sing with Y, plural with lea ! 1 : 'type 1, ambigoous' --> NIZ(v) : 'type'; ! 2 : 'type 1, non ambiguous' --> Nl(v) : 'folder'; ! 3 : 'type 2. mb~guous' --> N2Z(v) : 'flalh'; ! 4 : 'type 2. non ambiguous' --> N2(v) : 'c:ockroach'; ! 5 : 'type 3, mablguous' --> N3Z(v) : 'fl(y)'; ! 6 : 'type 3, non ambiguous' --> N3(v) : 'propert(y)'. ! --> &env NI !WENT ==INVI ( ~'PRET ,GO ) • !WERE --INVI (~RE ,BE ) • !WHAT --INVI (WHICH ,WHAT ). ! .= ( , ). + ................................................................. Figure 2. Screen Display during Interpretation Phase. III - THAM Computers can help translators in several ways, particularly with Machine Aided Human Trans- lation (MAHT). The translator is provided with a text editing system, as well as an uncoded dictionary which may be directly accessed on the screen. But the translation is always done by the translator. THAM consists of a set of functions programmed in the macro language associated with a powerful text editor. These functions help the translator and improve his effeciency. The conventional translation of a text is generally performed in several stages, often by different people : a rough translation followed by one or several revisions : linguistic revision, "postediting", or "technical revision". Hence, the THAM system works with four types of objects : source text (S), translated text (T), revised text (R) and uncoded dictionary (D). In the actual system, each of these objects corresponds to one "file". The file S contains the original text to be translated, the file T contains the rough transla- tion resulting from a mechanical translation or a first unrevised human translation. The uncoded dictionary is composed of a sorted list of records following a fairly simple syntax. The access key is a character string followed by the record content, on one or several lines, in a free format. In general, the "content" gives one or several equivalents, but it can also contain definitions, examples, and equivalents in several languages : it is totally free (and uncontrolled). Finally, the file R is the final translation of the original text realized by the user from the three previous files. THAM is designed for display terminals. It can simultaneously display one, two, three or four files, in the order desired by the user. The screen is divided into variable horizontal windows. The user can consult the dictionary with an arbitrary character string (which may be extracted from one of the working files), update the dictionary, insert into the revision file a part of another file, make permutations or transpositions of several parts of a file, and receive suggestions for the translation of a word displayed in a win- dow. Moreover, the system can simultaneously use many source, translation, dictionary or revision files. Basic ideas for THAM come from various sources such as IBM's DTAF system (only used in-house on a limited scale) and [A. 
MELBY's TWS |982].Initial experiments have shown this tool to be quite useful. IV - VISULEX VISULEX is a handy and easy-to-use visualisa- tion tool designed to reassemble and clearly distinguish certain information contained in a linguistic application data base. VISULEX is intended to facilitate the comprehension and development of coded dictionaries which may be hindered by two factors : the dispersal of infor- mation and the obscurity of the coding. In ARIANE-78, the lexical data base may reside on much more 50 files, for a given pair of language. This data base is composed of dictionaries, "formats" and "procedures" of the analysis, trans- fer and synthesis phases (the 3 conventional phases of a CAT system). For any given source lexical unit in this data base, VISULEX searches for all the associated information. VISULEX offers two levels of detail. At the first level, the information is presented by using only the comments associated with the codes found. At the second level, a parallel listing is produced, with the codes themselves, and their symbolic definition. The first level output can be considered as the kernel of an "uncoded dictionar~ The system provides, on one or several output units, a formated output, with these different visualisation levels. This system can be considered to have several possible uses : - as a documentation tool for linguistic applications ; - as a debugging tool for linguistic applications ; - as a tool for converting the lexical base into a new form (for instance, loading it into a conventional data base). It is possible to imagine VISULEX results being used as a pedagogical introduction to a CAT application, seeing that the output form is more comprehensible than the original form. For the Russian-French application, VISULEX output gives two listings of around 150,O00 lines each. This makes it a lot easier to detect indexing errors, at all levels. This is a first step towards improved "lexical knowledge processing". Finally, we give an example of a VISULEX output. The chosen lexical unit is "CHANGE" in the English-French pedagogical prototype application. The two levels are showed (the left column corres- pond to the first level, the right column to the second) . 331 + ....................................................................... ++ ............................................................. + !VISULEX Version-I BEXFEX 11:31:54 [I/29/83 Niveau: 1 Page I!?VISULEX Version-I BEXFEX II:31:54 11/29/83 Niveau: !'CI~NGE' !!'CHANGE' ! ......... !, ........ ! --morphologie-- !! --morphologie-- ! CHANGE !? PNIFITFO: ! process verb !! PROCV:SEM-E-PROC,SEMV-E-PROC ! Is! valency: N, infinitive clause and from; 2nd valency: to and for !! NIFITOFO:VLI-E-N-U-I-U-FROM, VL2-E-TO-U-FOR [! JPCL-E-BACK-U-OVER ! ambiguous verb, possible endings : E, ES, ED, ING (ex state) !! V2Z:CAT-E-V,SUBV-E-VB,VEND-E-2 ! CHANG- !! CHANG- ! first valency : IN and for and from !! INFRFOI:VLI-E-IN-U-FROM-U-FOR ? ambiguous (or key word of an idiom) noun derived from a verb, ...!! DVNIZ:CAT-E-N,SUBN-E-CN,DRV-E-VN,NUM-E-SIN,NEND-E-I ! and which take an 's' for the plural (ex change) 1! ! CHANGE- !! CHANGE- ! --equivalents-- l! --equivalents-- ! ............... l! .............. !--si: la valence l = nomet la valence 2 - for !!--si: ZN2FO:VLI-E-N -ET- VL2-E-FOR ! 'CHANGER' !! 'CHANGER' ! NOEUD TERMINAL: RL, RE, ASP ET TENSE SONT NETTOY~S !! INT:RL:-RLO, RS:=RSO, ASP:+ASPO, TENSE:=TENEEO t la valence l = nom, la valence 2 - pour + nom !! ZN2PON:VALI:-N,VAL2:-POUKN ! 
c'est un verbe pouvant d~river en nom d'action (VN) ou en ...!! KVDNPAN:CAT:=V,POTDRV:=VN-U-VPA-U-VPAN ? adjectif passi f (VPA) ou en nom (AN) ! 'CHANG' ! FOND+ER,EMENT,EUR,ANT !--si: la valence 1 = in ! 'CHANGER' ! NOEUD TERMINAL: EL, RE, ASP ET TENSE SONT NETTOY~S ] c'est un verbe pouvant d~river en nom d'action (VN) ! la valence l = de + nom ! 'CHANG' ! FOND÷ER,EMENT,EUR,ANT t--si: la valence 1 = nomet la valence 2 = into ! 'TRANSFORMER' ! NOEUD TERMINAL: RL, RS, ASP ET TENSE SONT NETTOY~S ! la valence l = nom, la valence 2 - an + nom t? !! 'CHANG' !! VIAMENTI:FLXV-E-AIMER,DRNV-E-EMENTI !!--si: ZIN:VLI-E-IN !! 'CHANGER' !! INT:RL:=RLO, RS:=RSO, ASP:=ASPO, TENSE:-TENSEO !! KVDN:CAT:=V,POTDRV:-VN !! ZDEN:VALI:=DEN !! 'CHANG' !! VIAMENTI:FLXV-E-AIMER,DRNV-E-EMENT] !!--si: ZN21T:VLI-E-N -ET- VL2-E-INTO !! 'TRANSFORMER' !! INT:RL:=RLO, RS:'RSO, ASP:=ASPO, TENSE:=TENSEO !! ZN2ENN:VAL|:-N,VAL2:'ENN ! c'est ua verbe pouvant d~river en nom d'action (VN) on en ! adjectif passif (VPA) ou en nom (AN) ! 'TRANSFORM' ! PERFOR+ER,ATION,ATEUR=AGENT ET ADJECT !+-s[: la valence ! = from et la valence 2 = to ! 'PASSER' ! NOEUD TERMINAL: RL, RS, ASP ET TENSE SONT NETTOY~S ! la valence I - de + nom, la valence 2 + ~ + nom ! c'est un verbe pouvant d~river en nom d'action (VN) ou en ! adjectlf passif (VPA) ou en ham (AN) ! 'PASS' ! ECLAIR+ER,EUR,ANT,AGE !--si: particule = over ! 'PASSER' ! NOEUD TERMINAL: RL, RS, ASP ET TENSE SONT NETTOY~S ! e'est un verbe pouvant d~river en nom d'action (VN) ! la valence ] - de + nom, la valence 2 - ~ + nom ! 'PASS' t ECLAIR+ER,EUR,ANT,AGE !--sinon: ! 'CHANGER' ? NOEUD TERMINAL: EL, RE, ASP ET TENSE SONT NETTOY~S ! c'est un verbe pouvant d~river en nom d'action (VN) ou en ? adjectif passif (VPA) ou en nom (AN) ! la valence 1 = nom ! 'CHANG' ! FOND+ER,EMENT,EUR,ANT ...!! KVDNPAN:CAT:'V,POTDRV:-VN-U-VPA-U-VPAN !! 'TRANSFORM' !! VIBION2:FLXV-E-AIMER,DRNV-E-ATION2 !!--si: ZFR2TO:VLI-E-FROM -ET- VL2-E-TO !? 'PASSER' !! INT:RL:-RLO, RS:=RSG, ASP:=ASPO, TENSE:-TENSEO !! ZDEN2AN:VALI:=DEN,VAL2:=AN ...!! KVDNPAN:CAT:-V,POTDRV:=VN-U-VPA-U-VPAN !! 'PASS' !! VIAAGI:FLXV-E-AIMER,DRNV-E-AGEI !!--si: JPOV:JPCL-E-OVER !! 'PASSER' !! INT:RL:=RLO, RS:=RSO, ASP:=ASPO, TENSE:'TENSEO !! KVDN:CAT:-V,POTDRV:=VN !? ZDEN2AN:VALI:=DEN,VAL2:-AN !! 'PASS' !t VIAAGI:FLXV-E-AIMER,DRNV-E-AGEI t!--sinon: [! 'CHANCER' !! INT:RL:-RLO, RS:=RSO, ASP:=ASPO, TENSE:-TENSEO ...!! KVDNPAN:CAT:=V,POTDRV:-VN-U-VPA-U-VPAN t~ !! ZNN:VALI:-N !! 'CHANG' !! VIAMENTI:FLXV-E-AIMER,DRNV-E-EMENT] 2 ! ! ! ! ! ! ! ! ! ! ! ! ! ! ! t ! ! ! ! ! ! ! ! ! ! l ! I I ! I ! ! t ! ! ! t ! ! ! ! ! ! ! t ! ! t ! ! ! ! ÷ ..................................................... ~ ............... ++ .......................................................... ÷ Figure 3. The two levels of VISULEX output V - CONCLUSION These software tools have been designed to be easily adaptable to different dialogue languages (multilinguism). The development method used is conventional structured, modular and descending programming. Altogether the design, programming, documentation and complete testing represent around two man/years of work. The size of the total source code is around |5,000 PASCAL lines and 4,500 EXEC2/XEDIT lines, comments included. The ARIANE-78 system extended by ATLAS, THAM and VlSULEX is more comfortable and more homoge- neous for the user to work with. This is the first version, and we already have many ideas provided by the users and our own experience for improving these systems. 332 VI - REFERENCES BACHUT D. 
"ATLAS - Manuel d'Utilisation", Document GETA/ADI, 37 pp., Grenoble, March ]983. BACHUT D. and VERASTEGUI N. "VISULEX - Manuel d'exploitation sous CMS", Document GETA/ADI, 29 pp., Grenoble, January 1984. BOITET Ch., GUILLAUME P. and QUEZEL-AMBRUNAZ M. "Implementation and conversational environment of ARIANE-78.4, an integrated system for translation and human revision", Proceedings COLING-82, pp. 19-27, Prague, July 1982. MELBY A.K. "Multi-level translation aids in a distributed system", Proceedings COLING-82, p. 2]5-220, Prague, July 1982. VERASTEGUI N. "THAM - Manuel d'Utilisation", Document GETA/ADI, 35 pp., Grenoble, May ]983. 333
1984
67
DESIGN OF A MACHINE TRANSLATION SYSTEM FOR A SUBLANGUAGE

Beat Buchmann, Susan Warwick, Patrick Shann
Dalle Molle Institute for Semantic and Cognitive Studies
University of Geneva
Switzerland

ABSTRACT

This paper describes the design of a prototype machine translation system for a sublanguage of job advertisements. The design is based on the hypothesis that specialized linguistic subsystems may require special computational treatment and that therefore a relatively shallow analysis of the text may be sufficient for automatic translation of the sublanguage. This hypothesis and the desire to minimize computation in the transfer phase has led to the adoption of a flat tree representation of the linguistic data.

1. INTRODUCTION

The most promising results in computational linguistics and specifically in Machine Translation (MT) have been obtained where applications were limited to languages for special purposes and to restricted text types (Kittredge, Lehrberger, 1982). In light of these prospects, the prototype MT system described below1 should be seen as an experiment in the computational treatment of a particular sublanguage. The project is meant to serve both as a didactic tool and as a vehicle for research in MT. The development of a large-scale operational system is not envisaged at present. The following research objectives have been defined for this project:

- to establish linguistic specifications of the sublanguage as a basis for automatic processing;
- to develop translation algorithms tailored to a computational treatment of the sublanguage.

The emphasis of the research lies in defining the depth of linguistic analysis necessary to adequately treat the complexity of the text type with a view to acceptable machine translation. It is the conjecture of our research group that, within the particular sublanguage defined by our corpus, acceptable translation does not necessarily depend on standard linguistic structural analysis but can be obtained with a relatively shallow analysis. Thus, as a working hypothesis, the principle of 'flat trees' has been adopted for the representation of the linguistic data. Flat trees, as opposed to deep trees, only partially reflect the dependency structure. The adoption of flat trees goes hand in hand with the further hypothesis that the sublanguage can be translated mechanically with only minimal semantic analysis, similarly to the TAUM-METEO system (Chevalier et al., 1978).

1 Project sponsored by the Swiss government.

2. THE SUBLANGUAGE

The corpus is taken from a weekly publication by the Swiss government announcing federal job openings. The wordload of this publication amounts to ca. 10,000 words per week; however, many of the advertisements are carried for several weeks. All job ads are published in the three national languages: German, French and Italian, with German usually serving as the source language (SL), French and Italian as the target language (TL). The study is hence based on a collection of texts already translated by human translators. The ads are grouped according to profession, e.g. academic, technical, administrative, etc. At present, the corpus is limited to the domain of administrative positions, an example of which is given in Figure 1.

Verwaltungsbeamtin / Fonctionnaire d'administration / Funzionaria amministrativa

Führen des Sekretariates eines Sektionschefs. Ausfertigen von Korrespondenzen und Berichten nach Diktat und Vorlage in deutscher, französischer und englischer Sprache. Abgeschlossene kaufmännische Lehre oder Handelsschulbildung. Berufserfahrung erwünscht. Sprachen: Deutsch, Französisch, Englisch in Wort und Schrift. Italienisch und/oder Spanisch erwünscht.

Diriger le secrétariat d'un chef de section. Dactylographier de la correspondance allemande, française et anglaise et des rapports sous dictée ou d'après manuscrits. Certificat d'employée de commerce ou diplôme d'une école de commerce. Expérience professionnelle désirée. Langues: le français, l'allemand et l'anglais parlés et écrits. Connaissances de l'italien ou de l'espagnol, voire des deux, souhaitées.

Dirigere il segretariato di un capo sezione. Stesura di corrispondenza e rapporti secondo dettato o manoscritto. Tirocinio commerciale o formazione commerciale. Pratica pluriennale. Lingue: tedesco, francese, inglese (orale e scritto). Buone nozioni dell'italiano e/o dello spagnolo auspicate.

Figure 1. Advertisement for an administrative position ("Die Stelle", 1981).

The corpus exhibits many of the textual features generally used to characterize a sublanguage, i.e. (i) limited subject matter, (ii) lexical and syntactic restrictions, and (iii) high frequency of certain constructions. As can be seen from the example, the style of the sublanguage is distinguished by complex nominal dependencies with various levels of coordination. In addition, most sentences are incomplete in that they consist of a series of nominal phrases and do not contain a main verb; no relative phrases nor dependent clauses occur. The importance of nominal constituents is reflected in the statistics of the German texts: over 55% of the words in the corpus are nouns, 11% adjectives, 11% prepositions, 17% conjunctions; verbs only make up 1% of the corpus. A comparison with the statistics of the French and Italian translations reveals approximately the same distribution except for infinitival verbs. The higher frequency of verbs in French and Italian is due to a preference for infinitival phrases in place of deverbal nominal constructions. Apart from this difference, the major textual characteristics carry over from source to target sublanguage, thereby facilitating mechanical translation.

3. BRIEF DESCRIPTION OF THE SYSTEM

Modern transfer-based MT systems are based on the following design principles: (i) modularity, e.g. separation of linguistic data and algorithms, (ii) multilinguality, i.e. independent analysis, transfer, and generation phases, (iii) formalized specification of the linguistic model (Hutchins, 1982). Although only a prototype, the system was designed in accordance with these considerations. As to modularity, the software used is a general purpose rule-based transducer especially developed for MT (Shann, Cochard, 1984). This software tool not only allows for the separation of data and algorithms but also provides great flexibility in the organization of grammars and subgrammars, and in the control of the computational processes applied to them. As a multilingual system it is not directly oriented towards any specific language pair; the same German analysis module serves as input for the German-French as well as the German-Italian transfer module. Separate French and Italian generation modules use only language-specific knowledge to produce the final translation.
However, the German analysis is indirectly influenced by target language considerations: the interface structure between analysis and transfer was defined to take advantage of the similarities between the three languages and to accommodate the differences.

4. LINGUISTIC APPROACH: MINIMAL BUT SUFFICIENT DEPTH

With the sublanguage investigated displaying restricted syntactic structures within a limited semantic domain, a grammar specifically tailored to these job advertisements can be defined. Moreover, the linear series of nominal phrases as well as the almost one-to-one lexical equivalences found in the SL and TL texts suggest that a shallow analysis without a semantic component is sufficient for adequate translation. The flat tree representation resulting from such a minimal depth approach does not make any claim to linguistic generalizability for purposes other than the translation of this particular sublanguage.

4.1 Computational considerations

In a transfer-based MT system, actual translation takes place in transfer and can be described as the computational manipulation of tree structures. In the absence of any formal theory of translation for MT, and given the relatively well-developed analysis techniques currently available, a major concern in MT research is to minimize the computation necessary in the transfer phase. A flat tree representation provides one way of simplifying the structures to be processed; an interface representation defined to accommodate both SL and TL structures in the same manner, thus avoiding tree structure manipulation, is yet another means. The representation of the linguistic data in this system is a direct result of these two considerations.

4.2 Flat trees

The fact that the linearity of the surface structure constituents carries over from SL to the TLs justifies the adoption of a minimal depth analysis. The analysis is restricted to the identification of the phrasal constituents and their internal structure; dependencies holding between constituents are only partially computed. Thus, the interface structure (IS) resulting from analysis and serving as input to transfer does not reflect a linguistically correct dependency structure. Instead, the IS respects the linear surface order of the constituents (with the exception of predicate groups, see below) in a flat tree representation. In a flat tree, the major phrasal constituents, in particular the prepositional phrases, are not attached at the node from which they depend linguistically but at specified nodes higher up in the tree. Schematically, the differences can be illustrated as follows:

    Standard IC-tree:  [NP N [PP P [NP N [PP ...]]]]
    Flat tree:         [NP N PP PP]

Fig. 2. Standard IC-tree vs. flat tree

The flat tree representation applies to all three major phrasal constituents defined for this corpus: (i) nominal phrases proper, (ii) deverbal nominal phrases, and (iii) verbal phrases. Samples taken from the corpus are given below to illustrate each of the three constituent structures.

(i) Nominal phrases proper have a standard noun phrase as their head, possibly followed by a linear sequence of prepositional phrases. (GN stands for both standard NPs and PPs.)

    [GN [GN kaufmaennische Ausbildung] [GN mit Erfahrung] [GN in der Verwaltung]]

(ii) Deverbal nominal phrases have a deverbal noun as their head, followed by a linear sequence of GNs.

    [GDEV [GN(deverbal) Schreiben] [GN von Texten] [GN nach Manuskript]]

(iii) Verbal phrases have a predicate as their head, followed by a linear sequence of GNs.
(GPRED encompasses predicative participles, predicative adjectives, and infinitival predicates; the few finite verbs in the corpus (0.4%) are not treated.)

    [GPRED [PRED erwuenscht] [GN Erfahrung] [GN in der Datenverarbeitung]]
    ("Erfahrung in der Datenverarbeitung erwuenscht")

4.3 Normalized tree structures

In order to further minimize manipulation of structure in transfer, the interface representation is also normalized for two important categories in the sublanguage, namely deverbal nominal phrases (GDEV) and noun and prepositional phrases (GN). The structures are defined such that they remain valid for both the source and target language.

4.3.1 Deverbal nominal phrases

A marked stylistic difference between the SL and the TLs occurring with high frequency in the corpus is the translation of a German deverbal noun into an infinitive in French and Italian. With the deverbal noun in German usually serving as the head of a complex nominal structure with several complements, the translation of the noun into an infinitive in the target language changes the type of complement structure accordingly. The complete linearization of the deverbal complements provides a format for accommodating the target language infinitival construction aimed at in translation. Structural transfer is thus reduced to renaming the nodes; the normalized tree structure remains the same, as can be seen in the SL and TL representations shown below.

    [GDEV [GN(deverbal) Ueberwachen] [GN der Bestellungen] [GN hinsichtlich Materiallieferungen]]

Fig. 3. SL (German) deverbal nominal phrase analysis.

    [GPRED [PRED Surveiller] [GN les commandes] [GN quant a la livraison du materiel]]

Fig. 4. Equivalent TL (French) verbal phrase analysis.

4.3.2 Noun phrases and prepositional phrases

Certain noun phrases in German (e.g. genitive attributes) are translated into prepositional phrases in French and Italian. In order to avoid structural transfer of noun phrases into prepositional phrases and vice-versa, a normalized form for noun phrases has been defined which reserves a position in the tree for prepositions. For standard noun phrases a special value (NIL) has been defined to fill the empty preposition slot. Therefore, in the transfer phase, a translation from a noun phrase to a prepositional phrase or vice-versa is merely a change in the value of the prepositional slot without any change in the tree structure.

    [GN PREP ART N ...]

Fig. 5. Example of the normalized form for NPs and PPs.

4.4 CONSIDERATIONS FOR TRANSLATION

The goal of the system, and perhaps of MT in general, has to be to carry over the information content from SL to TL, to produce output acceptable in terms of TL conventions, and to respect the style of the text type. It seems that treating a well-defined sublanguage enhances the possibilities for an MT system to answer these requirements. In fact, the sublanguage itself suggests possible strategies for dealing with some of the classical translation problems in MT such as (1) lexical ambiguity, (2) translation of prepositions, and (3) treatment of coordination.

4.4.1 Lexical problems

Two well-known lexical problems in computational linguistics are homograph resolution and polysemy disambiguation. Given the small number of possible syntactic structures in the sublanguage, the few homographs found in the corpus do not present any problems for analysis. In turn, the limited semantic domain of the sublanguage completely eliminates multiple word senses so that the transfer of lexical meanings is basically a one-to-one mapping.
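The normalization just described lends itself to a very small transfer step. The following sketch (ours, in Python; the original system used the GIT rule-based transducer, and every lexicon and preposition entry below is an illustrative assumption) shows how transfer over normalized flat trees reduces to renaming node labels, one-to-one lexical replacement, and overwriting the reserved preposition slot:

    # A minimal sketch, not the original GIT implementation. Node labels
    # (GDEV, GPRED, GN) follow the paper; the tree encoding and the
    # dictionary entries are our own illustrative assumptions.

    LEX = {"Ueberwachen": "surveiller", "Bestellungen": "commandes",
           "Materiallieferungen": "livraison du materiel"}
    RELABEL = {"GDEV": "GPRED", "GN-DEV": "PRED"}    # structural transfer
    PREP = {"GEN": "de", "hinsichtlich": "quant a"}  # noun-focused choices
                                                     # would consult the noun

    def transfer(node):
        # A node is (label, prep, words, children); the flat shape is
        # preserved, only labels and slot values change.
        label, prep, words, children = node
        return (RELABEL.get(label, label),
                PREP.get(prep, prep),        # NIL stays NIL for plain NPs
                [LEX.get(w, w) for w in words],
                [transfer(c) for c in children])

    # e.g. turn the analysis of Fig. 3 into the shape of Fig. 4:
    fig3 = ("GDEV", "NIL", [], [
        ("GN-DEV", "NIL", ["Ueberwachen"], []),
        ("GN", "GEN", ["Bestellungen"], []),
        ("GN", "hinsichtlich", ["Materiallieferungen"], []),
    ])
    print(transfer(fig3))

Because the flat shape is shared by SL and TL, no tree structure is manipulated in transfer; only labels and slot values change.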
Therefore, with the nouns serving as the major carriers of the textual meaning, lexical transfer ensures that the information content of the text is carried over.

4.4.2 Translation of prepositions

The fact that the types of nouns occurring in the sublanguage are restricted and repetitive and that the set of possible prepositions commanded by any given noun is small in number (max. 3 in the corpus) allows the adoption of a limited noun-focused approach for the translation of prepositions. In such an approach, it is the particular noun or noun class rather than general semantic features that determines the translation of prepositions. At present, the information relevant to correct translation of prepositions is attached to individual noun entries in the transfer dictionary; semantic noun subclassification similar to other sublanguage research (Sager, 1982) is being investigated.

4.4.3 Coordination

With SL and TLs exhibiting parallel surface syntactic structure, and with inherent ambiguities of scope therefore carrying over, analysis of coordination remains shallow. Conjunctions and intrasentential punctuation are defined functionally as coordinators to yield, in keeping with the flat tree representation, a structure such as the one shown below.

    [PH [GN Sprachen] [COORD :] [GN Deutsch] [COORD und] [GN Englisch in Wort und Schrift]]

Fig. 6. Coordinated structure at sentence level.

5. CONCLUSION

The evidence available to date seems to show that, for the particular sublanguage dealt with, correct translation is feasible under the hypotheses described in this paper. The non-generalizability of such an approach is quite evident; however, the fact that such a 'minimal depth' approach seems to work for this particular sublanguage gives substance to the impression that specialized linguistic subsystems differ quite sharply, both in complexity and linguistic features, from the standard language and may therefore require special computational treatment.

REFERENCES

Chevalier, M., et al. TAUM-METEO, Description du systeme. Universite de Montreal, 1978.
Eidgenoessisches Personalamt (ed.). Die Stelle. Stellenzeiger des Bundes. No. 21, 1981.
Grishman, R., Hirschman, L. and Friedman, C. "Natural Language Interfaces Using Limited Semantic Information." Proc. 9th International Conference on Computational Linguistics, 1982.
Hutchins, W.J. "The Evolution of Machine Translation Systems." In: Lawson, V. (ed.), Practical Experience of Machine Translation, Amsterdam, N.Y., Oxford, 1982.
Kittredge, R., Lehrberger, J. (eds.). Sublanguages: Studies of Language in Restricted Domains, Berlin, N.Y., 1982.
Sager, N. "Syntactic Formatting of Science Information." In: Kittredge, Lehrberger, 1982.
Shann, P., Cochard, J.L. "GIT: A General Transducer for Teaching Computational Linguistics." COLING Communication, 1984.
Grammar Writing System (GRADE) of Mu-Machine Translation Project and its Characteristics

Jun-ichi NAKAMURA, Jun-ichi TSUJII, Makoto NAGAO
Department of Electrical Engineering, Kyoto University, Sakyo, Kyoto, Japan

ABSTRACT

A powerful grammar writing system has been developed. This grammar writing system is called GRADE (GRAmmar DEscriber). GRADE allows a grammar writer to write grammars for analysis, transfer, and generation using the same expression. GRADE has a powerful grammar writing facility. GRADE allows a grammar writer to control the process of a machine translation. GRADE also has a function to use grammatical rules written in a word dictionary. GRADE has been used for more than a year as the software of the machine translation project from Japanese into English, which is supported by the Japanese Government and called Mu-project.

1. Objectives

When we develop a machine translation system, the intention of a grammar writer should be accurately stated in the form of grammatical rules. Otherwise, a good grammar system cannot be achieved. A programming system to write a grammar, which is composed of a grammar writing language and a software system to execute it, is necessary for the development of a machine translation system (Boitet 82). If a grammar writing language for a machine translation system is to have a powerful writing facility, it must fulfill the following needs.

A grammar writing language must be able to manipulate linguistic characteristics in Japanese and other languages. The linguistic structure of Japanese is largely different from that of English, for instance. Japanese does not restrict the word order strongly, and allows the omission of some syntactic components. When a machine translation system translates sentences between Japanese and English, a grammar writer must be able to express such characteristics.

A grammar writing language should have a framework to write grammars for the analysis, transfer, and generation phases using the same expression. It is undesirable for the grammar writer to learn several different expressions for different stages of a machine translation.

There are many word specific linguistic phenomena in a natural language. A grammar writer must be able to add word specific rules to a machine translation system one after another to deal with word specific linguistic phenomena, and improve his machine translation system over a long period. Therefore, a grammar writing language must be able to handle grammatical rules written in word dictionaries.

There is a natural sequence in a translation process. For example, a parsing of noun phrases which do not contain sentential forms is executed before a parsing of more complex noun phrases. An approximate parsing of compound sentences is executed before a parsing of complex sentences. Also, when an application sequence of grammatical rules is written explicitly, a grammar writing system can execute the rules efficiently, because the system just needs to test the applicability of a restricted number of grammatical rules. So, a grammar writing language must be able to express the several phases of a translation process explicitly.

A grammar writing language must be able to treat the syntactic and semantic ambiguities in natural languages. But it must have some mechanisms to avoid a combinatorial explosion.

Keeping these points in mind, we developed a new programming system, which is composed of the grammar writing language and its executing system.
We will call it GRADE (GRAmmar DEscriber).

2. Expression of the data for processing

The form of data used to express the structure of a sentence during the analysis, transfer, and generation processes has a strong effect on the framework of a grammar writing language. GRADE uses an annotated tree structure for expressing a sentence. Grammatical rules in GRADE are described in the form of tree-to-tree transformations with annotation to each node.

The annotated tree in GRADE is a tree structure whose nodes have lists of property names and their values. Figure 1 shows an example of the annotated tree.

    (a tree whose nodes carry property lists, e.g. E-CAT = S at the
    root, and E-CAT = NP, E-NUMBER = SINGULAR, E-SEM = HUMAN at a
    daughter node)

    E-CAT:    English Category Symbol
    E-NUMBER: English Number (SINGULAR or PLURAL)
    E-SEM:    English Semantic Marker

Figure 1. An example of the annotated tree in GRADE

The annotated tree can express a lot of information such as syntactic category, number, semantic marker, and other things. The annotated tree can also express a flag in its nodes, which is similar to a flag in a conventional programming language, to control the process of a translation. For example, in a generation grammar, a grammatical rule is applied to all nodes in the annotated tree whose processing is not finished. In such a case, a grammatical rule checks the DONE flag to see whether a node has been processed or not, and sets T on the newly processed ones.
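Viewed as a data structure, the annotated tree is simply an n-ary tree whose nodes carry property lists. A minimal sketch in Python (ours; GRADE itself is written in UTILISP, and the property values below are illustrative):

    # A minimal sketch, not GRADE's UTILISP implementation: each node
    # carries a property list and an ordered list of children.

    class Node:
        def __init__(self, **props):
            self.props = dict(props)   # e.g. {"E-CAT": "NP", ...}
            self.children = []

        def add(self, child):
            self.children.append(child)
            return child

    s = Node(**{"E-CAT": "S"})
    np = s.add(Node(**{"E-CAT": "NP", "E-NUMBER": "SINGULAR",
                       "E-SEM": "HUMAN"}))
    np.props["DONE"] = "T"   # a control flag, as used in generation grammars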
3. Rewriting Rule in GRADE

The basic component of a grammar writing language is a rewriting rule. The rewriting rule in GRADE transforms one annotated tree into another annotated tree. The rewriting rule can be used in the grammars of the analysis, transfer and generation phases of a machine translation system, because the tree-to-tree transformation performed by this rewriting rule is very powerful.

A rewriting rule in GRADE consists of a declaration part and a main part. The declaration part has the following four components.

(1) Directory Entry part, which contains the grammar writer's name, a version number of the rewriting rule, and the date of the last revision. This part is not used at the execution time of the rewriting rule. A grammar writer is able to see the information by using the help facility of the GRADE system.
(2) Property Definition part, where a grammar writer declares the property names and their values.
(3) Variable Init. part, where a grammar writer declares the names of variables.
(4) Matching Instruction part, where a grammar writer specifies the mode in which the rewriting rule is applied to an annotated tree.

The main part specifies the transformation of the rewriting rule, and has the following three parts.

(1) Matching Condition part, where the condition on the structure and the property values of an annotated tree is described.
(2) Substructure Operation part, which specifies operations on the annotated tree that has matched the condition written in the matching condition part.
(3) Creation part, which specifies the structure and the property values of the transformed annotated tree.

3.1. Matching Condition part

The matching condition part specifies the condition on the structure and the property values of the annotated tree. The matching condition part allows a grammar writer to specify not only a rigid structure of the annotated tree, but also structures which may repeat several times, structures which may be omitted, and structures in which the order of the sub-structures is not restricted.

For example, the structure in which adjectives (ADJ) repeat an arbitrary number of times and a noun (N) follows them in English is expressed as follows.

    ADJ ... ADJ N  -->
      matching_condition:
        %(ADJS N);
        ADJS: any(%(ADJ));

The structure like a combination of a verb (V) and an adverbial particle (ADVPART) in this sequence, with or without a pronoun (PRON) in between, in English is written as follows.

    V (PRON) ADVPART  -->
      matching_condition:
        %(V PRON ADVPART);
        PRON: optional;

A typical Japanese sentential structure in which three adverbial phrases (ADVP), each composed of a noun phrase (NP) and a case particle (GA, WO, or NI), precede a verb (V) in no particular order is expressed as follows.

    ADVP1 ADVP2 ADVP3 V  -->
      matching_condition:
        %(A1 A2 A3 V);
        A1 .. A3: disorder;
        A1: %((ADVP1 NP1 GA));
        A2: %((ADVP2 NP2 WO));
        A3: %((ADVP3 NP3 NI));

The matching condition part allows a grammar writer to specify conditions on property names and property values for the nodes of the annotated tree. A grammar writer can compare not only a property value of a node with a constant value, but also values between two nodes in a tree. For example, the number agreement between a subject noun and a verb is written as follows.

    matching_condition:
      %(NP VP);
      NP.NUMBER = VP.NUMBER;

3.2. Substructure Operation part

The substructure operation part specifies operations on the annotated tree which has matched the matching condition part. The substructure operation part allows a grammar writer to set a property value on a node, and to assign a tree or a property value to a variable which is declared in the variable init. part. It also allows him to call a subgrammar, a subgrammar network, a dictionary rule, a built-in function, or a LISP function. The subgrammar, the subgrammar network, the dictionary rule, and the built-in functions will be discussed in sections 4, 5, and 6. In addition, a grammar writer can write a conditional operation by using the IF-THEN-ELSE form. An operation to set 'A' as the lexical unit of the determiner node (DET.LEX), if the number of the NP node is SINGULAR, is written as follows.

    substructure_operation:
      if NP.NUMBER = 'SINGULAR';
      then DET.LEX <- 'A';
      else DET.LEX <- 'NIL';
      end_if;

3.3. Creation part

The structure and the property values of the transformed annotated tree are written in the creation part. The transformed tree is described by node names such as NP and VP, which are used in the matching condition part or the substructure operation part. A creation part to create the tree whose top node is S and which has an NP sub-tree and a VP sub-tree is written as follows.

    creation:
      %((S NP VP));

3.4. Matching Instruction part

The matching instruction part specifies the traverse path over the annotated tree. There are four types of traverse paths, which are the combinations of <left-to-right or right-to-left> and <bottom-to-top or top-to-bottom>. When a grammar writer specifies left-to-right and bottom-to-top mode, the nodes of the annotated tree are visited leftmost-daughter-first, from the bottom of the tree up to its root. [Original figure showing the numbered visit order of the nodes omitted.]

The main part of a rewriting rule in GRADE (the matching condition part, the substructure operation part, and the creation part) can be applied not only to a whole tree, but also to sub-trees. Figure 2 shows an example of the application of a main part.

    Transformation by the main part of a rewriting rule:
      [A B C D]  -->  [A [E B C D]]
    Transformation of a whole annotated tree: every sub-tree matching
    [A B C D] is transformed in the same way.

Figure 2. An example of an application of the main part
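As a rough picture of what a matching condition and a creation part compute, the following sketch (ours; GRADE's actual notation and semantics are far richer) matches a child sequence allowing repeated and optional categories and then builds a new parent node, in the spirit of the ADJS example above:

    # A rough sketch, far simpler than GRADE. Nodes are dicts
    # {"CAT": ..., "CHILDREN": [...]}.

    def match(children, pattern):
        # pattern items: ("one", CAT), ("any", CAT) or ("opt", CAT)
        i = 0
        for kind, cat in pattern:
            if kind == "any":                  # zero or more CAT
                while i < len(children) and children[i]["CAT"] == cat:
                    i += 1
            elif kind == "opt":                # zero or one CAT
                if i < len(children) and children[i]["CAT"] == cat:
                    i += 1
            else:                              # exactly one CAT
                if i == len(children) or children[i]["CAT"] != cat:
                    return False
                i += 1
        return i == len(children)              # whole sequence consumed

    def rewrite(children):
        # ADJ ... ADJ N --> a new NP node over them (the creation part)
        if match(children, [("any", "ADJ"), ("one", "N")]):
            return {"CAT": "NP", "CHILDREN": children}
        return None

    adjs_n = [{"CAT": "ADJ", "CHILDREN": []}, {"CAT": "N", "CHILDREN": []}]
    print(rewrite(adjs_n))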
4. Control of the grammatical rule applications

A grammar writing language must be able to express the detailed phases of a translation process explicitly. GRADE allows a grammar writer to divide a whole grammar into several parts. Each part of the grammar is called a subgrammar. A subgrammar may correspond to a grammatical unit such as the parsing of a simple noun phrase or the parsing of a compound sentence. A whole grammar is then described by a network of subgrammars. This network is called a subgrammar network. A subgrammar network allows a grammar writer to control the process of a translation in detail. When a subgrammar network in the analysis phase consists of a subgrammar for a noun phrase (SG1) and a subgrammar for a verb phrase (SG2) in this sequence, the executor of GRADE first applies SG1 to an input sentence, then applies SG2 to the result of the application of SG1.

4.1. Subgrammar

A subgrammar consists of a set of rewriting rules. Rewriting rules in a subgrammar have a priority ordering in their application. The n-th rewriting rule in a subgrammar is tried before the (n+1)-th rule. A grammar writer can specify four types of application sequences of rewriting rules in a subgrammar. Let us assume the situation that the set of rewriting rules in the subgrammar is composed of RR1, RR2, ..., and RRn, that RR1, ..., and RRi-1 cannot be applied to an input tree, and that RRi can be applied to it. When a grammar writer specifies the first type, which is called ORDER(1), the effect of the subgrammar execution is the application of RRi to the input tree. When a grammar writer specifies the second type, which is called ORDER(2), the executor of GRADE tries to apply RRi+1, ..., RRn to the result of the application of RRi. So, ORDER(2) means that the rewriting rules in the subgrammar are sequentially applied to an input tree. The third and fourth types, which are called ORDER(3) and ORDER(4), are the iteration types of ORDER(1) and ORDER(2) respectively. In these, the executor of GRADE tries to apply rewriting rules until no rewriting rule is applicable to the annotated tree.

    SEARCH-CANDIDATE-OF-NOUNS.sg;
      sg_mode: order(2);
      rr_in_sg:
        CANDIDATE-OF-NOUNS-1;
        UP-NP-TO-PNP;
        CANDIDATE-OF-NOUNS-2;
    end_sg.SEARCH-CANDIDATE-OF-NOUNS;

Figure 3. An example of a subgrammar

Figure 3 shows an example of a subgrammar. When this subgrammar is applied to an annotated tree, the executor of GRADE first tries to apply the rewriting rule CANDIDATE-OF-NOUNS-1 to the input tree. If the application of this rule succeeds, the input tree is transformed into the result of the application of the rewriting rule CANDIDATE-OF-NOUNS-1. Otherwise, the input tree is not modified. In either case, the executor of GRADE next tries to apply the rewriting rule UP-NP-TO-PNP to the tree. The executor continues such a process until the application of the last rewriting rule CANDIDATE-OF-NOUNS-2 is finished.

4.2. Subgrammar Network

A subgrammar network describes the application sequence of subgrammars. The specification of a subgrammar network consists of the following five parts.

(1) Directory Entry part, which is the same as the one in a rewriting rule.
(2) Property Definition part, which is the same as the one in a rewriting rule. This part is used as the default declaration in rewriting rules.
(3) Variable Init. part, which is the same as the one in a rewriting rule. The variables are used to control the transitions of the subgrammar network.
The variables are referred to and assigned in the substructure operation part of the rewriting rules. The variables are also referred to in the link specification part, which will be described later.
(4) Entry part, which specifies the start node of the network.
(5) Network part, which specifies a network of subgrammars.

The network part specifies the network structure of subgrammars, and consists of node specifications and link specifications. A node specification has a label and a subgrammar or subgrammar network name, which is called when the node gets control of the processing. A link specification specifies the transition among nodes in a subgrammar network. The link specification checks the value of a variable which is set in a rewriting rule, and decides the label of the node which will be processed next.

    PRE.sgn;
      directory_entry:
        owner(J.NAKAMURA);
        version(V02L05);
        last_update(83/12/25);
      var_init:
        @PRE-FLAG init(T);
      entry: START;
      network:
        START: PRE-STEP-1.sg;
        LOOP:  PRE-STEP-2.sg;
        A:     PRE-STEP-3.sg;
        B:     PRE-END-CHECK.sg;
               if @PRE-FLAG;
               then goto LOOP;
               else goto LAST;
        LAST:  PRE-STEP-4.sg;
               exit;
    end_sgn.PRE;

Figure 4. An example of a subgrammar network

Figure 4 shows an example of a subgrammar network. When the executor of GRADE applies this subgrammar network to an input tree, the executor checks the var_init part, then puts a new variable @PRE-FLAG on a stack, and sets T to @PRE-FLAG as an initial value. After that, the executor checks the entry part and finds the label of the start node START in the network. Then the executor searches for the node START and applies the subgrammar PRE-STEP-1 to the input tree. After the application, the executor applies the subgrammars PRE-STEP-2 (node name: LOOP) and PRE-STEP-3 (node name: A) to the annotated tree in this sequence. Next, the executor applies the subgrammar PRE-END-CHECK (node name: B) to the tree. Rewriting rules in PRE-END-CHECK examine the tree and set T or NIL to the variable @PRE-FLAG. The executor checks the link specification part, which is started by IF, and examines the value of the variable @PRE-FLAG. The node in the network which will be activated next is the node LOOP if @PRE-FLAG is not NIL; otherwise, the node LAST. Thus, while @PRE-FLAG is not NIL, the executor repeats the applications of the three subgrammars PRE-STEP-2, PRE-STEP-3, and PRE-END-CHECK to the annotated tree. When @PRE-FLAG becomes NIL, the subgrammar PRE-STEP-4 at the node LAST is applied to the tree, and the application of this subgrammar network PRE is terminated.
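The control regime of such a network is easy to state abstractly. Here is a compact sketch (ours, not GRADE's executor; all the subgrammars are hypothetical stand-ins) of flag-driven transitions like those of Figure 4:

    # A compact sketch of flag-driven network control; subgrammars are
    # modeled as functions from (tree, env) to tree that may set flags.

    def identity_sg(tree, env):          # placeholder subgrammar
        return tree

    def pre_end_check(tree, env):        # sets the flag tested by the link
        env["count"] = env.get("count", 0) + 1
        env["@PRE-FLAG"] = env["count"] < 3   # loop three times, then stop
        return tree

    def run_network(tree, network, entry, env):
        # network maps a node label to (subgrammar, decide_next), where
        # decide_next(env) returns the next label or None to exit.
        label = entry
        while label is not None:
            subgrammar, decide_next = network[label]
            tree = subgrammar(tree, env)
            label = decide_next(env)
        return tree

    network = {
        "START": (identity_sg, lambda env: "LOOP"),   # PRE-STEP-1
        "LOOP":  (identity_sg, lambda env: "A"),      # PRE-STEP-2
        "A":     (identity_sg, lambda env: "B"),      # PRE-STEP-3
        "B":     (pre_end_check,                      # PRE-END-CHECK
                  lambda env: "LOOP" if env["@PRE-FLAG"] else "LAST"),
        "LAST":  (identity_sg, lambda env: None),     # PRE-STEP-4, exit
    }
    print(run_network("tree", network, "START", {}))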
Then the result of the application of the dictionary rule Is assigned to the vartable aS. CASE-FRANE.rr: var_tntt: aS; matching_condition: Z(NPZ v Me2 PP): substructure_operation: @S <- ca11-dtc(V.LEX ANALYSIS Z(NP1V NP2 PP)): creation: ~(es): end_Pr.CASE-FRAME: Ftgure S An example of a rewriting rule which calls a dictionary rule 6. Treatment of Ambiguities A grammar wrtttng language must be able to treat the syntactic end semantic ambiguities in natural languages. GRADE allows a grammar writer to collect all the result of possible tree-to-tree transformations by a subgrammar. However, It must avoid a combinatorial explosion, when tt encounters the ambiguities. For instance, let us assume that a grammar writer writes a subgramman which contains two rewriting rules to analyze the case frame of • verb, that a rewriting rules ts the rule to construct VP (verb phrase) from V and UP (a verb and a noun phrase), and that the other ts the rule to construct VP (verb phrase) from V. NP and PP (a verb. a noun phrase, and a prepositional phrase). When he specifies NONDETERMINISTIC_PARALLELED mode to the subgremmar, the executor of GRADE 8ppltes both rewriting rules to an Input tree, constructs two transformed trees, and merges them tnto 8 new tree whose top node has 8 spectal property PARA. The top node of this structure is called a pare special node. whose sub-trees are the transformed trees by the rewriting rules. Figure 6 shows an example of thts mode and apara node. --'7 V NP PP SG PARA VP PP VP A A",, V NP V NP PP Figure 6 An example of a pars speclal node A grammar writer can select the most appropriate one from the sub-trees under a pare special node. A grammar writer ts able to use built-in functlons. MAP-SG. MAP-SGN. SORT. CUT. and INJECTION in the substructure operation part to choose the most appnoprlate one. Figure 7 shows an example to use these bullt-Jn functions. substructure_operation: eX <= ca11-dtc(V.LEX CASE-FRAME Z(N NP PP)): eX <- ca11-butlt(map-sg ~(gX) tree EVALUATE-CASE-FRAME): @X <- call-built(sort Z(@X) tree SCORE): @X <- cell-built(cut [(eX) tree 1): 9X <- call-built(Injection ~(eX) tree 1): Figure 7 An example of bullt-ln functions In this substructure operation part. the executor of GRADE appltes the dictionary rule wrttten tn a word which ts the value of V.LEX (lexlcal untt of verb) to the tree. and sets the result to the vartable eX. When the nondetermtnisttc-paralleled mode ts used tn the dictionary rule. the value of eX ts the tree whose root node tsa pare spectel node. After that, the executor calls butlt-tn functton MAP-SG to apply 342 the subgrammar EVALUATE-CASE-FRAME to each sub-tree of the value of OK. and sets the result to eX again. The subgrammar EVALUATE-CASE-FRAME computes the evaluation score end sets the score to the value of the property SCORE tn the root node of the sub-trees. Next, the executor calls butlt-tn functton SORT. CUT. and INJECTION to get the sub-tree whose score Is the highest one among the sub-trees under the pare spectal node. This tree ts then set to 9X as the most appropriate result of the dictionary ru]e. The para spectal node ts treated as the same as the other nodes tn the current Implementation of GRADE. A grammar wrtter can use the para node as he want, and can select a sub-tree under a pare node at the later grammatical rule application. 7. System configuration end the environment The system configuration of GRADE ts Shown tn Figure 8. 
Grammatical rules written tn GRADE are first translated tnto tnternal forms, which are expressed by s-expressions tn LISP. This translation ts performed by GRADE translator. The Internal forms of grammatical rules are applted to an tnput tree. which ts an output of the morphological analysts program. Thts rule application Is performed by GRADE executor. The result of rule applications |s sent to the morphological generat4on program. Dictionary Grammar f J GRADE translator 1/ \ Dictionary Grammar (Internal form) rule ~ ~ r~ tnput_~ GRADE ~output sententtal tree|executor J sententtal tree Ftgure 8 The system configuration of GRADE GRADE system ts mrttten tn UTILISP (University of Tokyo Interactive LISP) and Implemented on FACON M382 wtth the additional functton of handllng Chatnese characters. The system ts also usable on Ltsp Machtne Symbollcs 3600. The program stze of GRADE system ts about 10.000 ltnes. the form of tree-to-tree transformation rtth annotation to each node. (2) Rewriting rule has • powerful wrtttng facility. (3) Grammar can be divided Into several parts and can be 11nked together as a subgrammar network. (4) Subgrammar can be written tn the dictionary entrtes to express word spectftc linguiStiC phenomena. (5) Spectel node ts provtded tn a tree for embedding ambiguities. GRADE has been used for more than a year as the software of the nattonal machtne translation project from Japanese Into English. The effectiveness of GRADE has been demonstrated tn thts project. The linguistic parts of the project such as the morphological analysts/generation programs, the grammars for the analysts of Japanese. the transfer from Japanese Into Engltsh and the generation of Engllsh. are discussed tn other papers (Sakamoto 84) (TsuJt1 84) (Raged 84). Thts study: "Research on the machtne translation system (Japanese-English) of scientific and technological documents" Is betng performed through Spectal Coordination Funds for Promoting Science & Technology of the Science and Technology Agency of the Japanese Government. ACKNOWLEDGEMENTS Ve would 11ke to acknowlege the contribution of N. Kogt. F. Ntshtno. Y. Sakane. M. Kobayasht. S. Sate. and Y. Senda. who programmed much of the system. We mould also 11ke to thank the other member of Me-project for their useful comments. REFERENCES Bottet. Ch., et el. Implementation and Conversational Environment of ARIANE 78.4. Proc. COLING82. 1982. RageD, M., et el, Dealtng wtth Incompleteness of Linguistic Kno~ledego on Language Translation. Proc. COLING84o ;964. Sakamoto, Y.. et al, Lextcon Features for Japanese Syntactic Analysts In Mu-ProJect-JE, Proc. COLING84, 1984. TsuJtt, J., et el, Analysts Grammar or Japanese tn Hu-ProJect, Proc. COLING84, ;984. 8. Conclusion The grammar wrtttng system GRADE ts discussed 4n thts paper. GRADE has the follow4ng featureS. (I) Rewriting rule ts an expression tn 343
THE REPRESENTATION OF CONSTITUENT STRUCTURES FOR FINITE-STATE PARSING

D. Terence Langendoen, Yedidyah Langsam
Departments of English and Computer & Information Science
Brooklyn College of the City University of New York, Brooklyn, New York 11210 U.S.A.

ABSTRACT

A mixed prefix-postfix notation for representations of the constituent structures of the expressions of natural languages is proposed, which are of limited degree of center embedding if the original expressions are noncenter-embedding. The method of constructing these representations is applicable to expressions with center embedding, and results in representations which seem to reflect the ways in which people actually parse those expressions. Both the representations and their interpretations can be computed from the expressions from left to right by finite-state devices.

The class of acceptable expressions of a natural language L all manifest no more than a small, fixed, finite degree n of center embedding. From this observation, it follows that the ability of human beings to parse the expressions of L can be modeled by a finite transducer that associates with the acceptable expressions of L representations of the structural descriptions of those expressions. This paper considers some initial steps in the construction of such a model. The first step is to determine a method of representing the class of constituent structures of the expressions of L without center embedding in such a way that the members of that class themselves have no more than a small fixed finite degree of center embedding. Given a grammar that directly generates that class of constituent structures, it is not difficult to construct a deterministic finite-state transducer (parser) that assigns the appropriate members of that class to the noncenter-embedded expressions of L from left to right. The second step is to extend the method so that it is capable of representing the class of constituent structures of expressions of L with no more than degree n of center embedding in a manner which appears to accord with the way in which human beings actually parse those sentences. Given certain reasonable assumptions about the character of the rules of grammar of natural languages, we show how this step can also be taken.

* This work was partly supported by a grant from the PSC-CUNY Faculty Research Award Program.

Let G be a context-free phrase-structure grammar (CFPSG). First, suppose that the category A in G is right-recursive; i.e., that there are subderivations with respect to G such that A ==> X A, where X is a nonnull string of symbols (terminal, nonterminal, or mixed). We seek a new CFPSG G*, derived from G, that contains the category A* (corresponding to A), such that there are subderivations with respect to G* of the form A* ==> X* A*, where X* represents the constituent structure of X with respect to G. Next, suppose that the category B in G is left-recursive; i.e., that there are subderivations with respect to G such that B ==> B Y, where Y is nonnull. We seek a new CFPSG G*, derived from G, that contains the category B* (corresponding to B), such that there are subderivations with respect to G* of the form B* ==> B* Y*, where Y* represents the constituent structure of Y with respect to G.
In other words, given a grammar G, we seek a grammar G* that directly generates strings that represent the constituent structures of the noncenter-embedded expressions generated by G, that is right-recursive wherever G is right-recursive and is left-recursive wherever G is left-recursive.

In order to find such a G*, we must first determine what kinds of strings are available that can represent constituent structures and at the same time can be directly generated by noncenter-embedding grammars. Full bracketing diagrams are not suitable, since grammars that generate them are center embedding whenever the original grammars are left- or right-recursive (Langendoen 1975). Suppose, however, that we leave off right brackets in right-recursive structures and left brackets in left-recursive structures. In right-recursive structures, the positions of the left brackets that remain indicate where each constituent begins; the position where each constituent ends can be determined by a simple counting procedure provided that the number of daughters of that constituent is known (e.g., when the original grammar is in Chomsky-normal-form). Similarly, in left-recursive structures, the positions of the right brackets that remain indicate where each constituent ends, and the position where each constituent begins can also be determined simply by counting. Moreover, since brackets no longer occur in matched pairs, the brackets themselves can be omitted, leaving only the category labels. In left-recursive structures, these category symbols occur as postfixes; in right-recursive structures, they occur as prefixes. Let us call any symbol which occurs as a prefix or a postfix in a string that represents the constituent structure of an expression an affix; the strings themselves affixed strings; and the grammars that generate those strings affix grammars.

To see how affix grammars may be constructed, consider the noncenter-embedding CFPSG G1, which generates the artificial language L1 = a(b*a)*b*a.

    (G1) a. S --> S A
         b. A --> B A
         c. A --> a
         d. B --> b
         e. S --> a

A noncenter-embedding affix grammar that generates the affixed strings that represent the constituent structures of the expressions of L1 with respect to G1 is given in G1*.

    (G1*) a. S* --> S* A* S
          b. A* --> A B* A*
          c. A* --> A a
          d. B* --> B b
          e. S* --> S a

Among the expressions generated by G1 is E1; the affixed string generated by G1* that represents its structural description is E1*.

    (E1)  abbaba
    (E1*) SaABbABbAaSABbAaS

Let us say that an affix covers elements in an affixed string which correspond to its constituents (not necessarily immediate). Then E1* may be interpreted as a structural description of E1 with respect to G1 according to the rules in R, in which J, K, and L are affixes; k is a word; x and y are substrings of affixed strings; and G is a CFPSG (in this case, G1).

    (R) a. If K --> k is a rule of G, then in the configuration ... K k ...,
           K is a prefix which covers k.
        b. If J --> K L is a rule of G, then in the configuration
           ... J K x L ..., in which x does not contain L, J is a prefix
           which covers K L.
        c. If J --> K L is a rule of G, then in the configuration
           ... K x L y J ..., in which x does not contain L and y does not
           contain K, J is a postfix which covers K L.

Coverage of constituents by the rules in R may be thought of as assigned dynamically from left to right. A postfix is used in rule G1*a because the category S is left-recursive in G1, whereas a prefix is used in rule G1*b because the category A is right-recursive in G1.
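To make the convention concrete, the following sketch (ours, not part of the paper) linearizes a constituent-structure tree into an affixed string by emitting each category as a prefix or a postfix of its daughters' material; applied to the G1 parse of E1 it reproduces E1*:

    # A sketch of affixed-string generation: a node's label is emitted
    # before or after its daughters' material, as the affix grammar
    # dictates. For G1*, S is a postfix except over a lexical 'a'; A and
    # B are prefixes.

    def affix_string(tree, is_postfix):
        # tree = (category, [daughters]); words are plain strings
        cat, daughters = tree
        body = "".join(d if isinstance(d, str)
                       else affix_string(d, is_postfix) for d in daughters)
        return body + cat if is_postfix(cat, daughters) else cat + body

    postfix = lambda cat, ds: cat == "S" and ds != ["a"]

    # E1 = abbaba parsed by G1 (S --> S A | a; A --> B A | a; B --> b):
    A_ba = ("A", [("B", ["b"]), ("A", ["a"])])
    E1 = ("S", [("S", [("S", ["a"]),
                       ("A", [("B", ["b"]), A_ba])]),
                A_ba])
    print(affix_string(E1, postfix))   # SaABbABbAaSABbAaS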
The use of prefixes in rules G1*c-e, on the other hand, is unmotivated if the only criteria for choosing an affix type have to do with direction of recursion. For affix grammars of natural languages, however, one can motivate the decision to use a particular type of affix by principles other than those having to do with direction of recursion.

The use of a prefix can be interpreted as indicating a decision (or guess) on the part of the language user as to the identity of a particular constituent on the basis of the identity of the first constituent in it. Since lexical items are assigned to lexical categories essentially as soon as they are recognized (Forster 1976), we may suppose first that prefixes are used for rules such as those in G1*c-e that assign lexical items to lexical categories. Second, if, as seems reasonable, a decision about the identity of constituents is always made as soon as possible, then we may suppose that prefixes are used for all rules in which the leftmost daughter of a particular constituent provides sufficient evidence for the identification of that constituent; e.g., if the leftmost daughter is either the specifier or the head of that constituent in the sense of Jackendoff (1977). Third, we may suppose that even if the leftmost daughter of a particular constituent does not provide sufficient evidence for the identification of that constituent, a prefix may still be used if that constituent is the right sister of a constituent that provides sufficient evidence for its identification. Fourth, we may suppose that postfixes are used in all other cases.

To illustrate the use of these four principles, consider the noncenter-embedding partial grammar G2 that generates a fragment of English that we call L2.

    (G2) a. S  --> NP VP
         b. NP --> D N'
         c. NP --> G' N'
         d. G' --> NP G
         e. N' --> N
         f. VP --> V ({NP, C'})
         g. C' --> C S
         h. C  --> that
         i. D  --> the
         j. G  --> 's
         k. N  --> {boss, child, ...}
         l. V  --> {knew, saw, ...}

Among the expressions of L2 are those with both right-recursion and left-recursion, such as E2.

    (E2) the boss knew that the teacher's sister's neighbor's friend
         believed that the student saw the child

We now give an affix grammar G2* that directly generates affixed strings that represent the structural descriptions of the expressions of L2 with respect to G2, and that has been constructed in accordance with the four principles described above.
(E2*) NP D the N N boss VP V knew C C that S NP D the N N teacher G's G N N sister NP G's G N N neighbor NP G's G N N friend NP VP V believed C C that S NP D the N N student VP V saw NP D the N N child S E2* can be interpreted as the structural descrip- tion of E2 with respect to G2 by the rules in R, with the addition of a rule to handle unary non- lexical branching (as in G2e), and a modification of Rc to prevent a postfix from simply covering a sequence of affixes already covered by a prefix. (This restriction is needed to prevent the postfix S in E2* from simply covering any of the subordi- nate clauses in that expression.) It is worth noting how the application of those rules dynami- cally enlarges the NP that is covered by the S prefix that follows the words knew that. First the tea- cher is covered; then the teacher's sister; then the teacher's sister's neighbor; and finally the teacher's sister's neighbor's friend. The derivation of E2* manifests first-degree center embedding of the category S*, as a result of the treatment of S as both a prefix and a suf- fix in G2*. However, no derivation of an affixed string generated by G2* manifests any greater de- gree of center embedding; hence, the affixed strings associated with the expressions of L2 can still be assigned to them by a finite-state parser. The added complexity involved in interpreting E2* results from the fact that all but the first of the NP-VP sequences in E2* are covered by prefix Ss, so that the constituents covered by the post- fix S in E2* according to rule Rc are considerably far away from it. It will be noted that we have provided two logically independent sets of principles by which affixed grammars may be constructed from a given CFPSG. The first set is explicitly designed to preserve the property of noncenter-embedding. The second is designed to maximize the use of prefixes on the basis of being able to predict the identity of a constituent by the time its leftmost descen- dent has been identified. There is no reason to believe a priori that affixed grammars constructed according to the second set of principles should preserve noncenter-embedding, and indeed as we have just seen, they don't. However, we conjec- ture chat natural languages are designed so that representations of the structural descriptions of acceptable expressions of those languages can be assigned to them by finite-state parsers that op- erate by identifying constituents as quickly as possible. We call this the Efficient Finite- State Parser Hypothesis. The four principles for determining whether to use a prefix or a postfix to mark the presence of a particular constituent apply to grammars that are center embedding as well as to those that are not. Suppose we extend the grammar G2 by replac- ing rules G2e and f by rules G2e' and f' respec- tively, and adding rules G2m-s as follows: (G2) e'. N ---~ N (PP1) f,. ve > v (sP) ({Pe2, ~) m. NP • NP PP2 n. PP1 • PI NP o. PP2 • P2 NP p w ~ vP IA, PP21 q. A ~ yesterday r. P1 ---> of S. P2 ~ ~in, on, ...] Among the expressions generated by the extended grammar G2 are those in E3. (E3) a. the boss knew that the teacher saw the child yesterday b. the friend of the teacher's sister 26 Although each of the expressions in E3 is am- biguous with respect to G2, each has a strongly preferred interpretation. Moreover, under each interpretation, each of these sentences manifests first-degree center embedding. 
In E3, the includ- ed VP saw the child is wholly contained in the in- cluding VP knew that the teacher saw the child yesterday; and in E3b, the included NP the teacher is wholly contained in the including NP the friend of the teacher's sister. Curiously enough, the extension of the affix grammar that our principles derive from the exten- sion of the grammar G2 just given associates only one affixed string with each of the expressions in E3. That grammar is obtained by replacing rules G2*e and F with G2*e' and f' respectively, and ad- ding the rules G2*m-s as follows. (G2*) e' N* > N M* (PPI*) f'. VP* > VP V* (NP*) ([PP2*, C*}) m. NP* ~ NP* PP2* NP n. PPI* ---> PP1PI* NP ~ o. PP2* > PP2 P2* NP* p. VP* ~ VP* {A*, PP2*} VP q. A* P A yesterday r. PI* • P1 of s. F2*---~ P2 fin, on .... J The affix strings that the extended affix grammar G2* associates with the expressions in E3 are given in E3*. (E3 ~) a. NP D the N N boss VP V knew C C that S NP D the N N teacher VP V saw NP D the N N child A yesterday VP S b. NP D the N N friend PP1 P1 of NP D the N N teacher G's G N N sister NP We contend that the fact that the expressions in E3 have a single strongly preferred interpreta- tion results from the fact that those expressions have a single affixed string associated with them. Consider first E3a and its associated affixed string E3*a. According to rule Rc, the affix VP following yesterday is a postfix which covers the affixes VP and A. Now, there is only one occur- rence of A in E3*a, namely the one that immediate- ly precedes yesterday; hence that must be the oc- currence which is covered by the postfix VP. On the other hand, there are two occurrences of pre- fix VP in E3*a that can legitimately be covered by the postfix, the one before saw and the one before knew. Suppose in such circumstances, rule Rc picks out the nearer prefix. Then automatically the complex VP, saw the child yesterday, is co- vered by the subordinate S prefix, in accordance with the natural interpretation of the expression as a whole. Next, consider E3b and its associated affixed string E3*b. According to rule Rc, the G is a postfix that covers the affixes NP and G. Two oc- currences of the prefix NP are available to be covered; again, we may suppose that rule Rc picks out the nearer one. If so, then automatically the complex NP, the teacher's sister, is covered by PPI, again in accordance with the natural inter- pretation of the expression as a whole. This completes our demonstration of the abil- ity of affixed strings to represent the structural descriptions of the acceptable sentences of a na- tural language in a manner which enables them to be parsed by a finite-state device, and which also predicts the way in which (at least) certain ex- pressions with center embedding are actually in- terpreted. Much more could be said about the sys- tem of representation we propose, but time and space limitations preclude further discussion here. We leave as exercises to the reader the demonstration that the expression E4a has a single affixed string associated with it by G2*, and that the left-branching (stacked) interpretation of E4b is predicted to be preferred over the right- branching interpretation. (E4) a. the student saw the teacher in the house b. the house in the woods near the stream ACKNOWLEDGMENT We thank Maria Edelstein for her invaluable help in developing the work presented here. REFERENCES Forster, Kenneth I. (1976) Accessing the mental lexicon. In R.J. Wales and E.T. 
ACKNOWLEDGMENT

We thank Maria Edelstein for her invaluable help in developing the work presented here.

REFERENCES

Forster, Kenneth I. (1976) Accessing the mental lexicon. In R.J. Wales and E.T. Walker, eds., New Approaches to Language Mechanisms. Amsterdam: North-Holland.
Jackendoff, Ray S. (1977) X-Bar Syntax. Cambridge, Mass.: MIT Press.
Langendoen, D. Terence (1975) Finite-state parsing of phrase-structure languages and the status of readjustment rules in grammar. Linguistic Inquiry 6.533-54.
A DISCOVERY PROCEDURE FOR CERTAIN PHONOLOGICAL RULES

Mark Johnson
Linguistics, UCSD

ABSTRACT

Acquisition of phonological systems can be insightfully studied in terms of discovery procedures. This paper describes a discovery procedure, implemented in Lisp, capable of determining a set of ordered phonological rules, which may be in opaque contexts, from a set of surface forms arranged in paradigms.

1. INTRODUCTION

For generative grammarians, such as Chomsky (1965), a primary problem of linguistics is to explain how the language learner can acquire the grammar of his or her language on the basis of the limited evidence available to him or her. Chomsky introduced the idealization of instantaneous acquisition, which I adopt here, in order to model the language acquisition device as a function from primary linguistic data to possible grammars, rather than as a process.

Assuming that the set of possible human languages is small, rather than large, appears to make acquisition easier, since there are fewer possible grammars to choose from, and less data should be required to choose between them. Accordingly, generative linguists are interested in delimiting the class of possible human languages. This is done by looking for properties common to all human languages, or universals. Together, these universals form universal grammar, a set of principles that all human languages obey. Assuming that universal grammar is innate, the language learner can use it to restrict the number of possible grammars he or she must consider when learning a language. As part of universal grammar, the language learner is supposed to innately possess an evaluation metric, which is used to "decide" between two grammars when both are consistent with other principles of universal grammar and the available language data.

2. DISCOVERY PROCEDURES

This approach deals with acquisition without reference to a specific discovery procedure, and so in some sense the results of such research are general, in that in principle they apply to all discovery procedures. Still, I think that there is some utility in considering the problem of acquisition in terms of actual discovery procedures.

Firstly, we can identify the parts of a grammar that are underspecified with respect to the available data. Parts of a grammar or a rule are strongly data determined if they are fixed or uniquely determined by the data, given the requirement that the overall grammar be empirically correct. By contrast, a part of a grammar or of a rule is weakly data determined if there is a large class of grammar or rule parts that are all consistent with the available data. For example, if there are two possible analyses that equally well account for the available data, then the choice of which of these analyses should be incorporated in the final grammar is weakly data determined. Strong or weak data determination is therefore a property of the grammar formalism and the data combined, and independent of the choice of discovery procedure.

Secondly, a discovery procedure may partition a phonological system in an interesting way. For instance, in the discovery procedure described here the evaluation metric is not called upon to compare one grammar with another, but rather to make smaller, more local, comparisons. This leads to a factoring of the evaluation metric that may prove useful for its further investigation.

Thirdly, focussing on discovery procedures forces us to identify what the surface indications of the various constructions in the grammar are. Of course, this does not mean one should look for a one-to-one correspondence between individual grammar constructions and the surface data, but rather complexes of grammar constructions that interact to yield particular patterns on the surface. One is then investigating the logical implications of the existence of particular constructions in the data.

Following from the last point, I think a discovery procedure should have a deductive rather than enumerative structure. In particular, procedures that work essentially by enumerating all possible (sub)grammars and seeing which ones work are not only in general very inefficient, but also not very insightful. These discovery by enumeration procedures simply give us a list of all rule systems that are empirically adequate as a result, but they give us no idea as to what properties of these systems were crucial in their being empirically adequate. This is because the structure imposed on the problem by a simple recursive enumeration procedure is in general not related to the intrinsic structure of the rule discovery problem.

3. A PHONOLOGICAL RULE DISCOVERY PROCEDURE

Below and in Appendix A I outline a discovery procedure, which I have fully implemented in Franz Lisp on a VAX 11/750 computer, for a restricted class of phonological rules, namely rules of the type shown in (1).

    (1) a --> b / C

Rule (1) means that any segment a that appears in context C in the input to the rule appears as a b in the rule's output. Context C is a feature matrix, and to say that a appears in context C means that C is a subset of the feature matrix formed by the segments around a.(1) A phonological system consists of an ordered(2) set of such rules, where the rules are considered to apply in a cascaded fashion, that is, the output of one rule is the input to the next.

(1) What is crucial for what follows is that saying context C matches a portion of a word W is equivalent to saying that C is a subset of W. Since both rule contexts and words can be written as sets of features, I use "contexts" to refer both to rule contexts and to words.
(2) I make this assumption as a first approximation. In fact, in real phonological systems phonological rules may be unordered with respect to each other.
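As a concrete picture of the rule format, the following sketch (ours; Johnson's implementation is in Franz Lisp, and all the feature values below are toy assumptions) applies an ordered list of such rules in cascade, with a context represented as required features at relative positions:

    # A sketch, not Johnson's Franz Lisp program. A word is a list of
    # segments, each a set of feature-value pairs; a context maps a
    # relative position to the features required there.

    def matches(context, word, i):
        # C matches at i iff C is a subset of the feature matrix formed
        # by the segments around position i.
        return all(0 <= i + off < len(word) and feats <= word[i + off]
                   for off, feats in context.items())

    def apply_rule(rule, word):
        a, b, context = rule
        return [b if seg == a and matches(context, word, i) else seg
                for i, seg in enumerate(word)]

    def apply_system(rules, word):
        for rule in rules:         # cascaded: each rule feeds the next
            word = apply_rule(rule, word)
        return word

    V = frozenset({("syll", "+")})
    N = frozenset({("syll", "-"), ("nasal", "+")})
    V_nas = V | {("nasal", "+")}
    nasalize = (V, V_nas, {1: frozenset({("nasal", "+")})})  # a -> b / __ [+nasal]
    print(apply_system([nasalize], [V, N]))   # the vowel is nasalized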
Thirdly, focussing on discovery procedures forces us to identify what the surface indications of the various construc- tions in the grammar are. Of course, this does not mean one should look for a one-to-one correspondence between individual grammar constructions and the surface data; but rather com- plexes of grammar constructions that interact to yield particu- lar patterns on the surface. One is then investigating the logi- cal implications of the existence of a particular constructions in the data. Following from the last point, 1 think a discovery pro- cedure should have a deductive rather than enumerative struc- ture. In particular, procedures that work essentially by enumerating all possible (sub)grammars and seeing which ones work are not only in general very inefficient, but. also not. very insightful. These discovery by enumeration procedures simply give us a list of all rule systems that are empirically adequate as a result, but they give us no idea as to what properties of these systems were crucial in their being empirically adequate. This is because the structure imposed on the problem by a simple recursive enumeration procedure is in general not related to the intrinsic structure of the rule discovery problem. 3. A PHONOLOGICAL RULE DISCOVERY PRO- CEDURE Below and in Appendix A I outline a discovery pro- cedure: which I have fully implemented in Franz Lisp on a VAX 11/750 computer, for a restricted class of phonological rules, namely rules of the type shown in (1). (1) ~ ~ b / c Rule (1) means that any segment a that appears in con- text Cin the input to the rule appears asa bin the rule's out- put. Context C is a feature matrix, and to say that a appears in context C means that C is a subse! of the fvature malrix 344 formed by the segments around a 1. A phonological system consists of an ordered 2 set of such rules, where the rules are considered to apply in a cascaded fashion, that. is, the output of one rule is the input to the next.. The problem the discovery procedure must solve is, given some data, to determine the set of rules. As an idealization, I assume that the input to the discovery procedure is a set of surface paradigms, a two dimensional array of words with all words in the same row possessing the same stem and all words in the same column the same affix. Moreover, l assume the root and suffix morphemes are already identified, ahhough I admit this task may be non-trivial. 4. DETERMINING THE CONTEXT THAT CONDI- TIONS AN ALTERNATION Consider the simplest phonological system: one in which only one phonological rule is operative. In this system the alternating segements a and b can be determined by inspec- tion, since a and b will be the only alternating segments in the data (although there will be a systematic ambiguity as to which is a and which is b). Thus a and b are strongly data determined. Given a and b. we can write a set of equations that the rule context C that conditions this alternation must obey. Our rule rnust apply in all contexts C b where a b appears that alternates with an a, since by hypothesis b was produced by this rule. We can represent this by equation (2). (2) ~7]Cb, C matches C b The second condition that our rule must obey is that it doesn't apply in any context. C a where an a appears. If it did, of course, we would expect a b, not an a, in this position on the surface. We can write this condition by equation (3). (3) ~¢C,, C does not match 6', These two equations define the rule context C. 
Note that in general these equations do not yield a unique value for C; depending upon the data there may be no C that simultaneously satisfies (2) and (3), or there may be several different C that simultaneously satisfy (2) and (3). We cannot appeal further to the data to decide which C to use, since they all are equally consistent with the data. Let us call the set of C that simultaneously satisfy (2) and (3) Sc. Then Sc is strongly data determined; in fact, there is an efficient algorithm for computing Sc from the Ca's and Cb's that does not involve enumerating and testing all imaginable C (the algorithm is described in Appendix A). However, if Sc contains more than one C, the choice of which C from Sc to actually use as the rule's context is weakly data determined. Moreover, the choice of which C from Sc to use does not affect any other decisions that the discovery procedure has to make; that is, nothing else in the complete grammar must change if we decide to use one C instead of another. Plausibly, the evaluation metric and universal principles decide which C to use in this situation. For example, if the alternation involves nasalization of a vowel, something that usually only occurs in the context of a nasal, and one of the contexts in Sc involves the feature nasal but the other C in Sc do not, a reasonable requirement is that the discovery procedure should select the context involving the feature nasal as the appropriate context C for the rule. Another possibility is that an Sc containing more than one member indicates to the discovery procedure that it simply has too little data to determine the grammar, and it defers making a decision on which C to use until it has the relevant data. The decision as to which of these possibilities is correct is not unimportant, and may have interesting empirical consequences regarding language acquisition. McCarthy (1981) gives some data on a related issue. Spanish does not tolerate word-initial sC clusters, a fact which might be accounted for in two ways: either with a rule that inserts e before word-initial sC clusters, or by a constraint on well-formed underlying structures (a redundancy rule) barring word-initial sC. McCarthy reports that either constraint is adequate to account for Spanish morphophonemics, and there is no particular language-internal evidence to prefer one over the other. The two accounts make differing predictions regarding the treatment of loan words. The e-insertion rule predicts that loan words beginning with sC should receive an initial e (as they do: esnob, esmoking, esprey), while the well-formedness constraint makes no such prediction. McCarthy's evidence from Spanish therefore suggests that the human acquisition procedure can adopt one potential analysis and reject another without empirical evidence to distinguish between them.

[1] What is crucial for what follows is that saying context C matches a portion of a word W is equivalent to saying that C is a subset of W. Since both rule contexts and words can be written as sets of features, I use "contexts" to refer both to rule contexts and to words.
[2] I make this assumption as a first approximation. In fact, in real phonological systems phonological rules may be unordered with respect to each other.
However, in the Spanish case, the two potential analyses differ as to which components of the grammar they involve (active phonological processes versus lexical redundancy rules), which affects the overall structure of the adopted grammar to a much greater degree than the choice of one C from Sc over another.

5. RULE ORDERING In the last section I showed that a single phonological rule can be determined from the surface data. In practice, very few, if any, phonological systems involve only one rule. Systems involving more than one rule show complexity that single-rule systems do not. In particular, rules may be ordered in such a fashion that one rule affects segments that are part of the context that conditions the operation of another rule. If a rule's context is visible on the surface (i.e. has not been destroyed by the operation of another rule) it is said to be transparent, while if a rule's context is no longer visible on the surface it is opaque. On the face of it, opaque contexts could pose problems for discovery procedures. Ordering of rules has been a topic of substantial research in phonology. My main objective in this section is to show that extrinsically ordered rules in principle pose no problem for a discovery procedure, even if later rules obscure the context of earlier ones. I don't make any claim that the procedure presented here is optimal; in fact I can think of at least two ways to make it perform its job more efficiently. The output of this discovery procedure is the set of all possible ordered rule systems[3] and their corresponding underlying forms that can produce the given surface forms. As before, I assume that the data is in the form of sets of paradigms. I also assume that for every rule changing an a to a b, an alternation between a and b appears in the data; thus we know by listing the alternations in the data just what the possible a's and b's of the rule are.[4] From the assumption that rules are extrinsically ordered it follows that one of the rules must have applied last; that is, there is a unique "most surfacy" rule. The context of this rule will necessarily be transparent (visible in the surface forms), as there is no later rule to make its context opaque. Of course, the discovery procedure has no a priori way of telling which alternation corresponds to the most surfacy rule. Thus, although the identity of the segments involved in the most surfacy rule may be strictly data determined, at this stage this information is not available to the discovery procedure. So at this point, the discovery procedure proposed here systematically investigates all of the surface alternations: for each alternation it makes the hypothesis that it is the alternation of the most surfacy rule, checks that a context can be found that conditions this alternation (this must be so if the hypothesis is correct) using the single-rule algorithm presented earlier, and then investigates if it is possible to construct an empirically correct set of rules based on this hypothesis. Given that we have found a potential "most surfacy" rule, all of the surface alternates are replaced by the putative underlying segment to form a set of intermediate forms, in which the rule just discovered has been undone.
We can undo this rule because we previously identified the alternating segments. Importantly, undoing this rule means that all other rules whose contexts had been made opaque in the surface data by the operation of the most surfacy rule will now be transparent. The hypothesis tester proceeds to look for another alternation, this time in the intermediate forms, rather than in the surface forms, and so on until all alternations have been accounted for. If at any stage the hypothesis tester fails to find a rule to describe the alternation it is currently working with, that is, the single-rule algorithm determines that no rule context exists that can capture this alternation, the hypothesis tester discards the current hypothesis, and tries another. The hypothesis tester is responsible for proposing different rule orderings, which are tested by applying the rules in reverse to arrive at progressively more removed representations, with the single-rule algorithm being applied at each step to determine if a rule exists that relates one level of intermediate representation with the next. We can regard the hypothesis tester as systematically searching through the space of different rule orderings, seeking rule orderings that successfully account for the observed data. The output of this procedure is therefore a list of all possible rule orderings. As I mentioned before, I think that the enumerative approach adopted here is basically flawed. So although this procedure is relatively efficient in situations where rule ordering is strictly data determined (that is, where only one rule ordering is consistent with the data), in situations where the rules are unordered (any rule ordering will do), the procedure will generate all possible n! orderings of the n rules. This was most striking while working with some Japanese data, with 6 distinct alternations, 4 of which were unordered with respect to each other. The discovery procedure, as presented above, required approximately 1 hour of CPU time to completely analyse this data: it found 4 different underlying forms and 512 different rule systems that generate the Japanese data, differing primarily in the ordering of the rules. This demonstrates that a discovery procedure that simply enumerates all possible rule orderings is failing to capture some important insight regarding rule ordering, since unordered rules are much more difficult for this type of procedure to handle, yet unordered rules are the most common situation in natural language phonology.

[3] Thus if the n rules in the system are unordered, this procedure returns n! solutions corresponding to the n! ways of ordering these rules.
[4] The reason why the class of phonological rules considered in this paper was restricted to those mapping segments into segments was so that all alternations could be identified by simply comparing surface forms segment by segment. Thus in this discovery procedure the algorithm for identifying possible alternates can be of a particularly simple form. If we are willing to complicate the machinery that determines the possible alternations in some data, we can relax the restriction prohibiting epenthesis and deletion rules, and the requirement that all alternations are visible on the surface. That is, if the approach here is correct, the problem of identifying which segments alternate is a different problem to discovering the context conditioning the alternation.
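To make the shape of the search described in this section concrete, here is a minimal sketch of the hypothesis tester in Python (my own illustration, not the paper's Franz Lisp implementation; find_context is assumed to implement the single-rule algorithm of Appendix A, and undo to replace the surface alternate b by the putative underlying segment a):

def discover_orderings(forms, alternations, find_context, undo, rules=()):
    """Enumerate every ordered rule system consistent with the data.
    forms: the current (surface or intermediate) forms;
    alternations: the (a, b) pairs not yet accounted for."""
    if not alternations:
        yield rules                        # rules come back in application order
        return
    for a, b in alternations:
        # Hypothesis: (a, b) is the alternation of the most surfacy rule.
        context = find_context(forms, a, b)
        if context is None:
            continue                       # no conditioning context: discard hypothesis
        remaining = [alt for alt in alternations if alt != (a, b)]
        intermediate = undo(forms, a, b)   # undo the rule in the data
        yield from discover_orderings(intermediate, remaining, find_context,
                                      undo, ((a, b, context),) + rules)

Because every remaining alternation is tried as the "most surfacy" rule at each level, n mutually unordered rules come back in all n! orders, which is exactly the behaviour criticized above.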
This problem may be traced back to the assumption made above that a phonological system consists of an ordered set of rules. The Japanese example shows that in many real phonological systems, the ordering of particular rules is simply not strongly data determined. What we need is some way of partitioning different rule orderings into equivalence classes, as was done with the different rule contexts in the single-rule algorithm, and then compute with these equivalence classes rather than individual rule systems; that is, to seek to localize the weak data determinacy. Looking at the problem in another way, we asked the discovery procedure to find all sets of ordered rules that generate the surface data, which it did. However, it seems that this simply was not the right question, since the answer to this question, a set of 512 different systems, is virtually uninterpretable by human beings. Part of the problem is that phonologists in general have not yet agreed what exactly the principles of rule ordering are.[5] Still, the present discovery procedure, whatever its deficiencies, does demonstrate that rule ordering in phonology does not pose any principled insurmountable problems for discovery procedures (although the procedure presented here is certainly practically lacking in certain situations), even if a later rule is allowed to disturb the context of an earlier rule, so that the rule's context is no longer "surface true". None the less, it is an empirical question as to whether phonology is best described in terms of ordered interacting rules; all that I have shown is that such systems are not in principle unlearnable.

6. CONCLUSION In this paper I have presented the details of a discovery procedure that can determine a limited class of phonological rules with arbitrary rule ordering. The procedure has the interesting property that it can be separated into two separate phases, the first phase being superficial data analysis, that is, collecting the sets Ca and Cb of equations (2) and (3), and the second phase being the application of the procedure proper, which need never reference the data directly, but can do all of its calculations using Ca and Cb.[6] This property is interesting because it is likely that Ca and Cb have limiting values as the number of forms in the surface data increases. That is, presumably the language only has a fixed number of alternations, and each of these only occurs in some fixed contexts, and as soon as we have enough data to see all of these contexts we will have determined Ca and Cb, and extra data will not make these sets larger. Thus the computational complexity of the second phase of the discovery procedure is more or less independent of the size of the lexicon, making the entire procedure require linear time with respect to the size of the data. I think this is a desirable result, since there is something counterintuitive to a situation in which the difficulty of discovering a grammar increases rapidly with the size of the lexicon.

7. APPENDIX A: DETERMINING A RULE'S CONTEXT In this appendix I describe an algorithm for calculating the set of rule contexts Sc = { C } that satisfy equations (2) and (3), repeated below in set notation as (4) and (5). Recall that Cb are the contexts in which the alternation did take place, and Ca are the contexts in which the alternations did not take place. We want to find (the set of) contexts that simultaneously match all the Cb, while not matching any Ca.
(4) ∀Cb, C ⊆ Cb
(5) ∀Ca, C ⊄ Ca

We can manipulate these into computationally more tractable forms. Starting with (4), we have

    ∀Cb, C ⊆ Cb                      (= (4))
    ∀Cb, ∀f ∈ C, f ∈ Cb
    ∀f ∈ C, f ∈ ∩Cb, so C ⊆ ∩Cb

Put C1 = ∩Cb. Then C ⊆ C1. Now consider equation (5).

    ∀Ca, C ⊄ Ca
    ∀Ca, ∃f ∈ (C - Ca)

But since C ⊆ C1, if f ∈ (C - Ca), then f ∈ (C1 - Ca) ∩ C. Then

    ∀Ca, ∃f ∈ (C1 - Ca), f ∈ C

This last equation says that every context that fulfills the conditions above contains at least one feature that distinguishes it from each Ca, and that this feature must be in the intersection of all the Cb. If for any Ca, C1 - Ca = ∅ (the null set of features), then there are no contexts C that simultaneously match all the Cb and none of the Ca, implying that no rule exists that accounts for the observed alternation. We can construct the set Sc using this last formula by first calculating C1, the intersection of all the Cb, and then for each Ca calculating Cf = (C1 - Ca), a member of which must be in every C. The idea is to keep a set of the minimal C needed to account for the Ca so far; if C contains a member of Cf we don't need to modify it; if C does not contain a member of Cf then we have to add a member of Cf to it in order for it to satisfy the equations above. The algorithm below accomplishes this.

    set C1 = ∩ Cb
    set Sc = {∅}
    foreach Ca
        set Cf = C1 - Ca
        if Cf = ∅ return "No rule contexts"
        foreach C in Sc
            if C ∩ Cf = ∅
                remove C from Sc
                foreach f in Cf
                    add C ∪ {f} to Sc
    return Sc

where the subroutine "add" adds a set to Sc only if it or its subset is not already present. After this algorithm has applied, Sc will contain all the minimal different C that satisfy equations (4) and (5) above.

[5] In this paper I adopted strict ordering of all rules because it is one of the more stringent rule ordering hypotheses available.
[6] In fact, the sets Ca and Cb as defined above do not contain quite enough information alone. We must also indicate which segments in these contexts alternate, and what they alternate to. This may form the basis of a very different rule order discovery procedure.
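For concreteness, the following is a runnable Python transcription of the Appendix A algorithm (a sketch of my own, not from the paper; contexts are modeled as frozensets of features, and the nasalization example data is invented for illustration):

def compute_contexts(cb_list, ca_list):
    """Compute Sc: the minimal contexts C with C a subset of every Cb
    and C not a subset of any Ca. Returns None if no context exists."""
    c1 = frozenset.intersection(*cb_list)     # C1 = intersection of all Cb
    sc = {frozenset()}                        # start from the empty context
    for ca in ca_list:
        cf = c1 - ca                          # features distinguishing C from this Ca
        if not cf:
            return None                       # "No rule contexts"
        new_sc = set()
        for c in sc:
            if c & cf:
                add(new_sc, c)                # already distinguished from Ca
            else:
                for f in cf:
                    add(new_sc, c | {f})      # extend C minimally
        sc = new_sc
    return sc

def add(sc, c):
    """Add c only if neither c nor a subset of c is already present."""
    if not any(other <= c for other in sc):
        sc.add(c)

# Example: vowel nasalization conditioned by a following [+nasal] segment.
cb = [frozenset({"+nasal", "+voice"}), frozenset({"+nasal", "-voice"})]
ca = [frozenset({"-nasal", "+voice"})]
print(compute_contexts(cb, ca))               # {frozenset({'+nasal'})}

As in the paper's version, the computation never enumerates all imaginable contexts; it only extends partial contexts by features drawn from each Cf.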
1984
70
WHAT NOT TO SAY Jan Fornell Department of Linguistics & Phonetics Lund University Helgonabacken 12, Lund, Sweden ABSTRACT A problem with most text production and language generation systems is that they tend to become rather verbose. This may be due to neglect of the pragmatic factors involved in communication. In this paper, a text production system, COMMENTATOR, is described and taken as a starting point for a more general discussion of some problems in Computational Pragmatics. A new line of research is suggested, based on the concept of unification.

I COMMENTATOR A. The original model 1. General purpose The original version of Commentator was written in BASIC on a small micro computer. It was intended as a generator of text (rather than just sentences), but has in fact proved quite useful, in a somewhat more general sense, as a generator of linguistic problems, and is often thought of as a "linguistic research tool". The idea was to create a model that worked at all levels, from "raw data" like perceptions and knowledge, via syntactic, semantic and pragmatic components to coherent text or speech, in order to be able to study the various levels and the interaction between them at the same time. This means that the model is very narrow and "vertical", rather than like most other computational models, which are usually characterized by huge databases at a single level of representation. 2. The model The system dynamically describes the movements and locations of a few objects on the computer screen. (In one version: two persons, called Adam and Eve, moving around in a yard with a gate and a tree. In another version, some ships outside a harbour.) The comments are presented in Swedish or English in a written and a spoken version simultaneously (using a VOTRAX speech synthesis device). No real perceptive mechanism (such as a video camera) is included in the system (instead it is fed the successive coordinates of the moving objects), but otherwise all the other abovementioned components are present, to some extent. For both practical and intuitive reasons the system is "pragmatically deterministic" in some sense. By this I mean that a certain state of affairs is investigated only if it might lead to an expressible comment. For every change of the scene, potentially relevant and commentable topics are selected from a question menu. If something actually has happened (i.e. a change of state [1] has occurred), a syntactic rule is selected and appropriate words and phrases are put in. A choice is made between pronouns and other noun phrases, depending on the previous sentences. If a change of focus has occurred, contrastive stress is added to the new focus. Some "discourse connectives" like också (also/too) and heller (neither) are also added. There are apparently some more or less obligatory contexts for this, namely when all parts (predicates and arguments) of two sentences are equal except for one. For example "Adam is approaching the gate." "Eve is also approaching it." (predicates equal, but subjects different); "John hit Mary." "He kicked her too." (subjects and objects equal, but different predicates), etc. Stating the respective second sentences of the examples above without the also/too sounds highly unnatural. This is however only part of the truth (see below). Note that all selections of relevant topics and syntactic forms are made at an abstract level.
Once words have begun being inserted, the sentence will be expressed, and it is never the case that a sentence is constructed, but not expressed. Neither are words first put in, and then deleted. This is in contrast with many other text production systems, where a range of sentences are constructed, and then compared to find the "best" way of expressing the proposition. That might be a possible approach when writing a (single) text, such as an instruction manual, or a paper like this, but it seems unsuitable for dynamic text production in a changing environment like Commentator's. B. A new model A new version is currently being implemented in Prolog on a VAX 11/730, avoiding many of the drawbacks and limitations of the BASIC model. It is highly modular, and can easily be expanded in any given direction. It does not yet include any speech synthesis mechanism, but plans are being made to connect the system to the quite sophisticated ILS program package available at the department of linguistics. On the other hand, it does include some interactive components, and some facilities for (simple) machine translation within the specified domains, using Prolog as an intermediary level of representation. The major aim, however, is not to re-implement a slightly more sophisticated version of the original Commentator, which is basically a monologue generator, but instead to develop a new, highly interactive model, nick-named CONVERSATOR, in order to study the properties of human discourse. What will be described in the following is mostly the original Commentator, though.

II COMPUTATIONAL PRAGMATICS A. Relevance Strategies in Commentator The previous presentation of Commentator of course raises some questions, such as "What is a relevant topic?" It is a well known fact that for most text production systems it is a major problem to restrict the computer output - to get the computer to shut up, as it were, and avoid stating the obvious. In many cases this problem is not solved at all, and the system goes on to become quite verbose. On the other hand, Commentator was developed with this in mind. 1. Changes A major strategy has been to only comment on changes [2]. Thus, for example, if Commentator notes that the object called Adam is approaching the object called the gate (where approach is defined as something like "moving in the direction of the goal, with diminishing distance" - this is not obvious, but perhaps a problem of pattern recognition rather than semantics), the system will say something like (1) "Adam is approaching the gate". Then, if in the next few scenes he's still approaching the gate, nothing more needs to be said about it. Only when something new happens will a comment be generated, such as if Adam reaches the gate, which is what one might expect him to do sooner or later, if (1) is to be at all appropriate. Or if Adam suddenly reverses his direction, a slightly more drastic comment might be generated, such as (2) "Now he's moving away from it". Note however, that the Commentator can only observe Adam's behaviour and make guesses about his intentions. Since he is not Adam himself, he can never know what Adam's real intentions are. He can never say what Adam is in fact doing, only what he thinks Adam is doing, and any presuppositions or implicatures conveyed are only those of his beliefs. Thus, uttering (1) somehow implicates that the Commentator believes that Adam is approaching the gate in order to reach it, but not that Adam is in fact doing so. This might be quite important. A small sketch of this change-based strategy is given below.
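(The following is a minimal sketch, in Python and of my own devising, of the "comment on changes" strategy; Commentator itself is written in BASIC and Prolog, and the predicate strings are invented for illustration.)

def comment_on_scene(old_facts, new_facts):
    """Comment only on changes: facts that became true or ceased to hold."""
    comments = [f"Now {fact}." for fact in new_facts - old_facts]
    comments += [f"No longer {fact}." for fact in old_facts - new_facts]
    return comments

# Adam keeps approaching the gate, so nothing is said; only when the
# state changes does a comment come out.
s1 = {"approaching(adam, gate)"}
s2 = {"approaching(adam, gate)"}
s3 = {"at(adam, gate)"}
print(comment_on_scene(s1, s2))   # []
print(comment_on_scene(s2, s3))   # ['Now at(adam, gate).', 'No longer approaching(adam, gate).']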
2. Nearness Another criterion for relevance is nearness. It seems reasonable to talk about objects in relation to other objects close by [3], rather than to objects further away. For instance, if Adam is close to the gate, but the tree is on the other side of the yard, it would probably make more sense to say (3) than (4), even though they may be equally true. (3) Adam is approaching the gate. (4) Adam is moving away from the tree. All of this, of course, presupposes that it is sensible to talk about these things at all, and this is not obvious. What is a text generation system supposed to do, really?

B. Why talk? Expert systems require some kind of text generation module to be able to present output in a comprehensible way. This means that the input to the system (some set of data) is fairly well-known, as well as the desired format of the output. But this means that the quality of the output can only be measured against how well it meets the pre-determined standards. There is obviously much more to human communication than that. I believe that the serious limitations and unnaturalness of existing text generation systems (whether they are included in an expert system or not. There aren't really many of the latter type.) cannot be overcome, unless a certain important question is asked, namely "Why ever say anything at all?" Two different dimensions can be recognized. One is prompted vs spontaneous speech, and the other is the informative content. At one end of the information scale is talk that contains almost no information at all, such as most talk about the weather. This is usually a very ritualized behaviour [4], and is quite different from the exchange of data, which characterizes most interactions with computers and would be the other end of the scale. Aside from the abovementioned kind of social interaction, it seems that one talks when one is in possession of some information, and believes that the listener-to-be is interested in this information. The most obvious case is when a question has been asked, or the speaker otherwise has been prompted. In fact, this is the only case that text generation systems ever seem to take care of. Expert systems speak only when spoken to. The Commentator is made to talk about what's happening, assuming that someone is listening, and interested in what it says. But for a conversing system this is not enough. The properties of spontaneous speech have to be investigated, in order to address questions like "When does one volunteer information?", "When does one initiate a conversation?" and "When does one change topic?" It will involve quite a lot of knowledge about the potential listener and the world in general, which might be extremely hard to implement, but which I believe is necessary anyway, for other reasons as well (see below).

C. Natural Language Understanding It has been pointed out (Green (1983), and references cited therein) that "communication is not usefully thought of as a matter of decoding someone's encryption of their thoughts, but is better considered as a matter of guessing at what someone has in mind, on the basis of clues afforded by the way that person says what s/he says". Still, much work in linguistics relies on the assumption that the meaning of a sentence can be identified with its truth-conditions, and that it can somehow be calculated from the meaning of its parts [5], where the meanings of the words themselves are usually left entirely untreated.
But again, this is a far cry from what a speaker can be said to mean by uttering a sentence [6]. While some interesting work has been done trying to recognize Gricean conventional implicatures and presuppositions in a computational, model-theoretical framework (Gunji, 1981), the particularized conversational implicatures were left aside, and for a good reason too. With the kind of approaches used hitherto, they seem entirely untreatable. Instead, I would say that understanding language is very much a creative ability. To understand what someone means by uttering some sentence is to construct a context where the utterance fits in. This involves not only the linguistic context (what has been said before) and the extra-linguistic context (the speech situation), but also the listener's knowledge about the speaker and the world in general. It also involves recognizing that every utterance is made for a purpose. The speaker says what s/he does rather than something else. The used mode of expression (e.g. syntactic construction) was selected, rather than some other. In this sense, what is not said is as important as what is actually said. Note that I said "a context" rather than "the context": one can do no more than guess what the speaker had in mind, since it is strictly impossible to know.

D. Text Generation Revisited A text generation system would also need the same kind of creative ability, in order to have some conception of how the listener will interpret the message. This will of course affect how the message is put forward. One does not say what one believes the listener already knows, or is uninterested in, and on the other hand, one does not use words or syntactic constructions that one believes the listener is unfamiliar with. Since speakers generally will tend to avoid stating the obvious, and at the same time say as much as possible with as few words as possible, conversational implicatures will be the rule, rather than the exception. For example, using words like "too" and "also" means that the current sentence is to be connected to something previous. Only in a few, very obvious cases (such as the Commentator examples above) will the "previous" sentence actually have been stated. In most cases, the speaker will rely on the listener's ability to construct that sentence (or rather context) for himself.

III CONCLUSIONS Does this paint too grim a picture of the future for text generation and natural language understanding systems? I don't think so. I have just wanted to point out that unless quite a lot of information about the world is included, and a suitable Context Creating Mechanism is constructed, these systems will never rise above the phrase-book level, and any questions of "naturalness" will be more or less irrelevant, since what is discussed is something highly artificial, namely a "speaker" with the grammar and dictionary of an adult, but no knowledge of the world whatsoever. How is this Creative Mechanism supposed to work? Well, that is the question that I intend to explore. The concept of unification seems very promising [7]. Unification is currently used in several syntactic theories for the handling of features, but I can see no reason why it shouldn't be useful in handling semantics, discourse structure and the connections with world-knowledge as well. Any suggestions would be greatly appreciated.

NOTES [1] In this sense, something like "X is approaching Y" is as much a state as "X is in front of Y".
[2] This is apart from an initial description of the scene for a listener who can't see it for himself, or is otherwise unfamiliar with it. Cf a radio sports commentator, who would hardly describe what a tennis court looks like, or the general rules of the game, but will probably say something about who is playing, the weather and other conditions, etc. [3] Though closeness is of course not just a physical property. Two people in love might be said to be very close, even though they are physically far apart. This is something, however, that the Commentator would have to know, since it's usually not immediately observable. [4] For instance, if someone says "Nice weather today, isn't it?", you're supposed to answer "Yes" no matter what you really think about the weather. Not much information can be said to be exchanged. [5] This is of course valuable in the sense that it says that "John hit Bill" means that somebody called John did something called hitting to somebody called Bill, rather than vice versa. [6] And, importantly, it is the speaker who means something, and not the words used. [7] Unification is an operation a bit like putting together two pieces of a jigsaw puzzle. They can be fitted together (unified) if they have something in common (some edge), and are then, for all practical purposes, moved around as a single, slightly larger piece. For an excellent introduction to unification and its linguistic applications see Karttunen (1984). Unification is also very much at the heart of Prolog.

REFERENCES
Fornell, Jan (1983): "Commentator - ett mikrodatorbaserat forskningsredskap för lingvister", Praktisk lingvistik 8, Dept of Linguistics, Lund University.
Green, Georgia M. (1983): Some Remarks on How Words Mean, Indiana University Linguistics Club, Bloomington, Indiana.
Gunji, Takao (1981): Toward a Computational Theory of Pragmatics, Indiana University Linguistics Club, Bloomington, Indiana.
Karttunen, Lauri (1984): "Features and Values", in this volume.
Sigurd, Bengt (1983): "Commentator: A Computer Model of Verbal Production", Linguistica 20-9/10.
1984
71
WHEN IS THE NEXT ALPAC REPORT DUE? Margaret KING Dalle Molle Institute for Semantic and Cognitive Studies University of Geneva Switzerland Machine translation has a somewhat chequered history. There were already proposals for automatic translation systems in the 30's, but it was not until after the second world war that real enthusiasm led to heavy funding and unrealistic expectations. Traditionally, the start of intensive work on machine translation is taken as being a memorandum of Warren Weaver, then Director of the Natural Sciences Division of the Rockefeller Foundation, in 1949. In this memorandum, called 'Translation', Weaver took stock of earlier work done by Booth and Richens. He likened the problem of machine translation to the problem of code breaking, for which digital computers had been used with considerable success: "It is very tempting to say that a book written in Chinese is simply a book written in English which was coded into the 'Chinese code'. If we have useful methods for solving almost any cryptographic problem, may it not be that with proper interpretation we already have useful methods for translation?" (Weaver, 1949). Weaver's memorandum led to a great deal of activity in research on machine translation, and eventually to the first conference on the topic, organised by Bar-Hillel in 1952. At this conference, optimism reigned. Afterwards, teams in a number of American universities pursued research along the general lines agreed at the conference to be fruitful. At Georgetown University, L.E. Dostert started up a machine translation project with the declared aim of building a pilot system to convince potential funding agencies of the feasibility and the practicability of machine translation. This led in 1954 to the famous Georgetown experiment, a pilot system translating from Russian to English, which was hailed as an unqualified success: during the next ten years over 20 million dollars were invested in machine translation by various US government agencies. An idea of the amount of research between 1956 and 1959 can be gained by considering that in those years no fewer than twelve research groups were established in the US, a number of groups in the USSR came into existence, most within the Academy of Sciences in Moscow, and two British Universities were carrying on research. Most of the systems developed were based on what Buchmann (1984) has called a 'brute force' approach: syntactic analysis was only done at a local word-centred level, both so-called syntax and dictionary compilation were very narrowly corpus based, and thus almost totally empirical. Indeed, the problem of machine translation was perceived as being an engineering problem requiring clever programming rather than linguistic insight. By the late 1960's, workers in machine translation themselves had begun to see that the empirical approach was unsatisfactory. The European projects begun in the early 1960's at Grenoble and Milan reflect this, as does the work of the group set up in Montreal in 1962. These groups based their work from the start on clear theoretical foundations (dependency theory in Grenoble, correlational grammar in Milan, transformational theory in Montreal). However, the growing perception that brute force was not enough came too late to save research in the US. In 1964, the US National Academy of Sciences set up an investigatory committee, the Automatic Language Processing Advisory Committee (ALPAC), with the task of investigating the results so far obtained and advising on further funding. The committee, in setting up a framework for assessing machine translation, considered such questions as quality and effectiveness of human translation, the time and money required for scientists to learn Russian, amounts spent for translation within the US government and the need for translations and translators. Based on such criteria, the committee came to a strong negative conclusion: '... we do not have useful machine translation. Further, there is no immediate or predictable prospect of useful machine translation'. The ALPAC report effectively killed machine translation research in the States, although some European projects survived. In the years since the ALPAC report, a number of commercial systems has been developed, some of them, ironically, based on the very system so roundly condemned by the ALPAC committee. Two trends can be distinguished: systems, such as SYSTRAN, which still aim at no significant human intervention during the translation process, but accept pre- and/or post-editing, and interactive systems which aim primarily at being translators' aids, such as Weidner or Alps.
In 1964, the US National Academy of Sciences set up an investigatory committee, the Autcmatic Language Processing Advisory C~n- mlttee (ALPAC), with the task of investigating the results so far obtained and advising on fur- ther funding. The committee, in setting up a fra~e- work for assessing machine translation, considered such questions as quality and effectiveness of h~an translation, t_he time and money required for scientists to learn Russian, amounts spent for translation within the US goverrfaent and the need for translations and translators. Based on such criteria, the committee care to a strong negative conclusion '... we do not have useful machine translation. Further, there is no imme- diate or predictable prospect of useful machine translation '. The ALPAC report effectively killed machine translation research in the States, although some European projects survived. In the years since the ALPAC report, a number of commercial systems has been developed, some of them, ironically, based on the very system so roundly condemned by the ALPAC conndttee. Two trends can he distinguished: systems, such as SYSTRAN, which still aim at no significant human intervention during the translation process, but accept pre- and/or post-editing, and interactive systems which aim primarily at being translators' aids, such as Weidner or Alps. 352 In recent years, partially because the deve- lopment of commercial systems renewed faith in the feasibility of mad%ine translation, partially because of the results achievt~ by the surviving res~ar~--h projects, above all because of the grow- ing and pressing need for tramslation, research in machine translation has begun to revive. At the recreant, the European Ccnmunity is sponsoring a large research and development programme, France has a National Project on machine translation, a very large ntm~r of projects are being funded in Japan and a German Corporation is proposing mercial development of a system developed at the University of Texas. There are people who see strong parallels between the present situation and that ~ately before the publication of the ALPAC report, fore- seeing a second 'failure' for machine translation as a discipline. Others believe that advances in linguistics and in computer science, together with the results of the last twenty years, justify a cautious optimism, especially when the more rea- listic expectations of today's research workers (and of their funding authorities) are taken into account. The panel discussion will aim at clarifying similarities and differences in the two states of the world, weighing both scientific conside- rations and other relevant factors. The availability of Buc~m~%n (1984) greatly facilitated the writing of the first part of this panel paper. I would like to record my thanks to its author. REFERENCES ALPAE, 1966. Language and Machines{ C~ters in Translation and Linguistics. Washington D.C., Publication 1416, National Academy of Sciences. Buchmann, B. Early His.tor~ of Machine Translation.. Paper prepared for the Lugano Tutorial on Machine Translation, April 1984. Wea%~r, W. Translation. New York, 1949. Mimeo. 353
1984
72
LR Parsers For Natural Languages[1] Masaru Tomita Computer Science Department Carnegie-Mellon University Pittsburgh, PA 15213 Abstract MLR, an extended LR parser, is introduced, and its application to natural language parsing is discussed. An LR parser is a shift-reduce parser which is deterministically guided by a parsing table. A parsing table can be obtained automatically from a context-free phrase structure grammar. LR parsers cannot manage ambiguous grammars such as natural language grammars, because their parsing tables would have multiply-defined entries, which precludes deterministic parsing. MLR, however, can handle multiply-defined entries, using a dynamic programming method. When an input sentence is ambiguous, the MLR parser produces all possible parse trees without parsing any part of the input sentence more than once in the same way, despite the fact that the parser does not maintain a chart as in chart parsing. Our method also provides an elegant solution to the problem of multi-part-of-speech words such as "that". The MLR parser and its parsing table generator have been implemented at Carnegie-Mellon University.

1 Introduction LR parsers [1, 2] have been developed originally for programming languages of compilers. An LR parser is a shift-reduce parser which is deterministically guided by a parsing table indicating what action should be taken next. The parsing table can be obtained automatically from a context-free phrase structure grammar, using an algorithm first developed by DeRemer [5, 6]. We do not describe the algorithm here, referring the reader to Chapter 6 in Aho and Ullman [4]. The LR parsers have seldom been used for Natural Language Processing probably because: 1. It has been thought that natural languages are not context-free, whereas LR parsers can deal only with context-free languages. 2. Natural languages are ambiguous, while standard LR parsers can not handle ambiguous languages. The recent literature [8] shows that the belief "natural languages are not context-free" is not necessarily true, and there is no reason for us to give up the context-freedom of natural languages. We do not discuss this matter further, considering the fact that even if natural languages are not context-free, a fairly comprehensive grammar for a subset of natural language sufficient for practical systems can be written in context-free phrase structure. Thus, our main concern is how to cope with the ambiguity of natural languages, and this concern is addressed in the following section.

2 LR parsers and Ambiguous Grammars If a given grammar is ambiguous,[2] we cannot have a parsing table in which every entry is uniquely defined; at least one entry of its parsing table is multiply defined. It has been thought that, for LR parsers, multiple entries are fatal because they make deterministic parsing no longer possible. Aho et. al. [3] and Shieber [12] coped with this ambiguity problem by statically[3] selecting one desired action out of multiple actions, and thus converting multiply-defined entries into uniquely-defined ones. With this approach, every input sentence has no more than one parse tree. This fact is desirable for programming languages. For natural languages, however, it is sometimes necessary for a parser to produce more than one parse tree. For example, consider the following short story. I saw the man with a telescope. He should have bought it at the department store. When the first sentence is read, there is absolutely no way to resolve the ambiguity[4] at that time. The only action the system can take is to produce two parse trees and store them somewhere for later disambiguation. In contrast with Aho et. al. and Shieber, our approach is to extend LR parsers so that they can handle multiple entries and produce more than one parse tree if needed. We call the extended LR parsers MLR parsers.

[1] This research was sponsored by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 3597, monitored by the Air Force Avionics Laboratory under Contract F33615-81-K-1539. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the US Government.
[2] A grammar is ambiguous if some input sentence can be parsed in more than one way.
[3] By "statically", we mean the selection is done at parsing table construction time.
[4] "I" have the telescope, or "the man" has the telescope.
When the first sentence is read, there is absolutely no way to resolve the ambiguity 4 at that time. The only action the system can take is to produce two parse trees and store them somewhere for later disambiguation. In contrast with Aho et. al. and Shieber, our approach is to extend LR parsers so that they can handle multiple entries and produce more than one parse tree if needed. We call the extended LR parsers MLR parsers. ll'his rP.~i:i'¢l'Ctl was -~pon~oled by the Df.'ieose Advanced Research Projects Agency (DOD), ARPA Older No. 3597, munitoled hy lhe Air Foi'r:e Avionics Lot)oratory Under C, uolracl F3:)(~15 81 K-t539. The views and con,.;lusion$ conl,lii~cd i=1 lhi.~; (lo=;unlq;nt a~i.~ tho'.;e ()| tt1~.! ;iu|hor.~; alld should not be illlerpreted as n:pre.-',enling the official p(':licie:;, c, ilher expressed or implied, of the Defense Advanced Re,ql..';.trch Projects Ag4.tncy or the US Gow.~.rnnlent. 2A grammar is ambiQuous, if some input sentence can be parsed in more than on~. W,gy, 3By t'~tatically", we mean the ~..:election is done at par.~ing table construction time, 4"1" have the telescope, or "the man" has the telescope. 354 3 MLR Parsers of different parses have in the chart parsing method [10, 11]. The idea should be made clear by the following example. An example grammar and its MLR parsing table produced by the construction algorithm are shown in fig. 1 and 2, respectively. The MLR parsing table construction algorithm is exactly the same as the algorithm for LR parsers. Only the difference is that an MLR parsing table may have multiple entries. Grammar symbols starting with ..... represent pre-terminals. "sh n" in the action table (the left part of the table) indicates the action "shift one word from input buffer onto the stack, and go to state n". "re n" indicates the action "reduce constituents on the stack using rule n". "acc" stands for tile action "accept", and blank spaces represent "error". Goto table (the right part of the table) decides to what state the parser should go aftera reduce action. The exact definition and operation of LR parsers can be found in Aho and Ulhnan [4]. We can see that there are two multiple entries ir~ the table; on the rows of state tt and 12 at the column of "'prep". As mentioned above, once a parsing table has multiple entries, deterministic parsing is no longer possible; some kind of non- determinism is necessary. We .~hali see that our dynamic programming approach, which is described below, is much more efficient than conventional breath-first or depth-first search, and makes MLR parsing feasible. 4 An Example In this section, we demonstrate, step by step, how our MLR parser processes the sentence: I SAW A MAN WITH A TELESCOPE using the grammar and the parsing table shown in fig t and 2. This sentence is ambiguous, and the parser should accept the sentence in two ways. Until the system finds a multiple entry, it behaves in tile exact same manner as a conventional LR parser, as shown in fig 3-a below. The number on the top (ri.qhtmost) of the stack indicates the current state. Initially, the current state is 0. Since the parser is looking at the word "1", whose category is "*n", the next action "shift and goto state 4" is determined from the parsing table. "]he. parser takes the word "1" away from the input buffer, and pushes the preterminal "*n" onto tile stack. The next word the parser is looking at is "SAW", whose category is "'v", and "reduce using rule 3" is determined as the next action. 
After reducing, the parser determines the current state, 2, by looking at the intersection of the row of state 0 and the column of "NP °', and so on. Our approach is basically pseudo-parallelism (breath-first search). When a process encounters a multiple entry with n different actions, the process is split into n processes, and they are executed individually and parallelly. Each process is continued until either an "error" or an "accept" action is found. The processes are, however, synchronized in the following way: When a process "shifts" a word, it waits until all other processes "shift" the word. Intuitively, all processes always look at the same word. After all processes shift a word, the system may find that two or more processes are in the ~lnle state; that is, some processes have a common state number on the top of their stacks. These processes would do the exactly same thing until that common state number is popped from their stacks by some "reduce" action. In our parser, this common part is processed only once. As soon as two or more processes in a common state are found, they are combined into one process. This combining mechanism guarantees that any part of an input sentence is parsed no more than once in the same manner." This makes the parsing much more efficient than simple breath-first or depth-first search. Our method has the same effect in terms of parsing efficiency that posting and recognizing common subconstituents STACK MrXT-ACI ION NEXT-WORD ............................................................. 0 sh 4 [ 0 =n 4 re 3 SAW 0 NP Z sh 7 SAW 0 NP 2 "v 7 sh 3 A 0 NP 2 ev 7 =det. 3 sh IO MAN 0 NP 2 Ov 7 O¢let, 3 en tO re 4 WITH 0 NP 2 =v 7 NP tZ re 7, sh 6 WI[II ..................................... .: ....................... Fig 3oa At this point, tile system finds a multiple entry with two different actions, "reduce 7" and ".3hilt g". Both actions are processed in parallel, as shown in fig 3-b. State *det *n *v "prep $ NP PP VP S .................................................................... sh3 sh4 2 t sh6 acc 5 sh7 sh6 9 8 sht0 re3 re3 re3 re2 re2 sh3 sh4 11 sh3 sh4 12 0 1 .......................... 2 (I) S --> NP VP 3 (2) S --> S PP 4 (3) NP --> =n 5 (4) NP --> *det *n 6 (5) NP --> NP PP 7 (6) PP --> =prep NP 8 (7) VP --> "v NP 9 .......................... 10 11 12 Fig 1 ret tel re5 re5 re5 re4 re4 re4 re6 re6,sh6 re6 9 re7,sh6 re7 9 Fig 2 355 0 NP 2 VP 8 re t W[FII 0 NP 2 *v 1 HI ) 12 *prep 6 wait A 0 S [ sh 6 WI[II 0 NP 2 "v l NP 12 "prep 6 wait A This process is also finished by the action "accept". The system has accepted the input sentence in both ways. It is important to note that any part of the input sentence, including the prepositional phrase "WITH A TELESCOPE", is parsed only once in the same way, without maintaining a chart. 0 S I *l)rep 6 sh 3 A 0 NP Z *v 7 NP t2 "prep 6 sh 3 A ............................................................... Fig 3-b Here, the system finds that both processes have the common state number, 6, on the top of their slacks. It combines two proces:;os into one, and operates as if there is only one process, as shown in fig 3-c. 5 Another Example Some English words belong to more than one gramillatical category. When such a word is encountered, tile MLR parsing table can immediately tell which of its cutegories are legal and which are not. When more than one of its categories are legal, tile parser behaves as if a multiple entry were encountered. The idea should be'made clear by the following example. 
.................................... e ........................ O S | III "prep 6 sh 3 A 0 HI' 2 "v 1 i'lP 12 4v 0 S t "prep 13 "det 3 sh 10 TELESCOPE 0 MP 2 "v 7 NP t2 d#" 0 S I I "prep 6 "dot 3 "n )0 re 4 $ 0 NP 2 "v 7 NP t2 alP" Consider the word "that" in the sentence: That information is important is doubtful. A ~3ample grammar and its parsing table are shown in Fig. 4 and 5, respectively. Initially, the parser is at state O. The first word "that" can be either ""det" or "*that", and the parsing table tells us that both categories are legal. Thus, the parser processes "sh 5" and "sh 3" in parallel, as shown below. 0 S ! j "prop G ~IP tt re 6 $ 0 NP 2 "v 7 NP 12 ~ ............................................................... STACK NEXI ACIION N[XI WORD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 0 sh 5, sh 3 I'hat Fig 3-c The action "reduce 6" pops the common state number 6, and the system can no longer operate the two processes as one. The two processes are, again, operated in parallel, as shown in fig 3-d. 0 S I PP 5 re 2 $ 0 NP 2 =v 7 NP 12 PP 9 re 5 $ 0 S [ accept 0 NP 2 *v 7 NP 12 re 7 $ ............................................................. Fig 3-d NOW, one of the two processes is finished by the action "accept". The other process is still continued, as shown in fig 3-e. ............................................................. 0 NP 2 VP 8 re t $ 0 S t accept 0 sh 5 Fhat 0 sh 3 That 0 *det 5 sh 9 information 0 "that 3 sh 4 information 0 *det 5 *n 9 re 2 is 0 *that 3 *n 4 re 3 is 0 NP 2 sh 6 Is 0 =that 3 NP 2 sh 6 is Fig. 6-a At this point, the parser founds that both processes are in the same state, namely state 2, and they are combined as one process. Fig 3-e 0 (1) S --> NP VP 2 (2) NP --> "det *n 3 (3) NP --> "n 4 (4) NP --) *that S 5 (5) VP --> "be "adj 6 ......................... 7 8 9 Fig. 4 10 State *adj "be "det *n *that $ NP S VP ..................................................................... sh5 sh4 sh3 2 1 acc sh6 7 sh5 sh4 sh3 2 8 re3 sh9 shlO re1 re1 re4 re2 re5 re5 ..................................................................... Fig. 5 356 00 *t at 3 NP G M P h q~ml~a'~P 2 sh 6 iS 0 NP ~ Z *he 6 sh 10 important 0 "that 3 NP 0 NPh=mmmm~2 "be 6 ".d j . t at 3 NP f tO re 5 1, o 0 N P ~ 2 VP 7 re t |s 0 "that 3 NP- ........................................................ Fig. 6- b The process is split into two processes again. 0 ~IP 2 VP 7 re I i$ 0 *that 3 NP 2 VP 7 re 1 1=1 0 5 1 #ERRORI tl 0 "thor 3 $ 8 re 4 is ........................................................ Fig. 6-¢ • One of two processes detects "error" and halts; only the other process goes on. 0 NP 2 sh 6 t= 0 NP 2 *he 6 sh tO doubtful 0 ~JP Z "be 6 "adJ tO re 5 $ 0 .P 2 vP 7 re 1 $ 0 s I ace $ ........................................................ Fig. 6-d Finally, the sentence has been parsed in only one way. We emphasize again that, "in spite of pseudo-parallelism, each part of the sentence was parsed only once in the same way. 6 Concluding Remarks The MLR parser and its parsing table generator have been implemented at Computer Science Department, Carnegie.Mellon University. The system is written in MACLISP and running on Tops-20. One good feature of an MLR parser (and of an LR parser) is that, even if the parser is to run on a small computer, the construction of the parsing table can be done on more powerful, larger computers. 
Once a parsing table is constructed, the execution time for parsing depends weakly on the number of productions or symbols in a grammar. Also, in spite of pseudo-parallelism, our MLR parsing is theoretically still deterministic. This is because the number of processes in our pseudo-parallelism never exceeds the number of states in the parsing table. One concern of our parser is whether the size of a parsing table remains tractable as the size of a grammar grows. Fig. 6 shows the relationship between the complexity of a grammar and its LR parsing table (excerpt from Inoue [9]).

                    XPL    EULER   FORTRAN   ALGOL60
Terminals            47       74        63        66
Non-terminals        51       45        77        99
Productions         108      121       172       205
States              180      193       322       337
Table size (bytes) 2041     2587      3662      4264
        Fig. 6

Although the example grammars above are for programming languages, it seems that the size of a parsing table grows only in proportion to the size of its grammar and does not grow rapidly. Therefore, there is a hope that our MLR parsers can manage grammars with thousands of phrase structure rules, which would be generated by rule-schema and meta-rules for natural language in systems such as GPSG [7].

Acknowledgements

I would like to thank Takehiro Tokuda, Osamu Watanabe, Jaime Carbonell and Herb Simon for thoughtful comments on an earlier version of this paper.

References
[1] Aho, A. V. and Ullman, J. D. The Theory of Parsing, Translation and Compiling. Prentice-Hall, Englewood Cliffs, N. J., 1972.
[2] Aho, A. V. and Johnson, S. C. LR parsing. Computing Surveys 6:2:99-124, 1974.
[3] Aho, A. V., Johnson, S. C. and Ullman, J. D. Deterministic parsing of ambiguous grammars. Comm. ACM 18:8:441-452, 1975.
[4] Aho, A. V. and Ullman, J. D. Principles of Compiler Design. Addison Wesley, 1977.
[5] DeRemer, F. L. Practical Translators for LR(k) Languages. PhD thesis, MIT, 1969.
[6] DeRemer, F. L. Simple LR(k) grammars. Comm. ACM 14:7:453-460, 1971.
[7] Gazdar, G. Phrase Structure Grammar. D. Reidel, 1982, pages 131-186.
[8] Gazdar, G. Phrase Structure Grammars and Natural Language. Proceedings of the Eighth International Joint Conference on Artificial Intelligence v.1, August, 1983.
[9] Inoue, K. and Fujiwara, F. On LLC(k) Parsing Method of LR(k) Grammars. Journal of Information Processing vol.6(no.4):pp.206-217, 1983.
[10] Kaplan, R. M. A general syntactic processor. Algorithmics Press, New York, 1973, pages 193-241.
[11] Kay, M. The MIND system. Algorithmics Press, New York, 1973, pages 155-188.
[12] Shieber, S. M. Sentence Disambiguation by a Shift-Reduce Parsing Technique. Proceedings of the Eighth International Joint Conference on Artificial Intelligence v.2, August, 1983.
1984
73
LFG System in Prolog Hideki Yasukawa The Second Laboratory Institute for New Generation Computer Technology (ICOT) Tokyo, 108, Japan ABSTRACT In order to design and maintain a large scale grammar, a formal system for representing syntactic knowledge should be provided. Lexical Functional Grammar (LFG) [Kaplan, Bresnan 82] is a powerful formalism for that purpose. In this paper, the Prolog implementation of the LFG system is described. Prolog provides good tools for the implementation of LFG. LFG can be translated into DCG [Pereira, Warren 80] and functional structures (f-structures) are generated during the parsing process.

I INTRODUCTION The fundamental purposes of syntactic analysis are to check the grammaticality and to clarify the mapping between semantic structures and syntactic constituents. DCG provides tools for fulfilling these purposes. But, due to the fact that arbitrary Prolog programs can be embedded into DCG rules, the grammar becomes too complicated to understand, debug and maintain. So, the development of a formal system to represent syntactic knowledge is needed. The main concern is to define the appropriate set of descriptive primitives used to represent the syntactic knowledge. LFG seems to be a promising formalism among current linguistic theories which satisfies these requirements. LFG is adopted for our preliminary version of the formal system, and the Prolog implementation of LFG is described in this paper.

II SIMPLE OVERVIEW OF LFG In this section, a simple overview of LFG is given (see [Kaplan, Bresnan 82] for details). LFG is an extension of context free grammar (CFG) and has two levels of representation, i.e. c-structures (constituent structures) and f-structures (functional structures). A c-structure is generated by CFG and represents the surface word and phrase configurations in a sentence, and the f-structure is generated by the functional equations associated with the grammar rules and represents the configuration of the surface grammatical functions. Fig. 1 shows the c-structure and f-structure for the sentence "a girl handed the baby a toy" ([Kaplan, Bresnan 82]).

s
+-- np
|   +-- det: a
|   +-- n:   girl
+-- vp
    +-- v:   handed
    +-- np
    |   +-- det: the
    |   +-- n:   baby
    +-- np
        +-- det: a
        +-- n:   toy
(a) c-structure

subj   [ spec a
         num  sg
         pred "girl" ]
tense  past
pred   "hand<(↑ subj)(↑ obj2)(↑ obj)>"
obj    [ spec the
         num  sg
         pred "baby" ]
obj2   [ spec a
         num  sg
         pred "toy" ]
(b) f-structure

Fig. 1 The example c-structure and f-structure

As shown in Fig. 1, an f-structure is a hierarchical structure constructed by pairs of an attribute and its value. An attribute represents a grammatical function or syntactic feature. Lexical entries specify a direct mapping between semantic arguments and configurations of surface grammatical functions, and grammar rules specify a direct mapping between these surface grammatical functions and particular constituent structure configurations. To represent these grammatical relations, several devices and schemata are provided in LFG as shown below. (a) meta variables (i) ↑ & ↓ (immediate dominance) (ii) ⇑ & ⇓ (bounded dominance) (b) functional notations: a designator (↑ subj) indicates the "subj" attribute of the value of the mother node's f-structure (c) Equational schema (i) = (functional equation) (ii) ∈ (set inclusion)
Fig. 2 shows the example grammar rules and lexical entries in LFG which generate the c-structure and the f-structure in Fig. 1.

1. s  -> np vp     (↑ subj)=↓   ↑=↓
2. np -> det n     ↑=↓          ↑=↓
3. vp -> v np np   ↑=↓   (↑ obj)=↓   (↑ obj2)=↓
4. det -> [a]      (↑ spec)=a   (↑ num)=sg
5. det -> [the]    (↑ spec)=the
6. n  -> [girl]    (↑ num)=sg   (↑ pred)='girl'
7. n  -> [baby]    (↑ num)=sg   (↑ pred)='baby'
8. n  -> [toy]     (↑ num)=sg   (↑ pred)='toy'
9. v  -> [handed]  (↑ tense)=past
                   (↑ pred)='hand<(↑ subj)(↑ obj2)(↑ obj)>'

Fig. 2 Example grammar rules and lexical entries of LFG (from [Kaplan, Bresnan 82])

As shown in Fig. 2, the primitives that represent grammatical relations are encoded in grammar rules and lexical entries. Each syntactic node has its own f-structure, and the partial value of the f-structure is defined by the Equational schema. For example, the functional equation "(↑ subj)=↓" associated with the daughter "np" node of grammar rule 1 of Fig. 2 specifies that the value of the "subj" attribute of the f-structure of the mother "s" node is the f-structure of its daughter "np" node. The value constraints on the f-structure are specified by the Constraining schema. Moreover, the grammaticality of a sentence is defined by the three conditions shown below.

(1) Uniqueness: a particular attribute may have at most one value in a given f-structure.
(2) Completeness: an f-structure must contain all the governable grammatical functions governed by its predicate.
(3) Coherence: all the governable grammatical functions that an f-structure contains must be governed by its predicate.

III IMPLEMENTATION OF LFG PRIMITIVES

As indicated in section II, two distinct schemata are employed in the construction of f-structures. In the current implementation, f-structures are generated during the parsing process by executing the functional equations and set inclusions associated with each syntactic node. After the parsing is done, the f-structures are checked as to whether their value assignments are consistent with the value constraints on them. The Completeness condition on grammaticality is also checked after parsing. The LFG primitives are realized by Prolog programs and embedded into the DCG rules. The Equational schema is executed during the parsing process by the execution of DCG rules. The functional equation can be seen as an extension of the unification of Prolog obtained by introducing equality on f-structures.

A. Representations of Data Types

The primitive data types constructing f-structures are symbols, semantic predicates, subsidiary f-structures, and sets of symbols, semantic predicates, or f-structures. In the current implementation, these data types are represented as follows:

1) symbols ==> atom or integer
2) semantic predicates ==> sem(X), where X is a predicate
3) f-structure ==> Id:Obt, where "Id" is an identifier variable (ID-variable). Each syntactic node has a unique ID-variable, which is used to identify its f-structure. "Obt" is an ordered binary tree whose leaves each contain the pair of an attribute and its value.
4) set ==> {element1, element2, ..., elementN}

An f-structure can be seen as a partially defined data structure, because its value is partially generated by the Equational schema during the parsing process. An ordered binary tree, obt for short, is suitable for representing partially defined data. An obt is a binary tree whose labels are ordered. A binary tree "Obt" is represented by a term of the following form:

    Obt = obt(v(Attr,Value),Less,Greater)

The "v(Attr,Value)" is a leaf node of the tree. The "Attr" is an attribute name, used as the label of the leaf node, and the "Value" is its value. The "Less" and "Greater" are also binary trees. The "Obt" is ordered when "Less" ("Greater") is also ordered and each label of its leaf nodes is less (greater) than the label of "Obt", i.e. "Attr". If a subtree of a tree is not yet defined, it is represented by a logical variable; when its label is defined later, the logical variable is instantiated. The insertion of a label and its value into an obt is therefore done by only one unification, without rewriting the tree. This is the merit of using an ordered binary tree. For example, the f-structure for the noun phrase "a girl" (the value of the "subj" in Fig. 1(b)) can be graphically represented as in Fig. 3. The "Vi"s in Fig. 3 are the variables representing the uninstantiated subtrees.

                 v(spec,a)
                /         \
          v(num,sg)        V4
          /       \
        V1      v(per,3)
                /      \
              V2        V3

Fig. 3 The graphical representation of an obt
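To make the mechanism concrete, the lookup-or-insert operation can be sketched as follows (an illustrative reconstruction, not code from the paper; the predicate name obt_get is assumed). Looking up an attribute in an undefined subtree is itself the single unification that adds the leaf:

    % Find (or, by instantiation, insert) attribute Attr with value
    % Value in the ordered binary tree Obt.
    obt_get(Attr, Value, Obt) :-
        (   var(Obt) ->
            % Undefined subtree: one unification adds the new leaf.
            Obt = obt(v(Attr, Value), _, _)
        ;   Obt = obt(v(A, V), Less, Greater),
            compare(Order, Attr, A),
            (   Order = (=) -> Value = V            % found: unify values
            ;   Order = (<) -> obt_get(Attr, Value, Less)
            ;                  obt_get(Attr, Value, Greater)
            )
        ).

Under this sketch, the query obt_get(spec, a, F), obt_get(num, sg, F), obt_get(per, 3, F) builds exactly the tree of Fig. 3, with V1 through V4 left as unbound variables.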
B. Functional Notation

The functional notations are represented by ID-variables instead of the meta variables ↑ and ↓, i.e. the meta variables must be replaced by object level variables. For example, the designator (↑ subj) associated with the category s is described as [subj, Id_S], where Id_S is the ID-variable for s. The meta variables for bounded dominance are represented by the terms controllee(Cat) and controller(Cat), where "Cat" is the name of the syntactic category of the controller or controllee.

C. Predicates for LFG Primitives

The predicates for the LFG primitives are as follows (d, d1, d2 are designators, s is a set, and ~ is a negation symbol):

1) d1 = d2      -> equate(d1,d2,Old,New)
2) d ∈ s        -> include(d,s,Old,New)
3) d1 =c d2     -> constrain(d1,d2,OldC,NewC)
4) d            -> exist(d,OldC,NewC)
5) ~(d1 =c d2)  -> neg_constrain(d1,d2,OldC,NewC)
6) ~d           -> not_exist(d,OldC,NewC)

The "Old" and "New" are global value assignments; they are used to propagate the changes of global value assignments made by the execution of each predicate. The "OldC" and "NewC" are constraint lists, used to gather all the constraints in the analysis. Besides these predicates, additional predicates are provided for checking constraints during the parsing process. They are used to kill any parsing process generating an inconsistent result as soon as the inconsistency is found.

The predicate "equate" gets the temporary values of the designators d1 and d2, consulting the global value assignments. Then "equate" performs the unification of their values. The unification is similar to set-theoretic union, except that it is only defined for sets of nondistinct attributes. Fig. 4 shows example trace output of "equate" in the course of analyzing the sentence "a girl hands the baby a toy".

The value of the designator Det is
    spec the
The value of the designator N is
    num sg
    per 3
    pred sem(girl)
Result of unification is
    spec the
    num sg
    per 3
    pred sem(girl)

Fig. 4 Tracing results of equate
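The unification step inside "equate" can be sketched in the same style (again an illustrative reconstruction, simplified to atomic attribute values): every defined leaf of one tree is looked up in the other with the obt_get/3 predicate sketched earlier, so shared attributes have their values unified, and the union fails exactly when two values conflict:

    % Merge every defined leaf of the first obt into the second.
    obt_merge(F1, _F2) :-
        var(F1), !.                  % undefined subtree: nothing to merge
    obt_merge(obt(v(A, V), Less, Greater), F2) :-
        obt_get(A, V, F2),           % unify (or insert) A's value in F2
        obt_merge(Less, F2),
        obt_merge(Greater, F2).

For example, merging a tree containing the leaf num-sg into one containing num-N instantiates N to sg, while merging it into one containing num-pl fails, just as the Uniqueness condition requires.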
In order to keep grammar rules highly understandable, it is better to hide unnecessary data such as global value assignments or constraint lists. Macro notations similar to the original notation of LFG are therefore provided to users. The macro expander translates the macro notations into the Prolog programs corresponding to the LFG primitives, and this macro expansion results in a considerable improvement of the writability and the understandability of the grammar. The syntax of the macro notations is:

(a) d1 = d2      -> eq(d1,d2)
(b) d ∈ s        -> incl(d,s)
(c) d1 =c d2     -> c(d1,d2)
(d) d            -> ex(d)
(e) ~(d1 =c d2)  -> not_c(d1,d2)
(f) ~d           -> not_ex(d)

These macro notations for LFG primitives are placed in the third argument of each predicate in the DCG rules corresponding to syntactic categories, as shown in Fig. 5 (a), which corresponds to grammar rule 1 in Fig. 2. The grammar rule in Fig. 5 (a) is translated into the one shown in Fig. 5 (b). The variables "Id_S", "Id_Np", and "Id_Vp" are the ID-variables for each syntactic category.

s(s(Np,Vp),Id_S,[]) -->
    np(Np,Id_Np,[eq([subj,Id_S],Id_Np)]),
    vp(Vp,Id_Vp,[eq(Id_S,Id_Vp)]).

(a) The DCG rule with macros for LFG

s(s(Np,Vp),Id_S,Old,New,OldC,NewC) -->
    np(Np,Id_Np,Old,Old1,OldC,OldC1),
    {equate([subj,Id_S],Id_Np,Old1,Old2)},
    vp(Vp,Id_Vp,Old2,Old3,OldC1,NewC),
    {equate(Id_S,Id_Vp,Old3,New)}.

(b) The result of macro expansion

Fig. 5 Example DCG rule for LFG analysis

Macro descriptions are translated into the corresponding predicates in the case of a grammar rule. In the case of a lexical entry, the macro descriptions are translated into the corresponding predicate, which is then executed, and the f-structure of the lexical entry is generated.

D. Issues on the Implementation

Though f-structures are constructed during the parsing process, the execution of the Equational schema is independent of the parsing strategy. This is necessary to keep the grammar rules highly declarative. There are several advantages to using Prolog in implementing LFG. First, the Uniqueness condition on an f-structure is fulfilled by the original unification of Prolog. Second, an ordered binary tree is a good data structure for representing an f-structure; the use of an ordered binary tree reduces the processing time by 30 percent compared with using a list to represent an f-structure. Third, the use of ID-variables is also effective, because the sharing of an f-structure can be achieved by just one unification of the corresponding ID-variables.

Though the computational complexity of the Equational schema is very expensive, LFG provides an expressive and natural account of linguistic evidence. In order to overcome the inefficiency, the introduction of a parallel or concurrent execution mechanism seems to be a promising approach. The computation model of LFG is similar to the constraint model of computation [Steele 80]. The Prolog implementation of LFG by Reyle and Frey [Reyle, Frey 83] aimed at a more direct translation of functional equations into DCG. Although their implementation is more efficient, it does not treat the Constraining schema, set inclusions, compound functional equations such as (↑ vcomp subj), or bounded dominance, and their grammar rules seem to be made too complex by the direct encoding of f-structures into them. In my implementation, more of the LFG primitives are realized than in theirs, in order to provide a formal system with powerful descriptive capabilities for representing syntactic knowledge, and the grammar rules are more understandable and can be more easily modified.

Time used in analysis is
    972 ms (parsing)
    19 ms (checking constraints)
    … ms (checking completeness)

subj    spec the
        num sg
        per 3
        pred sem(girl)
pred    sem(persuade([subj,A],[obj,A],[vcomp,A]))
obj     spec the
        num sg
        per 3
        pred sem(baby)
tense   past
vcomp   subj  spec the
              num sg
              per 3
              pred sem(baby)
        inf   +
        pred  sem(go([subj,B]))
        to    +

Fig. 6 The result of analyzing the sentence "the girl persuaded the baby to go"
IV THE RESULT OF AN EXPERIMENT

Fig. 6 shows the result of analyzing the sentence "the girl persuaded the baby to go". The LFG system is written in DEC-10 Prolog [Pereira, et al. 78] and executed on a DEC 2060. As shown in Fig. 6, functional control [Kaplan, Bresnan 82] is realized in the f-structure of the vp: the value of the "subj" attribute of the "vcomp" is functionally controlled by the "obj" of the f-structure of the "s" node. The time used for syntactic analysis includes the time consumed by the parsing process and the time consumed by the Equational schema.

V CONCLUSION

The Prolog implementation of LFG has been described. It is the first step towards a formal system for representing syntactic knowledge. As a result, it has become quite obvious that Prolog is suitable for implementing LFG. Further research on the formal system will be carried out by analyzing a wider variety of actual utterances, to extract the further primitives necessary for the analyses and to give the necessary schemata for those primitives.

VII ACKNOWLEDGEMENTS

The author is thankful to Dr. K. Furukawa, the chief of the second research laboratory of ICOT Research Center, and to the members of the natural language processing group in ICOT Research Center, for their discussions. The author is grateful to Dr. K. Fuchi, Director of the ICOT Research Center, for providing the opportunity to conduct this research.

VIII REFERENCES

[Kaplan, Bresnan 82] "Lexical-Functional Grammar: A Formal System for Grammatical Representation," in The Mental Representation of Grammatical Relations, Bresnan ed., MIT Press, 1982.
[Reyle, Frey 83] "A Prolog Implementation of Lexical Functional Grammar," Proc. of IJCAI-83, pp. 693-695, 1983.
[Pereira, et al. 78] "User's Guide to DECsystem-10 Prolog," Department of Artificial Intelligence, Univ. of Edinburgh, 1978.
[Pereira, Warren 80] "Definite Clause Grammars for Language Analysis -- A Survey of the Formalism and a Comparison with Augmented Transition Networks," Artificial Intelligence, 13, pp. 231-278, 1980.
[Steele 80] "The Definition and Implementation of a Computer Programming Language Based on Constraints," MIT AI-TR-595, 1980.
1984
74
The Design of a Computer Language for Linguistic Information

Stuart M. Shieber
Artificial Intelligence Center
SRI International
and
Center for the Study of Language and Information
Stanford University

Abstract

A considerable body of accumulated knowledge about the design of languages for communicating information to computers has been derived from the subfields of programming language design and semantics. It has been the goal of the PATR group at SRI to utilize a relevant portion of this knowledge in implementing tools to facilitate communication of linguistic information to computers. The PATR-II formalism is our current computer language for encoding linguistic information. This paper, a brief overview of that formalism, attempts to explicate our design decisions in terms of a set of properties that effective computer languages should incorporate.

1. Introduction1

The goal of natural-language processing research can be stated quite simply: to endow computers with human language capability. The pursuit of this objective, however, has been a difficult task for at least two reasons: first, this capability is far from being a well-understood phenomenon; second, the tools for teaching computers what we do know about human language are still very primitive. The solution of these problems lies within the respective domains of linguistics and computer science.

Similar problems have arisen previously in computer science. Whenever a new computer application area emerges, there follow new modes of communication with computers that are geared towards such areas. Computer languages are a direct result of this need for effective communication with computers. A considerable body of accumulated knowledge about the design of languages for communicating information to computers has been derived from the subfields of programming language design and semantics. It has been the goal of the PATR group at SRI2 to utilize a relevant portion of this knowledge in implementing tools to facilitate communication of linguistic information to computers.

1This research has been made possible in part by a gift from the Systems Development Foundation, and was also supported by the Defense Advanced Research Projects Agency under Contract N00039-80-C-0575 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, or the United States government. The author is indebted to Fernando Pereira, Barbara Grosz, and Ray Perrault for their comments on earlier drafts.

The PATR-II formalism is our current computer language for encoding linguistic information. This paper, a brief overview of that formalism, attempts to explicate our design decisions in terms of a set of properties that effective computer languages should incorporate, namely: simplicity, power, mathematical well-foundedness, flexibility, implementability, modularity, and declarativeness. More extensive discussions of various aspects of the PATR-II formalism and systems can be found in papers by Shieber et al. [83], Pereira and Shieber [84] and Karttunen [84].

The notion of designing specialized computer languages and systems to encode linguistic information is not new; PROGRAMMAR [Winograd, 72], ATNs [Woods, 70], and DIALOGIC [Grosz, et al., 82] are but a few of the better-known examples.
Furthermore, a trend has arisen recently in linguistics towards declarativeness in grammar formalisms, for instance, lexical-functional grammar (LFG) [Bresnan, 83], generalized phrase-structure grammar (GPSG) [Gazdar and Pullum, 82] and functional unification grammar (UG) [Kay, 83]. Finally, in computer science there has been a great deal of interest in declarative languages (e.g., logic programming and specification languages) and their supporting denotational semantics. But to our knowledge, no attempt has yet been made to combine the three approaches so as to yield a declarative computer language with clear semantics designed specifically for encoding linguistic information. Such a language, of which PATR-II is an example, would reflect a felicitous convergence of ideas from linguistics, artificial intelligence, and computer science.

2. The Critical Properties of the Language

It is not the purpose of this paper to provide a comprehensive description of the PATR-II project, or even of the formalism itself. Rather, we will discuss briefly the critical properties of PATR-II to give a flavor for our approach to the design of the language. References to papers with more complete descriptions of particular aspects of the project are provided when appropriate.

2This rather liquid group has included at various times: John Bear, Lauri Karttunen, Fernando Pereira, Jane Robinson, Stan Rosenschein, Susan Stucky, Mabry Tyson, Hans Uszkoreit, and the author.

2.1. Simplicity: An Introduction to the PATR-II Formalism

Building on a convergence of ideas from the linguistics and AI communities, PATR-II takes as its primitive operation an extended pattern-matching technique, unification, first used in logic and theorem-proving research and lately finding its way into research in linguistics [Kay, 79; Gazdar and Pullum, 82] and knowledge representation [Reynolds, 70; Ait-Kaci, 83]. Instead of unifying logic terms, however, PATR unification operates on directed acyclic graphs (DAGs).3 DAGs can be atomic symbols or sets of label/value pairs, where the values are themselves DAGs (either atomic or complex). Two labels can have the same value; thus the use of the term graph rather than tree. DAGs are notated either by drawing the graph structure itself, with the labels marking the arcs, or, as in this paper, by notating the sets of label/value pairs in square brackets, with the labels separated from their values by a colon; e.g., a DAG associated with the verb "knight" (as in "Uther wants to knight Arthur") would appear (in at least one of our grammars) as

[cat: v
 head: [aux: false
        form: nonfinite
        voice: active
        trans: [pred: knight
                arg1: <f1134> []
                arg2: <f1138> []]]
 syncat: [first: [cat: np
                  head: [trans: <f1134>]]
          rest: [first: [cat: np
                         head: [trans: <f1138>]]
                 rest: <f1140> lambda]
          tail: <f1140>]]

Reentrant structure is notated by labeling the DAG with an arbitrary label (in angle brackets), then using that label for future references to the DAG. Associated with each entry in the lexicon is a set of DAGs.4

3Technically, these are rooted, directed, acyclic graphs with labeled arcs. Formal definitions of these and other technical notions can be found in Appendix A of Shieber et al. [83]. Note that some implementations have been extended to handle cyclic graph structures as well as graph structures with disjunction and negation [Karttunen, 84].
4In our implementation, this association is not directly encoded, since this would yield a grossly inefficient characterization of the lexicon, but is mediated by a morphological analyzer. See Section 2.6 for further details.

The root of each DAG will have an arc labeled cat, whose value will be the category of the associated lexical entry. Other arcs may encode information about the syntactic features, translation, or syntactic subcategorization of the entry. But only the label cat has any special significance; it provides the link between context-free phrase structure rules and the DAGs, as explicated below.

PATR-II grammars consist of rules with a context-free phrase structure portion and a set of unifications on the DAGs associated with the constituents that participate in the application of the rule. The grammar rules describe how constituents can be built up to form new constituents with associated DAGs. The right side of the rule lists the cat values of the DAGs associated with the filial constituents; the left side, the cat of the parent. The associated unifications specify equivalences that must exist among the various DAGs and sub-DAGs of the parent and children. Thus, the formalism uses only one representation (DAGs) for lexical, syntactic, and semantic information, and one operation (unification) on this representation.

By way of example, we present a trivial grammar for a fragment of English, with a lexicon associating words with DAGs.

S -> NP VP
    <VP agr> = <NP agr>
VP -> V NP
    <VP agr> = <V agr>

Uther:   <cat> = np
         <agr number> = singular
         <agr person> = third
Arthur:  <cat> = np
         <agr number> = singular
         <agr person> = third
knights: <cat> = v
         <agr number> = singular
         <agr person> = third

This grammar (plus lexicon) admits the two sentences "Uther knights Arthur" and "Arthur knights Uther." The phrase structure associated with the first of these is:

[S [NP Uther] [VP [V knights] [NP Arthur]]]

The VP rule requires that the agr feature of the DAG associated with the VP be the same as (unified with) the agr of the V. Thus, the VP's agr feature will have as its value the same node as the V's agr, and hence the same values for the person and number features. Similarly, by virtue of the unification associated with the S rule, the NP will have the same agr value as the VP and, consequently, the V. We have thus encoded a form of subject-verb agreement. Note that the process of unification is order-independent; we would get the same effect regardless of whether the unifications at the top of the parse tree were effected before or after those at the bottom. In either case, the DAG associated with, e.g., the VP node would be

[cat: vp
 agr: [person: third
       number: singular]]
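As a rough illustration of how these unifications propagate agreement (our own rendering in a conventional logic-programming style, not SRI's implementation), the same fragment can be written as a Prolog DCG in which a shared variable plays the role of the <VP agr> = <NP agr> and <VP agr> = <V agr> annotations:

    % S -> NP VP, with <NP agr> = <VP agr>
    s       --> np(Agr), vp(Agr).
    % VP -> V NP, with <VP agr> = <V agr>
    vp(Agr) --> v(Agr), np(_).
    np(agr(third, singular)) --> [uther].
    np(agr(third, singular)) --> [arthur].
    v(agr(third, singular))  --> [knights].

    % ?- phrase(s, [uther, knights, arthur]).   succeeds

Because the agreement values travel by unification alone, this toy grammar is just as order-independent as the PATR-II original.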
As a linguistic theory, this much power might be considered disadvantageous; as a compuler language, however, such power is clearly desir- able..-.ince the intent of the language is to enable the mod- eling of m~my kinds of linguistic analyses from a range of theories. As s*l,'h, PATR-II is a tool, not a result. N,~v(,rthelc.~s, a good case could be made for maintain- ing at least the decidability of determining whether a string is admitted by a PATR-II grammar. This property can be ensured by requiring the context-free skeleton to have the property ~f off-line parsability [Pereira, 83], which was used originally in the definition of LFG to maintain the decid- ability of that f{,rmalism [Kaplan and Bresnan, 83]. Off-line parsability req.ires that the context-free "skeleton" of the grammar allows no trivial cyclic derivations of the form A ~ A. 2.3. Mathematical Well-Foundedness: A Denotational Semantics One reason for maintaining the simplicity of the bare PATR-II formalism is to permit a clean semantics for the language. We have provided a denotational semantics for PATR-ll [Pereira and Shieber, 84] based on the information systems domain theory of Dana Scott [Scott, 82]. Insofar as more com[)lex formalisms, such as GPSG and LFG, can be modeled a~s appropriate notations for PATR-II grammars, PATR-II's denotational semantics constitutes a framework in which the semantics of these formalisms can also be de- fined, discussed, and compared. As it appears that not all the power of domain theory is needed for the semantics of PATR-II, we are currently pursuing the possibility of build- ing a semantics based on a less powerful model, s 2.4. FIexibillty: Modeling Linguistic Con- structs Clearly, the bare PATR-II formalism, as it was pre- sented in Section 2.1, is sorely inadequate for any major attempt at building natural-language grammars because of its verbosity and redundancy. Efficiency of encoding was s But see Pereira and Shieber [84] for arguments in favor of using domain theory even if all the available power is not utilized. temporarily sacrificed in an attempt to keep the underlying formalism simple, general, and semantically well-founded. However, given a simple underlying formalism, we carl build more efficient, specialized languages on top of it, nmch as MACLISP might be built on top of pure LISP. And just as MACLISP need not be implemented (and is not imple- mented) directly in pure LISP, specialized formalisms built conceptually on top of pure PATR-I1 need not be so imple- mented (although currently we do implement thenl directly through pure PATR-II). The effectiveness of this approach can be seen in the fact that at lea:st a sizable portion of English syntax has been encoded in various experimental PATR-II grammars constructed to date. The syntactic con- structs encoded include subcategorization of various com- plement types (N/as, Ss, etc.), active, passive, "there" in- sertion, extraposition, raising, and equi-NP constructic)ns, and unbounded dependencies (such a~s Wh-movement and relative clauses). Other theory-dependent devices that have been modeled with PATR-II include head-feature percola- tion [Gazdar and Puilum, 82], and LFG-like semantic forms [Kaplan and Bresnan, 83]. Note that none of these con- structs and techniques required expansion of the underly- ing formalism; indeed, the constructions all make use of the techniques described in this section. See Shieber et al. [83] for a detailed discussion of the modeling of some ,)f these phenomena. 
The devices now available for molding PATR-II to con- form to a particular intended usage or linguistic theory are in their nascent stage, llowever, because of their great im- portance in making the PATR-II system a usaHe one, we will discuss them briefly. It is important to keep in mind that these methods should not be considered a part of the underlying formalism, but merely "syntactic sugar" to in- crease the system's utility and allow it to conform to a user's intentions. 2.4.1. Templates Because so much of the information in tile PATR-II grammars under actual development tends to be encoded in the lexicon, most of our research has been devoted to methods for removing redundancy in the lexicon by all,w- ing the users themselves to define primitive constructs and operations on lexical items. Primitive constructs, such as the transitive, dyadic, or equi-NP properties of a verb, can be defined by means of templates, that is, DAGs that en- code some linguistically isolable portion of the DAG of a lexical item. These template DAGs can then be c(~mbined to build the lexical item out of tile user-defined primitives. As a simple example, we could define (with the follow- ing syntax) the template Verb as Let Verb be <eat> = V and the template ThirdSing as Let ThirdSing be <agr number> = singular <agr person> = third The lexical entry for "knights" would then be 364 knights: Verb ThirdSin 9 Templates can themselves refer to other templates, en- abling definition of abstract linguistic concepts hierarchi- cally. For instance, a modal verb template may use an aux- iliary verb template, which in term may be defined using the verb template above. In fact, templates are currently employed for abstracting notions of subcategorization, verb form, semantic type, and a host of other concepts. 2.4.2. Lexical Rules More complex relationships among lexical items can be encoded by means of lexical rules These rules, such as passive and "there" insertion, are user-definable operations on the lexical items, enabling one variant of a word to be built from the specification of another variant. A lexical rule is specified as a set of selective unifications relating an input DAG and an output DAG. Thus, unification is the primitive used in this device as well. Lexieal rules are used to encode the relationships among various lexical entries that would typically be thought of as transformations or relation-changing rules (depending on one's ideological outlook}. Because lexical rules perform these operations, the lexicon need include only a proto- type entry for each verb. The variant forms can be derived through lexical rules applied in accordance with the mor- phology actually found on the verb. (The morphological analysis in the implementations of PATR-II is performed by a program based on the system of Koskenniemi [83] and was written by Lauri Karttunen [83].) For instance, given a PATR-II grammar in which the DAGs are used to emulate the f-structures of LFG, we might write a passive lexical rule as follows (following Bres- nan [83]): e Define Passive as <out cat> = <in cat> < out form > = passprt <out subj> = <in obj> <out obj> = <in subj> The rule states in effect that the output DAG (the one associated with the passive verb form) marks the lexical item as being a passive verb whose object is the input DAG's subject and whose subject is the input's object. Such lexical rules have been used for encoding the active/passive dichotomy, "there" insertion, extraposition, and other so- called relation-changing rules. 
2.5. Modularity and Declarativeness

The PATR-II formalism is a completely declarative formalism, as evidenced by its denotational semantics and the order-independence of its definition. Modularity is achieved through the ability to define primitive templates and lexical rules that are shared among lexical items, as well as by the declarative nature of the grammar formalism itself, removing problems of interaction of rules. Rules are guaranteed to always mean the same thing, regardless of the environment of other rules in which they are placed.

2.6. Implementability

Implementability is an empirical matter, given credence by the fact that we now have three implementations of the formalism. One desirable aspect of the simplicity and declarative nature of the formalism is that, even though the three implementations differ substantially from one another (using different parsing algorithms with both top-down and bottom-up properties, different implementations of unification, and different methods of compiling the rules), all are able to run on exactly the same grammars, yielding identical results.

The three implementations of the PATR-II system currently in operation at SRI are as follows:

• An INTERLISP version for the DEC-2060 using a variant of the Cocke-Kasami-Younger parsing algorithm and the KIMMO morphological analyzer [Karttunen, 83], and a limited programming environment.

• A ZETALISP version for the Symbolics 3600 using a left-corner parsing algorithm and the KIMMO morphological analyzer, with an extensive programming environment (due primarily to Mabry Tyson) that includes incremental compilation, multiple window debugging facilities, tracing, and an integrated editor.

• A Prolog version (DEC-10 Prolog) running on the DEC-2060 by Fernando Pereira, designed primarily as a testbed for experimentation with efficient structure-sharing DAG unification algorithms, and incorporating an Earley-style parsing algorithm.

In addition, Lauri Karttunen and his students at the University of Texas have implemented a system based on PATR-II but with several interesting extensions, including disjunction and negation in the graph structures [Karttunen, 84]. These extensions will undoubtedly be integrated into the SRI systems, and formal semantics for them are being pursued.

3. Conclusion

The PATR-II formalism was designed as a computer language for encoding linguistic information. The design was influenced by current theory and practice in computer science, especially in the arenas of programming language design and semantics. The formalism is simple (consisting of just one primitive operation, unification), powerful (although it can be constrained to be decidable), mathematically well-founded (with a complete denotational semantics), flexible (as demonstrated by its ability to model analyses in GPSG, LFG, DCG and other formalisms), modular (because of its higher-level notational devices such as templates and lexical rules), declarative (yielding order-independence of operations), and implementable (as demonstrated by three quite dissimilar implemented systems and one highly developed programming environment).
~'I c(~nvol'~(.llCC of techniques from several domains-- comt)utor science, programming language design, natural language processing and linguistics. Its positioning at the center of these trends arises, however, not from the ad- mixture of many discrete techniques, but rather from the application of a single simple yet powerful concept to the encoding of linguistic information. References Ait-Kaci, II., 1~..~83: "A new Model of Computation Based on a Calcuhls of Type Sul)snml)tion," Doctoral Dissertation Pro- posal, I)ept. of (?;oml~uter and Information Science, Univer- sity of Pennsylvania (Noveml:er). Bresnan, .loan. 19::t:~: The mental representation of grammatical relations (ed.), (:nmbriHge: MIT Press. Gazdar, C. and C.K. Pullum, 198'2.: "GPSG: A Theoretical Syn- opsis," Indiana University I,inguistics Club, Bloomington, Indiana. Grosz, B., N. llaas, (~. Ilon,.Irix. J. tlobbs, P. Martin, R. Moore, J. l~¢~l)inson att,I S. Rosenschein, 1982: "DIALOGIC: a core natnral-hmgu;H~e processing system," Proceedings of the Ninth International Co,fercnce on Computational Linguis. tics, Prague, Czeehoslavakia (July), pp. 95-100. Kaplan, R. and J. Bresnan, 1983: "LexlcaI-Functionai Gram- mar: A Formal System for Grammatical Representation," in J. 13resnan (ed.), The mental representation of grammat- ical rclati, rr~ (ed.), (:ambridge: MIT Press. Karttunen, I.., 1981: "Features and Values, ~ Proceedings of the Tenth Inter,atiomd Conference on Computational Lin. guistics, Stanford Universil.y, Stanford California (4-7 July, 1984). Karttuneu, L., 1983: "NIMMO: a general morphological proces- sor," Texas Lingui.~tic Forum, Volume 22 (December), pp. 161-185. Kay, M., 1979: "Functional C',rammar," in Proceedings of the Fifth Annttal Meeting of the Berkeley Linguistics Society, Berkeley, California (17-19 February). Kay, M., 1983: "linifieation Grammar," unpublished memo, Xe- rox Pale Alto Research Center. Koskennicmi, 1<., 198.q: "A Two level Model for Morphologi- cal Analysis and Synthesis," forthcoming Ph.D. dissertation, University of Ilclsinki, llelsinki, Finland. Pereira, F. and D.II.D. Warren, 1983: "Parsing as Deduction," in Proceedings of the elst .4nn~tal Meeting of the Association for Computath, n~d l.ing,istics 115-17 June), pp. 137-144. Pereira, F. and S. $hi~,ber, 1984: "The Semantics of Grammar Formalisms Seen ~.s Comlmter Languages," Proceedings of the Te~,th International Conference on Computational Lin. guistics, Stanford University, Stanford California (4-7 July, 1980. Reynolds, J., 1970: "Transformational Systems and the Alge- braic Structure of Atomic Formulas," in D. Miehie (ed.), Machine Intelligence, Vol. 5, Chapter 7, Edinburgh, Scot- land: Edinburgh University Press, pp. 135-151. Scott, D., 1982: "Domains for Denotationai Semantics," ICALP '82, Aarhus, Denmark (July). Shieber, S., H. Uszkoreit, F. Percira, J. Robinson, and M. Tyson, 1983: "The Formalism a.lld Implementation of PATI~.-[I," in B. Grosz and M. Stickel, Research on Interactive Acquisi- tion and Use of Knowledge, SRI Final Report 1894, SRI International, Menlo Park, California (November). Winograd, T., 1972: Understanding Natural Lattyuage, New York, New York: Academic Press. Woods, W., 1970: "Transition Network Grammars for Natural Language Analysis," Communications of the A CM, Vol. 13, No. 10 (October). 366
1984
75
Discourse Structures for Text Generation

William C. Mann
USC/Information Sciences Institute
4676 Admiralty Way
Marina del Rey, CA 90292-6695

Abstract

Text generation programs need to be designed around a theory of text organization. This paper introduces Rhetorical Structure Theory, a theory of text structure in which each region of text has a central nuclear part and a number of satellites related to it. A natural text is analyzed as an example, the mechanisms of the theory are identified, and their formalization is discussed. In a comparison, Rhetorical Structure Theory is found to be more comprehensive and more informative about text function than the text organization parts of previous text generation systems.

1. The Text Organization Problem

Text generation is already established as a research area within computational linguistics. Although so far there have been only a few research computer programs that can generate text in a technically interesting way, text generation is recognized as having problems and accomplishments that are distinct from those of the rest of computational linguistics. Text generation involves creation of multisentential text without any direct use of people's linguistic skills; it is not computer-aided text creation. Text planning is a major activity within text generation, one that strongly influences the effectiveness of generated text. Among the things that have been taken to be part of text planning, this paper focuses on just one: text organization.

People commonly recognize that well-written text is organized, and that it succeeds partly by exhibiting its organization to the reader. Computer generated text must be organized. To create text generators, we must first have a suitable theory of text organization.

This research was supported by the Air Force Office of Scientific Research contract No. F49620-79-C-0181. The views and conclusions contained in this document are those of the author and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Office of Scientific Research of the U.S. Government.

In order to be most useful in computational linguistics, we want a theory of text organization to have these attributes:

1. comprehensiveness: applicable to every kind of text;
2. functionality: informative in terms of how text achieves its effects for the writer;
3. scale insensitivity: applicable to every size of text, and capable of describing all of the various sized units of text organization that occur;
4. definiteness: susceptible to formalization and programming;
5. generativity: capable of use in text construction as well as text description.

Unfortunately, no such theory exists. Our approach to creating such a theory is described below, and then compared with previous work on text generation in Section 3.

2. Rhetorical Structure Theory

Creating a comprehensive theory of text organization is necessarily a very complex effort. In order to limit the immediate complexity of the task we have concentrated first on creating a descriptive theory, one which fits naturally occurring text. In the future the descriptive theory will be augmented in order to create a constructive theory, one which can be implemented for text generation. The term Rhetorical Structure Theory (RST) refers to the combination of the descriptive and constructive parts.
An organized text is one which is composed of discernible parts, with the parts arranged in a particular way and connected together to form a whole. Therefore a theory of text organization must tell at least:

1. What kinds of parts are there?
2. How can parts be arranged?
3. How can parts be connected together to form a whole text?

In RST we specify all of these jointly, identifying the organizational resources available to the writer.

2.1. Descriptive Rhetorical Structure Theory1

What are the organizational resources available to the writer? Here we present the mechanisms and character of rhetorical structure theory by showing how we have applied it to a particular natural text. As each new construct is introduced in the example, its abstract content is described. Our illustrative text is shown in Figure 2-1.2,3 In the figure, we have divided the running text into numbered clause-like units.4

At the highest level, the text is a request addressed to CCC members to vote against making the nuclear freeze initiative (NFI) one of the issues about which CCC actively lobbies and promotes a position. The structure of the text at this level consists of two parts: the request (clause 13) and the material put forth to support the request (clauses 1 through 12).

2.1.1. The Request Schema --- 1-12; 13

To represent the highest level of structure, we use the Request schema shown in Figure 2-2. The Request schema is one of about 25 schemas in the current version of RST. Each schema indicates how a particular unit of text structure is decomposed into other units. Such units are called spans. Spans are further differentiated into text spans and conceptual spans, text spans denoting the portion of explicit text being described, and conceptual spans denoting clusters of propositions concerning the subject matter (and sometimes the process of expressing it) being expressed by the text span.

1The descriptive portion of rhetorical structure theory has been developed over the past two years by Sandra Thompson and me, with major contributions by Christian Matthiessen and Barbara Fox. They have also given helpful reactions to a previous draft of this paper.
2Quoted (with permission) from The Insider, California Common Cause state newsletter, 2.1, July 1982.
3We expect the generation of this sort of text to eventually become very important in Artificial Intelligence, because systems will have to establish the acceptability of their conclusions on heuristic grounds. AI systems will have to establish their credibility by arguing for it in English.
4Although we have not used technically-defined clauses as units, the character of the theory is not affected. The decision concerning what will be the finest-grain unit of description is rather arbitrary; here it is set by a preliminary syntax-oriented manual process which identifies low-level, relatively independent units to use in the discourse analysis. One reason for picking such units is that we intend to build a text generator in which most smaller units are organized by a programmed grammar [Mann & Matthiessen 83].

1. I don't believe that endorsing the Nuclear Freeze Initiative is the right step for California CC.
2. Tempting as it may be,
3. we shouldn't embrace every popular issue that comes along.
4. When we do so
5. we use precious, limited resources where other players with superior resources are already doing an adequate job.
6. Rather, I think we will be stronger and more effective
7. if we stick to those issues of governmental structure and process, broadly defined, that have formed the core of our agenda for years.
8. Open government, campaign finance reform, and fighting the influence of special interests and big money, these are our kinds of issues.
9. (New paragraph) Let's be clear:
10. I personally favor the initiative and ardently support disarmament negotiations to reduce the risk of war.
11. But I don't think endorsing a specific nuclear freeze proposal is appropriate for CCC.
12. We should limit our involvement in defense and weaponry to matters of process, such as exposing the weapons industry's influence on the political process.
13. Therefore, I urge you to vote against a CCC endorsement of the nuclear freeze initiative.

(signed) Michael Asimow, California Common Cause Vice-Chair and UCLA Law Professor

Figure 2-1: A text which urges an action

Each schema diagram has a vertical line indicating that one particular part is nuclear. The nuclear part is the one whose function most nearly represents the function of the text span analyzed in the structure by using the schema. In the example, clause 13 ("Therefore, I urge you to vote against a CCC endorsement of the nuclear freeze initiative.") is nuclear. It is a request. If it could plausibly have been successful by itself, something like clause 13 (without "Therefore") might have been used instead of the entire text. However, in this case, the writer did not expect that much to be enough, so some additional support was added.
if we stick to those issues of governmental structure and process, broadly defined, that have formed the core of our agenda for years. 8. Open government, campaign finance reform, and fighting the influence of special interests and big money, these are our kinds of issues. 9. (New paragraph) Let's be clear: 10. I personally favor the initiative and ardently support disarmament negotiations to reduce the risk of war. 11. But I don't think endorsing a specific nuclear freeze proposal is appropriate fol: CCC. 12. We should limit our involvement in defense and weaponry to matters of process, such as exposing the weapons industry's influence on the political process. 13. Therefore, I urge you to vote against a CCC endorsement of the nuclear freeze initiative. (signed) Michael Asimow, California Common Cause Vice-Chair and UCLA Law Professor Figure 2.1 : A text which urges an action Each schema diagram has a vertical line indicating that one particular part is nuclear. The nuclear part is the one whose function most nearly represents the function of the text span analyzed in the structure by using the schema. In the example, clause 13 ("Therefore, I urge you to vote against a CCC endorsement of the nuclear freeze initiative.") is nuclear. It is a request. If it could plausibly have been successful by itself, something like clause 13 (without "Therefore") might have been used instead of the entire text. However, in this case, the writer did not expect that much to be enough, so some additional support was added. 368 Request ~/~~e~ablement Evidence Figure 2-2: The Request and Evidence schemas The support, clauses 1 through 12, plays a satellite role in this application of the Request schema. Here, as in most cases, satellite text is used to make it more likely that the nuclear text will succeed. In this example, the writer is arguing that the requested action is right for the organization. In Figure 2-2 the nucleus is connected to each satellite by a relation. In the text clause 13 is related to clauses 1 through 12 by a motivation relation. Clauses 1 through 12 are being used to motivate the reader to perform the action put forth in clause 13. The relations relate the conceptual span of a nucleus with the conceptual span of a satellite. Since, in s text structure, each conceptual span corresponds to a text span, the relations may be more loosely spoken of as relating text spans as well. The ReQuest schema also contains an eneblement relation. Text in an "enablement" relation to the nucleus conveys information (such as a password or telephone number) that makes the reader able to perform the requested action. In this example the option is not taken of having a satellite related to the nucleus by an "enablement" relation. One or more schemas may be instsntiated in a text. The pattern of instantiation of schemas in a text is called a text structure. So, for our example text, one part of its text structure says that the text span of the whole text corresponds to an instance of the Request schema, and that in that instance clause 13 is the text span corresponding to the schema nucleus and clauses 1 through 12 are the text span corresponding to a satellite related to the nucleus by a "motivation" relation. In any instance of a schema in a text structure, the nucleus must be present, but all satellites are optional. We s do not instantiate a schema unless it shows some decomposition of its text span, so at least one of the satellites must be present. 
Any of the relations of a schema may be instantiated indefinitely many times, producing indefinitely many satellites. 5Here and below, the knowledgeable person using RST to describe a text. The schemas do not restrict the order of textual elements. There is a usual order, the one which is most frequent when the schema is used to describe a large text span; schemas are drawn with this order in the figures describing them apart from their instantiation in text structure. However, any order is allowed. 2.1.2. The Evidence Schema ... 1; 2-8; 9-12 At the second level of decomposition each of the two text spans of the first level must be accounted for. The final text span, clause 13, is a single unit. For more detailed description a suitable grammar (and other companion theories) could be employed at this point. The initial span, clauses 1 through 12, consists of three parts: an assertion of a particular claim, clause 1, and two arguments supporting that claim, clauses 2 through 8 and 9 through 12. The claim says that it would not be right for CCC to endorse the nuclear freeze initiative (NFI). The first argument is about how to allocate CCC's resources, and the second argument is about the categories of issues that CCC is best able to address. To represent this argument structure we use the Evidence schema, shown in Figure 2-2. Conceptual spans in an evidence relation stand as evidence that the conceptual span of the nucleus is correct. Note that the Evidence schema could not have been instantiated in place of the Request schema as the most comprehensive structure of the text, because clause 13 urges an action rather than supporting credibility. The "motivation" relation and the "evidence" relation restrict the nucleus in different ways, and thus provide application conditions on the schemas. The relations are perhaps the most restrictive source of conditions on how the schemas may apply. In addition, there are other application conventions for the schema, described in Section 2.2.3. The top two levels of structure of the text, the portion analyzed so far, are shown in Figure 2-3. The entire structure is shown in Figure 2-5. 369 Rcqunl Ev~enct 1 2 3 4 5 6 7 8 9 10 tt 12 13 Figu re 2-3: The upper structure of the CCC text At each level of structure it is possible to trace down the chain of nuclei to find a single clause which is representative of the entire level. Thus the representative of the whole text is clause 13 (about voting), the representative of the first argument is clause 6 (about being stronger and more effective), and the representative of the second argument is clause 12 (about limiting involvement to process issues). 2.1.3. The Thesis/Antithesis Schema --- 2-5; 6-8 The first argument is organized contrastively, in terms of one collection of ideas which the writer does not identify with, and a second collection of ideas which the writer does identify with. The first collection involves choosing issues on the basis of their popularity, a method which the writer opposes. The second collection concerns choosing issues of the kinds which have been successfully approached in the past, a method which the writer supports. To account for this pattern we use the Thesis/Antithesis schema shown in Figure 2.4. The ideas the writer is rejecting, clauses 2 through 5, are connected to the nucleus (clauses 6 through 8) by a Thesis/Antithesis relation, which requires that the respective sections be in contrast and that the writer identify or not identify with them appropriately. 
Notice that in our instantiations of the Evidence schema and the Thesis/Antithesis schema, the roles of the nuclei relative to the satellites are similar: Under favorable conditions, the satellites would not be needed, but under the conditions as the author conceives them, the satellites increase the likelihood that the nucleus will succeed. The assertion of clause 1 is more likely to succeed because the evidence is present; the antithesis idea is made clearer and more appealing by rejecting the competing thesis idea. The Evidence schema is different from the Thesis/Antithesis schema because evidence and theses provide different kinds of support for assertions.

2.1.4. The Evidence Schema --- 2-3; 4-5 6

In RST, schemas are recursive. So, the Evidence schema can be instantiated to account for a text span identified by any schema, including the Evidence schema itself. This text illustrates this recursive character only twice, but mutual inclusion of schemas is actually used very frequently in general. It is the recursiveness of schemas which makes RST applicable at a wide range of scales, and which also allows it to describe structural units at a full range of sizes within a text.7

Clauses 2 and 3 make a statement about popular causes (centrally, that "we shouldn't embrace every popular issue that comes along"). Clauses 4 and 5 provide evidence that we shouldn't embrace them, in the form of an argument about effective use of resources. The Evidence schema shown in Figure 2-2 has thus been used again, this time with only one satellite.

6Except for single-clause text spans, the structure of the text is presented depth-first, left to right, and shown in Figure 2-5.
7This contrasts with some approaches to text structure which do not provide structure between the whole-text level and the clause level. Stories, problem-solution texts, advertisements, and interactive discourse have been analyzed in that way.

2.1.5. The Concessive Schema --- 2; 3

Clause 2 suggests that embracing every popular issue is tempting (and thus both attractive and defective). The attractiveness of the move is acknowledged in the notion of a popular issue. Clause 3 identifies the defect: resources are used badly. The corresponding schema is the Concessive schema, shown in Figure 2-4.

Figure 2-4: Five other schemas (Thesis/Antithesis, Concessive, Inform, Justify, Conditional)

Figure 2-5: The full rhetorical structure of the CCC text

The concession relation relates the conceded conceptual span to the conceptual span which the writer is emphasizing. The "concession" relation differs from the "thesis/antithesis" relation in acknowledging the conceptual span of the satellite. The strategy for using a concessive is to acknowledge some potential detraction or refutation of the point to be made. By accepting it, it is seen as not contradictory with other beliefs held in the same context, and thus not a real refutation of the main point. Concessive structures are abundant in text that argues points which the writer sees as unpopular or in conflict with the audience's strongly held beliefs. In this text (which has two Concessive structures), we can infer that the writer believes that his audience strongly supports the NFI.
2.1.6. The Conditional Schema --- 4; 5

Clauses 4 and 5 present a consequence of embracing "every popular issue that comes along." Clause 4 ("when we do so") presents a condition, and clause 5 a result (use of resources) that occurs specifically under that condition. To express this, we use the Conditional schema shown in Figure 2-4. The condition is related to the nuclear part by a condition relation, which carries the appropriate application restrictions to maintain the conditionality of the schema.

2.1.7. The Inform Schema --- 6-7; 8

The central assertion of the first argument, in clauses 6 through 8, is that CCC can be stronger and more effective under the condition that it sticks to certain kinds of issues (implicitly excluding NFI). This assertion is then elaborated by exemplifying the kinds of issues meant. This presentation is described by applying the Inform schema shown in Figure 2-4. The central assertion is nuclear, and the detailed identification of kinds of issues is related to it by an elaboration relation. The option of having a span in the instantiation of the Inform schema related to the nucleus by a background relation is not taken.

This text is anomalous among expository texts in not making much use of the Inform schema.8 It is widely used, in part because it carries the "elaboration" relation. The "elaboration" relation is particularly versatile. It supplements the nuclear statement with various kinds of detail, including relationships of:

1. set:member
2. abstraction:instance
3. whole:part
4. process:step
5. object:attribute

8It is also anomalous in another way: the widely used pattern of presenting a problem and its solution does not occur in this text.

2.1.8. The Conditional Schema --- 6; 7

This second use of the Conditional schema is unusual principally because the condition (clause 7) is expressed after the consequence (clause 6). This may make the consequence more prominent or make it seem less uncertain.

2.1.9. The Justify Schema --- 9; 10-12

The writer has argued his case to a conclusion, and now wants to argue for this unpopular conclusion again. To gain acceptance for this tactic, and perhaps to show that a second argument is beginning, he says "Let's be clear." This is an instance of the Justify schema, shown in Figure 2-4. Here the satellite is attempting to make acceptable the act of expressing the nuclear conceptual span.

2.1.10. The Concessive Schema --- 10; 11-12

The writer again employs the Concessive schema, this time to show that favoring the NFI is consistent with voting against having CCC endorse it. In clause 10, the writer concedes that he personally favors the NFI.

2.1.11. The Thesis/Antithesis Schema --- 11; 12

The writer states his position by contrasting two actions: CCC endorsing the NFI, which he does not approve, and CCC acting on matters of process, which he does approve.

2.2. The Mechanisms of Descriptive RST

In the preceding example we have seen how rhetorical schemas can be used to describe text. This section describes the three basic mechanisms of descriptive RST which have been exemplified above:

1. Schemas
2. Relation Definitions
3. Schema Application Conventions

2.2.1. Schemas

A schema is defined entirely by identifying the set of relations which can relate a satellite to the nucleus.

2.2.2. Relation Definitions

A relation is defined by specifying three kinds of information:

1. A characterization of the nucleus,
2. A characterization of the satellite,
In addition, the relations are heavily involved in implicit communication; if this aspect is to be described, the relation definition must be extended accordingly. This aspect is outside of the scope of this paper but is discussed at length in [Mann & Thompson 83].

So, for example, to define the "motivation" relation, we would include at least the following material:

1. The nucleus is an action performable but not yet performed by the reader.
2. The satellite describes the action, the situation in which the action takes place, or the result of the action, in ways which help the reader to associate value assessments with the action.
3. The value assessments are positive (to lead the reader to want to perform the action).

2.2.3. Schema Application Conventions

Most of the schema application conventions have already been mentioned:

1. One schema is instantiated to describe the entire text.
2. Schemas are instantiated to describe the text spans produced in instantiating other schemas.
3. The schemas do not constrain the order of nucleus or satellites in the text span in which the schema is instantiated.
4. All satellites are optional.
5. At least one satellite must occur.
6. A relation which is part of a schema may be instantiated indefinitely many times in the instantiation of that schema.
7. The nucleus and satellites do not necessarily correspond to a single uninterrupted text span.

Of course, there are strong patterns in the use of schemas in text: relations tend to be used just once, nucleus and satellites tend to occur in certain orders, and schemas tend to be used on uninterrupted spans of text.

The theory currently contains about 25 schemas and 30 relations. 10 We have applied it to a diverse collection of approximately 100 short natural texts, including administrative memos, advertisements, personal letters, newspaper articles, and magazine articles. These analyses have identified the usual patterns of schema use, along with many interesting exceptions.

The theory is currently informal. Applying it requires making judgments about the applicability of the relations, e.g., what counts as evidence or as an attempt to motivate or justify some action. These are complex judgments, not easily formalized. In its informal form the theory is still quite useful as a part of a linguistic approach to discourse. We do not expect to formalize it before going on to create a constructive theory. (Of course, since the constructive theory specifies text construction rather than describing natural texts, it need not depend on human judgements in the same way that the descriptive theory does.)

8It is also anomalous in another way: the widely used pattern of presenting a problem and its solution does not occur in this text.

9All of these characterizations must be made properly relative to the writer's viewpoint and knowledge.

10In this paper we do not separate the theory into framework and schemas, although for other purposes there is a clear advantage and possibility of doing so.
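To make these mechanisms concrete, the "motivation" definition above and the application conventions can be rendered roughly as data plus a check. This is only an illustrative sketch (Python); the predicates stand in for the complex human judgments just noted, and every name here is our own:

# Sketch: a relation as three characterizations over conceptual spans.
MOTIVATION = {
    "nucleus":     lambda n: n.get("reader_action", False)
                             and not n.get("performed", False),
    "satellite":   lambda s: s.get("describes") in ("action", "situation", "result"),
    "interaction": lambda n, s: s.get("value_assessment") == "positive",
}

# Sketch: a schema is just the set of relations that may link a
# satellite to its nucleus (Section 2.2.1); names are illustrative.
REQUEST_SCHEMA = {"motivation", "enablement"}

def instantiation_ok(schema, satellites):
    # convention 5: at least one satellite must occur; and every
    # satellite must be related by a relation the schema contains
    return len(satellites) >= 1 and all(rel in schema for rel, _ in satellites)

print(instantiation_ok(REQUEST_SCHEMA, [("motivation", {})]))   # True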
2.3. Assessing Descriptive RST

The most basic requirement on descriptive RST is that it be capable of describing the discernible organizational properties of natural texts, i.e., that it be a theory of discourse organization. The example above and our analyses of other texts have satisfied us that this is the case. 11

11In another paper, we have shown that implicit communication arises from the use of the relations, that this communication is specific to each relation, and that as linguistic phenomena the relations and their implicit communication are not accounted for by particular existing discourse theories [Mann & Thompson 83].

In addition, we want the theory to have the attributes mentioned in Section 1. Of these, descriptive RST already satisfies the first three to a significant degree:

1. comprehensiveness: It has fit many different kinds of text, and has not failed to fit any kind of non-literary monologue we have tried to analyze.
2. functionality: By means of the relation definitions, the theory says a great deal about what the text is doing for the writer (motivating, providing evidence, etc.).
3. scale insensitivity: The recursiveness of schemas allows us to posit structural units at many scales between the clause and the whole text. Analysis of complete magazine articles indicates that the theory scales up well from the smaller texts on which it was originally developed.

We see no immediate possibility of formalizing and programming the descriptive theory to create a programmed text analyzer. To do so would require reconciling it with mutually compatible formal theories of speech acts, lexical semantics, grammar, human inference, and social relationships, a collection which does not yet exist. Fortunately, however, this does not impede the development of a constructive version of RST for text generation.

2.4. Developing a Constructive RST

Why do we expect to be able to augment RST so that it is a formalizable and programmable theoretical framework for generating text? Text appears as it does because of intentional activity by the writer. It exists to serve the writer's purposes. Many of the linguistic resources of natural languages are associated with particular kinds of purposes which they serve: questions for obtaining information, marked syntactic constructions for creating emphasis, and so forth. At the schema level as well, it is easy to associate particular schemas with the effects that they tend to produce: the Request schema for inducing actions, the Evidence schema for making claims credible, the Inform schema for causing the reader to know particular information, and so forth. Our knowledge of language in general and rhetorical structures in particular can be organized around the kinds of human goals that the linguistic resources tend to advance.

The mechanisms of RST can thus be described within a more general theory of action, one which recognizes means and ends. Text generation can be treated as a variety of goal pursuit. Schemas are a kind of means, their effects are a kind of ends, and the restrictions created by the use of particular relations are a kind of precondition to using a particular means. Goal pursuit methods are well precedented in artificial intelligence, in both linguistic and nonlinguistic domains [Appelt 81, Allen 78, Cohen 78, Cohen & Perrault 77, Perrault & Cohen 78, Cohen & Perrault 79, Newell & Simon 72]. We expect to be able to create the constructive part of RST by mapping the existing part of RST onto AI goal pursuit methods. In particular computational domains, it is often easy to locate formal correlates for the notions of evidence, elaboration, condition, and so forth, that are expressed in rhetorical structure; the problem of formalization is not necessarily hard.
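The mapping onto goal pursuit can be pictured in standard AI planning terms: each schema becomes an operator whose effect is the goal it tends to advance and whose relation restrictions become preconditions. The sketch below is ours (Python); no implemented constructive RST is being described, and the operator table is illustrative:

# Sketch: schemas as means-ends operators for goal-based text planning.
OPERATORS = {
    "Request":  {"effect": "reader-performs-action",
                 "preconditions": ["action-performable", "value-positive"]},
    "Evidence": {"effect": "reader-believes-claim",
                 "preconditions": ["support-available"]},
    "Inform":   {"effect": "reader-knows-information",
                 "preconditions": []},
}

def candidate_schemas(goal, facts):
    # goal-based selection: an operator is usable when its effect matches
    # the goal and all of its preconditions hold
    return [name for name, op in OPERATORS.items()
            if op["effect"] == goal and all(p in facts for p in op["preconditions"])]

print(candidate_schemas("reader-believes-claim", {"support-available"}))
# -> ['Evidence']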
At another level, we have some experience in using RST informally as a writer's guide. This paper and others have been written by first designing their rhetorical structure in response to stated goals. For this kind of construction, the theory seems to facilitate rather than impede creating the text.

3. Comparing RST to Other Text Generation Research

Given the mechanisms and example above, we can compare RST to other computational linguistic work on text generation. 12 The most relevant and well known efforts are by Appelt (the KAMP system [Appelt 81]), Davey (the PROTEUS system [Davey 79]), Mann and Moore (the KDS system [Mann & Moore 80, Mann & Moore 81]), McDonald (the MUMBLE system [McDonald 80]) and McKeown (the TEXT system [McKeown 82]).

All of these are informative in other areas but, except for McKeown, they say very little about text organization. Appelt acknowledges the need for a discourse component, but his system operates only at the level of single utterances. Davey's excellent system uses a simple fixed narrative text organization for describing tic-tac-toe games: moves are described in the sequence in which they occurred, and opportunities not taken are described just before the actual move which occurred instead. Mann and Moore's KDS system organizes the text, but only at the whole-text and single-utterance levels. It has no recursion in text structure, and no notion of text structure components which themselves have text structure. McDonald took as his target what he called "immediate mode," attempting to simulate spontaneous unplanned speech. His system thus represents a speaker who continually works to identify something useful to say next, and having said it, recycles. It operates without following any particular theory of text structure and without trying to solve a text organization problem.

McKeown's TEXT system is the only one of this collection that has any hint of a scale-insensitive view of text structure. It has four programmed "schemas" (limited to four mainly by the computational environment and task). Schemas are defined in terms of a sequence of text regions, each of which satisfies a particular "rhetorical predicate." The sequence notation specifies optionality, repeatability, and allowable alternations separately for each sequence element. Recursion is provided by associating schemas with particular predicates and allowing segments of text satisfying those predicates to be expressed using entire schemas. Since there are many more predicates than schemas, the system as a whole is only partially recursive.

McKeown's approach differs from RST in several ways: McKeown's schemas are ordered, those of RST unordered. Repetition and optionality are specified locally; in RST they are specified by a general convention. McKeown's schemas do not have a notion of a nuclear element. McKeown has no direct correlate of the RST relation. Some schema elements are implicitly relational (e.g., an "attributive" element must express an attribute of something, but that thing is not located as a schema element). The difference is reduced by McKeown's direct incorporation of "focus."

12Relating RST to the relevant linguistic literature is partly done in [Mann & Thompson 83], and is outside the scope of this paper. However, we have been particularly influenced by Grimes [Grimes 75], Hobbs [Hobbs 76], and the work of McKeown discussed below.
The presence of nuclear elements in RST and its diverse collection of schemas make it more informative about the functioning of the texts it describes. Its relations make the connectivity of the text more explicit and contribute strongly to an account of implicit communication. Beyond these differences, McKeown's schemas give the impression of defining a more finely divided set of distinctions over a narrower range. The four schemas of TEXT seem to cover a range included within that of the RST Inform schema, which relies strongly on its five variants of the "elaboration" relation. Thus RST is more comprehensive, but possibly coarser-grained in providing varieties of description.

Our role for text organization is also different from McKeown's. In the TEXT system, the text was organized by a schema-controlled search over things that are permissible to say. In constructive RST, text will be organized by goal pursuit, i.e., by goal-based selection. For McKeown's task the difference might not have been important, but the theoretical differences are large. They project very different roles for the writer, and very different top-level general statements about the nature of text.

Relative to all of these prior efforts, RST offers a more comprehensive basis for text organization. Its treatment of order, optionality, organization around a nucleus, and the relations between parts are all distinct from previous text generation work, and all appear to have advantages.

4. Summary

A text generation process must be designed around a theory of text organization. Most of the prior computational linguistic work offers very little content for such a theory. In this paper we have described a new theoretical approach to text organization, one which is more comprehensive than previous approaches. It identifies particular structures with particular ways in which the text writer is served. The existing descriptive version of the theory appears to be directly extendible for use in text construction.

References

[Allen 78] Allen, J., Recognizing Intention in Dialogue, Ph.D. thesis, University of Toronto, 1978.

[Appelt 81] Appelt, D., Planning natural language utterances to satisfy multiple goals. Forthcoming Ph.D. thesis, Stanford University.

[Cohen 78] Cohen, P. R., On Knowing What to Say: Planning Speech Acts, University of Toronto, Technical Report 118, 1978.

[Cohen & Perrault 77] Cohen, P. R., and C. R. Perrault, "Overview of 'planning speech acts'," in Proceedings of the Fifth International Joint Conference on Artificial Intelligence, Massachusetts Institute of Technology, August 1977.

[Cohen & Perrault 79] Cohen, P. R., and C. R. Perrault, "Elements of a plan-based theory of speech acts," Cognitive Science 3, 1979.

[Davey 79] Davey, A., Discourse Production, Edinburgh University Press, Edinburgh, 1979.

[Grimes 75] Grimes, J. E., The Thread of Discourse, Mouton, The Hague, 1975.

[Hobbs 76] Hobbs, J., A Computational Approach to Discourse Analysis, Department of Computer Science, City College, City University of New York, Technical Report 76-2, December 1976.

[Mann & Matthiessen 83] Mann, W. C., and C. M. I. M. Matthiessen, Nigel: A Systemic Grammar for Text Generation, USC/Information Sciences Institute, RR-83-105, February 1983. The papers in this report will also appear in a forthcoming volume of the Advances in Discourse Processes Series, R. Freedle (ed.): Systemic Perspectives on Discourse: Selected Theoretical Papers from the 9th International Systemic Workshop, to be published by Ablex.
[Mann & Moore 80] Mann, W. C., and J. A. Moore, Computer as Author--Results and Prospects, USC/Information Sciences Institute, RR-79-82, 1980.

[Mann & Moore 81] Mann, W. C., and J. A. Moore, "Computer generation of multiparagraph English text," American Journal of Computational Linguistics 7, (1), January - March 1981.

[Mann & Thompson 83] Mann, W. C., and S. A. Thompson, Relational Propositions in Discourse, USC/Information Sciences Institute, Marina del Rey, CA 90291, Technical Report RR-83-115, July 1983.

[McDonald 80] McDonald, David D., Natural Language Production as a Process of Decision-making under Constraints, Ph.D. thesis, MIT, Cambridge, Mass., November 1980.

[McKeown 82] McKeown, K. R., Generating Natural Language Text in Response to Questions about Database Structure, Ph.D. thesis, University of Pennsylvania, 1982.

[Newell & Simon 72] Newell, A., and H. A. Simon, Human Problem Solving, Prentice-Hall, Englewood Cliffs, N.J., 1972.

[Perrault & Cohen 78] Perrault, C. R., and P. R. Cohen, Planning Speech Acts, University of Toronto, Department of Computer Science, Technical Report, 1978.
Semantic Rule Based Text Generation

Michael L. Mauldin
Department of Computer Science
Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213 USA

ABSTRACT

This paper presents a semantically oriented, rule based method for single sentence text generation and discusses its implementation in the Kafka generator. This generator is part of the XCALIBUR natural language interface developed at CMU to provide natural language facilities for a wide range of expert systems and data bases. Kafka takes as input the knowledge representation used in the XCALIBUR system and incrementally transforms it first into conceptual dependency graphs and then into English. 1

1. Introduction

Transformational text generators have traditionally emphasized syntactic processing. One example is Bates' ILIAD system, which is based on Chomsky's theory of transformational generative grammar [1]. Another is Mann's Nigel program, based on the systemic grammar of Halliday [4]. In contrast, other text generators have emphasized semantic processing of text, most notably those systems based on case grammar such as Goldman's BABEL generator [7] and Swartout's GIST [9]. Kafka combines elements of both paradigms in the generation of English text.

Kafka is a rule based English sentence generator used in the XCALIBUR natural language interface. Kafka uses a transformational rule interpreter written in Franz Lisp. These transformations are used to convert the XCALIBUR knowledge representation to conceptual dependency graphs and then into English text. Kafka includes confirmational information in the generated text, providing sufficient redundancy for the user to ascertain whether his query/command was correctly understood.

The goal of this work has been to simplify the text generation process by providing a single computational formalism of sufficient power to implement both semantic and syntactic processing. A prototype system has been written which demonstrates the feasibility of this approach to single sentence text generation.

1This research is part of the XCALIBUR project at CMU. Digital Equipment Corporation has funded this project as part of its artificial intelligence program. XCALIBUR was initially based on software developed at CMU. Members of the XCALIBUR team include: Jaime Carbonell, Mark Boggs, Michael Mauldin, Peter Anick, Robert Frederking, Ira Monarch, Steve Morrisson and Scott Sailer.

2. The XCALIBUR Natural Language Interface

XCALIBUR is a natural language interface to expert systems. It is primarily a front-end for the XCON/XSEL expert systems developed by John McDermott [5]. XCALIBUR supports mixed-initiative dialogs which allow the user to issue commands, request data, and answer system queries in any desired order. XCALIBUR correctly understands some forms of ellipsis, and incorporates expectation directed spelling correction as error recovery steps to allow the processing of non-grammatical user input. Figure 2-1 shows the gross structure of the XCALIBUR interface. Figure 2-2 shows some typical user queries and the corresponding responses from the generator. 2 More details about XCALIBUR can be found in [2].

[Figure 2-1: The XCALIBUR Natural Language Interface.]
3. Declarative Knowledge Representation

The XCALIBUR system uses a case-frame based interlingua for communication between the components. To provide as nearly canonical a representation as possible, the semantic information in each case-frame is used to determine the major structure of the tree, and any syntactic information is stored at the leaves of the semantic tree. The resulting case-frame can be converted into a canonical representation by merely deleting the leaves of the tree. The canonical representation is very useful for handling ellipsis, since phrases such as "dual port disk" and "disk with two ports" are represented identically. Figure 3-1 shows a sample query and its representation with purely syntactic information removed.

2These responses include confirmational text to assure the user that his query has been understood. Without this requirement, these sentences would have been rendered using anaphora, resulting in It costs 38000 dollars, or even just 38000 dollars. See section 5.3.

+ What is the largest 11780 fixed disk under $40,000?
The rp07-aa is a 516 MB fixed pack disk that costs 38000 dollars.

+ Tell me about the lxy11.
The lxy11 is a 240 l/m line printer with plotting capabilities.

+ Tell me all about the largest dual port disk with removable pack.
The rm05-ba is a 39140 dollar 256 MB dual port disk with removable pack, 1200 KB peak transfer rate and 38 ms access time.

+ What is the price of the largest single port disk?
The 176 MB single port rp06-aa costs 34000 dollars.

Figure 2-2: Sample Queries and Responses

+ What is the price of the two largest single port disks?

(*clause
  (head (*factual-query))
  (destination (*default))
  (object (*nominal
            (head (price))
            (of (*nominal
                  (head (disk))
                  (ports (value (1)))
                  (size (value (*descending))
                        (range-high (1))
                        (range-low (2))
                        (range-origin (*absolute)))
                  (determiner (*def))))
            (determiner (*def))))
  (level (*main))
  (verb (*conjugation
          (root (be))
          (mode (*interrogative))
          (tense (*present))
          (number (*singular)))))

Figure 3-1: A Sample Case-frame

4. The Kafka Generator

Kafka is used to build replies to user queries, to paraphrase the user's input for clarificational dialogs, and to generate the system's queries for the user. Figure 4-1 shows the major components and data flow of the Kafka generator. Either one or two inputs are provided: (1) a case frame in the XCALIBUR format, and (2) a set of tuples from the information broker (such as might be returned from a relational database). Either of these may be omitted. Four of the seven major components of Kafka use the transformational formalism, and are shown in bold outlines.

[Figure 4-1: Data flow in the Kafka Generator.]

4.1. Relation to Other Systems

Kafka is a direct descendant of an earlier natural language generator described in [2], which in turn had many components either derived from or inspired by Goldman's BABEL generator [7]. The case frame knowledge representation used in XCALIBUR has much in common with Schank's Conceptual Dependency graphs [8]. The earlier XCALIBUR generator was very much ad hoc, and Kafka is an effort to formalize the processes used in that generator. The main similarity between Kafka and BABEL is in the verb selection process (described in section 5).

The embedded transformational language used by Kafka was inspired by the OPS5 programming language developed by Forgy at CMU [3]. OPS5 was in fact an early candidate for the implementation of Kafka, but OPS5 supports only flat data structures. Since the case frame knowledge representation used in XCALIBUR is highly recursive, an embedded language supporting case frame matches was developed. The Kafka programming language can be viewed as a production system with only a single working memory element and a case frame match rather than the flat match used in OPS5.
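Read as a production system, the interpreter's control structure is a loop over an ordered rule set: match a pattern against the single working-memory element (the current case frame), instantiate the result, and start over. The sketch below is our own reconstruction in Python over nested dictionaries; the real system matches Lisp case frames:

# Sketch: a one-working-memory-element production system over case frames.
def match(pattern, frame, bindings=None):
    # strings beginning with "=" are variables; repeated variables must agree
    bindings = dict(bindings or {})
    if isinstance(pattern, str) and pattern.startswith("="):
        if pattern in bindings and bindings[pattern] != frame:
            return None
        bindings[pattern] = frame
        return bindings
    if isinstance(pattern, dict) and isinstance(frame, dict):
        for slot, sub in pattern.items():
            if slot not in frame:
                return None
            bindings = match(sub, frame[slot], bindings)
            if bindings is None:
                return None
        return bindings
    return bindings if pattern == frame else None

def run(rules, frame):
    # apply the first rule whose pattern matches, then restart; stop
    # when no pattern matches (final flags and error cases omitted)
    changed = True
    while changed:
        changed = False
        for pattern, rewrite in rules:
            b = match(pattern, frame)
            if b is not None:
                frame, changed = rewrite(b), True
                break
    return frame

rules = [({"head": "price", "of": "=x"},
          lambda b: {"head": "cost", "actor": b["=x"]})]
print(run(rules, {"head": "price", "of": {"head": "disk"}}))
# -> {'head': 'cost', 'actor': {'head': 'disk'}}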
4.2. Transformational Rules

Some of the transformation rules in Kafka were derived from the verb selection method of BABEL, and others were derived from TGG rules given in [10]. Although Kafka has been based mainly on the generative grammar theory of Chomsky, the rule syntax allows much more powerful rules than those allowed in either the standard or extended standard theory. We have tried to provide a sufficiently powerful formalism to encode more than one grammatical tradition, and have not restricted our rules to any particular linguistic convention. Our goal has not been to validate any linguistic theory but rather to demonstrate the feasibility of using a single computational mechanism for text generation.

The basic unit of knowledge in XCALIBUR is the case frame. Kafka repeatedly transforms case frames into other case frames until either an error is found, no pattern matches, or a surface level case frame is generated. The surface case frame is converted into English by render, which traverses the case frame according to the sentence plan, printing out lexical items.

A transformation is defined by an ordered set of rules. Each rule has up to four parts:

- A pattern, which is matched against the current node. This match, if successful, usually binds several local variables to the sub-expressions matched.
- A result, which is another case frame with variables at some leaves. These variables are replaced with the values found during the match. This process is called instantiation.
- An optional variable check, the name of a lisp function which takes a binding list as input and returns either nil, which causes the rule to fail, or a new binding list to be used in the instantiation phase. This feature allows representation of functional constraints.
- An optional final flag, indicating that the output from this rule should be returned as the value of the rule set's transformation.

A transformation is applied to a case frame by first recursively matching and instantiating the sub-cases of the expression and then transforming the parent node. Variables match either a single s-expression or a list of them. For example, =HEAD would match either an atom or a list, =*REST would match zero or more s-expressions, and =+OTHER would match one or more s-expressions. If a variable occurs more than once in a pattern, the first binds a value to that variable, and the second and subsequent occurrences must match that binding exactly. A sketch of this variable matching appears below.

This organization is very similar to that of the ILIAD program developed by Bates at BBN [1]. The pattern, result, and variable check correspond to the structural description, structural change, and condition of Bates' transformational rules, with only a minor variation in the semantics of each operation. The ILIAD system, though, is very highly syntax oriented (since the application is teaching English grammar to deaf children) and uses semantic information only in lexical insertion. The rules in Kafka perform both semantic and syntactic operations.
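The variable conventions of Section 4.2 (=X matches one s-expression, =*X zero or more, =+X one or more, and repeated plain variables must agree with their first binding) amount to a small backtracking matcher over lists. The following is an illustrative reconstruction in Python, not Kafka's actual Lisp code:

# Sketch: matching Kafka-style variables against a list of s-expressions.
def match_seq(pattern, items, bindings=None):
    bindings = dict(bindings or {})
    if not pattern:
        return bindings if not items else None
    head, rest = pattern[0], pattern[1:]
    if isinstance(head, str) and head.startswith(("=*", "=+")):
        minimum = 0 if head[1] == "*" else 1      # =* zero or more, =+ one or more
        for take in range(minimum, len(items) + 1):
            trial = dict(bindings)
            trial[head] = items[:take]
            result = match_seq(rest, items[take:], trial)
            if result is not None:
                return result                      # backtracking search
        return None
    if isinstance(head, str) and head.startswith("="):
        if not items:
            return None
        if head in bindings and bindings[head] != items[0]:
            return None                            # repeated variable must agree
        bindings[head] = items[0]
        return match_seq(rest, items[1:], bindings)
    if items and head == items[0]:                 # literal element
        return match_seq(rest, items[1:], bindings)
    return None

print(match_seq(["be", "=HEAD", "=*REST"], ["be", "price", "of", "disk"]))
# -> {'=HEAD': 'price', '=*REST': ['of', 'disk']}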
4.3. A Sample Rule

Figure 4-2 shows a sample rule from the Kafka grammar for the XCALIBUR domain. The rule takes a structure of the form The price of X is FOO and converts it to X costs FOO. More sophisticated rules for verb selection check for semantic agreement between various slot fillers, but this rule merely encodes knowledge about the relationship between the PRICE attribute and the COST verb. Figure 4-3 shows an input structure that this rule would match; Figure 4-4 shows the structure which would be returned.

(R query-to-declare active-voice-cost
   (cd (primitive (be))
       (actor (*nominal (head: (price))
                        (of =x)
                        =*other))
       (object =y)
       =*be-rest)
   =>
   (cd (primitive (cost))
       (actor =x)
       (object =y)
       =*be-rest))

Figure 4-2: Rule 'active-voice-cost'

(cd (primitive (be))
    (actor (*nominal (head: (price))
                     (of (*nominal (ports (1))
                                   (size (*descending (range-low: (1))
                                                      (range-high: (2))
                                                      (range-origin: (*absolute))))
                                   (head: (disk))
                                   (determiner: (*def))))
                     (determiner: (*def))))
    (object (*unknown (head: (price))))
    (destination (*default))
    (level: (*main))
    (tense: (*present))
    (number: (*singular)))

Figure 4-3: Input Case Frame

(cd (primitive (cost))
    (actor (*nominal (ports (1))
                     (size (*descending (range-low: (1))
                                        (range-high: (2))
                                        (range-origin: (*absolute))))
                     (head: (disk))
                     (determiner: (*def))))
    (object (*unknown (head: (price))))
    (destination (*default))
    (level: (*main))
    (tense: (*present))
    (number: (*singular)))

Figure 4-4: Output Case Frame

5. The Generation Process

The first step in the generation process is preprocessing, which removes a lot of unnecessary fields from each case frame. These are mostly syntactic information left by the parser which are not used during the semantic processing of the query. Some complex input forms are converted into simpler forms. This step provides a high degree of insulation between the XCALIBUR system and the text generator, since changes in the XCALIBUR representation can be caught and converted here before any of Kafka's internal rules are affected.

In the second phase (not used when paraphrasing user input) the case frame is converted from a query into a declarative response by filling some slots with (*UNKNOWN) place-holders. Next the relat module replaces these place-holders with information from the back-end (usually data from the XCON static database). The result is a CD graph representing a reply for each item in the user's query, with all the available data filled in.

In the third phase of operation, the verb transform selects an English verb for each underlying CD primitive. Verb selection is very similar to that used by Goldman in his BABEL generator [7], except that BABEL uses hand-coded discrimination nets to select verbs, while Kafka keeps the rules separate. A rule compiler is being written which builds these discrimination nets automatically. The D-nets are used to weed out rules which cannot possibly apply to the current structure. Since the compiler is not yet installed, Kafka uses an interpreter which tries each rule in turn.

After verb selection, the np-instantiation transform provides lexical items for each of the adjectives and nouns present in each CD graph. Finally the order module linearizes the parse tree by choosing an order for the cases and deciding whether they need case markers. The final sentence is produced by the render module, which traverses the parse tree according to the sentence plan produced by order, printing the lexical items from each leaf node.
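The phases just described chain together in a fixed order. Purely as a structural sketch (Python; each stage is a stub standing in for the module of the same name, and relat takes the back-end tuples as its second input):

# Sketch: Kafka's generation phases as a pipeline (Section 5).
def preprocess(frame):         return frame   # strip parser-only fields
def query_to_declare(frame):   return frame   # insert (*UNKNOWN) place-holders
def relat(frame, tuples):      return frame   # fill place-holders from the data
def verb_select(frame):        return frame   # CD primitive -> English verb
def np_instantiate(frame):     return frame   # lexical items for nouns, adjectives
def order(frame):              return frame   # linearize cases; add case markers
def render(frame):             return "..."   # traverse the plan, emit lexical items

def generate(case_frame, tuples):
    frame = preprocess(case_frame)
    frame = query_to_declare(frame)
    frame = relat(frame, tuples)
    frame = verb_select(frame)
    frame = np_instantiate(frame)
    frame = order(frame)
    return render(frame)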
5.1. A Sample Run

The following is a transcript of an actual generation run which required 30 seconds of CPU time on a VAX 11/780 running under Franz Lisp. Most of the system's time is wasted by attempting futile matches during the first part of the match/instantiate loop. The variable parse1 has been set by the parser to the case frame shown in Figure 3-1. The variable data1 is the response from the information broker to the user's query. This transcript shows the Kafka system combining these two inputs to produce a reply for the user including (1) the answer to his direct query and (2) the information used by the information broker to determine that answer.

-> (print data1)
((name class number-of-megabytes ports price)
 ((rp07-aa disk 510 1 38000)))

-> (render-result parse1 data1)
Applying rules for preparse...
Rule 'input-string-deletion' applied...
Rule 'input-string-deletion' applied...
Rule 'position-deletion' applied...
Rule 'property-fronting1' applied...
Rule 'input-string-deletion' applied...
Rule 'position-deletion' applied...
Rule 'property-fronting1' applied...
Rule 'input-string-deletion' applied...
Rule 'modifiers-breakout' applied...
Rule 'modifiers-breakout' applied...
Rule 'modifiers-deletion' applied...
Rule 'input-string-deletion' applied...
Rule 'project' applied...
Rule 'input-string-deletion' applied...
Rule 'cases-breakout' applied...
Rule 'cases-breakout' applied...
Rule 'cases-deletion' applied...
Applying rules for query-to-declare...
Rule 'fact-to-cd' applied...
Rule 'active-voice-cost' applied...
Applying rules for verb-select...
Rule 'cd-cost' applied...
Applying rules for np-instantiate...
Rule 'k-largest' applied...
Rule 'size-deletion' applied...
Rule 'prenominal-megabytes' applied...
Rule 'prenominal-single-port' applied...
Rule 'nominal-price' applied...

(cr (verb (cost))
    (agent (*nominal (head: (disk))
                     (prenominal: (516 MB) (single port))
                     (determiner: (*def))
                     (name (rp07-aa))))
    (object (*nominal (head: (dollar))
                      (determiner: (*generic))
                      (count (38000))))
    (destination (*default))
    (level: (*main))
    (mode: (declarative))
    (voice: (active))
    (tense: (*present))
    (number: (singular))
    (person: (third))
    (subject: (agent))
    (plan: ((unmarked agent) *verb (unmarked object)))
    (verb-conj: (costs)))

And the resulting surface string is:

The 516 MB single port rp07-aa costs 38000 dollars.

5.2. Generating Anaphora

Kafka has minimal capability to generate anaphora. A discourse history is kept by the dialog manager which maps each nominal case frame to a surface noun phrase. Anaphoric references are generated by choosing the shortest noun phrase representing the nominal not already bound to another nominal. Thus the pronoun it could only refer to a single item. Each noun phrase is generated in the output order, so the discourse history can be used to make decisions about anaphoric reference based on what the user has read up to that point. This technique is similar to but less sophisticated than that used by McDonald [6]. Generation of anaphora is inhibited when new information must be displayed to the user, or when confirmational text is to be included.
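The selection rule just described (choose the shortest noun phrase for a nominal that no other nominal in the discourse history could also claim) might look like this; the sketch is Python and the data layout is our own illustration:

# Sketch: pick the shortest surface form not usable for any other nominal.
def surface_form(nominal, history):
    candidates = sorted(history[nominal], key=len)        # shortest first
    for form in candidates:
        others = [forms for n, forms in history.items() if n != nominal]
        if not any(form in forms for forms in others):
            return form                                   # unambiguous
    return candidates[-1]                                 # fall back to the fullest form

history = {"disk1":    ["it", "the disk", "the rp07-aa disk"],
           "printer1": ["it", "the printer", "the lxy11 printer"]}
print(surface_form("disk1", history))                     # -> 'the disk'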
5.3. Confirmational Information

Speakers in a conversation use redundancy to ensure that all parties understand one another. This redundancy can be incorporated into natural language interfaces by "echoing," or including additional information such as paraphrases in the generated text to confirm that the computer has chosen the correct meaning of the user's input. For example, if the user asks:

+ What is the price of the largest single port disk?

The following reply, while strictly correct, is likely to be unhelpful, and does not reassure the user that the meaning of the query has been properly understood:

34000 dollars.

The XCALIBUR system would answer with the following sentence, which not only answers the user's question, but includes evidence that the system has correctly determined the user's request:

The 176 MB single port rp06-aa costs 34000 dollars.

XCALIBUR uses focus information to provide echoing. As each part of the user's query is processed, all the attributes of the object in question which are needed to answer the query are recorded. Then the generator assures that the value of each of these attributes is presented in the final output.

6. Summary and Future Work

The current prototype of the Kafka generator is running and generating text for both paraphrases and responses for the XCALIBUR system. This system demonstrates the feasibility of the semantic transformational method of text generation. The transformational rule formalism allows much simpler specification of diverse syntactic and semantic computations than the hard-coded lisp used in the previous version of the XCALIBUR generator.

Current work on Kafka is focused on three goals: first, more grammar rules are necessary to increase coverage. Now that the basic method has been validated, the grammar will be extended to cover the entire XCALIBUR domain. The second goal is making Kafka more efficient. Most of the system's time is spent trying matches that are doomed to fail. The discrimination network now being added to the system will avoid these pointless matches, providing the speed required in an interactive system like XCALIBUR. The third goal is formalizing the remaining ad hoc phases of the generator. Four of seven major components now use transformations; two of these seem amenable to the transformational method. The case ordering can easily be done by transformations. Converting the semantic processing done by the relat module will be more difficult, since the rule interpreter will have to be expanded to allow multiple inputs.

7. Acknowledgments

Many thanks to Karen Kukich for her encouragement and for access to her wealth of literature on text generation. I would also like to thank Jaime Carbonell for his criticism and suggestions which were most helpful in revising earlier drafts of this paper.

References

1. Bates, M. and Wilson, K., "Final Report, ILIAD, Interactive Language Instruction Assistance for the Deaf," Tech. report, Bolt Beranek and Newman, 1981, No. 4771.

2. Carbonell, J.G., Boggs, W.M., Mauldin, M.L. and Anick, P.G., "The XCALIBUR Project, A Natural Language Interface to Expert Systems," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 1983.

3. Forgy, C.L., "OPS5 User's Manual," Tech. report, Dept. of Computer Science, Carnegie-Mellon University, 1981, CMU-CS-81-135.

4. Mann, W.C., "An Overview of the Nigel Text Generation Grammar," Proceedings of the 21st Meeting of the Association for Computational Linguistics, 1983, pp. 79-84.

5. McDermott, J., "R1: A Rule-Based Configurer of Computer Systems," Tech. report, Dept. of Computer Science, Carnegie-Mellon University, 1980.

6. McDonald, D.D., "Subsequent Reference: Syntactic and Rhetorical Constraints," Theoretical Issues in Natural Language Processing-2, 1978, pp. 64-72.

7. Schank, R.C., Conceptual Information Processing, Amsterdam: North-Holland, 1975.

8. Schank, R.C. and Riesbeck, C.K., Inside Computer Understanding, Hillsdale, NJ: Lawrence Erlbaum, 1981.
9. Swartout, B., "GIST English Generator," Tech. report, USC/Information Sciences Institute, 1982.

10. Wardhaugh, R., Introduction to Linguistics, McGraw Hill, 1977.
Controlling Lexical Substitution in Computer Text Generation 1

Robert Granville
MIT Laboratory for Computer Science
545 Technology Square
Cambridge, Massachusetts 02139

Abstract

This report describes Paul, a computer text generation system designed to create cohesive text through the use of lexical substitutions. Specifically, this system is designed to deterministically choose between pronominalization, superordinate substitution, and definite noun phrase reiteration. The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements.

1. Introduction

This report describes Paul, a computer text generation system designed to create cohesive text through the use of lexical substitutions. Specifically, this system is designed to deterministically choose between pronominalization, superordinate substitution, and definite noun phrase reiteration. The system identifies a strength of antecedence recovery for each of the lexical substitutions, and matches them against the strength of potential antecedence of each element in the text to select the proper substitutions for these elements.

Paul is a natural language generation program initially developed at IBM's Thomas J. Watson Research Center as part of the ongoing Epistle project [5, 6]. The emphasis of the work reported here is in the research of discourse phenomena, the study of cohesion and its effects on multisentential texts [3, 9]. Paul accepts as input LISP knowledge structures consisting of case frame [1] formalisms representing each sentence to be generated. These knowledge structures are translated into English, with the appropriate lexical substitutions being made at this time. No attempt is made by the system to create these knowledge structures.

2. Cohesion

The purpose of communication is for one person (the speaker or writer) to express her thoughts and ideas so that another (the listener or reader) can understand them. There are many restrictions placed on the realization of these thoughts into language so that the listener may understand. One of the most important requirements for an utterance is that it seem to be unified, that it form a text. The theory of text and what distinguishes it from isolated sentences that is used in Paul is that of Halliday and Hasan [3].

One of the items that enhances the unity of text is cohesion. Cohesion refers to the linguistic phenomena that establish relationships between sentences, thereby tying them together. There are two major goals that are accomplished through cohesion that enhance a passage's quality of text. The first is the obvious desire to avoid unnecessary repetition. The other goal is to distinguish new information from old, so that the listener can fully understand what is being said.

{1} The room has a large window. The room has a window facing east.

{1} appears to be describing two windows, because there is no device indicating that the window of the second sentence is the same as the window of the first sentence. If in fact the speaker meant to describe the same window, she must somehow inform the listener that this is indeed the case.

1This research was supported (in part) by Office of Naval Research contract N00014-80-C-0505, and (in part) by National Institutes of Health Grant No. 1 P01 LM 03374-04 from the National Library of Medicine.
Cohesion us a device that will accomplish thas goal, Cohesion is created when the interpretation of an element is dependent on the me.aning of another. ]he element in guestion can.at be hJIly understood until 1he element d is dependenl on zs ~dcntdned. rhe first presupposes [3] the second in that it requ,es for its understanding the exnstence of the second. An element at a sentence presupposes the existence of another when its interpretation requires relerence tO another. Once we can trace these lelerences to their sources, we can correctly interpret the elements of the sentences. The very same devices that create these depende, leies for interpretation help distinguish olct intolrnation from new. I[ the use of a cohesive element pre~.upposes the exnste~ce of another role=once el the element lor its ir}terpretahon, tl~en tile hstener can be assured tltat the olher reference exists, and that the element =n question can be understood as old reformation. lhurefore, that act at associating seJltences through reference deponde.cies heips make the text unambiguous, arid cohcs=on can be seen to be a very important part of text. 3. Lexical Substitution In [3], Halliday and I-lasan cat.~log and discuss many devices used in English to acmove cohes,on. Fhese include refe;ence, substitution ellaDsis, and conjunction. Another f.t, mily ut devices they discuss is know,-" as lexical substitulion. ]he lexlcal substitution devices incorporated into Paul are pronommalizatior,, s.perordinate substitution, and definite noun phrase reiteration. Superordinate substitution is the replacement of an etement with a noun or phrase that ps a .;ore general term for the element As an example, consPder Figure 1, a sample hierarchy the system uses to generate sentences. ................................................. ANIMAL MAMMAL REPT ILE i POSSUM SKUNK TURTLE I I r POGO HEPZIBAH CHURCHY Ftgure la ................................................. 1, POGO IS A HALE POSSUM. 2. HEPZIBAH IS A FEMAt.[ SKUNK. 3. CItURCHY IS A M~LE TURTLE. 4. POSSUMS ARE SHALL, GREY MAMMALS. 5, SKUNKS ARE SMALL, BLACK MAMMALS. 6. TURILES ARE SMALL, GREEN REPTILES, 7. MAMMALS ARE FURRY ANIMALS. B, REPTILES ARE SCALED ANIMALS, Figure Ib: A S~mple llierarchy for Paul 381 It1 Ih~s t!x:lrh;.~l(~, lap SIJ|)(!roI(JlE~;.aIO of POds() I~ POS~l.lf.f, that of PO~';S{IM ~s MAM~.IAI. aMd ,~,;jain for M,lt/MAI the supo~ordmate is A^#IMAI Suporord,natet; c,,t;n contraLtO for as long as the h~erarchical tree will s~ppor t. The n,echanlct~ Io, performing superord~nate substdutio:'~ is fairly {,asy. All ,)+~e no(nil,; tO Of: ;S t++ t'l++;~tO, a list C}t s+q'~++ior,flllm~!:~ try tr~ICllSg up the hi~:rarch+cal bet!. an~J Cub~l;,,rfly c l~(;ose It(,i;x C!},s list. t Iowever. lhere are sev(:l,d i~[;uob that IrlUbI I,e dddrr;sbcrJ to prevcllt s;,perorjir~ate SUbStitutIOn florrl hell"i{j alll~)lgtlL)llS or rY!n,,,,ln{j ('lloneous CO;HK)tatiOrlS. The etrofle(Als CO~H)otatlunS ~'CCLI r It Ih(~ h:';t O! L;upelordlnL~,lu+. t i~% allowed to extot+d too long An ex:lmpIn will t:l,+kc ih;:4 c:ltLff. Let us ;]£~umo that we have a h~C'ralchy in wn=++t'~ th+,le is ar~ (:~drv ! Hi It. ll'le superor'dlnate Of [t~ED iS MAf4. t~Jf A,I,It# t'/t}t,t,,'~N. ANIMAL for tlfJM.'~IV. :rod rilING for ANIM,1L. fhorefore, the superordu,ate hsl for hR~.D ~s IMAN tlUMAN AHIM4L THINGS. Whilo retenin{I to frcd as llle man seems fmc, calling h~m the ,~tuman seems a Iitl=e z, tran{je. And lurtherlF~ore, using the animal o+ + the thing to refer to Fred ~s actually insulting. 
]+he reason these superordinates have negative connalations is that there are e~sentKd quahttes that hH+;rans p,':,ssess that s,+p~;rate ,is from ell;or animals. Calhug FrEd an "anlIi;id" m+1111es that he lai-ks tar,so quahhea, al]:.f is tt;oreiore insulhog. "l.h+man" sotJnds change because it is the hvihest e=rlry in the seln~mtic hterlrrchy that exhibits these qualities. lalk,:g about "the humnr~" tl~ves erie the feeling that there are other creatules in the d=scourse that aren't human. Paul is senmtive to the connotabons that are possible Ihrough superordinate substitubon. The+ system tdeobfies an es;~e+~tial quality, usu[]liy ir=telligence, wilich acts as a block for further supurordinate subsbtution. If the item to be replaced with a superordmate has the prou.~rty of intelhgence, either d~reclly or through semantic inheritance, a superordinate list is made only {)f tho..:e entnes that have themselves the quality el intothgenco, a{j.qir, either d~rectly or through inheritance. If the item does=rt have intelhgence the list is allowed to extend as far as the hierarcl~ical entries will allow. Once the proper list of superordinates =3 established, Paul randomly chooses one, preventing repetition by remembering previous choices. The other problem with superordinato substitution is that it may =ntroduce ambiguity. Again cons=tier Figure 1. If we wanted to perform a superord.]ato subshhlho;+ for POrJO. we would have the sup~'rordJt13te hst (POSSUM MAMMAL ANIM4L ) to choose from. But HEPZlI]AH is also a nlammal, so the rnammal cauld refer to either POGO or HEPZIBAH. And not only are both POGO e,r}d ItEPZIBAtl anunals, but sn is CtlURCHY, so the armnat could be any o,}e of them. ]herefore, saying lhe matnmal or the arr+mal would form an ambiguous refecence which the listener or reader would have rio v,,,ay to ur~derstand. Paul reco{.lniz££ [hts ambiguity. Once the superordinate has been selected, it ~s tested against all the other nour~s mentioned so far in the text If any other noun is a rn{;mbet of th.e superordu+ale set m question, the reference is ambl,~!uous. 1his reference can be disarnbiguated by using some feature ot the eh:ment be,to replaced as a modilier. In our example of Figure 1. we hrd that all possums are grey. and therefore POGO ~s grey. Thus. the grey mamma! can refer only to POGO, and is not atnb=guous. In the Pogo world, the features the system uses to d~sarr;oiuuate these references are gender, s~ze, color, and skin type (furry. scaled, of foath{,~('d). Or+co the leature ~s arb~trC.rily selected and the correct value has been determined. ~t ~s tested to see that it genuinely diba+nb~guales the reference, tt any of the nouns that were members of the :,t;pcrordmate set have the same value to~ this feature, it cannot be use,') to (f~s.~mb~guate the reference, arid il is relected. For instance, tl~e size of POGO ~s small, but s~ying the .~n',all mammal ~3 still ambiguous bec~use HEPZll~Atl is also small, and the phrase could just as likely refer to her. The search for a disambiguatmg ieature continues until one is found. Pronominalizat+on, the use of personal pronouns in place of an element, is mechan~c~dly simple. The selecbon of the appropriate persnnal pronoun is strictly gramm;-~lical. Once lhe syntactic case, the oendor, and the number of the element are known, the correct pronoun is dictated by the language. 
The final lexical substitution available to Paul is the definite noun phrase, the use of a definite article, the in English, as opposed to an indefinite article, a or some. The definite article clearly marks an item as one that has been previously mentioned, and is therefore old information. The indefinite article similarly marks an item as not having been previously mentioned, and therefore is new information. This capacity of the definite article makes its use required with superordinates.

{2} My collie is smart. The dog fetches my newspaper every day.
*My collie is smart. A dog fetches my newspaper every day.

While the mechanisms for performing the various lexical substitutions are conceptually straightforward, they don't solve the entire problem of using lexical substitution. Nothing has been said about how the system chooses which lexical substitution to use. This is a serious issue because lexical substitution devices are not interchangeable. This is true because lexical substitutions, as with most cohesive devices, create text by using presupposed dependencies for their interpretations, as we have seen. If those presupposed elements do not exist, or if it is not possible to correctly identify which of the many possible elements is the one presupposed, then it is impossible to correctly interpret the element, and the only possible result is confusion. A computer text generation system that incorporates lexical substitution in its output must insure that the presupposed element exists, and that it can be readily identified by the reader.

Paul controls the selection of lexical substitution devices by conceptually dividing the problem into two tasks. The first is to identify the strength of antecedence recovery of the lexical substitution devices. The second is to identify the strength of potential antecedence of each element in the passage, and determine which if any lexical substitution would be appropriate.

4. Strength of Antecedence Recovery

Each time a cohesive device is used, a presupposition dependency is created. The item that is being presupposed must be correctly identified for the correct interpretation of the element. The relative ease with which one can recover this presupposed item from the cohesive element is called the strength of antecedence recovery. The stronger an element's strength of antecedence recovery, the easier it is to identify the presupposed element.

The lexical substitution with the highest strength of antecedence recovery is the definite noun. This is because the element is actually a repetition of the original item, with a definite article to mark the fact that it is old information. There is no real need to refer to the presupposed element, since all the information is being repeated.

Superordinate substitution is the lexical substitution with the next highest strength of antecedence recovery. Presupposition dependency genuinely does exist with the use of superordinates, because some information is lost. When we move up the semantic hierarchy, all the traits that are specific to the element in question are lost. To recover this and fully understand the reference at hand, we must trace back to the original element in the hierarchy. Fortunately, the manner in which Paul performs superordinate substitution facilitates this recovery. By insuring that the superordinate substitution will never be ambiguous, the system only generates superordinate substitutions that are readily recoverable.

The third device used by Paul, the personal pronoun, has the lowest strength of antecedence recovery. Pronouns genuinely are nothing more than place holders, variables that indicate the positions of the elements they are replacing. A pronoun contains no real semantic information. The only readily available pieces of information from a pronoun are the syntactic role in the current sentence, the gender, and the number of the replaced item. For this reason, pronouns are the hardest to recover of the substitutions discussed.

5. Strength of Potential Antecedence

While the forms of lexical substitution provide clues (to various degrees) that aid the reader in recovering the presupposed element, the actual way in which the element is currently being used, how it was previously used, and its circumstances within the current sentence and within the entire text can provide additional clues. These factors combine to give the specific reference a strength of potential antecedence. Some elements, by the nature of their current and previous usage, will be easier to recover independent of the lexical substitution device selected.

Strength of potential antecedence involves several factors. One is the syntactic role the element is playing in the current sentence, as well as in the previous reference. Another is the distance of the previous reference from the current. Here distance is defined as the number of clauses between the references, and Paul arbitrarily uses a distance of no more than two clauses as an acceptable distance. The current expected focus of the text also affects an element's potential strength of antecedence.
The th,d device used by Paul. ~he personal pronoun, has the lowest strength of antecedence recovery. Pronouns genuinely ~re nothing more tharl plat:e holders, variables that lea=tHole the pnsihotls Of the elements they are replacing A pronoun contains no real semahhc irdormation. The only readily available p~eces of iniormation from a pronoun are the syntactic role Jn the currenl sentence, the gender, and the number of the replaced item. For this mason, pronouns are the hardest to recover of the substitutions discussed. 5. Strength of Potential Antecedence Wl~tle the forms of lexical substitution provide clues (tO various degrees) teat aid the reader in recovering the presupposed elemeflt, the actual way m which the e!orr;er;t =S currerttly being used, how ;t was prev;:)usly used. its cir,,:um,~ tances within the current sentence and within the eqt~re text, can prowce addit;on31 clues. These factors combine to give tne 5pecIhc reference a s~ret;gth el potentiat antecedence. Some etemer~ts, try the ;,ature of their current and previous us~.~ge, will be easier to recover u;depetl~ont of u~e fox,cat subst~lutton dewce selected. Strength of potential antecedence involves several factors, One is the syntachc role the element ~s pl~ying in tr}e current sentence, as well as in the previous relere;ice. Anoti~er is the d~stance of the previous reference from the current. Here distance is defined as the number of clauses between the references, and Paul arbitrarily uses a distance of no more than two clauses as an acceptable distance. The current expected 382 focus of the text also affects an element's potential strength of antecedence. In order to identify the current expected locus, Paul uses the detailed algorithm for focus developed by Sidner [10]. Paul identifies five classes of potenhal antecedence strength. Class I being the strongest and Class V the weakest, as well as a sixth "non- class" for elements being mentioned for the first time. These five classes are shown in Figure 2. Class h 1. The sole referent of a given gender and number (singular or plural) last menbo~lod within an acceptable distance. OR 2. The locus or the head of the expected locus list for the previous sentence. Class Ih The last relerent el a g=ven gender and number last mentioned w;thin an acceptable distance. Class IIh An element that filled the same syntactic role in the previous sentence. Class IV: 1. A referent that has been previously mentioned, OR 2. A referent that is a member of a previously mentioned set that has been mentioned within an acceptable distance. Class V: A referent that is known to be a part of a previously mentioned item. F~gure 2: The Five Classes of Potential Antecedence Once an element's class of potential antecedence is identified, Ihe selection of the proper toxical substitubon IS easy. TI~O stronger an element's potenbal a~teceder, ce. the weaker the antecedence of the lexJcal subslrtutior) I-igule 3 illustrates the mappings lrom potential antecedence to lex,c:ll 3ut)stltut~on devices. Note that Class I11 elements are unusual i~ that the device used to replace them can vary. If the previous instance of the element was of Chtss I. if it was replaced with a pronoun, then the Cunent instance =s replaced with a pror~oun, too. Othorwh'e, Class III elements are replaced with superordinates, the same as Class I1. Class I . . . . . . . . . . . . . . . . . . . . . . Pronoun Substitution Class II . . . . . . . . . . . . . . . Superordinate Substitution Class I l l (previous reference Class I) . . . . 
6. An Example

To see the effects of controlled lexical substitution, and to help clarify the ideas discussed, an example is provided. The following is an actual example of text generated by Paul. The domain is the so-called children's story, and the example discussed here is one about characters from Walt Kelly's Pogo comic strip, as shown in Figure 1 above. Figure 4 contains the semantic representation for the example story to be generated, in the syntax of NLP [4] records. 2

a1('like',exp:='a2',recip:='a3',stative);
a2('pogo');
a3('hepzibah');
b1('like',exp:='b2',recip:='a3',stative);
b2('churchy');
c1('give',agnt:='a2',aff:='c2',recip:='a3',active,effect:='c3');
c2('rose');
c3('enjoy\',recip:='a3',stative);
d1('want\',exp:='a3',recip:='d2',neg,stative);
d2('rose',possess:='b2');
e1('b2',char:='jealous',entity);
f1('hit\',agnt:='b2',aff:='a2',active);
g1('give',agnt:='b2',aff:='g2',recip:='a3',active);
g2('rose');
h1('drop\',exp:='h2',stative);
h2('petal',partof:='g2',plur);
i1('upset\',recip:='a3',cause:='h1',stative);
j1('cry\',agnt:='a3',active)[]

Figure 4: NLP Records for Example Story

If the story were to be generated without any lexical substitutions at all, it would look like the following.

POGO CARES FOR HEPZIBAH. CHURCHY LIKES HEPZIBAH, TOO. POGO GIVES A ROSE TO HEPZIBAH, WHICH PLEASES HEPZIBAH. HEPZIBAH DOES NOT WANT CHURCHY'S ROSE. CHURCHY IS JEALOUS. CHURCHY HITS POGO. CHURCHY GIVES A ROSE TO HEPZIBAH. PETALS DROP OFF. THIS UPSETS HEPZIBAH. HEPZIBAH CRIES.

While this version of the story would be unacceptable as the final product of a text generator, and it is not the text Paul would produce from the input of Figure 4, it is shown here so that the reader can more easily understand the story represented semantically in Figure 4. To go to the other extreme, uncontrolled pronominalization would be at least as unacceptable as no lexical substitutions at all.

POGO LIKES HEPZIBAH. CHURCHY CARES FOR HER, TOO. HE GIVES A ROSE TO HER, WHICH PLEASES HER. SHE DOES NOT WANT HIS ROSE. HE IS JEALOUS. HE SLUGS HIM. HE GIVES A ROSE TO HER. PETALS DROP OFF. THIS UPSETS HER. SHE CRIES.

Again, this is unacceptable text, and the system would not generate it, but it is shown here to dramatize the need for control over lexical substitutions. The text that Paul actually does produce from the input of Figure 4 is the following story.

POGO CARES FOR HEPZIBAH. CHURCHY LIKES HER, TOO. POGO GIVES A ROSE TO HER, WHICH PLEASES HER. SHE DOES NOT WANT CHURCHY'S ROSE. HE IS JEALOUS. HE PUNCHES POGO. HE GIVES A ROSE TO HEPZIBAH. THE PETALS DROP OFF. THIS UPSETS HER. SHE CRIES.

2For a discussion of the implementation of NLP for Paul see [2].
7. Conclusions

The need for good text generation is rapidly increasing. One requirement for generated output to be considered text is to exhibit cohesion. Lexical substitution is a family of cohesive devices that help provide cohesion and achieve the two major goals of cohesion, the avoiding of unnecessary repetition and the distinguishing of old information from new. However, uncontrolled use of lexical substitution devices will produce text that is unintelligible and nonsensical. Paul is the first text generation system that incorporates lexical substitutions in a controlled manner, thereby producing cohesive text that is understandable. By identifying the strength of antecedence recovery for each of the lexical substitutions, and the strength of potential antecedence for each element in the discourse, the system is able to choose the appropriate lexical substitutions.

8. Acknowledgments

I would like to thank Pete Szolovits and Bob Berwick for their advice and encouragement while supervising this work. I would also like to thank George Heidorn and Karen Jensen for originally introducing me to the problem addressed here, as well as for their expert help at the early stages of this project.

9. References

1. Fillmore, Charles J. The Case for Case. In Universals in Linguistic Theory, Emmon Bach and Robert T. Harms, Eds., Holt, Rinehart and Winston, Inc., New York, 1968.
2. Granville, Robert Alan. Cohesion in Computer Text Generation: Lexical Substitution. Tech. Rep. MIT/LCS/TR-310, MIT, Cambridge, 1983.
3. Halliday, M. A. K., and Ruqaiya Hasan. Cohesion in English. Longman Group Limited, London, 1976.
4. Heidorn, George E. Natural Language Inputs to a Simulation Programming System. Tech. Rep. NPS-55HD72101A, Naval Postgraduate School, Monterey, Cal., 1972.
5. Heidorn, G. E., K. Jensen, L. A. Miller, R. J. Byrd, and M. S. Chodorow. The Epistle Text-Critiquing System. IBM Systems Journal 21, 3 (1982).
6. Jensen, Karen, and George E. Heidorn. The Fitted Parse: 100% Parsing Capability in a Syntactic Grammar of English. Tech. Rep. RC 9729 (#42958), IBM Thomas J. Watson Research Center, 1982.
7. Jensen, K., R. Ambrosio, R. Granville, M. Kluger, and A. Zwarico. Computer Generation of Topic Paragraphs: Structure and Style. Proceedings of the 19th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, 1981.
8. Mann, William C., Madeline Bates, Barbara J. Grosz, David D. McDonald, Kathleen R. McKeown, and William R. Swartout. Text Generation: The State of the Art and the Literature. Tech. Rep. ISI/RR-81-101, Information Sciences Institute, Marina del Rey, Cal., 1981. Also University of Pennsylvania MS-CIS-81-9.
9. Quirk, Randolph, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik. A Grammar of Contemporary English. Longman Group Limited, London, 1972.
10. Sidner, Candace Lee. Towards a Computational Theory of Definite Anaphora Comprehension in English Discourse. Tech. Rep. AI-TR 537, MIT, Cambridge, 1979.
1984
78
UNDERSTANDING OF JAPANESE IN AN INTERACTIVE PROGRAMMING SYSTEM

Kenji Sugiyama 1, Masayuki Kameda, Kouji Akiyama, Akifumi Makinouchi
Software Laboratory
Fujitsu Laboratories Ltd.
1015 Kamikodanaka, Nakahara-ku, Kawasaki 211, JAPAN

ABSTRACT

KIPS is an automatic programming system which generates standardized business application programs through interactive natural language dialogue. KIPS models the program under discussion and the content of the user's statements as organizations of dynamic objects in the object-oriented programming sense. This paper describes the statement-model and the program-model, their use in understanding Japanese program specifications, and how they are shaped by the linguistic singularities of Japanese input sentences.

I INTRODUCTION

KIPS, an interactive natural language programming system that generates standardized business application programs through interactive natural language dialogue, is under development at Fujitsu (Sugiyama, 1984). Research on natural language programming systems (NLPS) (Heidorn, 1976; McCune, 1979) has been pursued in America since the late 1960's and some results of prototype systems are emerging (Biermann, 1983). But in Japan, although Japanese-like programming languages (Ueda, 1983) have recently appeared, there is no natural language programming system.

Generally, for a NLPS to understand natural language specifications, modeling of both the program under discussion and of the content of the user's statements is required. In conventional systems (Heidorn, 1976; McCune, 1979), programs and rules encoding linguistic knowledge first govern parsing procedures which extract from the user's input a statement-model; then "program model building rules" direct procedures which update or modify the program-model in light of what the user has stated. There are thus two separate models and two separate procedural components. However, we believe that knowledge about semantic parsing and program model building should be incorporated into the statement-model and the program-model, respectively. In the NLPS we are working on, these two models are organizations of objects (in the object-oriented programming sense (Bobrow, 1981)), each possessing local knowledge and procedures. The user's input is first parsed by a syntactic analysis procedure which communicates sub-trees to the statement-model objects for semantic judgments and annotations, such that the completed parse tree is trivially transformable into the statement model. In the second stage, the statement model is sent to an object in the program model (#PROGRAM) which sends messages to other program-model objects corresponding to components of the user's statement; it is these objects which perform the updating and modification operations.

This paper describes the statement-model and the program-model, their use in understanding Japanese program specifications, and how they have been shaped by the linguistic singularities of the Japanese input sentences dealt with so far.

1 Sugiyama's current address is Advanced Computer Systems Department, SRI International, Menlo Park, CA 94025.

II MODELS

A. Program Model

To get a better understanding of the way users describe programs, we asked programmers to specify programs in a short paragraph, and sampled illustrative descriptions of simple programs from a Hyper COBOL user's manual (Fujitsu, 1981) (Hyper COBOL is the target programming language of KIPS). This resulted in a corpus of 60 program descriptions, comprising about 300 sentences.
The program model we built to deal with this corpus is divided into a model of files and a model of processes (Figure 1).

Figure 1. The program model. [Diagram not reproduced: it shows the model of processes and the model of files as lattices of class-level objects (such as #STATE and file-type) linked by super/sub relations, class/instance relations, properties, and composite-object relations.]

The model of files comprises in turn several sub-models, objects containing knowledge about file types, record types and item types. A particular file is represented by an object which is an instance of all three of these. Class-level objects have such properties as bearing a certain relation to other class-level objects, having a name, and so forth. For example, the object #RECORD-TYPE has ITEM-TYPES relations with the #ITEM-TYPE object, and DATA-LENGTH and CHARACTER-CLASS properties. Objects on the instance level have such properties as a specific data length and a specific name.

The model of processes is a taxonomy of objects bearing super/subset relations to one another. On the highest level we find such objects as #OPERATION, #DATA, #PROGRAM, #CONDITION, and #STATE. The specific program-model, which is built up through a dialogue with the user, is a set of instance-level objects belonging to both file and process classes.

B. Statement Model

In a NLPS system, it is necessary to represent the content of the user's input sentences in an intermediary form, rather than incorporating it directly into the program model, because the user's statements may either contradict what was said previously, or omit some essential information. The statement model provides this intermediary representation, whose content must be checked for consistency, and sometimes augmented, before it is assimilated and acted upon.

The sentences in the corpus can, for the purpose of statement-model building, be classified into operation sentences, parameter sentences, and item-condition sentences (Figure 2). Their semantic components can be divided into nominal phrases and relations - names or descriptions of operations, parameters, data classes, and specific pieces of data (e.g. the item "Hinmei"), and relations between these 2 (Figure 3). Naming these elements, identifying subclasses of operations, and categorizing the dependencies yields the statement model (Figure 4): subcomponents of the sentence correspond to class-level objects organised in a super/sub hierarchy, and the content of the sentence as a whole corresponds to a system of instance-level objects, descendants from those classes.

Figure 2. Three sentence types. [Examples not fully reproduced: the figure gives a Japanese example of each type, glossed roughly as "Sort the account file with the key 'Hinmei', then output it to the account file 1" and "The key item is 'Hinmei'".]

Figure 3. The semantic elements. [Diagram not reproduced: it labels the elements of such sentences - e.g. sort's key item "Hinmei" - as operation, parameter, data class, and specific data.]

III Understanding of Japanese

KIPS understands Japanese program specifications in two phases. The sentence analysis phase analyzes an input and generates an instance of a statement model. The specification acquisition phase builds an instance of the program model from the extracted semantics.
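The division into class-level and instance-level objects described above can be pictured with a little code. The sketch below is illustrative Python, not the paper's FRL, and the slot values (80, "kana") are assumed for the example; it shows how one concrete file can be an instance of the file-type, record-type, and item-type sub-models at once:

class Frame:
    """A minimal frame: named slots plus class-level parents to inherit from."""
    def __init__(self, name, parents=(), **slots):
        self.name, self.parents, self.slots = name, list(parents), dict(slots)
    def get(self, slot):
        if slot in self.slots:              # local value first
            return self.slots[slot]
        for parent in self.parents:         # then inherited values
            value = parent.get(slot)
            if value is not None:
                return value
        return None

FILE_TYPE   = Frame("#FILE-TYPE")
RECORD_TYPE = Frame("#RECORD-TYPE", DATA_LENGTH=80, CHARACTER_CLASS="kana")
ITEM_TYPE   = Frame("#ITEM-TYPE")
# A particular file is an instance of all three sub-models at once.
account = Frame("#FILE1", parents=(FILE_TYPE, RECORD_TYPE, ITEM_TYPE), NAME="account")
print(account.get("DATA_LENGTH"))   # 80, inherited from #RECORD-TYPE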
A. Implementing the Models

To realize a natural language understanding system using the models we are developing, objects in the models have to be dynamic as well as static, in the sense that the objects should express, for instance, how to instantiate themselves as well as static relations such as super/sub relations. Object-oriented and data-oriented program structures (Bobrow, 1981) are good ways to express dynamic objects of this sort. KIPS uses FRL (Roberts, 1977) extended by message passing functions to realize these programming styles.

B. Sentence Analysis

The sentence analysis phase performs both syntactic and semantic analysis. As described above, the semantics is represented in the statement model. Syntax in KIPS is expressed by rules of TEC (Sugiyama, 1982), which is an enhancement of PARSIFAL (Marcus, 1980). The fundamental difference is that TEC has look-back buffers whereas PARSIFAL has an attention shift mechanism. This change was made in order to cope with two important aspects of Japanese, viz., (1) the predicate comes last in a sentence, and (2) bunsetsu 3 sequences are otherwise relatively arbitrary. The basic idea of TEC is as follows. To determine the relationship between a noun bunsetsu, which comes early in the sentence, and the predicate, the predicate bunsetsu has to be parsed. Since it comes last in the sentence, the noun bunsetsu has to be stored for later use to form an upper grammatical constituent. The arbitrary number of noun bunsetsus are stored in look-back buffers, and are later used one by one in a relatively sequence-independent way.
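To make the look-back idea concrete, here is a minimal sketch of parsing a head-final clause by buffering noun bunsetsus until the predicate arrives. This is illustrative Python, not TEC itself, and all names (the particle-to-role frame, the words) are invented for the example:

def parse_head_final(bunsetsus):
    """Buffer noun bunsetsus until the clause-final predicate arrives."""
    look_back = []                       # the TEC-style look-back buffer
    for b in bunsetsus:
        if b["cat"] == "noun":
            look_back.append(b)          # its role cannot be decided yet
        elif b["cat"] == "pred":
            # The predicate assigns roles to the stored bunsetsus,
            # largely independently of the order in which they arrived.
            b["deps"] = [(b["frame"].get(n["particle"], "adjunct"), n["word"])
                         for n in look_back]
            return b
    return None

sent = [{"cat": "noun", "word": "fairu", "particle": "wo"},
        {"cat": "pred", "word": "sort-suru",
         "frame": {"wo": "object", "ga": "agent"}}]
print(parse_head_final(sent)["deps"])    # [('object', 'fairu')]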
1. Overview

The syntactic characteristics of the sample sentences, which were found to be useful in designing the sentence analysis, are that (1) the semantic elements, which are stated above, correspond closely to bunsetsu, (2) parameter sentences and item-condition sentences can be embedded in operation sentences and tend to be expressed in noun sentences (sentences like "A is B"), and (3) operation sentences tend to be expressed in verb sentences (sentences like "do A"). Guided by these observations, parsing rules are divided into three phases: bunsetsu parsing, operand parsing, and operation parsing. Bunsetsu parsing identifies from the input word sequence a set of bunsetsu structures, each of which contains at most one semantic element. Operand parsing makes up such operands as parameter and item-condition specifications that may be governed directly by operations. Operation parsing determines the relations between an operation and various operands that have been found in the input sentence. Each of these phases sends messages to the statement model, so that it can add to a parse tree information necessary for building the semantic structure of an input or can determine the relationship between the partial trees built so far. An instance of the statement model is extracted from the semantic information attached to the final parse tree.

Figure 4. The statement model. [Diagram not reproduced: it shows the operation, parameter, and item-condition classes of the statement model above their instance-level descendants.]

2 Subordinating sentential conjunctions are treated as relations between states or operations, seen as described by sentential clauses.
3 A linguistic constituent which approximately corresponds to "phrase" in English.

Figure 5. Syntax and Semantics Interaction. [Diagram not reproduced: it shows the statement model frame *USEF (with the method SAS:GET under TO-GET), the frame *KEY below it with slots ITEMS and ORDER whose $usef facets point to *ITEM and *ORDER, the buffers -1st, C, and 1st holding "Hinmei" and "key", and a rule whose patterns include [-1; * IS NOT DECLINABLE] and [C; send "TO-GET <semantic feature of -1st>"].]

2. Syntax and Semantics Interaction

Figure 5 shows how message passing between the syntactic component (rules) and the semantic component (model) occurs in order to determine the semantic relationship between the bunsetsus ("Hinmei" and key). The boxes denoted by -1st, C, 1st are grammatical constituent storages called the look-back buffer, look-up stack, and look-ahead buffer in TEC (Sugiyama, 1982), respectively. One portion of the rule's patterns (viz. [-1;...]) checks whether the constituent in the -1st buffer is not declinable. Another portion (viz. [C;...]) sends the message "TO-GET *ITEM" to the semantic component (*KEY) asking it to perform semantic analysis.

On receiving the message from the syntax rule, *KEY determines the semantic relation with *ITEM, and returns the answer "ITEMS". The process is as follows. The message activates a method corresponding to the first argument of the message (viz. TO-GET). Since the corresponding method is not defined in *KEY itself, it inherits the method SAS:GET from the upper frame *USEF. This method searches for the slot names that have the facet $usef with *ITEM, and finds the semantic relation ITEMS.

As illustrated in the example, the syntax and semantics interaction results in a syntactic component free from semantics, and a semantic component free from syntax. Knowledge of semantic analysis can be localized, and duplication of the same knowledge can be avoided through the use of an inheritance mechanism. Introducing a new semantic element is easy, because a new semantic frame can be defined on the basis of semantic characteristics shared with other semantic elements.
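A minimal sketch of this message-passing step, in illustrative Python (the real system uses FRL frames and facets; the class and method names here are mine):

class SemFrame:
    def __init__(self, name, parent=None, usef=None, methods=None):
        self.name, self.parent = name, parent
        self.usef = usef or {}           # slot name -> frame under the $usef facet
        self.methods = methods or {}
    def send(self, message, arg):
        # Find a method for the message, inheriting upward if necessary.
        frame = self
        while frame is not None:
            if message in frame.methods:
                return frame.methods[message](self, arg)
            frame = frame.parent
        return None

def sas_get(receiver, arg):
    # SAS:GET searches for slot names whose $usef facet holds the argument.
    for slot, frame in receiver.usef.items():
        if frame is arg:
            return slot
    return None

ITEM = SemFrame("*ITEM")
USEF = SemFrame("*USEF", methods={"TO-GET": sas_get})
KEY  = SemFrame("*KEY", parent=USEF,
                usef={"ITEMS": ITEM, "ORDER": SemFrame("*ORDER")})
print(KEY.send("TO-GET", ITEM))   # -> "ITEMS"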
C. Specification Acquisition

Filling the slots which represent a user's program specification is considered as a set of subgoals, and completing a frame as a goal. Program models are built through message passing among program model objects in a goal-oriented manner.

1. Subgoaling

[Structure of subgoaling knowledge] The input semantic structure to the acquisition (1) is fragmentary, (2) varies in specifying the same program, and (3) the sequence of specifying program functions is relatively arbitrary. To deal with these phenomena, several subgoaling methods, each of which corresponds to a different way of specifying a piece of program information, are defined in different facets under a same slot. For example, a program model object #CHECK in Figure 6 has $file and $acquire facets under the slot INPUT.

In order to select one of the different subgoaling methods, depending on the input semantic structure, a rule-like structure is introduced. A pattern for a rule (e.g. *RULE1 in #CHECK) is defined under $pat, which tests the input semantic structure, and an action part of a rule is defined under $exec, which shows the subgoals' names (slots) to be filled and the subgoaling methods (facets) to do the job. The message "TO-ACQUIRE" triggers a rule interpreter. The interpreter is physically defined in the highest frame of the process model (#PSF), since it expresses overall common knowledge. #PROGRAM1 has a discourse model in order to acquire information provided relatively arbitrarily. The current model depends on the kind of operations and the sequence in which they are defined. Usually, the most currently defined or referred to operation gets first attention.

Figure 6. Subgoaling. [Diagram not reproduced: it shows the message "TO-ACQUIRE *CHECK1" (the semantic structure for a Japanese sentence such as "make the account file an input, and check it"), the program model instances #PROGRAM1 and #CHECK1 with their PROCESSES, INPUT and OUTPUT slots, the rule interpreter RULE-INTPR in #PSF, and the rule *RULE1 with its $pat (ISAC:PAT1) and $exec ((INPUT $acquire)) facets.]

[Process of subgoaling] The example of acquisition of the semantic structure in Figure 6 begins with sending the message "TO-ACQUIRE *CHECK1" to #PROGRAM1. On receiving the message, #PROGRAM1 eventually instantiates the #CHECK operation, makes the instance (#CHECK1) one of the processes, and then sends it another message "TO-ACQUIRE *CHECK1" which specifies what semantic structure it must acquire (viz. the structure under *CHECK1). The message sent to #CHECK1 then activates the rule interpreter defined in #PSF. The interpreter finds *RULE1 as appropriate, and executes the subgoaling methods specified as (INPUT $acquire) and so forth. One of the methods (ISAC:INPUT) creates #FILE3, makes it INPUT of the current frame (#CHECK1), and asks it to acquire the remaining semantic structure (*FILE1).

2. Internal Subgoaling

As explained before, some inputs lack the information necessary to complete the program model. This information is considered to be in subgoals internal to the system and supplemented by either defaults, demons (Roberts, 1977) or composite objects (Bobrow, 1981). For example, the default is used to supplement the sorting order unless stated otherwise explicitly.

Demons are used to build a record type automatically. The input sentence seldom specifies the record types. This is because the output record type is automatically calculable from the input record type depending on the operation employed. However, the program model needs explicit record type descriptions. This is accomplished by the demons defined under the OUTPUT slot in the operation frames. For example, when an output file is created for the operation #CHECK in Figure 6, the $if-added demon (viz. SAME-RECORD) is activated to find a record type for the output file. As shown in Figure 1, this results in finding the same record type (#ACCOUNT-RECORD) for the output files (#FILE1, #FILE2) as that of the input file (#FILE3).

Specification of output files is implicit in many cases. For example, the CHECK operation assumes that it creates a valid file which satisfies the constraints, and an invalid file which does not. As a natural way of implementation, composite objects are employed, and the output files as well as the files' states are also instantiated as a part of #CHECK's instantiation (Figure 1).
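The $if-added demon mechanism can be sketched as follows. This is illustrative Python rather than FRL, with names mirroring the example above:

class Slot:
    """A slot holding values plus $if-added demons run on every update."""
    def __init__(self):
        self.values, self.if_added = [], []
    def add(self, value):
        self.values.append(value)
        for demon in self.if_added:
            demon(value)

def same_record(check_frame, out_file):
    # SAME-RECORD: give the new output file the input file's record type.
    out_file["record_type"] = check_frame["INPUT"].values[0]["record_type"]

check1 = {"INPUT": Slot(), "OUTPUT": Slot()}
check1["OUTPUT"].if_added.append(lambda f: same_record(check1, f))
check1["INPUT"].add({"name": "#FILE3", "record_type": "#ACCOUNT-RECORD"})
file1 = {"name": "#FILE1"}
check1["OUTPUT"].add(file1)        # the demon fires on this update
print(file1["record_type"])       # -> #ACCOUNT-RECORD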
3. Discussion

Program specification acquisition is realized using the program model, which is a natural representation of the user's program image. This is accomplished through message passing, default usage, demon activation and composite object instantiation. Knowledge in an object in the model is localized and hence easy to update. Inheritance makes it possible to eliminate duplicate representation of the same knowledge, and adding a new object is easy because of the knowledge localization.

IV CONCLUSION

This paper discussed the problems encountered when implementing a Japanese understanding subsystem in an interactive programming system, KIPS, and proposed an "object-centered" approach. The subsystem consists of sentence analysis and specification acquisition, and the task domain of each is modeled using dynamic objects. The "object-centered" approach is shown to be useful for making the system flexible. A prototype system is now operational on M-series machines and has successfully produced several dozens of programs from Japanese specifications. Our next research will be directed toward understanding Japanese sentences that contain other than the process specifications.

V ACKNOWLEDGEMENTS

The authors would like to express their thanks to Tatsuya Hayashi, Manager of Software Laboratory, for providing a stimulating place in which to work. We would also like to thank Dr. Don Walker, Dr. Robert Amsler and Mr. Armar Archbold of SRI International, who have provided valuable help in preparing this paper.

VI REFERENCES

Biermann, A.W.; Ballard, B.W.; Sigmon, A.H. An Experimental Study of Natural Language Programming. Int. J. Man-Machine Studies, 1983, (18), 71-87.
Bobrow, D.G.; Stefik, M. The LOOPS Manual. Technical Report, Xerox PARC, 1981. KB-VLSI-81-13.
Fujitsu Ltd. Hyper COBOL Programming Manual V01, 1981. [in Japanese].
Heidorn, G.E. Automatic Programming Through Natural Language Dialogue: A Survey. IBM J. Res. & Develop., 1976, 20, 302-313.
Marcus, M.P. A Theory of Syntactic Recognition for Natural Language. MIT Press, 1980.
McCune, B.P. Building Program Models Incrementally from Informal Descriptions. PhD thesis, Stanford Univ., 1979. AIM-333.
Roberts, R.B.; Goldstein, I.P. The FRL Manual. Technical Report, MIT, AI Lab., 1977. memo 409.
Sugiyama, K.; Yachida, M.; Makinouchi, A. A Tool for Natural Language Analysis: TEC. 25th Annual Convention, Information Processing Society of Japan, 1982, 1033-1034. [in Japanese].
Sugiyama, K.; Akiyama, K.; Kameda, M.; Makinouchi, A. An Experimental Interactive Natural Language Programming System. The Transactions of the Institute of Electronics and Communication Engineers of Japan, 1984, J67-D(3), 297-304. [in Japanese, and is being translated into English by USC Information Sciences Institute].
Ueda; Kanno; Honda. Development of Japanese Programming Language on Personal Computer. Nikkei Computer, 1983, (34), 110-131. [in Japanese].
1984
79
Features and Values

Lauri Karttunen
University of Texas at Austin
Artificial Intelligence Center, SRI International
and Center for the Study of Language and Information, Stanford University

Abstract

The paper discusses the linguistic aspects of a new general purpose facility for computing with features. The program was developed in connection with the course I taught at the University of Texas in the fall of 1983. It is a generalized and expanded version of a system that Stuart Shieber originally designed for the PATR-II project at SRI in the spring of 1983 with later modifications by Fernando Pereira and me. Like its predecessors, the new Texas version of the "DG (directed graph)" package is primarily intended for representing morphological and syntactic information but it may turn out to be very useful for semantic representations too.

1. Introduction

Most schools of linguistics use some type of feature notation in their phonological, morphological, syntactic, and semantic descriptions. Although the objects that appear in rules and conditions may have atomic names, such as "k," "NP," "Subject," and the like, such high-level terms typically stand for collections of features. Features, in this sense of the word, are usually thought of as attribute-value pairs: [person: 1st], [number: sg], although singleton features are also admitted in some theories. The values of phonological and morphological features are traditionally atomic, e.g. 1st, 2nd, 3rd; they are often binary: +, -. Most current theories also allow features that have complex values. A complex value is a collection of features, for example:

[agreement: [person: 3rd
             number: sg]]

Lexical Functional Grammar (LFG) [Kaplan and Bresnan, 83], Unification Grammar (UG) [Kay, 79], Generalized Phrase Structure Grammar (GPSG) [Gazdar and Pullum, 82], among others, use complex features.

Another way to represent feature matrices is to think of them as directed graphs where values correspond to nodes and attributes to vectors: for the matrix above, an "agreement" arc leads to a node from which "number" and "person" arcs lead to the values sg and 3rd.

In graphs of this sort, values are reached by traversing paths of attribute names. We use angle brackets to mark expressions that designate paths. With that convention, the above graph can also be represented as a set of equations:

<agreement number> = sg
<agreement person> = 3rd

Such equations also provide a convenient way to express conditions on features. This idea lies at the heart of UG, LFG, and the PATR-II grammar for English [Shieber, et al., 83] constructed at SRI. For example, the equation

<subject agreement> = <predicate agreement>

states that subject and predicate have the same value for agreement. In graph terms, this corresponds to a lattice where two vectors point to the same node: the "subject agreement" and "predicate agreement" paths converge on a single node carrying number sg and person 3rd.
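The directed-graph view is easy to emulate with nested dictionaries in which shared substructure is literal object identity. The sketch below is my illustration, not part of the DG package:

# Feature structures as nested dicts; a path equation as a shared object.
agr = {"number": "sg", "person": "3rd"}
sentence = {"subject":   {"agreement": agr},
            "predicate": {"agreement": agr}}  # <subject agreement> = <predicate agreement>

def follow(fs, path):
    """Return the value at a path such as ('subject', 'agreement', 'number')."""
    for attr in path:
        fs = fs[attr]
    return fs

print(follow(sentence, ("predicate", "agreement", "person")))  # 3rd
# Because the two paths share one node, updating one updates the other:
agr["gender"] = "fem"
print(follow(sentence, ("subject", "agreement", "gender")))    # fem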
Because feature matrices (lattice nodes) are sets of attribute-value pairs, unification is closely related to the operation of forming a union of two sets. However, while the latter always yields something-at least the null set, unification is an operation that may fail or succeed. When it fails, no result is produced and the operands remain un- changed; when it succeeds, the operands are permanently altered in the process. They become the same object. This is an important characteristic. The result of unifying three or more graphs in pairs with one another does not depend on the order in which the operations are performed. They all become the same graph at the end. If graphs A and B contain the same attribute but have incompatible values for it, they cannot be unified. If A and B arc compatible, then (Unify A B) contains every attribute that appears only in A or only in B with the value it has there. If some attribute appears both in A and B, then the value of that attribute in (Unify A B) is the unification of the two values. For example, r . rnumber* )" == I sgreernent: be,son: 2n J [case: nominative r II B " lagreement: Iperson: 3rd / Lgender* m.sc, j Lease: genitive (Generalige A B) = [agreement: ['number:. SI~.~] Generalization seems to be a very useful notion for ex- pressing how number and gender agreement works in coor- dinate noun phrases. One curious fact about coordination is that conjunction of "I" with "you" or "he" in the subject position typically produces first person verb agreement. In sentences like "he and I agree" the verb has the same form as in "we agree. " The morphological equivalence of "he" and I," "you and I," and "we" is partially obscured in En- glish but very clear in many other languages. The problem is discussed in Section V below. 3. Limitations of Some Current For- malisms Most current grammar formalisms for features have certain built-in limitations. Three are relevant here: • no cyclic structures • no negation • no disjunction. The prohibition against cyclicity rules out structures that contain circular paths, as in the following example. A = [agreement: ['number:, pill] B = (Unify A B) I: greement: ['person: 31u:l]l ase: nominative -r.... , I' ge e.' be,=on: Lease: nominative Simple cases of grammatical concord, such as number, case and gender agreement between determiners and nouns in many languages, can be expressed straight-forwardly by stating that the values of these features must unify. Another useful operation on feature matrices is gen- eralization. It is closely related to set intersection. The generalization of two simple matrices A and B consists of the attribute-value pairs that A and B have in common. If the ~lues themselves are complex, we take the general- ization of those values. For example, a Here the path <a b c> folds back onto itself, that is, <a> = <a b c>. It is not clear whether such descriptions should be ruled out on theoretical grounds. Whatever the case might be, current implementations of LFG, UG, or GPSG with which I am familiar do not support them. The prohibition against negation makes it impossible to characterize a feature by saying that it does NOT have such and such a value. None of the above theories allows specifications such as the following. We use the symbol "-" to mean 'not.' [o==,: dat]] 29 [.°,..o.o, The first statement says that case is "not dative," the second says that the value of agreement is "anything but 3rd person singular." 
Not allowing disjunctive specifications rules out matrices of the following sort. We indicate disjunction by enclosing the alternative values in {}.

[case: {nom acc}
 agreement: {[gender: fem
              number: sg]
             [number: pl]}]

The first line describes the value of case as being "either nominative or accusative." The value for agreement is given as "either feminine singular or plural." Among the theories mentioned above, only Kay's UG allows disjunctive feature specifications in its formalism. (In LFG, disjunctions are allowed in control equations but not in the specification of values.)

Of the three limitations, the first one may be theoretically justified since it has not been shown that there are phenomena in natural languages that involve circular structures (cf. [Kaplan and Bresnan, 83], p. 281). PATR-II at SRI and its expanded version at the University of Texas allow such structures for practical reasons because they tend to arise, mostly inadvertently, in the course of grammar construction and testing. An implementation that does not handle unification correctly in such cases is too fragile to use.

The other two restrictions are linguistically unmotivated. There are many cases, especially in morphology, in which the most natural feature specifications are negative or disjunctive. In fact, the examples given above all represent such cases.

The first example, [case: -dat], arises in the plural paradigm of words like "Kind" child in German. Such words have two forms in the plural: "Kinder" and "Kindern." The latter is used only in the plural dative, the former in the other three cases (nominative, genitive, accusative). If we accept the view that there should be just one rather than three entries for the plural suffix "-er", we have the choice between

-er = [number: pl
       case: {nom gen acc}]

-er = [number: pl
       case: -dat]

The second alternative seems preferable given the fact that there is, in this particular declension, a clear two-way contrast. The marked dative is in opposition with an unmarked form representing all the other cases.

The second example is from English. Although the features "number" and "person" are both clearly needed in English verb morphology, most verbs are very incompletely specified for them. In fact, the present tense paradigm of all regular verbs just has two forms of which one represents the 3rd person singular ("walks") and the other ("walk") is used for all other persons. Thus the most natural characterization for "walk" is that it is not 3rd person singular. The alternative is to say, in effect, that "walk" in the present tense has five different interpretations.

The system of articles in German provides many examples that call for disjunctive feature specifications. The article "die," for example, is used in the nominative and accusative cases of singular feminine nouns and all plural nouns. The entry given above succinctly encodes exactly this fact.

There are many cases where disjunctive specifications seem necessary for reasons other than just descriptive elegance. Agreement conditions on conjunctions, for example, typically fail to exclude pairs where differences in case and number are not overtly marked. For example, in German [Eisenberg, 73] noun phrases like

des Dozenten (gen sg) 'the docent's'
der Dozenten (gen pl) 'the docents''

can blend as in

der Antrag des oder der Dozenten
'the petition of the docent or docents'.
This is not possible when the noun is overtly marked for number, as in the case of "des Professors" (gen sg) and "der Professoren" (gen pl):

*der Antrag des oder der Professors
*der Antrag des oder der Professoren
'the petition of the professor or professors'

In the light of such cases, it seems reasonable to assume that there is a single form, "Dozenten," which has a disjunctive feature specification, instead of postulating several fully specified, homonymous lexical entries. It is obvious that the grammaticality of the example crucially depends on the fact that "Dozenten" is not definitely singular or definitely plural but can be either.

4. Unification with Disjunctive and Negative Feature Specifications

I sketch here briefly how the basic unification procedure can be modified to admit negative and disjunctive values. These ideas have been implemented in the new Texas version of the PATR-II system for features. (I am much indebted to Fernando Pereira for his advice on this topic.)

Negative values are created by the following operation. If A and B are distinct, i.e. contain a different value for some feature, then (Negate A B) does nothing to them. Otherwise both nodes acquire a "negative constraint." In effect, A is marked with -B and B with -A. These constraints prevent the two nodes from ever becoming alike. When A is unified with C, unification succeeds only if the result is distinct from B. The result of (Unify A C) has to satisfy all the negative constraints of both A and C, and it inherits all those that could fail in some later unification.
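A sketch of this negative-constraint bookkeeping, in illustrative Python; the Texas implementation works on directed graphs and attaches constraints to nodes, so this one-level dictionary version only shows the idea:

def distinct(a, b):
    """Two structures are distinct if some shared attribute differs."""
    return any(k in b and a[k] != b[k] for k in a)

def negate(a, b, constraints):
    """(Negate A B): if a and b are not already distinct, forbid a = b."""
    if not distinct(a, b):
        constraints.setdefault(id(a), []).append(b)
        constraints.setdefault(id(b), []).append(a)

def unify_checked(a, c, constraints):
    result = dict(a)
    result.update(c)                      # simplistic one-level unify
    for banned in constraints.get(id(a), []) + constraints.get(id(c), []):
        if not distinct(result, banned):  # result must stay distinct from it
            raise ValueError("violates a negative constraint")
    return result

cons = {}
walk = {"cat": "verb"}
negate(walk, {"person": "3rd", "number": "sg"}, cons)   # "walk" is not 3rd-sg
print(unify_checked(walk, {"person": "1st"}, cons))     # succeeds
unify_checked(walk, {"person": "3rd", "number": "sg"}, cons)  # raises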
Disjunction is more complicated. Suppose A, B and C are all simple atomic values. In this situation C unifies with {A B} just in case it is identical to one or the other of the disjuncts. The result is C. Now suppose that A, B, and C are all complex. Furthermore, let us suppose that A and B are distinct but C is compatible with both of them, as in the following:

A = [gender: fem
     number: sg]

B = [number: pl]

C = [case: acc]

What should be the result of (Unify {A B} C)? Because A and B are incompatible, we cannot actually unify C with both of them. That operation would fail. Because there is no basis for choosing one, both alternatives have to be left open. Nevertheless, we need to take note of the fact that either A or B is to be unified with C. We can do this by making the result a complex disjunction.

C' = {(A C) (B C)}

The new value of C, C', is a disjunction of tuples which can be, but have not yet been, unified. Thus (A C) and (B C) are sets that consist of compatible structures. Furthermore, at least one of the tuples in the complex disjunction must remain consistent regardless of what happens to A and B. After the first unification we can still unify A with any structure that it is compatible with, such as:

D = [case: nom]

If this happens, then the tuple (A C) is no longer consistent. A side effect of A becoming

A' = [gender: fem
      number: sg
      case: nom]

is that C' simultaneously reduces to {(B C)}. Since there is now only one viable alternative left, B and C can at this point be unified. The original result from (Unify {A B} C) now reduces to the same as (Unify B C).

C'' = {(B C)} = [number: pl
                 case: acc]

As the example shows, once C is unified with {A B}, A and B acquire a "positive constraint." All later unifications involving them must keep at least one of the two pairs (A C), (B C) unifiable. If at some later point one of the two tuples becomes inconsistent, the members of the sole remaining tuple finally can and should be unified. When that has happened, the positive constraint on A and B can also be discarded. A more elaborate example of this sort is given in the Appendix.

Essentially the same procedure also works for more complicated cases. For example, unification of {A B} with {C D} yields {(A C) (A D) (B C) (B D)}, assuming that the two values in each tuple are compatible. Any pairs that could not be unified are left out. The complex disjunction is added as a positive constraint to all of the values that appear in it. The result of unifying {(A C) (B C)} with {(D F) (E F)} is {(A C D F) (A C E F) (B C D F) (B C E F)}, again assuming that no alternative can initially be ruled out.

As for generalization, things are considerably simpler. The result of (Generalize A B) inherits both negative and positive constraints of A and B. This follows from the fact that the generalization of A and B is the maximal subgraph of A and B that will unify with either one of them. Consequently, it is subject to any constraint that affects A or B. This is analogous to the fact that, in set theory,

(A - C) ∩ (B - D) = (A ∩ B) - (C ∪ D)

In our current implementation, negative constraints are dropped as soon as they become redundant as far as unification is concerned. For example, when [case: acc] is unified with [case: -dat], the resulting matrix is simply [case: acc]. The negative constraint is eliminated since there is no possibility that it could ever be violated later. This may be a wrong policy. It has to be modified to make generalization work as proposed in Section V for structures with negative constraints. If generalization is defined as we have suggested above, negative constraints must always be kept because they never become redundant for generalization.

When negative or positive constraints are involved, unification obviously takes more time. Nevertheless, the basic algorithm remains pretty much the same. Allowing for constraints does not significantly reduce the speed at which values that do not have any get unified in the Texas implementation.

In the course of working on the project, I gained one insight that perhaps should have been obvious from the very beginning: the problems that arise in this connection are very similar to those that come up in logic programming.
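The complex-disjunction bookkeeping described earlier in this section can be sketched as follows. This is my illustration over flat dictionaries; tuples hold structures that are known to be pairwise compatible but have not yet been merged:

def compatible(x, y):
    return all(k not in y or y[k] == x[k] for k in x)

def merge(*structs):
    out = {}
    for s in structs:
        out.update(s)
    return out

def unify_with_disjunction(disjuncts, c):
    """(Unify {A B ...} C): keep every alternative that is compatible with c."""
    tuples = [(d, c) for d in disjuncts if compatible(d, c)]
    if not tuples:
        raise ValueError("unification failure")
    if len(tuples) == 1:          # a single survivor can be unified at once
        return merge(*tuples[0])
    return tuples                 # otherwise a complex disjunction, left open

A = {"gender": "fem", "number": "sg"}
B = {"number": "pl"}
C = {"case": "acc"}
c_prime = unify_with_disjunction([A, B], C)   # {(A C) (B C)}: both still viable
A["case"] = "nom"                 # later, A is unified with D = [case: nom]
survivors = [t for t in c_prime if compatible(*t)]
print(merge(*survivors[0]))       # {'number': 'pl', 'case': 'acc'}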
When a second person pronoun is conjoined with a third person NP, the resulting conjunction has the agreement properties of a second person pronoun. Schematically: let + 2nd - Is~ ts~ + 3rd -Ist 2nd + 3rd - 2nd. Sag, Gazdar, Wasow, and Weisler [841 propose a so- lution which is based on the idea of deriving the person feature for a coordinate noun phrase by generalization (in- tersection) from the person features of its heads. It is ob- vious that the desired effect can be obtained in any feature system that uses the fewest features to mark 1st person, some additional feature for 2nd person, and yet another for 3rd person. Because generalization of 1st and 2nd, for ex- ample, yields only the features that two have in common, the one with fewest features wins. Any such solution can probably be implemented easily in the framework outlined above. However, this proposal has one very counterintuitive aspect: markedness hierar- chy is the reverse of what traditionally has been assumed. Designating something as 3rd person requires the greatest number of feature specifications. In the Sag et ai. system, 3rd person is the most highly marked member and 1st per- son the least marked member of the trio. Traditionally, 3rd person has been regarded as the unmarked case. In our system, there is a rather simple solution under which the value of person feature in coordinate NPs is de- rived by generalization, just as Sag it et al. propose, which nevertheless preserves the traditional view of markedness. The desired result can be obtained by using negative con- straints rather than additional features for establishing a markedness hierarchy. For example, the following feature specifications have the effect that we seek. 181; == Foonversant: +] Lspeake~ + 2rid :" Fc°nversant: +1 [speaker: -- 3rd " ['conversant: "1 Lspeake~ o The corresponding negative constraints are: ,.,. r-roo,,,,.,...,.-]] L tspeaker. - 2nd =" [--['conversant:-]] 3rd - (no constraints) Assuming that generalization with negative constraints works as indicated above, i.e. negative constraints are al- ways inherited, it immediately follows that the generaliza- tion of Ist person with any other person is compatible with only 1st person and that 2nd person wins over 3rd when they are combined. The results are as follows. rconversant: +]] 181; + 2rid = ]_Foonversant: L L speaker, - ,,,,,. ,rd- _ I-,pea,,.,.: _ ] 2nd + 3rd = .] Note that the proper part of lst+2nd excludes 3rd person. It is compatible with both 1st and 2nd person but the negative constraint rules out the latter one. In th~ case of lst+3rd, the negative constraint is compatible with 1st person but incompatible with 2nd and 3rd. In the last case, the specification [speaker: -] rules out 1st person and the negative constraint -[conversant: -] eliminates 3rd person. When negative constraints are counted in, 1st person is the most and 3rd person the least marked member of the three. In that respect, the proposed analysis is in line with traditional views on markedness. Another relevant observation is that the negative constraints on which the result crucially depends are themselves not too unnatural. In effect, they say of 1st person that it is "neither 2nd nor 3rd" and that 2nd person is "not 3rd." It will be interesting to see whether other cases of markedness can be analyzed in the same way. 32 6. 
6. Acknowledgements

I am indebted to Martin Kay for introducing me to unification and to Fernando Pereira, Stuart Shieber, Remo Pareschi, and Annie Zaenen for many insightful suggestions on the project.

References

Eisenberg, Peter, "A Note on Identity of Constituents," Linguistic Inquiry 4:3, 117-20 (1973).
Gazdar, Gerald and G. Pullum. "Generalized Phrase Structure Grammar: A Theoretical Synopsis." Indiana University Linguistics Club, Bloomington, Indiana (1982).
Kaplan, Ronald M. and Joan Bresnan, 1983: "Lexical-Functional Grammar: A Formal System for Grammatical Representation," Ch. 4 in J. Bresnan (ed.), The Mental Representation of Grammatical Relations, Cambridge, MIT Press.
Kay, Martin, 1979: "Functional Grammar." Proceedings of the Fifth Annual Meeting of the Berkeley Linguistics Society, Berkeley Linguistics Society, Berkeley, California (February 17-19, 1979), pp. 142-158.
Pereira, Fernando and Stuart Shieber, 1984: "The Semantics of Grammar Formalisms Seen as Computer Languages." Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California (4-7 July, 1984).
Sag, Ivan, Gerald Gazdar, Thomas Wasow, and Steven Weisler, 1984: "Coordination and How to Distinguish Categories." CSLI Report No. 3, Center for the Study of Language and Information, Stanford, Ca. (March 1984).
Shieber, S., H. Uszkoreit, F. Pereira, J. Robinson, and M. Tyson, 1983: "The Formalism and Implementation of PATR-II," in B. Grosz and M. Stickel, Research on Interactive Acquisition and Use of Knowledge, SRI Final Report 1894, SRI International, Menlo Park, California (November 1983).

A. Appendix: Some Examples of Unification

(These examples were produced using the Texas version of the DG package.)

[The machine-printed examples are not reproduced in full. They include feature matrices for the German forms "die," "den," and "Kinder," showing e.g. that "die Kinder" and "den Kindern" unify successfully while "den Kinder" fails, and for English "he" and "do," where "I do" succeeds and "he do" fails, together with a schematic example of (Unify x y) and (Unify (Unify x y) z).]
1984
8
TWO-WAY FINITE AUTOMATA AND DEPENDENCY GRAMMAR: A PARSING METHOD FOR INFLECTIONAL FREE WORD ORDER LANGUAGES 1

Esa Nelimarkka, Harri Jäppinen and Aarno Lehtola
Helsinki University of Technology
Helsinki, Finland

ABSTRACT

This paper presents a parser of an inflectional free word order language, namely Finnish. Two-way finite automata are used to specify a functional dependency grammar and to actually parse Finnish sentences. Each automaton gives a functional description of a dependency structure within a constituent. Dynamic local control of the parser is realized by augmenting the automata with simple operations to make the automata, associated with the words of an input sentence, activate one another.

I INTRODUCTION

This paper introduces a computational model for the description and analysis of an inflectional free word order language, namely Finnish. We argue that such a language can be conveniently described in the framework of a functional dependency grammar which uses formally defined syntactic functions to specify dependency structures and deep case relations to introduce semantics into syntax. We show how such a functional grammar can be compactly and efficiently modelled with finite two-way automata which recognize the dependants of a word in various syntactic functions on both of its sides and build corresponding dependency structures. The automata, along with formal descriptions of the functions, define the grammar. The functional structure specifications are augmented with simple control instructions so that the automata associated with the words of an input sentence actually parse the sentence. This gives a strategy of local decisions resulting in a strongly data driven left-to-right and bottom-up parse.

A parser based on this model is being implemented as a component of a Finnish natural language data base interface where it follows a separate morphological analyzer. Hence, throughout the paper we assume that all relevant morphological and lexical information has already been extracted and is computationally available for the parser.

1 This research is supported by SITRA (Finnish National Fund for Research and Development).

Although we focus on Finnish, we feel that the model and its specification formalism might be applicable to other inflectional free word order languages as well.

II LINGUISTIC MOTIVATION

There are certain features of Finnish which suggest us to prefer dependency grammar to pure phrase structure grammars as a linguistic foundation of our model.

Firstly, Finnish is a "free word order" language in the sense that the order of the main constituents of a sentence is relatively free. Variations in word order configurations convey thematical and discursional information. Hence, the parser must be ready to meet sentences with variant word orders. A computational model should acknowledge this characteristic and cope efficiently with it. This demands a structure within which word order variations can be conveniently described. An important case in point is to avoid structural discontinuities and holes caused by transformations.

We argue that a functional dependency-constituency structure induced by a dependency grammar meets the requirements. This structure consists of part-of-whole relations of constituents and labelled binary dependency relations between the regent and its dependants within a constituent. The labels are pairs which express syntactic functions and their semantic interpretations.
For example, the sentence "Nuorena poika heitti kiekkoa" ("As young, the boy used to throw the discus") has the structure

heitti
  adverbial: Nuorena
  subject: poika
  object: kiekkoa

or, equivalently, the linearized structure

((Nuorena)advl (poika)subj heitti (kiekkoa)obj)

In this structure each word appears as a complex of its syntactic, morphological and semantic properties. Hence, our sentence structure is a labelled tree whose nodes are complex expressions.

The advantage of the functional dependency structures lies in the fact that many word order varying transformations can be described as permutations of the head and its labelled dependants in a constituent. Reducing the depth of structures (e.g. by having a verb and its subject, object, and adverbials on the same level), we bypass many discontinuities that would otherwise appear in a deeper structure as a result of certain transformations. As an example we have the permutations

((Poika)subj heitti (kiekkoa)obj (nuorena)advl)
(Heittikö (poika)subj (nuorena)advl (kiekkoa)obj)
((Kiekkoako)obj (poika)subj heitti (nuorena)advl)

("The boy used to throw the discus when he was young", "Did the boy use to throw...?", "Was it the discus that the boy used to throw...?", respectively.)

The second argument for our choices is the well acknowledged prominent role of a finite verb in regard to the form and meaning of a sentence. The meaning of a verb includes, for example, knowledge of its deep cases, and the choice of a particular verb to express this meaning determines to a great extent what deep cases are present on the surface level and in what functions. Moreover, due to the relatively free word order of Finnish, the main means of indicating the function of a word in a sentence is the use of surface case suffixes, and very often the actual surface case depends not only on the intended function or role but on the verb as well.

Finally, we wish to describe the sentence analysis as a series of local decisions of the following kind. Suppose we have a sequence C1, ..., Ci-1, Ci, Ci+1, ..., Cn of constituents as a result of earlier steps of the analysis of an input sentence, and assume further that the focus of the analyzer is at the constituent Ci. In such a situation the parser has to decide whether Ci is

(a) a dependant of the left neighbour Ci-1,
(b) the regent of the left neighbour Ci-1,
(c) a dependant of some forthcoming right neighbour, or
(d) the regent of some forthcoming right neighbour.

Observe that decisions (c) and (d) refer either to a constituent which already exists on the right side of Ci or to one which will appear there after some steps of the analysis. Further, it should be noticed that we do not want the parser to make any hypothesis about the syntactic or semantic nature of the possible dependency relation in (a) and (c) at this moment.

We claim that a functional combination of dependency grammar and case grammar can be put into a computational form, and that the resulting model efficiently takes advantage of the central role of a constituent head in the actual parsing process by letting the head find its dependants using functional descriptions. We outline in the next sections how we have done this with formally defined functions and 2-way automata.

III FORMALLY DEFINED SYNTACTIC FUNCTIONS

We abstract the restrictions imposed on the head and its dependant in a given subordinate relation.
Recall that a constituent consists of the head - a word regarded as a complex of its relevant properties - and of the dependants - from zero to n (sub)constituents. The traditional parsing categories such as the (deep structure) subject, object, adverbial and adjectival attribute will be modelled as functions f: Df -> C, where C is the set of constituents and Df ⊆ C × C is the domain of the function.

The domain of a function f will be defined with a kind of Boolean expression over predicates which test properties of the arguments, i.e. the regent and the potential dependant. In the analysis this relation is used to recognize and interpret an occurrence of a <head, dependant>-pair in the given relation. The actual mapping of such pairs into C builds the structure corresponding to this function.

For notational and implementational reasons we specify the functions with a conditional expression formalism. A (primitive) conditional expression is either a truth valued predicate which tests properties of a potential constituent head (R) and its dependant (D) and deletes non-matching interpretations of an ambiguous word, or an action which performs one of the basic construction operations such as labelling (:=), attaching (:-), or deletion, and returns a truth value. Primitive expressions can be written in series (P1 P2 ... Pn) or in parallel (P1; P2; ...; Pn) to yield complex expressions. Logically, the former corresponds roughly to an and-operation and the latter to an or-operation. A conditional operation -> and recursion yield new complex expressions from old ones.
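The series/parallel reading of the formalism can be mirrored with a few combinators. The sketch below is illustrative Python, not the authors' notation, and the sample predicate and action are invented:

def series(*exprs):
    """(P1 P2 ... Pn): succeeds only if every expression succeeds, in order."""
    return lambda r, d: all(p(r, d) for p in exprs)

def parallel(*exprs):
    """(P1; P2; ...; Pn): succeeds if some expression succeeds."""
    return lambda r, d: any(p(r, d) for p in exprs)

def cond(test, action):
    """test -> action: the action runs only when the test holds."""
    return lambda r, d: test(r, d) and action(r, d)

# A toy predicate and action for an object-like relation.
is_partitive  = lambda r, d: d.get("case") == "partitive"
attach_object = lambda r, d: r.setdefault("deps", []).append(("Object", d)) or True

Object = cond(is_partitive, attach_object)
verb = {"lex": "heitti"}
print(Object(verb, {"lex": "kiekkoa", "case": "partitive"}), verb["deps"])
# True [('Object', {'lex': 'kiekkoa', 'case': 'partitive'})]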
In a free Word order language we would f~ce, for exile, a paradigm fragment of the form (subj) V (obj) (advl) (advl) (subj) V (obj) V (subj) (obj) (advl) (obj) (subj) V (advl) for functional dependency structures of a verb. (Observe that we do not assume transformations to describe the variants. ) We combine the descriptions of such a paradigm int~ a m~dified two-way finite automaton. A 2-way finite automaton consists of a set nf states, one of which is the initial state and some of which are final states, and of a set of transition arcs between the states. Each arc recognizes a word, changes the state of the automaton and moves the reading head either to the left or right. We modify this standard notion to recognize left and right dependants of a word starting from its immediate neighbour. Instead of recognizing words (or word categories) these automata recognize functions, i.e. instances of abstract relations between a postulated head and its either neighbour. In addition to a mare recognition the transitions build the structures determined by the observed function, e.g. attach the neighbour as a dependant, label it in agreement with the function and its interpretation. STATE.. ~ LE.CT ((D • +PhriSe) -) (Subject -) (C I, WS }); (Objlct -) (C I, WO )); CAdv~bJal -) (C S, .W |); (SenSubj -) (C :, VS? )); +(Snti4vl -) (C :, .W )); • IT ,) IC t'~ ))); lID • -Phrast) -) (C ;- V? )) |TAT[." V? RISHT |(D • *Phrase) -) {Subject -) (C s- VS? )); (Object -) (C ,,. V~ )); (SlmtPmbj -) |C ,,,- ~r-~-.ntS?)); (SntOA| -) (C s. VgmtO? )); |Mverbial -) (C :, I1? ))t |SentMvl -) (C t" VSmttt? )); ¢T -) ¢C *, "%'Final )|); led • -Phrise) -) (C ,,, V? )(JuildPhra|eOn RIGHT)) STATE: WS LEFT (1| • "+Phra$1) -) (Objlct -) (C I, ?VSO )); (AdvlrbJ,| -) (C I. WS )); (SlmtMvl -) (C :, VS? }); (T -) (C t" VS? )111 ((S • -IP*rlml) -) (C ,," W? 1) Figure 9. Figure 2. exhibits part of a verb automaton which recognizes and builds, for exm~ple, partial structures like v v V V V //////\ subj , obj , advl , obj subj , advl subj .... The states are divided into 'left' and 'right' states ho indicate the side where the dependant is to be found. Each state indicates the formal functions which are available for a verb in that particular state. A succesfull applicati~ of a f~Jnct[or, transfers the v6.~b [nt~ .~nother :~t~te tc, [~ok for f,rther d_~?endants. 391 Heuristic rules and look-ahead can a]~> used, For example, the rule ((RI = ', )(R2 = 'ett~ )(C = +gattr) -> (C := N?Sattr) (Buil~PhraseOn RI(RT)) in the state N? of the noun automaton anticipates an evident forthcoming sentence attribute of, say, a cognitive noun and sets the noun to the state N?Sattr to wait for this sentence. V PARSING WITH A SE~CE OF 2-WAY AUTCMATA So far we have shc~n how to associate a 2-way automaton to a word via its syntactic category. This gives a local descriotion of the grammar. With a few simple control instructions these local automata are made to activate each other and, after a sequence of local decisions, actually parse an input sentence. An unfinished parse of a sentence consists of a sequence CI,C2,...,C n of constituents, which may be complete or incomplete. Each constituent is associated with an automaton which is in some state and reading position. At any time, exactly one of the automata is active and tries to recognize a neighbouring constituent as a dependant. Most often, only a complete constituent (one featured as '+phrase') qualifies as a potential dependant. 
To start the completion of an incomplete constituent the control has to be moved to its associated automaton. This is done with a kind of push operation (BuildPhraseOn RIGHT) which deactivates the current automaton and activates the neighbour next to the right (see Figure 2). This decision corresponds to a choice of type (d). A complete constituent in a final state will be labelled as a '+phrase' (along with other relevant labels such as '+sentence', '+nominal', '+main'). Operations (FindRegOn LEFT) and (FindRegOn RIGHT), which correspond to choices (a) and (c), deactivate the current constituent (i.e. the corresponding automaton) and activate the leftmost or rightmost constituent, respectively. Observe that the automata need not remember when and why they were activated. Such simple "local control" as we have outlined above yields a strongly data driven bottom-up and left-to-right parsing strategy which also has top-down features as expectations of lacking dependants.

VI DISCUSSION

As we have shown, our parser consists of a collection of finite transition networks which activate each other. The use of 2-way instead of 1-way automata distinguishes our parser from ATN-parsers. (There are also other major differences.) In our dependency oriented model non-terminal categories (S, VP, NP, AP, ...) are not needed, and a constituent is not postulated until its head is found. This feature separates our parser from those which build pure constituent structures without any reference to dependency relations within a constituent. In fact, each word actively collects its dependants to make up a constituent where the word is the head. A further characteristic of our model is the late postulation of syntactic functions and semantic roles. Constituents are built blindly without any predecided purpose so that the completed constituents do not know why they were built. The function or semantic role of a constituent is not postulated until a neighbour is activated to recognize its own dependants. Thus, a constituent just waits to be chosen into some function so that no registers for functions or roles are needed.
INTERRUPTABLE TRANSITION NETWORKS

Sergei Nirenburg
Colgate University

Chagit Attiya
Hebrew University of Jerusalem

ABSTRACT

A specialized transition network mechanism, the interruptable transition network (ITN), is used to perform the last of three stages in a multiprocessor syntactic parser. This approach can be seen as an exercise in implementing a parsing procedure of the active chart parser family.

Most of the ATN parser implementations use the left-to-right top-down chronological backtracking control structure (cf. Bates, 1978 for discussion). The control strategies of the active chart type permit a blend of bottom-up and top-down parsing at the expense of time and space overhead (cf. Kaplan, 1973). The environment in which the interruptable transition network (ITN) has been implemented is not similar to that of a typical ATN model. Nor is it a straightforward implementation of an active chart. ITN is responsible for one stage in a multiprocessor parsing technique described in Lozinskii & Nirenburg (1982a and b), where parsing is performed in essentially the bottom-up fashion in parallel by a set of relatively small and "dumb" processing units running identical software. The process involves three stages: (a) producing the candidate strings of preterminal category symbols; (b) determining the positions in this string at which higher-level constituents start and (c) determining the closing boundaries of these constituents.

Each of the processors allocated to the first stage obtains the set of all syntactic readings of one word in the input string. Using a table grammar, the processors then choose a subset of the word's readings to ensure compatibility with similar subsets generated by this processor's right and left neighbor. Stage 2 uses the results of stage 1 and a different tabular grammar to establish the left ("opening") boundaries for composite sentence constituents, such as NP or PP. The output of this stage assumes the form of a string of triads (label x y), where label belongs to the vocabulary of constituent types. In our implementation this set includes S, NP, VP, PP, NP& (the "virtual" NP), Del (the delimiter), etc. x and y are the left and the right indices of the boundaries of these constituents in the input string. They mark the points at which parentheses are to be opened (x) and closed (y) in the tree representation. The values x and y relate to positions of words in the initial input string. For example, the sentence (1) will be processed at stage 2 into the string (2). The '?' in (2) stand for unknown coordinates y.

(1) The very big brick building that sits
     1    2   3    4      5      6    7
    on the hill belongs to the university.
     8   9   10    11    12  13     14

(2) (s 1 ?)(np 1 ?)(s 6 ?)(np& 6 6)
    (vp 7 ?)(pp 8 ?)(np 9 ?)(vp 11 ?)
    (pp 12 ?)(np 13 ?)

It is at this point that the interruptable transition network starts its work of finding the unknown boundary coordinates and thus determining the upper levels of the parse tree. An input string n triads long will be allocated n identical processors. Initially the chunk of every participating processor will be one triad long. After these processors finish with their chunks (either succeeding or failing to find the missing coordinate) a "change of levels" interrupt occurs: the size of the chunks is doubled and the number of active processors halved. These latter continue the scanning of the ITN from the point they were interrupted taking as input what was formerly the chunk of their right neighbor.
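A sketch of this chunking regime may be useful. The following Python fragment is our own simplification: the scheduling is sequential rather than parallel, and scan_chunk is a toy stand-in for running the ITN over a chunk:

# Sketch of the stage-3 level-doubling scheme, assuming stage 2
# delivers triads (label, x, y) with y == '?' when still unknown.

def scan_chunk(triads):
    # Close what can be closed inside this chunk (toy stand-in:
    # here we only pair a pp with a closed np that follows it).
    for i, (label, x, y) in enumerate(triads):
        if y == "?" and label == "pp" and i + 1 < len(triads):
            nlabel, nx, ny = triads[i + 1]
            if nlabel == "np" and ny != "?":
                triads[i] = (label, x, ny)
    return triads

def stage3(triads):
    chunks = [[t] for t in triads]        # one triad per processor
    while len(chunks) > 1:
        # "change of levels": chunk size doubles, active processor
        # count halves (an odd tail chunk is omitted for brevity)
        chunks = [scan_chunk(a + b)
                  for a, b in zip(chunks[0::2], chunks[1::2])]
    return chunks[0]

print(stage3([("pp", 8, "?"), ("np", 9, 10)]))
# [('pp', 8, 10), ('np', 9, 10)]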
Note that all constituents already closed in that chunk are transparent to the current processor and are not rescanned. The number of active processors steadily reduces during parsing. The choice of processors that are to remain active is made with the help of the Pyramid protocol (cf. Lozinskii & Nirenburg, 1982). The processors released after each "layout" are returned to the system pool of available resources. At the top level in the pyramid only one processor will remain. The status of such a processor is declared final, and this triggers the wrap-up operations and the construction of output. The wrap-up uses the original string of words and the appropriate string of preterminal symbols obtained at stage 1 together with the results of stage 3 to build the parse tree.

ITN can start processing at an arbitrary position in the input string, not necessarily at the beginning of a sentence. Therefore, we introduce an additional subnetwork, "initial", used for handling control flow among the other subnetworks. The list of "closed" constituents obtained through ITN-based parsing of string (2) can be found in (3), while (4) is the output of ITN processing of (3).

(3) (s 1 14)(np 1 10)(s 6 10)(np& 6 6)
    (vp 7 10)(pp 8 10)(np 9 10)(vp 11 14)
    (pp 12 14)(np 13 14)

(4) (s(np(s(np&)(vp(pp(np)))))(vp(pp)))

3. An ITN Interpreter.

The interpreter was designed for a parallel processing system. This goal compelled us to use a program environment somewhat different from the usual practice of writing ATN interpreters. Our interpreter can, however, be used to interpret both ITNs and ATNs. A new type of arc was introduced: the interrupt arc INTR. The interrupt arc is a way out of a network state additional to the regular POP. It gives the process the opportunity to resume from the very point where the interrupt had been called, but at a later stage (this mechanism is rather similar to the detach-type commands in programming languages which support coroutines, such as, for instance, SIMULA). Thus, the interpreter must be able to suspend processing after trying to proceed through any arc in a state and to resume processing later in that very state, from the arc immediately following the interrupt arc. For example, if INTR is the fourth of seven arcs in a state, the work resumes from the fifth arc in this state. This is implemented with a stack in which the transitions in the net are recorded. The PUSH and POP arcs are also implemented through this stack and not through the recursion handling mechanisms built into Lisp.

Since it is never known to any processor whether it will be active at the next stage, it is necessary that the information it obtained be saved in a place where another processor will be able to find it. Unlike the standard ATN parsers (which return the parse tree as the value of the parsing function), the ITN parser records the results in a special working area (see discussion below).

Implementation

The ITN interpreter was implemented in YLISP, the dialect of LISP developed at the Hebrew University of Jerusalem. A special scheduler routine for simulating parallel processes on a VAX 11/780 was written by Jacob Levy. The interpreter also uses the pyramid protocol program by Shmuel Bahr. In what follows we will describe the organization of the stack, the working area, and the program itself.

a) The stack. The item to be stacked must describe a position in the network.
An item is pushed onto the stack every time a PUSH or an INTR arc is traversed. Every time a POP arc is traversed or a return from an interrupt occurs one item is popped. The stack item consists of: 1) names and values of the current network registers; 2) the remainder of the arcs in the state (after the PUSH or the INTR traversed); 3) the actions of the PUSH arc traversed; 4) the name of the current network (i.e. that of the latter's initial state); 5) the value of the input pointer (for the case of a PUSH failure).

The working area is used for two purposes: to support message passing between the processors and to hold the findings. The working area is organized as an array, R, that holds a doubly linked list used to construct the output tree. The actions defined on the working area are: a) initialization (procedure init-input): every cell R[i] in R obtains a token from input, while the links R[i].[next-index] and R[i].[previous-index] obtain the values i+1 and i-1, respectively; b) CLOSE, the tool for delimiting subtrees in the input string. The array R is used in parallel by a number of processors. At every level of processing the active processors' chunks cover the array R. This arrangement does not corrupt the parallel character of the process, since no processor actually seeks information from the chunks other than its own.

The main function of the interpreter is called itn. It obtains the stack containing the history of processing. If an interrupt is encountered, the function returns the stack with new history, to be used for invoking this function again, by the pyramid protocol. If a call to itn is a return from the interrupt status, then a stack item is popped (it corresponds to the last state entered during the previous run). If the function call is the initial one, we start to scan the network from the first state of the "initial" subnetwork. At this stage we already know which state of which network fragment we are in. Moreover, we even know the path through the states and fragments we took in order to reach this state and the exact arc in this state from which we have to start processing. So, we execute the test on the current arc. If the test succeeds we perform branching on the arc name.

The INTR arc has the following syntax: (INTR <dummy> <test> <action>*). The current state is stacked and the procedure is exited returning the stack as the value. <dummy> was inserted simply to preserve the usual convention of situating the test in the third slot in an arc. The ABORT arc has the syntax (ABORT <message> <test>). When we encounter an error and it becomes clear that the input string is illegal, we want to be able to stop processing immediately and print a diagnostic message.

The actions on the stack involve the movement of an item to and from the stack. The stack item is the quantum value that can be pushed and popped, that is, no part of the item is accessed separately from the rest of the values in it. The functions managing the stack are push-on-stack and pop-from-stack. The push-on-stack is called whenever a PUSH or an INTR arc is traversed. The pop-from-stack is called, first, when the POP arc is traversed and, second, when the process resumes after return from an interrupt.

The close action is performed when we find a boundary for a certain subtree for which the opposite boundary is already known (in our case the boundary that is found is always the right boundary, y).
Close performs two tasks: first, it inserts the numeric value for y and, second, it declares the newly built subtree a new token in the input string. For example, if the input string had been

<s 1 ?><np 1 ?><vp 4 ?><np 6 8><pp 9 10>
   1      2       3       4       5

after the action (close 3 10) is performed the input for further processing has the form:

<s 1 ?><np 1 ?><vp 4 10>.

The parameters of close are 1) the number of the triad we want to close and 2) the value for which the y in this triad is to be substituted. The default value for the second parameter is the value of the y in the triad current at the moment a call to close is made. When the processing is parallel, close is applied multiply at every level, which would mean that a higher level processor will obtain prefabricated subtrees as elementary input tokens. This is a major source of the efficiency of multiprocessor parsing.

The ITN in the current implementation is relatively small. A broader implementation will be needed to study the properties of this parsing scheme, including the estimates for its time complexity, and the extendability of the grammar. A comparison should also be made with other multiprocessor parsing schemes, including those that are based not on organizing communication among relatively "dumb" processors running identical software but rather on interaction of highly specialized and "intelligent" processors -- cf., e.g., the word expert parser (Small, 1981).

Acknowledgments. The authors thank E. Lozinskii and Y. Ben Asher for the many discussions of the ideas described in this paper.

Bibliography

Bates, M. (1978), The theory and practice of augmented transition network grammars. In: L. Bolc (ed.), Natural Language Communication with Computers. Berlin: Springer.

Kaplan, R. M. (1973), A general syntactic processor. In: R. Rustin (ed.), Natural Language Processing. NY: Academic Press.

Lozinskii, E.L. and S. Nirenburg (1982a), Locality in natural language processing. In: R. Trappl (ed.), Cybernetics and Systems Research. Amsterdam: North Holland.

Lozinskii, E.L. and S. Nirenburg (1982b), Parallel processing of natural language. Proceedings of ECAI, Orsay, France.

Small, S. (1981), Viewing word expert parsing as a linguistic theory. Proceedings of IJCAI, Vancouver, B.C.

Appendix A. ITN: the main function of the interruptable transition network interpreter

(def itn (lambda ( stack )
  ; stack - current processing stack
  (prog (regs curr-state-arcs net-name curr-arc $ test arc-name)
  ; regs - current registers of the network
  ; curr-state-arcs - list of arcs not yet
  ;   processed in current state
  ; net-name - name of network being
  ;   processed
  ; curr-arc - arc in processing
  ; (all these are pushed on stack when a
  ;  'push' arc occurs)
  ; $ - a special register.
  ; the function first checks if stack is
  ; nil; if not then this call is a return
  ; from interrupt: previous values must be
  ; popped from the stack
  [cond (stack (seta ec pn nil) ;set end-chunk flag to nil
               (pop-from-stack t))
        (t (set-net 'initial]
  loop
  [cond ((null curr-state-arcs)
         (cond ((null (pop nil)) (return nil)]
  (set 'curr-arc (setcdr 'curr-state-arcs))
  (set 'test (*nth curr-arc 3))
  (cond ((eval test) ;test succeeds - traverse the arc
    (set 'arc-name (car curr-arc))
    [cond ((eq arc-name 'push) ; PUSH
           (evlist (*nth curr-arc 4))
           (push-on-stack)
           (set-net (cadr curr-arc))
           (go loop))
          ((eq arc-name 'pop) ; POP
           (evlist (*nthcdr curr-arc 3))
           (cond ((null (pop (eval (cadr curr-arc)))) (return $)))
           (go loop))
          ((eq arc-name 'jump) ; JUMP
           (evlist (*nthcdr curr-arc 3))
           (set-state (*nth curr-arc 2))
           (go loop))
          ((eq arc-name 'to) ; TO
           (evlist (*nthcdr curr-arc 3))
           (set-state (*nth curr-arc 2))
           (get-input)
           (go loop))
          ((eq arc-name 'cat) ; CAT
           (cond ((eq (curr-cat) (*nth curr-arc 2))
                  (evlist (*nthcdr curr-arc 3))))
           (go loop))
          ((eq arc-name 'abort) ; ABORT
           (tpatom (*nth curr-arc 2))
           (return nil))
          ((eq arc-name 'intr) ; INTeRrupt
           (push-on-stack)
           (return stack))
          (t ; error
           (tpatom '"illegal arc")
           (return nil))
    (go loop ] ; try next arc

Appendix B. A Fragment of an ITN network (the "initial" and the sentence subnetworks)

;Note that "jump" and "to" can be either
;terminal actions on an arc or separate
;arcs

(def-net '(s-place)
 '(
 (initial
   (pop t (end-of-sent) (close*))
   (intr nil (end-of-chunk) ((to initial)))
   (push S (lab s) ((setr s-place (inp-pointer))) ((jump initial/DEL)))
   (push NP (lab np) nil ((to initial)))
   (push VP (lab vp) nil ((to initial)))
   (push PP (lab pp) nil ((to initial)))
   (cat np& t (to initial))
   (cat del t (to initial)))
 (initial/DEL
   (cat del t (close* (getr s-place)) (to initial))
   (to initial t]

(def-net '(vp-place no-pp pp-place np-place)
 '(
 (S
   (pop t (is-def (y)) (close (inp-pointer)))
   (to S/ t (setr no-pp 0)))
 (S/
   (intr nil (end-of-chunk) ((to S/)))
   (push PP (and (lab pp) (le (getr no-pp) 2))
     ((and (gt (getr no-pp) 0) (close* (getr pp-place)))
      (setr pp-place (inp-pointer)))
     ((setr no-pp (add1 (getr no-pp))) (jump S/)))
   (abort "more than 2 PPs in S" (lab pp))
   (cat np& t (to S/NP&)) ;(s (pp & pp) ..)
   (cat del (gt (getr no-pp) 0) (close* pp-place) (setr no-pp 1) (to S/))
   (abort "DEL cannot appear at beginning of sent" (lab del))
   (jump S/NP& t]
 (S/NP&
   (intr nil (end-of-chunk) ((to S/NP&)))
   (push NP (lab np)
     ((and (getr pp-place) (close* (getr pp-place)))
      (setr np-place (inp-pointer)))
     ((to S/NP)))
   ;here we can allow PPs after an NP!
   (push VP (lab vp)
     ((and (getr pp-place) (close* (getr pp-place))))
     ((jump S/OUT)))
   (abort "no NP or VP in the input sentence" t)
   (jump S/NP t]
 (S/NP
   (abort "not enough VPs in S" (end-of-sent))
   (intr nil (end-of-chunk) ((to S/NP)))
   (push VP (lab vp)
     ((setr vp-place (inp-pointer))
      (close* (getr np-place))) ;close the preceding NP
                                ;and everything in it
     ((jump S/VP)))
   ;(s .. (np & np) ..)
   (cat del (lab del) (close* (getr np-place)) (to S/NP&))
   (abort "too many NPs before a VP" (lab np]
 (S/VP
   (cat del (lab del) (close* (getr vp-place)) (jump S/VP/DEL))
   (jump S/OUT t]
 (S/VP/DEL ;standing at 'del' and looking ahead
   (abort "del at EOS?" (ge (next-one (inp-pointer)) sent-len))
   ; the above is a test for eos
   (intr nil (null (look-ahead 1)) ((jump S/VP/DEL)))
   (to S/NP (eq (look-ahead 1) 'vp))
   (jump S/OUT t]
 ;exit: it must be an s
 (S/OUT
   (pop t (end-of-sent) (close*))
   (pop t t]
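To complement the YLISP listing above, here is a small Python rendering of the close semantics described in section 3; the list-based representation is our own simplification of the doubly linked array R:

# Sketch of the close action over the working area, assuming triads
# are (label, x, y) with y == '?' when still open. Mirrors the worked
# example in the text.

def close(r, n, y=None):
    # Close triad number n (1-based). If y is omitted, use the y of
    # the triad current at the moment of the call (here: the last
    # one). Closed subtrees to the right become invisible - they are
    # spliced out, so a higher-level processor sees one token.
    label, x, _ = r[n - 1]
    if y is None:
        y = r[-1][2]
    r[n - 1] = (label, x, y)
    del r[n:]   # the newly built subtree is now a single token
    return r

r = [("s", 1, "?"), ("np", 1, "?"), ("vp", 4, "?"),
     ("np", 6, 8), ("pp", 9, 10)]
print(close(r, 3, 10))
# [('s', 1, '?'), ('np', 1, '?'), ('vp', 4, 10)]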
AUTOMATIC CONSTRUCTION OF DISCOURSE REPRESENTATION STRUCTURES

Franz Guenthner
Universitaet Tuebingen
Wilhelmstr. 50
D-7400 Tuebingen, FRG

Hubert Lehmann
IBM Deutschland GmbH
Heidelberg Scientific Center
Tiergartenstr. 15
D-6900 Heidelberg, FRG

Abstract

Kamp's Discourse Representation Theory is a major breakthrough regarding the systematic translation of natural language discourse into logical form. We have therefore chosen to marry the User Specialty Languages System, which was originally designed as a natural language frontend to a relational database system, with this new theory. In the paper we try to show, taking - for the sake of simplicity - Kamp's fragment of English, how this is achieved. The research reported is going on in the context of the project Linguistics and Logic Based Legal Expert System undertaken jointly by the IBM Heidelberg Scientific Center and the Universitaet Tuebingen.

1 Introduction

In this paper we are concerned with the systematic translation of natural language discourse into Discourse Representation Structures as they are defined in Discourse Representation Theory (DRT) first formulated by Kamp (1981). This theory represents a major breakthrough in that it systematically accounts for the context dependent interpretation of sentences, in particular with regard to anaphoric relations. From a syntactic point of view, however, Kamp chose a very restricted fragment of English. It is our goal, therefore, to extend the syntactic coverage for DRT by linking it to the grammars described for the User Specialty Languages (USL) system (Lehmann (1978), Ott and Zoeppritz (1979), Lehmann (1980), Sopena (1982), Zoeppritz (1984)) which are comprehensive enough to deal with realistic discourses. Our main tasks are then to describe

- the syntactic framework chosen
- Discourse Representation Structures (DRSs)
- the translation from parse trees to DRSs

The translation from parse trees to DRSs will, as we shall see, not proceed directly but rather via Intermediate Structures, which were already used in the USL system. Clearly, it is not possible here to describe the complete process in full detail. We will hence limit ourselves here to a presentation of Kamp's fragment of English in our framework.

The work reported here forms part of the development of a Natural Language Analyzer that will translate natural language discourse into DRSs and that is evolving out of the USL system. We intend to use this Natural Language Analyzer as a part of a legal expert system the construction of which is the objective of a joint project of the University of Tuebingen and the IBM Heidelberg Scientific Center.

2 Syntax

2.1 Syntactic framework and parsing process

The parser used in the Natural Language Analyzer was originally described by Kay (1967) and subsequently implemented in the REL system (Thompson et al. (1969)). The Natural Language Analyzer uses a modified version of this parser which is due to Bertrand et al. (1976; IBM (1981)). Each grammar rule contains the name of an interpretation routine, and hence each node in the parse tree for a given sentence also contains the name of such a routine. The semantic executer invokes the interpretation routines in the order in which they appear in the parse tree, starting at the root of the tree.
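As an illustration of this rule-driven execution regime, consider the following Python sketch. The routine names follow section 2.3 below, but the tree-walking code and the dictionary representation are our own simplification, not the actual system:

# Sketch of a semantic executer, assuming each parse tree node names
# an interpretation routine and lists its children. Execution starts
# at the root; nested routine calls mirror nestings such as
# NOM(ACC(VERB(2), 3), 1) in the rules of section 2.3.1.

def VERB(word):          # toy routines standing in for the real ones
    return {"rel": word, "args": {}}

def NOM(verbstr, np):
    verbstr["args"]["nom"] = np
    return verbstr

def ACC(verbstr, np):
    verbstr["args"]["acc"] = np
    return verbstr

ROUTINES = {"VERB": VERB, "NOM": NOM, "ACC": ACC}

def execute(node):
    # node = (routine-name, [children]) or a lexical leaf (a string)
    if isinstance(node, str):
        return node
    name, children = node
    return ROUTINES[name](*[execute(c) for c in children])

# "Pedro owns Chiquita" via rule 8: NOM(ACC(VERB(2), 3), 1)
tree = ("NOM", [("ACC", [("VERB", ["owns"]), "Chiquita"]), "Pedro"])
print(execute(tree))
# {'rel': 'owns', 'args': {'acc': 'Chiquita', 'nom': 'Pedro'}}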
2.2 Syntactic coverage

The syntactic coverage of the Natural Language Analyzer presently includes:

- Nouns
- Verbs
- Adjectives and adjectival phrases: gradation, modification by modal adverbial, modification by ordinal number
- Units of measure
- Noun phrases: definiteness, quantification, interrogative pronouns, personal pronouns, possessive pronouns, relative pronouns
- Verb complements: subjects and nominative complements, direct objects, indirect objects, prepositional objects
- Noun complements: relative clauses, participial attribute phrases, genitive attributes, appositions, prepositional attributes
- Complements of noun and verb: negation, locative adverbials, temporal adverbials
- Coordination for nouns, noun phrases, adjectives, verb complexes and sentences
- Comparative constructions
- Subordinate clauses: conditionals
- Sentences: declarative sentences, questions, commands
In order to come to adequate meaning representations it has also to be distinguished whether RELATIONs stand for verbs or nominals, therefore the notions VERBSTR and NOMSTR have been introduced in addition. In case of coordinate structures a branching is needed for the ARGUMENTs. It is provided by COORD. In- formation not needed to treat the Kamp fragment is left out here to simplify the presentation. 3.1.1 Relation nodes and Argument nodes Nodes of type Relation contain the relation name and pointers to first and last ARGUMENT. Nodes of type Argument contain the following infor- mation: type, standard role name, pointers to the node representing the contents of the argument, and to the previous and next ARGUMENTs. 3.1.2 Verb nodes Verb nodes consist of a VERBSTR with a pointer to a RELATION. That is verb nodes are Relation nodes where the relation corresponds to a verb. Verb nodes (VERBSTR) contain a pointer to the RE- LATION represented by the verb• They can be ARGUMENTs, e.g., when they represent a relative clause (which modifies a noun, i.e. is attached to a RELATION in a nominal node). 3.1.3 Nominal nodes Nominal nodes are Argument nodes where the AR- GUMENT contains a nominal element, i.e. a noun, an adjective, or a noun phrase. They contain the fol- lowing information in NOMSTR: type on noun, a pointer to contents of NOMSTR, congruence informa- tion (number and gender), quantifier, a pointer to referent of demonstrative or relative pronoun. 3.1.4 Formation rules for Intermediate ¢:truetures 1. An Intermediate Structure representing a sen- tence is called a sentential Intermediate Struct,~re (SIS). Any well-formed Intermediate Structure represent- ing a sentence has a verb node as its root. 399 2. An Intermediate Structure with an Argument node as root is called an Argument Intermediate Structure (AIS). An Intermediate Structure representing a nominal is an AIS. 3. If s is a SIS and a is an AIS, then s' is a well-formed SIS, if s' is constructed from s and a by attaching a as last element to the list of ARGUMENTs of the RELATION in the root of s and defining the role name of the ARGUMENT forming the root of a. 4. If n and m are AIS, then n' is a well-formed AIS, if the root node of n contains a RELATION and m is attached to its list of ARGUMENTs and a role name is defined for the ARGUMENT forming the root of m. 5. If s is a SIS and a is an Argument node, then a' is an AIS, if s is attached to a and the argument type is set to VERBSTR. 6. If a and b are AIS and e is an Argument node of type COORD, then c' is an AIS if the contents of a is attached as left part of COORD, the contents of b is attached as right part of COORD, and the con- junction operator is defined. 3.2 The construction of Intermediate Structures from parse trees To cover the Ramp fragment the following interpre- tation routines are needed: PRNAME and NOMEN which map strings of charac- ters to elements of AIS; NPDEF, NPINDEF and blPQUAN which map pairs consisting of strings of characters and elements of AIS to elements of AIS; VERB which maps strings of characters to elements of SIS ; NOM and ACC which operate according to Intermedi- ate Structure formation rule 3; RELCL which applies Intermediate Structure forma- tion rule 5 and then 4; COND which combines a pair of elements of SIS by applying Intermediate Structure formation rule 5 and then rule 3; STMT which maps elements of SIS to DRSs. These routines are applied as indicated in the parse tree and give the desired Intermediate Struc- ture as a result. 
4 Discourse Representation Structures In this section we give a brief description of Kamp's Discourse Representation Theory (DRT). For a more detailed discussion of this theory and its gen- eral ramifications for natural language processing, cf. the papers by Kamp (1981) and Guenthner (1984a, 1984b). According to DRT, each natural language sen- tence (or discourse) is associated with a so-called Discourse Representation Structure (DRS) on the basis of a set of DRS forrnatior rules. These rules are sensitive to both the syntactic structure of the sentences in question as well as to the DRS context in which in the sentence occurs. 4.1 Definition of Discourse Representation Struc- tures A DRS K for a discourse has the general form K = <U, Con> where U is a set of "discourse referents" for K and Con a set of "conditions" on these indi- viduals. Conditions can be either atomic or complex. An atomic condition has the form P(tl,...,tn) or tl=c, where ti is a discourse refer- ent, c a proper name and P an n-place predicate. Of the complex conditions we will only mention "implicational" conditions, written as K1 IMP K2, where K1 and K2 are also DRSs. With a discourse D is thus associated a Discourse Representation Struc- ture which represents D in a quantifier-free "clausal" form, and which captures the propositional import of the discourse. Among other things, DRT has important conse- quences for the treatment of anaphora which are due to the condition that only those discourse referents are admissible for a pronoun that are accessible from the DRS in which the pronoun occurs (A precise de- finition of accessibility is given in Ramp (1981)). Discourse Representation Structures have been implemented by means of the three relations AS- SERTION, ACCESSIBLE, and DR shown in the ap- pendix. These three relations are written out to the relational database system (Astrahan &al (1976)) af- ter the current text has been processed. 4.2 From Intermediate Structures to DRSs The Intermediate Structures are processed starting at the top. The transformation of all the items in the Intermediate Structure are relatively straight- forward, except for the proper semantic represen- tation of pronouns. According to the spirit of DRT, pronouns are assigned discourse referents accessi- ble from the DRS in which the pronoun occurs. In the example given in the appendix, as we can see from the ACCESSIBLE table there are only two dis- course referents available, namely ul and u2. Given the morphological information about these in- dividuals the pronoun "it" can only be assigned the discourse referent u2 and this is as it should be. For further problems arising in anaphora resolution in general cf. Kamp (1981) and Guenthner and Leh- mann (1983). 5 Remarks on work in progress We are at present engaged in extending the above construction algorithm to a much wider variety of linguistic structures, in particular to the entire fragment of English covered by the USL grammar. Besides incorporating quite a few more aspects of discourse structure (presupposition, ambiguitity, cohesion) we are particularly interested in formulat- ing a deductive account for the retrieval of information from DRSs. This account will mainly consist in combining techniques from the theory of relational database query as well as from present techniques in theorem proving. 
400 In our opinion Ramp's theory of Discourse Repre- sentation Structures is at the moment the most prom- ising vehicle for an adequate and efficient implementation of a natural language processing sys- tem. It incorporates an extremely versatile dis- course-oriented representation language and it allows the precise specification of a number of up to now intractable discourse phenomena. References Astrahan, M. M., M. W. Blasgen, D. D. Chamberlin, K. P. Eswaran, J. N. Gray, P. P. Griffiths, W. F. King, R. A. Lorie, P. R. McJones, J. W. Mehl, G. R. Putzolu, I. L. Traiger, B. W. Wade, V. Watson (1976): "System R: Relational Ap- proach to Database Management", ACM Transactions on Database Systems, vol. 1, no. 2, June 1976, p. 97. Bertrand, O., J. J. Daudenarde, D. Starynkevich, A. Stenbock-Fermor (1976) : "User Application Generator", Proceedings of the IBM Technical Con- ference on Relational Data Base Systems, Bari, Italy, p. 83. Guenthner, F. (1984a) "Discourse Representation Theory and Databases", forthcoming. Guenthner, F. (1984b) "Representing Discourse Re- presentation Theory in PROLOG", forthcoming. Guenthner, F., H. Lehmann (1983) "Rules for Pron- ominalization", Proc. 1st Conference and Inaugural Meeting of the European Chapter of the ACL, Pisa, 1983. II3M (1981) : User Language Generator: Program Description~Operation Manual, SBI0-7352, IBM Prance, Paris. Ramp, H. (1981) "A Theory of Truth and Semantic Representation", in Groenendijk, J. et al. Formal Methods in the Study of Language. Amsterdam. Lehmann, H. (1978): "Interpretation of Natural Language in an Information System", IBM J. Res. Develop. vol. 22, p. 533. Lehmann, H. (1980): "A System for Answering Questions in German", paper presented at the 6th International Symposium of the ALLC, Cambridge, England. Ott, N. and M. Zoeppritz (1979): "USL- an Exper- imental Information System based on Natural Lan- guage", in L. Bolc (ed): Natural Language Based Computer Systems, Hanser, Munich. de Sopefia Pastor, L. (1982): "Grammar of Spanish for User Specialty Languages", TR 82.05.004, IBM Heidelberg Scientific Cente ~. Zoeppritz, M. (1984): Syntax for German in the User Specialty Languages System, Niemeyer, Tfibingen. Appendix: E~mmple SENT i SC I 4------ . . . . . ~ . . . . . . . . . - . . . . . . . . . + . . . . -4- I NP i + ..... + I NOHEN I + ...... + I QU NOHEN VERB NP I I I I every farmer donkey beats it SC I +...+ .... + I I [ I i xP [ I [ I I + ...... + I I I i NP VERB QD NOHEN I I I i who owns a Parse tree R: BEAT A(NOH): R: FARHER (EVERY) A(NOH): R: OWN A(NOM): RELPRO A(ACC): R: DONKEY (A) A(ACC): PERSPR0 Intermediate Structure ASSERTION table I i ]DRS#1 ASSERTION 1 FARHER(ul) 1 OWN(ul,u2) 1 DONKEY(u2) 2 BEAT(ul,u2) DR relation iDRiVRS iCongriS i'evel i I I lull 1 he ]1 1 lu21 1 it 11 2 I I I I I I I I I ACCESSIBLE relation [upper DRS lower DRS I I 1 2 i 401
TEXTUAL EXPERTISE IN WORD EXPERTS: AN APPROACH TO TEXT PARSING BASED ON TOPIC/COMMENT MONITORING *

Udo Hahn
Universitaet Konstanz
Informationswissenschaft
Projekt TOPIC
Postfach 5560
D-7750 Konstanz 1, West Germany

* Work reported in this paper is supported by BMFT/GID under grant no. PT 200.08.

ABSTRACT

In this paper prototype versions of two word experts for text analysis are dealt with which demonstrate that word experts are a feasible tool for parsing texts on the level of text cohesion as well as text coherence. The analysis is based on two major knowledge sources: context information is modelled in terms of a frame knowledge base, while the co-text keeps record of the linear sequencing of text analysis. The result of text parsing consists of a text graph reflecting the thematic organization of topics in a text.

1. Word Experts as a Text Parsing Device

This paper outlines an operational representation of the notions of text cohesion and text coherence based on a collection of word experts as central procedural components of a distributed lexical grammar. By text cohesion, we refer to the micro level of textuality as provided, e.g. by reference, substitution, ellipsis, conjunction and lexical cohesion (cf. HALLIDAY/HASAN 1976), whereas text coherence relates to the macro level of textuality as induced, e.g. by patterns of semantic recurrence of topics (thematic progression) of a text (cf. DANES 1974). On a deeper level of propositional analysis of texts further types of semantic development of a text can be examined, e.g. coherence relations, such as contrast, generalization, explanation (cf. HOBBS 1979, HOBBS 1982, DIJK 1980a), basic modes of topic development, such as expansion, shift, or splitting (cf. GRIMES 1978), and operations on different levels of textual macro-structures (DIJK 1980a) or schematized superstructures (DIJK 1980b).

The identification of cohesive parts of a text is needed to determine the continuous development and increment of information with regard to single thematic foci, i.e. topics of the text. As we have topic elaborations, shifts, breaks, etc. in texts, the extension of topics has to be delimited exactly and different topics have to be related properly. The identification of coherent parts of a text serves this purpose, in that the determination of the coherence relations mentioned above contributes to the delimitation of topics and their organization in terms of text grammatical well-formedness considerations. Text graphs are used as the resulting structure of text parsing and serve to represent corresponding relations holding between different topics.

The word experts outlined below are part of a genuine text-based parsing formalism incorporating a linguistical level in terms of a distributed text grammar and a computational level in terms of a corresponding text parser (HAHN/REIMER 1983; for an account of the original conception of word expert parsing, cf. SMALL/RIEGER 1982). This paper is intended to provide an empirical assessment of word experts for the purpose of text parsing. We thus arrive at a predominantly functional description of this parsing device, neglecting to a large extent its procedural aspects.

The word expert parser is currently being implemented as a major system component of TOPIC, a knowledge-based text analysis system which is intended to provide text summarization (abstracting) facilities on variable layers of informational specificity for German language texts (each approx.
2000-4000 words) dealing with information technology. Word expert construction and modification is supported by a word expert editor using a special word expert representation language, fragments of which are introduced in this paper (for a more detailed account, cf. HAHN/REIMER 1983, HAHN 1984). Word experts are executed by interpretation of their representation language description. TOPIC's word expert system and its editor are written in the C programming language and are running under UNIX.

2. Some General Remarks about Word Expert Structure and the Knowledge Sources Available for Text Parsing

A word expert is a procedural agent incorporating linguistic and world knowledge about a particular word. This knowledge is represented declaratively in terms of a decision net whose nodes are constructed of various conditions. Word experts communicate among each other as well as with other system components in order to elaborate a word's meaning (reading). The conditions at least are tested for two kinds of knowledge sources, the context and the co-text of the corresponding word.
the evaluation of the condition in which a variable occurs takes the value already assigned, otherwise a value assign- ment is made which satisfies the condition being tested. Items stored in the co-text are in the format TOKEN TYPE ANNOT actual form of text word normalized form of text word after morpho- logical reduction or decomposition proce- dures have operated on it annotation indicating whether TYPE is iden- tified as FRAME a frame name WEXP a word expert name STOP a stop word or NUM a numerical string NIL an unknown text word or TYPE consists of parameters frame . slot . sval which are affected by a special type of op- eration executed in the frame knowledge base which is alternatively denoted by FACT frame activation SACT slot activation SVAL slot value assignment 3. Two Word Experts for Text Parsin$ We now turn to an operational representation of the notions introduced in sec.1. The discussion will be limited to well-known cases of textual cohesion and coherence as illustrated by the fol- lowing text segment: [1] In seiner Grundversion ist der Mikrocomputer mit einem Z-80 und 48 KByte RAM ausgeruestet und laeuft unter CP/M. An Peripherie werden Tastatur, Bildschirm und ein Tintenspritz- drucker bereitgestellt. Schliesslich verfuegt das System ueber ~ Programmiersprachen: Basic wird yon SystemSoft geliefert und der Pas- cal-Compiler kommt yon PascWare. [The basic version of the micro is supplied with a Z-80, 48 kbyte RAM and runs under CP/M. Peripheral devices provided include a keyboard, a CRT display and an ink Jet printer. Finally, the system makes available 2 programming languages: Basic is supplied b~ SystemSoft while PascWare furnished the Pascal compiler.] First, in set.3.1 we will examine textual cohesion phenomena illustrated by special cases of lexical cohesion, namely the tendency of terms to share the same lexical environment (collocatlon of terms) and the occurrence of "general nouns" refer- ring to more specific terms (cf. HALLIDAY/flASAN 1976). Then, in sec.3.2 our discussion will be centered around various modes of thematic progres- sion in texts, such as linear thematization of rhemes (cf. DANES 1974) which is often used to establish text coherence (for a similar approach to combine the topic/comment analysis of texts and knowledge representation based on the frame model, 403 cf. CRITZ 1982; computational analysis of textual coherence is also provided by HOBBS 1979, 1982 applying a logical representation model). Word experts capable of handling corresponding textual phenomena are given in App.A and App.B. However, only simplified versions of word experts (prototypes) can be supplied restricting their scope to' the recognition of the text structures under examination. The representation of the textual analysis also lacks completeness skipping a lot of intermediary steps concerning the operation of other (e.g. phrasal) types of word experts (for more details, cf. HAHN 1984). 3.1 A Word Expert for Text Cohesion We now illustrate the operation of the word expert designed to handle special cases of text cohesion (App.A) as indicated by text segment [i]. Suppose, the analysis of the text has been carried out covering the first 9 text words of [I] as indicated by the entries in co-text: No. TOKEN TYPE A~ ................................................................... 
{01}  In              in              STOP
{02}  seiner          sein            STOP
{03}  Grundversion    -               NIL
{04}  ist             ist             STOP
{05}  der             der             STOP
{06}  Mikrocomputer   Mikrocomputer   FRAME
{07}  mit             mit             STOP
{08}  einem           ein             STOP
{09}  Z-80            Z-80            FRAME

The word expert given in App.A starts running whenever a frame name occurs in the text. Starting at the occurrence of frame "Mikrocomputer" indicated by {06} no reading is worked out. At {09} the expert's input variable "frame" is bound to "Z-80" as it starts again. A test in the knowledge base indicates that "Z-80" is an active frame (by default operation). Proceeding backwards from the current entry in co-text, the evaluation of nodes #10 and #11 yields TRUE, since pronoun_list contains an element "ein", a morphological variant of which occurs immediately before frame (Z-80) within the same sentence. In addition, we set frame' to "Mikrocomputer" (micro computer) as it is next before frame (with proximity left unconstrained due to "any") in correspondence with {06}, and it is an active frame, too. The evaluation of node #12, finally, produces FALSE, since frame' (Mikrocomputer) is not a subordinate or instance of frame (Z-80) - actually, "Z-80" is an instance of "Mikroprozessor" (micro processor).

Following the FALSE arc of #12 leads to expression #2, which evaluates to FALSE, as frame' (Mikrocomputer) is a frame which roughly consists of the following set of slots (given by indentation):

Mikrocomputer           micro computer
  Mikroprozessor          micro processor
  Peripherie              peripheral devices
  Hauptspeicher           main memory
  Programmiersprache      programming language
  Systemsoftware          system software

Following the FALSE arc of #2, #3 also evaluates to FALSE, as according to the current state of analysis context contains no information indicating that frame' (Mikrocomputer) has a slot' to which has been assigned any slot value (in addition, "Z-80" is not used as a default slot value of any of the slots supplied above). Turning now to the evaluation of #4, a slot' has to be identified which is a slot of frame' (Mikrocomputer) such that frame (Z-80) is within the value range of permitted slot values for slot' of frame'. Trying "Mikroprozessor" for slot' succeeds, as "Z-80" is an instance of "Mikroprozessor" and thus (due to model-dependent semantic integrity constraints inherent to the underlying frame data model [REIMER/HAHN 1983]) it is a permitted slot value with respect to slot' (Mikroprozessor), which in turn is a slot of frame' (Mikrocomputer). Thus, the interpretation of slot' as "Mikroprozessor" holds.

The execution of word experts terminates if a reading has been generated. Readings are labels of leaf nodes of word experts, so following the TRUE arc of #4 the reading SVAL_ASSIGN ( Mikrocomputer , Mikroprozessor , Z-80 ) is reached. SVAL_ASSIGN is a command issued to the frame knowledge base (as is done with every reading referring to cohesion properties of texts) which leads to the assignment of the slot value "Z-80" to the slot "Mikroprozessor" of the frame "Mikrocomputer". This operation also gets recorded in co-text (SVAL). Therefore, entry {09} gets augmented:

No.   TOKEN   TYPE   ANNOT
----------------------------------------------
{09}  Z-80    Z-80   FRAME
                     Mikrocomputer.Mikroprozessor.Z-80 SVAL

The next steps of the analysis are skipped, until a second basic type of text cohesion can be examined with regard to {34}:

{11}  48                   48                   NUM
                           RAM-1.Groesse.48 KByte SVAL
                           Mikrocomputer.Hauptspeicher.RAM-1 SVAL
{18}  CP/M                 CP/M                 FRAME
                           Mikrocomputer.Betriebssystem.CP/M SVAL
{19}  .                    .                    WEXP
{21}  Peripherie           Peripherie           FRAME
                           Mikrocomputer.Peripherie SACT
{23}  Tastatur             Tastatur             FRAME
                           Mikrocomputer.
Peripherie.Tastatur SVAL
{25}  Bildschirm           Bildschirm           FRAME
                           Mikrocomputer.Peripherie.Bildschirm SVAL
{28}  Tintenspritzdrucker  Tintenspritzdrucker  FRAME
                           Mikrocomputer.Peripherie.Tintenspritzdrucker SVAL
{30}  .                    .                    WEXP
{33}  das                  das                  STOP
{34}  System               System               FRAME

At {34} the word expert dealing with text cohesion phenomena again starts running. Its input variable "frame" is set to "System" (system). With respect to #10 the evaluation of BEFORE yields a positive result, since "das", which is an element of pronoun_list, occurs immediately before frame. (SWEIGHT_INC ( f , s ), which is also provided in App.A, says that the activation weight of slot s of frame f gets incremented.) As the
From this outline one gets a slight impression of the text parsing capabilities inherent to word experts on the level of text cohesion as parsing is performed irrespec- tive of sentence boundaries on a primarily semantic level of text processing in a non-expenslve way (partial parsing). With respect to other kinds of cohesive phenomena in texts, e.g. pronominal anaphora, conjunction, delxls, word experts are available similar in structure, but adapted to identify corresponding phenomena. 3.2 A Word Expert for Text Coherence We now examine the generation of a second type of reading, so-called coherence readings, concern- ing the structural organization of cohesive parts of a text. Unlike cohesion readings, coherence readings of that type are not issued to the frame knowledge base to instantlate various operations, but are passed over to a data repository in which coherence indicators of different sorts are col- lected continuously. A device operating on these coherence indicators computes text structure pat- terns in terms of a text graph which is the final result of text parsing in TOPIC. A text graph constructed that way is composed of a small set of basic coherence relations. We only mention here the application of further rela- tions due to other types of linguistic coherence readings (cf. HAHN 1984) as well as coherence readings from computation procedures based exclusively on configuration data from the frame knowledge base (HAHN/REIMER 1984). One common type of coherence relations is accounted for in the remainder of section which provides for a struc- tural representation of texts which is already well-known following DANES" 1974 distinction among various patterns of thematic progression: SPLITTING THEWS (~RIVED YHE~) SPLITTING RHEMES F' l =~ STR l • . . F' N ='" $~R N F' . . . F'~ ~SCAD]NG THEMES {LJN[AR TMEI~,£TIZ&TSON OF RMEM~$) nESCENDJNG RMEM£$ F*,, 1 ~. STRI F'' F'N m F'''N "" $TRN Fig.l: Graphical Interpretation of Patterns of Thematic Progression in Texts The meaning of the coherence readings provided in App.B with respect to the construction of the text graph is stated below: SPLITTING RHEMES ( f , f" ) fram~ f is alpha ancestor to f" DESCENDING RHEMES ( f , f" , f'" ) frame-'f is alpha ancestor to f" & frame f" is alpha ancestor to f'" 405 CONSTANT THEME ( f , str ) frame f is beta ancestor=~strlng str SPLITTING THEMES ( f , f', str) fram~ f is alpha ancestor to f" & frame f" is beta ancestor to string str CASCADING THEMES ( f , f', f'' , f''" , sir ) fram-e f is alpha ancestor f" & frame f" is beta ancestor to f'" & frame f'" is alpha ancestor to f''" & frame f''" is beta ancestor to string str SEPARATOR ( f ) frame f is alpha ancestor to a separator symbol We now illustrate the operation of the word expert designed to handle special cases of text coherence (App.B) as indicated by text segment [i]. It gets started whenever a frame name has been identified in the text. Suppose, we have frame set to "Mikrocomputer" with respect to {06}. Since #i fails (there is no other frame" available within transaction {06}), evaluating #2 leads to the assignment of "Mikroeomputer" to frame" (with respect to {09}), since according to convention [21i] and to the entries of co-text frame" (Mik- rocomputer/{09}) occurs after frame and is immediately adjacent to frame (Mikrocomputer/06}); in addition, both, frame as well as frame', belong to different transactions. Thus, #2 is evaluated TRUE. 
Obviously, #3 also holds TRUE, whereas #4 evaluates to FALSE, since frame" is annotated by SVAL according to the co-text Instead of SACT, as is required by #4. Note that only the same trans- action (if #I holds TRUE) or the next transaction (if #2 holds TRUE) is examined for appropriate occurrences of SACTs or SVALs. With respect to #5 the SVAL annotation covers the following parameters in {09}: frame" (Mikrocomputer), slot" (Mikroprozes- sot) and sval" (Z-80). Proceeeding to the next state of the word expert (#6) we have frame (Mik- rocomputer) but no SVAL or SACT annotation with respect to {06}. Thus, @6 necessarily gets FALSE, so that, flnally, the reading SPLITTING THEMES (Mikrocomputer , Mikroprozessor , z-g0 ) --is gener- ated. A second example of the generation of a coherence reading starts setting frame to "RAM-l" at position {13} in the co-text. Evaluating #1 leads to the asslgment of "Mikrocomputer" to frame', since two frames are available within the same transaction. Both frames being different from each other one has to follow the FALSE arc of #3. Similar to the case above, both transaction ele- ments in {13} are annotated by SVAL, such that #7 as well as #9 are evaluated FALSE, thus reaching #11. Since frame (RAM-I) has got no slot to which has been assigned frame" (Mikrocomputer), #ii evaluates to FALSE. With respect to #13 we have frame" (Mikrocomputer) whose slot" (Hauptspelcher) has been assigned a slot value which equals frame (RAM-l). At #14, finally, slot (Groesse) and sval (48 KByte) are determined with respect to frame (RAM-l). The coherence reading worked out is stated as CASCADING THEMES ( Mikrocomputer , Hauptspelcher , RAM-I , Groesse , 48 KByte ). Completing the coherence analysis of text segment [I] at last yields the final expansion of co-text (note that both word experts described operate in parallel, as they are activated by the same starting criterion): Jo. READING pEERS 99} SPLITrING TH~N~S 13} S PLI TTI NG--TH~Y.S CASCADING THE~S 181 SPLZ~Z~-_~ 21} SPLITTING ~EMES 123} SPLICING THEMES 25} SPLI~'r I~_THE}~S 28} S~I~I ~G_'mE~S ,34 } SEPARATOR 13~} S PU~Z ~G_P,H~'ZS 14e} sPr.I~X~c_'n~}~s 142} ~ING_CHU~.S {46} SPLI~TING THEFC~S { } SPLITTING TH~ES i } ~zN='r,.,m~,~ Mikroeu.puter .Mikroprozessor .Z-Sg Mikr ocomputet. Hauptspeicher. RAM- 1 Mikrocomputer. Hauptspeiche~. RAM- I .Gr oesse. 48 KByte Mikroccmputer. Bet r iebssystem. CP/M Mikroc~ter. Per ipher ie Mikroc~ter. Per ipher ie. Tasta tur Mik rockier. Per ipher ie. Bi Idschi rm Mikrocomputer. Per ipber ie. Tintenspr i t zd tucker Mi~r~ter Mikroc~ter. Pr ogr ammier sprache Mik roc~ter. Pr ogr ammiez sprache. Bas ic Mikr oc~ter, p~ogr ammler spr a~he. Bas ic. Hersteller. SystemSoft Mikroc~ ter. Systemsof tware. Pasta I -Cc~i let Mikrocumputer. programmier sptache. PaSca 1 Mikroc~ter. SyStemsoftware. Pasta l-Compi let. Herstel let. FascWate Mikroc~ter. p~ogr an~iersprsche. Pascal. Hersteller. PascWare The word expert Just discussed accounts for a single frame (here: M_Ikrocomputer) with nested slot values of arbitrary depth. This basic descrip- tion only slightly has to be changed to account for knowledge structures which are implicitly connected inthe text. Basically divergent types of coherence patterns are worked out by word experts operating on, e.g. aspectual or contrastlve coherence rela- tions (cf. HAHN 1984). 4. The Generation of Text Graphs Based on Topic/Comment Monitoring The procedure of text graph generation for this basic type of thematic progression can be described as follows. 
After initialization by drawing upon the first frame entry occurring in co-text, the text graph gets incrementally constructed whenever a new coherence reading is available in the corresponding data repository. Then it has to be determined whether its first parameter equals the current node of the text graph, which is either the leaf node of the initialized text graph (when the procedure starts) or the leaf node of the topic/comment subgraph which has previously been attached to the text graph. If equality holds, the coherence reading is attached to this node of the graph (including some merging operation to exclude redundant information from the text graph). If equality does not hold, remaining siblings or ancestors (in this order) are tried, until a node equal to the first parameter of the current coherence reading is found, to which the reading will be attached directly. If no matching node in the text graph can be found, a new text graph is constructed which gets initialized by the current coherence reading. The text graph resulting from parsing text segment [1] with respect to the coherence readings generated in sec. 3.2 is provided in App.C.

Note that the text graph generation procedure allows for an interpretation of basic coherence readings supplied by various word experts in terms of compound patterns of thematic progression, e.g. as given by the exposition of splitting rhemes (DANES 1974). Nevertheless, the whole procedure essentially depends upon the continuous availability of reference topics to construct a coherent graph. Accordingly, the graph generation procedure also operates as a kind of topic/comment monitoring device. Obviously, one also has to take into account defective topic/comment patterns in the text under analysis. The SEPARATOR reading is a basic indicator of interruptions of topic/comment sequencing. Its evaluation leads to the notion of topic/comment islands for texts which only partially fulfill the requirements of topic/comment sequencing. Further coherence readings are generated by computations based solely on world knowledge indicators, generating condensed lists of dominant concepts (lists of topics instead of topic graphs) (HAHN/REIMER 1984).

5. Conclusion

In this paper we have argued in favor of a word expert approach to text parsing based on the notions of text cohesion and text coherence. The readings word experts work out are represented in text graphs which illustrate the topic/comment structure of the underlying texts. Since these graphs represent the texts' thematic structure, they lend themselves easily to abstracting purposes. Coherency factors of the text graphs generated, the depth of each text graph, the amount of actual branching as compared to possible branching, etc. provide overt assessment parameters which are intended to control abstracting procedures based on the topic/comment structure of texts. In addition, as much effort will be devoted to graphical modes of system interaction, graph structures are a quite natural and direct medium of access to TOPIC as a text information system.

ACKNOWLEDGEMENTS

I would like to express my deep gratitude to U. Reimer for many valuable discussions we had on the word expert system of TOPIC. R. Hammwoehner and U. Thiel also made helpful remarks on an earlier version of this paper.

REFERENCES

Critz, J.T.: Frame Based Recognition of Theme Continuity. In: COLING 82: Proc. of the 9th Int. Conf. on Computational Linguistics. Prague: Academia, 1982, pp.71-75.
Danes, F.: Functional Sentence Perspective and the Organization of the Text. In: F. Danes (ed): Papers on Functional Sentence Perspective. The Hague, Paris: Mouton, 1974, pp.106-128.

Dijk, T.A. van: Text and Context: Explorations in the Semantics and Pragmatics of Discourse. London, New York: Longman, (1977) 1980 (a).

Dijk, T.A. van: Macrostructures: An Interdisciplinary Study of Global Structures in Discourse, Interaction, and Cognition. Hillsdale/NJ: L. Erlbaum, 1980 (b).

Grimes, J.E.: Topic Levels. In: TINLAP-2: Theoretical Issues in Natural Language Processing-2. New York: ACM, 1978, pp.104-108.

Hahn, U.: Textual Expertise in Word Experts: An Approach to Text Parsing Based on Topic/Comment Monitoring (Extended Version). Konstanz: Univ. Konstanz, Informationswissenschaft, (May) 1984 (= Bericht TOPIC-9/84).

Hahn, U. & Reimer, U.: Word Expert Parsing: An Approach to Text Parsing with a Distributed Lexical Grammar. Konstanz: Univ. Konstanz, Informationswissenschaft, (Nov) 1983 (= Bericht TOPIC-6/83). [In: Linguistische Berichte, No.88, (Dec) 1983, pp.56-78 (in German).]

Hahn, U. & Reimer, U.: Computing Text Constituency: An Algorithmic Approach to the Generation of Text Graphs. Konstanz: Univ. Konstanz, Informationswissenschaft, (April) 1984 (= Bericht TOPIC-8/84).

Halliday, M.A.K. / Hasan, R.: Cohesion in English. London: Longman, 1976.

Hobbs, J.R.: Coherence and Coreference. In: Cognitive Science 3, 1979, No.1, pp.67-90.

Hobbs, J.R.: Towards an Understanding of Coherence in Discourse. In: W.G. Lehnert / M.H. Ringle (eds): Strategies for Natural Language Processing. Hillsdale/NJ, London: L. Erlbaum, 1982, pp.223-243.

Reimer, U. & Hahn, U.: A Formal Approach to the Semantics of a Frame Data Model. In: IJCAI-83: Proc. of the 8th Int. Joint Conf. on Artificial Intelligence. Los Altos/CA: W. Kaufmann, 1983, pp.337-339.

Small, S. / Rieger, C.: Parsing and Comprehending with Word Experts (a Theory and its Realization). In: W.G. Lehnert / M.H. Ringle (eds): Strategies for Natural Language Processing. Hillsdale/NJ: L. Erlbaum, 1982, pp.89-147.

[App.B (the word expert for text coherence) and App.C (the text graph resulting from parsing text segment [1]) are reproduced as full-page diagrams in the original; both are garbled beyond recovery in this copy.]
SOME LINGUISTIC ASPECTS FOR AUTOMATIC TEXT UNDERSTANDING

Yutaka Kusanagi
Institute of Literature and Linguistics
University of Tsukuba
Sakura-mura, Ibaraki 305 JAPAN

ABSTRACT

This paper proposes a system of mapping classes of syntactic structures as instruments for automatic text understanding. The system, illustrated in Japanese, consists of a set of verb classes and information on mapping them together with noun phrases, tense and aspect. The system, having information on the direction of possible inferences between the verb classes together with information on tense and aspect, is supposed to be utilized for reasoning in automatic text understanding.

I. INTRODUCTION

The purpose of this paper is to propose a system of mapping classes of syntactic structures as instruments for automatic text understanding. The system consists of a set of verb classes and information on mapping them together with noun phrases, tense and aspect, and is supposed to be utilized for inference in automatic text understanding. The language used for illustration of the system is Japanese.

There is a tendency toward non-syntactic analysers and semantic grammars in automatic text understanding. However, this proposal is motivated by the fact that syntactic structures, once analyzed and classified in terms of semantic relatedness, provide much information for understanding. This is supported by the fact that human beings use syntactically related sentences when they ask questions about texts.

The system we are proposing has the following elements:
1) Verb classes.
2) Mapping of noun phrases between or among some verb classes.
3) Direction of possible inference between the classes with information on tense and aspect.

Our experiment, in which subjects were asked to make true-false questions about certain texts, revealed that native speakers think that they understand texts by deducing sentences lexically or semantically related. For instance, a human being relates questions such as 'Did Mary go to a theater?' to a sentence in texts such as 'John took Mary to a theater.' Or, by the same sentence, he understands that 'Mary was in the theater.'

II. FEATURES OF THE JAPANESE SYNTAX

Features of Japanese syntax relevant to the discussion in this paper are presented below.

The sentence usually has case markings as postpositions to noun phrases. For instance,

1. John ga Mary ni himitsu o hanashita.
   'John told a secret to Mary.'

In sentence 1, the postpositions ga, ni and o indicate nominative, dative and accusative, respectively.

However, postpositions do not uniquely map to deep cases. Take the following sentences for example.

2. John wa sanji ni itta.
   'John went at 3 o'clock.'
3. John wa Tokyo ni itta.
   'John went to Tokyo.'
4. John wa Tokyo ni sundeiru.
   'John lives in Tokyo.'

Ni in sentences 2, 3 and 4 indicates time, goal and location, respectively. This is due to the verb category (3 and 4) or the class of noun phrases (2 and 3) appearing in each sentence.

Certain morpheme classes hide the case marking, e.g.

5. John mo itta.
   'John also went (somewhere).'
6. Tokyo mo itta.
   'Someone went to Tokyo also.'

The mo in sentences 5 and 6 means 'also'. Therefore these sentences are derived from different syntactic constructions, that is, sentences 7 and 8, respectively.

7. John ga itta.
   'John went (somewhere).'
8. Tokyo ni itta.
   'Someone went to Tokyo.'

Furthermore,
as illustrated in sentences 5 through 8, noun phrases may be deleted freely, provided the context gives full information. In sentences 5 and 7, a noun phrase indicating the goal is missing, and sentences 6 and 8 lack that indicating the subject.

Finally, there are many pairs of lexically related verbs, transitive and intransitive, indicating the same phenomenon differently:

9. John ga Mary ni hon o miseta.
   'John showed a book to Mary.'
10. Mary ga hon o mita.
    'Mary saw a book.'

The two expressions, or viewpoints, on the same phenomenon, that is, 'John showed to Mary a book which she saw,' are related in Japanese by the verb root mi-. The system under consideration utilizes some of the above features (case marking and lexically related verbs) and in turn can be used to ease difficulties of automatic understanding caused by some other features (case hiding, ambiguous case marking and deletion of noun phrases).

III. VERB CLASS

The system is illustrated below with verbs related to the notion of movement. The verb classes in this category are as follows:

(1) Verb class of causality of movement (CM)
    Examples: tsureteiku 'to take (a person)', tsuretekuru 'to bring (a person)', hakobu 'to carry', yaru 'to give', oshieru 'to tell'
    Verbs of this class indicate that someone causes something or someone to move. How to move varies, as seen later.

(2) Verb class of movement (MV)
    Examples: iku 'to go', kuru 'to come', idousuru 'to move'
    Verbs of this class indicate that something or someone moves from one place to another.

(3) Verb class of existence (EX)
    Examples: iru '(animate) be', aru '(inanimate) be'
    Verbs of this class indicate the existence of something or someone.

(4) Verb class of possession (PS)
    Examples: motsu 'to possess', kau 'to keep'
    Verbs of this class indicate someone's possession of something or someone.

Notice that the fundamental notion of MOVE here is much wider than the normal meaning of the word 'move'. When someone learns some idea from someone else, it is understood that an abstract notion moves from the former to the latter.

Furthermore, verbs of each class differ slightly from each other in semantic structure, but the difference is described as a difference in the features filling the case slots. As seen below, the difference between yaru 'to give' and uru 'to sell' is that the latter has 'money' as instrument, while the former does not. Incidentally, Japanese has a verb yuzuru which can be used whether the instrument is money or not.

IV. MAPPING OF SYNTACTIC STRUCTURES

Suppose sentences of the verbs of MOVE have a semantic frame roughly as illustrated in Diagram I:

  Sentence --> Agent (A), Object (B), Source (C), Goal (D), Instr (E), Time (F), Loc (G), PRED (MOVE)

  Diagram I: Semantic Structure

The relationship among the surface syntactic structures of the verb classes discussed above is presented in Diagram II:

  CM:     A ga B o C kara D ni E de
  MVsase: A ga B o C kara D ni E de
  MV:     B ga C kara D ni E de
  CMrare: B ga C kara D ni E de
  EX:     B ga D ni
  PS:     D ga B o

  (sase and rare indicate causative and passive expressions, respectively.)

  Diagram II: Mapping of Syntactic Structures
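The mapping of Diagram II can also be read as a small table from verb classes to surface case frames. The following sketch (Python; illustrative only, since the paper describes no implementation, and the slot letters follow Diagram I) realizes a filled MOVE frame in the surface pattern of each class:

PATTERNS = {
    "CM":     ["A ga", "B o", "C kara", "D ni", "E de"],
    "MVsase": ["A ga", "B o", "C kara", "D ni", "E de"],
    "MV":     ["B ga", "C kara", "D ni", "E de"],
    "CMrare": ["B ga", "C kara", "D ni", "E de"],
    "EX":     ["B ga", "D ni"],
    "PS":     ["D ga", "B o"],
}

def realize(verb_class, frame):
    # frame maps slot letters to noun phrases, e.g. {"A": "John", ...}
    parts = []
    for slot in PATTERNS[verb_class]:
        letter, particle = slot.split()
        if letter in frame:                 # unfilled slots are simply omitted
            parts.append(frame[letter] + " " + particle)
    return " ".join(parts)

# realize("CM", {"A": "John", "B": "keiki", "D": "Mary"})
#   -> 'John ga keiki o Mary ni'   (cf. sentence 11 below)

Omitting unfilled slots reflects the free deletion of noun phrases illustrated in sentences 5 through 8.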
Items filling the case slots in the semantic frame, or the noun phrases in surface syntactic structures, have particular conditions depending on individual verbs. Some examples of conditions are presented in Diagram III.

[Diagram III, a table giving for each verb the feature conditions (e.g. +ani, +hum, +loc, +abs, with or without money as instrument) on the items filling its case slots, is garbled in the source. Legible entries include CM: tsureteiku 'take', mottekuru 'bring', hakobu 'carry', yaru 'give', uru 'sell', oshieru 'tell', osowaru 'learn'; MV: iku 'go', idousuru 'move', tsutawaru 'be conveyed'; EX: iru, aru 'be'; PS: motsu 'have', kau 'keep'. (ani, anim, hum, abs and loc indicate animate, animal, human, abstract and location, respectively.)]

  Diagram III: Verbs and conditions for realization

By these conditions, the mapping of syntactic structures presented in Diagram II is transformed into one in terms of individual verbs. Furthermore, the rules of direction for reasoning presented in Diagram IV connect specific sentences. Take the following sentence for example:

11. John ga keiki o Mary ni mottekita.
    (+ani)   (-ani)    (+ani)  (CM-past)
    'John brought a cake for Mary.'

It has related sentences like the following:

12. Keiki ga Mary ni itta.
    'A cake went to Mary.'
13. Keiki ga Mary (no tokoro) ni aru.
    'There is a cake at Mary's.'
14. Mary ga keiki o motteiru.
    'Mary has a cake.'

As far as all the rules and conditions are incorporated into the computer program, inference would be possible among sentences 11 through 14 in automatic text understanding. Furthermore, this system can also be utilized in automatic text understanding by locating missing noun phrases and determining ambiguous grammatical cases in the sentence, finding semantically related sentences between the questions and the text, and gathering the right semantic information.

Since this system uses information on syntactic structures, it is much simpler in terms of semantic structures than the Conceptual Dependency Model, for instance, and the mapping among the semantically related sentence patterns is much more explicit.

1) CM <==> CMrare    CM <==> MV       CM <==> MVsase
   MVsase <==> MV    MV <==> CMrare
   [a further pair in this group is illegible in the source]
2) MV --> EX    CM --> EX    MVsase --> EX    CMrare --> EX
   MV --> PS    CM --> PS    MVsase --> PS    CMrare --> PS

(The arrow indicates the direction for reasoning. <==> indicates that reasoning is possible anytime, and --> indicates that reasoning may be impossible if further information on the MOVEMENT is provided in the context.)

Condition by tense and aspect:
1) Same tense and aspect on both sides of the arrow; Per(fect).Past --> Imp(erfect).Non-Past
2) Imp.Non-Past --> Non-Past; Past --> Past

  Diagram IV: Direction and condition for reasoning

REFERENCES

Fillmore, C. 1968. The case for case. In E. Bach and R. Harms (Eds.), Universals in linguistic theory. New York: Holt, Rinehart, and Winston.

Kusanagi, Yutaka et al. to appear. Syntax and Semantics 11 (in Japanese). Tokyo: Asakura Shoten.

Schank, R.C., and Abelson, R.P. 1977. Scripts, plans, goals, and understanding. Hillsdale, N.J.: Lawrence Erlbaum.
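The reasoning directions of Diagram IV can likewise be sketched. The encoding below is invented for illustration (the pair marked illegible above is omitted, and the tense/aspect conditions are only hinted at in a comment):

ANYTIME = {                       # "<==>": inference in both directions, anytime
    ("CM", "CMrare"), ("CM", "MV"), ("CM", "MVsase"),
    ("MVsase", "MV"), ("MV", "CMrare"),
}
DEFEASIBLE = {                    # "-->": may fail if the context supplies
    ("MV", "EX"), ("CM", "EX"),   # further information on the MOVEMENT
    ("MVsase", "EX"), ("CMrare", "EX"),
    ("MV", "PS"), ("CM", "PS"),
    ("MVsase", "PS"), ("CMrare", "PS"),
}

def inference(src, dst):
    if (src, dst) in ANYTIME or (dst, src) in ANYTIME:
        return "anytime"
    if (src, dst) in DEFEASIBLE:
        return "defeasible"
    return None

# Sentence 11 (CM, past) licenses 12 (MV): inference("CM", "MV") -> "anytime";
# 13 (EX) and 14 (PS) follow defeasibly: inference("CM", "EX") -> "defeasible".
# The tense/aspect conditions of Diagram IV (e.g. Past --> Past) would have
# to be checked in addition before an inference is actually drawn.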
A SYNTACTIC APPROACH TO DISCOURSE SEMANTICS

Livia Polanyi and Remko Scha
English Department
University of Amsterdam
Amsterdam, The Netherlands

ABSTRACT

A correct structural analysis of a discourse is a prerequisite for understanding it. This paper sketches the outline of a discourse grammar which acknowledges several different levels of structure. This grammar, the "Dynamic Discourse Model", uses an Augmented Transition Network parsing mechanism to build a representation of the semantics of a discourse in a stepwise fashion, from left to right, on the basis of the semantic representations of the individual clauses which constitute the discourse. The intermediate states of the parser model the intermediate states of the social situation which generates the discourse.

The paper attempts to demonstrate that a discourse may indeed be viewed as constructed by means of sequencing and recursive nesting of discourse constituents. It gives rather detailed examples of discourse structures at various levels, and shows how these structures are described in the framework proposed here.

I DISCOURSE STRUCTURES AT DIFFERENT LEVELS

If a discourse understanding system is to be able to assemble the meaning of a complex discourse fragment (such as a story or an elaborate description) out of the meanings of the utterances constituting the fragment, it needs a correct structural analysis of it. Such an analysis is also necessary to assign a correct semantic interpretation to clauses as they occur in the discourse; this is seen most easily in cases where this interpretation depends on phenomena such as the discourse scope of temporal and locative adverbials, the movement of the reference time in a narrative, or the interpretation of discourse anaphora.

The Dynamic Discourse Model, outlined in this paper, is a discourse grammar under development which analyses the structure of a discourse in order to be able to deal adequately with its semantic aspects. It should be emphasized at the outset that this system is a formal model of discourse syntax and semantics, but not a computer implementation of such a model.

For a system to be able to understand a discourse, it must be able to analyse it at several different levels.

1. Any piece of talk must be assigned to one Interaction -- i.e., to a socially constructed verbal exchange which has, at any moment, a well-defined set of participants.

2. Virtually every interaction is viewed by its participants as belonging to a particular predefined genre -- be it a doctor-patient interaction, a religious ceremony, or a casual chat. Depending on the genre, certain participants may have specific roles in the verbal exchange, and there may be a predefined agenda specifying consecutive parts of the interaction. An interaction which is socially "interpreted" in such a fashion is called a Speech Event (Hymes, 1967, 1972).

3. A stretch of talk within one Speech Event may be characterized as dealing with one Topic.

4. Within a Topic, we may find one or more Discourse Units (DU's) -- socially acknowledged units of talk which have a recognizable "point" or purpose, while at the same time displaying a specific syntactic/semantic structure. Clear examples are stories, procedures, descriptions, and jokes.

5. When consecutive clauses are combined into one syntactic/semantic unit, we call this unit a discourse constituent unit (dcu). Examples are: lists, narrative structures, and various binary structures ("A but B", "A because B", etc.).

6.
Adjacency Structures may well be viewed as a kind of dcu, but they deserve special mention. They are two or three part conversational routines involving speaker change. The clearest examples are question-answer pairs and exchanges of greetings.

7. The smallest units which we shall deal with at the discourse level are clauses and operators. Operators include "connectors" like "and", "or", "because", as well as "discourse markers" like "well", "so", "incidentally".

The levels of discourse structure just discussed are hierarchically ordered. For instance, any DU must be part of a Speech Event, while it must be built up out of dcu's. The levels may thus be viewed as an expansion of the familiar linguistic hierarchy of phoneme, morpheme, word and clause. This does not mean, however, that every discourse is to be analysed in terms of a five level tree structure, with levels corresponding to dcu, DU, Topic, Speech Event and Interaction.

To be able to describe discourse as it actually occurs, discourse constituents of various types must be allowed to be embedded in constituents of the same and other types. We shall see various examples of this in later sections. It is worth emphasizing here already that "high level constituents" may be embedded in "low level constituents". For instance, a dcu may be interrupted by a clause which initiates another Interaction. Thus, a structural description of the unfolding discourse would include an Interaction as embedded in the dcu. In this way, we can describe "intrusions", "asides to third parties", and other interruptions of one Interaction by another.

In the description of discourse semantics, the level of the dcu's (including the adjacency structures) plays the most central role: at this level the system defines how the semantic representation of a complex discourse constituent is constructed out of the semantic representations of its parts. The other levels of structure are also of some relevance, however:
- The Discourse Unit establishes higher level semantic coherence. For instance, the semantics of different episodes of one story are integrated at this level.
- The Topic provides a frame which determines the interpretation of many lexical items and descriptions.
- The Speech Event provides a script which describes the conventional development of the discourse, and justifies assumptions about the purposes of discourse participants.
- The Interaction specifies referents for indexicals like "I", "you", "here", "now".

II THE DYNAMIC DISCOURSE MODEL

Dealing with linguistic structures above the clause level is an enterprise which differs in an essential way from the more common variant of linguistic activity which tries to describe the internal structure of the verbal symbols people exchange. Discourse linguistics does not study static verbal objects, but must be involved with the social process which produces the discourse -- with the ways in which the discourse participants manipulate the obligations and possibilities of the discourse situation, and with the ways in which their talk is constrained and framed by the structure of this discourse situation which they themselves created. The structure one may assign to the text of a discourse is but a reflection of the structure of the process which produced it.
Because of this, the Dynamic Discourse Model that we are developing is only indirectly involved in trying to account for the a posteriori structure of a finished discourse; instead, it tries to trace the relevant states of the social space in terms of which the discourse is constructed. This capability is obviously of crucial importance if the model is to be applied in the construction of computer systems which can enter into actual dialogs.

The Dynamic Discourse Model, therefore, must construct the semantic interpretation of a discourse on a clause by clause basis, from left to right, yielding intermediate semantic representations of unfinished constituents, as well as setting the semantic parameters whose values influence the interpretation of subsequent constituents.

A syntactic/semantic system of this sort may very well be formulated as an Augmented Transition Network grammar (Woods, 1970), a non-deterministic parsing system specified by a set of transition networks which may call each other recursively. Every Speech Event type, DU type and dcu type is associated with a transition network specifying its internal structure. As a transition network processes the consecutive constituents of a discourse segment, it builds up, step by step, a representation of the meaning of the segment. This representation is stored in a register associated with the network. At any stage of the process, this register contains a representation of the meaning of the discourse segment so far.

An ATN parser of this sort models important aspects of the discourse process. After each clause, the system is in a well-defined state, characterized by the stack of active transition networks and, for each of them, the values in its registers and the place where it was interrupted. When we say that discourse participants know "where they are" in a complicated discourse, we mean that they know which discourse constituent is being initiated or continued, as well as which discourse constituents have been interrupted where and in what order -- in other words, they are aware of the embedding structure and other information captured by the ATN configuration.

The meaning of most clause utterances cannot be determined on the basis of the clause alone, but involves register values of the embedding dcu -- as when a question sets up a frame in terms of which its answer is interpreted (cf. Scha, 1983), or when, to determine the temporal reference of a clause in a narrative, one needs a "reference time" which is established by the foregoing part of the narrative (section III B 2). From such examples, we see that the discourse constituent unit serves as a framework for the semantic interpretation of the clauses which constitute the text. By the same token, we see that the semantics of an utterance is not exhaustively described by indicating its illocutionary force and its propositional content. An utterance may also cause an update in one or more semantic registers of the dcu, and thereby influence the semantic interpretation of the following utterances.

This phenomenon also gives us a useful perspective on the notion of interruption which was mentioned before. For instance, we can now see the difference between the case of a story being interrupted by a discussion, and the superficially similar case of a story followed by a discussion which is, in its turn, followed by another story.
In the first case, the same dcu is resumed and all its register values are still available; in the second case, the first story has been finished before the discussion and the re-entry into a storyworld is via a different story. The first story has been closed off and its register values are no longer available for re-activation; the teller of the second story must re-initialize the variables of time, place and character, even if the events of the second story concern exactly the same characters and situations as the first. Thus, the notions of interruption and resumption have not only a social reality which is experienced by the interactants involved. They also have semantic consequences for the building and interpretation of texts.

Interruption and resumption are often explicitly signalled by the occurrence of "discourse markers". Interruption is signalled by a PUSH-marker such as "incidentally", "by the way", "you know" or "like". Resumption is signalled by a POP-marker such as "O.K.", "well", "so" or "anyway". (For longer lists of discourse marking devices, and somewhat more discussion of their functioning, see Reichman (1981) and Polanyi and Scha (1983b).)

In terms of our ATN description of discourse structure, the PUSH- and POP-markers do almost exactly what their names suggest. A PUSH-marker signals the creation of a new embedded discourse constituent, while a POP-marker signals a return to an embedding constituent (though not necessarily the immediately embedding one), closing off the current constituent and all the intermediate ones. The fact that one POP-marker may thus create a whole cascade of discourse-POPs was one of Reichman's (1981) arguments for rejecting the ATN model of discourse structure. We have indicated before, however, that accommodating this phenomenon is at worst a matter of minor technical extensions of the ATN formalism (Polanyi and Scha, 1983b); in the present paper, we shall from now on ignore it.

III DISCOURSE CONSTITUENT UNITS

A. Introduction.

This section reviews some important ways in which clauses (being our elementary discourse constituent units) can be combined to form complex discourse constituent units (which, in most cases, may be further combined to form larger dcu's, by recursive application of the same mechanisms). For the moment, we are thus focussing on the basic discourse syntactic patterns which make it possible to construct complex discourses, and on the semantic interpretation of these patterns. Sections IV and V will then discuss the higher level structures, where the interactional perspective on discourse comes more to the fore.

To be able to focus on discourse level phenomena, we will assume that the material to be dealt with by the discourse grammar is a sequence consisting of clauses and operators (connectors and discourse markers). It is assumed that every clause carries the value it has for features such as speaker, clause topic, propositional content (represented by a formula of a suitable logic), preposed constituents (with thematic role and semantics), tense, mood, modality. (The syntactic features we must include here have semantic consequences which cannot always be dealt with within the meaning of the clause, since they may involve discourse issues.)

The semantics of a dcu is built up in parallel with its syntactic analysis, by the same recursive mechanism. When clauses or dcu's are combined to form a larger dcu, their meanings are combined to form the meaning of this dcu.
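By way of illustration, the feature bundles and registers just described can be rendered schematically as follows. This is a minimal Python sketch under invented names; the Dynamic Discourse Model itself is a formal model, not an implementation.

from dataclasses import dataclass, field

@dataclass
class Clause:
    speaker: str
    topic: str
    content: str                  # propositional content: a logical formula
    tense: str = "past"
    mood: str = "declarative"

@dataclass
class DCU:
    registers: dict = field(default_factory=dict)   # tense, mood, topic, ...
    meaning: list = field(default_factory=list)     # the semantic register

    def add(self, clause):
        # agreement checks against registers set by the first constituent
        for f in ("tense", "mood"):
            value = getattr(clause, f)
            if self.registers.setdefault(f, value) != value:
                raise ValueError("agreement failure on " + f)
        # extensional combination: successive constituents add up
        # to one description of the same possible world
        self.meaning.append(clause.content)

    def semantics(self):
        return " & ".join(self.meaning)             # simple conjunction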
Along with registers for storing syntactic features and semantic parameters, each dcu has a register which is used to build up the logical representation of its meaning. Since the syntactic and semantic rules operate in parallel, the syntactic rules have the possibility of referring to the semantics of the constituents they work on. This possibility is in fact used in certain cases. We shall see an example in section III C 1.

Complex discourse constituent units can be divided into four structurally different types:
- sequences, which construct a dcu out of arbitrarily many constituents (e.g.: lists, narratives).
- expansions, consisting of a clause and a subordinated unit which "expands" on it.
- structures formed by a binary operator, such as "A because B", "If A then B".
- adjacency structures, involving speaker change, such as question/answer pairs and exchanges of greetings.

In the next subsections, III B and III C, we shall discuss sequences and expansions in more detail. One general point we should like to make here already: sequences as well as expansions correspond to extensional semantic operations. The propositions expressing the meanings of their constituents are evaluated with respect to the same possible world -- the successive constituents simply add up to one description. (We may note that some of the binary structures which we shall not consider further now certainly correspond to intensional operations. "If A then B" is a clear example.) Since we will not discuss adjacency structures in any detail in this paper, the problem of accommodating speaker change and different illocutionary forces in the discourse semantics will be left for another occasion.

B. Sequential Structures.

We shall discuss three kinds of sequential structures: lists, narratives, and topic chaining.

1. Lists.

Perhaps the paradigmatic sequential structure is the list: a series of clauses C1, ..., Ck, which have a semantic structure of the form

  F(a1) = v1, ..., F(ak) = vk,

i.e., the clauses express propositions which convey the values which one function has for a series of alternative arguments. For instance, when asked to describe the interior of a room, someone may give an answer structured like this:

  "When I come into the door, then I see, to the left of me on the wall, a large window (...). Eh, the wall across from me, there is a eh basket chair (...). On the right wall is a mm chair (...). In the middle of the room there is, from left to right, an oblong table, next to that a round table, and next to that a tall cabinet. Now I think I got everything."

(Transcript by Ehrich and Koster (1983), translated from Dutch; the constituents we left out, indicated by parenthesized dots, are subordinated constituents appended to the constituent they follow.) The list here occurs embedded under the phrase "I see", and is closed off by the phrase "Now I think I got everything". Often, the successive arguments in a list are mentioned in a non-random order -- in the above case, for instance, we first get the locations successively encountered in a "glance tour" from left to right along the walls; then the rest.

The ATN description of lists is very simple*:

  O ---clause:first---> O ---clause:next---> O
  (the clause:next arc may be traversed repeatedly)

  list

Both the first and the next arc parse clauses which must have the semantic structure F(a) = v. (Whether a clause can be analysed in this fashion depends on surface properties such as stress pattern and preposing of constituents.)
Various registers are set by the first clause and checked when next clauses are parsed, in order to enforce agreement in features such as tense, mood, modality. The semantics of a new clause being parsed is simply conjoined with the semantics of the list so far.

2. Narratives.

Narratives may be seen as a special case of lists -- successive event clauses specify what happens at successive timepoints in the world described by the narrative. Narratives are subdivided into different genres, marked by different tense and/or person orientation of their main line clauses: specific past time narratives (marked by clauses in the simple past, though clauses in the "historical present" may also occur), generic past time narratives (marked by the use of "would" and "used to"), procedural narratives (present tense), simultaneous reporting (present tense), plans (use of "will" and "shall"; present tense also occurs). We shall from now on focus on specific past narratives. The properties of other narratives turn out to be largely analogous. (Cf. Longacre (1979), who suggests treating the internal structure of a discourse constituent and its "genre specification" as two independent dimensions.)

  O ---clause:event1---> O ---clause:event | clause:circumstance | flashback---> O
  (the three arcs out of the second state may loop)

  specific past narrative

All clause-processing arcs in this network for "specific past narratives" require that the tense of the clause be present or simple past. The event1 arc and the event arc process clauses with a non-durative aspect. The circumstance arc processes clauses with a durative aspect. (The aspectual category of a clause is determined by the semantic categories of its constituents. Cf. Verkuyl, 1972.) The event1 arc is distinguished because it initializes the register settings.

* Notation: All diagrams in this paper have one initial state (the leftmost one) and one final state (the rightmost one). The name of the diagram indicates the category of the constituent it parses. Arcs have labels of the form "A:B" (or sometimes just "A"), where A indicates the category of the constituent which must be parsed to traverse the arc, and B is a label identifying additional conditions and/or actions.

The specific past narrative network has a time register containing a formula representing the current reference time in the progression of the narrative. When the time register has a value t, an incoming circumstance clause is evaluated at t, and it does not change the value of the time register. An event clause, however, is evaluated with respect to a later but adjacent interval t', and resets the time register to an interval t'', later than but adjacent to t'. (Cf. Polanyi and Scha, 1981.)

To show that this gives us the desired semantic consequences, we consider an abbreviated version of a detective story fragment, quoted by Hinrichs (1981):

  (E1) He went to the window
  (E2) and pulled aside the soft drapes.
  (C1) It was a casement window
  (C2) and both panels were cranked down to let in the night air.
  (E3) "You should keep this window locked," he said.
  (E4) "It's dangerous this way."

The E clauses are events, the C clauses are circumstances. The events are evaluated at disjoint, successively later intervals. The circumstances are evaluated at the same interval, between E2 and E3.
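The behaviour of the time register can be made concrete with a small sketch (Python; the interval representation is invented here -- intervals are symbolic names whose temporal order follows their index):

from itertools import count

_intervals = count(1)

def new_interval():
    return "t%d" % next(_intervals)

def evaluate(clauses):
    time = new_interval()                 # current reference time register
    result = []
    for aspect, label in clauses:
        if aspect == "circumstance":      # durative: evaluate at t,
            result.append((label, time))  # register unchanged
        else:                             # event (non-durative): evaluate at
            at = new_interval()           # a later interval t' adjacent to t
            result.append((label, at))
            time = new_interval()         # reset register to t'' > t'
    return result

# Hinrichs' fragment: E1 and E2 land on successively later intervals,
# while C1 and C2 share the single interval between E2 and E3.
evaluate([("event", "E1"), ("event", "E2"),
          ("circumstance", "C1"), ("circumstance", "C2"),
          ("event", "E3"), ("event", "E4")])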
To appreciate that the simultaneity of subsequent circumstance clauses in fact is a consequence of aspectual class rather than a matter of "world knowledge", one may compare the sequence "He went to the window and pulled aside the soft drapes" to the corresponding sequence of circumstance clauses: "He was going to the window and was pulling aside the soft drapes". World knowledge does come in, however, when one has to decide how much the validity of a circumstance clause extends beyond the interval in the narrative sequence where it is explicitly asserted.

Specific past narratives may also contain other constituents than clauses. An important case in point is the "flashback" -- an embedded narrative which relates events taking place in a period before the reference time of the main narrative. A flashback is introduced by a clause in the pluperfect; the clauses which continue it may be in the pluperfect or the simple past.

  O ---clause:f-init---> O ---clause:f-event | clause:f-circumstance---> O ---pop--->
  (the two middle arcs may loop)

  flashback

The first clause in a flashback (f-init) is an event clause; it initializes register settings. The reference time within a flashback moves according to the same mechanism sketched above for the main narrative line. After the completion of a flashback, the main narrative line continues where it left off -- i.e., it proceeds from the reference time of the main narrative. A simple example:

  Peter and Mary left the party in a hurry. Mary had run into John and she had insulted him. So they got into the car and drove down Avenue C.

3. Topic Chaining.

Another sequential structure is the topic chaining structure, where a series of distinct predications about the same argument are listed. A topic chain consists of a series of clauses C1, ..., Ck, with a semantic structure of the form

  P1(a), ..., Pk(a),

where "a" translates the topic NP's of the clauses. In the first clause of the chain, the topic is expressed by a phrase (either a full NP or a pronoun) which occurs in subject position or as a preposed constituent. In the other clauses, it is usually a pronoun, often in subject position. An example:

  Wilbur's book I really liked. It was on relativity theory and talks mostly about quarks. I got it while I was working on the initial part of my research.

(Based on Sidner (1983), example D26.) The topic chain may be defined by a very simple transition network:

  O ---clause:tc1---> O ---clause:tcn---> O
  (the clause:tcn arc may loop)

  topic chain

The network has a topic register, which is set by the first clause (parsed by the tc1 arc), which also sets various other registers. The tcn arc tests agreement in the usual way. As for the topic register, we require that the clause being parsed has a constituent which is interpreted as coreferential with the value of this register. The semantics of a topic chain is created by simple conjunction of the semantics of subsequent constituents, as in the case of the list.

Lists, narratives and topic chains differ as to their internal structure, but are distributionally indistinguishable -- they may occur in identical slots within larger discourse constituents. For an elegant formulation of the grammar, it is therefore advantageous to bring them under a common denominator: we define the notion sequence to be the union of list, narrative and topic chain.

C. Expansions.
Under the heading "expansions" we describe two constructions in which a clause is followed by a unit which expands on it, either by elaborating its content ("elaborations") or by describing properties of a referent introduced by the clause ("topic-dominant chaining").

1. Elaborations.

A clause may be followed by a dcu (a clause or clause sequence) which expands on its content, i.e. redescribes it in more detail. For instance, an event clause may be expanded by a mini-narrative which recounts the details of the event. An example:

  Pedro dined at Madame Gilbert's. First there was an hors d'oeuvre. Then the fish. After that the butler brought a glazed chicken. The repast ended with a flaming dessert...

The discourse syntax perspective suggests that in a case like this, the whole little narrative must be viewed as subordinated to the clause which precedes it. We therefore construct one dcu which consists of the first clause plus the following sequence.

An illustration of the semantic necessity of such structural analyses is provided by the movement of the reference time in narratives. The above example (by H. Kamp) appeared in the context of the discussion about that phenomenon. (Cf. Dowty, 1982.) Along with other, similar ones, it was brought up as complicating the idea that every event clause in a narrative moves the reference time to a later interval. We would like to suggest that it is no coincidence that such "problematic" cases involve clause sequences belonging to known paragraph types, and standing in an elaboration relation to the preceding clause. The reason why they interrupt the flow of narrative time is simple enough: their clauses are not direct constituents of the narrative at all, but constitute their own embedded dcu.

To describe elaborations, we redefine the notion of a clause to be either an elementary one or an elaborated one (where the elaboration can be constituted by a sequence or by a single clause).

  O ---e-clause---> O ---sequence | e-clause---> O
  (the second arc parses the elaboration)

  clause

If a clause C is followed by a dcu D, D may be parsed as an elaboration of C, if C and D may be plausibly viewed as describing the same situation. (Note that this is a relation not between the surface forms of C and D, but between their meanings C' and D'.) When constructing the semantics for the complex clause, this semantic coherence must also be made explicit.

2. Topic-Dominant Chaining.

Another phenomenon which gives rise to a similar structure is "topic-dominant chaining". Within a clause with a given topic, certain other constituents may be identified as possibly dominant*. A dominant constituent may become the topic of the next clause or sequence of clauses. We suggest that such a continuation with a new topic be seen as expanding on the clause before the topic-switch, and as syntactically subordinated to this clause. This subordinated constituent may either be a single clause or another topic chain sequence. Similarly, a clause may be followed by a relative clause, the relative pronoun referring to a dominant constituent of the embedding clause. Also in this case, the relative clause may be the first clause of an embedded topic chain.

  O ---e-clause---> O ---topic chain---> O
                    \---rel-clause---> O ---topic tail---> O

  clause

* The notion of dominance links discourse phenomena with extraction phenomena within the sentence. See, e.g., Erteschik-Shir and Lappin (1979).

(We thus introduce an alternative network for clause into the grammar, in addition to the one given before.)
The dominant constituents of the e-clause are stored in a register; the topic of the topic chain, as well as the relative pronoun of the rel-clause, must be interpreted as coreferential with one of those constituents. The topic of the topic tail (a "headless" topic chain) must in its turn corefer with the relative pronoun. The semantics consists of simple conjunction.

Both variants of topic-dominant chaining allowed by the above network are exemplified in the following text (Sidner, 1983; example D26):

  (1) Wilbur is a fine scientist and a thoughtful guy.
  (2) He gave me a book a while back
      (2') which I really liked.
          (3) It was on relativity theory
          (4) and talks mostly about quarks.
              (5) They are hard to imagine
                  (6) because they indicate the need for elementary field theories of a complex nature.
                      (7) These theories are absolutely essential to all relativity research.
  (8) Anyway
  (8') I got it
      (8'') while I was working on the initial part of my research.
  (9) He's a really helpful colleague to have thought of giving it to me.

(Indentation indicates subordination with respect to the most recent less indented clause.) This embedding of constituents by means of topic-dominant chaining would explain the "focus-stack" which Sidner (1983) postulates to describe the pronominal reference phenomena in examples like this.

IV DISCOURSE UNITS

We now leave the discussion of the basic syntactic/semantic mechanisms for building discourse out of clauses, and turn to the higher levels of analysis, where considerations involving the goals of the interaction start to come in. First of all, we shall discuss the entities which Wald (1978) calls Discourse Units*, corresponding closely to the entities which Longacre (1983) simply calls "Discourses". Discourse Units (DU's) are socially acknowledged units of talk, which have a recognizable point or purpose, and which are built around one of the sequential dcu's discussed in section III B. Discourse Unit types which have been investigated include stories (Labov, 1972; Wald, 1978; Polanyi, 1978b), descriptions of various sorts (Linde, 1979; Ehrich and Koster, 1983), procedural discourse and hortatory discourse (see various references in Longacre (1983)).

* Wald restricts his notion to monologic discourse fragments. It seems reasonable to generalize it to cases where more than one speaker may be involved.

Because of the pragmatic relation between the Discourse Unit and the surrounding talk (specifically, the need to appear "locally occasioned" (Jefferson, 1979) and to make a "point" (Polanyi, 1978b)), the central part of the Discourse Unit usually is not a piece of talk standing completely on its own feet, but is supported by one or more stages of preparatory and introductory talk on one end, and by an explicit closure and/or conclusion at the other. This may be illustrated by taking a closer look at conversationally embedded stories -- the paradigmatic, and most widely studied, DU type.

  O ---dcu:entrance---> O ---setting---> O ---specific past narrative---> O ---dcu:exit---> O

  story

A typical story is initiated with entrance talk which sets the topic and establishes the relation with the preceding talk. Often we find an abstract, and some kind of negotiation about the actual telling of the story. Then follows the "setting", which gives the necessary background material for the story*. Then follows the "core": a specific past narrative, relating a sequence of events.
The story is concluded with "exit talk" which may formulate the point of the story quite explicitly, connecting the storyworld with more general discourse topics. For instance, one story in Labov's (1972) collection has as its entrance talk an explicit elicitation and its response to it:

  Q: What was the most important fight that you remember, one that sticks in your mind...
  A: Well, one (I think) was with a girl.

There is an extensive section describing the setting:

  "Like I was a kid you know. And she was the baddest girl, the baddest girl in the neighborhood. If you didn't bring her candy to school, she would punch you in the mouth; and you had to kiss her when she'd tell you. This girl was only twelve years old, man, but she was a killer. She didn't take no junk; she whupped all her brothers."

Then the event chain starts, and finally ends:

  "And I came to school one day and I didn't have any money. (....) And I hit the girl: powwww! and I put something on it. I win the fight."

The story is explicitly closed off:

  "That was one of the most important."

Not every specific past narrative may be the core of a story. Because of the interactional status of the story (its requirement to be "pointful"), there are other properties which are noticeable in the linguistic surface structure -- notably the occurrence of "evaluation" (Polanyi, 1978b) and of a "peak" in the narrative line (Longacre, 1983).

* That the necessary background material must be given before the actual event sequence is attested by a slightly complicated storytelling strategy, described in Polanyi (1978a) as the "True Start" repair: the storyteller first plunges right into the event sequence, then breaks off the narrative line and restarts the telling of the story, now with the insertion of the proper background data.

The structural description of stories, given above, should probably be further elaborated to account for the phenomenon of episodes: a story may be built by consecutive pieces of talk which constitute separate narrative dcu's. At the level of the story DU, the meanings of these narratives must be integrated to form a description of one storyworld rather than many.

In English and other Western European languages, the Discourse Unit seems to be a largely interactional notion. Its constituents are pieces of talk defined by the independently motivated dcu grammar. The DU grammar only imposes constraints on the content relations between its constituent dcu's; it does not define structures which an adequate dcu grammar would not define already. In other languages of the world, the situation seems to be somewhat different: there are syntactically defined ways for building DU's out of dcu's which were not already part of the dcu grammar. For details, one should investigate, for instance, the various works referred to in Longacre (1983). Also in this body of work, however, one can find numerous cases where the structural difference between a DU ("Discourse", in Longacre's terms) and the corresponding sequential dcu ("paragraph", in his terms) is not very clear.

V INTERACTIONS AND SPEECH EVENTS

The system we present here is intended to analyze the verbal material occurring in one Interaction. By an Interaction we mean a social situation in which a set of participants is involved in an exchange of talk. Each of the participants knows to be taking part in this situation, and assigns to the others the same awareness.
By focussing on one interaction, we single out, from all the talk that may be going on at one place at the same time, the talk which belongs together because it is intended to be part of the same social situation. (Cf. Goffman, 1979.)

The set of participants of an Interaction determines the possible speakers and addressees of the talk occurring in it. Similarly, the physical time and place of an interaction provide the referents for indexicals like "now" and "here".

A simple two person Interaction would be described as an exchange of greetings, followed by a piece of talk as defined by a lower level of the grammar, followed by an exchange of farewells. Greetings and farewells are the only kinds of talk which directly engage the Interaction level of description -- they correspond to signing on and signing off to the list of participants.

An "unframed" interaction between "uninterpreted" people is a rare event. People use a refined system of subcategorization to classify the social situations they engage in. These subcategories, which we shall call Speech Event types (cf. Hymes, 1967, 1972), often assign a specific purpose to the interaction, specify roles for the participants, constrain discourse topics and conversational registers, and, in many cases, specify a conventional sequence of component activities. The most precisely circumscribed kinds of Speech Events are formal rituals. Speech Event types characterized by grammars which are less explicit and less detailed include service encounters (Merritt, 1978), doctor-patient interactions (Byrne and Long, 1976), and casual conversations.

The structure of talk which is exchanged in order to perform a task will follow the structure of some goal/subgoal analysis of this task (Grosz, 1977). In Speech Event types which involve a more or less fixed goal, this often leads to a fixed grammar of subsequent steps taken to attain it. For instance, students looking at transcripts of the on-goings in a Dutch butchershop consistently found the following sequential structure in the interaction between the butcher and a customer:
1. establishing that it is this customer's turn.
2. the first desired item is ordered, and the order is dealt with, ..., the n-th desired item is ordered and the order is dealt with.
3. it is established that the sequence of orders is finished.
4. the bill is dealt with.
5. the interaction is closed off.

  O ---dcu:1---> O ---dcu:2---> O ---dcu:3---> O ---dcu:4---> O ---dcu:5---> O
  (the dcu:2 arc may be traversed repeatedly)

  butchershop interaction

Each of these steps is filled in in a large variety of ways -- either of the parties may take the initiative at each step; question/answer sequences about the available meat, the right way to prepare it, or the exact wishes of the customer may all be embedded in the stage 2 steps; and clarification dialogs of various sorts may occur. In other words, we find the whole repertoire of possibilities admitted by the dcu grammar (particularly, the part dealing with the possible embeddings of adjacency structures within each other).

Thus, we note that the arcs in a Speech Event diagram such as the above do not impose syntactic constraints on the talk they will parse. The labels on the arcs stand for conditions on the content of the talk -- i.e., on the goals and topics that it may be overtly concerned with.

An important Speech Event type with characteristics slightly different from the types mentioned so far is the "casual conversation".
In a casual conversation, all participants have the same role: to be "equals"; no purposes are pre-established; and the range of possible topics is open-ended, although conventionally constrained.

VI INTERRUPTION REVISITED

One Speech Event type may occur embedded in another one. It may occupy a fixed slot in it, as when an official gathering includes an informal prelude or postlude, where people don't act in their official roles but engage in casual conversation. (Goffman, 1979) Or, the embedding may occur at structurally arbitrary points, as when a Service Encounter in a neighborhood shop is interrupted for smalltalk. The latter case may be described by tacitly adding to each state in the Service Encounter network a looping arc which PUSHes to the Casual
DEALING WITH INCOMPLETENESS OF LINGUISTIC KNOWLEDGE IN LANGUAGE TRANSLATION
TRANSFER AND GENERATION STAGE OF MU MACHINE TRANSLATION PROJECT

Makoto Nagao, Toyoaki Nishida and Jun-ichi Tsujii
Department of Electrical Engineering
Kyoto University
Sakyo-ku, Kyoto 606, JAPAN

1. INTRODUCTION

Linguistic knowledge usable for machine translation is always imperfect. We cannot be free from the uncertainty of the knowledge we have for machine translation. Especially at the transfer stage of machine translation, the selection of the target language expression is rather subjective and optional. Therefore the linguistic contents of a machine translation system always fluctuate and make gradual progress. The system should be designed to allow such constant change and improvements. This paper explains the details of the transfer and generation stages of the Japanese-to-English system of the machine translation project by the Japanese Government, with emphasis on the ideas used to deal with the incompleteness of linguistic knowledge for machine translation.

2. DESIGN STRATEGIES

2.1 Annotated Dependency Structure

The intermediate representation we adopted as the result of analysis in our machine translation is the annotated dependency structure. Each node has an arbitrary number of features, as shown in Fig. 1. This makes it possible to access the constituents by more than one linguistic cue. This representation is therefore powerful and flexible for sophisticated grammatical and semantic checking, especially when the completeness of semantic analysis is not assured and trial-and-error improvements are required at the transfer and generation stages.

2.2 Multiple Layer Grammar

We have three conceptual levels for grammar rules.

lowest level: default grammar which guarantees the output of the translation process. The quality of the translation is not assured. Rules of this level apply to those inputs for which no higher layer grammar rules are applicable.

kernel level: main grammar which chooses and generates target language structure according to semantic relations among constituents which are determined in the analysis stage.

topmost level: heuristic grammar which attempts to get elegant translations for the input. Each rule bears a heuristic nature in the sense that it is word specific and is applicable only to some restricted classes of inputs.

2.3 Multiple Relation Structure

In principle, we use deep case dependency structure as a semantic representation. Theoretically we can assign a unique case dependency structure to each input sentence. In practice, however, the analysis phase may fail or may assign a wrong structure. Therefore we use as an intermediate representation a structure which makes it possible to annotate multiple possibilities as well as multiple levels of representation. An example is shown in Fig. 2. Properties at a node are represented as a vector, so that this complex dependency structure is flexible in the sense that different interpretation rules can be applied to the structure.

2.4 Lexicon Driven Feature

Besides the transfer and generation rules which involve semantic checking functions, the grammar allows reference to a lexical item in the dictionary. A lexical item contains its special grammatical usages and idiomatic expressions. During the transfer and generation stages, these rules are activated with the highest priority. This feature makes the system very flexible for dealing with exceptional cases.
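The interplay of the three grammar levels with the lexicon driven feature can be pictured as follows. This is a minimal Python sketch with invented names; the actual rules are written in GRADE, which is not shown here.

def transfer(node, lexical_rules, kernel_rules, default_rule):
    # topmost level: heuristic, word-specific rules from the dictionary
    for rule in lexical_rules.get(node.get("J-LEX"), []):
        result = rule(node)
        if result is not None:
            return result
    # kernel level: selection by semantic relations among constituents
    for rule in kernel_rules:
        result = rule(node)
        if result is not None:
            return result
    # lowest level: default translation; quality not assured
    return default_rule(node)

A dictionary entry can thus improve the translation of the material it governs simply by contributing another rule to lexical_rules, which is what makes progressive, entry-by-entry improvement possible.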
2.5 Format-Oriented Description of Dictionary Entries

The quality of a machine translation system depends heavily on the quality of its dictionary. In order to build a machine translation dictionary, we collaborate with expert translators. We developed a format-oriented language that allows computer-naive human translators to encode their expertise without any conscious effort at programming. Although the format-oriented language we developed lacks full expressive power for highly sophisticated linguistic phenomena, it can cover most of the common lexical information translators may want to describe. The formatted description is automatically converted into statements in GRADE, a programming language developed by the Mu-Project. We prepared a manual according to which a person can fill in the dictionary format with the linguistic data of items. The manual guarantees a certain level of quality of the dictionary, which is important when many people have to work in parallel.

[Fig. 1: Representation of the analysis result by features, for an input sentence glossed "Due to the advance of electronic instrumentation, automated ships increase in number." Each node of the annotated dependency tree carries feature-value pairs such as J-CAT (category), J-LEX (lexical item), J-DEEP-CASE (MAIN, CAUse, SUBject, SOUrce, GOAl, ...), J-SURFACE-CASE, and tense, aspect and mode features; the detailed feature annotations did not survive text extraction.]

[Fig. 2: An example of a complex dependency structure. For the phrase "his work", the dependent node "he" carries J-DEEP-CASE = agent OR possess, so both readings remain available to later interpretation rules.]

3. ORGANIZATION OF GRAMMAR RULES FOR TRANSFER AND GENERATION STAGES

3.1 Heuristic Rule First

Grammar rules are organized along the principle that "if a better rule exists, the system uses it; otherwise the system attempts to use a standard rule; if that fails, the system uses a default rule." The grammar involves a number of stages at which heuristic rules are applied. Fig. 3 shows the processing flow for the transfer and generation stages. Heuristic rules are word-specific. GRADE makes it possible to define word-specific rules, and such rules can be invoked in many ways. For example, we can associate a word selection rule for an ordinary verb with the dictionary entry for a noun, as shown in Fig. 4.

[Fig. 3: Processing flow for the transfer and generation stages. The internal representation for Japanese passes through a pre-transfer loop, TRANSFER and a post-transfer loop into the internal representation for English, which is then converted by phrase structure tree transformation and morphological synthesis.]
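The control regime of sections 2.2, 2.4 and 3.1 amounts to a fixed priority cascade: word-specific rules attached to dictionary entries fire first, then heuristic rules, then the kernel grammar, with the default grammar guaranteeing that some output is produced. The sketch below renders this cascade in Python under the assumption that every rule is an (applicability test, transfer action) pair; the function and parameter names are ours, not GRADE's.

# Sketch of the "heuristic rule first" cascade of Sec. 3.1. Each layer
# is a list of (applicable, transfer) pairs over an analysis node.

def apply_grammar(word, node, lexical_rules, heuristic_rules,
                  kernel_rules, default_rule):
    # 1. Word-specific rules stored in the dictionary entry (Sec. 2.4)
    #    are activated with the highest priority.
    for applicable, transfer in lexical_rules.get(word, []):
        if applicable(node):
            return transfer(node)
    # 2. Topmost level: heuristic grammar, restricted classes of inputs.
    for applicable, transfer in heuristic_rules:
        if applicable(node):
            return transfer(node)
    # 3. Kernel level: main grammar keyed on semantic relations.
    for applicable, transfer in kernel_rules:
        if applicable(node):
            return transfer(node)
    # 4. Lowest level: default grammar, which always yields some output,
    #    though the translation quality is not assured.
    return default_rule(node)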
+ "(give) J-CAT= Verb J-CAT = Verb J-LEX= ~- ~ ~ (five) TRANSFER ... b. J-/~X = affect // \ /---\ J-CAT=Noum J J-LEX= P~ ~(effect) J J-DEEP-CASE =OBJect, I J-N-V-PROG = ~ ~-V-TRANSFER ......... J-N- KOUSETSU = ~ ~-KOUSETSU-TRANSFE R I::2 . . . . -" "''. P'-~- SUBGRAMMAR:~ ~- V-TRANSFER J /; dealing with c~ses like: / *'~ ... /" <VERB>:A~, ~£~ .... t I C~ve), (l~ive) ~ ~ected, a~ec~._ I other sub~'amrnars ~ J~(efl'eet ) (b) Form-Oriented Description of a Transfer Rule for a Noun "~J~m~'(effect) ~- EFFECT +-~>~ [ ftl&+t | I'[ I++.~ +'+ ..+ +'~,i s. I It 6 t i I = ~ I~FF~CT)TE IFtPptCTITE I I I IPE ¢. -,'~ it! .- I) X' !~! ~ua3 08; a~T ooJ I "tOO ~0 f./ ~ 2 ) • = ~ ^.c • I ~U=G ~ l AnG + +~.+ i + /ze( I 3} i: -! |! ;J. ! 3.2 Pre-transfer Rules Some heuristic rules are activated just after the standard analysis of a Japanese sentence is finish- ed, to obtain a more neutral (or target language oriente~ analyzed structure. We call such invocation the pre- transfer loop. Semantic and pragmatic interpretation are done in the pre-transfer loop. The more heuristic rules are applied in this loop, the better result will be obtained. Figs. 5 and 6 show some examples. 3.3 Word Selection in Target Language by Using Semantic Markers Word selection in the target language is a big problem in machine transla- tion. There are varieties of choices of translation for a word in the source language. Main principles adopted in our system are, (i) Area restriction by using field code, such as electrical Engineer- ing, nuclear science, medicine, and so on. (2) Semantic code attached to a word in the analy- sis phase is used for the selection ofaproper target language word or a phrase. (3) Sentential structure of the vicinity of a word to be translated is sometimes effective for the determination of a proper word or a phrase in the target language. Table i shows examples of a part of the verb trans- fer dictionary. Selection of English verb is done by the semantic categories of nouns related to the verb. The number i attached to verbs like form-l, produce- 2 is the i-th usage of the verb. When the semantic information of nouns is not available, the column indi- cated by ~ is applied to Fig. 4. Lexicon-oriented invocation of grammar rules. 422 I J-CAT= N0un { J.CAT=Verb J-LEX = ~" ~, Ido not have) J-CAT = Nouz 1 J-LEX = ~'~(sense) J.-DEEP-CASE = SUBject { J-CAT= Noun --J J-LEX = NIL J-CAT=Noun J-LEX = ~ in~.¢expression ) I "~ J-CAT = Al)Jamtive { J-LEX = ~ ~"~ ~ Cmeaning{ess) =expre~ion which d~s not have sere" ~ "meaningle~ expre~ion" Fig. 5. An example of a heuristic rule used in the pre-transfer loop. logarithmic have integral characteristics equation integral integral equation equation l { have \~ with integral logarithmic logarithmic equation characteristics characteristics c0nductivity give effect effect effect give / ? effect conductivity (REC: recipient) (3) ADJ [~ { Sl ~> :many Xl ~ X2 ~i X2 ~>-~:few ! ADJ ~ :be, exist,.. (to be determined I SUB at transfer step) X 1 14) ~DSI ( ~ , ~ ) - . ~(+tend tO) /A I z ~ ~ :there exist ~/~ ~ :tendency produce a default translation. In most cases, we can use a fixed format for describing a translation rule for lexical items. We developed a num- ber of dictionary formats specially designed for the ease of dictionary in- put by computer-naive expert translators. The expressive power of format- oriented description is, however, insuf- ficient for a number of common verbs such as "~ ~ " (make, do, perform .... ) and "~ ~ " (become, consist of, provide, ...) etc. 
In most cases, we can use a fixed format for describing a translation rule for lexical items. We developed a number of dictionary formats specially designed for ease of dictionary input by computer-naive expert translators. The expressive power of the format-oriented description is, however, insufficient for a number of common verbs such as suru (make, do, perform, ...) and naru (become, consist of, provide, ...). In such cases, we can encode transfer rules directly in GRADE. An example is shown in Fig. 7; the various usages are listed together with their corresponding English sentential structures and semantic conditions.

[Fig. 7: An example of dictionary transfer rules for popular verbs. The usages of NARU are mapped onto "consist of", "provide" (complement a means or equipment), "reach" (unit complement), "become" (adjectival complement such as "easy"/"difficult"), "turn" (theory/method or conceptual-object complement), "get", and a default "become"; further dictionary rules cover combinations such as "help" + become, "double" + become, and causative "A causes B".]

3.4 Post-Transfer Rules

The transfer stage bridges the gap between Japanese and English expressions. Many odd structures still remain after this stage, and the English internal representation has to be adjusted further into more natural forms. We call this part the post-transfer loop. An example is given in Fig. 8, where a Japanese factitive verb is first transferred to English "make", and then a structural change is made to eliminate it in favour of a more direct expression.

[Fig. 8: An example of post-transfer rule application. "A make B C", with C an intransitive verb, is rewritten (consulting the lexical item of C) as "A C' B", where C' is the transitive verb derived from C; e.g. "A make B rotate" -> "A rotate B".]

4. GENERATION PROCESS

4.1 Translation of Japanese Postpositions

Postpositions in Japanese generally express the case slots of verbs. A postposition, however, has different usages, and the determination of the English preposition for each postposition is quite difficult; it also depends on the verb which governs the noun phrase bearing the postposition. Table 2 illustrates part of a default table for determining deep and surface case labels when no higher-level rule applies. Tables of this sort are defined for all case combinations; in this way we confirm that at least one translation is assigned to an input. A particular usage of a preposition with a particular English verb is written in the lexical entry of that verb.

Table 2. Default rule for assigning a case label of English to the Japanese postposition "ni".

  J-DEEP-CASE    E-DEEP-CASE    Default preposition
  RECipient      REC            to
  BENeficiary    BEN            for
  ORigin         ORI            from
  PARticipant    PAR            with
  TIMe           Time-AT        in
  ROLe           ROL            as
  GOAl           GOA            to
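Table 2 itself is directly executable as a last-resort lookup, consulted only after the verb-specific lexical entries have failed to claim the noun phrase. A Python sketch using only the rows recoverable from the table; the function name and the final fallback are ours.

# Default case/preposition table for the postposition "ni" (Table 2).

NI_DEFAULTS = {
    "RECIPIENT": "to", "BENEFICIARY": "for", "ORIGIN": "from",
    "PARTICIPANT": "with", "TIME": "in", "ROLE": "as", "GOAL": "to",
}

def default_preposition(deep_case):
    # Applied only when no verb-specific lexical rule has claimed the
    # noun phrase; the fallback "to" is our own sketch choice, standing
    # in for the guarantee that at least one translation is assigned.
    return NI_DEFAULTS.get(deep_case.upper(), "to")

assert default_preposition("Recipient") == "to"
assert default_preposition("Participant") == "with"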
4.2 Determination of Global Sentential Structures in the Target Language

The global sentential structures of Japanese and English are quite different, and correspondingly the internal structure of a Japanese sentence is not the same as that of its English counterpart. The fundamental differences between the Japanese internal representation and that of English are absorbed at the (pre- and post-) transfer stages, but at the English generation stage some structural transformations are still required, in such cases as (a) embedded sentential structures and (b) complex sentential structures.

We classified four kinds of embedded sentential structures. (i) A case slot of the embedded sentence is vacant, and the noun modified by the embedded sentence comes to fill the slot. (ii) Forms in which a noun N1 modifies the verb V of an embedded sentence on the head noun N2 (the Japanese particles of the pattern did not survive extraction); in this case the noun N1 must have semantic properties such as parts, attributes or actions. (iii, iv) The third and fourth classes are particular embedded expressions in Japanese, which have connecting expressions glossed "in the case of", "in the way that", "in that", and so on (again, the Japanese forms were lost in extraction). An example of the structural transformation is shown in Fig. 9; the relative clause "why ..." is generated after the structural transformation.

[Fig. 9: Structural transformation of an embedded sentence of type 3. For "the reason why he resigned from school" (N1 = he, N2 = school, V = resign, N3 = reason), the analysis is transferred with a PROPOSITION/CAUSE relation and generated as an NP headed by N3 with a relative clause introduced by "why".]

Connection of two sentences in compound and complex sentences is done according to Table 3. An example is given in Fig. 10.

[Table 3: Correspondence of sentential connectives. Japanese connectives such as RENYO(-SHI)TE, -TAME, -NODE, -KARA, -TO, -TOKI, -TE, -NONI, -YOU, -KOTONAKU, -NAGARA and -BA are mapped, via deep cases such as TOOL, CAUSE, TIME, PURPOSE, MANNER, ACCOMPANY and CIRCUMSTANCE, onto English connectives such as "by -ing", "because", "when", "so that ... may", "as if", "without -ing" and "while -ing"; the exact row alignment did not survive extraction.]

[Fig. 10: Structural transformation of an embedded sentence. Purpose clauses marked by -TAMENI or -YOUNI are transferred to the connective SO-THAT-MAY and generated either as "so that ... may V" or as an infinitive with "in order to".]

4.3 The Process of Sentence Generation in English

After the transfer from the Japanese deep dependency structure to the English one, conversion is made to a phrase structure tree with all the surface words attached to the tree. The processes explained in sections 4.1 and 4.2 are involved at this generation stage. The conversion is performed top-down, from the root node of the dependency tree to the leaves. Therefore, when a governing verb demands a noun phrase or a to-infinitive for its dependent phrase, the structural change of that phrase must be performed. Noun-to-verb and noun-to-adjective transformations are often required due to the difference of expression between Japanese and English. This process goes down from the root node to all the leaf nodes.

After this process of phrase structure generation, some sentential transformations are performed, as follows (a schematic rendering appears after this list):

(i) When an agent is absent, the passive transformation is applied.
(ii) When the agent and object are both missing, the predicative verb is nominalized and placed as the subject, and verb phrases such as "is made" and "is performed" are supplemented.
(iii) When a subject phrase is a big tree, the anticipatory subject "it" is introduced.
(iv) Pronominalization of repeated subject nouns is done in compound and complex sentences.
(v) Duplication of a head noun in a conjunctive noun phrase is eliminated, e.g. "uniform component and non-uniform component" -> "uniform and non-uniform components".
(vi) Others.
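The transformations above can be read as an ordered list of condition/action rules run over the generated sentence. The following toy Python rendering implements only rule (i) over a flat dictionary record; the real system operates on phrase structure trees through GRADE, and the field names here are invented.

# Schematic rendering of transformation (i) of Sec. 4.3 as a
# condition/action rule; (ii)-(vi) would be further pairs in RULES.

def needs_passive(s):
    return s.get("agent") is None and s.get("object") is not None

def passivize(s):
    # Promote the object to subject and mark the clause as passive.
    s["subject"], s["voice"] = s.pop("object"), "passive"
    return s

RULES = [
    (needs_passive, passivize),
    # (ii) agent and object both missing -> nominalize the verb and
    #      supply "is made" / "is performed" (omitted in this sketch)
    # (iii) heavy subject -> anticipatory "it" (omitted in this sketch)
]

def post_generate(sentence):
    for condition, action in RULES:
        if condition(sentence):
            sentence = action(sentence)
    return sentence

print(post_generate({"verb": "destroy", "agent": None,
                     "object": "the buildings"}))
# "the buildings" is promoted to subject and the voice becomes passive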
Another big structural transformation required comes from the essential difference between a DO-language (English) and a BE-language (Japanese). In English, case slots such as tools, cause/reason, and some others very often come to the subject position, while in Japanese such expressions are never used. A transformation of this kind is incorporated in the generation grammar, as shown in Fig. 11, and produces more English-like expressions. This stylistic transformation component is still very primitive; we have to accumulate much more linguistic knowledge and lexical data to obtain more satisfactory English expressions.

[Fig. 11: An example of structural transformation in the generation phase. "The buildings collapsed due to the earthquake" is transformed, via the causal potency (CPO) of "earthquake", into "The earthquake destroyed the buildings".]

5. SUMMARY

This paper described a number of strategies employed in the transfer and generation stages of our Mu system to make the system both powerful and fault-tolerant. As mentioned above, the system has many advantages, such as the flexibility of the generation process and the utilization of strong lexical information. The system is being developed in collaboration with a number of computer scientists from the computer industry and expert translators. Some translation results showing the present level of the system were attached at the end of the paper. Progressive improvement is expected in the next two years.

ACKNOWLEDGEMENTS

We acknowledge the members of the Mu-Project, especially Mr. S. Takai (JCS), Mr. Y. Fukumochi (Sharp Co.), Mr. T. Ishioka (JCS), Miss M. Kume (JCS), Mr. H. Sakamoto (Oki Co.), Mr. A. Kosaka (NEC Co.), Mr. H. Adachi (Toshiba Co.), Miss A. Okumura (Intergroup), and Miss A. Okuda (Intergroup), who contributed greatly to the implementation of the system.

REFERENCES

[1] M. Nagao: Machine Translation Project of the Japanese Government, paper presented at the workshop between EUROTRA and Japanese machine translation experts, held in Brussels on November 24-25, 1983.
[2] J. Nakamura, et al.: Grammar Writing System (GRADE) of Mu-Machine Translation Project and its Characteristics, Proc. of COLING 84, 1984.
[3] J. Tsujii, et al.: Analysis Grammar of Japanese in the Mu-Project - A Procedural Approach to Analysis Grammar, ibid.
[4] Y. Sakamoto, et al.: Lexicon Features for Japanese Syntactic Analysis in Mu-Project-JE, ibid.
[5] J. Tsujii: The Transfer Phase in an English-Japanese Translation System, Proc. of COLING 82, 1982.

[Sample outputs as of April 1984 were attached on the following page of the original; they did not survive text extraction.]
LEXICAL SEMANTICS IN HUMAN-COMPUTER COMMUNICATION

Jarrett Rosenberg
Xerox Office Systems Division, 3333 Coyote Hill Road, Palo Alto, CA 94304 USA

ABSTRACT

Most linguistic studies of human-computer communication have focused on the issues of syntax and discourse structure. However, another interesting and important area is the lexical semantics of command languages. The names that users and system designers give the objects and actions of a computer system can greatly affect its usability, and the lexical issues involved are as complicated as those in natural languages. This paper presents an overview of the various studies of naming in computer systems, examining such issues as suggestiveness, memorability, descriptions of categories, and the use of non-words as names. A simple featural framework for the analysis of these phenomena is presented.

0. Introduction

Most research on the language used in human-computer communication has focused on issues of syntax and discourse; it is hoped that one day computers will understand a large subset of natural language, and the most obvious problems thus appear to be in parsing and understanding sequences of utterances. The constraints provided by the sublanguages used in current natural language interfaces provide a means for making these issues tractable. Until computers can easily understand these sublanguages, we must continue to use artificial command languages, although the increasing richness of these languages brings them closer and closer to being sublanguages themselves. This fact suggests that we might profitably view the command languages of computer systems as natural languages, having the same three levels of syntax, semantics, and pragmatics (perhaps also morpho-phonemics, if one considers the form in which the interaction takes place with the system: special keys, variant characters, etc.).

A particularly interesting and, till recently, neglected area of investigation is the lexical semantics of command languages. What the objects and actions of a system are called is not only practically important but also as theoretically interesting as the lexical phenomena of natural languages. In the field of natural language interfaces there has been some study of complex references, such as Appelt's (1983) work on planning referring expressions and Finin's (1982) work on parsing complex noun phrases, but individual lexical items have not been treated in much detail. In contrast, the human factors research on command languages and user-interface design has looked at lexical semantics in great detail, though without much linguistic sophistication. In addition, much of this research is psycholinguistic rather than strictly linguistic in character, involving phenomena such as the learning and remembering of names as much as their semantic relations. Nevertheless, a linguistic analysis may shed some light on these psycholinguistic phenomena. In this paper I will present an overview of the kinds of research that have been done in this area and suggest a simple featural framework in which they may be placed.

1. Names for Actions

By far the greatest amount of research on lexical semantics in command languages has been done with names for actions. It is easy to find instances of commands whose names are cryptic or dangerously misleading (such as Unix's cat for displaying a file, and Tenex's list for printing), or ones which are quite unmemorable (as are most of those in the EMACS editor).
Consequently, there have been a number of studies examining the suggestiveness of command names, their learnability and memorability, their compositional structure, and their interaction with the syntax of the command language.

Suggestiveness. In my own research (Rosenberg, 1982) I have looked at how the meaning of a command name in ordinary English may or may not suggest its meaning in a text editor. This process of suggestiveness may be viewed as a mapping from the semantics of the ordinary word to the semantics of the system action, in which the user, given the name of a command, attempts to predict what it does. This situation is encountered most often when first learning a system, and in the use of menus. A few simple experiments showed that if one obtains sets of features for the names and actions, a straightforward calculation of their similarity can predict people's guesses of what particular command names denote.

Memorability. If we look at the converse mapping from actions to names, i.e., when, given a system action, one attempts to remember its name, we find a number of studies reporting similar results. Barnard et al. (1982) had subjects learn a set of either specific or general commands, and found that subjects learning the less distinctive, general names used a help menu of the commands and their definitions more often, were less confident in recalling the names, and were less able to recall the actions of the commands. Black and Moran (1982) found that high-frequency (and thus more general) words were less well remembered than low-frequency ones, and so were more "discriminable" names (ones having a greater similarity to their corresponding actions). Scapin (1981) also found that general names like select and read were less well recalled than computer-oriented ones like search and display. Both Black and Moran (1982) and Landauer et al. (1983) found that users varied widely in the names they preferred to give to system actions, and that user-provided names tended to be more general and thus less memorable.

Congruence and hierarchicalness. Carroll (1982) has demonstrated two important properties of command name semantics: congruence and hierarchicalness. Two names are congruent if their relations are the same as those of the actions onto which they are mapped. Thus the inverse actions of adding and subtracting text are best named by a pair of inverses such as insert and delete. As might be expected, then, Carroll found that congruent names like raise-lower are easier to learn than non-congruent ones like reach-down. Hierarchicalness has to do with the compositionality of semantic components and their surface realization. System actions may have common semantic components along with additional, distinguishing ones (e.g., moving vs. copying, deleting a character vs. deleting a word). The degree of commonality may range from none (all actions are mutually disjoint) to total (all actions are vectors in some n-dimensional matrix). Furthermore, words or phrases naming such hierarchical actions may or may not have some of their semantic components realized on the surface: for example, while both advance and move forward may have the semantic features +MOVE and +FORWARD, only the latter has them realized on the surface. Thus, in hierarchical names the semantic components and their relationships are more readily perceived, enhancing their distinctiveness.
Not surprisingly, Carroll has found that hierarchical names, such as move forward-move backward, are easier to learn than non-hierarchical synonyms such as advance-retreat. Similar results on the effect of hierarchical structuring are reported by Scapin (1982).

Names and the command language syntax. There are two obvious ways in which the choice of names for commands can interact with the syntax of the command language. The first involves selection restrictions associated with the name. For example, one usually deletes objects, but stops processes: thus one wouldn't normally expect a command named delete to take both files and process-identifiers as objects. The second kind of interaction involves the syntactic frames associated with a word. For example, the sentence frame for substitute ("substitute x for y") requires that the new information be specified before the old, while the frame for replace ("replace y with x") is just the opposite. A name whose syntactic frame is inconsistent with the command language syntax will thus cause errors. It should be noted that Barnard et al. (1981) have shown that total syntactic consistency can override this constraint and allow users to avoid confusion, but their results may be due to the fact that the set of two-argument commands they studied always had one argument in common, thus encouraging a consistent placement. Landauer et al. (1983) found that using the same name for semantically similar but syntactically different commands created problems.

Non-words as names. Some systems use non-words such as special characters or icons as commands, either partly or entirely. Hemenway (1982) has shown that the issues involved in constructing sets of command icons are much the same as with verbal names. There are two basic types of non-words: those with some semantics (e.g., '?' or pictorial icons) and those with little or none (e.g., control characters or abstract icons). Non-words with some semantics behave much like words (so, for example, '?' is usually used as a name for a query command). Meaningless non-words must have some surface property such as their shape mapped onto their actions. For example, an abstract line-drawing icon in a graphics program (a "brush") might have its shape serve as an indicator of what kind of line it draws. Control characters are often mapped onto names for actions which begin with the same letter (e.g., CONTROL-F might mean "move the cursor Forward one character"). Similar considerations hold for the use of non-words to denote objects.

2. Names for Objects

In addition to studies of command names, there have been a number of interesting studies of how users (or system designers) denote objects. One version of this has been called the "Yellow Pages problem": how does a user or a computer describe a given object in a given context?

Naming objects. Furnas et al. (1983) asked subjects to describe or name objects in various domains so that other people would be able to identify what they were talking about. The subjects were either to use key words or normal discourse. It was found that the average likelihood of any two people using the same main content word in describing the same object ranged from about 0.07 to 0.18 for the different domains studied. Carroll (1982) studied how people named their files on an IBM CMS system (CMS filenames are limited to 18 characters and are thus usually abbreviated). Subjects gave him a list of their files along with a description of their contents, and from this Carroll inferred what the "unabbreviated" filenames were. He found that 85 percent of the filenames used simple organizing paradigms, two of which involved the concepts of congruence and hierarchicalness discussed above.

Naming categories. Dumais and Landauer (1983) describe two major problems in naming and describing categories in computer systems. The first is that of inaccurate category names: a name for a category may not be very descriptive, or people's interpretations of it may differ radically. The second problem is that of inaccurate classification: categories may be fuzzy or overlapping, or there may be many different dimensions by which an object may be classified. Dumais and Landauer examined whether categories which are hard to describe could be better named simply by giving examples of their members. They found that presenting three examples worked as well as using a description, or a description plus examples. In another study involving people's descriptions of objects (Dumais and Landauer, 1982) they found that their subjects' descriptions were often vague, and rarely used negations. The most common paradigm for describing objects was to give a superordinate term followed by several of the item's distinctive features.
Subjects gave him a list of their files along with a description of their contents, and from this, Carroll inferred what the "unabbreviated" filenames were. He found that 85 percent of the filenames used simple organizing paradigms, two of which involved the concepts of congruence and hierarchicalness discussed above. Naming categories. Dumais and Landauer'11983} describe two major problems in naming and describing categories in computer systems. The first is that of inaccurate category names: a name for a category may not be very descriptive, or people's interpretation of it may differ radically. The second problem is that of inaccurate classification: categories may be fuzzy or overlapping, or there may be many different dimensions by which an object may be classified. Dumais and Landauer examined whether categories which are hard to describe could be better named simply by giving example of their members. They found that presenting three examples worked as well as using a description, or a description plus examples. In another study involving people's descriptions of objects (Dumais and Landauer, 1982} they found that their subjects' descriptions were often vague, and rarely used negations. The most common paradigm for 429 describing objects was to give a superordinate term followed by several of the item's distinctive features. Deixis. The pragmatic issue of deixis should be mentioned, since some systems allow context-dependent references in some contexts such as history mechanisms. For example, in INTERLISP the variable IT refers to the value of the user's last evaluated top-level expression, but sometimes this interpretation does not map exactly onto the one the user has. Physical pointing devices such as the "mouse" allow deixis as a more natural way of denoting objects, actions, and properties in cases where it is difficult or tedious to indicate the referent by a referring expression. There are, of course, many other aspects of the lexica[ semantics of command languages which cannot be covered here, such as abbreviations {Benbasat and Wand, 1984}, automatic spelling correction of user inputs (Durham et al., 1983}, and generic names (Rosenberg and Moran, 1984}. 3. A Featural Framework While the above results are interesting: they are disappointing in two respects. To the designer of computer systems they are disappointing because it is not clear how they are related to each other: there are no general principles to use in deciding how to name commands or objects, or what similarities or tradeoffs there are among the different aspects of naming in computer systems. To the linguist or psycholinguist they are disappointing because there is no theory or analytic framework for describing what is happening. In my own work (Rosenberg, 1983} [ have tried to formulate a simple featural framework in which to place these disparate results. My intention has been to develop a simple analysis which can be used in design, rather than a linguistic theory, but linguists will easily recognize its mixed parentage. At least a framework using semantic features has the advantage of simplicity, and can be converted into a more sophisticated theory if desired. In such a featural approach the features of a name or action can be thought of as properties falling into four major classes: Semantic features are those elemental components of meaning usually treated in discussions of lexical semantics. For example, insert has the semantic feature + ADD. 
Pragmatic features are meaning components which are context-dependent in some sense, involving phenomena such as deixis or presuppositions. For example, an anaphoric referent like it has some sort of pragmatic feature, however one wishes to describe it. It goes without saying that the distinction between semantic and pragmatic features is not a clear one, but for practical purposes that is not terribly important.

Syntactic features are the sorts of selection restrictions, etc., which coordinate the lexical item into larger linguistic units such as entire commands. For example, substitute requires that the new object be specified before the old one.

Surface features are perceptual properties such as sound or shape. The usefulness of including them in the analysis is seen in the discussion of non-words as names.

As Bolinger (1965) pointed out long ago, names and actions have a potentially infinite number of features, but in the restricted world of command languages we can consider them to have a finite, even relatively small number. Furthermore, only some features of a name or action are relevant at a given time, due to the particular contexts involved: the task context is that of the task the user is performing (e.g., text editing vs. database querying); the name context is that of the other names being used; and the action context is that of the other actions in the system. These three kinds of context emphasize some features of the names and actions and make others irrelevant.

Applying this framework to system naming, we can represent system actions and objects and their names as sets of features. The most important aspect of these feature representations is their similarity (or, conversely, their distinctiveness). This featural similarity has been formally defined in work by Tversky (1977, 1979). Within the two domains of names and actions (or objects), distinctiveness is of primary importance, since it prevents confusion. Between the two domains, similarity is of primary importance, since it makes for a better mapping between items in the two domains. Although the details of this process vary among the different phenomena, this paradigm serves to unify a number of different results. For example, suggestiveness and memorability may both be interpreted in terms of a high degree of similarity between the features of a name and its referent, with high distinctiveness among names and referents reducing the possibilities of confusion on either end. And the analysis easily extends to include non-words, since those without semantics map their surface features onto the semantic features of their referents. The role of syntactic and pragmatic features is analogous, but the issue there is not simply one of how similar the two sets of features are, but also of how, for example, the selection restrictions of a name mesh with the rules of the command language. Where the analysis will lead in those domains is a question I am currently pursuing.
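Tversky's (1977) contrast model, on which this notion of featural similarity rests, scores two feature sets by their common features minus their distinctive ones. A minimal Python sketch follows, with the weighting parameters fixed at 1 and feature sets invented purely for illustration.

# Tversky's (1977) contrast model over feature sets, with the weighting
# parameters theta, alpha, beta all fixed at 1 for simplicity.

def tversky(a, b, theta=1.0, alpha=1.0, beta=1.0):
    a, b = set(a), set(b)
    return (theta * len(a & b)      # common features raise similarity
            - alpha * len(a - b)    # features of a alone lower it
            - beta * len(b - a))    # features of b alone lower it

# Invented feature sets: a command name and two candidate system actions.
name_insert = {"+ADD", "+TEXT", "+AT-LOCATION"}
action_add_text = {"+ADD", "+TEXT", "+AT-CURSOR"}
action_delete_text = {"-ADD", "+TEXT", "+AT-CURSOR"}

# The name "insert" should map onto the more similar action.
assert tversky(name_insert, action_add_text) > \
       tversky(name_insert, action_delete_text)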
4. Conclusion

Thus it can be seen that, while syntax and discourse structure are important phenomena in human-computer communication, the lexical semantics of command languages is of equal importance and interest. The names which users or system designers give to the actions and objects in a command language can greatly facilitate or impair a system's usefulness. Furthermore, similar issues of semantic relations, deixis, ambiguity, etc. occur with the lexical items of command languages as in natural language. This suggests both that linguistic theory may be of practical aid to system designers, and that the complex lexical phenomena of command languages may be of interest to linguists.

References

Appelt, D. 1983. Planning English referring expressions. Technical Note 312. SRI International, Menlo Park, CA.
Barnard, P., N. Hammond, J. Morton, and J. Long. 1981. Consistency and compatibility in human-computer dialogue. Int. J. of Man-Machine Studies. 15:87-134.
Barnard, P., N. Hammond, A. MacLean, and J. Morton. 1982. Learning and remembering interactive commands in a text-editing task. Behaviour and Information Technology. 1:347-358.
Benbasat, I., and Y. Wand. 1984. Command abbreviation behavior in human-computer interaction. Comm. ACM. 27(4):376-383.
Black, J., and T. Moran. 1982. Learning and remembering command names. Proc. Conference on Human Factors in Computing Systems (Gaithersburg, Maryland). pp. 8-11.
Bolinger, D. 1965. The atomization of meaning. Language. 41:555-573.
Carroll, J. 1982. Learning, using, and designing filenames and command paradigms. Behaviour and Information Technology. 1:327-348.
Dumais, S., and T. Landauer. 1982. Psychological investigations of natural terminology for command and query languages. In A. Badre and B. Shneiderman, eds., Directions in Human-Computer Interaction. Norwood, NJ: Ablex.
Dumais, S., and T. Landauer. 1983. Using examples to describe categories. Proc. CHI'83 Conference on Human Factors in Computing Systems (Boston). pp. 112-115.
Durham, I., D. Lamb, and J. Saxe. 1983. Spelling correction in user interfaces. Comm. ACM. 26(10):764-773.
Finin, T. 1982. The interpretation of nominal compounds in discourse. Technical Report MS-CIS-82-03. University of Pennsylvania Dept. of Computer and Information Science, Philadelphia, PA.
Furnas, G., T. Landauer, L. Gomez, and S. Dumais. 1983. Statistical semantics: analysis of the potential performance of key-word information systems. Bell System Technical Journal. 62(6):1753-1806.
Hemenway, K. 1982. Psychological issues in the use of icons in command menus. Proc. Conference on Human Factors in Computing Systems (Gaithersburg, Maryland). pp. 20-24.
Landauer, T., K. Galotti, and S. Hartwell. 1983. Natural command names and initial learning: a study of text-editing terms. Comm. ACM. 26(7):495-503.
Rosenberg, J. 1982. Evaluating the suggestiveness of command names. Behaviour and Information Technology. 1:371-400.
Rosenberg, J. 1983. A featural approach to command names. Proc. CHI'83 Conference on Human Factors in Computing Systems (Boston). pp. 116-119.
Rosenberg, J., and T. Moran. 1984. Generic commands. Proc. First IFIP Conference on Human-Computer Interaction. London, September 1984.
Scapin, D. 1981. Computer commands in restricted natural language: some aspects of memory and experience. Human Factors. 23:365-375.
Scapin, D. 1982. Generation effect, structuring and computer commands. Behaviour and Information Technology. 1:401-410.
Tversky, A. 1977. Features of similarity. Psychological Review. 84:327-352.
Tversky, A. 1979. Studies in similarity. In E. Rosch and B. Lloyd, eds., Cognition and Categorization. Hillsdale, NJ: Erlbaum.
A Response to the Need for Summary Responses

J.K. Kalita, M.J. Colbourn+ and G.I. McCalla
Department of Computational Science, University of Saskatchewan, Saskatoon, Saskatchewan, S7N 0W0, CANADA

+ Now at the Department of Computer Science, University of Waterloo, Waterloo, Ontario, N2L 3G1, CANADA

Abstract

In this paper we argue that natural language interfaces to databases should be able to produce summary responses as well as listing actual data. We describe a system (incorporating a number of heuristics and a knowledge base built on top of the database) that has been developed to generate such summary responses. It is largely domain-independent, has been tested on many examples, and handles a wide variety of situations where summary responses would be useful.

1. Introduction

For over a decade research has been ongoing into the diverse and complex issues involved in developing smart natural language interfaces to database systems. Pioneering front-end systems such as PLANES [15], REQUEST [12], TORUS [11] and RENDEZVOUS [1] experimented with, among other things, various parsing formalisms (e.g. semantic grammars, transformational grammars and augmented transition networks); the need for knowledge representation (e.g. using production systems or semantic networks); and the usefulness of clarification dialogue in disambiguating a user's query.

Recent research has addressed various dialogue issues in order to enhance the elegance of database interactions. Such research includes attempts to resolve anaphoric references in queries [2,4,14,18], to track the user's focus of attention [2,4,14,18], and to generate cooperative responses. In particular, the CO-OP system [7] is able to analyze presumptions of the user in order to generate appropriate explanations for answers that might otherwise mislead the user. Janas [5] takes a similar approach, generating indirect answers instead of providing direct but inappropriate ones. Mays [8] has developed techniques to monitor changes in the database and provide relevant information on these changes to the user. McCoy [9] and McKeown [10] attempt to provide answers to questions about the structure of the database rather than extensional information as to its contents.

We investigate herein one particular approach to generating "non-extensional" responses, in particular the generation of "summary" responses. Generating abstract "summary" responses to users' queries is often preferable to providing enumerative replies. This follows from an important convention of human dialogue that no participant should monopolize the discourse (i.e. "be brief" [3]). Furthermore, extensional responses can occasionally mislead the user where summary responses would not. Consider the following example [13]:

Q1: Which department managers earn over $40k per year?
S1-1: Abel, Baker, Charles, Doug.
S1-2: All of them.

By enumerating the managers who earn over $40k, the first response implies that there are managers who do not earn that much. In linguistic pragmatics, this is called a scalar implicature [3]. In circumstances where the user is liable to infer an invalid scalar implicature, the system should be able to produce an appropriate response to block the generation of such an inference, as is done by response S1-2.

2. Overview of the System

We describe herein a system which has been developed for the generation of summary responses to users' queries (fully detailed in [6]). The system arrives at concise responses by searching the relevant data for the existence of "interesting" patterns. It uses heuristics to guide this search and a knowledge base to enhance efficiency and help determine "interestingness". The database used to test the system is a simple relational database of student records, although the methods developed are largely domain-independent. In order to concentrate on the response generation issues, the input/output for the system is in an internal form - an actual parser and surface language generation capabilities will be incorporated in future versions of the system.

The flow of control in the system is simple. The formal representation of the query is used to access the database and obtain the tuples which satisfy the user's query (which we will call Tqual; the other tuples will be called Tunqual). After the data is accessed, the system, in consultation with its knowledge base, calls upon its heuristics to find interesting non-enumerative patterns. The heuristics are tried in order, until one succeeds or all fail. When a heuristic detects an appropriate pattern, the system terminates the search and produces the response dictated by the successful heuristic. If all heuristics fail, the system reports its inability to produce a descriptive response. In any event, the user may ask the system to produce an extensional answer by listing the data if he/she so desires.
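The flow of control just described is a simple cascade over a fixed ordering of heuristics. The following Python sketch shows its shape; the function signatures are ours, not the system's, and the heuristics themselves are the subject of the next section.

# Sketch of the control cascade of Sec. 2: heuristics are tried in a
# fixed simple-to-complex order; the first interesting pattern wins.

def answer(query, database, heuristics, knowledge_base):
    # query is assumed to be a predicate over individual tuples.
    t_qual = [t for t in database if query(t)]
    t_unqual = [t for t in database if not query(t)]
    for heuristic in heuristics:
        response = heuristic(t_qual, t_unqual, knowledge_base)
        if response is not None:
            return response              # summary response found
    # All heuristics failed: report inability; the user may still ask
    # for an extensional answer listing the data.
    return None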
It uses heuristics to guide this search and a knowledge base to enhance efficiency and help deter- mine "interestinguess". The database used to test the system is a simple + Now, at the Department of Computer Science, University of Waterloo, Waterloo, Ontario, N2L 3G1, CANADA 432 relational database of student records, although the methods developed are largely domain-independent. In order to concentrate on the response generation issues, the input/output for the system is in an internal form - an actual parser and surface language generation capa- bilities will be incorporated in future versions of the system. The flow of control in the system is simple. The formal representation of the query is used to access the database and obtain the tuples which satisfy the user's query {which we will call T~; the other tuples will be called Tu,~,~). After the data is accessed, the system, in consultation with its knowledge base, calls upon its heuristics to find interesting non-enumerative patterns. The heuristics are tried in order, until one succeeds or all fail. When a heuristic detects an appropriate pat- tern, the system terminates the search and produces the response as dictated by the successful heuristic. If all heuristics fail, the system reports its inability to produce a descriptive response. In any event, the user may ask the system to produce an extensional answer by listing the data if he/she so desires. 3. The Heuristics The heuristics employed in the system are pro- cedural in nature. They guide the system to search for various patterns that may exist in the data. The heuris- ties are linearly ordered; they range from simple to complex. The ordering of the heuristics assumes that if more than one descriptive answer can be obtained for a query, it is sensible to produce the "simplest" one. The equality heuristic determines if all data values appearing for a particular attribute A in T~ are the same (say, c~). If so, and if no tuple in T,~u.~ has the same value for the attribute A, the general formulation of the response is: "All tuples having the value ~ for attribute A." The particular value under consideration must be one of the designated "distinguishing values" for the attri- bute. Response $1-2 (above) is an example of what this heuristic would do. The dual of the equality heuristic is the inequality heuristic where instead of looking for equalities, the system searches for inequalities. The inequality heuris- tic enables the system to produce responses such as: Q2: Which students are taking makeup courses? $2: All students with non-Computer Science undergradus~te background. Here, the value "Computer Science" for the attribute Ui~T~rERSITY-DEPARTMENT in the database under consideration may be considered a distinguishing value. If the equality or inequality heuristics are not appli- cable in their pure form and there are a "few" ("few" depends on the relative number of tuples in T~ and run~ and some other factors) tuples in Tu~, which do not satisfy the requirement of the heuristic, a modification of the response produced by the heuristic may be presented to the user. An example of such a modification is seen in the following: Q3: Which students are receiving University scholarships? $3: All but one foreign students. In addition, two Canadian students are also receiving University scholarships. Another set of heuristics, the range heuristics, determine if the data values for an attribute in the tuples in T~ are within a particular well-defined range. 
Another set of heuristics, the range heuristics, determines if the data values for an attribute in the tuples in Tqual lie within a particular well-defined range. There are two main types of range heuristics: one is concerned with maximum values and the other with minimum values. We discuss only the maximum range heuristic here.

The maximum heuristic determines if the values of an attribute for all tuples in Tqual are below a particular limit while the values of the attribute in all tuples in Tunqual are not. An example response produced by the maximum heuristic is:

Q4: Which students have been advised to discontinue studies at the University?
S4: All students with a cumulative GPA of 2.0 or less.

In some cases, the maximum and minimum heuristics may be used together to define the end-points of a range of values (for some attribute) which the tuples in Tqual satisfy. This results in a range specification. If α is the minimum value and β is the maximum value of the attribute A in Tqual, the corresponding response is:

"All tuples with the value of attribute A ranging from α to β."

An example of an answer with a range specification is:

Q5: Which students are in section 1 of CMPT110.3?
S5: All students with surnames starting with 'A' through 'F'.

There are several heuristic rules which the system follows in producing answers with range specifications. For example, one of these rules limits the actual range specified in an answer to 75% or less of the potential range of the attribute values. This limitation of 75% is not sacrosanct; it is an arbitrary decision by the implementor of the knowledge base. In the current implementation it is believed that if the actual range is more than 75% of the potential range, no special meaning can be attributed to the occurrence of this range in Tqual. Another rule requires that the actual range specified in an answer must not be so small as to identify the actual tuples which constitute the answer. For example, we should not produce a response such as:

"All students with student-id-no between 821661 and 821663"

In fact, such answers are not brief when compared to the size of the set of tuples they qualify.
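A sketch of the maximum heuristic with the two range-specification rules just stated, again over dictionary tuples. The attribute and its potential range are assumed to come from the knowledge base, the response phrasing is simplified, and the final test is only a crude stand-in for the rule that the range must not merely enumerate the tuples.

# Sketch of the maximum-range heuristic with the 75% rule of Sec. 3.

def maximum_heuristic(t_qual, t_unqual, attr, potential_range):
    if not t_qual:
        return None
    limit = max(t[attr] for t in t_qual)
    if any(t[attr] <= limit for t in t_unqual):
        return None                      # the limit does not separate the sets
    lo, hi = potential_range
    if (limit - lo) / (hi - lo) > 0.75:  # actual range > 75% of potential:
        return None                      # the range carries no special meaning
    if len(t_qual) <= 3:                 # crude stand-in: a tiny answer set
        return None                      # would just identify the tuples
    return f"All tuples with {attr} of {limit} or less."

print(maximum_heuristic(
    [{"GPA": 1.5}, {"GPA": 1.8}, {"GPA": 2.0}, {"GPA": 1.2}],
    [{"GPA": 5.5}, {"GPA": 7.0}],
    "GPA", (0.0, 9.0)))
# -> "All tuples with GPA of 2.0 or less."  (cf. Q4/S4)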
A more complex heuristic is the conjunction heuristic. If all values of an attribute A in Tqual satisfy a relation R (in the mathematical sense) and there are some tuples in Tunqual in which the values of the attribute A also satisfy R, the system attempts to determine, via the above heuristics, whether there are some "interesting" distinguishing characteristics which the set Tqual satisfies but which the tuples in Tunqual satisfying the relation R do not. Let us call the distinguishing characteristic(s) D. The general formulation of the response is:

"All tuples which satisfy the relation R for attribute A and have the characteristic(s) D."

An example is:

Q6: Which students are working as T.A. and R.A.?
S6: Students who have completed at least two years at the University and who are not employed outside the University.

If none of the above heuristics can be applied successfully, the disjunction heuristic attempts to divide the tuples in Tqual into a number of subsets and determine whether the above heuristics are appropriate for all of these subsets. The number of such subsets should be "small"; if too many subsets are identified, the response is no more elegant than listing the data, which we are trying to avoid. The number of allowable subsets partially depends upon the number of tuples in Tqual. An example showing three partitions, based on the values of three different attributes, is:

Q7: Which graduate students are not receiving University scholarships?
S7: Students who are receiving NSERC scholarships or have cumulative GPA less than 6.0 or have completed at least two years at the University.

If none of the above heuristics produces a satisfactory response, the foreign-key heuristic searches other "related" relations. A related relation is one with which the relation under consideration has some common or join attribute(s). The names of such related relations, and the attributes via which such a relation can be joined with the original target relation, can be obtained from the knowledge base to be discussed later. An example of such a dialogue is:

Q8: Which students are taking 880-level courses?
S8: All second year students. In addition, two first year students are also taking 880-level courses.

While attempting to answer Q8, the system finds that the question pertains to the relation COURSE-REGISTRATIONS. However, it fails to obtain any interesting descriptive pattern about the tuples in Tqual by considering this relation alone. Hence, the system consults the knowledge base and finds that the relation COURSE-REGISTRATIONS can be joined with the relation STUDENTS. It takes the join of all the tuples constituting Tqual with the relation STUDENTS and projects the resulting relation on the attributes of the relation STUDENTS. Let us call these tuples Tnew-qual. Next, it attempts to discover the existence of some pattern in the tuples in Tnew-qual. It succeeds in producing the response given in S8 by employing the modified equality heuristic.
4. The Knowledge Base

The knowledge base incorporates subjective perceptions of the user as to the nature and contents of the database. It consists of two types of frames: relation frames and attribute frames. These frames may be considered an extension of the database schema. The frames are created by the interface builder, and different sets of frames must be provided for different types of users and/or different databases.

Each relation frame corresponds to an actual relation in the database; it provides the possible links with all other relations in the database. In other words, these frames define all permissible joins of two relations. If a direct join is not possible between two specific relations, the frame contains the name of a third relation which must be included in the join. The information in the relation frames is useful in the application of the foreign-key heuristic.

The attribute frames play a role in our system similar to that played by McCoy's axioms [9]. Each attribute frame corresponds to an attribute of the relations in the database. In addition to a description of the attribute, these frames indicate the nature and range of the attribute's potential values. The expected range of values that an attribute may assume is helpful to the range heuristics. Information regarding the relative preferability of the various attributes is also included.

Each attribute frame also contains a slot for the "distinguishing values" which the attribute might take. This slot provides information for distinguishing a subclass of an entity from other subclasses, and its contents are useful in producing descriptive responses to users' queries. The slot contains one or more clauses, each of the following format ('[ ]' means optionality; '...' means an arbitrary number of repetitions of the immediately preceding clause):

(list-of-distinguishing-values-1
  (applicable-operator-1-1 [denomination-1-1])
  [(applicable-operator-1-2 [denomination-1-2])] ...)

If all the values of the attribute in Tqual satisfy "applicable-operator-1-1" with respect to the contents of the list "list-of-distinguishing-values-1", the actual values may be termed "denomination-1-1" in producing responses. If the value of "denomination-1-1" is null, no name can be attached to the actual values of the attribute. The Distinguishing Values slot enables the implementor to specify classifications that he would a priori like to appear meaningfully in descriptive responses. This information enables the system to faithfully reflect implementor-perceived notions of how a database entity class may be appropriately partitioned into subclasses for generating summary responses.

It is often useful to provide descriptive answers on the basis of certain preferred attributes. For example, for the STUDENTS relation, it is more "meaningful" to provide answers on the basis of the attribute NATIONALITY or UG-MAJOR than STUDENT-ID-NO or AMOUNT-OF-FINANCIAL-AID. However, it is impossible to give a concrete weight to each attribute's preferability. Therefore, we have classified the attributes into several groups; all attributes in a group are considered equally useful in producing meaningful qualitative answers to queries. This classification means that it is preferable and more useful to produce descriptive responses using the attributes in preference category 1 than the attributes in category 2, 3 or 4. The categorization is based on one's familiarity with the data; the decision is subjective, and hence bound to vary according to the judgement of the person building the interface. In the Preference Category slot we have an entry corresponding to each relation the attribute occurs in. The information in this slot ensures that the system chooses a description based on the most salient attribute(s) in producing a response.

A simple example of an attribute frame is given below:

Name: (NATIONALITY, STUDENTS)
Nature-of-Attribute: String of characters
Distinguishing-Values:
  ((Canadian) (=) (≠ foreign))
  ((U.K. U.S.A. Australia ...) (member-of English-speaking countries))
  ((U.K. France ...) (member-of Europe))
Potential-Range: Any member from a given list of countries
Rounding-off-to-be-done?: Not applicable
Preference-Category: 1

The example shows the frame for the attribute NATIONALITY belonging to the STUDENTS relation. It assumes character values; to be valid, a value must be a member of a previously compiled list of countries. The attribute belongs to preference category 1, discussed above. Consider the clause ((Canadian) (=) (≠ foreign)) in the Distinguishing Values slot. The value "Canadian" is a distinguishing value in the domain of values which the attribute may take. The term "(=)" indicates that it is possible to identify a class of students using the descriptive expression "NATIONALITY = Canadian". If NATIONALITY ≠ "Canadian", the student may be referred to as a "foreign" student. Similarly, if the value for a student under the attribute NATIONALITY is a member of the set (U.K. U.S.A. Australia ...), he may be designated as coming from an English-speaking country. This information may be helpful in answering a query such as:

Q9: Which students are taking the "Intensive English" course in the Fall term?
S9: Most entering foreign students from non-English speaking countries.
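The NATIONALITY frame above can be transcribed almost directly into a data structure. A Python rendering follows, with an accessor illustrating how a distinguishing-values clause yields a denomination such as "foreign"; the encoding (value set, operator, optional denomination) is our own reading of the clause format.

# The NATIONALITY attribute frame of Sec. 4 as a Python dictionary.

NATIONALITY_FRAME = {
    "name": ("NATIONALITY", "STUDENTS"),
    "nature": "string",
    "distinguishing_values": [
        # (value set,                     operator,    denomination)
        ({"Canadian"},                    "=",         None),
        ({"Canadian"},                    "!=",        "foreign"),
        ({"U.K.", "U.S.A.", "Australia"}, "member-of",
         "English-speaking countries"),
    ],
    "potential_range": "member of a compiled list of countries",
    "preference_category": 1,
}

def denomination(frame, op, values):
    # Name a subclass if some distinguishing-values clause covers it.
    for dvals, dop, name in frame["distinguishing_values"]:
        if dop == op and values <= dvals:
            return name
    return None

assert denomination(NATIONALITY_FRAME, "!=", {"Canadian"}) == "foreign"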
5. Concluding Remarks

A system incorporating the details explained above has been implemented, and extensive experiments have been performed using a simple student database. Every heuristic has demonstrated its usefulness in producing summary responses by being successful in this environment. The heuristics are domain-independent, and the knowledge base is easily modifiable to adapt to the requirements of a new user or database domain.

For performance enhancement, the knowledge base may be augmented with an additional component for storing the results of preceding database interactions, to obviate the need to search the database for every query. The extended knowledge base may be utilized for improved modelling of the user's beliefs and perceptions about the data by providing a mechanism to introduce the user's own definitions and descriptive terminologies. Further research is necessary in order to obtain an acceptable structure for this additional component of the knowledge base. In addition, the factors, linguistic or otherwise, that influence the appropriateness of generating a summary response for a given question at a particular point in the interaction are also to be investigated.

Generation of summary responses has important implications if interactions with a database management system are to have the properties and constraints normally associated with human dialogue. Interactions with traditional database management systems lack the "intelligence" and elegance which we ascribe to human behaviour. We feel that providing summary responses will be an important tool in achieving database interfaces that behave intelligently and co-operatively.

Acknowledgements

We would like to thank the Natural Sciences and Engineering Research Council of Canada (NSERC) and the University of Saskatchewan for supporting this research both financially and through the provision of computing facilities. We would also like to express our gratitude to Paul Sorensen and Robin Cohen for their many helpful comments during the course of the research.

References

[1] Codd E.F., R.S. Arnold, J.M. Cadiou, C.L. Chang, N. Roussopoulos, RENDEZVOUS Version I: An Experimental English Language Query Formulation System for Casual Users of Relational Databases, Research Report No. RJ2144 (29407), IBM Research Laboratory, San Jose, California, 1978.
[2] Davidson J., "Natural Language Access to Database: User Modelling and Focus", Proceedings of the Canadian Society for Computational Studies of Intelligence, Saskatoon, May 1982, 204-211.
[3] Grice H.P., "Logic and Conversation", in P. Cole and J.L. Morgan (eds.), Syntax and Semantics: Speech Acts, Vol. 3, Academic Press, New York, 1975, 41-58.
[4] Grosz B.J., "The Representation and the Use of Focus in a System for Understanding Dialogues", Proc. 5th IJCAI, Cambridge, 1977, 67-76.
[5] Janas J.M., "How to not Say 'NIL' - Improving Answers to Failing Queries in Data Base Systems", Proc. 6th IJCAI, Tokyo, 1979, 429-434.
[6] Kalita J.K., Generating Summary Responses to Natural Language Database Queries, M.Sc. Thesis; also available as TR-9, University of Saskatchewan, Saskatoon, 1984.
[7] Kaplan S.J., "Cooperative Responses from a Portable Natural Language Query System", Artificial Intelligence, Vol. 19, No. 2, Oct. 1982, 165-187.
Webber, "Natural Language Interaction with Dynamic Knowledge Bases: Monitoring as Response", Proc. 7th IJCAI, Vancouver, 1981, 61-63. [9] McCoy K.F., "Augmenting a Database Knowledge Representation for Natural Language Generation", Proc. £Oth Annual Conference of the ACL, Toronto, Ontario, June, 1982, 121-128. [10] MeKeown K.R., "The TEXT system for Natural Language Generation: An Overview", Proc. £Oth Annual Conference of the ACL, Toronto, Ontario, June 1982, 113-120. [11] Mylopoulos J., A. Borgida, P. Cohen, N. Rousso- poulos, J. Tsotsos, H. Wing, "TORUS : A Step towards Bridging the Gap between Databases and Casual User", Information Systems, Vol. 2, 1976, 49-64. [12] Plath W.J., "REQUEST : A Natural Language Question Answering System", IBM Journal of Research and Development, Vol. 20, July 1976, 326-335. [13] Reiter R., H. Gallaire, J.J. King, J. Mylopoulos and B.L. Webber, "A Panel on AI and Data- bases", Proc. 8th IJCAI, 1983, Karlsruhe, West Germany, 1199-1208. [14] Sidner C.L., Towards A Computational Theory of Definite Anaphora Comprehension in English Discourse", TR-537, AI Laboratory, MIT, Cam- bridge, Massaehussets, 1979. [15] Waltz D.L., An English Language Question Answering System for a Large Relational Data- base, CACM, Vol. 21, July, 1978, 526-539. [16] Webber B. and R. Reiter, "Anaphora and Logical Form: On Formal Meaning Representations for Natural Language", Proc. 5th IJCAI, Cambridge, Massaehussets, 1977, 121-131. 436
1984
88
Coping with Extragrammaticality

Jaime G. Carbonell and Philip J. Hayes
Computer Science Department, Carnegie-Mellon University
Pittsburgh, PA 15213, USA

Abstract¹

Practical natural language interfaces must exhibit robust behaviour in the presence of extragrammatical user input. This paper classifies different types of grammatical deviations and related phenomena at the lexical and sentential levels, discussing recovery strategies tailored to specific phenomena in the classification. Such strategies constitute a tool chest of computationally tractable methods for coping with extragrammaticality in restricted domain natural language. Some of the strategies have been tested and proven viable in existing parsers.

1. Introduction

Any robust natural language interface must be capable of processing input utterances that deviate from its grammatical and semantic expectations. Many researchers have made this observation and have taken initial steps towards coverage of certain classes of extragrammatical constructions. Since robust parsers must deal primarily with input that does meet their expectations, the various efforts at coping with extragrammaticality have generally been structured as extensions to existing parsing methods. Probably the most popular approach has been to extend syntactically-oriented parsing techniques employing Augmented Transition Networks (ATNs) [21, 24, 25, 29]. Other researchers have attempted to deal with ungrammatical input through network-based semantic grammar techniques [19, 20], through extensions to pattern matching parsing in which partial pattern matching is allowed [16], through conceptual case frame instantiation [12, 22], and through approaches involving multiple cooperating parsing strategies [7, 9, 18].

Given the background of existing work, this paper focuses on three major objectives:

1. to create a taxonomy of grammatical deviations covering a broad range of extragrammaticalities,
2. to outline strategies for processing many of these deviations,
3. to assess how easily these strategies can be employed in conjunction with existing parsing methods.

The overall result should be a synthesis of different parse-recovery strategies organized by the grammatical phenomena they address (or violate), an evaluation of how well the strategies integrate with existing approaches to parsing extragrammatical input, and a set of characteristics desirable in any parsing process dealing with extragrammatical input. We hope this will aid researchers designing robust natural language interfaces in two ways: 1. by providing a tool chest of computationally effective approaches to cope with extragrammaticality; 2. by assisting in the selection of a basic parsing methodology in which to embed these recovery techniques.

In assessing the degree of compatibility between recovery techniques and various approaches to parsing, we will avoid the issue of whether a given recovery technique can be used with a specific approach to parsing. The answer to such a question is almost always affirmative. Instead, we will be concerned with how naturally the recovery strategies fit with the various parsing approaches.

¹This research was sponsored in part by the Air Force Office of Scientific Research under Contract AFOSR-82-0219 and in part by Digital Equipment Corporation as part of the XCALIBUR project.
In particular, we will consider the computational tractability of the recovery strategies and how easily they can obtain the information they need to operate in the context of different parsing approaches.

Extragrammaticalities include patently ungrammatical constructions, which may nevertheless be semantically comprehensible, as well as lexical difficulties (e.g. misspellings), violations of semantic constraints, utterances that may be grammatically acceptable but are beyond the syntactic coverage of the system, ellipsed fragments and other dialogue phenomena, and any other difficulties that may arise in parsing individual utterances. An extragrammaticality is thus defined with respect to the capabilities of a particular system, rather than with respect to an absolute external competence model of the ideal speaker. Extragrammaticality may arise at various levels: lexical, sentential, and dialogue. This paper addresses the first two categories; the third is discussed in [8, 11]. Our discussions are based on direct experience with various working parsers: FLEXP, CASPAR and DYPAR [7, 8, 16].

2. Lexical Level Extragrammaticalities

One of the most frequent parsing problems is finding an unrecognizable word in the input stream. The following sections discuss the underlying reasons for the presence of unrecognizable words and describe suitable recovery strategies.

2.1. The unknown word problem

The word is a legitimate lexeme but is not in the system's dictionary. There are three reasons for this:

• The word is outside the intended coverage of the interface (e.g. there is no reason why a natural language interface to an electronic mail system should know words like "chair" or "sky", which cannot be defined in terms of concepts in its semantic domain).

• The word refers to a legitimate domain concept or combination of domain concepts, but was not included in the dictionary (e.g. a word like "forward" [a message] can be defined as a command verb, its action can be clearly specified, and the objects upon which it operates -- an old message and a new recipient -- are already well-formed domain concepts).

• The word is a proper name or a unique identifier, such as a catalogue part name/number, not heretofore encountered by the system, but recognizable by a combination of contextual expectations and morphological or orthographic features (e.g., capitalization).

In the first situation, there is no meaningful recovery strategy other than focused interaction [15] to inform the user of the precise difficulty. In the third, little action is required beyond recognizing the proper name and recording it appropriately for future reference. The second situation is more complicated; three basic recovery strategies are possible:

1. Follow the KLAUS [14] approach, where the system temporarily wrests initiative from the user and plays a well designed "twenty questions" game, classifying the unknown term syntactically and relating it semantically to existing concepts encoded in an inheritance hierarchy. This method has proven successful for verbs, nouns and adjectives, but only when they turn out to be instances of predefined general classes of objects and actions in the domain model.

2. Apply the project and integrate method [6] to infer the meaning and syntactic category of the word from context. This method has proven useful for nouns and adjectives whose meaning can be viewed as a recombination of features present elsewhere in the input.
Unlike the KLAUS method, it operates in the background, placing no major run-time burden on the user. However, it remains highly experimental and may not prove practical without user confirmation.

3. Interact with the user in a focused manner to provide a paraphrase of the segment of input containing the unknown word. If this paraphrase results in the desired action, it is stored and becomes the meaning of the new word in the immediate context in which it appeared. The LIFER system [20] had a rudimentary capacity for defining synonymous phrases. A more general method would distinguish between true synonymy and functional equivalence in order to classify the new word or phrase in different semantic contexts.

2.2. Misspellings

Misspellings arise when an otherwise recognizable lexeme has letters omitted, substituted, transposed, or spuriously inserted. Misspellings are the most common form of extragrammaticality encountered by natural language interfaces. Usually, a word is misspelt into an unrecognizable character string. But occasionally a word is misspelt into another word in the dictionary that violates semantic or syntactic expectations. For instance:

Copy the flies from the accounts directory to my directory

Although "flies" may be a legitimate word in the domain of a particular interface (e.g., the files could consist of statistics on med-fly infestation in California), it is obvious to the human reader that there is a misspelling in the sentence above.

There are well-known algorithms for matching a misspelt word against a set of possible corrections [13], and the simplest recovery strategy is to match unknown words against the set of all words in an interface's dictionary. However, this obviously produces incorrect results when a word is misspelt into a word already in the dictionary, and can produce unnecessary ambiguities in other cases. Superior results are obtained by making the spelling correction sensitive to the parser's syntactic and semantic expectations. In the following example:

Add two fixed haed dual prot disks to the order

"haed" can be corrected to: "had", "head", "hand", "heed", and "hated". Syntactic expectations rule two of these out, and domain semantics rule out two others, leaving "fixed head disk" as the appropriate correction.

Computationally, there are two ways to organize this. One can either match parser expectations against all possible corrections in the parser's current vocabulary, and rule out spurious corrections, or one can use the parse expectations to generate a set of possible words that can be recognized at the present point and use this as input to the spelling correction algorithm. The latter, when it can be done, is clearly the preferable choice on efficiency criteria. Generating all possible corrections with a 10,000 word dictionary, only to rule out all but one or two, is a computationally-intensive process, whereas exploiting fully-indexed parser expectations is far more constrained and less likely to generate ambiguity. For the example above, "prot" has 16 possible corrections in a small on-line dictionary. However, domain semantics allow only one word in the same position as "prot", so correction is most effective if the list of possible words is generated first.
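The second organization, generating the expected words first and only then running spelling correction against them, can be sketched in a few lines. The sketch below uses a stock string-similarity matcher; the function name and the example expectation set are illustrative assumptions, not details of the parsers cited.

```python
# A sketch of expectation-driven spelling correction, assuming the parser
# can enumerate the words it would accept at the current input position.
from difflib import get_close_matches

def correct(unknown, expected_words, cutoff=0.6):
    """Match an unrecognized token only against the words the parser
    expects here, rather than against the whole dictionary."""
    return get_close_matches(unknown, expected_words, n=3, cutoff=cutoff)

# After "Add two fixed ...", syntax and domain semantics might restrict the
# next word to disk attributes; "haed" then resolves uniquely, instead of
# also matching "had", "hand", "heed", and "hated" from a full dictionary.
expected = ["head", "dual", "ported", "removable"]
print(correct("haed", expected))   # ['head']
```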
2.3. Interaction of morphology and misspelling

Troublesome side-effects of spelling correction can arise with parsers that have an initial morphological analysis phase to reduce words to their root form. For instance, a parser might just store the root form of 'directory' and reduce 'directories' to 'directory' plus a plural marker as part of its initial morphological phase. This process is triggered by failing to recognize the inflected form as a word that is present in the dictionary. It operates by applying standard morphological rules (e.g. -ies => +y) to derive a root from the inflected form. It is a simple matter to check first for inflected forms and then for misspellings. However, if a word is both inflected and misspelt, the expectation-based spelling corrector must be invoked from within the morphological decomposition routines on potentially misspelt roots or inflexions.

2.4. Incorrect segmentation

Input typed to a natural language interface is segmented into words by spaces and punctuation marks. Both kinds of segmenting markers, especially the second, can be omitted or inserted spuriously. Incorrect segmentation at the lexical level results in two or more words being run together, as in "runtogether", or a single word being split up into two or more segments, as in "tog ether" or (inconveniently) "to get her", or combinations of these effects as in "runto geth er". In all these cases, it is possible to deal with such errors by extending the spelling correction mechanism to be able to recognize target words as initial segments of unknown words, and vice-versa. Compound errors, however, present some difficulties. For instance, consider the following example where we have both a missing and a spurious delimiter:

Add two du alport disks to the order

After failing in the standard recovery methods, one letter at a time would be stripped off the beginning of the second unrecognizable word ("alport") and added at the end of the first unrecognizable word ("du"). This process succeeds only if at some step both words are recognizable and enable the parse to continue. Migrating the delimiter (the space) backwards as well as forwards should also be attempted between a pair of unknown words, stopping if both words become recognizable. Of course, additional compounding of multiple lexical deviations (e.g., misspellings, run-on words and split words in the same segment) requires combinatorially inefficient recovery strategies. Strong parser expectations can reduce the impact of this problem, but at some point tradeoffs must be made between resilience and efficiency in compound error recovery.
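The letter-by-letter delimiter migration just described amounts to trying alternative split points of the concatenated pair. A minimal version, assuming only a lexicon membership test (the names here are invented, not taken from FLEXP or DYPAR), might look as follows.

```python
# A sketch of delimiter migration between two unknown tokens; "known" is a
# lexicon membership test, and all names are invented for illustration.
def migrate_delimiter(left, right, known):
    """Re-split the concatenation of two unrecognized tokens; trying every
    split point subsumes migrating the space one letter at a time in
    either direction from the original boundary."""
    joined = left + right
    for i in range(1, len(joined)):
        a, b = joined[:i], joined[i:]
        if known(a) and known(b):
            return a, b           # first split where both halves are words
    return None                   # no re-segmentation rescues the pair

lexicon = {"add", "two", "dual", "port", "disks", "the", "order"}
print(migrate_delimiter("du", "alport", lexicon.__contains__))  # ('dual', 'port')
```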
3. Sentential Level Extragrammaticalities

We examine ungrammaticalities at the sentential level in five basic categories: missing words, spurious words or phrases, out of order constituents, agreement violations, and semantic constraint violations.

3.1. Missing constituents

It is not uncommon for the user of a natural language interface to omit words from his input. The degree of recovery possible from such ungrammaticalities is, of course, dependent on which words were left out. In practice, words whose contribution to the sentence is redundant are often omitted in an attempt to be cryptic or "computer-like" (as in "Copy new files my directory"). This suggests that techniques that fill in the structural gaps on semantic grounds are more likely to be successful than strategies which do not facilitate the application of domain semantics.

A parsing process postulates a missing word error when its expectations (syntactic or semantic) of what should go at a certain place in the input utterance are violated. To discover that the problem is in fact a missing word, and to find the parse structure corresponding to the user's intention, the parsing process must "step back" and examine the context of the parse as a whole. It needs to ignore temporarily the unfulfilled expectations and their contribution to the overall structure while it tries to fulfil some of its other expectations through parsing other parts of the input and integrating them with already parsed constituents. More specifically, the parser needs to delimit the gap in the input utterance, correlate it with a gap in the parse structure (filling in that gap if it is uniquely determined), and realign the parsing mechanism as though the gap did not exist. Such a realignment can be done top-down by predicting the other constituents from the parse structure already obtained and attempting to find them in the input stream. Alternatively, realignment can be done bottom-up by recognizing as yet unparsed elements of the input, and either fitting them into an existing parse structure, or finding a larger structure to subsume both them and the existing structure. This latter approach is essential when the structuring words are missing or garbled.

3.2. Spurious and unrecognizable constituents

Words in an input utterance that are spurious to a parse can arise from a variety of sources:

• legitimate phrases that the parser cannot deal with: It is not uncommon for the user of a restricted domain interface to say things that the interface cannot understand because of either conceptual or grammatical limitations. Sometimes, spurious verbosity or politeness is involved:

Add if you would be so kind two fixed head and if possible dual ported disks to my order.

Or the user may offer irrelevant (to the system) explanations or justifications, as observed in preparatory experiments for the GUS system [4], e.g.

I think I need more storage capacity, so add two fixed head dual ported disks to my order.
If the system has a unique expectation for what should go in the gap, it should (with appropriate confirmation from the user) record the unknown words as synonymous with what it expected. If the system has a limited set of expectations for what might go in the gap, it could ask the user which one (if any) he meant and again record the synonym for future reference. In cases where there are no strong expectations, tile system would ask for a paraphrase of the incomprehensible fragment. If this proved comprehensible, it would then postulate the synonymy relation, ask the user for confirmation, and again store the results for future reference. As for missing constituents, recovery from spurious interjections generally requires "stepping back" and examining the context of the parse as a whole. In this case however, violations of the parser's expectations should result in skipping over the troublesome segments, and attempting to fulfill the expectations by parsing subsequent segments of tile input. If this results in a complete parse, the skipped segment may well be spurious. On the other hand, if a gap in the parse strdcture remains, it can be correlated with the skipped segments to postulate possible constituents an• synonomy relations as illustrated above. In the case of broken-off utterances, there are some more specific methods that allow the spurious part of the input to be detected: • If a sequence of two constituents of identical syntactic and semantic type is found where only one is permissible, simply ignore the first constituent. Two main command verbs in sequence (e.g., in the "Add remove ..." example above), instantiate the identical sentential case I~eader role in a case frame parser, enabling the former to be ignored. Similarly, two ,lstantiations of the same prencminal case for the "disk" case frame would be recognized as mutually incompatible and the former again ignored. Other parsing strategies can 439 be extended to recognize equivalent constituent repetition, but case frame instantiation seems uniquely well suited to it. • Recognize explicit corrective phrases and if the constituent to the right is of equivalent syntactic and semantic type as the constituent at the left, substitute the right constituent for the left constituent and continue the parse. This strategy recovers from utterances such as "Add I mean remove ...", if "1 mean" is recognized as a corrective phrase. • Select the minimal constituent for all substitutions. For instance the most natural reading of: Add a nigh speed tape drive, that's disk drive, to the order is to substitute "disk drive" for "tape drive", and not for the larger phrase "high speed tape drive", which also forms a legitimate constituent of like semantic and syntactic type. 3.3. Out of order constituents and fragmentary input Sometimes, a user will employ non-standard word order. There are a variety of reasons why users violate expected constituent ordering relations, including unwillingness to change what has already been typed, especially when extensive retyping would be required: Two fixed head dual ported disk drives add to the order or a belief that a computer will understand a clipped pseudo- milita,~/style more easily than standard usage: two disk drives fixed head du~/ ported to my order add Similar myth~ about what computers understand best can lead to a very fragmented and cryptic style in which all function words are eliminated: Add disk drive order instead of "add a disk drive to my order". 
These two phenomena, out of order constituents and fragmentary input, are grouped together because they are similar from the parsing point of view. The parser's problem in each case is to put together a group of recognizable sentence fragments without the normal syntactic glue of function words or position cues to indicate how the fragments should be combined. Since this syntactic information is not present, semantic considerations have to shoulder the burden alone. Hence, parsers which make it easy for semantic information to be brought to bear are at a considerable advantage. Both bottom-up and top.down recovery strategies are possible for detecting and recovering from missing and spurious constituents. In the bottom-up approach, all the fragments are recognized independently, and purely semantic constraints are used to assemble them into a single framework meaningful in terms of the domain of discourse. When the domain is restricted enough, the semantic constraints can be such that they always produce a unique result. This characteristic was exploited to good effect in the PLANES system [23] in which an input utterance w~s recognized as a sequence of fragments which were then assembled into a meaningful whole on the basis of semantic considerations alone. A top-clown approach to fragment recognition requires that the top-level or organizing concept in the utterance ("add" in the above examples) be located, if it can be, the predictions obtainable from it about what else might appear in the utterance can be used to guide and constrain the recognition of the other fragments. As a final point, note that in the case of out of order constituents, a parser relying on a strict left-to-right scan will have much greater difficulty than one with more directional freedom. In out of order input, there may be no meaningful set of left-to-right expectations, even allowing for gaps or extra constituents, that will fit the input. For instance, a case frame parser that scans for the head of a case frame, and subsequently attempts to instantiate the individual cases from surrounding input, is far more amenable to this type of recovery than one whose expectations are expressed as word order constraints. 3.4. Syntactic and semantic constraint violations Input to a natural language system can violate both syntactic and semantic constraints. The most.common form of syntactic constraint violation is agreement failure between subject and verb or determiner and head noun: Do the order include a disk drives? Semantic constraint violations can occur because the user has conceptual problems: Add a floating head tape drive to the order or because he is imprecise in his language, using a related object in place of the object he really means. For instance, if he is trying to decide on the amount of memory to include in an order he might say: Can you connect a video disk drive to the two megabytes? When what he-really means is "... to the computer with two megabytes of memory?.". These different kinds of constraint violation require quite different kinds of treatment. In general, the syntactic agreement violations can be ignored; cases in which agreement or lack of it distinguishes between two otherwise valid readings of an input are rare. However, one problem that sometimes arises is knowing whether a noun phrase is singular or plural when the determiner or quantifier disagrees with the head noun. Semantic constraint violations due to a user's conceptual problems are harder to deal with. 
Once detected, the only solution is to inform the user of his misconcepLion and let him take it from there. The actual detection of the problem, however, can cause some difficulty for a parser re!ymg heavily on semantic constraints to guide its parse. The constraint violation miOht cause it to assume there was some oth~r problem such as out of order or spurious constituents, and look for (and perhaps even find) some alternative and unintended way of putting all the pieces together. This is one case where syntactic considerations should come to the fore. Semantic constraint violations based on the mention of a related object instead of the entity actually intended by the user will manifest themselves in the same way as the semantic constraint violations based on misconceptions, but their processing needs to be quite different. The violation can be resolved if the system can look at objects related to the one the user mentioned and find one that satisfies the constraints. In the example above, this means going from the memory size to the machine that has that amount of memory. Clearly, the semantic distance and the type of relationship over which this kind of substitution is allowed needs to be controlled fairly carefully -- m a restricted domain everything is eventually related to everything e!se. Preference rules are needed to control the kind of substitutions that are allowed. In the above example, it might be that a part ~s allowed to substitute for a whole (metonymy), especially if, as we assumed, the part had been used earlier in the dialogue to distinguish between different instances of the whole. 440 4. Support for recovery strategies by various parsing approaches We now turn to the question of incorporating recovery strategies into some of the approaches to parsing found in the literature. We consider three basic classes: transition network approaches (including syntactic ATNs and network-based semantic grammars), pattern matching approaches, and approaches based on case frame instantiation. These classes cover the majority of current catsing systems for restricted domain languages. All three approaches are able to cope with lexical level problems satisfactorily. However, as we have seen, the application of semantic constraints often makes the correction of lexical problems more efficient and less prone to ambiguity. So parsers that employ semantic constraints (e.g. semantic grammars [20, 5] or case frame instantiation [12, 17]) are more effective in recovery at the lexical level than parsers whose only expectations are syntactic (e.g., purely syntactic ATNs [28]). At the sentential level, however, differences in the abilities of the three approaches to cope naturally with extragrammaticality are far more pronounced. We will examine each approach in turn from this point of view. 4.1. Recovery strategies and transition network parsers Althou~jh attempts have been made to incorporate sentential level recovery strategies into network-based parsers including beth syntactically-based ATNs [21,24, 25, 29] and semantic grammar networks [20], the network paradigm itself is not well suited to the kinds of recovery strategaes discussed in the preceding sections. These strategies generally require an interpretive abdity to "step back" and take a broad view of the situation when a parser's expectations are violated, and this is very hard to do when using networks. 
The underlying problem is that a significant amount of state information during the parse is implicitly encoded by the position in the network; in the case of ATNs, other aspects of the state are contained in the settings of scattered registers. As demonstrated by the meta-rule approach to diagnosing parse failures described by Weischedel and Sondheimer [24], these and other difficulties elaborated below do not make recovery from extragrammaticality impossible. However, they do make it difficult and often impractical, since much of the implicitly encoded state must be made declarative and explicit to the recovery strategies.

Often an ATN parse will continue beyond the point where the grammatical deviation, say an omitted word, occurred and reach a node in the network from which it can make no further progress (i.e., no arcs can be traversed). At this point, the parser cannot ascertain the source of the error by examining its internal state even if the state is accessible -- the parser may have popped from embedded subnets, or followed a totally spurious sequence of arcs before blocking. If these problems can be overcome and the source of the error determined precisely, a major problem still remains: in order to recover, and parse input that does not accord with the grammar, while remaining true to the network formalism, the parser must modify the network dynamically and temporarily, and use the modified network to proceed through the present difficulties. Needless to say, this is at best a very complex process, one whose computational tractability is open to question in the most general case (though see [21]). It is perhaps not surprising that in one of the most effective recovery mechanisms developed for network-based parsing, the LIFER system's ellipsis handling routine [20], the key step operates completely outside the network formalism.

As we have seen, semantic constraints are very important in recovering from many types of ungrammatical input, and these are by definition unavailable in a purely syntactic ATN parser. However, semantic information can be brought to bear on network-based parsing, either through the semantic grammar approach, in which joint semantic and syntactic categories are used directly in the ATN, or by allowing the tests on ATN arcs to depend on semantic criteria [2, 3]. In the former technique, the appropriate semantic information for recovery can be applied only if the correct network node can be located -- a sometimes difficult task as we have seen. In the latter technique, sometimes known as cascaded ATNs [27], the syntactic and semantic parts of the grammar are kept separate, thus giving the potential for a higher degree of interpretiveness in using the semantic information. However, semantic information represented in this fashion is generally only used to confirm or disconfirm parses arrived at on syntactic grounds and does not participate directly in the parsing process.

A further disadvantage of the network approach for implementing flexible recovery strategies is that networks naturally operate in a top-down left-to-right mode. As we have seen, a bottom-up capability is essential for many recovery strategies, and directional flexibility often enables easier and more efficient operation of the strategies. Of course, the top-down left-to-right mode of operation is a characteristic of the network interpreter, not of the network formalism itself, and an attempt [29] has been made to operate an ATN in an "island" mode, i.e. bottom-up, center-out.
This experiment was done in the context of a speech parser where the low-level recognition of many of the input words was uncertain, though the input as a whole was assumed to be grammatical. In that situation, there were clear advantages to starting with islands of relative lexical certainty, and working out from them. Problems, however, arise during leftward expansion from an island when it is necessary to run the network backwards. The admissibility of ATN transitions can depend on tests which access the values of registers which would have been set earlier when traversing the network forwards, but which cannot have been set when traversing backwards. This leads at best to an increase in non-determinism, and at worst to blocking the traversal completely.

4.2. Recovery strategies and pattern matching parsers

A pattern matching approach to parsing provides a better framework to recover from some sentential level deviations than a network-based approach. In particular, the definition of what constitutes a pattern match can be relaxed to allow for missing or spurious constituents. For missing constituents, patterns which match some, but not all, of their components can be counted temporarily as complete matches, and spurious constituents can be ignored so long as they are embedded in a pattern whose other components do match. In these cases, the patterns taken as a whole provide a basis on which to perform the kind of "stepping back" discussed above as being vital for flexible recovery. In addition, when pattern elements are defined semantically instead of lexically, as with Wilks' machine translation system [26], semantic constraints can easily be brought to bear on the recognition. However, dealing with out of order constituents is not so easy for a pattern-based approach, since constituent order is built into a pattern in a rigid way, similarly to a network. It is possible to accept any permutation of elements of a pattern as a match, but this provides so much flexibility that many spurious recognitions are likely to be obtained as well as the correct ones (see [16]).
Any uniformly represented grammar, whether based on patterns or networks, will have trouble representing and using the kinds of distinctions just outlined, and thus is poorly equipped to deal with many grammatical deviations in an efficient and discriminating manner. See [18] for a fuller discussion of this point. 4.3. Recovery strategies and case frame parsers Recursive case frame instantiation appears to provide a better framework for recovery from missing words than approaches based on either network traversal or pattern matchil~g. There are several reasons: • Case frame instantiation is inherently a highly interpretive process. Case frames provide a high-level set of syntactic and semantic expectations that can be applied to the input in a variety of ways. They also provide an overall framework that can be used to realize the notion of "stepping back" to obtain a broad view of a parser's expectations. o Case frame instantiation is a good vehicle for bringing semantic and pragmatic information to bear in order to help determine the appropriate parse in the absence of expected syntactic constituents. If a preposition is omitted (as commonly happens when dealing with cryptic input from hunt-and-peck typists), the resulting sentence is syntactically anomalous. However, semantic case constraints can be sufficiently strong to attach each noun phrase to the correct structure. Suppose, for instance, the following sentence is typed to an elec',ronic mail system interface: Send message John Smith The missing determiner presents few problems, but the missing preposition can be more serious. Do we mean to send a message "to John Smith", "about John Smith", "with John Smith", "for John Smith", "from John Smith", "in John Smith", "of John Smith", etc.? The domain semantics of the case frame rule out the latter three possibilities and others like them as nonsensical. However, pragmatic knowledge is required to select "to John Smith" as the preferred reading (possibly subject to user confirmation) -- the destination case of the verb is required for the command to be effective, whereas the other cases, if present, are optional. This knowledge of the underlying action must be brought to bear at parse time to disambiguate the cryptic command. In the XCALIBUR system case frame encoding [10], pragmatic knowledge of this kind is represented as oreference constraints (cf. [26]) on case fi!lers. This allows XCALIBUR to overcome problems created by the absence of expected case markers through the application of the appropriate domain knowledge. • The propagation of semantic knowledge through a case frame (via attached procedures such as those of KRL [1] or SRL [30]), can fiil in parser defaults and allow the internal completion of phrases such as "dual disks" to mean "dual ported disks". This process is also responsible for noticing when information is either missing or ambiguously determined, thereby initiating a focused clarificational dialogue [15]. • The representation of case frames is inherently non-uniform. Case fillers, case markers, and case headers are all represented separately, and thi$ distinction can be used by the parser interpretively mstantiating the case frame. For instance, if a case frame accounts for the non-spurious part of an input containing spurious constituents, a recovery strategy can skip over the unrecognizable words by scanning for case markers as opposed to case fillers which typically are much harder to find and parse. 
This ability to exploit non-uniformity goes a long way to overcoming the problems with uniform parsing methods outlined in the previous section on pattern matching. 5. Dialogue Level Extragrammaticality The underlying causes of many extragrammaticalities detected at the sentential level are rooted in dialogue phenomena. For instance, ellipses and other fragmentary inputs are patently ungrammatical at the sentential level, but can be understood in the context of a dialogue. Viewed at this more global level, ellipsis is not ungrammatical. Nevertheless, the same computational mechanisms required to recover from lexioal and (especially) sentential problems are neces.~ary to detect ellipsis and parse the fragments correctly for incorporation into a larger structure. In general, many dialogue phenomena can be classified pragmatically as extragrammaticalities. In addition to addressing dialogue level extragrammaticalities, any robust parsing system must engage the user in dialogue for cooperative resolution of parsing problems too difficult for automatic recovery. Interaction with the user is also necessary for a cooperative parser to confirm any assumptions it makes in interpreting extragrammatical input and to resolve any ambiguities it cannot overcome on its own. We have referred several times in our discussions to the principle of tocused interaction, and stated that practical recovery dialogues should be focused as tightly as possible on the specific problem at hand. Because of space limitations, this paper does not discuss details the automated resolution of dialogue level extragrarnmaticalities or the use of dialogue to engage the user in cooperative resolution. The interested reader is referred to [8]. 6. Concluding Remarks Any practical natural language interface must be capable of dealing with a wide range of extragrammatical input. This paper has proposed a partial taxonomy of extragrammatica!!ties that arise in spontaneously generated input to a restricted-domain natural language interface and has presented recovery strategies for handhng many of the categories. We also discussed how well three widely employed approaches to parsing -- network-based parsing, pattern matching, and case frame instantation -- could support the recovery strategies, and concluded that case frame instantiation provided the best basis The reader is referred to [8] 442 for a more complete presentation, including a more complete taxonomy and additional recovery strategies, particularly at the dialogue level. Based on the set of recovery strategies we have examined and the problems that arise in trying to integrate them with techniques for parsing grammatical input, we offer the following set of desiderata for a parsing process that has to deal with extragrammatical input: = The parsing process should be as interpretive as possible. We have seen several times the need for a parsing process to "stand back" and look at the broad picture of the set of expectations (or grammar) it is applying to the input when an ungrammaticality arises. The more interpretive a parser is, tbe better able it is to do this. A highly interpretive parser is also better able to apply its expectations to the input in more than one way, which may be crucial if the standard way does not work in the face of an ungrammaticality. • The parsing process should make it easy to apply semantic information. As we have seen, semantic information is often very important in resolving ungrammaticalities. 
• The parsing process should be able to take advantage of non-uniformity in language like that identified in Section 4.2. As we have seen, recovery can be much more efficient and reliable if a parser is able to make use of variations in ease of recognition or discriminating power between different constituents. This kind of "opportunism" can be built into recovery strategies.

• The parsing process should be capable of operating top-down as well as bottom-up. We have seen examples where both of these modes are essential.

We believe that case frame instantiation provides a better basis for parsing extragrammatical input than network-based parsing or pattern matching precisely because it satisfies these desiderata better than the other two approaches. We also believe that it is possible to do even better than case frame instantiation by using a multi-strategy approach in which case frame instantiation is just one member (albeit a very important one) of a whole array of parsing and recovery strategies. We argue this claim in detail in [8] and support it by discussion of three experimental parsers that in varying degrees adopt the multi-strategy approach.

7. References

1. Bobrow, D.G. and Winograd, T., "An Overview of KRL, a Knowledge Representation Language," Cognitive Science, Vol. 1, No. 1, 1977, pp. 3-46.
2. Bobrow, R.J., "The RUS System," BBN Report 3878, Bolt, Beranek, and Newman, 1978.
3. Bobrow, R.J. and Webber, B., "Knowledge Representation for Syntactic/Semantic Processing," Proc. National Conference of the American Association for Artificial Intelligence, Stanford University, August 1980.
4. Bobrow, D.G., Kaplan, R.M., Kay, M., Norman, D.A., Thompson, H., and Winograd, T., "GUS: a Frame-Driven Dialogue System," Artificial Intelligence, Vol. 8, 1977, pp. 155-173.
5. Brown, J.S. and Burton, R.R., "Multiple Representations of Knowledge for Tutorial Reasoning," in Representation and Understanding, Bobrow, D.G. and Collins, A., eds., Academic Press, New York, 1975, pp. 311-349.
6. Carbonell, J.G., "Towards a Self-Extending Parser," Proceedings of the 17th Meeting of the Association for Computational Linguistics, 1979, pp. 3-7.
7. Carbonell, J.G. and Hayes, P.J., "Robust Parsing Using Multiple Construction-Specific Strategies," in Natural Language Parsing Systems, L. Bolc, ed., Springer-Verlag, 1984.
8. Carbonell, J.G. and Hayes, P.J., "Recovery Strategies for Parsing Extragrammatical Language," Journal of Computational Linguistics, Vol. 10, 1984 (publication forthcoming).
9. Carbonell, J.G., Boggs, W.M., Mauldin, M.L. and Anick, P.G., "The XCALIBUR Project, A Natural Language Interface to Expert Systems," Proceedings of the Eighth International Joint Conference on Artificial Intelligence, 1983.
10. Carbonell, J.G., Boggs, W.M., Mauldin, M.L. and Anick, P.G., "XCALIBUR Progress Report #1: First Steps Towards an Integrated Natural Language Interface," Tech. report, Carnegie-Mellon University, Computer Science Department, 1983.
11. Carbonell, J.G., "Discourse Pragmatics in Task-Oriented Natural Language Interfaces," Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, 1983.
12. DeJong, G., Skimming Stories in Real-Time, PhD dissertation, Computer Science Dept., Yale University, 1979.
13. Durham, I., Lamb, D.D., and Saxe, J.B., "Spelling Correction in User Interfaces," Comm. ACM, Vol. 26, 1983.
14. Haas, N. and Hendrix, G.
G., "Learning by Being Told: Acquiring Knowledge tot Infoll,~ahon Management," in Machine Learning, An Artificial Intelligence 4pproach, R. S. Michalski, J.G. C.arbonell and T.M. Mitchell, eds., Tioga Press, Pale Alto, CA, 1983. 15. Hayes P.J., "A Construction Specific Approach to Focused Interaction in Flexible Parsing," Prec. el 191h Annual Meeting of the Assoc. for Comput. Ling., June 1981, pp. 149.152. 16. Hayes, P.J. and Mouradian, G.V., "Flexible Parsing," American Journal of Computational Linguistics. VoL 7, No. 4, 1981, pp. 232-241. 17. Hayes, P. J. and Carbonell, J. G., "Multi Strategy Construction-Specific Parsing for Flexible Data Base Query and Update," Prec. Seventh Int. Jt. Conf. on Artificia//nte//igence, Vancouver, August 1981, pp. 432.439. 18 Hayes, P. J. and Carbonell, J. G., "Multi-Strategy Parsing and its Role in Robust Man-Machine Communication," Tech. report CMU-CS-81-118, Carnegie. Mellon Umversity, Computer Science Department, May 1981. 19. Hendrix, G.G., Sacerdoti, E.D. and Slocum, J., "Developing a Natural Language Interface m Complex Data," Tech. report Artificial Intelligence Canter., SRI International, 1976. 20. Hendrix, G.G, "Human Engineering for Applied Natural Language Processing," Prec. Fillh Int. Jt. Conf. on Artiliciel Intelligence. 1977, pp. 183-191. 21. Kwasny. S.C. and Sondheimer, N K., "Relaxalion Techniques for Parsing Grammatically IlI-Folmed Ioput in Natural Language Understanding Systems," American Journal o1 Computational Lmguistics. Vol. 7, No. 2, 1981, pp. 99-108. 22. S~.hank, R C., Lebowitz, M, Bimbaum, E., "An Integrated Undemtander," American Journal of Computational Linguistics. Vol. 6, NO. 1, 1980, pp, 13-30. 23. Waltz, D.L., "An English Language Question Answering System for a Large Relational Data Base," Comm. ACM, Vol. 21 ,'No. 7, 1978, pp. 526-539. 24. Weischedel, R.M. and Sondheimer, N K., "Me;a-Rules as a Basis for Processing Ill-formed Input," Computational Linguistics, VoL 10, 1984. 25. Wemchedel, R.M. and Black, J., "Responding to Potentially Unparseable Sentences," American Journal of Computational Linguistics, Vol, 6, 1980, pp. 97-109. 26. Wilks, Y.A., "Preference Semantics," in Formal Semantics of Natural Language, Keenan, ed., Cambridge University Press, 1975. 27. Woods, W.A., "Cascaded ATN Grammars," American Journal of Ccmput=tional Linguistics. Vol. 6, No. 1, August 1980, pp. 1-12. 28. Woods, W.A., Kaplan, R.M., and Nash-Webber, B., "The Lunar Sciences Language System; Final Report," lech. report 2378, Bolt, Beranek, and Newman, inc., Canlbridge, Mass., 1972. 29. Woods, W. A., Bates, M., Brown, G., Bruce, B,, Cook, C., Klovatad, J., Makhoul, J., Nash-Webber, B., Schwartz., R., Wolf, J., and Zue, V., "Speech UndeL'~tandmg Systems • Final Technical Report," Tech. report 3438, Bolt, Beranek, and Newman, Inc.. Cambridge, Mass., 1976. 30. Wright, K. and Fox, M, "The SRL User3 Manual," Tech. report, Robotic,= institute, Carnegie-Mellon University, 1983. 443
1984
89
APPLICATIONS OF A LEXICOGRAPHICAL DATA BASE FOR GERMAN

Wolfgang Teubert
Institut für deutsche Sprache
Friedrich-Karl-Str. 12
6800 Mannheim 1, West Germany

ABSTRACT

The Institut für deutsche Sprache recently has begun setting up a LExicographical DAta Base for German (LEDA). This data base is designed to improve efficiency in the collection, analysis, ordering and description of language material by facilitating access to textual samples within corpora and to word articles within machine readable dictionaries, and by providing a frame to store results of lexicographical research for further processing. LEDA thus consists of the three components Text Bank, Dictionary Bank and Result Bank and serves as a tool to support monolingual German dictionary projects at the Institute and elsewhere.

I INTRODUCTORY REMARKS

Since the foundation of the Institut für deutsche Sprache in 1964, its research has been based on empirical findings; samples of language produced in spoken or written form were the main basis. To handle efficiently large quantities of texts to be researched it was necessary to use a computer, to assemble machine readable corpora and to develop programs for corpus analysis. An outline of the computational activities of the Institute is given in LDV-Info (1981 ff); the basic corpora are described in Teubert (1982). The present main frame computer, which was installed in January 1983, is a Siemens 7.536 with a core storage of 2 megabytes, a number of tape and disc decks and at the moment 15 visual display units for interactive use. Whereas in former years most jobs were carried out in batch, the terminals now make it possible for the linguist to work interactively with the computer. It was therefore a logical step to devise a Lexicographical Data Base for German (LEDA) as a tool for the compilation of new dictionaries. The ideology of interactive use demands a different concept of programming, where the lexicographer himself can choose from the menu of alternatives offered by the system and fix his own search parameters. Work on the Lexicographical Data Base was begun in 1981; a first version incorporating all three components is planned to be ready for use in 1986.

What is the goal of LEDA? In any lexicographical project, once the concept for the new dictionary has been established, there are three major tasks where the computer can be employed:

(i) For each lemma, textual samples have to be determined in the corpus which is the linguistic base of the dictionary. The text corpus and the programs to be applied to it will form one component of LEDA, namely the Text Bank.

(ii) For each lemma, the lexicographer will want to compare corpus samples with the respective word articles of existing relevant dictionaries. For easy access, these dictionaries should be transformed into a machine readable corpus of integrated word articles. Word corpus and the pertaining retrieval programs will form the second component, i.e. the Dictionary Bank.

(iii) Once the formal structure of the word articles in the new dictionary has been established, description of the lemmata within the framework of this structure can be begun. A data base system will provide this frame so that homogeneous and interrelated descriptions can be carried out by each member of the dictionary team at all stages of the compilation. This component of LEDA we call the Result Bank.

II TEXT BANK

Each dictionary project should make use of a text corpus assembled to the specific requirements of the particular lexicographical goal.
As self-evident as this claim seems to be, it is nonetheless true for most German monolingual dictionaries on the market that they have been compiled without any corpus; this is apparently even the case for the new six volume BROCKHAUS-WAHRIG, as has been pointed out by Wiegand/Kucera (1981 and 1982). For a general dictionary of contemporary German containing about 200 000 lemmata, the Homburger Thesen (1978) asked for a corpus of not less than 50 million words (tokens).

To be used in the text bank, corpora will have to conform to the special codification or pre-editing requirements demanded by the interactive query system. At present, a number of machine readable corpora in unified codification are available at the Institute, including the Mannheim corpora of contemporary written language, the Freiburg corpus of spoken language and the East/West German newspaper corpus, totalling altogether about 7 million running words of text. Further corpora have been taken over from other research institutions, publishing houses and other sources. These texts had been coded in all kinds of different conventions, and programs had to (and still have to) be developed to transform them according to the Mannheim coding rules. Other texts to be included in the corpus of the text bank will be recorded by OCR, via terminal or by use of an optical scanner, if they are not available on machine readable data carriers. By the end of 1985 texts of a total length of 20 million words will be available from which any dictionary project can make its own selection.

A special query system called REFER has been developed and is still being improved. For a detailed description of it, see Brückner (1982) and (1984). The purpose of this system is to ensure quick access to the data of the text bank, thus enabling the lexicographer to use the corpus interactively via the terminal. Unlike other query programs, REFER does not search for a word form (or a combination of graphemes) in the corpus itself, but in registers containing all the word forms. One register is arranged in the usual alphabetical way, the other is organized in reverse or a tergo to allow a search for suffixes or the terminal elements of compounds. All word forms in the registers are connected with the references to their actual occurrence in the corpus, which are then looked up directly. With REFER, it normally takes no more than three to five seconds for the search procedure to be completed, and all occurrences of the word form within an arbitrarily chosen context can be viewed on the screen. Response behaviour does not depend on the size of the text bank.

In addition, REFER features the following options:

- The lexicographer can search for a word form, for word forms beginning or ending with a specified string of graphemes, or for word forms containing a specified string of graphemes at any place.
- The lexicographer can search for any combination of word forms and/or graphemic strings to occur within a single sentence of the corpus.
- REFER is connected with a morphological generator supplying all inflected forms for the basic form, e.g. the infinitive (cf. fahren (inf.) → fahre, fährst, fahrt, fährt, fuhr, fuhren, fuhrst, führe, führen, führst, gefahren). This will make it much easier for the lexicographer to state his query.
- For all word forms, REFER will provide information on the relative and absolute frequency and the distribution over the texts of the corpus.
- The lexicographer has a choice of options for the output. He can view the search item in the context of a full sentence, in the context of any number of sentences or in the form of a KWIC-Index, both on the screen and in print.
- For each search procedure, the linguist can define his own subcorpus from the complete corpus.
- Lemmatized registers are in preparation. They will be produced automatically using a complete dictionary of word forms with their morphological descriptions. These lemmatized registers not only reduce the search time, but also give the accurate frequency of a lemma, not just a word form, in the corpus.
- Registers of word classes and morphological descriptions (e.g. listing references of all past participles) will be produced automatically by inverting the lemmatized registers. Thus the linguist can search for relevant grammatical constructions, like all verb complexes in the passive voice.
- Another feature will permit searching for an element at a predetermined sentence position, like all finite verbs as the first words of a sentence or all nouns preceded by two adjectives.
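The register organization described above can be illustrated with a small sketch: word forms are indexed once with their corpus references, and an a tergo register stores each form reversed, so that a suffix search reduces to an ordinary prefix search. The data structures and names below are illustrative assumptions, not REFER's actual implementation.

```python
# A sketch of REFER-style register lookup: queries search the registers,
# not the corpus itself, so response time is independent of corpus size.
from collections import defaultdict

corpus = {  # reference -> sentence (a stand-in for real corpus texts)
    1: "die Dateien wurden gefahren",
    2: "er fuhr nach Mannheim",
}

register = defaultdict(list)           # word form -> list of references
for ref, sentence in corpus.items():
    for form in sentence.split():
        register[form].append(ref)

# the a tergo register holds each word form reversed, so that a suffix
# search becomes a prefix search over the reversed forms
a_tergo = {form[::-1]: form for form in register}

def ending_in(suffix):
    """Find word forms ending in a suffix, with their corpus references."""
    key = suffix[::-1]
    return {form: register[form]
            for rev, form in a_tergo.items() if rev.startswith(key)}

print(ending_in("en"))   # {'Dateien': [1], 'wurden': [1], 'gefahren': [1]}
```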
Thus the text bank is a tool for the lexicographer to gain information of the following kind:

- Which word forms of a lemma are found in the corpus? Are there spelling or inflectional variations?
- In which meanings and syntactical constructions is the lemma employed?
- What collocations are there? What compounds is the lemma part of?
- Is there evidence for idiomatic and phraseological usage?
- What is the relative and absolute frequency of the lemma? Is there a characteristic distribution over different text types?
- Which samples can best be used to demonstrate the meanings of the lemma?

Preliminary versions of the text bank have been in use since 1982. Not only lexicographers but also grammarians employ this interactive system to gain the textual samples they need. A steadily growing number of service demands, both from members of the Institute and from linguists at other institutions, are being fulfilled by the text bank.

III DICTIONARY BANK

If access to the textual samples of a corpus is an indisputable prerequisite for successful dictionary compilation, consultation of other relevant dictionaries can facilitate the drawing up of lexical entries. It is virtually impossible to assemble a corpus so extensive and encompassing that it will suffice to describe the whole vocabulary of a language, even within the limits of the particular conception of any dictionary (unless it were a pure corpus dictionary). A dictionary of contemporary language should not let down its user if he is reading a text written in the early 19th century, though it will contain words and meanings of words not found in a corpus of post World War II texts. This holds even more for languages for special purposes; they cannot be described without recourse to technical dictionaries, collections of terminology and thesauri, because the more or less standardized meanings cannot be retrieved from their occurrences in texts. According to Nagao et al. (1982), "dictionaries themselves are rich sources, as linguistic corpora. When dictionary data is stored in a data base system, the data can be examined by making cross references of various viewpoints. This leads to new discoveries of linguistic facts which are almost impossible to achieve in the conventional printed versions". A dictionary bank will therefore form one of the components of the Lexicographical Data Base.
Since 1979 a team at the Bonn Institut für Kommunikationsforschung und Phonetik has been compiling a 'cumulative word data base for German', using 11 existing machine readable dictionaries of various kinds, including dictionaries assembled for Artificial Intelligence projects, machine translation systems and, for copyright reasons, only two general purpose dictionaries. Programs have been developed to make up for the differences in the description of lemmata and to permit automatic cumulation. For further information regarding this project, see Hess/Brustkern/Lenders (1983) and Brustkern/Schulze (1983, 1983a). The cumulative word data base, which is due to be completed in 1984, will then be implemented in Mannheim and form the core of the dictionary bank of LEDA.

In its final version, the dictionary bank will provide a fully integrated cumulation of the source dictionaries, down to the level of lexical entries, including statement of word class and morphosyntactical information. A complete integration within the microstructure of the lexical entry, however, seems neither possible nor even desirable. Automatic unification cannot be achieved on the level of semantic and pragmatic description. Here, the source for each information item has to be retrievable to assist the lexicographer in the evaluation.

The dictionary bank will be a valuable tool not only for the lexicographer but also for the grammarian. Retrieval programs will make it possible to come up with a listing of all verbs with a dative and accusative complement, or of all nouns belonging to a particular inflectional class; a sketch of such a retrieval is given below. Since the construction of the dictionary bank and the result bank will be related to each other, every time a new dictionary has been compiled in the result bank, it can be copied into the dictionary bank, making it a growing source of lexical knowledge. The dictionary bank can then be used as a master dictionary as defined by Wolfart (1979), from which derived printed versions for different purposes can be produced.
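As a minimal sketch of the retrieval just mentioned, consider the following toy entry format. The field names (lemma, word_class, complements, source) are illustrative assumptions, not the actual LEDA schema; the point is only that each information item keeps its source dictionary, as required above for semantic and pragmatic description.

    # Toy cumulated entries, each tagged with its source dictionary.
    entries = [
        {"lemma": "geben",  "word_class": "verb",
         "complements": {"dative", "accusative"}, "source": "dict_A"},
        {"lemma": "helfen", "word_class": "verb",
         "complements": {"dative"},               "source": "dict_B"},
        {"lemma": "zeigen", "word_class": "verb",
         "complements": {"dative", "accusative"}, "source": "dict_A"},
    ]

    def verbs_with(case_frame):
        """List all verbs whose complement set includes the given cases."""
        return [e["lemma"] for e in entries
                if e["word_class"] == "verb" and case_frame <= e["complements"]]

    print(verbs_with({"dative", "accusative"}))   # ['geben', 'zeigen']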
IV RESULT BANK

Whereas text bank and dictionary bank supply the lexicographer with linguistic information, the result bank will be empty at the beginning of a project; it consists of a set of forms which are the frames for the word articles. Into these forms the lexicographer enters the (often preliminary) results of his work, which will be altered, amended or shortened and interrelated with other word articles (e.g. via synonymy or antonymy) in the course of compilation; he copies into those forms relevant textual samples from the text bank and useful information units from the dictionary bank. Access via terminal is possible not only to any file representing a word article but also to any record representing a category of explication. The result bank, which can be constructed within the framework of any standard data base management system, thus permits consultation and comparison on any level of lexical description.

Descriptive uniformity in the morphosyntactical categories seems easy enough. But as has been shown in a number of studies, e.g. by Mugdan (1984), most existing dictionaries abound in discrepancies and inaccuracies which can easily be avoided by cross-checking within the result bank. More difficult is homogeneity in the semantic description of the vocabulary, representing a partly hierarchical, partly associative net of conceptual relations. The words used in semantic explications must be used only in the same sense or senses in which they are defined under their respective head words. These tasks can be carried out more easily within a data base system. Furthermore, the result bank will support collecting and comparing the related elements of groups such as:

- all verbs with the same sentence patterns
- all adjectives used predicatively only
- all nouns denoting tools
- all words rated as obsolete
- the vocabulary of automobile engineering.

Files will differ from word class to word class, as particles or adverbs cannot be described within the same cluster of categories as nouns or verbs. Similarly, macrostructure and microstructure will not be the same for any two dictionaries. Still, categories should be defined in such a way that the final version of the dictionary can be copied into the dictionary bank without additional manual work. After the dictionary has been compiled, it can be used as copy, using standard editing programs to produce the printed version directly from the result bank. At that level, strict formatting is no longer necessary and should be abandoned, wherever possible, in favour of economy of space.

Work on the result bank will begin in autumn 1984. The pilot version of it will be applied to the current main dictionary project of the Institute, i.e. the "Manual of Hard Words", which at present is still in its planning stage. Even in its initial version, however, LEDA will be accessible and applicable for other lexicographical projects as well.

REFERENCES

Tobias Brückner. Programm-Dokumentation REFER Version 1. LDV-Info 2. Informationsschrift der Arbeitsstelle Linguistische Datenverarbeitung. Mannheim: Institut für deutsche Sprache, 1982, pp. 1-26.

Tobias Brückner. Der interaktive Zugriff auf die Textdatei der Lexikographischen Datenbank (LEDA). Sprache und Datenverarbeitung 1-2/1982, 1984, pp. 28-33.

Jan Brustkern/Wolfgang Schulze. Towards a Cumulated Word Data Base for the German Language. IKP-Arbeitsberichte Abteilung LDV. Bonn: Institut für Kommunikationsforschung und Phonetik der Universität Bonn, 1983, pp. 1-9.

Jan Brustkern/Wolfgang Schulze. The Structure of the Word Data Base for the German Language. IKP-Arbeitsberichte Abteilung LDV, Nr. 1. Bonn: Institut für Kommunikationsforschung und Phonetik der Universität Bonn, 1983, pp. 1-9.

Klaus Heß/Jan Brustkern/Winfried Lenders. Maschinenlesbare deutsche Wörterbücher. Dokumentation, Vergleich, Integration. Tübingen, 1983.

LDV-Info. Informationsschrift der Arbeitsstelle Linguistische Datenverarbeitung. Mannheim: Institut für deutsche Sprache, 1981 ff.

Joachim Mugdan. Grammatik im Wörterbuch: Wortbildung. Germanistische Linguistik 1-3/83, 1984, pp. 237-309.

M. Nagao, J. Tsujii, Y. Ueda, M. Takiyama. An Attempt to Computerize Dictionary Data Bases. In: J. Gotschalckx, L. Rolling (eds.). Lexicography in the Electronic Age. Amsterdam, 1982, pp. 51-73.

Wolfgang Teubert. Corpus and Lexicography. Proceedings of the Second Scientific Meeting "Computer Processing of Linguistic Data". Bled, Yugoslavia, 1982, pp. 275-301.

Herbert Ernst Wiegand / Antonin Kucera. Brockhaus-Wahrig. Deutsches Wörterbuch auf dem Prüfstand der praktischen Lexikologie. I. Teil: 1. Band (A-BT); 2. Band (BU-FZ). Kopenhagener Beiträge zur Germanistischen Linguistik, 18, 1981, pp. 94-217.

Herbert Ernst Wiegand / Antonin Kucera. Brockhaus-Wahrig. Deutsches Wörterbuch auf dem Prüfstand der praktischen Lexikologie. II. Teil: 1. Band (A-BT); 2. Band (BU-FZ); 3. Band (G-JZ). Germanistische Linguistik 3-6/80, 1982, pp. 285-373.

H. C. Wolfart. Diversified Access in Lexicography. In: R. R. K. Hartmann (ed.). Dictionaries and Their Users.
Papers from the 1978 B.A.A.L. Seminar on Lexicography. (= Exeter Linguistic Studies, Vol. 4). Exeter, 1979, pp. 143-153.
Correcting Object-Related Misconceptions: How Should The System Respond?1

Kathleen F. McCoy
Department of Computer & Information Science
University of Pennsylvania
Philadelphia, PA 19104

Abstract

This paper describes a computational method for correcting users' misconceptions concerning the objects modelled by a computer system. The method involves classifying object-related misconceptions according to the knowledge-base feature involved in the incorrect information. For each resulting class, sub-types are identified, according to the structure of the knowledge base, which indicate what information may be supporting the misconception and therefore what information to include in the response. Such a characterization, along with a model of what the user knows, enables the system to reason in a domain-independent way about how best to correct the user.

1. Introduction

A major area of AI research has been the development of "expert systems" - systems which are able to answer users' questions concerning a particular domain. Studies identifying desirable interactive capabilities for such systems [Pollack et al. 82] have found that it is not sufficient simply to allow the user to ask a question and have the system answer it. Users often want to question the system's reasoning, to make sure certain constraints have been taken into consideration, and so on. Thus we must strive to provide expert systems with the ability to interact with the user in the kind of cooperative dialogues that we see between two human conversational partners.

Allowing such interactions between the system and a user raises difficulties for a Natural-Language system. Since the user is interacting with a system as s/he would with a human expert, s/he will most likely expect the system to behave as a human expert. Among other things, the user will expect the system to be adhering to the cooperative principles of conversation [Grice 75, Joshi 82]. If these principles are not followed by the system, the user is likely to become confused.

In this paper I focus on one aspect of the cooperative behavior found between two conversational partners: responding to recognized differences in the beliefs of the two participants. Often when two people interact, one reveals a belief or assumption that is incompatible with the beliefs held by the other. Failure to correct this disparity may not only implicitly confirm the disparate belief, but may even make it impossible to complete the ongoing task. Imagine the following exchange:

U. Give me the HULL_NO of all Destroyers whose MAST_HEIGHT is above 190.
E. All Destroyers that I know about have a MAST_HEIGHT between 85 and 90. Were you thinking of the Aircraft-Carriers?

In this example, the user (U) has apparently confused a Destroyer with an Aircraft-Carrier. This confusion has caused her to attribute a property value to Destroyers that they do not have. In this case a correct answer by the expert (E) of "none" is likely to confuse U. In order to continue the conversation with a minimal amount of confusion, the user's incorrect belief must first be addressed.

My primary interest is in what an expert system, aspiring to human expert performance, should include in such responses. In particular, I am concerned with system responses to recognized disparate beliefs/assumptions about objects.
In the past this problem has been left to the tutoring or CAI systems [Stevens et al. 79, Stevens & Collins 80, Brown & Burton 78, Sleeman 82], which attempt to correct students' misconceptions concerning a particular domain. For the most part, their approach has been to list a priori all misconceptions in a given domain. The futility of this approach is emphasized in [Sleeman 82]. In contrast, the approach taken here is to classify, in a domain-independent way, object-related disparities according to the knowledge base (KB) feature involved. A number of response strategies are associated with each resulting class. Deciding which strategy to use for a given misconception will be determined by analyzing a user model and the discourse situation.

2. What Goes Into a Correction?

In this work I am making the following assumptions:

• For the purposes of the initial correction attempt, the system is assumed to have complete and correct knowledge of the domain. That is, the system will initially perceive a disparity as a misconception on the part of the user. It will thus attempt to bring the user's beliefs into line with its own.
• The system's KB includes the following features: an object taxonomy, knowledge of object attributes and their possible values, and information about possible relationships between objects.
• The user's KB contains similar features. However, much of the information (content) in the system's KB may be missing from the user's KB (e.g., the user's KB may be sparser or coarser than the system's KB, or various attributes of concepts may be missing from the user's KB). In addition, some information in the user's KB may be wrong. In this work, to say that the user's KB is wrong means that it is inconsistent with the system's KB (e.g., things may be classified differently, properties attributed differently, and so on).
• While the system may not know exactly what is contained in the user's KB, information about the user can be derived from two sources. First, the system can have a model of a canonical user. (Of course this model may turn out to differ from any given user's model.) Secondly, it can derive knowledge about what the user knows from the ongoing discourse. This latter type of knowledge constitutes what the system discerns to be the mutual beliefs of the system and user as defined in [Joshi 82]. These two sources of information together constitute the system's model of the user's KB. This model itself may be incomplete and/or incorrect with respect to the system's KB.
• A user's utterance reflects either the state of his/her KB, or some reasoning s/he has just done to fill in some missing part of that KB, or both.

1 This work is partially supported by the NSF grant #MCS81-07200.

Given these assumptions, we can consider what should be included in a response to an object-related disparity. If a person exhibits what his/her conversational partner perceives as a misconception, the very least one would expect from that partner is to deny the false information2 - for example -

U. I thought a whale was a fish.
E. It's not.

Transcripts of "naturally occurring" expert systems show that experts often include more information in their response than a simple denial. The expert may provide an alternative true statement (e.g., "Whales are mammals.").
S/he may offer justification and/or support for the correction (e.g., "Whales are mammals because they breathe through lungs and feed their young with milk."). S/he may also refute the faulty reasoning s/he thought the user had done to arrive at the misconception (e.g., "Having fins and living in the water is not enough to make a whale a fish."). This behavior can be characterized as confirming the correct information which may have led the user to the wrong conclusion, but indicating why the false conclusion does not follow by bringing in additional, overriding information.3

The problem for a computer system is to decide what kind of information may be supporting a given misconception. What things may be relevant? What faulty reasoning may have been done? I characterize object-related misconceptions in terms of the KB feature involved. Misclassifying an object, "I thought a whale was a fish", involves the superordinate KB feature. Giving an object a property it does not have, "What is the interest rate on this stock?", involves the attribute KB feature. This characterization is helpful in determining, in terms of the structure of a KB, what information may be supporting a particular misconception. Thus, it is helpful in determining what to include in the response.

2 Throughout this work I am assuming that the misconception is important to the task at hand and should therefore be corrected. The responses I am interested in generating are the "full blown" responses. If a misconception is detected which is not important to the task at hand, it is conceivable that either the misconception be ignored or a trimmed version of one of these responses be given.

3 The strategy exhibited by the various experts is very similar to the "grain of truth" correction found in tutoring situations as identified in [Woolf & McDonald 83]. This strategy first identifies the grain of truth in a student's answer and then goes on to give the correct answer.

In the following sections I will discuss the two classes of object misconceptions just mentioned: superordinate misconceptions and attribute misconceptions. Examples of these classes along with correction strategies will be given. In addition, indications of how a system might choose a particular strategy will be investigated.

3. Superordinate Misconceptions

Since the information that human experts include in their responses to a superordinate misconception seems to hinge on the expert's perception of why the misconception occurred or what information may have been supporting the misconception, I have sub-categorized superordinate misconceptions according to the kind of support they have. For each type (sub-category) of superordinate misconception, I have identified information that would be relevant to the correction.

In this analysis of superordinate misconceptions, I am assuming that the user's knowledge about the superordinate concept is correct. The user therefore arrives at the misconception because of his/her incomplete understanding of the object. I am also, for the moment, ignoring misconceptions that occur because two objects have similar names. Given these restrictions, I found three major correction strategies used by human experts.
These correspond to three reasons why a user might misclassify an object:

TYPE ONE - Object Shares Many Properties with Posited Superordinate - This may cause the user wrongly to conclude that these shared attributes are inherited from the superordinate. This type of misconception is illustrated by an example involving a student and a teacher:4

U. I thought a whale was a fish.
E. No, it's a mammal. Although it has fins and lives in the water, it's a mammal since it is warm blooded and feeds its young with milk.

Notice the expert not only specifies the correct superordinate, but also gives additional information to justify the correction. She does this by acknowledging that there are some properties that whales share with fish which may lead the student to conclude that a whale is a fish. At the same time she indicates that these properties are not sufficient for inclusion in the class of fish. The whale, in fact, has other properties which define it to be a mammal. Thus, the strategy the expert uses when s/he perceives the misconception to be of TYPE ONE may be characterized as: (1) Deny the posited superordinate and indicate the correct one, (2) State attributes (properties) that the object has in common with the posited superordinate, (3) State defining attributes of the real superordinate, thus giving evidence/justification for the correct classification. The system may follow this strategy when the user model indicates that the user thinks the posited superordinate and the object are similar because they share many common properties (not held by the real superordinate).

TYPE TWO - Object Shares Properties with Another Object which is a Member of Posited Superordinate - In this case the misclassified object and the "other object" are similar because they have some other common superordinate. The properties that they share are not those inherited from the posited superordinate, but those inherited from this other common superordinate. Figure 3-1 shows a representation of this situation. OBJECT and OTHER-OBJECT have many common properties because they share a common superordinate (COMMON-SUPERORDINATE). Hence, if the user knows that OTHER-OBJECT is a member of the POSITED-SUPERORDINATE, s/he may wrongly conclude that OBJECT is also a member of POSITED-SUPERORDINATE.

Figure 3-1: TYPE TWO Superordinate Misconception

For example, imagine the following exchange taking place in a junior high school biology class (here U is a student, E a teacher):

U. I thought a tomato was a vegetable.
E. No it's a fruit. You may think it's a vegetable since you grow tomatoes in your vegetable garden along with the lettuce and green beans. However, it's a fruit because it's really the ripened ovary of a seed plant.

Here it is important for the student to understand about plants. Thus, the teacher denies the posited superordinate, vegetable, and gives the correct one, fruit. She backs this up by refuting evidence that the student may be using to support the misconception. In this case, the student may wrongly believe that tomatoes are vegetables because they are like some other objects which are vegetables, lettuce and green beans, in that all three share the common superordinate: plants grown in a vegetable garden. The teacher acknowledges this similarity but refutes the conclusion that tomatoes are vegetables by giving the property of tomatoes which defines them to be fruits.

The correction strategy used in this case was: (1) Deny the classification posited by the user and indicate the correct classification, (2) Cite the other members of the posited superordinate that the user may be either confusing with the object being discussed or making a bad analogy from, (3) Give the features which distinguish the correct and posited superordinates, thus justifying the classification. A system may follow this strategy if a structure like that in Figure 3-1 is found in the user model.

TYPE THREE - Wrong Information - The user either has been told wrong information and has not done any reasoning to justify it, or has misclassified the object in response to some complex reasoning process that the system can't duplicate. In this kind of situation, the system, just like a human expert, can only correct the wrong information, give the corresponding true information, and possibly give some defining features distinguishing the posited and actual superordinates. If this correction does not satisfy the user, it is up to him/her to continue the interaction until the underlying misconception is cleared up (see [Jefferson 72]). The information included in this kind of response is similar to that which McKeown's TEXT system, which answers questions about database structure [McKeown 82], would include if the user had asked about the difference between two entities. In her case, the information included would depend on how similar the two objects were according to the system KB, not on a model of what the user knows or why the user might be asking the question.5

U. Is a debenture a secured bond?
S. No it's an unsecured bond - it has nothing backing it should the issuing company default.

AND

U. Is the whiskey a missile?
S. No, it's a submarine which is an underwater vehicle (not a destructive device).

The strategy followed in these cases can be characterized as: (1) Deny posited superordinate and give correct one, (2) Give additional information as needed. This extra information may include defining features of the correct superordinate or information about the highest superordinate that distinguishes the object from the posited superordinate. This strategy may be followed by the system when there is insufficient evidence in the user model for concluding that either a TYPE ONE or a TYPE TWO misconception has occurred.

4 Although the analysis given here was derived through studying actual human interactions, the examples given are simply illustrative and have not been extracted from a real interaction.

5 McKeown does indicate that this kind of information would improve her responses. The major thrust of her work was on text structure; the use of a user model could be easily integrated into her framework.
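The three types can be pictured as a small decision procedure over the system's KB and user model. The sketch below is an illustrative assumption, not McCoy's implementation: the KB is reduced to a dict of objects with superordinate links and property sets, and the structural tests for TYPE ONE and TYPE TWO follow the descriptions above.

    # Toy KB: each object has a superordinate and a set of properties.
    kb = {
        "whale":  {"super": "mammal", "props": {"fins", "lives-in-water", "warm-blooded"}},
        "trout":  {"super": "fish",   "props": {"fins", "lives-in-water", "gills"}},
        "fish":   {"super": "animal", "props": {"fins", "lives-in-water", "gills"}},
        "mammal": {"super": "animal", "props": {"warm-blooded", "feeds-young-milk"}},
        "animal": {"super": None,     "props": set()},
    }

    def classify_superordinate_misconception(obj, posited, user_knows):
        """Pick a correction strategy for a wrongly believed 'obj is a posited'."""
        shared = kb[obj]["props"] & kb[posited]["props"]
        if shared:
            # TYPE ONE: the object itself shares properties with the posited super.
            return "TYPE ONE", shared
        for other in user_knows:
            # TYPE TWO: the object resembles another known member of the posited super.
            if kb[other]["super"] == posited and kb[obj]["props"] & kb[other]["props"]:
                return "TYPE TWO", other
        # TYPE THREE: no structural support found in the user model.
        return "TYPE THREE", None

    print(classify_superordinate_misconception("whale", "fish", user_knows=["trout"]))
    # ('TYPE ONE', {'fins', 'lives-in-water'})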
4. Attribute Misconceptions

A second class of misconception occurs when a person wrongly attributes a property to an object. There are at least three reasons why this kind of misconception may occur.

TYPE ONE - Wrong Object - The user is either confusing the object being discussed with another object that has the specified property, or s/he is making a bad analogy using a similar object. In either case the second object should be included in the correction so the problem does not continue. In the following example the expert assumes the user is confusing the object with a similar object.

U. I have my money in a money market certificate so I can get to it right away.
E. But you can't! Your money is tied up in a certificate - do you mean a money market fund?

The strategy followed in this situation can be characterized as: (1) Deny the wrong information. (2) Give the corresponding correct information. (3) Mention the object of confusion or possible analogical reasoning.

This strategy can be followed by a system when there is another object which is "close in concept" to the object being discussed and which has the property involved in the misconception. Of course, the perception of how "close in concept" two objects are changes with context. This may be because some attributes are highlighted in some contexts and hidden in others. For this reason it is anticipated that a closeness measure such as that described in [Tversky 77], which takes into account the salience of various attributes, will be useful; a sketch of such a measure follows this section.

TYPE TWO - Wrong Attribute - The user has confused the attribute being discussed with another attribute. In this case the correct attribute should be included in the response along with additional information concerning the confused attributes (e.g., their similarities and differences). In the following example the similarity of the two attributes, in this case a common function, is mentioned in the response:

U. Where are the gills on the whale?
S. Whales don't have gills, they breathe through lungs.

The strategy followed was: (1) Deny attribute given, (2) Give correct attribute, (3) Bring in similarities/differences of the attributes which may have led to the confusion. A system may follow this strategy when a similar attribute can be found.

There may be some difficulty in distinguishing between a TYPE ONE and a TYPE TWO attribute misconception. In some situations the user model alone will not be enough to distinguish the two cases. The use of past immediate focus (see [Sidner 83]) looks to be promising in this case. Heuristics are currently being worked out for determining the most likely misconception type based on what kinds of things (e.g., sets of attributes or objects) have been focused on in the recent past.

TYPE THREE - The user was simply given bad information or has done some complicated reasoning which can not be duplicated by the system. Just as in the TYPE THREE superordinate misconception, the system can only respond in a limited way.

U. I am not working now and my husband has opened a spousal IRA for us. I understand that if I start working again, and want to contribute to my own IRA, that we will have to pay a penalty on anything that had been in our spousal account.
E. No - There is no penalty. You can split that spousal one any way you wish. You can have 2000 in each.

Here the strategy is: (1) Deny attribute given, (2) Give correct attribute. This strategy can be followed by the system when there is not enough evidence in the user model to conclude that either a TYPE ONE or a TYPE TWO attribute misconception has occurred.
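A minimal sketch of a Tversky-style contrast measure, as one way the "close in concept" test above might be realized; the feature sets, salience weights, and parameter values are illustrative assumptions.

    def tversky(a, b, weight, alpha=0.5, beta=0.5):
        """Contrast model: common features raise similarity, distinctive ones lower it."""
        common = sum(weight[f] for f in a & b)
        a_only = sum(weight[f] for f in a - b)
        b_only = sum(weight[f] for f in b - a)
        return common - alpha * a_only - beta * b_only

    # Salience weights can shift with context, changing which objects count as "close".
    weight = {"fund": 1.0, "certificate": 1.0, "money-market-rate": 2.0, "liquid": 1.5}

    mmc = {"certificate", "money-market-rate"}       # money market certificate
    mmf = {"fund", "money-market-rate", "liquid"}    # money market fund

    print(tversky(mmc, mmf, weight))   # 2.0 - 0.5*1.0 - 0.5*2.5 = 0.25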
5. Conclusions

In this paper I have argued that any Natural-Language system that allows the user to engage in extended dialogues must be prepared to handle misconceptions. Through studying various transcripts of how people correct misconceptions, I found that they not only correct the wrong information, but often include additional information to convince the user of the correction and/or refute the reasoning that may have led to the misconception. This paper describes a framework for allowing a computer system to mimic this behavior.

The approach taken here is first to classify object-related misconceptions according to the KB feature involved. For each resulting class, sub-types are identified in terms of the structure of a KB rather than its content. The sub-types characterize the kind of information that may support the misconception. A correction strategy is associated with each sub-type that indicates what kind of information to include in the response. Finally, algorithms are being developed for identifying the type of a particular misconception based on a user model and a model of the discourse situation.

6. Acknowledgements

I would like to thank Julia Hirschberg, Aravind Joshi, Martha Pollack, and Bonnie Webber for their many helpful comments concerning this work.

7. References

[Brown & Burton 78] Brown, J.S. and Burton, R.R. Diagnostic Models for Procedural Bugs in Basic Mathematical Skills. Cognitive Science 2(2):155-192, 1978.

[Grice 75] Grice, H. P. Logic and Conversation. In P. Cole and J. L. Morgan (editors), Syntax and Semantics III: Speech Acts, pages 41-58. Academic Press, N.Y., 1975.

[Jefferson 72] Jefferson, G. Side Sequences. In David Sudnow (editor), Studies in Social Interaction. Macmillan, New York, 1972.

[Joshi 82] Joshi, A. K. Mutual Beliefs in Question-Answer Systems. In N. Smith (editor), Mutual Beliefs. Academic Press, N.Y., 1982.

[McKeown 82] McKeown, K. Generating Natural Language Text in Response to Questions About Database Structure. PhD thesis, University of Pennsylvania, May, 1982.

[Pollack et al. 82] Pollack, M., Hirschberg, J., & Webber, B. User Participation in the Reasoning Processes of Expert Systems. In Proceedings of the 1982 National Conference on Artificial Intelligence. AAAI, Pittsburgh, Pa., August, 1982.

[Sidner 83] Sidner, C. L. Focusing in the Comprehension of Definite Anaphora. In Michael Brady and Robert Berwick (editors), Computational Models of Discourse, pages 267-330. MIT Press, Cambridge, Ma., 1983.

[Sleeman 82] Sleeman, D. Inferring (Mal) Rules From Pupil's Protocols. In Proceedings of ECAI-82, pages 160-164. ECAI-82, Orsay, France, 1982.

[Stevens & Collins 80] Stevens, A.L. and Collins, A. Multiple Conceptual Models of a Complex System. In Richard E. Snow, Pat-Anthony Federico and William E. Montague (editors), Aptitude, Learning, and Instruction, pages 177-197. Erlbaum, Hillsdale, N.J., 1980.

[Stevens et al. 79] Stevens, A., Collins, A. and Goldin, S.E. Misconceptions in Student's Understanding. Intl. J. Man-Machine Studies 11:145-156, 1979.

[Tversky 77] Tversky, A. Features of Similarity. Psychological Review 84:327-352, 1977.

[Woolf & McDonald 83] Woolf, B. and McDonald, D. Human-Computer Discourse in the Design of a PASCAL Tutor. In Ann Janda (editor), CHI'83 Conference Proceedings - Human Factors in Computing Systems, pages 230-234. ACM SIGCHI/HFS, Boston, Ma., December, 1983.
AN ALGORITHM FOR IDENTIFYING COGNATES BETWEEN RELATED LANGUAGES

Jacques B.M. Guy
Linguistics Department (RSPacS)
Australian National University
GPO Box 4, Canberra 2601
AUSTRALIA

ABSTRACT

The algorithm takes as only input a list of words, preferably but not necessarily in phonemic transcription, in any two putatively related languages, and sorts it into decreasing order of probable cognation. The processing of a 250-item bilingual list takes about five seconds of CPU time on a DEC KL1091, and requires 56 pages of core memory. The algorithm is given no information whatsoever about the phonemic transcription used, and even though cognate identification is carried out on the basis of a context-free one-for-one matching of individual characters, its cognation decisions are bettered by a trained linguist using more information only in cases of wordlists sharing less than 40% cognates and involving complex, multiple sound correspondences.

I FUNDAMENTAL PROCEDURES

A. Identifying Sound Correspondences

Consider the following wordlist from two hypothetical Austronesian-like languages:

                  Titia   Sese
    "eye"         mata    nas
    "sea"         tasi    sah
    "father"      tama    san
    "mother"      mama    nan
    "tongue"      mimi    nen
    "shellfish"   sisi    heh
    "bad"         sati    has
    "to stand"    ti      se
    "to come"     ma      na
    "with"        mi      ne
    "not"         sa      ha

Take the first word pair, mata/nas. We have no information about the phonetic values of their constituent characters, and we do not know whether the same system of transcription was used in both wordlists: for all we know "a" might denote a high back rounded vowel in Titia and a uvular trill in Sese. The only assumption allowed is that in each word list the same characters represent, more or less, the same sounds. Under this assumption, the possibility that any one character of a member of a word pair may correspond to any character of the other member cannot be discarded. Thus in the pair mata/nas Titia "m" may correspond to Sese "n", "a", or "s", and so may Titia "a", "t", and "a". We summarize the evidence for these possible correspondences in a TxS matrix, where T is the number of different characters found in the Titia wordlist, S that in the Sese wordlist. Thus the evidence afforded by the first pair, mata/nas:

          a   e   h   n   s   Sums of rows
    a     2   0   0   2   2       6
    i     0   0   0   0   0       0
    m     1   0   0   1   1       3
    s     0   0   0   0   0       0
    t     1   0   0   1   1       3
    Sums of columns
          4   0   0   4   4      12

And by all 11 pairs:

          a   e   h   n   s   Sums of rows
    a    10   0   3   9   6      28
    i     2   6   6   5   3      22
    m     5   3   0  12   2      22
    s     3   2   7   0   2      14
    t     4   1   2   2   5      14
    Sums of columns
         24  12  18  28  18     100

Matrix A (observed frequencies)

If character correspondences between the Titia and Sese word pairs were random, the expected frequency e[i,j] of recorded possible correspondences between the ith character of the Titia alphabet and the jth of the Sese alphabet would be:

    e[i,j] = (sum of ith row x sum of jth column) / sum of cells

giving a matrix of expected frequencies of possible sound correspondences:

          a      e      h      n      s     Sums of rows
    a    6.72   3.36   5.04   7.84   5.04      28
    i    5.28   2.64   3.96   6.16   3.96      22
    m    5.28   2.64   3.96   6.16   3.96      22
    s    3.36   1.68   2.52   3.92   2.52      14
    t    3.36   1.68   2.52   3.92   2.52      14
    Sums of columns
        24     12     18     28     18        100

Matrix B (expected frequencies)

Note how the five character correspondences with the greatest differences between observed and expected frequencies give the simple substitution code used for generating Sese words from pseudo-Austronesian Titia:

    Titia   Sese   Observed - Expected
    m       n          5.84
    s       h          4.48
    i       e          3.36
    a       a          3.28
    t       s          2.48
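As a minimal sketch of this tabulation (an illustration, not Guy's Simula program; the variable names are assumptions), the observed counts, expected frequencies, and observed-minus-expected weights for the wordlist above can be computed as follows.

    from collections import Counter

    pairs = [("mata", "nas"), ("tasi", "sah"), ("tama", "san"), ("mama", "nan"),
             ("mimi", "nen"), ("sisi", "heh"), ("sati", "has"), ("ti", "se"),
             ("ma", "na"), ("mi", "ne"), ("sa", "ha")]

    # Observed: every character of one word may correspond to every character
    # of the other, so each pair of positions is counted once.
    observed = Counter()
    for t_word, s_word in pairs:
        for t_char in t_word:
            for s_char in s_word:
                observed[t_char, s_char] += 1

    total = sum(observed.values())
    row, col = Counter(), Counter()
    for (t_char, s_char), n in observed.items():
        row[t_char] += n
        col[s_char] += n

    # Weight = observed - expected, the first (primitive) definition.
    def weight(t_char, s_char):
        expected = row[t_char] * col[s_char] / total
        return observed[t_char, s_char] - expected

    print(round(weight("m", "n"), 2))   # 5.84, as in Matrix A/B above
    print(round(weight("s", "h"), 2))   # 4.48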
B. Identifying Null Correspondences

Call the difference between the observed and the expected frequency of a character correspondence its weight (a much less primitive definition of weight is used in the actual implementation). Take the first word pair (mata/nas) and enter into a 4x3 matrix W the weights of its 12 possible character correspondences:

          n       a       s
    m    5.84   -0.28   -1.96
    a    1.16    3.28    0.96
    t   -1.92    0.64    2.48
    a    1.16    3.28    0.96

Matrix W (weights)

Call potential of a character correspondence the sum of its weight and of the highest potential of all possible character correspondences to its right, i.e.

    Pot(i,j) = W[i,j] + max(Pot(i+1..m, j+1..n))

giving the matrix of potentials P for word pair mata/nas:

          n       a       s
    m   11.60    2.20   -1.96
    a    4.44    5.76    0.96
    t    1.36    1.60    2.48
    a    1.16    3.28    0.96

Matrix P (potentials)

The character correspondence with the highest potential is here m/n (P[1,1]=11.60). Of its possible successors, that with the highest potential is a/a (P[2,2]=5.76), itself followed by t/s (P[3,3]=2.48), which has no possible successor. Thus we have:

    Titia   Sese   Potential
    m       n       11.60
    a       a        5.76
    t       s        2.48
    a       zero

The same procedure applied to the rest of the wordlist gives the proper matches, Titia finals in polysyllabic words having been deleted when deriving the corresponding Sese words.

C. A Relative Measure of Cognation

Call index of cognation the maximum potential of a word pair divided by its number of correspondences, including null correspondences. Thus in the fictitious case of Titia and Sese the index of cognation of the pair mata/nas is 2.9 (its maximum potential, 11.60, divided by the number of correspondences, 4). Word pairs with high cognation indices are found to be more often genetically related than pairs with low cognation indices.

II CURRENT IMPLEMENTATION

A. Weights

The difference between observed and expected frequencies does not provide a satisfactory measurement of the weight of a possible character correspondence. Several alternative measurements were tested, out of which standardized scores were retained: the weight of a character correspondence was redefined as the probability of the discrepancy between its observed and expected frequencies of occurrence not being due to chance, expressed as a z score. Where absolute frequencies of 20 and less are involved the exact probability is calculated and translated into a z score using a polynomial approximation (Abramowitz and Stegun 1970).

B. Vowel/Consonant Correspondences

Disallowing correspondences between vowels and consonants vastly improved the performance of the algorithm. No human intervention is needed to identify vowels from consonants, an improved version of an algorithm described in Suhotin 1962 being used to identify characters which represent vowel sounds. Whether consonants should be allowed to correspond to vowels is left as an option in the current implementation.

C. Iterations

Performance is again improved when word pairs showing individual character matches as computed from matrices of potentials (section I.B above) are reprocessed. The weights of possible character correspondences are recomputed. This time, however, only characters in the same positions in the two words are scored as possible correspondences. Thus for instance, the first pass of the algorithm having matched the "m" of "mata" to the "n" of "nas", Titia "m" is scored in the second pass as corresponding possibly only to Sese "n".
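The potential recursion of section I.B and the traceback it induces fit in a few lines. The following sketch is illustrative, not the Simula 67 original; it hard-codes the matrix W for mata/nas, and the capped z-score weights just described would simply replace that weight function.

    def potentials(t_word, s_word, weight):
        """P[i][j] = W[i][j] plus the highest potential strictly below and right."""
        m, n = len(t_word), len(s_word)
        W = [[weight(a, b) for b in s_word] for a in t_word]
        P = [row[:] for row in W]
        for i in range(m - 1, -1, -1):
            for j in range(n - 1, -1, -1):
                succ = [P[x][y] for x in range(i + 1, m) for y in range(j + 1, n)]
                if succ:
                    P[i][j] = W[i][j] + max(succ)
        return P

    def best_matches(t_word, s_word, weight):
        """Trace the chain of correspondences with the highest potential,
        then divide by the number of correspondences, nulls included."""
        P = potentials(t_word, s_word, weight)
        m, n = len(t_word), len(s_word)
        top, (i, j) = max((P[x][y], (x, y)) for x in range(m) for y in range(n))
        chain = [(t_word[i], s_word[j])]
        while True:
            succ = [(P[x][y], (x, y)) for x in range(i + 1, m) for y in range(j + 1, n)]
            if not succ:
                break
            _, (i, j) = max(succ)
            chain.append((t_word[i], s_word[j]))
        index = top / (m + n - len(chain))   # unmatched characters are null correspondences
        return chain, index

    # Matrix W for mata/nas from section I.B.
    W_VALUES = {("m", "n"): 5.84, ("m", "a"): -0.28, ("m", "s"): -1.96,
                ("a", "n"): 1.16, ("a", "a"): 3.28, ("a", "s"): 0.96,
                ("t", "n"): -1.92, ("t", "a"): 0.64, ("t", "s"): 2.48}
    chain, index = best_matches("mata", "nas", lambda a, b: W_VALUES[(a, b)])
    print(chain, round(index, 1))   # [('m', 'n'), ('a', 'a'), ('t', 's')] 2.9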
Sequences of alternate null correspondences are collapsed so as not to preclude the identification of correspondences which might have been missed in the first pass, e.g. a pair mat/mot matched in the first pass as

    m      m
    zero   o
    a      zero
    t      t

is reinput in the second pass as

    m   m
    a   o
    t   t

Weights of possible character correspondences having thus been recomputed, a new matrix of potentials and a new cognation index is computed for each word pair. Further iterations were found to yield negligible improvements to the results obtained.

D. Improved Weights and Cognation Indices

Frequent character correspondences often yield very high z scores (up to 109.0). The presence of even one such high score in a word pair often invalidates the character-matching procedure. A number of alternative alterations to the definition of weight were tried, out of which the simplest proved best: weights beyond an arbitrary value are set to that value. Practice showed a maximum value of 3.0 to 4.0 to give the best results. This is not surprising, since there is no significant difference in the degrees of certainty corresponding to z scores of 4 and beyond.

The last improvement in the performance of the algorithm to date was brought by a redefinition of the cognation index. Once the individual character matches of a word pair have been identified from its matrix of potentials, their weights are adjusted as follows: 1) Positive weights less than 1.28 (corresponding to a 90% significance level) are set to zero; negative weights and weights greater than 1.28 are left unchanged. 2) Positive weights of character-to-zero matches are set to zero; negative weights are left unchanged. The cognation index is then defined as the sum of the adjusted weights divided by the number of matches, e.g. (an actual example from two languages of Vanuatu):

                    Weight
             Original   Adjusted
    x  zero   -0.64      -0.64
    a  a       3.98       3.98
    h  ŋ       1.06       0.00
    a  zero    2.12       0.00
    t  ŋ       3.12       3.12
    i  i       2.86       2.86
    a  zero    2.12       0.00
                          ----
                          9.32

    Cognation index: 9.32/8 = 1.165

III PERFORMANCE OF THE ALGORITHM

The algorithm as described has been implemented in Simula 67 on a DEC KL1091 and applied to a corpus of some 300 words in 75 languages and dialects of Vanuatu. Results are excellent for languages sharing 40% or more cognates, even when sound correspondences are complex. They deteriorate rapidly when lesser proportions of cognates and complex sound correspondences are involved, but remain excellent when mainly one-to-one correspondences are present. Thus for instance Sakao and Tolomako (Espiritu Santo, Vanuatu) were given as sharing 38.91% cognates (cut-off cognation index: 1.28), as against a human estimate of 41% backed by a full knowledge of their diachronic phonologies and comparisons with other related languages. Out of the 50 word pairs with the highest cognation indices only two (the 38th and the 45th) were definitely not cognate and one (the 36th) doubtful. Yet, Sakao has undergone extremely complex phonological changes, viz.:

                 Tolomako      Sakao
    "eye"        nata          m6a
    "throat"     tsalo         rlo
    "banana"     ~etali        i~l
    "to blow"    su~i          hy
    "nine"       linaratati    l~ner~p£~

IV FURTHER IMPROVEMENTS

The identification of environment-conditioned phonological correspondences is the next, most obvious stage in further improving the algorithm. This problem has of course been, and is being, investigated. Difficulties arise from the fact that frequencies of possible correspondences in any given environment become too low to be handled by statistical tests.
Other approaches -- inspired from chess-playing programs -- have been tried, but have proved too expensive in computer time so far.

A further, much desirable, improvement is the identification of rules of metathesis. The solution to this problem appears to be subordinated to that of the discovery of context-sensitive rules.

V PURPOSE OF THE ALGORITHM

A bilingual wordlist is conceptually equivalent to a bilingual text: words of a list to sentences of a text, phonemes of a word to morphemes of a sentence, cognate pairs to segments of the same meaning, non-cognates to segments of different meanings, and the algorithm described is the present state of an attempted solution to the much more general following problem: given two texts of approximately equal lengths in two different languages, determine whether one is the translation of the other -- or both translations of a text in a third language -- wholly or in parts, and if so, establish the rules for translating one into the other.

VI REFERENCES

Abramowitz, Milton and Irene A. Stegun. Handbook of Mathematical Functions. National Bureau of Standards, 1970.

Suhotin, P.V. Eksperimental'noe vydelenie klassov bukv s pomoshchju elektronnoj vychislitel'noj mashiny. Problemy strukturnoj lingvistiki. Moscow, 1962.
From HOPE en l'ESPERANCE
On the Role of Computational Neurolinguistics in Cross-Language Studies1

Helen M. Gigley
Department of Computer Science
University of New Hampshire
Durham, NH 03824

ABSTRACT

Computational neurolinguistics (CN) is an approach to computational linguistics which includes neurally-motivated constraints in the design of models of natural language processing. Furthermore, the knowledge representations included in such models must be supported with documented behavioral evidence, normal and pathological. This paper will discuss the contribution of CN models to the understanding of linguistic "competence" within recent research efforts to adapt HOPE (Gigley 1981; 1982a; 1982b; 1982c; 1983a), an implemented CN model for "understanding" English, to l'ESPERANCE, one which "understands" French.

1. INTRODUCTION

Computational Neurolinguistics (CN) incorporates initial assumptions about language processing that are often indirectly referenced in other computational approaches to language study. These assumptions focus on neural-like computational mechanisms (Ballard 1982; Feldman 1981; Gigley, 1982a; 1982b; 1983a; McClelland and Rumelhart, 1981) which subserve language behavior (Lavorel and Gigley, 1983). Furthermore, CN approaches to different aspects of language processing include extensive use of behavioral data.

Research exists within the CN paradigm along various behaviorally defined dimensions. These are at the level of phonetic speech studies that simulate speech errors (Lecours and Lhermitte, 1969; Reggia and Sanjeev, 1984), a model of aphasic language production, JARGONAUT (Lavorel, 1982), as well as within lesionable models at a neural network level. These latter models simulate association, discrimination, and recognition of patterns employing associative network models that have been tuned or have adaptively learned to relate certain discriminations (Gordon, 1982; Wood, 1978; 1980).

1 The research described in this paper was supported by an NIH-CNRS research exchange grant entitled "Computational Neurolinguistics" and was undertaken at Laboratoire de Neuropsychologie Expérimentale, INSERM-Unité 94, BRON, France.

There is much philosophical and linguistic discussion of the nature of the representations that exist in humans and form the basis of our cognitive function. We will not present the debate here, but instead will claim that the CN models we build include the assumption that the internal representations of concepts, words, and phonemes are given by the overall activation state of the "network" representation within the system at a moment in time. Furthermore, this means that unless activations are interpreted externally (in our case by labels so that we can talk about them), they in and of themselves reflect the "mental" representation.

To this end, CN models present time-synchronized snapshots of an interactive, parallel, distributed process that are interpreted to represent hierarchies of linguistic knowledge that can be distinguished during processing, such as a recognized word, a grammatical interaction, or even a disambiguated meaning.
Before turning to our efforts to adapt a working implementation within the CN paradigm, HOPE, into one that can process French with equal facility, l'ESPERANCE, we will present necessary background to illustrate why focusing on the "process" of language, as it can exist, based on our current understanding of brain function, contributes significantly to our increased understanding of representations which have been defined within linguistics, psycholinguistics, neurolinguistics, and AI approaches to language study.

2. FOCUS ON PROCESS

In developing CN models, the claim is that by focusing on process independently from representation, we gain several perspectives that are unattainable from other more usual approaches. CN models include processing which is neurally plausible. Language is seen as the behavioral result of an interactive, time-dependent process. This frees us from pre-specifying either all "correct" linguistic possibilities for constraint satisfaction at all levels of representation, or all possible errors or recognized omissions as in more flexible approaches (Hayes and Mouradian, 1981; Kwasny and Sondheimer, 1981; Lehnert, Dyer, Johnson, Yong, and Hurley, 1983; Weischedel and Black, 1980).

We utilize what has been discovered by these other approaches to be the most likely, most plausible set of relevant features to tune our "normal" model. Through interconnections at a metalinguistic level, between recognized phonetic word representations, grammatical aspects of meaning, and specific referential meaning for disambiguated words, CN models must tune the process so that asynchronously activated instantiations at these interpretable levels which result from local contextual recognition achieve the same behavioral results that are defined within different methodologies. In other words, we use the AI preconditions or ATN states with as much corroboration from psychological and linguistic studies as is available to tune our models for "normal" processing.

This provides an extremely valuable means of studying processing effects in neurally motivated "lesion" states that are consistent within our system, and completely defined over our model of study in a mathematical sense. This has been discussed in detail elsewhere in Gigley (1982b; 1983a; 1983b) and Gigley and Duffy (1982) and will not be repeated here.

3. PROCESSING ASSUMPTIONS IN HOPE

HOPE is not an acronym but was chosen as the name of the system based on the legend of Pandora's box. While raising many questions of language within a new computational perspective, it provides a first attempt to answer them as well. The system presents an initial attempt to integrate AI and brain theory, BT, on two levels, behaviorally and within processing. HOPE uses concepts from cellular neurophysiology to define its control. Information in HOPE is encoded in a hierarchical graph which permits extensive ambiguity.

For complete detail of the model with examples in "normal" and "lesioned" states the interested reader is referred to Gigley (1982a; 1982b; 1983a). We will only highlight the processing here.

HOPE stresses the process of natural language by incorporating a neurally plausible control that is internal to the processing mechanism. There is no external evaluation made to decide what happens next. At each process time interval, there are six types of serial-order process that can occur and affect the state of the process.
The most important aspect of the control is that all of the serial order computations can occur simultaneously and affect any information that has been defined in the model. Similar control philosophies have been employed in letter perception by McClelland and Rumelhart (1981), and in the connectionist theories applied to visual processing and language parsing (Ballard, 1982; Cottrell, 1983; Feldman, 1982; Small, Cottrell, and Shastri, 1982).

The major difference in the control in HOPE is that the control process can be "lesioned" by modifying parameter settings relative to their "normal" settings to define hypothesized causes of pathological language behavior. Example "lesions" are changes in memory decay, elimination of a knowledge type, and slowing of processing relative to on-line word recognition. Studying the results of such "lesions" and their occurrence or not in pathological behavior is used to further understanding of the behavior and to suggest evolutionary changes in the model to better its approximation to language process.

Information is presented at a phonological level as phonetic representations of words, at a word meaning level as multiple pairs of designed syntactic category types and orthographic spelling associates, within grammar, and as a pragmatic interpretation. Each piece of information is a thresholding device with memory. It has an activity value, initially at a resting state, that is modified over time depending on the input, interconnections to other information, and an automatic activity decay scheme. In addition, the decay scheme is based on the state of the information, whether it has reached threshold and fired or not.

Activity is propagated in a fixed sense to all aspects of the meaning of words that are "connected" by spreading activation (Collins and Loftus, 1975; Quillian, 1980/73; Small, Cottrell, and Shastri, 1982; Cottrell, 1983). Simultaneously, information interacts asynchronously due to threshold firing. This is achieved by the time coordination of asynchronously encoded serial order processes. The serial-order processes that occur at any moment of the process are context dependent; they depend on the "current state" of the system. The serial order processes include:

1. NEW-WORD-RECOGNITION: Introduction of the next phonetically recognized word in the sentence.
2. DECAY: Automatic memory decay reduces the activity of all active information that does not receive additional input. It is an important part of the neural processes which occur during memory access.
3. REFRACTORY-STATE-ACTIVATION: An automatic change of state that occurs after active information has reached threshold and fired. In this state the information can not affect or be affected by other information in the system.
4. POST-REFRACTORY-STATE-ACTIVATION: An automatic change of state which all fired information enters after it has existed in the REFRACTORY-STATE. The decay rate is different than before firing.
5. MEANING-PROPAGATION: Fixed-time spreading activation to the distributed parts of recognized words' meanings.
6. FIRING-INFORMATION-PROPAGATION: Asynchronous activation propagation that occurs when information reaches threshold and fires. It can be INHIBITORY and EXCITATORY in its effect. INTERPRETATION is a result of activation of a pragmatic representation of a disambiguated word meaning.

It is in the interaction of the results of these asynchronous processes that the process of comprehension is defined.
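A minimal sketch of the kind of thresholding device with memory just described. The numeric parameters (threshold, decay rates, refractory period) are illustrative assumptions, not HOPE's actual settings; the sketch covers roughly processes 2-4 above for a single piece of information.

    class Unit:
        """A piece of information: activity with decay, threshold firing,
        and a refractory period during which it neither affects nor is affected."""
        THRESHOLD, REST = 1.0, 0.0

        def __init__(self, decay=0.8, post_fire_decay=0.5, refractory=1):
            self.activity = self.REST
            self.decay, self.post_fire_decay = decay, post_fire_decay
            self.refractory_left = 0
            self.fired = False

        def receive(self, amount):
            if self.refractory_left == 0:        # refractory units ignore input
                self.activity += amount

        def step(self):
            """One process time interval: decay, then fire if at threshold."""
            if self.refractory_left > 0:
                self.refractory_left -= 1
                return False
            rate = self.post_fire_decay if self.fired else self.decay
            self.activity *= rate                # automatic memory decay
            if self.activity >= self.THRESHOLD:
                self.fired = True
                self.refractory_left = self.refractory
                return True                      # caller propagates +/- activation
            return False

    u = Unit()
    u.receive(0.9); print(u.step())   # False: decays to 0.72, below threshold
    u.receive(0.7); print(u.step())   # True: 1.42 decays to 1.136, fires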
The processes are independent of the knowledge representations defined and are blindly applied across all of them. This often produces unexpected but humanly interpretable results when the end state is compared with suitably defined behavioral test results.

During processing, we can study both the change in state that results over time and "how" the change occurred. Analyzing both aspects of the process is the focus of comparison between "normal" and "lesion" performance of the model. In this way we are able to study the effect of the "lesion" in a well defined linguistic context, and to make behavioral predictions that can be verified (Gigley, 1982b; 1983a; 1983b; Gigley and Duffy, 1982).

4. FROM HOPE en l'ESPERANCE

Given that CN approaches to natural language processing assume a neural-like control paradigm, it is possible to assume that such a paradigm will work equally well for other natural languages by simply recoding the representations into the second language surface representation, grammar, and semantic structure. We assume that the processes can be tuned to produce "normal" results as they have been for the simple English fragment demonstrated to date.

As a first attempt to determine if such a cross-linguistic adaptation is possible, we have begun to redefine the knowledge representations to encode suitable representations of French, homologous to those that HOPE includes in its present level of implementation. The beginnings of the adaptation raised questions about language representation from a different perspective than occurs within a strictly linguistic analysis. The remainder of the paper focuses on our initial work in the adaptation (Gigley, 1984). As the research is currently underway, the discussion will raise several unanswered questions in pointing out the value of applying a CN methodology to cross-linguistic study.

In explaining the representation issues for French, we will first briefly provide background in current linguistic research on French. This will include an overview of recent relevant psycholinguistic and neurolinguistic studies in French. Then we will present an overview of computational natural language systems for speech recognition, comprehension, and automatic translation into French. One issue, how to chunk French into a phonetic representation of words, along with the implications of the determined representation for our processing approach to comprehension of French, will form the basis of the discussion.
"Le, la, les" do not always mean "the" in the definite sense, but are often generic and mark masculine, feminine, or plural (Gross, 1977; Goffic and McBride, 1975). And furthermore, these same articles often are not translated into meaning preserving sentences in English. An example sentence demonstrating this is: Ce singe aime le cafe. (This monkey likes coffee.) The degradation of these same morphemes has also been associated with certain types of aphasic behavior in English speaking patients, speci- fically in agrammatics and Broca's aphasics. French neurolinguistic studies have documented a similar degradation in the ability of agrammatic and Broca's aphasics (LeCours and Lhermitte, 1969; Nespoulos, 1973; 1981; Segui, Mehler, Frauen- felder, and Morton, 1982; Tissot, Mounin, and Lhermitte, 1973). However, only the quantity of degradation is reported. The studies discuss performance in general and have not specifically addressed to what extent and in what ways these morphemes are affected as do some of the English studies (Zurif and Blumstein, 1978; Zurif, Green, Caramazza and Goodenough, 1976). Because of the import of articles in language processing, as briefly mentioned, how they are represented is of great interest when one wants to 454 use the adapted model, I'ESPERANCE, in its "le- sioned" state to study the linguistic results. Finally, to further illustrate the problems encountered in determining the phonetic repre- sentation, examples of the implications of de- ciding to represent the word for water, "eau," will be used. These implications are relevant to automatic speech recognition as well. The French equivalent for "some water" is "de l'eau" which includes the generic article, le, in an elision context. Water is spoken as l'eau even though there is another article as above. The question becomes should the phonetic representa- tion be defined as "l'eau" or as the content word in isolation, "eau?" The decision affects the homophone set association and will affect the entire across-time processing in any defined model. Current descriptions of research in automatic speech recognition for French (Pierrel, 1982; Quinton, 1982) provide no relevant information. The MYRTILLE II system described by Pierrel (1982) stresses use of linguistic knowledge and includes phonological substitutions for the same word. The system includes alternatives for words at their junction with other words in different phono- logical contexts. The system described by Quinton (1982), on the other hand, is very HEARSAY-like and does not specifically address how these mor- phemes are handled. Finally, the automatic translation work for French was consulted to see if there were any r~levant discussions included in the systems regarding the representations of words similar to "eau". In Ariane-78, article constraints are affixed as features to content words and elision is decided in the final stage of the production of the French sentences (Boitet and Nedobejkine, 1981). The content words are specifically marked as beginning with vowels or silent "h". The final stage of the process joins the marked content word with an appropriate article to produce output words such as l'eau. This suggests that for comprehension, one would first recognize the unit "l'eau" and decompose it to the article and con- tent word with appropriate masculine/feminine indicators (Jayez, 1982). Initial assessment of the literature with respect to this problem has provided little evi- dence. 
The role of articles has not been studied for French to the extent that it has for English. Therefore, a pilot study with French aphasics was designed to analyze whether, and in what contexts, these morphemes are affected. The study includes off-line picture naming, which forces use of articles in all of the above contexts, as well as on-line production of these morphemes, in an attempt to determine in which way these morphemes are related to the words. Are they unified with the word in all instances, or only in certain contexts?

Adapting a neurolinguistically motivated CN model for a second language can be seen to motivate a different type of question with regard to the second language than occurs when one bases the studies on English surface phenomena. This is very important because surface phenomena are often assumed to be more similar than is warranted. What we claim instead is that the processing is similar, indeed universal, and that we must begin to make cross-linguistic studies that assume this underlying commonality and at the same time can account for the variation at the surface level.

5. SUMMARY

Within developing computational neurolinguistic research, which assumes that we can define cognitively based simulation models using AI methodologies incorporated with neural processing paradigms, we have demonstrated how one can begin to study universals of language from a new perspective.

The CN paradigm for natural language processing includes claims that new perspectives on linguistically interpretable hierarchical representations that arise in language behavior are introduced by including neurally motivated processing control as the focus of model definition, and by including behaviorally defined constraints, both normal and pathological.

The issues are not whether human brains work in a universal fashion, but instead raise questions of how interpreted levels of representation, which functionally produce similar language behavior, need to be represented for different languages. This processing approach includes many assumptions which are important to linguistic theory. Furthermore, it provides a way of developing specific, verifiable questions about behavior which are mathematically better defined than through other methods, because it enables one to develop a broader perspective of the questions within an analysis of the hypothesis in the context of a characterization of the "how" of the entire behavior.

By adapting HOPE for processing French, we furthermore claim that new perspectives on language universals are demonstrated. And finally, we feel that CN provides the only suitable way to begin developing a comprehensive understanding of a behavior as complex as language.

6. REFERENCES

Boitet, Ch., and Nedobejkine, N., Recent developments in Russian-French machine translation at Grenoble. Linguistics, 19, 1981.

Ballard, D.H., Parameter Nets. Technical Report TR75, Department of Computer Science, University of Rochester, 1982.

Cottrell, G.W., A Connectionist Scheme for Modelling Word Sense Disambiguation. Cognition and Brain Theory, 6, 1, 1983.

Feldman, J.A., A Connectionist Model of Visual Memory. Parallel Models of Associative Memory, G.E. Hinton and J.A. Anderson (eds.), Lawrence Erlbaum Associates, Publishers, 1981.

Gigley, H.M., Neurolinguistically Based Modeling of Natural Language Processing. Paper presented at the Linguistic Society of America--Association for Computational Linguistics Meeting, New York, 1981.
Gigley, H.M., A Computational Neurolinguistic Approach to Processing Models of Sentence Comprehension. COINS Technical Report 82-9, Computer and Information Sciences Department, University of Massachusetts/Amherst, 1982.

Gigley, H.M., Neurolinguistically Constrained Simulation of Sentence Comprehension: Integrating Artificial Intelligence and Brain Theory. Ph.D. Dissertation, University of Massachusetts/Amherst, 1982.

Gigley, H.M., Artificial Intelligence Meets Brain Theory: An Integrated Approach to Simulation Modelling of Natural Language Processing. Proceedings of the Sixth European Meeting on Cybernetics and Systems Research, R. Trappl (ed.), North-Holland, 1982.

Gigley, H.M., HOPE -- AI and the Dynamic Process of Language Behavior. Cognition and Brain Theory, 6, 1, 1983.

Gigley, H.M., Experiments in Artificial Aphasia -- Dynamics of Language Processing. Poster session presented at the Academy of Aphasia, Minneapolis, 1983.

Gigley, H.M., From HOPE en l'Esperance, Initial Investigation. Technical Report 84-24, Department of Computer Science, University of New Hampshire, 1984.

Gigley, H.M., and Duffy, J.R., The Contribution of Clinical Intelligence and Artificial Aphasiology to Clinical Aphasiology and Artificial Intelligence. Clinical Aphasiology, Proceedings of the Conference, R.H. Brookshire (ed.), Minneapolis, 1982.

Goffic, P.L., and McBride, N.C., Les constructions fondamentales du français. Librairies Hachette et Larousse, 1975.

Gordon, B., Confrontation Naming: Computational Model and Disconnection Simulation. Neural Models of Language Processes, M.A. Arbib, D. Caplan, and J. Marshall (eds.), Academic Press, 1982.

Gross, M., Grammaire transformationnelle du français: syntaxe du nom. Larousse, Paris, 1977.

Hayes, P.J., and Mouradian, G.V., Flexible Parsing. American Journal of Computational Linguistics, 7, 4, 1981.

Jayez, J.-H., Compréhension automatique du langage naturel. Masson, Paris, 1982.

Kwasny, S.C., and Sondheimer, N.K., Relaxation Techniques for Parsing Ill-Formed Input. American Journal of Computational Linguistics, 7, 2, 1981.

Lavorel, P.M., Production Strategies: A Systems Approach to Wernicke's Aphasia. Neural Models of Language Processes, M.A. Arbib, D. Caplan, and J. Marshall (eds.), Academic Press, 1982.

Lavorel, P.M., and Gigley, H.M., Éléments pour une théorie générale des machines intelligentes. Intellectica, Bulletin of the Association pour la Recherche Cognitive, 7, Orsay, France, 1983.

Lecours, A.R., and Lhermitte, F., Phonemic Paraphasias: Linguistic Structures and Tentative Hypotheses. CORTEX, 5, 1969.

Lehnert, W.G., Dyer, M.G., Johnson, P.N., Yong, C.J., and Harley, S., BORIS -- An Experiment in In-Depth Understanding of Narratives. Artificial Intelligence, 20, 1983.

McClelland, J.L., and Rumelhart, D.E., An Interactive Activation Model of Context Effects in Letter Perception: Part I. An Account of Basic Findings. Psychological Review, 88, 5, 1981.

Nespoulous, J.-L., Approche linguistique de divers phénomènes d'agrammatisme. Thèse 3e cycle, Université de Toulouse-le Mirail, Flammarion Médecine-Sciences, Paris, 1973.

Quinton, P., Utilisation de contraintes syntaxiques pour la reconnaissance de la parole continue. Technique et Science Informatiques, 1, 3, 1982.

Reggia, J.A., and Sanjeev, B.A., Simulation of Phonemic Errors Using Artificial Intelligence Symbol Processing Techniques. Paper to be given at the Seventeenth Annual Simulation Symposium, 1984.
Segui, J., Mehler, J., Frauenfelder, U., and Morton, J., The Word Frequency Effect and Lexical Access. Neuropsychologia, 20, 6, 1982.

Small, S., Cottrell, G., and Shastri, L., Toward Connectionist Parsing. Proceedings of the National Conference on Artificial Intelligence, Pittsburgh, 1982.

Tissot, R., Mounin, G., and Lhermitte, F., L'agrammatisme. Étude neuropsycholinguistique. Dessart, Bruxelles, 1973.

Weischedel, R.M., and Black, J.E., If the Parse Fails. Proceedings of the 18th Annual Meeting of the Association for Computational Linguistics and Parasession on Topics in Interactive Discourse, Philadelphia, 1980.

Wood, C.C., Variations on a Theme by Lashley: Lesion Experiments on the Neural Model of Anderson, Silverstein, Ritz, and Jones. Psychological Review, 85, 6, 1978.

Wood, C.C., Interpretation of Real and Simulated Lesion Experiments. Psychological Review, 87, 5, 1980.
PANEL SESSION
MACHINE-READABLE DICTIONARIES

Donald E. Walker
Natural-Language and Knowledge-Resource Systems, SRI International, Menlo Park, California 94025, USA
and
Artificial Intelligence and Information Science Research, Bell Communications Research, 445 South Street, Morristown, New Jersey 07960, USA

Abstract

The papers in this panel consider machine-readable dictionaries from several perspectives: research in computational linguistics and computational lexicology, the development of tools for improving accessibility, the design of lexical reference systems for educational purposes, and applications of machine-readable dictionaries in information science contexts. As background and by way of introduction, a description is provided of a workshop on machine-readable dictionaries that was held at SRI International in April 1983.

Introduction

Dictionaries constitute a unique resource for a broad range of research involving natural language, information, knowledge, and the analysis of contemporary culture. Although they are often regarded as the special preserve of lexicographers and lexicologists, data contained in dictionaries have significant implications for research in linguistics, computational linguistics, artificial intelligence, information science, psychology, anthropology, sociology, philosophy, education, and probably other fields as well.

Dictionaries embody the lexicon of the language. They provide phonological, grammatical, semantic, and historical information relevant for linguists and other language specialists. They are useful adjuncts for the development of natural-language-understanding systems and natural-language-interface technology. They can provide a mechanism for processing full-text data sources and for information retrieval more generally. Dictionary data figure in psychological experiments on language and perception. Semantics and usage are reflected in ways that are factored into ethnosemantic and sociolinguistic research. Philosophical and logical inquiries build on lexical information. For education, dictionaries not only provide reference material but are a practical aid for teaching both adults and children reading and writing skills.

Dictionaries have always had these potential attributes, but they are complex structures and difficult to manipulate. Having them available in machine-readable form makes more sophisticated research in lexicology and lexicography possible, and the results of such work feed back into research in the other areas mentioned above. In addition, dictionaries can be utilized in areas like word processing and office automation, where people are currently showing considerable interest in them. A number of dictionaries have now been prepared by computer typesetting, so the tapes used to drive the photocomposer are available. However, there is a significant difference between having a dictionary in computerized form and having a database embodying its contents which can be accessed in a number of different ways.

A Workshop

Recognizing the potential of machine-readable dictionaries and, at the same time, the lack of coordination among people working in the field, Bob Amsler and I organized a Workshop on Machine-Readable Dictionaries at SRI International in April 1983. The National Science Foundation agreed to provide funds (Grant No. IST-8300940; SRI Project 5699), and we succeeded in involving 29 people from Belgium, England, West Germany, Italy, Japan, Sweden, and the United States for a period of three days.
The group included research scientists from universities and institutes, publishers, and people involved in marketing dictionary products. A number of objectives motivated convening the workshop and served as a guide to its organization and the assessment of its results:

1. Clarification of the research interests and goals of both the participants and the broader community that they represent. Included in the latter are dictionary publishers and the various classes of potential users of machine-readable dictionaries and their by-products.
2. Identification of the resources in the field: for example, dictionaries actually in machine-readable form, the people engaged in research on them, programs developed for processing dictionary data, and references to the relevant literature.
3. Examination of the problems entailed in research in this area.
4. Delineation of computational requirements for various research tasks.
5. Specification of guidelines for dictionary design, both form and content.
6. Formulation of a comprehensive plan to coordinate research efforts in the field.
7. Determination of needs and potential sources of funding for research.
8. Arrangements for future workshops or other meetings.

A volume containing a challenge paper prepared by Bob Amsler, contributions from a number of the participants, summaries of the discussions, and an extensive bibliography of work in the field is in preparation.
LEXICAL KNOWLEDGE BASES

Robert A. Amsler
Natural-Language and Knowledge-Resource Systems, SRI International, Menlo Park, California 94025, USA

A lexical knowledge base is a repository of computational information about concepts intended to be generally useful in many application areas, including computational linguistics, artificial intelligence, and information science. It contains information derived from machine-readable dictionaries, the full text of reference books, the results of statistical analyses of text usages, and data manually obtained from human world knowledge. A lexical knowledge base is not intended to serve any one application, but to be a general repository of knowledge about lexical concepts and their relationships. Thus natural-language parsers, generators, or other intelligent processors must be able to interface to the knowledge base and are expected to extract only those portions of its knowledge which they need for specific tasks. Likewise, the knowledge base is designed, built, and maintained primarily as a repository, rather than as a tool serving the needs of other computational processors. Just as with human memory, the knowledge base doesn't distinguish between 'useful' knowledge and information for which it at present doesn't have any functional use. In this manner the knowledge base is a test bed for concept representation mechanisms and data structures, rather than an adjunct to other computational processes.

Investigations of machine-readable dictionaries over the last decade have shown that they can be computationally useful for tasks such as parsing, computer-assisted instruction, speech generation, and content analysis. Sufficient knowledge of the contents of machine-readable dictionaries now exists to provide meaningful answers to questions concerning what additional information about lexical concepts will be needed to represent many aspects of human 'world knowledge.' Machine-readable dictionaries are seen as providing an index into human knowledge. A dictionary definition provides the minimal information necessary to evoke the concept it defines in the mind of a human reader who already knows to what this concept refers. It is neither intended for, nor capable of, serving as the actual 'meaning' of that concept. A lexical knowledge base is intended to provide a means of economically integrating not only dictionary definitions, but other types of lexical knowledge. The task of constructing a lexical knowledge base is seen as a goal in itself, distinct from the task of building natural language processing programs that will use that knowledge base.

Several of the components of a lexical knowledge base are already known and await assembly into one database. One component is the tangled hierarchy of concepts compiled as part of an analysis of the kernels of the definitions in a dictionary. This 'tangled' hierarchy provides ISA arcs connecting 27,000 nominal concepts and 12,000 verbal concepts derived from the Merriam-Webster Pocket Dictionary [Amsler 1980]. Another component of the lexical knowledge base has been provided by the extraction of subject codes from the Longman Dictionary of Contemporary English. Some 17,000 concepts in the Longman dictionary possess subject designations that give the domain in which these concepts are used. There is a subtle distinction between the ISA hierarchy and the subject classification that is worth mentioning.
A word such as 'crossbow' is taxonomically linked to 'weapon' in the ISA hierarchy, but appears in the subject domain 'military history.' Subjects thus do not duplicate ISA linkage information, but add another facet to conceptual understanding. There are a number of additional machine-readable dictionary properties that can of course be combined into a lexical knowledge base. Machine-readable dictionaries contain information regarding the appropriate level of usage of concepts; their geographic or chronologic associations; and semantic and syntactic restrictions on their potential arguments and combinations. In addition to this immediately available information listed for each concept in dictionary definitions, dictionaries contain much implicit information derivable from studying collections of definitions. For example, the verbs of motion can be analyzed to reveal much more about their core concept 'move' than would be seen from its definition alone.

Two major components of conceptual understanding which dictionaries fail to adequately describe are procedural knowledge and information derived from the mental inspection of visual imagery. Sources for procedural knowledge may exist in other types of special-purpose reference books, such as encyclopedias; but information derived from conceptual visual images will require special encoding to be useful for computational reasoning. Many questions of relative and absolute size, position, and orientation are not answerable from definitions. While some sizes are available from reference books, there nevertheless remain many aspects of our understanding of tangible objects which can only be answered by examination of illustrations or scenes in which the objects appear. Such illustrations are, however, an accepted part of many dictionaries and other lexical reference books. The famous 'Duden' series of pictorial dictionaries provides line drawings and illustrations of tangible objects, often collectively depicted in scenes which relate large amounts of information about their relative sizes, uses, etc. Such information will require encoding methods that bridge the gap between natural language understanding research and vision research. Other line drawings often show a series of images of human figures going through the steps of an athletic event, such as diving into a swimming pool or performing a pole vault. The information shown is chronological and spatial, giving relative locations of the performer throughout time. Capturing this pictorial information in a lexical knowledge base will be necessary for it to contain the data needed to fully understand text.

These tasks are seen as providing the basis for building lexical knowledge bases. The fundamental question governing whether new information must be added to a lexical knowledge base shall be whether natural-language understanding problems demonstrate the need for the information and it can be shown not to be inferable from existing material in the knowledge base.

[After July 1984 the author will be joining the Artificial Intelligence and Information Science Group at Bell Communications Research in Morristown, New Jersey. Funding for this paper was provided in part by NSF grants IST-8208578, IST-8200346, and IST-8300040.]
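The ISA/subject distinction is easy to render concretely. The following is a minimal sketch, not the format of Amsler's actual database; the predicate names and the 'artifact' superconcept are illustrative assumptions.

    % Taxonomic ISA arcs, as extracted from definition kernels.
    isa(crossbow, weapon).
    isa(weapon, artifact).                 % assumed higher node

    % Subject-domain code, as extracted from the Longman dictionary.
    subject(crossbow, military_history).

    % Transitive closure over the tangled hierarchy.
    kind_of(X, Y) :- isa(X, Y).
    kind_of(X, Z) :- isa(X, Y), kind_of(Y, Z).

    % ?- kind_of(crossbow, artifact).   succeeds via 'weapon'
    % ?- subject(crossbow, D).          D = military_history

The two facets answer different questions: kind_of/2 climbs the taxonomy, while subject/2 adds the orthogonal domain information that the taxonomy does not duplicate.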
MACHINE-READABLE DICTIONARIES, LEXICAL DATA BASES AND THE LEXICAL SYSTEM

Nicoletta Calzolari
Dipartimento di Linguistica, Università di Pisa, Pisa, ITALY
Istituto di Linguistica Computazionale del CNR, Pisa, ITALY

I should like to raise some issues concerning the conversion from a traditional Machine-Readable Dictionary (MRD) on tape to a Lexical Data Base (LDB), in order to highlight some important consequences for computational linguistics which can follow from this transition. The enormous potentialities of the information implicitly stored in a standard printed dictionary or an MRD can only be evidenced and made explicit when the same data are given a new logical structure in a data base model and exploited by appropriate software. A suitable use of DB methodology is a good starting point for discovering several kinds of lexical, morphological, syntactic, and semantic relationships between lexical entries which would otherwise have remained unexploited. Moreover, the transformation of a "very large-scale" MRD into an LDB provides the means of operating throughout the lexicon in a really extensive manner. I think in fact that an "almost exhaustive" approach to lexical facts is essential both for reliable investigations of a lexical system and for many kinds of linguistic applications which cannot be restricted to a particular domain of discourse.

The possibility of abstracting significant regularities from recurrent patterns of natural language definitions by means of suitable computational methods, and of reaching a formalization of a number of important structuring relations within the lexicon, will be discussed. An overview of the "associative links" already produced in the Italian LDB, and of other allowable interconnections, will be given. In a "relational" organization of a computerized dictionary with complex interlinked structures, each word acquires its meaning as a result of its position in some of the partitionings created by the formalized relations. When an entry is activated, all of its relations with other entries can be activated too. Conversely, when a relation is activated, all of its linked concepts are made immediately available. Conceptual and linguistic information at many levels is thus interactively retrievable from the LDB by following the appropriate pointers.

I shall especially take into consideration those types of relations which can be of relevance not only for "Computational Lexicology" research, but also in a more general Computational Linguistics framework. An example is provided by derivational relationships which, when formalized, give rise to families of semantically and syntactically connected entries, linked to the same base-word node, and substitutable in different syntactic formulations of the same conceptual meanings. Another example concerns case or argument relations, both (a) between lexical items, and (b) governed by lexical items. From (a) I expect to derive, from the natural language definitions, useful information on the different lexicalizations of case-slot fillers in the case-frames of typical actions. In contrast, with (b) I can establish an encoding, with each entry (and often with each word sense), of information on its surface and deep case-argument structure. The utility of the extensive inclusion of such information in an LDB which should be the input for a lexically driven parser, for machine translation, etc., is obvious.
In conclusion, it should be pointed out that an LDB must be considered to stand at the crossroads between texts and the lexical system, and in this perspective some essential properties of an LDB must be stressed. A first property is "multifunctionalism"; it is connected to the role of interfaces to the LDB. We must tend towards creating a single integrated system which, through many different interfaces, can be adopted for the whole range of possible applications, and by all the possible users, where "user" means both a human user and a computer program. Another important property is that of being "multi-perspective." This property of multiple access can create something like a constellation of sublexicons, which altogether capture the many possible structures which can be observed in the lexical system, along many dimensions of relatedness. The mediating function of an LDB between system and texts can thus be considered as the mapping of lexical structures, of many kinds, onto linear unstructured texts.
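As an illustration of the kind of "associative links" described above, here is a minimal sketch; the relation names are invented and English stand-ins replace the Italian entries, so this shows only the shape of the idea, not the actual Italian LDB.

    % Illustrative link types in a relational LDB.
    derivation(driver, drive).              % derivational family
    hyperonym(drive, move).                 % taxonomic link from definitions
    case_frame(drive, [agent, vehicle]).    % case-argument structure, as in (b)

    % Activating an entry exposes every entry linked to it by some relation.
    linked(W, V) :- derivation(W, V) ; derivation(V, W).
    linked(W, V) :- hyperonym(W, V) ; hyperonym(V, W).

    % ?- linked(drive, X).   X = driver ; X = move.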
THE DICTIONARY SERVER

Martin Kay
Intelligent Systems Laboratory, Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, California 94304, USA

The term "machine-readable dictionary" can clearly be taken in two ways. In its stronger and better established interpretation, it presumably refers to dictionaries intended for machine consumption and use, as in a language processing system of some sort. In a somewhat weaker sense, it has to do with dictionaries intended for human consumption, but through the intermediary of a machine. Ideally, of course, the two enterprises would be conflated, material from a single basic store of lexical information being furnished to different clients in different forms. Such a conflation would, if it could be accomplished, be beneficial to all parties. Certainly, human users could benefit from some of the processes that the machine-oriented information in a machine-readable dictionary usually makes available. They can profit even more from other processes specifically oriented to the human user which have not yet received much attention. For these reasons, I believe that machine-readable dictionaries should, and probably will, come to replace traditional book-form dictionaries fairly soon. I do not have in mind machine-readable dictionaries that single users load into their personal machines so much as centralized services to which individual clients subscribe.

I have spent a considerable proportion of the last two years designing and implementing a "dictionary server." This is a computer with a large dictionary in its file system, specifically the American Heritage Dictionary (for the use of which we are indebted to its publisher, Houghton-Mifflin Company), together with a variety of indices for giving rapid access to it. The machine is connected through a packet-switching network to a large number of other computers and workstations. Through a mechanism known as "remote procedure call" (RPC), developed concurrently with the dictionary server, a program running on any of these other machines can execute one of a number of procedures "exported" by the server, causing the corresponding procedure to be executed in the server and the result returned to the client as though it had all happened in the same machine.

The client reaps several benefits from this mode of operation. First, he does not have to provide storage for this very considerable body of data, nor the time necessary to operate on it. Second, by consigning its care to others, he can profit from regular maintenance and improvements resulting from experience with a large community of users. Less obvious, though perhaps more important than these advantages, is the fact that he can hope to profit from the sophisticated and specialized processing methods available at the central location.

The server I have built represents only a few steps towards the one that would provide the richness of service I can easily imagine. Among its present capabilities are the following. A client can discover whether a sequence of letters constitutes a word recognized by the dictionary even though it is presented in an inflectional variant not actually stored in the dictionary. The methods used to accomplish this have sound theoretical bases and generalize across a wide range of language types, so that languages with much richer morphological structures than English are provided for. A client can consult the dictionary for the spelling of a word by presenting it with candidate spellings.
The server is able to locate entries that could be pronounced in the same way, or ways, as the candidate. It presents these to the client in order of decreasing similarity to the candidate. If the client has difficulty recognizing the appropriate one, he can have the associated definitions presented to him. Definitions, etymologies, synonyms, and so forth can be obtained in a variety of ways. The server undertakes the most onerous procedures that must be carried out by a spelling correction program, namely those that relate putatively misspelled words to actual words into which they can be transformed as a result of the kinds of error commonly made by typists.

These facilities are relatively easy to provide, using as the data base a machine-readable version of a standard dictionary. A dictionary designed specifically for use through a dictionary server could do a great deal more. For example, it could present a client with several different perspectives on a semantic field so as to provide a means of finding "le mot juste" that is on the tip of the writer's tongue. This is the function that Roget designed his thesaurus to fill, and I believe it is such a device as the dictionary server that will provide the first possibility of doing better than he. What stands in the way of dictionary services of far greater utility than even the largest currently available books is not technological inadequacies, or even shortcomings of linguistic or lexicological theory, so much as the courage and foresight to invest in lexicographic data bases of radical design.
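Kay does not give the server's algorithms, but the candidate-spelling lookup he describes can be sketched abstractly. Everything below (the key/2 facts standing in for a computed pronunciation-class key, the example words, the predicate names) is an illustrative assumption, not the server's actual interface.

    % key(LetterSequence, PronunciationKey): in a real server the key
    % would be computed by a soundex-like procedure; here it is
    % tabulated for two forms that sound alike.
    key(fotograf,   k(f, t, g, r, f)).     % candidate spelling from client
    key(photograph, k(f, t, g, r, f)).     % dictionary headword

    entry(photograph).                      % headwords known to the dictionary

    % Entries that could be pronounced like the candidate; a fuller
    % version would order answers by decreasing spelling similarity.
    sounds_like(Candidate, Entry) :-
        key(Candidate, K),
        entry(Entry),
        key(Entry, K).

    % ?- sounds_like(fotograf, E).   E = photograph.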
HOW TO MISREAD A DICTIONARY

George A. Miller
Department of Psychology, Princeton University, Princeton, NJ 08544

A dictionary is an extremely valuable reference book, but its familiarity tends to blind adults to the high level of intelligence required to read it. This aspect becomes apparent, however, when school children are observed learning dictionary skills. Children do not respect syntactic category and often wander into the wrong lexical entry, apparently in search of something they can understand. They also find it difficult to match the sense of a polysemous word to the context of a particular passage. And they repeatedly assume that some familiar word in a definition can be substituted for the unfamiliar word it defines. The lexical information that children need can be provided better by a computer than by a book, but that remedy will not be realized if automated dictionaries are merely machine-readable versions of the standard printed dictionaries. Some suggestions for computer-based lexical reference systems will be offered.
MACHINE-READABLE COMPONENTS IN A VARIETY OF INFORMATION-SYSTEM APPLICATIONS

Howard R. Webber
Reference Publishing Division, Houghton-Mifflin Company, 2 Park Street, Boston, MA 02108

Components of the machine-readable dictionary can be applied in a number of information systems. The most direct applications of the kind are in word processing or in "writing-support" systems built on a word-processing base. However, because a central function of any dictionary is in fact data verification, there are other proposed applications in communications and data storage and retrieval systems. Moreover, the complete interrelational electronic dictionary is in some sense the model of the language, and there are, accordingly, additional implications for language-based information search and retrieval.

In regard to word processing, the electronic lexicon can serve as the base for spelling verification (in which the computer detects many spelling or typographical errors) and spelling correction (in which the computer offers corrections to the errors it has identified). Because it is possible to develop algorithms that permit the computer to calculate the chances that the single best alternative it offers is actually correct, this substitution can in many cases be made automatically. At this point in the development of such systems, it is wise to flag such automatic corrections for inspection by the operator. At the present time, these processes generally depend upon the application of strict frequency measures, which permit the lexicon to be reduced to small-machine proportions and thereby reduce the possibility of a false hit -- the passing of a misspelled common word that happens to coincide in orthography with a legitimate but rare word. As our ability to draw cognitive information from text increases, and as available memory increases, such limits can be abandoned.

Truncation of the lexicon for other specific applications can be considered. It is possible, for example, to shape the lexicon to reflect a children's vocabulary and thereby to develop spelling correction and other writing aids for the early educational years on a very small machine base. It is also possible to shape the lexicon to the needs of the educated adult user, for whom information about common words is unnecessary, and thereby to provide an exceptionally rich resource about "difficult" words within small-machine memory for on-line access to spelling, definition, and pronunciation. Configuring the lexicon pyramidally by frequency, including all words of high frequency, seems an inevitable model to us now, but it is of course a kind of historical accident.

As many of these comments already make clear, even if one resolves to work within the linguistic bounds of the ordinary print dictionary, there are differences between the demands placed upon the dictionary by print applications and those arising out of electronic applications. It is a matter of judgment or taste for the print lexicographer not to include geographic and biographic terms in the lexicon, but the electronic lexicographer does not have that latitude. Access to on-line dictionaries can be by the standard alphabetic means, by well-developed phonetic algorithms (which solve the conundrum of needing to know spelling before being able to find spelling), or by definition (the reverse dictionary).
As electronic citation for words and senses is done on the basis of machine scans of print-composition tapes and even of voice scans, sensitive subject coding should permit the development of lexicons tailored to the user profile, with attendant benefits in comprehensiveness and economy of memory. One can conceive of dictionaries that monitor their own use and respond by offering only unknown information to the individual user.

The dictionary that contains synonymy is a resource in the construction of electronic synonym generators, of which there is at least one model that returns synonyms in the inflections of the source words, including phrasal synonyms, taking precise account of all irregularities in doing so. Presentation of synonyms is useful for "knowledge workers" but not for clerical workers. If usage information is included in the dictionary, then it is deliverable as a discrete electronic product. The most direct key to specific usage guidance is by "trigger" words or phrases that call up guidance information for the operator, but much more sophisticated implementations are possible when programming addresses grammar and syntax.

In large-system management, where accuracy of alpha data is a consideration, the machine dictionary can be the base, or one of the bases, for verification and correction of data streams in communication or of stored data. What I have called the complete interrelational dictionary -- fully coded to reflect the range of significant linguistic information -- will serve as the base for retrieving information by meaning rather than mechanics.
TRANSFER IN A MULTILINGUAL MT SYSTEM*

Steven Krauwer & Louis des Tombe
Institute for General Linguistics, Utrecht State University
Trans 14, 3512 JK Utrecht, The Netherlands

ABSTRACT

In the context of transfer-based MT systems, the nature of the intermediate representations, and particularly their 'depth', is an important question. This paper explores the notions of 'independence of languages' and 'simple transfer', and provides some principles that may enable linguists to study this problem in a systematic way.

1. Background

This paper is relevant for a class of MT systems with the following characteristics:

(i) The translation process is broken down into three stages: source text analysis, transfer, and target text synthesis.
(ii) The text that serves as unit of translation is at least a sentence.
(iii) The system is multilingual, at least in principle.

These characteristics are not uncommon; however, Eurotra may be the only project in the world that applies (iii) not only as a matter of principle but as actual practice.

We will regard a natural language as a set of texts. A translation pair is a pair of texts (Ts, Tt) from the source and target language, respectively. One sometimes wonders whether for every Ts there is at least one translation Tt, but we will ignore that kind of issue here. For translation systems of the analysis-transfer-synthesis family, the following diagram is a useful description:

(1)
                TRF
        Rs ----------> Rt
         ^              |
      AN |              | GEN
         |              v
        Ts ----------> Tt
                TRA

TRA, AN, TRF, and GEN are all binary relations. Given the sets of texts SL (source language) and TL (target language), and the set of representations RR, we can say:

    TRA ⊆ SL x TL,    AN ⊆ SL x RR,
    TRF ⊆ RR x RR,    GEN ⊆ RR x TL.

The subsystems analysis, transfer, and synthesis are implementations of AN, TRF, and GEN. In this paper, we are not interested in the implementations, but in the relations to be implemented. Especially, we try to find a principled basis for the study of the representations Rs and Rt. Such a basis can only be established in the context of some fundamental philosophy of the translation system. We will assume the following two basic ideas:

(i) Simple transfer: transfer should be kept as simple as possible.
(ii) Independence of languages: the construction of analysis and synthesis for each of the languages should be entirely independent of knowledge about the other languages covered.

These two ideas are certainly not trivial, and especially (ii) may be a bit exceptional compared to other MT projects; however, they are quite reasonable for a project that really tries to develop a multilingual translation system. In any case, they are both held in the Eurotra project.

The reason for (i) is simply the number of transfer systems that must be developed for k languages, which is k(k-1). From this, it follows that 'simple' here means 'simple to construct', not 'simple to execute'. The reason for principle (ii) also follows from multilinguality; while developing analysis and synthesis for some language, one may be able to take into account two or three other languages, but this does not hold in a case like Eurotra, where one not only has seven languages to deal with, but must also keep open the possibility of adding languages.

* The research described here was done in the context of the Eurotra project; we are grateful to all the Eurotrans for their stimulation and help.
Principles (i) and (ii) together constitute a philosophy that can serve as a basis for the development of a theory about the nature of the representations Rs and Rt in (1). The remainder of this paper is devoted to a clearer and more useful formulation of them.

2. Division of labour

Suppose that simple transfer is taken to mean that transfer will only substitute lexical elements, and that the theory of representation says that the representations are something in the way of syntactic structures. We now have a problem in cases where translation pairs consist of texts with different syntactic structures. Two well-known examples are:

(i) The graag-like case. Example: Dutch 'Tom zwemt graag' translates as English 'Tom likes to swim', with syntactic structures:

(2) Dutch: [S Tom [VP zwem [ADV graag]]]
(3) English: [S Tom [VP like [S empty [VP swim]]]]

In the case of Dutch-English transfer, lexical substitution would result in an Rt like the following:

(4) Possible Rt: [S Tom [VP swim [ADV like-to]]]

In this way, the pair <(4), 'Tom likes to swim'> becomes a member of the relation GEN for English. However, it is hard to believe that English linguists will be able to accommodate such pairs without knowing a lot about the other languages that belong to the project.

(ii) The kenner - 'somebody who knows' case. Dutch and English both have agentive derivation, like talk => talker, swim => swimmer. However, as usual, derivational processes are not entirely regular, and so, for example, though Dutch has 'kenner', English does not have the corresponding 'knower'. So we have the following translation pair:

(5) Dutch: 'kenner van het Turks'
    English: 'somebody who knows Turkish'

Again, the English generation writer is in trouble if he has to know that the Rt may contain a construction like [[know]+er], because this implies knowledge about all the other languages that participate.

The general idea is that we want to have a strictly monolingual basis for the development of the implementations of AN and GEN. Therefore, we have the following principle:

(6) Division of labour (simple version): For each language L in the system, <R,T> ∈ GEN_L iff <T,R> ∈ AN_L.

Principle (6) makes AN and GEN each other's 'mirror image', and so it becomes more probable (though it is not guaranteed) that the linguists knowing L will understand the class of Rts they can expect. However, (6) is too strong, and may be in conflict with the idea of simple transfer. For example, if surface syntactic structure is taken as the theory of representation, then (6) implies that TRF relates source language surface word order to target language word order, which clearly involves a lot more than substitution of lexical elements. Therefore, the notion of isoduidy has been developed.

Isoduidy is an equivalence relation between representations that belong to the same language. Literally, the word 'isoduid' (from Greek and Dutch stems) means 'same interpretation'; but the meaning should be generalized to something like 'equivalent with respect to the essence of translation'. To give an example, suppose that representations are surface trees with various labelings, including semantic ones like thematic relations and semantic markers. Isoduidy might then be defined loosely as follows: two representations are isoduid if they have the same vertical geometry, and the same lexical elements and semantic labels in the corresponding positions. Obviously, the definition of the contents of the isoduidy relation depends on the contents of the representation theory. However, we think that the general idea must be clear: isoduidy defines in some general way which aspects of representations are taken to be essential for translation.
However, we think that the general idea must be clear: isoduidy defines in some general way which aspects of representations are taken to be essential for translation. 465 Given isoduidy, one can give a more sophisti- cated version of the principle of division of labour as follows: (7) Division of labour (final version): For each language L in the system, R',T7 ~ GEN L iff KT,R7 6AN L and R' is isoduid to R As a consequence, TRF has not to take responsibili- ty for target language specific aspects like word order anymore. 3. Simple and complex transfer. Given the principle of division of labour, we can relate to each other the following three things: - the notion of simple transfer - the representation theory, especially, the 'depth' of representation; - the contents of the relation isoduidy Given some definition of what counts as simple transfer, we can now see whether the represen- tation theory is compatible with it. It is easy to see that some popular theories of simple transfer, including the one saying that transfer is just substitution of lexical elements, will now give rise to a rather 'deep' theory of representation. This follows from cases like 'graag-like' and 'kenner-knower', where some language happens to lack lexical elements that others happen to have. In such cases, the language lacking the element usually circumscribes the meaning in some way. If one excludes transfer other than lexical substitu- tion, such examples give rise to a theory of representation where similar circumscriptions must be assigned as representations in the language that does have the lexical element. So, in Dutch we get pairs in AN like 'kenner', ~somebody [who knows~ ~'Tom zwemt graag', ~ Tom graag ~ empty zwem~ ~ ~> Instead of having deep representations like these, one may consider the possibility that transfer is complicated sometimes. So, one may still desire that transfer consists of just lexi- cal substitution most of the time, but allow exceptions. The question then arises as to how simple and complex transfer interact. As a basis for that, one may observe that the relation TRF now holds between representations, while in practice just lexical elements are translated most of the time. A straightfoward generalization is possible for the case where a representation is some hierarchical object, say some tree. We can then introduce a new relation, called translates-as. This is a binary relation, probably many-to-many; its left-hand term is a subtree of R , and its righthand term is a tree. Clearl~, TRF is a subset of translates-as. We then have the following principle: (8) Transfer translates a tree node-by-node. Note that, obviously, this only makes sense as long as we have representations that are tree~.The following example may clarify the idea. Dotted lines indicate instantiations of the relation. (9) ~ ........................................ N (Tomi A B ..... F C ........................ I O R (Tom) ( T o m ~ A ilik~ J K 5 T O ..... B E ..... ~ (ilke) A (emotyi, (swim) / \ (zwem) (swim) (graag) L M (empty) (sNim) Note that Dutch 'graag' is not translated at all; it only serves as a basis for the complex transfer elementKC,l~. The principle of simple transfer can now be formulated as follows: If A translates-as A', then we will call A' a TN of A. We now call an element s,t of the set defined by translates-as a simple iff. 
either s and t are both terminal nodes, or

(i) s is a subtree, dominated by the nonterminal node A, and
(ii) t is a tree, dominated by A', and
(iii) A' is a copy of A, and
(iv) the immediate daughters of A' are copies of the TNs of the immediate daughters of A.

The principle of simple transfer then says that the proportion of simple elements in translates-as must be maximal. The generalised relation translates-as makes it possible to put some order into complex transfer. It localises complex transfer in a natural way, based on a tree structure. In (9), only the pair <C, I> is complex; all the others are simple. This view on transfer is easily implemented by means of an inbuilt strategy that simulates recursion.

4. Conclusion

The principle of division of labour, together with the principle of node-by-node transfer, constitutes a framework in which it is possible to study 'depth of representation' in a systematic way.
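The node-by-node regime is easy to make concrete. Below is a minimal sketch, not the Eurotra implementation: trees are tree(Label, Daughters) terms, complex/2 declares one hypothetical complex transfer element for the graag-like case, and everything else is simple transfer (copy the node label, translate the daughters, substitute the lexical leaves). The predicate names and category labels are assumptions for illustration.

    % Simple lexical substitutions (terminal nodes).
    lex(tom, tom).
    lex(zwem, swim).

    % Hypothetical complex transfer element for the graag-like case:
    %   [VP V [ADV graag]]  translates-as  [VP like [S empty [VP V']]]
    complex(tree(vp, [tree(V, []), tree(adv, [tree(graag, [])])]),
            tree(vp, [tree(like, []),
                      tree(s, [tree(empty, []), tree(vp, [tree(W, [])])])])) :-
        lex(V, W).

    % Node-by-node transfer: a declared complex element takes precedence;
    % otherwise copy the node and translate its daughters.
    transfer(T, T1) :- complex(T, T1), !.
    transfer(tree(Word, []), tree(Word1, [])) :- lex(Word, Word1), !.
    transfer(tree(Label, Ds), tree(Label, Ds1)) :- maplist(transfer, Ds, Ds1).

    % ?- transfer(tree(s, [tree(tom, []),
    %                      tree(vp, [tree(zwem, []),
    %                                tree(adv, [tree(graag, [])])])]), T).
    % T = tree(s, [tree(tom, []),
    %              tree(vp, [tree(like, []),
    %                        tree(s, [tree(empty, []),
    %                                 tree(vp, [tree(swim, [])])])])]).

Note that, as in the paper's figure (9), 'graag' itself receives no translation: it is consumed by the complex element, and every other node is handled by the two simple clauses.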
SEMANTICS OF TEMPORAL QUERIES AND TEMPORAL DATA

Carole D. Hafner
College of Computer Science, Northeastern University, Boston, MA 02115

Abstract

This paper analyzes the requirements for adding a temporal reasoning component to a natural language database query system, and proposes a computational model that satisfies those requirements. A preliminary implementation in Prolog is used to generate examples of the model's capabilities.

1. Introduction

A major area of weakness in natural language (NL) interfaces is the lack of ability to understand and answer queries involving time. Although there is growing recognition of the importance of temporal semantics among database theoreticians (see, for example, Codd [6], Anderson [2], Clifford and Warren [4], Snodgrass [15]), existing database management systems offer little or no support for the manipulation of time data. Furthermore (as we will see in the next Section), there is no consensus among researchers about how such capabilities should work. Thus, the developer of a NL interface who wants to support time-related queries cannot look to an underlying DBMS for help.

Currently available NL systems such as Intellect [8] have not attempted to support temporal queries, except in a trivial sense. In Intellect, users can ask to retrieve date attributes (e.g., "When was Smith hired?") or enter restrictions based on the value of a date attribute (e.g., "List the employees hired after Jan 1, 1984"); but more complex questions, such as "How long has it been since Smith received a raise?" or "What projects did Jones work on last January?", are not understood. This is a serious practical limitation, since the intended users of NL systems are executives and other professionals who will require more sophisticated temporal capabilities.

This report describes a model of temporal reasoning that is designed to be incorporated into a NL query system. We assume that a syntactic component could be developed to translate explicit temporal references in English (e.g., "two years ago") into logical representations, and restrict our attention to the conceptual framework (including both knowledge structures and rules of inference) underlying such representations. Section 2 analyzes the requirements that the temporal model must satisfy: first describing some of the issues that arise in trying to model time in a computer, then defining four basic semantic relationships that are expressed by time attributes in databases, and finally analyzing the capabilities required to interpret a variety of temporal queries. Based on this analysis, a computational model is described that satisfies many of the requirements for understanding and answering time-related database queries, and examples are presented that illustrate the model's capabilities.

2. Modeling Temporal Knowledge

Modeling time, despite its obvious importance, has proved an elusive goal for artificial intelligence (AI). One of the first formal proposals for representing time-dependent knowledge in AI systems was the "situation calculus" described by McCarthy and Hayes [11]. That proposal created a paradigm for temporal reasoning based on the notion of an infinite collection of states, each representing a single instant of time. Propositions are defined as being either true or false in a particular state, and predicates such as "before(s1, s2)" can be defined to order the states temporally.
This approach was used by Bruce [3] in modeling the meaning of tensed verb phrases in English, and it has been refined and extended by McDermott [13]. State space models describe time as being similar to the real number line, with branches for alternative pasts and hypothetical futures. Although this approach is intuitively appealing, there are many unsolved problems from both the logical and the linguistic points of view. A few of the current problems in temporal semantics are very briefly described below:

a. Non-monotonic reasoning. In a system for automated reasoning, conclusions are drawn on the basis of current facts. When a fact that was true becomes false at a later time, conclusions that were based on that fact may (or may not) have to be revised. This problem, which is viewed by many as "the" current issue in common sense reasoning, has been studied extensively by Doyle [7], Moore [14], and McDermott [13], and continues to occupy the attention of John McCarthy [12].

b. Representation of intervals and processes. Another problem for temporal logic is the representation of events that occur over intervals of time. Allen [1] points out that even events which seem to be instantaneous, such as a light coming on, cause problems for the state space model, since at the instant that this event occurs it is impossible to say that either "the light is on" or "the light is not on" is true. As a result, Allen chooses a representation of time that uses intervals as the primitive objects instead of instantaneous states.

c. Temporal distance. Neither the state space model nor the interval model offers a convincing notion of temporal distance. Yet the ability of a system to understand how long an event took, or how much time separated two events, is an integral part of temporal reasoning.

d. Periodicity of time. There are many periodic events that affect the way we think and talk about time, such as day and night, the days of the week, etc. McDermott [13] shows how his temporal logic can describe periodic events, and Anderson [2] includes a representation of periodic data in her model of temporal database semantics. However, reasoning about periodic time structures is still a relatively unexplored issue.

e. Vagueness and uncertainty. People are able to reason about events whose temporal parameters are not known exactly; in fact, almost all temporal descriptions incorporate some vagueness. The most direct treatment of this phenomenon was a system by Kahn and Gorry [9], which attached a "fuzz factor" to temporal descriptions. However, Kahn and Gorry recognized that this approach was very crude and that more sophisticated techniques were needed.

f. Complex event structures. The situation calculus is not easily adapted to descriptions of complex acts such as running a race, simultaneous events such as hiding something from someone by standing in front of it while that person is in the room (an example discussed by Allen [1]), or "non-events" such as waiting.

g. Metaphorical time descriptions. In naturally occurring NL dialogues, time descriptions are frequently metaphoric. Lakoff and Johnson [10] have shown that at least three metaphors are used to describe time in English: time as a path, time as a resource, and time as a moving object. AI models have yet to adequately deal with any of these metaphors.

Considering all of these complex issues (and there are others not mentioned here), it is not surprising that general temporal capabilities are not found in applied AI systems. However, in the domain of NL query systems, it may be possible to ignore many of these problems and still produce a useful system. The reason for this is that, in the world models of computer databases, most of the complexity and ambiguity has already been "modeled out." Furthermore, current NL interfaces only work well on a subclass of databases: those that conform to a simple entity-attribute-relationship model of reality.

The research described in this paper has focused on the design of a temporal component for a NL database query system. This has led to a model of time that corresponds to the structure of time attributes in databases: i.e., a domain of discrete units representing intervals of equal length. (Whether these units are seconds, minutes, days, or years may vary from one database to another.) The description of the model presented in Section 3 assumes that the basic temporal units are days, in order to make the model more intuitively meaningful; however, the model can be easily adapted to time units of other sizes.

2.1 Analysis of Time Attributes in Databases

The primary role of time information in databases is to record the fact that a specific event occurred at a specific time. (It is also possible to represent times in the future, when an event is scheduled to occur, e.g., the date when a lease is due to expire.) Having said this, there are still different ways in which time attributes may be semantically related to the entities in the database, and these require different inferences to be made in translating NL queries into the framework of the data model. The following categories of time attributes are frequently observed in "real world" databases:

1. Time attributes describing individuals.
2. Time of a "transaction."
3. Time when an attribute or relationship changed.
4. The time of transition from one stage of a process to the next.

The first two categories are quite straightforward. Time attributes of individuals appear in "entity" relations, as shown in Figure 1a; they describe the occurrence of a significant event for each individual, such as an employee's date of birth or the date when the employee was hired. This type of temporal attribute has a unique (and usually unchanging) value for each individual. The term "transaction" is used here to describe an event (usually involving several types of entities) that does not change the status of the participants, other than the fact that they participated in the event. For example, the date of each treatment (an X-ray, a therapy session, or a surgical procedure) given to a patient by a doctor would be recorded in a medical records database, as shown in Figure 1b.

Attributes in the third category record the time at which some other attribute or relationship changed. Databases containing this type of information are called "historical databases," in contrast to the more traditional "operational" databases, which only record a "snapshot" of the current state of the world. The salary history and student records databases shown in Figure 1c are examples of this type of temporal data.
However, tn the domain of NL query systems, it may be possible to ignore many of these problems and still produce a useful system. The reason for this is, in the world models of computer dataOases, most of the complexity and ambiguity has already been "modeled out'. Furthermore, current NL interfaces only work well on a supclass of databases: those that Conform to a simple entity-attribute-rela- tionship model of reality. The research described in this paper has focused on the design of a temporal component for a NL database QueP), system This has led to a model of time that corresponds to the structure of time attributes in databases: i.e., a domain of discrete units representing intervals of equal length. (Whether these units are SOCOrK2S, minutes, days, or years may vary from one aatabase to another.) The description of the model presented In Section 3 assumes that the basic tempora! units are days, In order to make the model more intuitively meaningful; however, the model can be easily adaoted to time units of other sizes. • 2 2.1 Analysis of Time Attributes in Databases The primary role of time Information In databases is to record the fact that a specific event occurred at a specific time. (It is also possible to represent times in the future, when an event is scheduled to occur, e.~, the date when a lease Is due to expire.) Having said this, there are still different ways in which time attributes may be semantically related to the entities in the database, and these require different Inferences to be made in translating NL queries into the framework of the data model. The following categories of time attributes are frequently observed in "real world" databases: I. Time attributes describing individuals 2. Time of a "transaction" 3. Time when an attribute or relationship changed 4. The time of transition from one stage of a process to the next. The first two categories are quite straightforward. Time attributes of individuals appear In "entity" relations, as shown In Figure la; they describe the occurrence of a significant, event for each Individual, such as an employee's date of birth or the date when the employee was hired. This type of temporal attribute has a unique (and usually unchanging) value for each Individual. The term "transaction" is used here to describe an event (usually involving several types of entities) that does not change the status of the participants, other than the fact that they participated In the event. For example, the date of each treatment (an X-ray, a therapy session, or a surgical procedure) given to a patient by a doctor would be recorded in a medical records database, as shown in Figure lb. Attributes In the third category record the time at which some other attribute or relationship changed. Databases containing this type of information are called "historical databases', in contrast to the more traditional "operational" databases, which only record a "snapshot" of the current state of the world. The salary history and student records databases shown in l a. Time Attributes Decribmg Individuals EmploLIee Database EmD_ID I Name I Address lb. Time of a Transaction Medical Records Database Patient IOoctor IProcedure I Birth_Date IHire-Date ic Time Whan an Attribute or Relationship Changed Salary History Database Emp_lO I Salar9 IDate Student Records Database Date Student-IO I Subject IOegree I Date I d. Time of a Process Transition Publication Database ISub-Oate [Disp-Date JRev-Date [Pub-Date Examples of Temporal Attributes Doc_lO J Author Figure 1. 
Within this category, we must recognize a further distinction between exclusive attributes such as salary and non-exclusive attributes such as degree. When a new salary is entered for an employee, the previous salary is no longer valid; but when a new degree is entered, it is added to the individual's previous degrees.

The last category of temporal data is used to record fixed sequences of events that occur in various activities. For example, the publication database of Figure 1d records the life-cycle stages of papers submitted to a scientific journal: the date the paper was received, the date it was accepted (or rejected), the date the revised version was received, and the date that it is scheduled to be published. We can view this sequence as a process with several stages ("under review", "being revised", "awaiting publication"), where each temporal attribute represents the time of transition from one stage to the next.

2.2. Analysis of Temporal Queries

This section considers four types of queries involving temporal data, and briefly outlines the capabilities that a temporal knowledge model must have in order to understand and answer queries of each type.

    Figure 2. Examples of Temporal Queries

    1. Which doctors performed operations on June 15, 1983?
    2. How many people received PhD's in Math last month?
    3. What percent of the employees got raises in the 4th quarter of 1984?
    4. Did any authors have more than one paper waiting for publication on Jan 1?
    5. How much was Jones making in September of 1984?
    6. How long has Green worked here?
    7. What was the average review time for papers submitted in 1983?
    8. Which patients received operations from each doctor last week?
    9. How many PhD's were granted to women during each of the past 10 years?

Queries 1-3 in Figure 2 are examples of time restriction queries, which retrieve data about individuals or events whose dates fall into a particular interval of time. Current database systems already support time restrictions, such as Query 1, that use simple, absolute time references. Queries such as (2), which use relative time references, and (3), which refer to intervals not directly represented in the database, require a more elaborate model of time structures than current systems provide. The time domain model described in Section 3.1 can support queries of this type.

The second type of query asks about the state-of-the-world on a given date (Query 4) or during an interval of time (Query 5). Understanding and answering these queries requires rules for deducing the situation at a given time, as a result of the occurrence (or non-occurrence) of events before that time. For example, Query 5 asks about Jones' salary in September of 1984; however, there may not be an entry for Jones in the salary history file during that period. The system must know that the correct salary can be retrieved from the most recent salary change entered for Jones before that date. Section 3.2 describes an event model that can represent this type of knowledge.

Another type of query asks about the length of time that a situation has existed (Query 6), or about the duration of one stage of a process (Query 7). These queries require functions to compute and compare lengths of time, and rules for deducing the starting and ending times of states-of-the-world based on the events that trigger them. Section 3.3 shows how the proposed temporal model handles this type of query.
The last type of query is the periodic query, which asks for objects to be grouped according to one or more attributes. High-level data languages and current NL interfaces are generally able to handle this type of request when it refers directly to the value of an attribute (e.g., Query 8), but not when it requires information to be grouped by time period, as in Query 9. To answer periodic queries requires a formal representation for descriptions such as "each of the past 5 years"; the "periodic descriptors" defined in Section 3.1 satisfy this requirement.

3. A Temporal Reasoning Model for Databases

In this section, a temporal reasoning model is proposed that can interpret the types of queries described in Section 2.2. The model, which is expressed as a collection of predicates and rules written in Prolog [5], consists of the following components:

1. A time domain model for representing units (days), intervals, lengths of time, calendar structures, and a variety of relative time descriptions.
2. An event model for representing and reasoning about the temporal relationships among events, situations, and processes in the application domain.

3.1. Time Domain Model

The basic structures of the time domain model are days, intervals, calendars, and periodic descriptors. The days (D, D1, D2, ...) form a totally ordered set, with a "distance" function representing the number of days between two days. The distance function satisfies the laws of addition, i.e.:

    1) distance(D1, D2) = 0  <-->  D1 = D2
    2) distance(D1, D2) = -distance(D2, D1)
    3) distance(D1, D2) + distance(D2, D3) = distance(D1, D3)

Intervals (I, I1, I2, ...) are ordered pairs of days [Ds, De] such that distance(Ds, De) >= 0. If an interval I = [Ds, De], then:

    4) start(I) = Ds
    5) end(I) = De
    6) length(I) = distance(start(I), end(I)) + 1
    7) during(D, I) = "true"  <-->  distance(start(I), D) >= 0 and distance(D, end(I)) >= 0

Other temporal relations, such as before(D1, D2), after(D1, D2), and interval relations such as those described by Allen [1], can be defined using the "distance" function in an equally straightforward manner. Also included in the model is a function "today" of no arguments whose value is always the current day.

Formulas (1-7) are repeated below in Prolog notation, with intervals represented as pairs [Ds, De]:

    1) distance(D1, D2, 0) :- D1 = D2.
    2) distance(D1, D2, Y) :- distance(D2, D1, X), Y is -X.
    3) distance(D1, D3, Z) :- distance(D1, D2, X), distance(D2, D3, Y), Z is X+Y.
    4) start([Ds, _], Ds).
    5) end([_, De], De).
    6) length(I, Y) :- start(I, Ds), end(I, De), distance(Ds, De, X), Y is X+1.
    7) during(D, I) :- start(I, Ds), end(I, De),
           distance(Ds, D, X), X >= 0,
           distance(D, De, Y), Y >= 0.

Examples of some natural language concepts:

    n_days_ago(N, D) :- today(DT), distance(D, DT, N).
    n_days_from_now(N, D) :- today(DT), distance(DT, D, N).
    the_past_n_days(N, I) :- today(DT), end(I, DT), length(I, N).
    the_next_n_days(N, I) :- today(DT), start(I, DT), length(I, N).
    the_day_before_yesterday(D) :- n_days_ago(2, D).
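As a small illustration (not from the paper) of how these definitions are meant to be used, suppose today/1 is asserted as a fact. The distance axioms above are a specification rather than executable date arithmetic (a practical implementation would map day names to integers), so the following session shows intended behavior only:

    today(jun151983).                  % illustrative current date

    % Intended behavior at the Prolog prompt:
    %   ?- n_days_ago(2, D).
    %   D = jun131983
    %
    %   ?- the_past_n_days(7, I).
    %   I = [jun091983, jun151983]     % the 7-day interval ending today
    %
    %   ?- during(jun101983, [jun091983, jun151983]).
    %   yes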
A calendar is a structure for representing sequences of intervals, such as weeks, months, and years. We will consider only "complete" calendars, which cover all the days, although it would be useful to define incomplete calendars to represent concepts such as "work weeks" which exclude some days. A calendar (CAL) is a totally ordered set of interval descriptors called "calendar elements" (CE, CE1, CE2, ...). The following predicates are defined for calendars:

distcal(CAL, CE1, CE2, N). This is like the distance function for days. It is true if CE2 is N calendar elements after CE1. For example, distcal(year, 1983, 1985, 2) is true.

getcal(CAL, CE, I). This predicate is true if I is the interval represented by the calendar element CE. For example, getcal(year, 1983, [jan011983, dec311983]) is true.

incal(CAL, D, CE, N). This predicate is true if D is the Nth day of calendar element CE. It is used to map a day into the calendar element to which it belongs. For example, incal(month, jan121983, [jan, 1983], 12) is true.

Calendars satisfy the well-formedness rules that we would expect; for example, for each day D and each calendar CAL, there is at most one (for complete calendars, exactly one) calendar element CE and positive integer N such that incal(CAL, D, CE, N) is true. Also, if CE1 is before CE2, then each day in CE1 is before each day in CE2. And, for complete calendars, if CE1 immediately precedes CE2, then the last day of CE1 immediately precedes the first day of CE2.

Although the representation of calendar elements is arbitrary, we have chosen conventions that are both meaningful to the programmer and useful to the implementation. The simplest calendars are those such as "year", containing named elements that occur only once. Years are simply represented as atoms corresponding to their names. Cyclic calendars are those that cycle within another calendar, such as the calendars for "month" and "quarter". The elements of these calendars are represented as 2-tuples; for example, distcal(month, [dec, 1983], [jan, 1984], 1) is true.

The calendar for weeks presents the most difficult problem for the time domain model, since weeks are not usually identified by name. We have defined the week calendar so that all weeks begin on Sunday and end on Saturday, with each element of the calendar equal to the interval it represents. While this is not an entirely satisfactory solution, it allows a number of useful "weekly" computations.

More examples of natural language concepts:

    from_ce1_to_ce2(CAL, CE1, CE2, I) :-     /* from January, 1983 to July, 1985 */
        getcal(CAL, CE1, I1), getcal(CAL, CE2, I2),
        start(I1, S), end(I2, E),
        start(I, S), end(I, E).

    n_cal_elts_ago(CAL, N, D) :-             /* three weeks ago */
        today(DT), incal(CAL, DT, CE1, X),
        distcal(CAL, CE2, CE1, N), incal(CAL, D, CE2, X).

The last structure in the time domain model is the periodic descriptor (PD), used for representing expressions such as "each of the past 5 years" or "each month in 1983". Periodic descriptors are 3-tuples consisting of a calendar (to define the size of each period), a starting element from that calendar (to define the first period), and either an ending element from that calendar (to define the last period) or an integer (to define how many periods are to be computed). Periodic descriptors can run either forward or backward in time, as shown by the following example:

    each_of_the_past_n_cal_elts(CAL, N, PD) :-
        PD = [CAL, CEP, M],
        today(DT), incal(CAL, DT, CET, _),
        distcal(CAL, CEP, CET, 1),
        M is -N.

To interpret a query containing a periodic descriptor, the NL interface must first expand the structure into a list of intervals (this must wait until execution time in order to ensure the right value for "today") and then perform an iterative execution of the query, restricting it in turn to each interval in the list.
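The paper does not give the expansion procedure itself; the following is one possible sketch for the integer-count form of a periodic descriptor. The predicate name expand_pd and the handling of direction are our assumptions, and we assume distcal/4 can generate its element arguments:

    % expand_pd([CAL, CE, N], Intervals): expand a periodic descriptor
    % with an integer count N into its list of intervals.  Negative N
    % runs backward in time, as produced by each_of_the_past_n_cal_elts/3.
    expand_pd([_, _, 0], []).
    expand_pd([CAL, CE, N], [I|Is]) :-
        N > 0,
        getcal(CAL, CE, I),            % interval for the current element
        distcal(CAL, CE, CE1, 1),      % CE1 is the next calendar element
        N1 is N - 1,
        expand_pd([CAL, CE1, N1], Is).
    expand_pd([CAL, CE, N], [I|Is]) :-
        N < 0,
        getcal(CAL, CE, I),
        distcal(CAL, CE1, CE, 1),      % CE1 is the previous element
        N1 is N + 1,
        expand_pd([CAL, CE1, N1], Is).

For example, expanding [year, 1984, -5] would yield the intervals for 1984, 1983, 1982, 1981, and 1980, i.e., "each of the past 5 years" relative to a 1985 "today".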
3.2. Event Model

In the event model, each type of event is represented by a unique predicate, as are the situations and process stages that are signified by events. For example, the event of a person receiving a degree is represented by:

    awarded(Person, Subject, Degree).

The situation of having the degree is represented by:

    holds(Person, Subject, Degree).

While the "awarded" predicate is true only on the date the degree was received, the "holds" predicate is true on that date and all future dates. Below we define a straightforward approach to representing this type of knowledge. Five basic temporal predicates are introduced to relate events and situations of the application model to elements of the time domain model.

timeof(E, D) - succeeds whenever an event that matches E occurs in the database with a time that matches D. This is the basic assertion that relates events to their times of occurrence.

nextof(E, T, D) - asserts that D is the next time of occurrence of event E after time T.

    nextof(E, T, D) :-
        timeof(E, D), before(T, D),
        not (timeof(E, X), before(T, X), before(X, D)).

startof(S, D) - defines the time when a situation or process stage begins to be true, based on the occurrence of the event that triggers it. Rules of this sort are part of the knowledge base of each application, for example:

    startof(holds(Person, Subject, Degree), Date) :-
        timeof(awarded(Person, Subject, Degree), Date).

endof(S, D) - defines the time when a situation ceases to be true. For an exclusive attribute such as salary(jones, 40000), the "end-of" a situation is the "next-of" the same kind of event that triggered the situation (i.e., when Jones gets a new salary then salary(jones, 40000) is no longer true). For other kinds of situations, a specific "termination" event is required to signify the ending; e.g., a publication ceases to be "under review" when it is accepted.

trueon(S, D) - succeeds if situation S is true at time D. Given the predicates described above, the definition of trueon might be:

    trueon(S, D) :-
        startof(S, A), not (after(A, D)),
        not (endof(S, B), before(B, D)).

This rule asserts that situation S is true at time D if S began at a time before (or equal to) D, and did not end at a time before D.

3.3. An Example Query

We can now bring the two parts of the model together to describe how a temporal query is represented and interpreted using the predicates and rules defined above. We will consider the following query, addressed to the salary history database:

    Which employees are making at least twice as much now as they made 5 years ago?

For experimental purposes, we have defined our database as a collection of Prolog facts, as proposed by Warren [16]; thus, the database can be queried directly in Prolog. We have also defined the "days", which are the primitive elements of the time domain model, to have names such as jan011982 or jul041776; these names appear in the database as the values of temporal attributes, as shown below:

    salhistory(jones, 30000, jan011983).
    salhistory(smith, 42000, jan151983).

Each of the event-model predicates described in the previous section has also been created, with "newsalary(EMPID, SAL)" substituted for E and "makes(EMPID, SAL)" substituted for S. For example:

    timeof(newsalary(EMPID, SAL), D) :-
        salhistory(EMPID, SAL, D).

    startof(makes(EMPID, SAL), D) :-
        timeof(newsalary(EMPID, SAL), D).

    endof(makes(EMPID, SAL), D2) :-
        timeof(newsalary(EMPID, SAL), D),
        nextof(newsalary(EMPID, SAL2), D, D2),
        SAL \== SAL2.

    trueon(makes(EMPID, SAL), D) :-
        startof(makes(EMPID, SAL), D).
    trueon(makes(EMPID, SAL), D) :-
        startof(makes(EMPID, SAL), D1), before(D1, D),
        not (endof(makes(EMPID, SAL), D2), before(D2, D)).
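With the two salhistory facts and the rules above loaded (together with before/2 from the time domain model), a query about Jones' salary on a later date chains through trueon, startof, timeof, and salhistory; the following transcript is illustrative:

    ?- trueon(makes(jones, SAL), feb011983).
    SAL = 30000

    % The second trueon clause succeeds: the salary started on jan011983,
    % which is before feb011983, and no later newsalary event for jones
    % intervenes, so endof/2 fails inside the negation.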
We can now express the sample query in Prolog:

    result(EMPID, SAL, OLDSAL) :-
        today(DT),
        trueon(makes(EMPID, SAL), DT),
        n_cal_elts_ago(year, 5, DFYA),
        trueon(makes(EMPID, OLDSAL), DFYA),
        SAL >= 2 * OLDSAL.

This Prolog rule would be the desired output of the linguistic component of a NL query system. Paraphrased in English, it says: retrieve all triples of employee id, current salary, and old salary, such that the employee makes the current salary today, the employee made the old salary five years ago, and the current salary is greater than or equal to two times the old salary.

If we expand all of the Prolog rules that would be invoked in answering this query, leaving only database access commands, arithmetic tests, and computations of the "distance" function, the complete translation would be:

    result(EMPID, SAL, OLDSAL) :-
        today(DT),
        salhistory(EMPID, SAL, D),
        distance(D, DT, X1), X1 >= 0,
        not (salhistory(EMPID, SAL2, D2),
             distance(D, D2, X2), X2 > 0,
             distance(D2, DT, X3), X3 >= 0,
             SAL \== SAL2),
        incal(year, DT, YR1, Y),
        distcal(year, YR1, YFYA, -5),
        incal(year, DFYA, YFYA, Y),
        salhistory(EMPID, OLDSAL, D3),
        distance(D3, DFYA, X4), X4 >= 0,
        not (salhistory(EMPID, OLDSAL2, D4),
             distance(D3, D4, X5), X5 > 0,
             distance(D4, DFYA, X6), X6 >= 0,
             OLDSAL \== OLDSAL2).

4. Conclusions

This paper has proposed a temporal reasoning model based on the use of time attributes in databases, and the types of queries that we would expect in "real-world" applications. The model includes constructs for representing events, situations, and processes that are similar to those found in other temporal reasoning models. It also addresses some issues of particular importance for NL query systems, which are not addressed by other recent work in temporal reasoning, including:

1. Representing the time between two points, and the lengths of intervals.
2. Representing weeks, months, years, and other standard calendar structures.
3. Representing information relative to "today", "this month", etc.
4. Representing periodic time descriptions.

The use of discrete, calendar-like structures as a basis for representing time in a computer is a simplification that is compatible with the discrete representation of information in databases. Hopefully, this simplification will make it easier to program the model and to integrate it into a state-of-the-art NL query system.

5. References

1. Allen, J. F., "Towards a General Theory of Action and Time." Artificial Intelligence, Vol. 23, No. 2 (1984), 123-154.
2. Anderson, T. L., "Modeling Time at the Conceptual Level." In P. Scheuermann, ed., Improving Database Usability and Responsiveness, pp. 273-297. Jerusalem: Academic Press, 1982.
3. Bruce, B., "A Model for Temporal Reference and its Application in a Question Answering System." Artificial Intelligence, Vol. 3, No. 1 (1972), 1-25.
4. Clifford, J. and D. S. Warren, "Formal Semantics for Time in Databases." ACM TODS, Vol. 8, No. 2 (1983), 214-254.
5. Clocksin, W. F. and C. S. Mellish, Programming in Prolog. Berlin: Springer-Verlag, 1981.
6. Codd, E. F., "Extending the Database Relational Model to Capture More Meaning." ACM TODS, Vol. 4, No. 4 (1979), 397-434.
7. Doyle, J., "A Truth Maintenance System." Artificial Intelligence, Vol. 12, No. 3 (1979), 231-272.
8. INTELLECT Reference Manual, INTELLECT LEX Utility Reference, Program Offerings LY20-9083-0 and LY20-9082-0, IBM Corp., 1983.
9. Kahn, K. and G. A. Gorry, "Mechanizing Temporal Knowledge." Artificial Intelligence, Vol. 9 (1977), 87-108.
10. Lakoff, G., and M. Johnson, Metaphors We Live By.
The University of Chicago Press, Chicago, IL (1980).
11. McCarthy, J. and P. J. Hayes, "Some Philosophical Problems from the Standpoint of Artificial Intelligence." In B. Meltzer and D. Michie, eds., Machine Intelligence 4. American Elsevier, New York (1969).
12. McCarthy, J., "What is Common Sense?" Presidential Address at the National Conference on Artificial Intelligence (AAAI-84), Austin, TX (1984).
13. McDermott, D., "A Temporal Logic for Reasoning About Processes and Plans." Cognitive Science, Vol. 6 (1982), 101-155.
14. Moore, R. C., "Semantical Considerations on Nonmonotonic Logic." Artificial Intelligence, Vol. 25, No. 1 (1985), 75-94.
15. Snodgrass, R., "The Temporal Query Language TQuel." In Proc. 3rd ACM SIGMOD Symp. on Principles of Database Systems, Waterloo, Ontario (1984).
16. Warren, D. H. D., "Efficient Processing of Interactive Relational Database Queries Expressed in Logic." In Proc. 7th Conf. on Very Large Databases, pp. 272-281. IEEE Computer Society (1981).
1985
1
THE COMPUTATIONAL DIFFICULTY OF ID/LP PARSING

G. Edward Barton, Jr.
M.I.T. Artificial Intelligence Laboratory
545 Technology Square
Cambridge, MA 02139

ABSTRACT

Modern linguistic theory attributes surface complexity to interacting subsystems of constraints. For instance, the ID/LP grammar formalism separates constraints on immediate dominance from those on linear order. Shieber's (1983) ID/LP parsing algorithm shows how to use ID and LP constraints directly in language processing, without expanding them into an intermediate "object grammar." However, Shieber's purported O(|G|^2 . n^3) runtime bound underestimates the difficulty of ID/LP parsing. ID/LP parsing is actually NP-complete, and the worst-case runtime of Shieber's algorithm is actually exponential in grammar size. The growth of parser data structures causes the difficulty. Some computational and linguistic implications follow; in particular, it is important to note that despite its potential for combinatorial explosion, Shieber's algorithm remains better than the alternative of parsing an expanded object grammar.

INTRODUCTION

Recent linguistic theories derive surface complexity from modular subsystems of constraints; Chomsky (1981:5) proposes separate theories of bounding, government, theta-marking, and so forth, while Gazdar and Pullum's GPSG formalism (Shieber, 1983:2ff) uses immediate-dominance (ID) rules, linear-precedence (LP) constraints, and metarules. When modular constraints are involved, rule systems that multiply out their surface effects are large and clumsy (see Barton, 1984a). The expanded context-free "object grammar" that multiplies out the constraints in a typical GPSG system would contain trillions of rules (Shieber, 1983:4).

Shieber (1983) thus leads in a welcome direction by showing how ID/LP grammars can be parsed "directly," without the combinatorially explosive step of multiplying out the effects of the ID and LP constraints. Shieber's algorithm applies ID and LP constraints one step at a time, as needed. However, some doubts about computational complexity remain. Shieber (1983:15) argues that his algorithm is identical to Earley's in time complexity, but this result seems almost too much to hope for. An ID/LP grammar G can be much smaller than an equivalent context-free grammar G'; for example, if G1 contains only the rule S -> abcde (with unordered right-hand side), the corresponding G'1 contains 5! = 120 rules. If Shieber's algorithm has the same time complexity as Earley's, this brevity of expression comes free (up to a constant). Shieber says little to allay possible doubts:

    We will not present a rigorous demonstration of time complexity,
    but it should be clear from the close relation between the
    presented algorithm and Earley's that the complexity is that of
    Earley's algorithm. In the worst case, where the LP rules always
    specify a unique ordering for the right-hand side of every ID
    rule, the presented algorithm reduces to Earley's algorithm
    since, given the grammar, checking the LP rules takes constant
    time. Thus, the time complexity of the presented algorithm is
    identical to Earley's. That is, it is O(|G|^2 . n^3), where |G|
    is the size of the grammar (number of ID rules) and n is the
    length of the input. (1983:15)

Among other questions, it is unclear why a situation of maximal constraint should represent the worst case.
Minimal constraint may mean that there are more possibilities to consider.

Shieber's algorithm does have a time advantage over the use of Earley's algorithm on the expanded CFG, but it blows up in the worst case; the claim of O(|G|^2 . n^3) time complexity is mistaken. A reduction of the vertex-cover problem shows that ID/LP parsing is actually NP-complete; hence this blowup arises from the inherent difficulty of ID/LP parsing rather than a defect in Shieber's algorithm (unless P = NP). The following sections explain and discuss this result. LP constraints are neglected because it is the ID rules that make parsing difficult. Attention focuses on unordered context-free grammars (UCFGs; essentially, ID/LP grammars sans LP). A UCFG rule is like a standard CFG rule except that when used in a derivation, it may have the symbols of its expansion written in any order.

SHIEBER'S ALGORITHM

Shieber generalizes Earley's algorithm by generalizing the dotted-rule representation that Earley uses to track progress through rule expansions. A UCFG rule differs from a CFG rule only in that its right-hand side is unordered; hence successive accumulation of set elements replaces linear advancement through a sequence.
There will also be 15 states reflecting the completion and prediction of phrases. In cases like this, $hieber's al- gorithm enumerates all of the combinations of k elements taken i at a tin|e, where k is the rule length and i is the number of elements already processed. Thus it can be combinatorially explosive. Note, however, that Shieber's algorithm is still better than parsing the object grammar. With the Earley parser, the state set would reflect the same possibilities, but encoded in a less concise representation. In place ot the state involving S ~ {A, 13, C}.{D,E}, for instance, there would be 3!. 2! = 12 states involving S ~ ABC.DE, S ~ 13CA.ED, and so forth. 2 his|end IFor mor~. dl.rail~ ~-e Barton (198,1bi ~ld Shi,.hPr (1983}. Shieber'.~ rel,re~,ent;ttion ,lilfers in .~mle ways from tilt. reprr.'~,nlatioll de.. .a'ribt.,[ lit.re, wit|ell W~.~ ,h.veh}ped illth'pt, ndeutly by tilt, author. The dilft,r,.nces tuft. i~ellPrldly iut.~.'~eutiid, but .~ee |tote 2. lln eontrP....t¢ tit t|lt. rr|)rrr4.ntztl.ion .ilht..4tr;tled here. :¢,}:ieber'.., rt.|v. £P.~Wl'llt+l+liOll Hl'¢ll;Idl|y .~ulfl.r.~ to POIlI(" eXtt'tlt flOlll Tilt + Y.;tllle |lf[lil- of a total of 25 states, the Earley state set would contain 135 = 12 • 10 -+- 15 states. With G~., the parser could not be sure of the categorial identities of the phrases parsed, but at least it was certain of the number ,'tad eztent of the phrases. The situation gets worse if there is uncertainty in those areas ~ well. Derive G3 by replacing every z in G,. with the empty string e so that ,an A, for instance, can be either a or nothing. Before any input has been read, state set S, in $hieber's parser must reflect the possibility that the correct parse may in- clude any of the 2 ~ = 32 possible subsets of A, B, C, D, ~' empty initial constituents. For example, So must in- clude [..q -- {A, ]3,C, D, E}.{},0i because the input might turn out to be the null string. Similarly, S. must include :S ~ {A,C, El.{~3, Dt,O~ because the input might be bd or db. Counting all possible subsets in addition to other states having to do with predictions, con|pie|ions, and the parser start symbol that some it||p[ententatioas introduce, there will be .14 states in £,. (There are 3:~8 states ill the corresponding state when the object gra, atuar G~ is used.) |low call :Shieber's algorithm be exponeatial in grant- Inar size despite its similarity to Earh:y's algorithm, which is polynontiM in gratnln~tr size7 The answer is that Shieber's algorithm involves a leech larger bouad on the number of states in a state set. Since the Eariey parser successively processes all of the states in each state set (Earley, 1970:97), an explosion in the size of the state sets kills any small runtime bound. Consider the Earley parser. Resulting from each rule X ~ At .... 4~ in a gram|oar G,, there are only k - t pos- sible dotted rules. The number of possible dotted rules is thus bounded by the au~'uber of synibois that it takes to write G, down, i.e. by :G,, t. Since an Eariey state just pairs a dotted rule with an interword position ranging front 0 to the length n of the input string, there are only O('~C~; • n) possible states: hence no state set may contain more than O(Gai'n) (distinct) states. By an argument due to Eartey, this limit allows an O(:G~: . n z) bound to be placed on Earley-parser runti,ne. In contrast, the state sets of Shieber's parser may grow t|tuch larger relative to gr~nmar size. A rule X ~ At... 
A rule X -> A1 ... Ak in a UCFG G yields not k + 1 ordinary dotted rules, but 2^k possible dotted UCFG rules tracking accumulation of set elements. In the worst case the grammar contains only one rule and k is on the order of |G|; hence a bound on the number of possible dotted UCFG rules is not given by O(|G|), but by O(2^|G|). (Recall the exponential blowup illustrated for grammar G2.) The parser sometimes blows up because there are exponentially more possible ways to progress through an unordered rule expansion than through an ordered one. In ID/LP parsing, the easiest case occurs when the LP constraints force a unique ordering for every rule expansion. Given sufficiently strong constraints, Shieber's parser reduces to Earley's as Shieber thought, but strong constraint represents the best case computationally rather than the worst case.

NP-COMPLETENESS

The worst-case time complexity of Shieber's algorithm is exponential in grammar size rather than quadratic as Shieber (1983:15) believed. Did Shieber choose a poor algorithm, or is ID/LP parsing inherently difficult? In fact, the simpler problem of recognizing sentences according to a UCFG is NP-complete. Thus, unless P = NP, no ID/LP parsing algorithm can always run in time polynomial in the combined size of grammar and input. The proof is a reduction of the vertex cover problem (Garey and Johnson, 1979:46), which involves finding a small set of vertices in a graph such that every edge of the graph has an endpoint in the set. Figure 1 gives a trivial example.

    [Figure 1: a graph with vertices a, b, c, d and edges e1 = (a,c), e2 = (b,c), e3 = (c,d), e4 = (b,d). This graph illustrates a trivial instance of the vertex cover problem. The set {c,d} is a vertex cover of size 2.]

To make the parser decide whether the graph in Figure 1 has a vertex cover of size 2, take the vertex names a, b, c, and d as the alphabet. Take H1 through H4 as special symbols, one per edge; also take U and D as dummy symbols. Next, encode the edges of the graph: for instance, edge e1 runs from a to c, so include the rules H1 -> a and H1 -> c. Rules for the dummy symbols are also needed. Dummy symbol D will be used to soak up excess input symbols, so D -> a through D -> d should be rules. Dummy symbol U will also soak up excess input symbols, but U will be allowed to match only when there are four occurrences in a row of the same symbol (one occurrence for each edge). Take U -> aaaa, U -> bbbb, U -> cccc, and U -> dddd as the rules expanding U.

Now, what does it take for the graph to have a vertex cover of size k = 2? One way to get a vertex cover is to go through the list of edges and underline one endpoint of each edge. If the vertex cover is to be of size 2, the underlining must be done in such a way that only two distinct vertices are ever touched in the process. Alternatively, since there are 4 vertices in all, the vertex cover will be of size 2 if there are 4 - 2 = 2 vertices left untouched in the underlining. This method of finding a vertex cover can be translated into an initial rule for the UCFG, as follows:

    START -> H1 H2 H3 H4 U U D D D D
Each H-symbol will match one of the endpoints of the corresponding edge, each U-symbol will correspond to a vertex that was left untouched by the H-matching, and the D-symbols are just for bookkeeping. (Note that this is the only rule in the construction that makes essential use of the unordered nature of rule right-hand sides.) Figure 2 shows the complete grammar that encodes the vertex-cover problem of Figure 1.

    START -> H1 H2 H3 H4 U U D D D D
    H1 -> a | c
    H2 -> b | c
    H3 -> c | d
    H4 -> b | d
    U  -> aaaa | bbbb | cccc | dddd
    D  -> a | b | c | d

    Figure 2: For k = 2, the construction described in the text transforms the vertex-cover problem of Figure 1 into this UCFG. A parse exists for the string aaaabbbbccccdddd iff the graph in the previous figure has a vertex cover of size <= 2.

To make all of this work properly, take sigma = aaaabbbbccccdddd as the input string to be parsed. (For every vertex name x, include in sigma a contiguous run of occurrences of x, one for each edge in the graph.) The grammar encodes the underlining procedure by requiring each H-symbol to match one of its endpoints in sigma. Since the expansion of the START rule is unordered, an H-symbol can match anywhere in sigma, hence can match any vertex name (subject to interference from previously matched rules). Furthermore, since there is one occurrence of each vertex name for every edge, it is impossible to run out of vertex-name occurrences. The grammar will allow either endpoint of an edge to be "underlined" (that is, included in the vertex cover), so the parser must figure out which vertex cover to select. However, the grammar also requires two occurrences of U to match. U can only match four contiguous identical input symbols that have not been matched in any other way; thus if the parser chooses too large a vertex cover, the U-symbols will not match and the parse will fail. The proper number of D-symbols equals the length of the input string, minus the number of edges in the graph (to account for the H-matches), minus (4 - k) times the number of edges (to account for the U-matches): in this case, 16 - 4 - (2 * 4) = 4, as illustrated in the START rule.

The result of this construction is that in order to decide whether sigma is in the language generated by the UCFG, the parser must search for a vertex cover of size 2 or less.(3)

(3) If the vertex cover is smaller than expected, the D-symbols will soak up the extra contiguous runs that could have been matched by more U-symbols.
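The construction is easily mechanized; the following sketch (our code, not Barton's) merely tabulates the reduction grammar for the graph of Figure 1 and a given cover size K. Note that the number of D-symbols is the input length NV*NE minus NE (for the H-matches) minus (NV-K)*NE (for the U-matches):

    edge(e1, a, c).  edge(e2, b, c).
    edge(e3, c, d).  edge(e4, b, d).
    vertices([a, b, c, d]).

    reduction(K) :-
        findall(E, edge(E, _, _), Es), length(Es, NE),
        vertices(Vs), length(Vs, NV),
        NU is NV - K,                      % untouched vertices -> U-symbols
        ND is NV*NE - NE - NU*NE,          % leftover input -> D-symbols
        format("START -> ~w H's, ~w U's, ~w D's~n", [NE, NU, ND]),
        forall(edge(E, V, W), format("~w -> ~w | ~w~n", [E, V, W])),
        forall(member(V, Vs), format("U -> ~w repeated ~w times~n", [V, NE])),
        forall(member(V, Vs), format("D -> ~w~n", [V])).

    % ?- reduction(2).   prints the grammar of Figure 2 (4 H's, 2 U's, 4 D's).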
Other computational ,and linguistic con- sequences alzo follow. Although Shieber's parser sometimes blows up, it re- mains better than the alternative of ,~arsing an expanded "object ~arnmar." The NP-completeness result shows that the general c~e of ID/LP parsing is inherently difficult; hence it is not surprising that Shieber's ID/LP parser some- times suffers from co,nbinatorial explosion. It is more im- portant to note that parsing with the expanded CFG blows up in ea~v c~es. It should not be h~d to parse the lan- ~lf the v#rtex er, ver i.~ t, maller tllall expected, the D-.~y,nbo~ will up the extra eonti~mun ntrm that could have been matrhed I~' more (f-symbols. guage that consists of aH permutations of the string abode, but in so doing, the Earley parser can use 24 states or more to encode what the Shieber parser encodes in only one (re- call Gl). Tile significant fact is not that the Shieber parser can blow up; it is that the use of the object grammar blows up unnecessarily. The construction that reduces the Vertex Cover prob- lem to ID/LP P,xrsing involves a grammar and input string that both depend on the problem instance; hence it leaves it open that a clever programmer ,night concentrate most of the contputational dilliculty of ID/LF' parsing into an ofll_ine grammar-precompilation stage independent of the input -- under optimistic hopes, perhaps reducing the time required for parsing ;m input (after precompilation) to a polynomial function of grammar size and inpt,t length. Shieber's algorithm has no precompilation step, ~ so the present complexity results apply with full force; ,'my pos- sible precompilation phase remains hyl~othetical. More- over, it is not clear that a clever preco,npilation step is even possible. For example, ifn enters into the true com- plexity of ID/LI ~ parsing ,~ a factor multiplying an expo- nential, ,an inpnt-indepemtent precompilation phase can- not help enough to make the parsing phase always run in polynomial time. On a related note,.~uppo,e the precom- pilation step is conversiol, to CF(.; farm ¢md the runtime algorithm is the Earley parser. Ahhough the precompila- tion step does a potentially exponenti;d amount of work in producing G' from G, another expoaential factor shows up at runtime because G' in the complexity bound G'2n~ is exponentially larger than the original G'. The NP-completeness result would be strengthened if the reduction used the same grammar for all vertex-cover problems, for it woold follow that precompilation could not bring runtime down to polynomial time. However, unless ,~ = & P, there can be no such reduction. Since gr.'Jannlar size would not count as a parameter of a fixed- gramm~tr [D/LP parsing problem, the l,se of the Earley parser on the object gr,-ulzmar would already constitute a polynomial-time algorithm for solving it. (See the next section for discussion.) The Vertex Cover reduction also helps pin down the computational power of UCFGs. As G, ,'tad G' t illus- trated, a UCFG (or an ID/LP gr,'uumar) is sometimes tnttch smaller than an equivalent CFG. The NP-complete- ness result illuminat,_'s this property in three ways. First, th'e reduction shows that enough brevity is gained so that an instance of any problem in .~ .~ can be stated in a UCFG that is only polyno,nially larger than the original problem instance. 
In contrast, the current polynomial-time reduc- tion could not be carried out with a CFG instead of a UCFG, since the necessity of spelling out all the orders in which symbols lltight appear couhl make the CFG expo- nentially larger than the instance. Second, the reduction shows that this brevity of expression is not free. CFG 'Shieber {1983:15 n. 6) mentmn.~ a possible precompilation step. but it i~ concerned ~,,,itlt the, [,P r~'hLrum rather tha.'* tlt~r ID rtth.-~. 79 recognition can be solved in cubic time or less, but unless P = .~'P, general UCFG recognition cannot be solved in polynomial time. Third, the reduction shows that only one essential use of the power to permute rule expansions is necessary to make the parsing problem NP-comphte, though the rule in question may need to be arbitrarily long. Finally, the ID/LP parsing problem illustrates how weakness of constraint c,-m make a problem computation- ally difficult. One might perhaps think that weak constraints would make a problem emier since weak con- straints sound easy to verify, but it often takes ~trong con- straints to reduce the number of possibilities that an algo- rithm nmst consider. In the present case, the removal of constraints on constituent order causes the dependence of the runt|me bound on gr,'unmar size to grow from IGI ~ to TG',. The key factors that cause difficuhy in ID/LP parsing are familiar to linguistic theory. GB-theory amt GPSG both permit the existence of constituents that are empty on the surface, and thus in principle they both allow the kind of pathology illustrated by G~, subject to ,-uueliora- tion by additional constraints. Similarly, every current theory acknowledges lexical ambiguity, a key ingredient of the vertex-cover reduction. Though the reduction illumi- nates the power of certain u,echanisms and formal devices, the direct intplications of the NP-completeness result for grammatical theory are few. The reduction does expose the weakness of attempts to link context-free generative power directly to efficient parsability. Consider, for inst,'mce, Gazdar's (1981:155) claim that the use of a formalism with only context-free power can help explain the rapidity of human sentence processing: Suppose ... that the permitted class of genera- live gl'anllllal'S constituted ,t s,b~ct -f t.h~Jsc phrase structure gramni;trs c;qmblc only of generating con- text-free lung||ages. Such ;t move w, mld have two iz,lportant tuetathcoretical conseqoences, one hav- ing to do with lear,mbility, the other with process- ability ... We wen|hi have the beginnings of an ex- plan:tti~:u for the obvious, but larg~.ly ignored, fact thltI hll:llD.ns process the ~ttterance~ they hear very rapidly. ."~cnll+llCe+ c+f ;t co;O.exl-frec I;tngu;tge are I+r,val>ly l;ar~;tl~h: in ;t l.illn'~ that i>~ i>r,,l>ot'tionitl to the ct,bc ,,f the lezlgl h of the ~entenee or less. As previously remarked, the use of Earley's algorithm on the expanded object grantmar constitutes a parsing method for the ILxed-grammar (D/LP parsing problem that is in- deed no worse than cubic in sentence length. However, the most important, aspect of this possibility is that it is devoid of practical significance. The object ~,'mmtar could con- tain trillions of rules in practical cases (Shieber, 1983:4). If IG'~, z. n ~ complexity is too slow, then it rentains too slow when !G'I: is regarded as a constant. 
Thus it is impossi- ble to sustain this particular argument for the advantages of such formalisms ,as GPSG over other linguistic theo- ries; instead, GPSG and other modern theories seem to be (very roughly) in the same boat with respect to com- plexity. In such a situation, the linguistic merits of various theories are more important than complexity results. (See Berwick (1982), Berwick and Weinberg (1984), aJad Ris- tad (1985) for further discussion.) The reduction does not rule out the use of formalisms that decouple ID and LP constraints; note that Shieber's direct parsing algorithm wins out over the use of the object grammar. However, if we assume that natural languages ,xre efficiently parsable (EP), then computational difFicul- ties in parsing a formalism do indicate that the formalism itself fl~ils to capture whatever constraints are responsible for making natural languages EP. If the linquistically rel. evant ID/LP grammars are EP but the general ID/LP gramu,ars ~e not, there must be additional factors that guarantee, say, a certain amount of constraint from the LP retationJ (Constraints beyond the bare ID, LP formalism are reqt, ired on linguistic grounds ,as well.) The subset prtnciple ,ff language acqoisition (cf. [h, rwick and We|n- berg, 198.1:233) wouht lead the language learner to initially hypothesize strong order constraints, to be weakened only in response to positive evidence. llowever, there are other potential ways to guarantee that languages will be EP. It is possible that the principles of grammatical theory permit lunge,ages that are not EP in the worst c,'tse, just as ~,'uumatical theory allows sen- tences that are deeply center-embedded (Miller and Chom- sky, 1963}. Difficuh languages or sentences still wouhl not turn up in general use, precisely because they wot, ht be dif- ficult to process. ~ The factors making languages EP would not be part of grammatical theory because they would represent extragrammatical factors, i.e. the resource lim- itations of the language-processing mechanisms. In the same way, the limitations of language-acquisition mech- anisms might make hard-to-parse lunge, ages maccesstble to the langamge le,'u'ner in spite of satisfying ~ammatical constraints. However, these "easy explanations" are not tenable without a detailed account of processing mecha- nisms; correct oredictions are necessary about which con- structions will be easy to parse. ACKNOWLEDGEMENTS This report describes research done at the Artificial Intelligence Laboratory of the Ma.ssachusetts Institute of ~|a the (;B-fr~unework of Chom.-ky (1981). for in~tance, the ,~yn- tactic expre~..,ion of unnrdered 0-grids at tire X level i'~ constrained by tile principlv.~ of C.'~e th~ry, gndocentrieity is anotlmr .~ignifi- cant constraint. See aL~o Berwick's ( 1982} discu.-,,-,ion of constraints that could be pl;wed ml another gr;unmatie',d form,'dism -- lexic,'d- fimetional grammar - to avoid a smfil.'u" intr,'u'tability result. nit is often anordotally remarked that lain|rouges that allow relatively fre~ word order '.end to m',tke heavy u.-.e of infh~'tions. A rich iattec- timln.l system can .-upply parsing constraints that make up for the hack of ordering e.,strai,*s: thu~ tile situation we do not find is the computationa/ly dill|cult cnse ~ff weak cmmcraint. 80 Technology. 
Support for the Laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505. During a portion of this research the author's graduate studies were supported by the Fannie and John Hertz Foundation. Useful guidance and commentary during this research were provided by Bob Berwick, Michael Sipser, and Joyce Friedman.

REFERENCES

Barton, E. (1984a). "Toward a Principle-Based Parser," A.I. Memo No. 788, M.I.T. Artificial Intelligence Laboratory, Cambridge, Mass.
Barton, E. (1984b). "On the Complexity of ID/LP Parsing," A.I. Memo No. 812, M.I.T. Artificial Intelligence Laboratory, Cambridge, Mass.
Berwick, R. (1982). "Computational Complexity and Lexical-Functional Grammar," American Journal of Computational Linguistics 8.3-4:97-109.
Berwick, R., and A. Weinberg (1984). The Grammatical Basis of Linguistic Performance. Cambridge, Mass.: M.I.T. Press.
Chomsky, N. (1981). Lectures on Government and Binding. Dordrecht, Holland: Foris Publications.
Earley, J. (1970). "An Efficient Context-Free Parsing Algorithm," Comm. ACM 13.2:94-102.
Garey, M., and D. Johnson (1979). Computers and Intractability. San Francisco: W. H. Freeman and Co.
Gazdar, Gerald (1981). "Unbounded Dependencies and Coordinate Structure," Linguistic Inquiry 12.2:155-184.
Miller, G., and N. Chomsky (1963). "Finitary Models of Language Users," in R. D. Luce, R. R. Bush, and E. Galanter, eds., Handbook of Mathematical Psychology, vol. II, 419-492. New York: John Wiley and Sons, Inc.
Ristad, E. (1985). "GPSG-Recognition is NP-Hard," A.I. Memo No. 837, M.I.T. Artificial Intelligence Laboratory, Cambridge, Mass., forthcoming.
Shieber, S. (1983). "Direct Parsing of ID/LP Grammars." Technical Report 291R, SRI International, Menlo Park, California. Also appears in Linguistics and Philosophy 7:2.
1985
10
SOME COMPUTATIONAL PROPERTIES OF TREE ADJOINING GRAMMARS*

K. Vijay-Shankar and Aravind K. Joshi
Department of Computer and Information Science
Room 288 Moore School/D2
University of Pennsylvania
Philadelphia, PA 19104

ABSTRACT

Tree Adjoining Grammar (TAG) is a formalism for natural language grammars. Some of the basic notions of TAG's were introduced in [Joshi, Levy, and Takahashi, 1975] and by [Joshi, 1983]. A detailed investigation of the linguistic relevance of TAG's has been carried out in [Kroch and Joshi, 1985]. In this paper, we will describe some new results for TAG's, especially in the following areas: (1) parsing complexity of TAG's, (2) some closure results for TAG's, and (3) the relationship to Head grammars.

1. INTRODUCTION

Investigation of constrained grammatical systems from the point of view of their linguistic adequacy and their computational tractability has been a major concern of computational linguists for the last several years. Generalized Phrase Structure grammars (GPSG), Lexical Functional grammars (LFG), Phrase Linking grammars (PLG), and Tree Adjoining grammars (TAG) are some key examples of grammatical systems that have been and still continue to be investigated along these lines. Some of the basic notions of TAG's were introduced in [Joshi, Levy, and Takahashi, 1975] and [Joshi, 1983]. Some preliminary investigations of the linguistic relevance and some computational properties were also carried out in [Joshi, 1983]. More recently, a detailed investigation of the linguistic relevance of TAG's was carried out by [Kroch and Joshi, 1985]. In this paper, we will describe some new results for TAG's, especially in the following areas: (1) parsing complexity of TAG's, (2) some closure results for TAG's, and (3) the relationship to Head grammars. These topics will be covered in Sections 3, 4, and 5 respectively. In section 2, we will give an introduction to TAG's. In section 6, we will state some properties not discussed here. A detailed exposition of these results is given in [Vijay-Shankar and Joshi, 1985].

*This work was partially supported by NSF grants MCS-8219116-CER and MCS-82-07294. We want to thank Carl Pollard, Kelly Roach, David Searls, and David Weir. We have benefited enormously by valuable discussions with them.

2. TREE ADJOINING GRAMMARS (TAG's)

We now introduce tree adjoining grammars (TAG's). TAG's are more powerful than CFG's, both weakly and strongly.(1) TAG's were first introduced in [Joshi, Levy, and Takahashi, 1975] and [Joshi, 1983]. We include their description in this section to make the paper self-contained.

We can define a tree adjoining grammar as follows. A tree adjoining grammar G is a pair (I, A) where I is a set of initial trees and A is a set of auxiliary trees. A tree alpha is an initial tree if it is of the form

    alpha =     S
               / \
              /   \
             (frontier: terminal symbols only)

That is, the root node of alpha is labelled S and the frontier nodes are all terminal symbols. The internal nodes are all non-terminals. A tree beta is an auxiliary tree if it is of the form

    beta =      X
               / \
              /   \
             w1 X w2    (w1, w2 strings of terminals, not both empty)

That is, the root node of beta is labelled with a non-terminal X and the frontier nodes are all labelled with terminal symbols except one, which is labelled X. The node labelled X on the frontier will be called the foot node of beta. The frontiers of initial trees belong to Sigma*, whereas the frontiers of the auxiliary trees belong to Sigma* N Sigma+ U Sigma+ N Sigma*, where N is the set of non-terminals.

We will now define a composition operation called adjoining (or adjunction), which composes an auxiliary tree beta with a tree gamma. Let gamma be a tree with a node n labelled X and let beta be an auxiliary tree with the root labelled with the same symbol X. (Note that beta must have, by definition, a node (and only one) labelled X on the frontier.)

(1) Grammars G1 and G2 are weakly equivalent if the string language of G1, L(G1), is the same as the string language of G2, L(G2). G1 and G2 are strongly equivalent if they are weakly equivalent and, for each w in L(G1) = L(G2), both G1 and G2 assign the same structural description to w. A grammar G is weakly adequate for a (string) language L if L(G) = L. G is strongly adequate for L if L(G) = L and for each w in L, G assigns an "appropriate" structural description to w. The notion of strong adequacy is undoubtedly not precise because it depends on the notion of appropriate structural descriptions.
Let 3' be a tree with a node n labelled X and let ~ be an auxiliary tree with the root labelled with the same symbol X. (Note that mnst have, by definition, a node (and only one) labelled X on the frontier.) IGr~nm~u Ol tad G2 mm w*aJtly equivuJ*a* if the forint ItaCU*ll* of GI, I~Gi} m tim J~in¢ lua¢un4pD ot G~ ~G2b GI tad G:I *.,,* ,troo¢ly *quivuJeot they m mmkl7 eq~,ivuJeIt tad for etch w UI E,(GI) ~e L(G2), both Gi tad G2 the strne itI~l~urld delleriptioll to v. A ~ m r G is ~ly uleqoa~ for t IPtriD|l llMl~ql~ ~* if UGI am L G ~1 Itt'OO¢~ I~deql]otdl for b if L(G) m h tad for elg'b w is I~ G *~iglm am °*ppmpdm e ,ttuctural description to m. The 8oti~a 0( ItrOu¢ *dequtcT ~ undoobtodlY not pmciN becsmn it deport ,4* ol the notion 0~ zpp~pfiato *tntttu~ de~.*riptioml Adjoining can now be defined as follows. If # is adjoined to at the node n then the resulting tree "Tt' is as shown in Fig. 2.1 below. 7 = ~: $ X /\ /\ / \ / \ node / X \ / \ n I I \ \ ---X--- t 3" = S /\ 3' / \~'~vithout IX\ t --/ \-- / \ --x-- /\ / \+-- FiKure 2.1 The tree t dominnted by X in 3' is excised, ~ is inserted at the node n in "7 and the tree t is attached to the foot node (lab*lled X) of ~, i.e., ~ is inserted or adjoined to the node n in 3' pushing t downwards, Note that ~ljoinmg is not a suJmtitutioa operation. We will now define T(G): The set of alJ trees derived in G starting from initial trees in I. This set will be called the tree set of G. L(G): The set of all terminal strinp which uppe'mr in the frontier of the trees in TIG). This set will be called the string language (~r langtiage) of G. If L is the string language of s TAG G then we say that L is a Tree-Adjoinin~ I.angllage (TAL). The relationship between TAG's , context-free grammmm, and the corresponding string languages can be summarised as follows ([Joehi, Levy, and Takahashi, 1975], [Joshi, 19831). Theorem 2.1: For every context-free grammar, G', there is so equivalent TAG, G, both weakly and strongly. Theorem 2.2: For every TAG, G, we have the following sitoatious: a. LeG) is context-free 3nd there is a context-free grammar G' that is strongly (cud therefore weakly) equivalent to G. b. C. L(G) is context-free and there is 4o coutext~free gramma~ G' that is equivalent to G. Of course, there must be n context-free grmmmar that is weakly equivalent to G. L(G) is strictly context-sensitive. Obviously in this cue, there is no context-freo grammar that is weakly equivalent to G. Part8 Ca) ~d (e) of Theorem 2.2 appear in ([Jushi, Levy, and Tskahacbi, 19T5]). Pact (b) is implicit im that paper, but it is impor*ut to state it explicitly as we have done here because of it8 linguistic significance. ~mmple 2.1 illustrates part Ca). We will now illustrate p,1~ (b) and (e). Example 2.2: Let G J (I,A) where ! : A • ~t = ~t : 5 I e $ T I\ I\ n T t S I\ I\ lb Ib S T Let us look st some dertvttlons tn G. "TO : ~ : Se I e 3'2 = S a/T\ /I\ / n S\~= ' I\ \ I I b \ ¢ T __~ . . . . I~ Ib S I e ~t $ /\ u T I\ $b i U ~t 71 == 3'0 with ~I 3'= =* 3'1 with ~ adjoined at S am indicated in "f0. adjoined at T as indicated in ~.. Clearly. L(G), the string language of G is L-- {,.eb. / Q>o } which is a context-free language. Thus, there must exist a context- tree grammar, G', which is at least we~tkly equivalent to G. [t cam be shown however that there is no context.flee grammar G' which is strongly equivalent to G, i.e., T(G) I- T(G'). This follows from the fat that the set T(G) (the tree ~et of G) is non-r~o,~nizable. 
Example 2.3: Let G = (I, A) where

    I:  alpha1 =  S      A:  beta1 =   S         beta2 =   T
                  |                   / \                 / \
                  e                  a   T               a   S
                                        /|\                 /|\
                                       b S c               b T c

(the interior S node in beta1 and the interior T node in beta2 are the foot nodes). The precise definition of L(G) is as follows:

    L(G) = L1 = { w e c^n | n >= 0, w is a string of a's and b's such that
                  (1) the number of a's = the number of b's = n, and
                  (2) for any initial substring of w, the number of a's >= the number of b's }

L1 is a strictly context-sensitive language (i.e., a context-sensitive language that is not context-free). This can be shown as follows. Intersecting L1 with the regular language a* b* e c* results in the language

    L2 = { a^n b^n e c^n | n >= 0 } = L1 intersected with a* b* e c*

which is a well-known strictly context-sensitive language. The result of intersecting a context-free language with a regular language is always a context-free language; hence, L1 is not a context-free language. It is thus a strictly context-sensitive language. Example 2.3 thus illustrates part (c) of Theorem 2.2.

TAG's have more power than CFG's. However, the extra power is quite limited. The language L1 has equal numbers of a's, b's and c's; however, the a's and b's are mixed in a certain way. The language L2 is similar to L1, except that all a's come before all b's. TAG's as defined so far are not powerful enough to generate L2. This can be seen as follows. Clearly, for any TAG for L2, each initial tree must contain equal numbers of a's, b's and c's (including zero), and each auxiliary tree must also contain equal numbers of a's, b's and c's. Further, in each case the a's must precede the b's. Then it is easy to see from the grammar of Example 2.3 that it will not be possible to avoid getting the a's and b's mixed. However, L2 can be generated by a TAG with local constraints (see Section 2.1). The so-called copy language

    L = { w e w | w in {a,b}* }

also cannot be generated by a TAG without local constraints; however, it can be, again, with local constraints. It is thus clear that TAG's can generate more than context-free languages. It can be shown that TAG's cannot generate all context-sensitive languages [Joshi, 1984]. Although TAG's are more powerful than CFG's, this extra power is highly constrained and apparently it is just the right kind for characterizing certain structural descriptions. TAG's share almost all the formal properties of CFG's (more precisely, of the corresponding classes of languages), as we shall see in Section 4 of this paper and [Vijay-Shankar and Joshi, 1985]. In addition, the string languages of TAG's can also be parsed in polynomial time, in particular in O(n^6). The parsing algorithm is described in detail in Section 3.

2.1. TAG's with Local Constraints on Adjoining

The adjoining operation as defined above is "context-free". An auxiliary tree with root X is adjoinable to a tree t at a node, say n, if the label of that node is X. Adjoining does not depend on the context (tree context) around the node n. In this sense, adjoining is context-free. In [Joshi, 1983], local constraints on adjoining similar to those investigated by [Joshi and Levy, 1977] were considered. These are a generalization of the context-sensitive constraints studied by [Peters and Ritchie, 1969]. It was soon recognized, however, that the full power of these constraints was never fully utilized, both in the linguistic context as well as in the "formal languages" of TAG's. The so-called proper analysis contexts and domination contexts (as defined in [Joshi and Levy, 1977]) as used in [Joshi, 1983] always turned out to be such that the context elements were in a specific elementary tree, i.e., they were further localized by being in the same elementary tree. Based on this observation and a suggestion in [Joshi, Levy and Takahashi, 1975], we will describe a new way of introducing local constraints. This approach not only captures the insight stated above, but it is truly in the spirit of TAG's. The earlier approach was not so, although it was certainly adequate for the investigation in [Joshi, 1983]. A precise characterization of that approach still remains an open problem.

Let G = (I, A) be a TAG with local constraints: for each elementary tree t in I union A, and for each node n in t, we specify the set of auxiliary trees that can be adjoined at the node n. Note that if there is no constraint then all auxiliary trees are adjoinable at n (of course, only those whose root has the same label as the label of the node n). Thus, in general, the specified set is a subset of the set of all the auxiliary trees adjoinable at n. We will adopt the following conventions.

1. Since, by definition, no auxiliary trees are adjoinable to a node labelled by a terminal symbol, no constraint has to be stated for a node labelled by a terminal.
2. If there is no constraint, i.e., all auxiliary trees (with the appropriate root label) are adjoinable at a node, say n, then we will not state this explicitly.
3. If no auxiliary trees are adjoinable at a node n, then we will write the constraint as (phi), where phi denotes the null set.

We will also allow for the possibility that for a node at least one adjoining is obligatory, of course, from the set of all possible auxiliary trees adjoinable at that node. Hence, a TAG with local constraints is defined as follows. G = (I, A) is a TAG with local constraints if for each node n in each tree t, we specify one (and only one) of the following constraints:

1. Selective Adjoining (SA): Only a specified subset of the set of all auxiliary trees are adjoinable at n. SA is written as (C), where C is a subset of the set of all auxiliary trees adjoinable at n. If C equals the set of all auxiliary trees adjoinable at n, then we do not explicitly state this at the node n.
2. Null Adjoining (NA): No auxiliary tree is adjoinable at the node n. NA is written as (phi).
3. Obligatory Adjoining (OA): At least one (out of all the auxiliary trees adjoinable at n) must be adjoined at n. OA is written as (OA), or as O(C) where C is a subset of the set of all auxiliary trees adjoinable at n.

Example 2.4: Let G = (I, A) be a TAG with local constraints, where

    I:  alpha1 =        S (phi)
                       /  \
              (beta1) S    S (beta2)
                      |    |
                      a    b
The so-called proper analysis contexts and domination contexts (as defined in [Joshi and Levy, 1977]), as used in [Joshi, 1983], always turned out to be such that the context elements were always in a specific elementary tree, i.e., they were further localized by being in the same elementary tree. Based on this observation and a suggestion in [Joshi, Levy and Takahashi, 1975], we will describe a new way of introducing local constraints. This approach not only captures the insight stated above, but is truly in the spirit of TAG's. The earlier approach was not, although it was certainly adequate for the investigation in [Joshi, 1983]. A precise characterization of that approach still remains an open problem.

Let G = (I, A) be a TAG with local constraints: for each elementary tree t ∈ I ∪ A, and for each node n in t, we specify the set C of auxiliary trees that can be adjoined at the node n. Note that if there is no constraint, then all auxiliary trees are adjoinable at n (of course, only those whose root has the same label as the label of the node n). Thus, in general, C is a subset of the set of all the auxiliary trees adjoinable at n. We will adopt the following conventions.

1. Since, by definition, no auxiliary trees are adjoinable at a node labelled by a terminal symbol, no constraint has to be stated for a node labelled by a terminal.

2. If there is no constraint, i.e., all auxiliary trees (with the appropriate root label) are adjoinable at a node, say n, then we will not state this explicitly.

3. If no auxiliary trees are adjoinable at a node n, then we will write the constraint as (∅), where ∅ denotes the null set.

We will also allow for the possibility that at least one adjoining is obligatory at a node, chosen, of course, from the set of all possible auxiliary trees adjoinable at that node. Hence, a TAG with local constraints is defined as follows. G = (I, A) is a TAG with local constraints if for each node n in each elementary tree t we specify one (and only one) of the following constraints.

1. Selective Adjoining (SA): Only a specified subset of the set of all auxiliary trees are adjoinable at n. SA is written as (C), where C is a subset of the set of all auxiliary trees adjoinable at n. If C equals the set of all auxiliary trees adjoinable at n, then we do not state this explicitly at the node n.

2. Null Adjoining (NA): No auxiliary tree is adjoinable at the node n. NA is written as (∅).

3. Obligatory Adjoining (OA): At least one auxiliary tree (out of all the auxiliary trees adjoinable at n) must be adjoined at n. OA is written as (OA), or as O(C), where C is a subset of the set of all auxiliary trees adjoinable at n.

Example 2.4: Let G = (I, A) be a TAG with local constraints, where

    I:  alpha1 = S(∅)( S(beta1)(a), S(beta2)(b) )
    A:  beta1 = S(beta1)( a, S*(∅) )      beta2 = S(beta2)( S*(∅), b )

(A constraint, when present, is written in parentheses immediately after the node label it applies to.) In alpha1 no auxiliary trees can be adjoined at the root node. Only beta1 is adjoinable at the left S node at depth 1, and only beta2 is adjoinable at the right S node at depth 1. In beta1 only beta1 is adjoinable at the root node, and no auxiliary trees are adjoinable at the foot node. Similarly for beta2.

We must now modify our definition of adjoining to take care of the local constraints. Given a tree γ with a node n labelled A, and given an auxiliary tree β with the root node labelled A, we define adjoining as follows: β is adjoinable to γ at the node n if β ∈ C, where C is the constraint associated with the node n in γ.
The result of adjoining β to γ at n will be as defined earlier, except that in the resulting tree γ' the constraint C associated with n is replaced by C', the constraint associated with the root node of β, while the root of the pushed-down subtree receives C'', the constraint associated with the foot node of β.

We also adopt the convention that any derived tree with a node which has an OA constraint associated with it will not be included in the tree set associated with a TAG G. The string language L of G is then defined as the set of all terminal strings on the frontiers of all trees derived in G (starting with initial trees) which have no OA constraints left in them.

Example 2.5: Let G = (I, A) be a TAG with local constraints, where

    I:  alpha = S(e)
    A:  beta = S(∅)( a, S( b, S*(∅), c ) )

There are no constraints in alpha. In beta, no auxiliary trees are adjoinable at the root node or at the foot node, and for the center S node there are no constraints. Starting with alpha and adjoining beta to alpha at the root node, we obtain a tree γ with frontier a b e c; its root and foot carry (∅), and only the center S node is unconstrained. Adjoining beta at the center S node (the only node at which adjunction can take place) we obtain γ' with frontier a a b b e c c. It is easy to see that G generates the string language

    L = { a^n b^n e c^n | n >= 0 }.

Other languages, such as L' = { a^(n^2) | n >= 1 } and L'' = { a^(2^n) | n >= 1 }, cannot be generated by TAG's at all. This is because the strings of a TAL grow linearly (for a detailed definition of this property, called the "constant growth" property, see [Joshi, 1983]).

For those familiar with [Joshi, 1983], it is worth pointing out that the SA constraint is only abbreviatory, i.e., it does not affect the power of TAG's. The NA and OA constraints, however, do affect the power of TAG's. This way of looking at local constraints has not only greatly simplified their statement, but has also allowed us to capture the insight that the "locality" of the constraint is statable in terms of the elementary trees themselves.
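The constraint machinery can be grafted onto the adjunction sketch given earlier. The encoding below is again our own illustration, not the paper's notation: a node's constraint is None (no constraint), a set of permitted auxiliary-tree names (selective adjoining, with the empty set as NA), or a pair ("OA", names) for obligatory adjoining.

    # Local constraints over the earlier Tree sketch (our own encoding).

    class CTree(Tree):
        def __init__(self, label, children=None, is_foot=False,
                     constraint=None):
            super().__init__(label, children, is_foot)
            self.constraint = constraint      # None | set | ("OA", set)

    def adjoinable(constraint, beta_name):
        if constraint is None:
            return True
        if isinstance(constraint, tuple):     # ("OA", names)
            return beta_name in constraint[1]
        return beta_name in constraint        # SA; the empty set is NA

    def adjoin_lc(node, beta_name, beta):
        # As before, except that the adjunction site takes over the
        # constraint of beta's root, and the pushed-down subtree's root
        # (beta's foot) keeps the constraint stated for the foot.
        assert adjoinable(node.constraint, beta_name)
        foot = find_foot(beta)
        foot.is_foot = False
        excised_children = node.children
        node.children, node.constraint = beta.children, beta.constraint
        foot.children = excised_children

    def has_oa(tree):
        # Trees with an OA constraint left anywhere are excluded from
        # the tree set, so their frontiers are not in L(G).
        if isinstance(tree.constraint, tuple):
            return True
        return any(has_oa(c) for c in tree.children)

    # Example 2.5: beta = S(0)( a, S( b, S*(0), c ) ), with NA as set().
    def make_beta():
        return CTree("S", [CTree("a"),
                           CTree("S", [CTree("b"),
                                       CTree("S", is_foot=True,
                                             constraint=set()),
                                       CTree("c")])],
                     constraint=set())

    gamma = CTree("S", [CTree("e")])      # alpha, unconstrained
    adjoin_lc(gamma, "beta", make_beta())
    print("".join(frontier(gamma)))       # abec

After the adjunction, only the center S node of the inserted beta is unconstrained, so the next adjunction is forced to the middle; repeating it yields aabbecc and, in general, a^n b^n e c^n.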
2.3. Simple Linguistic Examples

We now give a couple of linguistic examples. Readers may refer to [Kroch and Joshi, 1985] for details.

1. Starting with γ1 = α1, an initial tree for "the girl is a senior",

    alpha1 = S( NP(DET(the), N(girl)), VP(V(is), NP(DET(a), N(senior))) )

and then adjoining β1 (with the appropriate lexical insertions), an auxiliary tree rooted in NP for the relative clause "who met Bill",

    beta1 = NP( NP*, S(NP(who), VP(V(met), NP(Bill))) )

at the subject NP node indicated in α1, we obtain γ2, whose frontier is "The girl who met Bill is a senior".

2. Starting with the initial tree γ1 = α2 and adjoining β2 at the indicated node in α2, we obtain γ2, where

    alpha2 = S(O(beta2))( NP(PRO), VP(TO(to), VP(V(invite), NP(Mary))) )
    beta2  = S( NP(John), VP(V(persuaded), NP(Bill), S*) )

and γ2 yields "John persuaded Bill to invite Mary". Note that the initial tree α2 is not a matrix sentence. In order for it to become a matrix sentence, it must undergo an adjunction at its root node, for example by the auxiliary tree β2 as shown above. Thus, for α2 we specify the local constraint O(β2) for the root node, indicating that α2 is required to undergo an adjunction at the root node by an auxiliary tree β2. In a fuller grammar there will, of course, be some alternatives within the scope of O( ).

3. PARSING TREE-ADJOINING LANGUAGES

3.1. Definitions

We will give a few additional definitions. These are not necessary for defining derivations in a TAG as defined in Section 2; they are introduced to help explain the parsing algorithm and the proofs of some of the closure properties of TAL's.

DEFINITION 3.1: Let γ, γ' be two trees. We say γ ⊢ γ' if γ' is obtained from γ by adjoining an auxiliary tree. ⊢* is the reflexive, transitive closure of ⊢.

DEFINITION 3.2: γ' is called a derived tree if γ ⊢* γ' for some elementary tree γ. We then say γ' ∈ D(γ).

The frontier of any derived tree γ belongs either to Σ* N Σ*, if γ ∈ D(β) for some auxiliary tree β, or to Σ*, if γ ∈ D(α) for some initial tree α. Note that if γ ∈ D(α) for some initial tree α, then γ is also a sentential tree. If β is an auxiliary tree, γ ∈ D(β), and the frontier of γ is w1 X w2 (X a nonterminal, w1, w2 ∈ Σ*), then the leaf node carrying this nonterminal symbol X on the frontier is called the foot of γ. Sometimes we will speak loosely of "adjoining with a derived tree" γ ∈ D(β) for some auxiliary tree β. What we mean is that if we adjoin β at some node, then adjoin within β, and so on, we can derive the desired tree in D(β) using the same adjoining sequence, and use the resulting tree to "adjoin" at the original node.

3.2. The Parsing Algorithm

The algorithm we present here to parse Tree-Adjoining Languages (TAL's) is a modification of the CYK algorithm (described in detail in [Aho and Ullman, 1973]), which uses a dynamic programming technique to parse CFL's. For the sake of making our description simpler, we shall first present the algorithm for parsing without considering local constraints; we will later show how to handle local constraints. We shall assume that any node in the elementary trees of the grammar has at most two children. This assumption can be made without any loss of generality, because it can easily be shown that for any TAG G there is an equivalent TAG G1 such that any node in any elementary tree in G1 has at most two children. A similar assumption is made in the CYK algorithm. We use the terms ancestor and descendant throughout the paper as transitive and reflexive relations; for example, the foot node may be called an ancestor of the foot node.

The algorithm works as follows. Let a1 ... an be the input to be parsed. We use a four-dimensional array A; each element of the array contains a subset of the nodes of derived trees. We say a node X of a derived tree γ belongs to A[i,j,k,l] if X dominates a subtree of γ whose frontier is given either by a_{i+1} ... a_j Y a_{k+1} ... a_l (where the foot node of γ is labelled Y) or by a_{i+1} ... a_l (i.e., j = k; this corresponds to the case where γ is a sentential tree). The indices (i,j,k,l) refer to the positions between the input symbols and range over 0 through n. If i = 5, say, then it refers to the gap between a5 and a6. Initially, we fill A[i,i+1,i+1,i+1] with those nodes on the frontier of the elementary trees whose label is the same as the input a_{i+1}, for 0 <= i <= n-1. The foot nodes of auxiliary trees will belong to all A[i,i,j,j] such that i <= j. We are now in a position to fill in all the elements of the array A.
There are five cases to be considered.

Case 1: If a node X in a derived tree is an ancestor of the foot node, and node Y is its right sibling, such that X ∈ A[i,j,k,l] and Y ∈ A[l,m,m,n], then their parent, say Z, should belong to A[i,j,k,n] (see Figure 3.1a).

Case 2: If the right sibling Y is an ancestor of the foot node, such that it belongs to A[l,m,n,p], and its left sibling X belongs to A[i,j,j,l], then the parent Z of X and Y belongs to A[i,m,n,p] (see Figure 3.1b).

Case 3: If neither X nor its right sibling Y is an ancestor of the foot node (or there is no foot node), then if X ∈ A[i,j,j,l] and Y ∈ A[l,m,m,n], their parent Z belongs to A[i,j,j,n].

Case 4: If a node Z has only one child X, and X ∈ A[i,j,k,l], then obviously Z ∈ A[i,j,k,l].

Case 5: If a node X ∈ A[i,j,k,l], and the root Y of a derived tree γ having the same label as X belongs to A[m,i,l,n], then adjoining γ at X puts the resulting node in A[m,j,k,n] (see Figure 3.1c).

[Figure 3.1: (a) a parent Z whose left child dominates the foot; (b) a parent Z whose right child dominates the foot; (c) adjunction, with a derived auxiliary tree spanning A[m,i,l,n] wrapped around a node spanning A[i,j,k,l].]

Although we have stated that the elements of the array contain subsets of the nodes of derived trees, what really goes in them are the addresses of nodes in the elementary trees. Thus the size of any set is bounded by a constant, determined by the grammar. It is hoped that the presentation of the algorithm below will make clear why we do so.

3.3. The Algorithm

The complete algorithm is given below.

    Step 1   for i := 0 to n-1 do
    Step 2       put all nodes on the frontier of elementary trees whose
                 label is a_{i+1} in A[i,i+1,i+1,i+1]
    Step 3   for i := 0 to n do
    Step 4       for j := i to n do
    Step 5           put the foot nodes of all auxiliary trees in A[i,i,j,j]
    Step 6   for l := 0 to n do
    Step 7       for i := l downto 0 do
    Step 8           for j := i to l do
    Step 9               for k := l downto j do
    Step 10                  do Case 1
    Step 11                  do Case 2
    Step 12                  do Case 3
    Step 13                  do Case 5
    Step 14                  do Case 4
    Step 15  accept if the root of some initial tree is in A[0,j,j,n],
             0 <= j <= n

where

(a) Case 1 corresponds to the situation where the left sibling is an ancestor of the foot node. The parent is put in A[i,j,k,l] if the left sibling is in A[i,j,k,m] and the right sibling is in A[m,p,p,l], where k <= m < l and m <= p <= l. Therefore Case 1 is written as

    for m := k to l-1 do
        for p := m to l do
            if there is a left sibling in A[i,j,k,m] and its right
            sibling is in A[m,p,p,l], satisfying the appropriate
            restrictions, then put their parent in A[i,j,k,l]

(b) Case 2 corresponds to the case where the right sibling is an ancestor of the foot node. If the left sibling is in A[i,m,m,p] and the right sibling is in A[p,j,k,l], with i <= m <= p and p <= j, then we put their parent in A[i,j,k,l]. This may be written as

    for m := i to j-1 do
        for p := m to j do
            for all left siblings in A[i,m,m,p] and right siblings in
            A[p,j,k,l] satisfying the appropriate restrictions, put
            their parents in A[i,j,k,l]

(c) Case 3 corresponds to the case where neither child is an ancestor of the foot node.
If the left sibling is in A[i,j,j,m] and the right sibling is in A[m,p,p,l], then we can put the parent in A[i,j,j,l], provided that (i < j <= m or i <= j < m) and (m < p <= l or m <= p < l). This may be written as

    for m := j to l-1 do
        for p := m to l do
            for all left siblings in A[i,j,j,m] and right siblings in
            A[m,p,p,l] satisfying the appropriate restrictions, put
            their parent in A[i,j,j,l]

(e) Case 5 corresponds to adjoining. If X is a node in A[m,j,k,p] and Y, the root of an auxiliary tree with the same symbol as X, is in A[i,m,p,l], where (i <= m <= p < l or i < m <= p <= l) and (m < j <= k <= p or m <= j <= k < p), then X is put in A[i,j,k,l]. This may be written as

    for m := i to j do
        for p := k to l do
            if a node X is in A[m,j,k,p] and the root of an auxiliary
            tree with the same label is in A[i,m,p,l], then put X in
            A[i,j,k,l]

Case 4 corresponds to the case where a node Y has only one child X. If X ∈ A[i,j,k,l], then put Y in A[i,j,k,l]; repeat Case 4 as long as the node just added itself has no siblings.

3.4. Complexity of the Algorithm

Steps 10 through 14 (Cases 1-5) are completed in O(n^2) time, because the different cases have at most two nested for-loop statements, with the iterating variables taking values in the range 0 through n. They are repeated at most O(n^4) times, because of the four loop statements in steps 6 through 9. The initialization phase (steps 1 through 5) has a time complexity of O(n + n^2) = O(n^2). Step 15 is completed in O(n). Therefore, the time complexity of the parsing algorithm is O(n^6).

3.5. Correctness of the Algorithm

The main issue in proving the algorithm correct is to show that, while computing the contents of an element of the array A, we have already determined the contents of the other elements of the array needed to correctly complete this entry. We can show this inductively by considering each case individually; we give an informal argument below.

Case 1: We need to know the contents of A[i,j,k,m] and A[m,p,p,l], where m < l and m > i, when we are trying to compute the contents of A[i,j,k,l]. Since l is the variable iterated in the outermost loop (step 6), we can assume (by the induction hypothesis) that for all m < l and for all p, q, r, the contents of A[p,q,r,m] have already been computed. Hence, the contents of A[i,j,k,m] are known. Similarly, for all m > i and for all p, q, and r <= l, A[m,p,q,r] will have been computed. Thus A[m,p,p,l] will also have been computed.

Case 2: By similar reasoning, the contents of A[i,m,m,p] and A[p,j,k,l] are known, since p < l and p > i.

Case 3: When we are trying to compute the contents of some A[i,j,j,l], we need to know the nodes in A[i,j,j,p] and A[p,q,q,l]. Note that j > i or j < l. Hence the contents of A[i,j,j,p] and A[p,q,q,l] will already have been computed.

Case 5: The contents of A[i,m,p,l] and A[m,j,k,p] must be known in order to compute A[i,j,k,l], where (i <= m <= p < l or i < m <= p <= l) and (m <= j <= k < p or m < j <= k <= p). Since either m > i or p < l, the contents of A[m,j,k,p] will be known. Similarly, since either m < j or k < p, the contents of A[i,m,p,l] will have been computed.
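Before turning to local constraints, the array-filling scheme can be made concrete. The Python sketch below is our own illustrative rendering, not the paper's implementation: an array entry "X in A[i,j,k,l]" is written as an item (X, i, gap, l) with gap = (j, k), or gap = None when X does not dominate the foot (the paper's j = k case), over the Tree representation and find_foot helper introduced earlier. The five cases are those of the text; for brevity the item set is closed with a worklist rather than the loops of Steps 6-9, and partner items are found by scanning all items, which derives the same items but does not achieve the O(n^6) bound (indexing items on their endpoints would restore it).

    from collections import deque

    def recognize(initial_trees, auxiliary_trees, tokens):
        n = len(tokens)
        parent = {}
        def scan(node):
            for c in node.children:
                parent[id(c)] = node
                scan(c)
        for root in initial_trees + auxiliary_trees:
            scan(root)
        aux_roots = set(auxiliary_trees)

        items, agenda = set(), deque()
        def add(node, i, gap, l):
            item = (node, i, gap, l)
            if item not in items:
                items.add(item)
                agenda.append(item)

        def leaves(node):
            if not node.children:
                yield node
            for c in node.children:
                yield from leaves(c)

        for root in initial_trees + auxiliary_trees:      # Steps 1-2
            for x in leaves(root):
                if not x.is_foot:
                    for i in range(n):
                        if x.label == tokens[i]:
                            add(x, i, None, i + 1)
        for b in auxiliary_trees:                          # Steps 3-5
            for i in range(n + 1):
                for j in range(i, n + 1):
                    add(find_foot(b), i, (i, j), j)

        while agenda:
            x, i, gap, l = agenda.popleft()
            z = parent.get(id(x))
            if z and len(z.children) == 1:                 # Case 4
                add(z, i, gap, l)
            for (y, m, gap2, p) in list(items):
                # Cases 1-3: two adjacent siblings, at most one of which
                # dominates the foot, complete their parent.
                if z and len(z.children) == 2:
                    if x is z.children[0] and y is z.children[1] \
                       and m == l and not (gap and gap2):
                        add(z, i, gap or gap2, p)
                    if x is z.children[1] and y is z.children[0] \
                       and p == i and not (gap and gap2):
                        add(z, m, gap2 or gap, l)
                # Case 5: adjoin a completed auxiliary tree whose foot
                # gap exactly matches the span of the inner node.
                if y in aux_roots and gap2 == (i, l) and y.label == x.label:
                    add(x, m, gap, p)
                if x in aux_roots and gap == (m, p) and x.label == y.label:
                    add(y, i, gap2, l)

        return any((a, 0, None, n) in items for a in initial_trees)

    # With freshly built trees of Example 2.2, for instance,
    # recognize([alpha], [beta1, beta2], list("aaebb")) returns True.

Local constraints, discussed next, could be carried alongside the nodes in these items as the (node, constraint) pairs described below.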
3.6. Parsing with Local Constraints

So far we have assumed that the given grammar has no local constraints. If the grammar has local constraints, it is easy to modify the above algorithm to take care of them. Note that in Case 5, if an adjunction occurs at a node X, we add X again to the element of the array we are computing. This seems to be in contrast with our definition of how local constraints are associated with the nodes in a sentential tree: we should really have added the root of the auxiliary tree instead, since as far as the local constraints are concerned, that node decides the local constraints at this position in the derived tree. However, this scheme cannot be adopted in our algorithm for obvious reasons. We therefore let pairs of the form (X, C) belong to elements of the array, where X is a node as before and C represents the local constraint to be associated with this node. We then alter the algorithm as follows. If (X, C1) refers to a node at which we attempt to adjoin an auxiliary tree (whose root is denoted by (Y, C2)), then the adjunction is determined by C1. If the adjunction is allowed, we add (X, C2) to the corresponding element of the array. In Cases 1 through 4, we do not attempt to add a new element if any one of the children has an obligatory constraint.

Once it has been determined that the given string belongs to the language, we can find the parse in a way similar to the scheme adopted in the CYK algorithm. To make this process simpler and more efficient, we can use pointers from each newly added element to the elements which caused it to be put there. For example, consider Case 1 of the algorithm (step 10). If we add a node Z to A[i,j,k,l] because of the presence of its children X and Y in A[i,j,k,m] and A[m,p,p,l] respectively, then we add pointers from this node Z in A[i,j,k,l] to the nodes X and Y in A[i,j,k,m] and A[m,p,p,l]. Once this has been done, the parse can be found by traversing the tree formed by these pointers. A parser based on the techniques described above is currently being implemented and will be reported on at the time of presentation.

4. CLOSURE PROPERTIES OF TAG's

In this section we present some closure results for TAL's. We informally sketch the proofs of the closure properties; interested readers may refer to [Vijay-Shankar and Joshi, 1985] for the complete proofs.

4.1. Closure under Union

Let G1 and G2 be two TAG's generating L1 and L2 respectively. We can construct a TAG G such that L(G) = L1 ∪ L2. Let G1 = (I1, A1, N1, S) and G2 = (I2, A2, N2, S). Without loss of generality, we may assume that N1 ∩ N2 = ∅. Let G = (I1 ∪ I2, A1 ∪ A2, N1 ∪ N2, S). We claim that L(G) = L1 ∪ L2. Let x ∈ L1 ∪ L2; then x ∈ L1 or x ∈ L2. If x ∈ L1, then it must be possible to generate the string x in G, since I1 and A1 are in G; hence x ∈ L(G). Similarly, if x ∈ L2, we can show that x ∈ L(G). Hence L1 ∪ L2 ⊆ L(G). Conversely, if x ∈ L(G), then x is derived using either only I1, A1 or only I2, A2, since N1 ∩ N2 = ∅. Hence x ∈ L1 or x ∈ L2, and L(G) ⊆ L1 ∪ L2. Therefore L(G) = L1 ∪ L2.

4.2. Closure under Concatenation

Let G1 = (I1, A1, N1, S1) and G2 = (I2, A2, N2, S2) be two TAG's generating L1 and L2 respectively, such that N1 ∩ N2 = ∅. We can construct a TAG G = (I, A, N, S) such that L(G) = L1 · L2. We choose S such that S is not in N1 ∪ N2. We let N = N1 ∪ N2 ∪ {S} and A = A1 ∪ A2. For all t1 ∈ I1 and t2 ∈ I2, we add to I the tree t12 = S(t1, t2), i.e., a new root S whose two subtrees are t1 and t2 (Figure 4.2.1). Therefore I = { t12 | t1 ∈ I1, t2 ∈ I2 }, where the nodes in the subtrees t1 and t2 of the tree t12 carry the same constraints as in the original grammars G1 and G2. It is easy to show that L(G) = L1 · L2, once we note that there are no auxiliary trees in G rooted in the symbol S, and that N1 ∩ N2 = ∅.

4.3. Closure under Kleene Star

Let G1 = (I1, A1, N1, S1) be a TAG generating L1.
We can construct a TAG G such that L(G) = L1*. Let S be a symbol not in N1, and let N = N1 ∪ {S}. We let the set I of initial trees of G be {te}, where te consists of a root S with an empty frontier (Figure 4.3a). The set of auxiliary trees is defined as A = { t1A | t1 ∈ I1 } ∪ A1, where t1A (Figure 4.3b) has root S with two children: a foot node labelled S and the initial tree t1. The constraint on the root of each t1A is the null adjoining constraint, there are no constraints on the foot, and the constraints on the nodes of the subtree t1 of t1A are the same as those on the corresponding nodes of the initial tree t1 in G1.

To see why L(G) = L1*, consider x ∈ L(G). Obviously the derived tree whose frontier is x must be of the form shown in Figure 4.3c — a spine of S nodes with sentential trees hanging from it — where each ti' is a sentential tree in G1, ti' ∈ D(ti) for some initial tree ti in G1. Thus L(G) ⊆ L1*. On the other hand, if x ∈ L1*, then x = w1 ... wn, with wi ∈ L1 for 1 <= i <= n. Let each wi be the frontier of a sentential tree ti' of G1 such that ti' ∈ D(ti), ti ∈ I1. Obviously we can derive a tree T, starting with the initial tree te and using a sequence of adjoining operations with the auxiliary trees t(i,A) for 1 <= i <= n. From T we can then obtain a tree T' of the form given in Figure 4.3c, using only the auxiliary trees in A1. The frontier of T' is w1 ... wn; hence x ∈ L(G). Therefore L1* ⊆ L(G), and thus L(G) = L1*.

[Figure 4.3: (a) te, a root S with empty frontier; (b) t1A = S(NA)( S*, t1 ); (c) the general derived form, a spine of S nodes each carrying one sentential tree ti'.]
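These first three constructions manipulate only the elementary trees, so they are easy to state concretely. The following sketch (our own, over the illustrative Tree and CTree classes introduced earlier; a grammar is simply a pair of lists of elementary trees, and nonterminal disjointness is left to the caller) mirrors the three proofs:

    # Union, concatenation, and Kleene star over TAGs represented as
    # (initial_trees, auxiliary_trees) pairs -- an illustrative sketch.

    def tag_union(g1, g2):
        (i1, a1), (i2, a2) = g1, g2
        return (i1 + i2, a1 + a2)

    def tag_concatenation(g1, g2, new_root="S"):
        # One new initial tree t12 = S(t1, t2) per pair of initial
        # trees; the constraints inside t1 and t2 are untouched.
        (i1, a1), (i2, a2) = g1, g2
        return ([Tree(new_root, [t1, t2]) for t1 in i1 for t2 in i2],
                a1 + a2)

    def tag_star(g1, new_root="S"):
        # t_e: a root S with an empty frontier; one auxiliary tree
        # t1A = S(S*, t1) per initial tree, with NA at its root
        # (encoded as the empty constraint set) and a free foot.
        (i1, a1) = g1
        t_e = Tree(new_root, [Tree("")])     # "" stands for the empty string
        aux = [CTree(new_root,
                     [CTree(new_root, is_foot=True), t1],
                     constraint=set())
               for t1 in i1]
        return ([t_e], a1 + aux)

Intersection with a regular language, taken up next, is the one closure result that cannot be stated this locally: it multiplies out automaton-state quadruples over copies of the elementary trees.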
4.4. Closure under Intersection with Regular Languages

Let L_T be a TAL and L_R a regular language. Let G be a TAG generating L_T and let M = (Q, Σ, δ, q0, Q_F) be a finite-state automaton recognizing L_R. We can construct a grammar G1 and will show that L(G1) = L_T ∩ L_R.

Let α be an elementary tree in G. We shall associate with each node a quadruple (q1, q2, q3, q4), where q1, q2, q3, q4 ∈ Q. Let (q1, q2, q3, q4) be associated with a node X in α. Assume first that α is an auxiliary tree and that X is an ancestor of the foot node of α, and hence an ancestor of the foot node of any derived tree γ ∈ D(α). Let Y be the label of the root and foot nodes of α. Suppose the frontier of γ (γ ∈ D(α)) is w1 w2 Y w3 w4, and the frontier of the subtree of γ rooted at the node Z corresponding to X in α is w2 Y w3. The idea of associating (q1, q2, q3, q4) with X is that it must be the case that δ*(q1, w2) = q2 and δ*(q3, w3) = q4. When γ becomes part of a sentential tree whose frontier is u w1 w2 v w3 w4 w, it must then be the case that δ*(q2, v) = q3. Following this reasoning, we must make q2 = q3 = q if Z is not an ancestor of the foot node of γ, or if γ ∈ D(α) for some initial tree α. We assume here, as for the parsing algorithm presented earlier, that any node in any elementary tree has at most two children.

From G we obtain G1 as follows. For each initial tree α, associate with the root the quadruple (q0, q, q, qF), where q0 is the initial state of the automaton M and qF ∈ Q_F. For each auxiliary tree β of G, associate with the root the quadruple (q1, q2, q3, q4), where q, q1, q2, q3, q4 are variables which will later be given values from Q. Let X be a node in some elementary tree, with (q1, q2, q3, q4) associated with it. We have to consider the following cases.

Case 1: X has two children Y and Z, and the left child Y is an ancestor of the foot node of α. Then associate with Y the quadruple (p, q2, q3, q) and with Z the quadruple (q, r, r, s), and associate with X the constraint that only those trees whose root carries the quadruple (q1, p, s, q4), among those allowed in the original grammar, may be adjoined at this node. If q1 ≠ p or q4 ≠ s, then the constraint associated with X must be made obligatory. If in the original grammar X had an obligatory constraint, we retain the obligatory constraint regardless of the relationship between q1 and p, and q4 and s. If the constraint associated with X is a null adjoining constraint, we associate (q1, q2, q3, q) and (q, r, r, q4) with Y and Z respectively, and associate the null adjoining constraint with X. If the label of Z is a terminal a ∈ Σ, then we choose s and q such that δ(q, a) = s; in the null adjoining constraint case, q is chosen such that δ(q, a) = q4.

Case 2: This corresponds to the case where X has two children Y and Z, with (q1, q2, q3, q4) associated with X, and the right child Z is an ancestor of the foot node of the tree α. Then we associate (p, q, q, r) with Y and (r, q2, q3, s) with Z. The constraint associated with X is that only those trees among those allowed in the original grammar may be adjoined whose root carries the quadruple (q1, p, s, q4). If q1 ≠ p or q4 ≠ s, we make the constraint obligatory. If the original grammar had an obligatory constraint, we retain the obligatory constraint. A null adjoining constraint in the original grammar forces us to use the null adjoining constraint and not to consider the cases where it is not true that q1 = p and q4 = s. If the label of Y is a terminal a, then we choose r such that δ*(p, a) = r; if the constraint at X is a null adjoining constraint, then δ*(q1, a) = r.

Case 3: This corresponds to the case where neither the left child Y nor the right child Z of the node X is an ancestor of the foot node of α, or where α is an initial tree. Then q2 = q3 = q. We associate with Y and Z the quadruples (p, r, r, q') and (q', s, s, t) respectively. The constraints are assigned as before; in this case they are dictated by the quadruple (q1, p, t, q4). If it is not the case that q1 = p and q4 = t, then it becomes an OA constraint. The OA and NA constraints at X are treated as in the previous cases, and so is the case where either Y or Z is labelled by a terminal symbol.

Case 4: If (q1, q2, q3, q4) is associated with a node X which has only one child Y, we associate with Y the quadruple (p, q2, q3, s) and the constraint that the root of any tree adjoined at X (among the trees allowed in the original grammar) must have the quadruple (q1, p, s, q4) associated with it. The cases where the original grammar had a null or obligatory constraint associated with this node, or where Y is labelled with a terminal symbol, are treated as in the previous cases.

Once this has been done, let q1, ..., qm be the independent variables for an elementary tree; we produce as many copies of the tree as are needed for q1, ..., qm to take all possible values from Q. The only difference among the various copies so produced will be the constraints associated with the nodes. We repeat this process for all the elementary trees in G. Once this has been done, and each tree has been given a unique name, we can write the constraints in terms of these names.

We will now show why L(G1) = L_T ∩ L_R. Let w ∈ L(G1).
Then there is a sequence of adjoining operations, starting with an initial tree α, that derives w. Obviously w ∈ L_T as well, since corresponding to each tree used in deriving w there is a corresponding tree in G which differs only in the constraints associated with its nodes. Note, moreover, that the constraints associated with the nodes of trees in G1 are just restrictions of the corresponding ones in G, or an obligatory constraint where there was none in G. Now, if we assume (by the induction hypothesis) that after n adjoining operations we can derive γ' ∈ D(α'), and that there is a corresponding tree γ ∈ D(α) in G with the same tree structure as γ' but differing only in the constraints associated with the corresponding nodes, then if we adjoin at some node in γ' to obtain γ1', we can adjoin in γ to obtain the corresponding γ1. Therefore, if w can be derived in G1, it can certainly be derived in G.

If we can also show that L(G1) ⊆ L_R, then we can conclude that L(G1) ⊆ L_T ∩ L_R. We can use induction to prove this. The induction hypothesis is that if all derived trees obtained after k <= n adjoining operations have the property P, then so will the trees derived after n+1 adjoinings, where P is defined as follows.

Property P: If a node X in a derived tree γ has the foot node of the tree β to which X belongs (labelled Y) as a descendant, so that w1 Y w2 is the frontier of the subtree of γ rooted at X, then if (q1, q2, q3, q4) has been associated with X, we have δ*(q1, w1) = q2 and δ*(q3, w2) = q4; and if w is the frontier of the subtree under the foot node of β in γ1, then δ*(q2, w) = q3. If X is not an ancestor of the foot node of β, then the frontier of the subtree below X is of the form w1 w2, and if X has (q1, q, q, q2) associated with it, then δ*(q1, w1) = q and δ*(q, w2) = q2.

Actually, what we mean by an adjoining operation here is not necessarily a single adjoining operation, but the minimum number needed so that no obligatory constraints remain associated with any nodes in the derived trees. Similarly, the base case need not consider only elementary trees, but the smallest trees (in terms of the number of adjoining operations), starting with elementary trees, which have no obligatory constraint associated with any of their nodes. The base case can be seen easily by considering the way the grammar was built (it can be shown formally by induction on the height of the tree). The inductive step is straightforward: the derived tree we are going to use for adjoining will have the property P, and so will the tree at which we adjoin — the former because of the way we designed the grammar and assigned constraints, and the latter by the induction hypothesis. Thus the new derived tree will have property P as well. Once we have proved this, all we have to do to show that L(G1) ⊆ L_R is to consider those derived trees which are sentential trees and observe that the roots of these trees obey property P.

Now, if a string x ∈ L_T ∩ L_R, we can show that x ∈ L(G1). To do so, we make use of the following claim. Let β be an auxiliary tree in G with root labelled Y, and let γ ∈ D(β). We claim that there is a β' in G1 with the same structure as β, such that there is a γ' ∈ D(β') with the same structure as γ and with no OA constraint left in γ'. Let X be a node in an auxiliary tree β1 which was used in deriving γ.
Then there is a node X' in γ' such that X' belongs to the auxiliary tree β1' (with the same structure as β1). There are several cases to consider.

Case 1: X is an ancestor of the foot node of β1, the frontier of the subtree of β1 rooted at X is w3 Y w4, and the frontier of the subtree of γ rooted at X is w3 w1 Y w2 w4. Let δ*(q1, w3) = q, δ*(q, w1) = q2, δ*(q3, w2) = r, and δ*(r, w4) = q4. Then X' will have (q1, q, r, q4) associated with it, and there will be no OA constraint in γ'.

Case 2: X is an ancestor of the foot node of β1 and the frontier of the subtree of β1 rooted at X is w3 Y w4, while the frontier of the subtree of γ rooted at X is w3 w1 w2 w4. Then we claim that X' in γ' will have associated with it the quadruple (q1, q, r, q4), where δ*(q1, w3) = q, δ*(q, w1) = p, δ*(p, w2) = r, and δ*(r, w4) = q4.

Case 3: Let the frontier of the subtree of β1 (and also of γ) rooted at X be w1 w2. Let δ*(q, w1) = p and δ*(p, w2) = r. Then X' will have associated with it the quadruple (q, p, p, r).

We shall prove our claim by induction on the number of adjoining operations used to derive γ. The base case (where γ = β) is obvious from the way the grammar G1 was built. We now assume that all trees γ derived from β using k or fewer adjoining operations have the property required in our claim. Let γ be a derived tree in D(β) after k adjunctions. By our inductive hypothesis we may assume the existence of the corresponding derived tree γ' ∈ D(β') derived in G1. Let X be a node in γ, as shown in Figure 4.4.1. Then the node X' in γ' corresponding to X will have associated with it the quadruple (q1', q2', q3', q4'). Note that we are assuming here that the left child Y' of X' is an ancestor of the foot node of β'; the quadruples (q1', q2', q3', p) and (p, p1, p1, q4') will be associated with Y' and Z' (by the induction hypothesis). Let γ1 be derived from γ by adjoining β1 at X, as in Figure 4.4.2. We have to show the existence of β1' in G1 such that the root of this auxiliary tree has associated with it the quadruple (q, q2', q3', r). The existence of this tree follows from the induction hypothesis (k = 0). We also have to show that there exists a γ1' with the same structure as γ, but one that allows β1' to be adjoined at the required node. This must be so, since from the way we obtained the trees in G1 there will exist a γ1'' such that X1' has the quadruple (q, q2', q3', r), the constraints at X1' are dictated by the quadruple (q, q2', q3', r), and the two children of X1' have the same quadruples as in γ'. We can now adjoin β1' in γ1'' to obtain γ1'. It can be shown that γ1' has the required property to establish our claim.

[Figures 4.4.1 and 4.4.2: the node X, with children Y and Z, in the derived tree γ; the substrings w1', w2', etc. read off by the automaton between the indicated states; and the tree γ1 obtained by adjoining β1 at X.]

Finally, any node below the foot of β1' in γ1' will satisfy our requirements, as such nodes are the same as the corresponding nodes in γ1''. Since β1' satisfies the requirement, it is simple to observe that the nodes in β1' will too, even after the adjunction of β1' in γ1''. However, because the quadruple associated with X1' is different, the quadruples of the nodes above X1' must reflect this change.
It is easy to check the existence of an auxiliary tree such that the nodes above X1' satisfy the requirements stated above. It can also be argued, on the basis of the design of the grammar G1, that there exist trees which allow this new auxiliary tree to be adjoined at the appropriate place. This allows us to conclude that there exists a derived tree in G1 for each derived tree belonging to D(β), as in our claim.

The next step is to extend our claim to take into account all derived trees (i.e., including the sentential trees). This can be done in a manner similar to our treatment of derived trees belonging to D(β) for some auxiliary tree β. Of course, we have to consider only the case where the finite-state automaton starts from the initial state q0 and reaches some final state qF on the input which is the frontier of some sentential tree in G. This then allows us to conclude that L_T ∩ L_R ⊆ L(G1). Hence L(G1) = L_T ∩ L_R.

5. HEAD GRAMMARS AND TAG's

In this section we attempt to show that Head Grammars (HG) are remarkably similar to Tree Adjoining Grammars. It appears that the basic intuition behind the two systems is more or less the same. Head Grammars were introduced in [Pollard, 1984], but we follow the notation used in [Roach, 1984]. It has been observed that TAG's and HG's share many common formal properties, such as almost identical closure results and similar pumping lemmas.

Consider the basic operation in Head Grammars — the head wrapping operation. A derivation from a non-terminal produces a pair (i, a1 ... ai ... an); a more convenient representation for this pair is a1 ... āi ... an, where the arrow (written here as a bar) denotes the head of the string, which in turn determines where the string is split when a wrapping operation takes place. For example, consider X → LL2(A, B), and let A ⇒* w h̄ x and B ⇒* u ḡ v. Then we say X ⇒* w h u ḡ v x.

We shall now define the functions of the HG formalism which we need here. If A derives in zero or more steps the headed string w h̄ x, and B derives u ḡ v, then:

1) if X → LL1(A, B) is a rule in the grammar, then X derives w h̄ u g v x;
2) if X → LL2(A, B) is a rule in the grammar, then X derives w h u ḡ v x;
3) if X → LC1(A, B) is a rule in the grammar, then X derives w h̄ x u g v;
4) if X → LC2(A, B) is a rule in the grammar, then X derives w h x u ḡ v.

Now consider how a derivation in a TAG proceeds. Let β be an auxiliary tree and let γ be a sentential tree, as in Figure 5.1. Adjoining β at the root of the subtree with frontier u g v gives us a new sentential tree in which the string w h x has "wrapped around" that subtree, i.e., around the string u g v. This suggests that something similar is played out by the foot in an auxiliary tree and the head in a Head Grammar, in how the adjoining operation and the head-wrapping operations act on strings. We could say that if X is the root of an auxiliary tree β and a1 ... ai X ai+1 ... an is the frontier of a derived tree γ ∈ D(β), then the derivation of γ corresponds to a derivation in a HG from the non-terminal X to the string a1 ... ai ↑ ai+1 ... an, and the use of γ in some sentential tree corresponds to how the strings a1 ... ai and ai+1 ... an are used in deriving a string in the HL.

[Figure 5.1: a sentential tree γ containing a subtree rooted in X with frontier u g v; an auxiliary tree β rooted in X with frontier w h X x; after adjunction the frontier reads ... w h u g v x ....]

Based on this observation, we attempt to show the close relationship between TAL's and HL's.
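The four derivation operators are easy to state over headed strings. In the small sketch below (our own encoding, purely for illustration), a headed string w h̄ x is the triple ("w", "h", "x"):

    # Pollard's wrapping (LL) and concatenation (LC) operators over
    # headed strings encoded as (before, head, after) triples.

    def LL1(a, b):   # wrap a around b, keeping a's head
        (w, h, x), (u, g, v) = a, b
        return (w, h, u + g + v + x)

    def LL2(a, b):   # wrap a around b, keeping b's head
        (w, h, x), (u, g, v) = a, b
        return (w + h + u, g, v + x)

    def LC1(a, b):   # concatenate a and b, keeping a's head
        (w, h, x), (u, g, v) = a, b
        return (w, h, x + u + g + v)

    def LC2(a, b):   # concatenate a and b, keeping b's head
        (w, h, x), (u, g, v) = a, b
        return (w + h + x + u, g, v)

    print(LL2(("w", "h", "x"), ("u", "g", "v")))   # ('whu', 'g', 'vx')

LL2 is the operator that matches adjunction most directly: the w h ... x material contributed by an auxiliary tree is wrapped around the string u g v of the subtree excised at the adjunction site.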
It is more convenient for us to think of the headed string (i, a1 ... an) as the string a1 ... an with the head pointing in between the symbols ai and ai+1, rather than at the symbol ai. The definitions of the derivation operators can be extended in a straightforward manner to take this into account. However, we can achieve the same effect by adjusting the definitions of the operators LL1, LC1, etc. Pollard suggests that cases such as LL1(λ, x̄) be left undefined; we shall assume instead that whenever one argument of an operator is the empty headed string λ, the operator simply returns its other argument, e.g. LL1(λ, w̄) = w̄ and LC1(w̄, λ) = w̄. We then say that if G is a Head Grammar, a string w belongs to L(G) if and only if S derives a headed string whose underlying string is w, regardless of the position of the head.

With this new definition we shall show, without giving the proof, that the class of TAL's is contained in the class of HL's, by systematically converting any TAG G to a HG G'. We shall assume, without loss of generality, that the constraints expressed at the nodes of the elementary trees of G are: 1) nothing can be adjoined at the node (NA); 2) any appropriate tree (the symbol at the node and the root of the auxiliary tree must match) can be adjoined (AA); or 3) adjoining at the node is obligatory (OA). It is easy to show that these constraints are enough, and that selective adjoining can be expressed in terms of these and additional non-terminals.

We now give a procedural description of obtaining an equivalent Head Grammar from a Tree-Adjoining Grammar. The procedure is a recursive procedure (Convert_to_HG) which takes two parameters: the first is the node to which it is being applied, and the second is the label appearing on the left-hand side of the HG productions for this node. If X is a nonterminal, then for each auxiliary tree β whose root has the label X, we obtain a sequence of productions such that the first one has X on its left-hand side; using these productions we can derive the headed string w1 ↑ w2 whenever a derived tree in D(β) has frontier w1 Y w2. If Y is a node with label X in some tree at which adjoining is allowed, we introduce the productions

    Y' -> LL2(X, I')
        {so that a derived tree with root label X may wrap around the
         string derived from the subtree below this node}
    I' -> LC_i(A1', ..., Aj')
        {assuming that there are j children of this node and that the
         i-th child is the ancestor of the foot node; by calling the
         procedure recursively for all j children of Y, with Ak' for k
         ranging from 1 through j, we can derive from I' the frontier
         of the subtree below Y}
    Y' -> I'
        {this handles the case where no adjunction takes place at Y}

(Here LC_i denotes the obvious j-ary version of the concatenation operator, keeping the head contributed by its i-th argument.) If G is a TAG, then we do the following:

    repeat for every initial tree:
        Convert_to_HG(root, S')
        {S' will be the start symbol of the new Head Grammar}
    repeat for each auxiliary tree:
        Convert_to_HG(root, root-label)

where Convert_to_HG(node, sym) is defined as follows.

    if node is an internal node then
        Case 1: the constraint at the node is AA.
            Add the productions
                sym -> LL2(node-symbol, I')
                I'  -> LC_i(A1', ..., Ai', ..., Aj')
                sym -> LC_i(A1', ..., Ai', ..., Aj')
            where I', A1', ..., Aj' are new non-terminal symbols and
            A1, ..., Aj correspond to the j children of the node; i = 1
            if the foot node is not a descendant of the node, else i is
            such that the i-th child of the node is the ancestor of the
            foot node; j is the number of children of the node.
            for k := 1 to j do
                Convert_to_HG(k-th child of node, Ak')
        Case 2: the constraint at the node is NA.
            Same as Case 1, except that we do not add the productions
            sym -> LL2(node-symbol, I') and I' -> LC_i(A1', ..., Aj').
        Case 3: the constraint at the node is OA.
            Same as Case 1, except that we do not add
            sym -> LC_i(A1', ..., Aj').
    else if the node carries a terminal symbol a then
        add the production sym -> ā
    else {it is a foot node}
        if the constraint at the foot node is AA then
            add the productions sym -> LL2(node-symbol, λ) | λ
        if the constraint is OA then
            add only the production sym -> LL2(node-symbol, λ)
        if the constraint is NA then
            add the production sym -> λ

We shall now give an example of converting a TAG G to a HG. G contains a single initial tree α and a single auxiliary tree β, as in Figure 5.2:

    alpha = S(e)        beta = S(∅)( a, S( b, S*(∅), c ) )

Obviously, L(G) = { a^n b^n e c^n | n >= 0 }. Applying the procedure Convert_to_HG to this grammar, we obtain the HG whose productions are given by

    S' -> LL2(S, A) | A        A -> ē
    S  -> LC2(B, C)            B -> ā
    C  -> LL2(S, D) | D        D -> LC2(E, F, G)
    E  -> b̄     F -> λ     G -> c̄

which can be rewritten as

    S' -> LL2(S, ē) | ē
    S  -> LC2(ā, C)
    C  -> LL2(S, b↑c) | b↑c

It can be verified that this grammar generates exactly L(G).

It is worth emphasizing that the main point of this exercise was to show the similarities between Head Grammars and Tree Adjoining Grammars. We have shown how a HG G' (using our extended definitions) can be obtained in a systematic fashion from a TAG G. It is our belief that the extension of the definitions may not be necessary. Even so, this conversion process should help us understand the similarities between the two formalisms.
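The conversion just illustrated can be spot-checked by replaying a derivation with the headed-string operators sketched in the previous section; a head that points between two symbols is encoded with an empty head component:

    e_bar  = ("", "e", "")       # e with the head on e
    a_bar  = ("", "a", "")
    b_up_c = ("b", "", "c")      # b^c: head between b and c

    S  = LC2(a_bar, b_up_c)      # ('ab', '', 'c')     a b ^ c
    C  = LL2(S, b_up_c)          # ('abb', '', 'cc')   a b b ^ c c
    S2 = LC2(a_bar, C)           # ('aabb', '', 'cc')
    print(LL2(S2, e_bar))        # ('aabb', 'e', 'cc'), i.e. a^2 b^2 e c^2

This is exactly the frontier obtained after two adjunctions of beta in the TAG G.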
6. OTHER MATHEMATICAL PROPERTIES OF TAG's

Additional formal properties of TAG's have been discussed in [Vijay-Shankar and Joshi, 1985]. Some of them are listed below:

1) a pumping lemma for TAG's;
2) TAL's are closed under substitution and homomorphisms;
3) TAL's are not closed under the following operations:
   a) intersection with TAL's,
   b) intersection with CFL's,
   c) complementation.

Some other properties considered in [Vijay-Shankar and Joshi, 1985] are:

1) closure under the following operations:
   a) inverse homomorphisms,
   b) gsm mappings;
2) semilinearity and Parikh-boundedness.

References

1. Aho, A.V., and Ullman, J.D., 1973. The Theory of Parsing, Translation, and Compiling, Volume 1: Parsing. Prentice-Hall, Englewood Cliffs, N.J., 1973.

2. Joshi, A.K., 1983. "How much context-sensitivity is necessary for characterizing structural descriptions — tree adjoining grammars," in Natural Language Processing — Theoretical, Computational, and Psychological Perspectives (ed. D. Dowty, L. Karttunen, A. Zwicky), Cambridge University Press, New York (originally presented in 1983), to appear in 1985.

3. Joshi, A.K., and Levy, L.S., 1977. "Constraints on Structural Descriptions: Local Transformations," SIAM Journal of Computing, June 1977.

4. Joshi, A.K., Levy, L.S., and Takahashi, M., 1975. "Tree adjunct grammars," Journal of Computer and System Sciences, March 1975.

5. Kroch, A., and Joshi, A.K., 1985. "Linguistic relevance of tree adjoining grammars," Technical Report MS-CIS-85-18, Dept. of Computer and Information Science, University of Pennsylvania, April 1985.

6. Pollard, C., 1984. Generalized Phrase Structure Grammars, Head Grammars, and Natural Language. Ph.D. dissertation, Stanford University, August 1984.

7. Roach, K., 1984. "Formal Properties of Head Grammars," unpublished manuscript, Stanford University; also presented at the Mathematics of Language workshop, University of Michigan, Ann Arbor, Oct. 1984.

8. Vijay-Shankar, K., and Joshi, A.K., 1985. "Formal Properties of Tree Adjoining Grammars," Technical Report, Dept. of Computer and Information Science, University of Pennsylvania, July 1985.
TAG's as a Grammatical Formalism for Generation

David D. McDonald and James D. Pustejovsky
Department of Computer and Information Science
University of Massachusetts at Amherst

1. Abstract

Tree Adjoining Grammars, or "TAG's" (Joshi, Levy & Takahashi 1975; Joshi 1983; Kroch & Joshi 1985), were developed as an alternative to the standard syntactic formalisms that are used in theoretical analyses of language. They are attractive because they may provide just the aspects of context-sensitive expressive power that actually appear in human languages while otherwise remaining context-free. This paper describes how we have applied the theory of Tree Adjoining Grammars to natural language generation. We have been attracted to TAG's because their central operation — the extension of an "initial" phrase structure tree through the inclusion, at very specifically constrained locations, of one or more "auxiliary" trees — corresponds directly to certain central operations of our own, performance-oriented theory. We begin by briefly describing TAG's as a formalism for phrase structure in a competence theory, and summarize the points in the theory of TAG's that are germane to our own theory. We then consider generally the position of a grammar within the generation process, introducing our use of TAG's through a contrast with how others have used systemic grammars. This takes us to the central results of our paper: using examples from our research with well-written texts from newspapers, we walk through our TAG-inspired treatments of raising and wh-movement, and show the coincidence of the TAG "adjunction" operation and our "attachment" process. In the final section we discuss extensions to the theory, motivated by the way we use the operation corresponding to TAG's adjunction in performance. This suggests that the competence theory of TAG's can be profitably projected to structures at the morphological level as well as the present syntactic level.

2. Tree Adjunction Grammars

The theoretical apparatus of a TAG consists of a primitively defined set of "elementary" phrase structure trees, a "linking" relation that can be used to define dependency relations between two nodes within an elementary tree, and an "adjunction" operation that combines trees under specifiable constraints. The elementary trees are divided into two sets: initial and auxiliary. Initial trees have only terminals at their leaves. Auxiliary trees are distinguished by having one non-terminal among their leaves; the category of this node must be the same as the category of the root. All elementary trees are "minimal" in the sense that they do not recurse on any non-terminal. A node N1 in an elementary tree may be linked (co-indexed) to a second node N2 in the same tree provided N1 c-commands N2. Linking is used to indicate grammatically defined dependencies between nodes, such as subcategorization relationships or filler-gap dependencies. Links are preserved (though "stretched out") when their tree is extended through adjunction; this is the mechanism TAG's use to represent unbounded dependencies. Sentence derivations start with an initial tree and continue via the adjunction of an arbitrary number of auxiliary trees. To adjoin an auxiliary tree A with root category X to an initial (or derived) tree T, we first select some node of category X within T to be the point at which the adjunction is to occur.
Then (1) the subtree of T dominated by that instance of X (call it X') is removed from T, (2) the auxiliary tree A is knit into T at the position where X' had been located, and (3) the subtree dominated by X' is knit into A to replace the second occurrence of the category X at A's frontier. The two trees have now been merged by "splicing" A into T, displacing the subtree of T at the point of the adjunction to the frontier of A. For example, we could take the initial tree

[tree diagram: "whoi does John like ei"]

(the subscript "i" indicates that the "who" and the trace "e" are linked) and adjoin to it the auxiliary tree

[tree diagram of the auxiliary tree]

to produce the derived tree:

[tree diagram of the derived tree]
The composition options delimited by the constraints on adjunction given with a TAG define a space of alternative text forms which can correspond directly in generation to alternative conceptual relations among information units, alternatives in rhetorical intent, and alternatives in t,,~me style. 3. Adapting TAG's to Generation The mapping from TAG's as a formaligm for competence theories of language to our formalism for generation is strikingly direct. As we described in Section 5 their adjunction operation corresponds to our attachment Wcgess; their constraints ou adjunction correspond to our attachment points; their surface structure trees correspoad to our surface structure trees, t We further hypothesize that two quite strong correspondence claims can be made, though considerably more experimentation and theorizing will have to be done with both formalisms before these claims can be c~nfirmed. I. The primitive information units in renlization specifications can be realized exclusively ms one or another elementary tree ms def'med by a suitable TAG, i.e. linguistic criteria can be used in derermmmg the proper modularity of the conceptual structure. 2 2. Convex~ly, for any textual relationship which our generator would derive by the attachment of multiple information units into a ~ingle package, there is a correslxmding rule of adjunct/on. Since we u~ attachment in the rp,~li,~tiou of nominal compounds like "o// tanker', this has the force of extending the domain of TAG analyses into morphology. (See section 7). 4. 1"he Place of Grammar in a Tneory of Generat/on To understand why we are looking at TAG's rather than some other formaJi~n, one must first understand the role of grammar within our ~ g model. The foflowing is a brief summary of the model; a more complete description can be found in McDonald & Pustejovsky ] Our model ot geaeratioe dora cot eml:~oy the ~ tre~ ot labe.t~ ~ that appear in most ttmm, etical ~ ~ Our mtrfa~ strtEtut~ iaeoqlofat~ tim m~umti~ ~ ot tzem, but it also iacl ,.,t.'- reifi~tiom ot coeMitt~at pomtio.- like "mbject" or "z~..'---" and is b~t~ ~ overall .., an "czemnab t- teq;~:am o( labeled pemtiom'. We dimm this furth~ in ...t~" ._ 5.1. 2 If this hylm~ m race.tel, it has very mmalemttat im~icatiom for tha "sire" of the iaforma~oa umm that th6 tat woukl not be realized u u~m that inc/uda recun/ve nodes. We will diEum ,t,i. and o 's..- implJ~tiom in • ta-~" psp~'. 95 We have always had two complementa~ goats in our research: on the one hand our generation program hu had to be of practical utility to the Imowedge based expert systems that use it as part of a natural language interface. This means that architecturally our generator has always dmgned to produce text from mecepmal spm:~catlons, "plans", devdo~ by another program and comequenfly has had to be mmtive to the limitations and v-ap~g approaches of the present state of the art in concepmal reprewntation. At the same time, we want the architecture of the vimud m~hlne that we abstract out of our program to be effective as a murce of psycholinguis~c hypothesm about the actual generation p~c~em that humans use; it should, for example, provide the basis for predictive ___~mts of human speech error behavior and apparent p~annin s limitatioB. 
To achieve this, we have restricted om~lves to a highly constrained set of representations and operations, •nd have adopced strong and mgge~ve stipulations on our dmigu such as high locality, information encaptmlation, online qua~-realtimo rtlotime performan~, and inclelibility. 3 restricts us u ptogrammm, but disaplines us as theomu. We me the pmce~ of generation u involving tluen temporally intmmingied activities: (1) determinin$ what goats the u~(~ is tO ac.hie~e, (2) plxnnin S what informaboll omtent and rhetorical force will best meet those goals given the context, and (3) realizing the tpectfied inlormation and rhetorical intent as a grammatical teat. Our l/agum~ camom,~ (henceforth LC), the Zetalisp ~ MUMBLE, handles the ~ of these activities, tskin]g a "TMal~tiO~ qx~ificatim ~ as input, and producing a mmm of morpUotosicaay s~,-~,;.,.a wor~ u output. As described in [McDonald 19@t], LC is a " ~ o n ~ e d " process: it ~ the m-~nue of the realization specification it is given, plus the syntactic surfa~ ttrueture of the text in progrem (which it extends incrementally as the qxa:£fication is mafized) to directly control its sctions, int~t,~hag them as though they were sequential computer programs. This technique imposes strmtg demands on the clem~ptive f ~ used for 3 "Indett, iaty" in a compmattoa requm= that m a~oe o4 • pro=m (matml dmmm. cee~-mml repmmmatiom. ~ ~m. ctg.) call be ~ tmdom olgg it has beta pegtonm& Maw/ mmbacMrackiag, mra~l pml~lm dem~ ha~ tim property; it is our tam for wdmt ~ [Lel~ I rdermd to m tim Ixepany o( tXmlg 4 A realbams ~dfka~oa m Jar, rurally be ,-~-~ m m w~ tmmy r~sndm~, ~ ~ t~t ~ -" tim "me~aSo le~:l" ~ ~ ~ • tat. 5 Whigh m m my that it pemmtly ~ meitt~8 mtha ~m tats. We expect m m~t mtb ~ ompm ~ , ~ , 8nd tl~ amd to ,,Wm~ tl~ mpt~mmm~ I~m e~ m tnmeatimud mmo~ ~ ~ to ma m~ dmSm fee mamimency pattern ht mrfam mmctme. repre~ntin 8 surface gructure. For example, node, and categot~ labeLs now designate actions the generator is to take (e.g. imposillg Ka3~g relatiolu or COtkqUalnln s embedded decisiom) and dictate the inclu~on of function words and morphological specializatiem. 4.1 Unlmmclll~ Syaemb: Gramman Of the established linguistic formalims, systemic grammar [Halliday 1976] has always been the most important to AI researchers on generation. Two of the mo~ important generation systems that have been deveJoped, PROTEUS ~Davey 1974] and NIGEL [Mann & Manhie~en 1983], am systemic grammar, and others, including ourselves, have been mongly influenced by it. The reasons for this entb,,tlatm are central to the special concerns of generation. Systemic grammars employ a functional vocabulary: they empha~/ze the uses to which language can be put--how languages achieve their speakers" goaLs-rather than its formal structure. Since the generation pmcem begins with goals, unlike the comprehension process which begins with structure, this orientation makes systemic grammars more immediately useful than, for example, tramffotmationai generatb,+ grammars or even procedurally oriented AI fogmali-qa~s |of language such as ATN's. The generation researcher's primary question is why use one construction rather than another--active instead of pa~ive, "the" instead of "a'. 
"toe principle device of a systemic grammar, the "choice system", mppom this question by highlighting how the constructions of the language are gmupud into met of altemativet Choice systems pro~tde an anchoring point for the rules of a theory of language u~ tin,-,, it it natural to associate the vaziotm romantic, disgou~, or rhetorical criteria that bear oa the mlection of a given ~ o n or feature with the choice system to which the consmmtion belongs, thus providing the basis of a decision-Wm:edure for rejecting from its Listed atternatives; the NIGEL sy~em does ~ y this in its "chooser" p~c~_~M_ures. In our formalism ~ make tt~e o~ ttu~ saint i~l'ormatWn a.¢ a sy~emic grammar captures, however we have choosen to bundle it quite differemly. The maderlyiog reat~ for this is that our concern for p~/cholinguistic modeling and efficient procemin~ takes ~ c e in our design decisions about how the facts of language and language me should be repretented in a generator. It is thus instructive to look at the different kinds of linguistic information that a network of choice systems carry. In our system we distribute the~,-- to separate computational devimm. o Delx~cl©ncies among smmtutal features: A generator must respect the constraints that dependencies impom and appgeciam ,.he impact they have on its reafization options: for example that tome mburdinate da-,~_ can am express ten~ or modality while main datum are required to; or that a j~inll ~ Ob~Ol~ foN pll~de ~ e n t while a lealcal ob~cts leaves it optiomd. 96 o Usage criteria. The deei_'Moa pr~___~_mms associated with each choice system are not a part of the oammsLr pl~ m, althOUgh thfy ~ natllg~y asaociated with it and organized by it. Also most s~lra~[lic glr'amm~ll include V~'y a ~ f~tuns ~teh as "geneS: reference" or "completed action', which ~elate the language's surface fennues, and thus are more controllert of why a construct is -_~_ rather than consmJcu themsetva. o Coordinated mucunal alternative=. A teutence may be either active or passive, either a question or a statement. By grouping these Mternatives into systems and using the:m systems exclusively when constructing a teat, one is guaranteed not to ~bine inconsistent ttruetural featun=. o Efficieat ordering of choice~ The network that a~mects choice systems p~ovides a aamral path betweeu decision, which if followed strictly guarentees that a choice will not be made unlem it is required, and that it will aot be made before any of the choices that it is it~If dependent upon, insuring that it can be made indelibly. o Typology of surface structure. Almost by accident (since its specification is distributed throughout all of the systems implicidy), the stammer determines the pattern of dominance and cmtstituency relatiomhips of the tat. While not a principle of the theory, the trees of dauscs, NPs, etc, in ty~.emi¢ grammars tend to be thallow and broad. We believe, but have no¢ yet established, that equivalence transformations can be defined that would take a systemic grammar as a tpecification to coummct the alternative devices that we use in our generator (or augment devices that derive from other murcm, e.g. a TAG) by 4_-eom_ Ixxing the in/ormation in the sy~emic grammar aloug the lines just U_~__*~_ and redistributing it. s. Fuam#e Anat~ One of the task domaiM we are c~,i,~.tly developing involves newsl~per reports of current events. 
We are "revere engh~eering" leading paragraphs from actual eewsptper articles to produce ~ but mmpta conceptual repretmttation, and then designing realization tpecificatiomt.-plam--that will lead our LC to recommtet the ori~nal text or mmivated variatiou on it. We have adolxed this domain because the ae~a mporung task, with its requirement of communicating what is new and tignificant in an event as well as the event itmif, appears to impom e=czptioually rich cooaerainm on the udection of what conceptual informatioo to report and on what syntaeth: omummctiom to u.~ in reporting it (see in Clipplnger & McDmald [1983|. We expect to f'md out how much mmplt=tity a realizatioa q~cification requires in order to motivate such carefully mmpmed texts; this will later guide ,,I, in dminl- s a tat I ~ with ~ t capsbilitim to mmtruct ugh wecificatiom on its o~m. Our examples are drawn from the text fragment below (Associated Press, 12/23/84); the realization specification we use to reproduce the tat foUow~. "LONDON. Two oil tamer& the Notweglm.owrmd T;-u-~ava ~ a Otm,len.regtsferecl ve~el, were reDortecl to tnwe Deen hit by missilm Friday In the Cuff. The Thot~wet web ahteze end under tow to Ba~r#in, officiaM in Osio said. Uoyds rsponed tl~ two crewmen were Inl~ on the UI3erlm ~" (ttweay" s.ever~me.C~-tar~er-war ~v~Oon.as.to-e~gce (m~evem #<urm~ern-tym_vary~vaU~ #<tgt.oy-nmgks Ymnmgvet> #<llt-Oy~ t.lbm~> > i # ~ . o f - m 2> tmr~y.m ) (pareetm~ # ~ Ttumtuvm Osto-ofltc~a> # ~ Lbemn Uo~> )) This realization specification represents the structured object which gives the toplevel plan for this utterance. Symbols preceded by colons indicate particular featur~ of the utterance. The two ex~ont in parenthems rare the content items of the specification and axe resmeted to appear in the utterance in that order. The first symbol in ,.~eh_ expression is a labet indicating the function of that item within the plan; embett,bM__ items appearing in angle brackets ere in/ormatiou units from the current-events knowledge base. Obviously this plan must be considerably refined before it could mrve as a proximal toarce for the text; that is why we point out that it is a "toplevel" plan. It is a specification for the general outline of the utterance which mum l~ flC~lhed out by rtgugsive planning OUce its realization has begun and the LC can mpply a linguistic context to further constrain the choices for the units and the rhetorical fcatunm. For present purposes, the key fact to al~re about this realization specification is how different it is in form from the surface structure. One cannot produce the -ited text simply by travemng and "reading oat" the dements of the specification as though one were de~g production. S ~ rearrangements are required, and these must be done under the coutrol of constraints which can only be stated in linguis~ vocabulary with terms like "subject" or "r~i~in$'. The fire unit in the qxcification, #<satin.civet.type..>, is a relation over two other units. It indicates that a commotmiity between the two has been noticed and deemed significam in the underlying representation of the event. The premat LC always realize, such relatious by merging the realizations of the two units. If nothing else occurred, this would give us the tat "Two od tanker, were ~ by mits/~r". 
As it happens, however, a pending rhetorical constraint from the realization specification, :attribution-as-to-source, will force the addition of yet another information unit, (Footnote 6) the reporting event by the news service that announced the alleged event (e.g. a press release from Iraq, Reuters, etc.). In this case the "content" of the reporting event is the two units which have already been planned for inclusion in the utterance as part of the "particulars" part of the specification. Let us look closely at how that reporting event unit is folded into surface structure.

When not itself the focus of attention, a reporting event is typically realized as "so-and-so said X"; that is, the content of the report is more important than the report itself; whatever significance the report or its source has as news will be indicated subtly through which of the alternative realizations below is selected for it. (Footnote 7)

Desired characteristic        Text
de-emphasize report           Two tankers were hit in the Gulf, shipping sources said.
source is given elsewhere     Two tankers were reported hit.
emphasize report              Iraq reported it hit two tankers.

Figure 2: Possibilities for expressing report(source, info) in newspaper prose

In our LC, these alternative "choices" are grouped together into a "realization class" as shown in Figure 3. Our realization classes have their historic origins in the choice systems of systemic grammar, though they are very different in almost every concrete detail. The most important difference of theoretical interest is that while systemic choice systems select among single alternative features (e.g. passive, gerundive), realization classes select among entire surface structure fragments at a time (which might be seen as precompiled combinations of bundles of features). That is, our approach to generation calls for us to organize our decision procedures so as to select the values for a number of linguistic features simultaneously in one choice where a systemic grammar would make the selection incrementally. (Footnote 8)

believe-verbs
  :parameters (agent proposition verb)
  :choices
    (( (AGENT-VERBs-that-PROP agent verb prop)
       focus(agent) )
       ; e.g. "Lloyds reports Iraq hit two tankers."
       ; encompasses variations with and without that, and
       ; also tenseless complements like "John believes him
       ; to be a fool."
     ( (raise-VERB-into-PROP (passivize verb) prop)
       focus(prop) mentioned-elsewhere(agent) )
       ; "Two tankers were reported to have been hit"
     ( (it-VERB-PROP verb prop)
       de-emphasize(agent) )
       ; e.g. "It is reported that 2 tankers were hit."
     ( (... agent verb prop) )
       ; "Two tankers were hit, Gulf sources said."
    )

Figure 3: The realization class believe-verbs, assigned to report(source,info)

Returning to our example, we are now faced with the need to incorporate a unit denoting the report of the Iraqi attacks into the utterance to act as a certification of the #<hit-by-missiles> events. This will be done using the realization class believe-verbs; the class is applicable to any information unit of the form report(source, info) (and others). It determines the realization of such units both when they appear in isolation and, as in the present case, when they are to augment an utterance corresponding to one of their arguments. From this realization class the choice raise-VERB-into-PROP will be selected since (1) the fact that two ships were hit is most significant, meaning that the focus will be on the information and not the source
(n.b. when the class executes, the source will be bound to its agent parameter and the information about the missile hits to the proposition parameter); and (2) there is no rhetorical motivation for us to occupy space in the first sentence with the sources of the report since they have already been planned to follow. These conditions are sensed by attached procedures associated with the characteristics that annotate the choice (i.e. focus and mentioned-elsewhere).

Footnote 6: We will not discuss here the means by which features in the specification influence realization. Realization specifications of the complexity of this example are still very new in our work, and we are unsure whether such decisions are better made during conceptual planning or within the LC itself. At the moment our criteria are heuristic.

Footnote 7: These examples are artificial; actual cases would differ since they do not contain any of the units we have examined. Perhaps the "hit" proposition is too important to rest on a pronoun.

Footnote 8: The technique of using features to control the actions of the generator is employed by the most well-known applications of systemic grammars to generation (i.e. the work of Davey [1974] and Mann & Matthiessen [1983]). In very recent work with systemic grammars at Edinburgh, Patten [1985] uses a planning system to select goal groups of features at the rightward, "output", side of a systemic network, and then works backwards through the network to determine what other features must be added for the set to be consistent. Control is thus in the grammar proper, with grammar rules related to content only indirectly. We are intrigued by this technique and look forward to its further development.

Since the PROP is already in place in the surface structure tree, the LC will be interpreting raise-VERB-into-PROP as a specification of how it may fold the auxiliary tree for reported into the tree for Two oil tankers were hit by missiles Friday in the Gulf. This corresponds to the TAG analysis in Figure 4 [Kroch & Joshi 1985].

Figure 4: Initial and auxiliary trees for the raised-subject construction (initial tree: S dominating the NP "two tankers" and an INFL/VP "be hit by missiles"; auxiliary tree: "be reported" with an INFL'' complement).

The initial tree for Two oil tankers were hit by missiles may be extended at its INFL'' node as indicated by the constraint given in parentheses by that node. Figure 5 shows the tree after the auxiliary tree A2, named by that constraint, has been adjoined. Notice that the original INFL'' of Figure 4 is now in the complement position of reported, giving us the sentence Two oil tankers were reported to have been hit by missiles.

Figure 5: The tree after adjoining the auxiliary tree ("be reported" now dominates the original INFL'' containing "be hit by missiles").

5.1 Path Notation

As readers of any of our earlier papers are aware, we do not employ a conventional tree notation in our LC. A generation model places its own kinds of demands on the representation of surface structure, and these lead to principled departures from the conventions adopted by theoretical linguists. Figure 6 shows the surface structure as our LC would actually represent it just before the moment when the adjunction is made.

Figure 6: Surface structure in path notation (a path running through [SENTENCE], [SUBJECT]--an NP (plural) with [quant] "two" and [premod] "oil" before the [head] "tanker"--and [PREDICATE], which carries an activated attachment point and the unit #<hit by missiles>).

We call this representation path notation because it defines the path that our LC traverses. Formally the structure is not a tree but a unidirectional linked list whose formation rules obey the axioms of a tree
any path "down" through a given node must eventuaUy pass back "up" mrough that same node). The path co~ of a s~ream of entiu~s representing phrasal nodes, constituent positions (indicated by square brack~s), insumces of information units (in boldface), inaanca of words, and activated attachment pomu (me labeled circle und~ me ;nedicate; me next u;etion). The various symbols in the figure (e.g. mmmce, pred/ram, etc.) have attached procedures that are activated as the point of speech morea a/on s the path, a process w© call q~hram muczure ctecution". Phra~ mueture ctecution is the means by wh/eh grammat/cel consta-aints are impmecl oa embedded decim'oas and function words and grammatical moq3he~es are produced (~or discuss/on tee McDoo~d [19S~l). Once one has begun to think of mrface m~-nue as a rrsvenni path, it is a short step to imt~nln~ ~ able to cut the path and ~ in" additional pm/;ion mquences. 9 This q)ficin 8 operation inherits a natural set of ceusu'amu on the ]rinds of dim)mons that it can perform, J~nee, by the inde~b/ticy mpuiation, exiseing pmit~on melUenCe~ can am be d~stroyed or reth _r,~_d_,~_J It is our imptem/oa that these ~ t s will turn out to be formally the same as throe of a TAG, but we have no( yet carried out the de~fled analysm to confirm thi~ 9 The poml~lit7 of ~tdnS tbo mrf-,~ m-.~re and mm, s~os ,al,-~-- ~ ~ ms ~ mn~ of t~m~m .lrcady in has ~ in our theory oL I~n~ml u t978, Wk..- We used it m ~ ntimS v~be whom rbetmk~ form mm the ~ 8s "b,~ uh.,~= I/ko ~ . 0~" p,',=.m. =.,~ rare =~m~e ua8 o( tim ~ m tbo ~ of u dmlnm attachmem ~ dates from ths ~ ot t~. 10 Conm~. ~ Llmsm uean movabou ~ in ~ & [1985]. lhvviom m of TAO theory ailawed "~t~t mmatint qmafimtiom ~at it fact ~ am~ mpimmd. Th8 prtmm c~mmims ~ we attrtcdve foma~ ,~ tt~ nat be muml IccaUy m a .~Je trm. 99 $.2 A-,.~,,~mt Polms The TAG formalism allowu a grammar writer to define "a~straints" by annotating the nodes of elememary with lists indicstin8 what auxiliary trees may be •djohmd to them (inducling "any" or "non~'). m In a ~ manner the "choices" in our realization dasms--which by our hypothem can be taken to always corrmpm~ to TAG elemeautry urees--iadude specifications of the a~ta~Asumt po~r~ at which new information unto can be iato the ms, face muctum peth they define. Rather than being c~nsl~aints on an othexwise free~ applying uperathxt, as in a TAG, attachment pohtts age actual objects inte~ in the path noutdon of the surface sm~mm. A list of the attachment points acbve at any momunt is mainta/ned by the attachment process and ~adted whenever an information unit needs to be .,~4_o Mint un/ts could be attached at any of mveral points, with the decis/on being made on the basis of what would be most consistunt with the des/red prow style (of. McOoemid Pustejowky [198~a]). Whea one of the poinu is sdecud it is ins•anti•ted, usually spficin 8 in new surface m'ucture in the protein, and the new unit -~d_-~_ at a dmignated ptmtion with/n the new structure. Figure 7 shows our Wemnt definition of the attachment point that ultima~dy leads to the addition of "w~s reported". 
Figure 7: The attachment-point used by "was reported" (its definition specifies its location, immediately after the predicate position; the new VP phrase to be built, with one slot for the unit being attached and another for the existing contents; and its effect on the matrix structure).

This attachment point goes with any choice (elementary tree) that includes a constituent position labeled predicate. It is placed in the position path immediately after (or "under") that position (see Figure 6), where it is available to any new unit that passes the indicated requirements. When this attachment is selected, it builds a new VP node that has the old VP as one of its constituents, then splices this new node into the path in its place as indicated in Figure 7. The unit being attached, e.g. the report of the attack on the two oil tankers, is made the verb of the new VP. Later, once the phrase structure execution process has walked into the new VP and reached that verb position, the unit's realization class (believe-verbs) will be consulted and a choice selected that is consistent with the grammatical constraints of being a verb (i.e. a conventional variant on the raise-VERB-into-PROP choice), giving us the path in Figure 8.

Figure 8: The path after attachment (... [SENTENCE] ... [SUBJECT] NP two oil tankers, [PREDICATE] [verb] report, [infinitive-complement] #<hit by missiles>).

From this discussion one can see that our treatment of attachment uses two structures, an attachment point and a choice, where a TAG would only use one structure, an auxiliary tree. This is a consequence of the fact that we are working with a performance model of generation that must show explicitly how conceptual information units are rendered into texts as part of a psycholinguistically plausible process, while a TAG is a formalism for competence theories that only need to specify the syntactic structures of the grammatical strings of a language. This is a significant difference, but not one that should stand in our way in comparing what the two theories have to offer each other. Consequently, in the rest of this paper we will omit the details of the path notation and attachment point definitions to facilitate the comparison of theoretical issues.

6. Generating questions using a TAG version of wh-movement

Earlier we illustrated the TAG concept of "linking" by showing how one would start with an initial tree consisting of the interrogative clause of a question plus the fronted wh-phrase and then build outward by successively adjoining the desired auxiliary phrases to the S node that intervenes between the wh-phrase and the clause. Wh-questions are thus built from the bottom up, as in fact is any sentence involving verbs taking sentential complements. This analysis has the desirable property of allowing us to state the dependencies between the wh-phrase and the gap as a local relation on a single elementary tree, eliminating the need to include any machinery for movement in the theory. All unbounded dependencies now derive from adjunctions (which, as far as the grammar is concerned, can be made without limit), rather than from the overt migration of a constituent across clauses. We also find this locality property desirable, and adopt an analogous treatment in our realization of questions and other kinds of wh-constructions and unbounded dependencies.

This bottom-up design has consequences for how the realization specifications for these constructions must be organized. In particular, the logician's usual representation of sentential complement verbs as a single lambda expression is not tenable in that role.
For example, we cannot have the source of, say, How many ships did Reuters report that Iraq had said it attacked? be a single logical expression given as the realization specification, since the LC would realize the lambda operator first, the report second, the say third, and so on. A local TAG analysis of wh-movement requires us to have the lambda and the attack relation in a single "layer" of the specification; otherwise we would be forced to violate one of the defining principles of our theory of generation, namely that the choices in a realization class may "see" only the immediate arguments of the unit being realized; they may not look "inside" those arguments to subsequent levels of embedded structure. This principle has served us well, and we are loath to give it up without a very compelling reason.

We decided instead to give up the representation of sentential complement verb constructions as single expressions. This move was easy for us to make since such expressions are awkward to manipulate in the "East Coast style" frame knowledge bases that we use in our own research, and we have preferred a relational style with redundant, cross-referencing conceptual units for quite some time. The representation we use instead amounts to breaking up the logical expression into individual units, and allowing them to include references to each other.

U1 = lambda(quantity-of-ships) . attack(Iraq, quantity-of-ships)
U2 = say(Iraq, U1)
U3 = report(Reuters, U2)

Given such a network as the realization specification, the LC must have some principle by which to judge where to start: which unit should form the basis of the surface structure to which the others are then attached? A natural principle to adopt is to start with the "base" unit, i.e. the one that does not mention any other units in its definition. We are considering adopting the policy that such units should be allowed only realizations as initial trees, while units whose definitions involve "pointing to" other units should be allowed only realizations as auxiliary trees. We have not, however, worked through all the ramifications such a policy might have on other parts of our generation model; without yet knowing whether it improves or degrades the other parts of our theory, we are reluctant to assert it as one of our hypotheses relating our generation model to TAG's.

Given this design, the realization of the question is fairly straightforward (see Figure 9). The lambda expression is assigned a realization class for clausal Wh constructions, whereupon the abstracted argument quantity-of-ships is positioned in COMP, and the body of the lambda placed in the HEAD position. At the same time, the two instances of quantity-of-ships are specially marked. The one in COMP is assigned the realization for wh phrases appropriate to quantity (e.g. it will have the choice how many X and possibly related choices such as <quantity> of which and other variants appropriate to relative clauses or other positions where Wh constructions can be used). Simultaneously the instance of quantity-of-ships in the argument position of the head frame attack is assigned to the realization class for Wh-trace. These two specifications are the equivalent, in our model, of the TAG linking relation.

Figure 9: Question formation with sentential complements (the auxiliary tree Reuters reports adjoined at the S between COMP and the clause; COMP holds WH(ships), and the clause is Iraq attack e).

The two pending units, U2 and U3, are then attached to the matrix, submerging the original unit and moving U2 into complement position.

7. Extensions to the Theory of TAG

Context-free grammars seem able to describe the word formation processes that exist for natural languages (cf. Williams [1981], Selkirk [1982]). A TAG analysis of such a grammar seems like a natural application of the current version of the theory (cf. Pustejovsky (in preparation)). To illustrate our point, consider compounding rules in English. We can say that for a context-free grammar for word formation, Gw, there is a TAG,
Tw, that is equivalent to Gw (cf. Figures 10 and 11). Consider the fragment of Gw below for the compounding component of natural language word formation.

N -> N N | A N | V N | P N
A -> N A | A A | P A
V -> P V

Figure 10: CFG fragment for Word Formation

The corresponding Tw fragment would be:

Figure 11: TAG fragment for Word Formation (auxiliary trees: N dominating comp N, A dominating comp A, V dominating P V; initial trees: the nouns oil, tanker, terminal).

Now consider the compound "oil tanker terminal", taken from the news reporting domain, and its derivation in TAG theory, shown in Figure 12.

Figure 12: TAG derivation of oil tanker terminal (successive adjunction of compounding auxiliary trees at N).

Let us compare this derivation to the process used by the LC. The underlying information units from which this compound is derived in our system are shown below. The planner has decided that these units need to be communicated in order to adequately identify the concept. The toplevel unit in this bundle is #<terminal>.

U1 = #<terminal>
U2 = ...
U3 = ...
U4 = ...
U5 = ...

The first unit to be positioned in the surface structure is U1, and it appears as the head of an NP. There is an attachment point on this position, however, which allows for the possibility of attaching U2 prenominally. One of the choices associated with this unit is a compound realization in terms of an auxiliary tree. Adjunction at this point in the derivation shows the following structure:

[N [U2] [U1]]

The next unit opened up in this structure is U3, which also allows for attachment prenominally. Then an auxiliary corresponding to U4 is introduced, giving us the structure below:

[N [U4] [N [U3] [U1]]]

The selectional constraints imposed by the structural positioning of information unit U4 allow only a compounding choice. Had there been no word-level compound realization option, we would have worked our way into a corner without expressing the relation between #<oil> and #<tanker>. Because of this it may be better to view units such as U4 as being associated directly with a lexical compound form, i.e. oil tanker. This partial solution, however, would not speak to the problem of active word formation in the language. Furthermore, it would be interesting to compare the strategic decisions made by a generation system with the planning decisions made by humans when speaking; this is an aspect of generation that merits much further research.

8. Acknowledgements

This research has been supported in part by contract N00014-85-K-0017 from the Defense Advanced Research Projects Agency. We would like to thank Marie Vaughan for help in the preparation of this text.

9. References

Clippinger & McDonald (1983) "Why Good Writing is Easier to Understand", Proc. IJCAI-83, pp. 730-732.

Davey (1974) Discourse Production, Ph.D. Dissertation, Edinburgh University; published in 1979 by Edinburgh University Press.

Halliday (1976) System and Function in Language, Oxford University Press.

Joshi (1983) "How Much Context-Sensitivity is Required to Provide Reasonable Structural Descriptions: Tree Adjoining Grammars", preprint to appear in Dowty, Karttunen & Zwicky (eds.) Natural Language Parsing: Psychological, Computational, and Theoretical Perspectives, Cambridge University Press.

Kroch, T. and A. Joshi (1985) "The Linguistic Relevance of Tree Adjoining Grammar", University of Pennsylvania, Dept. of Computer and Information Science.

Langendoen, D.T. (1981) "The Generative Capacity of Word-Formation Components", Linguistic Inquiry, Volume 12.

Mann & Matthiessen (1983) "Nigel: A Systemic Grammar for Text Generation", in Freedle (ed.) Systemic Perspectives on Discourse, Ablex.

Marcus (1980) A Theory of Syntactic Recognition for Natural Language, MIT Press.

McDonald (1984) "Description Directed Control: Its Implications for Natural Language Generation", in Cercone (ed.)
Computational Linguistics, Pergamon Press.

McDonald & Pustejovsky (1985a) "SAMSON: a computational theory of prose style in generation", Proceedings of the 1985 meeting of the European Association for Computational Linguistics.

McDonald & Pustejovsky (1985b) "Description-Directed Natural Language Generation", Proceedings of IJCAI-85, W. Kaufmann Inc., Los Altos CA.

Patten, T. (1985) "A Problem Solving Approach to Generating Text from Systemic Grammars", Proceedings of the 1985 meeting of the European Association for Computational Linguistics.

Pustejovsky, J. (In Preparation) "Word Formation in Tree Adjoining Grammars".

Selkirk (1982) The Syntax of Words, MIT Press.

Williams (1981) "Argument Structure and Morphology", The Linguistic Review, 1, 81-114.
1985
12
MODULAR LOGIC GRAMMARS

Michael C. McCord
IBM Thomas J. Watson Research Center
P. O. Box 218
Yorktown Heights, NY 10598

ABSTRACT

This report describes a logic grammar formalism, Modular Logic Grammars, exhibiting a high degree of modularity between syntax and semantics. There is a syntax rule compiler (compiling into Prolog) which takes care of the building of analysis structures and the interface to a clearly separated semantic interpretation component dealing with scoping and the construction of logical forms. The whole system can work in either a one-pass mode or a two-pass mode. In the one-pass mode, logical forms are built directly during parsing through interleaved calls to semantics, added automatically by the rule compiler. In the two-pass mode, syntactic analysis trees are built automatically in the first pass, and then given to the (one-pass) semantic component. The grammar formalism includes two devices which cause the automatically built syntactic structures to differ from derivation trees in two ways: (1) There is a shift operator, for dealing with left-embedding constructions such as English possessive noun phrases while using right-recursive rules (which are appropriate for Prolog parsing). (2) There is a distinction in the syntactic formalism between strong non-terminals and weak non-terminals, which is important for distinguishing major levels of grammar.

1. INTRODUCTION

The term logic grammar will be used here, in the context of natural language processing, to mean a logic programming system (implemented normally in Prolog), which associates semantic representations (normally in some version of predicate logic) with natural language text. Logic grammars may have varying degrees of modularity in their treatments of syntax and semantics. There may or may not be an isolatable syntactic component.

In writing metamorphosis grammars (Colmerauer, 1978), or definite clause grammars, DCG's (a special case of metamorphosis grammars, Pereira and Warren, 1980), it is possible to build logical forms directly in the syntax rules by letting non-terminals have arguments that represent partial logical forms being manipulated. Some of the earliest logic grammars (e.g., Dahl, 1977) used this approach. There is certainly an appeal in being direct, but there are some disadvantages in this lack of modularity. One disadvantage is that it seems difficult to get an adequate treatment of the scoping of quantifiers (and more generally focalizers, McCord, 1981) when the building of logical forms is too closely bonded to syntax. Another disadvantage is just a general result of lack of modularity: it can be harder to develop and understand syntax rules when too much is going on in them.

The logic grammars described in McCord (1982, 1981) were three-pass systems, where one of the main points of the modularity was a good treatment of scoping. The first pass was the syntactic component, written as a definite clause grammar, where syntactic structures were explicitly built up in the arguments of the non-terminals. Word sense selection and slot-filling were done in this first pass, so that the output analysis trees were actually partially semantic. The second pass was a preliminary stage of semantic interpretation in which the syntactic analysis tree was reshaped to reflect proper scoping of modifiers.
The third pass took the reshaped tree and produced logical forms in a straightforward way by carrying out modification of nodes by their daughters using a modular system of rules that manipulate semantic items -- consisting of logical forms together with terms that determine how they can combine.

The CHAT-80 system (Pereira and Warren, 1982, Pereira, 1983) is a three-pass system. The first pass is a purely syntactic component using an extraposition grammar (Pereira, 1981) and producing syntactic analyses in rightmost normal form. The second pass handles word sense selection and slot-filling, and the third pass handles some scoping phenomena and the final semantic interpretation. One gets a great deal of modularity between syntax and semantics in that the first component has no elements of semantic interpretation at all.

In McCord (1984) a one-pass semantic interpretation component, SEM, for the EPISTLE system (Miller, Heidorn and Jensen, 1981) was described. SEM has been interfaced both to the EPISTLE NLP grammar (Heidorn, 1972, Jensen and Heidorn, 1983), as well as to a logic grammar, SYNT, written as a DCG by the author. These grammars are purely syntactic and use the EPISTLE notion (op. cit.) of approximate parse, which is similar to Pereira's notion of rightmost normal form, but was developed independently. Thus SYNT/SEM is a two-pass system with a clear modularity between syntax and semantics.
The syntactic formalism includes a t device, called the shift operator, for dealing with left-embedding constructions such as English possessive noun phrases ("my wife's brother's friend's car") and Japanese relative clauses. ~ne shift operator instructs the rule compiler to build the structures appropriate for left- embedding. These structures are not derivation trees, because the syntax rules are right-re- cursive, because of the top-down parsing asso- ciated with Prolo E. There is a distinction in the syntactic formalism between strong non-terminals and weak non-terminals, which is important for distin- guishing major levels of grammar and which simplifies the. working of semantic interpreta- tion. This distinction also makes the (auto- matically produced) syntactic analysis trees much more readable and natural linguistically. In the absence of shift constructions, these trees are like derivation trees, but only with nodes corresponding to strong non-terminals. [n an experimental MLG, the semantic component handles all the scoping phenomena handled by that in McCord (1981) and more than the semantic component in McCord (1984). The logical form language is improved over that in the previous systems. The MLG formalism allows for a great deal of modu- larity in natural language grammars, because the syntax rules can be written with very little awareness of semantics or the building of analysis structures, and the very same syntactic component can be used in either the one-pass or the two-pass mode described above. Three other logic grammar systems designed with modularity in mind are Hirschman and Puder (1982), Abramson (1984) and Porto and Filgueiras (198&). These will be compared with MLG's in Section 6. 2. THE MLG SYNTACTIC FORMALISM The syntactic component for an MLG consists of a declaration of the strong non-terminals, fol- lowed by a sequence of MLG syntax rules. The dec- [aration of strong non-terminals is of the form strongnonterminals(NTI.NT2 ..... NTn.nil). where the NTi are the desired strong non-terminals (only their principal functors are indicated). Non-terminals that are not declared strong are called weak. The significance of the strong/weak distinction will be explained below. MLG syntax rules are of the form A ~---> B where A is a non-terminal and B is a rule body. A rule body is any combination of surlCace terminals, logical terminals, goals, shifted non-terminals, non-tprminals, the symbol 'nil', and the cut symbol '/', using the sequencing operator ':' and the 'or' symbol 'l' (We represent left-to-right sequencing with a colon instead of a comma, as is often done in logic grammars.) These rule body elements are Prolog terms (normally with arguments), and they are distinguished formally as follows. A su~e terminal is of the form +A, where A is any Prolog term. Surface terminals corre- spond to ordinary terminals in DCG's (they match elements of the surface word string), and the notation is often [A] in DCG's. A logical terminal is of the form 0p-L~, where Op is a modification operator and LF is a logical form. Logical terminals are special cases of semantic items, the significance of which will be explained below. Formally, the rule compiler 105 recognizes them as being terms of the form A-B. There can be any number of them in a rule body. A goal is of the form $A, where A is a term re- presenting a Prolog goal. (This is the usual provision for Prolog procedure calls, which are often indicated by enclosure in braces in DCG's.) 
A shifted non-terminal is either of the form%A, or of the form F%A, where A is a weak non- terminal and F is any ~erm. (In practice, F will be a list of features.) As indicated in the introduction, the shift operator '~' is used to handle left-embedding constructions in a right-recursive ~ule system. Any rule body element not of the above four forms and not 'nil' or the cut symbol is taken to be a non-terminal. A terminal is either a surface terminal or a logical ~erminal. Surface ~erminals are building blocks for the word string being analyzed, and logical terminals are building blocks for the amalysis structures. A syntax rule is called strong or weak, .,u- cording as the non-terminal on its left-hand side is strong or weak. It can be seen that on a purely formal level, the only differences between HLG syntax rules and DCG's are (1) the appearance of logical terminals in rule bodies of MLG's, (2) the use of ~he shift operator, and (3) the distinction between strong and weak non-terminals. However, for a given lin- guistic coverage, the syntactic component of an MLG will normally be more compact than the corresponding DCG because structure-building must be ,~xplicit in DCG's. In this report, the arrow '-->' (as opposed to ':>') will be used for for DCG rules, and the same notation for sequencing, terminals, etc.. will be used for DCG's as for MLG's. What is the significance of the strong/weak distinction for non-terminals and rules? Roughly, a strong rule should be thought of as introducing a new l®vel of grammar, whe[eas a weak rule defines analysis within a level. Major categories like sentence and noun phrase are expanded by strong rules, but auxiliary rules like the reoursive rules that find the postmodifiers of a verb are weak rules. An analogy with ATN's (Woods, 1970) is t~at strong non-tecminals are like the start categories of subnetworks (with structure-building POP arcs for termination), whereas weak non-terminals are llke internal nodes. In the one-pass mode, the HLG rule compiler makes the following distinction for strong and weak rules. In the Horn clause ~ranslatiDn of a strong ~11e, a call to the semantic interpretation compo- nent is compiled in at the end of the clause. The non-terminals appearing in rules (both strong and weak) are given extra arguments which manipu!aKe semantic structures used in the call to semantic interpretation. No such call to semantics is com- piled in for weak rules. Weak rules only gather information to be used in the call to semantics made by the next higher strong rule. (Also, a shift generates a call to semantics.) In the two-pass mode, where syntactic analysis trees are built during the first pass, the rule compiler builds in the construction of a tree node corresponding to every strong rule. The node is labeled essentially by the non-terminal appearing on the left-hand side of the strong rule. (A shift also generates the construction of a tree node.) Details of rule compilation will be given in the next section. As indicated above, logical terminals, and more generally semantic items, are of the form Operator-LogicalForm. The Operator is a term which determines how the semantic item can combine with other semantic items during semantic interpretation. (In this combina- tion, new semantic items are formed which ;ire no longer logical terminals.) Logical terminals are most typically associated with lexical items, al- though they ar~ also used to produc~, certain non- lexical ingredients in logical form analysis. 
An example for the lexical item "each" might be Q/P - each(P,Q). Here the operator Q/P is such that when the "each" item modifies, say, an item having logical form man(X), P gets unified with man(X), and the re- sulting semantic item is @Q - each(~.an(X),Q) where @q is an operator which causes Q to get uni- fied wi~h the logical form of a further modificand. Details ,Jr the dse of semantic items will be given in Section A. Now let us look at the syntactic component of a sample HLG which covers the same ground as a welt-known DCG. The following DCG is taken essen- tially from Pereira and Warren (1980). It is the sort of DCG that builds logical forms directly Dy manipulating partial logical forms in arguments of the grammar symbols. sentfP) --> np(X,PI,P): vp(X,Pl). np(X,P~,P) --~ detfP2,PI,P): noun(X,P3): relclause(X,P3,P2). np(X,P,P) --> name(X). vp(X,P) --> transverbfX,Y,Pl): np(Y,Pl,P). vpfX,P~ --> intransverb(X,P). relcbtuse(X,Pl,Pl&P2) --> +that: vp(X,P2). relc~ause(*,P,P) --> nil. det(PI,P2,P) --> +D: $dt~D,PI,P2,P). nounfX,P) --> +N: SnfN,X,P). name(X) --> +X: $nm(X). transverb(X,Y,P) --> +V: $tv(V,X,Y,P). intransverb(X,P) --> +V: $iv(V,X,P). /~ Lexicon */ n(maa,X,man(X) ). n(woman, X,woman (X)). ~(john). nm(mary). 106 dt(every,P1,P2,all(P1,P2)). dt(a,PI,P2,ex(Pl,P2)). tv(loves,X,Y,love(X,Y)). iv(lives,X,live(X)). The syntactic component of an analogous HLG is as follows. The lexicon is exactly the same as that of the preceding DCG. For reference below, this grammar will be called MLGRAH. strongnonterminals(sent.np.relclause.det.nil). sent ~> np(X): vp(X). np(X) => dec: noun(X): relclause(X). np(X) ~> name(X). vp(X) ~> transverb(X,Y): np(Y). vp(X) ~> intransverb(X). relclause(X) ~> +that: vp(X). relclause(*) ~> nil. det ~> +O: Sdt(D,P1,P2,P): PZ/PI-P. noun(X) ----> +N: Sn(N,X,P): I-P. name(X) ~> +X: Snm(X). transverb(X,Y) :> +V: $tv(V,X,Y,P): I-P. intransverb(X) = > +V: $iv(V,X,P): l-P This small grammar illustrates all the ingredients of HLG syntax rules except the shift operator. The shift will be illustrated below. Note that 'sent' and 'np' are strong categories but 'vp' is weak. A result is that there will be no call to semantics at the end of the 'vp' rule. Instead, the semantic structures associated with the verb and object are passed up to the 'sent' level, so that the subject and object are "thrown into the same pot" for se- mantic combination. (However, their surface order is not forgotten.) There are only two types of modification op- erators appearing in the semantic items of this MLG: 'I' and P2/PI. The operator 'i' means 'left- conlotn . Its effect is to left-conjoin its asso- ciated logical form to the logical form of the modificand (although its use in this small grammar is almost trivial). The operator P2/PI is associ- ated with determiners, and its effect has been il- lustrated above. The semantic component will be given below in Section &. A sa~_ple semantic analysis for the sentence "Every man that lives loves a woman" is all(man(Xl)&live(Xl),ex(woman(X2),love(Xl,X2))). This is the same as for the above DCG. We will also show a sample parse in the next section. A fragment of an MLG illustrating the use of the shift in the treatment of possessive noun phrases is as follows: np ~---> deC: npl. npl => premods: noun: np2. vp2 ~> postmods. np2 ~> poss: %npl. _The idea of this fragment can be described in a rough procedural way, as follows. In parsing an np, one reads an ordinary determiner (deC), then goes to npl. 
In npl, one reads several premodifiers (premods), say adjectives, then a head noun, then goes to np2. [n np2, one may either finish by reading postmodifiers (postmods), OR one may read an apostrophe-s (poss) and then SHIFT back to npl. Illustration for the noun phrase, "the old man's dusty hat": the old man 's np det npl premods noun np2 poss %npl dusty hat (nil) premods noun np2 postmods When the shift is encountered, the syntactic structures (in the two-pass mode) are manipulated (in the compiled rules) so that the initial np ("the old man") becomes a left-embedded sub-structure of the larger np (whose head is "hat"). But if no apostrophe-s is encountered, then the structure for "the old man" remains on the top level. 3. COMPILATION OF MLG SYNTAX RULES In describing rule compilation, we will first look at the two-pass mode, where syntactic struc- tures are built in the first pass, because the re- lationship of the analysis structures to the syntax rules is more direct in this case. The syntactic structures manipulated by the compiled rules are represented as syntactic items, which are terms of the form syn(Features,Oaughters) where Features is a feature list (to be defined), and Daughters is a list consisting of syntactic items and terminals. Both types of terminal (surface and logical) are included in Daughters, but the dis- playing procedures for syntactic structures can optionally filter out one or the other of the two types. A feature list is of the form nt:Argl, where nt is the principal fun=tot of a strong non-terminal and Argl is its first argument. (If nt has no ar- guments, we take Argl=nil.) It is convenient, in large grammars, to use this first argument Argl to hold a list (based on the operator ':') of gram- matical features of the phrase analyzed by the non-terminal (like number and person for noun phrases). [n compiling DCG rules into Prolog clauses, each non-terminal gets two extra arguments treated as a difference list representing the word string analyzed by the non-terminal. In compiling MLG rules, exactly the same thing is done to handle word strings. For handling syntactic structures, the MLG rule compiler adds additional arguments which manipulate 'syn' structures. The number of addi- tional arguments and the way they are used depend on whether :he non-terminal is strong or weak. If the original non-terminal is strong and has the form nt(Xl .... , Xn) then in the compiled version we will have 107 nt(Xl ..... Xn, Syn, Strl,Str2). Here there is a single syntactic structure argument, Syn, representing the syntactic structure of the phrase associated by nt with the word string given by the difference list (Strl, Sir2). On the other hand, when the non-terminal nt is weak, four syntactic structure arguments are added, producing a compiled predication of the form nt(Xl, .... Xn, SynO,Syn, Hodsl,Hods2, Strl,Str2). Here the pair (Hodsl, Hods2) holds a difference list for the sequence of structures analyzed by the weak non-terminal nt. These structures could be 'syn' structures or terminals, and they will be daughters (modifiers) for a 'syn' structure associated with the closest higher call to a strong non-terminal -- let us call this higher 'syn structure the ma- trix 'syn' structure. The other pair (SynO, Syn) represents the changing view of what the matrix 'syn' structure actually should be, a view that may change because a shift is encountered while satis- fying nt. SynO represents the version before sat- isfying nt, and Syn represents the version after satisfying nt. 
If no shift is encountered while satisfying nt, then Syn will just equal SynO. But if a shift is encountered, the old version SynO will become a daughter node in the new version Syn. In compiling a rule with several non-terminals in the rule body, linked by the sequencing operator ':', the argument pairs (SynO, Syn) and (Hodsl, Hods2) for weak non-terminals are linked, respec- tively, across adjacent non-terminals in a manner similar to the linking of the difference lists for word-string arguments. Calls to strong non- terminals associate 'syn' structure elements with the modifier lists, just as surface terminals are associated with elements of the word-string lists. Let us look now at the compilation of a set of rules. We will take the noun phrase grammar fragment illustrating the shift and shown above in Section 2, and repeated for convenience here, to- gether with declarations of strong non-terminals. strongnon~erminals(np.det.noun.poss.nil). np => det: npl. npl => premods: noun: np2. np2 ----~-> postmods. rip2 => poss: %npl. The compiled rules are as follows: np[Syn, Strl,Str3) <- det(Hod, Strl,Str2) & npl(syn(np:nil,Hod:Hods),Syn, Hods,nil, Str2,Str3). npl(Synl,Syn3, Hodsl,Hods3, Strl,Str4) <- premods(Synl,Syn2, Hodsl,Hod:Hods2, Strl,Str2) & noun(Hod, Str2,Str3) & np2(Syn2,Syn3, Hods2,Hods3, Str3,Str4). np2(Synl,Syn2, Hodsl,Hods2, Strl,Str2) <- postmods(Synl,Syn2, Hodsl,Hods2, Strl,Str2). np2(syn(Feas,HodsO),Syn, Hod:Hodsl,Hodsl, Strl,Str3) <- poss(Mod, Strl,Str2) & npl(syn(Feas,syn(Feas,HodsO):Hods2),Syn, Hods2,nil, Str2,Str3). In the first compiled rule, the structure Syn to be associated with the call to 'np' appears again in the second matrix structure argument of 'npl' The first matrix structure argument of 'npl' is syn(np:nil,Mod:Hods). and this will turn out to be the value of Syn if no shifts are encountered. Here Hod is the 'syn' structure associated with the determiner 'det', and Hods is the list of modifiers determined further by 'npi'. The feature list np:nil is constructed from the leading non-terminal 'np' of this strong rule. (It would have been np:Argl if np had a (first) argument Argl.) [n the second and third compiled rules, the matrix structure pairs (first two arguments) and the modifier difference list pairs are linked in a straightforward way to reflect sequencing. ]'be fourth rule shows the effect of the shift. Here syn(Feas,HodsO), the previous "conjecture" for the matrix structure, is now made simply the first modifier in the larger structure syn(Feas,syn(Feas,HodsO):Hods2) which becomes the new "conjecture" by being placed in the first argument of the further call to 'npl'. If the shift operator had been used in its binary form FO%npl, then the new conjecture would be syn(NT:F,syn(NT:FO,Mods0):Hods2) where the old conjecture was syn(NT:F,HodsO). [n larger grammars, this allows one to have a com- pletely correct feature list NT:FO for the left- embedded modifier. To illustrate the compilation of terminal symbols, let us look at the rule det => +O: Sdt(D,PI,P2,P): P2/Pt-P. from the grammar HLGRAM in Section 2. The compiled rule is det(syn(det:nil,+D:P2/PI-P:nil), D.Str,Str) <- dt(D,PI,P2,P). Note that both the surface terminal +D and the logical terminal P2/PI-P are entered as modifiers of the 'det' node. The semantic interpretation component looks only at the logical terminals, but in certain applications it is useful to be able to see the surface terminals in the syntactic struc- tures. 
As mentioned above, the display procedures for syntac=i¢ structures can optionally show only one type of terminal. 108 The display of the syntactic structure of the sentence "Every man loves a woman" produced by MLGRAM is as follows. sentence:nil np:Xl det:nil X2/X3-alI(X3,X2) l-man(Xl) l-love(Xl,XA) np:XA det:nil XS/X6-ex(X6,XS) l-woman(X&) Note that no 'vp' node is shown in the parse tree; 'vp' is a weak non-terminal. The logical form produced for this tree by the semantic component given in the next section is all(man(Xl), ex(woman(X2),love(XI,X2))). Now let us look at the compilation of syntax rules for the one-pass mode. In this mode, syn- tactic structures are not built, but semantic structures are built up directly. The rule compiler adds extra arguments to non-terminals for manipu- lation of semantic structures, and adds calls to the top-level semantic interpretation procedure, 'semant'. The procedure 'semant' builds complex semantic structures out of simpler ones, where the original building blocks are the logical terminals appearing in the MLG syntax rules. In this process of con- struction, it would be possible to work with se- mantic items (and in fact a subsystem of the rules do work directly with semantic items), but it ap- pears to be more efficient to work with slightly more elaborate structures which we call augmented semantic items. These' are terms of the form sem(Feas,Op,LP), where Op and [2 are such that Op-LF is an ordinary semantic item, and Fees is either a feature list or the list terminal:nil. The latter form is used for the initial augmented semantic items associated with logical terminals. As in the two-pass mode, the number of analysis structure arguments added to a non-terminal by the compiler depends on whether the non-terminal is strong or weak. If the original non-terminal is strong and has the form nt(Xl, ..., Xn) then in the compiled version we will have nt(Xl, ..., Xn, Semsl,Sems2, Strl,Str2). Here (Semsl, Sems2) is a difference list of aug- mented semantic items representing the list of se- mantic s~ruotures for the phrase associated by n~ with the word s~ring given by the difference list (Strl, Sir2). In the syntactic (two-pass) mode, only one argument (for a 'syn') is needed here, but now we need a list of structures because of a raising phenomenon necessary for proper scoping, which we will discuss in Sections A and 5. When the non-terminal nt is weak, five extra arguments are added, producing a compiled predi- cation of the form nt(Xl, ..., Xn, Fees, SemsO,Sems, Semsl,Sems2, Strl,Str2). Here Fees is the feature list for the matrix strong non-terminal. The pair (SemsO, Sems) represents the changing "conjecture" for the complete list of. daughter (augmented) semantic items for the matrix node, and is analogous to first extra argument pair in the two-pass mode. The pair (Semsl, Sems2) holds a difference list for the sequence of semantic items analyzed by the weak non-terminal nt. Semsl will be a final sublist of SemsO, and Sems2 will of course be a final sub|ist of Semsl. For each strong rule, a cal-i to 'semant' is added at the end of the compiled form of the rule. The form of the call is semant(Feas, Sems, Semsl,Sems2). Here teas is the feature list for the non-terminal on the left-hand side of the rule. Sems is the final version of the list of daughter semantic items (after all adjustments for shifts) and (SemsL, Sems2) is the difference list of semantic items resulting from the semantic interpretation for this level. 
(Think of Fees and Sems as input to 'semant', and (Semsl, Sems2) as output.) CSemsl, Sems2) will be the structure arguments for the non-terminal on the left-hand side of the strong rule. A call to 'semant' is also generated when a shift is encountered, as we will see below. The actual working of 'semant' is the topic of the next section. For the shift grammar fragment shown above, the compiled rules are as follows. np(Sems,Sems0, Strl,Str3) <- det(Semsl,Sems2, Strl,Str2) & npl(np:nil, Semsl,Sems3, Sems2,nil, Str2,Scr3) a semant(np:nil, Sems3, Sems,SemsO). npl(Feas, Semsl,Sems3, Semsa,Sems7, Strl,St[~) <- premods(Feas, Semsl,Sems2, SemsA,Sems5, Strl,Str2) & noun(Sems5,Sems6, Str2,Str3) & np2(Feas, Sems2,Sems3, Sems6,SemsT, Str3,StrA). np2(Feas, Semsl,Sems2, Sems3,Semsd, Strl,Str2) <- postmods(Feas, Semsl,Sems2, Sems3,SemsA, Strl,Str2). npE(Feas, Semsl.SemsA, SemsS,Sems6, Strl,Str3) <- poss(SemsS,Sems6, Strl,Str2) & semant(Feas, Semsl, Sems2,Sems3) & npl(Feas, Sems2,Sems~, Sems3,nil, Str2,Str3). In the first compiled rule (a strong rule), the pair (Seres, SemsO) is a difference list of the semantic items analyzing the noun phrase. (Typically there 109 will just be one element in this list, but there can be more when modifiers of the noun phrases contain quantifiers that cause the modifiers to get promoted semantically to be sisters of the noun phrase.) This difference list is the output of the call to 'semant' compiled in at the end of the first rule. The input to this call is the list Sems3 (along with the feature list np:nil). We arrive at Sems3 as follows. The list Semsl is started by , ! the call to det ; its first element is the determiner (if there is one), and the list is con- tinued in the list Sems2 of modifiers determined further by the call to 'npl'. In this call to 'npl', the initial list Semsl is given in the second ar- gument of 'npl' as the "initial verslon for the final list of modifiers of the noun phrase. Sems3, being in the next argument of 'npl', is the "final version" of the np modifier list, and this is the list given as input to 'semant'. [f the processing of 'npl' encounters no shifts, then Sems3 will just equal 5ems I. [n the second compiled rule (for 'npl'), the "versions" of the total list of modifiers are [inked in a chain (Semsl, 5ems2, Sems3) in the second and third arguments of the weak non- terminals. The actual modifiers produced by this rule are linked in a chain (SemsA, Sems51 Sems6, SemsT) in the fourth and fifth arguments of the weak non- terminals and the first and second arguments of the strong non-terminals. A similar situation holds for the first of the 'np2' rules. [n the second 'npZ' rule, a shift is encount- ered, so a call to 'semant' is generated. This is necessary because of the shift of levels; the mod- ifiers produced so far represent all the modifiers in an np, and these must be combined by 'semant' to get the analysis of this np. As input to this call to 'semant', we take the list Semsl, which is the current version of the modifiers of the matrix np. The output is the difference list .(Sems2, gems3). Sems2 is given to the succeeding call to 'npl' as the new current version of the matrix modifier list. The tail Sems3 of the difference list output by 'semant' is given to 'npl' in its fourth argument to receive further modifiers. SemsA is the f~.nal uersion of the matrix modifier list, determined by 'npi I , and this information is also put in the third a,'gument of 'np2'. 
The difference list (Sems5, Sems6) contains the single element produced by 'poss', and this list tails off the list Sems1.

When a semantic item Op-LF occurs in a rule body, the rule compiler inserts the augmented semantic item sem(terminal:nil,Op,LF). As an example, the weak rule

   transverb(X,Y) => +V: $tv(V,X,Y,P): 1-P.

compiles into the clause

   transverb(X,Y, Feas, Sems1,Sems1,
             sem(terminal:nil,1,P):Sems2,Sems2,
             V.Str,Str) <-
      tv(V,X,Y,P).

The strong rule

   det ---> +D: $dt(D,P1,P2,P): P2/P1-P.

compiles into the clause

   det(Sems1,Sems2, D.Str,Str) <-
      dt(D,P1,P2,P) &
      semant(det:nil, sem(terminal:nil,P2/P1,P):nil,
             Sems1,Sems2).

4. SEMANTIC INTERPRETATION FOR MLG'S

The semantic interpretation schemes for both the one-pass mode and the two-pass mode share a large core of common procedures; they differ only at the top level. In both schemes, augmented semantic items are combined with one another, forming more and more complex items, until a single item is constructed which represents the structure of the whole sentence. In this final structure, only the logical form component is of interest; the other two components are discarded. We will describe the top levels for both modes, then describe the common core.

The top level for the one-pass mode is simpler, because semantic interpretation works in tandem with the parser, and does not itself have to go through the parse tree. The procedure 'semant', which has interleaved calls in the compiled syntax rules, essentially is the top-level procedure, but there is some minor cleaning up that has to be done. If the top-level non-terminal is 'sentence' (with no arguments), then the top-level analysis procedure for the one-pass mode can be

   analyze(Sent) <-
      sentence(Sems,nil, Sent,nil) &
      semant(top:nil, Sems, sem(*,*,LF):nil,nil) &
      outlogform(LF).

Normally, the first argument, Sems, of 'sentence' will be a list containing a single augmented semantic item, and its logical form component will be the desired logical form. However, for some grammars, the additional call to 'semant' is needed to complete the modification process. The procedure 'outlogform' simplifies the logical form and outputs it.

The definition of 'semant' itself is given in a single clause:

   semant(Feas,Sems, Sems2,Sems3) <-
      reorder(Sems,Sems1) &
      modlist(Sems1, sem(Feas,id,t),
              Sem, Sems2,Sem:Sems3).

Here, the procedure 'reorder' takes the list Sems of augmented semantic items to be combined and reorders it (permutes it), to obtain proper (or most likely) scoping. This procedure belongs to the common core of the two methods of semantic interpretation, and will be discussed further below.

The procedure 'modlist' does the following. A call

   modlist(Sems,Sem0, Sem, Sems1,Sems2)

takes a list Sems of (augmented) semantic items and combines them with (lets them modify) the item Sem0, producing an item Sem (as the combination), along with a difference list (Sems1, Sems2) of items which are promoted to be sisters of Sem. The leftmost member of Sems acts as the outermost modifier. Thus, in the definition of 'semant', the result list Sems1 of reordering acts on the trivial item sem(Feas,id,t) to form a difference list (Sems2, Sem:Sems3) where the result Sem is right-appended to its sisters. 'modlist' also belongs to the common core, and will be defined below.

The top level for the two-pass system can be defined as follows.

   analyze2(Sent) <-
      sentence(Syn, Sent,nil) &
      synsem(Syn, Sems,nil) &
      semant(top:nil, Sems, sem(*,*,LF):nil,nil) &
      outlogform(LF).
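The procedure 'outlogform', called by both 'analyze' and 'analyze2', is not defined in the paper; a minimal stand-in (ours) that omits the simplification step would be:

   % Print the logical form without simplifying it (our sketch).
   outlogform(LF) <- write(LF) & nl.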
The only difference between 'analyze2' and 'analyze' is that the call to 'sentence' produces a syntactic item Syn, and this is given to the procedure 'synsem'. The latter is the main recursive procedure of the two-pass system. A call

   synsem(Syn, Sems1,Sems2)

takes a syntactic item Syn and produces a difference list (Sems1, Sems2) of augmented semantic items representing the semantic structure of Syn. (Typically, this list will just have one element, but it can have more if modifiers get promoted to sisters of the node.) The definition of 'synsem' is as follows.

   synsem(syn(Feas,Mods), Sems2,Sems3) <-
      synsemlist(Mods,Sems) &
      reorder(Sems,Sems1) &
      modlist(Sems1, sem(Feas,id,t),
              Sem, Sems2,Sem:Sems3).

Note that this differs from the definition of 'semant' only in that 'synsem' must first recursively process the daughters Mods of its input syntactic item before calling 'reorder' and 'modlist'. The procedure 'synsemlist' that processes the daughters is defined as follows.

   synsemlist(syn(Feas,Mods0):Mods, Sems1) <- / &
      synsem(syn(Feas,Mods0), Sems1,Sems2) &
      synsemlist(Mods,Sems2).
   synsemlist((Op-LF):Mods,
              sem(terminal:nil,Op,LF):Sems) <- / &
      synsemlist(Mods,Sems).
   synsemlist(Mod:Mods, Sems) <-
      synsemlist(Mods,Sems).
   synsemlist(nil,nil).

The first clause calls 'synsem' recursively when the daughter is another 'syn' structure. The second clause replaces a logical terminal by an augmented semantic item whose feature list is terminal:nil. The next clause ignores any other type of daughter (this would normally be a surface terminal).

Now we can proceed to the common core of the two semantic interpretation systems. The procedure 'modlist' is defined recursively in a straightforward way:

   modlist(Sem:Sems, Sem0, Sem2, Sems1,Sems3) <-
      modlist(Sems, Sem0, Sem1, Sems2,Sems3) &
      modify(Sem, Sem1, Sem2, Sems1,Sems2).
   modlist(nil, Sem, Sem, Sems,Sems).

Here 'modify' takes a single item Sem and lets it operate on Sem1, giving Sem2 and a difference list (Sems1, Sems2) of sister items. Its definition is

   modify(Sem, Sem1, Sem1, Sem2:Sems,Sems) <-
      raise(Sem,Sem1,Sem2) & /.
   modify(sem(*,Op,LF),
          sem(Feas,Op1,LF1),
          sem(Feas,Op2,LF2), Sems,Sems) <-
      mod(Op-LF, Op1-LF1, Op2-LF2).

Here 'raise' is responsible for raising the item Sem so that it becomes a sister of the item Sem1; Sem2 is a new version of Sem after the raising, although in most cases Sem2 equals Sem. Raising occurs for a noun phrase like "a chicken in every pot", where the quantifier "every" has higher scope than the quantifier "a". The semantic item for "every pot" gets promoted to a left sister of that for "a chicken". 'raise' is defined basically by a system of unit clauses which look at specific types of phrases. For the small grammar MLGRAM of Section 2, no raising is necessary, and the definition of 'raise' can just be omitted.

The procedures 'raise' and 'reorder' are two key ingredients of reshaping (the movement of semantic items to handle scoping problems), which was discussed extensively in McCord (1982, 1981). In those two systems, reshaping was a separate pass of semantic interpretation, but here, as in McCord (1984), reshaping is interleaved with the rest of semantic interpretation. In spite of the new top-level organization for semantic interpretation of MLG's, the low-level procedures for raising and reordering are basically the same as in the previous systems, and we refer to the previous reports for further discussion.

The procedure 'mod', used in the second clause for 'modify', is the heart of semantic interpretation.
A call

   mod(Sem, Sem1, Sem2)

means that the (non-augmented) semantic item Sem modifies (combines with) the item Sem1 to give the item Sem2. 'mod' is defined by a system consisting basically of unit clauses which key off the modification operators appearing in the semantic items. In the experimental MLG described in the next section, there are 22 such clauses. For the grammar MLGRAM of Section 2, the following set of clauses suffices.

   mod(id-*, Sem, Sem) <- /.
   mod(Sem, id-*, Sem) <- /.
   mod(1-P, Op-Q, Op-R) <- and(P,Q,R).
   mod(P/Q-R, Op-Q, @P-R).
   mod(@P-Q, Op-P, Op-Q).

The first two clauses say that the operator 'id' acts like an identity. The third clause defines '1' as a left-conjoining operator (its corresponding logical form gets left-conjoined to that of the modificand). The call and(P,Q,R) makes R=P&Q, except that it treats 't' ('true') as an identity. The next clause for 'mod' allows a quantifier semantic item like P/Q-each(Q,P) to operate on an item like 1-man(X) to give the item @P-each(man(X),P). The final clause then allows this item to operate on 1-live(X) to give 1-each(man(X),live(X)).

The low-level procedure 'mod' is the same (in purpose) as the procedure 'trans' in McCord (1981), and has close similarities to 'trans' in McCord (1982) and 'mod' in McCord (1984), so we refer to this previous work for more illustrations of this approach to modification.

For MLGRAM, the only ingredient of semantic interpretation remaining to be defined is 'reorder'. We can define it in a way that is somewhat more general than is necessary for this small grammar, but which employs a technique useful for larger grammars. Each augmented semantic item is assigned a precedence number, and the reordering (sorting) is done so that when item B has a higher precedence number than item A, then B is ordered to the left of A; otherwise items are kept in their original order. The following clauses then define 'reorder' in a way suitable for MLGRAM.

   reorder(A:L,M) <-
      reorder(L,L1) &
      insert(A,L1,M).
   reorder(nil,nil).

   insert(A,B:L,B:L1) <-
      prec(A,PA) & prec(B,PB) &
      gt(PB,PA) & / &
      insert(A,L,L1).
   insert(A,L,A:L).

   prec(sem(terminal:*,*,*),2) <- /.
   prec(sem(relclause:*,*,*),1) <- /.
   prec(*,3).

Thus terminals are ordered to the end, except not after relative clauses. In particular, the subject and object of a sentence are ordered before the verb (a terminal in the sentence), and this allows the straightforward process of modification in 'mod' to scope the quantifiers of the subject and object over the material of the verb. One can alter the definition of 'prec' to get finer distinctions in scoping, and for this we refer to McCord (1982, 1981).

For a grammar as small as MLGRAM, which has no treatment of scoping phenomena, the total complexity of the MLG, including the semantic interpretation component we have given in this section, is certainly greater than that of the comparable DCG in Section 2. However, for larger grammars, the modularity is definitely worthwhile -- conceptually, and probably in the total size of the system.
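Two auxiliary procedures used above, 'and' and 'gt', are not defined in the paper; minimal definitions consistent with the text (our sketch) are given here, followed by a hand trace (also ours) showing how the MLGRAM clauses derive the logical form of "Every man loves a woman".

   % R = P&Q, except that t ('true') acts as an identity for &.
   and(t,Q,Q) <- /.
   and(P,t,P) <- /.
   and(P,Q,P&Q).

   % gt(X,Y): precedence X is strictly greater than Y.
   gt(X,Y) <- X > Y.

   % Hand trace for "Every man loves a woman". After the np-level
   % calls to 'semant', the sentence-level items are, post-'reorder'
   % (terminals last, as 'prec' dictates):
   %    sem(np:X1, @X2, all(man(X1),X2))
   %    sem(np:X4, @X5, ex(woman(X4),X5))
   %    sem(terminal:nil, 1, love(X1,X4))
   % 'modlist' applies them right to left to sem(sentence:nil,id,t):
   %    1-love(X1,X4) on id-t            gives 1-love(X1,X4)
   %    @X5-ex(woman(X4),X5) on that     gives 1-ex(woman(X4),love(X1,X4))
   %    @X2-all(man(X1),X2) on that      gives
   %       1-all(man(X1),ex(woman(X4),love(X1,X4))),
   % i.e. the logical form quoted in Section 3, up to variable renaming.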
5. AN EXPERIMENTAL MLG

This section describes briefly an experimental MLG, called MODL, which covers the same linguistic ground as the grammar (called MOD) in McCord (1981). The syntactic component of MOD, a DCG, is essentially the same as that in McCord (1982). One feature of these syntactic components is a systematic use of slot-filling to treat complements of verbs and nouns. This method increases modularity between syntax and lexicon, and is described in detail in McCord (1982).

One purpose of MOD, which is carried over to MODL, is a good treatment of scoping of modifiers and a good specification of logical form. The logical form language used by MODL as the target of semantic interpretation has been improved somewhat over that used for MOD. We describe here some of the characteristics of the new logical form language, called LFL, and give sample LFL analyses obtained by MODL, but we defer a more detailed description of LFL to a later report.

The main predicates of LFL are word-senses for words in the natural language being analyzed, for example, believe1(X,Y) in the sense "X believes that Y holds". Quantifiers, like 'each', are special cases of word-senses. There are also a small number of non-lexical predicates in LFL, some of which are associated with inflections of words, like 'past' for past tense, or syntactic constructions, like 'yesno' for yes-no questions, or have significance at discourse level, dealing for instance with topic/comment.

The arguments for predicates of LFL can be constants, variables, or other logical forms (expressions of LFL). Expressions of LFL are either predications (in the sense just indicated) or combinations of LFL expressions using the conjunction '&' and the indexing operator ':'. Specifically, if P is a logical form and E is a variable, then P:E (read "P indexed by E") is also a logical form. When an indexed logical form P:E appears as part of a larger logical form Q, and the index variable E is used elsewhere in Q, then E can be thought of roughly as standing for P together with its "context". Contexts include references to time and place which are normally left implicit in natural language. When P specifies an event, as in see(john,mary), writing P:E and subsequently using E will guarantee that E refers to the same event. In the logical form language used in McCord (1981), event variables (as arguments of verb and noun senses) were used for indexing. But the indexing operator is more powerful because it can index complex logical forms.

For some applications, it is sufficient to ignore contexts, and in such cases we just think of P:E as verifying P and binding E to an instantiation of P. In fact, for Prolog execution of logical forms without contexts, ':' can be defined by the single clause:

   P:P <- P.

A specific purpose of the MOD system in McCord (1981) was to point out the importance of a class of predicates called focalizers, and to offer a method for dealing with them in semantic interpretation. Focalizers include many determiners, adverbs, and adjectives (or their word-senses), as well as certain non-lexical predicates like 'yesno'. Focalizers take two logical form arguments called the base and the focus:

   focalizer(Base,Focus).

The focus is often associated with sentence stress, hence the name. The pair (Base, Focus) is called the scope of the focalizer.

The adverbs 'only' and 'even' are focalizers which most clearly exhibit the connection with stress. The predication only(P,Q) reads "the only case where P holds is when Q also holds". We get different analyses depending on focus (the stressed focus is shown in capitals):

   John only buys BOOKS at Smith's.
   only(at(smith,buy(john,X1)), book(X1)).

   John only buys books at SMITH'S.
   only(book(X1)&at(X2,buy(john,X1)), X2=smith).
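Although MODL's definitions of the focalizers are deferred to a later report, the flavor of an extensional, context-free reading of only(P,Q) can be sketched in one clause (ours, assuming a negation-as-failure operator 'not' and direct execution of logical forms as goals):

   % only(P,Q): every provable case of the base P is also a case of
   % the focus Q. Presupposition and contexts are ignored here.
   only(P,Q) <- not (P & not Q).

On a small database of buying facts, this would make only(at(smith,buy(john,X)), book(X)) succeed exactly when everything John buys at Smith's is a book.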
Quantificational adverbs like 'always' and 'seldom', studied by David Lewis (1975), are also focalizers. Lewis made the point that these quantifiers are properly considered unselective, in the sense that they quantify over all the free variables in (what we call) their bases. For example, in

   John always buys books at SMITH'S.
   always(book(X1)&at(X2,buy(john,X1)), X2=smith).

the quantification is over both X1 and X2. (A paraphrase is "Always, if X1 is a book and John buys X1 at X2, then X2 is Smith's".)

Quantificational determiners are also focalizers (and are unselective quantifiers); they correspond closely in meaning to the quantificational adverbs ('all' - 'always', 'many' - 'often', 'few' - 'seldom', etc.). We have the paraphrases:

   Leopards often attack MONKEYS in trees.
   often(leopard(X1)&tree(X2)&in(X2,attack(X1,X3)),
         monkey(X3)).

   Many leopard attacks in trees are (attacks) on monkeys.
   many(leopard(X1)&tree(X2)&in(X2,attack(X1,X3)),
        monkey(X3)).

Adverbs and adjectives involving comparison or degree along some scale of evaluation (a wide class) are also focalizers. The base specifies the base of comparison, and the focus singles out what is being compared to the base. This shows up most clearly in the superlative forms. Consider the adverb "fastest":

   John ran fastest YESTERDAY.
   fastest(run(john):E, yesterday(E)).

   JOHN ran fastest yesterday.
   fastest(yesterday(run(X)), X=john).

In the first sentence, with focus on "yesterday", the meaning is that, among all the events of John's running (this is the base), John's running yesterday was fastest. The logical form illustrates the indexing operator. In the second sentence, with focus on "John", the meaning is that among all the events of running yesterday (there is an implicit location for these events), John's running was fastest.

As an example of a non-lexical focalizer, we have yesno(P,Q), which presupposes that a case of P holds, and asks whether P & Q holds. (The pair (P, Q) is like topic/comment for yes-no questions.) Example:

   Did John see MARY yesterday?
   yesno(yesterday(see(john,X)), X=mary).

It is possible to give Prolog definitions for most of the focalizers discussed above which are suitable for extensional evaluation and which amount to model-theoretic definitions of them. This will be discussed in a later report on LFL.

A point of the grammar MODL is to be able to produce LFL analyses of sentences using the modular semantic interpretation system outlined in the preceding section, and to arrive at the right (or most likely) scopes for focalizers and other modifiers. The decision on scoping can depend on heuristics involving precedences, on very reliable cues from the syntactic position, and even on the specification of foci by explicit underlining in the input string (which is most relevant for adverbial focalizers). Although written text does not often use such explicit specification of adverbial foci, it is important that the system can get the right logical form after having some specification of the adverbial focus, because this specification might be obtained from prosody in spoken language, or might come from the use of discourse information. It also is an indication of the modularity of the system that it can use the same syntactic rules and parse path no matter where the adverbial focus happens to lie.

Most of the specific linguistic information for semantic interpretation is encoded in the procedures 'mod', 'reorder', and 'raise', which manipulate semantic items. In MODL there are 22 clauses for the procedure 'mod', most of which are unit clauses.
These involve ten different modification operators, four of which were illustrated in the preceding section. The definition of 'mod' in MODL is taken fairly directly from the corresponding procedure 'trans' in MOD (McCord, 1981), although there are some changes involved in handling the new version of the logical form language (LFL), especially the indexing operator. The definitions of 'reorder' and 'raise' are essentially the same as for the procedures in MOD.

An illustration of analysis in the two-pass mode in MODL is now given. For the sentence "Leopards only attack MONKEYS in trees", the syntactic analysis tree is as follows.

   sent
      nounph  1-leopard(X)
      avp     (P<Q)-only(P,Q)
      1-attack(X,Y)
      nounph  1-monkey(Y)
      prepph  @@R-in(Z,R)
         nounph  1-tree(Z)

Here we display complete logical terminals in the leaf nodes of the tree. An indication of the meanings of the operators (P<Q) and @@R will be given below.

In the semantic interpretation of the prepositional phrase, the 'tree' item gets promoted (by 'raise') to be a left-sister of the 'in' item, and the list of daughter items (augmented semantic items) of the 'sent' node is the following.

   nounph     1      leopard(X)
   avp        P<Q    only(P,Q)
   terminal   1      attack(X,Y)
   nounph     1      monkey(Y)
   nounph     1      tree(Z)
   prepph     @@R    in(Z,R)

Here we display each augmented semantic item sem(nt:Feas,Op,LF) simply in the form nt Op LF. The material in the first field of the 'monkey' item actually shows that it is stressed. The reshaping procedure 'reorder' rearranges these items into the order:

   nounph     1      leopard(X)
   nounph     1      tree(Z)
   prepph     @@R    in(Z,R)
   terminal   1      attack(X,Y)
   avp        P<Q    only(P,Q)
   nounph     1      monkey(Y)

Next, these items successively modify (according to the rules for 'mod') the matrix item,

   sent  id  t,

with the rightmost daughter acting as innermost modifier. The rules for 'mod' involving the operator (P<Q) associated with only(P,Q) are designed so that the logical form material to the right of 'only' goes into the focus Q of 'only' and the material to the left goes into the base P. The material to the right is just monkey(Y). The items on the left ('leopard', 'tree', 'in', 'attack') are allowed to combine (through 'mod') in an independent way before being put into the base of 'only'. The operator @@R associated with in(Z,R) causes R to be bound to the logical form of the modificand -- attack(X,Y). The combination of items on the left of 'only' is

   leopard(X)&tree(Z)&in(Z,attack(X,Y)).

This goes into the base, so the whole logical form is

   only(leopard(X)&tree(Z)&in(Z,attack(X,Y)),
        monkey(Y)).

For detailed traces of logical form construction by this method, see McCord (1981).

An illustration of the treatment of left-embedding in MODL in a two-pass analysis of the sentence "John sees each boy's brother's teacher" is as follows.

   sent
      nounph  1-(X=john)
      1-see(X,W)
      nounph
         nounph
            nounph
               determiner  Q/P-each(P,Q)
               1-boy(Y)
            1-poss
            1-brother(Z,Y)
         1-poss
         1-teacher(W,Z)

   Logical form:
   each(boy(Y), the(brother(Z,Y),
        the(teacher(W,Z), see(john,W)))).

The MODL noun phrase rules include the shift (in a way that is an elaboration of the shift grammar fragment in Section 2), as well as rules for slot-filling for nouns like 'brother' and 'teacher' which have more than one argument in logical form. Exactly the same logical form is obtained by MODL for the sentence "John sees the teacher of the brother of each boy".
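None of MODL's 22 'mod' clauses are listed in the paper. As an indication of their flavor only, here is a clause (our guess, not McCord's actual code, and it assumes '@@' is declared as a prefix operator) that would produce the behavior described for @@R in the "Leopards" example above:

   % @@R binds R to the modificand's logical form: the call
   % mod(@@R-in(Z,R), 1-attack(X,Y), Out) yields
   % Out = 1-in(Z,attack(X,Y)).
   mod(@@R-LF, Op-R, Op-LF).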
Both of the left-embedding analyses just given involve raising. In the first, the 'poss' node resulting from the apostrophe-s is raised to become a definite article. In the second, the prepositional phrases (their semantic structures) are promoted to be sisters of the "teacher" node, and the order of the quantifiers is (correctly) reversed.

The syntactic component of MODL was adapted as closely as possible from that of MOD (a DCG) in order to get an idea of the efficiency of MLG's. The fact that the MLG rule compiler produces more structure-building arguments than are in the DCG would tend to lengthen analysis times, but it is hard to predict the effect of the different organization of the semantic interpreter (from a three-pass system to a one-pass and a two-pass version of MODL). The following five sentences were used for timing tests.

   Who did John say that the man introduced Mary to?
   Each book Mary said was given to Bill was written
      by a woman.
   Leopards only attack monkeys in trees.
   John saw each boy's brother's teacher.
   Does anyone wanting to see the teacher know whether
      there are any books left in this room?

Using Waterloo Prolog (an interpreter) on an IBM 3081, the following average times to get the logical forms for the five sentences were obtained (not including time for I/O and initial word separation):

   MODL, one-pass mode - 40 milliseconds.
   MODL, two-pass mode - 42 milliseconds.
   MOD - 35 milliseconds.

So there was a loss of speed, but not a significant one. MODL has also been implemented in PSC Prolog (on a 3081). Here the average one-pass analysis time for the five sentences was improved to 30 milliseconds per sentence. On the other hand, the MLG grammar (in source form) is more compact and easier to understand. The syntactic components for MOD and MODL were compared numerically by a Prolog program that totals up the sizes of all the grammar rules, where the size of a compound term is defined to be 1 plus the sum of the sizes of its arguments, and the size of any other term is 1. The total for MODL was 1433, and for MOD was 1807, for a ratio of 79%.

So far, nothing has been said in this report about semantic constraints in MODL. Currently, MODL exercises constraints by unification of semantic types. Prolog terms representing type requirements on slot-fillers must be unified with types of actual fillers. The types used in MODL are type trees. A type tree is either a variable (unspecified type) or a term whose principal functor is an atomic type (like 'human'), and whose arguments are subordinate type trees. A type tree T1 is subordinate to a type tree T2 if either T1 is a variable or the principal functor of T1 is a subtype (ako) of the principal functor of T2. Type trees are a generalization of the type lists used by Dahl (1981), which are lists of the form T1:T2:T3:..., where T1 is a supertype of T2, T2 is a supertype of T3, ..., and the tail of the list may be a variable. The point of the generalization is to allow cross-classification. Multiple daughters of a type node cross-classify it.

The lexicon in MODL includes a preprocessor for lexical entries which allows the original lexical entries to specify type constraints in a compact, non-redundant way. There is a Prolog representation for type-hierarchies, and the lexical preprocessor manufactures full type trees from a specification of their leaf nodes.
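A small sketch (ours; MODL's actual representation is not shown in the paper) of how the subordination test just described might be coded, with a toy 'ako' hierarchy:

   % T1 is subordinate to T2 if T1 is a variable, or the principal
   % functor of T1 is a subtype (ako closure) of that of T2.
   subordinate(T1,T2) <- var(T1) & /.
   subordinate(T1,T2) <-
      functor(T1,F1,N1) & functor(T2,F2,N2) &
      subtype(F1,F2).

   % subtype is the reflexive-transitive closure of ako.
   subtype(F,F).
   subtype(F,G) <- ako(F,H) & subtype(H,G).

   % toy hierarchy: man is a kind of human, human a kind of animal.
   ako(man,human).
   ako(human,animal).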
In the one-pass mode for analysis with MLG's, logical forms get built up during parsing, so logical forms are available for examination by semantic checking procedures of the sort outlined in McCord (1984). If such methods are arguably best, then there may be more argument for a one-pass system (with interleaving of semantics). The general question of the number of passes in a natural language understander is an interesting one. The MLG formalism makes this easier to investigate, because the same syntactic component can be used with one-pass or two-pass interpretation.

In MODL, there is a small dictionary stored directly in Prolog, but MODL is also interfaced to a large dictionary/morphology system (Byrd, 1983, 1984) which produces syntactic and morphological information for words based on over 70,000 lemmata. There are plans to include enough semantic information in this dictionary to provide semantic constraints for a large MLG.

Alexa McCray is working on the syntactic component for an MLG with very wide coverage. I wish to thank her for useful conversations about the nature of the system.

6. COMPARISON WITH OTHER SYSTEMS

The Restriction Grammars (RG's) of Hirschman and Puder (1982) are logic grammars that were designed with modularity in mind. Restriction Grammars derive from the Linguistic String Project (Sager, 1981). An RG consists of context-free phrase structure rules to which restrictions are appended. The rule compiler (written in Prolog and compiling into Prolog) sees to it that derivation trees are constructed automatically during parsing. The restrictions appended to the rules are basically Prolog procedures which can walk around, during the parse, in the partially constructed parse tree, and can look at the words remaining in the input stream. Thus there is a modularity between the phrase-structure parts of the syntax rules and the restrictions. The paper contains an interesting discussion of Prolog representations of parse trees that make it easy to walk around in them.

A disadvantage of RG's is that the automatically constructed analysis tree is just a derivation tree. With MLG's, the shift operator and the declaration of strong non-terminals produce analysis structures which are more appropriate semantically and are easier to read for large grammars. In addition, MLG analysis trees contain logical terminals as building blocks for a modular semantic interpretation system. The method of walking about in the partially constructed parse tree is powerful and is worth exploring further; but the more common way of exercising constraints in logic grammars by parameter passing and unification seems to be adequate linguistically and notationally more compact, as well as more efficient for the compiled Prolog program.

Another type of logic grammar developed with modularity in mind is the Definite Clause Translation Grammars (DCTG's) of Abramson (1984). These were inspired partially by RG's (Hirschman and Puder, 1982), by MSG's (Dahl and McCord, 1983), and by Attribute Grammars (Knuth, 1968). A DCTG rule is like a DCG rule with an appended list of clauses which compute the semantics of the node resulting from use of the rule. The non-terminals on the right-hand side of the syntactic portion of the rule can be indexed by variables, and these index variables can be used in the semantic portion to link to the syntactic portion. For example, the DCG rule

   sent(P) --> np(X,P1,P): vp(X,P1).

from the DCG in Section 2 has the DCTG equivalent:

   sent ::= np@N: vp@V
   <:>
   logic(P) ::- N@logic(X,P1,P) & V@logic(X,P1).

(Our notation is slightly different from Abramson's and is designed to fit the Prolog syntax of this report.) Here the indexing operator is '@'.
The syntactic portion is separated from the semantic portion by the operator '<:>'. The non-terminals in DCTG's can have arguments, as in DCG's, which could be used to exercise constraints (restrictions), but it is possible to do everything by referring to the indexing variables. The DCTG rule compiler sees to the automatic construction of a derivation tree, where each node is labeled not only by the expanded non-terminal but also by the list of clauses in the semantic portion of the expanding rule. These clauses can then be used in computing the semantics of the node. When an indexed non-terminal NT@X appears on the right-hand side of a rule, the indexing variable X gets instantiated to the tree node corresponding to the expansion of NT.

There is a definite separation of DCTG rules into a syntactic portion and a semantic portion, with a resulting increase of modularity. Procedures involving different sorts of constraints can be separated from one another, because of the device of referring to the indexing variables. However, it seems that once the reader (or writer) knows that certain variables in the DCG rule deal with the construction of logical forms, the original DCG rule is just as easy (if not easier) to read. The DCTG rule is definitely longer than the DCG rule. The corresponding MLG rule:

   sent :> np(X): vp(X).

is shorter, and does not need to mention logical forms at all. Of course, there are relevant portions of the semantic component that are applied in connection with this rule, but many parts of the semantic component are relevant to several syntax rules, thus reducing the total size of the system.

A claimed advantage for DCTG's is that the semantics for each rule is listed locally with each rule. There is certainly an appeal in that, because with MLG's (as well as the methods in McCord (1982, 1981)), the semantics seems to float off more on its own. Semantic items do have a life of their own, and they can move about in the tree (implicitly, in some versions of the semantic interpreter) because of raising and reordering. This is not as neat theoretically, but it seems more appropriate for capturing actual natural language. Another disadvantage of DCTG's (as with RG's) is that the analysis trees that are constructed automatically are derivation trees.

The last system to be discussed here, that in Porto and Filgueiras (1984), does not involve a new grammar formalism, but a methodology for writing DCG's. The authors define a notion of intermediate semantic representation (ISR) including entities and predications, where the predications can be viewed as logical forms. In writing DCG rules, one systematically includes at the end of the rule a call to a semantic procedure (specific to the given rule) which combines ISR's obtained in arguments of the non-terminals on the right-hand side of the rule. Two DCG rules in this style (given by the authors) are as follows:

   sent(S) --> np(N): vp(V): $ssv(N,V,S).
   vp(S) --> verb(V,trans): np(N): $svo(V,N,S).

Here 'ssv' and 'svo' are semantic procedures that are specific to the 'sent' rule and the 'vp' rule, respectively. The rules that define 'ssv' and 'svo' can include some general rules, but also a mass of very specific rules tied to specific words. Two specific rules given by the authors for analyzing "All Viennese composers wrote a waltz" are as follows.

   svo(wrote, M:X, wrote(X)) <- is_a(M,music).
   ssv(P:X, wrote(Y), author_of(Y,X)) <- is_a(P,person).
Note that the verb 'wrote' changes from the surface form 'wrote', to the intermediate form wrote(X), then to the form author_of(Y,X). In most logic grammar systems (including MOD and MODL), some form of argument filling is done for predicates; information is added by binding argument variables, rather than changing the whole form of the predication. The authors claim that it is less efficient to do argument filling, because one can make an early choice of a word sense which may lead to failure and backtracking. An intermediate form like wrote(X) above may only make a partial decision about the sense.

The value of the "changing" method over the "adding" method would appear to hinge a lot on the question of parse-time efficiency, because the "changing" method seems more complicated conceptually. It seems simpler to have the notion that there are word-senses which are predicates with a certain number of arguments, and to deal only with these, rather than inventing intermediate forms that help in discrimination during the parse. So it is partly an empirical question which would be decided after logic grammars dealing semantically with massive dictionaries are developed.

There is modularity in rules written in the style of Porto and Filgueiras, because all the semantic structure-building is concentrated in the semantic procedures added (by the grammar writer) at the ends of the rules. In MLG's, in the one-pass mode, the same semantic procedure call, to 'semant', is added at the ends of strong rules, automatically by the compiler. The diversity comes in the ancillary procedures for 'semant', especially 'mod'. In fact, 'mod' (or 'trans' in McCord, 1981) has something in common with the Porto-Filgueiras procedures in that it takes two intermediate representations (semantic items) in its first two arguments and produces a new intermediate representation in its third argument. However, the changes that 'mod' makes all involve the modification-operator components of semantic items, rather than the logical-form components. It might be interesting and worthwhile to look at a combination of the two approaches.

Both a strength and a weakness of the Porto-Filgueiras semantic procedures (compared with 'mod') is that there are many of them, associated with specific syntactic rules. The strength is that a specific procedure knows that it is looking at the "results" of a specific rule. But a weakness is that generalizations are missed. For example, modification by a quantified noun phrase (after slot-filling or the equivalent) is often the same, no matter where it comes from. The method in MLG's allows semantic items to move about and then act by one 'mod' rule. The reshaping procedures are free to look at specific syntactic information, even specific words when necessary, because they work with augmented semantic items. Of course, another disadvantage of the diversity of the Porto-Filgueiras procedures is that they must be explicitly added by the writer of syntax rules, so that there is not as much modularity as in MLG's.

REFERENCES

Abramson, H. (1984) "Definite clause translation grammars," Proc. 1984 International Symposium on Logic Programming, pp. 233-240, Atlantic City.

Byrd, R. J. (1983) "Word formation in natural language processing systems," Proc. 8th International Joint Conference on Artificial Intelligence, pp. 704-706, Karlsruhe.

Byrd, R. J. (1984) "The Ultimate Dictionary Users' Guide," IBM Research Internal Report.

Colmerauer, A.
(1978) "Metamorphosis grammars," in L. Bolt (Ed.), Natural Language Communication with Computers, Springer-Verlag. Dahl, V. (1977) "Un systeme deductif d'interrogation de banques de donnees en espagnol," Groupe d'Intelligence Artificielle, Univ. d'Aix-Marseille. Dahl, V. (1981) "Translating Spanish into logic through logic," American Journal of Computational Linguistics, vol. 7, pp. 149-164. Dahl, V, and HcCord, M. C. (1983) "Treating coor- dination in logic grammars," American Journal of Computational Linguistics , vol. 9, pp. 69-91. Heidorn, G. E. (1972) Natural Language Inputs to a Simulation Programming System, Naval Postgraduate School Technical Report No. NPS-55HD7210IA. Hirschman, ~. and Puder, K. (1982) "Restriction grammar in Prolog," Proc. First International Logic Programming Conference, pp. 85-90, Marseille. Jensen, K. and Heidorn, G. E. (1983) "The fitted parse: 100% parsing capability in a syntactic grammar of English," IBM Research Report RC 9729. Knuth, D. E. (1968) "Semantics of context-free languages," Mathematical Systems Theory, vol. 2, pp. 127-145. Lewis, D. (1975) "Adverbs of quantification," In E.L. Keenan (Ed.), Formal Semantics of Natural Language, pp. 3-15, Cambridge University Press. McCord, M. C. (1982) "Using slots and modifiers in logic grammars for natural language," Artificial Intelli~ence, vol 18, pp. 327-367. (Appeared first as 1980 Technical Report, University of Kentucky.) McCord, M. C. (1981) "Focalizers, the scoping problem, and semantic interpretation rules in logic grammars," Technical Report, University of Kentucky. To appear in Logic Programming and its Applications, D. Warren and M: van Caneghem, Eds. McCord, M. C. (1984) "Semantic interpretation for the EPISTLE system," Proc. Second International Logic Programming Conference, pp. 65-76, Uppsala. Miller, L. A., Heidorn, G. E., and Jensen, K. (1981) "Text-critiquing with the EPISTLE system: an author's aid to better syntax," AFIPS Conference Proceedings, vol. 50, pp. 649-655. Pereira, F. (1981) "Extraposition grammars," Amer- ican Journal of Computational Linguistics, vol. 7, pp. 243-256. Pereira, F. (1983) "Logic for natural language analysis," SRI International, Technical Note 275. Pereira, F. and Warren, D. (1980) "Definite clause grammars for language analysis - a survey of the formalism and a comparison with transition net- works," Artificial Intelligence , vol. 13, pp. 231-278. Pereira, F. and Warren, D. (1982) "An efficient easily adaptable system for interpreting natural language queries," American Journal of Computa- tional Linguistics, vol. 8, pp. 110-119. Porto, A. and Filgueiras, M. (1984) "Natural lan- guage semantics: A logic programming approach," Proc. 198A International Symposium on Logid Pro- gramming, pp. 228-232, Atlantic City. Sager, N. (1981) Natural Language Information Processing: A Computer Grammar of English and Its Applications, Addison-Wesley. Woods, W. A. (1970) "Transition network grammars for natural language analysis," C. ACM, vol. 13, pp. 591-606. 117
New Approaches to Parsing Conjunctions Using Prolog

Sandiway Fong
Robert C. Berwick
Artificial Intelligence Laboratory
M.I.T.
545 Technology Square
Cambridge MA 02139, U.S.A.

Abstract

Conjunctions are particularly difficult to parse in traditional, phrase-based grammars. This paper shows how a different representation, not based on tree structures, markedly improves the parsing problem for conjunctions. It modifies the union of phrase marker model proposed by Goodall [1981], where conjunction is considered as the linearization of a three-dimensional union of a non-tree based phrase marker representation. A PROLOG grammar for conjunctions using this new approach is given. It is far simpler and more transparent than a recent phrase-based extraposition parser for conjunctions by Dahl and McCord [1983]. Unlike the Dahl and McCord or ATN SYSCONJ approach, no special trail machinery is needed for conjunction, beyond that required for analyzing simple sentences. While of comparable efficiency, the new approach unifies under a single analysis a host of related constructions: respectively sentences, right node raising, or gapping. Another advantage is that it is also completely reversible (without cuts), and therefore can be used to generate sentences.

Introduction

The problem addressed in this paper is to construct a grammatical device for handling coordination in natural language that is well founded in linguistic theory and yet computationally attractive. The linguistic theory should be powerful enough to describe all of the phenomena in coordination, but also constrained enough to reject all ungrammatical examples without undue complications. It is difficult to achieve such a fine balance - especially since the term grammatical itself is highly subjective. Some examples of the kinds of phenomena that must be handled are shown in fig. 1. The theory should also be amenable to computer implementation. For example, the representation of the phrase marker should be conducive to both clean process description and efficient implementation of the associated operations as defined in the linguistic theory.

   John and Mary went to the pictures
      Simple constituent coordination
   The fox and the hound lived in the fox hole and
   kennel respectively
      Constituent coordination with the 'respectively'
      reading
   John and I like to program in Prolog and Hope
      Simple constituent coordination but can have a
      collective or respectively reading
   John likes but I hate bananas
      Non-constituent coordination
   Bill designs cars and Jack aeroplanes
      Gapping with 'respectively' reading
   The fox, the hound and the horse all went to market
      Multiple conjuncts
   *John sang loudly and a carol
      Violation of coordination of likes
   *Who did Peter see and the car?
      Violation of the coordinate structure constraint
   *I will catch Peter and John might the car
      Gapping, but component sentences contain unlike
      auxiliary verbs
   ?The president left before noon and at 2, Gorbachev

Fig 1: Example Sentences

The goal of the computer implementation is to produce a device that can both generate surface sentences given a phrase marker representation and derive a phrase marker representation given a surface sentence. The implementation should be as efficient as possible whilst preserving the essential properties of the linguistic theory.
We will present an implementation which is transparent to the grammar and perhaps cleaner and more modular than other systems such as the interpreter for the Modifier Structure Grammars (MSGs) of Dahl & McCord [1983]. The MSG system will be compared with a simplified implementation of the proposed device. A table showing the execution time of both systems for some sample
The admissibility predicates dominates zmd precedes delined on a set of monustrings with a single non-terminal string were inade- quate to describe 3-dimensional structure. B;~ically, Goodall's original idea w~ to extend the dominates ~m(l precedes predicates to handle RPMs un- der the set union operation. This resulted in the relations e-dominates ,'rod e-precedes ,xs shown in fig. 4 :- Assuming the definitions of fig. 2 and in addition let ~, f2, 0 E (~ O N)" and q, r, s, t, u E ]~', then ~o e-dominates xb in P if ~ dominates ~b I in P. X=w = ~'. e~/fl = Xb and = -- g in P. ~o e-precedes Xb in P if y lea* ~o in P. v lea* in P. qgr -~ s,~t in P. y ~ qgr and u ~ ~t where the relation - (terminal equiralence) is defined as :- z----pin P ifxzwEPandxyo~EP Figure 4: Extended definitions This extended definition, in particular - the notion of equivalence forms the baals of the computational device described in the next section, llowever since the size of" the RPM may be large, a direct implementation of the above definition of equivMence is not computationMly fe,'tsible. In the actual system, an optimized but equivalent alternative definition is used. Although these definitions suffice for most examples of coordination, it is not sufficiently constrained enough to reject stone ungr,'mzmatical examples. For exaanple, fig. 5 gives the RPM representation of "*John sang loudly and a carol" in terms of the union of the RPMs for the two constituent sentences :- John sang loudly John sang a carol { {John.sang.loudly, S, John.V.Ioudly, John.VP, John.sang.AP, NP.sang.loudly} {John.sang.a.carol, S, John.V.a.carol, John.VP, John.sang.NP, NP.sang.a.caroi } (When thcse two I[PM.q are merged some of the elements o[ the set do not satisfy La.snik & gupin '~ ongimd deA- uitiou - thc.~e [rdrs arc :-) {John.sang.loudly. John sanff.a.carol} {John.V.loudly. John.V.a.carol} {NP.sang.loudly. NP.sang.a.carol} (N,m. o[ the show: I~xirs .~lt/.st'y the e-dominates prw/i- rate - but Lhcy all .~tisfy e-precedes and hence the sen- tcm:e Js ac~eptc~l as .~, RI'M.) Fig.5: An example ot" union o[ RPMs The above example indicates that the extended RPM definition of Goodall Mlows some ungrammatical sentences to slip through. Although the device preseuted in the next section doesn't make direct use of the extended definitions, the notion of equivMence is central to the implementation. The basic system described in the next section does have this deficiency but a less simplistic version described later is more constrained - at the cost of some computational efficiency. Linearization and Equivalence Although a theory of coordination ham been described in the previous sections - in order for the theory to be put into practice, there remain two important questions to be answered :- • I-low to produce surface strings from a set of sentences to be conjoined? • tlow to produce a set of simple sentences (i.e. sen- tences without co,junct.ions) from ~ conjoined surface string? This section will show that the processes ot" //n- e~zation and finding equivalences provide an answer to both questions. For simplicity in the following discussion, we assume that the number of simple sentences to be con- joined is two only. The processes of linearization ~md 6riding equiva- lences for generation can be defined as :- Given a set of sentences and a set of candidates which represent the set of conjoinable pairs for those sentences, llnearizatinn will output one or more surface strings according to a fixed proce- dure. 
Given a set of sentences, findinff equivalences will prodnce a set o( conjoinable pairs according to the definition of equivalence o# the linguistic theory. [;'or genera.Lion the second process (linding equiva- lences) iu caJled first to generate a set of (:andidates which is then used in the first, process (linearization) to generate the s.rface strings. For parsing, the definitions still hold - but the processes are applied in reverse order. To illustrate the procedure for linearization, con- sider the following example of a set of simple sentences (fig. 0) :. 120 { John liked ice-cream. Mary liked chocolate} ~t of .~imple senteuces {{John. Mary}. {ice-cream. chocolate}} set ,ff ctmjoinable pairs Fig 6: Example of a set of simple sentences Consider tile plan view of the 3-dimensional repre- aentation of the union of the two simple sentences shown in fig. 7 :- "~. ~ice-cream John liked Mary .- ~-- chocolate Fig 7: Example o[ 3-dimensional structure The procedure of linearization would t~tke the foi- l.wing path shown by the arrows in fig. 8 :- John . ~~.-cream M~--" " chocolate Fig 8: Rxample of linearization F~dlowin K the path shown we obtain the surface siring "John and Mary liked ice-cream and chocolate". The set of conjoinable pairs is produced by the pro- cess of [inding equivalences. The definition of i:quivalence as given in the description of the extended RPM requires the general.ion of the combined R.PM of the constituent sen- lances. However it can be shown [I,'ong??] by considering the constraints impc,sed by the delinitions of equivalence and linc:trization, that tile same set of equivalent terminal string.~ can be produced just by using the terminal strings of the RI*M alone. There ;tre consider;Lble savings of compu- tatioaal resources in not having to compare every element of the set with every other element to generate all possible equivalent strings - which would take O(n ~) time - where n is the cardinality of the set. The corresponding term for the modified definition (given in the next sectiou) is O(1). The Implementation in Prolog This section describes a runnable specification written in Prolog. The specification described also forms the basis for comparison with the MSG interpreter of Dahl aud Me- Cord. The syntax of the clauses to be presented is similar to the Dec-10 Prolog [Bowen et a1.19821 version. The main differences are :- • The symbols %" and ~," have been replaced by the more meaningful reserved words "if" and "and" re- spectively. • The symbol "." is used ,as the list constructor and "nil" is ,,sed to represent the empty list. • ,in an example, a Prolog clause may have the fornt :- a(X V ... Z) ir b(U v ... W) a~d c(R S ... T) where a,b & c are predicate names and R,S,...,Z may represent variables, constants or terms. (Variables are ,listinguished by capitalization of the first charac- ter in the variable name.) The intended logical read- ing of tile clause is :- "a" holds if "b" and "c" both hold for consistent bindings of the arguments X, Y,...,Z, U, V,..., W, R,S,...,T • Cmnments (shown in italics) may be interspersed be- tween tile argamaents in a clause. Parse and Generate In tile previous section tile processes of linearization and linding equivalences are described ;m tile two compo- nents necessary for parsing and generating conjoined sen- testes. We will show how Lhese processes can be combined to produce a parser and a generator. 
The device used for comparison with Dahl & McCord scheme is a simplified version of the device presented in this section. First, difference lists are used to represent strings in the following sections. For example, the pair (fig. 9) :- 121 { john.liked.ice-cream.Continuation. Continuation} Fig g: Example of a difference list is a difference list representation of the sentence "John liked ice-cream". We can :tow introduce two predicates linearize and equivaleutpalrs which correspond to the processes uf lia- earization uJl(l liuding equivalences respectively (fig. 10) :- linearize( pairs S1 El and 52 E2 candidates Set yivcs Sentence) Linearize hohls when a pair of difference lists ({S1. EL} & {S2. E2)) and a set ,,f candidates (Set) arc consistent with the string (Sentence) as dellned by the procedure given in the previ- ous section. equivahmtpairs( X Y fi'om S1 $2) Equivalentpairs hohls when a ~uhstring X of S1 is equivalent to a substring Y of $2 accordhtg to the delinition of equivalence in the linguistic theory. The definitions fi~r parsing ,'utd generating are al- most logically equivalent. Ilowever the sub-goals for p~s- ing are in reverse order to the sub-goals for generating - since the Prolog interpreter would attempt to solve the sub-goals in a left to right manner. Furthc'rmore, the sub- set relation rather than set equality is used in the definition for parsing. We can interpret the two definitions ~ follows (fig. t2):- Generate holds when Sentence is the con- joined sentence resulting/'ram the linearization of the pair of dilFerence lists (Sl. nil) and (52. nil) using as candidate pairs for conjoining, the set o£ non-redundant pairs of equivalent termi- nal strings (Set). Parse holds when Sentence is the conjoined set, tence resulting from the linearization of the pair of dilference lists (S1. El) anti ($2. E2) provided that the set of candidate pairs for con- joining (Subset) is a subset of the set of pairs of equivalent terminal strings (Set). Fig 12: Logical readhtg for generate & parse Fig 10: Predicates llneari~.e & equivalentpairs Additionally, let the mete-logical predicate ~etof as in "setof(l~lement Goal Set)" hohl when Set is composed of chin,eats c~f the form Element anti that Set contains all in,: auccs of Element I, hat satisfy the goal Goal. The pred- icates generate can now be defined in terms of these two processes as folluws (lig. t t) :- generate(Sentence from St 52) if sctol(X.Y.nil in equivalentpairs(X Y from SI $2) is Set) andlinearize( pair~: St nil anti S2 nil candidtttes Set 9ires Sentence) parse~ Sentence 9iota9 S1 El) if Ijnearize(pairs SI E1 avd $2 E2 candidate.~ SuhSet 9ives Sentence) nndsctot(X.¥ nil in cquivalentpairs(X Y from S1 $2) ia Set) Fig 1 !: Prolog dclinition for generate ~. parse The subset relation is needed for the above defini- tion of parsing hecause it can be shown [Fong?? l that the process of linearization is more constrained (in terms of the p,.rn~issible conjoinable pairs) than the process of tinding eqnivalences. Linearize We can also fashion a logic specification for the process of line~tt'izatiou in the same manner. In this section we will describe the cases corresponding to each Prolog clause necessary in the specification of [inearization. However, ,'or sitnplicity the actual Prolog code is not shown here. (See Appendix A tbr the delinition of predicate Iinearize.) 
Ill the following discussion we assume that tile tem- plate for predicate Iinearize has the form "linearize( pairs Sl El and 52 E2 rand,tides Set gives Sentence)" shown previously in tig. I0. There are three independent cases to con:rider durivg !incariz~tion f- t. The Base Case. If the two ,lilrcrence tist~ ({S1. El} & {S2. E2}) are both empty then the conjoined string (Sentence) is also entpty. This siml,ly sta.tes that if two empty strings arc conjoint:d then the resttit is also an empty string. 122 2. Identical Leading Substrlngs. The second case occurs wheTt the two (non-eml)ty) difference lists have identical leading non-empty sub- strings. Then the coni-ined string is identical to the concatenation of that leading substring with the lin- eari~.ation of the rest of th,: two difference lists. For example, consider the linearization of the two flag- ments "likes Mary" and "likes Jill" as shown in fig. 13 .. {likes Mary. likes Jill} which can be. lineariz~:d a~ :- {likes X} where X is the linearization of strings {Mary. Jill} l'Tg. 13: Example of identical leading substrings 3. Conjohfing. The last case occurs when the two pairs of (qon- empty) difference lists have no common leading sub- string, llere, the conjoined string will be the co,t- catenation nf the co.junctinn of one of the pairs from the candidate set, with the conjoined sqring resulting fr~nl the line;trization of the two strings with their re- spective candidate substrings deleted. For example, consider the linearization -f the two sentences "John likes Mary" aitd "Bill likes Jill" a~ shown in fig. 14 :- {John likes Mary. Bill likes Jill} Given th,t the .~elertt:,l ,',ltdi,l,tc lmir is {John. Bill}, the c,,sj,,,',,:,l :;,rtdt ,,'e ~;:,ul.l Iw :- what linearizations the system would produce for an ex- ample sentence. Consider the sentence "John and Bill liked Mary" (fig. 15) :- {John and Bill liked Mary} would produce the string:. {John and Bill liked Mary. John and Bill liked Mary} with candidate set {} { John liked Mary, Bill liked Mary} with candidate set {(John, Bill)} {John Mary. Bill liked Mary} with candidate set {(John. Bill liked)} {John. Bill liked Mary} with candidate set {(John. Bill liked Mary)} Fig. 15: Example of linearizations All of the strings ,'ire then passed to the predicate findequivalences which shouhl pick out the second pair of strings as the only grammatically correct linearization. Finding Equiwdences (.;oodall's delinition of eqnivalence w,'~s that two termi- nal strings were said to be equivalent if they h;ul the same left and right contexts. Furthermore we had previously a.s- sertcd th;~t the equivaleut pairs couhl be l}roduced without ~earching the whole RI'M. For example consider the equiv- ah.nt lernnimd strings in the two sentences "Alice saw Bill" an,J "Mary saw Bill" (fig. 16) :- {John and Bill X.} where X is tl~e linearization of ~;trin~,s {likes Mary, likes .Jill} Fig. 1,1: [';xaml~ic of ,:,mj,iui,g ..mh.st, rin,,,,.,; There are S,.hC i,ul~h~,.c.t;dic.= d,:t;tils Lhat are dlf- r,~re.t for parsi.g tc~ ge,er:ttinK. (~ec al~l~,ndi.'c A.) llowcver the fierce cases :u'e the sanonc for hoth. We cast illusl, r;ll.e the :tl~¢~v,; dc:llntili,m by she=wing {Alice saw Bill. Mary saw Bill} would prt.hwr the, equiwdrnt pairs :- {Alice saw Bill. Mary saw Bill} {Alice, Mary} {Alice saw. Mary saw} l"ig. 16: l'Jxatuple of equivalent pairs Wc also make tile rollowing restriction.~ on Goodall's definition :- 123 • If there exists two terminal strings X & Y such that X-'=xxfl & Y--xYf'/, then X &. 
1"~ should be the strongest possible left ~ right contexts respectively - provided x & y axe both nonempty. In the above example, x--nil and fl="saw Bill", so the first a.ud the third pairs produced are redundant. In general, a pair of terminal strings are redundant if they have the form (uv, uw) or (uv, zv), in which case - they may be replaced by the pairs (v, w) ~ad (u, z) respectively. • Ia Goodall's definition any two terminal strings them- selves are also a pair of equivalent terminal strings ( whe, X & f2 ,are both ,ull). We exclude this case it produces simple string concatenation of sentences. The above restrictions imply that in fig. 16 the only remai,ing equivalent pair ({Alice. Mary})is the correct one for tl, is example. However, before fiuding eq,ivalent pairs for two simple zenlences, the ittocess ,,f fimli, g ,quiv.,lel, ces ,nlust check that the two se,tt,;nces ate actually gral,tlllatical. We ;msuune thnt a recot;nizer/i,arser (e.g. a predicate parse(S El) alremly exists for determining the grammaticality of ~itnple ~entenccs. Since the proct'ss only requires a yes/no answer to gramnmtic;dity, any parsing or recognition sys- l.e;,t f,,r simple sentences can be used. We can now specify a l,redicate lindcandi(lates(X Y SI $2) that hohls when {X. Y} is an equiw,hmt pair front the two grantmatical simple .:e,te,ces {SI. $2} .~ f, llows (li!,¢. 17):- findcandidates(X and Y in SI and $2) ir parse(Sl nil) ilnld parse(S2 nil) and eqlniv(X Y SL $2) wh,.rc eqt,iv is ,h'fit~,'d as :. ~q.iv(X Y X1 YI) if append3(Chi X Omega Xl) and ternfinals(X) and append3(C.hi Y Omega YI) and terminals(Y) :vh,'r,' :q,t,',,,IS(L! L2 I..'~ L 1) h,,hls wh,.n L.I i:" ,',l,ml ;o th,. c',,tJ,'nl,'t~;tli,,tl ,,f I.I.L2 .~: 1.3. h'rminzd.~(X) holds when X i.'~ n li..t ,,1' t,'rtztinnl .~yml,,,Is ouly Fig. l 7: Logic delit, itiolz .f Fi.:lcntldirh, Les Then the predicate findcquivalencos is simply de- fined ;t~ (fig. 18) :- findequivalences(X and Y in S1 and $2) if findcandidates(X and Y in S1 and $2) and not redundant(X Y) wl.,re redundant implements the two restrictions described. Fig.18: Logic definition of Findeq,ivalences Comparison with MSGs The following table (fig. 19) gives tile execution times in milliseconds for the parsing of some sample sentences mostly taken from Dahl 0~ McCor(l [1983]. Both systems were executed using Dec-20 Prolog. The times shown for the MSG interpreter is hazed on the time taken to parse ,'rod buihl the syntactic tree only - the time for the subsequent transformations w,-~s not ,,chided. Sample / MSG RPM ences J system device Each m;ul ate an apish ° ;~.lld ;t pear [ 662 292 .Iolm at,, ~lt appl,, and a pear [ 613 233 f Z~k ;t,I ;Ll,ll ;1 WOIIU~.,, ~ilW o;i{'h trttill I Eiit'h ll,;lll ;tllll ,'ach wl|l,llt|t at(' l ,"m pple J,~hll saw and the woman heard a a, lhat laughed .]ohn drov,. Ihe car through and ct)m ~h.lt'ly demolishe, l a window "rh,, woa,t;tl, wit,) gav(" a l),~ok to .John and dr,we ;L car through .'L window laugh~l .h,hn .~aw the ,ltltll |.hiLt Mary .~aw and Bill gay,. a bo,,k t,, hutght~d .l.hnt .~aw the man lhat lu.;trd the wotnaH rhar lattglu'd and ~aw Bill Th,. ,,tan lh;d Mary saw and h(.ard ~;LVI' ,'~.ll ;).llllll" t,I ,,;[l'h ~viHlla[~ .h,htl mtw a /uul Mary .~aw the red pear 319 506 320 503 788 83'i 275 1032 I --1007 3375 .139 3It 636 323 i sot ,9~, 726 770i! Fig. 
Comparison with MSGs

The following table (fig. 19) gives the execution times in milliseconds for the parsing of some sample sentences, mostly taken from Dahl & McCord [1983]. Both systems were executed using Dec-20 Prolog. The times shown for the MSG interpreter are based on the time taken to parse and build the syntactic tree only; the time for the subsequent transformations was not included. Cells whose digits are not legible in the source are marked "?".

    Sample sentences                                        MSG system   RPM device
    Each man ate an apple and a pear                             662          292
    John ate an apple and a pear                                 613          233
    A man and a woman saw each train                             319          506
    Each man and each woman ate an apple                         320          503
    John saw and the woman heard a man that laughed              788            ?
    John drove the car through and completely
      demolished a window                                        275         1032
    The woman who gave a book to John and drove
      a car through a window laughed                            1007         3375
    John saw the man that Mary saw and Bill gave
      a book to laughed                                            ?            ?
    John saw the man that heard the woman that
      laughed and saw Bill                                       636          323
    The man that Mary saw and heard gave an apple
      to each woman                                                ?            ?
    John saw a and Mary saw the red pear                         726          770

    Fig. 19: Timings for some sample sentences

From the timings we can conclude that the proposed device is comparable to the MSG system in terms of computational efficiency. However, there are some other advantages, such as :-

• Transparency of the grammar - there is no need for phrasal rules such as "S -> S and S". The device also allows non-phrasal conjunction.

• Since no special grammar or particular phrase marker representation is required, any parser can be used - the device only requires an accept/reject answer.

• The specification is not biased with respect to parsing or generation. The implementation is reversible, allowing it to generate any sentence it can parse and vice versa.

• Modularity of the device. The grammaticality of sentences with conjunction is determined by the definition of equivalence. For instance, if needed we can filter the equivalent terminals using semantics.

A Note on SYSCONJ

It is worthwhile to compare the phrase marker approach to the ATN-based SYSCONJ mechanism. Like SYSCONJ, our analysis is extragrammatical: we do not tamper with the basic grammar, but add a new component that handles conjunction. Unlike SYSCONJ, our approach is based on a precise definition of "equivalent phrases" that attempts to unify under one analysis many different types of coordination phenomena. SYSCONJ relied on a rather complicated, interrupt-driven method that restarted sentence analysis in some previously recorded machine configuration, but with the input sequence following the conjunction. This captures part of the "multiple planes" analysis of the phrase marker approach, but without a precise notion of equivalent phrases. Perhaps as a result, SYSCONJ handled only ordinary conjunction, and not respectively or gapping readings. In our approach, a simple change to the linearization process allows us to handle gapping.

Extensions to the Basic Device

The device described in the previous section is a simplified version for rough comparison with the MSG interpreter. However, the system can easily be generalized to handle multiple conjuncts. The only additional phase required is to generate templates for multiple readings. Also, gapping can be handled just by adding clauses to the definition of linearize, which allows a different path from that of fig. 8 to be taken.

The simplified device permits some examples of ungrammatical sentences to be parsed as if correct (fig. 5). The modularity of the system allows us to constrain the definition of equivalence still further. The extended definitions in Goodall's draft theory were not included in his thesis [Goodall84], presumably because they were not constrained enough. However, in his thesis he proposes another definition of grammaticality using RPMs. This definition can be used to constrain equivalence still further in our system at a loss of some efficiency and generality. For example, the required additional predicate will need to make explicit use of the combined RPM. Therefore, a parser will need to produce a RPM representation as its phrase marker. The modifications necessary to produce the representation are shown in appendix B.
Acknowledgements

This work describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research has been provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-80-C-0505. The first author is also funded by a scholarship from the Kennedy Memorial Trust.

References

Bowen et al: D.L. Bowen (ed.), L. Byrd, F.C.N. Pereira, L.M. Pereira, D.H.D. Warren. DECsystem-10 Prolog User's Manual. University of Edinburgh. 1982.

Dahl & McCord: V. Dahl and M.C. McCord. Treating Coordination in Logic Grammars. American Journal of Computational Linguistics. Vol. 9, No. 2 (1983).

Fong??: Sandiway Fong. To appear in S.M. thesis - "Specifying Coordination in Logic" - 1985.

Goodall??: Grant Todd Goodall. Draft - Chapter 3 (sections 2.1 to 2.7) - Coordination.

Goodall84: Grant Todd Goodall. Parallel Structures in Syntax. Ph.D thesis. University of California, San Diego (1984).

Lasnik & Kupin: H. Lasnik and J. Kupin. A restrictive theory of transformational grammar. Theoretical Linguistics 4 (1977).

Appendix A: Linearization

The full Prolog specification for the predicate linearize is given below.

    /* Linearize for generation */

    /* terminating condition */
    linearize(pairs S1 S1 and S2 S2 candidates List giving nil) if
        nonvar(List)

    /* applicable when we have a common substring */
    linearize(pairs S1 E1 and S2 E2 candidates List giving Sentence) if
        var(Sentence)
        and not same(S1 as E1)
        and not same(S2 as E2)
        and similar(S1 to S2 common Similar)
        and not same(Similar as nil)
        and remove(Similar from S1 leaving NewS1)
        and remove(Similar from S2 leaving NewS2)
        and linearize(pairs NewS1 E1 and NewS2 E2 candidates List giving RestOfSentence)
        and append(Similar RestOfSentence Sentence)

    /* conjoin two substrings */
    linearize(pairs S1 E1 and S2 E2 candidates List giving Sentence) if
        var(Sentence)
        and member(Cand1.Cand2.nil of List)
        and not same(S1 as E1)
        and not same(S2 as E2)
        and remove(Cand1 from S1 leaving NewS1)
        and remove(Cand2 from S2 leaving NewS2)
        and conjoin(list Cand1.Cand2.nil using 'and' giving Conjoined)
        and delete(Cand1.Cand2.nil from List leaving NewList)
        and linearize(pairs NewS1 E1 and NewS2 E2 candidates NewList giving RestOfSentence)
        and append(Conjoined RestOfSentence Sentence)

    /* Linearize for parsing */

    /* Terminating case */
    linearize(pairs nil nil and nil nil candidates List giving nil) if
        var(List)
        and same(List as nil)

    /* Case for common substring */
    linearize(pairs Common.NewS1 nil and Common.NewS2 nil candidates List giving Sentence) if
        nonvar(Sentence)
        and same(Common.RestOfSentence as Sentence)
        and linearize(pairs NewS1 nil and NewS2 nil candidates List giving RestOfSentence)

    /* Case for conjoin */
    linearize(pairs S1 nil and S2 nil candidates Element.Rest giving Sentence) if
        nonvar(Sentence)
        and append*(Conjoined to RestOfSentence giving Sentence)
        and conjoin(list Element using 'and' giving Conjoined)
        and same(Element as Cand1.Cand2.nil)
        and not same(Cand1 as nil)
        and not same(Cand2 as nil)
        and linearize(pairs NewS1 nil and NewS2 nil candidates Rest giving RestOfSentence)
        and append(Cand1 NewS1 S1)
        and append(Cand2 NewS2 S2)

    /* append* is a special form of append except that
       the first list must be non-empty */
    append*(Head.nil to Tail giving Head.Tail)
    append*(First.Second.Others to Tail giving First.Rest) if
        append*(Second.Others to Tail giving Rest)

    similar(nil to nil common nil)
    similar(Head1.Tail1 to Head2.Tail2 common nil) if
        not same(Head1 as Head2)
    similar(Head.Tail1 to Head.Tail2 common Head.Rest) if
        similar(Tail1 to Tail2 common Rest)

    /* conjoin is reversible */
    conjoin(list First.Second.nil using Conjunct giving Conjoined) if
        nonvar(First)
        and nonvar(Second)
        and append(First Conjunct.Second Conjoined)
    conjoin(list First.Second.nil using Conjunct giving Conjoined) if
        nonvar(Conjoined)
        and append(First Conjunct.Second Conjoined)

    remove(nil from List leaving List)
    remove(Head.Tail from Head.Rest leaving List) if
        remove(Tail from Rest leaving List)

    delete(Head from nil leaving nil)
    delete(Head from Head.Tail leaving Tail)
    delete(Head from First.Rest leaving First.Tail) if
        not same(Head as First)
        and delete(Head from Rest leaving Tail)

Appendix B: Building the RPM

A RPM representation can be built by adding three extra parameters to each grammar rule, together with a call to a concatenation routine. For example, consider the verb phrase "liked Mary" from the simple sentence "John liked Mary". The monostring corresponding to the non-terminal VP is constructed by taking the left and right contexts of "liked Mary" and placing the non-terminal symbol VP in between them. In general, we have something of the form :-

    phrase(from Point1 to Point2 using Start to End giving MS.RPM) if
        isphrase(Point1 to Point2 RPM)
        and buildmonostring(Start Point1 plus 'VP' Point2 End MS)

where the difference pairs {Start. Point1}, {Point2. End} and {Start. End} represent the left context, the right context and the sentence string respectively. The concatenation routine buildmonostring is just :-

    buildmonostring(Start Point1 plus NonTerminal Point2 End MS) if
        append(Point1 Left Start)
        and append(Point2 Right End)
        and append(Left NonTerminal.Right MS)
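Read with the convention used elsewhere in the appendices that append(A B C) holds when C is the concatenation of A and B, buildmonostring amounts to the following standard-Prolog sketch over plain lists (our reading of the argument order; the query shows the VP example above) :-

    % Left and Right are the left and right contexts of the phrase
    % spanning Point1..Point2 within the sentence Start..End, and
    % MS is Left ++ [NonTerminal] ++ Right.
    buildmonostring(Start, Point1, NT, Point2, End, MS) :-
        append(Left, Point1, Start),
        append(Right, End, Point2),
        append(Left, [NT|Right], MS).

    % ?- buildmonostring([john,liked,mary], [liked,mary], 'VP', [], [], MS).
    % MS = [john, 'VP'], the monostring for VP in "John liked Mary".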
Parsing with Discontinuous Constituents

Mark Johnson
Center for the Study of Language and Information
and Department of Linguistics, Stanford University

Abstract

By generalizing the notion of location of a constituent to allow discontinuous locations, one can describe the discontinuous constituents of non-configurational languages. These discontinuous constituents can be described by a variant of definite clause grammars, and these grammars can be used in conjunction with a proof procedure to create a parser for non-configurational languages.

1. Introduction

In this paper I discuss the problem of describing and computationally processing the discontinuous constituents of non-configurational languages. In these languages the grammatical function that an argument plays in the clause or the sentence is not determined by its position or configuration in the sentence, as it is in configurational languages like English, but rather by some kind of morphological marking on the argument or on the verb. Word order in non-configurational languages is often extremely free: it has been claimed that if some string of words S is grammatical in one of these languages, then the string S' formed by any arbitrary permutation of the words in S is also grammatical. Most attempts to describe this word order freedom take mechanisms designed to handle fairly rigid word order systems and modify them in order to account for the greater word order freedom of non-configurational languages. Although it is doubtful whether any natural language ever exhibits such total scrambling, it is interesting to investigate the computational and linguistic implications of systems that allow a high degree of word order freedom. So the approach here is the opposite to the usual one: I start with a system which, unconstrained, allows for unrestricted permutation of the words of a sentence, and capture any word order regularities the language may have by adding restrictions to the system. The extremely free word order of non-configurational languages is described by allowing constituents to have discontinuous locations. To demonstrate that it is possible to parse with such discontinuous constituents, I show how they can be incorporated into a variant of definite clause grammars, and that these grammars can be used in conjunction with a proof procedure, such as Earley deduction, to construct a parser, as shown in Pereira and Warren (1983).

This paper is organized as follows: section 2 contains an informal introduction to Definite Clause Grammars and discusses how they can be used in parsing, section 3 gives a brief description of some of the grammatical features of one non-configurational language, Guugu Yimidhirr, and section 4 presents a definite clause fragment for this language and shows how this can be used for parsing. Section 5 notes that the use of discontinuous constituents is not limited to definite clause grammars, but that they could be incorporated into such disparate formalisms as GPSG, LFG or GB. Section 6 discusses whether a unified account of parsing both configurational and non-configurational languages can be given, and section 7 compares the notion of discontinuous constituents with other approaches to free word order.

2. Definite Clause Grammars and Parsing

In this section I show how to represent both an utterance and a context free grammar (CFG) so that the locations of constituents are explicitly represented in the grammar formalism.
Given this, it will be easy to generalize the notion of location so that it can describe the discontinuous constituents of non-configurational languages. The formalism I use here is the Definite Clause Grammar formalism described in Clocksin and Mellish (1984). To familiarize the reader with the DCG notation, I discuss a fragment for English in this section. In fact, the DCG representation is even more general than is brought out here: as Pereira and Warren (1983) demonstrated, one can view parsing algorithms as highly specialized proof procedures, and the process of parsing as logical inferencing on the representation of the utterance, with the grammar functioning as the axioms of the logical system.

Given a context free grammar, as in (1), the parsing problem is to determine whether a particular utterance, such as the one in (2), is an S with respect to it.

    (1)  S   -> NP VP
         VP  -> V NP
         NP  -> Det N
         Det -> NP [+Gen]

    (2)  0 the 1 boy's 2 father 3 hit 4 the 5 dog 6

The subscripts in (2) serve to locate the lexical items: they indicate that, for instance, the utterance of the word dog began at time t5 and ended at time t6. That is, the location of the utterance "dog" in example (2) was the interval [t5,t6]. I interpret the subscripts as the points in time that segment the utterance into individual words or morphemes. Note that they perform the same function as the vertices of a standard chart parsing system.

Parsing (2) is the same as searching for an S node that dominates the entire string, i.e. whose location is [0,6]. By looking at the rules in (1), we see that an S node is composed of an NP and a VP node. The interpretation conventions associated with phrase structure rules like those in (1) tell us this, and also tell us that the location of the S is the concatenation of the location of the NP and the VP. That is, the existence of an S node located at [0,6] would be implied by the existence of an NP node located at interval [0,x] (x a variable) and a VP node located at [x,6].

The relationship between the mother constituent's location and those of its daughters is made explicit in the definite clause grammar, shown in (3), that corresponds to the CFG (1). The utterance (2) (after lexical analysis) would be represented as in (4). Those familiar with Prolog should note that I have reversed the usual orthographic convention by writing variables with a lower case initial letter (they are also italicized), while constants begin with an upper case letter.

    (3)  S(x,z)       <- NP(x,y,0) & VP(y,z).
         VP(x,z)      <- V(x,y) & NP(y,z,0).
         NP(x,z,case) <- Det(x,y) & N(y,z,case).
         Det(x,y)     <- NP(x,y,Gen)

    (4)  Det(0,1). N(1,2,Gen). N(2,3,0). V(3,4). Det(4,5). N(5,6,0).

(3) contains four definite clauses; each corresponds to one phrase structure rule in the CFG (1). The major difference for us between (1) and (3) is that in (3) the locations of the constituents (i.e. the endpoints) are explicit in the formalism: they are arguments of the constituents, the first argument being the beginning time of the constituent's location, the second argument its ending time. Also note the way that syntactic features are treated in this system: the difference between genitive and non-genitive case marked nouns is indicated by a third argument in both the N and NP constituents. Genitive nouns and noun phrases have the value Gen for their third argument, while non-genitive NPs have the value 0, and the rule that expands NP explicitly passes the case of the mother to the head daughter.(1)

(1) Of course, there is nothing special about these two values: any two distinct values would have done.
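In ordinary Prolog syntax (which reverses the paper's convention, so variables are upper case), (3) and (4) can be run directly; this transcription is ours, with the atom none standing in for the paper's 0 value :-

    % Grammar (3) as Prolog clauses over string positions.
    s(X, Z)     :- np(X, Y, none), vp(Y, Z).
    vp(X, Z)    :- v(X, Y), np(Y, Z, none).
    np(X, Z, C) :- det(X, Y), n(Y, Z, C).

    % Lexical facts (4) for "the boy's father hit the dog"; listing
    % them before the Det -> NP[+Gen] rule lets the query below
    % succeed before left recursion is explored.
    det(0, 1).
    det(4, 5).
    det(X, Y)   :- np(X, Y, gen).

    n(1, 2, gen).
    n(2, 3, none).
    n(5, 6, none).
    v(3, 4).

    % ?- s(0, 6).  succeeds, finding the parse; but exhaustive
    % backtracking into det/np left-recurses, which is exactly the
    % looping problem discussed below.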
We can use (3) and (4) to make inferences about the existence of constituents in utterance (2). For example, using the rule that expands NP in (3) together with the first two facts of (4), we can infer the existence of constituent NP(0,2,Gen). The simplest approach to parsing is probably to use (3) in a top-down fashion, and start by searching for an S with location [0,6]; that is, search for the goal S(0,6). This method, top-down recursive descent, is the method the programming language Prolog uses to perform deduction on definite clause systems, so Prolog can easily be used to make very efficient top-down parsers.

Unfortunately, despite their intuitive simplicity, top-down recursive descent parsers have certain properties that make them less than optimal for handling natural language phenomena. Unless the grammar the parser is using is very specially crafted, these parsers tend to go into infinite loops. For example, the rule that expands NP into Det and N in (3) above would be used by a top-down parser to create a Det subgoal from an NP goal. But Det itself can be expanded as a genitive NP, so the parser would create another NP subgoal from the Det subgoal, and so on infinitely. The problem is that the parser is searching for the same NP many times over: what is needed is a strategy that reduces multiple searches for the same item to a single search, and arranges to share its results. Earley deduction, based on the Earley parsing algorithm, is capable of doing this. For reasons of time, I won't go into details of Earley Deduction (see Pereira and Warren (1983) for details); I will simply note here that using Earley Deduction on the definite clause grammar in (3) results in behaviour that corresponds exactly to the way an Earley chart parser would parse (1).

3. Non-configurational Languages

In this section I identify some of the properties of non-configurational languages. Since this is a paper on discontinuous constituents, I focus on word order properties, as exemplified in the non-configurational language Guugu Yimidhirr. The treatment here is necessarily superficial: I have completely ignored many complex phonological, inflectional and syntactic processes that a complete grammar would have to deal with.

A non-configurational language differs from configurational languages like English in that morphological form (e.g. affixes), rather than position (i.e. configuration), indicates which words are syntactically connected to each other. In English the grammatical, and hence semantic, relationships between boy, father and dog in (5) are indicated in surface form by their positions, and changing these positions changes these relationships, and hence the meaning, as in (6).

    (5) The boy's father hit the dog
    (6) The father's dog hit the boy
    (7) Yarraga-aga-mu-n gudaa gunda-y biiba-ngun
        boy-GEN-mu-ERG dog+ABS hit-PAST father-ERG
        'The boy's father hit the dog'(2)

(2) All examples are from Haviland (1979). The constructions shown here are used to indicate alienable possession (which includes kinship relationships).

In Guugu Yimidhirr, an Australian language spoken in north-east Queensland, the relationships in (7) are indicated by the affixes on the various nouns, and to change the relationships one would have to change the affixes. The idea, then, is that in these languages morphological form plays the same role that word order does in a configurational language like English. One might suspect that word order would be rather irrelevant in a non-configurational language, and in fact Guugu Yimidhirr speakers remark that their language, unlike English, can be spoken 'back to front': that is, it is possible to scramble words and still produce a grammatical utterance (Haviland 1979, p. 26).

Interestingly, in some Guugu Yimidhirr constructions it appears that information about grammatical relations can be obtained either through word order or morphology: in the possessive construction

    When a complex NP carries case inflection, each element (in this case, both possession and possessive expression) may bear case inflection - and both must be inflected for case if they are not contiguous - but frequently the 'head noun' (the possession) [directly MJ] precedes the possessive expression, and only the latter has explicit case inflection (Haviland 1979, p. 56)

Thus in (8), biiba 'father' shows up without an ergative suffix because it is immediately to the left of the NP that possesses it (i.e. possession is indicated by position).

    (8) Biiba yarraga-aga-mu-n gudaa gunda-y
        father boy-GEN-mu-ERG dog+ABS hit-PAST
        'The boy's father hit the dog'

While ultimate judgement will have to await a full analysis of these constructions, it does seem as if word order and morphological form do supply the same sort of information. In the sections that follow, I will show how a variant of definite clause grammar can be used to describe the examples given above, and how this grammar can be used in conjunction with a proof procedure to construct a parser.

4. Representing Discontinuous Constituents

I propose to represent discontinuous constituents rather directly, in terms of a syntactic category and a discontinuous location in the utterance. For example, I represent the location of the discontinuous constituent in (7), Yarraga-aga-mu-n ... biiba-ngun 'boy's father', as a set of continuous locations, as in (9).

    (9) {[0,1],[3,4]}

Alternatively, one could represent discontinuous locations in terms of a 'bit-pattern', as in (10), where a '1' indicates that the constituent occupies this position.

    (10) [1 0 0 1]

While the descriptive power of both representations is the same, I will use the representation of (9) because it is somewhat easier to state configurational notions in it. For example, the requirement that a constituent be contiguous can be expressed by requiring its location set to have no more than a single interval member.

To represent the morphological form of NP constituents I use two argument positions, rather than the single argument position used in the DCG in (3). The first takes as values either Erg, Abs or 0, and the second either Gen or 0. Thus our discontinuous NP has three argument positions in total, and would be represented as (11).

    (11) NP([[0,1],[3,4]], Erg, 0)

In (11), the first argument position identifies the constituent's location, while the next two are the two morphological form arguments discussed immediately above. The grammar rules must tell us under what conditions we can infer the existence of a constituent like (11). The morphological form features seem to pose no particular problem: they can be handled in a similar way to the genitive feature in the mini-DCG for English in (3) (although a full account would have to deal with the dual ergative/absolutive and nominative/accusative systems that Guugu Yimidhirr possesses). But the DCG rule format must be extended to allow for discontinuous locations of constituents, like (11).

In the rules in (3), the end-points of the mother's location are explicitly constructed from the end-points of the daughters' locations. In general, the relationship between the mother's location and that of its daughters can be represented in terms of a predicate that holds between them. In the DCG rules for Guugu Yimidhirr, (12) to (14), the relationship between the mother's location and those of its daughters is represented by the predicate combines. The definition of combines is as follows: combines(l,l1,l2) is true if and only if l is equal to the (bit-wise) union of l1 and l2, and the (bit-wise) intersection of l1 and l2 is null (i.e. l1 and l2 must be non-overlapping locations).
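For concreteness, combines can be sketched in ordinary Prolog under one simplifying assumption of ours: a location is written as a strictly increasing list of word positions rather than of intervals, so (9) becomes [0,3]. The four-argument form used in rule (12) below is also our own addition :-

    % combines(L, L1, L2): L is the union of the disjoint, strictly
    % ordered position lists L1 and L2 (positions assumed ground).
    combines([], [], []).
    combines([P|Ps], [P|Ps1], Ps2) :- lower(P, Ps2), combines(Ps, Ps1, Ps2).
    combines([P|Ps], Ps1, [P|Ps2]) :- lower(P, Ps1), combines(Ps, Ps1, Ps2).

    % lower(P, L): P precedes every position in L; this keeps the
    % result ordered and enforces that L1 and L2 do not overlap.
    lower(_, []).
    lower(P, [Q|_]) :- P < Q.

    % Three-daughter version, as used in rule (12):
    combines(L, L1, L2, L3) :-
        combines(L23, L2, L3),
        combines(L, L1, L23).

    % ?- combines(L, [0], [3]).  gives L = [0, 3], the location of
    % the discontinuous NP in (11); combines(L, [0], [0]) fails,
    % since the two locations overlap.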
The morphologies/ form features seem to pose no particu- lar problem: they can be handled in a similia~ way to the genitive feature in the mini-DCG for English in (3) (although a full account would have to deal with the dual ergative/ahsohtive and nominative/accusative systems that Guugu Yimidhirr possesses). But the DCG rule format must be extended to allow for discontinu- ous locations of constituents, like (11). [n the rule~ in (3), the end-points of the mother's location ~re explicitly constructed from the end-points of the daughter's loca- tions. [n general, the realtionship between the mother's location and that of its daughters can be represented in terms of a predicate that holds between them. [n the DCG rules for Guugu Yimidhirr, (12) to (14}, the relationship between the mother's location and those of its daughters is represented by the predicate combines. The definition of combines ~ ~, follows: combines(I,l,,l~) is true if and only if I is equal to the (bit-wise) union of I~ and I~, and the (bit-wise) intersection of I~ and I~ is null {ie. I~ and I~ must be non-overlapping locations), (12) S(i) -- V(I,) & NP(I:,EnI,~) & NP(Is,Abs,m) ~ combine*.(/,I 1,l~,la)- (~3) NP(/,cue ,e) -- N(lz,c~e ,e) & NP(l~.c~e ,Gen) &: combines(/,l z,l~). (14} NP(/,case t,caae2) ~ N(/,c~c t,casc~) Following Hale (1983), I have not posited a VP node in this grammar, a/though it would have trivial to do so. To a~count for the 'configurational' possessive shown in (8), [ add the additions/ clause in (15) to the grammar. (15) NP({{z ,:t],c~se ,~) -- NP({[z ,~ ]],e,e) & N(II~ ,z}l,case ,Gen) Given this definite clause grammar and a proof procedure such a~ Earley Deduction, it is quite straight-forward to construct a parser. In (16) [ show how one can parse [7) using the above gram- mar and the Eariey Deduction proof procedure. The Prolog predi- cate 'p' starts the parser. First the lexical analyser adds lexical items to the state (the workin~ store of the deduction procedure), when this is finished the deduction pr~edure uses the DCG rules above to ma&e inferences about the utterance. The answer '*-yes' given by the parser indicates that it waa able to find an S that spans the entire utterance. The command 'print state' prints the state of the deduction sy,~tem; since new inferences are always added to the bottom of the state, it provides a chonnlogica/ records of the deduc- tions made by the system. The state is printed in Prolog notation: variables are written a.s '_I', '_. ~', etc., and the implication symbol (re) a~prologeariey atucnscr UNSW-PROLOG : p([y arragaagamun,gudaa, gunday,hiibangun])? Word yarragaagmman is a n([[O, Z]], era, sen) Word q,~u ~ a n([[Z, ~1], ,~,, o) Word ~,.#~v ~ a ~({[z, 31]) Word b;;66ng~n is a .([[*, 41], ~'s, o) ** yes : print_sta~e ! ..(frO, 111, "g, Z") ,,,([[z, 21], .b,,,, o) ,,([[2, aid "fills, 4t1, erg, o) ~1[0, 4]}) :- v(_z), np(_~, erg, o), ~p(_3, abs, o), eombines({{O. 411, _z, _2, _3~ 8(((0. 4{i ) :. np(_t, er~, o}, np(_2, abs, o). combines{{[0, 4J], {{2, 3[l, _t, _2) np(_t, erE, o) :- n(_2, er~, o) , rip(_3, ez'~, gen) , combines(_t, ..2, _3) np(_Z, er&, o) :- rip(_2, erz, sen), combine~_l. ~I3, 411, _2) np(_Z, e~, sen) :- n(_L erg, zen) np([[o, ill, ,,s, z~) np(_t, err, o):- combines(_t, [[3, 4 H, [[0, Z!]) combines(l[0, tl, [3,-lJ], I[3, 4lJ, {[0, tJJ) ,.p([[o, xl, [3, 41l, ~'s, o) ~(({0, 4(0 :- np(_l, abs, o), combines({{O, 411, [[2, 31[, [[0, 11, {3, 4[], _/) up(J, abs, o) :- n(_2, abs, o) , np(..3, abs. 
gen) , combines(_l, _2, 3) np(L, abs, o):- np(_2, abs, gen), combines(J, IIt 21I, _2) np(_l, abs, gen) :- .(_1, abs, gen) ,p(_t. a~. o) :. n{_t. ~. o) opfl[z, 2}h ab., o) s({{0, 41[ ) :. combines({[0 ' 4[{, [{2, 3 , [[0, l, 3 41[, I[1, 2J]} combin O, 4 , 2, 3 • (frO, ~II II {I If, I[ 0, 1, {3, 4 [, Ill, -OIl) .P{{[_Z, _°.l], ~bs, o):- npflI_L _all, o, o), "(if_3,--~/!, abs, St.) np{[[_1, _°.If, o, o) :. n(_3, o, o), np(_~, o, genl, combines({!_l, _211 ' _3, _4) nP{{LI,--~ll, o, o) :- .({I_t, _.oil, o, o) nl~I[_i, _2fJ, o, o):- np([(_/, _3]1 , o, o) , n(([_3, _2JJ, o, Sen) .IN_I, er~, o) :- n(_/, er~, o) .~[[a, 4]], ,,g, o) s({{0, 4{]):- rip(_/, abs, o), comb nes([[0 4]J ~[o 3~ if3 411 ,, • .ll) , omb,nv(II0, 3!i, !f3 ii ........ I'P~[- L, --Ib erg, o):- np([]_l, _31] , o, o), n(fI_3 , _2!1 ' erg, gen) 6. Ualnt Dh~ontinuoua Constituents in Grsmamgrs Ahhough the previous section introduced discontinuous consti- tuents in terms of definite cianse grammar, there is no reason we could not invent a notation that abbreviates or implies the "com- bines' relationship between mother and daughters, just a.s the CFG in (1) "implies" the mother-daughter location relationships made explicit in the DCG (3). For instance, we could choose to interpret a rule like (171 ~s implying the 'combines' relationship between the mother and its daughters. (17) A -*/7 ;C ;D Then the DC'G grammar presented in the last section could be written in the GPSG like notation of (18). Note that the third rule is a stamdaed phrase structure rule: it expresses the 'configurationai' p~sessive shown in (81. 129 (zs~ s - [c~E~l ; v; [CAS~'Abel It is easy to show that grammars based on the 'combines' predicate lie outside the class of context free languages: the strinp the grammar (19) accepts are the permutations of a m b'c "; thus this grammar does not have weakly equivalent CFG. (,91 S--= ;b ;c ; (S) While it would he interesting to investigate other properties of the 'combines' predicate, I suspect that it is not optimal for describ- ing linguistic systems in general, including non-configurational languages. It is difficult to state word order requirements that refer to a particular constituent position in the utterance. For instance, the only word order requirement in Waripiri, another non- configurational language, is that the auxilary element must follow exactly one syntactic constituent, and this would be difficult to state in a system with only the predicate 'combines', although it would be easy to write a special DCG predicate which forces this behaviour. Rather, l suspect it would be more profitable to investigate other predicates on constituent locations besides 'combines' to see what implications they have. [n particular, the wrapping operations of Pollard (1984) would seem to be excellent candidates for such rL,~eagc h. Finally, I note that the discontinuous constituent analysis described here is by no means incompatible with standard theories of grammar. A~ I noted before, the rules in (18) look very much like GPSG rules, and with a little work much of the machinery of GSPG could be grafted on to such a formalism. Similiarly, the CFG part of LFG, the C-structure, could be enriched to allow discontinuous constituents if one wished. And introducing some version of discon- tinuous constituents to GB could make the mysterious "mapping" between P-structure and L-structure that Hale (t983) talks about a little less perplexing. 
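As a check of the claim about (19), the grammar can be written out over the position-list encoding of the combines/3 sketch given earlier (the word/2 facts and all names here are ours) :-

    % Grammar (19): an S is an a, a b and a c, in any order,
    % optionally combined with a smaller S.
    s(L) :-
        word(Pa, a), word(Pb, b), word(Pc, c),
        combines(L1, [Pa], [Pb]),
        combines(L2, L1, [Pc]),
        ( L = L2
        ; s(L3),
          combines(L, L2, L3)
        ).

    % For the input "b a c a c b":
    % word(0, b). word(1, a). word(2, c). word(3, a). word(4, c). word(5, b).
    % ?- s([0, 1, 2, 3, 4, 5]).  succeeds, since the string is a
    % permutation of aabbcc; the disjointness enforced by combines
    % guarantees that each word position is used exactly once.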
My own feeling is that the approach that would bring the most immediate results would be to adopt some of the 'head driven' aspects of Pollard's (1984) Head Grammars. In his conception, heads contain as lexical information a list of the items they subcategorize for. This strongly suggests that one should parse according to a 'head first' strategy: when one parses a sentence, one looks for its verb first, and then, based on the lexical form of the verb, one looks for the other arguments in the clause. Not only would such an approach be easy to implement in a DCG framework, but given the empirical fact that the nature of argument NPs in a clause is strongly determined by that clause's verb, it seems a very reasonable thing to do.

6. Implementing the Parser

In their 1983 paper, Pereira and Warren point out several problems involved in implementing the Earley proof procedure, and propose ways of circumventing or minimizing these problems. In this section I only consider the specialized case of Earley Deduction working with clauses that correspond to grammars of either the continuous or discontinuous constituent type, rather than the general case of performing deduction on an arbitrary set of clauses.

Considering first the case of Earley Deduction applying to a set of clauses like (3) that correspond to a CFG, a sensible thing to do would be to index the derived clauses (i.e. the intermediate results) on the left edge of their location. Because Earley Deduction on such a set of clauses always proceeds in exactly the same manner as Earley chart parsing, namely strictly left to right within a constituent, the position of the left edge of any constituent being searched for is always determined by the ending location of the constituent immediately preceding it in the derivation. That is, the proof procedure is always searching for constituents with hard, i.e. non-variable, left edges. I have no empirical data on this point, but the reduction in the number of clauses that need to be checked because of this indexing could be quite important. Note that the vertices in a chart act essentially as indices to edges in the manner described.

Unfortunately, indexing on the left edge in a system working with discontinuous constituents in the manner suggested above would not be very useful, since the inferencing does not proceed in a left to right fashion. Rather, if the suggestions at the end of the last section are heeded, the parser proceeds in a 'head first' fashion, looking first for the head of a constituent and then for its complements, the nature and number of which are partially determined by information available from the head. In such a strategy, it would seem reasonable to index clauses not on their location, but on morphological or categorial features, such as category, case, etc., since these are the features they will be identified by when they are searched for.

It seems then that the optimal data structure for one type of constituent is not optimal for the other. The question then arises whether there is a unified parsing strategy for both configurational and non-configurational languages. Languages with contiguous constituents could be parsed with a head first strategy, but I suspect that this would prove less efficient than a strategy that indexed on left edge position. Locations have the useful property that their number grows as the size of the sentence (and hence the number of constituents) increases, thus giving more indexing resolution where it is needed, namely in longer sentences. But of course, one could always index on both morphological category and utterance location...

7. Comparison with other Frameworks

In this section I compare the discontinuous location approach I have developed above to some other approaches to free word order: the ID/LP rule format of GPSG, and the non-configurational encoding of LFG. I have omitted a discussion of the scrambling and raising rules of Standard Theory and their counterparts in current GB theory because their properties depend strongly on properties of the grammatical system as a whole (such as a universal theory of 'landing sites', etc.), which (as far as I know) have not been given in sufficiently specific form to enable a comparison.

The ID/LP rule format (Gazdar et al. 1985) can be regarded as a factoring of 'normal' context free rules(3) into two components, one expressing immediate domination relationships, the other the linear precedence relationships that hold between daughter constituents. For example, the ID rule in (20) and the LP rule in (21) express the same mother-daughter relationships as the rules in (22).

    (20) S ->ID { V, NP, NP, S' }

    (21) V < S'

    (22) S -> V NP NP S'      S -> NP V NP S'
         S -> V NP S' NP      S -> NP V S' NP
         S -> V S' NP NP      S -> NP NP V S'

(3) In Gazdar et al. (1985) the system is more complicated than this, since the ID/LP component interacts with the feature instantiation principles and other components of the grammar.
For example, in their analysis of the cross serial dependencies in Dutch, Bresnan, Kaptan, Peters and Zaenen (1982) propose that the PREP feature of the VCONIP com- ponent of the f-structure is set by a verb located down one branch of the c.structure tree, while the OBJ feature of that component is set by an NP located on another branch o| the c.structure tree. Thus in LFG one would not claim that there was a discontinuous NP in (7~, but ralAer that both the ergative NP and the genitive msrked e~ative NP were contributing information to the same corn- portent of the f-structUre. [n the .on-confiquratianai cncodlaf of Bresnan (1982, p.297), the c-structure is relatively impoverished, and the morphology' on the lexica~ items identifies the component of the f-structure they supply information to. For example, the c-structure in (23) together with the lexical items in (24) give sentence (7) the f-structure (25). (23) S_{ NP,~=[V }* NP (T SUBJ poss)=L Varraea.aga.mu-n (I, CASE}==Erg (LOcal=+ NP g.d== (t OBJ)= [ (t CASE}-~--Abs V gunda.y (T PRED)~ffihit((t SUBJ),([ OBJ)) NP biiba.nonn . (1 SUBJ}==L (t CASE]-----~rZ (25) CASE =- Erg 1 POSS == | Gen == + L PRED == aoy SUBJ = CASE == Erg "~ PRED = [ =ther / oASE = Abs / OBJ •- PRED = do9 ~.. -'~ PP~D = h;t( ~ LFG is capable of describing the "discontinuity" of (7) without using discontinuous constituents. There is, however, a sub- tle difference in the amount of "discontinuity" allowed by the LFG and the discontinuous constituent analyses. As | remarked at the beginning of the paper, the discontinuous constituent approach al/ows grammars that accept tots/scrambling of the lexical items: if a string S is accepted, then so is any permutation of S. In particu- lar, the discontinuous constituent approach glows unrestricted scrambling of elements out of embedded clauses and stacked N'Ps. which the LFC non-configurational encoding analysis cannot. This is because the position in the sentence's f-structure that any lexical item occupies is determined solely by the f-equation annotations attached to thaL lexical item, since the only equations in the c- structure are of the form ~L, and these create no new components in the f-structure for the clause to embed the f-structures from lexi- ca/items into. Suppose, for example, Guugu Yimidhirr allowed stacked NP poeseesors, in the same way that English allows them in construc- tions like my mother's lather'8 brother, except that, because the language is non-conFigurational, the lexical elements could be scat- tered throughout the entire sentence. The LFG analysis would run into problems here, because there would be a potentially infinite number of positions in the f-structure where the possessor could be located: implying that there are an infinite number of lexical entries for each poaae~ive NP. Guugu Yimidhirr does not exhibit such stacked possessives. Rather, the possessor of the possessor is indicated by a dative con- struction and so *,he LFG analysis is supported here. None the less, a similiar argument shows that embedded clausal f-structure com- ponents such a.s adjunts or VCOMP must have corresponding c- structure nodes so that the lexical items in these clauses can be attached sufficiently "far down" in the f-structure for the entire sen- tence. 
(Another possibility, which [ won't explore here, would be to allow f-equation annotations to include regular ezpressiona over items like VCOMP I- Still, it would be interesting to investigate further the restrictions on scrambling that follow from the non- configurational encoding analysis and the bame principles of LFG. For instance, the a~Tinc parsa6dity property (F'ereira and Warren 1983) that is required to assure decidablity in LFG (Bresnan and Kaplan 1982) essentially prohibits scrambling of single lexical ele- ments from doubly embedded clauses, because such scrambling would entail one S node exhaustively dominating another. But these distinctions are quite subtle, and, unfortunately, our knowledge of uon-configurational languages is insufficient to determine whether the scrambling they exhibit is within'the limits allowed by non- configurations/encoding. 8. Conchmion ['[ale (1983) begins his paper by listing three properties tha~. have come to be associated with the typological label ~non- configurational', namely (i) free word order, (ii) the use of :~yntacti- tally discontinuous constituents and (iii) the extensive use of null anaphora. [n this paper [ have shown that the flint two properties follow from a system that allows constituents that have discontinu- ous constituents and that captures the mother daughter location relationships using a predicate like 'combines'. 131 It is still far too early to ~ell whether this approach really is the moe~ appropriate way to deal with discontinuous coustituente: it may be that for a grammar of ressonsble size some other t~echnique, such as the non-configurational encoding of LFG, will be superior on linguistic and computational grounds. 9. Bibliosrsphy J. Bresnaa (1982), "Control and Complemencation," in The Mental Representation of Gr4mmatizal Rdatione, J. Bresn~n, ed., pp. 173.281, ~ Pre~, CambridKe, Ma~. J. Bresnan and R. Kaplan (1982), "Lexical Functional Gram- mar: A Formal System for Grammatical Representation," in The Mental Reprcecntatien of Grammatical Relations, J. Bresnan, ed., pp. t73.281, MIT Press, Cambridge, Mass. J. Bresnan, R. Kaplan, S. Peters and A. Zaenen (1982), Croee.Serial Dependeneic* in Detek, Linguistic Inquiry, II.4, pp. 613-635. G. Ga:ldar, E. Klein, G. Pullum and I. Sag, (1985) Generalized Phrc~e Structure Grammar, Havard University Press, C&mbridge, Ma~s. K. Hale (1983), "Warlpiri and the Grammar of Non- configur,,tional Languages", Natural Language and Linguietic Theory, t.t, pp. 5-.49. J. H,~viland (1979), "Guugu Yimidhirr", in tfandbook of Au- tralian Languegce, R. Dixon and B. Blake, eds., Benjamius. Amster- dam. F.C.N. Pereira. and D.H.D. Warren (1983), "Parsing as Deduc- tion", Prec. of the 2let Annlal Mecrinf o[ the ACL, pp. 137-143, A.~ociation for ComputaJ:ional Linguistics. C. Pollard (1984), Generalized Phrue Streeturc Grammara, ['lead Grammars, and Natural Language, unpublished thesis, Stan- ford Universiey. 132
Structure Sharing with Binary Trees

Lauri Karttunen
SRI International, CSLI Stanford

Martin Kay
Xerox PARC, CSLI Stanford

Many current interfaces for natural language represent syntactic and semantic information in the form of directed graphs where attributes correspond to vectors and values to nodes. There is a simple correspondence between such graphs and the matrix notation linguists traditionally use for feature sets.

    Figure 1: (a) a directed graph; (b) the equivalent matrix
    [ cat: np
      agr: [ number: sg
             person: 3rd ] ]

The standard operation for working with such graphs is unification. The unification operation succeeds only on a pair of compatible graphs, and its result is a graph containing the information in both contributors. When a parser applies a syntactic rule, it unifies selected features of input constituents to check constraints and to build a representation for the output constituent.

Problem: proliferation of copies

When words are combined to form phrases, unification is not applied to lexical representations directly because it would result in the lexicon being changed. When a word is encountered in a text, a copy is made of its entry, and unification is applied to the copied graph, not the original one. In fact, unification in a typical parser is always preceded by a copying operation. Because of nondeterminism in parsing, it is, in general, necessary to preserve every representation that gets built. The same graph may be needed again when the parser comes back to pursue some yet unexplored option. Our experience suggests that the amount of computational effort that goes into producing these copies is much greater than the cost of unification itself. It accounts for a significant amount of the total parsing time.

In a sense, most of the copying effort is wasted. Unifications that fail typically fail for a simple reason. If it were known in advance what aspects of structures are relevant in a particular case, some effort could be saved by first considering only the crucial features of the input.

Solution: structure sharing

This paper lays out one strategy that has turned out to be very useful in eliminating much of the wasted effort. Our version of the basic idea is due to Martin Kay. It has been implemented in slightly different ways by Kay in Interlisp-D and by Lauri Karttunen in Zeta Lisp. The basic idea is to minimize copying by allowing graphs to share common parts of their structure. This version of structure sharing is based on four related ideas:

• Binary trees as a storage device for feature graphs
• 'Lazy' copying
• Relative indexing of nodes in the tree
• A strategy for keeping storage trees as balanced as possible

Binary trees

Our structure-sharing scheme depends on representing feature sets as binary trees. A tree consists of cells that have a content field and two pointers which, if not empty, point to a left and a right cell respectively. For example, the content of the feature set and the corresponding directed graph in Figure 1 can be distributed over the cells of a binary tree in the following way.

    Figure 2: cell 1 holds [cat: 2, agr: 3]; cell 2 holds np;
    cell 3 holds [person: 4, number: 5]; cell 4 holds 3rd;
    cell 5 holds sg.

The index of the top node is 1; the two cells below have indices 2 and 3. In general, a node whose index is n may be the parent of cells indexed 2n and 2n+1. Each cell contains either an atomic value or a set of pairs that associate attribute names with indices of cells where their value is stored. The assignment of values to storage cells is arbitrary; it doesn't matter which cell stores which value. Here, cell 1 contains the information that the value of the attribute cat is found in cell 2 and that of agr in cell 3. This is a slight simplification: as we shall shortly see, when the value in a cell involves a reference to another cell, that reference is encoded as a relative index.

The method of locating the cell that corresponds to a given index takes advantage of the fact that the tree branches in a binary fashion. The path to a node can be read off from the binary representation of its index by starting after the first 1 in this number and taking 0 to be a signal for a left turn and 1 as a mark for a right turn. For example, starting at node 1, node 5 is reached by first going down a left branch and then a right branch. This sequence of turns corresponds to the digits 01. Prefixed with 1, this is the same as the binary representation of 5, namely 101. The same holds for all indices. Thus the path to node 9 (binary 1001) would be LEFT-LEFT-RIGHT, as signalled by the last three digits following the initial 1 in the binary numeral (see Figure 6).
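The index-to-path trick is easy to state as a program; here is a sketch in Prolog (our own predicate name; append/3 is the usual list concatenation) :-

    % path(Index, Path): Path is the list of left/right turns from
    % the root to the cell with the given Index, read off from the
    % binary digits of Index after the leading 1.
    path(1, []) :- !.
    path(N, Path) :-
        N > 1,
        ( 0 is N mod 2 -> Turn = left ; Turn = right ),
        M is N // 2,
        path(M, Front),
        append(Front, [Turn], Path).

    % ?- path(5, P).  gives P = [left, right].
    % ?- path(9, P).  gives P = [left, left, right], matching the
    % LEFT-LEFT-RIGHT path described in the text.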
Lazy copying

The most important advantage of the scheme is that it minimizes the amount of copying that has to be done. In general, when a graph is copied, only a small part of it needs to be duplicated. The operation that replaces copying in this scheme starts by duplicating the topmost node of the tree that contains the graph; the rest of the structure remains the same. Other nodes are modified only if and when destructive changes are about to happen.

For example, assume that we need another copy of the graph stored in the tree in Figure 2. This can be obtained by producing a tree which has a different root node, but shares the rest of the structure with its original. In order to keep track of which tree actually owns a given node, each node carries a numeral tag that indicates its parentage. The relationship between the original tree (generation 0) and its copy (generation 1) is illustrated in Figure 3, where the generation is separated from the index of a node by a colon.

    Figure 3: the copied root 1:1 shares the subtrees of the
    original root 1:0; cells 2:0 through 5:0 belong to
    generation 0 and are shared by both trees.
2 : 0 ~ 4 / ~ J...~ml~t ~ j number 5 / "~ gender 6 4:0,1~"] S:oF'~ f 6:1 ~m'--~, Figure 5 From the point of view of a process that only needs to find or print out the value of particular features, it makes no difference that the nodes containing the values belong to several ,trees as long as there is no confusion about the structure. Relative addressing Accessing an arbitrary cell in a binary tree consumes time in proportion to the logarithm of the size of the structure, assuming that cells are reached by starting at the top node and using the index of the target node as an address. Another method is to use relative addressing. Relative addresses encode the shortest path between two nodes in the tree regardless of where they are are. For example, if we are at node 9 in Figure 6.a below and need to reach node 11, it is easy to see that it is not necessary to go all the way up to node 1 and then partially retrace the same path in looking up node 11. instead, one can stop going upward at the lowest common ancestor, node 2., of nodes 9 and 11 and go down from there. a. Figure 6 With respect to node 2, node 11 is in the same position as 7 is with respect 1. Thus the retative address of cell 11 counted from 9 is 2,7--'two nodes 135 up, then down as if going to node 7". In general, relative addresses are of the form <up,down > where <up> is the number of links to the lowest common ancestor of the origin and <down> is the relative index of the target node with respect to it. Sometimes we can just go up or down on the same branch; for example, the relative address of cell 10 seen from node 2 is simply 0,6; the path from 8 or 9 to 4is 1,1. As one might expect, it is easy to see these relationships if we think of node indices in their binary representation (see Figure 6.b). The lowest common ancestor 2 (binary 10) is designated by the longest common initial substring of 9 (binary 1001) and 11 (binary 1011). The relative index of 11, with respect to, 7 (binary 111), is the rest of its index with 1 prefixed to the front. In terms of number of links traversed, relative addresses have no statistical advantage over the simpler method of always starting from the top. However, they have one important property that is essential for our purposes: relative addresses remain valid even when trees are embedded ~n other trees; absolute indices would have to be recalculated. Figure 7 is a recoding of Figure S using relative addresses. 2:0 ~ 3.01 ~o~,~1~ I ~:ll person1,4 /\ I I number 1,s 4:01 ira I 5:01 sg I 6:1 Figure 7 Keeping trees balanced When two feature matrices are unified, the binary trees corresponding to them have to be combined to form a single tree. New attributes are added to some of the nodes; other nodes become "pointer nodes," 136 i.e., their only content is the relative address of some other node where the real content is stored. As long as we keep adding nodes to one tree, it is a simple matter to keep the tree maximally balanced. At any given time, only the growing fringe of the tree can be incompletely filled. When two trees need to be combined, it would, of course, be possible to add all the cells from one tree in a balanced fashion to the other one but that would defeat the very purpose of using binary trees because it would mean having to copy almost all of the structure. The only alternative is to embed one of the trees in the other one. The resulting tree will not be a balanced one; some of the branches are much longer than others. 
Consequently, the average time needed to look up a value ~s bound to be worse than in a balanced tree. For example, suppose that we want to unify a copy of the feature set in Figure lb, represented as in Figure 2 but with relative addressing, with a copy of the feature set in Figure 8. a. agr: [gender: fem]] l:01agr0,2 J gender 2:ol 1,31 3:o Figure 8 a. [-cat: np I person: 3rd II Lagr: I-number: sg-~ Lgender : fem~J I cat0,2 l b. 1"1 aqr0,3 Z . 0 [ ~ ~ ~ ~ ~ ~ n 1,4 • ~1_:.~ I number 1,5 1:11 agrO,2 I 2:11 --> 2,1 I 3:0 Figure 9 Although the feature set in Figure 9.a is the same as the one represented by the right half of Figure 7, the structure in Figure 9.b is more complicated because it is derived by unifying copies of two separate trees, not by simply adding more features to a tree, as in Figure 7. In 9b, a copy of 8.b has been embedded as node 6 of the host tree. The original indices of both trees remain unchanged. Because all the addresses are relative; no harm comes from the fact that indices in the embedded tree no longer correspond to the true location of the nodes. Absolute indices are not used as addresses because they change when a tree is embedded. The symbol -> in node 2 of the lower tree indicates that the original content of this node--<jender 1,3~has been replaced by the address of the cell that it was unified with, namely cell 3 in the host tree. In the case at hand, it matters very little which of the two trees becomes the host for the other. The resulting tree is about as much out of balance either way. However, when a sequence of unifications is ~erformed, differences can be very significant. For example, if A, B, and C are unified with one another, ~t can make a great deal of difference, which of the two alternative shapes in Figure 10 is produced as the final result. A A .., ¢ ~ ~ ,& Figure 10 When a choice has to be made as to which of the two • ,rees to embed in the other, it is important to minimize the length of the longest path in the resulting tree. To do this at all efficiently requires addtitional infornation to be stored with each node. According to one simple scheme, this is simply the length of the shortest path from the node down to a node with a free left or right pointer. Using this, it is a simple matter to find the shallowest place in a tree at which to embed another one. If the length of the longer path is also stored, it is also easy to determine which choice of host will give rise to the shallowest combined tree. Another problem which needs careful attention concerns generation markers. If a pair of trees to be unified have independent histories, their generation markers will presumably be incommensurable and those of an embedded tree will therfore not be valide in the host. Various solutions are possible for this problem. The most straightforward is relate the histories of all trees at least to the extent of drawing generation markers from a global pool. In Lisp, for example, the simplest thing is to let them be CONS cells. Conclusion We will conclude by comparing our method of structure sharing with two others that we know of: R. Cohen's immutable arrays and the idea discussed in Fernando Pereira's paper at this meeting. The three alternatives involve different trade-offs along the space/time continuum. The choice between them wdl depend on the particular application they are intended for. No statistics on parsing are avadable yet but we hope to have some in the final version. 
Acknowledgements

This research, made possible in part by a gift from the Systems Development Foundation, was also supported by the Defense Advanced Research Projects Agency under Contracts N00039-80-C-0575 and N00039-84-C-0524 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, or the United States government.

Thanks are due to Fernando Pereira and Stuart Shieber for their comments on earlier presentations of this material.
A Structure-Sharing Representation for Unification-Based Grammar Formalisms

Fernando C. N. Pereira
Artificial Intelligence Center, SRI International
and Center for the Study of Language and Information, Stanford University

Abstract

This paper describes a structure-sharing method for the representation of complex phrase types in a parser for PATR-II, a unification-based grammar formalism.

In parsers for unification-based grammar formalisms, complex phrase types are derived by incremental refinement of the phrase types defined in grammar rules and lexical entries. In a naive implementation, a new phrase type is built by copying older ones and then combining the copies according to the constraints stated in a grammar rule. The structure-sharing method was designed to eliminate most such copying; indeed, practical tests suggest that the use of this technique reduces parsing time by as much as 60%.

The present work is inspired by the structure-sharing method for theorem proving introduced by Boyer and Moore and on the variant of it that is used in some Prolog implementations.

1 Overview

In this paper I describe a method, structure sharing, for the representation of complex phrase types in a parser for PATR-II, a unification-based grammar formalism. In parsers for unification-based grammar formalisms, complex phrase types are derived by incremental refinement of the phrase types defined in grammar rules and lexical entries. In a naive implementation, a new phrase type is built by copying older ones and then combining the copies according to the constraints stated in a grammar rule. The structure-sharing method eliminates most such copying by representing updates to objects (phrase types) separately from the objects themselves.

The present work is inspired by the structure-sharing method for theorem proving introduced by Boyer and Moore [1] and on the variant of it that is used in some Prolog implementations [9].

* This research, made possible in part by a gift from the Systems Development Foundation, was also supported by the Defense Advanced Research Projects Agency under Contracts N00039-80-C-0575 and N00039-84-C-0524 with the Naval Electronic Systems Command. The views and conclusions contained in this document are those of the author and should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, or the United States government. Thanks are due to Stuart Shieber, Lauri Karttunen, and Ray Perrault for their comments on earlier presentations of this material.

2 Grammars with Unification

The data representation discussed in this paper is applicable, with but minor changes, to a variety of grammar formalisms based on unification, such as definite-clause grammars [6], functional-unification grammar [4], lexical-functional grammar [2] and PATR-II [8]. For the sake of concreteness, however, our discussion will be in terms of the PATR-II formalism.

The basic idea of unification-based grammar formalisms is very simple. As with context-free grammars, grammar rules state how phrase types combine to yield other phrase types. But whereas a context-free grammar allows only a finite number of predefined atomic phrase types or nonterminals, a unification-based grammar will in general define implicitly an infinity of phrase types. A phrase type is defined by a set of constraints. A grammar rule is a set of constraints between the type X0 of a phrase and the types
X1, ..., Xn of its constituents. The rule may be applied to the analysis of a string s as the concatenation of constituents s1, ..., sn if and only if the types of the si are compatible with the types Xi and the constraints in the rule. Unification is the operation that determines whether two types are compatible, by building the most general type compatible with both. If the constraints are equations between attributes of phrase types, as is the case in PATR-II, two phrase types can be unified whenever they do not assign distinct values to the same attribute. The unification is then just the conjunction (set union) of the corresponding sets of constraints [5].

Here is a sample rule, in a simplified version of the PATR-II notation:

    X0 -> X1 X2 :
        (X0 cat) = S
        (X1 cat) = NP
        (X2 cat) = VP                                  (1)
        (X1 agr) = (X2 agr)
        (X0 trans) = (X2 trans)
        (X0 trans arg1) = (X1 trans)

This rule may be read as stating that a phrase of type X0 can be the concatenation of a phrase of type X1 and a phrase of type X2, provided that the attribute equations of the rule are satisfied if the phrases are substituted for their types. The equations state that phrases of types X0, X1, and X2 have categories S, NP, and VP, respectively, that types X1 and X2 have the same agreement value, that types X0 and X2 have the same translation, and that the first argument of X0's translation is the translation of X1.

Formally, the expressions of the form (l1 ... lm) used in attribute equations are paths, and each li is a label. When all the phrase types in a rule are given constant cat (category) values by the rule, we can use an abbreviated notation in which the phrase type variables Xi are replaced by their category values and the category-setting equations are omitted. For example, rule (1) may be written as

    S -> NP VP :
        (NP agr) = (VP agr)
        (S trans) = (VP trans)                         (2)
        (S trans arg1) = (NP trans)

In existing PATR-II implementations, phrase types are not actually represented by their sets of defining equations. Instead, they are represented by symbolic solutions of the equations in the form of directed acyclic graphs (dags) with arcs labeled by the attributes used in the equations. Dag nodes represent the values of attributes, and an arc labeled by l goes from node m to node n if and only if, according to the equations, the value represented by m has n as the value of its l attribute [5].

A dag node (and by extension a dag) is said to be atomic if it represents a constant value; complex if it has some outgoing arcs; and a leaf if it is neither atomic nor complex, that is, if it represents an as yet completely undetermined value. The domain dom(d) of a complex dag d is the set of labels on arcs leaving the top node of d. Given a dag d and a label l in dom(d), we denote by d/l the subdag of d at the end of the arc labeled l from the top node of d. By extension, for any path p whose labels are in the domains of the appropriate subdags, d/p represents the subdag of d at the end of path p from the root of d. For uniformity, lexical entries and grammar rules are also represented by appropriate dags. For example, the dag for rule (1) is shown in Figure 1.

[Figure 1: Dag Representation of a Rule -- the dag for rule (1), with arcs 0, 1, 2 for the phrase types and shared trans/arg1 substructure.]

3 The Problem

In a chart parser [3], all the intermediate stages of derivations are encoded in edges, representing either incomplete (active) or complete (passive) phrases.
For PATR-II, each edge contains a dag instance that represents the phrase type of that edge. The problem we address here is how to encode multiple dag instances efficiently.

In a chart parser for context-free grammars, the solution is trivial: instances can be represented by the unique internal names (that is, addresses) of their objects, because the information contained in an instance is exactly the same as that in the original object.

In a parser for PATR-II or any other unification-based formalism, however, distinct instances of an object will in general specify different values for attributes left unspecified in the original object. Clearly, the attribute values specified for one instance are independent of those for another instance of the same object.

One obvious solution is to build new instances by copying the original object and then updating the copy with the new attribute values. This was the solution adopted in the first PATR-II parser [8]. The high cost of this solution, both in time spent copying and in space required for the copies themselves, constitutes the principal justification for employing the method described here.

4 Structure Sharing

Structure sharing is based on the observation that an initial object, together with a list of update records, contains the same information as the object that results from applying the updates to the initial object. In this way, we can trade the cost of actually applying the updates (with possible copying to avoid the destruction of the source object) against the cost of having to compute the effects of updates when examining the derived object. This reasoning applies in particular to dag instances that are the result of adding attribute values to other instances.

As in the variant of Boyer and Moore's method [1] used in Prolog [9], I shall represent a dag instance by a molecule (see Figure 2) consisting of

1. [a pointer to] the initial dag, the instance's skeleton;

2. [a pointer to] a table of updates of the skeleton, the instance's environment.

Environments may contain two kinds of updates: reroutings, which replace a dag node with another dag; and arc bindings, which add to a node a new outgoing arc pointing to a dag. Figure 3 shows the unification of the dags

    d1 = [a: x, b: y]
    d2 = [c: [d: e]]

After unification, the top node of d2 is rerouted to d1 and the top node of d1 gets an arc binding with label c and a value that is the subdag [d: e] of d2. As we shall see later, any update of a dag represented by a molecule is either an update of the molecule's skeleton or an update of a dag (to which the same reasoning applies) appearing in the molecule's environment. Therefore, the updates in a molecule's environment are always shown in figures tagged by a boxed number identifying the affected node in the molecule's skeleton. The choice of which dag is rerouted and which one gets arc bindings is arbitrary.

For reasons discussed later, the cost of looking up instance node updates in Boyer and Moore's environment representation is O(|d|), where |d| is the length of the derivation (a sequence of resolutions) of the instance. In the present representation, however, this cost is only O(log |d|). This better performance is achieved by particularizing the environment representation and by splitting the representational scheme into two components: a memory organization and a dag representation.

A dag representation is a way of mapping the mathematical entity dag onto a memory. A memory organization is a way of putting together a memory that has certain properties with respect to lookup, updating, and copying. One can think of the memory organization as the hardware and the dag representation as the data structure.

5 Memory organization

In practice, random-access memory can be accessed and updated in constant time. However, updates destroy old values, which is obviously unacceptable when dealing with alternative updates of the same data structure. If we want to keep the old version, we need to copy it first into a separate part of memory and change the copy instead. For the normal kind of memory, copying time is proportional to the size of the object copied.

The present scheme uses another type of memory organization -- virtual-copy arrays -- which requires O(log n) time to access or update an array with highest used index
A memory organization is a way of putting together a memory that has certain proper- ties with respect to lookup, updating and copying. One can think of the memory organization as the hardware and the dag representation as the data structure. 5 Memory organization In practice, random-access memory can be accessed and up- dated in constant time. However, updates destroy old val- ues, which is obviously unacceptable when dealing with al- ternative updates of the same data structure. If we want to keep the old version, we need to copy it first into a sepa- rate part of memory and change the copy instead. For the normal kind of memory, copying time is proportional to the size of the object copied. The present scheme uses another type of memory orga- nization -- virtual-copy array~ ~ which requires O(logn) time to access or update an array with highest used index k=2 a[nl = f a: f n = 30 = 132 (base 4) O(a) = 3 Figure 4: Virtual-Copy Array of n, but in which the old contents are not destroyed by up- dating. Virtual-copy arrays were developed by David H. D. Warren [10] as an implementation of extensible arrays for Prolog. Virtual-copy arrays provide a fully general memory ~truc- ture: anything that can be stored in r,'tndom-a,-ces~ mem- ory can be stored in virtual-copy arrays, althoqlgh p,~mters in machine memory correspond to indexes in a virtual-copy array. An updating operation takes a virtual-copy array, an index, and a new value and returns a new virtual-copy array with the new value stored at the given index. An access op- eration takes an array and an index, and returns the value at that index. Basically, virtual-copy arrays are 2k-ary trees for some fixed k > 0. Define the depth d(n) of a tree node n to be 0 for the root and d(p) + I if p is the parent of n. Each virtual-copy array a has also a positive depth D(a) > max{d(n) : n is a node of a}. A tree node at depth D(a) (necessarily a leaf) can be either an array element or the special marker .L for unassigned elements. All leaf nodes at depths lower than D(a) are also ±, indicating that no elements have yet been stored in the subarray below the node. With this arrangement, the array can store at most 2 k°('l elements, numbered 0 through 2 k°~*l - l, but unused sdbarrays need not be allocated. By numbering the 2 h daughters of a nonleaf node from 0 to 2 k - 1, a path from a's root to an array element (a leaf at depth D(a)) can be represented by a sequence no... no(ab-t in which n, is the number of the branch taken at depth d. This sequence is just the base 2 k representation of the index n of the array element, with no the most significant digit and no(.} the least significant (Figure .t). When a virtual-copy array a is updated, one of two things may happen. If the index for the updated element exceeds the maximum for the current depth (,a~ in the a[8] := ~/up- date in Figure 5), a new root node is created for the updated array and the old array becomes the leftmost daughter of the new root. Other node,, are also created, as appropriate, to reach the position of the new element. If, on the other hand, the index for the update is within the range for the current 139 mo,.~,2~ ~ _ m I skeleton ~ "~ environment own I ref I ref Spot Daniel initial update °-° I,ef I,.f Daniel Spot Figure 2: Molecule X unification . 
/ ~ ~ < > ~ <-/\ <> °- "_L_ J °- ':_L_ III xa y~d Figure 3: Unification of Two Molecules 140 a{21: = h a: [O:e, 2:h, 8:gl o{81: = g • • a: [0:e, 2:f, 8:gl I g a: [0:e, 2:fl e f Figure 5: Updating Virtual-Copy Arrays depth, the path from the root to the element being updated is copied and the old element is replaced in the new tree by the new element (as in the a[21 := h update in Figure 5). This description assumes that the element being updated has alroady been set. If not, the branch to the element may T,,rminate prematurely in a 2. leaf, in which case new nodes are created to the required depth and attached to the ap- propriate position at the end of the new path from the root. 6 Dag representation Any dug representation can be implemented with virtual- copy memory instead of random-access memory. If that were ,lone for the original PATR-II copying implementation, a certain measure of structure sharing would be achieved. The present scheme, however, goes well b~yond that by using the method of structure sharing introduced in Section 4. As we saw there, an instance object is represented by a molecule, a pair consisting of a skeleton dug {from a rule or iexical entry) and an update environment. We shall now examine the structure of environments. In a chart parser for PATR-ll, dug instances in the chart fall into two classes. Base in.stances are those associated with edges that are created directly from lexical entries or rules. Derived instances occur in edges that result from the com- bination of a left and a right parent edge containing the left and right parent instances of the derived instance. The left ancestors of an instance {edge) are its left parent and that parent's ancestors, and similarly for right ancestors, l will assume, for ease of exposition, that a derived instance is always a subdag of the unification of its right parent with a subdag of its left parent. This is the case for most com- mon parsing algorithms, although more general schemes are possible [7]. If the original Boyer-Moore scheme were used directly, the environment for a derived instance would consist of point- ers to left and right parent instances, as well as a list of the updates needed to build the current instance from its parents. As noted before, this method requires a worst-case O(Idl} search to find the updates that result in the current instance. The present scheme relies on the fact that in the great majority of cases no instance is both the left and the right ancestor of another instance. [ shall assume for the moment that this is always the case. In Section 9 this restriction will be removed. It is asimple observation about unification that an update of a node of an instance ]" is either an update of ['s skeleton or of the value (a subdag of another instance) of another update of L If we iterate this reasoning, it becomes clear that every update is ultimately an update of the skeleton of a base instance ancestor of [. Since we assumed above that no instance could occur more than once in it's derivation, we can therefore conclude that ['s environment consists only of updates of nodes in the skeletons of its base instance an- cestors. By numbering the base instances of a derivation consecutively, we can then represent an environment by an array of frames, each containing all the updates of the skele- ton of a given base instance. 
Actually, the environment of an instance [ will be a branch environment containing not only those updates directly rele- vant to [, but also all those that are relevant to the instances of/'s particular branch through the parsing search space. In the context of a given branch environment, it is then possible to represent a molecule by a pair consisting of a skeleton and the index of a frame in the environment. In particular, this representation can be used for all the value~ (dags) in updates. More specifically, the frame of a base instance is an array of update records indexed by small integers representing the nodes of the instance's skeleton. An update record is either a list of arc bindings for distinct arc labels or a rerouting update. An arc binding is a pair consisting of a label and a molecule (the value of the arc binding). This represents an addition of an arc with that label and that value at th,, given node. A rerouting update is just a pointer to another molecule; it says that the subdag at that node in the updated dug is given by that molecule (rather than by whatever w,xs in the initial skeleton). To see how skeletons and bindings work together to rep- resent a dag, consider the operation of finding the sub(tag d/(It'"lm) of dug d. For this purpose, we use a current skeleton s and a current frame f, given initially by the skele- ton and frame of the molecule representing d. Now assume 141 that the current skeleton s and current frame ,f correspond to the subdag d' -- d/(ll.., l~-l). To find d/(l~.., l~) -" ~/l~, we use the following method: I. If the top node of s has been rerouted in j" to a dag v, dereference £ by setting s and .f from v and repeating this step; otherwise 2. If the top node of s has an arc labeled by l~ with value s', the subdag at l~ is given by the moledule (g,[); otherwise 3. If .f contains an arc binding labeled l~ for the top node of s, the subdag at l~ is the value of the binding If none of these steps can be applied, (It .-. l~) is not a path from the root in d. The details of the representation are illustrated by the example in Figure 6, which shows the passive edges for the chart analysis of the string ab according to the sample gram- S-*AB: (5" a) = (A) (S b) = (B) (S==) = (Shy) mar A-*a: (Auv) = a (3) 8-...b: (Buy) = b For the sake of simplicity, only the subdags corresponding to the explicit equations in these rules are shown (ie., the cat dug arcs and the rule arcs 0, 1,... are omitted}. In the figure, the three nonterminal edges (for phrase types S, .4 and B) are labeled by molecules representing the corre- sponding dags. The skeleton of each of the three molecules comes from the rule used to build the nonterminal. Each molecule points (via a frame index not shown in the figure) to a frame in the branch environment. The frames for the A and B edges contain arc bindings for the top nodes of the respective skeletons whereas the frame for the S edge reroute nodes 1 and 2 of the S rule skeleton to the A and B molecules respectively. 7 The Unification Algorithm I shall now give the u~nification algorithm for two molecules (dags} in the same branch environment. We can treat a complex dug d a8 a partial function from labels to dags that maps the label on each arc leaving the top node of the dag to the dug at the end of that arc. This allows us to define the following two operations between dags: d~ \ d2 = {{l,d}ed~li~dom{d:}} di <3 d= = {(l,d) Edl J I Gdorn(d:)} It is clear that dom(dl <~ d~) = dom(d2 <~ dl). 
We also need the notion of dug dereferencing introduced in the last section. As a side effect of successive unifications, the top node of a dag may be rerouted to another dag whose top node will also end up being rerouted. Dereferencing is the process of following such chains of rerouting pointers to reach a dug that has not been rerouted. The unification of dags dl and d~ in environment e consists of the following steps: 1. Dereference dl and d2 2. If dl and d: are identical, the unification is immediately successful 3. 4. 5. 6. If dl is a leaf, add to e a rerouting from the top node of dl to d~; otherwise If d2 is a leaf, add to e a rerouting from the top node of d2 to dl; otherwise If dl and d2 are complex dags, for each arc (l, d) E dl <~ d= unify the dag d with the dag d' of the corresponding arc (i,d') G d~ <l dl. Each of those unifications may add new bindings to e. If this unification of subdags i.~ successful, all the arcs in dl \ d~ are are cab'red in e ~ arc bindings for the top node of d: and tinnily the top node of dl is rerouted to d~. If none of the conditions above applies, the unification fails. To determine whether a dag node is a leaf or com- plex, both the skeleton and the frame of the corresponding molecule must be examined. For a dereferenced molecule. the set of arcs leaving a node is just the union of the skele- ton arcs and the arc bindings for the node. For this to make sense, the skeleton arcs and arc bindings for any molecule node must be disjoint. The interested reader will have no di~cuhy in proving that this property is preserved by the unification algorithm and therefore all molecules built from skeletons and empty frames by unification wiU satisfy it. ° 8 Mapping dags onto virtual-copy memory As we saw above, any dag or set of dags constructed by the parser is built from just two kinds of material: (I) frames; (21 pieces of the initial skeletons from rules and [exical entries. The initial skeletons can be represented triv- ially by host language data structures, as they never change. F~'ames, though, are always being updated. A new frame is born with the creation of an instance of a rule or lexical entry when the rule or entry is used in some parsing step (uses of the same rule or entry in other steps beget their own frames). A frame is updated when the instance it belongs to participates in a unification. During parsing, there are in general several possible ways of continuing a derivation. These correspond to alternative ways of updating a branch environment. In abstract terms, 142 [] [] {7) i Figure 6: Structure-Sharing Chart on coming to a choice point in the derivation with n possi- ble continuations, n - 1 copies of the environment are made, giving n environments -- namely, one for each alternative. In fact. the use of virtual-copy arrays for environments and frames renders this copying unnecessary, so each continu- ation path performs its own updating of its version of the environment without interfering with the other paths. Thus, all unchanged portions of the environment are shared. In fact, derivations as such are not explicit in a ,'hart parser. Instead, the instance in each edge has its own branch ,,nvironment, as described previously. Therefore. when two e,lges are combined, it is necessary to merge their environ- ments. The cost of this merge operation is at most the same the worst case cost for unification proper (O([d[ log JdJ)). 
However, in the very common case in which the ranges of frame indices of the two environments do not overlap, the merge cost is only O(log [d[). To summarize, we have sharing at two levels: the Boyer- Moore style dag representation allows derived (lag in- stances to share input data structures (skeletons), and the virtual-copy array environment representation allows differ- ent branches of the search space to share update records. 9 The Renaming Problem In the foregoing discussion of the structure-sharing method, [ assumed that the left and right ancestors of a derived in- stance were disjoint. In fact, it is easy to show that the con- dition holds whenever the graHtm;tr d.'s n¢)t ~.llow elllpty deriv(,d edges. In ,',mtrast, it is p,)ssible t,) construct a grammar in which an empty derived edge with dag D is b.th a left and a right ancestor of another edge with dag E. Clearly, tile two uses (~f D a.s an ancestor of E are mutually independent and the corresponding updates have to be seqregated. In ,~ther words, we need two ,'~l)ies of tile instance D. 13v anal,,~' with theorem proving, [ call Ihi~ lhe renaminq pr~d,h,m. The ('nrreflt sol|,t.i(,n is t,) us,, real ,'(,I)YiV|g t,) turn th,, empty edge into a skelet(>n, which is the|| adde~l t~ the chart. The new skeleton is then used in the norn|al fa.shion to pro- duce multiple instances that are free of mutual interference. 10 Implementation The representation described here has been used in a PATR- II parser implemented in I)r,~l,)g ". Two versions of the parser exist - cme using all Ea,-h,y-st.vle algorithn| related to Ear- ley deduction [7], the other using a left-,'.rner algorithm. Preliminary tests of the left-corner algorithm with struc- ture sharing on various grammars and input have shown parsing times as much as 60% faster (never less, in fact, than 40% faster) than those achieved by the same parsing algorithm with structure copying. 14,3 References [1] R. S. Boyer and J S. Moore. The sharing of structure in theorem-proving program& In Machine Intelligence 7, pages 101-116, John Wiley and Sons, New York, New York, 1972. [21 J. Bresnan and R. Kaplan. Lexical-functional gram- mar: a formal system for grammatical representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173-281, MIT Press, Cambridge, Massachusetts, 1982. [3] M. Kay. Algorithm Schemata and Data Structures in Syntactic Processing. Technical Report, XEROX Palo Alto Research Center, Palo Alto, California, 1980. A version will appear in the proceedings of the Nobel Symposium on Text Processing, Gothenburg, 1980. I4] M. Kay. Functional grammar. In Pro¢. of the Fifth Annual Meeting of the Berkeley Linguistic Society, pages 142-158, Berkeley Linguistic Society, Berkeley, California, February 17-19 1979. [5] Fernando C. N. Pereira and Stuart M. Shieber. The se- mantics of grammar formalisms seen as computer lan- guages. |n Proe. of Coling8~, pages 123-129, Asso,-ia- tion for Computational Linguistics, 1984. [6] Fernando C. N. Pereira and David H. D. Warren. Defi- nite clause grammars for language analysis - a survey of the formalism and a comparison with augmented transi- tion networks. Artificial Inteilicence, 13:231-278, 1980. [7] Fernando C. N. Pereira and David H. D. Warren. Pars- ing as deduction. In Proc. of the 9lst Annual 3Iectin~ of the Association for Computational Linguistics, MIT, Cambridge, Massachusetts, June 15-17 1983. [8[ Stuart M. Shieber. The design of a computer lan- guage for linguistic information. In Proc. 
of Colinf8j, pages 362-366, Association for Computational l,inguis- tics, 1984. [9] David H. D. Warren. Applied Logic - its use and intple. menlalion as proqramming tool. PhD thesis, University of FMinburgh, Scotland, 1977. Reprinted as T~,,'hnical Note 290, Artificial Intelligence Center, SRI, Intorna- tional, Menlo Park, California. {10] David H. D. Warren, Logarithmic access arrays for Prolog. Unpublished program, 1983. 144
Using Restriction to Extend Parsing Algorithms for Complex-Feature-Based Formalisms

Stuart M. Shieber
Artificial Intelligence Center, SRI International
and Center for the Study of Language and Information, Stanford University

Abstract

Grammar formalisms based on the encoding of grammatical information in complex-valued feature systems enjoy some currency both in linguistics and natural-language-processing research. Such formalisms can be thought of by analogy to context-free grammars as generalizing the notion of nonterminal symbol from a finite domain of atomic elements to a possibly infinite domain of directed graph structures of a certain sort. Unfortunately, in moving to an infinite nonterminal domain, standard methods of parsing may no longer be applicable to the formalism. Typically, the problem manifests itself as gross inefficiency or even nontermination of the algorithms. In this paper, we discuss a solution to the problem of extending parsing algorithms to formalisms with possibly infinite nonterminal domains, a solution based on a general technique we call restriction. As a particular example of such an extension, we present a complete, correct, terminating extension of Earley's algorithm that uses restriction to perform top-down filtering. Our implementation of this algorithm demonstrates the drastic elimination of chart edges that can be achieved by this technique. Finally, we describe further uses for the technique -- including parsing other grammar formalisms, such as definite-clause grammars; extending other parsing algorithms, including LR methods and syntactic preference modeling algorithms; and efficient indexing.

* This research has been made possible in part by a gift from the Systems Development Foundation, and was also supported by the Defense Advanced Research Projects Agency under Contract N00039-84-K-0078 with the Naval Electronics Systems Command. The views and conclusions contained in this document should not be interpreted as representative of the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the United States government. The author is indebted to Fernando Pereira and Ray Perrault for their comments on earlier drafts of this paper.

1 Introduction

Grammar formalisms based on the encoding of grammatical information in complex-valued feature systems enjoy some currency both in linguistics and natural-language-processing research. Such formalisms can be thought of by analogy to context-free grammars as generalizing the notion of nonterminal symbol from a finite domain of atomic elements to a possibly infinite domain of directed graph structures of a certain sort. Many of the surface-based grammatical formalisms explicitly defined or presupposed in linguistics can be characterized in this way -- e.g., lexical-functional grammar (LFG) [5], generalized phrase structure grammar (GPSG) [4], even categorial systems such as Montague grammar [8] and Ades/Steedman grammar [1] -- as can several of the grammar formalisms being used in natural-language processing research -- e.g., definite clause grammar (DCG) [9] and PATR-II [13].

Unfortunately, in moving to an infinite nonterminal domain, standard methods of parsing may no longer be applicable to the formalism. For instance, the application of techniques for preprocessing of grammars in order to gain efficiency may fail to terminate, as in left-corner and LR algorithms. Algorithms performing top-down prediction (e.g., top-down backtrack parsing, Earley's algorithm) may not terminate at parse time. Implementing backtracking regimens -- useful, for instance, for generating parses in some particular order, say, in order of syntactic preference -- is in general difficult when LR-style and top-down backtrack techniques are eliminated.

In this paper, we discuss a solution to the problem of extending parsing algorithms to formalisms with possibly infinite nonterminal domains, a solution based on an operation we call restriction. In Section 2, we summarize traditional proposals for solutions and problems inherent in them and propose an alternative approach to a solution using restriction. In Section 3, we present some technical background, including a brief description of the PATR-II formalism -- which is used as the formalism interpreted by the parsing algorithms -- and a formal definition of restriction for
Algorithms performing top-dc~wn prediction (e.g. top-down backtrack parsing, Earley's algorithm) may not terminate at parse time. Implementing backtracking regimens~useful for instance for generating parses in some particular order, say, in order of syntactic preference--is in general difficult when LR-style and top-down backtrack techniques are eliminated. [n this paper, we discuss a s~dul.ion to the pr~,blem of ex- tending parsing algorithms to formalisms with possibly infi- nite nonterminal domains, a solution based on an operation we call restriction. In Section 2, we summarize traditional proposals for solutions and problems inherent in them and propose an alternative approach to a solution using restric- tion. In Section 3, we present some technical background including a brief description of the PATR-II formalism~ which is used as the formalism interpreted by the pars- ing algorithms~and a formal definition of restriction for 145 PATR-II's nonterminal domain. In Section 4, we develop a correct, complete and terminating extension of Earley's algorithm for the PATR-II formalism using the restriction notion. Readers uninterested in the technical details of the extensions may want to skip these latter two sections, refer- ring instead to Section 4.1 for an informal overview of the algorithms. Finally, in Section 5, we discuss applications of the particular algorithm and the restriction technique in general. 2 Traditional Solutions and an Al- ternative Approach Problems with efficiently parsing formalisms based on potentially infinite nonterminal domains have manifested themselves in many different ways. Traditional solutions have involved limiting in some way the class of grammars that can be parsed. 2.1 Limiting the formalism The limitations can be applied to the formalism by, for in- stance, adding a context-free "backbone." If we require that a context-free subgrammar be implicit in every grammar, the subgrammar can be used for parsing and the rest of the grammar used az a filter during or aRer parsing. This solu- tion has been recommended for functional unification gram- mars (FI,G) by Martin Kay [61; its legacy can be seen in the context-free skeleton of LFG, and the Hewlett-Packard GPSG system [31, and in the cat feature requirement in PATR-[I that is described below. However, several problems inhere in this solution of man- dating a context-free backbone. First, the move from context-free to complex-feature-based formalisms wan mo- tivated by the desire to structure the notion of nonterminal. Many analyses take advantage of this by eliminating men- tion of major category information from particular rules a or by structuring the major category itself (say into binary N and V features plus a bar-level feature as in ~-based theo- ries). F.rcing the primacy and atomicity of major category defeats part of the purpose of structured category systems. Sec, m,l. and perhaps more critically, because only cer- tain ,ff the information in a rule is used to guide the parse, say major category information, only such information can be used to filter spurious hypotheses by top-down filtering. Note that this problem occurs even if filtering by the rule information is used to eliminate at the earliest possible time constituents and partial constituents proposed during pars- ing {as is the case in the PATR-II implementation and the ~Se~'. 
Thus, if information about subcategorization is left out of the category information in the context-free skeleton, it cannot be used to eliminate prediction edges. For example, if we find a verb that subcategorizes for a noun phrase, but the grammar rules allow postverbal NPs, PPs, Ss, VPs, and so forth, the parser will have no way to eliminate the building of edges corresponding to these categories. Only when such edges attempt to join with the V will the inconsistency be found. Similarly, if information about filler-gap dependencies is kept extrinsic to the category information, as in a slash category in GPSG or an LFG annotation concerning a matching constituent for a ⇑ specification, there will be no way to keep from hypothesizing gaps at any given vertex. This "gap-proliferation" problem has plagued many attempts at building parsers for grammar formalisms in this style.

In fact, by making these stringent requirements on what information is used to guide parsing, we have to a certain extent thrown the baby out with the bathwater. These formalisms were intended to free us from the tyranny of atomic nonterminal symbols, but for good performance, we are forced toward analyses putting more and more information in an atomic category feature. An example of this phenomenon can be seen in the author's paper on LR syntactic preference parsing [14]. Because the LALR table-building algorithm does not in general terminate for complex-feature-based grammar formalisms, the grammar used in that paper was a simple context-free grammar with subcategorization and gap information placed in the atomic nonterminal symbol.

2.2 Limiting grammars and parsers

On the other hand, the grammar formalism can be left unchanged, but particular grammars developed that happen not to succumb to the problems inherent in the general parsing problem for the formalism. The solution mentioned above of placing more information in the category symbol falls into this class. Unpublished work by Kent Wittenburg and by Robin Cooper has attempted to solve the gap-proliferation problem using special grammars.

In building a general tool for grammar testing and debugging, however, we would like to commit as little as possible to a particular grammar or style of grammar.2 Furthermore, the grammar designer should not be held down in building an analysis by limitations of the algorithms. Thus a solution requiring careful crafting of grammars is inadequate.

2 See [12] for further discussion of this matter.

Finally, specialized parsing algorithms can be designed that make use of information about the particular grammar being parsed to eliminate spurious edges or hypotheses. Rather than using a general parsing algorithm on a
Coupling the grammar design and parser design problems in this way leads to the linguistic and technolog- ical problems becoming inherently mixed, magnifying the difficulty of writing an adequate grammar/parser system. 2.3 An Alternative: Using Restriction Instead, we would like a parsing algorithm that placed no restraints on the grammars it could handle as long as they could be expressed within the intended formalism. Still, the algorithm should take advantage of that part of the arbi- trarily large amount of information in the complex-feature structures that is significant for guiding parsing with the particular grammar. One of the aforementioned solutions is to require the grammar writer to put all such signifi- cant information in a special atomic symbol--i.e., mandate a context-free backbone. Another is to use all of the feature structure information--but this method, as we shall see, in- evitably leads to nonterminating algorithms. A compromise is to parameterize the parsing algorithm by a small amount of grammar-dependent information that tells the algorithm which of the information in the feature structures is significant for guiding the parse. That is, the parameter determines how to split up the infinite nontermi- nal domain into a finite set of equivalence classes that can be used for parsing. By doing so, we have an optimal compro- mise: Whatever part of the feature structure is significant we distinguish in the equivalence classes by setting the pa- rameter appropriately, so the information is used in parsing. But because there are only a finite number of equivalence ciasses, parsing algorithms guided in this way will terminate. The technique we use to form equivalence classes is re- strietion, which involves taking a quotient of the domain with respect to a rcstrietor. The restrictor thus serves as the sole repository, of grammar-dependent information in the algorithm. By tuning the restrictor, the set of equivalence classes engendered can be changed, making the algorithm more or less efficient at guiding the parse. But independent of the restrictor, the algorithm will be correct, since it is still doing parsing over a finite domain of "nonterminals," namely, the elements of the restricted domain. This idea can be applied to solve many of the problems en- gendered by infinite nonterminal domains, allowing prepro- cessing of grammars as required by LR and LC algorithms, allowing top-down filtering or prediction as in Earley and top-down backtrack parsing, guaranteeing termination, etc. 3 Technical Preliminaries Before discussing the use of restriction in parsing algorithms, we present some technical details, including a brief introduc- tion to the PATR-II grammar formalism, which will serve as the grammatical formalism that the presented algorithms will interpret. PATR-II is a simple grammar formalism that can serve as the least common denominator of many of the complex-feature-based and unification-based formalisms prevalent in linguistics and computational linguistics. As such it provides a good testbed for describing algorithms for complex-feature-based formalisms. 3.1 The PATR-II nonterminal domain The PATR-II nonterminal domain is a lattice of directed, acyclic, graph structures (dags). s Dags can be thought of similar to the reentrant f-structures of LFG or functional structures of FUG, and we will use the bracketed notation associated with these formalisms for them. For example. 
the following is a dag {D0) in this notation, with reentrancy indicated with coindexing boxes: a: d: b: c] I , i: k: I hl] Dags come in two varieties, complez (like the one above) and atomic (like the dags h and c in the example). Con~plex dags can be viewed a.s partial functions from labels to dag values, and the notation D(l) will therefore denote the value associated with the label l in the dag D. In the same spirit. we can refer to the domain of a dag (dora(D)). A dag with an empty domain is often called an empty dag or variable. A path in a dag is a sequence of label names (notated, e.g.. (d e ,f)), which can be used to pick out a particular subpart of the dag by repeated application {in this case. the dag [g : hi). We will extend the notation D(p) in the obvious way to include the subdag of D picked ~,tlt b.v a path p. We will also occasionally use the square brackets as l he dag c~mstructor function, so that [f : DI where D is an expression denoting a dag will denote the dag whose f feature has value D. 3.2 Subsumption and Unification There is a natural lattice structure for dags based on subsumption---an ordering cm ¢lag~ that l'~mghly c~rre~pon~l.~ to the compatibility and relative specificity of infi~rmation ~The reader is referred to earlier works [15.101 for more detailed dis- cussions of dag structures. 147 contained in the dags. Intuitively viewed, a dag D subsumes a dag D' {notated D ~/T) if D contains a subset of the in- formation in (i.e., is more general than)/Y. Thus variables subsume all other dags, atomic or complex, because as the trivial case, they contain no information at all. A complex dag D subsumes a complex dag De if and only if D(i) C D'(I) for all l E dora(D) and LF(P) =/Y(q) for all paths p and q such that D(p) = D(q). An atomic dag neither subsumes nor is subsumed by any different atomic dag. For instance, the following subsumption relations hold: a: m[b : c] ] field:el r'[a: {b:el]c d: ~ -- - - t: f e: f Finally, given two dags D' and D", the unification of the dags is the most general dag D such that LF ~ D and D a C_ D. We notate this D = D ~ U D". The following examples illustrate the notion of unification: to tb:cllot : ,lb:cl] [ a: {b:cl]u d - d The unification of two dags is not always well-defined. In the rases where no unification exists, the unificati,,n is said to fail. For example the following pair of dags fail to unify with each other: d d: [b d] =fail 3.3 Restriction in the PATR-II nontermi- r,.al domain Now. consider the notion of restriction of a dag, using the term almost in its technical sense of restricting the domain ,)f ,x function. By viewing dags as partial functions from la- bels to dag values, we can envision a process ,~f restricting the ,l~mlain of this function to a given set of labels. Extend- ing this process recursively to every level of the dag, we have the ,'-ncept of restriction used below. Given a finite, sperifi- ,'ati,,n ~ (called a restrictor) of what the allowable domain at ,,:u'h node of a dag is, we can define a functional, g', that yields the dag restricted by the given restrictor. Formally, we define restriction as follows. Given a relation between paths and labels, and a dag D, we define D~ to be the most specific dag LF C D such that for every path p either D'(p) is undefined, or if(p) is atomic, or for every ! E dom(D'(p)}, pOl. That is, every path in the restricted dag is either undefined, atomic, or specifically allowed by the restrictor. 
The restriction process can be viewed as putting dags into equivalence classes, each equivalence class being the largest set of dags that all are restricted to the same dag {which we will call its canonical member). It follows from the definition that in general O~O C_ D. Finally, if we disallow infinite relations as restrictors (i.e., restrictors must not allow values for an infinite number of distinct paths) as we will do for the remainder of the discussion, we are guaranteed to have only a finite number of equivalence classes. Actually, in the sequel we will use a particularly simple subclass of restrictors that are generable from sets of paths. Given a set of paths s, we can define • such that pOI if and only if p is a prefix of some p' E s. Such restrictors can be understood as ~throwing away" all values not lying on one of the given paths. This subclass of restrictors is sut~cient for most applications. However, tile algorithms that we will present apply to the general class as well. Using our previous example, consider a restrictor 4~0 gen- erated from the set of paths {(a b), (d e f),(d i j f)}. That is, pool for all p in the listed paths and all their pre- fixes. Then given the previous dag Do, D0~O0 is a: [b: e l Restriction has thrown away all the infi~rmatiou except the direct values of (a b), (d e f), and (d i j f). (Note however that because the values for paths such as (d e f 9) were thrown away, (D0~'¢o)((d e f)) is a variahh,.) 3.4 PATR-II grammar rules PATR-ll rules describe how to combine a sequence ,,f con- stituents. X, ..... X,, to form a constituent X0, stating mu- tual constraints on the dags associated with tile n + 1 con- stituents as unifications of various parts of the dags. For instance, we might have the following rule: Xo -" Xt .\': : (.\,, ,'sO = >' (.\', rat) = .X l' (.\': cat) = I'P (X, agreement) = (.\'~ agreement). By notational convention, we can eliminate unifications for the special feature cat {the atomic major category feature) recording this information implicitly by using it in the "name" of the constituent, e.g., 148 S-- NP VP: (NP agreement) = (VP agreement). If we require that this notational convention always be used (in so doing, guaranteeing that each constituent have an atomic major category associated with it}, we have thereby mandated a context-free backbone to the grammar, and can then use standard context-free parsing algorithms to parse sentences relative to grammars in this formalism. Limiting to a context-free-based PATR-II is the solution that previous implementations have incorporated. Before proceeding to describe parsing such a context-free- based PATR-II, we make one more purely notational change. Rather than associating with each grammar rule a set of unifications, we instead associate a dag that incorporates all of those unifications implicitly, i.e., a rule is associated with a dug D, such that for all unifications of the form p = q in the rule. D,(p) = D,(q). Similarly, unifications of the form p = a where a is atomic would require that D,(p) = a. For the rule mentioned above, such a dug would be X0: [cat: S] Xl : agreement: m[] [eat: V P ] X, : agreement : ,~I Thus a rule can be thought of as an ordered pair (P, D) whore P is a production of the form X0 -- XI -.. X, and D is a dug with top-level features Xo,..., X, and with atomic values for the eat feature of each of the top-level subdags. 
The two notational conventions--using sets of unifications instead of dags, and putting the eat feature information im- plicitly in the names of the constituents--allow us to write rules in the more compact and familiar.format above, rather than this final cumbersome way presupposed by the algo- rithm. 4 Using Restriction to Extend Ear- ley's Algorithm for PATR-II We now develop a concrete example of the use of restriction in parsing by extending Earley's algorithm to parse gram- mars in the PATR-[I formalism just presented. 4.1 An overview of the algorithms Earley's algorithm ia a bottom-up parsing algorithm that uses top-down prediction to hypothesize the starting points of possible constituents. Typically, the prediction step de- termines which categories of constituent can start at a given point in a sentence. But when most of the information is not in an atomic category symbol, such prediction is rela- tively useless and many types of constituents are predicted that could never be involved in a completed parse. This standard Earley's algorithm is presented in Section 4.2. By extending the algorithm so that the prediction step determines which dags can start at a given point, we can use the information in the features to be more precise in the predictions and eliminate many hypotheses. However. be- cause there are a potentially infinite number of such feature structures, the prediction step may never terminate. This extended Earley's algorithm is presented in Section 4.3. We compromise by having the prediction step determine which restricted dags can start at a given point. If the re- strictor is chosen appropriately, this can be as constraining as predicting on the basis of the whole feature structure, yet prediction is guaranteed to terminate because the domain -f restricted feature structures is finite. This final extension ,,f Earley's algorithm is presented in Section -t.4. 4.2 Parsing a context-free-based PATR-II We start with the Earley algorithm for context-free-based PATR-II on which the other algorithms are based. The al- gorithm is described in a chart-parsing incarnation, vertices numbered from 0 to n for an n-word sentence TL, I ' ' , Wn. An item of the form [h, i, A -- a.~, D I designates an edge in the chart from vertex h to i with dotted rule A -- a.3 and dag D. The chart is initialized with an edge [0, 0, X0 -- .a, DI for each rule (X0 -- a, D) where D((.% cat)) = S. For each vertex i do the following steps until no more items can be added: Predictor step: For each item ending at i c,f the form [h, i, Xo -- a.Xj~, D I and each rule ,ff the form (-\'o -- ~, E) such that E((Xo cat)) = D((Xi cat)), add an edge of the form [i, i,.I( 0 -- .3,, E] if this edge is not subsumed by another edge. Informally, this involves predicting top-down all r~tles whose left-hand-side categor~j matches the eatego~ of some constituent being looked for. Completer step: For each item of the form [h, i,.\o -- a., D] and each item of the form [9. h, Xo -- f3..Yj~/, E] add the item [9, i, X0 --/LY/.3', Eu iX/ : D(.X'0)I] if the unification succeeds' and this edge is not subsumed by another edge. s ~Note that this unification will fail if D((Xo eat)) # E((X~ cat)) and no edge will be added, i.e., if the subphrase is not of the appropriate category for IsNrtlos Into the phrase being built. SOue edge subsumes another edge if and only if the fit'at three elements of the edges are identical and the fourth element o{ the first edge subsumes that of the second edge. 
Informally, this involves forming a new partial phrase whenever the category of a constituent needed by one partial phrase matches the category of a completed phrase and the dag associated with the completed phrase can be unified in appropriately.

Scanner step: If i ≠ 0 and wi = a, then for all items [h, i-1, X0 -> α.a β, D], add the item [h, i, X0 -> α a.β, D]. Informally, this involves allowing lexical items to be inserted into partial phrases.

Notice that the Predictor step in particular assumes the availability of the cat feature for top-down prediction. Consequently, this algorithm applies only to PATR-II with a context-free base.

4.3 Removing the Context-Free Base: An Inadequate Extension

A first attempt at extending the algorithm to make use of more than just a single atomic-valued cat feature (or less, if no such feature is mandated) is to change the Predictor step so that, instead of checking the predicted rule for a left-hand side that matches its cat feature with the predicting subphrase, we require that the whole left-hand-side subdag unify with the subphrase being predicted from. Formally, we have

Predictor step: For each item ending at i of the form [h, i, X0 -> α.Xj β, D] and each rule of the form (X0 -> γ, E), add an edge of the form [i, i, X0 -> .γ, E ∪ [X0 : D(Xj)]] if the unification succeeds and this edge is not subsumed by another edge. This step predicts top-down all rules whose left-hand side matches the dag of some constituent being looked for.

Completer step: As before.

Scanner step: As before.

However, this extension does not preserve termination. Consider a "counting" grammar that records in the dag the number of terminals in the string:6

    S -> T :
        (T f) = a.
    T -> T A :
        (T1 f) = (T2 f f).
    T -> a.    A -> a.

6 Similar problems occur in natural language grammars when keeping lists of, say, subcategorized constituents or gaps to be found.

Initially, the S -> T rule will yield the edge

    [0, 0, X0 -> .X1, [X0: [cat: S]
                       X1: [cat: T, f: a]]]

which in turn causes the Prediction step to give

    [0, 0, X0 -> .X1 X2, [X0: [cat: T, f: [1]a]
                          X1: [cat: T, f: [f: [1]]]
                          X2: [cat: A]]]

yielding in turn

    [0, 0, X0 -> .X1 X2, [X0: [cat: T, f: [1][f: a]]
                          X1: [cat: T, f: [f: [1]]]
                          X2: [cat: A]]]

and so forth ad infinitum.

4.4 Removing the Context-Free Base: An Adequate Extension

What is needed is a way of "forgetting" some of the structure we are using for top-down prediction. But this is just what restriction gives us, since a restricted dag always subsumes the original, i.e., it has strictly less information. Taking advantage of this property, we can change the Predictor step to restrict the top-down information before unifying it into the rule's dag.

Predictor step: For each item ending at i of the form [h, i, X0 -> α.Xj β, D] and each rule of the form (X0 -> γ, E), add an edge of the form [i, i, X0 -> .γ, E ∪ [X0 : D(Xj)↾Φ]] if the unification succeeds and this edge is not subsumed by another edge. This step predicts top-down all rules whose left-hand side matches the restricted dag of some constituent being looked for.

Completer step: As before.

Scanner step: As before.
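A small self-contained sketch may make the mechanics concrete (Python over dict dags, ignoring the reentrancy bookkeeping that a full implementation would carry; all names are ours, not the paper's). It shows the path-set restrictor at work and why restricted prediction reaches a fixpoint on the counting grammar instead of looping:

    def restrict(dag, paths, prefix=()):
        """D restricted by a path-set restrictor: keep an arc only while the
        path to it is a prefix of an allowed path; subdags falling off the
        allowed paths become variables (empty dicts). Atoms are kept."""
        if not isinstance(dag, dict):
            return dag
        return {l: restrict(v, paths, prefix + (l,))
                for l, v in dag.items()
                if any(p[:len(prefix) + 1] == prefix + (l,) for p in paths)}

    # The rule T -> T A with (T1 f) = (T2 f f), encoded directly: from the
    # dag needed for the mother, build the dag needed for the daughter T.
    def predicted_daughter(needed):
        return {'cat': 'T', 'f': {'f': needed.get('f', {})}}

    needed = {'cat': 'T', 'f': 'a'}   # from the initial S -> T edge
    phi = {('cat',)}                  # restrictor: category information only
    seen = set()
    while True:
        needed = predicted_daughter(restrict(needed, phi))
        if repr(needed) in seen:
            break                     # same edge predicted twice: terminate
        seen.add(repr(needed))
    print(needed)                     # {'cat': 'T', 'f': {'f': {}}}

Without the call to restrict, the needed dag nests one f level deeper on every iteration, which is exactly the nontermination of Section 4.3.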
This algorithm, on the previous grammar, using a restrictor that allows through only the cat feature of a dag, operates as before, but predicts the first time around the more general edge:

    [0, 0, X0 -> .X1 X2, [X0: [cat: T, f: [1][]]
                          X1: [cat: T, f: [f: [1]]]
                          X2: [cat: A]]]

Another round of prediction yields this same edge, so the process terminates immediately. Because the predicted edge is more general than (i.e., subsumes) all of the infinite number of edges it replaced that were predicted under the nonterminating extension, it preserves completeness. On the other hand, because the predicted edge is not more general than the rule itself, it permits no constituents that violate the constraints of the rule; therefore, it preserves correctness. Finally, because restriction has a finite range, the prediction step can only occur a finite number of times before building an edge identical to one already built; therefore, it preserves termination.

5 Applications

5.1 Some Examples of the Use of the Algorithm

The algorithm just described has been implemented and incorporated into the PATR-II Experimental System at SRI International, a grammar development and testing environment for PATR-II grammars written in Zetalisp for the Symbolics 3600.

The following table gives some data suggestive of the effect of the restrictor on parsing efficiency. It shows the total number of active and passive edges added to the chart for five sentences of up to eleven words, using four different restrictors. The first allowed only category information to be used in prediction, thus generating the same behavior as the unextended Earley's algorithm. The second added subcategorization information in addition to the category. The third added filler-gap dependency information as well, so that the gap-proliferation problem was removed. The final restrictor added verb-form information. The last column shows the percentage of edges that were eliminated by using this final restrictor.
Second, restriction can be used to enhance other parsing algorithms. For example, the ancillary function to compute LR closure, which, like the Earley algorithm, either does not use feature information or fails to terminate, can be modified in the same way as the Earley Predictor step to terminate while still using significant feature information. LR parsing techniques can thereby be used for efficient parsing of complex-feature-based formalisms. More speculatively, schemes for scheduling LR parsers to yield parses in preference order might be modified for complex-feature-based formalisms, and even tuned by means of the restrictor.

Finally, restriction can be used in areas of parsing other than top-down prediction and filtering. For instance, in many parsing schemes, edges are indexed by a category symbol for efficient retrieval. In the case of Earley's algorithm, active edges can be indexed by the category of the constituent following the dot in the dotted rule. However, this again forces the primacy and atomicity of major category information. Once again, restriction can be used to solve the problem. Indexing by the restriction of the dag associated with the need permits efficient retrieval that can be tuned to the particular grammar, yet does not affect the completeness or correctness of the algorithm. The indexing can be done by discrimination nets, or specialized hashing functions akin to the partial-match retrieval techniques designed for use in Prolog implementations [16].
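As a rough illustration of such indexing (a sketch building on the restrict() fragment above, not the paper's implementation), active edges can be bucketed under a frozen copy of the restricted dag of the constituent they need; since restriction has a finite range, the number of buckets is finite. Genuine partial-match retrieval would need the discrimination nets or hashing schemes just mentioned:

    from collections import defaultdict

    def freeze(dag):
        """Turn a nested-dict dag into a hashable key."""
        if not isinstance(dag, dict):
            return dag
        return tuple(sorted((f, freeze(v)) for f, v in dag.items()))

    RESTRICTOR = {"cat": None}        # assumed restrictor
    active_edges = defaultdict(list)

    def index_edge(edge, needed_dag):
        # `restrict` as defined in the earlier sketch.
        active_edges[freeze(restrict(needed_dag, RESTRICTOR))].append(edge)

    def candidates(completed_dag):
        # Exact match on restricted keys; a real implementation would
        # retrieve every bucket whose key unifies with this dag.
        return active_edges[freeze(restrict(completed_dag, RESTRICTOR))]

    index_edge("S -> . T", {"cat": "T", "f": "a"})
    print(candidates({"cat": "T", "f": {"f": "a"}}))   # ['S -> . T']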
6 Conclusion

We have presented a general technique of restriction with many applications in the area of manipulating complex-feature-based grammar formalisms. As a particular example, we presented a complete, correct, terminating extension of Earley's algorithm that uses restriction to perform top-down filtering. Our implementation demonstrates the drastic elimination of chart edges that can be achieved by this technique. Finally, we described further uses for the technique, including parsing other grammar formalisms, such as definite-clause grammars; extending other parsing algorithms, including LR methods and syntactic preference modeling algorithms; and efficient indexing.

We feel that the restriction technique has great potential to make increasingly powerful grammar formalisms computationally feasible.

References

[1] Ades, A. E. and M. J. Steedman. On the order of words. Linguistics and Philosophy, 4(4):517-558, 1982.

[2] Ford, M., J. Bresnan, and R. Kaplan. A competence-based theory of syntactic closure. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, MIT Press, Cambridge, Massachusetts, 1982.

[3] Gawron, J. M., J. King, J. Lamping, E. Loebner, E. A. Paulson, G. K. Pullum, I. A. Sag, and T. Wasow. Processing English with a generalized phrase structure grammar. In Proceedings of the 20th Annual Meeting of the Association for Computational Linguistics, pages 74-81, University of Toronto, Toronto, Ontario, Canada, 16-18 June 1982.

[4] Gazdar, G., E. Klein, G. K. Pullum, and I. A. Sag. Generalized Phrase Structure Grammar. Blackwell Publishing, Oxford, England, and Harvard University Press, Cambridge, Massachusetts, 1985.

[5] Kaplan, R. and J. Bresnan. Lexical-functional grammar: a formal system for grammatical representation. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, MIT Press, Cambridge, Massachusetts, 1983.

[6] Kay, M. An algorithm for compiling parsing tables from a grammar. 1980. Xerox Palo Alto Research Center, Palo Alto, California.

[7] Matsumoto, Y., H. Tanaka, H. Hirakawa, H. Miyoshi, and H. Yasukawa. BUP: a bottom-up parser embedded in Prolog. New Generation Computing, 1:145-158, 1983.

[8] Montague, R. The proper treatment of quantification in ordinary English. In R. H. Thomason, editor, Formal Philosophy, pages 188-221, Yale University Press, New Haven, Connecticut, 1974.

[9] Pereira, F. C. N. Logic for natural language analysis. Technical Note 275, Artificial Intelligence Center, SRI International, Menlo Park, California, 1983.

[10] Pereira, F. C. N. and S. M. Shieber. The semantics of grammar formalisms seen as computer languages. In Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California, 2-7 July 1984.

[11] Pereira, F. C. N. and D. H. D. Warren. Parsing as deduction. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 137-144, Massachusetts Institute of Technology, Cambridge, Massachusetts, 15-17 June 1983.

[12] Shieber, S. M. Criteria for designing computer facilities for linguistic analysis. To appear in Linguistics.

[13] Shieber, S. M. The design of a computer language for linguistic information. In Proceedings of the Tenth International Conference on Computational Linguistics, Stanford University, Stanford, California, 2-7 July 1984.

[14] Shieber, S. M. Sentence disambiguation by a shift-reduce parsing technique. In Proceedings of the 21st Annual Meeting of the Association for Computational Linguistics, pages 113-118, Massachusetts Institute of Technology, Cambridge, Massachusetts, 15-17 June 1983.

[15] Shieber, S. M., H. Uszkoreit, F. C. N. Pereira, J. J. Robinson, and M. Tyson. The formalism and implementation of PATR-II. In Research on Interactive Acquisition and Use of Knowledge, SRI International, Menlo Park, California, 1983.

[16] Wise, M. J. and D. M. W. Powers. Indexing Prolog clauses via superimposed code words and field encoded words. In Proceedings of the 1984 International Symposium on Logic Programming, pages 203-210, IEEE Computer Society Press, Atlantic City, New Jersey, 6-9 February 1984.
Semantic Caseframe Parsing and Syntactic Generality

Philip J. Hayes, Peggy M. Andersen, and Scott Safier
Carnegie Group Incorporated
Commerce Court at Station Square
Pittsburgh, PA 15219 USA

Abstract

We have implemented a restricted-domain parser called Plume. Building on previous work at Carnegie-Mellon University, e.g. [4, 5, 8], Plume's approach to parsing is based on semantic caseframe instantiation. This has the advantages of efficiency on grammatical input and robustness in the face of ungrammatical input. While Plume is well adapted to simple declarative and imperative utterances, it handles passives, relative clauses, and interrogatives in an ad hoc manner, leading to patchy syntactic coverage. This paper outlines Plume as it currently exists and describes our detailed design for extending Plume to handle passives, relative clauses, and interrogatives in a general manner.

1 The Plume Parser

Recent work at Carnegie-Mellon University, e.g. [4, 5], has shown semantic caseframe instantiation to be a highly robust and efficient method of parsing restricted domain input. In this approach to parsing, a caseframe grammar contains the domain-specific semantic information, and the parsing program contains general syntactic knowledge. Input is mapped onto the grammar using this built-in syntactic knowledge. We have chosen this approach for Plume,[1] a commercial restricted domain parser, because of its advantages in efficiency and robustness.

Let us take a simple example from a natural language interface, called NLVMS, that we are developing under a contract with Digital Equipment Corporation. NLVMS is an interface to Digital's VMS[2] operating system for VAX[2] computers. The Plume grammar for this interface contains the following semantic caseframe[3] corresponding to the copy command of VMS:

    [*copy*
      :cf-type clausal
      :header copy
      :cases
        (file-to-copy
          :filler *file*
          :positional Direct-Object)
        (source
          :filler *directory*
          :marker from | out of)
        (destination
          :filler *file* | *directory*
          :marker to | into | in | onto) ]

This defines a caseframe called *copy* with three cases: file-to-copy, source, and destination. The file-to-copy case is filled by an object of type *file* and appears in the input as a direct object. Source is filled by a *directory* and should appear in the input as a prepositional phrase preceded, or marked, by the prepositions "from" or "out of". Destination is filled by a *file* or *directory* and is marked by "to", "into", "in", or "onto". Finally, the copy command itself is recognized by the header word indicated above (by :header) as "copy". Using this caseframe, Plume can parse inputs like:

    Copy foo.bar out of [x] into [y]
    From [x] to [y] copy foo.bar
    foo.bar copy from [x] to [y]

[1] More precisely, Plume is the name of the run-time system associated with Language Craft, an integrated environment for the development of natural language interfaces. The Plume parser, which translates English input into caseframe instances, is a major component of this run-time system. The other major component translates the caseframe instances into application-specific languages. In addition to the Plume run-time system, Language Craft includes grammar development tools, including a structured editor, and tracing and performance measurement tools. Both Plume and Language Craft are products of Carnegie Group and are currently in restricted release. Plume and Language Craft are trademarks of Carnegie Group Incorporated.

[2] VMS and VAX are trademarks of Digital Equipment Corporation.

[3] This is a simplified version of the rule we actually use in the grammar.
In essence, Plume's parsing algorithm is to find a caseframe header, in this case "copy", and use the associated caseframe, *copy*, to guide the rest of the parse. Once the caseframe has been identified, Plume looks for case markers, and then parses the associated case filler directly following the marker. Plume also tries to parse positionally specified cases, like direct object, in the usual position in the sentence (immediately following the header, for a direct object). Any input not accounted for at the end of this procedure is matched against any unfilled cases, so that cases that are supposed to be marked can be recognized without their markers, and positionally indicated cases can be recognized out of their usual positions. This flexible, interpretive style of matching caseframes against the input allows Plume to deal with the kind of variation in word order illustrated in the examples above.

The above examples implied there was some method to recognize files and directories.[4] They showed only atomic file and directory descriptions, but Plume can also deal with more complex object descriptions. In fact, in Plume grammars, objects as well as actions can be described by caseframes. For instance, here is the caseframe[5] used to define a file for NLVMS:

    [*file*
      :cf-type nominal
      :header file |
              !name ?(%period !extension)
      :cases
        (name
          :assignedp !name)
        (extension
          :assignedp !extension
          :marker written in
          :adjective <language>
          :filler <language>)
        (creator
          :filler *person*
          :marker created by)
        (directory
          :filler *directory*
          :marker in) ]

[4] In the syntax used with VMS, directories are indicated by square brackets.

[5] Again simplified. Plume automatically recognizes determiners and qualifiers associated with nominal caseframes.

This caseframe allows Plume to recognize file descriptions like:

    foo
    foo.bar
    the file created by John
    the fortran file in [x] created by John

The caseframe notation and parsing algorithm used here are very similar to those described above for clause-level input. The significant differences are additions related to the :adjective and :assignedp attributes of some of the cases above. While Plume normally only looks for fillers after the header in nominal caseframes, an :adjective attribute of a slot tells Plume that the slot filler may appear before the header. An :assignedp attribute allows cases to be filled through recognition of a header. This is generally useful for proper names, such as foo and foo.bar. In the example above, the second alternative header contains two variables, !name and !extension, that can each match any single word. The question mark indicates optionality, so that the header can be either a single word or a word followed by a period and another word. The first word is assigned to the variable !name, and the second (if it is there) to the variable !extension. If !name or !extension are matched while recognizing a file header, their values are placed in the name and extension cases of *file*.

With the above modifications, Plume can parse nominal caseframes using the same algorithm that it uses for clausal caseframes that account for complete sentences.
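As a rough picture of this shared matching loop, the toy sketch below (our own hypothetical simplification in Python, not the Plume implementation; markers and fillers are single words) consumes marked cases first and then lets leftover words fill the remaining cases:

    COPY_CF = {
        "header": "copy",
        "cases": {
            "file-to-copy": {},                       # positional direct object
            "source":       {"markers": {"from"}},
            "destination":  {"markers": {"to", "into", "onto"}},
        },
    }

    def instantiate(words, cf):
        rest = [w for w in words if w != cf["header"]]
        filled = {}
        # First pass: marked cases; a marker consumes the word after it.
        for case, spec in cf["cases"].items():
            for i, w in enumerate(rest):
                if w in spec.get("markers", set()):
                    filled[case] = rest[i + 1]
                    del rest[i:i + 2]
                    break
        # Second pass: unaccounted-for words fill the remaining cases, so
        # positional cases are recognized even out of their usual place.
        for case in cf["cases"]:
            if case not in filled and rest:
                filled[case] = rest.pop(0)
        return filled

    print(instantiate("from [x] to [y] copy foo.bar".split(), COPY_CF))
    # {'source': '[x]', 'destination': '[y]', 'file-to-copy': 'foo.bar'}

The same two-pass pattern is what lets a matcher of this kind accept the word-order variants shown earlier.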
However, there are some interactions between the two levels of parsing. In particular, there can be ambiguity about where to attach marked cases. For instance, in:

    Copy the fortran file in [x] to [y]

"in [x]" could either fill the directory case of the file described as "the fortran file" or could fill the destination case of the whole copy command. The second interpretation does not work at the global level because the only place to put "to [y]" is in that same destination case. However, at the time the file description is parsed, this information is not available, and so both possible attachments must be considered. In general, if Plume is able to fill a case of a nominal caseframe from a prepositional phrase, it also splits off an alternative parse in which that attachment is not made. When all input has been parsed, Plume retains only those parses that succeed at the global level, i.e., consume all of the input. Others are discarded.

The current implementation of Plume is based on the nominal- and clausal-level caseframe instantiation algorithms described above. Using these algorithms and a restricted domain grammar of caseframes like the ones shown above, Plume can parse a wide variety of imperative and declarative sentences relevant to that domain. However, there remain significant gaps in its coverage. Interrogatives are not handled at all; passives are covered only if they are explicitly specified in the grammar; and relative clauses can only be handled by pretending they are a form of prepositional phrase.

The regular and predictable relationship between simple statements, questions, and relative clauses, and between active and passive sentences, is well known. A parser which purports to interpret a domain-specific language specification using a built-in knowledge of syntax should account for this regularity in a general way. The current implementation of Plume has no mechanism for doing this. Each individual possibility for questions, relative clauses, and passives must be explicitly specified in the grammar. For instance, to handle reduced relative clauses as in "the file created by Jim", "created by" is listed as a case marker (compound preposition) in the creator slot of *file*, marking a description of the creator. To handle full relatives, the case marker must be specified as something like "?(which <be>) created by". While this allows Plume to recognize "the file which was created by Jim", "the file created by Jim", or even "the file created by Jim on Monday", it breaks down on something like "the file created on Monday by Jim" because the case marker "created by" is no longer a unit. Moreover, using the current techniques, Plume's ability to recognize the above inputs is completely unrelated to its ability to recognize inputs like:

    the file Jim created on Monday
    the person that the file was created by on Monday
    the day on which Jim created the file

If an interface could recognize any of these examples, it might seem unreasonable to a user that it could not recognize all of the others. Moreover, given any of the above examples, a user might reasonably expect recognition of related sentence-level inputs like:

    Create the file on Monday
    Jim created the file on Monday
    Did Jim create the file on Monday?
    Was the file created by Jim on Monday?
    Who created the file on Monday?
    What day was the file created on?

The current implementation of Plume has no means of guaranteeing such regularity of coverage. Of course, this problem of patchy syntactic coverage is not new for restricted domain parsers.
The lack of syntactic generality of the original semantic grammar [3] for the Sophie system [2] led to the concept of cascaded ATNs [10] and the RUS parser [1]. A progression with similar goals occurred from the LIFER system [9] to TEAM [6] and KLAUS [7]. The basic obstacle to achieving syntactic generality in these network-based approaches was the way syntactic and semantic information was mixed together in the grammar networks. The solutions, therefore, rested on separating the syntactic and semantic information.

Plume already incorporates just the separation of syntax and semantics necessary for syntactic generality: general syntactic knowledge resides in the parser, while semantic information resides in the grammar. This suggests that syntactic generality in a system like Plume can be achieved by improving the parser's caseframe instantiation algorithms without any major changes to grammar content. In terms of the above examples involving "create", it suggests we can use a single *create* caseframe to handle all the examples. We simply need to provide suitable extensions to the existing caseframe instantiation algorithms. In the next section, we present a detailed design for such extensions.

2 Providing Plume with Syntactic Generality

As described above, Plume can currently use clausal caseframes only to recognize single-clause imperative and declarative utterances in the active voice. This section describes our design for extending Plume so that relative and interrogative uses of clausal caseframes, in passive as well as active voice, can also be recognized from the same information. We will present our general design by showing how it operates for the following *create* caseframe in the context of NLVMS:

    [*create*
      :cf-type clausal
      :header <create>
      :cases
        (creator
          :filler *person*
          :positional Subject)
        (createe
          :filler *file*
          :positional Direct-Object)
        (creation-date
          :filler *date*
          :marker on) ]

Note that symbols in angle brackets represent non-terminals in a context-free grammar (recognized by Plume using pattern-matching techniques). In the caseframe definition above, <create> matches all morphological variants of the verb "create", including "create", "creates", "created", and "creating" (though not compound tenses like "is creating"; see below). Using the existing Plume, this would only allow us to recognize simple imperatives and active declaratives like:

    Create foo.bar on Monday
    Jim created foo.bar on Monday

2.1 Passives

Plume recognizes passive sentences through its processing of the verb cluster, i.e., the main verb plus the sequence of modal and auxiliary verbs immediately preceding it. Once the main verb has been located, a special verb cluster processing mechanism reads the verb cluster and determines from it whether the sentence is active or passive.[9] The parser records this information in a special case called %voice. If a sentence is found to be active, the standard parsing algorithm described above is used. If it is found to be passive, the standard algorithm is used with the modification that the parser looks for the direct object or the indirect object[10] in the subject position, and for the subject as an optional marked case with the case marker "by". Thus, given the *create* caseframe above, the following passive sentences could be handled as well as their active counterparts:
Relative clauses The detailed design presented below allows Plume to use the "create" caseframe to parse nominals hke: the tile J~m crearecl on Monclav the person tna~ the tile was created oy on Monday the day on vvn~ch Jtm create(:/ tl~e hie TO do tins. we ~ntroduce the conceDt of a relative case A relative case is a link back from the caseframes for the objects that fill the cases of a clausal caseframe to mat clausal caseframe. A grammar preprocessor generates a relatwe case automatically from each case of a clausal caseframe, associating ,t 'Nlth the nominal caseframe .~at fills the case in me clausal caseframe. Relative cases rio not need to be spemfied by the grammar writer For instance, a relative case ,s generaled from the createe case of "create" and rnctuded in the "hie" caseframe. It lOOkS like this: [*file* (:relative-cf *create* :relative-case-name createe :marker <create> ] 911 also clelerrrllnes I~le lense ol me sentence and whelne¢ ,l s ,Jllfltrrtallve or neqallVe IOSn ,I u~ere ,s a case .~,ln a OoSlhO.al mq.ecboiolecI $1ol me ,¢lGitec! .~DleCt is dlloweO lo iJasslv,ze Ne .:air thus uoderslano -;e~le,~<'es !IW~ " MaIV ,VaS ~iVell a boow " ,iOln I ",~ive ' .Ise!~,3me ,-,¢11 13oln a f]if~,-' ,~lecl ,llt(~ ,]ii ,it(~it'ecl )l}lel,~l ' ~'~ie 156 Note thai :marker is the same as :header of "create" Similar relative cases are generated in the "person" caseframe for the creator case. and in the "date" caseframe for the creation-date case. differing only in :relative-case-name Relative cases are used s~mdarly to the ordinary marked cases of nominal caseframes. In essence, ff the parser ~s parsmg a non,nat caseframe ~nd finds the marker of one of ~ts relative cases, then it tries to instanhate the :relative- cf It performs tms instantlatlon ~n the same way as ,f me relatwe.cf were a top-level clausal caseframe and the word that matched the header were ,is main verb. An ~mportan! d=fference ~s that it never tries to fill the case ,,,,nose name ~s g=ven by relative-case-name That case =s hlled by the nommal caseframe which contams the relative case For mstance, suppose the parser =s tryCng to process. 7"he file J~m createcl on MonclaV And suppose that ~t has already located "file ' and used that to determine ,t ,s ~nstanhat,ng a "file" nominal caseframe It ~s able to match {aga,nst 'created"~ me • marker of the relative caseframe of "hie' shown above. It then ~ries to ~nstanhate me relatwe.cf "create" using ~tS standard tecnmdues except real ~! does not try to fill createe the case of "create" specff=eo as the relallve-case- name Th~s mstanr~at~on succeeds wllh "Jim' gong =nip creator and "on Monday" bemg used to hll creatmn-date The parser then uses (a pomter to) the nommat caseframe currently being instant~ated. "file" to fill createe, the :relative-case-name case of "create" and the newly created instance of "create" is attached to this mstance of "file" as a modifier a b. ~t never looks any further left ,n the ~nout than the header of the nom=r'al caseframe or ,f ,t ~as already parsed any omer Oos'.-r~ommat cases of the nommal caseframe no further left than the r~ght hand end ot; them it COnsumes. 
but Otherwise ignores any relatwe pronouns iwno .,vn~;.m ~,.,n~n rr~ar ~ that ~mmediately precede the segment used to instantiate the relatwe-cf Tnlg ~neans rna~ 3/i words, including "thar" .~vdl ~e 3ccounrec #or ~n "t/le file ttlat Jim createc .:.)t~ ~/lonclay" it does not try to fill the case specified by the relative-case-name ~n the relative-of: =nstead tms case is filled by (a Oomter to) the Or~g=nal nommal caseframe tnstance: d. ff the relal=ve-case.name specifies a marked case rather than a positional one tn the relative.of then ~ts case marker can De consumed, but omerwtse ~gnored. durmg mstanhataon of me relatwe.cf This 3110w3 US tO deal wlln "on ~n me .gate Jim created ~he hie on" or "the care un whlcn jim created the file ' 3 Passwe relalave clauses (e g. "Ihe file that was created on Monday"t can generally be handled using the same mechanisms used Ior passwes at the main clause level However tn relative clauses, passives may sometimes be recIucec/ by om~thng the usual auxihary verb to be (and the relat=ve pronoun) as ~n: the file create(l on Monday To account for such reduced relative clauses, the verb cluster processor will produce approonate additional readings of the verio clusters ,n relahve clauses for which the relative pronoun JS m~ssmg This may lead to multlOle oarses, mcludmg one for the above example s~mdar to the correct one for: the file Jot~n crea[e~ on Monclay These amb=guaties wdl De taken care of by Plume s standard ambiguity reduction methods More comotetely. Plumes atgor~mm for relattve clauses ~s: 1. When processing a nommal caseframe. Plume scans for the ;markers of lhe rela{tve cases of the nominal caseframe at the same t~me as [t scans for the regular case markers ol: that nominal caseframe 2. If it finds a marker of a relatwe case. ~t rues to inst~ilntlate the relaltve.cf lust as though if were the Top-level clausal case|tame and the header were ~ts mmn '/erb. ~.xcept mat: 2 ] interrogatives in addmon to handling passaves 3no -e¢ahve :lauses. also wish {he =nformatlon ~n me "c'eate -"aseframe hanclle ~nterrogatlves tnvolvlng "create' ~cn 3s ,re to ~1C Jim create me hl~. {~n MG;I,I]V ' W,aS r/le /lie cre3teo OV J~m or} '.4L,",I.]/~ ,/I/ho c.reare(~ the hie On ~f,unc,av ' What clay was the hie crejleC ,: The prtmary diffiCulty for Plume .,.,~ln mterrogatwes ~s that 3S these examoles ShOw me number of variations in stanclard COnStituent order is much greater than for tmperatives and 157 dectaratJves. Interrogatives come in a w~de variety of forms. depending on whether the question is yes/no or wh: on which auxiliary verb ~s used: on whether the voice is active or passive: and for wh questions, on which case is queried. On the other hand. apart from var)ations in the order ancl placement of marked cases, there is only one standard constituent order for =mperatives and only two for declaratives (corresponding to active and passive voice). We have exl~lO=tecl th=s low variability by building knowledge of the imperative and declarative order into Plumes parsing algorithm. However this is impractical for the larger number of variations associalecl with interrogatives. Accordingly, we have designed a more data,driven approach This approach involves two Passes through the inpul: the first categorizes the input into one on several primary input categories incluOing yes-no questions, several kinds of wh- cluestions, statements, or ~mperat=ves. The second Pass performs a detaded parse of me input based on the ctassfficat=on made in the first Pass. 
The rules used contam bas=c syntactic ~nformat=on al3out Enghsn. and will rema,n constant for any of Plumes restricted domam grammars of semantic caseframes for Enghsh The first level of process=rig +nvolves an ordered set of r~D-/evel patterns. Each too.level pattern corresponds tO one of the primary =nput categor=es ment~onecl adore Th=s classificatory matchmg c~oes not attempt to match every +,vord +n the input sentence but only to do the ram=mum necessary to make the classdicat=on. Most of the relevant ,nformat~on is found at the beg=nnmg of the ~nDuts. In ioart=cular, the top-level patterns make use of the fronted aux=liary verb and wh-worcls tn questions. AS well as classffymg the input, th~s top-level match ,s also useci to determme the iclenttty of the caseframe To be =nstant=ated. Th=s =S =moortant to dO at this stage because the deta,led recognmon Ln the seconcl phase ts neav=ly de~enclent on the ~clent=ty of his top-level casetrame The special symbol. SverO. that appears exactly once =n all top- level patterns, matches a heacler of any clausal caseframe We call trte caseframe whose heacler is matcnecl by SverO the primary casetrame for that input. The second more detailed parsing phase is organized relative to the primary caseframe Associated with each top- level pattern, there is a corresponding parse femo/ate. A parse template specifies which parts of the primary caseframe will' be found in unusual positions and which parls the default parsing process (the one for declarat=ves and imperatives) can be used for. A simplified example of a top-level pattern for a yes-no question is: ~ <aux> (- ($verD !! <aux>)~ (&s SverOj Srest This top.level pattern w=ll match inputs hke. me followmg: D~ Jim create fop ~ Was fop creafecl Oy J~m ? The first element of the above top-level pattern ~s an auxiliary verlo, represented Dy me non-termmal <aux> Th~s auxdiary ~s remembered and used by the veto cluster processor (as though ~t were the first auxd~ary ~n the cluster) to determine tense and voice. AcCOrChng tO the next part of the pattern, some word that ts not a verb or an aux~hary must appear after the fronted auxdiary and before the mare verb ( is the negation operator, and !! marks a dislunction). Next. the scanmng operator &,~ tetls the hatcher to scan until it finds $vero which matches the header of any clausal caseframe F~nally. Srest matches the remaimng ~nDut. If the top-level pattern successfully matches. Plume uses the assoc~atecl Parse template to clirect ~ts more detaded processmg of the ~npul. The goal of this second pass through the input ~s to mstantiate the caseframe corresponding to the heacler matched by Sverlo in the top- level pattern, The concept of a kernel-casetrame is important to this stage of processmg. A kemel-caseframe Corresponcls to that part of an ~nput that can be processect according to the algorithm already budt into Plume for declarative and imperative Sentences, P Ihl fhl~ ~allern. .'~nly ii1OuIS wrlefe tl~e tronfecl auxlllarv .¢+ ,'he first worO ,~ rh~ sentence are alloweo t'he rrl()re ",'+=nplex ~anerr; ~al ,s achJally .lsecI P)v PfLIIn~ dllc)ws ofeuu~lfiol)dll.~/ i~l,|fke 0 "ases ',~ ionear i~lihaliv as ,,felt 158 The parse template associated with the above top-level pattern for yes/no questions is: aux kernel-casetrame + (:query) This template tells the parser that the input consists of the auxiliary verb matched in the first pass followed by a :kernel-caseframe. For example. ~n: O;d J~m create fop ~ the auxtliary verb. 
"did" appears hrst followed by a kernel- caseframe. "Jim create fop" Note ~ow the kernel- caseframe looks exactly like a declarative sentence, and so can be parsed according to the usual declarative/imperative parsing algorithm In addition to spec:ficatJon of where to find components of the primary caseframe a parse lemplate ~ncludes annotations (indicated by a plus sign) in the above template for yes/no questions, there =S lust one annotatton - ~uery. Some annotations, hke thiS one ,ndlcate what type of input has been found, while others direct the processing of the parse template. Annotations o! the first type record which case is being queried ~n wn questfons, mat ~s. which case ,s associated w,m the wh word. Wh questions thus include one of the following annotatTons SuOlect-query. Prelect-query. and mar~ea-case-que~ Marked case queries correspond to examples like: On what day d~d J~m create too ° What day d~d Jim create /oo on ~ in which a case marked by a preposition iS 13eing asked aPout. AS illustrated here me case-marker in such queries can either precede the wn word or appear somewhere .after the verO. To deal w;m this, me parse template for marked case quenes has the annotation tloa~na-case-marker. This annotation ~s of the second type thai ,s =t affects the way Plume processes the associated parse template. Some top-level patterns result ~n two poss=bdmlles for parse templates, For example, the follow=no top-level pattern < ,'/n.'NorO > < at.ix > i ( Sv~rto ii .-- at.ix > ~ $vf~rt~ $',f=.~t could match an ObleCt query or a marked case query, ~ncluding the following: What did Jsm create ~ By whom was fop created? sz Who was fop created Oy ? These ~nputs cannot be satisfactordy discriminated Oy a top- level pattern, so the above top-level pattern has twO different parse templates associated with it: wt~-ob/ect aux kemel-caseframe ÷ (oOlecr.query~ wig-marked-case-tiller aux kernel-caseframe + (roamed-case-query float~ng-case-mar~er} . When the above top-level pattern matches. Plume tries to parse the input using both of these parse templates, in general, only one wil! succeed Ln accounting for all me input, so the amb~gudy wdl De eliminated by the methods already built ~nto Plume. The method of parsing interrogatives presented above allows Plume to handle a wide variety of interrogatwes ~n a very general way using domain specific semantic caseframes. The writer of the caseframes does not have to worry about whether they will ioe used for ~mperative. declarative, or interrogative sentences. (or in relatwve clauses). He is free to concentrafe on the domain-specific grammar. In addition. the concept of the kernel-caseframe allows Plume to use the same efficient caseframe-based parsing algorithm that =t used for declarative and imperative sentences to parse malor subparts of questions. 3. Conclusion Prey,puS work (e.g. [4. 5. 81 / 3no exoer,ence .,vdh our current rmolementat~on of Plume. Carnegie 'Group s semantic caseframe parser, has ~nown semantic caseframe instanl=ation to be an efficient and mgnly roloust method of parsing restnctecl dommn tnout However hke other methods of parsing tleawly deoendent on restricted domain semantics these ,nmal attempts at parsers based on semantic caseframe =nslant;al~on suffer from palcny syntactic coverage. 159 After first describing the current ~mplementation of Plume, this paper presented a detaded design for endowing Plume with much broader syntact=c coverage including passives. interrogatives, and relat=ve clauses. 
Relative clauses are accommodated through some grammar preprocessing and a minor change in the processing of nominal caseframes Handling of interrogatives relies on a set of rules for classifying inputs into one of a limited number of types. Each of these types has one or more associated parse templates which guide the subsequent detailed parse of the sentence, As the final version of this paper is prepared (late April, 1985). the handling of passives and interrogatives has already been implemented in an internal development version of Plume. and relative clauses are expected to follow SOOn Though the above methods of incorporating syntactic generality into Plume do not Cover all of English syntax. trey show that a s=gnfficant degree of syntactic generality can Ioe provided straightforwardly t:)y a domain specific parser drtven from a semantic caseframe grarpmar References 1. Bobrow. R J. The RUS System 8BN Report 3878. Bolt. Beranek. and Newman. 1978 2. Brown. J. S and Burton, R R Multiple Representations of Knowledge for Tutorial Reasomng. In Representation and Understanding Bobrow. 0 G and Collins, A.. Ed., Academic Press. New York. 1975. pp. 311-349. 3. Burton, R. R. Semantic Grammar An Engineering Technique for Constructing Natural Language Understanding Systems. BBN Report 3453. Bolt. 8eranek, and Newman. Inc.. Cambridge. Mass.. Oecember. 1976. 4. Carbonell. J. G.. Boggs. W. M. Mauldin, M. L.. and Anick, P. G. The XCALIBUR Prolect: A Natural Language Interface to Expert Systems. Proc. Eighth Int. Jr. Conf on Artificial Intelligence. Karlsruhe. August. 1983. 5. Carbonetl. J. G. and Hayes P J. "Recovery Strategies for Parsing Extragrammatical Language" Comoutat~ona/ Lingulstscs 10 (1984). 6. Grosz, B. J. TEAM: A Transportable Natural Language Interface System Proc. Conf on Applied Natural Language Processing, Santa Mon,ca. February 1983 7. Haas. N and Hendnx. G G. An Approach to AccluJrmg and Applying Knowledge Proc. Nattonat Conference of the American Assoc=ation for Artific=al Intelligence. Stanford University. August. 1980. pp. 235-239 8. Hayes, P J. and Carbonetl. J G. Multt-Strategy Parsing and its Role ~n Robust Man-Machine Commun=cat=on. Carneg=e-Metlon Umvers=ty Computer Sc=ence Oepartment, May, 1981. 9. Hendnx. G. G. Human Engineering for Applied Natural Language Process=ng. Proc Fift~ Int. Jr. Conf on Art=fvctai Intelligence, MIT. 1977. pp. 183-191 10. Woods. W. A, "Cascaded ATN Grammars' Arnertc3r~ Journal of Computational Linguistics 6. 1 (August 1980Y 1-t2 160
TEMPORAL INFERENCES IN MEDICAL TEXTS

Klaus K. Obermeier
Battelle's Columbus Laboratories
505 King Avenue
Columbus, Ohio 43201-2693, USA

ABSTRACT

The objectives of this paper are twofold, whereby the computer program is meant to be a particular implementation of a general natural language [NL] processing system [NLPS] which could be used for different domains. The first objective is to provide a theory for processing temporal information contained in a well-structured, technical text. The second objective is to argue for a knowledge-based approach to NLP in which the parsing procedure is driven by extralinguistic knowledge. The resulting computer program incorporates enough domain-specific and general knowledge so that the parsing procedure can be driven by the knowledge base of the program, while at the same time employing a descriptively adequate theory of syntactic processing, i.e., X-bar syntax. My parsing algorithm not only supports the prevalent theories of knowledge-based parsing put forth in AI, but also uses a sound linguistic theory for the necessary syntactic information processing.

1.0 INTRODUCTION

This paper describes the development of a NLPS for analyzing domain-specific as well as temporal information in a well-defined text type. The analysis, i.e., output, of the NLPS is a data structure which serves as the input to an expert system. The ultimate goal is to allow the user of the expert system to enter data into the system by means of NL text which follows the linguistic conventions of English. The particular domain chosen to illustrate the underlying theory of such a system is that of medical descriptive texts which deal with patients' case histories of liver diseases. The texts are taken unedited from the Journal of the American Medical Association. The information contained in those texts serves as input to PATREC, an intelligent database assistant for MDX, the medical expert system [Chandrasekaran 83].

The objectives of this research are twofold, whereby the system described above is meant to be a particular implementation of a general NLPS which could be used for a variety of domains. The first objective is to provide a theory for processing temporal information contained in a given text. The second objective is to argue for a knowledge-based approach to NL processing in which the parsing procedure is driven by extralinguistic knowledge.

My NLPS, called GROK [Grammatical Representation of Objective Knowledge], is a functioning program which is implemented in ELISP and EFRL on a DEC 20/60. The full documentation, including source code, is available [Obermeier 84]. The program performs the following tasks: (1) parse a text from a medical journal while using linguistic and extralinguistic knowledge; (2) map the parsed linguistic structure into an event representation; (3) draw temporal and factual inferences within the domain of liver diseases; (4) create and update a database containing the pertinent information about a patient.

2.0 OVERVIEW

2.1 A Sample Text

The user of my NLPS can enter a text of the format given in Figure 1.[1] The texts which the NLPS accepts are descriptive for a particular domain. The information-processing task consists of the analysis of linguistic information into data structures which are chronologically ordered by the NLPS.

1. This 80-year-old Caucasian female complained of nausea, vomiting, abdominal swelling and jaundice.
2. She had diabetes mellitus, treated with insulin for six years before admission.
3.
She had had ill-defined gastrointestinal complaints for many years and occasional episodes of nausea and vomiting three years previously.
4. Four weeks before admission she developed pain across the upper abdomen, radiating to the flanks.
5. She also complained of shooting precordial pain and palpitation with slight exertion during this time.

Figure 1: Sample Text for Case No. 174.556.

[1] The numbering on the sentences is only for ease of reference in the following discussion and does not appear in the actual text.

The first module of the program analyzes each word by accessing a lexical component which assigns syntactic, semantic, and conceptual features to it. The second module consists of a bottom-up parser which matches the output from the lexical component to a set of augmented phrase structure rules.[2] The third module consists of a knowledge base which contains the domain-specific information as well as temporal knowledge. The knowledge base is accessed during the processing of the text in conjunction with the augmented phrase structure rules. The output of the program includes a lexical feature assignment as given in Figure 2, a phrase structure representation as given in Figure 3, and a knowledge representation as provided in Figure 4.

The resulting knowledge representation of my NLPS consists of a series of events which are extracted from the text and chronologically ordered by the NLPS based on the stored knowledge the system has about the domain and general temporal relations. The final knowledge representation (see Figure 5) which my NLPS generates is the input to the expert system or its database specialist. The final output of the expert system is a diagnosis of the patient.

[Figure 2: Lexical access for sentence (1) in Figure 1. The listing assigns each word its word class and domain features, e.g., AGE for "eighty-year-old", RACE for "Caucasian", SEX for "female", and SIGN&SYMPTOM for "nausea", "vomit", and "jaundice".]

[2] The augmentation consists of rules which contain knowledge about morphology, syntax, and the particular domain in which the NLPS is operating. These rules are used for interpreting the text, in particular ambiguities, as well as for generating the final output of the NLPS.

2.2 Scenario

The comprehension of a descriptive text requires various types of knowledge: linguistic knowledge for analyzing the structure of words and sentences; "world knowledge" for relating the text to our experience; and, in the case of technical texts, expert knowledge for dealing with information geared toward the domain expert. For the purpose of my research, I contend that the comprehension of technical, descriptive text is simply a conversion of information from one representation into another based on the knowledge of the NLPS.
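Viewed as such a conversion, the three modules form a simple pipeline; the stub below is purely illustrative (invented lexicon entries; GROK itself is implemented in ELISP and EFRL, not Python):

    LEXICON = {  # module 1: syntactic, semantic, conceptual features
        "complained": {"cat": "V", "concept": "REPORT"},
        "jaundice":   {"cat": "N", "concept": "SIGN&SYMPTOM"},
    }

    def lexical_access(words):
        return [LEXICON.get(w, {"cat": "?", "concept": None}) for w in words]

    def parse(entries):
        # Module 2: bottom-up matching against augmented phrase
        # structure rules (stubbed here as a simple filter).
        return [e for e in entries if e["concept"]]

    def interpret(phrases):
        # Module 3: consult the knowledge base to build event frames.
        return [{"event": e["concept"]} for e in phrases
                if e["concept"] == "SIGN&SYMPTOM"]

    print(interpret(parse(lexical_access("she complained of jaundice".split()))))
    # [{'event': 'SIGN&SYMPTOM'}]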
OIaaETES m[~ITuS ~EY .fVEWI' t~IIIISSION IIEI.A;~O~ -Q KIE~ (VIINT II ~IIAIIS IIIIFOIII[ ~T|0N: ~IX YEAIIS EYENI"3 SYIIPTrJe • GASTII~IrN'IrESTTN~6 ¢OMPt.AINT I([T [V(NT a~IOtSSION IEL..%T~011 r 0 KEY t=VI[NT ~ YEAIs 0UN411ON" ItJlV TI[JUt s (VENY4 S fMPTI]m. NaMS~A/"£011| T .lily ('41NT bDIII $~ZON II(LATION TO KI[~ .tVI~NV 3 YEJJIIS I|FQN| 0tJNiT~ QN: 1~[~| TTI~ 2t~ure -- % SLn, O:LfLe~I 5amD[e ~*tp,*t of [he Representation or ~er, tences [I. II, Jnd !~l from F~zure [ 2The augmentation consists of rules which contain know[edze about morphology, syntax, and the particular domain in which the NLPS is operatzng. These rules are used for inter- preting the text, Ln particular, embiguities, as well as for generating the final output ~f the NLFS. 3This partial parse of the sentence follows Jackendoff's X-bar theory [Jackendoff 77}, which ts discussed in [Obe rmeier 84, 851; roman numerals indicate the number of bars assigned to each phrase, Comments to the parse were made after the actual run of the program. 10 If a doctor were given a patient's case history (see Figure l), he would read the text and try to extract the salient pieces of infor- mation which are necessary for his diagnosis. In this particular text type, he would be in- terested in the sign, symptoms, and laboratory data, as well as the medical history of the patient. The crucial point hereby is the temporal information associated with the occurrences of these data. In general, he would try to cluster certain abnormal manifestations to form hypotheses which would result in a coherent diagnosis. The clustering would be based on the temporal succession of the information in the text. Each manifestation of abnormalities [ will refer to as an "event". Each event is defined and related to other events by means of temporal information explicitly or implicitly provided in the text. An important notion which [ use in my program is chat of a key event 4. "Events are or~anize~ around key events (which are domain-specific in the medical domain, some of the important ones are 'admission', 'surgery', 'accident', etc.), so that ocher events are typically stated or ordered with respect to these key events" [Micra[ 82]. 3.0 KNi~IrLF.DCE-BASED PARSING 3.1 Selection and OwganizaCion for the Knowledge Base [ have characterized the task of a doctor reading a patient's case history as finding key domain concepts (e.g., sign, symptom, laboratory data), relating them to temporal indicators (e.g, seven veers a~o), and ordering the events resulting from assignin R temporal indicators co key concepts with respect to a "key event" (e.g., at admission, at surgery). ([) This 80-year-old Caucasian female complained of nausea, vomiting, abdominal swe[[in~ ~nd iaundice. In the sample text in Figure l, the first sentence, given in (l) requires the following domain concepts: Patient: person identified by age, sex, and profession, whose signs, symptoms, and laboratory data will be given. Symptoms: manifestations of abnormalities repor[ed by the patient. Certain symptoms have to be further defined: swellin~ needs a characterization as to where it occurs. Pain can be characterized by its location, intensity. and nature (e.g., "shooting"). Signs: abnormalities found by the physician such as fever, jaundice, or swelling. 4The notion of "key event" is further discussed in 4.3 "Key Events". Whether "fever" is a sign or a symptom is indicated by the verb. Therefore, the verbs have features which indicate if the following is a sign or a symptom. 
There are no explicit temporal indicators in (1), except the tense marker on the verb. The doctor, however, knows chat case histories ordinarily use "admission" as a reference point. rF*SS[NT EVI~ ~SyIIPT~I ,SVAJ.UZ ¢14(,(4NtL ~SEAIV~IIT)AI~QMINAL 5WELL*dALMOICE' IK~Y-~y£~( SVALAJEIAmlISSIQNI~I I OURATI~[$VA~U~IAi~IISSI~III I CLASSIF I$VAL~IE II~IVl~AJ..Jll ,TYPE iSVAi*U[ L[V[NlrI~J, Figure 5: Final KnowledRe Representation of Event l kn EFRL (2) She had diabetes mellitus, treated with insulin for six veers before admission. The sentence in (2) requires a temporal concept "year" in conjunction with the numerical value "six", it also requires the concept "dur- ation" to represent the meaning of for. The "key event" at admission is mentioned explicitly and must be recognized as a concept by the system. After selecting the facts on the basis of about 35 case descriptions as well as previous research of the medical sublanguage [Hirschman 83] 5 , [ organized them into schemas based on what is known" about the particular text type. [n ]Bonnet 79], a medical summary is characterized as "a sequence of episodes that correspond Co phrases, sentences, or groups of sentences dealing with a single topic. These constitute the model and are represented bv schemas" [Bonnet 79, 80]. Schemas for the medical domain in Bonnet's system are $PATIENT- iNFORMATION (e.g., sex, job), SSICNS (e.g., [ever, jaundice). [n GROK, l use the schemas SREPORT-SICN, SREPORT-SYMPTOM, SREPORT-LAB-DATA, SPATIENT-[NFO. Each of my schemas indicates "who reports, what co whom, and when". The $REPORT-SYMPTOM schema has the following ele- ments: verb(unknown), subject(patient), object- (symptom), indirect object(medic), time(default is admission). After selecting the facts on the basis of the domain, and organizing them on the basis of the text-type, [ add one fact for putting the information into the target representation. The target representation consists of a temporal indicator attached to a domain-specific fact what [ had referred to in as "event". The event structure contains the following elements: name of domain-specific concept, reference point, duration (known or unknown), and relation to reference point (e.g., before, after). 51 use ten types of domain-specific facts: sign, symptom, lab data, body-part, etc., I use six temporal facts: month, year, day, week, duration, period, i.e., "for how long". 11 3.2 The Flow of Control In addition to domain-specific knowledge, a person reading a text also uses his linguistic knowledge of the English grammar. The problem for a NLPS is how to integrate linguistic and extra linguistic knowledge. The dominant paradigm in computational linguistics uses syntactic and morphological information before considering extra linguistic knowledge; if extra linguistic knowledge is used at all. Considering syntactic knowledge before any other type of knowledge has the following problems which are avoided if enough contextual information can be detected by the knowledge base of the NIPS: • global ambiguities cannot be resolved (e.g., Visitin~ relatives can be bortn~) • word-class ambiguities (e.g., bank) and structural ambiguities cause multiple parses (e.g. , [ saw the man on the hill with the telescope). Moreover, psycholinguistic experiments have shown [Marslen-Wilson 75, Marslen-Wilson 78, Marsten-Wilson 801 that the syntactic .,nalvsis of a sentence does not precede higher level processing bu~ interacts with seman=ic and pragmatic information. 
These findings are, to some extent, controversial, and not accepted by all psvcholinRuists. In my system, knowledge about the domain, the text-type, and the tarRet representation is used before and together with syntactic information. The syntactic information helps to select the interpretation of the sentence. Syntax functions as a filter for processing information. [t selects the constituents of a sentence, and groups them into larger "chunks", called phrases. The phrase types noun phrases [NP] and verb phrase [ V P I contain procedures to form concepts (e.g., "abdominal pain"). These concepts are combined by function specialists. Function specialists consists of procedures attached to function words (e.~., prepositions, determiners), fnflectional morphemes, and boundary markers (e.g., comma, period). Technically, [ distinguish between phrase ~pecialists and function specialists. The phrase ~pecialists interact with extra[tnguistic knowledge to determine which concepts are ey- pressed in a text, the function specialists de~ermine locally what relation these concepts have to each other. So in general, the phrase specialists are activated before the function specialists. To illustrate this process, consider the sentence: (3) The patient complained of shoottn~ pain across the flanks for three days before admission. The NP-specialist combines the and patient into a phrase. The central processing component in the sentence ls the VP-specialist. Its task is to find the verb-particle construction (complain of), and the object (e.g., shootin~ pain). The VP-specialist also looks at the syntactic and semantic characteristics of complain o__f_f. It notes that complain of expects a symptom in its object position. The expectation of a symptom invokes the schema "report-symptom". At this point, the schema could fill in missing information, e.~., if no subject had been mentioned, it could indicate that the patient is the subject. The schema identifies the current topic of the sentence, vlz., "symptom". CROK next encounters the word shootin~. This word has no further specification besides that of bein~ used as an adjective. The head noun pain points to a more complex entity "pain" which expects further specifications (e.~., location, type). It first tries to find any further specifications within the :malvzed part of the NP. [t finds shootin~ and adds this characteristic to the entity "pain". Since "pain" is usually specified in terms of its location, a place adverbial is expected. Upon the eqtry of across, the entity "pain" includes "acro~s" as a local ion marker, expect in~ as the next word a body-part. The next word, flank is a body-part, and the "pain" entity is completed. Note here, that the attachment of the preposition was ~uided by the information contained in the knowledge base. The next word for is a function word which can indicate duration. To determine which adverbial for Lntroduces, the system has to wait for the information from the following Nl'-specialist. After the numeric value "three", the temporal indicator "dav" identifies for as a duration marker. Explicit ~emporal indicators such as day, week, or month, under certain conditions in- troduce new events. As soon as GROK veri- fies that a temporal indicator started an event, it fills in the information from the "report- :<xx" ,~chema. The new event representation includes the sign, symptom, or laboratory data, and the temporal indicator. 
The last two words in the sample sentence before adm£ssion, pro- vide Khe missing information as to what "key event" the ~ewly created event [s related to. Once a new event frame or domain-specific frame is instnntiated) GROK can use the infor- mation associated with each event frame (e.g.) duration, key-event), together with the infor- mation from the domain-specific frame (e.g., the pain frame contains slots for specifying the location, intensity, and type of pain) to interpret the text. 12 4.0 TEMPORAL [NFO[~ATION PROCESSINC 4.1 Problems The inherent problems of text comprehension from an information processing viewpoint are how to deal with the foremost problems in computational NLP (e.g., ambiguity, anaphora, ellipsis, conjunction), including the foremost problems in temporal information processing (e.g., implicit time reference, imprecision of reference). Within A[ and computational linguistics, only a few theories have been proposed for the processing of temporal information [Kahn 77, Hirschman 8[, Kamp 7g, Allen 83l. in parti- cular, a theory of how a NLP can comprehend temporal relations in a written text is still missing. [n my research, [ present a theory for processing temporal information in a NLPS for a well-defined class of technical descrip- tive texts. The texts deal with a specific domain and tasks which require the processing of linguistic information into a chronological order of events. The problems for processing the temporal information contained in the text include: • a NLPS has to work with impli- cit temporal information. ALthough in (I), no explicit temporal reference is present, the NLPS has to detect the implied information from the context and the extra Linguis- tic knowledge available. • a NLPS has to work with fuzzy information. The reference tO for many years in (}) is fuzzy, and yet a NiPS has to relate it to the chronology of the case. • a NLPS has to order the events in their chronology although they are not temporally ordered in the text. 4.2 Solutions Hv solution to the problems discussed in the previous section lies within the computational paradigm as opposed co the Chomskyan generative paradi~m. The comFutationaL paradigm focuses nn how the comprehension pro- cesses are organized whereas within the gener- ative paradiRm, linguistic performance is of less importance for a Linguistic theory than Linguistic competence. Within the computational paradigm, the representation and use of extra- Linguistic knowledge is a maior part of studying Linguistic phenomena, whereas generative lin- guists separate linguistic phenomena which fall within the realm of syntax from other cognitive aspects [W~nograd 83, 21]. Functionality is the central theoretical concept upon which the design of GROK rests. What is important for comprehending language is the function of an utterance in a given situation. Words are used for their meaning, and the meaning depends on the use in a given context. The meaning of a word is subject to change according to the context, which is based on the function of the words that make up the text. Therefore, my approach to building a NLPS focuses on modeling the context of a text in a particular domain. [ am primarily concerned with the relationship between writer- text-reader, rather than with the relationship between two sentences. The use of the context for parsing requLres a knowledge representation of the domain, and the type of text, in addition to linguistic and empirical knowledge. 
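The behavior of the "for" specialist, waiting for the following NP before committing, can be sketched as follows (an assumed simplification in Python; the word lists are invented):

    TEMPORAL_UNITS = {"day", "days", "week", "weeks", "month", "months",
                      "year", "years"}
    NUMERALS = {"one", "two", "three", "four", "five", "six", "seven"}

    def for_specialist(following_np):
        """Decide which adverbial 'for' introduces, using the NP that
        the NP-specialist delivers after it."""
        if (len(following_np) >= 2 and following_np[0] in NUMERALS
                and following_np[1] in TEMPORAL_UNITS):
            return ("duration", " ".join(following_np[:2]))
        return ("other", None)

    print(for_specialist("three days".split()))  # ('duration', 'three days')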
In contradistinction to NLPSs which use syntactic information first [Thompson 81], and which possibly generate unnecessary structural descriptions, my system uses higher level information (e.g., domain, text-type) before and together with a usually smaller amount of syntactic information. In GROK, the syntactic information selects between contextual interpretations of the text; syntax acts as a filter for the NLPS.

In contradistinction to NLPSs which use conceptual information first [Schank 75], GROK, partially due to the limited information processing task and the particular domain, starts out with a small knowledge base and builds up data structures which are used subsequently in the processing of the text. The knowledge base of my system contains only the information it absolutely needs, whereas Schankian scripts have problems with when to activate scripts and when to exit them.

4.3 Key Events

Temporal information in a text is conveyed by explicit temporal indicators, implicit temporal relations based on what one knows about written texts (e.g., "time moves forward"), and "key events". I define a key event as a domain-specific concept which is used to order and group events around a particular key event. In my theory, temporal processing is based on the identification of key events for a particular domain, and their subsequent recognition by the NLPS in the text. Temporal indicators in a sentence are not of equal importance. The tense marking on the verb has been the least influential for filling in the event structure. For the program, the most important sources are adverbials. The linear sequence of sentences also contributes to the set-up of the configurations of events. My program makes use of two generally known heuristics: time moves forward in a narrative if not explicitly stated otherwise; the temporal reference of the subordinate clause is ordinarily the same as that in the main clause.

"Key events" are significant since they are used to relate events to one another. In my theory of text processing, key events build up the temporal structure of a text. If key events for other domains can be identified, they could be used to explain how a NLPS can "comprehend" the texts of the domain in question. The representation of temporal information is significant in my theory. I define an event as the result of the assignment of a temporal value to a domain-specific concept. The structure of an event is generalizable to other domains. An event consists of a domain-specific concept, a key event, a relation to the key event, and a duration. In the medical domain, the instantiated event contains information about how long and when a symptom or sign occurred, and what the key event of the instantiated event was.

Apart from the temporal issue, my research has shown that if the domain and the task of the NLPS are sufficiently constrained, the use of frames as a knowledge representation scheme is efficient in implementing GROK. In my program, I have used individual frames to represent single concepts (e.g., pain). These concepts help the NLPS to access the domain-specific knowledge base. Together with the temporal indicators, the information from the knowledge base is then transferred to the topmost event frame. Procedures are then used to relate various event frames to each other. The restrictions and checks on the instantiation of the individual frames preclude an erroneous activation of a frame.
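The four-part event structure just described can be pictured as follows. This is a sketch of ours, not GROK's frames; the field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Event:
        concept: str      # domain-specific concept, e.g. "shooting pain"
        key_event: str    # anchoring key event, e.g. "admission"
        relation: str     # relation to the key event, e.g. "before"
        duration: str     # e.g. "3 days"

    # Sentence (3) yields an event anchored to the key event "admission":
    e = Event(concept="shooting pain", key_event="admission",
              relation="before", duration="3 days")
    print(e)

Under this reading, instantiating an event amounts to filling all four slots, which is why the temporal indicator and the key-event phrase are both needed before the frame is complete.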
The viability of this approach shows that the idea of stereotypical representation of information is useful for a NLPS if properly constrained. My program checks for the accessibility of the various levels of the knowledge representation whenever new information is coming in. This multilayer approach constrains the instantiation of the event frame sufficiently in order to prevent erroneous event instantiation.

4.4 Comparison to Extant Theories on Temporal Processing

The overall ideas of GROK as they relate to or differ from the extant theories and systems are introduced by looking at four major issues concerning temporal processing:

• temporality: how is an event defined in the system; how is temporal information treated vis-a-vis the whole system? What search algorithms or inference procedures are provided?
• organization: are events organized on a time line, by key events, calendar dates, before/after chains?
• problems: how is imprecision, fuzziness, and incompleteness of data handled?
• testing: how can the system be tested; by queries, proofs, etc.? Does it have a consistency checker?

In GROK, I use an interval-based approach to temporal information processing. An event is defined as an entity of finite duration. As in [Kamp 79, 377], event structures are transformed into instants by the Russell-Wiener construction. In GROK, the NLPS processes temporal information by first associating a concept with a temporal reference, then evaluating the extension of this event. The evaluation considers syntactic (e.g., adverbials) and pragmatic information (current time focus). Each event is represented in the knowledge base with information about when, for how long, and what occurred. The parser, while analyzing the sentences, orders these events according to a "key event". The single events contain information about the temporal indicator which is attached to a domain-specific fact. The single events are connected to the respective "key event". "Key events" are domain-specific. In general, I stipulate that every domain has a limited number of such "key events" which provide the "hooks" for the temporal structure of a domain-specific text.

GROK also differs from logical theories in that it deals with discourse structures and their conceptual representations, not with isolated sentences and their truth value. It is different from Kahn's time specialist [Kahn 77] in that it uses domain knowledge and "knows" about temporal relations of a particular domain. Moreover, Kahn's program only accepts LISP-like input and handled only explicit temporal information. The use of domain-specific temporal knowledge also sets GROK apart from Allen's [Allen 83] temporal inference engine approach. GROK differs from Kamp's discourse structures in that it uses the notion of reference intervals that are based on conventional temporal units (e.g., day, week, month, year) to organize single events into chronological order. GROK is in many respects similar to research reported in [Hirschman 81]: both systems deal with temporal relations in the medical domain; both systems deal with implicit and explicit temporal information. GROK differs from Hirschman's system in that GROK uses domain-specific and other extralinguistic information for analyzing the text, whereas Hirschman relies primarily on available syntactic information. Therefore, Hirschman's system as presented in [Hirschman 81] can neither handle anaphoric references to continuous states nor represent imprecision in time specification.
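To make the ordering mechanism described above concrete, here is a minimal sketch of ours (not GROK's code) that sorts events into a chronology relative to a single key event, using days as the conventional reference unit; the events other than the shooting pain are invented for illustration.

    EVENTS = [
        {"concept": "shooting pain", "relation": "before", "days": 3},
        {"concept": "fever",         "relation": "before", "days": 14},
        {"concept": "biopsy",        "relation": "after",  "days": 2},
    ]

    def position(event):
        """Map 'n days before/after the key event' onto a signed time line."""
        sign = -1 if event["relation"] == "before" else 1
        return sign * event["days"]

    for ev in sorted(EVENTS, key=position):
        print(ev["concept"], position(ev))   # fever -14, shooting pain -3, biopsy +2

The key event itself (here, admission) sits at position zero; events are ordered by their relation to it rather than by their order of mention in the text.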
4.5 State of Implementation

GROK is a highly exploratory program. The limitations of the current implementation are in three areas:

• The parser itself does not provide the capability of a chart parser, since it will not give different interpretations of a structurally ambiguous sentence. This type of structural ambiguity, where one constituent can belong to two or more different constructions, would not be detected.
• The knowledge base does not have a fully implemented frame structure. Each generic frame has a certain number of slots that define the concept. A generic concept (e.g., sign) must have slots which contain possible attributes of the specific frame (e.g., where is the sign found; how severe is its manifestation). These slots have not yet been implemented. The number of frames is strictly limited to the temporal frames and a few exemplary generic frames necessary to process the text.
• The range of phenomena is limited. Only "before-admission" references are recognized by the system. Furthermore, slots that prevent the inheritance of events of limited durations are not yet in place.

In general, GROK is still in a developmental stage at which a number of phenomena have yet to be accounted for through an implementation.

5.0 CONCLUSION

In this paper, I argued for an integration of insights gained from linguistic, psychological, and AI-based research to provide a pragmatic theory and cognitive model of how temporal inferences can be explained within the framework of computational information processing. A pragmatic theory focuses on the information from the context (e.g., co-text, discourse situation, intentions of interlocutors) to explain linguistic behavior. I have shown how an integration of linguistic and extralinguistic knowledge achieves a form of comprehension, where comprehension is characterized as a conversion of information, based on knowledge, from one representation into another. I have also shown how this approach leads to a parsing technique which avoids common pitfalls and, at the same time, is consistent with results in psycholinguistic research. I have furthermore shown that such a procedural approach is a basis for an event-based theory for temporal information processing.

In particular, the findings implemented in GROK show the shortcomings of the orthodox rule-based approach to language processing, which reduces words to tokens in a larger context while overemphasizing the role of the phrase and sentence level. It does this by providing a temporal knowledge representation and algorithms for processing pragmatic information which are applicable to a wider range of phenomena than most of the notable computational NL theories within the field of AI [Schank 81, Rieger 79, Wilks 75] or linguistics [Marcus 80]. In particular, my research shows that:

• NL can be processed realistically by a deterministic algorithm which can be interpreted in a mental model. A realistic NLPS tries to emulate human behavior. A deterministic parser works under the assumption (1) that a human NLPS makes irrevocable decisions during processing and (2) that humans are not unconstrained "wait-and-see parsers" [Kac 82]. A mental model provides an internal representation of the states of affairs that are described in a given sentence [Johnson-Laird 81].
• Temporal information processing is adequately explained only in a pragmatic theory that captures the duality of interval and point-based representation of time.
In my theory, temporal processing is possible because of domain-specific key events which provide the "hooks" for the temporal structure of a text.

• NL can be processed efficiently by a set of integrated linguistic and extralinguistic knowledge sources.

REFERENCES

[Allen 83] Allen, J. F. Maintaining Knowledge about Temporal Intervals. CACM 26, 1983.
[Bonnet 79] Bonnet, A. Understanding Medical Jargon as if it were Natural Language. In Proceedings of IJCAI 6, 1979.
[Chandrasekaran 83a] Chandrasekaran, B. and Mittal, S. Conceptual Representation of Medical Knowledge for Diagnosis by Computer: MDX and Associated Systems. Advances in Computers, Vol. 22, 1983.
[Hirschman 81] Hirschman, L. and Story, G. Representing implicit and explicit time relations in narrative. In IJCAI 81, 1981.
[Hirschman 83] Hirschman, L. and Sager, N. Automatic Information Formatting of a Medical Sublanguage. In Kittredge (editor), Sublanguage. deGruyter, 1983.
[Johnson-Laird 81] Johnson-Laird, P. N. Mental Models of Meaning. In Joshi, A., Webber, B., and Sag, I. (editors), Elements of Discourse Understanding. Cambridge University Press, 1981.
[Kac 82] Kac, M. B. Marcus: A theory of syntactic recognition for NL (Review). Language 58:447-454, 1982.
[Kahn 77] Kahn, K. and Gorry, G. A. Mechanizing Temporal Knowledge. Artificial Intelligence 9, 1977.
[Kamp 79] Kamp, H. Events, Instants and Temporal Reference. In Baeuerle, R., Egli, U., and von Stechow, A. (editors), Semantics from Different Points of View. Springer, 1979.
[Marcus 80] Marcus, M. A Theory of Syntactic Recognition for Natural Language. MIT Press, 1980.
[Marslen-Wilson 75] Marslen-Wilson, W. D. Sentence perception as an interactive parallel process. Science 189, 1975.
[Marslen-Wilson 78] Marslen-Wilson, W. and Welsh, A. Processing interactions and lexical access during word recognition in continuous speech. Cognitive Psychology 10, 1978.
[Marslen-Wilson 80] Marslen-Wilson, W. and Tyler, L. The temporal structure of spoken language understanding: the perception of sentences and words in sentences. Cognition 8, 1980.
[Mittal 82] Mittal, S. Event-based Organization of Temporal Databases. In Proceedings of the 4th National Conference of the Canadian Society for Computational Studies of Intelligence, Saskatoon, Canada, 1982.
[Obermeier 84] Obermeier, K. Temporal Inferences in Computational Linguistic Information Processing. The Ohio State University, Ph.D. Dissertation, 1984.
[Obermeier 85] Obermeier, K. GROK: a natural language front end for medical expert systems. In Proceedings of the 5th International Workshop on Expert Systems and Their Applications, Palais des Papes, Avignon, France, May 13-15, 1985.
[Rieger 79] Rieger, C. and Small, S. Word Expert Parsing. 6th IJCAI, 1979.
[Schank 75] Schank, R. Conceptual Information Processing. North Holland, 1975.
[Schank 81] Schank, R. C. and Riesbeck, C. K. Inside Computer Understanding: Five Programs Plus Miniatures. Lawrence Erlbaum Associates, 1981.
[Thompson 81] Thompson, H. Chart Parsing and rule schemata in PSG. 19th Annual Meeting of the ACL, 1981.
[Wilks 75] Wilks, Y. An intelligent analyzer and understander of English. CACM 18, 1975.
[Winograd 83] Winograd, T. Language as a Cognitive Process. Addison-Wesley, 1983.
1985
2
MOVEMENT IN ACTIVE PRODUCTION NETWORKS

Mark A. Jones
Alan S. Driscoll
AT&T Bell Laboratories
Murray Hill, New Jersey 07974

ABSTRACT

We describe how movement is handled in a class of computational devices called active production networks (APNs). The APN model is a parallel, activation-based framework that has been applied to other aspects of natural language processing. The model is briefly defined, the notation and mechanism for movement is explained, and then several examples are given which illustrate how various conditions on movement can naturally be explained in terms of limitations of the APN device.

1. INTRODUCTION

Movement is an important phenomenon in natural languages. Recently, proposals such as Gazdar's derived rules (Gazdar, 1982) and Pereira's extraposition grammars (Pereira, 1983) have attempted to find minimal extensions to the context-free framework that would allow the description of movement. In this paper, we describe a class of computational devices for natural language processing, called active production networks (APNs), and explore how certain kinds of movement are handled. In particular, we are concerned with left extraposition, such as Subject-auxiliary Inversion, Wh-movement, and NP holes in relative clauses. In these cases, the extraposed constituent leaves a trace which is inserted at a later point in the processing. This paper builds on the research reported in Jones (1983) and Jones (forthcoming).

2. ACTIVE PRODUCTION NETWORKS

2.1 The Device

Our contention is that only a class of parallel devices will prove to be powerful enough to allow broad contextual priming, to pursue alternative hypotheses, and to explain the paradox that the performance of a sequential system often degrades with new knowledge, whereas human performance usually improves with learning and experience. There are a number of new parallel processing (connectionist) models which are sympathetic to this view -- Anderson (1983), Feldman and Ballard (1982), Waltz and Pollack (1985), McClelland and Rumelhart (1981, 1982), and Fahlman, Hinton and Sejnowski (1983).

Many of the connectionist models use iterative relaxation techniques with networks containing excitatory and inhibitory links. They have primarily been used as best-fit categorizers in large recognition spaces, and it is not yet clear how they will implement the rule-governed behavior of parsers or problem solvers. Rule-based systems need a strong notion of an operating state, and they depend heavily on appropriate variable binding schemes for operations such as matching (e.g., unification) and recursion. The APN model directly supports a rule-based interpretation, while retaining much of the general flavor of connectionism. An active production network is a rule-oriented, distributed processing system based on the following principles:

1. Each node in the network executes a uniform activation algorithm and assumes states in response to messages (such as expectation, inhibition, and activation) that arrive locally; the node can, in turn, relay messages, initiate messages, and spawn new instances to process message activity. Although the patterns that define a node's behavior may be quite idiosyncratic or specialized, the algorithm that interprets the pattern is the same for each node in the network.

2. Messages are relatively simple.
They have an associated time, strength, and purpose (e.g., to post an expectation). They do not encode complex structures such as entire binding lists, parse trees, feature lists, or meaning representations.² Consequently, no structure is explicitly built; the "result" of a computation consists entirely of the activation trace and the new state of the network.

Figure 1 gives an artificial, but comprehensive, example of an APN grammar in graphical form. The grammar generates the strings a, b, acd, ace, bcd, bce, fg and gf, and illustrates many of the pattern language features and grammar writing paradigms. The network responds to sources which activate the network at its leaves. Activation messages spread "upward" through the network. At conjunctive nodes (seq and and), expectation messages are posted for the legal continuations of the pattern; inhibition messages are sent down previous links when new activations are recorded.

[Figure 1: A Sample APN]

In parsing applications, partially instantiated nodes are viewed as phrase structure rules whose next constituent is expected. The sources primarily arise from exogenous strobings of the network by external inputs. In generation or problem solving applications, partially instantiated nodes are viewed as partially satisfied goals which have outstanding subgoals whose solutions are desired. The sources in this case are endogenously generated. The compatibility of these two views not only allows the same network to be used for both parsing and generation, but also permits processes to share in the interaction of internal and external sources of information. This compatibility, somewhat surprisingly, turned out to be crucial to our treatment of movement, but it is also clearly desirable for other aspects of natural language processing in which parsing and problem solving interact (e.g., reference resolution and inference).

²For a similar connectionist view, see Feldman and Ballard (1982) or Waltz and Pollack (1985). A comparison of marker passing, value passing, and unrestricted message passing systems is given in Fahlman, Hinton and Sejnowski (1983).

2.2 Patterns

Each node in an APN is defined by a pattern, written in the pattern language of Figure 2. A pattern describes the messages to which a node responds, and the new messages and internal states that are produced. Each subpattern of the form ($ v binding-pat) in the pattern for node N is a variable binding site; a variable binding takes place when an instance of a node in binding-pat activates a reference to variable v of node N. Implicitly, a pattern defines the set of states and state transitions for a node. The ? (optionality), + (repetition) and * (optional repetition) operators do not extend the expressiveness of the language, but have been added for convenience. They can be replaced in preprocessing by equivalent expressions.³ Formal semantic definitions of the message passing behavior for each primitive operator have been specified.

pattern ::= binding-site
          | (seq pattern ...)
          | (and pattern ...)
          | (or pattern ...)
          | (? pattern)
          | (+ binding-site)
          | (* binding-site)
binding-site ::= ($ var binding-pattern)
binding-pattern ::= node
          | (and binding-pattern ...)
          | (or binding-pattern ...)

Figure 2: The APN Pattern Language

An important distinction that the pattern language makes is in the synchronicity⁴ of activation signals.
The pattern (and ($ v1 X) ($ v2 Y)) requires that the activation from X and Y emanate from distinct network sources, while the pattern ($ v (and X Y)) insists that instances of X and Y are activated from the same source. In the graphical representation of an APN, synchrony is indicated by a short tail above the subpattern expression; the definition of U in Figure 1 illustrates both conventions: (and ($ v1 (and T1)) ($ v2 S)).

³The exact choice of operators in the pattern language is a somewhat separate issue from the specification of the APN machinery.
⁴The current APN model allocates sources sequentially. The term synchronicity reflects the fact that the source identity of two activation messages can be locally computed from their time of origin. Alternatively, activation messages could carry the source identity as an additional parameter; in this case message activations can overlap. For relatively independent sources, overlap may not pose a problem.

2.3 An Example

Figure 3 shows the stages in parsing the string acd. An exogenous source Exog-src0 first activates a, which is not currently supported by a source and, hence, is in an inactive state. The activation of an inactive or inhibited node gives rise to a new instance (n0) to record the binding. The instance is effectively a new node in the network, and derives its pattern from the spawning node. The activation spreads upward to the other instances shown in Figure 3(a). The labels on each node indicate the current activation level, represented as an integer between 0 and 9, inclusive.

[Figure 3: Stages in parsing acd. Panels: (a) trace structure after a; (b) trace structure after ac; (c) trace structure after acd.]

The activation of a node causes its pattern to be (re)instantiated and a variable to be (re)bound. For example, in the activation of R0, the pattern (seq ($ v1 Q) ($ v2 c)) is replaced by (seq ($ v1 (or Q Q0)) ($ v2 c)), and the variable v1 is bound to Q0. For simplicity, only the active links are shown in Figure 3. R0 posts an expectation message for node c, which can further its pattern. The source Exog-src0 is said to be supporting the activation of nodes n0, Q0, R0 and P0 above it, and the expectations or inhibitions that are generated by these nodes. For the current paper we will assume that exogenous sources remain fully on for the duration of the sentence.⁵

In Figure 3(b), another exogenous source Exog-src1 activates c, which furthers the pattern for R0. R0 sends an inhibition message to Q0, posts expectations for S, and relays an activation message to P0, which rebinds its variable to R0 and assumes a new activation value. Figure 3(c) shows the final situation after d has been activated. The synchronous conjunction of S0 is satisfied by T0 and d0. R0 is fully satisfied (activation value of 9), and P0 is re-satisfied.

2.4 Grammar Writing Paradigms

The APN in Figure 1 illustrates several grammar writing paradigms. The situation in which an initial prefix string (a or b) satisfies a constituent (P), but can be followed by optional suffix strings (cd or ce), occurs frequently in natural language grammars. For example, noun phrase heads in English have optional prenominal and postnominal modifiers.
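Returning briefly to the pattern language of Figure 2: the remark that ?, + and * are dispensable conveniences can be made concrete. The following is an illustrative sketch of ours in Python -- not code from the APN system -- treating patterns as nested tuples and assuming, as a convention of the sketch only, that an empty (seq) serves as the empty pattern. Repetition is eliminated by introducing a fresh auxiliary node.

    import itertools

    fresh = (f"AUX{i}" for i in itertools.count())
    EMPTY = ('seq',)    # assumed epsilon pattern: an empty sequence

    def desugar(p, defs):
        """Return an equivalent pattern using only seq/and/or and binding sites.
        defs collects definitions of auxiliary nodes created for + and *."""
        if not isinstance(p, tuple) or p[0] == '$':
            return p                                       # node name or ($ v pat)
        op, *args = p
        if op == '?':                                      # (? p) => (or p empty)
            return ('or', desugar(args[0], defs), EMPTY)
        if op in ('+', '*'):                               # loop via a new node N:
            n = next(fresh)                                #   N => (seq b (or N empty))
            defs[n] = ('seq', args[0], ('or', n, EMPTY))
            return n if op == '+' else ('or', n, EMPTY)    # * is (N | empty)
        return (op, *(desugar(a, defs) for a in args))

    defs = {}
    rule = ('seq', ('$', 'v1', 'c1'), ('$', 'v2', 'h'), ('*', ('$', 'v3', 'a')))
    print(desugar(rule, defs))
    print(defs)

The sketch only shows that the three convenience operators add no expressive power; the paper's formal message-passing semantics is defined over the primitive operators alone.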
The synchronous disjunction at P allows the local role of a or b to change, while preserving its interpretation as part of a P. It is also simple to encode optional prefixes.

Another common situation in natural language grammars is specialization of a constituent based on some internal feature. Noun phrases in English, for example, can be specialized by case; verb phrases can be specialized as participial, tensed or infinitive. In Figure 1, node S is a specialization which represents "Ts with d-ness or e-ness, but not f-ness." The specialization is constructed by a synchronous conjunction of features that arise from subtrees somewhere below the node to be specialized.

The APN model also provides for node outputs to be partitioned into independent classes for the purposes of the activation algorithm. The nodes in the classes form levels in the network and represent orthogonal systems of classification. The cascading of expectations from different levels can implement context-sensitive behaviors such as feature agreement and semantic selectional restrictions. This is described in Jones (forthcoming). In the next section, we will introduce a grammar writing paradigm to represent movement, another type of non-context-free behavior.

⁵It is interesting to speculate on the consequences of various relaxations of this capability. Fundamental limitations in the allocation of sources may be related to limitations in short-term memory (or buffer space in deterministic models; see Marcus, 1980). Linguistic constraints on constituent length could be related to source decay, and syntactic garden path behavior might be related to accelerated source decay caused by inhibition from a competing hypothesis. Anything more than a guess is premature at this point.

3. MOVEMENT

From the APN perspective, movement (limited here to left-extraposition) necessitates the endogenous reactivation of a trace that was created earlier in the process. To capture the trace so that expectations for its reactivation can be posted, we use the following type of rule: (seq ($ v1 ... X ...) ($ v2 ... (and X X-src Y) ...)). When an instance, X0, first activates this rule, v1 is bound to X0; the second occurrence of X in the rule is constrained to match instances of X0, and expectations for X0, X-src and Y are created. No new exogenous source can satisfy the synchronous conjunction; only an endogenous X-src can. The rule is similar to the notion of an X followed by a Y with an X hole in it (cf. Gazdar, 1982).

[Figure 4: A Grammar for Relative Clauses]

Figure 4 defines a grammar with an NP hole in a relative clause; other types of left-extraposition are handled analogously. Our treatment of relatives is adapted from Chomsky and Lasnik (1977). The movement rule for S″ is: (seq ($ v1 (and Comp Rel (or Exog-src PRO-src))) ($ v2 (and Rel Rel-src S))). The rule restricts the first instance of Rel to arise either from an exogenous relative pronoun such as which or from an endogenously generated (phonologically null) pronoun PRO. The second variable is satisfied when Rel-src simultaneously reactivates a trace of the Rel instance and inserts an NP-trace into an S.

It is instructive to consider how phonologically null pronouns are inserted before we discuss how movement occurs by trace insertion. The phrase, [NP the cat [S″ PRO that ...]], illustrates how a relative pronoun PRO is inserted. Figure 5(a) shows the network after parsing the cat.
When the complementizer that appears next in the input, PRO-src receives inhibition (marked by downward arrows in Figure 5(b)) from Rel-Comp0. Non-exogenous sources such as PRO-src and Rel-src are activated in contexts in which they are expected and then receive inhibition. Figure 5(c) shows the resulting network after PRO-src has been activated. The inserted pronoun behaves precisely as an input pronoun with respect to subsequent movement.

The trace generation necessary for movement uses the same insertion mechanism described above. Figures 6(a)-(d) illustrate various stages in parsing the phrase, [NP the cat [S″ whichi [S ti ran]]]. In Figure 6(a), after parsing the cat which, synchronous expectations are posted for an S which contains a reactivation of the Rel0 trace by Rel-src. The signal sent to S by Rel-src will be in the form of an NP (through NP-trace).

Figure 6(b) shows how the input of ran produces inhibition on Rel-src from S1. The inhibition on Rel-src causes it to activate (just as in the null pronoun insertion) to try to satisfy the current contextual expectations. Figure 6(c) shows the network after Rel-src has activated to supply the trace. The only remaining problem is that Rel-src is actively inhibiting itself through S0.⁶ When Rel-src activates again, new instances are created for the inhibited nodes as they are re-activated; the uninhibited nodes are simply rebound. The final structure is shown in Figure 6(d).

⁶Another way of putting this is that the non-synchronicity of the two variables in the pattern has been violated. The self-inhibition of a source occurs in other contexts in the APN framework, even for exogenous sources. In networks that contain left-recursive cycles or ambiguous attachments (e.g., PP attachment), self-inhibition can arise naturally; the subsequent non-deterministic reactivation of a self-inhibited source effectively preserves the non-synchronicity constraint.
⁷The work of Marcus (1980) is in this tradition.

It is interesting that the network automatically enforces the restriction that the relative pronoun, complementizer and subject of the embedded sentence cannot all be missing. PRO must be generated before its trace can be inserted as the subject. Furthermore, since expectations are strongest for the first link of a sequence, expectations will be much weaker for the VP in the relative clause (under S under S″) than for the top-level VP under S0.

The fact that the device blocks certain structures, without explicit well-formedness constraints, is quite significant. Wherever possible, we would like to account for the complexity of the data through the composite behavior of a universal device and a simple, general grammar. We consider the description of a device which embodies the appropriate principles more parsimonious than a list of complex conditions and filters, and, to the extent that its architecture is independently motivated by processing (i.e., performance) considerations, of greater theoretical interest.⁷ As we have seen, certain interpretations can be suppressed by expectations from elsewhere in the network. Furthermore, the occurrence of traces and empty constituents is severely constrained because they must be supplied by endogenous sources, which can only support a single constituent at any given time. For NP movement, these two properties of the device, taken together, effectively enforce Ross's Complex NP Constraint (Ross, 1967), which states that, "No element contained in a
sentence dominated by an NP with a lexical head noun may be moved out of that NP by a transformation."

To see why this constraint is enforced, consider the two kinds of sentences that an NP with a lexical head noun might dominate. If the embedded sentence is a relative clause, as in [NP the rat [S″ whichi [S the cat [S″ whichj [S tj chased ti]] likes fish]]], then Rel-src cannot support both traces. If the embedded sentence is a noun complement (not shown in Figure 4), as in [NP the rat [S″ whichi [S he read a report [S″ that [S the cat chased ti]]]]], then there is only one trace in the intended interpretation, but there is nondeterminism during parsing between the noun complement and the relative clause interpretation. The interference causes the trace to be bound to the innermost relative pronoun in the relative clause interpretation. Thus, the combined properties of the device and grammar consistently block those structures which violate the Complex NP Constraint. Our preliminary findings for other types of movement (e.g., Subject-auxiliary Inversion, Wh-movement, and Raising) indicate that they also have natural APN explanations.

4. IMPLEMENTATION AND FUTURE DIRECTIONS

Although the research described in this summary is primarily of a theoretic nature, the basic ideas involved in using APNs for recognition and generation are being implemented and tested in Zetalisp on a Symbolics Lisp Machine. We have also hand-simulated data on movement from the literature to design the theory and algorithms presented in this paper. We are currently designing networks for a broad coverage syntactic grammar of English and for additional, cascaded levels for NP role mapping and case frames. The model has also been adapted as a general, context-driven problem solver, although more work remains to be done.

We are considering ways of integrating iterative relaxation techniques with the rule-based framework of APNs. This is particularly necessary in helping the network to identify expectation coalitions. In Figure 5(a), for example, there should be virtually no expectations for Rel-src, since it cannot satisfy any of the dominating synchronous conjunctions. Some type of non-activating feedback from the sources seems to be necessary.

5. SUMMARY

Recent linguistic theories have attempted to induce general principles (e.g., CNPC, Subjacency, and the Structure Preserving Hypothesis) from the detailed structural descriptions of earlier transformational theories (Chomsky, 1981). Our research can be viewed as an attempt to induce the machine that embodies these principles. In this paper, we have described a class of candidate machines, called active production networks, and outlined how they handle movement as a natural way in which machine and grammar interact.

The APN framework was initially developed as a plausible cognitive model for language processing, which would have real-time processing behavior, and extensive

[Figure 5: Relative Pronoun Insertion. Panels: (a) trace structure after the cat; (b) trace structure after the cat ... that; (c) trace structure after the cat PRO that.]
\1 ,.'..o J / [ p~o-src( )[ g- (c) trace structure after the cat PRO that .Figure 5. Relative Pronoun Insertion contextual processing and learning capabilities based on a formal notion of expectations. That movement also seems naturally expressible in a way that is consistent with current linguistic theories is quite intriguing. REFERENCES Anderson, J. R. (1983). The Architecture of Cognition, Harvard University Press, Cambridge. Chomsky. N. (1981). Lectures on Government and Bind- ing. Foris Publications, Dordrecht. Chomsky, N. and Lasnik, H. (1977). "Filters and Con- trol," Linguistic Inquiry g, 425-504. Fahlman, S. E. (1979). NETL" A System for Represent- ing and Using Real-World Knowledge. MIT Press, Cam- bridge. Fahlman, S. E., Hinton, G. E. and Sejnowski, T. J. (1983). "Massively Parallel Architectures for Ah NFTL, Thistle, and Boltzmann Machines," AAAI.83 Conference Proceedings. Feldman. J, A. and Ballard, D. It. (1982). "Connection- ist Models and Their Properties," Cognitive Science 6, 205-254. Gazdar, G. (1982). "Phrase Structure Grammar," The Nature of Syntactic Representation, Jacubson and Pullum, eds., Reidel, Boston, 131 - 186. Jones. M. A.. (1983). "Activation-Based Parsi.g." 8th IJCAI, Karlsruhe, W. Germany, 678-682. Jones, M.A. (forthcoming). submitted for publication. Marcus. M. P. (1980). A Theory of S),ntactic Recogni. lion for Natural L,znguage, M IT Press, Cambridge. Pereira. F. (1983). "Logic for Natural Language Analysis," technical report 275, SRI International. Menlo Park. Ross, J. R. (1967). Constraints on Variables.in Syntax, unpublished Ph.D. thesis, MIT, Cambridge. Waltz. D. L. and Pollack, J. B. (1985). "Massively Parallel Parsing: A Strongly Interactive Model of Natural Language Interpretation," Cognitive Science, 9, 51-74. 165 SO(2) VP NPO(4) / CNPO(4) OetO NO tfleO i I I'~c° .'P°(9.~ 1 s I i~tchO(t) • ll--~ riCl CNI SO(2) NPO(4 ~*~='~''`mr~~ " VP / CNPO(4) OetO NO $0(4) . I r~=o~ ~ i ~,(4) Sxog life0 Exoo-srcl I "~4#" / ~ I ~=lo / ~. ~ v=o(9) [ li" / / t 1 I wntchO /NI--IPICi VO(9) ,.o.-..<~.,/.; ~!°o<,, (a) trace structure after ihe cat which (b) trace structure after the cat which ... ran NP0(9) VP / CNP0(9) 0et0 NO SO(9} ~..o~,,oo ~,o:.,<, '4 ~ I .o,<.~oo / I I il;~o /..-<../<.o(,, ~ ~,o,~y ,:°o S0(9) NP0{9) VP / CNPO(9) ..---'7 0et0 NO S0(9) 1 / cato tneO I wn~c~O =n4cn00(9~NP-trace0(9) v0 ~: \// , Exog ranO I Exog-Sr¢3 pel-src(9)| Exog-src3 (c) trace structure just after the cat which t ran (d) final trace structure l;igwe 6. Parsin8 Rclativc Clauses 166
1985
20
PARSING HEAD-DRIVEN PHRASE STRUCTURE GRAMMAR

Derek Proudian and Carl Pollard
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94303, USA

Abstract

The Head-driven Phrase Structure Grammar project (HPSG) is an English language database query system under development at Hewlett-Packard Laboratories. Unlike other product-oriented efforts in the natural language understanding field, the HPSG system was designed and implemented by linguists on the basis of recent theoretical developments. But, unlike other implementations of linguistic theories, this system is not a toy, as it deals with a variety of practical problems not covered in the theoretical literature. We believe that this makes the HPSG system unique in its combination of linguistic theory and practical application.

The HPSG system differs from its predecessor GPSG, reported on at the 1982 ACL meeting (Gawron et al. [1982]), in four significant respects: syntax, lexical representation, parsing, and semantics. The paper focuses on parsing issues, but also gives a synopsis of the underlying syntactic formalism.

1 Syntax

HPSG is a lexically based theory of phrase structure, so called because of the central role played by grammatical heads and their associated complements.¹ Roughly speaking, heads are linguistic forms (words and phrases) that exert syntactic and semantic restrictions on the phrases, called complements, that characteristically combine with them to form larger phrases. Verbs are the heads of verb phrases (and sentences), nouns are the heads of noun phrases, and so forth.

As in most current syntactic theories, categories are represented as complexes of feature specifications. But the HPSG treatment of lexical subcategorization obviates the need in the theory of categories for the notion of bar-level (in the sense of X-bar theory, prevalent in much current linguistic research). In addition, the augmentation of the system of categories with stack-valued features -- features whose values are sequences of categories -- unifies the theory of lexical subcategorization with the theory of binding phenomena. By binding phenomena we mean essentially non-clause-bounded dependencies, such as those involving dislocated constituents, relative and interrogative pronouns, and reflexive and reciprocal pronouns [12].

¹HPSG is a refinement and extension of the closely related Generalized Phrase Structure Grammar [7]. The details of the theory of HPSG are set forth in [11].

More precisely, the subcategorization of a head is encoded as the value of a stack-valued feature called "SUBCAT". For example, the SUBCAT value of the verb persuade is the sequence of three categories [VP, NP, NP], corresponding to the grammatical relations (GR's): controlled complement, direct object, and subject respectively. We are adopting a modified version of Dowty's [1982] terminology for GR's, where subject is last, direct object second-to-last, etc. For semantic reasons we call the GR following a controlled complement the controller.

One of the key differences between HPSG and its predecessor GPSG is the massive relocation of linguistic information from phrase structure rules into the lexicon [5]. This wholesale lexicalization of linguistic information in HPSG results in a drastic reduction in the number of phrase structure rules. Since rules no longer handle subcategorization, their sole remaining function is to encode a small number of language-specific principles for projecting from lexical entries to surface constituent order.

The schematic nature of the grammar rules allows the system to parse a large fragment of English with only a small number of rules (the system currently uses sixteen), since each rule can be used in many different situations. The constituents of each rule are sparsely annotated with features, but are fleshed out when taken together with constituents looked for and constituents found. For example the sentence The manager works can be parsed using the single rule R1 below. The rule is applied to build the noun phrase the manager by identifying the head H with the lexical element manager and the complement C1 with the lexical element the. The entire sentence is built by identifying the H with works and the C1 with the noun phrase described above. Thus the single rule R1 functions as both the S -> NP VP and NP -> Det N rules of familiar context free grammars.

R1. x -> c1 h[(CONTROL INTRANS)] a*

Figure 1. A Grammar Rule.

Feature Passing

The theory of HPSG embodies a number of substantive hypotheses about universal grammatical principles. Such principles as the Head Feature Principle, the Binding Inheritance Principle, and the Control Agreement Principle require that certain syntactic features specified on daughters in syntactic trees are inherited by the mothers. Highly abstract phrase structure rules thus give rise to fully specified grammatical structures in a recursive process driven by syntactic information encoded on lexical heads. Thus HPSG, unlike similar "unification-based" syntactic theories, embodies a strong hypothesis about the flow of relevant information in the derivation of complex structures.

Unification

Another important difference between HPSG and other unification based syntactic theories concerns the form of the expressions which are actually unified. In HPSG, the structures which get unified are (with limited exceptions to be discussed below) not general graph structures as in Lexical Functional Grammar [1], or Functional Unification Grammar [10], but rather flat atomic valued feature matrices, such as those shown below.

[(CONTROL 0 INTRANS) (MAJ N A) (AGR 3RDSG) (PRD MINUS) (TOP MINUS)]
[(CONTROL 0) (MAJ N V) (INV PLUS)]

Figure 2. Two feature matrices.

In the implementation of HPSG we have been able to use this restriction on the form of feature matrices to good advantage. Since for any given version of the system the range of atomic features and feature values is fixed, we are able to represent flat feature matrices, such as the ones above, as vectors of integers, where each cell in the vector represents a feature, and the integer in each cell represents a disjunction of the possible values for that feature.

[Figure 3: Two transduced feature matrices -- the matrices of Figure 2 as integer vectors, with one cell per feature (CON, MAJ, AGR, PRD, INV, TOP).]

For example, if the possible values of the MAJ feature are N, V, A, and P then we can uniquely represent any combination of these features with an integer in the range 0..15. This is accomplished simply by assigning each possible value an index which is an integral power of 2 in this range and then adding up the indices so derived for each disjunction of values encountered.
Unification in such cases is thus reduced to the "logical and" of the integers in each cell of the vector representing the feature matrix. In this way unification of these flat structures can be done in constant time, and since "logical and" is generally a single machine instruction the overhead is very low.

    N V A P
    1 0 1 0   = 10 = (MAJ N A)
    1 1 0 0   = 12 = (MAJ N V)
    --- unification ---
    1 0 0 0   =  8 = (MAJ N)

Figure 4: Closeup of the MAJ feature.

There are, however, certain cases when the values of features are not atomic, but are instead themselves feature matrices. The unification of such structures could, in theory, involve arbitrary recursion on the general unification algorithm, and it would seem that we had not progressed very far from the problem of unifying general graph structures. Happily, the features for which this property of embedding holds constitute a small finite set (basically the so-called "binding features"). Thus we are able to segregate such features from the rest, and recurse only when such a "category valued" feature is present. In practice, therefore, the time performance of the general unification algorithm is very good, essentially the same as that of the flat structure unification algorithm described above.

2 Parsing

As in the earlier GPSG system, the primary job of the parser in the HPSG system is to produce a semantics for the input sentence. This is done compositionally as the phrase structure is built, and uses only locally available information. Thus every constituent which is built syntactically has a corresponding semantics built for it at the same time, using only information available in the phrasal subtree which it immediately dominates. This locality constraint in computing the semantics for constituents is an essential characteristic of HPSG. For a more complete description of the semantic treatment used in the HPSG system see Creary and Pollard [2].

Head-driven Active Chart Parser

A crucial difference between the HPSG system and its predecessor GPSG is the importance placed on the head constituent in HPSG. In HPSG it is the head constituent of a rule which carries the subcategorization information needed to build the other constituents of the rule. Thus parsing proceeds head first through the phrase structure of a sentence, rather than left to right through the sentence string.

The parser itself is a variation of an active chart parser [4,9,8,13], modified to permit the construction of constituents head first, instead of in left-to-right order. In order to successfully parse "head first", an edge* must be augmented to include information about its span (i.e., its position in the string). This is necessary because heads can appear as a middle constituent of a rule with other constituents (e.g., complements or adjuncts) on either side. Thus it is not possible to record all the requisite boundary information simply by moving a dot through the rule (as in Earley), or by keeping track of just those constituents which remain to be built (as in Winograd). An example should make this clear. Suppose as before we are confronted with the task of parsing the sentence The manager works, and again we have available the grammar rule R1. Since we are parsing in a "head first" manner we must match the H constituent against some substring of the sentence. But which substring?
In more conventional chart parsing algorithms which proceed left to right this is not a serious problem, since we are always guaranteed to have an anchor to the left. We simply try building the leftmost constituent of the rule starting at the leftmost position of the string, and if this succeeds we try to build the next leftmost constituent starting at one position to the right of wherever the previous constituent ended. However in our case we cannot assume any such anchoring to the left, since as the example illustrates, the H is not always leftmost.

The solution we have adopted in the HPSG system is to annotate each edge with information about the span of substring which it covers. In the example below the inactive edge E1 is matched against the head of rule R1, and since they unify the new active edge E2 is created with its head constituent instantiated with the feature specifications which resulted from the unification. This new edge E2 is annotated with the span of the inactive edge E1. Some time later the inactive edge E3 is matched against the "np" constituent of our active edge E2, resulting in the new active edge E4. The span of E4 is obtained by combining the starting position of E3 (i.e., 1) with the finishing position of E2 (i.e., 3). The point is that edges are constructed from the head out, so that at any given time in the life cycle of an edge the spanning information on the edge records the span of contiguous substring which it covers.

Note that in the transition from rule R1 to edge E2 we have relabeled the constituent markers x, c1, and h with the symbols s, np, and VP respectively. This is done merely as a mnemonic device to reflect the fact that once the head of the edge is found, the subcategorization information on that head (i.e., the values of the "SUBCAT" feature of the verb works) is propagated to the other elements of the edge, thereby restricting the types of constituents with which they can be satisfied. Writing a constituent marker in upper case indicates that an inactive edge has been found to instantiate it, while a lower case (not yet found) constituent in bold face indicates that this is the next constituent which will try to be instantiated.

*An edge is, loosely speaking, an instantiation of a rule with some of the features on constituents made more specific.

E1. V<3,3>
R1. x -> c1 h a*
E2. s<3,3> -> np VP a*

E3. NP<1,2>
E2. s<3,3> -> np VP a*
E4. s<1,3> -> NP VP a*

Figure 5: Combining edges and rules.

Using Semantics Restrictions

Parsing "head first" offers both practical and theoretical advantages. As mentioned above, the categories of the grammatical relations subcategorized for by a particular head are encoded as the SUBCAT value of the head. Now GR's are of two distinct types: those which are "saturated" (i.e., do not subcategorize for anything themselves), such as subjects and objects, and those which subcategorize for a subject (i.e., controlled complements). One of the language-universal grammatical principles (the Control Agreement Principle) requires that the semantic controller of a controlled complement always be the next grammatical relation (in the order specified by the value of the SUBCAT feature of the head) after the controlled complement to combine with the head. But since the HPSG parser always finds the head of a clause first, the grammatical order of its complements, as well as their semantic roles, are always specified before the complements are found. As a consequence, semantic processing of constituents can be done on the fly as the constituents are found, rather than waiting until an edge has been completed. Thus semantic processing can be done extremely locally (constituent-to-constituent in the edge, rather than merely node-to-node in the parse tree as in Montague semantics), and therefore a parse path can be abandoned on semantic grounds (e.g., sortal inconsistency) in the middle of constructing an edge. In this way semantics, as well as syntax, can be used to control the parsing process.

Anaphora in HPSG

Another example of how parsing "head first" pays off is illustrated by the elegant technique this strategy makes possible for the binding of intrasentential anaphors. This method allows us to assimilate cases of bound anaphora to the same general binding method used in the HPSG system to handle other non-lexically-governed dependencies such as gaps, interrogative pronouns, and relative pronouns. Roughly, the unbound dependencies of each type on every constituent are encoded as values of an appropriate stack-valued feature ("binding feature"). In particular, unbound anaphors are kept track of by two binding features, REFL (for reflexive pronouns) and BPRO (for personal pronouns available to serve as bound anaphors). According to the Binding Inheritance Principle, all categories on binding-feature stacks which do not get bound under a particular node are inherited onto that node. Just how binding is effected depends on the type of dependency. In the case of bound anaphora, this is accomplished by merging the relevant agreement information (stored in the REFL or BPRO stack of the constituent containing the anaphor) with one of the later GR's subcategorized for by the head which governs that constituent. This has the effect of forcing the node that ultimately unifies with that GR (if any) to be the sought-after antecedent. The difference between reflexives and personal pronouns is this. The binding feature REFL is not allowed to inherit onto nodes of certain types (those with CONTROL value INTRANS), thus forcing the reflexive pronoun to become locally bound. In the case of non-reflexive pronouns, the class of possible antecedents is determined by modifying the subcategorization information on the head governing the pronoun so that all the subcategorized-for GR's later in grammatical order than the pronoun are "contra-indexed" with the pronoun (and thereby prohibited from being its antecedent). Binding then takes place precisely as with reflexives, but somewhere higher in the tree.

We illustrate this distinction with two examples. In sentence S1 below told subcategorizes for three constituents: the subject NP Pullum, the direct object Gazdar, and the oblique object PP about himself.** Thus either Pullum or Gazdar are possible antecedents of himself, but not Wasow.

S1. Wasow was convinced that Pullum told Gazdar about himself.
S2. Wasow persuaded Pullum to shave him.

In sentence S2 shave subcategorizes for the direct object NP him and an NP subject eventually filled by the constituent Pullum via control. Since the subject position is contra-indexed with the pronoun, Pullum is blocked from serving as the antecedent. The pronoun is eventually bound by the NP Wasow higher up in the tree.

**The preposition is treated essentially as a case marking.

Heuristics to Optimize Search

The HPSG system, based as it is upon a carefully developed linguistic theory, has broad expressive power.
In practice, however, much of this power is often not necessary. To exploit this fact the HPSG system uses heuristics to help reduce the search space implicitly defined by the grammar. These heuristics allow the parser to produce an optimally ordered agenda of edges to try based on words used in the sentence, and on constituents it has found so far.

One type of heuristic involves additional syntactic information which can be attached to rules to determine their likelihood. Such a heuristic is based on the currently intended use for the rule to which it is attached, and on the edges already available in the chart. An example of this type of heuristic is sketched below.

R1. x -> c1 h a*
Heuristic-1: Are the features of c1 +QUE?

Figure 6: A rule with an attached heuristic.

Heuristic-1 encodes the fact that rule R1, when used in its incarnation as the S -> NP VP rule, is primarily intended to handle declarative sentences rather than questions. Thus if the answer to Heuristic-1 is "no" then this edge is given a higher ranking than if the answer is "yes". This heuristic, taken together with others, determines the rank of the edge instantiated from this rule, which in turn determines the order in which edges will be tried. The result in this case is that for a sentence such as S3 below, the system will prefer the reading for which an appropriate answer is "a character in a play by Shakespeare", over the reading which has as a felicitous answer "Richard Burton".

S3. Who is Hamlet?

It should be emphasized, however, that heuristics are not an essential part of the system, as are the feature passing principles, but rather are used only for reasons of efficiency. In theory all possible constituents permitted by the grammar will be found eventually with or without heuristics. The heuristics simply help a linguist tell the parser which readings are most likely, and which parsing strategies are usually most fruitful, thereby allowing the parser to construct the most likely reading first. We believe that this clearly differentiates HPSG from "ad hoc" systems which do not make sharp the distinction between theoretical principle and heuristic guideline, and that this distinction is an important one if the natural language understanding programs of today are to be of any use to the natural language programs and theories of the future.

ACKNOWLEDGEMENTS

We would like to acknowledge the valuable assistance of Thomas Wasow and Ivan Sag in the writing of this paper. We would also like to thank Martin Kay and Stuart Shieber for their helpful commentary on an earlier draft.
ACKNOWLEDGEMENTS

We would like to acknowledge the valuable assistance of Thomas Wasow and Ivan Sag in the writing of this paper. We would also like to thank Martin Kay and Stuart Shieber for their helpful commentary on an earlier draft.

REFERENCES

[1] Bresnan, J. (ed.) (1982) The Mental Representation of Grammatical Relations, The MIT Press, Cambridge, Mass.

[2] Creary, L. and C. Pollard (1985) "A Computational Semantics for Natural Language", Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics.

[3] Dowty, D.R. (1982) "Grammatical Relations and Montague Grammar", in P. Jacobson and G.K. Pullum (eds.), The Nature of Syntactic Representation, D. Reidel Publishing Co., Dordrecht, Holland.

[4] Earley, J. (1970) "An efficient context-free parsing algorithm", CACM 13:2, 1970.

[5] Flickinger, D., C. Pollard, and T. Wasow (1985) "Structure-Sharing in Lexical Representation", Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics.

[6] Gawron, J. et al. (1982) "Processing English with a Generalized Phrase Structure Grammar", ACL Proceedings 20.

[7] Gazdar, G. et al. (in press) Generalized Phrase Structure Grammar, Blackwell and Harvard University Press.

[8] Kaplan, R. (1973) "A General Syntactic Processor", in Rustin (ed.) Natural Language Processing, Algorithmics Press, N.Y.

[9] Kay, M. (1973) "The MIND System", in Rustin (ed.) Natural Language Processing, Algorithmics Press, N.Y.

[10] Kay, M. (forthcoming) "Parsing in Functional Unification Grammar".

[11] Pollard, C. (1984) Generalized Context-Free Grammars, Head Grammars, and Natural Language, Ph.D. Dissertation, Stanford.

[12] Pollard, C. (forthcoming) "A Semantic Approach to Binding in a Monostratal Theory", to appear in Linguistics and Philosophy.

[13] Winograd, T. (1983) Language as a Cognitive Process, Addison-Wesley, Reading, Mass.
1985
21
A Computational Semantics for Natural Language

Lewis G. Creary and Carl J. Pollard
Hewlett-Packard Laboratories
1501 Page Mill Road
Palo Alto, CA 94304, USA

Abstract

In the new Head-driven Phrase Structure Grammar (HPSG) language processing system that is currently under development at Hewlett-Packard Laboratories, the Montagovian semantics of the earlier GPSG system (see [Gawron et al. 1982]) is replaced by a radically different approach with a number of distinct advantages. In place of the lambda calculus and standard first-order logic, our medium of conceptual representation is a new logical formalism called NFLT (Neo-Fregean Language of Thought); compositional semantics is effected, not by schematic lambda expressions, but by LISP procedures that operate on NFLT expressions to produce new expressions. NFLT has a number of features that make it well-suited for natural language translations, including predicates of variable arity in which explicitly marked situational roles supersede order-coded argument positions, sortally restricted quantification, a compositional (but nonextensional) semantics that handles causal contexts, and a principled conceptual raising mechanism that we expect to lead to a computationally tractable account of propositional attitudes. The use of semantically compositional LISP procedures in place of lambda-schemas allows us to produce fully reduced translations on the fly, with no need for post-processing. This approach should simplify the task of using semantic information (such as sortal incompatibilities) to eliminate bad parse paths.

1. Introduction

Someone who knows a natural language is able to use utterances of certain types to give and receive information about the world. How can we explain this? We take as our point of departure the assumption that members of a language community share a certain mental system -- a grammar -- that mediates the correspondence between utterance types and other things in the world, such as individuals, relations, and states of affairs; to a large degree, this system is the language. According to the relation theory of meaning (Barwise & Perry [1983]), linguistic meaning is a relation between types of utterance events and other aspects of objective reality. We accept this view of linguistic meaning, but unlike Barwise and Perry we focus on how the meaning relation is mediated by the intersubjective psychological system of grammar.

In our view, a computational semantics for a natural language has three essential components:

a. a system of conceptual representation for internal use as a computational medium in processes of information retrieval, inference, planning, etc.
b. a system of linkages between expressions of the natural language and those of the conceptual representation, and
c. a system of linkages between expressions in the conceptual representation and objects, relations, and states of affairs in the external world.

In this paper, we shall concentrate almost exclusively on the first two components. We shall sketch our ontological commitments, describe our internal representation language, explain how our grammar (and our computer implementation) makes the connection between English and the internal representations, and finally indicate the present status and future directions of our research. Our internal representation language, NFLT, is due to Creary [1983].
The grammatical theory in which the present research is couched is the theory of head grammar (HG) set forth in [Pollard 1984] and [Pollard forthcoming] and implemented as the front end of the HPSG (Head-driven Phrase Structure Grammar) system, an English language database query system under development at Hewlett-Packard Laboratories. The non-semantic aspects of the implementation are described in [Flickinger, Pollard, & Wasow 1985] and [Proudian & Pollard 1985].

2. Ontological Assumptions

To get started, we make the following assumptions about what categories of things are in the world.

a. There are individuals. These include objects of the usual kind (such as Ron and Nancy) as well as situations. Situations comprise states (such as Ron's being tall) and events (such as Ron giving his inaugural address on January 21, 1985).

b. There are relations (subsuming properties). Examples are COOKIE (= the property of being a cookie) and BUY (= the relation which Nancy has to the cookies she buys). Associated with each relation is a characteristic set of roles appropriate to that relation (such as AGENT, PATIENT, LOCATION, etc.) which can be filled by individuals. Simple situations consist of individuals playing roles in relations. Unlike properties and relations in situation semantics [Barwise & Perry 1983], our relations do not have fixed arity (number of arguments). This is made possible by taking explicit account of roles, and has important linguistic consequences. Also there is no distinguished ontological category of locations; instead, the location of an event is just the individual that fills the LOCATION role.

c. Some relations are sortal relations, or sorts. Associated with each sort (but not with any non-sortal relation) is a criterion of identity for individuals of that sort [Cocchiarella 1977, Gupta 1980]. Predicates denoting sorts occur in the restrictor-clauses of quantifiers (see section 3.2 below), and the associated criteria of identity are essential to determining the truth values of quantified assertions. Two important sorts of situations are states and events. One can characterize a wide range of subsorts of these (which we shall call situation types) by specifying a particular configuration of relation, individuals, and roles. For example, one might consider the sort of event in which Ron kisses Nancy in the Oval Office, i.e. in which the relation is KISS, Ron plays the AGENT role, Nancy plays the PATIENT role, and the Oval Office plays the LOCATION role. One might also consider the sort of state in which Ron is a person, i.e. in which the relation is PERSON, and Ron plays the INSTANCE role. We assume that the INSTANCE role is appropriate only for sortal relations.

d. There are concepts, both subjective and objective. Some individuals are information-processing organisms that use complex symbolic objects (subjective concepts) as computational media for information storage and retrieval, inference, planning, etc. An example is Ron's internal representation of the property COOKIE. This representation in turn is a token of a certain abstract type ↑COOKIE, an objective concept which is shared by the vast majority of speakers of English.¹ Note that the objective concept ↑COOKIE, the property COOKIE, and the extension of that property (i.e. the set of all cookies) are three distinct things that play three different roles in the semantics of the English noun cookie.

e. There are computational processes in organisms for manipulating concepts, e.g.
methods for constructing complex concepts from simpler ones, inferencing mechanisms, etc. Concepts of situations are called propositions; organisms use inferencing mechanisms to derive new propositions from old. To the extent that concepts are accurate representations of existing things and the relations in which they stand, organisms can contain information. We call the system of objective concepts and concept-manipulating mechanisms instantiated in an organism its conceptual system. Communities of organisms can share the same conceptual system.

f. Communities of organisms whose common conceptual system contains a subsystem of a certain kind called a grammar can communicate with each other. Roughly, grammars are conceptual subsystems that mediate between events of a specific type (called utterances) and other aspects of reality. Grammars enable organisms to use utterances to give and receive information about the world. This is the subject of sections 4-6.

¹ We regard this notion of objective concept as the appropriate basis on which to reconstruct, in terms of information processing, Saussure's notions of signifiant (signifier) and signifié (signified) [1916], as well as Frege's notion of Sinn (sense, connotation) [1892].

3. The Internal Representation Language: NFLT

The translation of input sentences into a logical formalism of some kind is a fairly standard feature of computer systems for natural-language understanding, and one which is shared by the HPSG system. A distinctive feature of this system, however, is the particular logical formalism involved, which is called NFLT (Neo-Fregean Language of Thought).² This is a new logical language that is being developed to serve as the internal representation medium in computer agents with natural language capabilities. The language is the result of augmenting and partially reinterpreting the standard predicate calculus formalism in several ways, some of which will be described very briefly in this section. Historically, the predicate calculus was developed by mathematical logicians as an explication of the logic of mathematical proofs, in order to throw light on the nature of purely mathematical concepts and knowledge. Since many basic concepts that are commonplace in natural language (including concepts of belief, desire, intention, temporal change, causality, subjunctive conditionality, etc.) play no role in pure mathematics, we should not be especially surprised to find that the predicate calculus requires supplementation in order to represent adequately and naturally information involving these concepts. The belief that such supplementation is needed has led to the design of NFLT. While NFLT is much closer semantically to natural language than is the standard predicate calculus, and is to some extent inspired by psychologistic considerations, it is nevertheless a formal logic admitting of a mathematically precise semantics. The intended semantics incorporates a Fregean distinction between sense and denotation, associated principles of compositionality, and a somewhat non-Fregean theory of situations or situation-types as the denotations of sentential formulas.

² The formalism is called "neo-Fregean" because it incorporates many of the semantic ideas of Gottlob Frege, though it also departs from Frege's ideas in several significant ways. It is called a "language of thought" because unlike English, which is first and foremost a medium of communication, NFLT is designed to serve as a medium of reasoning in computer problem-solving systems, which we regard for theoretical purposes as thinking organisms. (Frege referred to his own logical formalism, Begriffsschrift, as a "formula language for pure thought" [Frege 1879, title and p. 6 (translation)].)

3.1. Predicates of Variable Arity

Atomic formulas in NFLT have an explicit role-marker for each argument; in this respect NFLT resembles semantic network formalisms and differs from standard predicate
calculus, in which the roles are order-coded. This explicit representation of roles permits each predicate-symbol in NFLT to take a variable number of arguments, which in turn makes it possible to represent occurrences of the same verb with the same predicate-symbol, despite differences in valence (i.e. number and identity of attached complements and adjuncts). This clears up a host of problems that arise in theoretical frameworks (such as Montague semantics and situation semantics) that depend on fixed-arity relations (see [Carlson forthcoming] and [Dowty 1982] for discussion). In particular, new roles (corresponding to adjuncts or optional complements in natural language) can be added as required, and there is no need for explicit existential quantification over "missing arguments".

Atomic formulas in NFLT are compounded of a base-predicate and a set of rolemark-argument pairs, as in the following example:

(1a) English: Ron kissed Nancy in the Oval Office on April 1, 1985.
(1b) NFLT Internal Syntax:
(kiss (agent . ron) (patient . nancy) (location . oval-office) (time . 4-1-85))
(1c) NFLT Display Syntax:
(KISS agt:RON ptnt:NANCY loc:OVAL-OFFICE at:4-1-85)

The base-predicate 'KISS' takes a variable number of arguments, depending on the needs of a particular context. In the display syntax, the arguments are explicitly introduced by abbreviated lowercase role markers.

3.2. Sortal Quantification

Quantificational expressions in NFLT differ from those in predicate calculus by always containing a restrictor-clause consisting of a sortal predication, in addition to the usual scope-clause, as in the following example:

(2a) English: Ron ate a cookie in the Oval Office.
(2b) NFLT Display Syntax:
{SOME X5 (COOKIE inst:X5) (EAT agt:RON ptnt:X5 loc:OVAL-OFFICE)}

Note that we always quantify over instances of a sort, i.e. the quantified variable fills the instance role in the restrictor-clause. This style of quantifier is superior in several ways to that of the predicate calculus for the purposes of representing commonsense knowledge. It is intuitively more natural, since it follows the quantificational pattern of English. More importantly, it is more general, being sufficient to handle a number of natural language determiners such as many, most, few, etc., that cannot be represented using only the unrestricted quantification of standard predicate calculus (see [Wallace 1965], [Barwise & Cooper 1981]). Finally, information carried by the sortal predicates in quantifiers (namely, criteria of identity for things of the various sorts in question) provides a sound semantic basis for counting the members of extensions of such predicates (see section 2, assumption c above).
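Because the internal syntax of (1b) is itself an s-expression, the variable-arity idea is easy to picture in code. The following Common Lisp sketch is our own illustration (the constructor and accessor names are assumptions, not part of NFLT):

    ;; An atomic NFLT formula: a base-predicate plus an association
    ;; list of rolemark-argument pairs, as in (1b).
    (defun make-atomic (predicate &rest role-pairs)
      (cons predicate role-pairs))

    ;; Adding a role never disturbs the existing ones, so an adjunct
    ;; such as a location or time can be attached when the parse
    ;; provides it, with no change of predicate.
    (defun add-role (formula rolemark argument)
      (append formula (list (cons rolemark argument))))

    ;; Look up the filler of a role, if any.
    (defun role-filler (formula rolemark)
      (cdr (assoc rolemark (cdr formula))))

    ;; Example: build (kiss (agent . ron) (patient . nancy)) and then
    ;; extend it with a location role.
    (let ((f (make-atomic 'kiss '(agent . ron) '(patient . nancy))))
      (setq f (add-role f 'location 'oval-office))
      (role-filler f 'agent))   ; => RON

The point of add-role is exactly the one made above: attaching an optional role requires no existential quantification over arguments that happen to be missing.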
Any internal structure which a variable may have is irrelevant to its function as a uniquely identifiable placeholder in a formula; in particular, a quantified formula can itself serve as its own "bound variable". This is how quantifiers are actually implemented in the HPSG system; in the internal (i.e. implementation) syntax for quantified NFLT-formulas, bound variables of the usual sort are dispensed with in favor of pointers to the relevant quantified formulas. Thus, of the three occurrences of X5 in the display-formula (2b), the first has no counterpart in the internal syntax, while the last two correspond internally to LISP pointers back to the data structure that implements (2b). This method of implementing quantification has some important advantages. First, it eliminates the technical problems of variable clash that arise in conventional treatments. There are no "alphabetic variants", just structurally equivalent concept tokens. Secondly, each occurrence of a quantified "bound variable" provides direct computational access to the determiner, restrictor-clause, and scope-clause with which it is associated.

A special class of quantificational expressions, called quantifier expressions, have no scope-clause. An example is:

(3) NFLT Display Syntax:
{SOME X1 (COOKIE inst:X1)}

Such expressions translate quantified noun phrases in English, e.g. a cookie.

3.3. Causal Relations and Non-Extensionality

According to the standard semantics for the predicate calculus, predicate symbols denote the extensions of relations (i.e. sets of ordered n-tuples) and sentential formulas denote truth values. By contrast, we propose a non-extensional semantics for NFLT: we take predicate symbols to denote relations themselves (rather than their extensions), and sentential formulas to denote situations or situation types (rather than the corresponding truth values).³

³ The distinction between situations and situation types corresponds roughly to the finite/infinitive distinction in natural language. For discussion of this within the framework of situation semantics, see [Cooper 1984].

The motivation for this is to provide for the expression of propositions involving causal relations among situations, as in the following example:

(4a) English: John has brown eyes because he is of genotype XYZW.
(4b) NFLT Display Syntax:
(CAUSE conditn:(GENOTYPE-XYZW inst:JOHN) result:(BROWN-EYED bearer:JOHN))

Now, the predicate calculus is an extensional language in the sense that the replacement of categorematic subparts within an expression by new subparts having the same extension must preserve the extension of the original expression. Such replacements within a sentential expression must preserve the truth-value of the expression, since the extension of a sentence is a truth-value. NFLT is not extensional in this sense. In particular, some of its predicate-symbols may denote causal relations among situations, and extension-preserving substitutions within causal contexts do not generally preserve the causal relations. Suppose, for example, that the formula (4b) is true. While the extension of the NFLT-predicate 'GENOTYPE-XYZW' is the set of animals of genotype XYZW, its denotation is not this set, but rather what Putnam [1969] would call a "physical property", the property of having the genotype XYZW. As noted above (section 2, assumption d) a property is to be distinguished both from the set of objects of which it holds and from any concept of it.
Now even if this property were to happen by coincidence to have the same extension as the property of being a citizen of Palo Alto born precisely at noon on 1 April 1956, the substitution of a predicate-symbol denoting this latter property for 'GENOTYPE-XYZW' in the formula (4b) would produce a falsehood.

However, NFLT's lack of extensionality does not involve any departure from compositional semantics. The denotation of an NFLT-predicate-symbol is a property; thus, although the substitution discussed earlier preserves the extension of 'GENOTYPE-XYZW', it does not preserve the denotation of that predicate-symbol. Similarly, the denotation of an NFLT-sentence is a situation or situation-type, as distinguished both from a mere truth-value and from a proposition.⁴ Then, although NFLT is not an extensional language in the standard sense, a Fregean analogue of the principle of extensionality does hold for it: the replacement of subparts within an expression by new subparts having the same denotation must preserve the denotation of the original expression (see [Frege 1892]). Moreover, such replacements within an NFLT-sentence must preserve the truth-value of that sentence, since the truth-value is determined by the denotation.

⁴ Thus, something similar to what Barwise and Perry call "situation semantics" [1983] is to be provided for NFLT-expressions, insofar as those expressions involve no ascription of propositional attitudes (the Barwise-Perry semantics for ascriptions of propositional attitudes takes a quite different approach from that to be described for NFLT in the next section).

3.4. Intentionality and Conceptual Raising

The NFLT notation for representing information about propositional attitudes is an improved version of the neo-Fregean scheme described in [Creary 1979], section 2, which is itself an extension and improvement of that found in [McCarthy 1979]. The basic idea underlying this scheme is that propositional attitudes are relations between people (or other intelligent organisms) and propositions; both terms of such relations are taken as members of the domain of discourse. Objective propositions and their component objective concepts are regarded as abstract entities, roughly on a par with numbers, sets, etc. They are person-independent components of situations involving belief, knowledge, desire, and the like. More specifically, objective concepts are abstract types which may have as tokens the subjective concepts of individual organisms, which in turn are configurations of information and associated procedures in various individual memories (cf. section 2, assumption d above).

Unlike Montague semantics [Montague 1973], the semantic theory underlying NFLT does not imply that an organism necessarily believes all the logical equivalents of a proposition it believes. This is because distinct propositions have as tokens distinct subjective concepts, even if they necessarily have the same truth-value. Here is an example of the use of NFLT to represent information concerning propositional attitudes:

(5a) English: Nancy wants to tickle Ron.
(5b) NFLT Display Syntax:
(WANT appr:NANCY prop:↑(TICKLE agt:I ptnt:RON))

In a Fregean spirit, we assign to each categorematic expression of NFLT both a sense and a denotation. For example, the denotation of the predicate-constant 'COOKIE' is the property COOKIE, while the sense of that constant is a certain objective concept, the "standard public" concept of a cookie. We say that 'COOKIE' expresses its sense and denotes its denotation. The result of appending the "conceptual raising" symbol '↑' to the constant 'COOKIE' is a new constant, '↑COOKIE', that denotes the concept that 'COOKIE' expresses (i.e. '↑' applies to a constant and forms a standard name of the sense of that constant).
By appending multiple occurrences of '↑' to constants, we obtain new constants that denote concepts of concepts, concepts of concepts of concepts, etc.⁵

⁵ For further details concerning this Fregean conceptual hierarchy, see [Creary 1979], sections 2.2 and 2.3.1. Capitalization, '$'-postfixing, and braces are used there to do the work done here by the symbol '↑'.

In expression (5b), '↑' is not explicitly appended to a constant, but instead is prefixed to a compound expression. When used in this way, '↑' functions as a syncategorematic operator that "conceptually raises" each categorematic constant within its scope and forms a term incorporating the raised constants and denoting a proposition. Thus, the subformula '↑(TICKLE agt:I ptnt:RON)' is the name of a proposition whose component concepts are the relation-concept ↑TICKLE and the individual concepts ↑I and ↑RON. This proposition is the sense of the unraised subformula '(TICKLE agt:I ptnt:RON)'.

The individual concept ↑I, the minimal concept of self, is an especially interesting objective concept. We assume that for each sufficiently self-conscious and active organism X, X's minimal internal representation of itself is a token of ↑I. This concept is the sense of the indexical pronoun I, and is itself indexical in the sense that what it is a concept of is determined not by its content (which is the same for each token), but rather by the context of its use. The content of this concept is partly descriptive but mostly procedural, consisting mainly of the unique and important role that it plays in the information-processing of the organisms that have it.

4. Lexicon

HPSG's head grammar takes as its point of departure Saussure's [1916] notion of a sign. A sign is a conceptual object, shared by a group of organisms, which consists of two associated concepts that we call (by a conventional abuse of language) a phonological representation and a semantic representation. For example, members of the English-speaking community share a sign which consists of an internal representation of the utterance-type /kUki/ together with an internal representation of the property of being a cookie. In a computer implementation, we model such a conceptual object with a data object of this form:

(6) {cookie ; COOKIE}

Here the symbol 'cookie' is a surrogate for a phonological representation (in fact we ignore phonology altogether and deal only with typewritten English input). The symbol 'COOKIE' (a basic constant of NFLT denoting the property COOKIE) models the corresponding semantic representation. We call a data object such as (6) a lexical entry.

Of course there must be more to a language than simple signs like (6). Words and phrases of certain kinds can characteristically combine with certain other kinds of phrases to form longer expressions that can convey information about the world. Correspondingly, we assume that a grammar contains, in addition to a lexicon, a set of grammatical rules (see next section) for combining simple signs to produce new signs which pair longer English expressions with more complex NFLT translations. For rules to work, each sign must contain information about how it figures in the rules. We call this information the (syntactic) category of the sign. Following established practice, we encode categories as specifications of values for a finite set of features. Augmented with such information, lexical signs assume forms such as these:
For rules to work, each sign must contain information about how it figures in the rules. We call this information the (syntactic) category of the sign. Following established practice, we encode categories as specifications of values for a finite set of features. Aug- mented with such information, lexical signs assume forms such as these: (7a) {cookie ; COOKIE; [MAJOR: N; AGR: 3RDSGI} (7b) (kisses ; KISS; [MAJOR: V; VFORM: FINI} Such features as MAJOR (major category), AGR (agree- ment), and VFORM (verb form) encode inherent syntactic properties of signs. Still more information is required, however. Certain expressions (heads) characteristically combine with other expressions of specified categories (complements) to form larger expressions. (For the time being we ignore optional elements, called adjuncts.) This is the linguistic notion of subcategoeization. For example, the English verb touches subcategorizes for two NP's, of which one must be third- person-singular. We encode subcategorization information as the value of a feature called SUBCAT. Thus the value of the SUBCAT feature is a sequence of categories. (Such features, called stack-valued features, play a central role in the HG account of binding. See [Pollard forthcomingi. ) Augmented with its SUBCAT feature, the [exical sign (2b) takes the form: (8) {kisses ; KZflS; [MAJOR: V; VFORM: FIN 1 SUBCAT: NP, NP-3RDSG} (Symbols like 'NP' and 'NP-3RDSG' are shorthand for cer- tain sets of feature specifications). For ease of reference, we use traditional grammatical relation names for comple- ments. Modifying the usage of Dowry [1982], we designate them (in reverse of the order that they appear in SUBCAT) as subject, direct object, indirect object, and oblique objects. (Under this definition, determiners count as subjects of the nouns they combine with.) Complements that themselves subcategorize for a complement fall outside this hierarchy and are called controlled complements. The complement next in sequence after a controlled complement is called its controller. For the sign (8) to play a communicative role, one ad- ditional kind of information is needed. Typically, heads give information about relation.~, while complements give information about the roles that individuals play in those relations. Thus lexical signs must assign roles to their com- plements. Augmented with role-assignment information, the lexical sign (8) takes the form: (9) (kisses ; KISS; IMAJOR: V: VFORM: FIN i SUBCAT: ~NP, patient), (NP-3RDSG, agent? } Thu~ (9) assign,, the roles AGENT and PATIENT to the sub- ject and direct object respectively. (Note: we assume that nouns subcategorize for a determiner complement and as- sign it the instance role. See section 6 below.) 5. Grammatical Rules [n addition to the lexicon, the grammar must contain mechanisms for constructing more complex signs that me- diate between longer English expressions and more complex NFLT translations. Such mechanisms are called grammat- ical rules. From a purely syntactic point of view, rules can be regarded as ordering principles. For example, English grammar has a rule something like this: (lO) If X is a sign whose SUBCAT value contains just one category Y, and Z is a sign whose category is consistent with Y, then X and Z can be combined to form a new sign W whose expression is got by 178 concatenating the expressions of X and Z. That is, put the final complement (subject} to the left of the head. 
We write this rule in the abbreviated form: (11) -> C H [Condition: length of SUBCAT of H = 11 The form of (11) is analogous to conventional phrase struc- ture rules such as NP - > DET N or S - > NP VP; in fact (11) subsumes both of these. However, (11) has no left-hand side. This is because the category of the constructed sign (mother) can be computed from the con- stituent signs (daughters) by general principles, as we shall presently show. Two more rules of English are: (12) -> H C [Condition: length of SUBCAT of H = 2 I (13) -> I-I C2 C1 [Condition: length of SUBCAT of H = 31 (12) says: put a direct object or subject-controlled comple- ment after the head. And (13) says: put an indirect object or object-controlled complement after the direct object. As in (11), the complement signs have to be consistent with the subcategorization specifications on the head. In (13), the indices on the complement symbols correspond to the order of the complement categories in the SUBCAT of the head. The category and translation of a mother need not be specified by the rule used to construct it. Instead, they are computed from information on the daughters by universal principles that govern rule application. Two such princi- ples are the Head Feature Principle (HFP) (14) and the Subcategorization Principle (15): (14) Head Feature Principle: Unless otherwise specified, the head features on a mother coincide with the head features on the head daughter. (For present purposes, assume the head features are all fea- tures except SUBCAT.) (15) Subcategorization Principle: The SUBCAT value on the mother is got by deleting from the SUBCAT value on the head daughter those categories corresponding to complement daughters. (Additional principles not discussed here govern control and binding.} The basic idea is that we start with the head daughter and then process the complement daughters in the order given by the indices on the complement symbols in the rule. So far, we have said nothing about the determination of the mother's translation. We turn to this question in the next section. 6. The Semantic Interpretation Principle Now we can explain how the NFLT-translation of a phrase is computed from the translations of its constituents. The basic idea is that every time we apply a grammar rule, we process the head first and then the complements in the order indicated by the rule (see [Proudian & Pollard 1985i). As each complement is processed, the correspond- ing category-role pair is popped off the SUBCAT stack of the head; the category information is merged (unified) with the category of the complement, and the role information is used to combine the complement translation with the head translation. We state this formally as: (16) Semantic Interpretation Principle (SIP): The translation of the mother is computed by the following program: a. Initialize the mother's translation to be the head daughter's translation. b. Cycle through the complement daughters, set- ting the mother's translation to the result of combining the complement's translation with the mother's translation. c. Return the mother's translation. The program given in (16) calls a function whose ar- guments are a sign (the complement), a rolemark (gotten from the top of the bead's SUBCAT stack), and an NFLT expression (the value of the mother translation computed thus far). This function is given in (17). There are two cases to consider, according as the translation of the com- plement is a determiner or not. (17) Function for Combining Complements: a. 
a. If the MAJOR feature value of the complement is DET, form the quantifier-expression whose determiner is the complement translation and whose restriction is the mother translation. Then add to the restriction a role link with the indicated rolemark (viz. instance) whose argument is a pointer back to that quantifier-expression, and return the resulting quantifier-expression.

b. Otherwise, add to the mother translation a role link with the indicated rolemark whose argument is a pointer to the complement translation (a quantifier-expression or individual constant). If the complement translation is a quantifier-expression, return the quantificational expression formed from that quantifier-expression by letting its scope-clause be the mother translation; if not, return the mother translation.

The first case arises when the head daughter is a noun and the complement is a determiner. Then (17) simply returns a complement like (3). In the second case, there are two subcases according as the complement translation is a quantifier-expression or something else (individual constant, sentential expression, propositional term, etc.). For example, suppose the head is this:

(18) {jogs ; JOG; [MAJOR: V; VFORM: FIN; SUBCAT: <NP-3RDSG, agent>]}

If the (subject) complement translation is 'RON' (not a quantifier-expression), the mother translation is just:

(19) (JOG agt:RON);

but if the complement translation is '{ALL P3 (PERSON inst:P3)}' (a quantifier-expression), the mother translation is:

(20) {ALL P3 (PERSON inst:P3) (JOG agt:P3)}
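The paper describes the combining step procedurally; the following Common Lisp fragment is our own reconstruction of (17) over the list representations of section 3 (the function name, the determiner flag, and the use of fresh symbols in place of the implementation's back-pointers are all assumptions):

    ;; A quantifier-expression (no scope-clause yet): (det var restrictor).
    (defun quantifier-expression-p (x)
      (and (listp x) (member (first x) '(some all))))

    ;; (17): combine one complement translation into the mother
    ;; translation. NOTE: the internal syntax uses pointers back to the
    ;; quantified formula itself; a fresh symbol stands in for that
    ;; pointer here.
    (defun combine-complement (complement-tr rolemark mother-tr
                               &key determinerp)
      (cond
        ;; Case (17a): a determiner complement yields a quantifier-
        ;; expression whose restriction is the nominal mother
        ;; translation, with an instance role link back to the variable.
        (determinerp
         (let ((var (gensym "X")))
           (list complement-tr var
                 (append mother-tr (list (cons 'inst var))))))
        ;; Case (17b): add a role link to the mother translation; if the
        ;; complement is a quantifier-expression, the result becomes the
        ;; scope-clause of that quantifier.
        (t (if (quantifier-expression-p complement-tr)
               (let ((var (second complement-tr)))
                 (append complement-tr
                         (list (append mother-tr
                                       (list (cons rolemark var))))))
               (append mother-tr (list (cons rolemark complement-tr)))))))

    ;; (19): (combine-complement 'ron 'agt '(jog))
    ;;         => (JOG (AGT . RON))
    ;; (20): (combine-complement '(all p3 (person (inst . p3))) 'agt '(jog))
    ;;         => (ALL P3 (PERSON (INST . P3)) (JOG (AGT . P3)))

Since the head translation is threaded through these calls one complement at a time, the fully reduced translation falls out as the parse proceeds, with no post-processing pass.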
~A Semantic Approach to Binding in a Monostratal Theory." To appear in Linguistics and Philosophy. Proudian, Derek, and Carl Pollard [1985]. ~Parsing Head- driven Phrase Structure Grammar." Proceedings of the ~Srd Annual Meeting of the Association for Computational Linouistics. Putnam, Hilary [1969 I. "On Properties." In Essays in Honor o/Carl G. Hempel, N. Rescher, ed., D. Rei- del, Dordrecht. Reprinted in Mind, Language, and Reality: Philosophical Papers (Vol. I, Ch. 19), Cam- bridge University Press, Cambridge, 1975. Saussure, Ferdinand de [1916]. Gouts de Linguistiquc Gen- erale. Paris: Payot. Translated into English by Wade Baskin as Course in General Linguistics, The Philosophical Library, New York, 1959 (paperback edition, McGraw-Hill, New York, 1966). Wallace, John [1965 I. "Sortal Predicates and Quantifica- tion." The Journal o[ Philosophy 62, 8-13. 179
1985
22
ANALYSIS OF CONJUNCTIONS IN A RULE-BASED PARSER

Leonardo Lesmo and Pietro Torasso
Dipartimento di Informatica - Universita' di Torino
Via Valperga Caluso 37 - 10125 Torino (ITALY)

The research project described in this paper has partially been supported by the Ministero della Pubblica Istruzione of Italy, MPI 40% Intelligenza Artificiale.

ABSTRACT

The aim of the present paper is to show how a rule-based parser for the Italian language has been extended to analyze sentences involving conjunctions. The most noticeable fact is the ease with which the required modifications fit in the previous parser structure. In particular, the rules written for analyzing simple sentences (without conjunctions) needed only small changes. On the contrary, more substantial changes were made to the exception-handling rules (called "natural changes") that are used to restructure the tree in case of failure of a syntactic hypothesis. The parser described in the present work constitutes the syntactic component of the FIDO system (a Flexible Interface for Database Operations), an interface allowing an end-user to access a relational database in natural language (Italian).

INTRODUCTION

It is not our intention to present here a comprehensive overview of the previous work on coordination, but just to describe a couple of recent studies on this topic and to specify the main differences between them and our approach. It must be noticed, however, that both systems that will be discussed use a logic grammar as their basic framework, so that we will try to make the comparison picking out the basic principles for the manipulation of conjunctions, and disregarding the more fundamental differences concerning the global system design. It is also worth pointing out that, although the present section is admittedly incomplete, most of the systems for the automatic analysis of natural language do not describe the methods adopted for the interpretation of sentences containing conjunctions in great detail. Therefore, it is reasonable to assume that in many of these systems the conjunctions are handled only by means of specific heuristic mechanisms.

A noticeable exception is the SYSCONJ facility of the LUNAR system (Woods, 1973): in this case, the conjunctions are handled by means of a parasyntactic mechanism that enables the parser to analyze the second conjunct assuming that it has a structure dependent on the hypothesized first conjunct. The main drawback of this approach is that the top-down bias of the ATNs does not allow the system to take advantage of the actual structure of the second conjunct to hypothesize its role. In other words, the analysis of the second conjunct acts as a confirmation mechanism for the hypothesis made on the sole basis of the position where the conjunction has been found. Consequently, all the various possibilities (of increasing levels of complexity) must be analyzed until a match is found, which involves an apparent waste of computational resources.

The solution proposed in the first of the systems we will be discussing here is quite similar. It is based on Modifier Structure Grammars (MSG), a logic formalism introduced in (Dahl & McCord, 1983), which constitutes an extension of the Extraposition Grammar by F. Pereira (1981).
The conjunctions are analyzed by means of a special operator, a "demon", that deals with the two problems that occur in coordination: the first conjunct can be "interrupted" in an incomplete status by the occurrence of the conjunction (this is not foreseeable at the beginning of the analysis), and the second conjunct must be analyzed taking into account the previous interruption point (and in this case, mainly because the second conjunct may assume a greater number of forms, some degree of top-down hypothesization is required).

The first problem is solved by the "backup" procedure, which forces the satisfaction (or "closure" in our terms) of one or more of the (incomplete) nodes appearing in the so-called "parent" stack. The choice of the node to which the second conjunct must be attached makes the system hypothesize (as in SYSCONJ) the syntactic category of the second conjunct, and the analysis can proceed (a previous, incomplete constituent would be saved in a parallel structure, called "merge stack", that would be used subsequently to complete the interpretation of the first conjunct).

Apart from the considerable power offered by MSGs for semantic interpretation, it is not quite clear why this approach represents an advance with respect to Woods' approach. Even though the analysis times reported in the appendix of (Dahl & McCord, 1983) are very low, the top-down bias of MSGs produces the same problems as ATNs do. The "backup" procedure, in fact, chooses blindly among the alternatives present in the parent stack (this problem is mentioned by the authors). A final comment concerns the analysis of the second conjunct: since the basic grammar aims at describing "normal" English clauses, it seems that the system has some trouble with sentences involving "gapping" (see the third section). In fact, while an elliptical subject can be handled by the hypothesization, as second conjunct, of a verb phrase (this is the equivalent of treating the situation as a single sentence involving a single subject and two actions, and not as two coordinated sentences, the second of which has an elliptical subject; it is a perfectly acceptable choice), the same mechanism cannot be used to handle sentences with an elliptical verb in the second conjunct.

The last system we discuss in this section has been described in (Huang, 1984). Though it is based, as the previous one is, on a logic grammar, it starts from a quite different assumption: the grammar deals explicitly with conjunctions in its rules. It does not need any extra-grammatical mechanisms, but the positions where a particular constituent can be erased by the ellipsis have to be indicated in the rules. Even though the effort of reconstructing the complete structure (i.e. of recovering the elliptical fragment) is mainly left to the unification mechanism of PROLOG, the design of the grammar is rendered somewhat more complex. The fragment of grammar reported in (Huang, 1984) gives the impression of a set of rules "flatter" than the ones that normally appear in standard grammars (this is not a negative aspect; it is a feature of the ATNs too). The "sentence" structure comprises an NP (the subject, which may be elliptical), an adverbial phrase, a verb (which also may be elliptical), a restverb (for handling possible previous auxiliaries) and a rest-sentence component.
We can justify our previous comment on the increased effort in grammar development by noting that two different predicates had to be defined to account for the normal complements and the structure that Huang calls "reduced conjunction"; see example (13) in the third section. Moreover, it seems that a recovery procedure deeply embedded within the language interpreter reduces the flexibility of the design. It is difficult to realize how far this problem could affect the analysis of more complex sentences (space constraints limited the size of the grammar reported in the paper quoted), but, for instance, the explicit assumption that the absence of the subject makes the system retrieve it from a previous conjunct seems too strong. Disregarding languages where the subject is not always required (as is the case for Italian), in English a sentence of the form "Go home and stay there till I call you" could give the parser some trouble.

In the following we will describe an approach that overcomes some of the problems mentioned above. The parser that will be introduced constitutes the syntactic component of the FIDO system (a Flexible Interface for Database Operations), which is a prototype allowing an end-user to interact in natural language (Italian) with a relational database. The query facility has been fully implemented in FRANZ LISP on a VAX-780 computer. The update operations are currently under study. The various components of the system have been described in a series of papers which will be referenced within the following sections. The system includes also an optimization component that converts the query expressed at a conceptual level into an efficient logical-level query (Lesmo, Siklossy & Torasso, 1985).

ORGANIZATION OF THE PARSER

In this section we overview the principles that lie at the root of the syntactic analysis in FIDO. We try to focus the discussion on the issues that guided the design of the parser, rather than giving all the details about its current implementation. We hope that this approach will enable the reader to realize why the system is so easily extendible. For a more detailed presentation, see (Lesmo & Torasso, 1983 and Lesmo & Torasso, 1984).

The first issue concerns the interactions between the concepts of "structured representation of a sentence" and "status of the analysis". These two concepts have usually been considered as distinct: in ATNs, to consider a well-known example, the parse tree is held in a register, but the global status of the parsing process also includes the contents of the other registers, a set of states identifying the current position in the various transition networks, and a stack containing the data on the previous choice points. In logic grammars (Definite Clause Grammars (Pereira & Warren, 1980), Extraposition Grammars (Pereira, 1981), Modifier Structure Grammars (Dahl & McCord, 1983)) this book-keeping need not be completely explicit, but the interpreter of the language (usually a dialect of PROLOG) has to keep track of the binding of the variables, of the clauses that have not been used (but could be used in case of failure of the current path), and so on. On the contrary, we tried to organize the parser in such a way that the two concepts mentioned above coincide: the portion of the tree that has been built so far "is" the status of the analysis. The implicit assumption is that the parser, in order to go on with the analysis, does not need to know how the tree was built (what rules have been applied, what alternatives there were), but just what the result of the previous processing steps is.⁴

⁴ We must confess that this assumption has not been pushed to its extreme consequences. In some cases (see (Lesmo & Torasso, 1983) for a more detailed discussion) the backtracking mechanism is still needed, but, although we are not able to provide experimental evidence, we believe that it could be substituted by diagnostic procedures of the type discussed, with different purposes and within a different formalism, in (Weischedel & Black, 1980).

Of course, this assumption implies that all information present in the input sentence must also be
Tne implicit assunlDtion is that the parser, in order to go on wi~/~ the analysis does not need to know how the tree was built (what rules have been applied, what alterna- tives there were), but just what the result of the previous processing steps is 4. Of course, this assumption implies that all infor- mation present in the input sentence must also be AWe must confess that this assumption has not been pushed to its extreme consequences. In some cases (see (Lesm~ & Torasso, 1983) for a more detailed discussion) the backtracking mechanism is still needed, but, although we are not unable to pro- vide experimental evidence, we believe that it cou/d be substituted by diagnostic procedures of the type discussed, with different purposes and within a different fomTalism, in (Weischedel & Black, 1980). 181 present in its struct-ttred representation; actually, what happens is that new pieces of information, which were implicit in the "linear" input form, are made explicit in the result of the analysis. These pieces of information are extracted using the syn- tactic knowledge (how the constituents are struc- tured) and the lexical knowledge (inflectional data). The main advantage of such an approach is that the whole interpretation process is centered around a single structure: the deL~ndency structure of the constituents composing the sentence. This enhances the modularity of ~he systam: the mutual indepen- dence of the various knowledge sources can be stated clearly, at least as regards the pieces of knowledge contained in each of t_~; on the c~n- trary, the control flow can be designed in such a way that all knowledge sources contribute, by cooperating in a more or less synchronized way, to the overall goal of comprehension (see fig.l). A side-effect of the independence of knowledge sources n~_ntioned above is that there is no strict coupling between syntactic analysis and s~T~%ntic interpretation, contrarily to what happens, for instance, in Augmented Phrase Structure Grammars (Robinson, 1982). This moans that there is no one- to-one association between syntactic and semantic rules, a further advantage if we succeed in making the structured representation of the sentence rea- sonably uniform. This result has been achieved by distinguishing between "syntactic categories", which are used in the syntactic rules to build the tree, and "node types", whose instantiations are the ele_,~nts the tree is built of. z Since the number of syntactic categories (and of syntactic rules) is considerably larger than the ntm~ber of node types (6 node types, 22 syntactic categories, 61 rules), then so,~ general constraints and interpretation tales may be expressed in a more compact form. WiL-hout entering into a discussion on semantic interpretation, we can give an exile using the rules that validate the tree from a syntactic point of view (SY~IC RULES 2 in fig.l). One of these rules specifies that the subject and the verb of the sentence must agree in nun~r. On the other hand, the subject can be a noun, a pronoun, an interrogative pro~)un, a relative pro~m~n: each of them is associated with a different syntactic category, but all of them will finally be stored in a node of type REF (standing for REFerent) ; independently of the category, a single rule is used to specify the agreement constraint mentioned above. let us now have a look at the box in fig.l labelled " ~ I C RULES i: EXTENDING THE [~a~". 
~Six node types have been introduced (each node is actually a o~91ex data structure): REL (~a- tions, mainly verbs), REF (R]~Ferents, no~s, pro- nouns, etc. ), CO~ (CONNectors, e.g. preposi- tions), OET (DETerminers), ADJ (ADJectives), and MOD (MCOifiers, ~ainly adverbs). Be~nd these six types, a special node (TOP) has been included to identi~ Z the main verb(s) of the sentence. SYNTACTIC RULES 1 : EXTENDING THE TREE II I SYNT"C iC I |1 ] RULES 2: I~{IRE IVALZDATZNG[ , I T"=T E I / NATURAL [ CHANCES: [ RESHAPING[ THE TREE[ SEMANTIC I KNOWLEDGE l: 1 VALIDATING I THE TREE I (STRONG1 J RE' SENTATIO INKNOW E GE ANNOTATING [ /' THETRE 1 ANAPHORA RESOLUTION: DISAMBIGUATING THE TREE FiE.l: A single structure is the basis of the whole interpretation process. The rules that are logically contained in that box are the primary tool for performing the syntactic analysis of a sentence. Each of them has the form: ~ITION ---> ACTION where PR~ONDITION is a boolean expression ~nose ter~tg are elementary conditions; their predicates allow the system to inspect the current status of the analysis, i.e. the tree (for instance: '"~hat is the type of the current node?", "Is t.here an en~pty node of type X?") ; a look-ahead can also be included in the preconditions (maxirman 2 words). The right-hand side of a rule (ACTION) consists in a sequence of operations; there are two operators: CRLINK (X,Y) which creates a new instance of the type X and links it to the nearest node of type Y existing in the rightn~Dst path of the tree (and moving only upwards) FILL (X,V) which fills the nearest node (see above) of type X with the value V (which in most cases coincides with the lexical date about the current input word). '][he rules are grouped in packets, each of which is associated with a lexical category. It is worth noting that the choice of the rule to fire is non-deterministic, since different rules can be executed at a given stage. On the other hand, the non-determinism has been reduced by making the preconditions of the rules belonging to the same packet mutually e~uzlusive; consequently, the status is saved on the stack only (but not always) if the input word is syntactically ambiguous. Note that nothing prevents there being exceptions to this rule. For e~le, in ~glish the past indicative and the past participle u.~ually have the same form: in this case, ~ different rules of the V~ packet could be activated if the context allows for both interpretations. 182 Currently, the syntactic categories of an ambiguous word are ordered manually in the lexicon; since the "first" rule is deten~ined by that order, the selection of the rule to execute depends Only on the choices made by the designer of the lexicon. Same experiments :,a~e been made to include a weighting mechanism, which should depend both on the syntactic context and on the semantic knowledge (Lesmo & Torasso, 1985). A second "syntactic" box appears in fig.l. It refers to rules that are, in a sense, weaker than the rules of the set discussed above. The rules of the first set are aimed at defining acceptable syn- tactic structures, where "acceptable" is used to maan that the resulting structure is semantically interpretable (for instance, a determiner cannot be used to modify an adjective). On the contrary, the rules of t~he second set specify which of the mean- ingful sentences are well formed; in particular, they are used to check gender and number agreement and the ordering of constituents (e.g. 
The separation between the rules of the two sets is the feature that makes the system robust from a syntactic point of view (see (Lesmo & Torasso, 1984) for further details). It may be noticed that, in fig.1, both the second set of syntactic rules we have just discussed and a part of the semantic knowledge have the purpose of "validating the tree". Independently of the fact that the second-level syntactic constraints can be broken (they are "weak" constraints), whilst the semantic constraints cannot (they are "strong" constraints), some action must be performed when the structure hypothesized by the first-level rules does not match those constraints. The task of the rules called "natural changes" (see fig.1) is to restructure the tree in order to provide the parser with a new, "correct" structure. We will not go into further details here, since the natural changes (in particular the one concerning the treatment of conjunctions) will be discussed in a following section; however, in order to give a complete picture of the behavior of the parser, we must point out that the natural changes can fail (no correct structure can be built). In this case, the parser returns to the original structure and issues a warning message if the trigger of the natural changes was a weak constraint; otherwise (semantic failure) it backtracks to a previous choice point.

ANALYSIS OF CONJUNCTIONS

Before starting the description of the mechanisms adopted to analyze conjunctions, it is worth noting that the analysis of conjunctions was already mentioned in a previous paper (Lesmo & Torasso, 1984). The present paper represents an advance with respect to the referenced one in that new solutions have been adopted, which greatly enhance the homogeneity of the parsing process (not to mention the fact that the behavior of the parser was treated very sketchily in the previous paper). The presentation of the solution we adopted is based on the classification of sentences containing conjunctions reported in (Huang, 1984): we will start from the simpler cases and introduce the more complex examples later. A last remark concerns the language: as stated above, the FIDO system works on Italian; in order to enhance the readability of the paper, we present English examples. Actually, we are doing some experiments using a restricted English grammar, but it must be clear that the facilities that will be described are fully implemented only for the Italian grammar (the cases where Italian behaves differently from English will be pointed out during the presentation).

As for all other syntactic categories, the category "conjunction" also has an associated set of rules; the set contains a single, very simple rule: it saves the conjunction in a global register, which is available during the subsequent stages of processing. The simplest case of conjunction is the one referred to in (Huang, 1984) as "unit interpretation":

(1) Bob met Sue and Mary in London

Normally, the rules associated with nouns hypothesize the attachment of a newly created REF node to a connector that (if it does not already exist) is, in turn, created and attached to the nearest node of type REL above the current node (or to the current node itself if it is of type REL). After the analysis of "Bob met", the situation of the parse tree would be as in fig.2.a (and REL1 is the current node).
The analysis of "Sue" would produce the tree of fig.2.b. The noun rules have been changed to allow for the attachment of more than one noun to the same connector (should a conjunction be present in the register). In fig.2.c, the tree built after the analysis of sentence (1) is reported.

It must be noted that the most common example of natural change (the one called MOVEUP) is also useful when a conjunction is present. Consider, for instance, the sentence:

(2) John saw the boy you told the story and the girl you met yesterday

After the analysis of the fragment ending with "story", we get the tree of fig.3.a (and REF4 is the current node). According to the previous discussion, the noun "girl" would be stored in a REF node attached to CONN4. On the other hand, the semantics would reject this hypothesis, since the case frame (TO TELL: SUBJ/PERSON; DIROBJ/PERSON; INDOBJ/...) is not acceptable. The portion of the tree representing "and the girl" would be "moved up" and attached to CONN2, thus yielding the tree of fig.3.b (that would be expanded subsequently, by attaching the relative clause "you met yesterday" to REF5).

Unlike what happens in the previous cases, a new rule had to be added to account for the other types of conjunctions. This rule is a new natural change, that the system executes when the conjunction implies the existence of a new clause in the sentence. The need for such a rule is clear if we consider one of the basic assumptions of the parser.

[Fig.2 - Different phases of the interpretation of the sentence "Bob met Sue and Mary in London". H means "head" and indicates the position of the node filler within the sequence of dependent structures; UNM means "Unmarked" and indicates that the corresponding verb case is not marked by a preposition.]

[Fig.3 - Two phases in the analysis of the sentence "John saw the boy you told the story and the girl you met yesterday" (the subtree relative to "you met yesterday" is not shown).]

In a sense, the parser knows that it has to parse a sentence because, before starting the analysis, the tree is initialized by the creation of an empty REL node. Analogously, when a relative pronoun is found, the relative clause is "initialized" via the creation of a new empty REL node and its attachment to the REF node which the relative clause is supposed to refer to. The only exception to this rule is represented by gerunds and participles, which are handled by means of explicit preconditions in the VERB rule set. Of course, this can give rise to ambiguities when the past indicative and the past participle have the same form, as in the well-known garden path:

(3) The horse raced past the barn fell

In the case of sentence (3), the choice of the indicative tense would be made, and the past participle rule would be saved to allow for a possible backtracking in a subsequent phase, as would actually occur in example (3) (we must note here that such an ambiguity does not occur in Italian). A further comment concerns the relative clauses with deleted relative pronouns (as in (2) above): this phenomenon does not occur in Italian either; we believe that it could be handled by means of a natural change very similar to the one described below.
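Before turning to that change, the strong semantic check that drives MOVEUP in example (2) can be made concrete with a small sketch: a hypothesis is rejected when more unmarked case fillers are attached to a verb than its case frame allows. The frame format and the toy lexicon below are assumptions made for this illustration, not FIDO's actual semantic component (which is based on the two-level net of (Lesmo, Siklossy & Torasso, 1983)):

    # Toy version of the strong semantic check (assumptions, not FIDO's code).

    CASE_FRAMES = {
        # verb: list of (case name, preposition marker; None = unmarked)
        "TO TELL": [("SUBJ", None), ("INDOBJ", None), ("DIROBJ", None)],
        "TO HEAR": [("SUBJ", None), ("DIROBJ", None)],
    }

    def acceptable(verb, dependents):
        """dependents: one marker per constituent attached to the verb
        (a preposition, or None for an unmarked case). Reject the
        hypothesis when more unmarked dependents are present than the
        verb's frame has unmarked cases."""
        unmarked_slots = sum(1 for _case, m in CASE_FRAMES[verb] if m is None)
        unmarked_deps = sum(1 for m in dependents if m is None)
        return unmarked_deps <= unmarked_slots

    # "you told the boy the story": three unmarked dependents -> acceptable.
    print(acceptable("TO TELL", [None, None, None]))        # True
    # "... and the girl" as a fourth unmarked case -> rejected, which is
    # what triggers the MOVEUP natural change.
    print(acceptable("TO TELL", [None, None, None, None]))  # False

The same kind of count underlies the rejections discussed for example (6) in the next part of this section ("four unmarked cases for 'to tell'", "three unmarked cases for 'to hear'").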
We can now turn back to the problem of conjunctions. Let's consider first a sentence where the right conjunct is a complete phrase:

(4) Bob met Sue and Mary kissed her

After the analysis of the sentence as far as "Mary", the structure of the tree would be as in fig.2.c (apart from the subtree referring to "in London"). When "kissed" is found, no empty REL exists to accommodate it, thus the natural changes are triggered and, because of the preconditions, the new one (called INSERTREL) is executed. It operates according to the following steps:

1) A conjunction is looked for in the right subtree
2) It is detached together with the structure following it
3) The conjunction is inserted in the node above the first REL that is found going up in the hierarchy (in fig.2.c, starting from CONN2 and going upwards, we find REL1 and the node above it is TOP)
4) A new empty REL is created and attached to the node found in step 3
5) The structure detached in step 2 is attached to the new REL, inserting, when needed, a connector.

The execution of INSERTREL in the case of example (4) produces the structure depicted in fig.4, that is completed subsequently, by inserting "TO KISS" in REL2 and by creating the branch for "her" in the usual way.

[Fig.4 - Partial structure built during the analysis of the sentence "Bob met Sue and Mary kissed her".]

Two more complex examples show that the ability of the parser to analyze conjunctions is not limited to main clauses:

(5) Henry heard the story that John told Mary and Bob told Ann

With regard to sentence (5), we can see the result of the analysis of the portion ending with "Bob" in fig.5.a. It is apparent that the execution of the steps described above causes the insertion of a new REL node at the same level of REL2 and attached to REF2; this seems intuitively acceptable and provides FIDO with a structure consistent with the compositive semantics adopted to obtain the formal query (Lesmo, Siklossy & Torasso, 1983). An even more interesting example is provided by the following sentence:

(6) Henry heard the story John told Mary and Bob told Ann his opinion

Here INSERTREL and MOVEUP cooperate in building the right tree. What happens is as follows: after the execution of INSERTREL (in the way described above) "his opinion" is attached to REL3. The selection restrictions are not respected, because four unmarked cases are present for the verb "to tell" (including the elliptical relative extracted from the first conjunct), so the smallest right subtree ("his opinion") is moved up and attached to REL1; again, the hypothesis is rejected (three unmarked cases for "to hear"). The tree returns to the original status and MOVEUP is tried again on a larger subtree (the one headed by REL3). Since a conjunction is found in the node above REL3, it is moved too and the analysis finally succeeds.

The last type of sentences that we will consider involves gapping. An example of clause-internal ellipsis is:

(7) I played football and John tennis.

When the name "John" is encountered, a unit interpretation is attempted ("football and John") and it is rejected for obvious reasons. The only alternative left to the parser is the execution of INSERTREL, which, working in the usual way, allows the parser to build up the right interpretation. Note that an empty node is left after the analysis of the sentence is completed, which is not done in the examples described above. This is handled by non-syntactic routines that build up the semantic interpretation of the sentence (formal query construction in FIDO); the actual verb, however, is made available as soon as possible, because the interpretation routines do not wait until the analysis of the command is finished before beginning their work.
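A minimal sketch of INSERTREL, following steps 1-5 above on a toy tree, may help fix ideas. Everything here (the Node class, the exact bookkeeping) is an illustrative assumption rather than FIDO's implementation:

    # Sketch of the INSERTREL natural change (steps 1-5); toy representation.

    class Node:
        def __init__(self, ntype, value=None):
            self.ntype, self.value = ntype, value
            self.children, self.parent = [], None

        def add(self, child):
            child.parent = self
            self.children.append(child)
            return child

    def find_conjunction(node):
        """Step 1: search the tree (rightmost parts first) for a conjunction."""
        if node.ntype == "CONJ":
            return node
        for child in reversed(node.children):
            found = find_conjunction(child)
            if found is not None:
                return found
        return None

    def insertrel(top):
        conj = find_conjunction(top)              # step 1
        siblings = conj.parent.children           # step 2: detach the
        i = siblings.index(conj)                  # conjunction together with
        detached = siblings[i + 1:]               # the structure following it
        del siblings[i:]

        node = conj.parent                        # step 3: go up to the first
        while node.ntype != "REL":                # REL; the conjunction goes
            node = node.parent                    # into the node above it
        anchor = node.parent                      # (TOP in fig.2.c)

        anchor.add(conj)
        new_rel = anchor.add(Node("REL"))         # step 4: new empty REL
        for subtree in detached:                  # step 5: re-attach what was
            new_rel.add(subtree)                  # detached (a connector is
        return new_rel                            # inserted here when needed)

    # Fig.2.c situation for example (4): "Sue AND Mary" under CONN2.
    top = Node("TOP")
    rel1 = top.add(Node("REL", "TO MEET"))
    conn2 = rel1.add(Node("CONN", "UNM"))
    conn2.add(Node("REF", "SUE"))
    conn2.add(Node("CONJ", "AND"))
    conn2.add(Node("REF", "MARY"))
    rel2 = insertrel(top)   # "MARY" now hangs under a new empty REL at TOP,
                            # ready to receive "kissed"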
As the reader will see from the following examples, no trouble is caused for the parser by the other kinds of gapping:

- left-peripheral ellipsis with two NP-remnants. For example:
(8) Max gave a nickel to Sally and a dime to Harvey
(unit interpretation "to Sally and a dime" attempted and rejected; INSERTREL executed; the semantic routines also have to recover the elliptical subject).

- left-peripheral ellipsis with one NP remnant and non-NP remnant(s). For example:
(9) Bob met Sue in Paris and Mary in London
(exactly the same case as (8); the parser makes no distinction between NPs and non-NPs).

- right-peripheral ellipsis concomitant with clause-internal ellipsis. For example:
(10) Jack asked Elsie to dance and Wilfred Phoebe
(same processing as before; more complex semantic recovery of lacking constituents is necessary).

Not very different is the case where "the right conjunct is a verb phrase to be treated as a clause with the subject deleted". As an example consider the following sentence:

(11) The man kicked the child and threw the ball.

In this case, the search for an empty REL node fails in the usual way and INSERTREL is executed as discussed above, except that the conjunction is still in the register and no structure follows it, so that the steps 1, 2, and 5 are skipped.

Finally, "Right Node Raising" is exemplified by

(12) The man kicked and threw the ball.

The problem here is that the left conjunct is not a complete sentence. However, the syntactic rules have no trouble in analyzing it; it is a task of semantics to decide whether "the man kicked" can be accepted or not. In other words, "the ball" could be considered as an elliptical object in the first clause; although the procedures for ellipsis resolution are unable, at the present stage of development, to handle such a case, it is not difficult to imagine how they could be extended.

To close this section, two cases must be mentioned that the parser is unable to analyse correctly. In sentence (13)

(13) John drove his car through and completely demolished a plate glass window

a preposition (through) has no NP attached to it. The problem here is very similar to that of "dangling prepositions" (and, like the latter, it does not occur in Italian). A simple change in the syntax would allow a CONN node to be left without any dependent REF. Less simple would be the changes necessary in the anaphora procedures to allow them to reconstruct the meaning of the sentence (the difficulty here is similar to the "Right Node Raising" discussed above).

[Fig.5 - Two phases in the analysis of the sentence "Henry heard the story that John told Mary and Bob told Ann".]

The last problematic case is concerned with multi-level gappings, as in the following example:

(14) Max wants to try to begin to write a novel and Alex a play.

In this case, the insertion of an empty REL node to account for the second conjunct ("Alex a play") does not allow the parser to build a structure that corresponds to the one erased by the ellipsis. We have not gone deeply into this problem, which, unlike the preceding ones, also occurs in Italian.
However, it seems that, also in this case, the increased power of the procedures handling elliptical fragments could provide some reasonable solutions without requiring substantial changes to the presented approach to parsing.

CONCLUSIONS

As stated in the introduction, a proper treatment of coordination involves the ability to interrupt the analysis of the first conjunct when the conjunction is found and the ability to analyze the second conjunct taking into account what happened before. The system described in the paper deals with the two problems by adopting a robust and modular bottom-up approach.

The first conjunct is extended as far as possible using the incoming words and the structure-building syntactic rules. Its completeness and/or acceptability is verified by means of another set of rules that fit easily in the proposed framework and do not affect the validity of the other rules. The second conjunct is analyzed using the same standard set of structure-building rules, plus an exception-handling rule that accounts for the presence of a whole clause as second conjunct. The need to take into account what happened before is satisfied by the availability of the portion of the tree that has already been built and that can be inspected by all the rules existing in the system.

The paper shows that the approach that has been adopted enables the system to analyze correctly most sentences involving conjunctions. Although some cases are pointed out where the present implementation fails to analyze a correct sentence, we believe that the solutions presented in the paper highlight some of the advantages that a rule-based approach to parsing has with respect to the classical grammar-based ones.

REFERENCES

V. Dahl, M. McCord (1983): Treating Coordination in Logic Grammars. AJCL 9, 69-91.
X. Huang (1984): Dealing with Conjunctions in a Machine Translation Environment. Proc. COLING 84, Stanford, 243-246.
L. Lesmo, L. Siklossy, P. Torasso (1983): A Two Level Net for Integrating Selectional Restrictions and Semantic Knowledge. Proc. IEEE Int. Conf. on Systems, Man and Cybernetics, India, 14-18.
L. Lesmo, L. Siklossy, P. Torasso (1985): Semantic and Pragmatic Processing in FIDO: a Flexible Interface for Database Operations. Information Systems 10, n.2.
L. Lesmo, P. Torasso (1983): A Flexible Natural Language Parser Based on a Two-level Representation of Syntax. Proc. 1st Conf. ACL Europe, Pisa, 114-121.
L. Lesmo, P. Torasso (1984): Interpreting Syntactically Ill-Formed Sentences. Proc. COLING 84, Stanford, 534-539.
L. Lesmo, P. Torasso (1985): Weighted Interaction of Syntax and Semantics in Natural Language Analysis. 9th IJCAI, Los Angeles.
F. Pereira (1981): Extraposition Grammars. AJCL 7, 243-256.
F. Pereira, D. Warren (1980): Definite Clause Grammars for Language Analysis: A Survey of the Formalism and a Comparison with Transition Networks. Artificial Intelligence 13, 231-278.
J.J. Robinson (1982): DIAGRAM: A Grammar for Dialogues. Comm. ACM 25, 27-47.
R.M. Weischedel, J.E. Black (1980): Responding Intelligently to Unparsable Inputs. AJCL 6, 97-109.
W.A. Woods (1973): An Experimental Parsing System for Transition Network Grammars. In R. Rustin (ed.): Natural Language Processing, Algorithmics Press, New York, 111-154.
A PRAGMATICS-BASED APPROACH TO UNDERSTANDING INTERSENTENTIAL ELLIPSIS*

Sandra Carberry
Department of Computer and Information Science
University of Delaware
Newark, Delaware 19716, USA

ABSTRACT

Intersentential elliptical utterances occur frequently in information-seeking dialogues. This paper presents a pragmatics-based framework for interpreting such utterances, including identification of the speaker's discourse goal in employing the fragment. We claim that the advantage of this approach is its reliance upon pragmatic information, including discourse content and conversational goals, rather than upon precise representations of the preceding utterance alone.

INTRODUCTION

The fragmentary utterances that are common in communication between humans also occur in man-machine communication. Humans persist in using abbreviated statements and queries, even in the presence of explicit and repeated instructions to adhere to syntactically and semantically complete sentences (Carbonell, 1983). Thus a robust natural language interface must handle ellipsis. We have studied one class of elliptical utterances, intersentential fragments, in the context of an information-seeking dialogue. As noted by Allen(1980), such utterances differ from other forms of ellipsis in that interpretation often depends more heavily upon the speaker's inferred underlying task-related plan than upon preceding syntactic forms. For example, the following elliptical fragment can only be interpreted within the context of the speaker's goal as communicated in the first utterance:

[EX1] "I want to cash this check. Small bills only."

Furthermore, intersentential fragments are often employed to communicate discourse goals, such as expressing doubt, which a syntactically complete form of the same utterance may not convey as effectively. In the following alternative responses to the initial statement by SPEAKER-1, F1 expresses doubt regarding the proposition stated by SPEAKER-1 whereas F2 merely asks about the jet's contents.**

SPEAKER-1: "The Korean jet shot down by the Soviets was a spy plane."
F1: "With 269 people on board?"
F2: "With infrared cameras on board?"

Previous research on ellipsis has neglected to address the speaker's discourse goals in employing the fragment, but real understanding requires that these be identified (Mann, Moore, and Levin, 1977) (Webber, Pollack, and Hirschberg, 1982). In this paper, we investigate a framework for interpreting intersentential ellipsis that occurs in task-oriented dialogues. This framework includes:

[1] a context mechanism (Carberry, 1983) that builds the information-seeker's underlying plan as the dialogue progresses and differentiates between local and global contexts.

[2] a discourse component that controls the interpretation of ellipsis based upon discourse goal expectations gleaned from the dialogue; this component "understands" ellipsis by identifying the discourse goal which the speaker is pursuing by employing the elliptical fragment, and by determining how the fragment should be interpreted relative to that goal.

[3] an analysis component that suggests possible associations of an elliptical fragment with aspects of the inferred plan for the information-seeker.

* This work has been partially supported by a grant from the National Science Foundation, IST-8311400, and a subcontract from Bolt Beranek and Newman Inc. of a grant from the National Science Foundation, IST-8419162.
** Taken from Flowers and Dyer(1984).
[4] an evaluation component which, given multiple possible associations of an elliptical fragment with aspects of the information-seeker's underlying plan, selects that association most appropriate to the discourse context and believed to be intended by the speaker.

INTERPRETATION OF INTERSENTENTIAL ELLIPSIS

As illustrated by [EX1], intersentential elliptical fragments cannot be fully understood in and of themselves. Therefore a strategy for interpreting such fragments must rely on knowledge obtained from sources other than the fragment itself. Three possibilities exist: the syntactic form of preceding utterances, the semantic representation of preceding utterances, and expectations gleaned from understanding the preceding discourse.

The first two strategies are exemplified by the work of Carbonell and Hayes(1983), Hendrix, Sacerdoti, and Slocum(1976), Waltz(1978), and Weischedel and Sondheimer(1982). Several limitations exist in these approaches, including an inability to handle utterances that rely upon an assumed communication of the underlying task and difficulty in resolving ambiguity among multiple interpretations. Consider the following two dialogue sequences:

SPEAKER: "I want to take a bus. The cost?"
SPEAKER: "I want to purchase a bus. The cost?"

If a semantic strategy is employed, the case frame representation for "bus" may have a "cost of bus" and a "cost of bus ticket" slot; ambiguity arises regarding to which slot the elliptical fragment "The cost?" refers. Although one might suggest extensions for handling this fragment, a semantic strategy alone does not provide an adequate framework for interpreting intersentential ellipsis.

The third potential strategy utilizes a model of the information-seeker's inferred task-related plan and discourse goals. The power of this approach is its reliance upon pragmatic information, including discourse content and conversational goals, rather than upon precise representations of the preceding utterances alone. Allen(1980) was the first to relate ellipsis processing to the domain-dependent plan underlying a speaker's utterance. Allen views the speaker's utterance as part of a plan which the speaker has constructed and is executing to accomplish his overall task-related goals. To interpret elliptical fragments, Allen first constructs a set of possible surface speech act representations for the elliptical fragment, limited by syntactic clues appearing within the fragment. The task-related goals which the speaker might pursue form a set of expectations, and Allen attempts to infer the speaker's goal-related plan which resulted in execution of the observed utterance. A part of this inference process involves determining which of the partially constructed plans connecting expectations (goals) and observed utterance are reasonable given the knowledge and mutual beliefs of the speaker and hearer. Allen selects the surface speech act which produced the most reasonable inferred plan as the correct interpretation.

Allen notes that the speaker's fragment must identify the subgoals which the speaker is pursuing, but claims that in very restricted domains, identifying the speaker's overall goal from the utterance is sufficient to identify the appropriate response in terms of the obstacles present in such a plan. For his restricted domain involving train arrivals and departures, Allen's interpretation strategy works well.
In more complex domains, it is necessary to identify the particular aspect of the speaker's overall task-related plan addressed by the elliptical fragment in order to interpret it properly. More recently, Litman and Allen(1984) have extended Allen's model to a hierarchy of task-plans and meta-plans. Litman is currently studying the interpretation of elliptical fragments within this enhanced framework.

In addition to the syntactic, semantic, and plan-based strategies, a few other heuristics have been utilized. Carbonell(1983) uses discourse expectation rules that suggest a set of expected user utterances and relate elliptical fragments to these expected patterns. For example, if the system asks the user whether a particular value should be used as the filler of a slot in a case frame, the system then expects the user's utterance to contain a confirmation or disconfirmation pattern, a different filler for the slot, a comparative pattern such as "too hard", and so forth. Although these rules use expectations about how the speaker might respond, they seem to have little to do with the expected discourse goals of the speaker. Real understanding consists not only of recognizing the particular surface-request or surface-inform, but also of inferring what the speaker wants to accomplish and the relationship of each utterance to this task. Interpretation of ellipsis based upon the speaker's inferred underlying task-related plan and discourse goals facilitates a richer interpretation of elliptical utterances.

REQUISITE KNOWLEDGE

A speaker can felicitously employ intersentential ellipsis only if he believes his utterance will be properly understood. The motivation for this work is the hypothesis that speaker and hearer mutually believe that certain knowledge has been acquired during the course of the dialogue and that this factual knowledge along with other processing knowledge will be used to deduce the speaker's intentions. We claim that the requisite factual knowledge includes the speaker's inferred task-related plan, the speaker's inferred beliefs, and the anticipated discourse goals of the speaker. We claim that the requisite processing knowledge includes plan recognition strategies and focusing techniques.

1. Task-Related Plan

In a cooperative information-seeking dialogue, the information-provider is expected to infer the information-seeker's underlying task-related plan as the dialogue progresses. At any point in the dialogue, IS (the information-seeker) believes that some subset of this plan has been communicated to IP (the information-provider); therefore IS feels justified in formulating utterances under the assumption that IP will use this inferred task model to interpret utterances, including elliptical fragments.

An example will illustrate the importance of IS's inferred task-related plan in interpreting ellipsis. In the following, IS is considering purchase of a home mentioned earlier in the dialogue:

IS: "What elementary school do children in Rolling Hills attend?"
IP: "They attend Castle Elementary."
IS: "Any nearby swim clubs?"
for the last utterance in the dialogue, then interpretation depen~ upon whether one believes IS wants hls/her children to be bused, or perhaps even walk, to day-care directly from school. 2. Shared Beliefs Shared beliefs of facts, beliefs which the listener believes speaker and iistecer mutually hold, are a second component of factual knowledge required for processing intersentential elliptical fra6ments. These shared beliefs either represent presueed a priori knowledge of the domain, such as a pres~ptlon that dialogue participants in a unAvereity domain know that each course has a teacher, or beliefs derived from the dialogue itself. An e~ple of the latter occurs i~ IP tells IS that C3360 is a 5 credit hour course; IS may not himself believe that C3360 is a 5 credit hour course, but as a result of IP's utterance, he does believe it is mutually believed that IP believes this. Understanding utterances requires that we identify the speaker's discourse goal in making the utterance. Shared beliefs, often called mutual beliefs, form a part of communicated knowledge used to interpret utterances and iden- tify discourse goals in a cooperative dlalogue. The following e~a~le illustrates how IP' s beliefs about IS influence usderstan~Ing. IS: "Who is teaching C~O0?" IP: "Dr. Brown is teaching C.~O0." IS: "At ni~t?" The frasmentar~ utterance "At ni~t?" is a request to know whether CS~O0 is meeting at night. Hc~- ever, if one precedes the above utterances with a quer~ whose rms~onse informs IS that CS~O0 meets only at ni~t, then the last utterance, • At ni~t? = becomes an objection and request for corroboration or e~lanatlon. The reason for this difference in interpretation is the difference in beliefs regarding IS at the time the elliptical fragment is uttered. In the latter case, IP believes it As mutually believed that IS already knows IP' s beliefs regarcling when C/~O0 meets, so a request for that informatlon is not felicitous and a dif- ferent intention or discourse goal is attributed to L~. Allen and Perrault(1980) used mutual beliefs in their work on indirect speech acts and sug- ~sted their use in clarification and correction dlalogues. ~idner(1983) models user beliefs about system capabilities in her work on recognlzlng speaker intention in utterances. 3. Anticipated Discourse Goals The speaker' s anticipated discourse goals form a third compocent of factual knowledge required for processing elliptical frasmenta. The dlalogue precedlng an elliptical utterance may sugEest discourse goals for the speaker; these sugEested discourse gcals become shared knowledge between speaker and hearer. As a result, the listener is on the lookout for the speaker to pur- sue these anticipated discourse goals and inter~ ~rets utterances accordingly. Consider for example the following dialogue: IP: "Have you taken C3105 or C3170?" I~: wit the Unlversity of Delaware?" IP: "No, anywhere." IS: "Yes, at Penn State." In this example, IP's inlt~al query produces a strong anticipation that IS will pursue the discourse 8oal of provldlng the requested i~forma- tlon. There/ore subsequent utterances are inter- preted with the expectation that IS will eventu- ally address this 8oal. IS's first utterance is interpreted as ~u-sulng a discourse Eoal of seek- ing clarification of the question posed by IP; IS' s last utterance ansMers the initial query posed by IP. However discourse expectatlons do not persist forever with intervening utterances. . 
4. Processing Knowledge

Plan-recognition strategies and focusing techniques are necessary components of processing knowledge for interpreting intersentential ellipsis. Plan-recognition strategies are essential in order to infer a model of the speaker's underlying task-related plan, and focusing techniques are necessary in order to identify that portion of the underlying plan to which a fragmentary utterance refers. Focusing mechanisms have been employed by Grosz(1977) in identifying the referents of definite noun phrases, by Robinson(1981) in interpreting verb phrases, by Sidner(1981) in anaphora resolution, by Carberry(1983) in plan inference, and by McKeown(1982) in natural language generation.

FRAMEWORK FOR PROCESSING ELLIPSIS

If an utterance is parsed as a sentence fragment, ellipsis processing begins. A model of any preceding dialogue contains a context tree (Carberry, 1983) corresponding to IS's inferred underlying task-related plan, a space containing IS's anticipated discourse goals, and a belief model representing IS's inferred beliefs.

Our framework is a top-down strategy which uses the information-seeker's anticipated discourse goals to guide interpretation of the fragment and relate it to the underlying task-related plan. The discourse component first analyzes the top element of the discourse stack and suggests potential discourse goals which IS might be expected to pursue. The plan analysis component uses the context tree and the belief model to suggest possible associations of the elliptical fragment with aspects of IS's inferred task-related plan. If multiple associations are suggested, the evaluation component applies focusing strategies to select the interpretation believed intended by the speaker --- namely, that most appropriate to the current focus of attention in the dialogue. The discourse component then uses the results produced by the analysis component to determine if the fragment accomplishes the proposed discourse goal; if so, it interprets the fragment relevant to the identified discourse goal.

PLAN-ANALYSIS COMPONENT

1. Association of Fragments

The plan-analysis component is responsible for associating an elliptical fragment with a term or conjunction of propositions in IS's underlying task-related plan. The analysis component determines, based upon the current focus of attention, the particular aspect of the plan highlighted by IS's fragment, and the discourse goal rules infer how IS intends the fragment to be interpreted. This paper will discuss three classes of elliptical fragments; a description of how other fragments are associated with plan elements is provided in (Carberry, 1985).

A constant fragment can only associate with terms whose semantic type is the same or a superset of the semantic type of the constant. Furthermore, each term has a limited set of valid instantiations within the existing plan. A constant associates with a term only if IP's beliefs indicate that IS might believe that the uttered constant is one of the term's valid instantiations. For example, if a plan contains the proposition

Starting-Date(AI-CONF, JAN/5)

the elliptical fragment "February 2?" will associate with this proposition only if IP believes IS might believe that the starting date for the AI conference is in February. Recourse to such a belief model is necessary in order to allow for Yes-No questions to which the answer is "No" and yet eliminate potential associations which a human listener would recognize as unlikely.
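As an illustration of this association test for constant fragments, here is a minimal sketch; the type lattice, the plan encoding, and the `believable` hook are assumed stand-ins for the semantic-type hierarchy and belief model described above, not the actual system:

    # Sketch of constant-fragment association (assumed representations).

    SUPERTYPE = {"DATE": set()}   # toy lattice: type -> its proper supertypes

    def type_compatible(constant_type, term_type):
        """The term's type must equal, or be a supertype of, the constant's."""
        return term_type == constant_type or term_type in SUPERTYPE[constant_type]

    def associate_constant(constant, ctype, plan, believable):
        """plan: (term, term type, enclosing proposition) triples.
        believable(constant, proposition) answers: do IP's beliefs indicate
        that IS might believe the constant is a valid instantiation here?"""
        return [prop for term, ttype, prop in plan
                if type_compatible(ctype, ttype) and believable(constant, prop)]

    plan = [("JAN/5", "DATE", "Starting-Date(AI-CONF,JAN/5)")]

    # "February 2?" associates with the starting date only if IP believes IS
    # might believe the AI conference could start in February:
    print(associate_constant("FEB/2", "DATE", plan, lambda c, p: True))   # match
    print(associate_constant("FEB/2", "DATE", plan, lambda c, p: False))  # []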
Although this discarding of possible associations does not occur often in interpreting elliptical fragments, actual human dialogues indicate that it is a real phenomenon. (Sidner(1981) employs a similar strategy in her work on anaphora resolution. A co-specifier proposed by the focusing rules must be confirmed by an inference machine; if any contradictions are detected, other co-specifiers are suggested.)

A propositional fragment can be of two types. The first contains a proposition whose name is the same as the name of a proposition in the plan domain. The second type is a more general propositional fragment which cannot be associated with a specific plan-based proposition until after analyzing the relevant propositions appearing in IS's plan. The semantic representations of the utterances

"Taught by Dr. Smith?"
"With Dr. Smith?"

would produce respectively the type 1 and type 2 propositions

Teaches(_ss:&SECTIONS, SMITH)
Genpred(SMITH)

The latter indicates that the name of the specific plan proposition is as yet unknown but that one of its parameters must associate with the constant Smith. A proposition of the first type associates with a proposition of the same name if the parameters of the propositions associate. A proposition of the second type associates with any proposition whose parameters include terms associating with the known parameters of the propositional fragment.

The semantic representation of a term such as "The meeting time?" is a variable term

_tme:&MTG-TMES

Such a term associates with terms of the same semantic type in IS's plan. Note that the existing plan may contain constant instantiations in place of former variables. A term fragment still associates with such constant terms.

2. Results of Plan-Analysis Component

The plan-analysis component constructs a conjunction of propositions PLPREDS and/or a term PLTERM representing that aspect of the information-seeker's plan highlighted by the elliptical fragment; STERM and SPREDS are produced by substituting into PLTERM and PLPREDS the terms in IS's fragment for the terms with which they are associated in IS's plan.
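This substitution step can be pictured with a small sketch; the string-based encoding of plan propositions is an assumed simplification (the actual representations are richer):

    # Sketch of the PLTERM/PLPREDS -> STERM/SPREDS substitution.

    def substitute(plan_item, bindings):
        """Replace each associated plan term by the corresponding term from
        IS's fragment, e.g. _txt:&TEXTS -> _book:&TEXTS."""
        for plan_term, fragment_term in bindings.items():
            plan_item = plan_item.replace(plan_term, fragment_term)
        return plan_item

    bindings = {"_txt:&TEXTS": "_book:&TEXTS"}       # from "The text?"
    plpreds = ["Uses(_ss:&SECTIONS,_txt:&TEXTS)"]
    plterm = "_txt:&TEXTS"

    spreds = [substitute(p, bindings) for p in plpreds]
    sterm = substitute(plterm, bindings)
    print(sterm, spreds)   # _book:&TEXTS ['Uses(_ss:&SECTIONS,_book:&TEXTS)']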
Thus when an elliptical fragment associates with a portion of the task-related plan or an expansion of one of its actions, the context esta- bllshed by the preceding dlalogue must be used to replace information deleted from this streamlined, frae~mentary utterance. The set of ACTIVE nodes in the context model form a stack of plans, the toP- most of whlca is the current focused plan; each of these plans is the expanslon of an action appearing in the plan Immediately beneath it in this stack. These ACTIVE nodes represent the established Elobal context within w~ich the frag- mentary utterance occurs, and the propositions appeaclng along this path contain information missing frca the sentence fragment but ;~'esumed understood by the speaker. If the elliptical fragment ls a proposition, the analysis component produces a conjunction of propositions 3PREI~ representing that aspect ot the plan hi~hii~ted bY IS's el!iptlcal fra~ent. EXAM~E- I If the elliptical fragment is a constant, term, or term with attached propositions, the analysis com- ponent produces a term STERM associated with the constant or term in the fraRment as well as a con- Junction of propositions SPREDS. SPREDS consists of all propositions along the paths from the root of the context tree to the nodes at which an ele- ment of the frasment is associated with a plan element, as well as all propositions appearing along the previous ACTIVE path. The former represent the new context derived from IS's frs4- mentary utterance whereas the latter retain the previously established global context. 3. E~mple This example illustrates how the plan- analysis component determines that aspect of IS's plan hi~llg~ted by an elliptical fragment. It also shows how the established context is main- rained in interpreting ellipsis. IS: "Is C3360 offered in Fall 1985?" IP: "Yes." IS: sod any sections meet on Monday?" IP: "One section of CS360 meets on Monday at ~PM and another section meets on Monday at 7PM. " IS: "The text?" A portlon 0£ I~'s inferred task-related plan prior to the elliptical fragment is shown in glgure I. Nodes along the ACTIVE path are marked by aster- lsk~. 192 The semantic representation of the fragment "The text?" will be the variable term _book: &TEXTS This term associates with the term _txt : &TEXTS appearing at the node for the action Learn- Text ( IS, txt: &TEXTS ) such that Use s(_ss: &SECTIONS,_txt : &TEXTS ) The propositions along the active path are Course-Offered( CS360, FALL85 ) Is- Sectl on- Of (_ss: &SECTIONS, CS360) Is- Offered (_as : &SECT I0N S, FALLS 5 ) Is-Syllabus-Of(_ss: &SECTIONS,_syl: &S~LBI) Teaches (_fac: &FACULTY,_ss: &SECTIONS) I s- Mt g-Day (_ss: &SECT ION S, MDN DAY ) Is- Mt g-Time (_ss: &SECT IONS,_tme: & M%T,- T~S ) Is- Mt g- P1 c (_ss: &SECT IONS,_pl c: &MTG- PLCS ) These propositions maintain the established con- text that we are talking about the sections of C3360 that meet on Monday in the Fall of 1985. The path from the root of the context model to the node at which the elliptical fragment associates with a term in the plan produces the additional pro pc sl tl on Uses (_ss : &SECT IONS,_book: &TEXTS ) The analysis component returns the con~unctlon of these propositions along with STERM, in this case _book: &TEXTS The semantics of this interpretation is that IS is drawing attention to the term STERM such that the con~unctlon of propositions SPREDS is satisfied --- namely, the textbook used in sections of C3360 that meet on Monday in the Fall of 1985. 
EVALUATION COMPONENT The analysis component proposes a set of potential associations of the elliptical fragment with elements of IS' s underlying task-related plan. The evaluation component employs focusing strategies to select what it believes to be the interpretation intended by 13 --- namely, that interpretation most relevant to the current focus of attention in the dialogue. We employ the notion of focus domains in order to group finely grained actions and associ- ated plans into more general related structures. A focus domain consists of a set of actions, one of which is an ancestor of all other actions in the focus domain and is called the root of the focus domain. If as action is a member of a focus domain and that action is not the root action of another focus domain, then all the actions con- talnad in the plan associated with the first action are also members of the focus domain. (This is similar to Grosz's focus spaces and the notion of an object being in implicit focus.) The use of focus domains allows the groupin8 together of those actions that appear to be at approximately the sa~me level of Impllcit focus when a plan is explicitly focused. For example, the actions of learnlr~ from a particular teacher, learning the material in a given text, and attend- Ing class will all reside at the same focus level within the expanded plan for earning credit in a course. The action of going to the cashler's office to pay one's tuition also appears within this expanded plan; however it will reside at a different focus level since it does not come to mind nearly so readily when one thinks about tak- ing a course. The following are two of seven focusing rules used to select the association deemed most relevant to the existing plan context. [F1] Within the current focus space, prefer asso- clatlons which occur within the current focused plan. IF2] Within the current focus space and current focused plan, prefer associations within the actions to achieve the most recently con- sidered action. DISCOURSE GOALS We have analyzed dialogues from several dif- ferent domains and have identified eleven discourse goals which occur during information- seeking dialogues and which may be accomplished via elliptical fragments. Three exemplary discourse goals are [;] Obtaln-In/ormatlon: IS requests Ir.formatlon relevant to constructing the underlying task-related plan or relevant to formulating an answer to a question posed by IP. [2] Obtaln-Corroboration: IS expresses surprise regarding some proposition P and requests elaboration upon and justification of it. [33 Seek-Clarify-questlon: IS requests informa- tion relevant to clarifying a question posed by ZP. ANTICIPATED DISCOURSE GOALS When IS m~es an utterance, he is attempting to accomplls~ a discourse goal ; this discourse goal may in turn predict other suDsequent discourse goals for IS. For e~ple, if I~ asks a question, one anticipates that IS may want to expand upon his question. Similarly, utterances made by IP suggest dlsoourse goals for LS. These Aatlcipated Discourse Goals provide very strong expectations for IS and may often be accomplished implicitly as well as explicitly. The discourse ~als of the previous section also serve as anticipated discourse goals. Three additional anticipated discourse goals appear tO play a major role in determining how elliptical fragments are interpreted. 
One such anticipated discourse goal is:

Accept-Question: IP has posed a question to IS; IS must now accept the question either explicitly, implicitly, or indicate that he does not as yet accept it.

Normally dialogue participants accept such questions implicitly by proceeding to answer the question or to seek information relevant to formulating an answer. However IS may refuse to accept the question posed by IP because he does not understand it (perhaps he is unable to identify some of the entities mentioned in the question) or because he is surprised by it. This leads to discourse goals such as seeking confirmation, seeking the identity of an entity, seeking clarification of the posed question, or expressing surprise at the question.

THE DISCOURSE STACK

The discourse stack contains anticipated discourse goals which IS is expected to pursue. Anticipated discourse goals are pushed onto or popped from the stack as a result of utterances made by IS and IP. We have identified a set of stack processing rules which hold for simple utterances. Three examples of such stack processing rules are:

[SP1] When IP asks a question of IS, Answer-Question and Accept-Question are pushed onto the discourse stack.
[SP2] When IS poses a question to IP, Expand-Question is pushed onto the discourse stack. Once IP begins answering the question, the stack is popped up to and including the Expand-Question discourse goal.
[SP3] When IS's utterance does not pursue a goal suggested by the top entry on the discourse stack, this entry is popped from the stack.

The motivation for these rules is the following. When IP asks a question of IS, IS is first expected to accept the question, either implicitly or explicitly, and then answer the question. Upon posing a question to IP, IS is expected to expand upon this question with subsequent utterances or wait until IP produces an answer to the question. Although the strongest expectations are that IS will pursue a goal suggested by the top element of the discourse stack, this anticipated discourse goal can be passed over, at which point it no longer suggests expectations for utterances.

DISCOURSE INTERPRETATION COMPONENT

The discourse component employs discourse expectation rules and discourse goal rules. The discourse expectation rules use the discourse stack to suggest possible discourse goals for IS and activate the associated discourse goal rules. These discourse goal rules use the plan-analysis component to help determine the best interpretation of the fragmentary utterance relevant to the suggested discourse goal. If a discourse goal rule succeeds in producing an interpretation, then the discourse component identifies that discourse goal and its associated interpretation as its understanding of the utterance.

1. Discourse Expectation Rules

The top element of the discourse stack activates the discourse expectation rule with which it is associated; this rule in turn suggests discourse goals which the information-seeker's utterance may pursue and activates these discourse goal rules. The following is an example of a discourse expectation rule:

[DE1] If the top element of the discourse stack is Answer-Question, then
1. Apply discourse goal rule DG-Answer-Quest to determine if the elliptical fragment is being used to accomplish the discourse goal of answering the question.
2. If no interpretation is produced, apply rule S-Suggest-Answer-Question to determine if the elliptical fragment is being used to accomplish the discourse goal of suggesting an answer to the question.
3. If no interpretation is produced, apply discourse goal rule DG-Obtain-Info to determine if the elliptical fragment is being used to accomplish the discourse goal of seeking information in order to construct an answer to the posed question.

Once IS understands the question posed to him, IP's strongest expectation is that IS will answer the question; therefore first preference is given to interpretations which accomplish this goal. If IS does not immediately answer the question, then we expect a cooperative dialogue participant to work towards answering the question. This entails gathering information about the underlying task-related plan in order to construct a response.

2. Discourse Goal Rules

Discourse goal rules determine if an elliptical fragment accomplishes the associated discourse goal and, if so, produce the appropriate interpretation of the fragment. These discourse goal rules use the plan-analysis component to help determine the best interpretation of the fragmentary utterance relevant to the suggested discourse goal. However these interpretations are not actual representations of surface speech acts; instead they generally indicate elements of the plan whose values the speaker is querying or specifying. In many respects, this provides a better "understanding" of the utterance since it describes what the speaker is trying to accomplish.
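Before turning to a concrete discourse goal rule, the control flow of expectation rules such as [DE1] can be sketched as a dispatch table: the top stack entry selects an ordered list of discourse goal rules, tried in turn until one yields an interpretation. The rule bodies and names below are illustrative stand-ins for the implemented rules, not the rules themselves:

    # Sketch of expectation-driven dispatch (assumed names and rule bodies).

    def dg_answer_quest(fragment, context):          # each goal rule returns
        return None                                  # an interpretation or None

    def s_suggest_answer_question(fragment, context):
        return None

    def dg_obtain_info(fragment, context):
        return ("Obtain-Information", fragment)

    DISCOURSE_EXPECTATION_RULES = {
        "Answer-Question": [dg_answer_quest, s_suggest_answer_question,
                            dg_obtain_info],
    }

    def interpret(fragment, discourse_stack, context):
        for goal_rule in DISCOURSE_EXPECTATION_RULES[discourse_stack[-1]]:
            interpretation = goal_rule(fragment, context)
            if interpretation is not None:
                return interpretation     # discourse goal identified
        discourse_stack.pop()             # goal passed over, as in [SP3]
        return None

    print(interpret("The text?", ["Answer-Question"], {}))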
The ellipsis processor is presented with a semantic representation of Is's elliptical frag- ment; it "understands" intersententlal elliptical utterances by Identlfyin8 the discourse goal which I~ is pursuing in employing the frasment and by producing a plar,-Oased interpretation relevant to this discourse goal. This e,-=mple illustrates a simple request for information. IS: "Is CS360 offered in Fall 19857" IP: "Yes." IS: "Do any sections meet on Monday?" IP: "One section of C3360 meets on Monday at qPM and another section meets on Monday at 7PM. " IS: "The text?" Immediately prior to IS's elliptical utter- . ante, the discourse stack contair~ the entries Acre pt- Response Obtaln-Informatlon The discourse goal rules sugEested by Accept- Response do not identify the fragment as accom- plishing their associated discourse Eoals, so the top entry of the discourse stack is popped; this indicates that IS has implicitly accepted IP' s response. The entry Obtaln-Informatlon on the discourse stack activates the rule DG-Obtaln-In/'o. Pl an- analy sl s is activated to associate the elliptical fragment with an aspect of I$'s task- related plan. The construction of 5TERM and SPREDS for this ezample was described in detail in the plan analysis section and will not be repeated here. Since our belief model indicates that IS does not currently know the value of STERM such that SPREDS is satisfied, this rule identifies the elliptical fragment as seeking information in order to formulate a task-related plan; in partic- ular, I -~ is requestlng the value of STERM such that SPREDS is satisfied --- namely, the textbook used in sections of C3360 that meet on Monday in the Fall of 1985. This example illustrates an utterance in which IS is surprised by IP's response and see~s elabora- tion and corroboration of it. (The construction of $PREDS by the plan analysis component will not be described since it is similar to EXAMPLE-I.) IS: "I want to take CS620 in Fall 1985. Who is teaching it?" IF: "Dr. Smith is teaching CS620 in Fall 1985." IS: "What time does CS620 meet?" IP: "C°~20 meets at SAM. " IS: "With Dr. Smlth?" I~'s elliptical fragment will associate with the term Teaches (_fat - &FACULTY,_ss : &SECTIONS ) in IS's task-related plan. 
SPREDS will contain the propositions

Course-Offered(CS620,FALL85)
Is-Section-Of(_ss:&SECTIONS,CS620)
Is-Offered(_ss:&SECTIONS,FALL85)
Is-Syllabus-Of(_ss:&SECTIONS,_syl:&SYLBI)
Teaches(SMITH,_ss:&SECTIONS)
Is-Mtg-Day(_ss:&SECTIONS,_day:&MTG-DAYS)
Is-Mtg-Time(_ss:&SECTIONS,_tme:&MTG-TMES)
Is-Mtg-Plc(_ss:&SECTIONS,_plc:&MTG-PLCS)

Immediately prior to the occurrence of the elliptical fragment, the discourse stack contains the entries

Accept-Response
Obtain-Information

Accept-Response, the top entry of the discourse stack, suggests the discourse goals of 1) seeking confirmation or 2) seeking corroboration of a component of the preceding response or 3) seeking elaboration and corroboration of some aspect of this response. The discourse goal rules Seek-Confirm and Seek-Identify fail to identify their associated discourse goals as accomplished by the user's fragment. Our belief model indicates that IS already knows that SPREDS is satisfied; therefore the discourse goal rule DG-Obtain-Corrob identifies the elliptical fragment as expressing surprise at and requesting corroboration of IP's response. In particular, IS is surprised that SPREDS is satisfied and this surprise is a result of

[1] the new information presented in IP's preceding response, namely that 8AM is the value of the term _tme:&MTG-TMES in the SPREDS proposition Is-Mtg-Time(_ss:&SECTIONS,_tme:&MTG-TMES)

[2] the aspect of the plan queried by IS's elliptical fragment, namely the SPREDS proposition Teaches(SMITH,_ss:&SECTIONS)

(1) Earn-Credit(IS,_crse:&COURSE,_sem:&SEMESTERS)
      such that Course-Offered(_crse:&COURSE,_sem:&SEMESTERS)
        |
(1) Earn-Credit-Section(IS,_ss:&SECTIONS)
      such that Is-Section-Of(_ss:&SECTIONS,_crse:&COURSE)
                Is-Offered(_ss:&SECTIONS,_sem:&SEMESTERS)
        |
(1) Register-Late(IS,_ss:&SECTIONS,_sem:&SEMESTERS)
        |
(2) Miss-Pre-Reg(IS,_sem:&SEMESTERS)
        |
(2) Pay-Fee(IS,LATE-REG,_sem:&SEMESTERS)
        |
(2) Pay(IS,_lreg:&MONEY)
      such that Costs(LATE-REG,_lreg:&MONEY)

Figure 2: A Portion of the Expanded Context Tree for EXAMPLE-3

EXAMPLE-3

The following is an example which our framework handles but which poses problems for other strategies.

IS: "I want to register for a course. But I missed pre-registration. The cost?"

The first two utterances establish a plan context of late-registering, within which the elliptical fragment requests the fees involved in doing so. (Late registration generally involves extra charges.) Figure 2 presents a portion of IS's underlying task-related plan inferred from the utterances preceding the elliptical fragment. The parenthesized numbers preceding actions indicate the action's focus domain. IS's fragment associates with the term

_lreg:&MONEY

in IS's inferred plan, as well as with terms elsewhere in the plan. However none of the other terms appear in the same focus space as the most recently considered action, and therefore the association of the fragment with _lreg:&MONEY is selected as most relevant to the current dialogue context.
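A minimal sketch of this focus-based selection step is given below. The second candidate term is invented purely for the illustration, and the flat focus-domain encoding is an assumption rather than the paper's representation:

    # Sketch of evaluation by focus domain (assumed, simplified encoding).

    def select_association(candidates, current_focus_domain):
        """candidates: (term, focus domain of its plan node) pairs; keep
        those in the focus domain of the most recently considered action,
        falling back to all candidates if none qualifies."""
        in_focus = [term for term, domain in candidates
                    if domain == current_focus_domain]
        return in_focus or [term for term, _ in candidates]

    # EXAMPLE-3: "The cost?" associates with several &MONEY terms, but only
    # the late-registration fee sits in the focused part of the plan
    # (the second candidate below is hypothetical).
    candidates = [("_lreg:&MONEY", 2), ("_other:&MONEY", 1)]
    print(select_association(candidates, current_focus_domain=2))
    # -> ['_lreg:&MONEY']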
EXAMPLE-3

The following is an example which our framework handles but which poses problems for other strategies.

IS: "I want to register for a course. But I missed pre-registration. The cost?"

The first two utterances establish a plan context of late-registering, within which the elliptical fragment requests the fees involved in doing so. (Late registration generally involves extra charges.)

Figure 2 presents a portion of IS's underlying task-related plan inferred from the utterances preceding the elliptical fragment. The parenthesized numbers preceding actions indicate the action's focus domain.

    (1) Earn-Credit(IS, _crse:&COURSE, _sem:&SEMESTERS)
        such that Course-Offered(_crse:&COURSE, _sem:&SEMESTERS)
      (1) Earn-Credit-Section(IS, _ss:&SECTIONS)
          such that Is-Section-Of(_ss:&SECTIONS, _crse:&COURSE)
                    Is-Offered(_ss:&SECTIONS, _sem:&SEMESTERS)
        (1) Register-Late(IS, _ss:&SECTIONS, _sem:&SEMESTERS)
          (2) Miss-Pre-Reg(IS, _sem:&SEMESTERS)
          (2) Pay-Fee(IS, LATE-REG, _sem:&SEMESTERS)
            (2) Pay(IS, _lreg:&MONEY)
                such that Costs(LATE-REG, _lreg:&MONEY)

    Figure 2. A Portion of the Expanded Context Tree for EXAMPLE-3

IS's fragment associates with the term _lreg:&MONEY in IS's inferred plan, as well as with terms elsewhere in the plan. However, none of the other terms appear in the same focus space as the most recently considered action, and therefore the association of the fragment with _lreg:&MONEY is selected as most relevant to the current dialogue context (this selection is sketched in code after the example). The discourse stack immediately prior to the elliptical fragment contains the single entry

    Provide-For-Assimilation

This anticipated discourse goal suggests the discourse goals of 1) providing further information for assimilation and 2) seeking information in order to formulate the task-related plan. The utterance terminates in a "?", ruling out provide-for-assimilation. Therefore rule DG-Obtain-Info identifies the elliptical fragment as seeking information. In particular, the user is requesting the fee for late registration, namely, the value of the term _cstl:&MONEY such that $PREDS is satisfied, where $PREDS is the conjunction of the propositions

    Course-Offered(_crse:&COURSE, _sem:&SEMESTERS)
    Is-Section-Of(_ss:&SECTIONS, _crse:&COURSE)
    Is-Offered(_ss:&SECTIONS, _sem:&SEMESTERS)
    Costs(LATE-REG, _cstl:&MONEY)
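A minimal sketch of the focus-based selection just described, assuming candidates are available as (term, focus domain) pairs; the competing &MONEY term shown in the usage line is invented purely for illustration.

    def select_association(candidates, current_focus):
        # Prefer a term whose action lies in the focus space of the most
        # recently considered action; otherwise defer to other criteria.
        in_focus = [term for term, domain in candidates if domain == current_focus]
        return in_focus[0] if in_focus else None

    # "The cost?" in EXAMPLE-3: _lreg:&MONEY is in the current focus
    # domain (2); a hypothetical earlier &MONEY term is not.
    select_association([("_fee:&MONEY", 1), ("_lreg:&MONEY", 2)], 2)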
EXTENSIONS AND FUTURE WORK

The main limitation of this pragmatics-based framework appears to be in handling intersentential elliptical utterances such as the following:

IS: "Who is the teacher of CS200?"
IP: "Dr. Herd is the teacher of CS200."
IS: "CS263?"

Obviously IS's elliptical fragment requests the teacher of CS263. Our model cannot currently handle such fragments. This limitation is partially due to the fact that our mechanisms for retaining dialogue context are based upon the view that IS constructs a plan for a task in a depth-first fashion, completing investigation of a plan for CS200 before moving on to investigate a plan for CS263. Since the teacher of CS200 has nothing to do with the plan for taking CS263, the mechanisms for retaining dialogue context will fail to identify "teacher of CS263" as the information requested by IS.

One might argue that the elliptical fragment in the above dialogue relies heavily upon the syntactic representation of the preceding utterance and thus a syntactic strategy is required for interpretation. This may be true. However, if we view dialogues such as the above as investigating task-related plans in a kind of "breadth-first" fashion, then IS is analyzing the teachers of each course under consideration first, and will then move to considering other attributes of the courses. It appears that the plan-based framework can be extended to handle many such dialogues, perhaps by using meta-plans to represent how IS is constructing his task-related plan.

CONCLUSIONS

This paper has described a pragmatics-based approach to interpreting intersentential elliptical utterances during an information-seeking dialogue in a task domain. Our framework coordinates many knowledge sources, including the information-seeker's inferred task-related plan, his inferred beliefs, his anticipated discourse goals, and focusing strategies, to produce a rich interpretation of ellipsis, including identification of the information-seeker's discourse goal. This framework can handle many examples which pose problems for other strategies. We claim that the advantage of this approach is its reliance upon pragmatic information, including discourse content and conversational goals, rather than upon precise representations of the preceding utterance alone.

ACKNOWLEDGEMENTS

I would like to thank Ralph Weischedel for his encouragement and direction in this research and Lance Ramshaw for many helpful discussions and suggestions.