Analysis of Syntax-Based Pronoun Resolution Methods Joel R. Tetreault University of Rochester Department of Computer Science Rochester, NY, 14627 tetreaul@cs, rochester, edu Abstract This paper presents a pronoun resolution algo- rithm that adheres to the constraints and rules of Centering Theory (Grosz et al., 1995) and is an alternative to Brennan et al.'s 1987 algo- rithm. The advantages of this new model, the Left-Right Centering Algorithm (LRC), lie in its incremental processing of utterances and in its low computational overhead. The algorithm is compared with three other pronoun resolu- tion methods: Hobbs' syntax-based algorithm, Strube's S-list approach, and the BFP Center- ing algorithm. All four methods were imple- mented in a system and tested on an annotated subset of the Treebank corpus consisting of 2026 pronouns. The noteworthy results were that Hobbs and LRC performed the best. 1 Introduction The aim of this project is to develop a pro- noun resolution algorithm which performs bet- ter than the Brennan et al. 1987 algorithm 1 as a cognitive model while also performing well empirically. A revised algorithm (Left-Right Centering) was motivated by the fact that the BFP al- gorithm did not allow for incremental process- ing of an utterance and hence of its pronouns, and also by the fact that it occasionally im- poses a high computational load, detracting from its psycholinguistic plausibility. A sec- ond motivation for the project is to remedy the dearth of empirical results on pronoun res- olution methods. Many small comparisons of methods have been made, such as by Strube (1998) and Walker (1989), but those usually consist of statistics based on a small hand- tested corpus. The problem with evaluating 1Henceforth BFP algorithms by hand is that it is time consum- ing and difficult to process corpora that are large enough to provide reliable, broadly based statistics. By creating a system that can run algorithms, one can easily and quickly analyze large amounts of data and generate more reli- able results. In this project, the new algorithm is tested against three leading syntax-based pro- noun resolution methods: Hobbs' naive algo- rithm (1977), S-list (Strube 1998), and BFP. Section 2 presents the motivation and algo- rithm for Left-Right Centering. In Section 3, the results of the algorithms are presented and then discussed in Section 4. 2 Left-Right Centering Algorithm Left-Right Centering (LRC) is a formalized algorithm built upon centering theory's con- straints and rules as detailed in Grosz et. al (1995). The creation of the LRC Algorithm is motivated by two drawbacks found in the BFP method. The first is BFP's limitation as a cognitive model since it makes no provision for incremental resolution of pronouns (Kehler 1997). Psycholinguistic research support the claim that listeners process utterances one word at a time, so when they hear a pronoun they will try to resolve it immediately. If new infor- mation comes into play which makes the reso- lution incorrect (such as a violation of binding constraints), the listener will go back and find a correct antecedent. This incremental resolution problem also motivates Strube's S-list approach. The second drawback to the BFP algorithm is the computational explosion of generating and filtering anchors. In utterances with two or more pronouns and a Cf-list with several can- didate antecedents for each pronoun, thousands of anchors can easily be generated making for a time consuming filtering phase. 
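To make the scale of this generate-and-filter step concrete, the following back-of-the-envelope count (a Python sketch added for illustration; it is not part of the original paper) reproduces the arithmetic behind the anchor explosion:

```python
# Rough count of BFP anchors (illustration only).  With `pronouns` pronouns in
# the current utterance, `candidates` possible antecedents for each, and
# `possible_cbs` candidate backward-looking centers, BFP must build and filter
# every <Cb, Cf-list> anchor.
def bfp_anchor_count(pronouns, candidates, possible_cbs):
    cf_lists = candidates ** pronouns      # one antecedent choice per pronoun
    return possible_cbs * cf_lists         # each Cf-list crossed with each Cb

print(bfp_anchor_count(4, 8, 9))           # 36864, the case discussed next
```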
An exam- 602 ple from the evaluation corpus illustrates this problem (the italics in Un-1 represent possible antecedents for the pronouns (in italics) of Un): Un-l: Separately, the Federal Energy Regu- latory Commission turned down for now a re- quest by Northeast seeking approval of its possi- ble purchase of PS of New Hampshire. Un: Northeast said it would refile its request and still hopes for an expedited review by the FERC so that it could complete the purchase by next summer if its bid is the one approved by the bankruptcy court. With four pronouns in Un, and eight possible antecedents for each in Un-1, 4096 unique Cf- lists are generated. In the cross-product phase, 9 possible Cb's are crossed with the 4096 Cf's, generating 36864 anchors. Given these drawbacks, we propose a revised resolution algorithm that adheres to centering constraints. It works by first searching for an antecedent in the current utterance 2, if one is not found, then the previous Cf-lists (starting with the previous utterance) are searched left- to-right for an antecedent: 1. Preprocessing - from previous utterance: Cb(Un-1) and Cf(Un-1) are available. 2. Process Utterance - parse and extract incrementally from Un all references to dis- course entities. For each pronoun do: (a) Search for an antecedent intrasenten- tially in Cf-partial(Un) 3 that meets feature and binding constraints. If one is found proceed to the next pro- noun within utterance. Else go to (b). (b) Search for an antecedent intersenten- tially in Cf(Un-1) that meets feature and binding constraints. 3. Create Cf- create Cf-list of Un by rank- ing discourse entities of Un according to grammatical function. Our implementa- tion used a left-right breadth-first walk of the parse tree to approximate sorting by grammatical function. 2In this project, a sentence is considered an utterance 3Cf-partial is a list of all processed discourse entities in Un 4. Identify Cb - the backward-looking cen- ter is the most highly ranked entity from Cf(Un-1) realized in Cf(Un). 5. Identify Transition - with the Cb and Cf resolved, use the criteria from (Brennan et al., 1987) to assign the transition. It should be noted that BFP makes use of Centering Rule 2 (Grosz et al., 1995), LRC does not use the transition generated or Rule 2 in steps 4 and 5 since Rule 2's role in pronoun resolution is not yet known (see Kehler 1997 for a critique of its use by BFP). Computational overhead is avoided since no anchors or auxiliary data structures need to be produced and filtered. 3 Evaluation of Algorithms All four algorithms were run on a 3900 utterance subset of the Penn Treebank annotated corpus (Marcus et al., 1993) provided by Charniak and Ge (1998). The corpus consists of 195 different newspaper articles. Sentences are fully brack- eted and have labels that indicate word-class and features. Because the S-list and BFP algo- rithms do not allow resolution of quoted text, all quoted expressions were removed from the corpus, leaving 1696 pronouns (out of 2026) to be resolved. For analysis, the algorithms were broken up into two classes. The "N" group consists of al- gorithms that search intersententially through all Cf-lists for an antecedent. The "1" group consists of algorithms that can only search for an antecedent in Cf(Un-1). The results for the "N" algorithms and "1" algorithms are depicted in Figures 1 and 2 respectively. For comparison, a baseline algorithm was cre- ated which simply took the most recent NP (by surface order) that met binding and feature con- straints. 
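A minimal sketch of that baseline (ours, not code from the paper; the agreement and binding checks are abstracted into placeholder predicates):

```python
# Naive baseline: scan the noun phrases preceding the pronoun, most recent
# first, and return the first one that passes the feature (gender/number)
# and binding-constraint checks.  `agrees` and `binding_ok` are hypothetical
# helpers standing in for those checks.
def naive_baseline(pronoun, preceding_nps, agrees, binding_ok):
    for np in reversed(preceding_nps):     # most recent NP first
        if agrees(pronoun, np) and binding_ok(pronoun, np):
            return np
    return None                            # leave the pronoun unresolved
```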
This naive approach resolved 28.6 per- cent of pronouns correctly. Clearly, all four per- form better than the naive approach. The fol- lowing section discusses the performance of each algorithm. 4 Discussion The surprising result from this evaluation is that the Hobbs algorithm, which uses the least amount of information, actually performs the best. The difference of six more pronouns right 603 Algorithm Right % Right % Right Intra % Right Inter Hobbs 1234 72.8 68.4 85.0 LRC-N 1228 72.4 67.8 85.2 Strube-N 1166 68.8 62.9 85.2 Figure 1: "N" algorithms: search all previous Cf lists Algorithm LRC-1 Strube-1 BFP Right % Right % Right Intra % Right Inter 1208 71.2 68.4 80.7 1120 66.0 60.3 71.1 962 56.7 40.7 78.8 Figure 2: "1" algorithms: search Cf(Un-1) only between LRC-N and Hobbs is statistically in- significant so one may conclude that the new centering algorithm is also a viable method. Why do these algorithms perform better than the others? First, both search for referents in- trasententially and then intersentially. In this corpus, over 71% of all pronouns have intrasen- tential referents, so clearly an algorithm that favors the current utterance will perform bet- ter. Second, both search their respective data structures in a salience-first manner. Inter- sententially, both examine previous utterances in the same manner. LRC-N sorts the Cf- list by grammatical function using a breadth- first search and by moving prepended phrases to a less salient position. While Hobbs' algo- rithm does not do the movement it still searches its parse tree in a breadth-first manner thus emulating the Cf-list search. Intrasententially, Hobbs gets slightly more correct since it first favors antecedents close to the pronoun before searching the rest of the tree. LRC favors en- tities near the head of the sentence under the assumption they are more salient. The similar- ities in intra- and intersentential evaluation are reflected in the similarities in their percent right for the respective categories. Because the S-list approach incorporates both semantics and syntax in its familiarity rank- ing scheme, a shallow version which only uses syntax is implemented in this study. Even though several entities were incorrectly labeled, the shallow S-list approach still performed quite well, only 4 percent lower than Hobbs and LRC- i. The standing of the BFP algorithm should not be too surprising given past studies. For example, Strube (1997) had the S-list algorithm performing at 91 percent correct on three New York Times articles while the best version of BFP performed at 81 percent. This ten per- cent difference is reflected in the present eval- uation as well. The main drawback for BFP was its preference for intersentential resolution. Also, BFP as formally defined does not have an intrasentential processing mechanism. For the purposes of the project, the LRC intrasen- tential technique was used to resolve pronouns that were unable to be resolved by the BFP (in- tersentential) algorithm. In additional experiments, Hobbs and LRC- N were tested with quoted expressions included. LRC used an approach similar to the one proposed by Kamayema (1998) for analyzing quoted expressions. Given this new approach, 70.4% of the 2026 pronouns were resolved cor- rectly by LRC while Hobbs performed at 69.8%, a difference of only 13 pronouns right. 5 Conclusions This paper first presented a revised pronoun resolution algorithm that adheres to the con- straints of centering theory. 
It is inspired by the need to remedy a lack of incremental pro- cessing and computational issues with the BFP algorithm. Second, the performance of LRC was compared against three other leading pro- noun resolution algorithms based solely on syn- tax. The comparison of these algorithms is 604 significant in its own right because they have not been previously compared, in computer- encoded form, on a common corpus. Coding all the algorithms allows one to quickly test them all on a large corpus and eliminates human er- ror, both shortcomings of hand evaluation. Most noteworthy is the performance of Hobbs and LRC. The Hobbs approach reveals that a walk of the parse tree performs just as well as salience based approaches. LRC performs just as well as Hobbs, but the important point is that it can be considered as a replacement for the BFP algorithm not only in terms of perfor- mance but in terms of modeling. In terms of implementation, Hobbs is dependent on a pre- cise parse tree for its analysis. If no parse tree is available, Strube's S-list algorithm and LRC prove more useful since grammatical function can be approximated by using surface order. 6 Future Work The next step is to test all four algorithms on a novel or short stories. Statistics from the Walker and Strube studies suggest that BFP will perform better in these cases. Other future work includes constructing a hybrid algorithm of LRC and S-list in which entities are ranked both by the familiarity scale and by grammati- cal function. Research into how transitions and the Cb can be used in a pronoun resolution al- gorithm should also be examined. Strube and Hahn (1996) developed a heuristic of ranking transition pairs by cost to evaluate different Cf- ranking schemes. Perhaps this heuristic could be used to constrain the search for antecedents. It is quite possible that hybrid algorithms (i.e. using Hobbs for intrasentential resolution, LRC for intersentential) may not produce any sig- nificant improvement over the current systems. If so, this might indicate that purely syntactic methods cannot be pushed much farther, and the upper limit reached can serve as a base line for approaches that combine syntax and seman- tics. 7 Acknowledgments I am grateful to Barbara Grosz for aiding me in the development of the LRC algorithm and discussing centering issues. I am also grate- ful to Donna Byron who was responsible for much brainstorming, cross-checking of results, and coding of the Hobbs algorithm. Special thanks goes to Michael Strube, James Allen, and Lenhart Schubert for their advice and brainstorming. We would also like to thank Charniak and Ge for the annotated, parsed Treebank corpus which proved invaluable. Partial support for the research reported in this paper was provided by the National Sci- ence Foundation under Grants No. IRI-90- 09018, IRI-94-04756 and CDA-94-01024 to Har- yard University and also by the DARPA re- search grant no. F30602-98-2-0133 to the Uni- versity of Rochester. References Susan E. Brennan, Marilyn W. Friedman, and Carl J. Pollard. 1987. A centering approach to pronouns. In Proceedings, 25th Annual Meeting of the ACL, pages 155-162. Niyu Ge, John Hale, and Eugene Charniak. 1998. A statistical approach to anaphora res- olution. Proceedings of the Sixth Workshop on Very Large Corpora. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21 (2):203-226. Jerry R. Hobbs. 1977. 
Resolving pronoun ref- erences. Lingua, 44:311-338. Megumi Kameyama. 1986. Intrasentential cen- tering: A case study. In Centering Theory in Discourse. Andrew Kehler. 1997. Current theories of cen- tering for pronoun interpretation: A crit- ical evaluation. Computational Linguistics, 23(3):467-475. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Lingusitics, 19(2):313-330. Michael Strube and Udo Hahn. 1996. Func- tional centering. In Association for Compu- tational Lingusitics, pages 270-277. Michael Strube. 1998. Never look back: An alternative to centering. In Association for Computational Lingusitics, pages 1251-1257. Marilyn A. Walker. 1989. Evaluating discourse processing algorithms. In Proceedings, 27th Annual Meeting of the Association for Com- puational Linguisites, pages 251-261. 605 | 1999 | 79 |
Finding Parts in Very Large Corpora Matthew Berland, Eugene Charniak rob, ec @ cs. brown, edu Department of Computer Science Brown University, Box 1910 Providence, RI 02912 Abstract We present a method for extracting parts of objects from wholes (e.g. "speedometer" from "car"). Given a very large corpus our method finds part words with 55% accuracy for the top 50 words as ranked by the system. The part list could be scanned by an end-user and added to an existing ontology (such as WordNet), or used as a part of a rough semantic lexicon. 1 Introduction We present a method of extracting parts of objects from wholes (e.g. "speedometer" from "car"). To be more precise, given a single word denoting some entity that has recognizable parts, the system finds and rank-orders other words that may denote parts of the entity in question. Thus the relation found is strictly speaking between words, a relation Miller [1] calls "meronymy." In this paper we use the more colloquial "part-of" terminology. We produce words with 55°£ accuracy for the top 50 words ranked by the system, given a very large corpus. Lacking an objective definition of the part-of relation, we use the majority judgment of five human subjects to decide which proposed parts are correct. The program's output could be scanned by an end- user and added to an existing ontology (e.g., Word- Net), or used as a part of a rough semantic lexicon. To the best of our knowledge, there is no published work on automatically finding parts from unlabeled corpora. Casting our nets wider, the work most sim- ilar to what we present here is that by Hearst [2] on acquisition of hyponyms ("isa" relations). In that pa- per Hearst (a) finds lexical correlates to the hyponym relations by looking in text for cases where known hy- ponyms appear in proximity (e.g., in the construction (NP, NP and (NP other NN)) as in "boats, cars, and other vehicles"), (b) tests the proposed patterns for validity, and (c) uses them to extract relations from a corpus. In this paper we apply much the same methodology to the part-of relation. Indeed, in [2] Hearst states that she tried to apply this strategy to the part-of relation, but failed. We comment later on the differences in our approach that we believe were most important to our comparative success. Looking more widely still, there is an ever- growing literature on the use of statistical/corpus- based techniques in the automatic acquisition of lexical-semantic knowledge ([3-8]). We take it as ax- iomatic that such knowledge is tremendously useful in a wide variety of tasks, from lower-level tasks like noun-phrase reference, and parsing to user-level tasks such as web searches, question answering, and digest- ing. Certainly the large number of projects that use WordNet [1] would support this contention. And al- though WordNet is hand-built, there is general agree- ment that corpus-based methods have an advantage in the relative completeness of their coverage, partic- ularly when used as supplements to the more labor- intensive methods. 2 Finding Parts 2.1 Parts Webster's Dictionary defines "part" as "one of the often indefinite or unequal subdivisions into which something is or is regarded as divided and which to- gether constitute the whole." The vagueness of this definition translates into a lack of guidance on exactly what constitutes a part, which in turn translates into some doubts about evaluating the results of any pro- cedure that claims to find them. 
More specifically, note that the definition does not claim that parts must be physical objects. Thus, say, "novel" might have "plot" as a part. In this study we handle this problem by asking in- formants which words in a list are parts of some target word, and then declaring majority opinion to be cor- rect. We give more details on this aspect of the study later. Here we simply note that while our subjects often disagreed, there was fair consensus that what might count as a part depends on the nature of the 57 word: a physical object yields physical parts, an in- stitution yields its members, and a concept yields its characteristics and processes. In other words, "floor" is part of "building" and "plot" is part of "book." 2.2 Patterns Our first goal is to find lexical patterns that tend to indicate part-whole relations. Following Hearst [2], we find possible patterns by taking two words that are in a part-whole relation (e.g, basement and build- ing) and finding sentences in our corpus (we used the North American News Corpus (NANC) from LDC) that have these words within close proximity. The first few such sentences are: ... the basement of the building. ... the basement in question is in a four-story apartment building ... ... the basement of the apartment building. From the building's basement ... ... the basement of a building ... ... the basements of buildings ... From these examples we construct the five pat- terns shown in Table 1. We assume here that parts and wholes are represented by individual lexical items (more specifically, as head nouns of noun-phrases) as opposed to complete noun phrases, or as a sequence of "important" noun modifiers together with the head. This occasionally causes problems, e.g., "conditioner" was marked by our informants as not part of "car", whereas "air conditioner" probably would have made it into a part list. Nevertheless, in most cases head nouns have worked quite well on their own. We evaluated these patterns by observing how they performed in an experiment on a single example. Table 2 shows the 20 highest ranked part words (with the seed word "car") for each of the patterns A-E. (We discuss later how the rankings were obtained.) Table 2 shows patterns A and B clearly outper- form patterns C, D, and E. Although parts occur in all five patterns~ the lists for A and B are predom- inately parts-oriented. The relatively poor perfor- mance of patterns C and E was ant!cipated, as many things occur "in" cars (or buildings, etc.) other than their parts. Pattern D is not so obviously bad as it differs from the plural case of pattern B only in the lack of the determiner "the" or "a". However, this difference proves critical in that pattern D tends to pick up "counting" nouns such as "truckload." On the basis of this experiment we decided to proceed using only patterns A and B from Table 1. A. whole NN[-PL] 's POS part NN[-PL] ... building's basement ... B. part NN[-PL] of PREP {theIa } DET roods [JJINN]* whole NN ... basement of a building... C. part NN in PREP {thela } DET roods [JJINN]* whole NN ... basement in a building ... D. parts NN-PL of PREP wholes NN-PL ... basements of buildings ... E. parts NN-PL in PREP wholes NN-PL ... basements in buildings ... Format: type_of_word TAG type_of_word TAG ... NN = Noun, NN-PL = Plural Noun DET = Determiner, PREP = Preposition POS = Possessive, JJ = Adjective Table h Patterns for partOf(basement,building) 3 Algorithm 3.1 Input We use the LDC North American News Corpus (NANC). 
which is a compilation of the wire output of several US newspapers. The total corpus is about 100,000,000 words. We ran our program on the whole data set, which takes roughly four hours on our net- work. The bulk of that time (around 90%) is spent tagging the corpus. As is typical in this sort of work, we assume that our evidence (occurrences of patterns A and B) is independently and identically distributed (lid). We have found this assumption reasonable, but its break- down has led to a few errors. In particular, a draw- back of the NANC is the occurrence of repeated ar- ticles; since the corpus consists of all of the articles that come over the wire, some days include multiple, updated versions of the same story, containing iden- tical paragraphs or sentences. We wrote programs to weed out such cases, but ultimately found them of little use. First, "update" articles still have sub- stantial variation, so there is a continuum between these and articles that are simply on the same topic. Second, our data is so sparse that any such repeats are very unlikely to manifest themselves as repeated examples of part-type patterns. Nevertheless since two or three occurrences of a word can make it rank highly, our results have a few anomalies that stem from failure of the iid assumption (e.g., quite appro- priately, "clunker"). 58 Pattern A headlight windshield ignition shifter dashboard ra- diator brake tailpipe pipe airbag speedometer con- verter hood trunk visor vent wheel occupant en- gine tyre Pattern B trunk wheel driver hood occupant seat bumper backseat dashboard jalopy fender rear roof wind- shield back clunker window shipment reenactment axle Pattern C passenger gunmen leaflet hop houseplant airbag gun koran cocaine getaway motorist phone men indecency person ride woman detonator kid key Pattern D import caravan make dozen carcass shipment hun- dred thousand sale export model truckload queue million boatload inventory hood registration trunk ten Pattern E airbag packet switch gem amateur device handgun passenger fire smuggler phone tag driver weapon meal compartment croatian defect refugee delay Table 2: Grammatical Pattern Comparison Our seeds are one word (such as "car") and its plural. We do not claim that all single words would fare as well as our seeds, as we picked highly probable words for our corpus (such as "building" and "hos- pital") that we thought would have parts that might also be mentioned therein. With enough text, one could probably get reasonable results with any noun that met these criteria. 3.2 Statistical Methods The program has three phases. The first identifies and records all occurrences of patterns A and B in our corpus. The second filters out all words ending with "ing', "ness', or "ity', since these suffixes typically occur in words that denote a quality rather than a physical object. Finally we order the possible parts by the likelihood that they are true parts according to some appropriate metric. We took some care in the selection of this met- ric. At an intuitive level the metric should be some- thing like p(w [ p). (Here and in what follows w denotes the outcome of the random variable gener- ating wholes, and p the outcome for parts. W(w) states that w appears in the patterns AB as a whole, while P(p) states that p appears as a part.) Met- rics of the form p(w I P) have the desirable property that they are invariant over p with radically different base frequencies, and for this reason have been widely used in corpus-based lexical semantic research [3,6,9]. 
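As a rough illustration of the three phases just described, here is a sketch (ours, not the authors' code) that ranks candidate parts for one seed word using the naive p(w | p) estimate; the refinements discussed next replace this ranking:

```python
from collections import Counter

QUALITY_SUFFIXES = ("ing", "ness", "ity")               # phase-two filter

def rank_parts(whole, pattern_hits):
    """pattern_hits: (part, whole) head-noun pairs found by patterns A and B."""
    pair_counts = Counter(pattern_hits)                  # |p, w|
    part_counts = Counter(p for p, _ in pattern_hits)    # |p|
    scores = {}
    for (p, w), c in pair_counts.items():
        if w != whole or p.endswith(QUALITY_SUFFIXES):
            continue
        scores[p] = c / part_counts[p]                   # crude estimate of p(w | p)
    return sorted(scores, key=scores.get, reverse=True)
```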
However, in making this intuitive idea someone more precise we found two closely related versions: p(w, W(w) I P) p(w, w(~,) I p, e(p)) We call metrics based on the first of these "loosely conditioned" and those based on the second "strongly conditioned". While invariance with respect to frequency is gen- erally a good property, such invariant metrics can lead to bad results when used with sparse data. In particular, if a part word p has occurred only once in the data in the AB patterns, then perforce p(w [ P) = 1 for the entity w with which it is paired. Thus this metric must be tempered to take into account the quantity of data that supports its conclusion. To put this another way, we want to pick (w,p) pairs that have two properties, p(w I P) is high and [ w, pl is large. We need a metric that combines these two desiderata in a natural way. We tried two such metrics. The first is Dun- ning's [10] log-likelihood metric which measures how "surprised" one would be to observe the data counts I w,p[,[ -,w, pl, [ w,-,pland I-'w,-'Plifone assumes that p(w I P) = p(w). Intuitively this will be high when the observed p(w I P) >> p(w) and when the counts supporting this calculation are large. The second metric is proposed by Johnson (per- sonal communication). He suggests asking the ques- tion: how far apart can we be sure the distributions p(w [ p)and p(w) are if we require a particular signif- icance level, say .05 or .01. We call this new test the "significant-difference" test, or sigdiff. Johnson ob- serves that compared to sigdiff, log-likelihood tends to overestimate the importance of data frequency at the expense of the distance between p(w I P) and 3.3 Comparison Table 3 shows the 20 highest ranked words for each statistical method, using the seed word "car." The first group contains the words found for the method we perceive as the most accurate, sigdiff and strong conditioning. The other groups show the differences between them and the first group. The + category means that this method adds the word to its list, - means the opposite. For example, "back" is on the sigdiff-loose list but not the sigdiff-strong list. In general, sigdiff worked better than surprise and strong conditioning worked better than loose condi- tioning. In both cases the less favored methods tend to promote words that are less specific ("back" over "airbag", "use" over "radiator"). Furthermore, the 59 Sigdiff, Strong airbag brake bumper dashboard driver fender headlight hood ignition occupant pipe radi- ator seat shifter speedometer tailpipe trunk vent wheel windshield Sigdiff, Loose + back backseat oversteer rear roof vehicle visor - airbag brake bumper pipe speedometer tailpipe vent Surprise, Strong + back cost engine owner price rear roof use value window - airbag bumper fender ignition pipe radiator shifter speedometer tailpipe vent Surprise, Loose + back cost engine front owner price rear roof side value version window - airbag brake bumper dashboard fender ig- nition pipe radiator shifter speedometer tailpipe vent Table 3: Methods Comparison combination of sigdiff and strong conditioning worked better than either by itself. Thus all results in this paper, unless explicitly noted otherwise, were gath- ered using sigdiff and strong conditioning combined. 4 Results 4.1 Testing Humans We tested five subjects (all of whom were unaware of our goals) for their concept of a "part." We asked them to rate sets of 100 words, of which 50 were in our final results set. 
Tables 6 - 11 show the top 50 words for each of our six seed words along with the number book 10 8 20 14 30 20 40 24 50 28 10 20 30 40 5O hospital 7 16 21 23 26 building car 7 12 18 21 29 plant 5 10 15 20 22 8 17 23 26 31 school 10 14 20 26 31 Table 4: Result Scores of subjects who marked the wordas a part of the seed concept. The score of individual words vary greatly but there was relative consensus on most words. We put an asterisk next to words that the majority sub- jects marked as correct. Lacking a formal definition of part, we can only define those words as correct and the rest as wrong. While the scoring is admit- tedly not perfect 1, it provides an adequate reference result. Table 4 summarizes these results. There we show the number of correct part words in the top 10, 20, 30, 40, and 50 parts for each seed (e.g., for "book", 8 of the top 10 are parts, and 14 of the top 20). Over- all, about 55% of the top 50 words for each seed are parts, and about 70% of the top 20 for each seed. The reader should also note that we tried one ambigu- ous word, "plant" to see what would happen. Our program finds parts corresponding to both senses, though given the nature of our text, the industrial use is more common. Our subjects marked both kinds of parts as correct, but even so, this produced the weak- est part list of the six words we tried. As a baseline we also tried using as our "pattern" the head nouns that immediately surround our target word. We then applied the same "strong condition- ing, sigdiff" statistical test to rank the candidates. This performed quite poorly. Of the top 50 candi- dates for each target, only 8% were parts, as opposed to the 55% for our program. 4.2 WordNet WordNet + door engine floorboard gear grille horn mirror roof tailfin window - brake bumper dashboard driver headlight ig- nition occupant pipe radiator seat shifter speedometer tailpipe vent wheel windshield Table 5: WordNet Comparison We also compared out parts list to those of Word- Net. Table 5 shows the parts of "car" in WordNet that are not in our top 20 (+) and the words in our top 20 that are not in WordNet (-). There are defi- nite tradeoffs, although we would argue that our top- 20 set is both more specific and more comprehensive. Two notable words our top 20 lack are "engine" and "door", both of which occur before 100. More gener- ally, all WordNet parts occur somewhere before 500, with the exception of "tailfin', which never occurs with car. It would seem that our program would be l For instance, "shifter" is undeniably part of a car, while "production" is only arguably part of a plant. 60 a good tool for expanding Wordnet, as a person can scan and mark the list of part words in a few minutes. 5 Discussion and Conclusions The program presented here can find parts of objects given a word denoting the whole object and a large corpus of unmarked text. The program is about 55% accurate for the top 50 proposed parts for each of six examples upon which we tested it. There does not seem to be a single cause for the 45% of the cases that are mistakes. We present here a few problems that have caught our attention. Idiomatic phrases like "a jalopy of a car" or "the son of a gun" provide problems that are not easily weeded out. Depending on the data, these phrases can be as prevalent as the legitimate parts. In some cases problems arose because of tagger mistakes. 
For example, "re-enactment" would be found as part of a "car" using pattern B in the phrase "the re-enactment of the car crash" if "crash" is tagged as a verb. The program had some tendency to find qualities of objects. For example, "driveability" is strongly correlated with car. We try to weed out most of the qualities by removing words with the suffixes "hess", "ing', and "ity." The most persistent problem is sparse data, which is the source of most of the noise. More data would almost certainly allow us to produce better lists, both because the statistics we are currently collecting would be more accurate, but also because larger num- bers would allow us to find other reliable indicators. For example, idiomatic phrases might be recognized as such. So we see "jalopy of a car" (two times) but not, of course, "the car's jalopy". Words that appear in only one of the two patterns are suspect, but to use this rule we need sufficient counts on the good words to be sure we have a representative sample. At 100 million words, the NANC is not exactly small, but we were able to process it in about four hours with the machines at our disposal, so still larger corpora would not be out of the question. Finally, as noted above, Hearst [2] tried to find parts in corpora but did not achieve good results. She does not say what procedures were used, but as- suming that the work closely paralleled her work on hyponyms, we suspect that our relative success was due to our very large corpus and the use of more re- fined statistical measures for ranking the output. 6 Acknowledgments This research was funded in part by NSF grant IRI- 9319516 and ONR Grant N0014-96-1-0549. Thanks to the entire statistical NLP group at Brown, and particularly to Mark Johnson, Brian Roark, Gideon Mann, and Ann-Maria Popescu who provided invalu- able help on the project. References [1] George Miller, Richard Beckwith, Cristiane Fell- baum, Derek Gross & Katherine J. Miller, "Word- Net: an on-line lexicai database," International Journal of Lexicography 3 (1990), 235-245. [2] Marti Hearst, "Automatic acquisition of hy- ponyms from large text corpora," in Proceed- ings of the Fourteenth International Conference on Computational Linguistics,, 1992. [3] Ellen Riloff & Jessica Shepherd, "A corpus-based approach for building semantic lexicons," in Pro- ceedings of the Second Conference on Empirical Methods in Natural Language Processing, 1997, 117-124. [4] Dekang Lin, "Automatic retrieval and cluster- ing of similar words," in 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computa- tional Linguistics, 1998, 768-774. [5] Gregory Grefenstette, "SEXTANT: extracting se- mantics from raw text implementation details," Heuristics: The Journal of Knowledge Engineer- ing (1993). [6] Brian Roark & Eugene Charniak, "Noun-phrase co-occurrence statistics for semi-automatic se- mantic lexicon construction," in 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, 1998, 1110-1116. [7] Vasileios Hatzivassiloglou & Kathleen R. McKe- own, "Predicting the semantic orientation of ad- jectives," in Proceedings of the 35th Annual Meet- ing of the ACL, 1997, 174-181. [8] Stephen D. Richardson, William B. 
Dolan & Lucy Vanderwende, "MindNet: acquiring and structur- ing semantic information from text," in 36th An- nual Meeting of the Association for Computa- tional Linguistics and 17th International Confer- ence on Computational Linguistics, 1998, 1098- 1102. [9] William A. Gale, Kenneth W. Church & David Yarowsky, "A method for disambiguating word senses in a large corpus," Computers and the Hu- manities (1992). [10] Ted Dunning, "Accurate methods for the statis- tics of surprise and coincidence," Computational Linguistics 19 (1993), 61-74. 61 Ocr. 853 23 114 7 123 5 9 51 220 125 103 6 13 45 4 69 16 48 2 289 12 45 16 3 57 8 3 6 13 11 30 3 53 9 44 23 8 56 15 47 2 3 6 8 3 3 5 35 6 7 Frame 3069 48 414 16 963 10 32 499 3053 1961 1607 28 122 771 14 1693 240 1243 2 10800 175 1512 366 10 2312 123 13 82 360 295 1390 16 3304 252 2908 1207 218 4265 697 3674 5 22 140 276 25 26 111 3648 194 3OO Word author subtitle co-author foreword publication epigraph co-editor cover copy page title authorship manuscript chapter epilogue publisher jacket subject double-page sale excerpt content plot galley edition protagonist co-publisher spine premise revelation theme fallacy editor translation character tone flaw section introduction release diarist preface narrator format facsimile mock-up essay back heroine pleasure Table 6: book x/5 5* 4* 4* 5* 2 3* 4* 5* 2 5* 5* 2 2 5* 5* 4* 5* 5* 0 0 2 5* 5* 2 3* 4* 3* 5* 1 2 2 2 5* 2 5* 2 2 4* 5* 1 0 4* 4* 2 0 1 2 5* 4* 0 Ocr. Frame 72 154 527 2116 42 156 85 456 100 577 9 23 32 162 28 152 12 45 49 333 7 20 30 250 14 89 14 93 10 60 23 225 4 9 10 62 36 432 7 37 82 1449 23 276 37 572 12 120 3 6 13 156 9 83 32 635 219 6612 7 58 11 143 2 2 2 2 2 2 47 1404 9 115 14 285 129 5616 17 404 25 730 15 358 3 11 6 72 3 12 37 1520 10 207 39 1646 2 3 38 1736 4 31 Word rubble ~oor facade basement roof atrium exterior tenant rooftop wreckage stairwell shell demolition balcony hallway renovation janitor rotunda entrance hulk wall ruin lobby courtyard tenancy debris pipe interior front elevator evacuation web-site airshaft cornice construction landlord occupant owner rear destruction superintendent stairway cellar half-mile step corridor window subbasement door spire Table 7: building x/5 0 5* 4* 5* 5* 4* 5* 1 4* 1 5* 0 0 5* 5* 0 1 5* 3* 0 5* 0 5* 4* 0 1 2 3* 4* 5* 1 0 4* 3* 2 1 1 1 3* 1 1 5* 5* 0 5* 5* 5* 5* 4* 3* 62 Ocr. 92 27 12 13 70 9 43 119 6 4 37 15 5 6 3 8 11 7 108 3 3 3 64 28 2 33 20 4 6 75 2 10 9 3 7 18 19 11 5 3 3 11 6 18 71 5 4 2 2 6 Frame 215 71 24 30 318 21 210 880 13 6 285 83 12 18 4 42 83 36 1985 5 6 6 1646 577 2 784 404 19 68 3648 3 216 179 13 117 635 761 334 73 18 18 376 125 980 6326 88 51 5 5 151 Word trunk windshield dashboard headlight wheel ignition hood driver radiator shifter occupant brake vent fender tailpipe bumper pipe airbag seat speedometer converter backseat window roof . jalopy engine rear visor deficiency back oversteer plate cigarette clunker battery interior speed shipment re-enactment conditioner axle tank attribute location cost paint antenna socket corsa tire Table 8: car x/5 4* 5* 5* 5* 5* 4* 5* 1 5* 1 1 5* 3* 5* 5* 5* 3* 5* 4* 4* 2 5* 5* 5* 0 5* 4* 3* 0 2 1 3* 1 0 5* 3* 1 0 0 2 5* 5* 0 1 1 4* 5* 0 0 5* Oct. 
43 3 2 3 3 17 3 18 16 33 68 44 11 19 15 6 25 35 7 2 100 5 3 20 4 4 29 3 2 3 14 2 17 13 4 5 15 8 3 4 2 14 5 15 2 4 16 2 29 3 Frame 302 7 2 9 9 434 11 711 692 2116 5404 3352 432 1237 1041 207 2905 5015 374 11 23692 358 89 5347 299 306 13944 149 33 156 5073 35 7147 4686 416 745 6612 2200 190 457 42 6315 875 7643 46 518 8788 48 25606 276 Word ward radiologist trograncic mortuary hopewell clinic aneasthetist ground patient floor unit room entrance doctor administrator corridor staff department bed pharmacist director superintendent storage chief lawn compound head nurse switchboard debris executive pediatrician board area ceo yard front reputation inmate procedure overhead committee mile center pharmacy laboratory program shah president ruin Table 9: hospital x/5 5* 5* 0 4* 0 5* 5* 1 4* 4* 4* 2 4* 5* 5* 4* 3* 5* 5* 4* 5* 3* 3* 2 2 0 0 5* 4* 0 2 4* 1 1 2 2 3* 1 1 2 0 4* 0 1 4* 5* 1 0 2 1 63 Ocr. 185 5 23 8 10 2 19 6 41 22 17 22 26 12 21 19 2 4 26 3 12 4 2 3 8 8 8 17 9 23 5 50 24 24 29 40 9 49 41 6 21 3 32 6 5 2 8 3 5 7 Frame 1404 12 311 72 122 2 459 62 1663 844 645 965 1257 387 98O 856 4 41 1519 20 506 51 5 22 253 254 309 1177 413 1966 131 6326 2553 2564 3478 5616 577 7793 6360 276 2688 48 5404 337 233 13 711 69 296 632 Word construction stalk reactor emission modernization melter shutdown start-up worker root closure completion operator inspection location gate sprout leaf output turbine equipment residue zen foliage conversion workforce seed design fruit expansion pollution cost tour employee site owner roof manager operation characteristic production shoot unit tower co-owner instrumentation ground fiancee economics energy Table 10: plant x/5 2 4* 3* 3* 1 3* 1 0 2 3* 0 0 4* 2 2 3* 3* 5* 2 3* 3* 1 0 4* 0 1 3* 4* 5* 2 2 1 0 5* 1 3* 4* 3* 3* 1 3* 0 1 1 1 3* 2 0 1 2 Oer. 525 164 134 11 7 16 19 4 8 25 3 13 8 9 11 5 3 8 75 56 10 4 5 8 28 4 2 2 7 21 11 17 8 7 5 5 7 39 2 6 105 16 6 25 17 3 6 2 4 6 Fralne 1051 445 538 24 12 61 79 5 22 134 3 87 40 57 82 18 5 52 1462 1022 100 15 26 71 603 17 2 2 65 525 203 423 115 108 56 60 130 2323 4 112 8788 711 120 1442 837 20 135 5 53 144 Word dean principal graduate prom headmistress Mumni curriculum seventh-grader gymnasium faculty crit endowment ~umn~ cadet enrollment infwmary valedictorian commandant student feet auditorium jamieson yearbook cafeteria teacher grader wennberg jeffe pupil campus class trustee counselor benefactor berth hallway mascot founder raskin playground program ground courtyard hall championship accreditation fellow freund rector classroom Table 1 I: school 5* 3* 3* 4* 3* 5* 3* 5* 5* 0 3* 2 0 2 4* 4* 0 5* 0 3* 4* 5* 2 0 o' 3* 4* 5* 3* 4* 2 0 4* 3* 1 0 4* 3* 3* 3* 4* 1 2 1 0 2 4* 64 | 1999 | 8 |
A Pylonic Decision-Tree Language Model with Optimal Question Selection Adrian Corduneanu University of Toronto 73 Saint George St #299 Toronto, Ontario, M5S 2E5, Canada [email protected] Abstract This paper discusses a decision-tree approach to the problem of assigning probabilities to words following a given text. In contrast with previ- ous decision-tree language model attempts, an algorithm for selecting nearly optimal questions is considered. The model is to be tested on a standard task, The Wall Street Journal, allow- ing a fair comparison with the well-known tri- gram model. 1 Introduction In many applications such as automatic speech recognition, machine translation, spelling cor- rection, etc., a statistical language model (LM) is needed to assign ~probabilities to sentences. This probability assignment may be used, e.g., to choose one of many transcriptions hypoth- esized by the recognizer or to make deci- sions about capitalization. Without any loss of generality, we consider models that oper- ate left-to-right on the sentences, assigning a probability to the next word given its word history. Specifically, we consider statistical LM's which compute probabilities of the type P{wn ]Wl, W2,..-, Wn--1}, where wi denotes the i-th word in the text. Even for a small vocabulary, the space of word histories is so large that any attempt to estimate the conditional probabilities for each distinct history from raw frequencies is infea- sible. To make the problem manageable, one partitions the word histories into some classes C(wl,w2,...,Wn-1), and identifies the word probabilities with P{wn [ C(wl, w2,. . . , Wn-1)}. Such probabilities are easier to estimate as each class gets significantly more counts from a train- ing corpus. With this setup, building a language model becomes a classification problem: group the word histories into a small number of classes 606 while preserving their predictive power. Currently, popular N-gram models classify the word histories by their last N - 1 words. N varies from 2 to 4 and the trigram model P{wn [Wn-2, wn-1} is commonly used. Al- though these simple models perform surpris- ingly well, there is much room for improvement. The approach used in this paper is to classify the histories by means of a decision tree: to clus- ter word histories Wl,W2,... ,wn-1 for which the distributions of the following word Wn in a training corpus are similar. The decision tree is pylonic in the sense that histories at different nodes in the tree may be recombined in a new node to increase the complexity of questions and avoid data fragmentation. The method has been tried before (Bahl et al., 1989) and had promising results. In the work presented here we made two major changes to the previous attempts: we have used an opti- mal tree growing algorithm (Chou, 1991) not known at the time of publication of (Bahl et al., 1989), and we have replaced the ad-hoc clus- tering of vocabulary items used by Bahl with a data-driven clustering scheme proposed in (Lu- cassen and Mercer, 1984). 2 Description of the Model 2.1 The Decision-Tree Classifier The purpose of the decision-tree classifier is to cluster the word history wl, w2,..., Wn-1 into a manageable number of classes Ci, and to esti- mate for each class the next word conditional distribution P{wn [C i}. The classifier, together with the collection of conditional probabilities, is the resultant LM. The general methodology of decision tree construction is well known (e.g., see (Jelinek, 1998)). 
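Before turning to how the tree is grown, a concrete (hypothetical) illustration of how the finished classifier is used at prediction time: the tree maps a history to one of the classes Ci, and the stored distribution for that class supplies the probability. The function and table below are placeholders, not the paper's implementation:

```python
# P(w_n | C(w_1, ..., w_{n-1})): route the history through the decision tree
# to get its equivalence class, then look the word up in that class's
# estimated next-word distribution.
def next_word_prob(word, history, history_to_class, class_distributions):
    cls = history_to_class(history)                 # C(w_1, ..., w_{n-1})
    return class_distributions[cls].get(word, 0.0)  # P(w_n | class)
```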
The following issues need to be ad- dressed for our specific application. • A tree growing criterion, often called the measure of purity; • A set of permitted questions (partitions) to be considered at each node; • A stopping rule, which decides the number of distinct classes. These are discussed below. Once the tree has been grown, we address one other issue: the estimation of the language model at each leaf of the resulting tree classifier. 2.1.1 The Tree Growing Criterion We view the training corpus as a set of ordered pairs of the following word wn and its word his- tory (wi,w2,... ,wn-i). We seek a classifica- tion of the space of all histories (not just those seen in the corpus) such that a good conditional probability P{wn I C(wi, w2,.. . , Wn-i)} can be estimated for each class of histories. Since sev- eral vocabulary items may potentially follow any history, perfect "classification" or predic- tion of the word that follows a history is out of the question, and the classifier must parti- tion the space of all word histories maximizing the probability P{wn I C(wi, w2, . . . , Wn-i)} as" signed to the pairs in the corpus. We seek a history classification such that C(wi,w2,... ,Wn-i) is as informative as pos- sible about the distribution of the next word. Thus, from an information theoretical point of view, a natural cost function for choosing ques- tions is the empirical conditional entropy of the training data with respect to the tree: H = - Z I c,)log f(w I C,). w i Each question in the tree is chosen so as to minimize the conditional entropy, or, equiva- lently, to maximize the mutual information be- tween the class of a history and the predicted word. 2.1.2 The Set of Questions and Decision Pylons Although a tree with general questions can rep- resent any classification of the histories, some restrictions must be made in order to make the selection of an optimal question computation- ally feasible. We consider elementary questions of the type w-k E S, where W-k refers to the k-th position before the word to be predicted, 607 y/ n ( D n yes no Figure 1: The structure of a pylon and S is a subset of the vocabulary. However, this kind of elementary question is rather sim- plistic, as one node in the tree cannot refer to two different history positions. A conjunction of elementary questions can still be implemented over a few nodes, but similar histories become unnecessarily fragmented. Therefore a node in the tree is not implemented as a single elemen- tary question, but as a modified decision tree in itself, called a pylon (Bahl et al., 1989). The topology of the pylon as in Figure 1 allows us to combine answers from elementary questions without increasing the number of classes. A py- lon may be of any size, and it is grown as a standard decision tree. 2.1.3 Question Selection Within the Pylon For each leaf node and position k the problem is to find the subset S of the vocabulary that minimizes the entropy of the split W-k E S. The best question over all k's will eventually be selected. We will use a greedy optimization algorithm developed by Chou (1991). Given a partition P = {81,/32,...,/3k} of the vocabu- lary, the method finds a subset S of P for which the reduction of entropy after the split is nearly optimal. The algorithm is initialized with a random partition S t2 S of P. 
At each iteration every atom 3 is examined and redistributed into a new partition S'U S', according to the following rule: place j3 into S' when l(wlw-kcf~) < Ew f(wlw-k e 3) log I(w w_heS) -- E,o f (wlw_ 3) log f(wlW-kEC3) where the f's are word frequencies computed relative to the given leaf. This selection crite- rion ensures a decreasing empirical entropy of the tree. The iteration stops when S = S' and If questions on the same level in the pylon are constructed independently with the Chou algo- ritm, the overall entropy may increase. That is why nodes whose children are merged must be jointly optimized. In order to reduce complex- ity, questions on the same level in the pylon are asked with respect to the same position in the history. The Chou algorithm is not accurate when the training data is sparse. For instance, when no history at the leaf has w-k E /3, the atom is invariantly placed in S'. Because such a choice of a question is not based on evidence, it is not expected to generalize to unseen data. As the tree is growing, data is fragmented among the leaves, and this issue becomes unavoidable. To deal with this problem, we choose the atomic partition P so that each atom gets a history count above a threshold. The choice of such an atomic partition is a complex problem, as words composing an atom must have similar predictive power. Our ap- proach is to consider a hierarchical classification of the words, and prune it to a level at which each atom gets sufficient history counts. The word hierarchy is generated from training data with an information theoretical algorithm (Lu- cassen and Mercer, 1984) detailed in section 2.2. 2.1.4 The Stopping Rule A common problem of all decision trees is the lack of a clear rule for when to stop growing new nodes. The split of a node always brings a reduction in the estimated entropy, but that might not hold for the true entropy. We use a simplified version of cross-validation (Breiman et al., 1984), to test for the significance of the reduction in entropy. If the entropy on a held out data set is not reduced, or the reduction on the held out text is less than 10% of the entropy reduction on the training text, the leaf is not split, because the reduction in entropy has failed to generalize to the unseen data. 2.1.5 Estimating the Language Model at Each Leaf Once an equivalence classification of all histo- ries is constructed, additional training data is used to estimate the conditional probabilities required for each node, as described in (Bahl et al., 1989). Smoothing as well as interpolation with a standard trigram model eliminates the zero probabilities. 2.2 The Hierarchical Classification of Words The goal is to build a binary tree with the words of the vocabulary as leaves, such that similar words correspond to closely related leaves. A partition of the vocabulary can be derived from such a hierarchy by taking a cut through the tree to obtain a set of subtrees. The reason for keeping a hierarchy instead of a fixed partition of the vocabulary is to be able to dynamically adjust the partition to accommodate for train- ing data fragmentation. The hierarchical classification of words was built with an entirely data-driven method. The motivation is that even though an expert could exhibit some strong classes by looking at parts of speech and synonyms, it is hard to produce a full hierarchy of a large vocabulary. Perhaps a combination of the expert and data-driven ap- proaches would give the best result. 
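A compact sketch of this bottom-up merging (our illustration, not the authors' implementation), estimating each class's next-word distribution from bigram counts and always merging the pair whose union raises the conditional entropy least:

```python
import math
from collections import Counter, defaultdict

def cluster_words(vocab, bigrams):
    """bigrams: dict mapping (previous_word, word) -> count."""
    following = defaultdict(Counter)                # previous word -> next-word counts
    for (prev, w), c in bigrams.items():
        following[prev][w] += c

    def cost(counts):
        # un-normalised -sum_w c(w) log p(w | class); dividing by the corpus
        # size would give the class's contribution to H(w | P)
        total = sum(counts.values())
        return -sum(c * math.log(c / total) for c in counts.values()) if total else 0.0

    classes = [(frozenset([w]), following[w]) for w in vocab]
    merges = []
    while len(classes) > 1:
        best = None
        for i in range(len(classes)):
            for j in range(i + 1, len(classes)):
                merged = classes[i][1] + classes[j][1]
                delta = cost(merged) - cost(classes[i][1]) - cost(classes[j][1])
                if best is None or delta < best[0]:
                    best = (delta, i, j, merged)
        delta, i, j, merged = best
        merges.append((classes[i][0], classes[j][0]))
        classes[i] = (classes[i][0] | classes[j][0], merged)
        del classes[j]
    return merges          # the merge sequence defines the binary word hierarchy
```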
Neverthe- less, the algorithm that has been used in deriv- ing the hierarchy can be initialized with classes based on parts of speech or meaning, thus tak- ing account of prior expert information. The approach is to construct the tree back- wards. Starting with single-word classes, each iteration consists of merging the two classes most similar in predicting the word that follows them. The process continues until the entire vo- cabulary is in one class. The binary tree is then obtained from the sequence of merge operations. To quantify the predictive power of a parti- tion P = {j3z,/32,...,/3k} of the vocabulary we look at the conditional entropy of the vocabu- lary with respect to class of the previous word: H(w I P) = EZeP p(/3)H(w [ w-1 •/3) = - E epp(/3) E evp(wl )logp(w I/3) At each iteration we merge the two classes that minimize H(w I P') - H(w I P), where P' is the partition after the merge. In information- theoretical terms we seek the merge that brings the least reduction in the information provided by P about the distribution of the current word. 608 IRAN'S UNION'S IRAQ'S INVESTORS' BANKS' PEOPLE'S FARMER TEACHER WORKER DRIVER WRITER SPECIALIST EXPERT TRADER PLUMMETED PLUNGED SOARED TUMBLED SURGED RALLIED FALLING FALLS RISEN FALLEN MYSELF HIMSELF OURSELVES THEMSELVES CONSIDERABLY SIGNIFICANTLY SUBSTANTIALLY SOMEWHAT SLIGHTLY Figure 2: Sample classes from a 1000-element partition of a 5000-word vocabulary (each col- umn is a different class) The algorithm produced satisfactory results on a 5000-word vocabulary. One can see from the sample classes that the automatic building of the hierarchy accounts both for similarity in meaning and of parts of speech. the vocabulary is significantly larger, making impossible the estimation of N-gram models for N > 3. However, we expect that due to the good smoothing of the trigram probabilities a combination of the decision-tree and N-gram models will give the best results. 4 Summary In this paper we have developed a decision-tree method for building a language model that pre- dicts words given their previous history. We have described a powerful question search algo- rithm, that guarantees the local optimality of the selection, and which has not been applied before to word language models. We expect that the model will perform significantly better than the standard N-gram approach. 5 Acknowledgments I would like to thank Prof.Frederick Jelinek and Sanjeev Khu- dampur from Center for Language and Speech Processing, Johns Hopkins University, for their help related to this work and for providing the computer resources. I also wish to thank Prof.Graeme Hirst from University of Toronto for his useful advice in all the stages of this project. 3 Evaluation of the Model The decision tree is being trained and tested on the Wall Street Journal corpus from 1987 to 1989 containing 45 million words. The data is divided into 15 million words for growing the nodes, 15 million for cross-validation, 10 mil- lion for estimating probabilities, and 5 million for testing. To compare the results with other similar attempts (Bahl et al., 1989), the vocab- ulary consists of only the 5000 most frequent words and a special "unknown" word that re- places all the others. The model tries to predict the word following a 20-word history. At the time this paper was written, the im- plementation of the presented algorithms was nearly complete and preliminary results on the performance of the decision tree were expected soon. 
The evaluation criterion to be used is the perplexity of the test data with respect to the tree. A comparison with the perplexity of a standard back-off trigram model will in- dicate which model performs better. Although decision-tree letter language models are inferior to their N-gram counterparts (Potamianos and Jelinek, 1998), the situation should be reversed for word language models. In the case of words References L. R. Bahl, P. F. Brown, P. V. de Souza, and R. L. Mercer. 1989. A tree-based statistical language model for natural language speech recognition. IEEE Transactions on Acous- tics, Speech, and Signal Processing, 37:1001- 1008. L. Breiman, J. Friedman, R. Olshen, and C. Stone. 1984. Classification and regression trees. Wadsworth and Brooks, Pacific Grove. P. A. Chou. 1991. Optimal partitioning for classification and regression trees. IEEE Transactions on Pattern Analysis and Ma- chine Intelligence, 13:340-354. F. Jelinek. 1998. Statistical methods ]or speech recognition. The MIT Press, Cambridge. J. M. Lucassen and R. L. Mercer. 1984. An information theoretic approach to the auto- matic determination of phonemic baseforms. In Proceedings of the 1984 International Con- -ference on Acoustics, Speech, and Signal Pro- cessing, volume III, pages 42.5.1-42.5.4. G. Potamianos and F. Jelinek. 1998. A study of n-gram and decision tree letter language modeling methods. Speech Communication, 24:171-192. 609 | 1999 | 80 |
An Unsupervised Model for Statistically Determining Coordinate Phrase Attachment
Miriam Goldberg
Central High School & Dept. of Computer and Information Science, University of Pennsylvania
200 South 33rd Street, Philadelphia, PA 19104-6389
miriamg@unagi.cis.upenn.edu

Abstract
This paper examines the use of an unsupervised statistical model for determining the attachment of ambiguous coordinate phrases (CP) of the form n1 p n2 cc n3. The model presented here is based on [AR98], an unsupervised model for determining prepositional phrase attachment. After training on unannotated 1988 Wall Street Journal text, the model performs at 72% accuracy on a development set from sections 14 through 19 of the WSJ TreeBank [MSM93].

1 Introduction
The coordinate phrase (CP) is a source of structural ambiguity in natural language. For example, take the phrase:
box of chocolates and roses
'Roses' attaches either high to 'box' or low to 'chocolates'. In this case, attachment is high, yielding:
H-attach: ((box (of chocolates)) (and roses))
Consider, then, the phrase:
salad of lettuce and tomatoes
'Lettuce' attaches low to 'tomatoes', giving:
L-attach: (salad (of ((lettuce) and (tomatoes))))
Previous work has used corpus-based approaches to solve the similar problem of prepositional phrase attachment. These have included backed-off [CB95], maximum entropy [RRR94], rule-based [HR94], and unsupervised [AR98] models. In addition to these, a corpus-based model for PP-attachment [SN97] has been reported that uses information from a semantic dictionary.
Sparse data can be a major concern in corpus-based disambiguation. Supervised models are limited by the amount of annotated data available for training. Such a model is useful only for languages in which annotated corpora are available. Because an unsupervised model does not rely on such corpora it may be modified for use in multiple languages as in [AR98]. The unsupervised model presented here trains from an unannotated version of the 1988 Wall Street Journal. After tagging and chunking the text, a rough heuristic is then employed to pick out training examples. This results in a training set that is less accurate, but much larger, than currently existing annotated corpora. It is the goal, then, of unsupervised training data to be abundant in order to offset its noisiness.

2 Background
The statistical model must determine the probability of a given CP attaching either high (H) or low (L), p(attachment | phrase). Results shown come from a development corpus of 500 phrases of extracted head word tuples from the WSJ TreeBank [MSM93]. 64% of these phrases attach low and 36% attach high. After further development, final testing will be done on a separate corpus. The phrase:
(busloads (of ((executives) and (their wives))))
gives the 6-tuple:
L busloads of executives and wives
where a = L, n1 = busloads, p = of, n2 = executives, cc = and, n3 = wives. The CP attachment model must determine a for all (n1 p n2 cc n3) sets. The attachment decision is correct if it is the same as the corresponding decision in the TreeBank set.
The probability of a CP attaching high is conditional on the 5-tuple. The algorithm presented in this paper estimates the probability:

$$\hat{p} = p(a \mid n1, p, n2, cc, n3)$$

The parts of the CP are analogous to those of the prepositional phrase (PP) such that {n1, n2} ~ {n, v} and n3 ~ p. [AR98] determines the probability p(v, n, p, a). To be consistent, here we determine the probability p(n1, n2, n3, a).
3 Training Data Extraction
A statistical learning model must train from unambiguous data. In annotated corpora ambiguous data are made unambiguous through classifications made by human annotators. In unannotated corpora the data themselves must be unambiguous. Therefore, while this model disambiguates CPs of the form (n1 p n2 cc n3), it trains from implicitly unambiguous CPs of the form (n cc n). For example:
dog and cat
Because there are only two nouns in the unambiguous CP, we must redefine its components. The first noun will be referred to as n1. It is analogous to n1 and n2 in the ambiguous CP. The second, terminal noun will be referred to as n3. It is analogous to the third noun in the ambiguous CP. Hence n1 = dog, cc = and, n3 = cat. In addition to the unambiguous CPs, the model also uses any noun that follows a cc. Such nouns are classified ncc.
We extracted 119629 unambiguous CPs and 325261 nccs from the unannotated 1988 Wall Street Journal. First the raw text was fed into the part-of-speech tagger described in [AR96].¹ This was then passed to a simple chunker as used in [AR98], implemented with two small regular expressions that replace noun and quantifier phrases with their head words. These head words were then passed through a set of heuristics to extract the unambiguous phrases.
¹ Because this tagger trained on annotated data, one may argue that the model presented here is not purely unsupervised.
The heuristics to find an unambiguous CP are:
• w_n is a coordinating conjunction (cc) if it is tagged cc.
• w_{n-1} is the leftmost noun (n1) if:
  - w_{n-1} is the first noun to occur within 4 words to the left of cc.
  - no preposition occurs between this noun and cc.
  - no preposition occurs within 4 words to the left of this noun.
• w_{n+1} is the rightmost noun (n2) if:
  - it is the first noun to occur within 4 words to the right of cc.
  - no preposition occurs between cc and this noun.
The first noun to occur within 4 words to the right of cc is always extracted. This is ncc. Such nouns are also used in the statistical model.
For example, we process the sentence below as follows:
Several firms have also launched business subsidiaries and consulting arms specializing in trade, lobbying and other areas.
First it is annotated with parts of speech:
Several_JJ firms_NNS have_VBP also_RB launched_VBN business_NN subsidiaries_NNS and_CC consulting_VBG arms_NNS specializing_VBG in_IN trade_NN ,_, lobbying_NN and_CC other_JJ areas_NNS ._.
From there, it is passed to the chunker yielding:
firms_NNS have_VBP also_RB launched_VBN subsidiaries_NNS and_CC consulting_VBG arms_NNS specializing_VBG in_IN trade_NN ,_, lobbying_NN and_CC areas_NNS ._.
Noun phrase heads of ambiguous and unambiguous CPs are then extracted according to the heuristic, giving:
subsidiaries and arms
and areas
where the extracted unambiguous CP is {n1 = subsidiaries, cc = and, n3 = arms} and areas is extracted as an ncc because, although it is not part of an unambiguous CP, it occurs within four words after a conjunction.
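To illustrate the extraction step just described, the sketch below applies the 4-word-window heuristics to the chunker's head-word/tag output. It is a hypothetical rendering of the heuristics, not the author's code; the function name extract_training_examples and the tag sets are my own assumptions.

```python
NOUN_TAGS = {"NN", "NNS", "NNP", "NNPS"}

def extract_training_examples(tagged):
    """tagged: list of (head_word, tag) pairs from the chunker.
    Returns (unambiguous_cps, nccs) following the 4-word-window heuristics."""
    cps, nccs = [], []
    for i, (word, tag) in enumerate(tagged):
        if tag != "CC":
            continue
        # Right side: the first noun within 4 words of the cc is always kept as an ncc.
        n_right = None
        blocked = False
        for j in range(i + 1, min(i + 5, len(tagged))):
            w, t = tagged[j]
            if t == "IN":
                blocked = True            # a preposition between cc and the noun
            if t in NOUN_TAGS:
                nccs.append(w)
                if not blocked:
                    n_right = w           # usable as the right noun only if no preposition intervened
                break
        # Left side: first noun within 4 words, no preposition between it and the cc,
        # and no preposition within 4 words to its left.
        n_left = None
        for j in range(i - 1, max(i - 5, -1), -1):
            w, t = tagged[j]
            if t == "IN":
                break
            if t in NOUN_TAGS:
                if all(tagged[k][1] != "IN" for k in range(max(j - 4, 0), j)):
                    n_left = w
                break
        if n_left is not None and n_right is not None:
            cps.append((n_left, word, n_right))   # an unambiguous (n cc n) example
    return cps, nccs
```

Run on the chunked example above, this returns the unambiguous CP (subsidiaries, and, arms) together with the nccs arms and areas, matching the extraction shown in the text.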
4 The Statistical Model
First, we can factor p(a, n1, n2, n3) as follows:

$$p(a, n1, n2, n3) = p(n1)\, p(n2) \cdot p(a \mid n1, n2) \cdot p(n3 \mid a, n1, n2)$$

The terms p(n1) and p(n2) are independent of the attachment and need not be computed. The other two terms are more problematic. Because the training phrases are unambiguous and of the form (n1 cc n2), n1 and n2 of the CP in question never appear together in the training data. To compensate we use the following heuristic as in [AR98]. Let the random variable φ range over {true, false} and let it denote the presence or absence of any n3 that unambiguously attaches to the n1 or n2 in question. If φ = true when any n3 unambiguously attaches to n1, then p(φ = true | n1) is the conditional probability that a particular n1 occurs with an unambiguously attached n3. Now p(a | n1, n2) can be approximated as:

$$p(a = H \mid n1, n2) \approx \frac{p(true \mid n1)}{Z(n1, n2)} \qquad p(a = L \mid n1, n2) \approx \frac{p(true \mid n2)}{Z(n1, n2)}$$

where the normalization factor Z(n1, n2) = p(true | n1) + p(true | n2). The reasoning behind this approximation is that the tendency of a CP to attach high (low) is related to the tendency of the n1 (n2) in question to appear in an unambiguous CP in the training data.
We approximate p(n3 | a, n1, n2) as follows:

$$p(n3 \mid a = H, n1, n2) \approx p(n3 \mid true, n1) \qquad p(n3 \mid a = L, n1, n2) \approx p(n3 \mid true, n2)$$

The reasoning behind this approximation is that when generating n3 given high (low) attachment, the only counts from the training data that matter are those which unambiguously attach to n1 (n2), i.e., φ = true. Word statistics from the extracted CPs are used to formulate these probabilities.

4.1 Generate φ
The conditional probabilities p(true | n1) and p(true | n2) denote the probability of whether a noun will appear attached unambiguously to some n3. These probabilities are estimated as:

$$p(true \mid n1) = \begin{cases} \frac{f(n1,\, true)}{f(n1)} & \text{if } f(n1,\, true) > 0 \\ 0.5 & \text{otherwise} \end{cases} \qquad p(true \mid n2) = \begin{cases} \frac{f(n2,\, true)}{f(n2)} & \text{if } f(n2,\, true) > 0 \\ 0.5 & \text{otherwise} \end{cases}$$

where f(n2, true) is the number of times n2 appears in an unambiguously attached CP in the training data and f(n2) is the number of times this noun has appeared as either n1, n3, or ncc in the training data.

4.2 Generate n3
The terms p(n3 | n1, true) and p(n3 | n2, true) denote the probabilities that the noun n3 appears attached unambiguously to n1 and n2 respectively. Bigram counts are used to compute these as follows:

$$p(n3 \mid true, n1) = \begin{cases} \frac{f(n1, n3, true)}{f(n1,\, true)} & \text{if } f(n1, n3, true) > 0 \\ \frac{1}{|N|} & \text{otherwise} \end{cases} \qquad p(n3 \mid true, n2) = \begin{cases} \frac{f(n2, n3, true)}{f(n2,\, true)} & \text{if } f(n2, n3, true) > 0 \\ \frac{1}{|N|} & \text{otherwise} \end{cases}$$

where N is the set of all n3s and nccs that occur in the training data.

5 Results
Decisions were deemed correct if they agreed with the decision in the corresponding TreeBank data. The correct attachment was chosen
This is compa- rable to the 14.3% error reduction found when going from JAR98] to [CB95]. It is interesting to note that after reducing the volume of training data by half there was no drop in accuracy. In fact, accuracy remained exactly the same as the volume of data was in- creased from half to full. The backed-off model in [MG, in prep] trained on only 1380 train- ing phrases. The training corpus used in the study presented here consisted of 119629 train- ing phrases. Reducing this figure by half is not overly significant. 6 Discussion In an effort to make the heuristic concise and portable, we may have oversimplified it thereby negatively affecting the performance of the model. For example, when the heuristic came upon a noun phrase consisting of more than one consecutive noun the noun closest to the cc was extracted. In a phrase like coffee and rhubarb apple pie the heuristic would chose rhubarb as the n3 when clearly pie should have been cho- sen. Also, the heuristic did not check if a prepo- sition occurred between either nl and cc or cc and n3. Such cases make the CP ambiguous thereby invalidating it as an unambiguous train- ing example. By including annotated training data from the TreeBank set, this model could be modified to become a partially-unsupervised classifier. Because the model presented here is basically a straight reimplementation of [AR98] it fails to take into account attributes that are specific to the CP. For example, whereas (nl ce n3) -- (n3 cc nl), (v p n) ~ (n p v). In other words, there is no reason to make the distinction between "dog and cat" and "cat and dog." Modifying the model accordingly may greatly increase the usefulness of the training data. 7 Acknowledgements We thank Mitch Marcus and Dennis Erlick for making this research possible, Mike Col]in.~ for his guidance, and Adwait Ratnaparkhi and Ja- son Eisner for their helpful insights. References ~[CB95] M. Collins, J. Brooks. 1995. Preposi- tional Phrase Attachment through a Backed- Off Model, A CL 3rd Workshop on Very Large Corpora, Pages 27-38, Cambridge, Mas- sachusetts, June. [MG, in prep] M. Goldberg. in preparation. Three Models for Statistically Determining Coordinate Phrase Attachment. [HR93] D. Hindle, M. Rooth. 1993. Structural Ambiguity and Lexical Relations. Computa- tional Linguistics, 19(1):103-120. [MSM93] M. Marcus, B. Santorini and M. Marcinkiewicz. 1993. Building a Large Anno- tated Corpus of English: the Penn Treebank, Computational Linguistics, 19(2):313-330. [RRR94] A. Ratnaparkhi, J. Reynar and S. Roukos. 1994. A Maximum Entropy Model for Prepositional Phrase Attachment, In Pro- ceedings of the ARPA Workshop on Human Language Technology, 1994. [AR96] A. Ratnaparkhi. 1996. A Maximum En- tropy Part-Of-Speech Tagger, In Proceedings of the Empirical Methods in Natural Lan- guage Processing Conference, May 17-18. [AR98] A. Ratnaparkhi. 1998. Unsupervised Statistical Models for Prepositional Phrase Attachment, In Proceedings of the Seven- teenth International Conference on Compu- tational Linguistics, Aug. 10-14, Montreal, Canada. 613 [SN97] J. Stetina, M. Nagao. 1997. Corpus Based PP Attachment Ambiguity Resolution with a Semantic Dictionary. In Jou Shou and Kenneth Church, editors, Proceedings o] the Fifth Workshop on Very Large Corpora, pages 66-80, Beijing and Hong Kong, Aug. 18-20. 614 | 1999 | 81 |
A flexible distributed architecture for NLP system development and use Freddy Y. Y. Choi Artificial Intelligence Group University of Manchester Manchester, U.K. [email protected] Abstract We describe a distributed, modular architecture for platform independent natural language sys- tems. It features automatic interface genera- tion and self-organization. Adaptive (and non- adaptive) voting mechanisms are used for inte- grating discrete modules. The architecture is suitable for rapid prototyping and product de- livery. 1 Introduction This article describes TEA 1, a flexible architec- ture for developing and delivering platform in- dependent text engineering (TE) systems. TEA provides a generalized framework for organizing and applying reusable TE components (e.g. to- kenizer, stemmer). Thus, developers are able to focus on problem solving rather than imple- mentation. For product delivery, the end user receives an exact copy of the developer's edition. The visibility of configurable options (different levels of detail) is adjustable along a simple gra- dient via the automatically generated user inter- face (Edwards, Forthcoming). Our target application is telegraphic text compression (Choi (1999b); of Roelofs (Forth- coming); Grefenstette (1998)). We aim to im- prove the efficiency of screen readers for the visually disabled by removing uninformative words (e.g. determiners) in text documents. This produces a stream of topic cues for rapid skimming. The information value of each word is to be estimated based on an unusually wide range of linguistic information. TEA was designed to be a development en- vironment for this work. However, the target application has led us to produce an interesting tTEA is an acronym for Text Engineering Architec- ture. architecture and techniques that are more gen- erally applicable, and it is these which we will focus on in this paper. 2 Architecture I System input and output I I L I I Plug*ins Shared knowledge System control s~ructure Figure 1: An overview of the TEA system framework. The central component of TEA is a frame- based data model (F) (see Fig.2). In this model, a document is a list of frames (Rich and Knight, 1991) for recording the properties about each token in the text (example in Fig.2). A typical TE system converts a document into F with an input plug-in. The information required at the output determines the set of process plug-ins to activate. These use the information in F to add annotations to F. Their dependencies are auto- matically resolved by TEA. System behavior is controlled by adjusting the configurable param- eters. Frame 1: (:token An :pos art :begin_s 1) Frame 2: (:token example :pos n) Frame 3: (:token sentence :pos n) Frame 4: (:token . :pos punc :end_s 1) Figure 2: "An example sentence." in a frame- based data model 615 This type of architecture has been imple- mented, classically, as a 'blackboard' system such as Hearsay-II (Erman, 1980), where inter- module communication takes place through a shared knowledge structure; or as a 'message- passing' system where the modules communi- cate directly. Our architecture is similar to blackboard systems. However, the purpose of F (the shared knowledge structure in TEA) is to provide a single extendable data structure for annotating text. It also defines a standard in- terface for inter-module communication, thus, improves system integration and ease of soft- ware reuse. 
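As an illustration of the frame-based data model, the following sketch represents the example sentence as a list of per-token frames. It is a hypothetical rendering in Python (the field names mirror Figure 2, but the helper make_frames is my own), not TEA's actual code.

```python
def make_frames(tokens_with_pos):
    """Represent a document as a list of per-token frames (slot/value dictionaries)."""
    return [{"token": tok, "pos": pos} for tok, pos in tokens_with_pos]

# "An example sentence." as a frame list, mirroring Figure 2:
doc = make_frames([("An", "art"), ("example", "n"), ("sentence", "n"), (".", "punc")])
doc[0]["begin_s"] = 1     # sentence-initial marker
doc[-1]["end_s"] = 1      # sentence-final marker

# A process plug-in adds further annotations to the frames it has evidence for,
# e.g. a chunker might set doc[i]["np"] = True for tokens inside a noun phrase.
```

Because every plug-in reads and writes the same list of frames, the frame list doubles as the standard interface for inter-module communication described above.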
2.1 Voting mechanism
A feature that distinguishes TEA from similar systems is its use of voting mechanisms for system integration. Our approach has two distinct but uniformly treated applications. First, for any type of language analysis, different techniques t_i will return successful results P(r) on different subsets of the problem space. Thus combining the outputs P(r | t_i) from several t_i should give a result more accurate than any one in isolation. This has been demonstrated in several systems (e.g. Choi (1999a); van Halteren et al. (1998); Brill and Wu (1998); Veronis and Ide (1991)). Our architecture currently offers two types of voting mechanisms: weighted average (Eq. 1) and weighted maximum (Eq. 2). A Bayesian classifier (Weiss and Kulikowski, 1991) based weight estimation algorithm (Eq. 3) is included for constructing adaptive voting mechanisms.

$$P(r) = \sum_{i=1}^{n} w_i P(r \mid t_i) \quad (1)$$
$$P(r) = \max\{w_1 P(r \mid t_1), \ldots, w_n P(r \mid t_n)\} \quad (2)$$
$$w_i = \cdots\, P(r \mid t_i) \quad (3)$$

Second, different types of analysis a_i will provide different information about a problem, hence a solution is improved by combining several a_i. For telegraphic text compression, we estimate E(w), the information value of a word, based on a wide range of different information sources (Fig. 3 shows a subset of our working system). The outputs of the a_i are combined by a voting mechanism to form a single measure.

[Figure 3: An example configuration of TEA for telegraphic text compression — a diagram of voting mechanisms combining processes via technique combination and analysis combination.]

Thus, for example, if our system encounters the phrase 'President Clinton', both lexical lookup and automatic tagging will agree that 'President' is a noun. Nouns are generally informative, so should be retained in the compressed output text. However, grammar-based syntactic analysis gives a lower weighting to the first noun of a noun-noun construction, and bigram analysis tells us that 'President Clinton' is a common word pair. These two modules overrule the simple POS value, and 'President Clinton' is reduced to 'Clinton'.

3 Related work
Current trends in the development of reusable TE tools are best represented by the Edinburgh tools (LTGT)² (LTG, 1999) and GATE³ (Cunningham et al., 1995). Like TEA, both LTGT and GATE are frameworks for TE.
² LTGT is an acronym for the Edinburgh Language Technology Group Tools.
³ GATE is an acronym for General Architecture for Text Engineering.
LTGT adopts the pipeline architecture for module integration. For processing, a text document is converted into SGML format. Processing modules are then applied to the SGML file sequentially. Annotations are accumulated as mark-up tags in the text. The architecture is simple to understand, robust and future proof. The SGML/XML standard is well developed and supported by the community. This improves the reusability of the tools. However,
With GATE, a system is con- structed manually by wiring TE components to- gether using the graphical interface. TEA as- sumes the user knows nothing but the available input and required output. The appropriate set of plug-ins are automatically activated. Module selection can be manually configured by adjust- ing the parameters of the voting mechanisms. This ensures a TE system is accessible to com- plete novices ~,,-I yet has sufficient control for developers. LTGT and GATE are both open-source C ap- plications. They can be recompiled for many platforms. TEA is a Java application. It can run directly (without compilation) on any Java supported systems. However, applications con- structed with the current release of GATE and TEA are less portable than those produced with LTGT. GATE and TEA encourage reuse of ex- isting components, not all of which are platform independent 4. We believe this is a worth while trade off since it allows developers to construct prototypes with components that are only avail- able as separate applications. Native tools can be developed incrementally. 4 An example Our application is telegraphic text compression. The examples were generated with a subset of our working system using a section of the book HAL's legacy (Stork, 1997) as test data. First, we use different compression techniques to gen- erate the examples in Fig.4. This was done by simply adjusting a parameter of an output plug- 4This is not a problem for LTGT since the architec- ture does not encourage component reuse. in. It is clear that the output is inadequate for rapid text skimming. To improve the system, the three measures were combine with an un- weighted voting mechanism. Fig.4 presents two levels of compression using the new measure. 1. With science fiction films the more science you understand the less you admire the film or respect its makers 2. fiction films understand less admire respect makers 3. fiction understand less admire respect makers 4. science fiction films science film makers Figure 4: Three measures of information value: (1) Original sentence, (2) Token frequency, (3) Stem frequency and (4) POS. 1. science fiction films understand less admire film respect makers 2. fiction makers Figure 5: Improving telegraphic text compres- sion by analysis combination. 5 Conclusions and future directions We have described an interesting architecture (TEA) for developing platform independent text engineering applications. Product delivery, configuration and development are made sim- ple by the self-organizing architecture and vari- able interface. The use of voting mechanisms for integrating discrete modules is original. Its motivation is well supported. The current implementation of TEA is geared towards token analysis. We plan to extend the data model to cater for structural annota- tions. The tool set for TEA is constantly be- ing extended, recent additions include a proto- type symbolic classifier, shallow parser (Choi, Forthcoming), sentence segmentation algorithm (Reynar and Ratnaparkhi, 1997) and a POS tagger (Ratnaparkhi, 1996). Other adaptive voting mechanisms are to be investigated. Fu- ture release of TEA will support concurrent ex- ecution (distributed processing) over a network. Finally, we plan to investigate means of im- proving system integration and module orga- nization, e.g. annotation, module and tag set compatibility. 617 References E. Brill and J. Wu. 1998. Classifier combina- tion for improved lexical disambiguation. 
In Proceedings of COLING-A CL '98, pages 191- 195, Montreal, Canada, August. F. Choi. 1999a. An adaptive voting mechanism for improving the reliability of natural lan- guage processing systems. Paper submitted to EACL'99, January. F. Choi. 1999b. Speed reading for the visually disabled. Paper submitted to SIGART/AAAI'99 Doctoral Consortium, February. F. Choi. Forthcoming. A probabilistic ap- proach to learning shallow linguistic patterns. In ProCeedings of ECAI'99 (Student Session), Greece. H. Cunningham, R.G. Gaizauskas, and Y. Wilks. 1995. A general architecture for text engineering (gate) - a new approach to language engineering research and de- velopment. Technical Report CD-95-21, Department of Computer Science, University of Sheffield. http://xxx.lanl.gov/ps/cmp- lg/9601009. M. Edwards. Forthcoming. An approach to automatic interface generation. Final year project report, Department of Computer Sci- ence, University of Manchester, Manchester, England. L. Erman. 1980. The hearsay-ii speech under- standing system: Integrating knowledge to resolve uncertainty. In A CM Computer Sur- veys, volume 12. G. Grefenstette. 1998. Producing intelligent telegraphic text reduction to provide an audio scanning service for the blind. In AAAI'98 Workshop on Intelligent Text Summariza- tion, San Francisco, March. R. Grishman. 1997. Tipster architecture de- sign document version 2.3. Technical report, DARPA. http://www.tipster.org. LTG. 1999. Edinburgh univer- sity, hcrc, ltg software. WWW. http://www.ltg.ed.ac.uk/software/index.html. H. Rollfs of Roelofs. Forthcoming. Telegraph- ese: Converting text into telegram style. Master's thesis, Department of Computer Sci- ence, University of Manchester, Manchester, England. G. M. P. O'Hare and N. R. Jennings, edi- tots. 1996. Foundations of Distributed Ar- tificial Intelligence. Sixth generation com- puter series. Wiley Interscience Publishers, New York. ISBN 0-471-00675. A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceed- ings of the empirical methods in NLP confer- ence, University of Pennsylvania. J. Reynar and A. Ratnaparkhi. 1997. A max- imum entropy approach to identifying sen- tence boundaries. In Proceedings of the fifth conference on Applied NLP, Washington D.C. E. Rich and K. Knight. 1991. Artificial Intel- ligence. McGraw-Hill, Inc., second edition. ISBN 0-07-100894-2. D. Stork, editor. 1997. Hal's Legacy: 2001's Computer in Dream and Reality. MIT Press. http: / / mitpress.mit.edu[ e-books /Hal /. H. van Halteren, J. Zavrel, and W. Daelemans. 1998. Improving data driven wordclass tag- ging by system combination. In Proceedings of COLING-A CL'g8, volume 1. J. Veronis and N. Ide. 1991. An accessment of semantic information automatically extracted from machine readable dictionaries. In Pro- ceedings of EA CL'91, pages 227-232, Berlin. S. Weiss and C. Kulikowski. 1991. Computer Systems That Learn. Morgan Kaufmann. 618 | 1999 | 82 |
Modeling Filled Pauses in Medical Dictations Serge)' V.. Pakhomov University of Minnesota 190 Klaeber Court 320-16 th Ave. S.E Minneapolis, MN 55455 [email protected] Abstract Filled pauses are characteristic of spontaneous speech and can present considerable problems for speech recognition by being often recognized as short words. An um can be recognized as thumb or arm if the recognizer's language model does not adequately represent FP's. Recognition of quasi-spontaneous speech (medical dictation) is subject to this problem as well. Results from medical dictations by 21 family practice physicians show that using an FP model trained on the corpus populated with FP's produces overall better results than a model trained on a corpus that excluded FP's or a corpus that had random FP's. Introduction Filled pauses (FP's), false starts, repetitions, fragments, etc. are characteristic of spontaneous speech and can present considerable problems for speech recognition. FP's are often recognized as short words of similar phonetic quality. For example, an um can be recognized as thumb or arm if the recognizer's language model does not adequately represent FP's. Recognition of quasi-spontaneous speech (medical dictation) is subject to this problem as well. The FP problem becomes especially pertinent where the corpora used to build language models are compiled from text with no FP's. Shriberg (1996) has shown that representing FP's in a language model helps decrease the model' s perplexity. She finds that when a FP occurs at a major phrase or discourse boundary, the FP itself is the best predictor of the following lexical material; conversely, in a non-boundary context, FP's are predictable from the preceding words. Shriberg (1994) shows that the rate of disfluencies grows exponentially with the length of the sentence, and that FP's occur more often in the initial position (see also Swerts (1996)). This paper presents a method of using bigram probabilities for extracting FP distribution from a corpus of hand- transcribed dam. The resulting bigram model is used to populate another Iraining corpus that originally had no FP's. Results from medical dictations by 21 family practice physicians show that using an FP model trained on the corpus populated with FP's produces overall better results than a model trained on a corpus that excluded FP's or a corpus that had random FP's. Recognition accuracy improves proportionately to the frequency of FP's in the speech. 1. Filled Pauses FP's are not random events, but have a systematic distribution and well-defined functions in discourse. (Shriberg and Stolcke 1996, Shriberg 1994, Swerts 1996, Macalay and Osgood 1959, Cook 1970, Cook and Lalljee 1970, Christenfeld, et al. 1991) Cook and Lalljee (1970) make an interesting proposal that FP's may have something to do with the listener's perception of disfluent speech. They suggest that speech may be more 619 comprehensible when it contains filler material during hesitations by preserving continuity and that a FP may serve as a signal to draw the listeners attention to the next utterance in order for the listener not to lose the onset of the following utterance. Perhaps, from the point of view of perception, FP's are not disfluent events at all. This proposal bears directly on the domain of medical dictations, since many doctors who use old voice operated equipment train themselves to use FP's instead of silent pauses, so that the recorder wouldn't cut off the beginning of the post pause utterance. 2. 
Quasi-spontaneous speech
Family practice medical dictations tend to be pre-planned and follow an established SOAP format: (Subjective (informal observations), Objective (examination), Assessment (diagnosis) and Plan (treatment plan)). Despite that, doctors vary greatly in how frequently they use FP's, which agrees with Cook and Lalljee's (1970) findings of no correlation between FP use and the mode of discourse. Audience awareness may also play a role in variability. My observations provide multiple examples where the doctors address the transcriptionists directly by making editing comments and thanking them.
3. Training Corpora and FP Model
This study used three base and two derived corpora. Base corpora represent three different sets of dictations described in section 3.1. Derived corpora are variations on the base corpora conditioned in several different ways described in section 3.2.
3.1 Base
• Balanced FP training corpus (BFP-CORPUS) that has 75,887 words of word-by-word transcription data evenly distributed between 16 talkers. This corpus was used to build a BIGRAM-FP-LM which controls the process of populating a no-FP corpus with artificial FP's.
• Unbalanced FP training corpus (UFP-CORPUS) of approximately 500,000 words of all available word-by-word transcription data from approximately 20 talkers. This corpus was used only to calculate average frequency of FP use among all available talkers.
• Finished transcriptions corpus (FT-CORPUS) of 12,978,707 words contains all available dictations and no FP's. It represents over 200 talkers of mixed gender and professional status. The corpus contains no FP's or any other types of disfluencies such as repetitions, repairs and false starts. The language in this corpus is also edited for grammar.
3.2 Derived
• CONTROLLED-FP-CORPUS is a version of the finished transcriptions corpus populated stochastically with 2,665,000 FP's based on the BIGRAM-FP-LM.
• RANDOM-FP-CORPUS-1 (normal density) is another version of the finished transcriptions corpus populated with 916,114 FP's where the insertion point was selected at random in the range between 0 and 29. The random function is based on the average frequency of FPs in the unbalanced UFP-CORPUS, where an FP occurs on the average after every 15th word. Another RANDOM-FP-CORPUS-2 (high density) was used to approximate the frequency of FP's in the CONTROLLED-FP-CORPUS.
4. Models
The language modeling process in this study was conducted in two stages. First, a bigram model containing bigram probabilities of FP's in the balanced BFP-CORPUS was built, followed by four different trigram language models, some of which used corpora generated with the BIGRAM-FP-LM built during the first stage.
4.1 Bigram FP model
This model contains the distribution of FP's obtained by using the following formulas:

$$P(FP \mid w_{i-1}) = C_{w_{i-1}\,FP} / C_{w_{i-1}} \qquad P(FP \mid w_{i+1}) = C_{FP\,w_{i+1}} / C_{w_{i+1}}$$

Thus, each word in a corpus to be populated with FP's becomes a potential landing site for a FP and does or does not receive one based on the probability found in the BIGRAM-FP-LM.
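The sketch below illustrates how a bigram FP model of this kind could be estimated from literal transcripts and then used to populate a clean corpus. Only the P(FP | w_{i-1}) half is shown, and the function names (build_fp_bigram, populate) and the <FP> token are my own assumptions; this is not the ECRL tooling used in the study.

```python
import random
from collections import Counter

FP = "<FP>"   # filled-pause token as marked in the literal transcripts

def build_fp_bigram(transcript_tokens):
    """Estimate P(FP | previous word) from hand-transcribed data containing FP's."""
    prev_counts, prev_fp_counts = Counter(), Counter()
    prev = None
    for tok in transcript_tokens:
        if prev is not None:
            prev_counts[prev] += 1
            if tok == FP:
                prev_fp_counts[prev] += 1
        prev = tok
    return {w: prev_fp_counts[w] / prev_counts[w] for w in prev_counts}

def populate(clean_tokens, fp_model, rng=random.Random(0)):
    """Stochastically insert FP's into a no-FP corpus: every word is a potential
    landing site and receives an FP with probability P(FP | word)."""
    out = []
    for tok in clean_tokens:
        out.append(tok)
        if rng.random() < fp_model.get(tok, 0.0):
            out.append(FP)
    return out
```

Applying populate to the finished-transcriptions text with such a model is the kind of step that would yield a corpus like CONTROLLED-FP-CORPUS, whereas the RANDOM-FP corpora replace the learned probabilities with a uniform insertion rate.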
4.2 Trigram models
The following trigram models were built using ECRL's Transcriber language modeling tools (Valtchev, et al. 1998). Both bigram and trigram cutoffs were set to 3.
• NOFP-LM was built using the FT-CORPUS with no FP's.
• ALLFP-LM was built entirely on CONTROLLED-FP-CORPUS.
• ADAPTFP-LM was built by interpolating ALLFP-LM and NOFP-LM at a 90/10 ratio. Here 90% of the resulting ADAPTFP-LM represents the CONTROLLED-FP-CORPUS and 10% represents FT-CORPUS.
• RANDOMFP-LM-1 (normal density) was built entirely on the RANDOM-FP-CORPUS-1.
• RANDOMFP-LM-2 (high density) was built entirely on the RANDOM-FP-CORPUS-2.
5. Testing Data
Testing data comes from 21 talkers selected at random and represents 3 (1-3 min) dictations for each talker. The talkers are a random mix of male and female medical doctors and practitioners who vary greatly in their use of FP's. Some use literally no FP's (but long silences instead), others use FP's almost every other word. Based on the frequency of FP use, the talkers were roughly split into a high FP user group and a low FP user group. The relevance of such division will become apparent during the discussion of test results.
6. Adaptation
Test results for ALLFP-LM (63.01% avg. word accuracy) suggest that the model over-represents FP's. The recognition accuracy for this model is 4.21 points higher than that of the NOFP-LM (58.8% avg. word accuracy) but lower than that of both the RANDOMFP-LM-1 (67.99% avg. word accuracy) by about 5% and RANDOMFP-LM-2 (65.87% avg. word accuracy) by about 7%. One way of decreasing the FP representation is to correct the BIGRAM-FP-LM, which proves to be computationally expensive because of having to rebuild the large training corpus with each change in the BIGRAM-FP-LM. Another method is to build a NOFP-LM and an ALLFP-LM once and experiment with their relative weights through adaptation. I chose the second method because the ECRL Transcriber toolkit provides an adaptation tool that achieves the goals of the first method much faster. The results show that introducing a NOFP-LM into the equation improves recognition. The difference in recognition accuracy between the ALLFP-LM and ADAPTFP-LM is on average 4.9% across all talkers in ADAPTFP-LM's favor. Separating the talkers into a high FP user group and a low FP user group raises ADAPTFP-LM's gain to 6.2% for high FP users and lowers it to 3.3% for low FP users. This shows that adaptation to no-FP data is, counter-intuitively, more beneficial for high FP users.
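As a rough illustration of the 90/10 adaptation used to build ADAPTFP-LM, here is a minimal sketch of linearly interpolating two language models. It is a generic mixture written for exposition (the function interpolate is my own), not the ECRL adaptation tool.

```python
def interpolate(p_allfp, p_nofp, weight=0.9):
    """Return a model mixing two LMs: P(w | h) = weight*P_allfp(w | h) + (1-weight)*P_nofp(w | h)."""
    def p(word, history):
        return weight * p_allfp(word, history) + (1.0 - weight) * p_nofp(word, history)
    return p

# Example: a 90/10 mix favouring the FP-populated model, as in ADAPTFP-LM.
# adapt_lm = interpolate(allfp_lm, nofp_lm, weight=0.9)
```

Varying the weight is the cheap way to adjust how strongly FP's are represented without rebuilding the populated training corpus.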
Another interesting result is contained in the highlighted fields of Table 1. ADAPTFP-LM based on CONTROLLED-FP- CORPUS has lower perplexity in general. When tested on conditions B and C, ADAPTFP- LM does better on frequent FP users, whereas RANDOMFP-LM-Â does better on infrequent FP users, which is consistent with the recognition accuracy results for the two models (see Table 2). 7.2 Recognition accuracy Recognition accuracy was obtained with ECRL's HResults tool and is summarized in Table 2. ::~. ~,::,~: 1 5140 % [ . . . . . ~ I ~ ~ / ) ~ ~:::l 66.57 % [ ~ ii: ~ii~! iiiiiii!!iiiiiii!i ii]67.14% Table 2. Recognition accuracy tests for LM's. !A~ ! i ~ ~ ) i:~i~::.~:i. ~i!~i I 67.76% 71.46 % 69.23 % 71.24% The results in Table 2 demonstrate two things. First, a FP model performs better than a clean model that has no FP representation~ Second, a FP model based on populating a no-FP training corpus with FP's whose distribution was derived from a 622 small sample of speech data performs better than the one populated with FP's at random based solely on the frequency of FP's. The results also show that ADAPTFP-LM performs slightly better than RANDOMFP- LM-1 on high FP users. The gain becomes more pronounced towards the higher end of the FP use continuum. For example, the scores for the top four high FP users are 62.07% with RANDOMFP-LM-1 and 63.51% with ADAPTFP-LM. This difference cannot be attributed to the fact that RANDOMFP-LM-1 contains fewer FP's than ADAPTFP-LM. The word accuracy rates for RANDOMFP-LM-2 indicate that frequency of FP's in the training corpus is not responsible for the difference in performance between the RANDOM-FP-LM-1 and the ADAPTFP- LM. The frequency is roughly the same for both RANDOMFP-CORPUS-2 and CONTROLLED-FP-CORPUS, but RANDOMFP-LM-2 scores are lower than those of RANDOMFP-LM-1, which allows in absence of further evidence to attribute the difference in scores to the pattern of FP distribution, not their frequency. Conclusion Based on the results so far, several conclusions about FP modeling can be made: 1. Representing FP's in the training data improves both the language model's perplexity and recognition accuracy. 2. It is not absolutely necessary to have a corpus that contains naturally occurring FP's for successful recognition. FP distribution can be extrapolated from a relatively small corpus containing naturally occurring FP's to a larger clean corpus. This becomes vital in situations where the language model has to be built from "clean" text such as finished transcriptions, newspaper articles, web documents, etc. 3. If one is hard-pressed for hand transcribed data with natural FP's, a . random population can be used with relatively good results. FP's are quite common to both quasi- spontaneous monologue and spontaneous dialogue (medical dictation). Research in progress The present study leaves a number of issues to be investigated further: 1. The results for RANDOMFP-LM-1 are very close to those of ADAPTFP-LM. A statistical test is needed in order to determine if the difference is significant. 2. A systematic study of the syntactic as well as discursive contexts in which FP's are used in medical dictations. This will involve tagging a corpus of literal transcriptions for various kinds of syntactic and discourse boundaries such as clause, phrase and theme/rheme boundaries. The results of the analysis of the tagged corpus may lead to investigating which lexical items may be helpful in identifying syntactic and discourse boundaries. 
Although FP's may not always be lexically conditioned, lexical information may be useful in modeling FP's that occur at discourse boundaries due to co- occurrence of such boundaries and certain lexical items. 3. The present study roughly categorizes talkers according to the frequency of FP's in their speech into high FP users and low FP users. A more finely tuned categorization of talkers in respect to FP use as well as its usefulness remain to be investigated. 4. Another area of investigation will focus on the SOAP structure of medical dictations. I plan to look at relative frequency of FP use in the four parts of a medical dictation. Informal observation of data collected so far indicates that FP use is more frequent and different from other parts during the 623 Subjective part of a dictation. This is when the doctor uses fewer frozen expressions and the discourse is closest to a natural conversation. Acknowledgements I would like to thank Joan Bachenko and Michael Shonwetter, at Linguistic Technologies, Inc. and Bruce Downing at the University of Minnesota for helpful discussions and comments. References Chen, S., Beeferman, Rosenfeld, R. (1998). "Evaluation metrics for language models," In DARPA Broadcast News Transcription and Understanding Workshop. Christenfeld, N, Schachter, S and Bilous, F. (1991). "Filled Pauses and Gestures: It's not coincidence," Journal of Psycholinguistic Research, Vol. 20(1). Cook, M. (1977). "The incidence of filled pauses in relation to part of speech," Language and Speech, Vol. 14, pp. 135-139. Cook, M. and Lalljee, M. (1970). "The interpretation of pauses by the listener," Brit. J. Soc. Clin. Psy. Vol. 9, pp. 375-376. Cook, M., Smith, J, and Lalljee, M (1977). "Filled pauses and syntactic complexity," Language and Speech, Vol. 17, pp.11-16. Valtchev, V. Kershaw, D. and Odell, J. 1998. The truetalk transcriber book. Entropic Cambridge Research Laboratory, Cambridge, England. Heeman, P.A. and Loken-Kim, K. and Allen, J.F. (1996). "Combining the detection and correlation of speech repairs," In Proc., ICSLP. Lalljee, M and Cook, M. (1974). "Filled pauses and floor holding: The final test?" Semiotica, Vol. 12, pp.219-225. Maclay, H, and Osgood, C. (1959). "Hesitation phenomena in spontaneous speech," Word, Vol.15, pp. 19-44. Shriberg, E. E. (1994). Preliminaries to a theory of speech disfluencies. Ph.D. thesis, University of California at Berkely. Shriberg, E.E and Stolcke, A. (1996). "Word predictability after hesitations: A corpus- based study,, In Proc. ICSLP. Shriberg, E.E. (1996). "Disfluencies in Switchboard," In Proc. ICSLP. Shriberg, EE. Bates, R. and Stolcke, A. (1997). "A prosody-only decision-tree model for disfluency detection" In Proc. EUROSPEECH. Siu, M. and Ostendorf, M. (1996). "Modeling disfluencies in conversational speech," Proc. ICSLP. Stolcke, A and Shriberg, E. (1996). "Statistical language modeling for speech disfluencies," In Proc. ICASSP. Swerts, M, Wichmann, A and Beun, R. (1996). "Filled pauses as markers of discourse structure," Proc. ICSLP. 624 | 1999 | 83 |
AUTHOR INDEX Abella, Alicia ............................................. 191 Abney, Steven ............................................ 542 Barzilay, Regina ........................................ 550 Bateman, John A ....................................... 127 Bean, David L ............................................ 373 Beil, Franz .......................................... 104, 269 Berland, Matthew ....................................... 57 Bian, Guo-Wei ........................................... 215 Blaheta, Don .............................................. 513 Bloedorn, Eric ............................................ 558 Bratt, Elizabeth Owen .............................. 183 Breck, Eric .................................................. 325 Brill, Eric ....................................................... 65 Bruce, Rebecca F ........................................ 246 Burger, John D ........................................... 325 Canon, Stephen ......................................... 535 Caraballo, Sharon A ................................. 120 Carroll, Glenn .................................... 104, 269 Carroll, John .............................................. 473 Caudal, Patrick ..... ; .................................... 497 Cech, Claude G .......................................... 238 Charniak, Eugene ................................ 57, 513 Chen, Hsin-His .......................................... 215 Chi, Zhiyi ................................................... 535 Cho, Jeong-Mi ............................................ 230 Choi, Won Seug ......................................... 230 Collins, Michael ......................................... 505 Condon, Sherri L ....................................... 238 Content, Alain ........................................... 436 Core, Mark G ............................................. 413 Corston-Oliver, Simon H ......................... 349 Daelemans, Walter .................................... 285 Dohsaka, Kohji .......................................... 200 Dolan, William B ....................................... 349 Dowding, John .......................................... 183 Dras, Mark ................................................... 80 Edwards, William R ................................. 238 Eisner, Jason ............................................... 457 Elhadad, Michael .............................. 144, 550 Florian, Radu ............................................. 167 Fung, Pascale ............................................. 333 Furui, Sadaoki .............................................. 11 Gardent, Claire ............................................ 49 Gates, Barbara ........................................... 558 Gawron, Jean Mark ................................... 183 Geman, Stuart ............................................ 535 Gorin, Allen L ............................................ 191 625 Hajic, Jan .................................................... 505 Harper, Mary P ......................................... 175 Hatzivassiloglou, Vasileios ..................... 135 Hearst, Marti A ................................................ 3 Hepple, Mark ............................................ 465 Hirasawa, Jun-ichi .................................... 200 Hirschman, Lynette .................................. 325 Holt, Alexander ........................................ 451 Hwa, Rebecca .............................................. 
73 Isahara, Hitoshi ......................................... 489 Jacquemin, Christian ........................ 341, 389 Jang, Myung-Gil ....................................... 223 Johnson, Mark ................................... 421,535 Joshi, Aravind ............................................. 41 Kanzaki, Kyoko ......................................... 489 Kasper, Walter .......................................... 405 Kawabata, Takeshi ................................... 200 Kearns, Michael S ..................................... 309 Kiefer, Bernd ..................................... 405, 473 Kis, Bal~zs .................................................. 261 Klein, Ewan ............................................... 451 Knott, Alistair .............................................. 41 Koo, Jessica Li Teng .................................. 443 Krieger, Hans-Ulrich ........................ 405, 473 Kurohashi, Sadao ...................................... 481 Lange, Marielle ......................................... 436 Lapata, Maria ................ ~ ........................... 397 Lee, Lillian ............................................. 25, 33 Light, Marc ................................................ 325 Lim, Chung Yong ..................................... 443 Lin, Dekang ............................................... 317 Lin, Wen-Cheng ........................................ 215 Litman, Diane J ......................................... 309 Malouf, Rob ............................................... 473 Manandhar, Suresh .................................. 293 Mani, Inderjeet .......................................... 558 Marcu, Daniel ............................................ 365 McAllester, David ..................................... 542 McCarley, J. Scott ...................................... 208 McKeown, Kathleen R ............................. 550 Mihalcea, Rada ......... . ................................. 152 Mikheev, Andrei ....................................... 159 Miller, George A ......................................... 21 Miyazaki, Noboru .................................... 200 Moldovan, Dan I ....................................... 152 Moore, Robert ........................................... 183 Morin, Emmanuel ..................................... 389 Myaeng, Sung Hyon ................................. 223 Nagata, Masaaki ....................................... 277 Nakano, Mikio ........................................... 200 Netzer, Yael Dahan ................................... 144 Ng, Hwee Tou ........................................... 443 Ngai, Grace .................................................. 65 Oflazer, Kemal ........................................... 254 O'Hara, Thomas P ..................................... 246 Park, Se Young .......................................... 223 Pereira, Fernando ................................ 33, 542 Prescher, Detlef ................................. 104, 269 Pr6sz6ky, G~ibor ........................................ 261 Ramshaw, Lance ....................................... 505 Rapp, Reinhard ......................................... 519 Resnik, Philip ............................................. 527 Reynar, Jeffrey C ....................................... 357 Riezler, Stefan ............................ 104, 269, 535 Riloff, Ellen ................................................ 373 Roark, Brian ............................................... 421 Rooth, Mats ........................................ 
104, 269 Rupp, C. J ................................................... 405 Sakai, Yasuyuki ......................................... 481 Satta, Giorgio ............................................. 457 Schubert, Lenhart K ................................. 413 Schuler, William ......................................... 88 Seo, Jungyun ............................................. 230 Shaw, James ............................................... 135 Shun, Cheung Chi .................................... 333 Siegel, Eric V .............................................. 112 Steedman, Mark ........................................ 301 Stent, Amanda ........................................... 183 Stone, Matthew ........................................... 41 Tanaka, Hideki .......................................... 381 Thede, Scott M .......................................... 175 Tillmann, Christoph ................................. 505 van den Bosch, Antal ............................... 285 Walker, Marilyn A .................................... 309 Webber, Bonnie ........................................... 41 Wiebe, Janyce M ....................................... 246 Willis, Alistair ........................................... 293 Wintner, Shuly ............................................ 96 Worm, Karsten L ...................................... 405 Xiaohu, Liu ................................................ 333 Yang, Charles D ........................................ 429 Yarowsky, David ...................................... 167 Yokoo, Akio ............................................... 381 STUDENT AUTHOR INDEX Choi, Freddy Y. Y ...................................... 615 Corduneanu, Adrian ................................ 606 Goldberg, Miriam ..................................... 610 Kaiser, Edward C ...................................... 573 Kaufmann, Stefan ..................................... 591 Kinyon, Alexandra .................................... 585 Miyao, Yusuke .......................................... 579 Pakhomov, Sergey V ................................ 619 Saggion, Horacio ...................................... 596 Tetreault, Joel R ......................................... 602 Thomas, Kavita ......................................... 569 626 | 1999 | 84 |
Man* vs. Machine: A Case Study in Base Noun Phrase Learning Eric Brill and Grace Ngai Department of Computer Science The Johns Hopkins University Baltimore, MD 21218, USA Email: (brill,gyn}~cs. jhu. edu Abstract A great deal of work has been done demonstrat- ing the ability of machine learning algorithms to automatically extract linguistic knowledge from annotated corpora. Very little work has gone into quantifying the difference in ability at this task between a person and a machine. This pa- per is a first step in that direction. 1 Introduction Machine learning has been very successful at solving many problems in the field of natural language processing. It has been amply demon- strated that a wide assortment of machine learn- ing algorithms are quite effective at extracting linguistic information from manually annotated corpora. Among the machine learning algorithms stud- ied, rule based systems have proven effective on many natural language processing tasks, including part-of-speech tagging (Brill, 1995; Ramshaw and Marcus, 1994), spelling correc- tion (Mangu and Brill, 1997), word-sense dis- ambiguation (Gale et al., 1992), message un- derstanding (Day et al., 1997), discourse tag- ging (Samuel et al., 1998), accent restoration (Yarowsky, 1994), prepositional-phrase attach- ment (Brill and Resnik, 1994) and base noun phrase identification (Ramshaw and Marcus, In Press; Cardie and Pierce, 1998; Veenstra, 1998; Argamon et al., 1998). Many of these rule based systems learn a short list of simple rules (typ- ically on the order of 50-300) which are easily understood by humans. Since these rule-based systems achieve good performance while learning a small list of sim- ple rules, it raises the question of whether peo- *and Woman. 65 ple could also derive an effective rule list man- ually from an annotated corpus. In this pa- per we explore how quickly and effectively rel- atively untrained people can extract linguistic generalities from a corpus as compared to a ma- chine. There are a number of reasons for doing this. We would like to understand the relative strengths and weaknesses of humans versus ma- chines in hopes of marrying their con~plemen- tary strengths to create even more accurate sys- tems. Also, since people can use their meta- knowledge to generalize from a small number of examples, it is possible that a person could de- rive effective linguistic knowledge from a much smaller training corpus than that needed by a machine. A person could also potentially learn more powerful representations than a machine, thereby achieving higher accuracy. In this paper we describe experiments we per- formed to ascertain how well humans, given an annotated training set, can generate rules for base noun phrase chunking. Much previous work has been done on this problem and many different methods have been used: Church's PARTS (1988) program uses a Markov model; Bourigault (1992) uses heuristics along with a grammar; Voutilainen's NPTool (1993) uses a lexicon combined with a constraint grammar; Juteson and Katz (1995) use repeated phrases; Veenstra (1998), Argamon, Dagan & Kry- molowski(1998) and Daelemaus, van den Bosch & Zavrel (1999) use memory-based systems; Ramshaw & Marcus (In Press) and Cardie & Pierce (1998) use rule-based systems. 2 Learning Base Noun Phrases by Machine We used the base noun phrase system of Ramshaw and Marcus (R&M) as the machine learning system with which to compare the hu- man learners. 
It is difficult to compare different machine learning approaches to base NP annotation, since different definitions of base NP are used in many of the papers, but the R&M system is the best of those that have been tested on the Penn Treebank.[1] To train their system, R&M used a 200k-word chunk of the Penn Treebank Parsed Wall Street Journal (Marcus et al., 1993) tagged using a transformation-based tagger (Brill, 1995) and extracted base noun phrases from its parses by selecting noun phrases that contained no nested noun phrases and further processing the data with some heuristics (like treating the possessive marker as the first word of a new base noun phrase) to flatten the recursive structure of the parse. They cast the problem as a transformation-based tagging problem, where each word is to be labelled with a chunk structure tag from the set {I, O, B}, where words marked "I" are inside some base NP chunk, those marked "O" are not part of any base NP, and those marked "B" denote the first word of a base NP which immediately succeeds another base NP. The training corpus is first run through a part-of-speech tagger. Then, as a baseline annotation, each word is labelled with the most common chunk structure tag for its part-of-speech tag. After the baseline is achieved, transformation rules fitting a set of rule templates are then learned to improve the "tagging accuracy" of the training set. These templates take into consideration the word, part-of-speech tag and chunk structure tag of the current word and all words within a window of 3 to either side of it. Applying a rule to a word changes the chunk structure tag of a word and in effect alters the boundaries of the base NP chunks in the sentence. An example of a rule learned by the R&M system is: change the chunk structure tag of a word from I to B if the word is a determiner, the next word is a noun, and the two previous words both have chunk structure tags of I. In other words, a determiner in this context is likely to begin a noun phrase. The R&M system learns a total of 500 rules. [Footnote 1: We would like to thank Lance Ramshaw for providing us with the base-NP-annotated training and test corpora that were used in the R&M system, as well as the rules learned by this system.]
3 Manual Rule Acquisition
R&M framed the base NP annotation problem as a word tagging problem. We chose instead to use regular expressions on words and part of speech tags to characterize the NPs, as well as the context surrounding the NPs, because this is both a more powerful representational language and more intuitive to a person. A person can more easily consider potential phrases as a sequence of words and tags, rather than looking at each individual word and deciding whether it is part of a phrase or not. The rule actions we allow are:[2]
Add        Add a base NP (bracket a sequence of words as a base NP)
Kill       Delete a base NP (remove a pair of parentheses)
Transform  Transform a base NP (move one or both parentheses to extend/contract a base NP)
Merge      Merge two base NPs
As an example, we consider an actual rule from our experiments: Bracket all sequences of words of: one determiner (DT), zero or more adjectives (JJ, JJR, JJS), and one or more nouns (NN, NNP, NNS, NNPS), if they are followed by a verb (VB, VBD, VBG, VBN, VBP, VBZ). In our language, the rule is written thus:[3]
A
(* .)
({1} t=DT) (* t=JJ[RS]?) (+ t=NNP?S?)
({1} t=VB[DGNPZ]?)
The first line denotes the action, in this case, Add a bracketing.
The second line defines the context preceding the sequence we want to have bracketed -- in this case, we do not care what this sequence is. The third line defines the sequence which we want bracketed, and the last line defines the context following the bracketed sequence. [Footnote 2: The rule types we have chosen are similar to those used by Vilain and Day (1996) in transformation-based parsing, but are more powerful. Footnote 3: A full description of the rule language can be found at http://nlp.cs.jhu.edu/~baseNP/manual.] Internally, the software then translates this rule into the more unwieldy Perl regular expression:
s{(([^\s_]+__DT\s+)([^\s_]+__JJ[RS]?\s+)*([^\s_]+__NNP?S?\s+)+)([^\s_]+__VB[DGNPZ]?\s+)}{( $1 ) $5}g
The actual system is located at http://nlp.cs.jhu.edu/~basenp/chunking. A screenshot of this system is shown in Figure 4. The correct base NPs are enclosed in parentheses and those annotated by the human's rules in brackets. The base NP annotation system created by the humans is essentially a transformation-based system with hand-written rules. The user manually creates an ordered list of rules. A rule list can be edited by adding a rule at any position, deleting a rule, or modifying a rule. The user begins with an empty rule list. Rules are derived by studying the training corpus and NPs that the rules have not yet bracketed, as well as NPs that the rules have incorrectly bracketed. Whenever the rule list is edited, the efficacy of the changes can be checked by running the new rule list on the training set and seeing how the modified rule list compares to the unmodified list. Based on this feedback, the user decides whether to accept or reject the changes that were made. One nice property of transformation-based learning is that in appending a rule to the end of a rule list, the user need not be concerned about how that rule may interact with other rules on the list. This is much easier than writing a CFG, for instance, where rules interact in a way that may not be readily apparent to a human rule writer. To make it easy for people to study the training set, word sequences are presented in one of four colors indicating that they:
1. are not part of an NP either in the truth or in the output of the person's rule set
2. consist of an NP both in the truth and in the output of the person's rule set (i.e. they constitute a base NP that the person's rules correctly annotated)
3. consist of an NP in the truth but not in the output of the person's rule set (i.e. they constitute a recall error)
4. consist of an NP in the output of the person's rule set but not in the truth (i.e. they constitute a precision error)
4 Experimental Set-Up and Results
The experiment of writing rule lists for base NP annotation was assigned as a homework set to a group of 11 undergraduate and graduate students in an introductory natural language processing course.[4] The corpus that the students were given from which to derive and validate rules is a 25k word subset of the R&M training set, approximately 1/8 the size of the full R&M training set. The reason we used a downsized training set was that we believed humans could generalize better from less data, and we thought that it might be possible to meet or surpass R&M's results with a much smaller training set. Figure 1 shows the final precision, recall, F-measure and precision+recall numbers on the training and test corpora for the students.
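Before turning to those results, the rule-to-regex translation described above can be made concrete with a short sketch. The Python fragment below is only an illustration of the idea, not the authors' software (which used Perl); the word__TAG token encoding and the helper name bracket_nps are assumptions made here for the example.

import re

# Tokens are assumed to be encoded as "word__TAG" and separated by spaces,
# e.g. "the__DT big__JJ dog__NN barked__VBD".
NP_RULE = re.compile(
    r"((?:[^\s_]+__DT\s+)"          # exactly one determiner
    r"(?:[^\s_]+__JJ[RS]?\s+)*"     # zero or more adjectives
    r"(?:[^\s_]+__NNP?S?\s+)+)"     # one or more nouns
    r"([^\s_]+__VB[DGNPZ]?)"        # a following verb, kept outside the brackets
)

def bracket_nps(tagged_sentence):
    # Wrap the matched determiner-adjective-noun sequence in parentheses,
    # leaving the following verb outside the new base NP.
    return NP_RULE.sub(r"( \1) \2", tagged_sentence)

print(bracket_nps("the__DT big__JJ dog__NN barked__VBD"))
# -> ( the__DT big__JJ dog__NN ) barked__VBD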
There was very little difference in performance on the training set compared to the test set. This indicates that people, unlike machines, seem immune to overtraining. The time the students spent on the problem ranged from less than 3 hours to almost 10 hours, with an average of about 6 hours. While it was certainly the case that the students with the worst results spent the least amount of time on the problem, it was not true that those with the best results spent the most time -- indeed, the average amount of time spent by the top three students was a little less than the overall average -- slightly over 5 hours. On average, people achieved 90% of their final performance after half of the total time they spent in rule writing. The number of rules in the final rule lists also varied, from as few as 16 rules to as many as 61 rules, with an average of 35.6 rules. Again, the average number for the top three subjects was a little under the average for everybody: 30.3 rules. [Footnote 4: These 11 students were a subset of the entire class. Students were given an option of participating in this experiment or doing a much more challenging final project. Thus, as a population, they tended to be the less motivated students.]
             TRAINING SET (25K Words)                TEST SET
             Precision  Recall  F-Measure  (P+R)/2   Precision  Recall  F-Measure  (P+R)/2
Student 1      87.8%    88.6%     88.2      88.2       88.0%    88.8%     88.4      88.4
Student 2      88.1%    88.2%     88.2      88.2       88.2%    87.9%     88.0      88.1
Student 3      88.6%    87.6%     88.1      88.2       88.3%    87.8%     88.0      88.1
Student 4      88.0%    87.2%     87.6      87.6       86.9%    85.9%     86.4      86.4
Student 5      86.2%    86.8%     86.5      86.5       85.8%    85.8%     85.8      85.8
Student 6      86.0%    87.1%     86.6      86.6       85.8%    87.1%     86.4      86.5
Student 7      84.9%    86.7%     85.8      85.8       85.3%    87.3%     86.3      86.3
Student 8      83.6%    86.0%     84.8      84.8       83.1%    85.7%     84.4      84.4
Student 9      83.9%    85.0%     84.4      84.5       83.5%    84.8%     84.1      84.2
Student 10     82.8%    84.5%     83.6      83.7       83.3%    84.4%     83.8      83.8
Student 11     84.8%    78.8%     81.7      81.8       84.0%    77.4%     80.6      80.7
Figure 1: P/R results of test subjects on training and test corpora
In the beginning, we believed that the students would be able to match or better the R&M system's results, which are shown in Figure 2. It can be seen that when the same training corpus is used, the best students do achieve performances which are close to the R&M system's -- on average, the top 3 students' performances come within 0.5% precision and 1.1% recall of the machine's. In the following section, we will examine the output of both the manual and automatic systems for differences.
5 Analysis
Before we started the analysis of the test set, we hypothesized that the manually derived systems would have more difficulty with potential rules that are effective, but fix only a very small number of mistakes in the training set. The distribution of noun phrase types, identified by their part of speech sequence, roughly obeys Zipf's Law (Zipf, 1935): there is a large tail of noun phrase types that occur very infrequently in the corpus. Assuming there is not a rule that can generalize across a large number of these low-frequency noun phrases, the only way noun phrases in the tail of the distribution can be learned is by learning low-count rules: in other words, rules that will only positively affect a small number of instances in the training corpus. Van der Dosch and Daelemans (1998) show that not ignoring the low count instances is often crucial to performance in machine learning systems for natural language. Do the human-written rules suffer from failing to learn these infrequent phrases?
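One way to make this question concrete is sketched below; this is a hypothetical illustration of the frequency analysis described in the next paragraph, not the authors' code, and the function name and data format are assumptions.

from collections import Counter

def recall_by_training_frequency(train_nps, test_nps, correctly_found_nps):
    # Each base NP is reduced to its POS tag sequence, e.g. ("DT", "JJ", "NN").
    # correctly_found_nps is the multiset of test NPs that a rule set bracketed
    # correctly; recall is then grouped by how often each tag sequence was
    # seen in the training set.
    train_counts = Counter(train_nps)
    found = Counter(correctly_found_nps)
    totals = Counter(test_nps)
    buckets = {}
    for seq, n in totals.items():
        freq = train_counts[seq]
        hits, tries = buckets.get(freq, (0, 0))
        buckets[freq] = (hits + found[seq], tries + n)
    # Recall for each training-set frequency bucket
    return {freq: hits / tries for freq, (hits, tries) in buckets.items()}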
To explore the hypothesis that a primary difference between the accuracy of human and machine is the machine's ability to capture the low frequency noun phrases, we observed how the accuracy of noun phrase annotation of both human and machine derived rules is affected by the frequency of occurrence of the noun phrases in the training corpus. We reduced each base NP in the test set to its POS tag sequence as assigned by the POS tagger. For each POS tag sequence, we then counted the number of times it appeared in the training set and the recall achieved on the test set. The plot of the test set recall vs. the number of appearances in the training set of each tag sequence for the machine and the mean of the top 3 students is shown in Figure 3. For instance, for base NPs in the test set with tag sequences that appeared 5 times in the training corpus, the students achieved an average recall of 63.6% while the machine achieved a recall of 83.5%. For base NPs with tag sequences that appear less than 6 times in the training set, the machine outperforms the students by a recall of 62.8% vs. 54.8%. However, for the rest of the base NPs -- those that appear 6 or more times -- the performances of the machine and students are almost identical: 93.7% for the machine vs. 93.5% for the 3 students, a difference that is not statistically significant. The recall graph clearly shows that for the top 3 students, performance is comparable to the machine's on all but the low frequency constituents. This can be explained by the human's reluctance or inability to write a rule that will only capture a small number of new base NPs in the training set. Whereas a machine can easily learn a few hundred rules, each of which makes a very small improvement to accuracy, this is a tedious task for a person, and a task which apparently none of our human subjects was willing or able to take on.
Training set size (words)  Precision  Recall  F-Measure  (P+R)/2
25k                          88.7%     89.3%    89.0       89.0
200k                         91.8%     92.3%    92.0       92.1
Figure 2: P/R results of the R&M system on test corpus
Figure 3: Test Set Recall vs. Frequency of Appearances in Training Set. [Plot omitted: test set recall (y-axis, roughly 0.3 to 0.9) against the number of appearances of each tag sequence in the training set (x-axis), with separate curves for the machine and the students.]
There is one anomalous point in Figure 3. For base NPs with POS tag sequences that appear 3 times in the training set, there is a large decrease in recall for the machine, but a large increase in recall for the students. When we looked at the POS tag sequences in question and their corresponding base NPs, we found that this was caused by one single POS tag sequence -- that of two successive numbers (CD). The test set happened to include many sentences containing sequences of the type: ...( CD CD ) TO ( CD CD )... as in: ( International/NNP Paper/NNP ) fell/VBD ( 1/CD 3/CD ) to/TO ( 51/CD ½/CD )... while the training set had none. The machine ended up bracketing the entire sequence 1/CD 3/CD to/TO 51/CD ½/CD as a base NP. None of the students, however, made this mistake.
6 Conclusions and Future Work
In this paper we have described research we undertook in an attempt to ascertain how people can perform compared to a machine at learning linguistic information from an annotated corpus, and more importantly to begin to explore the differences in learning behavior between human and machine.
Although people did not match the performance of the machine-learned annotator, it is interesting that these "language novices", with almost no training, were able to come fairly close, learning a small number of powerful rules in a short amount of time on a small training set. This challenges the claim that machine learning offers portability advan- tages over manual rule writing, seeing that rel- atively unmotivated people can near-match the best machine performance on this task in so lit- tle time at a labor cost of approximately US$40. We plan to take this work in a number of di- rections. First, we will further explore whether people can meet or beat the machine's accuracy at this task. We have identified one major weak- ness of human rule writers: capturing informa- tion about low frequency events. It is possible that by providing the person with sufficiently powerful corpus analysis tools to aide in rule writing, we could overcome this problem. We ran all of our human experiments on a fixed training corpus size. It would be interest- ing to compare how human performance varies as a function of training corpus size with how machine performance varies. There are many ways to combine human corpus-based knowledge extraction with ma- chine learning. One possibility would be to com- bine the human and machine outputs. Another would be to have the human start with the out- put of the machine and then learn rules to cor- rect the machine's mistakes. We could also have a hybrid system where the person writes rules with the help of machine learning. For instance, the machine could propose a set of rules and the person could choose the best one. We hope that by further studying both human and ma- chine knowledge acquisition from corpora, we can devise learning strategies that successfully combine the two approaches, and by doing so, further improve our ability to extract useful lin- guistic information from online resources. 70 Acknowledgements The authors would like to thank Ryan Brown, Mike Harmon, John Henderson and David Yarowsky for their valuable feedback regarding this work. This work was partly funded by NSF grant IRI-9502312. References S. Argamon, I. Dagan, and Y. Krymolowski. 1998. A memory-based approach to learning shallow language patterns. In Proceedings of the ITth International Conference on Compu- tational Linguistics, pages 67-73. COLING- ACL. D. Bourigault. 1992. Surface grammatical anal- ysis for the extraction of terminological noun phrases. In Proceedings of the 30th Annual Meeting of the Association of Computational Linguistics, pages 977-981. Association of Computational Linguistics. E. Brill and P. Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In Proceedings of the fif- teenth International Conference on Compu- tational Linguistics (COLING-1994). E. Brill. 1995. Transformation-based error- driven learning and natural language process- ing: A case study in part of speech tagging. Computational Linguistics, December. C. Cardie and D. Pierce. 1998. Error-driven pruning of treebank gramars for base noun phrase identification. In Proceedings of the 36th Annual Meeting of the Association of Computational Linguistics, pages 218-224. Association of Computational Linguistics. K. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143. Association of Computational Lin- guistics. W. Daelemans, A. 
van den Bosch, and J. Zavrel. 1999. Forgetting exceptions is harmful in language learning. In Machine Learning, special issue on natural language learning, volume 11, pages 11-43. to appear. D. Day, J. Aberdeen, L. Hirschman, R. Kozierok, P. Robinson, and M. Vilain. 1997. Mixed-initiative development of language processing systems. In Fifth Conference on Applied Natural Language Processing, pages 348-355. Association for Computational Linguistics, March.
Figure 4: Screenshot of base NP chunking system. [Screenshot contents omitted: the OCR of the web interface was unrecoverable; per the text, it shows the user's rule list, example rules in the pattern language, and corpus sentences with correct base NPs in parentheses and the rules' bracketings in brackets.]
W. Gale, K. Church, and D. Yarowsky. 1992. One sense per discourse. In Proceedings of the 4th DARPA Speech and Natural Language Workshop, pages 233-237. J. Juteson and S. Katz. 1995. Technical terminology: Some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1:9-27. L. Mangu and E. Brill. 1997. Automatic rule acquisition for spelling correction. In Proceedings of the Fourteenth International Conference on Machine Learning, Nashville, Tennessee. M. Marcus, M. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330. L. Ramshaw and M. Marcus. 1994. Exploring the statistical derivation of transformational rule sequences for part-of-speech tagging. In The Balancing Act: Proceedings of the ACL Workshop on Combining Symbolic and Statistical Approaches to Language, New Mexico State University, July. L. Ramshaw and M. Marcus. In Press. Text chunking using transformation-based learning. In Natural Language Processing Using Very Large Corpora. Kluwer. K. Samuel, S. Carberry, and K. Vijay-Shanker. 1998. Dialogue act tagging with transformation-based learning. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, volume 2. Association of Computational Linguistics. A. van der Dosch and W. Daelemans. 1998. Do not forget: Full memory in memory-based learning of word pronunciation. In New Methods in Language Processing, pages 195-204. Computational Natural Language Learning. J. Veenstra. 1998. Fast NP chunking using memory-based learning techniques. In BENELEARN-98: Proceedings of the Eighth Belgian-Dutch Conference on Machine Learning, Wageningen, the Netherlands. M. Vilain and D. Day.
1996. Finite-state parsing by rule sequences. In International Conference on Computational Linguistics, Copenhagen, Denmark, August. The Interna- tional Committee on Computational Linguis- tics. A Voutilainen. 1993. NPTool, a detector of English noun phrases. In Proceedings of the Workshop on Very Large Corpora, pages 48- 57. Association for Computational Linguis- tics. D. Yarowsky. 1994. Decision lists for lexi- cal ambiguity resolution: Application to ac- cent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguis- tics, pages 88-95, Las Cruces, NM. G. Zipf. 1935. The Psycho-Biology of Language. Houghton Mifflin. 72 | 1999 | 9 |
Processes that Shape Conversation and their Implications for Computational Linguistics Susan E. Brennan Department of Psychology State University of New York Stony Brook, NY, US 11794-2500 [email protected] Abstract Experimental studies of interactive language use have shed light on the cognitive and interpersonal processes that shape conversation; corpora are the emergent products of these processes. I will survey studies that focus on under-modelled aspects of interactive language use, including the processing of spontaneous speech and disfluencies; metalinguistic displays such as hedges; interactive processes that affect choices of referring expressions; and how communication media shape conversations. The findings suggest some agendas for computational linguistics. Introduction Language is shaped not only by grammar, but also by the cognitive processing of speakers and addressees, and by the medium in which it is used. These forces have, until recently, received little attention, having been originally consigned to "performance" by Chomsky, and considered to be of secondary importance by many others. But as anyone who has listened to a tape of herself lecturing surely knows, spoken language is formally quite different from written language. And as those who have transcribed conversation are excruciatingly aware, interactive, spontaneous speech is especially messy and disfluent. This fact is rarely acknowledged by psychological theories of comprehension and production (although see Brennan & Schober, in press; Clark, 1994, 1997; Fox Tree, 1995). In fact, experimental psycholinguists still make up most of their materials, so that much of what we know about sentence processing is based on a sanitized, ideal form of language that no one actually speaks. But the field of computational linguistics has taken an interesting turn: Linguists and computational linguists who formerly used made-up sentences are now using naturally- and experimentally-generated corpora on which to base and test their theories. One of the most exciting developments since the early 1990s has been the focus on corpus data. Organized efforts such as LDC and ELRA have assembled large and varied corpora of speech and text, making them widely available to researchers and creators of natural language and speech recognition systems. Finally, Internet usage has generated huge corpora of interactive spontaneous text or "visible conversations" that little resemble edited texts. Of course, ethnographers and sociolinguists who practice conversation analysis (e.g., Sacks, Schegloff, & Jefferson, 1974; Goodwin, 1981) have known for a long time that spontaneous interaction is interesting in its own right, and that although conversation seems messy at first glance, it is actually orderly. Conversation analysts have demonstrated that speakers coordinate with each other such feats as achieving a joint focus of attention, producing closely timed turn exchanges, and finishing each another’s utterances. These demonstrations have been compelling enough to inspire researchers from psychology, linguistics, computer science, and human-computer interaction to turn their attention to naturalistic language data. But it is important to keep in mind that a corpus is, after all, only an artifact—a product that emerges from the processes that occur between and within speakers and addressees. 
Researchers who analyze the textual records of conversation are only overhearers, and there is ample evidence that overhearers experience a conversation quite differently from addressees and from side participants (Schober & Clark, 1989; Wilkes-Gibbs & Clark, 1992). With a corpus alone, there is no independent evidence of what people actually intend or understand at different points in a conversation, or why they make the choices they do. Conversation experiments that provide partners with a task to do have much to offer, such as independent measures of communicative success as well as evidence of precisely when one partner is confused or has reached a hypothesis about the other's beliefs or intentions. Task-oriented corpora in combination with information about how they were generated are important for discourse studies. We still don't know nearly enough about the cognitive and interpersonal processes that underlie spontaneous language use—how speaking and listening are coordinated between individuals as well as within the mind of someone who is switching speaking and listening roles in rapid succession. Hence, determining what information needs to be represented moment by moment in a dialog model, as well as how and when it should be updated and used, is still an open frontier. In this paper I start with an example and identify some distinctive features of spoken language interchanges. Then I describe several experiments aimed at understanding the processes that generate them. I conclude by proposing some desiderata for a dialog model.
Two people in search of a perspective
To begin, consider the following conversational interchange from a laboratory experiment on referential communication. A director and a matcher who could not see each other were trying to get identical sets of picture cards lined up in the same order.
(1) D: ah boy this one ah boy all right it looks kinda like- on the right top there's a square that looks diagonal
    M: uh huh
    D: and you have sort of another like rectangle shape, the- like a triangle, angled, and on the bottom it's uh I don't know what that is, glass shaped
    M: all right I think I got it
    D: it's almost like a person kind of in a weird way
    M: yeah like like a monk praying or something
    D: right yeah good great
    M: all right I got it
(Stellmann & Brennan, 1993)
Several things are apparent from this exchange. First, it contains several disfluencies or interruptions in fluent speech. The director restarts her first turn twice and her second turn once. She delivers a description in a series of installments, with backchannels from the matcher to confirm them. She seasons her speech with fillers like uh, pauses occasionally, and displays her commitment (or lack thereof) to what she is saying with displays like ah boy this one ah boy and I don't know what that is. Even though she is the one who knows what the target picture is, it is the matcher who ends up proposing the description that they both end up ratifying: like a monk praying or something. Once the director has ratified this proposal, they have succeeded in establishing a conceptual pact (see Brennan & Clark, 1996). En route, both partners hedged their descriptions liberally, marking them as provisional, pending evidence of acceptance from the other. This example is typical; in fact, 24 pairs of partners who discussed this object ended up synthesizing nearly 24 different but mutually agreed-upon perspectives.
Finally, the disfluencies, hedges, and turns would have been distributed quite differently if this conversation had been conducted over a different medium—through instant messaging, or if the partners had had visual contact. Next I will consider the proceses that underlie these aspects of interactive spoken communication. 1 Speech is disfluent, and disfluencies bear information The implicit assumptions of psychological and computational theories that ignore disfluencies must be either that people aren't disfluent, or that disfluencies make processing more difficult, and so theories of fluent speech processing should be developed before the research agenda turns to disfluent speech processing. The first assumption is clearly false; disfluency rates in spontaneous speech are estimated by Fox Tree (1995) and by Bortfeld, Leon, Bloom, Schober, and Brennan (2000) to be about 6 disfluencies per 100 words, not including silent pauses. The rate is lower for speech to machines (Oviatt, 1995; Shriberg, 1996), due in part to utterance length; that is, disfluency rates are higher in longer utterances, where planning is more difficult, and utterances addressed to machines tend to be shorter than those addressed to people, often because dialogue interfaces are designed to take on more initiative. The average speaker may believe, quite rightly, that machines are imperfect speech processors, and plan their utterances to machines more carefully. The good news is that speakers can adapt to machines; the bad news is that they do so by recruiting limited cognitive resources that could otherwise be focused on the task itself. As for the second assumption, if the goal is to eventually process unrestricted, natural human speech, then committing to an early and exclusive focus on processing fluent utterances is risky. In humans, speech production and speech processing are done incrementally, using contextual information from the earliest moments of processing (see, e.g., Tanenhaus et al. 1995). This sort of processing requires quite a different architecture and different mechanisms for ambiguity resolution than one that begins processing only at the end of a complete and well-formed utterance. Few approaches to parsing have tried to handle disfluent utterances (notable exceptions are Core & Schubert, 1999; Hindle, 1983; Nakatani & Hirschberg, 1994; Shriberg, Bear, & Dowding, 1992). The few psycholinguistic experiments that have examined human processing of disfluent speech also throw into question the assumption that disfluent speech is harder to process than fluent speech. Lickley and Bard (1996) found evidence that listeners may be relatively deaf to the words in a reparandum (the part that would need to be excised in order for the utterance to be fluent), and Shriberg and Lickley (1993) found that fillers such as um or uh may be produced with a distinctive intonation that helps listeners distinguish them from the rest of the utterance. Fox Tree (1995) found that while previous restarts in an utterance may slow a listener’s monitoring for a particular word, repetitions don’t seem to hurt, and some fillers, such as uh, seem to actually speed monitoring for a subsequent word. What information exists in disfluencies, and how might speakers use it? Speech production processes can be broken into three phases: a message or semantic process, a formulation process in which a syntactic frame is chosen and words are filled in, and an articulation process (Bock, 1986; Bock & Levelt, 1994; Levelt, 1989). 
Speakers monitor their speech both internally and externally; that is, they can make covert repairs at the point when an internal monitoring loop checks the output of the formulation phase before articulation begins, or overt repairs when a problem is discovered after the articulation phase via the speaker's external monitor—the point at which listeners also have access to the signal (Levelt, 1989). According to Nooteboom's (1980) Main Interruption Rule, speakers tend to halt speaking as soon as they detect a problem. Production data from Levelt's (1983) corpus supported this rule; speakers interrupted themselves within or right after a problem word 69% of the time. How are regularities in disfluencies exploited by listeners? We have looked at the comprehension of simple fluent and disfluent instructions in a constrained situation where the listener had the opportunity to develop expectations about what the speaker would say (Brennan & Schober, in press). We tested two hypotheses drawn from some suggestions of Levelt's (1989): that "by interrupting a word, a speaker signals to the addressee that the word is an error," and that an editing expression like er or uh may "warn the addressee that the current message is to be replaced," as with Move to the ye— uh, orange square. We collected naturally fluent and disfluent utterances by having a speaker watch a display of objects; when one was highlighted he issued a command about it, like "move to the yellow square." Sometimes the highlight changed suddenly; this sometimes caused the speaker to produce disfluencies. We recorded enough tokens of simple disfluencies to compare the impact of three ways in which speakers interrupt themselves: immediately after a problem word, within a problem word, or within a problem word and with the filler uh. We reasoned that if a disfluency indeed bears useful information, then we should be able to find a situation where a target word is faster to comprehend in a disfluent utterance than in a fluent one. Imagine a situation in which a listener expects a speaker to refer to one of two objects. If the speaker begins to name one and then stops and names the other, the way in which she interrupts the utterance might be an early clue as to her intentions. So the listener may be faster to recognize her intentions relative to a target word in a disfluent utterance than in an utterance in which disfluencies are absent. We compared the following types of utterances: a. Move to the orange square (naturally fluent) b. Move to the |orange square (disfluency excised) c. Move to the yelloworange square d. Move to the yeorange square e. Move to the yeuh, orange square f. Move to the orange square g. Move to the yeorange square h. Move to the uh, orange square Utterances c, d, and e were spontaneous disfluencies, and f, g, and h were edited versions that replaced the removed material with pauses of equal length to control for timing. In utterances c—h, the reparandum began after the word the and continued until the interruption site (after the unintended color word, color word fragment, or location where this information had been edited out). The edit interval in c—h began with the interruption site, included silence or a filler, and ended with the onset of the repair color word. Response times were calculated relative to the onset of the repair, orange. 
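To keep the terminology straight, the structure assumed in this comparison can be written down as a small data sketch; this is purely illustrative (the class and field names are ours, not the authors').

from dataclasses import dataclass

@dataclass
class DisfluentUtterance:
    # Following the description above: the reparandum is the material that
    # would need to be excised, the edit interval runs from the interruption
    # site to the onset of the repair, and response times are measured from
    # the repair onset.
    before: str         # "Move to the"
    reparandum: str     # e.g. the fragment "ye"; empty for fluent utterances
    edit_interval: str  # e.g. the filler "uh," or a silent pause; may be empty
    repair: str         # "orange square"

# Utterance (e) above, "Move to the ye- uh, orange square":
e = DisfluentUtterance("Move to the", "ye", "uh,", "orange square")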
The results were that listeners made fewer errors, the less incorrect information they heard in the reparandum (that is, the shorter the reparandum), and they were faster to respond to the target word when the edit interval before the repair was longer. They comprehended target words after mid-word interruptions with fillers faster than they did after mid-word interruptions without fillers (since a filler makes the edit interval longer), and faster than they did when the disfluency was replaced by a pause of equal length. This filler advantage did not occur at the expense of accuracy—unlike with disfluent utterances without fillers, listeners made no more errors on disfluent utterances with fillers than they did on fluent utterances. These findings highlight the importance of timing in speech recognition and utterance interpretation. The form and length of the reparandum and edit interval bear consequences for how quickly a disfluent utterance is processed as well as for whether the listener makes a commitment to an interpretation the speaker does not intend. Listeners respond to pauses and fillers on other levels as well, such as to make inferences about speakers’ alignment to their utterances. People coordinate both the content and the process of conversation; fillers, pauses, and self-speech can serve as displays by speakers that provide an account to listeners for difficulties or delays in speaking (Clark, 1994; Clark, 1997; Clark & Brennan, 1991). Speakers signal their Feeling-of-Knowing (FOK) when answering a question by the displays they put on right before the answer (or right before they respond with I don’t know) (Brennan & Williams, 1995; Smith & Clark, 1993). In these experiments, longer latencies, especially ones that contained fillers, were associated with answers produced with a lower FOK and that turned out to be incorrect. Thus in the following example, A1 displayed a lower FOK than A2: Q: Who founded the American Red Cross? A1: .....um......... Florence Nightingale? A2: ......... Clara Barton. Likewise, non-answers (e.g., I don’t know) after a filler or a long latency were produced by speakers who were more likely to recognize the correct answers later on a multiple choice test; those who produced a non-answer immediately did not know the answers. Not only do speakers display their difficulties and metalinguistic knowledge using such devices, but listeners can process this information to produce an accurate Feeling-of-Another's-Knowing, or estimate of the speaker’s likelihood of knowing the correct answer (Brennan & Williams, 1995). These programs of experiments hold implications for both the generation and interpretation of spoken utterances. A system could indicate its confidence in its message with silent pauses, fillers, and intonation, and users should be able to interpret this information accurately. If machine speech recognition were conducted in a fashion more like human speech recognition, timing would be a critical cue and incremental parses would be continually made and unmade. Although this approach would be computationally expensive, it might produce better results with spontaneous speech. 2 Referring expressions are provisional until ratified by addressees. Consider again the exchange in Example (1). After some work, the director and matcher eventually settled on a mutual perspective. When they finished matching the set of 12 picture cards, the cards were shuffled and the task was repeated several more times. 
In the very next round, the conversation went like this: (2) B: nine is that monk praying A: yup Later on, referring was even more efficient: (3) A: three is the monk B: ok A and B, who switched roles on each round, marked the fact that they had achieved a mutual perspective by reusing the same term, monk, in repeated references to the same object. These references tend to shorten over time. In Brennan and Clark (1996), we showed that once people coordinate a perspective on an object, they tend to continue to use the same terms that mark that shared perspective (e.g., the man’s pennyloafer), even when they could use an even shorter basiclevel term (e.g., the shoe, when the set of objects has changed such that it no longer needs to be distinguished from other shoes in the set). This process of conceptual entrainment appears to be partner-specific—upon repeated referring to the same object but with a new partner, speakers were more likely to revert to the basic level term, due in part to the feedback they received from their partners (Brennan & Clark, 1996). These examples depict the interpersonal processes that lead to conceptual entrainment. The director and matcher used many hedges in their initial proposals and counter-proposals (e.g., it’s almost like a person kind of in a weird way, and yeah like like a monk praying or something). Hedges dropped out upon repeated referring. We have proposed (Brennan & Clark, 1996) that hedges are devices for signaling a speaker's commitment to the perspective she is proposing. Hedges serve social needs as well, by inviting counter-proposals from the addressee without risking loss of face due to overt disagreements (Brennan & Ohaeri, 1999). It is worth noting that people's referring expressions converge not only with those of their human partners, but also with those of computer partners (Brennan, 1996; Ohaeri, 1995). In our text and spoken dialogue Wizardof-Oz studies, when simulated computer partners used deliberately different terms than the ones people first presented to them, people tended to adopt the computers' terms, even though the computers had apparently "understood" the terms people had first produced (Brennan, 1996; Ohaeri, 1995). The impetus toward conceptual entrainment marked by repeated referring expressions appears to be so compelling that native speakers of English will even produce non-idiomatic referring expressions (e.g., the chair in which I shake my body, referring to a rocking chair) in order to ratify a mutuallyachieved perspective with non-native speakers (Bortfeld & Brennan, 1987). Such findings hold many implications for utterance generation and the design of dialogue models. Spoken and text dialogue interfaces of the future should include resources for collaboration, including those for negotiating meanings, modeling context, recognizing which referring expressions are likely to index a particular conceptualization, keeping track of the referring expressions used by a partner so far, and reusing those expressions. This would help solve the “vocabulary problem” in humancomputer interaction (Brennan, to appear). 3 Grounding varies with the medium Grounding is the process by which people coordinate their conversational activities, establishing, for instance, that they understand one another well enough for current purposes. 
There are many activities to coordinate in conversation, each with its own cost, including: • getting an addressee’s attention in order to begin the conversation • planning utterances the addressee is likely to understand • producing utterances • recognizing when the addressee does not understand • initiating and managing repairs • determining what inferences to make when there is a delay • receiving utterances • recognizing the intention behind an utterance • displaying or acknowledging this understanding • keeping track of what has been discussed so far (common ground due to linguistic co-presence) • determining when to take a turn • monitoring and furthering the main purposes or tasks at hand • serving other important social needs, such as face-management (adapted from Clark & Brennan, 1991) Most of these activities are relatively easy to do when interaction is face-to-face. However, the affordances of different media affect the costs of coordinating these activities. The actual forms of speech and text corpora are shaped by how people balance and trade off these costs in the context of communication. In a referential communication study, I compared task-oriented conversations in which one person either had or didn’t have visual evidence about the other’s progress (Brennan, 1990). Pairs of people discussed many different locations on identical maps displayed on networked computer screens in adjoining cubicles. The task was for the matcher to get his car icon parked in the same spot as the car displayed on only the director’s screen. In one condition, Visual Evidence, the director could see the matcher’s car icon and its movements. In the other, Verbal-Only Evidence, she could not. In both conditions, they could talk freely. Language-action transcripts were produced for a randomly chosen 10% of 480 transcribed interchanges. During each trial, the x and y coordinates of the matcher's icon were recorded and time-stamped, as a moment-bymoment estimate of where the matcher thought the target location was. For the sample of 48 trials, I plotted the distance between the matchers' icon and the target (the director's icon) over time, to provide a visible display of how their beliefs about the target location converged. Sample time-distance plots are shown in Figures 1 and 2. Matchers' icons got closer to the target over time, but not at a steady rate. Typically, distance diminished relatively steeply early in the trial, while the matcher interpreted the director's initial description and rapidly moved his icon toward the target location. Many of the plots then showed a distinct elbow followed by a nearly horizontal region, meaning that the matcher then paused or moved away only slightly before returning to park his car icon. This suggests that it wasn’t sufficient for the matcher to develop a reasonable hypothesis about what the director meant by the description she presented, but that they also had to ground their understanding, or exchange sufficient evidence in order to establish mutual belief. The region after the elbow appears to correspond to the acceptance phase proposed by Clark & Schaefer (1989); the figures show that it was much shorter when directors had visual evidence than when they did not. The accompanying speech transcripts, when synchronized with the time-distance plots, showed that matchers gave verbal acknowledgements when directors did not have visual evidence and withheld them when directors did have visual evidence. 
Matchers made this adjustment to directors even though the information on the matchers' own screen was the same for both conditions, which alternated after every 10 locations for a total of 80 locations discussed by each pair.
Figure 1: Time-Distance Plot of Matcher-Director Convergence, Without Visual Evidence of the Matcher's Progress
Figure 2: Time-Distance Plot of Matcher-Director Convergence, With Visual Evidence of the Matcher's Progress
[Plots omitted: each figure shows the matcher's distance from the target (y-axis, 0 to 350) against time in seconds (x-axis, 0 to 30).]
These results document the grounding process and the time course of how directors' and matchers' hypotheses converge. The process is a flexible one; partners shift the responsibility to whomever can pay a particular cost most easily, expending the least collaborative effort (Clark & Wilkes-Gibbs, 1986). In another study of how media affect conversation (Brennan & Ohaeri, 1999; Ohaeri, 1998) we looked at how grounding shapes conversation held face-to-face vs. via chat windows in which people sent text messages that appeared immediately on their partners' screens. Three-person groups had to reach a consensus account of a complex movie clip they had viewed together. We examined the costs of serving face-management needs (politeness) and looked at devices that serve these needs by giving a partner options or seeking their input. The devices counted were hedges and questions. Although both kinds of groups recalled the events equally well, they produced only half as many words typing as speaking. There were much lower rates of hedging (per 100 words) in the text conversations than face-to-face, but the same rates of questions. We explained these findings by appealing to the costs of grounding over different media: Hedging requires using additional words, and therefore is more costly in typed than spoken utterances. Questions, on the other hand, require only different intonation or punctuation, and so are equally easy, regardless of medium. The fact that people used just as many questions in both kinds of conversations suggests that people in electronic or remote groups don't cease to care about face-management needs, as some have suggested; it's just harder to meet these needs when the medium makes the primary task more difficult.
Computational dialogue systems (both text and spoken) should include resources for collaboration. When a new referring expression is introduced, it could be marked as provisional. Fillers can be used to display trouble, and hedges, to invite input. Dialogue models should track the forms of referring expressions used in a discourse so far, enabling agents to use the same terms consistently to refer to the same things. Because communication media shape conversations and their emergent corpora, minor differences in features of a dialogue interface can have major impact on the form of the language that is generated, as well as on coordination costs that language users pay. Finally, dialogue models should keep a structured record of jointly achieved contributions that is updated and revised incrementally. No agent is omniscient; a dialogue model represents only one agent's estimate of the common ground so far (see Cahn & Brennan, 1999). There are many open and interesting questions about how to best structure the contributions from interacting partners into a dialogue model, as well as how such a model can be used to support incremental processes of generation, interpretation, and repair. Acknowledgements This material is based upon work supported by the National Science Foundation under Grants No. IRI9402167, IRI9711974, and IRI9980013. I thank Michael Schober for helpful comments. References Bock, J. K. (1986). Meaning, sound, and syntax: Lexical priming in sentence production. J. of Experimental Psychology: Learning, Memory, & Cognition, 12, 575-586. Bock, K., & Levelt, W. J. M. (1994). Language production: Grammatical encoding. In M.A. Gernsbacher (Ed.), Handbook of psycholinguistics (pp. 945-984). London: Academic Press. Bortfeld, H., & Brennan, S. E. (1997). Use and acquisition of idiomatic expressions in referring by native and non-native speakers. Discourse Processes, 23, 119-147. Bortfeld, H., Leon, S. D., Bloom, J. E., Schober, M. F., & Brennan, S. E. (2000). Disfluency rates in spontaneous speech: Effects of age, relationship, topic, role, and gender. Manuscript under review. Brennan, S. E. (1990). Seeking and providing evidence for mutual understanding. Unpublished doctoral dissertation. Stanford University. Brennan, S. E. (1996). Lexical entrainment in spontaneous dialog. Proc. 1996 International Symposium on Spoken Dialogue (ISSD-96) (pp. 4144). Acoustical Society of Japan: Phila., PA. Brennan, S. E. (to appear). The vocabulary problem in spoken dialog systems. In S. Luperfoy (Ed.), Automated Spoken Dialog Systems, Cambridge, MA: MIT Press. Brennan, S. E., & Clark, H. H. (1996). Conceptual pacts and lexical choice in conversation. J. of Experimental Psychology: Learning, Memory, & Cognition, 22, 1482-1493. Brennan, S. E., & Ohaeri, J. O. (1999). Why do electronic conversations seem less polite? The costs and benefits of hedging. Proc. Int. Joint Conference on Work Activities, Coordination, and Collaboration (WACC ’99) (pp. 227-235). San Francisco, CA: ACM. Brennan, S. E., & Schober, M. F. (in press). How listeners compensate for disfluencies in spontaneous speech. J. of Memory & Language. Brennan, S. E., & Williams, M. (1995). The feeling of another’s knowing: Prosody and filled pauses as cues to listeners about the metacognitive states of speakers. J. of Memory & Language, 34, 383-398. Cahn, J. E., & Brennan, S. E. (1999). A psychological model of grounding and repair in dialog. Proc. AAAI Fall Symposium on Psychological Models of Communication in Collaborative Systems (pp. 
2533). North Falmouth, MA: AAAI. Clark, H.H. (1994). Managing problems in speaking. Speech Communication, 15, 243-250. Clark, H. H. (1997). Dogmas of understanding. Discourse Processes, 23, 567-598. Clark, H. H., & Brennan, S. E. (1991). Grounding in communication. In L. B. Resnick, J. Levine, & S. D. Teasley (Eds.), Perspectives on socially shared cognition (pp. 127-149). Clark, H.H. & Schaefer, E.F. (1989). Contributing to discourse. Cognitive Science, 13, 259-294. Clark, H.H. & Wilkes-Gibbs, D. (1986). Referring as a collaborative process. Cognition, 22, 1-39. Core, M. G., & Schubert, L. K. (1999). A model of speech repairs and other disruptions. Proc. AAAI Fall Symposium on Psychological Models of Communication in Collaborative Systems. North Falmouth, MA: AAAI. Fox Tree, J.E. (1995). The effects of false starts and repetitions on the processing of subsequent words in spontaneous speech. J. of Memory & Language, 34, 709-738. Goodwin, C. (1981). Conversational Organization: Interaction between speakers and hearers. New York: Academic Press. Hindle, D. (1983). Deterministic parsing of syntactic non-fluencies. In Proc. of the 21st Annual Meeting, Association for Computational Linguistics, Cambridge, MA, pp. 123-128. Levelt, W. J. M. (1983). Monitoring and self-repair in speech. Cognition, 14, 41-104. Levelt, W. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press. Lickley, R., & Bard, E. (1996). On not recognizing disfluencies in dialog. Proc. International Conference on Spoken Language Processing (ICSLIP ‘96), Philadelphia, 1876-1879. Nakatani, C. H., & Hirschberg, J. (1994). A corpusbased study of repair cues in spontaneous speech. J of the Acoustical Society of America, 95, 1603-1616. Nooteboom, S. G. (1980). Speaking and unspeaking: Detection and correction of phonological and lexical errors in spontaneous speech. In V. A. Fromkin (Ed.), Errors in linguistic performance. New York: Academic Press. Ohaeri, J. O. (1995). Lexical convergence with human and computer partners: Same cognitive process? Unpub. Master's thesis. SUNY, Stony Brook, NY. Ohaeri, J. O. (1998). Group processes and the collaborative remembering of stories. Unpublished doctoral dissertation. SUNY, Stony Brook, NY. Oviatt, S. (1995). Predicting spoken disfluencies during human-computer interaction. Computer Speech and Language, 9, 19-35. Sacks, H., Schegloff, E., & Jefferson, G. (1974). A simplest systematics for the organization of turntaking in conversation. Language, 50, 696-735. Schober, M.F. & Clark, H.H. (1989). Understanding by addressees and overhearers. Cognitive Psychology, 21, 211-232. Shriberg, E. (1996). Disfluencies in Switchboard. Proceedings, International Conference on Spoken Language Processing, Vol. Addendum, 11-14. Philadelphia, PA, 3-6 October. Shriberg, E., Bear, J., & Dowding, J. (1992). Automatic detection and correction of repairs in human-computer dialog. In M. Marcus (Ed.), Proc DARPA Speech and Natural Language Workshop (pp. 419-424). Morgan Kaufmann. Shriberg, E.E. & Lickley, R.J. (1993). Intonation of clause-internal filled pauses. Phonetica, 50, 172-179. Smith, V., & Clark, H. H. (1993). On the course of answering questions. J. of Memory and Language, 32, 25-38. Stellmann, P., & Brennan, S. E. (1993). Flexible perspective-setting in conversation. Abstracts of the Psychonomic Society, 34th Annual Meeting (p. 20), Washington, DC. Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. (1995). 
Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634. Wilkes-Gibbs, D., & Clark, H.H. (1992). Coordinating beliefs in conversation. Journal of Memory and Language, 31, 183-194. | 2000 | 1 |
Robust Temporal Processing of News Inderjeet Mani and George Wilson The MITRE Corporation, W640 11493 Sunset Hills Road Reston, Virginia 22090 {imani, gwilson}@mitre.org Abstract We introduce an annotation scheme for temporal expressions, and describe a method for resolving temporal expressions in print and broadcast news. The system, which is based on both hand-crafted and machine-learnt rules, achieves an 83.2% accuracy (Fmeasure) against hand-annotated data. Some initial steps towards tagging event chronologies are also described. Introduction The extraction of temporal information from news offers many interesting linguistic challenges in the coverage and representation of temporal expressions. It is also of considerable practical importance in a variety of current applications. For example, in question-answering, it is useful to be able to resolve the underlined reference in “the next year, he won the Open” in response to a question like “When did X win the U.S. Open?”. In multidocument summarization, providing finegrained chronologies of events over time (e.g., for a biography of a person, or a history of a crisis) can be very useful. In information retrieval, being able to index broadcast news stories by event times allows for powerful multimedia browsing capabilities. Our focus here, in contrast to previous work such as (MUC 1998), is on resolving time expressions, especially indexical expressions like “now”, “today”, “tomorrow”, “next Tuesday”, “two weeks ago”, “20 mins after the next hour”, etc., which designate times that are dependent on the speaker and some “reference” time1. In this paper, we discuss a temporal annotation scheme for representing dates and times in temporal expressions. This is followed by details and performance measures for a tagger to extract this information from news sources. The tagger uses a variety of hand-crafted and machine-discovered rules, all of which rely on lexical features that are easily recognized. We also report on a preliminary effort towards constructing event chronologies from this data. 1 Annotation Scheme Any annotation scheme should aim to be simple enough to be executed by humans, and yet precise enough for use in various natural language processing tasks. Our approach (Wilson et al. 2000) has been to annotate those things that a human could be expected to tag. Our representation of times uses the ISO standard CC:YY:MM:DD:HH:XX:SS, with an optional time zone (ISO-8601 1997). In other words, time points are represented in terms of a calendric coordinate system, rather than a real number line. The standard also supports the representation of weeks and days of the week in the format CC:YY:Wwwd where ww specifies which week within the year (1-53) and d specifies the day of the week (1-7). For example, “last week” might receive the VAL 20:00:W16. A time (TIMEX) expression (of type TIME or DATE) representing a particular point on the ISO line, e.g., “Tuesday, November 2, 2000” (or “next Tuesday”) is represented with the ISO time Value (VAL), 20:00:11:02. Interval expressions like “From 1 Some of these indexicals have been called “relative times” in the (MUC 1998) temporal tagging task. May 1999 to June 1999”, or “from 3 pm to 6 pm” are represented as two separate TIMEX expressions. In addition to the values provided by the ISO standard, we have added several extensions, including a list of additional tokens to represent some commonly occurring temporal units; for example, “summer of ‘69” could be represented as 19:69:SU. 
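As a rough illustration of how such point and week values can be composed from a calendar date (a sketch only; the helper functions below are ours and are not part of the annotation scheme or of the system described here):

from datetime import date

def iso_point_value(d):
    # Full date as CC:YY:MM:DD, e.g. date(2000, 11, 2) -> "20:00:11:02"
    return "%02d:%02d:%02d:%02d" % (d.year // 100, d.year % 100, d.month, d.day)

def iso_week_value(d, with_day=False):
    # Week-based value CC:YY:Www, optionally with the day-of-week digit appended
    year, week, weekday = d.isocalendar()
    val = "%02d:%02d:W%02d" % (year // 100, year % 100, week)
    return val + str(weekday) if with_day else val

print(iso_point_value(date(2000, 11, 2)))  # 20:00:11:02
print(iso_week_value(date(2000, 4, 20)))   # 20:00:W16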
The intention here is to capture the information in the text while leaving further interpretation of the Values to applications using the markup. It is worth noting that there are several kinds of temporal expressions that are not to be tagged, and that other expressions tagged as time expressions are not assigned a value, because doing so would violate the simplicity and precision requirements. We do not tag unanchored intervals, such as “half an hour (long)” or “(for) one month”. Non-specific time expressions like generics, e.g., “April” in “April is usually wet”, or “today” in “today’s youth”, and indefinites, e.g., “a Tuesday”, are tagged without a value. Finally, expressions which are ambiguous without a strongly preferred reading are left without a value.

This representation treats points as primitive (as do (Bennett and Partee 1972) and (Dowty 1979), among others); other representations treat intervals as primitive, e.g., (Allen 1983). Arguments can be made for either position, as long as both intervals and points are accommodated. The annotation scheme does not force committing to end-points of intervals, and is compatible with current temporal ontologies such as (KSL-Time 1999); this may eventually help support advanced inferential capabilities based on temporal information extraction.

2 Tagging Method

2.1 Overall Architecture

The system architecture of the temporal tagger is shown in Figure 1. The tagging program takes in a document which has been tokenized into words and sentences and tagged for part of speech. The program passes each sentence first to a module that identifies time expressions, and then to another module (SC) that resolves self-contained time expressions. The program then takes the entire document and passes it to a discourse processing module (DP) which resolves context-dependent time expressions (indexicals as well as other expressions). The DP module tracks transitions in temporal focus, and uses syntactic clues and various other knowledge sources. The module uses a notion of Reference Time to help resolve context-dependent expressions. Here, the Reference Time is the time a context-dependent expression is relative to. In our work, the reference time is assigned the value of either the Temporal Focus or the document (creation) date. The Temporal Focus is the time currently being talked about in the narrative. The initial reference time is the document date.

2.2 Assignment of Time Values

We now discuss the modules that assign values to identified time expressions. Times which are fully specified are tagged with their value by the SC module, e.g., “June 1999” as 19:99:06. The DP module uses an ordered sequence of rules to handle the context-dependent expressions. These cover the following cases:

Explicit offsets from reference time: Indexicals like “yesterday”, “today”, “tomorrow”, “this afternoon”, etc., are ambiguous between a specific and a non-specific reading. The specific use (distinguished from the generic one by machine-learned rules discussed below) is assigned a value based on an offset from the reference time, but the generic use is not.

Positional offsets from reference time: Expressions like “next month”, “last year” and “this coming Thursday” use lexical markers (“next”, “last”, “this coming”) to describe the direction and magnitude of the offset from the reference time.
Implicit offsets based on verb tense: Expressions like “Thursday” in “the action taken Thursday”, or bare month names like “February”, are passed to rules that try to determine the direction of the offset from the reference time. Once the direction is determined, the magnitude of the offset can be computed. The tense of a neighboring verb is used to decide which direction to look in to resolve the expression. Such a verb is found by first searching backward to the last TIMEX, if any, in the sentence, then forward to the end of the sentence, and finally backwards to the beginning of the sentence. If the tense is past, then the direction is backwards from the reference time. If the tense is future, the direction is forward. If the verb is present tense, the expression is passed on to subsequent rules for resolution. For example, in the following passage, “Thursday” is resolved to the Thursday prior to the reference date because “was”, which has a past tense tag, is found earlier in the sentence:

    The Iraqi news agency said the first shipment of 600,000 barrels was loaded Thursday by the oil tanker Edinburgh.

Further use of lexical markers: Other expressions lacking a value are examined for the nearby presence of a few additional markers, such as “since” and “until”, that suggest the direction of the offset.

Nearby Dates: If a direction from the reference time has not been determined, some dates, like “Feb. 14”, and other expressions that indicate a particular date, like “Valentine’s Day”, may still be untagged because the year has not been determined. If the year can be chosen in a way that makes the date in question less than a month from the reference date, that year is chosen. For example, if the reference date is Feb. 20, 2000 and the expression “Feb. 14” has not been assigned a value, this rule would assign it the value Feb. 14, 2000. Dates more than a month away are not assigned values by this rule. (A schematic sketch of two of these rules is given below.)

3 Time Tagging Performance

3.1 Test Corpus

Two different genres were used in the testing: print news and broadcast news transcripts. The print news consisted of 22 New York Times (NYT) articles from January 1998. The broadcast news data consisted of 199 transcripts of Voice of America (VOA) broadcasts from January of 1998, taken from the TDT2 collection (TDT2 1999). The print data was much cleaner than the transcribed broadcast data, in the sense that there were very few typographical errors and spelling and grammar were good. On the other hand, the print data also had longer, more complex sentences with somewhat greater variety in the words used to represent dates. The broadcast collection had a greater proportion of expressions referring to time of day, primarily due to repeated announcements of the current time and the time of upcoming shows. The test data was marked by hand-tagging the time expressions and assigning values to them where appropriate. This hand-marked data was used to evaluate the performance of a frozen version of the machine tagger, which was trained and engineered on a separate body of NYT, ABC News, and CNN data. Only the body of the text was included in the tagging and evaluation.

3.2 System Performance

The system performance is shown in Table 1 [2]. Note that if the human said the TIMEX had no value, and the system decided it had a value, this is treated as an error. A baseline of just tagging values of absolute, fully specified TIMEXs (e.g., “January 31st, 1999”) is shown for comparison in parentheses.

[2] The evaluated version of the system does not adjust the Reference Time for subsequent sentences.
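As a schematic illustration of the implicit-offset and nearby-dates rules of Section 2.2, the sketch below restates them in code. This is a hypothetical sketch, not the authors' implementation: the function names, the interface, and the reading of "less than a month" as 31 days are assumptions.

```python
# Hypothetical sketch of two DP-module rules from Section 2.2 (not the
# authors' implementation): the tense-based direction rule for bare weekdays
# and the nearby-dates rule.
from datetime import date, timedelta
from typing import Optional

def direction_from_tense(tense: str) -> int:
    """Past tense -> look backward (-1); future -> forward (+1);
    present -> defer to subsequent rules (0)."""
    return {"past": -1, "future": +1}.get(tense, 0)

def resolve_weekday(target_weekday: int, reference: date, tense: str) -> Optional[date]:
    """Resolve a bare weekday (0=Monday .. 6=Sunday) relative to the
    reference time, in the direction suggested by a neighboring verb's tense."""
    step = direction_from_tense(tense)
    if step == 0:
        return None  # passed on to subsequent rules
    d = reference + timedelta(days=step)
    while d.weekday() != target_weekday:
        d += timedelta(days=step)
    return d

def resolve_nearby_date(month: int, day: int, reference: date) -> Optional[date]:
    """Nearby-dates rule: choose the year that puts the date within a month
    (approximated here as 31 days) of the reference date; otherwise leave
    the expression unassigned."""
    for year in (reference.year - 1, reference.year, reference.year + 1):
        candidate = date(year, month, day)
        if abs((candidate - reference).days) < 31:
            return candidate
    return None

ref = date(2000, 2, 20)
print(resolve_weekday(3, ref, "past"))   # "Thursday" + past tense -> 2000-02-17
print(resolve_nearby_date(2, 14, ref))   # "Feb. 14" -> 2000-02-14
print(resolve_nearby_date(7, 4, ref))    # "July 4"  -> None (over a month away)
```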
Obviously, we would prefer a larger data sample; we are currently engaged in an effort within the information extraction community to annotate a large sample of the TDT2 collection and to conduct an inter-annotator reliability study.

Error Analysis

Table 2 shows the number of errors made by the program, classified by type of error. Of these 138 errors (5 on TIME, 133 on DATE), only 2 were due to errors in the source. 14 of the 138 errors (9 NYT vs. 5 VOA) were due to the document date being incorrect as a reference time.

Part-of-speech tagging: Some errors, both in the identification of time expressions and in the assignment of values, can be traced to incorrect part-of-speech tagging in the preprocessing; many of these errors should be easily correctable.

TIMEX expressions

A total of 44 errors were made in the identification of TIMEX expressions.

Not yet implemented: The biggest source of errors in identifying time expressions was formats that had not yet been implemented. For example, one third (7 of 21, 5 of which were of type TIME) of all missed time expressions came from numeric expressions being spelled out, e.g., “nineteen seventy-nine”. More than two thirds (11 of 16) of the time expressions for which the program incorrectly found the boundaries of the expression (bad extent) were due to the unimplemented pattern “Friday the 13th”. Generalization of the existing patterns should correct these errors.

Proper Name Recognition: A few items were spuriously tagged as time expressions (extra TIMEX). One source of this that should be at least partially correctable is the tagging of apparent dates in proper names, e.g., “The July 26 Movement”, “The Tonight Show”, “USA Today”. The rules that identify time expressions assumed that these had been tagged as lexical items, but this lexicalization has not yet been implemented.

Values assigned

A total of 94 errors were made in the assignment of values to time expressions that had been correctly identified.

Generic/Specific: In the combined data, 25 expressions were assigned a value when they should have received none, because the expression was a generic usage that could not be placed on a time line. This is the single biggest source of errors in the value assignments.

4 Machine Learning Rules

Our approach has been to develop initial rules by hand, conduct an initial evaluation on an unseen test set, determine major errors, and then handle those errors by augmenting the rule set with additional rules discovered by machine learning. As noted earlier, distinguishing between specific use of a time expression and a generic use (e.g., “today”, “now”, etc.) was and is a significant source of error. Some of the other problems that these methods could be applied to include distinguishing a calendar year reference from a fiscal year one (as in “this year”), and distinguishing seasonal from specific-day references. For example, “Christmas” has a seasonal use (e.g., “I spent Christmas visiting European capitals”) distinct from its use to refer to the specific day “December 25th” (e.g., “We went to a great party on Christmas”). Here we discuss machine learning results in distinguishing the specific use of “today” (meaning the day of the utterance) from its generic use meaning “nowadays”.
In addition to features based on words co-occurring with “today” (the Said, Will, Even, Most, and Some features below), some other features (DOW and CCYY) were added based on a granularity hypothesis. Specifically, it seems possible that “today” meaning the day of the utterance sets a scale of events at a day or a small number of days, whereas the generic use, “nowadays”, seems to have a broader scale. Therefore, terms that might point to one of these scales, such as the names of days of the week, the word “year”, and four-digit years, were also included in the training features. To summarize, the features we used for the “today” problem are as follows (features are boolean except for the string-valued POS1 and POS2):

Poss: whether “today” has a possessive inflection
Qcontext: whether “today” is inside a quotation
Said: presence of “said” in the same sentence
Will: presence of “will” in the same sentence
Even: presence of “even” in the same sentence
Most: presence of “most” in the same sentence
Some: presence of “some” in the same sentence
Year: presence of “year” in the same sentence
CCYY: presence of a four-digit year in the same sentence
DOW: presence of a day-of-the-week expression (“Monday” through “Sunday”) in the same sentence
FW: “today” is the first word of the sentence
POS1: part of speech of the word before “today”
POS2: part of speech of the word after “today”
Label: specific or non-specific (class label)

Table 3 shows the performance of different classifiers in classifying occurrences of “today” as generic versus specific. The results are for 377 training vectors and 191 test vectors, measured in terms of Predictive Accuracy (the percentage of test vectors correctly classified). We incorporated some of the rules learnt by C4.5 Rules (the only classifier which directly output rules) into the current version of the program. These rules included classifying “today” as generic based on (1) the feature Most being true (74.1% accuracy), or (2) the feature FW being true and Poss, Some, and Most being false (67.4% accuracy). The granularity hypothesis was partly borne out in that C4.5 Rules also discovered that the mention of a day of the week (e.g., “Monday”) anywhere in the sentence predicted specific use (73.3% accuracy).

5 Towards Chronology Extraction

Event Ordering

Our work in this area is highly preliminary. To extract temporal relations between events, we have developed an event-ordering component, following (Song and Cohen 1991). We encode the tense associated with each verb using their modified Reichenbachian (Reichenbach 1947) representation, based on the tuple <s_i, lge, r_i, lge, e_i>. Here s_i is an index for the speech time, r_i for the reference time, and e_i for the event time, with lge being one of the temporal relations precedes, follows, or coincides. With each successive event, the temporal focus is either maintained or shifted, and a temporal ordering relation between the event and the focus is asserted, using heuristics defining coherent tense sequences; see (Song and Cohen 1991) for more details. Note that the tagged TIME expressions aren't used in determining these inter-event temporal relations, so this event-ordering component could be used to order events which don't have time VALs.

Event Time Alignment

In addition, we have also investigated the alignment of events on a calendric line, using the tagged TIME expressions. The processing, applied to documents tagged by the time tagger, is in two stages.
In the first stage, for each sentence, each “taggable verb occurrence” lacking a time expression is given the VAL of the immediately previous time expression in the sentence. Taggable verb occurrences are all verb occurrences except auxiliaries, modals, and verbs following “to”, “not”, or specific modal verbs. In turn, when a time expression is found, the immediately previous verb lacking a time expression is given that expression's VAL as its TIME. In the second stage, each taggable verb in a sentence lacking a time expression is given the TIME of the immediately previous verb in the sentence which has one, under the default assumption that the temporal focus is maintained (a schematic sketch of this propagation is given before Figure 2). Of course, rather than blindly propagating time expressions to events based on proximity, we should try to represent relationships expressed by temporal coordinators like “when”, “since”, “before”, as well as explicitly temporally anchored events, like “ate at 3 pm”. The event-aligner component uses a very simple method, intended to serve as a baseline and to help us gain an understanding of the issues involved. In the future, we expect to advance to event-alignment algorithms which rely on a syntactic analysis, and which will be compared against this baseline.

Assessment

An example of the chronological tagging of events offered by these two components is shown in Figure 2, along with the TIMEX tags extracted by the time tagger. Here each taggable verb is given an event index, with the precedes attribute indicating one or more event indices which it precedes temporally. (Attributes irrelevant to the example aren't shown.) Information of the sort shown in Figure 2 can be used to sort and cluster events temporally, allowing for various time-line-based presentations of this information in response to specific queries.

The event-orderer has not yet been evaluated. Our evaluation of the event-aligner checks the TIME of all correctly recognized verbs (i.e., verbs recognized correctly by the part-of-speech tagger). The basic criterion for event TIME annotation is that if the time of the event is obvious, it is to be tagged as the TIME for that verb. (This criterion excludes interval specifications for events, as well as event references involving generics, counterfactuals, etc. However, the judgements are still delicate in certain cases.) We score Correctness as the number of correct TIME fills for correctly recognized verbs over the total number of correctly recognized verbs. On a small sample of 8,505 words of text, the system produced 394 correct event times out of 663 correct verb tags, giving a correctness score of 59.4%. Over half the errors were due to the propagation of an incorrect event time to neighboring events; about 15% of the errors were due to event times preceding the initial TIMEX expression (here the initial reference time should have been used); and at least 10% of the errors were due to explicitly marked tense switches. This is a very small sample, so the results are meant to be illustrative of the scope and limitations of this baseline event-aligning technique rather than to present a definitive result.

6 Related Work

The most relevant prior work is (Wiebe et al. 98), who dealt with meeting scheduling dialogs (see also (Alexandersson et al. 97), (Busemann et al. 97)), where the goal is to schedule a time for the meeting.
The temporal references in meeting scheduling are somewhat more constrained than in news, where (e.g., in a historical news piece on toxic dumping) dates and times may be relatively unconstrained. In addition, their model requires the maintenance of a focus stack. They obtained roughly .91 Precision and .80 Recall on one test set, and .87 Precision and .68 Recall on another. However, they adjust the reference time during processing, which is something that we have not yet addressed. More recently, (Setzer and Gaizauskas 2000) have independently developed an annotation scheme which represents both time values and more fine-grained inter-event and event-time temporal relations. Although our work is much more limited in scope, and doesn't exploit the internal structure of events, their annotation scheme may be leveraged in evaluating aspects of our work. The MUC-7 task (MUC-7 1998) did not require VALs, but did test TIMEX recognition accuracy. Our 98.0 F-measure for TIMEX on NYT can be compared with the MUC-7 results on similar news stories, where the best performance was .99 Precision and .88 Recall. (The MUC task required recognizing a wider variety of TIMEXs, including event-dependent ones. However, at least 30% of the dates and times in the MUC test were fixed-format ones occurring in document headers, trailers, and copyright notices.) Finally, there is a large body of work, e.g., (Moens and Steedman 1988), (Passonneau 1988), (Webber 1988), (Hwang 1992), (Song and Cohen 1991), that has focused on a computational analysis of tense and aspect. While our work on event chronologies is based on some of the notions developed in that body of work, we hope to further exploit insights from previous work.

Conclusion

We have developed a temporal annotation specification, and an algorithm for resolving a class of time expressions found in news. The algorithm, which is relatively knowledge-poor, uses a mix of hand-crafted and machine-learnt rules and obtains reasonable results. In the future, we expect to improve the integration of the various modules, including tracking the temporal focus in the time resolver, and the interaction between the event-orderer and the event-aligner. We also hope to handle a wider class of time expressions, as well as further improve our extraction and evaluation of event chronologies. In the long run, this could include representing event-time and inter-event relations expressed by temporal coordinators, explicitly temporally anchored events, and nominalizations.

[Figure 1. Time Tagger — architecture diagram; modules shown: Driver, Identify Expressions, Resolve Self-contained, Discourse Processor, Context Tracker.]

Table 1. Performance of Time Tagging Algorithm (values in parentheses: baseline of tagging only absolute, fully specified TIMEXs)

Source (articles; words) | Type | Human Found (Correct) | System Found | System Correct | Precision | Recall | F-measure
NYT (22; 35,555) | TIMEX | 302 | 302 | 296 | 98.0 | 98.0 | 98.0
NYT (22; 35,555) | Values | 302 | 302 | 249 (129) | 82.5 (42.7) | 82.5 (42.7) | 82.5 (42.7)
Broadcast (199; 42,616) | TIMEX | 426 | 417 | 400 | 95.9 | 93.9 | 94.9
Broadcast (199; 42,616) | Values | 426 | 417 | 353 (105) | 84.7 (25.1) | 82.9 (24.6) | 83.8 (24.8)
Overall (221; 78,171) | TIMEX | 728 | 719 | 696 | 96.8 | 95.6 | 96.2
Overall (221; 78,171) | Values | 728 | 719 | 602 (234) | 83.7 (32.5) | 82.7 (32.1) | 83.2 (32.3)

Table 2. High Level Analysis of Errors

Error type | Print | Broadcast | Total
Missing Vals | 10 | 29 | 39
Extra Vals | 18 | 7 | 25
Wrong Vals | 19 | 11 | 30
Missing TIMEX | 6 | 15 | 21
Extra TIMEX | 2 | 5 | 7
Bad TIMEX extent | 4 | 12 | 16
TOTAL | 59 | 79 | 138

Table 3. Performance of “Today” Classifiers

Algorithm | Predictive Accuracy
MC4 Decision Tree [3] | 79.8
C4.5 Rules | 69.8
Naïve Bayes | 69.6
Majority Class (specific) | 66.5

[3] Algorithm from the MLC++ package (Kohavi and Sommerfield 1996).
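To make the baseline alignment heuristic of Section 5 concrete, the following sketch restates the two propagation stages over a toy sentence representation of (word, kind, VAL) triples. This is an illustration under simplifying assumptions, not the authors' implementation; the function name, the input format, and the example VAL are hypothetical. (Compare the Figure 2 example that follows.)

```python
# Hypothetical sketch of the two-stage event-time propagation of Section 5.
# A sentence is a list of (word, kind, val) triples, where kind is "verb"
# (taggable verbs only) or "timex", and val is the TIMEX VAL or None.
from typing import Dict, List, Optional, Tuple

Token = Tuple[str, str, Optional[str]]

def align_events(sentence: List[Token]) -> Dict[str, Optional[str]]:
    times: List[Optional[str]] = [None] * len(sentence)
    # Stage 1: a verb lacking a time takes the VAL of the previous TIMEX;
    # when a TIMEX is found, the immediately previous unresolved verb
    # takes that TIMEX's VAL.
    last_val: Optional[str] = None
    for i, (_, kind, val) in enumerate(sentence):
        if kind == "timex":
            last_val = val
            for j in range(i - 1, -1, -1):
                if sentence[j][1] == "verb":
                    if times[j] is None:
                        times[j] = val
                    break
        elif kind == "verb" and last_val is not None:
            times[i] = last_val
    # Stage 2: remaining verbs inherit the TIME of the previous resolved verb,
    # under the default assumption that the temporal focus is maintained.
    prev: Optional[str] = None
    for i, (_, kind, _) in enumerate(sentence):
        if kind == "verb":
            if times[i] is None:
                times[i] = prev
            if times[i] is not None:
                prev = times[i]
    return {sentence[i][0]: times[i]
            for i in range(len(sentence)) if sentence[i][1] == "verb"}

# Hypothetical toy input; the VAL 19980106 is illustrative only.
toy = [("met", "verb", None),
       ("Tuesday", "timex", "19980106"),
       ("agreed", "verb", None)]
print(align_events(toy))
# The backward rule gives "met" the VAL of "Tuesday"; the forward rule gives
# "agreed" the same VAL: {'met': '19980106', 'agreed': '19980106'}.
# (Stage 2 is a no-op for this toy input.)
```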
Figure 2. Chronological Tagging (example output):

In the last step after years of preparation, the countries <lex eindex=“9” precedes=“10|” TIME=“19981231”>locked</lex> in the exchange rates of their individual currencies to the euro, thereby <lex eindex=“10” TIME=“19981231”>setting</lex> the value at which the euro will begin <lex eindex=“11” TIME=“19990104”>trading</lex> when financial markets open around the world on <TIMEX VAL=“19990104”>Monday</TIMEX>…

References

J. Alexandersson, N. Reithinger, and E. Maier. Insights into the Dialogue Processing of VERBMOBIL. Proceedings of the Fifth Conference on Applied Natural Language Processing, 1997, pp. 33-40.
J. F. Allen. Maintaining Knowledge About Temporal Intervals. Communications of the ACM, 26(11), 1983.
M. Bennett and B. H. Partee. Towards the Logic of Tense and Aspect in English. Indiana University Linguistics Club, 1972.
S. Busemann, T. Declerck, A. K. Diagne, L. Dini, J. Klein, and S. Schmeier. Natural Language Dialogue Service for Appointment Scheduling Agents. Proceedings of the Fifth Conference on Applied Natural Language Processing, 1997, pp. 25-32.
D. Dowty. Word Meaning and Montague Grammar. D. Reidel, Boston, 1979.
C. H. Hwang. A Logical Approach to Narrative Understanding. Ph.D. Dissertation, Department of Computer Science, University of Alberta, 1992.
ISO-8601. ftp://ftp.qsl.net/pub/g1smd/8601v03.pdf, 1997.
R. Kohavi and D. Sommerfield. MLC++: Machine Learning Library in C++. http://www.sgi.com/Technology/mlc, 1996.
KSL-Time. http://www.ksl.Stanford.EDU/ontologies/time/, 1999.
M. Moens and M. Steedman. Temporal Ontology and Temporal Reference. Computational Linguistics, 14(2), 1988, pp. 15-28.
MUC-7. Proceedings of the Seventh Message Understanding Conference, DARPA, 1998.
R. J. Passonneau. A Computational Model of the Semantics of Tense and Aspect. Computational Linguistics, 14(2), 1988, pp. 44-60.
H. Reichenbach. Elements of Symbolic Logic. Macmillan, London, 1947.
A. Setzer and R. Gaizauskas. Annotating Events and Temporal Information in Newswire Texts. Proceedings of the Second International Conference on Language Resources and Evaluation (LREC-2000), Athens, Greece, 31 May-2 June 2000.
F. Song and R. Cohen. Tense Interpretation in the Context of Narrative. Proceedings of the Ninth National Conference on Artificial Intelligence (AAAI-91), 1991, pp. 131-136.
TDT2. http://morph.ldc.upenn.edu/Catalog/LDC99T37.html, 1999.
B. Webber. Tense as Discourse Anaphor. Computational Linguistics, 14(2), 1988, pp. 61-73.
J. M. Wiebe, T. P. O'Hara, T. Ohrstrom-Sandgren, and K. J. McKeever. An Empirical Approach to Temporal Reference Resolution. Journal of Artificial Intelligence Research, 9, 1998, pp. 247-293.
G. Wilson, I. Mani, B. Sundheim, and L. Ferro. Some Conventions for Temporal Annotation of Text. Technical Note (in preparation), The MITRE Corporation, 2000.