Similarity-Based Methods For Word Sense Disambiguation

Ido Dagan
Dept. of Mathematics and Computer Science
Bar Ilan University
Ramat Gan 52900, Israel
dagan@macs.biu.ac.il

Lillian Lee
Div. of Engineering and Applied Sciences
Harvard University
Cambridge, MA 01238, USA
llee@eecs.harvard.edu

Fernando Pereira
AT&T Labs - Research
600 Mountain Ave.
Murray Hill, NJ 07974, USA
pereira@research.att.com

Abstract

We compare four similarity-based estimation methods against back-off and maximum-likelihood estimation methods on a pseudo-word sense disambiguation task in which we controlled for both unigram and bigram frequency. The similarity-based methods perform up to 40% better on this particular task. We also conclude that events that occur only once in the training set have a major impact on similarity-based estimates.

1 Introduction

The problem of data sparseness affects all statistical methods for natural language processing. Even large training sets tend to misrepresent low-probability events, since rare events may not appear in the training corpus at all.

We concentrate here on the problem of estimating the probability of unseen word pairs, that is, pairs that do not occur in the training set. Katz's back-off scheme (Katz, 1987), widely used in bigram language modeling, estimates the probability of an unseen bigram by utilizing unigram estimates. This has the undesirable result of assigning unseen bigrams the same probability if they are made up of unigrams of the same frequency.

Class-based methods (Brown et al., 1992; Pereira, Tishby, and Lee, 1993; Resnik, 1992) cluster words into classes of similar words, so that one can base the estimate of a word pair's probability on the averaged cooccurrence probability of the classes to which the two words belong. However, a word is therefore modeled by the average behavior of many words, which may cause the given word's idiosyncrasies to be ignored. For instance, the word "red" might well act like a generic color word in most cases, but it has distinctive cooccurrence patterns with respect to words like "apple," "banana," and so on.

We therefore consider similarity-based estimation schemes that do not require building general word classes. Instead, estimates for the most similar words to a word w are combined; the evidence provided by word w' is weighted by a function of its similarity to w. Dagan, Markus, and Markovitch (1993) propose such a scheme for predicting which unseen cooccurrences are more likely than others. However, their scheme does not assign probabilities. In what follows, we focus on probabilistic similarity-based estimation methods.

We compared several such methods, including that of Dagan, Pereira, and Lee (1994) and the cooccurrence smoothing method of Essen and Steinbiss (1992), against classical estimation methods, including that of Katz, in a decision task involving unseen pairs of direct objects and verbs, where unigram frequency was eliminated from being a factor. We found that all the similarity-based schemes performed almost 40% better than back-off, which is expected to yield about 50% accuracy in our experimental setting. Furthermore, a scheme based on the total divergence of empirical distributions to their average¹ yielded statistically significant improvement in error rate over cooccurrence smoothing.

We also investigated the effect of removing extremely low-frequency events from the training set.
We found that, in contrast to back-off smoothing, where such events are often discarded from training with little discernible effect, similarity-based smoothing methods suffer noticeable performance degradation when singletons (events that occur exactly once) are omitted.

2 Distributional Similarity Models

We wish to model conditional probability distributions arising from the cooccurrence of linguistic objects, typically words, in certain configurations. We thus consider pairs $(w_1, w_2) \in V_1 \times V_2$ for appropriate sets $V_1$ and $V_2$, not necessarily disjoint. In what follows, we use subscript $i$ for the $i$th element of a pair; thus $P(w_2|w_1)$ is the conditional probability (or rather, some empirical estimate, the true probability being unknown) that a pair has second element $w_2$ given that its first element is $w_1$; and $P(w_1|w_2)$ denotes the probability estimate, according to the base language model, that $w_1$ is the first word of a pair given that the second word is $w_2$. $P(w)$ denotes the base estimate for the unigram probability of word $w$.

A similarity-based language model consists of three parts: a scheme for deciding which word pairs require a similarity-based estimate, a method for combining information from similar words, and, of course, a function measuring the similarity between words. We give the details of each of these three parts in the following three sections. We will only be concerned with similarity between words in $V_1$.

¹To the best of our knowledge, this is the first use of this particular distribution dissimilarity function in statistical language processing. The function itself is implicit in earlier work on distributional clustering (Pereira, Tishby, and Lee, 1993), has been used by Tishby (p.c.) in other distributional similarity work, and, as suggested by Yoav Freund (p.c.), it is related to results of Hoeffding (1965) on the probability that a given sample was drawn from a given joint distribution.

2.1 Discounting and Redistribution

Data sparseness makes the maximum likelihood estimate (MLE) for word pair probabilities unreliable. The MLE for the probability of a word pair $(w_1, w_2)$, conditional on the appearance of word $w_1$, is simply

$$P_{ML}(w_2|w_1) = \frac{c(w_1, w_2)}{c(w_1)} \quad (1)$$

where $c(w_1, w_2)$ is the frequency of $(w_1, w_2)$ in the training corpus and $c(w_1)$ is the frequency of $w_1$. However, $P_{ML}$ is zero for any unseen word pair, which leads to extremely inaccurate estimates for word pair probabilities.

Previous proposals for remedying the above problem (Good, 1953; Jelinek, Mercer, and Roukos, 1992; Katz, 1987; Church and Gale, 1991) adjust the MLE so that the total probability of seen word pairs is less than one, leaving some probability mass to be redistributed among the unseen pairs. In general, the adjustment involves either interpolation, in which the MLE is used in linear combination with an estimator guaranteed to be nonzero for unseen word pairs, or discounting, in which a reduced MLE is used for seen word pairs, with the probability mass left over from this reduction used to model unseen pairs.

The discounting approach is the one adopted by Katz (1987):

$$\hat{P}(w_2|w_1) = \begin{cases} P_d(w_2|w_1) & c(w_1, w_2) > 0 \\ \alpha(w_1)\, P_r(w_2|w_1) & \text{otherwise} \end{cases} \quad (2)$$

where $P_d$ represents the Good-Turing discounted estimate (Katz, 1987) for seen word pairs, and $P_r$ denotes the model for probability redistribution among the unseen word pairs. $\alpha(w_1)$ is a normalization factor.
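The discount-and-redistribute structure of equation (2) is compact enough to sketch in code. The following is a minimal illustration, not the paper's implementation: for brevity it uses absolute discounting with a fixed discount in place of the Good-Turing estimate $P_d$, and the function names and discount value are our own choices. `p_r(w2, w1)` plays the role of $P_r(w_2|w_1)$.

```python
from collections import Counter

def make_backoff(pairs, p_r, discount=0.75):
    """Sketch of equation (2): discounted MLE for seen pairs,
    normalized redistribution model for unseen ones."""
    c12 = Counter(pairs)                    # c(w1, w2)
    c1 = Counter(w1 for w1, _ in pairs)     # c(w1)
    seen = {}                               # w1 -> set of seen w2
    for w1, w2 in c12:
        seen.setdefault(w1, set()).add(w2)

    def p_hat(w2, w1):
        if c12[(w1, w2)] > 0:               # P_d(w2|w1): discounted MLE
            return (c12[(w1, w2)] - discount) / c1[w1]
        # probability mass freed by discounting the seen pairs for w1
        left_over = discount * len(seen[w1]) / c1[w1]
        # alpha(w1) spreads that mass over the unseen w2 in proportion
        # to the redistribution model P_r
        alpha = left_over / (1.0 - sum(p_r(v, w1) for v in seen[w1]))
        return alpha * p_r(w2, w1)

    return p_hat
```

By construction the discounted seen-pair probabilities sum to $1 - \text{left\_over}$ for each $w_1$, and the redistributed unseen-pair probabilities sum to $\text{left\_over}$, so the conditional distribution stays normalized.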
Following Dagan, Pereira, and Lee (1994), we modify Katz's formulation by writing $P_r(w_2|w_1)$ instead of $P(w_2)$, enabling us to use similarity-based estimates for unseen word pairs instead of basing the estimate for the pair on unigram frequency $P(w_2)$. Observe that similarity estimates are used for unseen word pairs only. We next investigate estimates for $P_r(w_2|w_1)$ derived by averaging information from words that are distributionally similar to $w_1$.

2.2 Combining Evidence

Similarity-based models assume that if word $w_1'$ is "similar" to word $w_1$, then $w_1'$ can yield information about the probability of unseen word pairs involving $w_1$. We use a weighted average of the evidence provided by similar words, where the weight given to a particular word $w_1'$ depends on its similarity to $w_1$.

More precisely, let $W(w_1, w_1')$ denote an increasing function of the similarity between $w_1$ and $w_1'$, and let $S(w_1)$ denote the set of words most similar to $w_1$. Then the general form of similarity model we consider is a $W$-weighted linear combination of predictions of similar words:

$$P_{SIM}(w_2|w_1) = \sum_{w_1' \in S(w_1)} \frac{W(w_1, w_1')}{N(w_1)}\, P(w_2|w_1') \quad (3)$$

where $N(w_1) = \sum_{w_1' \in S(w_1)} W(w_1, w_1')$ is a normalization factor. According to this formula, $w_2$ is more likely to occur with $w_1$ if it tends to occur with the words that are most similar to $w_1$.

Considerable latitude is allowed in defining the set $S(w_1)$, as is evidenced by previous work that can be put in the above form. Essen and Steinbiss (1992) and Karov and Edelman (1996) (implicitly) set $S(w_1) = V_1$. However, it may be desirable to restrict $S(w_1)$ in some fashion, especially if $V_1$ is large. For instance, Dagan, Pereira, and Lee (1994) use the closest $k$ or fewer words $w_1'$ such that the dissimilarity between $w_1$ and $w_1'$ is less than a threshold value $t$; $k$ and $t$ are tuned experimentally.

Now, we could directly replace $P_r(w_2|w_1)$ in the back-off equation (2) with $P_{SIM}(w_2|w_1)$. However, other variations are possible, such as interpolating with the unigram probability $P(w_2)$:

$$P_r(w_2|w_1) = \gamma P(w_2) + (1 - \gamma) P_{SIM}(w_2|w_1),$$

where $\gamma$ is determined experimentally (Dagan, Pereira, and Lee, 1994). This represents, in effect, a linear combination of the similarity estimate and the back-off estimate: if $\gamma = 1$, then we have exactly Katz's back-off scheme. As we focus in this paper on alternatives for $P_{SIM}$, we will not consider this approach here; that is, for the rest of this paper, $P_r(w_2|w_1) = P_{SIM}(w_2|w_1)$.
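Equation (3) translates directly into code. The sketch below assumes the caller supplies the three components the paper leaves open — the neighbor set $S(w_1)$, the weight function $W$, and the base conditional model — as callables; the parameter names are ours.

```python
def p_sim(w2, w1, similar, weight, cond_prob):
    """Similarity-weighted estimate of equation (3).

    similar(w1)        -> the set S(w1) of words most similar to w1
    weight(w1, w1p)    -> W(w1, w1p), increasing in similarity
    cond_prob(w2, w1p) -> P(w2|w1p) under the base language model
    Assumes S(w1) is non-empty with nonzero total weight.
    """
    neighbors = similar(w1)
    norm = sum(weight(w1, w1p) for w1p in neighbors)   # N(w1)
    return sum(weight(w1, w1p) * cond_prob(w2, w1p)
               for w1p in neighbors) / norm
```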
2.3 Measures of Similarity

We now consider several word similarity functions that can be derived automatically from the statistics of a training corpus, as opposed to functions derived from manually-constructed word classes (Resnik, 1992). All the similarity functions we describe below depend just on the base language model $P(\cdot|\cdot)$, not the discounted model $\hat{P}(\cdot|\cdot)$ from Section 2.1 above.

2.3.1 KL divergence

Kullback-Leibler (KL) divergence is a standard information-theoretic measure of the dissimilarity between two probability mass functions (Cover and Thomas, 1991). We can apply it to the conditional distribution $P(\cdot|w_1)$ induced by $w_1$ on words in $V_2$:

$$D(w_1 \| w_1') = \sum_{w_2} P(w_2|w_1) \log \frac{P(w_2|w_1)}{P(w_2|w_1')} \quad (4)$$

For $D(w_1 \| w_1')$ to be defined it must be the case that $P(w_2|w_1') > 0$ whenever $P(w_2|w_1) > 0$. Unfortunately, this will not in general be the case for MLEs based on samples, so we would need smoothed estimates of $P(w_2|w_1')$ that redistribute some probability mass to zero-frequency events. However, using smoothed estimates for $P(w_2|w_1)$ as well requires a sum over all $w_2 \in V_2$, which is expensive for the large vocabularies under consideration. Given the smoothed denominator distribution, we set

$$W(w_1, w_1') = 10^{-\beta D(w_1 \| w_1')},$$

where $\beta$ is a free parameter.

2.3.2 Total divergence to the average

A related measure is based on the total KL divergence to the average of the two distributions:

$$A(w_1, w_1') = D\left(w_1 \,\Big\|\, \frac{w_1 + w_1'}{2}\right) + D\left(w_1' \,\Big\|\, \frac{w_1 + w_1'}{2}\right) \quad (5)$$

where $(w_1 + w_1')/2$ is shorthand for the distribution $\frac{1}{2}\left(P(\cdot|w_1) + P(\cdot|w_1')\right)$.

Since $D(\cdot\|\cdot) \geq 0$, we have $A(w_1, w_1') \geq 0$. Furthermore, letting $p(w_2) = P(w_2|w_1)$, $p'(w_2) = P(w_2|w_1')$ and $C = \{w_2 : p(w_2) > 0,\ p'(w_2) > 0\}$, it is straightforward to show by grouping terms appropriately that

$$A(w_1, w_1') = \sum_{w_2 \in C} \left\{ H(p(w_2) + p'(w_2)) - H(p(w_2)) - H(p'(w_2)) \right\} + 2\log 2,$$

where $H(x) = -x \log x$. Therefore, $A(w_1, w_1')$ is bounded, ranging between 0 and $2\log 2$, and smoothed estimates are not required because probability ratios are not involved. In addition, the calculation of $A(w_1, w_1')$ requires summing only over those $w_2$ for which $P(w_2|w_1)$ and $P(w_2|w_1')$ are both non-zero, which, for sparse data, makes the computation quite fast. As in the KL divergence case, we set $W(w_1, w_1') = 10^{-\beta A(w_1, w_1')}$.

2.3.3 L1 norm

The $L_1$ norm is defined as

$$L(w_1, w_1') = \sum_{w_2} \left| P(w_2|w_1) - P(w_2|w_1') \right| \quad (6)$$

By grouping terms as before, we can express $L(w_1, w_1')$ in a form depending only on the "common" $w_2$:

$$L(w_1, w_1') = 2 - \sum_{w_2 \in C} p(w_2) - \sum_{w_2 \in C} p'(w_2) + \sum_{w_2 \in C} \left| p(w_2) - p'(w_2) \right|.$$

This last form makes it clear that $0 \leq L(w_1, w_1') \leq 2$, with equality if and only if there are no words $w_2$ such that both $P(w_2|w_1)$ and $P(w_2|w_1')$ are strictly positive. Since we require a weighting scheme that is decreasing in $L$, we set $W(w_1, w_1') = (2 - L(w_1, w_1'))^\beta$, with $\beta$ again free.

2.3.4 Confusion probability

Essen and Steinbiss (1992) introduced confusion probability², which estimates the probability that word $w_1'$ can be substituted for word $w_1$:

$$P_C(w_1'|w_1) = W(w_1, w_1') = \sum_{w_2} \frac{P(w_1|w_2)\, P(w_1'|w_2)\, P(w_2)}{P(w_1)}.$$

Unlike the measures described above, $w_1$ may not necessarily be the "closest" word to itself; that is, there may exist a word $w_1'$ such that $P_C(w_1'|w_1) > P_C(w_1|w_1)$.

The confusion probability can be computed from empirical estimates provided all unigram estimates are nonzero (as we assume throughout). In fact, the use of smoothed estimates like those of Katz's back-off scheme is problematic, because those estimates typically do not preserve consistency with respect to marginal estimates and Bayes's rule. However, using consistent estimates (such as the MLE), we can rewrite $P_C$ as follows:

$$P_C(w_1'|w_1) = \sum_{w_2} \frac{P(w_2|w_1)}{P(w_2)}\, P(w_2|w_1')\, P(w_1').$$

This form reveals another important difference between the confusion probability and the functions $D$, $A$, and $L$ described in the previous sections. Those functions rate $w_1'$ as similar to $w_1$ if, roughly, $P(w_2|w_1')$ is high when $P(w_2|w_1)$ is. $P_C(w_1'|w_1)$, however, is greater for those $w_1'$ for which $P(w_2|w_1')$ is large when $P(w_2|w_1)/P(w_2)$ is. When the ratio $P(w_2|w_1)/P(w_2)$ is large, we may think of $w_2$ as being exceptional, since if $w_2$ is infrequent, we do not expect $P(w_2|w_1)$ to be large.

2.3.5 Summary

Several features of the measures of similarity listed above are summarized in Table 1. "Base LM constraints" are conditions that must be satisfied by the probability estimates of the base language model. The last column indicates whether the weight $W(w_1, w_1')$ associated with each similarity function depends on a parameter that needs to be tuned experimentally.

Table 1: Summary of similarity function properties

  name  range                      base LM constraints                    tune?
  D     [0, infinity]              P(w2|w1') != 0 if P(w2|w1) != 0        yes
  A     [0, 2 log 2]               none                                   yes
  L     [0, 2]                     none                                   yes
  Pc    [0, (1/2) max_w2 P(w2)]    Bayes consistency                      no

²Actually, they present two alternative definitions. We use their model 2-B, which they found yielded the best experimental results.
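The two measures that need no smoothing, $A$ and $L$, are easy to compute over sparse conditional distributions. A minimal sketch follows, assuming each distribution is a dict mapping $w_2$ to its conditional probability; natural logarithms are used, matching the $2\log 2$ bound above.

```python
import math

def total_divergence_to_average(p, q):
    """A(w1, w1') of equation (5); p and q map w2 -> P(w2|w1), P(w2|w1').
    Only words in the common support contribute beyond the 2 log 2 term."""
    total = 2.0 * math.log(2.0)
    for w2 in p.keys() & q.keys():
        a, b = p[w2], q[w2]
        # H(a+b) - H(a) - H(b), with H(x) = -x log x
        total += a * math.log(a) + b * math.log(b) - (a + b) * math.log(a + b)
    return total  # 0 when p == q, 2 log 2 when the supports are disjoint

def l1_norm(p, q):
    """L(w1, w1') of equation (6), computed over the sparse supports."""
    common = p.keys() & q.keys()
    total = sum(abs(p[w2] - q[w2]) for w2 in common)
    total += sum(v for w2, v in p.items() if w2 not in common)
    total += sum(v for w2, v in q.items() if w2 not in common)
    return total  # 0 when p == q, 2 when the supports are disjoint
```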
3 Experimental Results

We evaluated the similarity measures listed above on a word sense disambiguation task, in which each method is presented with a noun and two verbs, and decides which verb is more likely to have the noun as a direct object. Thus, we do not measure the absolute quality of the assignment of probabilities, as would be the case in a perplexity evaluation, but rather the relative quality. We are therefore able to ignore constant factors, and so we neither normalize the similarity measures nor calculate the denominator in equation (3).

3.1 Task: Pseudo-word Sense Disambiguation

In the usual word sense disambiguation problem, the method to be tested is presented with an ambiguous word in some context, and is asked to identify the correct sense of the word from the context. For example, a test instance might be the sentence fragment "robbed the bank"; the disambiguation method must decide whether "bank" refers to a river bank, a savings bank, or perhaps some other alternative.

While sense disambiguation is clearly an important task, it presents numerous experimental difficulties. First, the very notion of "sense" is not clearly defined; for instance, dictionaries may provide sense distinctions that are too fine or too coarse for the data at hand. Also, one needs to have training data for which the correct senses have been assigned, which can require considerable human effort.

To circumvent these and other difficulties, we set up a pseudo-word disambiguation experiment (Schütze, 1992; Gale, Church, and Yarowsky, 1992), the general format of which is as follows. We first construct a list of pseudo-words, each of which is the combination of two different words in $V_2$. Each word in $V_2$ contributes to exactly one pseudo-word. Then, we replace each $w_2$ in the test set with its corresponding pseudo-word. For example, if we choose to create a pseudo-word out of the words "make" and "take", we would change the test data like this:

  make plans => {make, take} plans
  take action => {make, take} action

The method being tested must choose between the two words that make up the pseudo-word.
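A small sketch of this construction (our own illustration, with a frequency-matching heuristic anticipating the control described in Section 3.2 below — the authors' exact pairing procedure is not specified):

```python
def make_pseudo_words(verbs, freq):
    """Pair each verb in V2 with a frequency-matched partner, so each
    word contributes to exactly one pseudo-word. freq maps a verb to
    its corpus frequency; adjacent-in-rank pairing is our heuristic."""
    ranked = sorted(verbs, key=lambda v: freq[v])
    pairing = {}
    for a, b in zip(ranked[0::2], ranked[1::2]):
        pairing[a] = pairing[b] = (a, b)
    return pairing

# Rewriting the test data, e.g. ('make', 'plans') -> (('make','take'), 'plans'):
# test = [(pairing[verb], noun) for verb, noun in test_pairs]
```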
3.2 Data

We used a statistical part-of-speech tagger (Church, 1988) and pattern matching and concordancing tools (due to David Yarowsky) to identify transitive main verbs and head nouns of the corresponding direct objects in 44 million words of 1988 Associated Press newswire. We selected the noun-verb pairs for the 1000 most frequent nouns in the corpus. These pairs are undoubtedly somewhat noisy given the errors inherent in the part-of-speech tagging and pattern matching.

We used 80%, or 587833, of the pairs so derived for building base bigram language models, reserving 20% for testing purposes. As some, but not all, of the similarity measures require smoothed language models, we calculated both a Katz back-off language model ($\hat{P}$ of equation (2), with $P_r(w_2|w_1) = P(w_2)$), and a maximum-likelihood model ($\hat{P} = P_{ML}$). Furthermore, we wished to investigate Katz's claim that one can delete singletons, word pairs that occur only once, from the training set without affecting model performance (Katz, 1987); our training set contained 82407 singletons. We therefore built four base language models, summarized in Table 2.

Table 2: Base Language Models

          with singletons   no singletons
          (587833 pairs)    (505426 pairs)
  MLE     MLE-1             MLE-o1
  Katz    BO-1              BO-o1

Since we wished to test the effectiveness of using similarity for unseen word cooccurrences, we removed from the test set any verb-object pairs that occurred in the training set; this resulted in 17152 unseen pairs (some occurred multiple times). The unseen pairs were further divided into five equal-sized parts, $T_1$ through $T_5$, which formed the basis for fivefold cross-validation: in each of five runs, one of the $T_i$ was used as a performance test set, with the other four sets combined into one set used for tuning parameters (if necessary) via a simple grid search. Finally, test pseudo-words were created from pairs of verbs with similar frequencies, so as to control for word frequency in the decision task.

We use error rate as our performance metric, defined as

$$\text{error rate} = \frac{1}{N}\left( \#\text{ of incorrect choices} + \frac{\#\text{ of ties}}{2} \right)$$

where $N$ is the size of the test corpus. A tie occurs when the two words making up a pseudo-word are deemed equally likely.

3.3 Baseline Experiments

The performances of the four base language models are shown in Table 3. MLE-1 and MLE-o1 both have error rates of exactly .5 because the test sets consist of unseen bigrams, which are all assigned a probability of 0 by maximum-likelihood estimates, and thus are all ties for this method. The back-off models BO-1 and BO-o1 also perform similarly.

Table 3: Base Language Model Error Rates

           T1     T2     T3     T4     T5
  MLE-1    .5     .5     .5     .5     .5
  MLE-o1   .5     .5     .5     .5     .5
  BO-1     0.517  0.520  0.512  0.513  0.516
  BO-o1    0.517  0.520  0.512  0.513  0.516

Since the back-off models consistently performed worse than the MLE models, we chose to use only the MLE models in our subsequent experiments. Therefore, we only ran comparisons between the measures that could utilize unsmoothed data, namely, the $L_1$ norm, $L(w_1, w_1')$; the total divergence to the average, $A(w_1, w_1')$; and the confusion probability, $P_C(w_1'|w_1)$.³ In the full paper, we give detailed examples showing the different neighborhoods induced by the different measures, which we omit here for reasons of space.

³It should be noted, however, that on BO-1 data, KL-divergence performed slightly better than the $L_1$ norm.

3.4 Performance of Similarity-Based Methods

Figure 1 shows the results on the five test sets, using MLE-1 as the base language model. The parameter $\beta$ was always set to the optimal value for the corresponding training set. RAND, which is shown for comparison purposes, simply chooses the weights $W(w_1, w_1')$ randomly. $S(w_1)$ was set equal to $V_1$ in all cases.

The similarity-based methods consistently outperform the MLE method (which, recall, always has an error rate of .5) and Katz's back-off method (which always had an error rate of about .51) by a huge margin; therefore, we conclude that information from other word pairs is very useful for unseen pairs where unigram frequency is not informative. The similarity-based methods also do much better than RAND, which indicates that it is not enough to simply combine information from other words arbitrarily: it is quite important to take word similarity into account. In all cases, $A$ edged out the other methods. The average improvement in using $A$ instead of $P_C$ is .0082; this difference is significant to the .1 level ($p < .085$), according to the paired t-test.
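For reference, the tie-splitting error metric defined in Section 3.2 is straightforward to state in code; the outcome labels below are our own convention.

```python
def error_rate(decisions):
    """Error metric of Section 3.2: decisions is a list of
    'correct' | 'incorrect' | 'tie' outcomes, one per test instance."""
    n = len(decisions)
    incorrect = sum(d == 'incorrect' for d in decisions)
    ties = sum(d == 'tie' for d in decisions)
    return (incorrect + ties / 2.0) / n
```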
[Figure 1: Error rates for each test set, where the base language model was MLE-1. The methods, going from left to right, are RAND, Pc, L, and A. The performances shown are for settings of beta that were optimal for the corresponding training set. Beta ranged from 4.0 to 4.5 for L and from 10 to 13 for A.]

The results for the MLE-o1 case are depicted in Figure 2. Again, we see the similarity-based methods achieving far lower error rates than the MLE, back-off, and RAND methods, and again, $A$ always performed the best. However, with singletons omitted the difference between $A$ and $P_C$ is even greater, the average difference being .024, which is significant to the .01 level (paired t-test).

[Figure 2: Error rates for each test set, where the base language model was MLE-o1. Beta ranged from 6 to 11 for L and from 21 to 22 for A.]

An important observation is that all methods, including RAND, were much more effective if singletons were included in the base language model; thus, in the case of unseen word pairs, Katz's claim that singletons can be safely ignored in the back-off model does not hold for similarity-based models.

4 Conclusions

Similarity-based language models provide an appealing approach for dealing with data sparseness. We have described and compared the performance of four such models against two classical estimation methods, the MLE method and Katz's back-off scheme, on a pseudo-word disambiguation task. We observed that the similarity-based methods perform much better on unseen word pairs, with the measure based on the KL divergence to the average being the best overall.

We also investigated Katz's claim that one can discard singletons in the training data, resulting in a more compact language model, without significant loss of performance. Our results indicate that for similarity-based language modeling, singletons are quite important; their omission leads to significant degradation of performance.

Acknowledgments

We thank Hiyan Alshawi, Joshua Goodman, Rebecca Hwa, Stuart Shieber, and Yoram Singer for many helpful comments and discussions. Part of this work was done while the first and second authors were visiting AT&T Labs. This material is based upon work supported in part by the National Science Foundation under Grant No. IRI-9350192. The second author also gratefully acknowledges support from a National Science Foundation Graduate Fellowship and an AT&T GRPW/ALFP grant.

References

Brown, Peter F., Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479, December.

Church, Kenneth. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143.

Church, Kenneth W. and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54.

Cover, Thomas M. and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley.
Dagan, Ido, Fernando Pereira, and Lillian Lee. 1994. Similarity-based estimation of word cooccurrence probabilities. In Proceedings of the 32nd Annual Meeting of the ACL, pages 272-278, Las Cruces, NM.

Essen, Ute and Volker Steinbiss. 1992. Cooccurrence smoothing for stochastic language modeling. In Proceedings of ICASSP, volume 1, pages 161-164.

Gale, William, Kenneth Church, and David Yarowsky. 1992. Work on statistical methods for word sense disambiguation. In Working Notes, AAAI Fall Symposium Series, Probabilistic Approaches to Natural Language, pages 54-60.

Good, I. J. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3 and 4):237-264.

Hoeffding, Wassily. 1965. Asymptotically optimal tests for multinomial distributions. Annals of Mathematical Statistics, pages 369-401.

Jelinek, Frederick, Robert L. Mercer, and Salim Roukos. 1992. Principles of lexical language modeling for speech recognition. In Sadaoki Furui and M. Mohan Sondhi, editors, Advances in Speech Signal Processing. Marcel Dekker, Inc., pages 651-699.

Karov, Yael and Shimon Edelman. 1996. Learning similarity-based word sense disambiguation from sparse data. In 4th Workshop on Very Large Corpora.

Katz, Slava M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35(3):400-401, March.

Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the ACL, pages 183-190, Columbus, OH.

Resnik, Philip. 1992. WordNet and distributional analysis: A class-based approach to lexical discovery. AAAI Workshop on Statistically-based Natural Language Processing Techniques, pages 56-64, July.

Schütze, Hinrich. 1992. Context space. In Working Notes, AAAI Fall Symposium on Probabilistic Approaches to Natural Language.
Using Syntactic Dependency as Local Context to Resolve Word Sense Ambiguity

Dekang Lin
Department of Computer Science
University of Manitoba
Winnipeg, Manitoba, Canada R3T 2N2
[email protected]

Abstract

Most previous corpus-based algorithms disambiguate a word with a classifier trained from previous usages of the same word. Separate classifiers have to be trained for different words. We present an algorithm that uses the same knowledge sources to disambiguate different words. The algorithm does not require a sense-tagged corpus and exploits the fact that two different words are likely to have similar meanings if they occur in identical local contexts.

1 Introduction

Given a word, its context and its possible meanings, the problem of word sense disambiguation (WSD) is to determine the meaning of the word in that context. WSD is useful in many natural language tasks, such as choosing the correct word in machine translation and coreference resolution.

In several recent proposals (Hearst, 1991; Bruce and Wiebe, 1994; Leacock, Towell, and Voorhees, 1996; Ng and Lee, 1996; Yarowsky, 1992; Yarowsky, 1994), statistical and machine learning techniques were used to extract classifiers from a hand-tagged corpus. Yarowsky (Yarowsky, 1995) proposed an unsupervised method that used heuristics to obtain seed classifications and expanded the results to the other parts of the corpus, thus avoiding the need to hand-annotate any examples.

Most previous corpus-based WSD algorithms determine the meanings of polysemous words by exploiting their local contexts. A basic intuition that underlies those algorithms is the following:

(1) Two occurrences of the same word have identical meanings if they have similar local contexts.

In other words, most previous corpus-based WSD algorithms learn to disambiguate a polysemous word from previous usages of the same word. This has several undesirable consequences. Firstly, a word must occur thousands of times before a good classifier can be learned. In Yarowsky's experiment (Yarowsky, 1995), an average of 3936 examples were used to disambiguate between two senses. In Ng and Lee's experiment, 192,800 occurrences of 191 words were used as training examples. There are thousands of polysemous words, e.g., there are 11,562 polysemous nouns in WordNet. For every polysemous word to occur thousands of times each, the corpus must contain billions of words. Secondly, learning to disambiguate a word from the previous usages of the same word means that whatever was learned for one word is not used on other words, which obviously misses generality in natural languages. Thirdly, these algorithms cannot deal with words for which classifiers have not been learned.

In this paper, we present a WSD algorithm that relies on a different intuition:

(2) Two different words are likely to have similar meanings if they occur in identical local contexts.

Consider the sentence:

(3) The new facility will employ 500 of the existing 600 employees

The word "facility" has 5 possible meanings in WordNet 1.5 (Miller, 1990): (a) installation, (b) proficiency/technique, (c) adeptness, (d) readiness, (e) toilet/bathroom. To disambiguate the word, we consider other words that appeared in an identical local context as "facility" in (3). Table 1 is a list of words that have also been used as the subject of "employ" in a 25-million-word Wall Street Journal corpus. The "freq" column gives the number of times these words were used as the subject of "employ".
Table 1: Subjects of "employ" with highest likelihood ratio

  word          freq  logL     word               freq  logL
  ORG*            64  50.4     plant                14  31.0
  company         27  28.6     operation             8  23.0
  industry         9  14.6     firm                  8  13.5
  pirate           2  12.1     unit                  9  9.32
  shift            3  8.48     postal service        2  7.73
  machine          3  6.56     corporation           3  6.47
  manufacturer     3  6.21     insurance company     2  6.06
  aerospace        2  5.81     memory device         1  5.79
  department       3  5.55     foreign office        1  5.41
  enterprise       2  5.39     pilot                 2  5.37

  *ORG includes all proper names recognized as organizations

The logL column gives their likelihood ratios (Dunning, 1993). The meaning of "facility" in (3) can be determined by choosing one of its 5 senses that is most similar¹ to the meanings of words in Table 1. This way, a polysemous word is disambiguated with past usages of other words. Whether or not it appears in the corpus is irrelevant.

¹To be defined in Section 3.1.

Our approach offers several advantages:

• The same knowledge sources are used for all words, as opposed to using a separate classifier for each individual word.
• It requires a much smaller corpus that need not be sense-tagged.
• It is able to deal with words that are infrequent or do not even appear in the corpus.
• The same mechanism can also be used to infer the semantic categories of unknown words.

The required resources of the algorithm include the following: (a) an untagged text corpus, (b) a broad-coverage parser, (c) a concept hierarchy, such as WordNet (Miller, 1990) or Roget's Thesaurus, and (d) a similarity measure between concepts.

In the next section, we introduce our definition of local contexts and the database of local contexts. A description of the disambiguation algorithm is presented in Section 3. Section 4 discusses the evaluation results.

2 Local Context

Psychological experiments show that humans are able to resolve word sense ambiguities given a narrow window of surrounding words (Choueka and Lusignan, 1985). Most WSD algorithms take as input a polysemous word and its local context. Different systems have different definitions of local contexts. In (Leacock, Towell, and Voorhees, 1996), the local context of a word is an unordered set of words in the sentence containing the word and the preceding sentence. In (Ng and Lee, 1996), a local context of a word consists of an ordered sequence of 6 surrounding part-of-speech tags, its morphological features, and a set of collocations.

In our approach, a local context of a word is defined in terms of the syntactic dependencies between the word and other words in the same sentence. A dependency relationship (Hudson, 1984; Mel'čuk, 1987) is an asymmetric binary relationship between a word called head (or governor, parent), and another word called modifier (or dependent, daughter). Dependency grammars represent sentence structures as a set of dependency relationships. Normally the dependency relationships form a tree that connects all the words in a sentence. An example dependency structure is shown in (4).

(4) [dependency tree for "the boy chased a brown dog": "boy" is the subject of "chased", "dog" is its complement, "brown" is an adjunct of "dog", and "the"/"a" are specifiers of the nouns]

The local context of a word W is a triple that corresponds to a dependency relationship in which W is the head or the modifier:

  (type word position)

where type is the type of the dependency relationship, such as subj (subject), adjn (adjunct), compl (first complement), etc.; word is the word related to W via the dependency relationship; and position can either be head or mod. The position indicates whether word is the head or the modifier in the dependency relation. Since a word may be involved in several dependency relationships, each occurrence of a word may have multiple local contexts. The local contexts of the two nouns "boy" and "dog" in (4) are as follows (the dependency relations between nouns and their determiners are ignored):

(5)
  Word  Local Contexts
  boy   (subj chase head)
  dog   (adjn brown mod) (compl chase head)
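The mapping from dependency triples to local contexts can be made concrete with a short sketch (our illustration; the triple representation is an assumption about the parser's output):

```python
def local_contexts(dependencies):
    """Turn (head, type, modifier) dependency triples into the
    (type word position) local contexts of Section 2; determiner
    relations are assumed to have been filtered out already."""
    contexts = {}
    for head, dtype, modifier in dependencies:
        # from the modifier's point of view, the related word is the head
        contexts.setdefault(modifier, []).append((dtype, head, 'head'))
        # and from the head's point of view, it is the modifier
        contexts.setdefault(head, []).append((dtype, modifier, 'mod'))
    return contexts

# For sentence (4):
# local_contexts([('chase', 'subj', 'boy'),
#                 ('chase', 'compl', 'dog'),
#                 ('dog', 'adjn', 'brown')])
# yields boy -> [('subj', 'chase', 'head')] and
# dog -> [('compl', 'chase', 'head'), ('adjn', 'brown', 'mod')], matching (5).
```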
Using a broad coverage parser to parse a corpus, we construct a Local Context Database. An entry in the database is a pair:

(6) (lc, C(lc))

where lc is a local context and C(lc) is a set of (word frequency likelihood) triples. Each triple specifies how often word occurred in lc and the likelihood ratio of lc and word. The likelihood ratio is obtained by treating word and lc as a bigram and computed with the formula in (Dunning, 1993). The database entry corresponding to Table 1 is as follows:

  C(lc) = ((ORG 64 50.4) (plant 14 31.0) ... (pilot 2 5.37))
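Dunning's log-likelihood ratio for a pair such as (lc, word) is computed from the 2x2 contingency table of their cooccurrence counts. A minimal sketch (the cell naming is ours):

```python
import math

def log_likelihood_ratio(k11, k12, k21, k22):
    """Dunning's (1993) log-likelihood ratio statistic for a 2x2 table:
    k11 = count of (lc, word) together, k12 = lc without word,
    k21 = word without lc, k22 = everything else."""
    def h(*ks):  # sum of k log k terms, with the convention 0 log 0 = 0
        return sum(k * math.log(k) for k in ks if k > 0)
    n = k11 + k12 + k21 + k22
    return 2 * (h(k11, k12, k21, k22) + h(n)
                - h(k11 + k12, k21 + k22)    # row marginals
                - h(k11 + k21, k12 + k22))   # column marginals
```

Large values indicate that word occurs in lc far more often than chance cooccurrence would predict, which is why the ratio is a useful ranking score for the database entries.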
3 The Approach

The polysemous words in the input text are disambiguated in the following steps:

Step A. Parse the input text and extract local contexts of each word. Let $LC_w$ denote the set of local contexts of all occurrences of $w$ in the input text.

Step B. Search the local context database and find words that appeared in an identical local context as $w$. They are called selectors of $w$:

  $Selectors_w = \left(\bigcup_{lc \in LC_w} C(lc)\right) - \{w\}$.

Step C. Select a sense $s$ of $w$ that maximizes the similarity between $w$ and $Selectors_w$.

Step D. The sense $s$ is assigned to all occurrences of $w$ in the input text. This implements the "one sense per discourse" heuristic advocated in (Gale, Church, and Yarowsky, 1992).

Step C needs further explanation. In the next subsection, we define the similarity between two word senses (or concepts). We then explain how the similarity between a word and its selectors is maximized.

3.1 Similarity between Two Concepts

There have been several proposed measures for similarity between two concepts (Lee, Kim, and Lee, 1989; Rada et al., 1989; Resnik, 1995b; Wu and Palmer, 1994). All of those similarity measures are defined directly by a formula. We use instead an information-theoretic definition of similarity that can be derived from the following assumptions:

Assumption 1: The commonality between A and B is measured by

  $I(common(A, B))$

where common(A, B) is a proposition that states the commonalities between A and B; I(s) is the amount of information contained in the proposition s.

Assumption 2: The differences between A and B are measured by

  $I(describe(A, B)) - I(common(A, B))$

where describe(A, B) is a proposition that describes what A and B are.

Assumption 3: The similarity between A and B, sim(A, B), is a function of their commonality and differences. That is,

  $sim(A, B) = f(I(common(A, B)),\ I(describe(A, B)))$.

The domain of $f(x, y)$ is $\{(x, y) \mid x \geq 0,\ y \geq 0,\ y \geq x\}$.

Assumption 4: Similarity is independent of the unit used in the information measure. According to Information Theory (Cover and Thomas, 1991), $I(s) = -\log_b P(s)$, where $P(s)$ is the probability of s and b is the unit. When b = 2, I(s) is the number of bits needed to encode s. Since $\log_b x = \log_{b'} x / \log_{b'} b$, Assumption 4 means that the function f must satisfy the following condition:

  $\forall c > 0,\ f(x, y) = f(cx, cy)$.

Assumption 5: Similarity is additive with respect to commonality. If common(A, B) consists of two independent parts, then sim(A, B) is the sum of the similarities computed when each part of the commonality is considered. In other words:

  $f(x_1 + x_2, y) = f(x_1, y) + f(x_2, y)$.

A corollary of Assumption 5 is that $\forall y,\ f(0, y) = f(x + 0, y) - f(x, y) = 0$, which means that when there is no commonality between A and B, their similarity is 0, no matter how different they are. For example, the similarity between "depth-first search" and "leather sofa" is neither higher nor lower than the similarity between "rectangle" and "interest rate".

Assumption 6: The similarity between a pair of identical objects is 1. When A and B are identical, knowing their commonalities means knowing what they are, i.e., $I(common(A, B)) = I(describe(A, B))$. Therefore, the function f must have the following property:

  $\forall x,\ f(x, x) = 1$.

Assumption 7: The function f(x, y) is continuous.

Similarity Theorem: The similarity between A and B is measured by the ratio between the amount of information needed to state the commonality of A and B and the information needed to fully describe what A and B are:

$$sim(A, B) = \frac{\log P(common(A, B))}{\log P(describe(A, B))}$$

Proof: To prove the theorem, we need to show $f(x, y) = \frac{x}{y}$. Since $f(x, y) = f(\frac{x}{y}, 1)$ (due to Assumption 4), we only need to show $f(x, y) = \frac{x}{y}$ when $\frac{x}{y}$ is a rational number. The result can be generalized to all real numbers because f is continuous and for any real number there are rational numbers that are infinitely close to it.

Suppose m and n are positive integers. $f(nx, y) = f((n-1)x, y) + f(x, y) = n f(x, y)$ (due to Assumption 5). Thus, $f(x, y) = \frac{1}{n} f(nx, y)$. Substituting $\frac{x}{n}$ for x in this equation gives $f(\frac{x}{n}, y) = \frac{1}{n} f(x, y)$. Since $\frac{x}{y}$ is rational, there exist m and n such that $\frac{x}{y} = \frac{m}{n}$. Therefore,

  $f(x, y) = f(\frac{m}{n} y, y) = m\, f(\frac{y}{n}, y) = \frac{m}{n} f(y, y) = \frac{m}{n} = \frac{x}{y}$.

Q.E.D.

For example, Figure 1 is a fragment of WordNet. The nodes are concepts (or synsets as they are called in WordNet). The links represent IS-A relationships. The number attached to a node C is the probability P(C) that a randomly selected noun refers to an instance of C. The probabilities are estimated by the frequency of concepts in SemCor (Miller et al., 1994), a sense-tagged subset of the Brown corpus.

[Figure 1: A fragment of WordNet, showing IS-A links from entity (0.395) through inanimate-object (0.167) and natural-object (0.0163) down to geological-formation (GeoForm, 0.00176), whose daughters are natural-elevation (0.000113), with child hill (0.0000189), and shore (0.0000836), with child coast (0.0000216).]

If x is a Hill and y is a Coast, the commonality between x and y is that "x is a GeoForm and y is a GeoForm". The information contained in this statement is $-2 \times \log P(\text{GeoForm})$. The similarity between the concepts Hill and Coast is:

$$sim(\text{Hill}, \text{Coast}) = \frac{2 \times \log P(\text{GeoForm})}{\log P(\text{Hill}) + \log P(\text{Coast})} = 0.59.$$

Generally speaking,

$$sim(C, C') = \frac{2 \times \log P(\bigcap_i C_i)}{\log P(C) + \log P(C')} \quad (7)$$

where $P(\bigcap_i C_i)$ is the probability that an object belongs to all the maximally specific super classes ($C_i$'s) of both C and C'.
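In a tree-shaped IS-A hierarchy, the maximally specific shared superclass is the one with the smallest probability among the common ancestors, so equation (7) reduces to a short function. A minimal sketch under that assumption (the accessor names are ours):

```python
import math

def similarity(c1, c2, prob, superclasses):
    """Information-content similarity of equation (7).

    prob(c)         -> P(c), estimated from a sense-tagged corpus
    superclasses(c) -> set of classes subsuming c (including c itself)
    """
    common = superclasses(c1) & superclasses(c2)
    if not common:
        return 0.0
    # the maximally specific shared superclass carries the commonality
    lcs = min(common, key=prob)
    return 2.0 * math.log(prob(lcs)) / (math.log(prob(c1)) + math.log(prob(c2)))

# With the Figure 1 estimates, P(GeoForm) = 0.00176, P(hill) = 0.0000189
# and P(coast) = 0.0000216 give similarity(hill, coast) ~= 0.59.
```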
3.2 Disambiguation by Maximizing Similarity

We now provide the details of Step C in our algorithm. The input to this step consists of a polysemous word $W_0$ and its selectors $\{W_1, W_2, \ldots, W_k\}$. The word $W_i$ has $n_i$ senses: $\{s_{i1}, \ldots, s_{in_i}\}$.

Step C.1: Construct a similarity matrix (8). The rows and columns represent word senses. The matrix is divided into $(k+1) \times (k+1)$ blocks. The blocks on the diagonal are all 0s. The elements in block $S_{ij}$ are the similarity measures between the senses of $W_i$ and the senses of $W_j$. Similarity measures lower than a threshold $\theta$ are considered to be noise and are ignored. In our experiments, $\theta = 0.2$ was used.

$$S_{ij}(l, m) = \begin{cases} sim(s_{il}, s_{jm}) & \text{if } i \neq j \text{ and } sim(s_{il}, s_{jm}) \geq \theta \\ 0 & \text{otherwise} \end{cases}$$

$$\begin{array}{c|cccc} & s_{01} \cdots s_{0n_0} & s_{11} \cdots s_{1n_1} & \cdots & s_{k1} \cdots s_{kn_k} \\ \hline s_{01} \cdots s_{0n_0} & 0 & S_{01} & \cdots & S_{0k} \\ s_{11} \cdots s_{1n_1} & S_{10} & 0 & \cdots & S_{1k} \\ \vdots & \vdots & & \ddots & \vdots \\ s_{k1} \cdots s_{kn_k} & S_{k0} & S_{k1} & \cdots & 0 \end{array} \quad (8)$$

Step C.2: Let A be the set of polysemous words in $\{W_0, \ldots, W_k\}$:

  $A = \{W_i \mid n_i > 1\}$.

Step C.3: Find a sense of words in A that gets the highest total support from other words. Call this sense $s_{i_{max} l_{max}}$:

$$s_{i_{max} l_{max}} = \arg\max_{s_{il}} \sum_{j=0}^{k} support(s_{il}, W_j)$$

where $s_{il}$ is a word sense such that $W_i \in A$ and $l \in [1, n_i]$, and $support(s_{il}, W_j)$ is the support $s_{il}$ gets from $W_j$:

  $support(s_{il}, W_j) = \max_{m \in [1, n_j]} S_{ij}(l, m)$.

Step C.4: The sense of $W_{i_{max}}$ is chosen to be $s_{i_{max} l_{max}}$. Remove $W_{i_{max}}$ from A:

  $A \leftarrow A - \{W_{i_{max}}\}$.

Step C.5: Modify the similarity matrix to remove the similarity values between other senses of $W_{i_{max}}$ and senses of other words. For all l, j, m such that $l \in [1, n_{i_{max}}]$, $l \neq l_{max}$, $j \neq i_{max}$ and $m \in [1, n_j]$:

  $S_{i_{max} j}(l, m) \leftarrow 0$.

Step C.6: Repeat from Step C.3 unless $i_{max} = 0$.
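The greedy loop of Steps C.1-C.6 can be summarized in a short sketch. This is our own rendering, assuming the target word $W_0$ is itself polysemous; the matrix is stored sparsely as a dict.

```python
def step_c(senses, sim, theta=0.2):
    """Greedy sense selection of Steps C.1-C.6. senses[0] lists the
    senses of the polysemous word W0, senses[1:] those of its
    selectors; sim is the similarity of equation (7)."""
    k1 = len(senses)
    S = {}                       # Step C.1: sparse blocked similarity matrix
    for i in range(k1):
        for j in range(k1):
            if i == j:
                continue
            for l, si in enumerate(senses[i]):
                for m, sj in enumerate(senses[j]):
                    v = sim(si, sj)
                    if v >= theta:           # values below theta are noise
                        S[(i, l, j, m)] = v
    A = {i for i in range(k1) if len(senses[i]) > 1}   # Step C.2

    def support(i, l, j):                    # support(s_il, W_j)
        return max((S.get((i, l, j, m), 0.0)
                    for m in range(len(senses[j]))), default=0.0)

    while True:
        # Step C.3: the sense with the highest total support from other words
        i_max, l_max = max(((i, l) for i in A for l in range(len(senses[i]))),
                           key=lambda il: sum(support(il[0], il[1], j)
                                              for j in range(k1) if j != il[0]))
        A.discard(i_max)                     # Step C.4
        # Step C.5: rejected senses of W_imax no longer carry similarity values
        S = {key: v for key, v in S.items()
             if not (key[0] == i_max and key[1] != l_max)}
        if i_max == 0:                       # Step C.6
            return senses[0][l_max]
```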
A selected sense 8answer is correct if it is "similar enough" to the sense tag skeu in SemCor. We experimented with three in- terpretations of "similar enough". The strictest in- terpretation is sim(sanswer,Ske~)=l, which is true only when 8answer~Skey. The most relaxed inter- pretation is sim(s~nsw~, Skey) >0, which is true if 8answer and 8key are the descendents of the same top-level concepts in WordNet (e.g., entity, group, location, etc.). A compromise between these two is sim(Sans~er, Skew) >_ 0.27, where 0.27 is the average similarity of 50,000 randomly generated pairs (w, w') in which w and w ~ belong to the same Roget's cate- gory. We use three words "duty", "interest" and "line" as examples to provide a rough idea about what sirn( s~nswer, Skew) >_ 0.27 means. The word "duty" has three senses in WordNet 1.5. The similarity between the three senses are all below 0.27, although the similarity between Senses 1 (re- sponsibility) and 2 (assignment, chore) is very close (0.26) to the threshold. The word "interest" has 8 senses. Senses 1 (sake, benefit) and 7 (interestingness) are merged. 2 Senses 3 (fixed charge for borrowing money), 4 (a right or legal share of something), and 5 (financial interest in something) are merged. The word "interest" is reduced to a 5-way ambiguous word. The other three senses are 2 (curiosity), 6 (interest group) and 8 (pastime, hobby). The word "line" has 27 senses. The similarity threshold 0.27 reduces the number of senses to 14. The reduced senses are • Senses 1, 5, 17 and 24: something that is com- municated between people or groups. 1: a mark that is long relative to its width 5: a linear string of words expressing some idea ')The similarities between senses of the same word are computed during scoring. We do not actually change the WordNet hierarchy 17: a mark indicating positions or bounds of the playing area 24: as in "drop me a line when you get there" • Senses 2, 3, 9, 14, 18: group 2: a formation of people or things beside one another 3: a formation of people or things one after another 9: a connected series of events or actions or developments 14: the descendants of one individual 18: common carrier • Sense 4: a single frequency (or very narrow band) of radiation in a spectrum • Senses 6 and 25: cognitive process 6: line of reasoning 25: a conceptual separation or demarcation • Senses 7, 15, and 26: instrumentation 7: electrical cable 15: telephone line 26: assembly line • Senses 8 and 10: shape 8: a length (straight or curved) without breadth or thickness 10: wrinkle, furrow, crease, crinkle, seam, line • Senses 11 and 16: any road or path affording passage from one place to another; 11: pipeline 16: railway • Sense 12: location, a spatial location defined by a real or imaginary unidimensional extent; • Senses 13 and 27: human action 13: acting in conformity 27: occupation, line of work; • Sense 19: something long and thin and flexible • Sense 20: product line, line of products • Sense 21: space for one line of print (one col- umn wide and 1/14 inch deep) used to measure advertising • Sense 22: credit line, line of credit • Sense 23: a succession of notes forming a dis- tinctived sequence where each group is a reduced sense and the numbers are original WordNet sense numbers. 69 4.2 Results We used a 25-million-word Wall Street Journal cor- pus (part of LDC/DCI 3 CDROM) to construct the local context database. The text was parsed in 126 hours on a SPARC-Ultra 1/140 with 96MB of memory. 
We then extracted from the parse trees 8,665,362 dependency relationships in which the head or the modifier is a noun. We then fil- tered out (lc, word) pairs with a likelihood ratio lower than 5 (an arbitrary threshold). The resulting database contains 354,670 local contexts with a to- tal of 1,067,451 words in them (Table 1 is counted as one local context with 20 words in it). Since the local context database is constructed from WSJ corpus which are mostly business news, we only used the "press reportage" part of Sem- Cor which consists of 7 files with about 2000 words each. Furthermore, we only applied our algorithm to nouns. Table 3 shows the results on 2,832 polyse- mous nouns in SemCor. This number also includes proper nouns that do not contain simple markers (e.g., Mr., Inc.) to indicate its category. Such a proper noun is treated as a 3-way ambiguous word: person, organization, or location. We also showed as a baseline the performance of the simple strategy of always choosing the first sense of a word in the WordNet. Since the WordNet senses are ordered ac- cording to their frequency in SemCor, choosing the first sense is roughly the same as choosing the sense with highest prior probability, except that we are not using all the files in SemCor. It can be seen from Table 3 that our algorithm performed slightly worse than the baseline when the strictest correctness criterion is used. However, when the condition is relaxed, its performance gain is much lager than the baseline. This means that when the algorithm makes mistakes, the mistakes tend to be close to the correct answer. 5 Discussion 5.1 Related Work The Step C in Section 3.2 is similar to Resnik's noun group disambiguation (Resnik, 1995a), although he did not address the question of the creation of noun groups. The earlier work on WSD that is most similar to ours is (Li, Szpakowicz, and Matwin, 1995). They proposed a set of heuristic rules that are based on the idea that objects of the same or similar verbs are similar. 3http://www.ldc.upenn.edu/ 5.2 Weak Contexts Our algorithm treats all local contexts equally in its decision-making. However, some local contexts hardly provide any constraint on the meaning of a word. For example, the object of "get" can practi- cally be anything. This type of contexts should be filtered out or discounted in decision-making. 5.3 Idiomatic Usages Our assumption that similar words appear in iden- tical context does not always hold. For example, (10) ... the condition in which the heart beats between 150 and 200 beats a minute The most frequent subjects of "beat" (according to our local context database) are the following: (11) PER, badge, bidder, bunch, challenger, democrat, Dewey, grass, mummification, pimp, police, return, semi. and soldier. where PER refers to proper names recognized as per- sons. None of these is similar to the "body part" meaning of "heart". In fact, "heart" is the only body part that beats. 6 Conclusion We have presented a new algorithm for word sense disambiguation. Unlike most previous corpus- based WSD algorithm where separate classifiers are trained for different words, we use the same lo- cal context database and a concept hierarchy as the knowledge sources for disambiguating all words. This allows our algorithm to deal with infrequent words or unknown proper nouns. Unnecessarily subtle distinction between word senses is a well-known problem for evaluating WSD algorithms with general-purpose lexical resources. 
Our use of similarity measure to relax the correct- ness criterion provides a possible solution to this problem. Acknowledgement This research has also been partially supported by NSERC Research Grant 0GP121338 and by the In- stitute for Robotics and Intelligent Systems. References Bruce, Rebecca and Janyce Wiebe. 1994. Word- sense disambiguation using decomposable models. In Proceedings of the 32nd Annual Meeting o/the Associations/or Computational Linguistics, pages 139-145, Las Cruces, New Mexico. 70 Table 3: Performance on polysemous nouns in 7 SemCor files correctness criterion our algorithm first sense in WordNet sim(Sanswer, Skey) > 0 73.6% 67.2% sim(sanswe~,Skey) >_ 0.27 68.5% 64.2% sim(Sanswer, Skey) = 1 56.1% 58.9% Choueka, Y. and S. Lusignan. 1985. Disambigua- tion by short contexts. Computer and the Hu- manities, 19:147-157. Cover, Thomas M. and Joy A. Thomas. 1991. El- ements of information theory. Wiley series in telecommunications. Wiley, New York. Dunning, Ted. 1993. Accurate methods for the statistics of surprise and coincidence. Computa- tional Linguistics, 19(1):61-74, March. Gale, W., K. Church, and D. Yarowsky. 1992. A method for disambiguating word senses in a large corpus. Computers and the Humannities, 26:415- 439. Hearst, Marti. 1991. noun homograph disambigua- tion using local context in large text corpora. In Conference on Research and Development in In- formation Retrieval ACM/SIGIR, pages 36-47, Pittsburgh, PA. Hudson, Richard. 1984. Word Grammar. Basil Blackwell Publishers Limited., Oxford, England. Leacock, Claudia, Goeffrey Towwell, and Ellen M. Voorhees. 1996. Towards building contextual rep- resentations of word senses using statistical mod- els. In Corpus Processing for Lexical Acquisition. The MIT Press, chapter 6, pages 97-113. Lee, Joon Ho, Myoung Ho Kim, and Yoon Joon Lee. 1989. Information retrieval based on conceptual distance in is-a hierarchies. Journal of Documen- tation, 49(2):188-207, June. Li, Xiaobin, Stan Szpakowicz, and Stan Matwin. 1995. A wordnet-based algorithm for word sense disambiguation. In Proceedings of IJCAI-95, pages 1368-1374, Montreal, Canada, August. Mel'~uk, Igor A. 1987. Dependency syntax: theory and practice. State University of New York Press, Albany. Miller, George A. 1990. WordNet: An on-line lexi- cal database. International Journal of Lexicogra- phy, 3(4):235-312. Miller, George A., Martin Chodorow, Shari Landes, Claudia Leacock, and robert G. Thomas. 1994. Using a semantic concordance for sense identifi- cation. In Proceedings of the ARPA Human Lan- guage Technology Workshop. Ng, Hwee Tow and Hian Beng Lee. 1996. Integrat- ing multiple knowledge sources to disambiguate word sense: An examplar-based approach. In Pro- ceedings of 34th Annual Meeting of the Associa- tion for Computational Linguistics, pages 40-47, Santa Cruz, California. Rada, Roy, Hafedh Mili, Ellen Bicknell, and Maria Blettner. 1989. Development and application of a metric on semantic nets. IEEE Transaction on Systems, Man, and Cybernetics, 19(1):17-30, February. Resnik, Philip. 1995a. Disambiguating noun group- ings with respect to wordnet senses. In Third Workshop on Very Large Corpora. Association for Computational Linguistics. Resnik, Philip. 1995b. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of IJCAI-95, pages 448-453, Mon- treal, Canada, August. Wu, Zhibiao and Martha Palmer. 1994. Verb se- mantics and lexical selection. 
Yarowsky, David. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of COLING-92, Nantes, France.

Yarowsky, David. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 88-95, Las Cruces, NM, June.

Yarowsky, David. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196, Cambridge, Massachusetts, June.
A Quasi-Dependency Model for Structural Analysis of Chinese BaseNPs*

Zhao Jun    Huang Changning
Department of Computer Science & Technology,
The State Key Lab of Intelligent Technology & Systems,
Tsinghua University, Beijing, China, 100084
Email: [email protected], [email protected]

Abstract: The paper puts forward a quasi-dependency model for structural analysis of Chinese baseNPs and an MDL-based algorithm for quasi-dependency-strength acquisition. The experiments show that the proposed model is more suitable for Chinese baseNP analysis and that the proposed MDL-based algorithm is superior to the traditional ML-based algorithm. The paper also discusses the problem of incorporating linguistic knowledge into the above statistical model.

*The research is supported by the key project of the National Natural Science Foundation.

1. Introduction

The concept of the baseNP was initially put forward by Church. In English, a baseNP is defined as a 'simple non-recursive noun phrase', which means that no sub-noun-phrase is contained in a baseNP [1]. But this definition cannot meet the needs of Chinese information retrieval. Noun phrases such as "natural + language + process", "Asian + finance + crisis" and "political + system + reformation + process" (given here by their word-by-word glosses) are critical for information retrieval, but they are not non-recursive noun phrases.

In Chinese, the attributes of noun phrases can be classified into three types, that is, restrictive attributes, distinctive attributes and descriptive attributes, among which the restrictive attributes have an agglutinative relation with their heads. The paper defines the Chinese baseNP using restrictive attributes.

[Definition 1] Chinese baseNP (hereafter abbreviated as baseNP):

  baseNP -> baseNP + baseNP
  baseNP -> baseNP + N | VN
  baseNP -> restrictive-attribute + baseNP
  baseNP -> restrictive-attribute + N | VN
  restrictive-attribute -> A | B | V | N | S | X | (M+Q)

where the terminal symbols A, B, V, N, VN, S, X, M, Q stand respectively for adjectives, distinctives, verbs, nouns, nominalized verbs, locatives, non-Chinese strings, numerals and quantifiers.

According to the definition, noun phrases fall into baseNPs and non-baseNPs (abbreviated as ~baseNP). Table 1 gives some examples (word-by-word glosses; "de" is the particle 的).

Table 1: Examples of baseNP and ~baseNP

  Type      Example
  baseNP    air + corridor
  baseNP    politics + system + reform
  baseNP    export + commodity + price + index
  ~baseNP   complicated + de + feature
  ~baseNP   research + and + development
  ~baseNP   teacher + write + de + comment

Both baseNP recognition and baseNP structural analysis are basic tasks in Chinese information retrieval. This paper mainly discusses the problems in structural analysis of baseNPs, which is essential for generating compositional indexing units from a baseNP. The task of baseNP structural analysis is to determine the syntactic structure of a baseNP. In this paper, we use dichotomy (binary bracketing) for baseNP analysis. For example, the structure of "natural + language + process" is "((natural language) process)". Obviously, a baseNP composed of three or more words has syntactic ambiguities. For example, the baseNP "x y z" has two possible structures, that is, "(x y) z" and "x (y z)". The task of baseNP structural analysis is to select the correct structure from the possible structures.
Section 2 puts forward a quasi-dependency model for structural analysis of Chinese baseNPs. Section 3 gives an unsupervised quasi-dependency-strength estimation algorithm based on the minimum description length (MDL) principle. Section 4 analyzes the performance of the proposed model and the algorithm. Section 5 discusses some issues in the implementation of baseNP structure analysis and quasi-dependency-strength estimation. Section 6 is the conclusion.

2. The quasi-dependency model

There are two kinds of structural analysis models for English noun phrases, that is, the adjacency model and the dependency model. The research of Lauer shows that the dependency model is superior to the adjacency model for structural analysis of English noun phrases [2]. However, there has been no model for structural analysis of Chinese baseNPs till now.

According to dependency grammar, two constituents can be bound together only if they are determined to be dependent. The determination of the dependency relation between two constituents is composed of two steps. The first step is to determine whether they have the possibility to constitute a dependency relation. The second step is to determine whether they have a dependency relation in the given context. The former is called the quasi-dependency-relation, which can be acquired from collocation dictionaries or corpora. The determination of the latter is difficult, because multiple kinds of information in the given context should be taken into consideration, such as syntactic or semantic information.

[Definition 2] Quasi-Dependency-Relation: If two words x and y have the possibility to constitute a dependency relation, then we say that they have a quasi-dependency-relation in the given baseNP, formulated as x→y (where y is called the head) or y→x (where x is called the head); otherwise, we say that they have no quasi-dependency-relation, formulated as x-/->y and y-/->x.

[Assumption 1] In a Chinese baseNP, if two words x and y can constitute a dependency relation, then the head is always the post-position word y, that is, x→y.

According to Definition 1, there is no preposition phrase, verb phrase, locality phrase or de-structure in a baseNP, so Assumption 1 is reasonable. On the basis of Assumption 1, we put forward the quasi-dependency model for structural analysis of Chinese baseNPs.

There are the following three kinds of quasi-dependency-patterns for a tri-word baseNP x y z:

    s31: x→y, y→z and x-/->z, corresponding to the structure (x y) z;
    s32: x→z, y→z and x-/->y, corresponding to the structure x (y z);
    s33: x→y, y→z and x→z, for which the quasi-dependency-strengths must be used to determine the corresponding structure.

    s31 = (x y) z        s32 = x (y z)

For example, for the baseNP "politics system reform" there are the quasi-dependency-relations politics→system, politics→reform and system→reform. If we know that the quasi-dependency-relations politics→system and system→reform are stronger than politics→reform, the structure of the baseNP can be determined to be "(politics system) reform". In the following, we give the definition of quasi-dependency-strength and the formula for determining the syntactic structure of baseNPs based on the quasi-dependency-strengths.

[Definition 3] Quasi-dependency-strength: Given a baseNP set NP = {np1, np2, ..., npM} and a lexicon W = {w1, ..., wN}, for all wi, wj ∈ W, the quasi-dependency-strength of wi→wj is defined as:

    ds(wi→wj) = Σ_{npk ∈ NP} dep(wi→wj, npk) / Σ_{npk ∈ NP} co(wi, wj, npk)

where dep(wi→wj, npk) is the count of the dependent word pair wi→wj contained in npk, and co(wi, wj, npk) is the count of the co-occurrent word pair (wi, wj) contained in npk.

The formula for determining the syntactic structure of a baseNP based on the quasi-dependency-strengths is as follows:

    belief(sj | npi) = Σ_{(u→v) ∈ D(npi,sj)} ds(u→v) / [ Σ_{(u→v) ∈ D(npi,sj)} ds(u→v) + Σ_{(u→v) ∉ D(npi,sj)} ds(u→v) ]

where belief(sj | npi) represents the belief that the structure of npi is sj, and D(npi, sj) represents the set of quasi-dependency-relations included in the quasi-dependency-pattern corresponding to structure sj.
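A minimal sketch of how Definition 3 and the belief formula could be computed, assuming for the moment a structurally annotated baseNP set (Section 3 removes this assumption); all names here are ours:

    from collections import defaultdict

    def estimate_ds(annotated_nps):
        """Definition 3: ds(wi->wj) = (count of wi,wj as a dependent pair)
        / (count of wi,wj co-occurring at all), summed over the baseNP set.
        Each annotated baseNP is (words, deps) with deps a set of (i, j)
        index pairs, i < j (Assumption 1: the head is the later word)."""
        dep = defaultdict(int)
        co = defaultdict(int)
        for words, deps in annotated_nps:
            for i in range(len(words)):
                for j in range(i + 1, len(words)):
                    co[(words[i], words[j])] += 1
                    if (i, j) in deps:
                        dep[(words[i], words[j])] += 1
        return {pair: dep[pair] / co[pair] for pair in co}

    def belief(ds, in_pattern, all_relations):
        """belief(s|np): the strengths of the relations in pattern s,
        normalised by the strengths of all candidate relations over np."""
        inside = sum(ds.get(r, 0.0) for r in in_pattern)
        total = sum(ds.get(r, 0.0) for r in all_relations)
        return inside / total if total > 0 else 0.0

    # "politics system reform": s31 = (x y) z uses {x->y, y->z};
    #                           s32 = x (y z) uses {x->z, y->z}
    nps = [(("politics", "system", "reform"), {(0, 1), (1, 2)}),
           (("economics", "system", "reform"), {(0, 1), (1, 2)})]
    ds = estimate_ds(nps)
    x, y, z = "politics", "system", "reform"
    all_rels = {(x, y), (y, z), (x, z)}
    print(belief(ds, {(x, y), (y, z)}, all_rels))  # s31: 1.0, left-binding wins
    print(belief(ds, {(x, z), (y, z)}, all_rels))  # s32: 0.5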
A tri-word baseNP has two possible syntactic structures, that is, s31 and s32. Similarly, a four-word baseNP has the following five possible structures:

    s41 = ((w x) y) z    s42 = (w x)(y z)    s43 = (w (x y)) z
    s44 = w ((x y) z)    s45 = w (x (y z))

In summary, we can compute the belief that the structure of npi is sj using the correspondence between the quasi-dependency-pattern and the baseNP structure. The acquisition of the quasi-dependency-strength between words is the critical problem.

3. The acquisition of quasi-dependency-strength between words

If we had a large-scale annotated baseNP corpus in which the baseNPs had been assigned their syntactic structures, the quasi-dependency-strength between words could be acquired through simple statistics. However, such an annotated corpus is not available. We only have a baseNP corpus which has no structural information. How to acquire the quasi-dependency-strength from such a corpus is the main task of this section.

Given a baseNP set NP = {np1, np2, ..., npM} and a lexicon W = {w1, w2, ..., wN}, the problem can be described as learning a quasi-dependency-strength set G (abbreviated as the model) from the training set, where G = {ds_ij | ds_ij = ds(wi→wj)}.

Zhai Chengxiang puts forward an unsupervised algorithm for acquiring quasi-dependency-strengths from a noun phrase set [3]. The algorithm is derived from the EM algorithm. Because the algorithm is based on the maximum likelihood (ML) principle, it usually leads to overfitting of the model to the data [4]. For example, given a simple baseNP set NP = {politics system reform, economics system reform, politics system revolute, economics system revolute}, there are sixteen possible models for the training set, among which G4, G7, G10 and G13 have the best fitness to NP, that is, Num(NP|G) = 6. However, from the linguistic point of view, G1 is the correct model, though it has a lower fitness to NP, that is, Num(NP|G) = 4 (see the appendix).

3.1 The estimation of the quasi-dependency-strength under the Bayesian framework

In the Bayesian framework, the task of acquiring the quasi-dependency-strengths can be described as the problem of selecting the G which has the highest posterior probability p(G|NP):

    G* = argmax_G p(G | NP)

According to Bayes' theorem, we have the following inference.
    G* = argmax_G p(NP | G) p(G) / p(NP) = argmax_G p(NP | G) p(G)

Besides using the conditional probability p(NP|G) to measure the fitness between the training set and the model G, Bayesian modeling gives additional consideration to the generality of the model through the prior probability p(G), that is, a simpler model has a higher probability. The central idea of Bayesian modeling is to find a compromise between the goodness of fit and the simplicity of the model.

3.2 Defining the evaluation function of Bayesian modeling using the MDL principle

The difficulty in Bayesian modeling is the estimation of the prior probability p(G). According to coding theory, the lower bound of the coding length (bit-string) of a message with probability p is log2(1/p) [5]. This theorem connects Bayesian modeling with the MDL principle in coding theory:

    G* = argmax_G p(NP|G) p(G)
       = argmin_G { -log2 [p(NP|G) p(G)] }
       = argmin_G { log2 (1/p(NP|G)) + log2 (1/p(G)) }
       = argmin_G { L(NP|G) + L(G) }

where L(a) is the optimal coding length of message a. Specifically, L(NP|G) is called the data description length and L(G) is called the model description length. Therefore, the problem of estimating the prior probability p(G) and the conditional probability p(NP|G) is converted to the problem of estimating the model description length L(G) and the data description length L(NP|G).

3.3 The MDL-based quasi-dependency-strength estimation algorithm

Under the MDL principle, the modeling problem can be viewed as a problem of finding a model G which has the smallest sum of the data description length and the model description length. Because the search space is huge, we cannot find the optimal model in a traversal manner. The model must be improved in an iterative manner in order to arrive at a minimum description length. In this research, the model is composed of the quasi-dependency-strengths ds(wi→wj), where each ds(wi→wj) can be decomposed into two parts: (1) the structure part, the quasi-dependency-relation (wi→wj); and (2) the parameter part, the quasi-dependency-strength ds. Therefore, the learning process is divided into two steps: (1) keeping the structure part fixed, optimize the parameter part; (2) keeping the parameter part fixed, optimize the structure part. The two steps go on alternately until the process arrives at a convergent point.

Algorithm 1: The MDL-based algorithm for quasi-dependency-strength estimation
(1) Initialize model G;
(2) Let L = L(NP|G) + L(G) and G = (Gs, Gp), where Gs and Gp represent respectively the structure part and the parameter part. Execute the following two steps alternately until L converges:
• Keeping Gs fixed, optimize Gp until L(NP|G) converges, that is, L converges;
• Keeping Gp fixed, optimize Gs until L(G) converges, that is, L converges.

On condition that the structure part of the model is fixed, parameter optimization means finding the optimal set of quasi-dependency-strengths such that the data description length is minimized, that is:

    Gp = argmin_G L(NP|G)

where L(NP|G) is the optimal coding length of NP when G is known. The parameter optimization step can be implemented using the EM algorithm [3]. In the process of parameter optimization, the structure part of the model is kept fixed. The optimal estimates of the parameters are obtained through the gradual reduction of the data description length.
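The control flow of Algorithm 1 can be rendered schematically as below. The two optimisation steps are left as stubs, to be filled in by the EM parameter step just described and by the structure step of Algorithm 2 that follows; the function names and the convergence test are our assumptions.

    def mdl_learn(np_corpus, model, optimize_parameters, optimize_structure,
                  description_length, eps=1e-4):
        """Alternate the two steps of Algorithm 1 until the total description
        length L(NP|G) + L(G) stops decreasing.
        optimize_parameters: EM-style re-estimation of the strengths, structure fixed.
        optimize_structure:  pruning of weak quasi-dependency-relations, parameters fixed."""
        total = description_length(np_corpus, model)
        while True:
            model = optimize_parameters(np_corpus, model)  # minimise L(NP|G), Gs fixed
            model = optimize_structure(np_corpus, model)   # minimise L(G), Gp fixed
            new_total = description_length(np_corpus, model)
            if total - new_total <= eps:                   # no further reduction
                return model
            total = new_total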
Algorithm 2: The structure optimization algorithm

Under the MDL principle, the model description length can be gradually reduced through modification of the structure part of the model, and therefore the overall description length of the model is reduced. Let the model after the parameter optimization process be G, composed of the quasi-dependency-strengths ds(wi→wj).

(1) Sort the quasi-dependency-strengths of model G in ascending order: ds[1], ds[2], ds[3], ...;
(2) Let i = 1 and repeat the following steps:
• Delete the quasi-dependency-strength ds[i] from model G;
• Construct the new model G';
• If [L(NP|G') + L(G')] - [L(NP|G) + L(G)] <= Th_L (Th_L is a selected threshold), then let G = G', i = i + 1 and continue the next cycle; otherwise the cycle ends.

4. The performance analysis

This section takes the N2+N2+N2-type baseNPs (where N2 represents a bi-syllable noun) as the testing data in order to discuss the performance of the quasi-dependency model for structural analysis of baseNPs and of the MDL-based algorithm for quasi-dependency-strength acquisition. The training set includes 7,500 N2+N2+N2-type baseNPs. The close testing set is the 500 baseNPs included in the training set. The open testing set is the 500 baseNPs outside the training set. The testing target is the precision of baseNP structural analysis, that is:

    precision = a / b × 100%

where a is the count of the baseNPs which are correctly analyzed and b is the count of the baseNPs in the testing set.

4.1 The performance of the quasi-dependency model

The experiments show: (1) in the N2+N2+N2-type baseNPs, the left-binding structure is about twice as frequent as the right-binding structure; (2) the analysis precision of the quasi-dependency model is about 7% higher than that of the adjacency model. This conclusion can be explained intuitively through the following example. The structure of the baseNP "doctor dissertation outline" cannot be correctly determined through the adjacency model, because we cannot find that the dependency strength of "doctor dissertation" is stronger than that of "dissertation outline". On the other hand, the structure of the above baseNP can be determined to be "(doctor dissertation) outline" through the quasi-dependency model, because both "doctor dissertation" and "dissertation outline" are dependent word pairs, while "doctor outline" is an independent word pair. Table 2 is the testing result.

Table 2: The analysis precision of N2+N2+N2-type baseNPs

    Testing type   Right-binding   Left-binding   Adjacency model   Quasi-dependency model
    Close test     31.5%           68.5%          84.6%             91.5%
    Open test      32.7%           67.3%          81.5%             88.7%

4.2 The performance of the MDL-based algorithm for quasi-dependency-strength acquisition

The ML algorithm is equivalent to the first parameter optimization process of the MDL algorithm. The MDL process is composed of two iterative optimization steps. In the iterative process, the parameters are optimized gradually and the model is simplified gradually as well. Therefore, the overfitting problem inherent in the ML algorithm is solved to a great extent. In the following, the performance of the ML algorithm and the MDL algorithm is compared through comparing the baseNP analysis precision of the models constructed using the above two algorithms. The precision is listed in Table 3. The experiment shows that the MDL algorithm is superior to the ML algorithm.
Table 3: The performance of the ML algorithm and the MDL algorithm (baseNP analysis precision)

                 ML algorithm   MDL algorithm
    Close test   89.0%          91.5%
    Open test    82.5%          88.7%

5. Implementation issues

The most difficult problem related to the structural analysis of baseNPs is the acquisition of the quasi-dependency-strengths. The proposed algorithm (Algorithm 2) is an unsupervised algorithm, that is, the parameters are estimated over a baseNP corpus which has no structural information. In order to improve the estimation results and speed up the iteration process, some measures are taken during the implementation.

5.1 The pre-assignment of the baseNP structure

The structures of some baseNPs can be determined using linguistic knowledge. Such knowledge includes:

(1) In a baseNP, a word pair which has the following syntactic composition is independent:
• Noun + Adjective: for example, "ground/Noun complicated/Adjective condition", "glass/Noun curved/Adjective pipe";
• Noun + Distinctive: for example, "elementary-school/Noun of-the-right-age/Distinctive child";
• Distinctive + Verb: for example, "large/Distinctive fight/Verb plane", "elementary/Distinctive creep/Verb animal".
(2) If two verbs co-occur in a baseNP, then they are dependent. For example, "(prospect/Verb design/Verb) group", "(Anti-Japanese/Verb save-the-nation/Verb) campaign".

If we preprocess the baseNP corpus using the above knowledge, it is beneficial for the estimation process.

5.2 The complex-feature-based modeling

If the lexicon size is |W|, then the number of parameters of the above word-based acquisition algorithm amounts to |W|². The enormous parameter space will lead to the data sparseness problem during the estimation. Therefore, the paper puts forward the complex-feature-based acquisition algorithm. First, map each word to a complex-feature set according to the multiple features of the word; then, acquire the quasi-dependency-strengths between the complex-feature sets. When analyzing the structure of a baseNP, the strength between the complex-feature sets is used instead of that between the words. In this research, the multiple features include part-of-speech, number of syllables and word sense categories.
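The word-to-feature mapping of Section 5.2 might be realised along the following lines. The feature inventory is the paper's, while the lexicon interfaces, default values and the use of character count for the syllable count are our assumptions:

    def complex_features(word, pos_lexicon, sense_lexicon):
        """Map a word to its complex-feature set: part-of-speech, number of
        syllables and word-sense category (Section 5.2)."""
        return (pos_lexicon.get(word, "N"),    # part-of-speech tag (assumed lexicon)
                len(word),                      # character count; for Chinese words
                                                # this equals the syllable count
                sense_lexicon.get(word, "?"))   # word-sense category (assumed lexicon)

    def ds_between_words(w1, w2, ds_features, pos_lexicon, sense_lexicon):
        """Strength between two words, taken from the strength between their
        complex-feature sets; the parameter space shrinks from |W|^2 to the
        much smaller number of feature-set pairs."""
        f1 = complex_features(w1, pos_lexicon, sense_lexicon)
        f2 = complex_features(w2, pos_lexicon, sense_lexicon)
        return ds_features.get((f1, f2), 0.0)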
6. Conclusions

The paper put forward a quasi-dependency model for structural analysis of Chinese baseNPs, and an MDL-based algorithm for quasi-dependency-strength acquisition. The experiments show that the proposed model is more suitable for Chinese baseNP analysis and that the proposed MDL-based algorithm is superior to the traditional ML-based algorithm. Further research will focus on incorporating more linguistic knowledge into the above statistical model.

References

[1] Church K., A stochastic parts program and noun phrase parser for unrestricted text. In: Proceedings of the Second Conference on Applied Natural Language Processing, 1988.
[2] Lauer M., Conceptual association for compound noun analysis. In: Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, Student Session, Las Cruces, NM, 1994.
[3] Zhai Chengxiang, Fast statistical parsing of noun phrases for document indexing. In: Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, USA: Association for Computational Linguistics, 1997, 311-318.
[4] Stolcke A., Bayesian learning of probabilistic language models. Ph.D. dissertation, Berkeley, California: University of California, 1994.
[5] Solomonoff R., The mechanization of linguistic learning. In: Proceedings of the 2nd International Conference on Cybernetics.

Appendix: An example for quasi-dependency-relation acquisition

(The appendix table lists the sixteen candidate models G for the example baseNP set of Section 3, giving for each model its quasi-dependency-relations, its size |G| and its fitness Num(NP|G).)
A Memory-Based Approach to Learning Shallow Natural Language Patterns

Shlomo Argamon and Ido Dagan and Yuval Krymolowski
Department of Mathematics and Computer Science
Bar-Ilan University
52900 Ramat Gan, Israel
{argamon,dagan,yuvalk}@cs.biu.ac.il

Abstract

Recognizing shallow linguistic patterns, such as basic syntactic relationships between words, is a common task in applied natural language and text processing. The common practice for approaching this task is by tedious manual definition of possible pattern structures, often in the form of regular expressions or finite automata. This paper presents a novel memory-based learning method that recognizes shallow patterns in new text based on a bracketed training corpus. The training data are stored as-is, in efficient suffix-tree data structures. Generalization is performed on-line at recognition time by comparing subsequences of the new text to positive and negative evidence in the corpus. This way, no information in the training is lost, as can happen in other learning systems that construct a single generalized model at the time of training. The paper presents experimental results for recognizing noun phrase, subject-verb and verb-object patterns in English. Since the learning approach enables easy porting to new domains, we plan to apply it to syntactic patterns in other languages and to sub-language patterns for information extraction.

1 Introduction

Identifying local patterns of syntactic sequences and relationships is a fundamental task in natural language processing (NLP). Such patterns may correspond to syntactic phrases, like noun phrases, or to pairs of words that participate in a syntactic relationship, like the heads of a verb-object relation. Such patterns have been found useful in various application areas, including information extraction, text summarization, and bilingual alignment. Syntactic patterns are useful also for many basic computational linguistic tasks, such as statistical word similarity and various disambiguation problems.

One approach for detecting syntactic patterns is to obtain a full parse of a sentence and then extract the required patterns. However, obtaining a complete parse tree for a sentence is difficult in many cases, and may not be necessary at all for identifying most instances of local syntactic patterns.

An alternative approach is to avoid the complexity of full parsing and instead to rely only on local information. A variety of methods have been developed within this framework, known as shallow parsing, chunking, local parsing etc. (e.g., (Abney, 1991; Greffenstette, 1993)). These works have shown that it is possible to identify most instances of local syntactic patterns by rules that examine only the pattern itself and its nearby context. Often, the rules are applied to sentences that were tagged by part-of-speech (POS) and are phrased by some form of regular expressions or finite state automata.

Manual writing of local syntactic rules has become a common practice for many applications. However, writing rules is often tedious and time consuming. Furthermore, extending the rules to different languages or sub-language domains can require substantial resources and expertise that are often not available. As in many areas of NLP, a learning approach is appealing. Surprisingly, though, rather little work has been devoted to learning local syntactic patterns, mostly noun phrases (Ramshaw and Marcus, 1995; Vilain and Day, 1996).
This paper presents a novel general learning approach for recognizing local sequential patterns, that may be perceived as falling within the memory-based learning paradigm. The method utilizes a part-of-speech tagged training corpus in which all instances of the target pattern are marked (bracketed). The training data are stored as-is in suffix-tree data structures, which enable linear time searching for subsequences in the corpus.

The memory-based nature of the presented algorithm stems from its deduction strategy: a new instance of the target pattern is recognized by examining the raw training corpus, searching for positive and negative evidence with respect to the given test sequence. No model is created for the training corpus, and the raw examples are not converted to any other representation.

Consider the following example.¹ Suppose we want to decide whether the candidate sequence

    DT ADJ ADJ NN NNP

is a noun phrase (NP) by comparing it to the training corpus. A good match would be if the entire sequence appears as-is several times in the corpus. However, due to data sparseness, an exact match cannot always be expected. A somewhat weaker match may be obtained if we consider sub-parts of the candidate sequence (called tiles). For example, suppose the corpus contains noun phrase instances with the following structures:

    (1) DT ADJ ADJ NN NN
    (2) DT ADJ NN NNP

The first structure provides positive evidence that the sequence "DT ADJ ADJ NN" is a possible NP prefix while the second structure provides evidence for "ADJ NN NNP" being an NP suffix. Together, these two training instances provide positive evidence that covers the entire candidate. Considering evidence for sub-parts of the pattern enables us to generalize over the exact structures that are present in the corpus. Similarly, we also consider the negative evidence for such sub-parts by noting where they occur in the corpus without being a corresponding part of a target instance.

¹We use here the POS tags: DT = determiner, ADJ = adjective, ADV = adverb, CONJ = conjunction, VB = verb, PP = preposition, NN = singular noun, and NNP = plural noun.

The proposed method, as described in detail in the next section, formalizes this type of reasoning. It searches specialized data structures for both positive and negative evidence for sub-parts of the candidate structure, and considers additional factors such as context and evidence overlap. Section 3 presents experimental results for three target syntactic patterns in English, and Section 4 describes related work.

2 The Algorithm

The input to the Memory-Based Sequence Learning (MBSL) algorithm is a sentence represented as a sequence of POS tags, and its output is a bracketed sentence, indicating which subsequences of the sentence are to be considered instances of the target pattern (target instances). MBSL determines the bracketing by first considering each subsequence of the sentence as a candidate to be a target instance. It computes a score for each candidate by comparing it to the training corpus, which consists of a set of pre-bracketed sentences. The algorithm then finds a consistent bracketing for the input sentence, giving preference to high scoring subsequences. In the remainder of this section we describe the scoring and bracketing methods in more detail.

2.1 Scoring candidates

We first describe the mechanism for scoring an individual candidate.
The input is a candidate subsequence, along with its context, i.e., the other tags in the input sentence. The method is presented at two levels: a general memory-based learning schema and a particular instantiation of it. Further instantiations of the schema are expected in future work.

2.1.1 The general MBSL schema

The MBSL scoring algorithm works by considering situated candidates. A situated candidate is a sentence containing one pair of brackets, indicating a candidate to be a target instance. The portion of the sentence between the brackets is the candidate (as above), while the portion before and after the candidate is its context. (Although we describe the algorithm here for the general case of unlimited context, for computational reasons our implementation only considers a limited amount of context on either side of the candidate.) This subsection describes how to compute the score of a situated candidate from the training corpus.

The idea of the MBSL scoring algorithm is to construct a tiling of subsequences of a situated candidate which covers the entire candidate. We consider as tiles subsequences of the situated candidate which contain a bracket. (We thus consider only tiles within or adjacent to the candidate that also include a candidate boundary.)

Each tile is assigned a score based on its occurrence in the training memory. Since brackets correspond to the boundaries of potential target instances, it is important to consider how the bracket positions in the tile correspond to those in the training memory. For example, consider the training sentence

    [ NN ] VB [ ADJ NN NN ] ADV PP [ NN ]

We may now examine the occurrence in this sentence of several possible tiles: VB [ ADJ NN occurs positively in the sentence, and NN NN ] ADV also occurs positively, while NN [ NN ADV occurs negatively in the training sentence, since the bracket does not correspond.

The positive evidence for a tile is measured by its positive count, the number of times the tile (including brackets) occurs in the training memory with corresponding brackets. Similarly, the negative evidence for a tile is measured by its negative count, the number of times that the POS sequence of the tile occurs in the training memory with non-corresponding brackets (either brackets in the training where they do not occur in the tile, or vice versa). The total count of a tile is its positive count plus its negative count, that is, the total count of the POS sequence of the tile, regardless of bracket position. The score f(t) of a tile t is a function of its positive and negative counts.

    Candidate:  NN VB [ ADJ NN NN ] ADV
    MTile 1:       VB [ ADJ NN NN ]
    MTile 2:       VB [ ADJ
    MTile 3:          [ ADJ NN
    MTile 4:                NN NN ]
    MTile 5:                   NN ] ADV

Figure 1: A candidate subsequence with some of its context, and 5 matching tiles found in the training corpus.

The overall score of a situated candidate is generally a function of the scores of all the tiles for the candidate, as well as the relations between the tiles' positions. These relations include tile adjacency, overlap between tiles, the amount of context in a tile, and so on.

2.1.2 An instantiation of the MBSL schema

In our instantiation of the MBSL schema, we define the score f(t) of a tile t as the ratio of its positive count pos(t) and its total count total(t):

    f(t) = 1 if pos(t)/total(t) >= θ, and 0 otherwise

for a predefined threshold θ. Tiles with a score of 1, and so with sufficient positive evidence, are called matching tiles.
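The counts and the threshold test can be made concrete with the following sketch, which scans a toy corpus of bracketed tag sequences. The list representation and helper names are our assumptions; the actual system uses suffix trees (Section 2.2):

    def occurrences(seq, sub):
        """Count occurrences of sub as a contiguous subsequence of seq."""
        n, m = len(seq), len(sub)
        return sum(1 for i in range(n - m + 1) if seq[i:i + m] == sub)

    def tile_score(tile, bracketed_corpus, theta=0.6):
        """f(t) = 1 iff pos(t)/total(t) >= theta. A tile is a tag sequence
        containing '[' or ']'. pos(t) counts matches in the bracketed corpus;
        total(t) counts matches of the bare POS sequence, brackets ignored."""
        pos = sum(occurrences(sent, tile) for sent in bracketed_corpus)
        bare = [t for t in tile if t not in ("[", "]")]
        unbracketed = [[t for t in sent if t not in ("[", "]")]
                       for sent in bracketed_corpus]
        total = sum(occurrences(sent, bare) for sent in unbracketed)
        return 1 if total > 0 and pos / total >= theta else 0

    corpus = [["[", "NN", "]", "VB", "[", "ADJ", "NN", "NN", "]",
               "ADV", "PP", "[", "NN", "]"]]
    print(tile_score(["VB", "[", "ADJ", "NN"], corpus))  # 1: one agreeing match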
Each matching tile gives supporting evidence that a part of the candidate can be a part of a target instance. In order to combine this evidence, we try to cover the entire candidate by a set of matching tiles, with no gaps. Such a covering constitutes evidence that the entire candidate is a target instance. For example, consider the matching tiles shown for the candidate in Figure 1. The set of matching tiles 2, 4, and 5 covers the candidate, as does the set of tiles 1 and 5. Also note that tile 1 constitutes a cover on its own.

To make this precise, we first say that a tile T1 connects to a tile T2 if (i) T2 starts after T1 starts, (ii) there is no gap between the end of T1 and the start of T2 (there may be some overlap), and (iii) T2 ends after T1 (neither tile includes the other). For example, tiles 2 and 4 in the figure connect, while tiles 2 and 5 do not, and neither do tiles 1 and 4 (since tile 1 includes tile 4 as a subsequence).

A cover for a situated candidate c is a sequence of matching tiles which collectively cover the entire candidate, including the boundary brackets, and possibly some context, such that each tile connects to the following one. A cover thus provides positive evidence for the entire sequence of tags in the candidate.

The set of all the covers for a candidate summarizes all of the evidence for the candidate being a target instance. We therefore compute the score of a candidate as a function of some statistics of the set of all its covers. For example, if a candidate has many different covers, it is more likely to be a target instance, since many different pieces of evidence can be brought to bear.

We have empirically found several statistics of the cover set to be useful. These include, for each cover, the number of tiles it contains, the total number of context tags it contains, and the number of positions which more than one tile covers (the amount of overlap). We thus compute, for the set of all covers of a candidate c, the

• total number of different covers, num(c),
• minimum number of matches in any cover, minsize(c),
• maximum amount of context in any cover, maxcontext(c), and
• maximum total overlap between tiles for any cover, maxoverlap(c).

Each of these items gives an indication regarding the overall strength of the cover-based evidence for the candidate. The score of the candidate is a linear function of its statistics:

    f(c) = α·num(c) - β·minsize(c) + γ·maxcontext(c) + δ·maxoverlap(c)

If candidate c has no covers, we set f(c) = 0. Note that minsize is weighted negatively, since a cover with fewer tiles provides stronger evidence for the candidate.

In the current implementation, the weights were chosen so as to give a lexicographic ordering, preferring first candidates with more covers, then those with covers containing fewer tiles, then those with larger contexts, and finally, when all else is equal, preferring candidates with more overlap between tiles. We plan to investigate in the future a data-driven approach (based on the Winnow algorithm) for optimal selection and weighting of statistical features of the score.

We compute a candidate's statistics efficiently by performing a depth-first traversal of the cover graph of the candidate. The cover graph is a directed acyclic graph (DAG) whose nodes represent matching tiles of the candidate, such that an arc exists between nodes n and n' if tile n connects to n'. A special start node is added as the root of the DAG, that connects to all of the nodes (tiles) that contain an open bracket. There is a cover corresponding to each path from the start node to a node (tile) that contains a close bracket. Thus the statistics of all the covers may be efficiently computed by traversing the cover graph.
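A sketch of the cover enumeration by depth-first search, with tiles reduced to (start, end) index spans over the situated candidate; the span representation and the computation of context from only the first and last tile are simplifications of ours:

    def connects(t1, t2):
        """t2 starts after t1 starts, there is no gap between t1's end and
        t2's start (overlap allowed), and t2 ends after t1."""
        return t2[0] > t1[0] and t2[0] <= t1[1] + 1 and t2[1] > t1[1]

    def cover_stats(tiles, open_pos, close_pos):
        """Enumerate covers by DFS over the 'connects' relation and collect
        the statistics used in the candidate score. A cover runs from a tile
        containing the open bracket to one containing the close bracket."""
        covers = []
        def dfs(path):
            last = path[-1]
            if last[0] <= close_pos <= last[1]:   # reached a close-bracket tile
                covers.append(list(path))
            for t in tiles:
                if connects(last, t):
                    dfs(path + [t])
        for t in tiles:
            if t[0] <= open_pos <= t[1]:          # start from open-bracket tiles
                dfs([t])
        if not covers:
            return None
        overlap = lambda c: sum(max(0, c[i][1] - c[i + 1][0] + 1)
                                for i in range(len(c) - 1))
        context = lambda c: max(0, open_pos - c[0][0]) + max(0, c[-1][1] - close_pos)
        return {"num": len(covers),
                "minsize": min(len(c) for c in covers),
                "maxcontext": max(context(c) for c in covers),
                "maxoverlap": max(overlap(c) for c in covers)}

    # toy candidate spanning positions 2..8, with brackets at 2 and 8
    tiles = [(1, 6), (1, 3), (2, 5), (4, 8), (6, 9)]
    print(cover_stats(tiles, open_pos=2, close_pos=8))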
A special start node is added as the root of the DAG, that connects to all of the nodes (tiles) that contain an open bracket. There is a cover corresponding to each path from the start node to a node (tile) that contains a close bracket. Thus the statistics of all the covers may be efficiently computed by traversing the cover graph. 69 2.1.3 Summary Given a candidate sequence and its context (a situ- ated candidate): 1. Consider all the subsequences of the situated candidate which include a bracket as tiles; 2. Compute a tile score as a function of its positive count and total counts, by searching the train- ing corpus. Determine which tiles are matching tiles; 3. Construct the set of all possible covers for the candidate, that is, sequences of connected matching tiles that cover the entire candidate; 4. Compute the candidate score based on the statistics of its covers. 2.2 Searching the training memory The MBSL scoring algorithm searches the training corpus for each subsequence of the sentence in or- der to find matching tiles. Implementing this search efficiently is therefore of prime importance. We do so by encoding the training corpus using suffix trees (Edward and McCreight, 1976), which provide string searching in time which is linear in the length of the searched string. Inspired by Satta (1997), we build two suffix trees for retrieving the positive and total counts for a tile. The first suffix tree holds all pattern instances from the training corpus surrounded by bracket symbols and a fixed amount of context. Searching a given tile (which includes a bracket symbol) in this tree yields the positive count for the tile. The second suffix tree holds an unbracketed version of the en- tire training corpus. This tree is used for searching the POS sequence of a tile, with brackets omitted, yielding the total count for the tile (recall that the negative count is the difference between the total and positive counts). 2.3 Selecting candidates After the above procedure, each situated candidate is assigned a score. In order to select a bracketing for the input sentence, we assume that target instances are non-overlapping (this is usually the case for the types of patterns with which we experimented). We use a simple constraint propagation algorithm that finds the best choice of non-overlapping candidates in an input sentence: 1. Examine each situated candidate c with f(c) > 0, in descending order of f(c): (a) Add c's brackets to the sentence; (b) Remove all situated candidates overlapping with c which have not yet been examined. 2. Return the bracketed sentence. NP VO SV NP VO SV Train Data: sentences words 8936 229598 16397 454375 16397 454375 patterns 54760 14271 25024 Test Data: sentences words patterns 2012 51401 12335 1921 53604 1626 1921 53604 3044 Table 1: Sizes of training and test data Len 1 16959 31 2 21577 39 3203 22 7613 30 3 10264 19 5922 41 7265 29 4 3630 7 2952 21 3284 13 5 1460 3 1242 9 1697 7 6 521 1 506 4 1112 4 7 199 0 242 2 806 3 8 69 0 119 1 ,592 2 9 40 0 44 0 446 2 10 18 0 20 0 392 2 >10 23 0 23 0 1917 8 total 54760 14271 25024 avg. len 2.2 3.4 4.5 Table 2: Distribution of pattern lengths, total num- ber of patterns and average length in the training data. 3 Evaluation 3.1 The Data We have tested our algorithm in recognizing three syntactic patterns: noun phrase sequences (NP), verb-object (VO), and subject-verb (SV) relations. The NP patterns were delimited by ' [' and ']' symbols at the borders of the phrase. 
3 Evaluation

3.1 The Data

We have tested our algorithm in recognizing three syntactic patterns: noun phrase sequences (NP), verb-object (VO), and subject-verb (SV) relations. The NP patterns were delimited by '[' and ']' symbols at the borders of the phrase. For VO patterns, we have put the starting delimiter before the main verb and the ending delimiter after the object head, thus covering the whole noun phrase comprising the object; for example:

    ... investigators started to [ view the lower price levels ] as attractive ...

We used a similar policy for SV patterns, defining the start of the pattern at the start of the subject noun phrase and the end at the first verb encountered (not including auxiliaries and modals); for example:

    ... argue that [ the U.S. should regulate ] the class ...

The subject and object noun-phrase borders were those specified by the annotators; phrases which contain conjunctions or appositives were not further analyzed. The training and testing data were derived from the Penn TreeBank. We used the NP data prepared by Ramshaw and Marcus (1995), hereafter RM95. The SV and VO data were obtained using T (TreeBank's search script language) scripts.² Table 1 summarizes the sizes of the training and test data sets and the number of examples in each.

²The scripts may be found at the URL http://www.cs.biu.ac.il/~yuvalk/MBSL.

                  NP       VO       SV
    Train Data:
      sentences   8936     16397    16397
      words       229598   454375   454375
      patterns    54760    14271    25024
    Test Data:
      sentences   2012     1921     1921
      words       51401    53604    53604
      patterns    12335    1626     3044

Table 1: Sizes of training and test data

The T scripts did not attempt to match dependencies over very complex structures, since we are concerned with shallow, or local, patterns. Table 2 shows the distribution of pattern length in the train data. We also did not attempt to extract passive-voice VO relations.

    Len        NP          VO         SV
    1          16959 31%   -          -
    2          21577 39%   3203 22%   7613 30%
    3          10264 19%   5922 41%   7265 29%
    4          3630   7%   2952 21%   3284 13%
    5          1460   3%   1242  9%   1697  7%
    6          521    1%   506   4%   1112  4%
    7          199    0%   242   2%   806   3%
    8          69     0%   119   1%   592   2%
    9          40     0%   44    0%   446   2%
    10         18     0%   20    0%   392   2%
    >10        23     0%   23    0%   1917  8%
    total      54760       14271      25024
    avg. len   2.2         3.4        4.5

Table 2: Distribution of pattern lengths, total number of patterns and average length in the training data.

3.2 Testing Methodology

The test procedure has two parameters: (a) maximum context size of a candidate, which limits what queries are performed on the memory, and (b) the threshold θ used for establishing a matching tile, which determines how to make use of the query results.

Recall and precision figures were obtained for various parameter values. F_β (van Rijsbergen, 1979), a common measure in information retrieval, was used as a single-figure measure of performance:

    F_β = (β² + 1) · P · R / (β² · P + R)

We use β = 1, which gives no preference to either recall or precision.
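For reference, the measure is computed as follows (a direct transcription of the formula; the code is ours):

    def f_beta(precision, recall, beta=1.0):
        """F-measure of van Rijsbergen (1979): ((b^2 + 1) P R) / (b^2 P + R)."""
        if precision == 0 and recall == 0:
            return 0.0
        return (beta ** 2 + 1) * precision * recall / (beta ** 2 * precision + recall)

    print(round(f_beta(77.1, 89.8), 1))  # 83.0, the VO entry of Table 3 below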
3.3 Results

Table 3 summarizes the optimal parameter settings and results for NP, VO, and SV on the test set. In order to find the optimal values of the context size and threshold, we tried 0.1 <= θ <= 0.95, and maximum context sizes of 1, 2, and 3. Our experiments used 5-fold cross-validation on the training data to determine the optimal parameter settings.

               Con.  Thresh.  Recall (%)  Precision (%)  F_β=1  Breakeven
    VO         2     0.5      89.8        77.1           83.0   81.3
    SV         3     0.6      84.5        88.6           86.5   86.1
    NP         3     0.6      91.6        91.6           91.6   91.4
    RM95 (NP)  -     -        92.3        91.8           92.0   -

Table 3: Results with optimal parameter settings for context size and threshold, and breakeven points. The last line shows the results of Ramshaw and Marcus (1995) (recognizing NPs) with the same train/test data. The optimal parameters were obtained by 5-fold cross-validation.

In experimenting with the maximum context size parameter, we found that the difference between the values of F_β for context sizes of 2 and 3 is less than 0.5% for the optimal threshold. Scores for a context size of 1 yielded F_β values smaller by more than 1% than the values for the larger contexts.

Figure 2 shows recall/precision curves for the three data sets, obtained by varying θ while keeping the maximum context size at its optimal value. The difference between F_β=1 values for different thresholds was always less than 2%.

Figure 2: Recall-Precision curves for NP, VO, and SV; 0.1 <= θ <= 0.99.

Performance may be measured also on a word-by-word basis, counting as a success any word which was identified correctly as being part of the target pattern. That method was employed, along with recall/precision, by RM95. We preferred to measure performance by recall and precision for complete patterns. Most errors involved identifications of slightly shifted, shorter or longer sequences. Given a pattern consisting of five words, for example, identifying only a four-word portion of this pattern would yield both a recall and a precision error. Tag-assignment scoring, on the other hand, will give it a score of 80%. We hold the view that such an identification is an error, rather than a partial success.

We used the datasets created by RM95 for NP learning; their results are shown in Table 3.³ The F_β difference is small (0.4%), yet they use a richer feature set, which incorporates lexical information as well. The method of Ramshaw and Marcus makes a decision per word, relying on predefined rule templates. The method presented here makes decisions on sequences and uses sequences as its memory, thereby attaining a dynamic perspective of the pattern structure. We aim to incorporate lexical information as well in the future; it is still unclear whether that will improve the results.

³Notice that our results, as well as those we cite from RM95, pertain to a training set of 229,000 words. RM95 report also results for a larger training set, of 950,000 words, for which recall/precision is 93.5%/93.1%, correspondingly (F_β = 93.3%). Our system needs to be further optimized in order to handle that amount of data, though our major concern in future work is to reduce the overall amount of labeled training data.

Figure 3 shows the learning curves by amount of training examples and number of words in the training data, for particular parameter settings.

Figure 3: Learning curves for NP, VO, and SV by number of examples (left) and words (right).

4 Related Work

Two previous methods for learning local syntactic patterns follow the transformation-based paradigm introduced by Brill (1992). Vilain and Day (1996) identify (and classify) name phrases such as company names, locations, etc. Ramshaw and Marcus (1995) detect noun phrases, by classifying each word as being inside a phrase, outside or on the boundary between phrases.

Finite state machines (FSMs) are a natural formalism for learning linear sequences. They have been used for learning linguistic structures other than shallow syntax. Gold (1978) showed that learning regular languages from positive examples is undecidable in the limit. Recently, however, several learning methods have been proposed for restricted classes of FSM. OSTIA (Onward Subsequential Transducer Inference Algorithm; Oncina, Garcia, and Vidal 1993) learns a subsequential transducer in the limit. This algorithm was used for natural-language tasks by Vilar, Marzal, and Vidal (1994) for learning translation of a limited-domain language, as well as by Gildea and Jurafsky (1994) for learning phonological rules. Ahonen et al. (1994) describe an algorithm for learning (k,h)-contextual regular languages, which they use for learning the structure of SGML documents.
Apart from deterministic FSMs, there are a number of algorithms for learning stochastic models, e.g., (Stolcke and Omohundro, 1992; Carrasco and Oncina, 1994; Ron et al., 1995). These algorithms differ mainly by their state-merging strategies, used for generalizing from the training data.

A major difference between the abovementioned learning methods and our memory-based approach is that the former employ generalized models that were created at training time while the latter uses the training corpus as-is and generalizes only at recognition time.

Much work aimed at learning models for full parsing, i.e., learning hierarchical structures. We refer here only to the DOP (Data Oriented Parsing) method (Bod, 1992) which, like the present work, is a memory-based approach. This method constructs parse alternatives for a sentence based on combinations of subtrees in the training corpus. The MBSL approach may be viewed as a linear analogy to DOP in that it constructs a cover for a candidate based on subsequences of training instances.

Other implementations of the memory-based paradigm for NLP tasks include Daelemans et al. (1996), for POS tagging; Cardie (1993), for syntactic and semantic tagging; and Stanfill and Waltz (1986), for word pronunciation. In all these works, examples are represented as sets of features and the deduction is carried out by finding the most similar cases. The method presented here is radically different in that it makes use of the raw sequential form of the data, and generalizes by reconstructing test examples from different pieces of the training data.

5 Conclusions

We have presented a novel general schema and a particular instantiation of it for learning sequential patterns. Applying the method to three syntactic patterns in English yielded positive results, suggesting its applicability for recognizing local linguistic patterns. In future work we plan to investigate a data-driven approach for optimal selection and weighting of statistical features of candidate scores, as well as to apply the method to syntactic patterns of Hebrew and to domain-specific patterns for information extraction.

6 Acknowledgements

The authors wish to thank Yoram Singer for his collaboration in an earlier phase of this research project, and Giorgio Satta for helpful discussions. We also thank the anonymous reviewers for their instructive comments. This research was supported in part by grant 498/95-1 from the Israel Science Foundation, and by grant 8560296 from the Israeli Ministry of Science.

References

S. P. Abney. 1991. Parsing by chunks. In R. C. Berwick, S. P. Abney, and C. Tenny, editors, Principle-Based Parsing: Computation and Psycholinguistics, pages 257-278. Kluwer, Dordrecht.

H. Ahonen, H. Mannila, and E. Nikunen. 1994. Forming grammars for structured documents: An application of grammatical inference. In R. C. Carrasco and J. Oncina, editors, Grammatical Inference and Applications (ICGI-94), pages 153-167. Springer, Berlin, Heidelberg.

R. Bod. 1992. A computational model of language performance: Data oriented parsing. In Coling, pages 855-859, Nantes, France.

E. Brill. 1992. A simple rule-based part of speech tagger. In Proc. of the DARPA Workshop on Speech and Natural Language.

C. Cardie. 1993. A case-based approach to knowledge acquisition for domain-specific sentence analysis. In Proceedings of the 11th National Conference on Artificial Intelligence, pages 798-803, Menlo Park, CA, USA, July. AAAI Press.

R. C. Carrasco and J. Oncina. 1994.
Learning stochastic regular grammars by means of a state merging method. In R. C. Carrasco and J. Oncina, editors, Grammatical Inference and Applications (ICGI-94), pages 139-152. Springer, Berlin, Heidelberg.

W. Daelemans, J. Zavrel, P. Berck, and S. Gillis. 1996. MBT: A memory-based part of speech tagger generator. In Eva Ejerhed and Ido Dagan, editors, Proceedings of the Fourth Workshop on Very Large Corpora, pages 14-27. ACL SIGDAT.

D. Gildea and D. Jurafsky. 1994. Automatic induction of finite state transducers for simple phonological rules. Technical Report TR-94-052, International Computer Science Institute, Berkeley, CA, October.

E. M. Gold. 1978. Complexity of automaton identification from given data. Information and Control, 37:302-320.

Gregory Greffenstette. 1993. Evaluation techniques for automatic semantic extraction: Comparing syntactic and window based approaches. In ACL Workshop on Acquisition of Lexical Knowledge From Text, Ohio State University, June.

Edward M. McCreight. 1976. A space-economical suffix tree construction algorithm. Journal of the ACM, 23(2):262-272, April.

L. A. Ramshaw and M. P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the Third Workshop on Very Large Corpora.

D. Ron, Y. Singer, and N. Tishby. 1995. On the learnability and usage of acyclic probabilistic finite automata. In Proceedings of the 8th Annual Conference on Computational Learning Theory (COLT'95), pages 31-40, New York, NY, USA, July. ACM Press.

G. Satta. 1997. String transformation learning. In Proc. of the ACL/EACL Annual Meeting, pages 444-451, Madrid, Spain, July.

C. Stanfill and D. Waltz. 1986. Toward memory-based reasoning. Communications of the ACM, 29(12):1213-1228, December.

A. Stolcke and S. Omohundro. 1992. Hidden Markov model induction by Bayesian model merging. In Proceedings of Neural Information Processing Systems 5 (NIPS-5).

C. J. van Rijsbergen. 1979. Information Retrieval. Butterworths.

M. B. Vilain and D. S. Day. 1996. Finite-state phrase parsing by rule sequences. In Proc. of COLING, Copenhagen, Denmark.
Text Segmentation Using Reiteration and Collocation

Amanda C. Jobbins
Department of Computing
Nottingham Trent University
Nottingham NG1 4BU, UK
[email protected]

Lindsay J. Evett
Department of Computing
Nottingham Trent University
Nottingham NG1 4BU, UK
[email protected]

Abstract

A method is presented for segmenting text into subtopic areas. The proportion of related pairwise words is calculated between adjacent windows of text to determine their lexical similarity. The lexical cohesion relations of reiteration and collocation are used to identify related words. These relations are automatically located using a combination of three linguistic features: word repetition, collocation and relation weights. This method is shown to successfully detect known subject changes in text and corresponds well to the segmentations placed by test subjects.

Introduction

Many examples of heterogeneous data can be found in daily life. The Wall Street Journal archives, for example, consist of a series of articles about different subject areas. Segmenting such data into distinct topics is useful for information retrieval, where only those segments relevant to a user's query can be retrieved. Text segmentation could also be used as a pre-processing step in automatic summarisation. Each segment could be summarised individually and then combined to provide an abstract for a document.

Previous work on text segmentation has used term matching to identify clusters of related text. Salton and Buckley (1992) and later, Hearst (1994) extracted related text portions by matching high frequency terms. Yaari (1997) segmented text into a hierarchical structure, identifying sub-segments of larger segments. Ponte and Croft (1997) used word co-occurrences to expand the number of terms for matching. Reynar (1994) compared all words across a text rather than the more usual nearest neighbours. A problem with using word repetition is that inappropriate matches can be made because of the lack of contextual information (Salton et al., 1994). Another approach to text segmentation is the detection of semantically related words. Hearst (1993) incorporated semantic information derived from WordNet but in later work reported that this information actually degraded word repetition results (Hearst, 1994). Related words have been located using spreading activation on a semantic network (Kozima, 1993), although only one text was segmented. Another approach extracted semantic information from Roget's Thesaurus (RT). Lexical cohesion relations (Halliday and Hasan, 1976) between words were identified in RT and used to construct lexical chains of related words in five texts (Morris and Hirst, 1991). It was reported that the lexical chains closely correlated to the intentional structure (Grosz and Sidner, 1986) of the texts, where the start and end of chains coincided with the intention ranges. However, RT does not capture all types of lexical cohesion relations. In previous work, it was found that collocation (a lexical cohesion relation) was under-represented in the thesaurus. Furthermore, this process was not automated and relied on subjective decision making.

Following Morris and Hirst's work, a segmentation algorithm was developed based on identifying lexical cohesion relations across a text. The proposed algorithm is fully automated, and a quantitative measure of the association between words is calculated.
This algorithm utilises linguistic features additional to those captured in the thesaurus to identify the other types of lexical cohesion relations that can exist in text.

1 Background Theory: Lexical Cohesion

Cohesion concerns how words in a text are related. The major work on cohesion in English was conducted by Halliday and Hasan (1976). An instance of cohesion between a pair of elements is referred to as a tie. Ties can be anaphoric or cataphoric, and located at both the sentential and supra-sentential level. Halliday and Hasan classified cohesion under two types: grammatical and lexical. Grammatical cohesion is expressed through the grammatical relations in text such as ellipsis and conjunction. Lexical cohesion is expressed through the vocabulary used in text and the semantic relations between those words. Identifying semantic relations in a text can be a useful indicator of its conceptual structure.

Lexical cohesion is divided into three classes: general noun, reiteration and collocation. General noun's cohesive function is both grammatical and lexical, although Halliday and Hasan's analysis showed that this class plays a minor cohesive role. Consequently, it was not further considered. Reiteration is subdivided into four cohesive effects: word repetition (e.g. ascent and ascent), synonym (e.g. ascent and climb) which includes near-synonym and hyponym, superordinate (e.g. ascent and task) and general word (e.g. ascent and thing). The effect of general word is difficult to automatically identify because no common referent exists between the general word and the word to which it refers. A collocation is a predisposed combination of words, typically pairwise words, that tend to regularly co-occur (e.g. orange and peel). All semantic relations not classified under the class of reiteration are attributed to the class of collocation.

2 Identifying Lexical Cohesion

To automatically detect lexical cohesion ties between pairwise words, three linguistic features were considered: word repetition, collocation and relation weights. The first two methods represent lexical cohesion relations. Word repetition is a component of the lexical cohesion class of reiteration, and collocation is a lexical cohesion class in its entirety. The remaining types of lexical cohesion considered include synonym and superordinate (the cohesive effect of general word was not included). These types can be identified using relation weights (Jobbins and Evett, 1998).

Word repetition: Word repetition ties in lexical cohesion are identified by same word matches and matches on inflections derived from the same stem. An inflected word was reduced to its stem by look-up in a lexicon (Keenan and Evett, 1989) comprising inflection and stem word pair records (e.g. "orange oranges").

Collocation: Collocations were extracted from a seven million word sample of the Longman English Language Corpus using the association ratio (Church and Hanks, 1990) and output to a lexicon. Collocations were automatically located in a text by looking up pairwise words in this lexicon. Figure 1 shows the record for the headword orange followed by its collocates. For example, the pairwise words orange and peel form a collocation.

    orange: free green lemon peel red state yellow

Figure 1: Excerpt from the collocation lexicon.

Relation Weights: Relation weights quantify the amount of semantic relation between words based on the lexical organisation of RT (Jobbins and Evett, 1995). A thesaurus is a collection of synonym groups, indicating that synonym relations are captured, and the hierarchical structure of RT implies that superordinate relations are also captured. An alphabetically-ordered index of RT was generated, referred to as the Thesaurus Lexicon (TLex). Relation weights for pairwise words are calculated based on the satisfaction of one or more of four possible connections in TLex.
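The three features can be combined into a single relatedness test along the following lines; the lexicon formats and the treatment of relation weights as a thresholded score are our assumptions:

    def related(w1, w2, stem_lexicon, collocation_lexicon,
                relation_weight, weight_threshold=0.0):
        """Decide whether a lexical cohesion tie holds between w1 and w2,
        using the three features of Section 2: word repetition (same stem),
        collocation (lexicon lookup) and relation weights (thesaurus)."""
        if stem_lexicon.get(w1, w1) == stem_lexicon.get(w2, w2):  # word repetition
            return True
        if w2 in collocation_lexicon.get(w1, ()) or \
           w1 in collocation_lexicon.get(w2, ()):                  # collocation
            return True
        return relation_weight(w1, w2) > weight_threshold          # relation weight

    stems = {"oranges": "orange"}
    collocations = {"orange": {"free", "green", "lemon", "peel",
                               "red", "state", "yellow"}}
    print(related("oranges", "orange", stems, collocations, lambda a, b: 0.0))  # True
    print(related("orange", "peel", stems, collocations, lambda a, b: 0.0))     # True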
3 Proposed Segmentation Algorithm

The proposed segmentation algorithm compares adjacent windows of sentences and determines their lexical similarity. A window size of three sentences was found to produce the best results. Multiple sentences were compared because calculating lexical similarity between words is too fine (Rotondo, 1984) and between individual sentences is unreliable (Salton and Buckley, 1991). Lexical similarity is calculated for each window comparison based on the proportion of related words, and is given as a normalised score. Word repetitions are identified between identical words and words derived from the same stem. Collocations are located by looking up word pairs in the collocation lexicon. Relation weights are calculated between pairwise words according to their location in RT. The lexical similarity score indicates the amount of lexical cohesion demonstrated by two windows. Scores plotted on a graph show a series of peaks (high scores) and troughs (low scores). Low scores indicate a weak level of cohesion. Hence, a trough signals a potential subject change, and texts can be segmented at these points.
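A minimal sketch of the windowed comparison, reusing the related() test above; the pair-proportion normalisation and the strict-local-minimum trough test are our guesses at the paper's scoring details:

    def segment(sentences, related, window=3):
        """Score each boundary by the proportion of related pairwise words
        between the adjacent windows of sentences on either side; a local
        minimum (trough) in the scores signals a potential subject change."""
        scores = []
        for b in range(window, len(sentences) - window + 1):
            left = [w for s in sentences[b - window:b] for w in s]
            right = [w for s in sentences[b:b + window] for w in s]
            pairs = [(x, y) for x in left for y in right]
            hits = sum(1 for x, y in pairs if related(x, y))
            scores.append((b, hits / len(pairs) if pairs else 0.0))
        return [b for i, (b, sc) in enumerate(scores)
                if 0 < i < len(scores) - 1
                and sc < scores[i - 1][1] and sc < scores[i + 1][1]]

    # toy demo: two topics with disjoint vocabularies
    text = [["a", "b"]] * 5 + [["x", "y"]] * 5
    print(segment(text, related=lambda u, v: u == v))  # [5]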
4 Experiment 1: Locating Subject Change

An investigation was conducted to determine whether the segmentation algorithm could reliably locate subject change in text.

Method: Seven topical articles of between 250 and 450 words in length were extracted from the World Wide Web. A total of 42 texts for test data were generated by concatenating pairs of these articles. Hence, each generated text consisted of two articles. The transition from the first article to the second represented a known subject change point. Previous work has used the breaks between concatenated texts to evaluate the performance of text segmentation algorithms (Reynar, 1994; Stairmand, 1997). For each text, the troughs placed by the segmentation algorithm were compared to the location of the known subject change point in that text. An error margin of one sentence either side of this point, determined by empirical analysis, was allowed.

Results: Table 1 gives the results for the comparison of the troughs placed by the segmentation algorithm to the known subject change points.

    linguistic feature(s)                               troughs placed       subject change points located
                                                        average   std. dev.  (out of 42 poss.)
    word repetition + collocation                       7.1       3.16       41 (97.6%)
    word repetition + relation weights                  7.3       5.22       41 (97.6%)
    word repetition                                     8.5       3.62       41 (97.6%)
    collocation + relation weights                      5.8       3.70       40 (95.2%)
    word repetition + collocation + relation weights    6.4       4.72       40 (95.2%)
    relation weights                                    7.0       4.23       39 (92.9%)
    collocation                                         6.3       3.83       35 (83.3%)

    Table 1. Comparison of segmentation algorithm using different linguistic features.

Discussion: The segmentation algorithm using the linguistic features word repetition and collocation in combination achieved the best result. A total of 41 out of a possible 42 known subject change points were identified from the least number of troughs placed per text (7.1). For the text where the known subject change point went undetected, a total of three troughs were placed at sentences 6, 11 and 18. The subject change point occurred at sentence 13, just two sentences after a predicted subject change at sentence 11. In this investigation, word repetition alone achieved better results than using either collocation or relation weights individually. The combination of word repetition with another linguistic feature improved on its individual result, placing fewer troughs per text.

5 Experiment 2: Test Subject Evaluation

The objective of the current investigation was to determine whether all troughs coincide with a subject change. The troughs placed by the algorithm were compared to the segmentations identified by test subjects for the same texts.

Method: Twenty texts were randomly selected for test data, each consisting of approximately 500 words. These texts were presented to seven test subjects, who were instructed to identify the sentences at which a new subject area commenced. No restriction was placed on the number of subject changes that could be identified. Segmentation points, indicating a change of subject, were determined by the agreement of three or more test subjects (Litman and Passonneau, 1996). Adjacent segmentation points were treated as one point because it is likely that they refer to the same subject change. The troughs placed by the segmentation algorithm were compared to the segmentation points identified by the test subjects. In Experiment 1, the top five approaches investigated identified at least 40 out of 42 known subject change points. Due to that success, these five approaches were applied in this experiment. To evaluate the results, the information retrieval metrics precision and recall were used. These metrics have tended to be adopted for the assessment of text segmentation algorithms, but they do not provide a scale of correctness (Beeferman et al., 1997); the degree to which a segmentation point was 'missed' by a trough, for instance, is not considered. Allowing an error margin provides some degree of flexibility: Hearst (1993) used an error margin of two sentences either side of a segmentation point, and Reynar (1994) allowed three sentences. In this investigation, an error margin of two sentences was used.

Results: Table 2 gives the mean values for the comparison of troughs placed by the segmentation algorithm to the segmentation points identified by the test subjects for all the texts.

    linguistic feature(s)                               mean values for all texts
                                                        relevant   relevant found   nonrel. found   prec.   rec.
    word repetition + relation weights                  4.50       3.10             1.00            0.80    0.69
    word repetition + collocation                       4.50       2.80             0.85            0.80    0.62
    word repetition + collocation + relation weights    4.50       2.80             0.85            0.80    0.62
    collocation + relation weights                      4.50       2.75             0.90            0.80    0.60
    word repetition                                     4.50       2.50             0.95            0.78    0.56

    Table 2. Comparison of troughs to segmentation points placed by the test subjects.

Discussion: The segmentation algorithm using word repetition and relation weights in combination achieved mean precision and recall rates of 0.80 and 0.69, respectively. For 9 out of the 20 texts segmented, all troughs were relevant; many of the troughs placed by the segmentation algorithm therefore represented valid subject changes. Both word repetition in combination with collocation and all three features in combination also achieved a precision rate of 0.80 but attained a lower recall rate of 0.62. These results demonstrate that supplementing word repetition with other linguistic features can improve text segmentation. By comparison, a text segmentation algorithm developed by Hearst (1994) based on word repetition alone attained inferior precision and recall rates of 0.66 and 0.61. In this investigation, recall rates tended to be lower than precision rates because the algorithm identified fewer segments (4.1 per text) than the test subjects (4.5). Each text was only 500 words in length and was related to a specific subject area. These factors limited the degree of subject change that occurred. Consequently, the test subjects tended to identify subject changes that were more subtle than the algorithm could detect.
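The evaluation can be made concrete with a short sketch. The greedy one-to-one matching policy below is an assumption consistent with the description above, not necessarily the authors' exact procedure.

    def precision_recall(troughs, reference_points, margin=2):
        """Match algorithm troughs against human segmentation points: a
        trough is relevant if it falls within `margin` sentences of a
        still-unmatched reference point. Returns (precision, recall)."""
        unmatched = list(reference_points)
        relevant_found = 0
        for t in troughs:
            for r in unmatched:
                if abs(t - r) <= margin:
                    unmatched.remove(r)       # each point matches only once
                    relevant_found += 1
                    break
        precision = relevant_found / len(troughs) if troughs else 0.0
        recall = (relevant_found / len(reference_points)
                  if reference_points else 0.0)
        return precision, recall

    print(precision_recall([6, 11, 18], [13]))   # (0.333..., 1.0)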
Conclusion

The text segmentation algorithm developed used three linguistic features to automatically detect lexical cohesion relations across windows. The combination of the features word repetition and relation weights produced the best precision and recall rates of 0.80 and 0.69. When used in isolation, the performance of each feature was inferior to a combined approach. This fact provides evidence that different lexical relations are detected by each linguistic feature considered. Areas for improving the segmentation algorithm include the incorporation of a threshold for troughs. Currently, all troughs indicate a subject change; however, minor fluctuations in scores may be discounted. Future work with this algorithm should include application to longer documents. With trough thresholding, the segments identified in longer documents could detect significant subject changes. Having located the related segments in text, a method of determining the subject of each segment could be developed, for example, for information retrieval purposes.

References

Beeferman D., Berger A. and Lafferty J. (1997) Text segmentation using exponential models, Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing

Church K. W. and Hanks P. (1990) Word association norms, mutual information and lexicography, Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pp. 76-83

Grosz, B. J. and Sidner, C. L. (1986) Attention, intentions and the structure of discourse, Computational Linguistics, 12(3), pp. 175-204

Halliday M. A. K. and Hasan R. (1976) Cohesion in English, Longman Group

Hearst M. A. (1993) TextTiling: A quantitative approach to discourse segmentation, Technical Report 93/24, Sequoia 2000, University of California, Berkeley

Hearst M. A. (1994) Multi-paragraph segmentation of expository texts, Report No. UCB/CSD 94/790, University of California, Berkeley

Jobbins A. C. and Evett L. J. (1995) Automatic identification of cohesion in texts: Exploiting the lexical organisation of Roget's Thesaurus, Proceedings of ROCLING VIII, Taipei, Taiwan

Jobbins A. C. and Evett L. J. (1998) Semantic information from Roget's Thesaurus: Applied to the correction of cursive script recognition output, Proceedings of the International Conference on Computational Linguistics, Speech and Document Processing, India, pp. 65-70

Keenan F. G. and Evett L. J. (1989) Lexical structure for natural language processing, Proceedings of the 1st International Lexical Acquisition Workshop at IJCAI

Kozima H. (1993) Text segmentation based on similarity between words, Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pp. 286-288

Litman D. J. and Passonneau R. J. (1996) Combining knowledge sources for discourse segmentation, Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics
Morris J. and Hirst G. (1991) Lexical cohesion computed by thesaural relations as an indicator of the structure of text, Computational Linguistics, 17(1), pp. 21-48

Ponte J. M. and Croft W. B. (1997) Text segmentation by topic, 1st European Conference on Research and Advanced Technology for Digital Libraries (ECDL'97), pp. 113-125

Reynar J. C. (1994) An automatic method of finding topic boundaries, Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (Student Session), pp. 331-333

Rotondo J. A. (1984) Clustering analysis of subjective partitions of text, Discourse Processes, 7, pp. 69-88

Salton G. and Buckley C. (1991) Global text matching for information retrieval, Science, 253, pp. 1012-1015

Salton G. and Buckley C. (1992) Automatic text structuring experiments, in "Text-Based Intelligent Systems: Current Research and Practice in Information Extraction and Retrieval," P. S. Jacobs, ed., Lawrence Erlbaum Associates, New Jersey, pp. 199-210

Salton G., Allen J. and Buckley C. (1994) Automatic structuring and retrieval of large text files, Communications of the Association for Computing Machinery, 37(2), pp. 97-108

Stairmand M. A. (1997) Textual context analysis for information retrieval, Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval, Philadelphia, pp. 140-147

Yaari Y. (1997) Segmentation of expository texts by hierarchical agglomerative clustering, RANLP'97, Bulgaria
Finite-state Approximation of Constraint-based Grammars using Left-corner Grammar Transforms

Mark Johnson*
Cognitive and Linguistic Sciences, Box 1978
Brown University
[email protected]

(* This research was supported by NSF grant SBR526978. I began this research while I was on sabbatical at the Xerox Research Centre in Grenoble, France. I would like to thank them and my colleagues at Brown for their support.)

Abstract

This paper describes how to construct a finite-state machine (FSM) approximating a 'unification-based' grammar using a left-corner grammar transform. The approximation is presented as a series of grammar transforms, and is exact for left-linear and right-linear CFGs, and for trees up to a user-specified depth of center-embedding.

1 Introduction

This paper describes a method for approximating grammars with finite-state machines. Unlike the method derived from the LR(k) parsing algorithm described in Pereira and Wright (1991), these methods use grammar transformations based on the left-corner grammar transform (Rosenkrantz and Lewis II, 1970; Aho and Ullman, 1972). One advantage of the left-corner methods is that they generalize straightforwardly to complex feature "unification based" grammars, unlike the LR(k) based approach. For example, the implementation described here translates a DCG version of the example grammar given by Pereira and Wright (1991) directly into an FSM without constructing an approximating CFG. Left-corner based techniques are natural for this kind of application because (with the simple optimization described below) they can parse pure left-branching or pure right-branching structures with a stack depth of one (two if terminals are pushed and popped from the stack). Higher stack depth occurs with center-embedded structures, which humans find difficult to comprehend. This suggests that we may get a finite-state approximation to human performance by simply imposing a stack depth bound. We provide a simple tree-geometric description of the configurations that cause an increase in a left-corner parser's stack depth below.

The rest of this paper is structured as follows. The remainder of this section outlines the "grammar transform" approach, summarizes the top-down parsing algorithm and discusses how finite-state approximations of top-down parsers can be constructed. The fact that this approximation is not exact for left-linear grammars (which define finite-state languages) motivates a finite-state approximation based on the left-corner parsing algorithm (which is presented as a grammar transform in section 2). In its standard form the approximation based on the left-corner parsing algorithm suffers from the complementary problem to the top-down approximation: it is not exact for right-linear grammars, but the "optimized" variants presented in section 3 overcome this deficiency, resulting in finite-state CFG approximations which are exact for left-linear and right-linear grammars. Section 4 discusses how these techniques can be combined in an implementation.

1.1 Parsing strategies as grammar transformations

The parsing algorithms discussed here are presented as grammar transformations, i.e., functions T that map a context-free grammar G into another context-free grammar T(G). The transforms have the property that a top-down parse using the transformed grammar is isomorphic to some other kind of parse using the original grammar.
Thus grammar transforms provide a simple, compact way of describing various parsing algorithms, as a top-down parser using T(G) behaves identically to the kind of parser we want to study using G.

1.2 Mappings from trees to trees

The transformations presented here can also be understood as isomorphisms from the set of parse trees of the source grammar G to parse trees of the transformed grammar which preserve terminal strings. Thus it is convenient to explain the transforms in terms of their effect on parse trees. We call a parse tree with respect to the source grammar G an analysis tree, in order to distinguish it from parse trees with respect to some transform of G. The analysis tree t in Figure 1 will be used as an example throughout this paper.

    [Figure 1: The analysis tree t used as a running example below, and its left-corner transforms £Ci(t); the example sentence is "the dog ran fast". Note that the phonological forms are treated here as annotations on the nodes drawn above them, rather than independent nodes. That is, DET (annotated with the) is a terminal node.]

1.3 Top-down parsers and parse trees

The "predictive" or "top-down" recognition algorithm is one of the simplest CFG recognition algorithms. Given a CFG G = (N, T, P, S), a (top-down) stack state is a sequence of terminals and nonterminals. Let Q = (N ∪ T)* be the set of stack states for G. The start state q0 ∈ Q is the sequence S, and the final state qf ∈ Q is the empty sequence ε. The state transition function δ : Q × (T ∪ {ε}) → 2^Q maps a state and a terminal or epsilon into a set of states. It is the smallest function δ that satisfies the following conditions:

    γ ∈ δ(aγ, a)   for a ∈ T, γ ∈ (N ∪ T)*.
    βγ ∈ δ(Aγ, ε)  for A ∈ N, γ ∈ (N ∪ T)*, A → β ∈ P.

A string w is accepted by the top-down recognition algorithm if qf ∈ δ*(q0, w), where δ* is the reflexive transitive closure of δ with respect to epsilon moves. Extending this top-down parsing algorithm to a 'unification-based' grammar is straightforward, and described in many textbooks, such as Pereira and Shieber (1987).

It is easy to read off the stack states of a top-down parser constructing a parse tree from the tree itself. For any node X in the tree, the stack contents of a top-down parser just before the construction of X consists of (the label of) X followed by the sequence of labels on the right siblings of the nodes encountered on the path from X back to the root. It is easy to check that a top-down parser requires a stack of depth 3 to construct the tree t depicted in Figure 1.

1.4 Finite-state approximations

We obtain a finite-state approximation to a top-down parser by restricting attention to only a finite number of possible stack states. The system implemented here imposes a stack depth restriction, i.e., the transition function is modified so that there are no transitions to any stack state whose size is larger than some user-specified limit. (With the optimized left-corner transforms described below we obtain acceptable approximations with a stack size limit of 5 or less. In many useful cases, including the example grammar provided by Pereira and Wright (1991), this stack bound is never reached and the system reports that the FSA it returns is exact.) This restriction ensures that there is only a finite number of possible stack states, and hence that the top-down parser is a finite-state machine. The resulting finite-state machine accepts a subset of the language generated by the original grammar.
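The depth-bounded approximation can be illustrated with a small Python sketch. It implements the recognizer above directly, pruning any transition whose resulting stack exceeds the bound; the grammar encoding is an assumption of the sketch, and it presumes the grammar has no unary production cycles (which would make the naive recursion loop at constant stack depth).

    def topdown_accepts(productions, start, w, depth_bound):
        """Depth-bounded top-down recognition (section 1.4): stack states
        are tuples of terminals/nonterminals; transitions to stacks longer
        than depth_bound are dropped, so the recognizer is finite-state
        and accepts a subset of L(G)."""
        nonterminals = {lhs for (lhs, _) in productions}

        def accepts(stack, i):
            if len(stack) > depth_bound:
                return False                    # transition pruned
            if not stack:
                return i == len(w)              # final state: empty stack
            top, rest = stack[0], stack[1:]
            if top in nonterminals:             # epsilon move: expand top
                return any(accepts(beta + rest, i)
                           for (lhs, beta) in productions if lhs == top)
            return i < len(w) and w[i] == top and accepts(rest, i + 1)

        return accepts((start,), 0)

    # A center-embedding grammar S -> a S b | epsilon needs stack depth
    # proportional to the embedding depth:
    P = [("S", ("a", "S", "b")), ("S", ())]
    print(topdown_accepts(P, "S", ("a", "b"), 3))               # True
    print(topdown_accepts(P, "S", ("a",) * 5 + ("b",) * 5, 3))  # False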
The situation becomes more complicated when we move to 'unification-based' grammars, since there may be an unbounded number of different categories appearing in the accessible stack states. In the system implemented here we used restriction (Shieber, 1985) on the stack states to restrict attention to a finite number of distinct stack states for any given stack depth. Since the restriction operation maps a stack state to a more general one, it produces a finite-state approximation which accepts a superset of the language generated by the original unification grammar. Thus for general constraint-based grammars the language accepted by our finite-state approximation is not guaranteed to be either a superset or a subset of the language generated by the input grammar.

2 The left-corner transform

While conceptually simple, the top-down parsing algorithm presented in the last section suffers from a number of drawbacks for a finite-state approximation. For example, the number of distinct accessible stack states is unbounded if the grammar is left-recursive, yet left-linear grammars always generate regular languages. This section presents the standard left-corner grammar transformation (Rosenkrantz and Lewis II, 1970; Aho and Ullman, 1972); these references should be consulted for proofs of correctness. This transform serves as the basis for the further transforms described in the next section; these transforms have the property that the output grammar induces a finite number of distinct accessible stack states if their input is a left-recursive left-linear grammar.

Given an input grammar G with nonterminals N and terminals T, these transforms £Ci produce grammars with an enlarged set of nonterminals N' = N ∪ (N × (N ∪ T)). The new "pair" categories in N × (N ∪ T) are written A-X, where A is a nonterminal of G and X is either a terminal or nonterminal of G. It turns out that if A ⇒* Xγ in G, then A-X ⇒* γ in £C1(G), i.e., a non-terminal A-X in the transformed grammar derives the difference between A and X in the original grammar, and the notation is meant to be suggestive of this.

The left-corner transform of a CFG G = (N, T, P, S) is a grammar £C1(G) = (N', T, P1, S), where P1 contains all productions of the form (1.a-1.c). This paper assumes that N ∩ T = ∅, as is standard. To save space we assume that P does not contain any epsilon productions (but it is straightforward to deal with them).

    A → a A-a : A ∈ N, a ∈ T.           (1.a)
    A-X → β A-B : A ∈ N, B → Xβ ∈ P.    (1.b)
    A-A → ε : A ∈ N.                    (1.c)

Informally, the productions (1.a) start the left-corner recognition of A by recognizing a terminal a as a possible left-corner of A. The actual left-corner recognition is performed by the productions (1.b), which extend the left-corner from X to its parent B by recognizing β; these productions are used repeatedly to construct increasingly larger left-corners. Finally, the productions (1.c) terminate the recognition of A when this left-corner construction process has constructed an A.

The left-corner transform preserves the number of parses of a string, so it defines an isomorphism from analysis trees (i.e., parse trees with respect to G) to parse trees with respect to £C1(G). If t is a parse tree with respect to G then (abusing notation) £C1(t) is the corresponding parse tree with respect to £C1(G). Figure 1 shows the effect of this mapping on a simple tree. The transformed tree is considerably more complex: it has double the number of nodes of the original tree. In a top-down parse of the tree £C1(t) in Figure 1 the maximum stack depth is 3, which occurs at the recognition of the terminals ran and fast.
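A minimal executable sketch of £C1 follows; the grammar encoding (productions as (lhs, rhs-tuple) pairs, pair categories A-X as Python tuples (A, X)) is an assumption of the sketch, not part of the transform itself.

    def lc1(productions, nonterminals, terminals):
        """The left-corner transform (schemata 1.a-1.c). Assumes the
        input grammar has no epsilon productions, as in the text."""
        out = []
        for A in nonterminals:
            for a in terminals:
                out.append((A, (a, (A, a))))              # (1.a) A -> a A-a
            for (B, rhs) in productions:
                X, beta = rhs[0], rhs[1:]
                out.append(((A, X), beta + ((A, B),)))    # (1.b) A-X -> beta A-B
            out.append(((A, A), ()))                      # (1.c) A-A -> epsilon
        return out

    # e.g. lc1([("S", ("NP", "VP")), ("NP", ("DET", "N"))],
    #          {"S", "NP", "VP", "DET", "N"}, {"the", "dog"})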
2.1 Filtering useless categories

In general the grammar produced by the transform £C1(G) contains a large number of useless nonterminals, i.e., non-terminals which can never appear in any complete derivation, even if the grammar G is fully pruned (i.e., contains no useless productions). While £C1(G) can be pruned using standard algorithms, given the observation about the relationship between the pair non-terminals in £C1(G) and non-terminals in G, it is clear that certain productions can be discarded immediately as useless. Define the left-corner relation ⋖ ⊆ (N ∪ T) × N as follows: X ⋖ A iff ∃β. A → Xβ ∈ P. Let ⋖* be the reflexive and transitive closure of ⋖. It is easy to show that a category A-X is useless in £C1(G) (i.e., derives no sequence of terminals) unless X ⋖* A. Thus we can restrict the productions in (1.a-1.c), without affecting the language (strongly) generated, to those that only contain pair categories A-X where X ⋖* A.

2.2 Unification grammars

One of the main advantages of left-corner parsing algorithms over LR(k) based parsing algorithms is that they extend straightforwardly to complex feature based "unification" grammars. The transformation £C1 itself can be encoded in several lines of Prolog (Matsumoto et al., 1983; Pereira and Shieber, 1987). This contrasts with the LR(k) methods: in LR(k) parsing a single LR state may correspond to several items or dotted rules, so it is not clear how the feature "unification" constraints should be associated with transitions from LR state to LR state (see Nakazawa (1995) for one proposal). In contrast, extending the techniques described here to complex feature based "unification" grammars is straightforward. The main complication is the filter on useless non-terminals and productions just discussed. Generalizing the left-corner closure filter on pair categories to complex feature "unification" grammars in an efficient way is complicated, and is the primary difficulty in using left-corner methods with complex feature based grammars. van Noord (1997) provides a detailed discussion of methods for using such a "left-corner filter" in unification-grammar parsing, and the methods he discusses are used in the implementation described below.
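In the CFG case the left-corner filter of section 2.1 is cheap to compute; here is a sketch, using the same grammar encoding as the £C1 sketch above.

    def left_corner_closure(productions, nonterminals):
        """Compute the reflexive and transitive closure of the left-corner
        relation: X is a left corner of A iff A -> X beta is a production.
        Returns the set of pairs (X, A) with X left-corner-reachable from
        A; a pair category A-X is useful only if (X, A) is in this set."""
        lc = {(rhs[0], B) for (B, rhs) in productions if rhs}
        closure = {(A, A) for A in nonterminals} | lc      # reflexive step
        changed = True
        while changed:                                     # transitive step
            changed = False
            for (X, A) in list(closure):
                for (Y, B) in lc:
                    if Y == A and (X, B) not in closure:
                        closure.add((X, B))
                        changed = True
        return closure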
3 Extended left-corner transforms

This section presents some simple extensions to the basic left-corner transform presented above. The 'tail-recursion' optimization permits bounded-stack parsing of both left-linear and right-linear constructions. Further manipulation of this transform puts it into a form in which we can identify precisely the tree configurations in the original grammar which cause the stack size of a left-corner parser to increase. These observations motivate the special binarization methods described in the next section, which minimize stack depth in grammars that contain productions of length no greater than two.

3.1 A tail-recursion optimization

If G is a left-linear grammar, a top-down parser using £C1(G) can recognize any string generated by G with a constant-bounded stack size. However, the corresponding operation with right-linear grammars requires a stack of size proportional to the length of the string, since the stack fills with paired categories A-A for each non-left-corner nonterminal in the analysis tree.

The 'tail recursion' or 'composition' optimization (Abney and Johnson, 1991; Resnik, 1992) permits right-branching structures to be parsed with bounded stack depth. It is the result of epsilon removal applied to the output of £C1, and can be described in terms of resolution or partial evaluation of the transformed grammar with respect to productions (1.c). In effect, the schema (1.b) is split into two cases, depending on whether or not the rightmost nonterminal A-B is expanded by the epsilon rules produced by schema (1.c). This expansion yields a grammar £C2(G) = (N', T, P2, S), where P2 contains all productions of the form (2.a-2.c). (In these schemata A, B ∈ N; a ∈ T; X ∈ N ∪ T and β ∈ (N ∪ T)*.)

    A → a A-a                           (2.a)
    A-X → β A-B : B → Xβ ∈ P.           (2.b)
    A-X → β : A → Xβ ∈ P.               (2.c)

Figure 1 shows the effect of the transform £C2 on the example tree. The maximum stack depth required for this tree is 2. When this 'tail recursion' optimization is applied, pair categories in the transformed grammar encode proper left-corner relationships between nodes in the analysis tree. This lets us strengthen the 'useless category' filter described above as follows. Let ⋖+ be the transitive closure of the left-corner relation ⋖ defined above. It is easy to show that a category A-X is useless in £C2(G) (i.e., derives no sequence of terminals) unless X ⋖+ A. Thus we can restrict the productions in (2.a-2.b), without affecting the language (strongly) generated, to just those that only contain pair categories A-X where X ⋖+ A.
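For comparison with the £C1 sketch above, the optimized transform is only a small change; again the grammar encoding is an assumption of the sketch.

    def lc2(productions, nonterminals, terminals):
        """Left-corner transform with the tail-recursion optimization
        (schemata 2.a-2.c): schema (1.b) is split according to whether the
        rightmost pair category would be erased by an epsilon rule."""
        out = []
        for A in nonterminals:
            for a in terminals:
                out.append((A, (a, (A, a))))              # (2.a) A -> a A-a
            for (B, rhs) in productions:
                X, beta = rhs[0], rhs[1:]
                out.append(((A, X), beta + ((A, B),)))    # (2.b) A-X -> beta A-B
        for (A, rhs) in productions:
            X, beta = rhs[0], rhs[1:]
            out.append(((A, X), beta))                    # (2.c) A-X -> beta
        return out

The strengthened filter then amounts to keeping only productions whose pair categories A-X have (X, A) in the transitive (non-reflexive) part of the closure computed by left_corner_closure above.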
3.2 The special case of binary productions

We can get a better idea of the properties of transformation £C2 if we investigate the special case where the productions of G are unary or binary. In this situation, transformation £C2(G) can be more explicitly written as £C3(G) = (N', T, P3, S), where P3 contains all instances of the production schemata (3.a-3.e). (In these schemata, a ∈ T; A, B ∈ N and X, Y ∈ N ∪ T.)

    A → a A-a.                          (3.a)
    A-X → A-B : B → X ∈ P.              (3.b)
    A-X → ε : A → X ∈ P.                (3.c)
    A-X → Y A-B : B → X Y ∈ P.          (3.d)
    A-X → Y : A → X Y ∈ P.              (3.e)

Productions (3.b-3.c) and (3.d-3.e) correspond to unary and binary productions respectively in the original grammar. Now, note that nonterminals from N only appear in the right hand sides of productions of type (3.d) and (3.e). Moreover, any such nonterminals must be immediately expanded by a production of type (3.a). Thus these non-terminals are eliminable by resolving them with (3.a); the only remaining nonterminal is the start symbol S. This expansion yields a new transform £C4, where £C4(G) = ({S} ∪ (N × (N ∪ T)), T, P4, S). P4, defined in (4.a-4.g), still contains productions of type (3.a), but these only expand the start symbol, as all occurrences of nonterminals in N have been resolved away. (In these schemata a ∈ T; A, B, C, D ∈ N and X ∈ N ∪ T.)

    S → a S-a.                          (4.a)
    A-X → A-B : B → X ∈ P.              (4.b)
    A-X → ε : A → X ∈ P.                (4.c)
    A-X → a A-B : B → X a ∈ P.          (4.d)
    A-X → a : A → X a ∈ P.              (4.e)
    A-X → a C-a A-B : B → X C ∈ P.      (4.f)
    A-X → a C-a : A → X C ∈ P.          (4.g)

    [Figure 2: The highly distinctive "zig-zag" or "lightning bolt" configuration of nodes in the analysis tree characteristic of the use of production schema (4.f), A-X → a C-a A-B, in transform £C4. This is the only configuration which causes an increase in stack depth in a top-down parser using a grammar transformed with £C4.]

In the production schemata defining £C4, (4.a-4.c) are copied directly from (3.a-3.c) respectively. The schemata (4.d-4.e) are obtained by instantiating Y in (3.d-3.e) to a terminal a ∈ T, while the other two schemata (4.f-4.g) are obtained by instantiating Y in (3.d-3.e) with the right hand sides of (3.a). Figure 1 shows the result of applying the transformation £C4 to the example analysis tree t.

The transform also simplifies the specification of finite-state machine approximations. Because all terminals are introduced as the left-most symbols in their productions, there is no need for terminal symbols to appear on the parser's stack, saving an epsilon transition associated with a stack push and an immediately following stack pop with respect to the standard left-corner algorithm. Productions (4.a) and (4.d-4.g) can be understood as transitions over a terminal a that replace the top stack element with a sequence of other elements, while the other productions can be interpreted as epsilon transitions that manipulate the stack contents accordingly.

Note that the right hand sides of all of these productions except for schema (4.f) are right-linear. Thus instances of this schema are the only productions that can increase the stack size in a top-down parse with £C4(G), and the stack depth required to parse an analysis tree is the maximum number of "zig-zag" patterns in the path in the analysis tree from any terminal node to the root. Figure 2 sketches the configuration of nodes in the analysis trees in which instances of schema (4.f) would be used in a parse using £C4(G). This highly distinctive "zig-zag" or "lightning bolt" pattern does not occur at all in the example tree t in Figure 1, so the maximum required stack depth is 2. (Recall that in a traditional top-down parser terminals are pushed onto the stack and popped later, so initialization productions (4.a) cause two symbols to be pushed onto the stack.) It follows that this finite-state approximation is exact for left-linear and right-linear CFGs. Indeed, analysis trees that consist simply of a left-branching subtree followed by a right-branching subtree, such as the example tree t, are transformed into strictly right-branching trees by £C4.

4 Implementation

This section provides further details of the finite-state approximator implemented in this research. The approximator is written in Sicstus Prolog. It takes a user-specified Definite Clause Grammar G (without Prolog annotations) as input, which it binarizes and then applies transform £C4 to. The implementation annotates each transition with the production it corresponds to (represented as a pair of a £C4 schema number and a production number from G), so the finite-state approximation actually defines a transducer which transduces a lexical input to a sequence of productions which specify a parse of that input with respect to £C4(G). A following program inverts the tree transform £C4, returning a corresponding parse tree with respect to G. This parse tree can be checked by performing complete unifications with respect to the original grammar productions if so desired. Thus the finite-state approximation provides an efficient way of determining if an analysis of a given input string with respect to a unification grammar G exists, and if so, it can be used to suggest such analyses.
5 Conclusion

This paper surveyed the issues arising in the construction of finite-state approximations of left-corner parsers. The different kinds of parsers were presented as grammar transforms, which let us abstract away from the algorithmic details of parsing algorithms themselves. It derived the various forms of the left-corner parsing algorithms in terms of grammar transformations from the original left-corner grammar transform.

References

Stephen Abney and Mark Johnson. 1991. Memory requirements and local ambiguities of parsing strategies. Journal of Psycholinguistic Research, 20(3):233-250.

Alfred V. Aho and Jeffery D. Ullman. 1972. The Theory of Parsing, Translation and Compiling; Volume 1: Parsing. Prentice-Hall, Englewood Cliffs, New Jersey.

Yuji Matsumoto, Hozumi Tanaka, Hideki Hirakawa, Hideo Miyoshi, and Hideki Yasukawa. 1983. BUP: A bottom-up parser embedded in Prolog. New Generation Computing, 1(2):145-158.

Tsuneko Nakazawa. 1995. Construction of LR parsing tables for grammars using feature-based syntactic categories. In Jennifer Cole, Georgia M. Green, and Jerry L. Morgan, editors, Linguistics and Computation, number 52 in CSLI Lecture Notes Series, pages 199-219, Stanford, California. CSLI Publications.

Fernando C. N. Pereira and Stuart M. Shieber. 1987. Prolog and Natural Language Analysis. Number 10 in CSLI Lecture Notes Series. Chicago University Press, Chicago.

Fernando C. N. Pereira and Rebecca N. Wright. 1991. Finite state approximation of phrase structure grammars. In The Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 246-255.

Philip Resnik. 1992. Left-corner parsing and psychological plausibility. In The Proceedings of the fifteenth International Conference on Computational Linguistics, COLING-92, volume 1, pages 191-197.

Stanley J. Rosenkrantz and Philip M. Lewis II. 1970. Deterministic left corner parser. In IEEE Conference Record of the 11th Annual Symposium on Switching and Automata, pages 139-152.

Stuart M. Shieber. 1985. Using Restriction to extend parsing algorithms for unification-based formalisms. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, pages 145-152, Chicago.

Gertjan van Noord. 1997. An efficient implementation of the head-corner parser. Computational Linguistics, 23(3):425-456.
Unification-based Multimodal Parsing

Michael Johnston
Center for Human Computer Communication
Department of Computer Science and Engineering
Oregon Graduate Institute
P.O. Box 91000, Portland, OR 97291-1000
[email protected]

Abstract

In order to realize their full potential, multimodal systems need to support not just input from multiple modes, but also synchronized integration of modes. Johnston et al (1997) model this integration using a unification operation over typed feature structures. This is an effective solution for a broad class of systems, but limits multimodal utterances to combinations of a single spoken phrase with a single gesture. We show how the unification-based approach can be scaled up to provide a full multimodal grammar formalism. In conjunction with a multidimensional chart parser, this approach supports integration of multiple elements distributed across the spatial, temporal, and acoustic dimensions of multimodal interaction. Integration strategies are stated in a high level unification-based rule formalism supporting rapid prototyping and iterative development of multimodal systems.

1 Introduction

Multimodal interfaces enable more natural and efficient interaction between humans and machines by providing multiple channels through which input or output may pass. Our concern here is with multimodal input, such as interfaces which support simultaneous input from speech and pen. Such interfaces have clear task performance and user preference advantages over speech-only interfaces, in particular for spatial tasks such as those involving maps (Oviatt 1996). Our focus here is on the integration of input from multiple modes and the role this plays in the segmentation and parsing of natural human input. In the examples given here, the modes are speech and pen, but the architecture described is more general in that it can support more than two input modes and modes of other types such as 3D gestural input.

Our multimodal interface technology is implemented in QuickSet (Cohen et al 1997), a working system which supports dynamic interaction with maps and other complex visual displays. The initial applications of QuickSet are: setting up and interacting with distributed simulations (Courtemanche and Ceranowicz 1995), logistics planning, and navigation in virtual worlds. The system is distributed, consisting of a series of agents (Figure 1) which communicate through a shared blackboard (Cohen et al 1994). It runs on both desktop and handheld PCs, communicating over wired and wireless LANs. The user interacts with a map displayed on a wireless hand-held unit (Figure 2).

    [Figure 1: Multimodal Architecture]
    [Figure 2: User Interface]

They can draw directly on the map and simultaneously issue spoken commands. Different kinds of entities, lines, and areas may be created by drawing the appropriate spatial features and speaking their type; for example, drawing an area and saying 'flood zone'. Orders may also be specified; for example, by drawing a line and saying 'helicopter follow this route'. The speech signal is routed to an HMM-based continuous speaker-independent recognizer. The electronic 'ink' is routed to a neural net-based gesture recognizer (Pittman 1991). Both generate N-best lists of potential recognition results with associated probabilities. These results are assigned semantic interpretations by natural language processing and gesture interpretation agents respectively.
A multimodal integrator agent fields input from the natural language and gesture interpretation agents and selects the appropriate multimodal or unimodal commands to execute. These are passed on to a bridge agent which provides an API to the underlying applications the system is used to control.

In the approach to multimodal integration proposed by Johnston et al 1997, integration of spoken and gestural input is driven by a unification operation over typed feature structures (Carpenter 1992) representing the semantic contributions of the different modes. This approach overcomes the limitations of previous approaches in that it allows for a full range of gestural input beyond simple deictic pointing gestures. Unlike speech-driven systems (Bolt 1980, Neal and Shapiro 1991, Koons et al 1993, Wauchope 1994), it is fully multimodal in that all elements of the content of a command can be in either mode. Furthermore, compared to related frame-merging strategies (Vo and Wood 1996), it provides a well understood, generally applicable common meaning representation for the different modes and a formally well defined mechanism for multimodal integration. However, while this approach provides an efficient solution for a broad class of multimodal systems, there are significant limitations on the expressivity and generality of the approach. A wide range of potential multimodal utterances fall outside the expressive potential of the previous architecture.

Empirical studies of multimodal interaction (Oviatt 1996), utilizing wizard-of-oz techniques, have shown that when users are free to interact with any combination of speech and pen, a single spoken utterance may be associated with more than one gesture. For example, a number of deictic pointing gestures may be associated with a single spoken utterance: 'calculate distance from here to here', 'put that there', 'move this team to here and prepare to rescue residents from this building'. Speech may also be combined with a series of gestures of different types: the user circles a vehicle on the map, says 'follow this route', and draws an arrow indicating the route to be followed.

In addition to more complex multipart multimodal utterances, unimodal gestural utterances may contain several component gestures which compose to yield a command. For example, to create an entity with a specific orientation, a user might draw the entity and then draw an arrow leading out from it (Figure 3 (a)). To specify a movement, the user might draw an arrow indicating the extent of the move and indicate departure and arrival times by writing expressions at the base and head (Figure 3 (b)).

    [Figure 3: Complex Unimodal Gestures]

These are specific examples of the more general problem of visual parsing, which has been a focus of attention in research on visual programming and pen-based interfaces for the creation of complex graphical objects such as mathematical equations and flowcharts (Lakin 1986, Wittenburg et al 1991, Helm et al 1991, Crimi et al 1991).

The approach of Johnston et al 1997 also faces fundamental architectural problems. The multimodal integration strategy is hard-coded into the integration agent and there is no isolatable statement of the rules and constraints independent of the code itself. As the range of multimodal utterances supported is extended, it becomes essential that there be a declarative statement of the grammar of multimodal utterances, separate from the algorithms and mechanisms of parsing. This will enable system developers to describe integration strategies in a high-level representation, facilitating rapid prototyping and iterative development of multimodal systems.

2 Parsing in Multidimensional Space

The integrator in Johnston et al 1997 does in essence parse input, but the resulting structures can only be unary or binary trees one level deep: unimodal spoken or gestural commands, and multimodal combinations consisting of a single spoken element and a single gesture. In order to account for a broader range of multimodal expressions, a more general parsing mechanism is needed. Chart parsing methods have proven effective for parsing strings and are commonplace in natural language processing (Kay 1980). Chart parsing involves population of a triangular matrix of well-formed constituents: chart(i, j), where i and j are numbered vertices delimiting the start and end of the string. In its most basic formulation, chart parsing can be defined as follows, where • is an operator which combines two constituents in accordance with the rules of the grammar:

    chart(i, j) = ⋃_{i<k<j} chart(i, k) • chart(k, j)

Crucially, this requires the combining constituents to be discrete and linearly ordered. However, multimodal input does not meet these requirements:
This will enable system de- velopers to describe integration strategies in a high level representation, facilitating rapid prototyping and iterative development of multimodal systems. 2 Parsing in Multidimensional Space The integrator in Johnston et al 1997 does in essence parse input, but the resulting structures can only be unary or binary trees one level deep; unimodal spo- ken or gestural commands and multimodal combina- tions consisting of a single spoken element and a sin- gle gesture. In order to account for a broader range of multimodal expressions, a more general parsing mechanism is needed. Chart parsing methods have proven effective for parsing strings and are commonplace in natural language processing (Kay 1980). Chart parsing involves population of a triangular matrix of well-formed constituents: chart(i, j), where i and j are numbered vertices delimiting the start and end of the string. In its most basic formulation, chart parsing can be defined as follows, where . is an operator which combines two constituents in accordance with the rules of the grammar. chart(i, j) = U chart(i, k) * chart(k, j) i<k<j Crucially, this requires the combining constituents to be discrete and linearly ordered. However, multimodal input does not meet these requirements: 625 gestural input spans two (or three) spatial dimen- sions, there is an additional non-spatial acoustic dimension of speech, and both gesture and speech are distributed across the temporal dimension. Unlike words in a string, speech and gesture may overlap temporally, and there is no single dimension on which the input is linear and discrete. So then, how can we parse in this multidimensional space of speech and gesture? What is the rule for chart pars- ing in multi-dimensional space? Our formulation of multidimensional parsing for multimodal systems (multichart) is as follows. multichart(X) = U multichart(Y) * multichart(Z) where X = Y uz, Y nZ = O,Y ~ 0,2 ~ In place of numerical spans within a single dimension (e.g. chart(3,5)), edges in the mul- tidimensional chart are identified by sets (e.g. multichart({[s, 4, 2], [g, 6, 1]})) containing the identifiers(IDs) of the terminal input elements they contain. When two edges combine, the ID of the resulting edge is the union of their IDs. One constraint that linearity enforced, which we can still maintain, is that a given piece of input can only be used once within a single parse. This is captured by a requirement of non-intersection between the ID sets associated with edges being combined. This requirement is especially important since a single piece of spoken or gestural input may have multiple interpretations available in the chart. To prevent multiple interpretations of a single signal being used, they are assigned IDs which are identical with respect to the the non-intersection constraint. The multichart statement enumerates all the possible combinations that need to be considered given a set of inputs whose IDs are contained in a set X. The multidimensional parsing algorithm (Figure 4) runs bottom-up from the input elements, build- ing progressively larger constituents in accordance with the ruleset. An agenda is used to store edges to be processed. As a simplifying assumption, rules are assumed to be binary. It is straightforward to ex- tend the approach to allow for non-binary rules using techniques from active chart parsing (Earley 1970), but this step is of limited value given the availability of multimodal subcategorization (Section 4). 
For use in a multimodal interface, the multidimensional parsing algorithm needs to be embedded into the integration agent in such a way that input can be processed incrementally. Each new input received is handled as follows. First, to avoid unnecessary computation, stale edges are removed from the chart. A timeout feature indicates the shelf-life of an edge within the chart. Second, the interpretations of the new input are treated as terminal edges, placed on the agenda, and combined with edges in the chart in accordance with the algorithm above. Third, complete edges are identified and executed. Unlike the typical case in string parsing, the goal is not to find a single parse covering the whole chart; the chart may contain several complete non-overlapping edges which can be executed. These are assigned to a category command as described in the next section. The complete edges are ranked with respect to probability. These probabilities are a function of the recognition probabilities of the elements which make up the command. The combination of probabilities is specified using declarative constraints, as described in the next section. The most probable complete edge is executed first, and all edges it intersects with are removed from the chart. The next most probable complete edge remaining is then executed and the procedure continues until there are no complete edges left in the chart. This means that selection of higher probability complete edges eliminates overlapping complete edges of lower probability from the list of edges to be executed. Lastly, the new chart is stored. In ongoing work, we are exploring the introduction of other factors to the selection process. For example, sets of disjoint complete edges which parse all of the terminal edges in the chart should likely be preferred over those that do not.

Under certain circumstances, an edge can be used more than once. This capability supports multiple creation of entities. For example, the user can utter 'multiple helicopters' point point point point in order to create a series of vehicles. This significantly speeds up the creation process and limits reliance on speech recognition. Multiple commands are persistent edges; they are not removed from the chart after they have participated in the formation of an executable command. They are assigned timeouts and are removed when their allotted time runs out. These 'self-destruct' timers are zeroed each time another entity is created, allowing creations to chain together.
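The selection of complete edges can be sketched as follows; the edge payloads and the exact tie-breaking are assumptions of the sketch.

    def select_commands(complete_edges):
        """Rank complete edges by probability, execute the best first, and
        discard remaining complete edges that intersect (share any input)
        with one already selected."""
        remaining = sorted(complete_edges, key=lambda e: e[1], reverse=True)
        selected = []
        while remaining:
            best = remaining.pop(0)            # most probable complete edge
            selected.append(best)
            remaining = [e for e in remaining if not (e[0] & best[0])]
        return selected

    # Edges as (frozenset of input IDs, probability, command payload):
    edges = [(frozenset({1, 2}), 0.9, "cmd-A"),
             (frozenset({2, 3}), 0.7, "cmd-B"),
             (frozenset({4}), 0.6, "cmd-C")]
    print([e[2] for e in select_commands(edges)])   # ['cmd-A', 'cmd-C']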
cat : unit.type fsTYPE : unit content : object : type : helicopter echelon : vehicle location : [ fsTYPE : point ] modallty : speech time : interval(.., ..) prob : 0.85 Figure 5: Spoken Input Edge The cat feature indicates the basic category of the element, while content specifies the semantic con- tent. In this case, it is a create_unit command in which the object to be created is a vehicle of type helicopter, and the location is required to be a point. The remaining features specify auxiliary informa- tion such as the modality, temporal interval, and probability associated with the edge. A point ges- ture has the representation in Figure 6. t r fsTYPE : point conten : L coord : latlong(.., ..) ] modalit]t : gesture time : interval(.,, ..) prob : 0.69 Figure 6: Point Gesture Edge Multimodal grammar rules are productions of the form LHS --r DTR1 DTR2 where LHS, DTR1, and DTR2 are feature structures of the form indi- cated above. Following HPSG, these are encoded as feature structure rule schemata. One advantage of this is that rule schemata can be hierarchically ordered, allowing for specific rules to inherit ba- sic constraints from general rule schemata. The ba- sic multimodal integration strategy of Johnston et al 1997 is now just one rule among many (Figure 7). content : [1] lhs : modalit~/ : [2] time : [3 I prob : [4] content : [I] [ location : [51 ] dtrl : modallt¥ : [6] time : {7] rhs : prob : [8] cat:spatial.gesture "[ content : [5] ] dtr2 : modality : [9] [ time: {,ol / prob : [11] J ( lap([7],[lO]) V ]ollow([7],[lO],4) t .... total.tirne([7],[lOl, [3]) constraints: combine-prob(Ial, [I I], {,1]) amsign.modahty([6] ,[9],[2]) Figure 7: Basic Integration Rule Schema The lhs,dtrl, and dtr2 features correspond to LHS, DTR1, and DTR2 in the rule above. The constraints feature indicates an ordered series of constraints which must be satisfied in order for the rule to apply. Structure-sharing in the rule represen- tation is used to impose constraints on the input fea- ture structures, to construct the LHS category, and to instantiate the variables in the constraints. For ex- ample, in Figure 7, the basic constraint that the lo- cation of a located command such as 'helicopter' needs to unify with the content of the gesture it com- bines with is captured by the structure-sharing tag [5]. This also instantiates the location of the result- ing edge, whose content is inherited through tag [1 ]. The application of a rule involves unifying the two candidate edges for combination against dtrl and dtr2. Rules are indexed by their cat feature in order to avoid unnecessary unification. If the edges unify with dtrl and dtr2, then the constraints are checked. If they are satisfied then a new edge is cre- ated whose category is the value of lhs and whose ID set consists of the union of the ID sets assigned to the two input edges. Constraints require certain temporal and spatial relationships to hold between edges. Complex con- straints can be formed using the basic logical op- erators V, A, and =¢,. The temporal constraint in Figure 7, overlap(J7], [10]) V follow([7],[lO], 4), states that the time of the speech [7] must either overlap with or start within four seconds of the time of the gesture [10]. This temporal constraint is based on empirical investigation of multimodal in- teraction (Oviatt et al 1997). Spatial constraints are used for combinations of gestural inputs. 
Spatial constraints are used for combinations of gestural inputs. For example, close_to(X, Y) requires two gestures to be a limited distance apart (see Figure 12 below) and contact(X, Y) determines whether the regions occupied by two objects are in contact. The remaining constraints in Figure 7 do not constrain the inputs per se; rather, they are used to calculate the time, prob, and modality features for the resulting edge. For example, the constraint combine_prob([8], [11], [4]) is used to combine the probabilities of two inputs and assign a joint probability to the resulting edge. In this case, the input probabilities are multiplied. The assign_modality([6], [9], [2]) constraint determines the modality of the resulting edge. Auxiliary features and constraints which are not directly relevant to the discussion will be omitted.

The constraints are interpreted using a Prolog meta-interpreter. This basic back-tracking constraint satisfaction strategy is simplistic but adequate for current purposes. It could readily be substituted with a more sophisticated constraint solving strategy allowing for more interaction among constraints, default constraints, optimization among a series of constraints, and so on. The addition of functional constraints is common in HPSG and other unification grammar formalisms (Wittenburg 1993).

4 Multimodal Subcategorization

Given that multimodal grammar rules are required to be binary, how can the wide variety of commands in which speech combines with more than one gestural element be accounted for? The solution to this problem draws on the lexicalist treatment of complementation in HPSG. HPSG utilizes a sophisticated theory of subcategorization to account for the different complementation patterns that verbs and other lexical items require. Just as a verb subcategorizes for its complements, we can think of a lexical edge in the multimodal grammar as subcategorizing for the edges with which it needs to combine. For example, spoken inputs such as 'calculate distance from here to here' and 'sandbag wall from here to here' (Figure 8) result in edges which subcategorize for two gestures. Their multimodal subcategorization is specified in a list-valued subcat feature, implemented using a recursive first/rest feature structure (Shieber 1986:27-32).

    cat     : subcat_command
    content : [ fsTYPE : create_line
                object : [ fsTYPE : wall_obj
                           style  : sand_bag
                           color  : grey ]
                location : [ fsTYPE    : line
                             coordlist : [[1], [2]] ] ]
    time    : [3]
    subcat  : [ first : [ cat     : spatial_gesture
                          content : [ fsTYPE : point
                                      coord  : [1] ]
                          time    : [4] ]
                constraints : [ overlap([3],[4]) ∨ follow([3],[4],4) ]
                rest : [ first : [ cat     : spatial_gesture
                                   content : [ fsTYPE : point
                                               coord  : [2] ]
                                   time    : [5] ]
                         constraints : [ follow([5],[4],5) ]
                         rest : end ] ]

    Figure 8: 'Sandbag wall from here to here'

The cat feature is subcat_command, indicating that this is an edge with an unsaturated subcategorization list. The first/rest structure indicates the two gestures the edge needs to combine with and terminates with rest: end. The temporal constraints on expressions such as these are specific to the expressions themselves and cannot be specified in the rule constraints. To support this, we allow for lexical edges to carry their own specific lexical constraints, which are held in a constraints feature at each level in the subcat list. In this case, the first gesture is constrained to overlap with the speech or come up to four seconds before it, and the second gesture is required to follow the first gesture. Lexical constraints are inherited into the rule constraints in the combinatory schemata described below.
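Before turning to those schemata, here is a small procedural illustration of how such a first/rest list might be consumed. Plain dictionaries stand in for typed feature structures, and the matching test is a crude stand-in for unification; all names are illustrative.

    two_point_subcat = {
        "first": {"cat": "spatial_gesture", "fsTYPE": "point", "binds": "coord1"},
        "rest": {
            "first": {"cat": "spatial_gesture", "fsTYPE": "point", "binds": "coord2"},
            "rest": "end",
        },
    }

    def saturate(subcat, gestures):
        """Consume one gesture per `first` slot in order; return the
        variable bindings if the list is fully saturated (reaches
        rest: end), else None."""
        bindings, node = {}, subcat
        for g in gestures:
            if node == "end":
                return None                   # more gestures than slots
            slot = node["first"]
            if (g.get("cat"), g.get("fsTYPE")) != (slot["cat"], slot["fsTYPE"]):
                return None                   # "unification" failure
            bindings[slot["binds"]] = g["coord"]
            node = node["rest"]
        return bindings if node == "end" else None

    print(saturate(two_point_subcat,
                   [{"cat": "spatial_gesture", "fsTYPE": "point", "coord": (45.10, -122.60)},
                    {"cat": "spatial_gesture", "fsTYPE": "point", "coord": (45.20, -122.50)}]))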
Edges with subcat features are combined with other elements in the chart in accordance with general combinatory schemata. The first (Figure 9) applies to unsaturated edges which have more than one element on their subcat list. It unifies the first element of the subcat list with an element in the chart and builds a new edge of category subcat_command whose subcat list is the value of rest.

    lhs : [ content : [1]
            subcat  : [2]
            prob    : [3] ]
    rhs : [ dtr1 : [ content : [1]
                     subcat  : [ first       : [4]
                                 constraints : [5]
                                 rest        : [2] ]
                     prob    : [6] ]
            dtr2 : [4][ prob : [7] ] ]
    constraints : { combine_prob([6],[7],[3]) | [5] }

    Figure 9: Subcat Combination Schema

The second schema (Figure 10) applies to unsaturated (cat: subcat_command) edges on whose subcat list only one element remains and generates saturated (cat: command) edges.

    lhs : [ content : [1]
            subcat  : end
            prob    : [2] ]
    rhs : [ dtr1 : [ content : [1]
                     subcat  : [ first       : [3]
                                 constraints : [4]
                                 rest        : end ]
                     prob    : [5] ]
            dtr2 : [3][ prob : [6] ] ]
    constraints : { combine_prob([5],[6],[2]) | [4] }

    Figure 10: Subcat Termination Schema

This specification of combinatory information in the lexical edges constitutes a shift from rules to representations. The ruleset is simplified to a set of general schemata, and the lexical representation is extended to express combinatorics. However, there is still a need for rules beyond these general schemata in order to account for constructional meaning (Goldberg 1995) in multimodal input, specifically with respect to complex unimodal gestures.

5 Visual Parsing: Complex Gestures

In addition to combinations of speech with more than one gesture, the architecture supports unimodal gestural commands consisting of several independently recognized gestural components. For example, lines may be created using what we term gestural diacritics. If environmental noise or other factors make speaking the type of a line infeasible, it may be specified by drawing a simple gestural mark or word over a line gesture. To create a barbed wire, the user can draw a line specifying its spatial extent and then draw an alpha to indicate its type.

    [Figure 11: Complex Gesture for Barbed Wire]

This gestural construction is licensed by the rule schema in Figure 12. It states that a line gesture (dtr1) and an alpha gesture (dtr2) can be combined, resulting in a command to create a barbed wire. The location information is inherited from the line gesture. There is nothing inherent about alpha that makes it mean 'barbed wire'; that meaning is embodied only in its construction with a line gesture, which is captured in the rule schema. The close_to constraint requires that the centroid of the alpha be in proximity to the line.

    lhs : [ cat     : command
            content : [ object   : [ fsTYPE : wire_obj
                                     color  : red
                                     style  : barbed ]
                        location : [1] ] ]
    rhs : [ dtr1 : [ content : [1][ coordlist : [2] ]
                     time    : [3] ]
            dtr2 : [ cat     : spatial_gesture
                     content : [ fsTYPE   : alpha
                                 centroid : [4] ]
                     time    : [5] ] ]
    constraints : { follow([5],[3],5)
                    close_to([4],[2]) }

    Figure 12: Rule Schema for Unimodal Barbed Wire
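One possible reading of the close_to constraint used here is sketched below; the distance measure and threshold are assumptions, since the text does not specify them.

    import math

    def close_to(centroid, coordlist, threshold=0.05):
        """The centroid of one gesture lies within `threshold` map units
        of some point of another gesture's coordinate list."""
        return any(math.dist(centroid, p) <= threshold for p in coordlist)

    line = [(0.00, 0.00), (0.10, 0.00), (0.20, 0.01)]
    print(close_to((0.11, 0.02), line))   # True
    print(close_to((0.90, 0.90), line))   # False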
6 Conclusion

The multimodal language processing architecture presented here enables parsing and interpretation of natural human input distributed across two or three spatial dimensions, time, and the acoustic dimension of speech. Multimodal integration strategies are stated declaratively in a unification-based grammar formalism which is interpreted by an incremental multidimensional parser. We have shown how this architecture supports multimodal (pen/voice) interfaces to dynamic maps. It has been implemented and deployed as part of QuickSet (Cohen et al 1997) and operates in real time. A broad range of multimodal utterances are supported including combination of speech with multiple gestures and visual parsing of collections of gestures into complex unimodal commands. Combinatory information and constraints may be stated either in the lexical edges or in the rule schemata, allowing individual phenomena to be described in the way that best suits their nature. The architecture is sufficiently general to support other input modes and devices including 3D gestural input. The declarative statement of multimodal integration strategies enables rapid prototyping and iterative development of multimodal systems.

The system has undergone a form of pro-active evaluation in that its design is informed by detailed predictive modeling of how users interact multimodally, and incorporates the results of empirical studies of multimodal interaction (Oviatt 1996, Oviatt et al 1997). It is currently undergoing extensive user testing and evaluation (McGee et al 1998).

Previous work on grammars and parsing for multidimensional languages has focused on two-dimensional graphical expressions such as mathematical equations, flowcharts, and visual programming languages. Lakin (1986) lays out many of the initial issues in parsing for two-dimensional drawings and utilizes specialized parsers implemented in LISP to parse specific graphical languages. Helm et al (1991) employ a grammatical framework, constrained set grammars, in which constituent structure rules are augmented with spatial constraints. Visual language parsers are built by translation of these rules into a constraint logic programming language. Crimi et al (1991) utilize a similar relation grammar formalism in which a sentence consists of a multiset of objects and relations among them. Their rules are also augmented with constraints and parsing is provided by a Prolog axiomatization. Wittenburg et al (1991) employ a unification-based grammar formalism augmented with functional constraints (F-PATR, Wittenburg 1993), and a bottom-up, incremental, Earley-style (Earley 1970) tabular parsing algorithm.

All of these approaches face significant difficulties in terms of computational complexity. At worst, an exponential number of combinations of the input elements need to be considered, and the parse table may be of exponential size (Wittenburg et al 1991:365). Efficiency concerns drive Helm et al (1991:111) to adopt a committed choice strategy under which successfully applied productions cannot be backtracked over and complex negative and quantificational constraints are used to limit rule application. Wittenburg et al's parsing mechanism is directed by expander relations in the grammar formalism which filter out inappropriate combinations before they are considered.
Wittenburg (1996) addresses the complexity issue by adding top-down predictive information to the parsing process.

This work is fundamentally different from all of these approaches in that it focuses on multimodal systems, and this has significant implications in terms of computational viability. The task differs greatly from parsing of mathematical equations, flowcharts, and other complex graphical expressions in that the number of elements to be parsed is far smaller. Empirical investigation (Oviatt 1996, Oviatt et al 1997) has shown that multimodal utterances rarely contain more than two or three elements. Each of those elements may have multiple interpretations, but the overall number of lexical edges remains sufficiently small to enable fast processing of all the potential combinations. Also, the intersection constraint on combining edges limits the impact of the multiple interpretations of each piece of input. The deployment of this architecture in an implemented system supporting real-time spoken and gestural interaction with a dynamic map provides evidence of its computational viability for real tasks. Our approach is similar to Wittenburg et al 1991 in its use of a unification-based grammar formalism augmented with functional constraints and a chart parser adapted for multidimensional spaces. Our approach differs in that, given the nature of the input, using spatial constraints and top-down predictive information to guide the parse is less of a concern, and as a result the parsing algorithm is significantly more straightforward and general.

The evolution of multimodal systems is following a trajectory which has parallels in the history of syntactic parsing. Initial approaches to multimodal integration were largely algorithmic in nature. The next stage is the formulation of declarative integration rules (phrase structure rules); then comes a shift from rules to representations (lexicalism, categorial and unification-based grammars). The approach outlined here is at the representational stage, although rule schemata are still used for constructional meaning. The next phase, which syntax is undergoing, is the compilation of rules and representations back into fast, low-powered finite state devices (Roche and Schabes 1997). At this early stage in the development of multimodal systems, we need a high degree of flexibility. In the future, once it is clearer what needs to be accounted for, the next step will be to explore compilation of multimodal grammars into lower-power devices.

Our primary areas of future research include refinement of the probability combination scheme for multimodal utterances, exploration of alternative constraint solving strategies, multiple inheritance for rule schemata, maintenance of multimodal dialogue history, and experimentation with 3D input and other combinations of modes.

References

Bolt, R. A. 1980. "Put-That-There": Voice and gesture at the graphics interface. Computer Graphics, 14.3:262-270.

Carpenter, R. 1992. The logic of typed feature structures. Cambridge University Press, Cambridge, England.

Cohen, P. R., A. Cheyer, M. Wang, and S. C. Baeg. 1994. An open agent architecture. In Working Notes of the AAAI Spring Symposium on Software Agents, 1-8.

Cohen, P. R., M. Johnston, D. McGee, S. L. Oviatt, J. A. Pittman, I. Smith, L. Chen, and J. Clow. 1997. QuickSet: Multimodal interaction for distributed applications. In Proceedings of the Fifth ACM International Multimedia Conference, 31-40.
Courtemanche, A. J., and A. Ceranowicz. 1995. ModSAF development status. In Proceedings of the 5th Conference on Computer Generated Forces and Behavioral Representation, 3-13.

Crimi, A., A. Guercio, G. Nota, G. Pacini, G. Tortora, and M. Tucci. 1991. Relation grammars and their application to multi-dimensional languages. Journal of Visual Languages and Computing, 2:333-346.

Earley, J. 1970. An efficient context-free parsing algorithm. Communications of the ACM, 13, 94-102.

Goldberg, A. 1995. Constructions: A Construction Grammar Approach to Argument Structure. University of Chicago Press, Chicago.

Helm, R., K. Marriott, and M. Odersky. 1991. Building visual language parsers. In Proceedings of Conference on Human Factors in Computing Systems: CHI 91, ACM Press, New York, 105-112.

Johnston, M., P. R. Cohen, D. McGee, S. L. Oviatt, J. A. Pittman, and I. Smith. 1997. Unification-based multimodal integration. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, 281-288.

Kay, M. 1980. Algorithm schemata and data structures in syntactic processing. In B. J. Grosz, K. S. Jones, and B. L. Webber (eds.) Readings in Natural Language Processing, Morgan Kaufmann, 1986, 35-70.

Koons, D. B., C. J. Sparrell, and K. R. Thorisson. 1993. Integrating simultaneous input from speech, gaze, and hand gestures. In M. T. Maybury (ed.) Intelligent Multimedia Interfaces, MIT Press, 257-276.

Lakin, F. 1986. Spatial parsing for visual languages. In S. K. Chang, T. Ichikawa, and P. A. Ligomenides (eds.), Visual Languages. Plenum Press, 35-85.

McGee, D., P. R. Cohen, and S. L. Oviatt. 1998. Confirmation in multimodal systems. In Proceedings of the 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics.

Neal, J. G., and S. C. Shapiro. 1991. Intelligent multimedia interface technology. In J. W. Sullivan and S. W. Tyler (eds.) Intelligent User Interfaces, ACM Press, Addison Wesley, New York, 45-68.

Oviatt, S. L. 1996. Multimodal interfaces for dynamic interactive maps. In Proceedings of Conference on Human Factors in Computing Systems, 95-102.

Oviatt, S. L., A. DeAngeli, and K. Kuhn. 1997. Integration and synchronization of input modes during multimodal human-computer interaction. In Proceedings of Conference on Human Factors in Computing Systems, 415-422.

Pittman, J. A. 1991. Recognizing handwritten text. In Proceedings of Conference on Human Factors in Computing Systems: CHI 91, 271-275.

Pollard, C. J., and I. A. Sag. 1987. Information-based syntax and semantics: Volume I. Fundamentals. CSLI Lecture Notes Volume 13. CSLI, Stanford.

Pollard, C., and I. A. Sag. 1994. Head-driven phrase structure grammar. University of Chicago Press, Chicago.

Roche, E., and Y. Schabes. 1997. Finite state language processing. MIT Press, Cambridge.

Shieber, S. M. 1986. An Introduction to unification-based approaches to grammar. CSLI Lecture Notes Volume 4. CSLI, Stanford.

Vo, M. T., and C. Wood. 1996. Building an application framework for speech and pen input integration in multimodal learning interfaces. In Proceedings of ICASSP'96.

Wauchope, K. 1994. Eucalyptus: Integrating natural language input with a graphical user interface. Naval Research Laboratory, Report NRL/FR/5510-94-9711.

Wittenburg, K., L. Weitzman, and J. Talley. 1991. Unification-based grammars and tabular parsing for graphical languages.
Journal of Visual Languages and Computing, 2:347-370.

Wittenburg, K. 1993. F-PATR: Functional constraints for unification-based grammars. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 216-223.

Wittenburg, K. 1996. Predictive parsing for unordered relational languages. In H. Bunt and M. Tomita (eds.), Recent Advances in Parsing Technologies, Kluwer, Dordrecht, 385-407.
Context Management with Topics for Spoken Dialogue Systems

Kristiina Jokinen and Hideki Tanaka and Akio Yokoo
ATR Interpreting Telecommunications Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
email: {kjokinen|tanakah|ayokoo}@itl.atr.co.jp

Abstract

In this paper we discuss the use of discourse context in spoken dialogue systems and argue that knowledge of the domain, modelled with the help of dialogue topics, is important in maintaining robustness of the system and improving recognition accuracy of spoken utterances. We propose a topic model which consists of a domain model, structured into a topic tree, and the Predict-Support algorithm which assigns topics to utterances on the basis of the topic transitions described in the topic tree and the words recognized in the input utterance. The algorithm uses a probabilistic topic type tree and mutual information between the words and different topic types, and gives recognition accuracy of 78.68% and precision of 74.64%. This makes our topic model highly comparable to discourse models which are based on recognizing dialogue acts.

1 Introduction

One of the fragile points in integrated spoken language systems is the erroneous analysis of the initial speech input.¹ The output of a speech recognizer has direct influence on the performance of the other modules of the system (dealing with dialogue management, translation, database search, response planning, etc.), and the initial inaccuracy usually gets accumulated in the later stages of processing. Performance of speech recognizers can be improved by tuning their language model and lexicon, but problems still remain with the erroneous ranking of the best paths: the information content of the selected utterances may be wrong. It is thus essential to use contextual information to compensate for various errors in the output, to provide expectations of what will be said next, and to help to determine the appropriate dialogue state.

However, negative effects of an inaccurate context have also been noted: cumulative error in discourse context drags performance of the system below the rates it would achieve were contextual information not used (Qu et al., 1996; Church and Gale, 1991). Successful use of context thus presupposes appropriate context management: (1) the features that define the context are relevant for the processing task, and (2) the construction of the context is accurate.

¹ Alexandersson (1996) remarks that with a 3000 word lexicon, a 75% word accuracy means that in practice the word lattice does not contain the actually spoken sentence.

In this paper we argue in favour of using one type of contextual information, topic information, to maintain robustness of a spoken language system. Our model deals with the information content of utterances, and defines the context in terms of topic types, related to the current domain knowledge and represented in the form of a topic tree. To update the context with topics we introduce the Predict-Support algorithm, which selects utterance topics on the basis of the topic transitions described in the topic tree and the words recognized in the current utterance. At present, the algorithm is designed as a filter which re-orders the candidates produced by the speech recognizer, but future work encompasses integration of the algorithm into a language model and the actual speech recognition process.

The paper is organised as follows. Section 2 reviews the related previous research and sets out our starting point.
Section 3 presents the topic model and the Predict-Support algorithm, and section 4 gives results of the experiments conducted with the model. Finally, section 5 summarises the properties of the topic model and points to future research.

2 Previous research

Previous research on using contextual information in spoken language systems has mainly dealt with speech acts (Nagata and Morimoto, 1994; Reithinger and Maier, 1995; Möller, 1996). In dialogue systems, speech acts seem to provide a reasonable first approximation of the utterance meaning: they abstract over possible linguistic realisations and, dealing with the illocutionary force of utterances, can also be regarded as a domain-independent aspect of communication.²

² Of course, most dialogue systems include domain-dependent acts to cope with the particular requirements of the domain, cf. Alexandersson (1996). Speech acts are also related to the task: information providing, appointment negotiation, argumentation etc. have different communicative purposes which are reflected in the set of necessary speech acts.

However, speech acts concern a rather abstract level of utterance modelling: they represent the speakers' intentions, but ignore the semantic content of the utterance. Consequently, context models which use only speech act information tend to be less specific and hence less accurate. Nagata and Morimoto (1994) report prediction accuracy of 61.7%, 77.5% and 85.1% for the first, second and third best dialogue act (in their terminology: Illocutionary Force Type) prediction, respectively, while Reithinger and Maier (1995) report the corresponding accuracy rates as 40.28%, 59.62% and 71.93%, respectively. The latter used structurally varied dialogues in their tests and noted that deviations from the defined dialogue structures made the recognition accuracy drop drastically.

To overcome prediction inaccuracies, speech act based context models are accompanied with information about the task or the actual words used. Reithinger and Maier (1995) describe plan-based repairs, while Möller (1996) argues in favour of domain knowledge. Qu et al. (1996) show that to minimize cumulative contextual errors, the best method, with 71.3% accuracy, is the Jumping Context approach, which relies on syntactic and semantic information of the input utterance rather than strict prediction of dialogue act sequences. Recently, keyword-based topic identification has also been applied to dialogue move (dialogue act) recognition (Garner, 1997).

Our goal is to build a context model for a spoken dialogue system, and we emphasise especially the system's robustness, i.e. its capability to produce reliable and meaningful responses in the presence of various errors, disfluencies, unexpected input, out-of-domain utterances, etc. (which are especially notorious when dealing with spontaneous speech). The model is used to improve word recognition accuracy, and it should also provide a useful basis for other system modules. However, we do not aim at robustness on a merely mechanical level of matching correct words, but rather on the level of maintaining the information content of the utterances. Despite the vagueness of such a term, we believe that speech act based context models are less robust because the information content of the utterances is ignored. Consistency of the information exchanged in (task-oriented) conversations is one of the main sources of dialogue coherence, and so is pertinent in context management besides speech acts.
Deviations from a predefined dialogue structure, multifunctionality of utterances, various side-sequences, disfluencies, etc. cannot be dealt with on a purely abstract level of illocution, but require knowledge of the domain, expressed in the semantic content of the utterances. Moreover, in multilingual applications, like speech-to-speech translation systems, the semantic content of utterances plays an important role and an integrated system must also produce a semantic analysis of the input utterance. Although the goal may be a shallow understanding only, it is not enough that the system knows that the speaker uttered a "request": the type of the request is also crucial.

We thus reckon that appropriate context management should provide descriptions of what is said, and that the recognition of the utterance topic is an important task of spoken dialogue systems.

3 The Topic Model

In AI-based dialogue modelling, topics are associated with a particular discourse entity, the focus, which is currently in the centre of attention and on which the participants want to focus their actions, e.g. Grosz and Sidner (1986). The topic (focus) is a means to describe thematically coherent discourse structure, and its use has been mainly supported by arguments regarding anaphora resolution and processing effort (search space limits). Our goal is to use topic information in predicting the likely content of the next utterance, and thus we are more interested in the topic types that describe the information conveyed by utterances than in the actual topic entity. Consequently, instead of tracing salient entities in the dialogue and providing heuristics for different shifts of attention, we seek a formalisation of the information structure of utterances in terms of the new information that is exchanged in the course of the dialogue.

The purpose of our topic model is to assist speech processing, and so extensive and elaborated reasoning about plans and world knowledge is not available. Instead, a model that relies on observed facts (= word tokens) and uses statistical information is preferred. We also expect the topic model to be general and extendable, so that if it is to be applied to a different domain, or more factors in the recognition of the information structure of the utterances³ are to be taken into account, the model can easily adapt to these changes.

The topic model consists of the following parts:

1. domain knowledge structured into a topic tree
2. prior probabilities of different topic shifts
3. topic vectors describing the mutual information between words and topic types
4. the Predict-Support algorithm to measure similarity between the predicted topics and the topics supported by the input utterance.

Below we describe each item in detail.

³ For instance, sentential stress and pitch accent are important in recognizing topics in spontaneous speech.

Figure 1: A partial topic tree.

3.1 Topic trees

Originally "focus trees" were proposed by McCoy and Cheng (1991) to trace foci in NL generation systems. The branches of the tree describe what sort of shifts are cognitively easy to process and can be expected to occur in dialogues: random jumps from one branch to another are not very likely to occur, and if they do, they should be appropriately marked.
The focus tree is a subgraph of the world knowledge, built in the course of the discourse on the basis of the utterances that have occurred. The tree both constrains and enables prediction of what is likely to be talked about next, and provides a top-down approach to dialogue coherence.

Our topic tree is an organisation of the domain knowledge in terms of topic types, bearing resemblance to the topic tree of Carcagno and Iordanskaja (1993). The nodes of the tree⁴ correspond to topic types which represent clusters of the words expected to occur at a particular point of the dialogue. Figure 1 shows a partial topic tree in a hotel reservation domain.

For our experiments, topic trees were hand-coded from our dialogue corpus. Since this is time-consuming and subjective, an automatic clustering program, using the notion of a topic-binder, is currently under development.

Our corpus contains 80 dialogues from the bilingual ATR Spoken Language Dialogue Database.

⁴ We will continue talking about a topic tree, although in statistical modelling the tree becomes a topic network where the shift probability between nodes which are not daughters or sisters of each other is close to zero.

The dialogues deal with hotel reservation and tourist information, and the total number of utterances is 4228. (Segmentation is based on the information structure, so that one utterance contains only one piece of new information.) The number of different word tokens is 27058, giving an average utterance length of 6.4 words. The corpus is tagged with speech acts, using the surface-pattern-oriented speech act classification of Seligman et al. (1994), and with topic types. The topics are assigned to utterances on the basis of the new information carried by the utterance. New information (Clark and Haviland, 1977; Vallduví and Engdahl, 1996) is the locus of information related to the sentential nuclear stress, and is identified with regard to the previous context as the piece of information with which the context is updated after uttering the utterance. Often new information includes the verb and the following noun phrase.

More than one third of the utterances (1747) contain short fixed phrases (Let me confirm; thank you; goodbye; ok; yes) and temporizers (well, ah, uhm). These utterances do not request or provide information about the domain, but control the dialogue in terms of time management requests or conventionalised dialogue acts (feedback-acknowledgements, thanks, greetings, closings, etc.). The special topic type IAM is assigned to these utterances to signify their role in InterAction Management. The topic type MIX is reserved for utterances which contain information not directly related to the domain (safety of the downtown area, business taking longer than expected, a friend coming for a visit, etc.), thus marking out-of-domain utterances. Typically these utterances give the reason for the request. The number of topic types in the corpus is 62. Given the small size of the corpus, this was considered too big to be used successfully in statistical calculations, and the topic types were pruned on the basis of the topic tree: only the topmost nodes were taken into account and the subtopics merged into the appropriate mother topics. Figure 2 lists the pruned topic types and their frequencies in the corpus.
  tag       count    %     interpretation
  iam       1747     41.3  Interaction Management
  room       826     19.5  Room, its properties
  stay       332      7.9  Staying period
  name       320      7.6  Name, spelling
  res        310      7.3  Make/change/extend/cancel reservation
  paym       250      5.9  Payment method
  contact    237      5.6  Contact info
  meals      135      3.2  Meals (breakfast, dinner)
  mix         71      1.7  Single unique topics

Figure 2: Topic tags for the experiment.

3.2 Topic shifts

On the basis of the tagged dialogue corpus, probabilities of different topic shifts were estimated. We used the Carnegie Mellon Statistical Language Modeling (CMU SLM) Toolkit (Clarkson and Rosenfeld, 1997) to calculate the probabilities. This builds a trigram backoff model where the conditional probabilities are calculated as follows:

  p(w3|w1,w2) = p3(w1,w2,w3)               if the trigram exists
                bo_wt2(w1,w2) × p(w3|w2)   if the bigram (w1,w2) exists
                p(w3|w2)                   otherwise

  p(w2|w1)    = p2(w1,w2)                  if the bigram exists
                bo_wt1(w1) × p1(w2)        otherwise

3.3 Topic vectors

Each word type may support several topics. For instance, the occurrence of the word room in the utterance I'd like to make a room reservation supports the topic MAKERESERVATION, but in the utterance We have only twin rooms available on the 15th it supports the topic ROOM. To estimate how well the words support the different topic types, we measured the mutual information between each word and the topic types. Mutual information describes how much information a word w gives about a topic type t, and is calculated as follows (ln is log base two, p(t|w) the conditional probability of t given w, and p(t) the probability of t):

  I(w,t) = ln [ p(w,t) / (p(w) · p(t)) ] = ln [ p(t|w) / p(t) ]

If a word and a topic are negatively correlated, mutual information is negative: the word signals the absence of the topic rather than supports its presence. Compared with simply counting whether the word occurs with a topic or not, mutual information thus gives a sophisticated and intuitively appealing method for describing the interdependence between words and the different topic types.

Each word is associated with a topic vector, which describes how much information the word w carries about each possible topic type ti:

  topvector(mi(w,t1), mi(w,t2), ..., mi(w,tn))

For instance, the topic vector of the word room is:

  topvector(room, [mi(0.21409750769169117, contact),
                   mi(-5.5258041314543815, iam),
                   mi(-3.831955835588453, meals),
                   mi(0, mix),
                   mi(..., name),
                   mi(-2.720924523199709, paym),
                   mi(0.9687353561881407, res),
                   mi(1.9035899442740105, room),
                   mi(-4.130179669884547, stay)]).

The word supports the topics ROOM and MAKERESERVATION (res), but gives no information about MIX (out-of-domain) topics, and its presence is highly indicative that the utterance is not at least IAM or STAY. It also supports CONTACT, because the corpus contains utterances like I'm in room 213 which give information about how to contact a customer who is staying at a hotel.

The topic vectors are formed from the corpus. We assume that the words are independently related to the topic types, although in the case of natural language utterances this may be too strong a constraint.

3.4 The Predict-Support Algorithm

Topics are assigned to utterances given the previous topic sequence (what has been talked about) and the words that carry new information (what is actually said). The Predict-Support Algorithm goes as follows (a schematic code sketch is given below):

1. Prediction: get the set of likely next topics with regard to the previous topic sequences, using the topic shift model.
2. Support: link each NewInfo word wj of the input to the possible topic types by retrieving its topic vector. For each topic type ti, add up the amounts of mutual information mi(wj, ti) by which it is supported by the words wj, and rank the topic types in descending order of mutual information.

3. Selection:
  (a) Default: from the set of predicted topics, select the most supported topic as the current topic.
  (b) What-is-said heuristics: if the predicted topics do not include the supported topic, rely on what is said, and select the most supported topic as the current topic (cf. the Jumping Context approach in Qu et al. (1996)).
  (c) What-is-talked-about heuristics: if the words do not support any topic (e.g. all the words are unknown or out-of-domain), rely on what is predicted and select the most likely topic as the current topic.

Figure 3 shows schematically how the algorithm works:

  U_n = w_n1, w_n2, ..., w_nm
  Prediction:  T_n = max_Tk p(Tk | Tk-2 Tk-1)
  Support:     mi(U_n, Tk) = Σ_i mi(w_ni, Tk);  T_n = max_Tk mi(U_n, Tk)
  Selection:
    Default:               T_n = max_Tk mi(U_n, Tk) and T_n = max_Tk p(Tk | Tk-2 Tk-1)
    What is said:          T_n = max_Tk mi(U_n, Tk)
    What is talked about:  T_n = max_Tk p(Tk | Tk-2 Tk-1)

Figure 3: Scheme of the Predict-Support Algorithm.

Using the probabilities obtained by the trigram backoff model, the set of likely topics is actually the set of all topic types ordered according to their likelihood. However, the original idea of the topic trees is to constrain topic shifts (transitions from a node to its daughters or sisters are favoured, while shifts to nodes in separate branches are less likely to occur unless the information under the current node is exhaustively discussed), and to maintain this restrictive property we take into consideration only topics which have a probability greater than an arbitrary limit p.

Instead of having only one utterance analysed at a time and predicting its topic, a speech recognizer produces a word lattice, and the topic is to be selected among candidates for several word strings. We envisage that the Predict-Support algorithm will work in the described way in these cases as well. However, an extra step must be added in the selection process: once the topics are decided for the n-best word strings in the lattice, the current topic is selected among the topic candidates as the highest supported topic. Consequently, the word string associated with the selected topic is then picked up as the current utterance.
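To make the three selection heuristics concrete, the core of the algorithm can be sketched in Python roughly as follows. The names shift_model (a stand-in for the trigram backoff model over topic types), topvec (the word-to-topic mutual information vectors) and the threshold p_limit are our own illustrative choices, not part of the system described above:

    def predict_support(prev_topics, newinfo_words, shift_model, topvec,
                        topic_types, p_limit=0.01):
        """Schematic rendering of the Predict-Support algorithm (section 3.4)."""
        context = tuple(prev_topics[-2:])
        # 1. Prediction: topic types licensed by the topic tree, i.e. whose
        #    trigram probability given the last two topics exceeds the limit p
        predicted = {t for t in topic_types
                     if shift_model.prob(t, context) > p_limit}
        # 2. Support: sum the mutual information contributed by each NewInfo word
        support = {t: sum(topvec.get(w, {}).get(t, 0.0) for w in newinfo_words)
                   for t in topic_types}
        # 3c. What-is-talked-about: no word supports any topic, so rely on
        #     the prediction alone
        if all(s == 0.0 for s in support.values()):
            return max(topic_types, key=lambda t: shift_model.prob(t, context))
        best = max(support, key=support.get)
        # 3a. Default: the most supported topic is among the predicted ones;
        # 3b. What-is-said: otherwise still trust the words (cf. Jumping Context)
        return best

For an n-best lattice, the same function would be applied to each candidate word string and the topic with the highest support chosen, as described above.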
We must make two caveats concerning the performance of the algorithm, related to the sparse data problem in calculating mutual information. First, there is no difference between out-of-domain words and unknown but in-domain words: both are treated as providing no information about the topic types. If such words are rare, the algorithm works fine, since the other words in the utterance usually support the correct topic. However, if such words occur frequently, there is a difference with regard to whether the unknown words belong to the domain or not. Repeated out-of-domain words may signal a shift to a new topic: the speaker has simply jumped into a different domain. Since the out-of-domain words do not contribute to any expected topic type, the topic shift is not detected. On the other hand, if unknown but in-domain words are repeated, the mutual information by which the topic types are supported is too coarse and fails to make the necessary distinctions; hence, incorrect topics can be assigned. For instance, if lunch is an unknown word, the utterance Is lunch included? may get the incorrect topic type ROOMPRICE, since this is supported by the other words of the utterance, whose topic vectors were built on the basis of training corpus examples like Is tax included?

The other caveat is the opposite of unknown words. If a word occurs in the corpus but only with a particular topic type, mutual information between the word and the topic becomes high, while it is zero with the other topics. This co-occurrence may just be an accidental fact due to a small training corpus, and the word can indeed occur with other topic types too. In these cases it is possible that the algorithm goes wrong: if none of the predicted topics of the utterance is supported by the words, we rely on the What-is-said heuristics and assign the highly supported but incorrect topic to the utterance. For instance, if included has occurred only with ROOMPRICE, the utterance Is lunch included? may still get an incorrect topic, even though lunch is a known word: the mutual information mi(included, RoomPrice) may be greater than mi(lunch, Meals).

4 Experiments

We tested the Predict-Support algorithm using cross-validation on our corpus. The accuracy results of the first predictions are given in Table 4. PP is the corpus perplexity, which represents the average branching factor of the corpus, or the number of alternatives from which to choose the correct label at a given point.

For the pruned topic types, we reserved 10 randomly picked dialogues for testing (each test file contained about 400-500 test utterances), and used the other 70 dialogues for training in each test cycle. The average accuracy rate, 78.68%, is a satisfactory result. We also did another set of cross-validation tests using 75 dialogues for training and 5 dialogues for testing, and as expected, a bigger training corpus gives better recognition results when perplexity stays the same.

To compare how much difference a bigger number of topic tags makes to the results, we conducted cross-validation tests with the original 62 topic types. A finer set of topic tags does worsen

  Test type                          PP     PS-algorithm   BO model
  Topics = 10, train = 70 files      3.82   78.68          41.30
  Topics = 10, train = 75 files      3.74   80.55          40.33
  Topics = 62, train = 70 files      5.59   64.96          41.32
  Dacts = 32, train = 70 files       6.22   58.52          19.80

Table 4: Accuracy results of the first predictions.

the accuracy, but not as much as we expected: the Support part of the algorithm effectively remedies prediction inaccuracies.

Since the same corpus is also tagged with speech acts, we conducted similar cross-validation tests with speech act labels. The recognition rates are worse than those of the 62 topic types, although perplexity is almost the same. We believe that this is because speech acts ignore the actual content of the utterance. Although our speech act labels are surface-oriented, they correlate with only a few fixed phrases (I would like to; please), and are thus less suitable than topics, which by definition deal with the content, for conveying the semantic focus of the utterances as expressed by the content words.

As lower-bound experiments, we conducted cross-validation tests using the trigram backoff model alone, i.e.
relying only on the context, which records the history of topic types. For the first ranked predictions the accuracy rate is about 40%, which is on the same level as the first ranked speech act predictions reported in Reithinger and Maier (1995).

The average precision of the Predict-Support algorithm is also calculated (Table 5). Precision is the ratio of correctly assigned tags to the total number of assigned tags. The average precision for all the pruned topic types is 74.64%, varying from 95.63% for ROOM to 37.63% for MIX. If MIX is left out, the average precision is 79.27%. The poor precision for MIX is due to the unknown word problem with mutual information.

  Topic type   Precision     Topic type   Precision
  contact      55.75         paym         83.25
  iam          79.13         res          82.13
  meals        62.13         room         95.63
  name         88.12         stay         88.00
                             mix          37.63
  Average      74.64

Table 5: Precision results for different topic types.

The results of the topic recognition show that the model performs well, and we notice a considerable improvement in the accuracy rates compared to the accuracy rates in speech act recognition cited in section 2 (modulo perplexity). Although the rates are somewhat optimistic, as we used transcribed dialogues (= the correct recognizer output), we can still safely conclude that topic information provides a promising starting point in attempts to provide an accurate context for spoken dialogue systems. This can be further verified in the perplexity measures for word recognition: compared to a general language model trained on non-tagged dialogues, perplexity decreases by 20% for a language model which is trained on topic-dependent dialogues, and by 14% if we use an open test with unknown words included as well (Jokinen and Morimoto, 1997).

At the end we have to make a remark concerning the relevance of speech acts: our argumentation is not meant to underestimate their use for other purposes in dialogue modelling, but rather to emphasise the role of topic information in successful context management: in our opinion the topics provide a more reliable and straightforward approximation of the utterance meaning than speech acts, and should not be ignored in the definition of context models for spoken dialogue systems.

5 Conclusions

The paper has presented a probabilistic topic model to be used as a context model for spoken dialogue systems. The model combines both top-down and bottom-up approaches to topic modelling: the topic tree, which structures domain knowledge, provides expectations of likely topic shifts, whereas the information structure of the utterances is linked to the topic types via topic vectors which describe the mutual information between words and topic types. The Predict-Support Algorithm assigns topics to utterances, and achieves an accuracy rate of 78.68% and a precision rate of 74.64%.

The paper also suggests that the context needed to maintain robustness of spoken dialogue systems can be defined in terms of topic types rather than speech acts. Our model uses actually occurring words and topic information of the domain, and gives highly competitive results for the first ranked topic prediction: there is no need to resort to extra information to disambiguate the three best candidates. Construction of the context, necessary to improve word recognition and for further processing, thus becomes more accurate and reliable.
Research on statistical topic modelling and on combining topic information with spoken language systems is still new and contains several aspects for future research. We have mentioned automatic domain modelling, in which clustering methods can be used to build the necessary topic trees. Another research issue is the coverage of topic trees. Topic trees can be generalised with regard to world knowledge, but this requires deep analysis of the utterance meaning, and an inference mechanism to reason about conceptual relations. We will explore possibilities to extract semantic categories from the parse tree and integrate these with the topic knowledge. We will also investigate further the relation between topics and speech acts, and specify their respective roles in context management for spoken dialogue systems. Finally, statistical modelling is prone to sparse data problems, and we need to consider ways to overcome inaccuracies in calculating mutual information.

References

J. Alexandersson. 1996. Some ideas for the automatic acquisition of dialogue structure. In Dialogue Management in Natural Language Processing Systems, pages 149-158. Proceedings of the 11th Twente Workshop on Language Technology, Twente.

D. Carcagno and L. Iordanskaja. 1993. Content determination and text structuring: two interrelated processes. In H. Horacek and M. Zock, editors, New Concepts in Natural Language Generation, pages 10-26. Pinter Publishers, London.

K. W. Church and W. A. Gale. 1991. Probability scoring for spelling correction. Statistics and Computing, (1):93-103.

H. H. Clark and S. E. Haviland. 1977. Comprehension and the given-new contract. In R. O. Freedle, editor, Discourse Production and Comprehension, Vol. 1. Ablex.

P. Clarkson and R. Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. In Eurospeech-97, pages 2707-2710.

P. Garner. 1997. On topic identification and dialogue move recognition. Computer Speech and Language, 11:275-306.

B. J. Grosz and C. L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.

K. Jokinen and T. Morimoto. 1997. Topic information and spoken dialogue systems. In NLPRS-97, pages 429-434. Proceedings of the Natural Language Processing Pacific Rim Symposium 1997, Phuket, Thailand.

K. McCoy and J. Cheng. 1991. Focus of attention: Constraining what can be said next. In C. L. Paris, W. R. Swartout, and W. C. Moore, editors, Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 103-124. Kluwer Academic Publishers, Norwell, Massachusetts.

J-U. Möller. 1996. Using DIA-MOLE for unsupervised learning of domain specific dialogue acts from spontaneous language. Technical Report FBI-HH-B-191/96, University of Hamburg.

M. Nagata and T. Morimoto. 1994. An information-theoretic model of discourse for next utterance type prediction. In Transactions of Information Processing Society of Japan, volume 35:6, pages 1050-1061.

Y. Qu, B. Di Eugenio, A. Lavie, L. Levin, and C. P. Rosé. 1996. Minimizing cumulative error in discourse context. In Dialogue Processing in Spoken Dialogue Systems, pages 60-64. Proceedings of the ECAI'96 Workshop, Budapest, Hungary.

N. Reithinger and E. Maier. 1995. Utilizing statistical dialogue act processing in Verbmobil. In Proceedings of the 33rd Annual Meeting of the ACL, pages 116-121.

M. Seligman, L. Fais, and M. Tomokiyo. 1994. A bilingual set of communicative act labels for spontaneous dialogues.
Technical Report TR-IT-81, ATR Interpreting Telecommunications Research Laboratories, Kyoto, Japan.

E. Vallduví and E. Engdahl. 1996. The linguistic realization of information packaging. Linguistics, 34:459-519.
A Statistical Analysis of Morphemes in Japanese Terminology

Kyo KAGEURA
National Center for Science Information Systems
3-29-1 Otsuka, Bunkyo-ku, Tokyo, 112-8640 Japan
E-Mail: [email protected]

Abstract

In this paper I will report the result of a quantitative analysis of the dynamics of the constituent elements of Japanese terminology. In Japanese technical terms, the linguistic contribution of morphemes differs greatly according to their type of origin. To analyse this aspect, a quantitative method is applied which can properly characterise the dynamic nature of morphemes in terminology on the basis of a small sample.

1 Introduction

In computational linguistics, interest in terminological applications such as automatic term extraction is growing, and many studies use quantitative information (cf. Kageura & Umino, 1996). However, the basic quantitative nature of terminological structure, which is essential for terminological theory and applications, has not yet been exploited. Static quantitative descriptions are not sufficient, as there are terms which do not appear in the sample. So it is crucial to establish models by which the terminological structure beyond the sample size can be properly described.

In Japanese terminology, the roles of morphemes differ according to their types of origin, i.e. the morphemes borrowed mainly from Western languages (borrowed morphemes) and the native morphemes, including the Chinese-origined morphemes which are the majority. There are some quantitative studies (Ishii, 1987; Nomura & Ishii, 1989), but they treat only the static nature of the sample.

Located at the intersection of these two backgrounds, the aim of the present study is twofold: (1) to introduce a quantitative framework in which the dynamic nature of terminology can be described, and to examine its theoretical validity, and (2) to describe the quantitative dynamics of morphemes as a 'mass' in Japanese terminology, with reference to the types of origin.

2 Terminological Data

2.1 The Data

We use a list of different terms as a sample, and observe the quantitative nature of its constituent elements or morphemes. Quantitative regularities are expected to be observed at this level, because a large portion of terms is complex (Nomura & Ishii, 1989), term formation is systematic (Sager, 1990), and the quantitative nature of morphemes in terminology is independent of the token frequency of terms, because term formation is a lexical formation.

With the correspondences between text and terminology, sentences and terms, and words and morphemes, the present work can be regarded as parallel to the quantitative study of words in texts (Baayen, 1991; Baayen, 1993; Mandelbrot, 1962; Simon, 1955; Yule, 1944; Zipf, 1935). Such terms as 'type', 'token', 'vocabulary', etc. will be used in this context.

Two Japanese terminological data sets are used in this study: computer science (CS: Aiso, 1993) and psychology (PS: Japanese Ministry of Education, 1986). The basic quantitative data are given in Table 1, where T, N, and V(N) indicate the number of terms, of running morphemes (tokens), and of different morphemes (types), respectively.

In computer science, the frequencies of the borrowed and the native morphemes are not very different.
  Domain         T      N      V(N)   N/T    N/V(N)   CL
  CS  all        14983  36640  5176   2.45   7.08     0.211
      borrowed          14696  2809          5.23     0.242
      native            21944  2367          9.27     0.174
  PS  all        6272   14314  3594   2.28   3.98     0.235
      borrowed          1541   993           1.55     0.309
      native            12773  2599          4.91     0.207

Table 1. Basic Figures of the Terminological Data

In psychology, the borrowed morphemes constitute only slightly more than 10% of the tokens. The mean frequency N/V(N) of the borrowed morphemes is much lower than that of the native morphemes in both domains.

2.2 LNRE Nature of the Data

The LNRE (Large Number of Rare Events) zone (Chitashvili & Baayen, 1993) is defined as the range of sample sizes where the population events (different morphemes) are far from being exhausted. That our data lie in this zone is shown by the fact that the numbers of hapax legomena and of dis legomena are still increasing (see Figure 1 for the hapaxes).

A convenient test of whether the sample is located in the LNRE zone is to examine the ratio of loss in the number of morpheme types when the sample relative frequencies are taken as estimates of the population probabilities. Assuming the binomial model, the ratio of loss is obtained by:

  CL = (V(N) - E[V(N)]) / V(N)
     = Σ_{m≥1} V(m,N) (1 - p(i|f(i,N)=m, N))^N / V(N)

where:
  f(i,N): frequency of a morpheme wi in a sample of N.
  p(i,N) = f(i,N)/N: sample relative frequency.
  m: frequency class, or number of occurrences.
  V(m,N): the number of morpheme types occurring m times (spectrum elements) in a sample of N.

In the two data sets, we underestimate the number of morpheme types by more than 20% (CL in Table 1), which indicates that they are clearly located in the LNRE zone.

3 The LNRE Framework

When a sample is located in the LNRE zone, the values of statistical measures such as the type-token ratio and the parameters of 'laws' of word frequency distributions (e.g. Mandelbrot, 1962) change systematically with the sample size, owing to the unobserved events. To treat LNRE samples, therefore, the factor of sample size has to be taken into consideration.

Good (1953) gives a method of re-estimating the population probabilities of the types in the sample, as well as estimating the probability mass of unseen types. There is also work on the estimation of the theoretical vocabulary size (Efron & Thisted, 1976; National Language Research Institute, 1958; Tuldava, 1980). However, these do not give the means to estimate such values as V(N) and V(m,N) for arbitrary sample sizes, which are what we need. The LNRE framework (Chitashvili & Baayen, 1993) offers means suitable for the present study.

3.1 Binomial/Poisson Assumption

Assume that there are S different morphemes wi, i = 1, 2, ..., S, in the terminological population, with a probability pi associated with each of them. Assuming the binomial distribution and its Poisson approximation, we can express the expected numbers of morphemes and of spectrum elements in a given sample of size N as follows:

  E[V(N)]   = S - Σ_{i=1}^{S} (1 - pi)^N  ≈  Σ_{i=1}^{S} (1 - e^{-N pi})      (1)

  E[V(m,N)] = Σ_{i=1}^{S} (N pi)^m e^{-N pi} / m!                             (2)

As our data are in the LNRE zone, we cannot estimate pi directly. Good (1953) and Good & Toulmin (1956) introduced a method of interpolating and extrapolating the number of types to an arbitrary sample size, but it cannot be used for extrapolating to a very large size.

3.2 The LNRE Models

Assume that the distribution of the grouped probabilities p follows a distribution 'law' which can be expressed by some structural type distribution G(p) = Σ_{i=1}^{S} I_{[pi ≥ p]}, where I = 1 when pi ≥ p and 0 otherwise.
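As a numerical illustration of expressions (1) and (2), the expected vocabulary size and spectrum elements under the Poisson approximation can be computed directly once population probabilities are given. The following Python sketch uses an invented geometric population, since the real population probabilities are of course unknown:

    import math

    def expected_types(probs, N):
        """Expression (1): E[V(N)] = sum_i (1 - exp(-N p_i))."""
        return sum(1 - math.exp(-N * p) for p in probs)

    def expected_spectrum(probs, N, m):
        """Expression (2): E[V(m,N)] = sum_i (N p_i)^m exp(-N p_i) / m!."""
        return sum((N * p) ** m * math.exp(-N * p) / math.factorial(m)
                   for p in probs)

    # a toy geometric population, for illustration only
    probs = [0.5 ** k for k in range(1, 31)]
    print(expected_types(probs, 1000))        # expected vocabulary size at N = 1000
    print(expected_spectrum(probs, 1000, 1))  # expected number of hapax legomena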
Using G(p), the expressions (1) and (2) can be re-expressed as follows: E[V(N)I = (1 - e -~') da(p). (3) 639 ~0 ~ E[V(rn, N)] = (Np)"~e-NP/m! dG(p). (4) where dG(p) = G(pj) - G(pj+l ) around PJ, and 0 otherwise, in which p is now grouped for the same value and indexed by the subscript j that indicates in ascending order the values of p. In using some explicit expressions such as lognormal 'law' (Carrol, 1967) for G(p), we again face the problem of sample size depen- dency of the parameters of these 'laws'. To over- come the problem, a certain distribution model for the population is assumed, which manifests itself as one of the 'laws' at a pivotal sample size Z. By explicitly incorporating Z as a parame- ter, the models can be completed, and it be- comes possible (i) to represent the distribution of population probabilities by means of G(p) with Z and to estimate the theoretical vocabu- lary size, and (ii) to interpolate and extrapolate V(N) and V(m, N) to the arbitrary sample size N, by such an expression: E[V(m, N)] = --I = -(~(Z-'-P))'~)m! e-~(zP) dG(p) The parameters of the model, i.e. the orig- inal parameters of the 'laws' of word frequency distributions and the pivotal sample size Z, are estimated by looking for the values that most properly describe the distributions of spectrum elements and the vocabulary size at the given sample size. In this study, four LNRE mod- els were tried, which incorporate the lognormal 'law' (Carrol, 1967), the inverse Gauss-Poisson 'law' (Sichel, 1986), Zipf's 'law' (Zipf, 1935) and Yule-Simon 'law' (Simon, 1955). 4 Analysis of Terminology 4.1 Random Permutation Unlike texts, the order of terms in a given ter- minological sample is basically arbitrary. Thus term-level random permutation can be used to obtain the better descriptions of sub-samples. In the following, we use the results of 1000 term- level random permutations for the empirical de- scriptions of sub-samples. In fact, the results of the term-level and morpheme-level permutations almost coincide, with no statistically significant difference. From this we can conclude that the binomial/Poisson assumption of the LNRE models in the previous section holds for the terminological data. 4.2 Quantitative Measures Two measures are used for observing the dy- namics of morphemes in terminology. The first is the mean frequency of morphemes: N X(V(N))- V(N) (5) The repeated occurrence of a morpheme indi- cates that it is used as a constituent element of terms, as the samples consist of term types. As it is not likely that the same morpheme occurs twice in a term, the mean frequency indicates the average number of terms which is connected by a common morpheme. A more important measure is the growth rate, P(N). If we observe E[V(N)] for changing N, we obtain the growth curve of the morpheme types. The slope of the growth curve gives the growth rate. By taking the first derivate of E[V(N)] given by equation (3), therefore, we obtain the growth rate of the morpheme types: ~N E[(V(1, g)] P(N) = E[V(N)] = N (6) This "expresses in a very real sense the proba- bility that new types will be encountered when the ... sample is increased" (Baayen, 1991). For convenience, we introduce the notation for the complement of P(N), the reuse ratio: R(N) = 1 - P(N) (7) which expresses the probability that the existing types will be encountered. For each type of morpheme, there are two ways of calculating P(N). The first is on the basis of the total number of the running mor- phemes (frame sample). 
For the borrowed mor- phemes, for instance, it is defined as: PI~(N) = E[V~ ...... a(1, N)]/N The second is on the basis of the number of running morphemes of each type (item sample). For instance, for the borrowed morphemes: Pib(N) = E[Vb ...... a(1, N)]/Nb ...... ,i Correspondingly, the reuse ratio R(N) is also defined in two ways. Pi reflects the growth rate of the morphemes of each type observed separately. Each of them expresses the probability of encountering a new morpheme for the separate sample consisting of the morphemes of the same type, and does not in itself indicate any characteristics in the frame sample. 640 On the other hand, Pf and Rf express the quantitative status of the morphemes of each type as a mass in terminology. So the transi- tions of Pf and Rf, with changing N, express the changes of the status of the morphemes of each type in the terminology. In terminology, Pf can be interpreted as the probability of in- corporating new conceptual elements. 4.3 Application of LNRE Models Table 2 shows the results of the application of the LNRE models, for the models whose mean square errors of V(N) and V(1,N) are mini- mal for 40 equally-spaced intervals of the sam- ple. Figure 1 shows the growth curve of the morpheme types up to the original sample size (LNRE estimations by lines and the empirical values by dots). According to Baayen (1993), a good lognormal fit indicates high productiv- ity, and the large Z of Yule-Simon model also means richness of the vocabulary. Figure 1 and the chosen models in Table 2 confirm these in- terpretations. Domain Model Z $ V(N) E[V(N)] CS all Gauss-Poisson 236 56085 5176 5176.0 borrowed Lognormal 419 75296 2809 2809.0 native Gauss-Poisson 104 6095 2387 2362.6 PS all Losnormal 1283 30691 3594 3694.0 borrowed Yule-Simon 38051 ~1 995 996.0 native Gauss-Poisson 231 101 2599 2599.0 * Z : pivotal sample sise ; S : population number of types Table 2. The Applications of LNRE Models From Figure 1, it is observed that the num- ber of the borrowed morpheme types in com- puter science becomes bigger than that of the native morphemes around N = 15000, while in psychology the number of the borrowed mor- phemes is much smaller within the given sam- ple range. All the elements are still growing, which implies that the quantitative measures keep changing. Figure 2 shows the empirical and LNRE es- timation of the spectrum elements, for m = 1 to 10. In both domains, the differences be- tween V(1, N) and V(2, N) of the borrowed morphemes are bigger than those of the native morphemes. Both the growth curves in Figure 1 and the distributions of the spectrum elements in Figure 2 show, at least to the eye, the reasonable fits of the LNRE models. In the discussions below, we assume that the LNRE based estimations are 641 z V(N):all / *-- V(N):borrowed / ~- V(N): V "S ol ~V(1 ,N):all / *---V(1,N):borr0wed / ~--V(l,N):native f ~ ....I 7J j 10000 20000 30000 2000300(~00~000 12000 N N lines : LNRE estimations ; dots : empirical values (a) Computer Science (b) Psychology Fig. 1. Empirical and LNRE Growth Curve §8. t ~_~.: ((::: )) ::1: trowed ~-V(m,N):native g~ ~V(m,N):all *-- V(m,N):b0rrowed 2 4 6 8 10 2 4 6 8 10 m 01 lines : LNB.E estimations ; dots : empirical values (a) Computer Science (b) Psychology Fig. 2. Empirical and LNRE Spectrum Elements valid, within the reasonable range of N. The statistical validity will be examined later. 
4.3.1 Mean Frequency

As the population numbers of morphemes are estimated to be finite, with the exception of the borrowed morphemes in psychology, lim_{N→∞} X(V(N)) = ∞, which is not of much interest. More important and interesting is the actual transition of the mean frequencies within a realistic range of N, because the size of a terminology in practice is expected to be limited.

Figure 3 shows the transitions of X(V(N)), based on the LNRE models, up to 2N in computer science and 5N in psychology, plotted according to the size of the frame sample.

Fig. 3. Mean Frequencies

The mean frequencies are consistently higher in computer science than in psychology. Around N = 70000, X(V(N)) in computer science is expected to be 10, while in psychology it is 9. The particularly low value of X(V(N_borrowed)) in psychology is also notable.
Because we have been concerned with the morphemes as a mass, we could safely use N in- stead of T to discuss the status of morphemes, 642 implicitly assuming that the average number of constituent morphemes in a term is stable. Among the measures we used in the anal- ysis of morphemes, the most important is the growth rate. The growth rate as the mea- sure of the productivity of affixes (Baayen, 1991) was critically examined by van Marle (1991). One of his essential points was the re- lation between the performance-based measure and the competence-based concept of produc- tivity. As the growth rate is by definition a performance-based measure, it is not unnatu- ral that the competence-based interpretation of the performance-based productivity measure is requested, when the object of the analysis is di- rectly related to such competence-oriented no- tion as derivation. In terminology, however, this is not the case, because the notion of terminology is essentially performance-oriented (Kageura, 1995). The growth rate, which con- cerns with the linguistic performance, directly reflects the inherent nature of terminological structure 1. One thing which may also have to be ac- counted for is the influence of the starting sam- ple size. Although we assumed that the order of terms in a given terminology is arbitrary, it may • not be the case, because usually a smaller sam- ple may well include more 'central' terms. We may need further study concerning the status of the available terminological corpora. 5.2 Statistical Validity Figure 5 plots the values of the z-score for E[V] and E[V(1)], for the models used in the analy- ses, at 20 equally-spaced intervals for the first half of the sample 2. In psychology, all but one values are within the 95% confidence interval. In computer science, however, the fit is not so good as in psychology. Table 3 shows the X 2 values calculated on the basis of the first 15 spectrum elements at the original sample size. Unfortunately, the X 2 values show that the models have obtained the fits which are not ideal, and the null hypothesis XNote however that the level of what is meant by the word 'performance' is different, as Baayen (1991) is text- oriented, while here it is vocabulary-oriented. 2To calculate the variance we need V(2N), so the test can be applied only for the first half of the sample cD V(N):aU ~,, o-- V(N):borrow~ r#~q~l --" V(N):native ~ , o ~ io V(1,N):all ~-- Y(IJ~:bon'awec 5 10 15 20 5 10 15 20 Intewals up to N/2 Intervals up to N/2 (a) Computer Science (b) Psychology Fig. 5. Z-Scores for E[V] and E[V(1)] is rejected at 95% level, for all the models we used. Data Model X z DF CS all Gauss-Poisson 129.70 14 borrowed Lognormal 259.08 14 native Gauss-Poisson 60.30 13 PS all Lognormal 72.21 14 borrowed Yule-Simon 179.36 14 native Gauss-Poisson 135.30 13 Table 3. X 2 Values for the Models Unlike texts (Baayen, 1996a;1996b), the ill- fits of the growth curve of the models are not caused by the randomness assumption of the model, because the results of the term-level per- mutations, used for calculating z-scores, are sta- tistically identical to the results of morpheme- level permutations. This implies that we need better models if we pursue the better curve- fitting. On the other hand, if we emphasise the theoretical assumption of the models of fre- quency distributions used in the LNRE analy- ses, it is necessary to introduce the finer distinc- tions of morphemes. 
6 Conclusions

Using the LNRE models, we have successfully analysed the dynamic nature of the morphemes in Japanese terminology. As the majority of the terminological data is located in the LNRE zone, it is important to use a statistical framework which allows for the LNRE characteristics. The LNRE models give suitable means.

We are currently extending our research to integrating the quantitative nature of morphological distributions into the qualitative model of term formation, by taking into account the positional and combinatorial nature of morphemes and the distributions of term length.

Acknowledgement

I would like to express my thanks to Dr. Harald Baayen of the Max Planck Institute for Psycholinguistics, for introducing me to the LNRE models and giving me advice. Without him, this work couldn't have been carried out. I also thank Ms. Clare McCauley of the NLP group, Department of Computer Science, the University of Sheffield, for checking the draft.

References

[1] Aiso, H. (ed.) (1993) Joho Syori Yogo Daijiten. Tokyo: Ohm.
[2] Baayen, R. H. (1991) "Quantitative aspects of morphological productivity." Yearbook of Morphology 1991. p. 109-149.
[3] Baayen, R. H. (1993) "Statistical models for word frequency distributions: A linguistic evaluation." Computers and the Humanities. 26(5-6), p. 347-363.
[4] Baayen, R. H. (1996a) "The randomness assumption in word frequency statistics." Research in Humanities Computing 5. p. 17-31.
[5] Baayen, R. H. (1996b) "The effects of lexical specialization on the growth curve of the vocabulary." Computational Linguistics. 22(4), p. 455-480.
[6] Carroll, J. B. (1967) "On sampling from a lognormal model of word frequency distribution." In: Kucera, H. and Francis, W. N. (eds.) Computational Analysis of Present-Day American English. Providence: Brown University Press. p. 406-424.
[7] Chitashvili, R. J. and Baayen, R. H. (1993) "Word frequency distributions." In: Hrebicek, L. and Altmann, G. (eds.) Quantitative Text Analysis. Trier: Wissenschaftlicher Verlag. p. 54-135.
[8] Efron, B. and Thisted, R. (1976) "Estimating the number of unseen species: How many words did Shakespeare know?" Biometrika. 63(3), p. 435-447.
[9] Good, I. J. (1953) "The population frequencies of species and the estimation of population parameters." Biometrika. 40(3-4), p. 237-264.
[10] Good, I. J. and Toulmin, G. H. (1956) "The number of new species, and the increase in population coverage, when a sample is increased." Biometrika. 43(1), p. 45-63.
[11] Ishii, M. (1987) "Economy in Japanese scientific terminology." Terminology and Knowledge Engineering '87. p. 123-136.
[12] Japanese Ministry of Education (1986) Japanese Scientific Terms: Psychology. Tokyo: Gakujutu-Sinkokai.
[13] Kageura, K. (1995) "Toward the theoretical study of terms." Terminology. 2(2), p. 239-257.
[14] Kageura, K. and Umino, B. (1996) "Methods of automatic term recognition: A review." Terminology. 3(2), p. 259-289.
[15] Mandelbrot, B. (1962) "On the theory of word frequencies and on related Markovian models of discourse." In: Jakobson, R. (ed.) Structure of Language and its Mathematical Aspects. Rhode Island: American Mathematical Society. p. 190-219.
[16] Marle, J. van. (1991) "The relationship between morphological productivity and frequency." Yearbook of Morphology 1991. p. 151-163.
[17] National Language Research Institute (1958) Research on Vocabulary in Cultural Reviews. Tokyo: NLRI.
[18] Nomura, M. and Ishii, M. (1989) Gakujutu Yogo Goki-Hyo. Tokyo: NLRI.
[19] Sager, J. C. (1990) A Practical Course in Terminology Processing. Amsterdam: John Benjamins.
[20] Sichel, H. S. (1986) "Word frequency distributions and type-token characteristics." Mathematical Scientist. 11(1), p. 45-72.
[21] Simon, H. A. (1955) "On a class of skew distribution functions." Biometrika. 42(4), p. 435-440.
[22] Tuldava, J. (1980) "A mathematical model of the vocabulary-text relation." COLING'80. p. 600-604.
[23] Yule, G. U. (1944) The Statistical Study of Literary Vocabulary. Cambridge: Cambridge University Press.
[24] Zipf, G. K. (1935) The Psycho-Biology of Language. Boston: Houghton Mifflin.
A Statistical Analysis of Morphemes in Japanese Terminology

Kyo KAGEURA
National Center for Science Information Systems
3-29-1 Otsuka, Bunkyo-ku, Tokyo, 112-8640 Japan
E-Mail: [email protected]

Abstract: This paper reports the results of a statistical analysis of the dynamics of the constituent elements of Japanese terms. In Japanese terminology, the contribution of the morphological elements differs according to their type of origin (between the elements borrowed from Western languages and the original elements, including those adopted from Chinese). To analyse this point, a quantitative method is applied which can properly characterize the dynamics of the morphological data of terms on the basis of small samples.
Pseudo-Projectivity: A Polynomially Parsable Non-Projective Dependency Grammar

Sylvain Kahane* and Alexis Nasr† and Owen Rambow‡
* TALANA, Université Paris 7 (sk@ccr.jussieu.fr)
† LIA, Université d'Avignon (alexis.nasr@lia.univ-avignon.fr)
‡ CoGenTex, Inc. (owen@cogentex.com)

1 Introduction

Dependency grammar has a long tradition in syntactic theory, dating back to at least Tesnière's work from the thirties.¹ Recently, it has gained renewed attention as empirical methods in parsing are discovering the importance of relations between words (see, e.g., (Collins, 1997)), which is what dependency grammars model explicitly, but context-free phrase-structure grammars do not. One problem that has posed an impediment to more widespread acceptance of dependency grammars is the fact that there is no computationally tractable version of dependency grammar which is not restricted to projective analyses. However, it is well known that there are some syntactic phenomena (such as wh-movement in English or clitic climbing in Romance) that require non-projective analyses. In this paper, we present a form of projectivity which we call pseudo-projectivity, and we present a generative string-rewriting formalism that can generate pseudo-projective analyses and which is polynomially parsable.

The paper is structured as follows. In Section 2, we introduce our notion of pseudo-projectivity. We briefly review a previously proposed formalization of projective dependency grammars in Section 3. In Section 4, we extend this formalism to handle pseudo-projectivity. We informally present a parser in Section 5.

¹The work presented in this paper is collective and the order of authors is alphabetical.

2 Linear and Syntactic Order of a Sentence

2.1 Some Notation and Terminology

We will use the following terminology and notation in this paper. The hierarchical order (dominance) between the nodes of a tree T will be represented with the symbols ≺_T and ⪯_T. Whenever they are unambiguous, the notations ≺ and ⪯ will be used. When x ≺ y, we will say that x is a descendant of y and y an ancestor of x. The projection of a node x, belonging to a tree T, is the set of the nodes y of T such that y ⪯_T x. An arc between two nodes y and x of a tree T, directed from y to x, will be noted (y, x). The node x will be referred to as the dependent and y as the governor. The latter will be noted, when convenient, x⁺_T (x⁺ when unambiguous). The notations (x⁺, x) and x⁺ are unambiguous because a node x has at most one governor in a tree. As usual, an ordered tree is a tree enriched with a linear order over the set of its nodes. Finally, if l is an arc of an ordered tree T, then Supp(l) represents the support of l, i.e. the set of the nodes of T situated between the extremities of l, extremities included. We will say that the elements of Supp(l) are covered by l.

2.2 Projectivity

The notion of projectivity was introduced by (Lecerf, 1960) and has received several different definitions since then. The definition given here is borrowed from (Marcus, 1965) and (Robinson, 1970):

Definition: An arc (x⁺, x) is projective if and only if, for every y covered by (x⁺, x), y ⪯ x⁺. A tree T is projective if and only if every arc of T is projective.

A projective tree is represented in Figure 1.

[Figure 1: A projective sub-categorization tree for "The big cat sometimes eats white mice"]

A projective dependency tree can be associated with a phrase-structure tree whose constituents are the projections of the nodes of the dependency tree. Projectivity is therefore equivalent, in phrase-structure markers, to continuity of constituents.
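Since the definition refers only to arcs, precedence and dominance, projectivity can be checked directly on a head-vector encoding of an ordered tree. The following sketch is illustrative (not from the paper); it assumes the tree is given as a list heads, where heads[i] is the position of the governor of the i-th word in linear order (None for the root).

```python
def is_projective(heads):
    """Return True iff every arc (heads[d], d) is projective, i.e.
    every node lying between a dependent and its governor is
    dominated by that governor."""
    def dominated_by(node, ancestor):
        # Walk up the governor chain from `node`.
        while node is not None:
            if node == ancestor:
                return True
            node = heads[node]
        return False

    for d, h in enumerate(heads):
        if h is None:
            continue
        lo, hi = (h, d) if h < d else (d, h)
        if not all(dominated_by(y, h) for y in range(lo + 1, hi)):
            return False
    return True

# The tree of Figure 2 ("Who do you think she invited?") contains an
# arc that fails this test, while the tree of Figure 1 passes it.
```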
Projectivity is therefore equivalent, in phrase structure markers, to con- 646 The big cat sometimes eats white mice Figure 1: A projective sub-categorization tree tinuity of constituent. The strong constraints introduced by the pro- jectivity property on the relationship between hierarchical order and linear order allow us to describe word order of a projective dependency tree at a local level: in order to describe the linear position of a node, it is sufficient to de- scribe its position towards its governor and sis- ter nodes. The domain of locality of the linear order rules is therefore limited to a subtree of depth equal to one. It can be noted that this do- main of locality is equal to the domain of local- ity of sub-categorization rules. Both rules can therefore be represented together as in (Gaif- man, 1965) or separately as will be proposed in 3. 2.3 Pseudo-Projectivity Although most linguistic structures can be represented as projective trees, it is well known that projectivity is too strong a constraint for dependency trees, as shown by the example of Figure 2, which includes a non-projective arc (marked with a star). Who do you think she invited ? Figure 2: A non projective sub-categorization tree The non projective structures found in linguistics represent a small subset of the potential non projective structures. We will define a property (more exactly a family of properties), weaker than projectivity, called pseudo-projectivity, which describes a subset of the set of ordered dependency trees, containing the non-projective linguistic struc- tures. In order to define pseudo-projectivity, we in- troduce an operation on dependency trees called lifting. When applied to a tree, this operation leads to the creation of a second tree, a lift of the first one. An ordered tree T' is a lift of the ordered tree T if and only if T and T' have the same nodes in the same order and for ev- ery node x, x +T ..<T x+T'. We will say that the node x has been lifted from x +T (its syntactic governor) to x +T' (its linear governor). Recall that the linear position of a node in a projective tree can be defined relative to its governor and its sisters. In order to define the linear order in a non projective tree, we will use a projective lift of the tree. In this case, the position of a node can be defined only with regards to its governor and sisters in the lift, i.e., its linear governor and sisters. Definition: An ordered tree T is said pseudo-projective if there exists a lift T' of tree T which is projective. If there is no restriction on the lifting, the previous definition is not very interesting since we can in fact take any non-projective tree and lift all nodes to the root node and obtain a pro- jective tree. We will therefore constrain the lifting by a set of rules, called lifting rules. Consider a set of (syntactic) categories. The following defini- tions make sense only for trees whose nodes are labeled with categories. 2 The lifting rules are of the following form (LD, SG and LG are categories and w is a reg- ular expression on the set of categories): LD $ SG w LG (1) This rule says that a node of category LD can be lifted from its syntactic governor of cat- egory SG to its linear governor of category LG through a path consisting of nodes of category C1,..., Ca, where the string C1... Cn belongs to L(w). Every set of lifting rules defines a par- ticular property of pseudo-projectivity by im- posing particular constraints on the lifting. 
A sit is possible to define pseudo-projectivity purely structurally (i.e. without referring to the labeling). For example, we can impose that each node x is lifted to the highest ancestor of x covered by ~2" ((Nasr, 1996)). The resulting pseudo-projectivity is a fairly weak exten- sion to projectivity, which nevertheless covers major non- projective linguistic structures. However, we do not pur- sue a purely structural definition of pseudo-projectivity in this paper. 647 linguistic example of lifting rule is given in Sec- tion 4. The idea of building a projective tree by means of lifting appears in (Kunze, 1968) and is used by (Hudson, 1990) and (Hudson, un- published). This idea can also be compared to the notion of word order domain (Reape, 1990; BrSker and Neuhaus, 1997), to the Slash feature of GPSG and HPSG, to the functional uncer- tainty of LFG, and to the Move-a of GB theory. 3 Projective Dependency Grammars Revisited We (informally) define a projective Dependency Grammar as a string-rewriting system 3 by giv- ing a set of categories such as N, V and Adv, 4 a set of distinguished start categories (the root categories of well-formed trees), a mapping from strings to categories, and two types of rules: de- pendency rules which state hierarchical order (dominance) and LP rules which state linear order. The dependency rules are further sub- divided into subcategorization rules (or s-rules) and modification rules (or m-rules). Here are some sample s-rules: dl : Vtrans ) gnom, Nobj, (2) d2 : Yclause ~ gnom, Y Here is a sample m-rule. (3) d3 : V ~ Adv (4) LP rules are represented as regular expressions (actually, only a limited form of regular expres- sions) associated with each category. We use the hash sign (#) to denote the position of the governor (head). For example: pl:Yt .... = (Adv)Nnom(Aux)Adv*#YobjAdv*Yt .... (5) 3We follow (Gaifman, 1965) throughout this paper by modeling a dependency grammar with a string-rewriting system. However, we will identify a derivation with its representation as a tree, and we will sometimes refer to symbols introduced in a rewrite step as "dependent nodes". For a model of a DG based on tree-rewriting (in the spirit of Tree Adjoining Grammar (Joshi et al., 1975)), see (Nasr, 1995). 4In this paper, we will allow finite feature structures on categories, which we will notate using subscripts; e.g., Vtrans. Since the feature structures are finite, this is sim- ply a notational variant of a system defined only with simple category labels. ~clause Adv Nnom thought Vtrans yesterday Fernando thought Vtrans ==~ yesterday Fernando thought Nnom eats Nob j A dv yesterday Fernando thought Carlos eats beans slowly Vclause Adv Nnom thought Vtrans yesterday Fernando Nnom eats Nobj Adv I f J Carlos beans slowly Figure 3: A sample GDG derivation We will call this system generative depen- dency grammar or GDG for short. Derivations in GDG are defined as follows. In a rewrite step, we choose a multiset of de- pendency rules (i.e., a set of instances of de- pendency rules) which contains exactly one s- rule and zero or more m-rules. The left-hand side nonterminal is the same as that we want to rewrite. Call this multiset the rewrite-multiset. In the rewriting operation, we introduce a mul- tiset of new nonterminals and exactly one termi- nal symbol (the head). 
The rewriting operation then must meet the following three conditions:

• There is a bijection between the set of dependents of the instances of rules in the rewrite-multiset and the set of newly introduced dependents.
• The order of the newly introduced dependents is consistent with the LP rule associated with the governor.
• The introduced terminal string (head) is mapped to the rewritten category.

As an example, consider a grammar containing the three dependency rules d1 (rule 2), d2 (rule 3), and d3 (rule 4), as well as the LP rule p1 (rule 5). In addition, we have some lexical mappings (they are obvious from the example), and the start symbol is Vfinite:+. A sample derivation is shown in Figure 3, with the sentential form representation on top and the corresponding tree representation below.

Using this kind of representation, we can derive a bottom-up parser in the following straightforward manner.⁵ Since syntactic and linear governors coincide, we can derive deterministic finite-state machines which capture both the dependency and the LP rules for a given governor category. We will refer to these FSMs as rule-FSMs and, if the governor is of category C, we will refer to a C-rule-FSM. In a rule-FSM, the transitions are labeled by categories, and the transition corresponding to the governor is labeled by its category and a special mark (such as #). This transition is called the "head transition".

The entries in the parse matrix M are of the form (m, q), where m is a rule-FSM and q a state of it, except for the entries in squares M(i, i), 1 ≤ i ≤ n, which also contain category labels. Let w0 ... wn be the input string. We initialize the parse matrix as follows. Let C be a category of word wi. First, we add C to M(i, i). Then, we add to M(i, i) every pair (m, q) such that m is a rule-FSM with a transition labeled C from a start state and q is the state reached after that transition.⁶

Embedded in the usual three loops on i, j, k, we add an entry (m1, q) to M(i, j) if (m1, q1) is in M(k, j), (m2, q2) is in M(i, k+1), q2 is a final state of m2, m2 is a C-rule-FSM, and m1 transitions from q1 to q on C (a non-head transition). There is a special case for the head transitions in m1: if k = i − 1, C is in M(i, i), m1 is a C-rule-FSM, and there is a head transition from q1 to q in m1, then we add (m1, q) to M(i, j).

The time complexity of the algorithm is O(n³ G Qmax), where G is the number of rule-FSMs derived from the dependency and LP rules in the grammar and Qmax is the maximum number of states in any of the rule-FSMs.

⁵This type of parser has been proposed previously. See for example (Lombardi, 1996; Eisner, 1996), who also discuss Earley-style parsers for projective dependency grammars.

⁶We can use pre-computed top-down prediction to limit the number of pairs added.

4 A Formalization of PP-Dependency Grammars

Recall that in a pseudo-projective tree, we make a distinction between a syntactic governor and a linear governor. A node can be "lifted" along a lifting path from being a dependent of its syntactic governor to being a dependent of its linear governor, which must be an ancestor of the syntactic governor. In defining a formal rewriting system for pseudo-projective trees, we will not attempt to model the "lifting" as a transformational step in the derivation. Rather, we will directly derive the "lifted" version of the tree, where a node is a dependent of its linear governor. Thus, the derived structure resembles more a unistratal dependency representation like those used by (Hudson, 1990) than the multistratal representations of, for example, (Mel'čuk, 1988). However, from a formal point of view, the distinction is not significant.

In order to capture pseudo-projectivity, we will interpret rules of the form (2) (for subcategorization of arguments by a head) and (4) (for selection of a head by an adjunct) as introducing syntactic dependents which may lift to a higher linear governor. An LP rule of the form (5) orders all linear dependents of the linear governor, no matter whose syntactic dependents they are. In addition, we need a third type of rule, namely a lifting rule, or l-rule (see Section 2.3). The l-rule (1) can be rewritten in the following form:

l1: LG → LD {LG . w SG LD}    (6)

This rule resembles normal dependency rules, but instead of introducing syntactic dependents of a category, it introduces a lifted dependent. Besides introducing a linear dependent LD, an l-rule should make sure that the syntactic governor of LD will be introduced at a later stage of the derivation, and prevent it from introducing LD as its syntactic dependent; otherwise, non-projective nodes would be introduced twice, a first time by their linear governor and a second time by their syntactic governor. This condition is represented in the rule by means of a constraint on the categories found along the lifting path. This condition, which we call the lifting condition, is represented by the regular expression LG w SG. The regular expression representing the lifting condition is enriched with a dot separating, on its left, the part of the lifting path which has already been introduced during the rewriting and, on its right, the part which is still to be introduced for the rewriting to be valid. The dot is an imperfect way of representing the current state in a finite-state automaton equivalent to the regular expression. We can further notice that the lifting condition ends with a repetition of LD, for reasons which will be made clear when discussing the rewriting process.
The result of the rewriting then must meet the follow- ing conditions: 1. The order of the newly introduced de- pendents is consistent with the LP rule associated with C. 2. The union 7 of the lift multisets asso- ciated with all the newly introduced (instances of) categories is equal to the union of the lift multiset of C and the multiset composed of the lift condition 7When discussing set operations on multisets, we of course mean the corresponding multiset operations. of the 1-rules used in the rewriting op- eration. 3. The lifting conditions contained in the lift multiset of all the newly introduced dependents D should be compatible with D, with the dot advanced appro- priately. In addition, we require that, when we rewrite a category as a terminal, the lift multiset is empty. Let us consider an example. Suppose we have have a grammar containing the dependency rules dl (rule 2), d2 (rule 3), and d3 (rule 4); the LP rule Pl (rule 5) and p2: p2:Vclause : (Ntop: + INwh:+)(Adv)Nnom(Aux)Adv* #Adv* Vt .... Furthermore, we have the following 1-rule: II :Vbridge:+---~Nc ..... bj top:+ {'V~ridge:+VNc ..... bj top:+ } This rule says that an objective wh-noun with feature top:+ which depends on a verb with no further restrictions (the third V in the lifting path) can raise to any verb that dominates its immediate governor as long as the raising paths contains only verb with feature bridge:+, i.e., bridge verbs. Vclause Nobj Nnom thought Adv Y{'Y~ridge: + Y Ncase:obj top:+} beans Fernando thought yesterday V{.V~ridge: + V Nc .... bj top:+} beans Fernando thought yesterday Nnom claims V{.V~ridge: + V Nc .... bj top:+} =~ beans Fernando thought yesterday Milagro claims V{-V~ridge: + Y Nc ..... bj top:+} beans yesterday Fernando thought yesterday Milagro claims Nnom eats N { Y~ridge:+ V Ycase:obj top:+'} Adv :=~ beans Fernando thought yesterday Milagro claims Carlos eats slowly Vcl~us¢ N ~ a u * e beans Fernando yester Nno m claims Vtrans Milagro Nnom eats Adv I I Carlos slowly Figure 4: A sample PP-GDG derivation A sample derivation is shown in Figure 4, with the sentential form representation on top 650 and the corresponding tree representation be- low. We start our derivation with the start symbol Vclause and rewrite it using dependency rules d2 and d3, and the lifting rule ll which introduces an objective NP argument. The lift- ing condition of I1 is passed to the V dependent but the dot remains at the left of V'bridge:. {. be- cause of the Kleene star. When we rewrite the embedded V, we choose to rewrite again with Yclause , and the lifting condition is passed on to the next verb. This verb is a Ytrans which re- quires a Yobj. The lifting condition is passed to Nob j and the dot is moved to the right of the regular expression, therefore Nob j is rewritten as the empty string. 5 A Polynomial Parser for PP-GDG In this section, we show that pseudo-projective dependency grammars as defined in Section 2.3 are polynomially parsable. We can extend the bottom-up parser for GDG to a parser for PP-GDG in the following man- ner. In PP-GDG, syntactic and linear governors do not necessarily coincide, and we must keep track separately of linear precedence and of lift- ing (i.e., "long distance" syntactic dependence). The entries in the parse matrix M are of the form (m,q, LM), where m is a rule-FSM, q a state of m, and LM is a multiset of lift- ing conditions as defined in Section 4. 
An entry (m, q, LM) in a square M(i, j) of the parse ma- trix means that the sub-word wi...wj of the entry can be analyzed by m up to state q (i.e., it matches the beginning of an LP rule), but that nodes corresponding to the lifting rules in LM are being lifted from the subtrees span- ning wi...wj. Put differently, in this bottom- up view LM represents the set of nodes which have a syntactic governor in the subtree span- ning wi...wj and a lifting rule, but are still looking for a linear governor. Suppose we have an entry in the parse matrix M of the form (m, q, L). As we traverse the C- rule-FSM m, we recognize one by one the linear dependents of a node of category C. Call this governor ~?. The action of adding a new entry to the parse matrix corresponds to adding a single new linear dependent to 77. (While we are work- ing on the C-rule-FSM m and are not yet in a final state, we have not yet recognized ~? itself.) Each new dependent ~?' brings with it a multiset 651 of nodes being lifted from the subtree it is the root of. Call this multiset LM'. The new entry will be (m, q', LM U LM') (where q' is the state ! , that m transitions to when ~? is recognized as the next linear dependent. When we have reached a final state q of the rule-FSM m, we have recognized a complete subtree rooted in the new governor, ~?. Some of the dependent nodes of ~? will be both syn- tactic and linear dependents of ~?, and the others will be linear dependents of ~?, but lifted from a descendent of 7. In addition, 77 may have syn- tactic dependents which are not realized as its own linear dependent and are lifted away. (No other options are possible.) Therefore, when we have reached the final state of a rule-FSM, we must connect up all nodes and lifting conditions before we can proceed to put an entry (m, q, L) in the parse matrix. This involves these steps: 1. For every lifting condition in LM, we en- sure that it is compatible with the category of ~?. This is done by moving the dot left- wards in accordance with the category of 77. (The dot is moved leftwards since we are doing bottom-up recognition.) The obvious special provisions deal with the Kleene star and optional elements. If the category matches a catgeory with Kleene start in the lifting condition, we do not move the dot. If the category matches a category which is to the left of an op- tional category, or to the left of category with Kleene star, then we can move the dot to the left of that category. If the dot cannot be placed in accordance with the category of 77, then no new entry is made in the parse matrix for ~?. 2. We then choose a multiset of s-, m-, and 1- rules whose left-hand side is the category of ~?. For every dependent of 77 introduced by an 1-rule, the dependent must be compati- ble with an instance of a lifting condition in LM (whose dot must be at its beginning, or seperated from the beginning by optional or categories only); the lifting condition is then removed from L. 3. If, after the above repositioning of the dot and the linking up of all linear dependents to lifting conditions, there are still lifting . conditions in LM such that the dot is at the beginning of the lifting condition, then no new entry is made in the parse matrix for ~?. For every syntactic dependent of ?, we de- termine if it is a linear dependent of ~ which has not yet been identified as lifted. For each syntactic dependents which is not also a linear dependent, we check whether there is an applicable lifting rule. 
If not, no entry is made in the parse matrix for 77. If yes, we add the lifting rule to LM. This procedure determines a new multiset LM so we can add entry (m, q, LM) in the parse matrix. (In fact, it may determine several pos- sible new multisets, resulting in multiple new entries.) The parse is complete if there is an entry (m, qrn, O) in square M(n, 1) of the parse matrix, where m is a C-rule-FSM for a start category and qm is a final state of m. If we keep backpointers at each step in the algorithm, we have a compact representation of the parse for- est. The maximum number of entries in each square of the parse matrix is O(GQnL), where G is the number of rule-FSMs corresponding to LP rules in the grammar, Q is the maximum number of states in any of the rule-FSMs, and L is the maximum number of states that the lifting rules can be in (i.e., the number of lift- ing conditions in the grammar multiplied by the maximum number of dot positions of any lifting condition). Note that the exponent is a gram- mar constant, but this number can be rather small since the lifting rules are not lexicalized - they are construction-specific, not lexeme- specific. The time complexity of the algorithm is therefore O(GQn3+21L[). References Norbert BrSker and Peter Neuhaus. 1997. The complexity of recognition of linguistically ad- equate dependency grammars. In 35th Meet- ing of the Association for Computational Lin- guistics (ACL'97), Madrid, Spain. ACL. M. Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Associa- tion for Computational Linguistics, Madrid, Spain, July. 652 Jason M. Eisner. 1996. Three new probabilis- tic models for dependency parsing: An ex- ploration. In Proceedings of the 16th Inter- national Conference on Computational Lin- guistics (COLING'96), Copenhagen. Haim Galfman. 1965. Dependency systems and phrase-structure systems. Information and Control, 8:304-337. Richard Hudson. 1990. English Word Gram- mar. Basil Blackwell, Oxford, RU. Richard Hudson. unpublished. Discontinuity. e-preprint (ftp.phon.ucl.ac.uk). Aravind K. Joshi, Leon Levy, and M Takahashi. 1975. Tree adjunct grammars. J. Comput. Syst. Sci., 10:136-163. Jiirgen Kunze. 1968. The treatment of non- projective structures in the syntactic analysis and synthesis of english and german. Com- putational Linguistics, 7:67-77. Yves Lecerf. 1960. Programme des conflits, module des conflits. Bulletin bimestriel de I'ATALA, 4,5. Vicenzo Lombardi. 1996. An Earley-style parser for dependency grammars. In Pro- ceedings of the 16th International Conference on Computational Linguistics (COLING'96), Copenhagen. Solomon Marcus. 1965. Sur la notion de projec- tivit6. Zeitschr. f. math. Logik und Grundla- gen d. Math., 11:181-192. Igor A. Mel'6uk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press, New York. Alexis Nasr. 1995. A formalism and a parser for lexicalised dependency grammars. In 4th In- ternational Workshop on Parsing Technolo- gies, pages 186-195, Prague. Alexis Nasr. 1996. Un syst~me de reformu- lation automatique de phrases fondd sur la Thdorie Sens-Texte : application aux langues contr61des. Ph.D. thesis, Universit6 Paris 7. Michael Reape. 1990. Getting things in order. In Proceedings of the Symposium on Discon- tinuous Constituents, Tilburg, Holland. Jane J. Robinson. 1970. Dependency struc- tures and transformational rules. Language, 46(2):259-285.
A Method for Correcting Errors in Speech Recognition Using the Statistical Features of Character Co-occurrence

Satoshi Kaki, Eiichiro Sumita, and Hitoshi Iida
ATR Interpreting Telecommunications Research Labs
Hikaridai 2-2, Seika-cho, Soraku-gun, Kyoto 619-0288, Japan
{skaki, sumita, iida}@itl.atr.co.jp

Abstract

It is important to correct the errors in the results of speech recognition to increase the performance of a speech translation system. This paper proposes a method for correcting errors using the statistical features of character co-occurrence, and evaluates the method. The proposed method comprises two successive correcting processes. The first process uses pairs of strings: the first string is an erroneous substring of the utterance predicted by speech recognition, and the second string is the corresponding section of the actual utterance. Errors are detected and corrected according to a database learned from erroneous-correct utterance pairs. The remaining errors are passed to the posterior process, which uses a string in the corpus that is similar to the string including the recognition errors. The results of our evaluation show that the use of our proposed method as a post-processor for speech recognition is likely to make a significant contribution to the performance of speech translation systems.

1 Introduction

In spite of the increased performance of speech recognition systems, the output still contains many errors. For language processing such as machine translation, it is extremely difficult to deal with such errors. In integrating recognition and translation into a speech translation system, the development of the following processes is therefore important: (1) detection of errors in speech recognition results; (2) sorting of speech recognition results by means of error detection; (3) providing feedback to the recognition process and/or making the user speak again; (4) correcting errors; etc.

For this purpose, a number of methods have been proposed. One method is to translate correct parts extracted from speech recognition results by using the semantic distance between words calculated with an example-based approach (Wakita et al., 97). Another method obtains reliably recognized partial segments of an utterance by cooperatively using both grammatical and n-gram based statistical language constraints, and uses a robust parsing technique to apply the grammatical constraints described by a context-free grammar (Tsukada et al., 97). However, these methods do not carry out any error correction on a recognition result, but only specify correct parts in it. In this paper we therefore propose a method for correcting errors, which is characterized by learning the trend of errors and expressions, and by processing arbitrary-length strings. Similar work on English was presented by (E. K. Ringger et al., 96): using a noisy-channel model, they implemented a post-processor to correct word-level errors committed by a speech recognizer.

2 Method for Correcting Errors

We refer to the two components of the proposal as Error-Pattern-Correction (EPC) and Similar-String-Correction (SSC), respectively. The correction using EPC and SSC together, in this order, is abbreviated to EPC+SSC.

2.1 Error-Pattern-Correction (EPC)

When examining errors in speech recognition, errors are found to occur in regular patterns rather than at random. EPC uses such error patterns for correction. We refer to such a pattern as an Error-Pattern. An Error-Pattern is made up of two strings: one is the string including errors, and the other is the corresponding correct string (the former string is referred to as the Error-Part, and the latter as the Correct-Part, respectively).

[Figure 2-1. The block diagram for EPC: matching of an Error-Pattern against the input, then substitution of the Correct-Part for the Error-Part, driven by an Error-Pattern-Database of pairs of Error-Parts and Correct-Parts.]
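The pipeline of Figure 2-1 amounts to dictionary-free pattern substitution. A minimal sketch is given below; it is illustrative only (a production version would match all Error-Parts in a single pass, e.g. with an Aho-Corasick automaton):

```python
def epc_correct(text, error_pattern_db):
    """Error-Pattern-Correction: substitute the Correct-Part for each
    Error-Part occurring in `text`.  `error_pattern_db` maps
    Error-Part -> Correct-Part; longer Error-Parts are applied first,
    in the spirit of the Inclusion conditions described below."""
    for error_part in sorted(error_pattern_db, key=len, reverse=True):
        text = text.replace(error_part, error_pattern_db[error_part])
    return text
```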
One is the Ma chiog I [Sobsti ting E.or- Corre - ]pa ofE.or /I for Pattern l[ Error-Part ~pa rror-Pattern-Databa~-~ irs of Error- and Correct-~J Figure 2-1 The block diagram for EPC 653 string including errors, and the other is the corresponding correct string (the former string is referred to as the Error- Part, and the latter as the Correct-Part respectively). These parts are extracted from the speech recognition results and the corresponding actual utterances, then they are stored in a database (referred to as an Error-Pattern-Database). In EPC, the correction is made by substituting a Correct-Part for an Error-Part when the Error-Part is detected in a recognition result (see Figure 2-1). Table 2-1 shows some Error-Pattern examples. Table 2-1 Examples of Error-Patterns Correct-Part Error-Part 2.1.1 Extraction of Error-Patterns The Error-Pattern-Database is mechanically prepared using a pair of parts from the speech recognition results and the corresponding actual utterance. The examples below show candidates grouped according to the correct part '<~>' and the erroneous part '< ~ ~1. Error-Pattern Candidates Frq. <N> : <t.¢> 3 ~<N> : !~<t.~> 3 ~<N> : ~[.~</'.c> 3 EPC is a simple and effective method because it detects and corrects errors only by pattern-matching. The unrestricted use of Error-Patterns, however, may produce the wrong correction. Therefore a careful selection of Error-Patterns is necessary. In this method, several selection conditions are applied in order, as described below. Candidates passing all of the conditions are employed as Error-Patterns. Condition of High Frequency: Candidates of not less than a given threshold value (2 in the experiment) in frequency are selected to collect errors which have a high frequency of occurrence in recognition results. Condition of Non-Side Effect:, This step excludes the candidate whose Error-Part is included in actual utterances to prevent the Error-Part from matching with a section of actual utterances. Condition of Inclusion-l: Because a long Error-Part is more accurate for matching, this step selects an Error- Pattern whose Error-Part is as long as possible. For two arbitrary candidates, when one of their Error-Parts includes the other, and their frequencies are the same value, the candidate whose Error-Part includes the other is accepted. Condition of Inclusion-2: If some Error-Parts are derived from different utterances and have a common part in them, this common part is suitable for an Error-Pattern. Therefore in this step, an Error-Pattem with its Error-Part as short as possible is selected. For two arbitrary candidates, when one of their Error-Parts includes the other, and their frequencies have different values, the included candidate is accepted. 2.2 Similar-String-Correction (SSC) In an erroneous Japanese sentence, the correct expressions can be estimated frequently by the row of characters before and after the erroneous sections of the sentence. This means that we are involuntarily applying a portion of a regular expression to an erroneous section. Instead of this portion of the regular expression, SSC uses a collection of strings, the members of which are in the corpus (this collection we refer to as the String-Database). As shown in the block diagram in figure 2-2, the correction is performed through the following steps, the first step is error detection. 
The next step is the retrieval of the string that is most I Input String Error Detection Retrieval of Similar String Substitution of Dissimilar Part I Corrected String Figure 2-2 The block diagram of SSC 654 similar to the string including errors from the String- Database (the former string is referred to as the Similar-String, and the latter as the Error-String). Finally, the correction is made using the difference between these two strings. 2.2.1 Procedure for Correction The procedure for correction varies slightly, depending on the position of the detected error: a top, a middle, or a tail, in an utterance. Here we will explain the case of a middle. Step 1: Estimate an erroneous section (referred to as an error-block) with error detection method'. If there is no error-block, the procedure is terminated. Depending on the position of the error-block, the procedure branches in the following way. If P1 is less than T (T=4), then go to the step for a top. If a value L - P2 + T is less than T, then go to the step for a tail. In all other cases, go to the step for a middle. Here, P1 and P2 denote the start and end positions of an error-block, and L denotes the length of the input string. Step 2: Take the string (Error-String) that comprises an error-block and each M (5 in the experiment) character before and after the error-block out of the input string, and using this string (Error-String) as a query key, retrieve a string (Similar-String) from the String-Database to satisfy the following condition. It must be located in a middle of an utterance, it must have the highest value (S), and S must be not less than a given threshold value ( 0.6 in the experiment). Here, S is defined as: S=(L-N)/L where L is the len~uh of the Similar String, and N is the minimum number of character insertions, deletions, or substitutions necessary to transform the Error-String to the Similar-String. If there is no Similar-String, then go to step 1 leaving this error-block undone. Step 3: If the two strings (denoted A and B), that are each K (2 in the experiment) characters before and after an error-block in the Error-String, am found in the Similar- String, take out the string (denoted C) between A and B in 1 For detecting errors in Japanese sentences, the method using the probability of character sequence was reported to be fairly effective (Araki et al., 93). The result of a preliminary experiment was that the precision and recall rates were over 80% and over 70% respectively. <error-block> Error-String: ['~@] {~<:fi~A. ~>t;l:l [ffJ'~] [A] A''/~ ~Substituti°n ~ ~ [B] Similar-String: [~'9"-] {~A.~r~l;~t [ffJ'~]~J~'~ Ict ~" _h)'ffure 2-3 The procedure o£ SSC the Similar-String. ff k is not found, then go to Step 1 leaving this error-block undone. Substitute string C as the correct string for the string between A and B in the Error-String (see figure 2-3). 3. Evaluation 3.1 Data Condition for Experiments Results of Speech Recognition: We used 4806 recognition results including errors, from the output of speech recognition (Masataki et al., 96; Shimizu et al., 96) experiment using an ATR spoken language database (Morimoto et al., 94) on travel arrangements. The characteristics of those results are shown in table 3-1. The breakdown of these 4806 results is as follows: 4321 results were used for the preparation of Error- Patterns and the other 495 results were used for the evaluation. 
Table 3-1 The recognition characteristics Recognition accuracy(%) Insertion Deletion Substitution Sum (in character) 74.73 2642 1702 8087 12431 Preparation of Error-Patterns: As the threshold value for the frequency of the occurrence, we employed a value of not less than 2, therefore we obtained 629 Error-Pattems using the 4321 results of speech recognition. Preparation of the String-Database: Using the different data-sets of the ATR spoken language database from the above-mentioned 4806 results, we prepared the String- Database. We employed 3 as the threshold value for the frequency of the occurrence, and 10 as the length of a string, therefore obtaining 16655 strings. 3.2 Two Factors for Evaluation We evaluated the following two factors before and after correction: (1) the counting of errors, and (2) the effectiveness of the method in understanding the recognized results. 655 To confirm the effectiveness, the recognition results were evaluated by two native Japanese. They assigned one of five levels, A-E, to each recognition result before and after correction, by comparing it with the corresponding actual utterance. Finally, we employed the overall results of the stricter of two evaluators. (A) No lacking in the meaning of the actual utterance, and with perfect expression. (B) No lacking in meaning, but with slightly awkward expression. (C) Slightly lacking in meaning. (D) Considerably lacking in meaning. (E) Unable to understand, and unable to imagine the actual utterance. 4. Results and Discussions 4.1 Decrease in the Number of Errors Table 4-1 shows the number of errors before and after correction. These results show the following. Table 4-1 The number of errors before and after correction Insertion Deletion Substitution Sum Before 264 206 891 1361 EPC 226(-14.4) 190(-7.8) 853(-4.3) 1269(-6.8) SSC 251(-4.9) 214(+3.9) 870(-2.4) 1335(-1.9) EPC+SSC 216(-18.2) 198(-3.9) 831 (-7.9) 1245(-8.5) The values inside brackets 0 are the rate of decrease In EPC+SSC, the rate of decrease was 8.5%, and the decrease was obtained in all type of errors. In SSC, the number of deletion errors increased by 3.9%. The reason for this is that in SSC, correction by deleting the part of a substitution error frequently caused new deletion errors as shown in the example below. From the standpoint of the correction it might be a mistaken correction, but it increases understanding of the results by deleting a noise and makes the results viable for machine translation. It therefore practically refines the speech recognition results. Correct String: '~:t~ ~ 5 ~%~ ~'¢,V,,~ ~-)~,~/19~'~,='~°~ ~'¢ ' "Hai arigatou gozaimasu Kyoto Kanko Hoteru yoyaku gakari de gozaimasu", ('l'hank you for calling Kyoto Kanko Hotel reservations.) Input String: -¢, "A hai arigatou gozaimasu e Kyoto Kanko Hoteru yanichikan gozaimasu", (Thank you for calling Kyoto Kanko Hotel ....... ) Corrected String: "A hai arigatou gozaimasu e Kyoto Kanko Hoteru de gozaimasu", (Thank you for calling Kyoto Kanko Hotel.) 656 4.2 Improvement of Understandability Table 4-2 shows the number of change in the evaluated level. The rate of improvement after correction was 7%. There were also a lot of cases that improved their level by recovering content words. For example, the word "cash" was recovered in '~,~ ~, "~'--~,@, "~" (before-'after), "guide" in '~i]X-J --~ ~-"~', etc. These results confirm that our method is effective in improving the understanding of the recognition results. On the other hand, there were four level-down cases. 
Three of these cases were caused by the misdetection of errors in the SSC procedure. The remaining case occurred in the EPC procedure. The Error-Pattern used in this case could not be excluded by the condition of non-side effects because its Error- Part was not included in the corpus of the actual utterance. Table 4-2 The number of changes in the evaluated level before and aJier correction. EPC SSC EPC+SSC Improve 18(3.7) 15(3.1) 34(7.0) No Change 466( 96.1 ) 467(96.3) 447(92.2) Down 1(0.2) 3(0.6) 4(0.8) The values inside brackets 0 are the rate (%) of the number to total number of evaluated results. 4.3 More Applicable for a Result Having a Few Errors Table 4-3 shows the rate of change in the evaluated level by the original number of erroneous characters 2 Table 4-3 The rate of change in the evaluated level by the original number of erroneous characters involved in the reco Num. of erroneous characters nition results (EPC+SSC). Num. of Rate(%) of change No results Improve Change Down 0 102 0.0 98.0 2.0 1 30 16.7 80.0 3.3 2 21 28.6 66.7 4.8 3 26 19.2 80.8 0.0 4 40 12.5 87.5 0.0 5 27 14.8 85.2 0.0 6 24 12.5 87.5 0.0 7 21 9.5 90.5 0.0 8 17 0.0 100.0 0.0 9 20 5.0 95.0 0.0 10 29 0.0 100.0 0.0 11 22 0.0 100.0 0.0 12 > 106 2.8 97.2 0.0 Total 485 7.0 92.2 0.8 This number is the minimum number of character insertions, deletions or substitutions necessary to transform the result of recognition into a corresponding actual utterance. included in the recognition results. The recognition results improving their level after cone~tion mosdy fell in the range of erroneous numbers by not more than 7. The reasons for this are that with there being many errors, the failure of the corrections increases because the corrections are prevented by other surrounding errors. In addition, when only a few successful corrections have been made, they have little influence on the overall understanding. These results show that the proposed method is more applicable for a recognition result having a few errors, as compared with one having many errors. 5 Conclusion As described above, our proposed method has the following features: (1) Since the proposed method is designed with a arbitrary length string as a unit, it is capable of correcting errors which are hard to deal with by methods designed to treat words as units. For example, the insertion error '~" ("wo") in the string '3~f.~L ~,~ Jj"(~ ' ("shiharai wo houhou'~ shown in table 2- 1 cannot be corrected by a method designed to treat words as units, because of the existence of the particle' ~' ("wo") as a correct word. However with the proposed method, it is possible to correct this kind of error by using the row of characters before and after '~' ("wo"). (2) In the proposed method of learning the trend of errors and expressions with long strings, it is possible to correct errors where it is difficult to narrow the candidates down to the correct character with the probability of the character sequence alone. When considering the candidate for "(" ("te") in' l.,U. "( ~ ~. ~ ©U." ("shitetekimasunode '~) shown in table 2-1 to satisfy the probability of the character sequence, its candidates, '4 ~' ("/"), '}3' Co"), 'I~' ("itada'~ are arranged in order of increasing probability. It is therefore difficult to narrow the candidates into the correct character 'I~' ("itada") by the probability of character sequence alone. But with the proposed method it is possible to correct this kind of error by using the row of the characters before and after "(" Cte"). 
(3) Both the Error-Pattem-Database and String-Database can be mechanically prepared, which reduces the effort required to prepare the databases and makes it possible to apply this method to a new recognition system in a short time. From the evaluation, it became clear that the proposed method has the following effects: (1) It reduces over 8% of the errors. (2) It improves the understanding of the recognition results by7%. (3) It has very little influence on correct recognition results. (4) It is more applicable for a recognition result with a few errors than one with many errors. Judging from these results and features, the use of the proposed method as a post-processor for speech recognition is likely to make a significant contribution to the performance of speech translation systems. In the future, we will try to improve the correcting accuracy by changing algorithms and will also try to improve translation performance by combining our method with Wakita's method. References T. Araki et al., 93. A Method for Detecting and Correcting of Characters Wrongly Substituted, Deleted or Inserted in Japanese Strings Using 2nd-Order Markov Model IPSJ, Report of SIG-NL, 97-5, pp. 29-35 (1993) T. Morimoto et al., 94: A Speech and language database for speech translation research. Proc. of ICSLP 94, pp. 1791- 1794, 1994. H. Masataki et al., 96. Variable-order n-gram generation by word-class splitting and consecutive word grouping. In Proc. of ICASSP, 1996. T. Shimizu et al., 96. Spontaneous Dialogue Speech Recognition using Cross-word Context Constrained Word Graphs. ICASSP 96, pp. 145-148, 1996. Y. Wakita et al., 97. Correct parts extraction from speech recognition results using semantic distance calculation, and its application to speech translation. ACI.JF_.ACL Workshop Spoken Language Translation, pp. 24-31, 1997-7. H. Tsukada et al., 97. Integration of grammar and statistical language constraints for partial word-sequence recognition. In Proc. of 5th European Conference on Speech Communication and Technology (EuroSpeech 97), 1997. E.K.Ringger et al., 96. A Fertility Channel Model for Post- Correction of Continuous Speech Recognition. ICSLP96, pp. 897-900, 1996. 657
Use of Mutual Information Based Character Clusters in Dictionary-less Morphological Analysis of Japanese

Hideki Kashioka, Yasuhiro Kawata, Yumiko Kinjo, Andrew Finch and Ezra W. Black
{kashioka, ykawata, kinjo, finch, black}@itl.atr.co.jp
ATR Interpreting Telecommunications Research Laboratories

Abstract

For languages whose character set is very large and whose orthography does not require spacing between words, such as Japanese, tokenizing and part-of-speech tagging are often the difficult parts of any morphological analysis. For practical systems to tackle this problem, uncontrolled heuristics are primarily used. The use of information on character sorts, however, mitigates this difficulty. This paper presents our method of incorporating character clustering based on mutual information into Decision-Tree Dictionary-less morphological analysis. By using natural classes, we have confirmed that our morphological analyzer has been significantly improved in both tokenizing and tagging Japanese text.

1 Introduction

Recent papers have reported cases of successful part-of-speech tagging with statistical language modeling techniques (Church 1988; Cutting et al. 1992; Charniak et al. 1993; Brill 1994; Nagata 1994; Yamamoto 1996). Morphological analysis of Japanese, however, is more complex because, unlike European languages, no spaces are inserted between words.¹ In fact, even native Japanese speakers place word boundaries inconsistently. Consequently, individual researchers have been adopting different word boundaries and tag sets based on their own theory-internal justifications.

For a practical system to utilize different word boundaries and tag sets according to the demands of an application, it is necessary to coordinate the dictionary used, the tag sets, and numerous other parameters. Unfortunately, such a task is costly. Furthermore, it is difficult to maintain the accuracy needed to regulate the word boundaries. Also, depending on the purpose, new technical terminology may have to be collected and the dictionary coordinated, but the problem of unknown words would still remain.

The above problems will arise so long as a dictionary continues to play the principal role. In analyzing Japanese, a Decision-Tree approach with no need for a dictionary (Kashioka et al. 1997) has led us to employ, among other parameters, mutual information (MI) bits of individual characters derived from large hierarchically clustered sets of characters in the corpus.

This paper therefore proposes a type of Decision-Tree morphological analysis using the MI of characters but with no need for a dictionary. The paper first describes the use of information on character sorts in morphological analysis of Japanese: how knowing the sort of each character is useful when tokenizing a string of characters into a string of words and when assigning parts-of-speech to them, and our method of clustering characters based on MI bits. It then proposes a type of Decision-Tree analysis in which the notion of MI-based character and word clustering is incorporated. Finally, we move on to an experimental report and discussions.

2 Use of Information on Characters

¹Unlike English, which is basically written in a 26-character alphabet, the domain of possible characters appearing in an average Japanese text is a set involving tens of thousands of characters.
1 a Unlike English being basically written in a 26- character alphabet, the domain of possible characters appearing in an average Japanese text is a set involving tens of thousands of characters, 658 2.1 Character Sort There are three clearly identifiable character sorts in Japanese: 2 Kanji are Chinese characters adopted for historical reasons and deeply rooted in Japanese. Each character carries a seman- tic sense. Hiragana are basic Japanes e phonograms rep- resenting syllables. About fifty of them constitute the syllabary. Katakana are characters corresponding to hi- ragana, but their use is restricted mainly to foreign loan words. Each character sort has a limited number of el- ements, except for Kanji whose exhaustive list is hard to obtain. Identifying each character sort in a sen- tence would help in predicting the word bound- aries and subsequently in assigning the parts-of- speech. For example, between characters of dif- ferent sorts, word boundaries are highly likely. Accordingly, in formalizing heuristics, character sorts must be assumed. 2.2 Character Cluster Apart from the distinctions mentioned above, are there things such as natural classes with re- spect to the distribution of characters in a cer- tain set of sentences (therefore, the classes are empirically learnable)? If there are, how can we obtain such knowledge? It seems that only a certain group of charac- ters tends to occur in a certain restricted con- text. For example, in Japanese, there are many numerical classifier expressions attached imme- diately after numericals. 3 If such is the case, these classifiers can be clustered in terms of their distributions with respect to a presumably natural class called numericals. Supposing one of a certain group of characters often occurs as a neighbor to one of the other groups of char- acters, and supposing characters are clustered and organized in a hierarchical fashion, then it is possible to refer to such groupings by pointing ~Other sorts found in ordinary text are Arabic nu- merics, punctuations, other symbols, etc. 3For example, " 3 ~ (san-satsu)" for bound ob- jects "3 copies of", "2 ~ (ni-mai)" for flat objects "2 pieces~sheets of". out a certain node in the structure. Having a way of organizing classes of characters is clearly an advantage in describing facts in Japanese. The next section presents such a method. 3 Mutual Information-Based Character Clustering One idea is to sort words out in terms of neigh- boring contexts. Accordingly research has been carried out on n-gram models of word cluster- ing (Brown et. al. 1992) to obtain hierarchical clusters of words by classifying words in such a way so as to minimizes the reduction of MI. This idea is general in the clustering of any kind of list of items into hierarchical classes. 4 We therefore have adopted this approach not only to compute word classes but also to com- pute character clusterings in Japanese. The basic algorithm for clustering items based on the amount of MI is as follows: s 1) Assign a singleton class to every item in the set. 2) Choose two appropriate classes to create a new class which subsumes them. 3) Repeat 2) until the additional new items include all of the items in the set. With this method, we conducted an experi- mental clustering over the ATR travel conver- sation corpus. 6 As a result, all of the charac- ters in the corpus were hierarchically clustered according to their distributions. Example: A partial character clustering -+ ......... 
Example: A partial character clustering (each line pairs a single character with the bit string of the cluster node that dominates it):

    •  0000000110111
    •  0000000111000000
    •  00000001110000010
    •  00000001110000011
    •  000000011100001
    •  00000001110001000

Each node represents a subset of all of the different characters found in the training data. We represent tree-structured clusters with bit strings, so that we may specify any node in the structure by using a bit substring.
Numerous significant clusters are found among them.7 They are all natural classes computed based on the events in the training set.

7 For example, katakana, numerical classifiers, numerals, postpositional case particles, and prefixes of demonstrative pronouns.
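The three-step procedure above can be made concrete. The following is a minimal sketch under simplifying assumptions (character bigrams only, naive full recomputation of MI for every candidate merge, and none of the incremental item feeding described in footnote 5); it mirrors the Brown et al. (1992) objective of merging the pair of classes whose union loses the least mutual information, but it is an illustration, not the system's implementation.

    from collections import Counter
    from itertools import combinations
    from math import log2

    def class_mi(bigrams, total, cls):
        # Average MI (in bits) between adjacent class tokens under mapping cls.
        pair, left, right = Counter(), Counter(), Counter()
        for (a, b), n in bigrams.items():
            pair[cls[a], cls[b]] += n
            left[cls[a]] += n
            right[cls[b]] += n
        return sum((n / total) * log2(n * total / (left[ca] * right[cb]))
                   for (ca, cb), n in pair.items())

    def cluster_characters(text):
        # Step 1: assign a singleton class to every character in the set.
        bigrams = Counter(zip(text, text[1:]))
        total = sum(bigrams.values())
        cls = {c: c for c in set(text)}    # character -> class representative
        tree = {c: c for c in set(text)}   # representative -> subtree
        while len(set(cls.values())) > 1:  # Step 3: repeat until one class remains.
            # Step 2: choose the merge that keeps MI highest (least reduction).
            best = None
            for a, b in combinations(sorted(set(cls.values())), 2):
                trial = {k: (a if v == b else v) for k, v in cls.items()}
                mi = class_mi(bigrams, total, trial)
                if best is None or mi > best[0]:
                    best = (mi, a, b)
            _, a, b = best
            tree[a] = (tree[a], tree.pop(b))  # the new class subsumes both
            cls = {k: (a if v == b else v) for k, v in cls.items()}
        (root,) = set(cls.values())
        return tree[root]

    def bit_strings(node, prefix=""):
        # Index every node of the merge tree by a bit string, as in the example.
        yield prefix, node
        if isinstance(node, tuple):
            yield from bit_strings(node[0], prefix + "0")
            yield from bit_strings(node[1], prefix + "1")

Running cluster_characters over a corpus and enumerating bit_strings yields exactly the kind of (bit string, node) index shown in the example above.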
4 Decision-Tree Morphological Analysis
The Decision-Tree model consists of a set of questions structured into a dendrogram, with a probability distribution associated with each leaf of the tree. In general, a decision-tree is a complex of n-ary branching trees in which questions are associated with each parent node, and a choice or class is associated with each child node.8 We represent answers to questions as bits. Among other advantages to using decision-trees, it is important to note that they are able to assign integrated costs for classification by all types of questions at different feature levels, provided each feature has a different cost.

8 The work described here employs only binary decision-trees. Multiple alternative questions are represented as more than two yes/no questions. The main reason for this is computational efficiency: allowing questions to have more answers complicates the decision-tree growth algorithm.

4.1 Model
Let us assume that an input sentence C = c1 c2 ... cn denotes a sequence of n characters that constitute words W = w1 w2 ... wm, where each word wi is assigned a tag ti (T = t1 t2 ... tm).
The morphological analysis task can be formally defined as finding the set of word segmentations and part-of-speech assignments that maximizes the joint probability of the word sequence and tag sequence, P(W,T|C). This joint probability is calculated by the following formulae:

    P(W,T|C) = Π_{i=1..m} P(wi, ti | w1, ..., wi-1, t1, ..., ti-1, C)

    P(wi, ti | w1, ..., wi-1, t1, ..., ti-1, C)
        = P(wi | w1, ..., wi-1, t1, ..., ti-1, C)9
        × P(ti | w1, ..., wi, t1, ..., ti-1, C)10

9 We call this the "Word Model".
10 We call this the "Tagging Model".

The Word Model decision-tree is used as the word tokenizer. While finding word boundaries, we use two different labels: Word+ and Word-. In the training data, we label a complete word string Word+, and every substring of a relevant word Word-, since these substrings are not in fact a word in the current context.11 The probability of a word is estimated from the distributions associated with the leaves of the word decision-tree.
We use the Tagging Model decision-tree as our part-of-speech tagger. For an input sentence C, let us consider the character sequence from c1 to c_{p-1} (assigned w1 w2 ... wk-1), and take the following character sequence from position p to p+l to be the word wk; the word wk is assumed to be assigned the tag tk. We approximate the probability of the word wk being assigned the tag tk as follows:

    P(tk) = P(tk | w1, ..., wk, t1, ..., tk-1, C).

This probability is estimated from the distributions associated with the leaves of the part-of-speech tag decision-tree.

11 For instance, for the word "mo-shi-mo-shi" (hello), "mo-shi-mo-shi" is labeled Word+, and "mo-shi-mo", "mo-shi", "mo" are all labeled Word-. Note that "mo-shi" or "mo-shi-mo" may be real words in other contexts, e.g., "mo-shi/wa-ta-shi/ga ... (If I do ...)".

4.2 Growing Decision-Trees
Growing a decision-tree requires two steps: selecting a question to ask at each node, and determining the probability distribution for each leaf from the distribution of events in the training set. At each node, we choose, from among all possible questions, the question that maximizes the reduction in entropy. The two steps are repeated until the following conditions are no longer satisfied:
• The number of leaf node events exceeds the constant number.
• The reduction in entropy is more than the threshold.
Consequently, the list of questions is optimally structured in such a way that, when the data flows through the decision-tree, the most efficient question is asked at each decision point.
Provided a set of training sentences with word boundaries in which each word is assigned a part-of-speech tag, we have a) the necessary structured character clusters, and b) the necessary structured word clusters;12 both of them are based on the n-gram language model. We also have c) the necessary decision-trees for word-splitting and part-of-speech tagging, each of which contains a set of questions about events. We have considered the following points in making decision-tree questions.
1) MI character bits: We define self-organizing character classes represented by binary trees, each of whose nodes is significant in the n-gram language model. We can ask which node a character is dominated by.
2) MI word bits: Likewise, MI word bits (Brown et al. 1992) are also available, so that we may ask which node a word is dominated by.
3) Questions about the target word: These questions mostly relate to the morphology of a word (e.g., Does it end in '-shi-i' (an adjective ending)? Does it start with 'do-'?).
4) Questions about the context: Many of these questions concern contiguous part-of-speech tags (e.g., Is the previous word an adjective?). However, the questions may concern information at different remote locations in a sentence (e.g., Is the initial word in the sentence a noun?).
These questions can be combined in order to form questions of greater complexity.

12 Here, a word token is based only on a word string, not on a word string tagged with a part-of-speech.

5 Analysis with Decision-Trees
Our proposed morphological analyzer processes each character in a string from left to right. Candidates for a word are examined, and a tag candidate is assigned to each word. When each candidate for a word is checked, it is given a probability by the word model decision-tree. We can either exhaustively enumerate and score all of the cases, or use a stack decoder algorithm (Jelinek 1969; Paul 1991) to search through the most probable candidates.
The fact that we do not use a dictionary13 is one of the great advantages. By using a dictionary, a morphological analyzer has to deal with unknown words and unknown tags,14 and is also fooled by many words sharing common substrings. In practical contexts, the system refers to the dictionary by using heuristic rules to find the more likely word boundaries, e.g., the minimum number of words, or the maximum word length available at the minimum cost. If the system could learn how to find word boundaries without a dictionary, then there would be no need for such an extra device or process.

13 Here, a dictionary is a listing of words attached to part-of-speech tags.
14 Words that are not found in the dictionary, and necessary tags that are not assigned in the dictionary.
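At analysis time, both models reduce to the same operation: walk a binary tree, answer one yes/no question per node, and read a distribution off the leaf. A minimal sketch of that lookup step follows; the Node layout and the example questions are illustrative, not taken from the paper.

    class Node:
        def __init__(self, question=None, yes=None, no=None, dist=None):
            self.question = question  # a predicate over the analysis context
            self.yes, self.no = yes, no
            self.dist = dist          # leaf only: {outcome: probability}

    def leaf_distribution(node, context):
        # Each answer is one bit: follow 'yes' or 'no' until a leaf is reached.
        while node.dist is None:
            node = node.yes if node.question(context) else node.no
        return node.dist

    # Hypothetical questions of the four kinds listed in 4.2:
    #   lambda ctx: ctx.char_bits.startswith("00000001110")  # MI character bits
    #   lambda ctx: ctx.word_bits.startswith("0101")         # MI word bits
    #   lambda ctx: ctx.word.endswith("shi-i")               # target-word morphology
    #   lambda ctx: ctx.prev_tag == "adjective"              # context

A Word Model leaf then supplies the probabilities of Word+ versus Word- for a candidate string, and a Tagging Model leaf supplies a distribution over the tag set.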
6 Experimental Results
We tested our morphological analyzer with two different corpora: a) ATR-travel, which is task-oriented dialogue in a travel context, and b) the EDR Corpus (EDR 1996), which consists of rather general written text.
For each experiment, we used the character clustering based on MI. The questions for the decision-trees were prepared separately, with or without questions concerning the character clusters. Evaluations were made with respect to the original tagged corpora, from which both the training and test sentences were taken. The analyzer was trained on an incrementally enlarged set of training data, using or not using character clustering.15 Table 1 shows the results obtained from the training sets of ATR-travel. The upper figures in each box indicate the results when using the character clusters, and the lower figures without using them. The actual test set of 4,147 sentences (55,544 words) was taken from the same domain.

Table 1: Travel Conversation

    Training           A (%)   B (%)
    1,000  +MIChr      80.67   69.93
           -MIChr      70.03   62.24
    2,000  +MIChr      86.61   76.43
           -MIChr      69.65   63.36
    3,000  +MIChr      88.60   79.33
           -MIChr      71.97   66.47
    4,000  +MIChr      88.26   80.11
           -MIChr      72.55   67.24
    5,000  +MIChr      89.42   81.94
           -MIChr      72.41   67.72

    Training: number of training sentences, with (+MIChr) or without (-MIChr) character clustering
    A: correct words / system output words
    B: correct tags / system output words

The MI-word clusters were constructed according to the domain of the training set. The tag set consisted of 209 part-of-speech tags.16 For the word model decision-tree, three of 69 questions concerned the character clusters, and for the tagging model, three of 63; their presence or absence was the deciding parameter.
The analyzer was also trained on the EDR Corpus. The same character clusters as with the conversational corpus were used. The tag set in this corpus consisted of 15 parts-of-speech. For the word model, 45 questions were prepared, and 18 for the tagging model; just a couple of them involved the character clusters. The results are shown in Table 2.

Table 2: General Written Text

    Training           A (%)   B (%)
    3,000  +MIChr      83.80   78.19
           -MIChr      77.56   72.49
    5,000  +MIChr      85.50   80.42
           -MIChr      78.68   73.84
    7,000  +MIChr      85.97   81.66
           -MIChr      79.32   75.30
    9,000  +MIChr      86.08   81.20
           -MIChr      78.59   74.05
    10,000 +MIChr      86.22   81.39
           -MIChr      78.94   74.41

15 Another 2,231 sentences (28,933 words) in the same domain are used for the smoothing.
16 These include common noun, verb, postposition, auxiliary verb, adjective, adverb, etc. The purpose of this tag set is to perform machine translation from Japanese into English, German and Korean.
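For concreteness, the A and B figures in Tables 1 and 2 can be computed from word spans: a system word counts toward A when its character boundaries match a reference word, and toward B when the tag matches as well. This is our reading of the table captions, not evaluation code from the paper.

    def spans(words):
        # Map a word list onto (start, end) character offsets.
        out, i = [], 0
        for w in words:
            out.append((i, i + len(w)))
            i += len(w)
        return out

    def score(sys_words, sys_tags, ref_words, ref_tags):
        ref_w = set(spans(ref_words))
        ref_wt = set(zip(spans(ref_words), ref_tags))
        sys_w = spans(sys_words)
        a = sum(s in ref_w for s in sys_w) / len(sys_w)                  # column A
        b = sum(p in ref_wt for p in zip(sys_w, sys_tags)) / len(sys_w)  # column B
        return a, b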
7 Conclusion and Discussion
Both results show that the use of character clusters significantly improves both tokenizing and tagging at every stage of training. Considering the results, our model with MI characters is useful for assigning parts of speech as well as for finding word boundaries, and for overcoming the unknown word problem.
The consistent experimental results obtained from training data with different word boundaries and different tag sets in Japanese text suggest that the method is generally applicable to various corpora constructed for different purposes. We believe that, with an appropriate number of adequate questions, the method is transferable to other languages that have word boundaries not indicated in the text.
In conclusion, we should note that our method, which does not require a dictionary, has been significantly improved by the character cluster information provided.
Our plans for further research include investigating the correlation between accuracy and the training data size and the number of questions, as well as exploring methods for factoring information from a "dictionary" into our model. Along these lines, a fruitful approach may be to explore methods of coordinating probabilistic decision-trees to obtain higher accuracy.

References
Brill, E. (1994) "Some Advances in Transformation-Based Part of Speech Tagging," AAAI-94, pp. 722-727.
Brown, P., Della Pietra, V., de Souza, P., Lai, J., and Mercer, R. (1992) "Class-based n-gram models of natural language," Computational Linguistics, Vol. 18, No. 4, pp. 467-479.
Cutting, D., Kupiec, J., Pedersen, J., and Sibun, P. (1992) "A Practical Part-of-Speech Tagger," ANLP-92, pp. 133-140.
Charniak, E., Hendrickson, C., Jacobson, N., and Perkowitz, M. (1993) "Equations for Part-of-Speech Tagging," AAAI-93, pp. 784-789.
Church, K. (1988) "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text," Proceedings of the 2nd Conference on Applied Natural Language Processing, Austin, Texas, USA, pp. 136-143.
EDR (1996) EDR Electronic Dictionary Version 1.5 Technical Guide. EDR TR2-007.
Jelinek, F. (1969) "A fast sequential decoding algorithm using a stack," IBM Journal of Research and Development, Vol. 13, pp. 675-685.
Kashioka, H., Black, E., and Eubank, S. (1997) "Decision-Tree Morphological Analysis without a Dictionary for Japanese," Proceedings of NLPRS 97, pp. 541-544.
Nagata, M. (1994) "A Stochastic Japanese Morphological Analyzer Using a Forward-DP Backward-A* N-Best Search Algorithm," Proceedings of COLING-94, pp. 201-207.
Paul, D. (1991) "Algorithms for an optimal A* search and linearizing the search in the stack decoder," Proceedings, ICASSP 91, pp. 693-696.
Yamamoto, M. (1996) "A Re-estimation Method for Stochastic Language Modeling from Ambiguous Observations," WVLC-4, pp. 155-167.
Know When to Hold 'Em: Shuffling Deterministically in a Parser for Nonconcatenative Grammars* Robert T. Kasper, Mike Calcagno, and Paul C. Davis Department of Linguistics, Ohio State University 222 Oxley Hall 1712 Neil Avenue Columbus, OH 43210 U.S.A. Email: {kasper,calcagno,pcdavis) @ling.ohio-state.edu Abstract Nonconcatenative constraints, such as the shuffle re- lation, are frequently employed in grammatical anal- yses of languages that have more flexible ordering of constituents than English. We show how it is pos- sible to avoid searching the large space of permuta- tions that results from a nondeterministic applica- tion of shuffle constraints. The results of our imple- mentation demonstrate that deterministic applica- tion of shuffle constraints yields a dramatic improve- ment in the overall performance of a head-corner parser for German using an HPSG-style grammar. 1 Introduction Although there has been a considerable amount of research on parsing for constraint-based grammars in the HPSG (Head-driven Phrase Structure Gram- mar) framework, most computational implementa- tions embody the limiting assumption that the con- stituents of phrases are combined only by concate- nation. The few parsing algorithms that have been proposed to handle more flexible linearization con- straints have not yet been applied to nontrivial grammars using nonconcatenative constraints. For example, van Noord (1991; 1994) suggests that the head-corner parsing strategy should be particularly well-suited for parsing with grammars that admit discontinuous constituency, illustrated with what he calls a "tiny" fragment of Dutch, but his more re- cent development of the head-corner parser (van No- ord, 1997) only documents its use with purely con- catenative grammars. The conventional wisdom has been that the large search space resulting from the use of such constraints (e.g., the shuffle relation) makes parsing too inefficient for most practical ap- plications. On the other hand, grammatical anal- yses of languages that have more flexible ordering of constituents than English make frequent use of constraints of this type. For example, in recent work by Dowty (1996), Reape (1996), and Kathol " This research was sponsored in part by National Science Foundation grant SBR-9410532, and in part by a seed grant from the Ohio State University Office of Research; the opin- ions expressed here are solely those of the authors. (1995), in which linear order constraints are taken to apply to domains distinct from the local trees formed by syntactic combination, the nonconcate- native shuffle relation is the basic operation by which these word order domains are formed. Reape and Kathol apply this approach to various flexible word-order constructions in German. A small sampling of other nonconcatenative op- erations that have often been employed in linguistic descriptions includes Bach's (1979) wrapping oper- ations, Pollard's (1984) head-wrapping operations, and Moortgat's (1996) extraction and infixation op- erations in (categorial) type-logical grammar. What is common to the proposals of Dowty, Reape, and Kathol, and to the particular analysis implemented here, is the characterization of nat- ural language syntax in terms of two interrelated but in principle distinct sets of constraints: (a) con- straints on an unordered hierarchical structure, pro- jected from (grammatical-relational or semantic) va- lence properties of lexical items; and (b) constraints on the linear order in which elements appear. 
In this type of framework, constraints on linear order may place conditions on the relative order of constituents that are not siblings in the hierarchical structure. To this end, we follow Reape and Kathol and utilize order domains, which are associated with each node of the hierarchical structure and serve as the domain of application for linearization constraints.
In this paper, we show how it is possible to avoid searching the large space of permutations that results from a nondeterministic application of shuffle constraints. By delaying the application of shuffle constraints until the linear position of each element is known, and by using an efficient encoding of the portions of the input covered by each element of an order domain, shuffle constraints can be applied deterministically. The results of our implementation demonstrate that this optimization of shuffle constraints yields a dramatic improvement in the overall performance of a head-corner parser for German.
The remainder of the paper is organized as follows: §2 introduces the nonconcatenative fragment of German which forms the basis of our study; §3 describes the head-corner parsing algorithm that we use in our implementation; §4 discusses details of the implementation, and the optimization of the shuffle constraint is explained in §5; §6 compares the performance of the optimized and non-optimized parsers.

2 A German Grammar Fragment
The fragment is based on the analysis of German in Kathol's (1995) dissertation. Kathol's approach is a variant of HPSG which merges insights from both Reape's work and from descriptive accounts of German syntax using topological fields (linear position classes). The fragment covers (1) root declarative (verb-second) sentences, (2) polar interrogative (verb-first) clauses, and (3) embedded subordinate (verb-final) clauses, as exemplified in Figure 1.
The linear order of constituents in a clause is represented by an order domain (DOM), which is a list of domain objects whose relative order must satisfy a set of linear precedence (LP) constraints. The order domain for example (1) is shown in (4). Notice that each domain object contains a TOPO attribute, whose value specifies a topological field that partially determines the object's linear position in the list. Kathol defines five topological fields for German clauses: Vorfeld (vf), Comp/Left Sentence Bracket (cf), Mittelfeld (mf), Verb Cluster/Right Sentence Bracket (vc), and Nachfeld (nf). These fields are ordered according to the LP constraints shown in (5).

(1) Seiner Freundin liess er ihn helfen
    his(DAT) friend(FEM) allows he(NOM) him(ACC) help
    'He allows him to help his friend.'

(2) Hilft sie ihr schnell
    help she(NOM) her(DAT) quickly
    'Does she help her quickly?'

(3) Der Vater denkt dass sie ihr seinen Sohn helfen liess
    the(NOM) father thinks that she(NOM) her(DAT) his(ACC) son help allows
    'The father thinks that she allows his son to help her.'

(4) ( [dom_obj PHON seiner Freundin, SYNSEM NP, TOPO vf],
      [dom_obj PHON liess,           SYNSEM V,  TOPO cf],
      [dom_obj PHON er,              SYNSEM NP, TOPO mf],
      [dom_obj PHON ihn,             SYNSEM NP, TOPO mf],
      [dom_obj PHON helfen,          SYNSEM V,  TOPO vc] )

(5) [TOPO vf] < [TOPO cf] < [TOPO mf] < [TOPO vc] < [TOPO nf]

Figure 1: Linear order of German clauses.
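Procedurally, the LP constraints in (5) amount to requiring that the TOPO values of a domain be weakly ascending in the field order. A small sketch of that check (illustrative only; the grammar itself states the constraints declaratively in ConTroll):

    FIELD_ORDER = {"vf": 0, "cf": 1, "mf": 2, "vc": 3, "nf": 4}

    def lp_acceptable(domain):
        # domain: a list of dicts, each with a TOPO attribute as in (4).
        ranks = [FIELD_ORDER[obj["TOPO"]] for obj in domain]
        return all(a <= b for a, b in zip(ranks, ranks[1:]))

    # The order domain in (4) passes the check:
    dom = [{"PHON": "seiner Freundin", "TOPO": "vf"},
           {"PHON": "liess",  "TOPO": "cf"},
           {"PHON": "er",     "TOPO": "mf"},
           {"PHON": "ihn",    "TOPO": "mf"},
           {"PHON": "helfen", "TOPO": "vc"}]
    assert lp_acceptable(dom)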
The hierarchical structure of a sentence, on the other hand, is constrained by a set of immediate dominance (ID) schemata, three of which are included in our fragment: Head-Argument (where "Argument" subsumes complements, subjects, and specifiers), Adjunct-Head, and Marker-Head. The Head-Argument schema is shown below, along with the constraints on the order domain of the mother constituent. In all three schemata, the domain of a non-head daughter is compacted into a single domain object, which is shuffled together with the domain of the head daughter to form the domain of the mother.

(6) Head-Argument Schema (simplified)

      Mother:            [SYNSEM [HEAD [1]], DOM [4]]
      Head daughter:     [SYNSEM [HEAD [1]], DOM [2]]
      Argument daughter: [DOM [3]]

    where shuffle([2], compaction([3]), [4])
      and order_constraints([4])

The hierarchical structure of (1) is shown by the unordered tree of Figure 2, where head daughters appear on the left at each branch.

    S [DOM([seiner Freundin],[liess],[er],[ihn],[helfen])]
      VP [DOM([seiner Freundin],[liess],[ihn],[helfen])]          NP er
        VP [DOM([seiner Freundin],[liess],[helfen])]              NP ihn
          V [DOM([liess],[helfen])]      NP [DOM([seiner],[Freundin])]
            V liess    V helfen            Det seiner    N Freundin

    Figure 2: Hierarchical structure of sentence (1).

Focusing on the NP seiner Freundin in the tree, it is compacted into a single domain object, and must remain so, but its position is not fixed relative to the other arguments of liess (which include the raised arguments of helfen). The shuffle constraint allows this single, compacted domain object to be realized in various permutations with respect to the other arguments, subject to the LP constraints, which are implemented by the order_constraints predicate in (6). Each NP argument may be assigned either vf or mf as its TOPO value, subject to the constraint that root declarative clauses must contain exactly one element in the vf field. In this case, seiner Freundin is assigned vf, while the other NP arguments of liess are in mf. However, the following permutations of (1) are also grammatical, in which er and ihn are assigned to the vf field instead:

(7) a. Er liess ihn seiner Freundin helfen.
    b. Ihn liess er seiner Freundin helfen.

Comparing the hierarchical structure in Figure 2 with the linear order domain in (4), we see that some daughters in the hierarchical structure are realized discontinuously in the order domain for the clause (e.g., the verbal complex liess helfen). In such cases, nonconcatenative constraints, such as shuffle, can provide a more succinct analysis than concatenative rules. This situation is quite common in languages like German and Japanese, where word order is not totally fixed by grammatical relations.

3 Head-Corner Parsing
The grammar described above has a number of properties relevant to the choice of a parsing strategy. First, as in HPSG and other constraint-based grammars, the lexicon is information-rich, and the combinatory or phrase structure rules are highly schematic. We would thus expect a purely top-down algorithm to be inefficient for a grammar of this type, and it may even fail to terminate, for the simple reason that the search space would not be adequately constrained by the highly general combinatory rules.
Second, the grammar is essentially nonconcatenative, i.e., constituents of the grammar may appear discontinuously in the string. This suggests that a strict left-to-right or right-to-left approach may be less efficient than a bidirectional or non-directional approach.
Lastly, the grammar is head-driven, and we would thus expect the most appropriate parsing algorithm to take advantage of the information that a semantic head provides.
For example, a head usually provides information about the remaining daughters that the parser must find, and (since the head daughter in a construction is in many ways similar to its mother category) effective top-down identification of candi- date heads should be possible. One type of parser that we believe to be partic- ularly well-suited to this type of grammar is the head-corner parser, introduced by van Noord (1991; 1994) based on one of the parsing strategies ex- plored by Kay (1989). The head-corner parser can be thought of as a generalization of a left-corner parser (Rosenkrantz and Lewis-II, 1970; Matsumoto et al., 1983; Pereira and Shieber, 1987). 1 The outstanding features of parsers of this type are that they are head-driven, of course, and that they process the string bidirectionally, starting from a lexical head and working outward. The key ingre- dients of the parsing algorithm are as follows: • Each grammar rule contains a distinguished daughter which is identified as the head of the rule. 2 • The relation head-corner is defined as the reflexive and transitive closure of the head relation. • In order to prove that an input string can be parsed as some (potentially complex) goal cat- egory, the parser nondeterministically selects a potential head of the string and proves that this head is the head-corner of the goal. • Parsing proceeds from the head, with a rule being chosen whose head daughter can be instantiated by the selected head word. The other daughters of the rule are parsed recursively in a bidirec- tional fashion, with the result being a slightly larger head-corner. lln fact, a head-corner parser for a grammar in which the head daughter in each rule is the leftmost daughter will func- tion as a left-corner parser. 2Note that the fragment of the previous section has this property. • 665 • The process succeeds when a head-corner is constructed which dominates the entire input string. 4 Implementation We have implemented the German grammar and head-corner parsing algorithm described in §2 and §3 using the ConTroll formalism (GStz and Meurers, 1997). ConTroll is a constraint logic programming system for typed feature structures, which supports a direct implementation of HPSG. Several properties of the formalism are crucial for the approach to lin- earization that we are investigating: it does not re- quire the grammar to have a context-free backbone; it includes definite relations, enabling the definition of nonconcatenative constraints, such as shuffle; and it supports delayed evaluation of constraints. The ability to control when relational contraints are evaluated is especially important in the optimiza- tion of shuffle to be discussed next (§5). ConTroll also allows a parsing strategy to be specified within the same formalism as the grammar. 3 Our imple- mentation of the head-corner parser adapts van No- ord's (1997) parser to the ConTroll environment. 5 Shuffling Deterministically A standard definition of the shuffle relation is given below as a Prolog predicate. shuffle (unoptimized version) shuffle(IS, [] , []). shuffle([XISi], $2, [XIS3]) :- shuffle(SI,S2,S3). shuffle(S1, [XIS2S, [XIS3]) :- shuffle(S1,S2,S3). The use of a shuffle constraint reflects the fact that several permutations of constituents may be grammatical. If we parse in a bottom-up fashion, and the order domains of two daughter constituents are combined as the first two arguments of shuffle, multiple solutions will be possible for the mother domain (the third argument of shuffle). 
For ex- ample, in the structure shown earlier in Figure 2, when the domain ([liess],[helfen]) is combined with the compacted domain element ([seiner Freundin]), shuffle will produce three solutions: (8) a. ([liess],[helfen],[seiner Freundin] ) b. ([liess],[seiner Freundin],[helfen] ) c. ([seiner Freundin],[liess],[helfen] ) This set of possible solutions is further constrained in two ways: it must be consistent with the linear 3An interface from ConqYoll to the underlying Prolog en- vironment was also developed to support some optimizations of the parser, such as memoization and the operations over bitstrings described in §5. precedence constraints defined by the grammar, and it must yield a sequence of words that is identical to the input sequence that was given to the parser. However, as it stands, the correspondence with the input sequence is only checked after an order do- main is proposed for the entire sentence. The or- der domains of intermediate phrases in the hierar- chical structure are not directly constrained by the grammar, since they may involve discontinuous sub- sequences of the input sentence. The shuffle con- straint is acting as a generator of possible order do- mains, which are then filtered first by LP constraints and ultimately by the order of the words in the in- put sentence. Although each possible order domain that satisfies the LP constraints is a grammatical se- quence, it is useless, in the context of parsing, to con- sider those permutations whose order diverges from that of the input sentence. In order to avoid this very inefficient generate-and-test behavior, we need to provide a way for the input positions covered by each proposed constituent to be considered sooner, so that the only solutions produced by the shuffle constraint will be those that correspond to the or- der of words in the actual input sequence. Since the portion of the input string covered by an order domain may be discontinuous, we cannot just use a pair of endpoints for each constituent as in chart parsers or DCGs. Instead, we adapt a tech- nique described by Reape (1991), and use bitstring codes to represent the portions of the input covered by each element in an order domain. If the input string contains n words, the code value for each con- stituent will be a bitstring of length n. If element i of the bitstring is 1, the constituent contains the ith word of the sentence, and if element i of the bitstring is 0, the constituent does not contain the ith word. Reape uses bitstring codes for a tabular parsing algorithm, different from the head-corner al- gorithm used here, and attributes the original idea to Johnson (1985). The optimized version of the shuffle relation is de- fined below, using a notation in which the arguments are descriptions of typed feature structures. The ac- tual implementation of relations in the ConTroll for- malism uses a slightly different notation, but we use a more familiar Prolog-style notation here. 4 4Symbols beginning with an upper-case letter are vari- ables, while lower-case symbols are either attribute labels (when followed by ':') or the types of values (e.g., he_list). 666 ~, shuffle (optimized version) shuffle([], [], []). shuffle((Sl&ne_list), [], Sl). shuffle([], (S2&ne_list), $2). shuffle(Sl, $2, S3) :- Sl=[(code:Cl) l_], S2=[(code:C2) l_], code_prec (Cl, C2, Bool), shuf f le_d (Bool, Sl, $2, S3). Y, shuffle_d(Bool, [HI[T1], [H2JT2], List). 
The optimized version of the shuffle relation is defined below, using a notation in which the arguments are descriptions of typed feature structures. The actual implementation of relations in the ConTroll formalism uses a slightly different notation, but we use a more familiar Prolog-style notation here.4

4 Symbols beginning with an upper-case letter are variables, while lower-case symbols are either attribute labels (when followed by ':') or the types of values (e.g., ne_list).

% shuffle (optimized version)
shuffle([], [], []).
shuffle((S1 & ne_list), [], S1).
shuffle([], (S2 & ne_list), S2).
shuffle(S1, S2, S3) :-
    S1 = [(code:C1)|_],
    S2 = [(code:C2)|_],
    code_prec(C1, C2, Bool),
    shuffle_d(Bool, S1, S2, S3).

% shuffle_d(Bool, [H1|T1], [H2|T2], List).
% Bool=true:  H1 precedes H2
% Bool=false: H1 does not precede H2
shuffle_d(true, [H1|S1], S2, [H1|S3]) :-
    may_precede_all(H1, S2),
    shuffle(S1, S2, S3).
shuffle_d(false, S1, [H2|S2], [H2|S3]) :-
    may_precede_all(H2, S1),
    shuffle(S1, S2, S3).

This revision of the shuffle relation uses two auxiliary relations, code_prec and shuffle_d. code_prec compares two bitstrings and yields a boolean value indicating whether the first string precedes the second (the details of the implementation are suppressed). The result of a comparison between the codes of the first element of each domain is used to determine which element must appear first in the resulting domain. This is implemented by using the boolean result of the code comparison to select a unique disjunct of the shuffle_d relation. The shuffle_d relation also incorporates an optimization in the checking of LP constraints. As each element is shuffled into the result, it only needs to be checked for LP acceptability against the elements of the other argument list, because the LP constraints have already been satisfied on each of the argument domains. Therefore, LP acceptability no longer needs to be checked for the entire order domain of each phrase, and the call to order_constraints can be eliminated from each of the phrasal schemata.
In order to achieve the desired effect of making shuffle constraints deterministic, we must delay their evaluation until the code attributes of the first element of each argument domain have been instantiated to a specific string. Using the analogy of a card game, we must hold the cards (delay shuffling) until we know what their values are (the codes must be instantiated). The delayed evaluation is enforced by the following declarations in the ConTroll system, where argn:@type specifies that evaluation should be delayed until the value of the nth argument of the relation has a value more specific than type:

delay(code_prec, (arg1:@string & arg2:@string)).
delay(shuffle_d, arg1:@bool).

With the addition of CODE values to each domain element, the input to the shuffle constraint in our previous example is shown below, and the unique solution for MDom is the one corresponding to (8c).

(9) shuffle(([PHON liess, CODE 001000], [PHON helfen, CODE 000001]),
            ([PHON seiner Freundin, CODE 110000]),
            MDom)

6 Performance Comparison
In order to evaluate the reduction in the search space that is achieved by shuffling deterministically, the parser with the optimized shuffle constraints and the parser with the nonoptimized constraints were each tested with the same grammar of German on a set of 30 sentences of varying length, complexity and clause types. Apart from the redefinition of the shuffle relation, discussed in the previous section, the only differences between the grammars used for the optimized and unoptimized tests are the addition of CODE values for each domain element in the optimized version and the constraints necessary to propagate these code values through the intermediate structures used by the parser. A representative sample of the tested sentences is given in Table 2 (because of space limitations, English glosses are not given, but the words have all been glossed in §2), and the performance results for these 12 sentences are listed in Table 1.
For each version of the parser, time, choice points, and calls are reported, as follows: the time measurement (Time)5 is the amount of CPU seconds (on a Sun SPARCstation 5) required to search for all possible parses; choice points (ChoicePts) records the number of instances where more than one disjunct may apply at the time when a constraint is resolved; and calls (Calls) lists the number of times a constraint is unfolded. The number of calls listed includes all constraints evaluated by the parser, not only shuffle constraints. Given the nature of the ConTroll implementation, the number of calls represents the most basic number of steps performed by the parser at a logical level. Therefore, the most revealing comparison with regard to performance improvement between the optimized and nonoptimized versions is the call factor, given in the last column of Table 1. The call factor for each sentence is the number of nonoptimized calls divided by the number of optimized calls. For example, in T1, Er hilft ihr, the version using the nonoptimized shuffle was required to make 4.1 times as many calls as the version employing the optimized shuffle.

5 The absolute time values are not very significant, because the ConTroll system is currently implemented as an interpreter running in Prolog. However, the relative time differences between sentences confirm that the number of calls roughly reflects the total work required by the parser.

Table 1: Comparison of Results for Selected Sentences

               Nonoptimized                     Optimized              Call
          Time(sec) ChoicePts   Calls   Time(sec) ChoicePts  Calls    factor
  T1   1      5.6       61        359      1.8       20        88       4.1
  T2   1     10.0       80        480      3.6       29       131       3.7
  T3   1     24.3      199       1362      4.9       44       200       6.8
  T4   1     25.0      199       1377      5.2       45       211       6.5
  T5   1     51.4      299       2757      6.2       49       241      11.4
  T6   2    463.5     2308      22972     32.4      209       974      23.6
  T7   2    465.1     2308      23080     26.6      172       815      28.3
  T8   1    305.7     1301       9622     52.1      228       942      10.2
  T9   1    270.5     1187       7201     48.0      214      1024       7.0
  T10  1   2063.4     6916      44602    253.8      859      4176      10.7
  T11  1   3368.9     8833      74703    176.5      536      2565      29.1
  T12  1   8355.0    19235     129513    528.1     1182      4937      26.2

Table 2: Selected Sentences

  T1.  Er hilft ihr.
  T2.  Hilft er seiner Freundin?
  T3.  Er hilft ihr schnell.
  T4.  Hilft er ihr schnell?
  T5.  Liess er ihr ihn helfen?
  T6.  Er liess ihn ihr schnell helfen.
  T7.  Liess er ihn ihr schnell helfen?
  T8.  Der Vater liess seiner Freundin seinen Sohn helfen.
  T9.  Sie denkt dass er ihr hilft.
  T10. Sie denkt dass er ihr schnell hilft.
  T11. Sie denkt dass er ihr ihn helfen liess.
  T12. Sie denkt dass er seiner Freundin seinen Sohn helfen liess.

The deterministic shuffle had its most dramatic impact on longer sentences and on sentences containing adjuncts. For instance, in T7, a verb-first sentence containing the adjunct schnell, the optimized version outperformed the nonoptimized by a call factor of 28.3. From these results, the utility of a deterministic shuffle constraint is clear. In particular, it should be noted that avoiding useless results for shuffle constraints prunes away many large branches from the overall search space of the parser, because shuffle constraints are imposed on each node of the hierarchical structure. Since we use a largely bottom-up strategy, this means that if there are n solutions to a shuffle constraint on some daughter node, then all of the constraints on its mother node have to be solved n times. If we avoid producing n - 1 useless solutions to shuffle, then we also avoid n - 1 attempts to construct all of the ancestors to this node in the hierarchical structure.
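In procedural terms, the optimized shuffle is an ordinary deterministic merge of two sorted lists, with the code comparison playing the role of the sort key and may_precede_all standing in for the incremental LP check. A rough Python rendering of the control flow follows; it is a sketch of the idea, not of the ConTroll relation, which is declarative and delayed.

    def shuffle(s1, s2, may_precede_all=lambda x, xs: True):
        # Elements are dicts carrying a CODE integer whose set bits mark the
        # input positions covered (bit 0 = first word of the sentence).
        if not s1:
            return list(s2)
        if not s2:
            return list(s1)
        first = lambda code: (code & -code).bit_length() - 1  # lowest set bit
        h1, h2 = s1[0], s2[0]
        if first(h1["CODE"]) < first(h2["CODE"]):  # code_prec yields true
            assert may_precede_all(h1, s2)         # incremental LP check
            return [h1] + shuffle(s1[1:], s2, may_precede_all)
        else:
            assert may_precede_all(h2, s1)
            return [h2] + shuffle(s1, s2[1:], may_precede_all)

Because the comparison at each step has exactly one outcome once the codes are known, the merge never backtracks, which is precisely what eliminates the n-fold duplication of work on the mother node discussed above.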
7 Conclusion We have shown that eliminating the nondetermin- ism of shuffle constraints overcomes one of the pri- mary inefficiencies of parsing for grammars that use discontinuous order domains. Although bitstring codes have been used before in parsers for discon- tinuous constituents, we are not aware of any prior research that has demonstrated the use of this tech- nique to eliminate the nondeterminism of relational constraints on word order. Additionally, we expect that the applicability of bitstring codes is not limited to shuffle contraints, and that the technique could be straightforwardly generalized for other noncon- catenative constraints. In fact, some way of record- ing the input positions associated with each con- stituent is necessary to eliminate spurious ambigui- ties that arise when the input sentence contains more than one occurrence of the same word (cf. van No- ord's (1994) discussion of nonminimality). For con- catenative grammars, each position can be repre- sented by a simple remainder of the input list, but a more general encoding, such as the bitstrings used here, is needed for grammars using nonconcatenative constraints. References Emmon Bach. 1979. Control in montague grammar. Linguistic Inquiry, 10:515-553. David R. Dowty. 1996. Toward a minimalist the- ory of syntactic structure. In Arthur Horck and Wietske Sijtsma, editors, Discontinuous Con- stituency, Berlin. Mouton de Gruyter. Thilo GStz and Walt Detmar Meurers. 1997. The ConTroll system as large grammar develop- ment platform. In Proceedings of the Workshop on Computational Environments for Grammar 668 Development and Linguistic Engineering (EN- VGRAM) held at ACL-97, Madrid, Spain. Mark Johnson. 1985. Parsing with discontinuous constituents. In Proceedings of the 23 ra Annual Meeting of the Association for Computational Linguistics, pages 127-132, Chicago, IL, July. Andreas Kathol. 1995. Linearization-based German Syntax. Ph.D. thesis, The Ohio State University. Martin Kay. 1989. Head-driven parsing. In Proceed- ings of the First International Workshop on Pars- ing Technologies. Carnegie Mellon University. Y. Matsumoto, H. Tanaka, H. Hirakawa, H. Miyoshi, and H. Yasukawa. 1983. BUP: a bottom up parser embedded in prolog. New Generation Computing, 1(2). Michael Moortgat. 1996. Generalized quantifiers and discontinuous type constructors. In Arthur Horck and Wietske Sijtsma, editors, Discontinu- ous Constituency, Berlin. Mouton de Gruyter. Fernando C.N. Pereira and Stuart M. Shieber. 1987. Prolog and Natural Language Analysis. CSLI Lec- ture Notes Number 10, Stanford, CA. Carl Pollard. 1984. Generalized Phrase Structure Grammars, Head Grammars and Natural Lan- guage. Ph.D. thesis, Stanford University. Michael Reape. 1991. Parsing bounded discontin- uous constituents: Generalizations of some com- mon algorithms. In Proceedings of the First Com- putational Linguistics in the Netherlands Day, OTK, University of Utrecht. Mike Reape. 1996. Getting things in order. In Arthur Horck and Wietske Sijtsma, editors, Dis- continuous Constituents. Mouton de Gruyter, Berlin. D.J. Rosenkrantz and P.M. Lewis-II. 1970. Deter- ministic left corner parsing. In IEEE Conference of the 11th Annual Symposium on Switching and Automata Theory, pages 139-152. Gertjan van Noord. 1991. Head corner parsing for discontinuous constituency. In Proceedings of the 29 th Annual Meeting of the Association for Com- putational Linguistics, pages 114-121, Berkeley, CA, June. Gertjan van Noord. 1994. Head corner parsing. In C.J. Rupp, M.A. 
Rosner, and R.L. Johnson, editors, Constraints, Language and Computation, pages 315-338. Academic Press. Gertjan van Noord. 1997. An efficient implemen- tation of the head-corner parser. Computational Linguistics, 23(3):425-456. 669
Evaluating a Focus-Based Approach to Anaphora Resolution* Saliha Azzam, Kevin Humphreys and Robert Gaizauskas {s. azzam, k. humphreys, r. gaizauskas}©dcs, shef. ac. uk Department of Computer Science, University of Sheffield Regent Court, Portobello Road Sheffield S1 4DP UK Abstract We present an approach to anaphora resolution based on a focusing algorithm, and implemented within an existing MUC (Message Understand- ing Conference) Information Extraction system, allowing quantitative evaluation against a sub- stantial corpus of annotated real-world texts. Extensions to the basic focusing mechanism can be easily tested, resulting in refinements to the mechanism and resolution rules. Results show that the focusing algorithm is highly sensitive to the quality of syntactic-semantic analyses, when compared to a simpler heuristic-based ap- proach. 1 Introduction Anaphora resolution is still present as a signi- ficant linguistic problem, both theoretically and practically, and interest has recently been re- newed with the introduction of a quantitative evaluation regime as part of the Message Under- standing Conference (MUC) evaluations of In- formation Extraction (IE) systems (Grishman and Sundheim, 1996). This has made it pos- sible to evaluate different (implementable) the- oretical approaches against sizable corpora of real-world texts, rather than the small collec- tions of artificial examples typically discussed in the literature. This paper describes an evaluation of a focus- based approach to pronoun resolution (not ana- phora in general), based on an extension of Sidner's algorithm (Sidner, 1981) proposed in (Azzam, 1996), with further refinements from development on real-world texts. The approach * This work was carried out in the context of the EU AVENTINUS project (Thumair, 1996), which aims to develop a multilingual IE system for drug enforcement, and including a language-independent coreference mech- anism (Azzam et al., 1998). is implemented within the general coreference mechanism provided by the LaSIE (Large Scale Information Extraction) system (Gaizauskas et al., 1995) and (Humphreys et al., 1998), Shef- field University's entry in the MUC-6 and 7 evaluations. 2 Focus in Anaphora Resolution The term focus, along with its many relations such as theme, topic, center, etc., reflects an in- tuitive notion that utterances in discourse are usually 'about' something. This notion has been put to use in accounts of numerous linguistic phenomena, but it has rarely been given a firm enough definition to allow its use to be evalu- ated. For anaphora resolution, however, stem- ming from Sidner's work, focus has been given an algorithmic definition and a set of rules for its application. Sidner's approach is based on the claim that anaphora generally refer to the cur- rent discourse focus, and so modelling changes in focus through a discourse will allow the iden- tification of antecedents. The algorithm makes use of several focus re- gisters to represent the current state of a dis- course: CF, the current focus; AFL, the altern- ate focus list, containing other candidate foci; and FS, the focus stack. A parallel structure to the CF, AF the actor focus, is also set to deal with agentive pronouns. The algorithm updates these registers after each sentence, confirming or rejecting the current focus. 
A set of Interpret- ation Rules (IRs) applies whenever an anaphor is encountered, proposing potential antecedents from the registers, from which one is chosen us- ing other criteria: syntactic, semantic, inferen- tial, etc. 74 2.1 Evaluating Focus-Based Approaches Sidner's algorithmic account, although not ex- haustively specified, has lead to the implement- ation of focus-based approaches to anaphora resolution in several systems, e.g. PIE (Lin, 1995). However, evaluation of the approach has mainly consisted of manual analyses of small sets of problematic cases mentioned in the liter- ature. Precise evaluation over sizable corpora of real-world texts has only recently become pos- sible, through the resources provided as part of the MUC evaluations. 3 Coreference in LaSIE The LaSIE system (Gaizauskas et al., 1995) and (Humphreys et al., 1998), has been de- signed as a general purpose IE system which can conform to the MUC task specifications for named entity identification, coreference resolu- tion, IE template element and relation identific- ation, and the construction of scenario-specific IE templates. The system is basically a pipeline architecture consisting of tokenisation, sentence splitting, part-of-speech tagging, morphological stemming, list lookup, parsing with semantic in- terpretation, proper name matching, and dis- course interpretation. The latter stage con- structs a discourse model, based on a predefined domain model, using the, often partial, se- mantic analyses supplied by the parser. The domain model represents a hierarchy of domain-relevant concept nodes, together with associated properties. It is expressed in the XI formalism (Gaizauskas, 1995) which provides a basic inheritance mechanism for property values and the ability to represent multiple classificat- ory dimensions in the hierarchy. Instances of concepts mentioned in a text are added to the domain model, populating it to become a text-, or discourse-, specific model. Coreference resolution is carried out by at- tempting to merge each newly added instance, including pronouns, with instances already present in the model. The basic mechanism is to examine, for each new-old pair of in- stances: semantic type consistency/similarity in the concept hierarchy; attribute value con- sistency/similarity, and a set of heuristic rules, some specific to pronouns, which can act to rule out a proposed merge. These rules can refer to various lexical, syntactic, semantic, and po- sitional information about instances. The in- tegration of the focus-based approach replaces the heuristic rules for pronouns, and represents the use of LaSIE as an evaluation platform for more theoretically motivated algorithms. It is possible to extend the approach to include def- inite NPs but, at present, the existing rules are retained for non-pronominal anaphora in the MUC coreference task: proper names, definite noun phrases and bare nouns. 4 Implementing Focus-Based Pronoun Resolution in LaSIE Our implementation makes use of the algorithm proposed in (Azzam, 1996), where elementary events (EEs, effectively simple clauses) are used as basic processing units, rather than sentences. Updating the focus registers and the application of interpretation rules (IRs) for pronoun resolu- tion then takes place after each EE, permitting intrasentential references3 In addition, an ini- tial 'expected focus' is determined based on the first EE in a text, providing a potential ante- cedent for any pronoun within the first EE. 
Development of the algorithm using real- world texts resulted in various further refine- ments to the algorithm, in both the IRs and the rules for updating the focus registers. The fol- lowing sections describe the two rules sets sep- arately, though they are highly interrelated in both development and processing. 4.1 Updating the Focus The algorithm includes two new focus registers, in addition to those mentioned in section 2: AFS, the actor focus stack, used to record pre- vious AF (actor focus) values and so allow a separate set of IRs for agent pronouns (animate verb subjects); and Intra-AFL, the intrasenten- tial alternate focus list, used to record candidate foci from the current EE only. In the space available here, the algorithm is best described through an example showing the use of the registers. This example is taken from a New York Times article in the MUC-7 training corpus on aircraft crashes: 1An important limitation of Sidner's algorithm, noted in (Azzam, 1996), is that the focus registers are only updated after each sentence. Thus antecedents proposed for an anaphor in the current sentence will always be from the previous sentence or before and intrasentential references axe impossible. 75 State Police said witnesses told them the pro- peller was not turning as the plane descended quickly toward the highway in Wareham near Exit 2. It hit a tree. EE-I: State Police said tell_event An 'expected focus' algorithm applies to initialise the registers as follows: CF (current focus) = tell_event AF (actor focus) = State Police Intra-AFL remains empty because EE-1 contains no other candidate foci. No other registers are affected by the expected focus. No pronouns occur in EE-1 and so no IRs apply. EE-2: witnesses told them The Intra-AFL is first initialised with all (non-pronominal) candidate foci in the EE: Intra-AFL = witnesses The IRs are then applied to the first pronoun, them, and, in this case, propose the current AF, State Police, as the antecedent. The Intra-AFL is immediately updated to add the antecedent: Intra-AFL = State Police, witnesses EE-2 has a pronoun in 'thematic' position, 'theme' being either the object of a transitive verb, or the subject of an intransitive or the copula (following (Gruber, 1976)). Its ante- cedent therefore becomes the new CF, with the previous value moving to the FS. EE-2 has an 'agent', where this is an animate verb subject (again as in (Gruber, 1976)), and this becomes the new AF. Because the old AF is now the CF, it is not added to the AFS as it would be otherwise. After each EE the Intra-AFL is added to the current AFL, excluding the CF. The state after EE-2 is then: CF = State Police AF = witnesses FS = tell_event AFL = witnesses EE-3: the propeller was not turning The Intra-AFL is reinitialised with candidate foci from this EE: Intra-AFL = propeller No pronouns occur in EE-3 and so no IRs apply. The 'theme', propeller here because of the copula, becomes the new CF and the old one is added to the FS. The AF remains unchanged as the current EE lacks an agent: CF = propeller AF = witnesses FS = State Police, tell_event AFL = propeller, witnesses EE-4: the plane descended Intra-AFL = the plane CF = the plane (theme) AF = witnesses (unchanged) FS = propeller, State Police, tell_event AFL = the plane, propeller, witnesses In the current algorithm the AFL is reset at this point, because EE-4 ends the sentence. 
EE-5: it hit a tree Intra-AFL = a tree The IRs resolve the pronoun it with the CF: CF = the plane (unchanged) AF = witnesses (unchanged) FS = propeller, State Police, tell_event AFL = a tree 4.2 Interpretation Rules Pronouns are divided into three classes, each with a distinct set of IRs proposing antecedents: Personal pronouns acting as agents (an- imate subjects): (e.g. he in Shotz said he knew the pilots) AF proposed initially, then an- imate members of AFL. Non-agent pronouns: (e.g. them in EE-2 above and it in EE-5) CF proposed initially, then members of the AFL and FS. Possessive, reciprocal and reflexive pro- nouns (PRRs): (e.g. their in the brothers had left and were on their way home) Ante- cedents proposed from the Intra-AFL, allowing intra-EE references. Antecedents proposed by the IRs are accep- ted or rejected based on their semantic type and feature compatibility, using the semantic and attribute value similarity scores of LaSIE's ex- isting coreference mechanism. 5 Evaluation with the MUC Corpora As part of MUC (Grishman and Sundheim, 1996), coreference resolution was evaluated as a sub-task of information extraction, which in- volved negotiating a definition of coreference re- lations that could be reliably evaluated. The fi- nal definition included only 'identity' relations between text strings: proper nouns, common nouns and pronouns. Other possible corefer- ence relations, such as 'part-whole', and non- text strings (zero anaphora) were excluded. 76 The definition was used to manually annot- ate several corpora of newswire texts, using SGML markup to indicate relations between text strings. Automatically annotated texts, produced by systems using the same markup scheme, were then compared with the manually annotated versions, using scoring software made available to MUC participants, based on (Vilain et al., 1995). The scoring software calculates the stand- ard Information Retrieval metrics of 'recall' and 'precision', 2 together with an overall f-measure. The following section presents the results ob- tained using the corpora and scorer provided for MUC-7 training (60 texts, average 581 words per text, 19 words per sentence) and evaluation (20 texts, average 605 words per text, 20 words per sentence), the latter provided for the formal MUC-7 run and kept blind during development. 6 Results The MUC scorer does not distinguish between different classes of anaphora (pronouns, definite noun phrases, bare nouns, and proper nouns), but baseline figures can be established by run- ning the LaSIE system with no attempt made to resolve any pronouns: Corpus Recall Precision f Training: 42.47. 73.67. 52.67. Evaluation: 44.77. 73.97. 55.77. LaSIE with the simple pronoun resolution heuristics of the non-focus-based mechanism achieves the following: Corpus Recall Precision f Training: 58.27. 71.37. 64.17. Evaluation : 56.07. 70.27. 62.37. showing that more than three quarters of the estimated 20% of pronoun coreferences in the corpora are correctly resolved with only a minor loss of precision. LaSIE with the focus-based algorithm achieves the following: ~Recall is a measure of how many correct (i.e. manu- ally annotated) coreferences a system found, and preci- sion is a measure of how many coreferences that the sys- tem proposed were actually correct. For example, with 100 manually annotated coreference relations in a corpus and a system that proposes 75, of which 50 are correct, recall is then 50/100 or 50% and precision is 50/75 or 66.7%. Corpus Recall Precision f Training: 55.47. 
70.37. 61.97. Evaluation: 53.37. 69.77. 60.47. which, while demonstrating that the focus- based algorithm is applicable to real-world text, does question whether the more complex al- gorithm has any real advantage over LaSIE's original simple approach. The lower performance of the focus-based al- gorithm is mainly due to an increased reliance on the accuracy and completeness of the gram- matical structure identified by the parser. For example, the resolution of a pronoun will be skipped altogether if its role as a verb argu- ment is missed by the parser. Partial parses will also affect the identification of EE bound- aries, on which the focus update rules depend. For example, if the parser fails to attach a pre- positional phrase containing an antecedent, it will then be missed from the focus registers and so the IRs (see (Azzam, 1995)). The simple LaSIE approach, however, will be unaffected in this case. Recall is also lost due to the more restricted proposal of candidate antecedents in the focus- based approach. The simple LaSIE approach proposes antecedents from each preceding para- graph until one is accepted, while the focus- based approach suggests a single fixed set. From a theoretical point of view, many interesting issues appear with a large set of examples, discussed here only briefly because of lack of space. Firstly, the fundamental assumption of the focus-based approach, that the focus is favoured as an antecedent, does not always apply. For example: In June, a few weeks before the crash of TWA Flight 800, leaders of several Middle Eastern terrorist organizations met in Te- heran to plan terrorist acts. Among them was the PFL of Palestine, an organization that has been linked to airplane bombings in the past. Here, the pronoun them corefers with organiz- ations rather than the focus leaders. Additional information will be required to override the fun- damental assumption. Another significant question is when sentence focus changes. In our algorithm, focus changes when there is no reference (pronominal or otherwise) to the current focus in the current 77 EE. In the example used in section 4.1, this causes the focus at the end of the first sentence to be that of the last EE in that sentence, thus allowing the pronoun it in the subsequent sentence to be correctly resolved with the plane. However in the example below, the focus of the first EE (the writ) is the antecedent of the pronoun it in the subsequent sentence, rather than the focus from the last EE (the ...flight): The writ is for "damages" of seven pas- sengers who died when the Airbus A310 flight crashed. It claims the deaths were caused by negligence. Updating focus after the complete sentence, rather than each EE, would propose the cor- rect antecedent in this case. However neither strategy has a significant overall advantage in our evaluations on the MUC corpora. Another important factor is the priorities of the Interpretation Rules. For example, when a personal pronoun can corefer with both CF and AF, IRs select the CF first in our algorithm. However, this priority is not fixed, being based only on the corpora used so far, which raises the possibility of automatically acquiring IR prior- ities through training on other corpora. 7 Conclusion A focus-based approach to pronoun resolution has been implemented within the LaSIE IE sys- tem and evaluated on real-world texts. The res- ults show no significant preformance increase over a simpler heuristic-based approach. 
7 Conclusion
A focus-based approach to pronoun resolution has been implemented within the LaSIE IE system and evaluated on real-world texts. The results show no significant performance increase over a simpler heuristic-based approach. The main limitation of the focus-based approach is its reliance on a robust syntactic/semantic analysis to find the focus on which all the IRs depend. Examining performance on the real-world data also raises questions about the theoretical assumptions of focus-based approaches, in particular whether focus is always a favoured antecedent, or whether this depends, to some extent, on discourse style.
Analysing the differences in the results of the focus- and non-focus-based approaches does show that the focus-based rules are commonly required when the simple syntactic and semantic rules propose a set of equivalent antecedents and can only select, say, the closest arbitrarily. A combined approach is therefore suggested, but whether this would be more effective than further refining the resolution rules of the focus-based approach, or improving parse results and adding more detailed semantic constraints, remains an open question.

References
S. Azzam, K. Humphreys, and R. Gaizauskas. 1998. Coreference resolution in a multilingual information extraction system. In Proceedings of the First Language Resources and Evaluation Conference (LREC), Linguistic Coreference Workshop.
S. Azzam. 1995. Anaphors, PPs and disambiguation process for conceptual analysis. In Proceedings of the 14th IJCAI.
S. Azzam. 1996. Resolving anaphors in embedded sentences. In Proceedings of the 34th ACL.
R. Gaizauskas, T. Wakao, K. Humphreys, H. Cunningham, and Y. Wilks. 1995. Description of the LaSIE system. In Proceedings of MUC-6, pages 207-220. Morgan Kaufmann.
R. Gaizauskas. 1995. XI: A Knowledge Representation Language Based on Cross-Classification and Inheritance. Technical Report CS-95-24, University of Sheffield.
R. Grishman and B. Sundheim. 1996. Message Understanding Conference - 6: A brief history. In Proceedings of the 16th IJCAI, pages 466-471.
J.S. Gruber. 1976. Lexical Structures in Syntax and Semantics. North-Holland.
K. Humphreys, R. Gaizauskas, S. Azzam, C. Huyck, B. Mitchell, H. Cunningham, and Y. Wilks. 1998. Description of the LaSIE-II system. In Proceedings of MUC-7. Forthcoming.
D. Lin. 1995. Description of the PIE system. In Proceedings of MUC-6, pages 113-126. Morgan Kaufmann.
C. Sidner. 1981. Focusing for interpretation of pronouns. American Journal of Computational Linguistics, 7:217-231.
G. Thurmair. 1996. AVENTINUS system architecture. AVENTINUS project report LE1-2238.
M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of MUC-6, pages 45-52. Morgan Kaufmann.
1998
11
Term-list Translation using Mono-lingual Word Co-occurrence Vectors*
Genichiro Kikui
NTT Information and Communication Systems Labs.
1-1 Hikarinooka, Yokosuka-Shi, Kanagawa, Japan
e-mail: [email protected]
[* This research was done when the author was at the Center for the Study of Language and Information (CSLI), Stanford University.]

Abstract
A term-list is a list of content words that characterize a consistent text or a concept. This paper presents a new method for translating a term-list by using a corpus in the target language. The method first retrieves alternative translations for each input word from a bilingual dictionary. It then determines the most 'coherent' combination of alternative translations, where the coherence of a set of words is defined as the proximity among multi-dimensional vectors produced from the words on the basis of co-occurrence statistics. The method was applied to term-lists extracted from newspaper articles and achieved 81% translation accuracy for ambiguous words (i.e., words with multiple translations).

1 Introduction
A list of content words, called a term-list, is widely used as a compact representation of documents in information retrieval and other document processing. Automatic translation of term-lists enables this processing to be cross-linguistic. This paper presents a new method for translating term-lists by using co-occurrence statistics in the target language.
Although there is little study on automatic translation of term-lists, related studies are found in the area of target word selection (for content words) in conventional full-text machine translation (MT).
Approaches for target word selection can be classified into two types. The first type, which has been adopted in many commercial MT systems, is based on hand-assembled disambiguation rules and/or dictionaries. The problem with this approach is that creating these rules requires much cost and that they are usually domain-dependent. [footnote 1: In fact, this is partly shown by the fact that many MT systems have substitutable domain-dependent (or "user") dictionaries.]
The second type, called the statistics-based approach, learns disambiguation knowledge from large corpora. Brown et al. presented an algorithm that relies on translation probabilities estimated from large bilingual corpora (Brown et al., 1990; Brown et al., 1991). Dagan and Itai (1994) and Tanaka and Iwasaki (1996) proposed algorithms for selecting target words by using word co-occurrence statistics in target language corpora. The latter algorithms using mono-lingual corpora are particularly important because, at present, we cannot always get a sufficient amount of bilingual or parallel corpora.
Our method is closely related to (Tanaka and Iwasaki, 1996) in that both rely on mono-lingual corpora only and do not require any syntactic analysis. The difference is that our method uses "coherence scores", which can capture associative relations between two words which do not co-occur in the training corpus.
This paper is organized as follows. Section 2 describes the overall translation process. Section 3 presents a disambiguation algorithm, which is the core part of our translation method. Sections 4 and 5 give experimental results and discussion.

2 Term-list Translation
Our term-list translation method consists of two steps, called Dictionary Lookup and Disambiguation.
1. Dictionary Lookup: For each word in the given term-list, all the alternative translations are retrieved from a bilingual dictionary. A translation candidate is defined as a combination of one translation for each input word. For example, if the input term-list consists of two words, say w1 and w2, and their translations include w11 for w1 and w23 for w2, then (w11, w23) is a translation candidate. If w1 and w2 have two and three alternatives respectively, then there are 6 possible translation candidates.
2. Disambiguation: In this step, all possible translation candidates are ranked according to a measure that reflects the 'coherence' of each candidate. The top-ranked candidate is the translated term-list.
In the following sections we concentrate on the disambiguation step.

3 Disambiguation Algorithm
The underlying hypothesis of our disambiguation method is that a plausible combination of translation alternatives will be semantically coherent.
In order to find the most coherent combination of words, we map words onto points in a multidimensional vector space where the 'proximity' of two vectors represents the level of coherence of the corresponding two words. The coherence of n words can be defined as the degree of spatial 'concentration' of the vectors. The rest of this section formalizes this idea.

3.1 Co-occurrence Vector Space: WORD SPACE
We employed a multi-dimensional vector space, called WORD SPACE (Schuetze, 1997), for defining the coherence of words. The starting point of WORD SPACE is to represent a word with an n-dimensional vector whose i-th element records how many times the word w_i occurs close to the word being represented. For simplicity, we consider w_i and w_j to occur close in context if and only if they appear within an m-word distance (i.e., the words occur within a window of m-word length), where m is a predetermined natural number.
Table 1 shows an artificial example of co-occurrence statistics. The table shows that the word ginko (bank, where people deposit money) co-occurred with shikin (fund) 483 times and with hashi (bridge) 31 times. Thus the co-occurrence vector of ginko (money bank) contains 483 as its 89th element and 31 as its 468th element. In short, a word is mapped onto the row vector of the co-occurrence table (matrix).

Table 1: An example of co-occurrence statistics.

  col. no.             89: shikin (fund)  ...  468: hashi (bridge)  ...
  ginko (bank:money)   483                ...  31                   ...
  teibo (bank:river)   ...                ...  120                  ...

Using this word representation, we define the proximity, prox, of two vectors, a and b, as the cosine of the angle between them, given as follows:

  prox(a, b) = (a . b) / (|a| |b|)    (1)

If two vectors have high proximity then the corresponding two words occur in similar contexts and, in our terms, are coherent.
This simple definition, however, has problems, namely its high dimensionality and sparseness of data. In order to solve these problems, the original co-occurrence vector space is converted into a condensed low-dimensional real-valued matrix by using SVD (Singular Value Decomposition). For example, a 20000-by-1000 matrix can be reduced to a 20000-by-100 matrix. The resulting vector space is the WORD SPACE. [footnote 2: The WORD SPACE method is closely related to Latent Semantic Indexing (LSI) (Deerwester et al., 1990), where document-by-word matrices are processed by SVD instead of word-by-word matrices. The difference between these two is discussed in (Schuetze and Pedersen, 1997).]

3.2 Coherence of Words
We define the coherence of words in terms of a geometric relationship between the corresponding word vectors.
As shown above, two vectors with high proximity are coherent with respect to their associative properties. We have extended this notion to n words. That is, if a group of vectors are concentrated, then the corresponding words are defined to be coherent. Conversely, if vectors are scattered, the corresponding words are incoherent. In this paper, the concentration of vectors is measured by the average proximity from their centroid vector.
Formally, for a given word set W, its coherence coh(W) is defined as follows, where v(w) is the vector for word w and c(W) is the centroid of the vectors of W:

  coh(W) = (1 / |W|) * sum_{w in W} prox(v(w), c(W))    (2)
  c(W) = (1 / |W|) * sum_{w in W} v(w)                  (3)
  |W| = the number of words in W                        (4)

3.3 Disambiguation Procedure
Our disambiguation procedure simply selects the combination of translation alternatives that has the largest coh(W) defined above. The current implementation exhaustively calculates the coherence score for each combination of translation alternatives, then selects the combination with the highest score.
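The whole procedure of Sections 3.1-3.3 fits in a short sketch. The vectors below stand in for rows of the SVD-reduced matrix; the dictionary and the 3-dimensional numbers are illustrative stand-ins, not the paper's actual data:

import itertools
import numpy as np

def prox(a, b):
    # Equation (1): cosine of the angle between two word vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def coh(vectors):
    # Equations (2)-(3): average proximity of each vector to the centroid.
    c = np.mean(vectors, axis=0)
    return sum(prox(v, c) for v in vectors) / len(vectors)

def translate(term_list, dictionary, word_space):
    # Exhaustively score every combination of translation alternatives
    # (Section 3.3) and return the highest-scoring candidate.
    alternatives = [dictionary[w] for w in term_list]
    return max(itertools.product(*alternatives),
               key=lambda cand: coh([word_space[w] for w in cand]))

# Toy WORD SPACE with hypothetical reduced vectors.
word_space = {'ginko': np.array([0.9, 0.1, 0.0]),
              'teibo': np.array([0.1, 0.9, 0.2]),
              'risoku': np.array([0.8, 0.2, 0.1]),
              'kyoumi': np.array([0.2, 0.3, 0.9])}
dictionary = {'bank': ['ginko', 'teibo'], 'interest': ['risoku', 'kyoumi']}
print(translate(['bank', 'interest'], dictionary, word_space))
# -> ('ginko', 'risoku') with these toy vectors

Section 3.4 walks through the same selection on real co-occurrence data.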
3.4 Example
Suppose the given term-list consists of bank and interest. Our method first retrieves translation alternatives from the bilingual dictionary. Let the dictionary contain the following translations:

  source     translations
  bank       ginko (bank:money), teibo (bank:river)
  interest   risoku (interest:money), kyoumi (interest:feeling)

Combining these translation alternatives yields four translation candidates: (ginko, risoku), (ginko, kyoumi), (teibo, risoku), (teibo, kyoumi).
Then the coherence score is calculated for each candidate. Table 2 shows scores calculated with the co-occurrence data used in the translation experiment (see Section 4.4.2). The combination of ginko (bank:money) and risoku (interest:money) has the highest score. This is consistent with our intuition.

Table 2: An example of scores.

  rank  candidate        score (coh)
  1     (ginko, risoku)  0.930
  2     (teibo, kyoumi)  0.897
  3     (ginko, kyoumi)  0.839
  4     (teibo, risoku)  0.821

4 Experiments
We conducted two types of experiments: re-translation experiments and translation experiments. Each experiment includes a comparison against the baseline algorithm, which is a unigram-based translation algorithm. This section presents the two types of experiments, plus the baseline algorithm, followed by experimental results.

4.1 Two Types of Experiments
4.1.1 Translation Experiment
In the translation experiment, term-lists in one language, e.g., English, were translated into another language, e.g., Japanese. In this experiment, humans judged the correctness of outputs.

4.1.2 Re-translation Experiment
Although the translation experiment recreates real applications, it requires human judgment. [footnote 3: If a bilingual parallel corpus is available, then corresponding translations could be used for correct results.] Thus we decided to conduct another type of experiment, called a re-translation experiment. This experiment translates given term-lists (e.g., in English) into a second language (e.g., Japanese) and maps them back onto the source language (in this case, English). Thus the correct translation of a term-list, in the most strict sense, is the original term-list itself.
This experiment uses two bilingual dictionaries: a forward dictionary and a backward dictionary. In this experiment, a word in the given term-list (e.g. in English) is first mapped to the other language (e.g., Japanese) by using the forward dictionary.
Each translated word is then mapped back into the original language by referring to the backward dictionary. The union of the translations from the backward dictionary are the translation alternatives to be disambiguated.

4.2 Baseline Algorithm
The baseline algorithm against which our method was compared employs unigram probabilities for disambiguation. For each word in the given term-list, this algorithm chooses the translation alternative with the highest unigram probability in the target language. Note that each word is translated independently.

4.3 Experimental Data
The source and the target languages of the translation experiments were English and Japanese respectively. The re-translation experiments were conducted for English term-lists using Japanese as the second language.
The Japanese-to-English dictionary was EDICT (Breen, 1995) and the English-to-Japanese dictionary was an inversion of the Japanese-to-English dictionary.
The co-occurrence statistics were extracted from the 1994 New York Times (420MB) for English and the 1990 Nikkei Shinbun (Japanese newspaper) (150MB) for Japanese. The domains of these texts range from business to sports. Note that 400 articles were randomly separated from the former corpus as the test set.
The initial size of each co-occurrence matrix was 20000-by-1000, where rows and columns correspond to the 20,000 and 1000 most frequent words in the corpus (stopwords are ignored). Each initial matrix was then reduced by using SVD into a matrix of 20000-by-100 using SVDPACKC (Berry et al., 1993).
Term-lists for the experiments were automatically generated from texts, where the term-list of a document consists of the topmost n words ranked by their tf-idf scores. [footnote 5: The tf-idf score of a word w in a text is tf_w * log(N / N_w), where tf_w is the number of occurrences of w in the text, N is the number of documents in the collection, and N_w is the number of documents containing w.] The relation between the length n of the term-list and the disambiguation accuracy was also tested.
We prepared two test sets of term-lists: those extracted from the 400 articles from the New York Times mentioned above, and those extracted from articles in Reuters (Reuters, 1997), called Test-NYT and Test-REU, respectively.
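As a concrete illustration of how the test term-lists were built, the following sketch implements the tf-idf ranking of footnote 5. The tokenization and the default value of n are simplifications assumed here, not details from the paper:

import math
from collections import Counter

def term_list(doc_tokens, doc_freq, n_docs, n=6):
    # Score each word by tf-idf: tf_w * log(N / N_w) (footnote 5).
    tf = Counter(doc_tokens)
    scores = {w: tf[w] * math.log(n_docs / doc_freq[w])
              for w in tf if w in doc_freq}
    # The term-list is the topmost n words ranked by tf-idf score.
    return sorted(scores, key=scores.get, reverse=True)[:n]

With n = 6 this mirrors the term-list length that performed best in the re-translation experiments (Table 3 below).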
Table 4: Result of Re-translation for Test-NYT Method success/ambiguous (rate) baseline 236/555 (42.5%) proposed 410/555 (73.8%) We, then, applied the same method with the same parameters (i.e., cooccurence and unigram data) to Test-REU. As shown in Table 5, our method did bet- ter than the baseline algorithm although the success rate is lower than the previous result. Table 5: Result of re-translation for Test-REU Method success/ambiguous (rate) baseline 162/565 (28.7%) proposed 351/565 (62.1%) 6If 100 term-lists were processed and each term-list con- tains 2 ambiguous words, then the "total" becomes 200. Table 6: Result of Translation for Test-NYT Method success/ambiguous (rate) baseline 74/125 (72.6%) proposed 101/125 (80.8%) 4.4.2 translation experiment The translation experiment from English to Japanese was carried out on Test-NYT. The training corpus for both proposed and baseline methods was the Nikkei corpus described above. Outputs were compared against the "correct data" which were manually created by removing incorrect alternatives from all possible alternatives. If all the translation alternatives in the bilingual dictionary were judged to be correct, then we counted this word as unam- biguous. The accuracy of our method and baseline algo- rithm are shown on Table6. The accuracy of our method was 80.8%, about 8 points higher than that of the baseline method. This shows our method is effective in improving trans- lation accuracy when syntactic information is not available. In this experiment, 57% of input words were unambiguous. Thus the success rates for entire words were 91.8% (proposed) and 82.6% (baseline). 4.5 Error Analysis The following are two major failure reasons relevant to our method 7 The first reason is that alternatives were seman- tically too similar to be discriminated. For ex- ample, "share" has at least two Japanese trans- lations: "shea"(market share) and "kabu" (stock ). Both translations frequently occur in the same con- text in business articles, and moreover these two words sometimes co-occur in the same text. Thus, it is very difficult to discriminate them. In this case, the task is difficult also for humans unless the origi- nal text is presented. The second reason is more complicated. Some translation alternatives are polysemous in the target language. If a polysemous word has a very general meaning that co-occurs with various words, then this word is more likely to be chosen. This is because the corresponding vector has "average" value for each dimension and, thus, has high proximity with the centroid vector of multiple words. For example, alternative translations of "stock ~' includes two words: "kabu" (company share) and "dashz" (liquid used for food). The second trans- lation "dashz" is also a conjugation form of the Japanese verb "dasff', which means "put out" and "start". In this case, the word, "dash,", has a cer- 7Other reasons came from errors in pre-processing includ- ing 1) ignoring compound words, 2) incorrect handling of cap- italized words etc. 673 tain amount of proximity because of the meaning irrelevant to the source word, e.g., stock. This problem was pointed out by (Dagan and Itai, 1994) and they suggested two solutions 1) increas- ing the size of the (mono-lingual) training corpora or 2) using bilingual corpora. Another possible solu- tion is to resolve semantic ambiguities of the training corpora by using a mono-lingual disambiguation al- gorithm (e.g., (?)) before making the co-occurrence matrix. 
5 Related Work
Dagan and Itai (1994) proposed a method for choosing target words using mono-lingual corpora. It first locates pairs of words in dependency relations (e.g., verb-object, modifier-noun, etc.), then for each pair it chooses the most plausible combination of translation alternatives. The plausibility of a word-pair is measured by its co-occurrence probability estimated from corpora in the target language.
One major difference is that their method relies on co-occurrence statistics between tightly and locally related (i.e., syntactically dependent) word pairs, whereas ours relies on associative properties of loosely and more globally related (i.e., co-occurring within a certain distance) word groups. Although the former statistics could provide more accurate information for disambiguation, they require huge amounts of data to cover the inputs (the data sparseness problem).
Another difference, which also relates to the data sparseness problem, is that their method uses "raw" co-occurrence statistics, whereas ours uses statistics converted with SVD. The converted matrix has the advantage that it represents the co-occurrence relationship between two words that share similar contexts but do not co-occur in the same text ("second order co-occurrence"; see (Schuetze, 1997)). SVD conversion may, however, weaken co-occurrence relations which actually exist in the corpus.
Tanaka and Iwasaki (1996) also proposed a method for choosing translations that relies solely on co-occurrence statistics in the target language. The main difference with our approach lies in the plausibility measure of a translation candidate. Instead of using a "coherence score", their method employs proximity, or inverse distance, between two co-occurrence matrices: one from the corpus (in the target language) and the other from the translation candidate. The distance measure of two matrices given in the paper is the sum of the absolute distances of each corresponding element. This definition seems to lead the measure to be insensitive to the candidate when the co-occurrence matrix is filled with large numbers.

6 Concluding Remarks
In this paper, we have presented a method for translating term-lists using mono-lingual corpora.
The proposed method was evaluated by translation and re-translation experiments and showed a translation accuracy of 82% for term-lists extracted from articles ranging from business to sports.
We are planning to apply the proposed method to cross-linguistic information retrieval (CLIR). Since the method does not rely on syntactic analysis, it is applicable to translating users' queries as well as to translating term-lists extracted from documents.
A future issue is further evaluation of the proposed method using more data and various criteria, including the overall performance of an application system (e.g., CLIR).

Acknowledgment
I am grateful to members of the Infomap project at CSLI, Stanford, for their kind support and discussions. In particular I would like to thank Stanley Peters and Raymond Flournoy.

References
M.W. Berry, T. Do, G. O'Brien, V. Krishna, and S. Varadhan. 1993. SVDPACKC User's Guide. Tech. Rep. CS-93-194, University of Tennessee, Knoxville, TN.
J.W. Breen. 1995. EDICT, Freeware Japanese-to-English Dictionary.
P. Brown, J. Cocke, V. Della Pietra, F. Jelinek, R.L. Mercer, and P.C. Roosin. 1990. A statistical approach to language translation. Computational Linguistics, 16(2).
P. Brown, V. Della Pietra, and R.L. Mercer. 1991. Word sense disambiguation using statistical methods. In Proceedings of ACL-91.
I. Dagan and A. Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics.
S. Deerwester, S.T. Dumais, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science.
Reuters. 1997. Reuters-21578, Distribution 1.0. Available at http://www.research.att.com/~lewis.
H. Schuetze and Jan O. Pedersen. 1997. A cooccurrence-based thesaurus and two applications to information retrieval. Information Processing & Management.
H. Schuetze. 1997. Ambiguity Resolution in Language Learning. CSLI.
K. Tanaka and H. Iwasaki. 1996. Extraction of lexical translations from non-aligned corpora. In Proceedings of COLING-96.
1998
110
Unlimited Vocabulary Grapheme to Phoneme Conversion for Korean TTS
Byeongchang Kim and WonIl Lee and Geunbae Lee and Jong-Hyeok Lee
Department of Computer Science & Engineering
Pohang University of Science & Technology
Pohang, Korea
{bckim, bdragon, gblee, jhlee}@postech.ac.kr

Abstract
This paper describes a grapheme-to-phoneme conversion method using phoneme connectivity and CCV conversion rules. The method consists of mainly four modules: morpheme normalization, phrase-break detection, morpheme-to-phoneme conversion and phoneme connectivity check. The morpheme normalization replaces non-Korean symbols with standard Korean graphemes. The phrase-break detector assigns phrase breaks using part-of-speech (POS) information. In the morpheme-to-phoneme conversion module, each morpheme in the phrase is converted into phonetic patterns by looking up the morpheme phonetic pattern dictionary, which contains candidate phonological changes at the boundaries of the morphemes. Graphemes within a morpheme are grouped into CCV patterns and converted into phonemes by the CCV conversion rules. The phoneme connectivity table supports grammaticality checking of two adjacent phonetic morphemes. In experiments with a corpus of 4,973 sentences, we achieved 99.9% grapheme-to-phoneme conversion performance and 97.5% sentence conversion performance. The full Korean TTS system is now being implemented using this conversion method.

1 Introduction
During the past few years, remarkable improvements have been made for high-quality text-to-speech systems (van Santen et al., 1997). One of the enduring problems in developing a high-quality text-to-speech system is accurate grapheme-to-phoneme conversion (Divay and Vitale, 1997). It can be described as a function mapping the spelling of words to their phonetic symbols. Nevertheless, the function in some alphabetic languages needs some linguistic knowledge, especially morphological and phonological, but often also semantic knowledge.
In this paper, we present a new grapheme-to-phoneme conversion method for unlimited vocabulary Korean TTS. The conversion method is divided into mainly four modules, each with its own linguistic knowledge. The phrase-break detection module assigns phrase breaks onto part-of-speech sequences using morphological knowledge. Word boundaries before and after phrase breaks should not be co-articulated, so accurate phrase-break assignments are essential in high-quality TTS systems. In the morpheme-to-phoneme conversion module, boundary graphemes of each morpheme in the phrase are converted to phonemes by applying phonetic patterns which contain possible phonological changes at the boundaries of morphemes. The patterns are designed using morphological and phonotactic knowledge. Graphemes within a morpheme are converted into phonemes by CCV (consonant consonant vowel) conversion rules which are automatically extracted from a corpus. After all the conversions, the phoneme connectivity table supports the grammaticality checking of the adjacency of two phonetic morphemes. This grammaticality comes from Korean phonology rules.
This paper is organized as follows. Section 2 briefly explains the characteristics of spoken Korean for general readers. Sections 3 and 4 introduce our grapheme-to-phoneme conversion method based on morphological and phonological knowledge of Korean. Section 5 shows experiment results to demonstrate the performance and Section 6 draws some conclusions.
2 Features of Spoken Korean
This section briefly explains the linguistic characteristics of spoken Korean before describing the architecture.
A Korean word (called eojeol) consists of more than one morpheme with clear-cut morpheme boundaries (Korean is an agglutinative language). Korean is a postpositional language with many kinds of noun-endings, verb-endings, and prefinal verb-endings. These functional morphemes determine the noun's case roles, the verb's aspects/tenses, modals, and modification relations between words. The unit of pause in speech (phrase break) is usually different from that in written text, and no phonological changes occur across phrase breaks. Phonological changes can occur within a morpheme, between morphemes in a word, and even between words within a phrase, as described in the 30 general phonological rules for Korean (Korean Ministry of Education, 1995). These changes include consonant and vowel assimilation, dissimilation, insertion, deletion, and contraction. For example, the noun "kag-ryo", pronounced as "kangnyo" (meaning "cabinet"), is an example of phonological change within a morpheme. The noun plus noun-ending "such+gwa", in which "such" means "charcoal" and "gwa" means "and" in English, is sounded as "sudggwa", which is an example of inter-morpheme phonological change. "Ta-seos gae", which means "five items", is sounded as "taseot ggae", in which phonological changes occur between words. In addition, phonological changes can occur conditionally not only on the morphotactic environments but also on the phonotactic environments.

3 Architecture of the Grapheme-to-Phoneme Converter
Part-of-speech (POS) tagging is a basic step for the grapheme-to-phoneme conversion since phonological changes depend on morphotactic and phonotactic environments. The POS tagging system has to handle out-of-vocabulary (OOV) words for accurate grapheme-to-phoneme conversion of unlimited vocabulary (Bechet and El-Beze, 1997).
Figure 1 shows the architecture of our grapheme-to-phoneme converter integrated with the hybrid POS tagging system (Lee et al., 1997). The hybrid POS tagging system employs generalized OOV word handling mechanisms in the morphological analysis, and cascades statistical and rule-based approaches in a two-phase training architecture for POS disambiguation.

[Figure 1: Architecture of the grapheme-to-phoneme converter in TTS applications.]

Each morpheme tagged by the POS tagger is normalized by replacing non-Korean symbols with Korean graphemes to expand numbers, abbreviations, and acronyms. The phrase-break detector segments the POS sequences into several phrases according to phrase-break detection rules. In the phoneme converter, each morpheme in the phrase is converted into phoneme sequences by consulting the morpheme phonetic dictionary. The OOV morphemes which are not registered in the morpheme phonetic dictionary are processed in two different ways: the graphemes at the morpheme boundary are converted into phonemes by consulting the morpheme phonetic pattern dictionary, and the graphemes within morphemes are converted into phonemes according to the CCV conversion rules. To model the phonemes' connectability across morpheme boundaries, the separate phoneme connectivity table encodes the phonological changes between the morphemes with their POS tags.
Outputs of the grapheme- to-phoneme converter, that is, phoneme se- 676 quences of the input sentence, can be directly fed to the lower level signal processing module of TTS systems. Next section will give detail de- scriptions of each component of the grapheme- to-phoneme converter. The hybrid POS tagging system will not be explained in this paper, and interested readers can see the reference (Lee et al., 1997). 4 Component Descriptions of the Converter 4.1 Morpheme Normalization The normalization replaces non-Korean sym- bols by corresponding Korean graphemes. Non- Korean symbols include numbers (e.g. 54, - 12, 5,400, 4.2), dates (e.g. 20/1/97, 20-Jan- 97), times (e.g. 12:46), scores (e.g. 74:64), mathematical expressions (e.g. 4+5, 1/3), tele- phone numbers, abbreviations (e.g. km, ha) and acronyms (e.g. UNESCO, OECD). Especially, acronyms have two types: spelled acronyms such as OECD and pronounced ones like a word such as UNESCO. The numbers are converted into the correspond- ing Korean graphemes using deterministic fi- nite automata. The dates, times, scores, ex- pressions and telephone numbers are converted into equivalent graphemes using their formats and values. The abbreviations and acronyms are enrolled in the morpheme phonetic dictio- nary, and converted into the phonemes using the morpheme-to-phoneme conversion module. 4.2 Phrase-Break Detection Phrase-break boundaries are important to the subsequent processing such as morpheme- to-phoneme conversion and prosodic feature generation. Graphemes in phrase-break boundaries are not phonologically changed and sounded as their original corresponding phonemes in Korean. A number of different algorithms have been suggested and implemented for phrase break detection (Black and Taylor, 1997). The simplest algorithm uses deterministic rules and more complicated algorithms can use syntactic knowledge and even semantic knowledge. We designed simple rules using break and POS tagged corpus. We found that, in Korean, the average length of phrases is 5.6 words and over 90% of breaks are after 6 different POS tags: conjunctive ending, auxiliary particle, case particle, other particle, adverb and adnominal ending. The phrase-break detector assigns breaks after these 6 POS tags considering the length of phrases. 4.3 Morpheme-to-Phoneme Conversion The morphemes registered in the morpheme phonetic dictionary can be directly converted into phonemes by consulting the dictionary en- tries. However, separate method to process the OOV morphemes which are not registered in the dictionary is necessary. We developed a new method as shown Figure 2. Apply direct morpheme-to-phoneme conversion and phonological connectivity assignment Morpheme t~muee~ dictionary Convert graphemes in morpheme boundaries and assign phonological connectivity Moq~eme phoneae dictionary Ill Ill Convert graphemes within morphemes CCV conversion rule 1 ,i Figure 2: Morpheme-to-phoneme conversion for unlimited vocabularies The morpheme phonetic dictionary contains POS tag, morpheme, phoneme connectivity (left and right) and phoneme sequence for each entry. We try to register minimum number of morpheme in the dictionary. So it contains only the morphemes which are difficult to pro- cess using the next OOV morpheme conversion modules. Table 1 shows example entries for the common noun "pang-gabs", meaning "price of a room" in hotel reservation dialogs. 
The common noun "pang-gabs" can be pronounced as "pang-ggam", "pang-ggab" or "pang-ggabss" according to first phoneme of the adjacent mor- phemes. To handle the OOV morphemes, morpheme phonetic pattern dictionary is developed to con- tain all the general patterns of Korean POS tags, morphemes, phoneme connectivity and phoneme sequences. Boundary phonemes of the OOV morphemes can be converted to their candidate phonemes, and the phonological con- nectivity for them can be acquired by consult- ing this morpheme phonetic pattern dictionary. 677 Table 1: Example entries of the morpheme phonetic dictionary POS tag morpheme phoneme sequence left connectivity right connectivity common noun pang-gabs pang-ggam 'p' no change 'bs' changed to 'm' common noun pang-gabs pang-ggab 'p' no change 'bs' changed to 'b' common noun pang-gabs pang-ggabss 'p' no change 'bs' changed to 'bss' Table 2: Example entries of morpheme phonetic pattern dictionary POS tag morpheme phoneme sequence left connectivity right connectivity t,d tt,n irregular verb irregular verb irregular verb irregular verb t,Z Y,d Y,Z tt,Z Y,n Y,Z 't' changed to 'tt' 't' changed to 'tt' no change no change 'd' changed to 'n' no change 'd' changed to 'n' no change Example entries corresponding to the irregular verb "teud", meaning "hear", are shown in Ta- ble 2. Meta characters, 'Z', 'Y', 'V', '*' desig- nate single consonant, consonant except silence phoneme, vowel, any character sequence with variable length in the order. The table shows that the first grapheme 't' can be phonologically changed to 'tt' according to the last phoneme of the preceding morpheme (left connectivity), and the last grapheme 'd' can be phonologically changed to 'n' according to the first phoneme of the following morpheme(right connectivity). The morpheme phonetic pattern dictionary con- tains similar 1,992 entries to model the general phonological rules for Korean. The graphemes within a morpheme for OOV morphemes are converted into phonemes using the CCV conversion rules. The CCV conversion rules are the mapping rules between grapheme to phoneme in character tri-gram forms which are in the order of consonant(C) consonant(C) vowel(V) spanning two consecutive syllables. The CCV rules are designed and automatically learned from a corpus reflecting the following Korean phonological facts. • Korean is a syllable-base language, i.e., Korean syllable is the basic unit of the graphemes and consists of first consonant, vowel and final consonant (CVC). • The number of possible consonants for each syllable can be varied in grapheme- to-phoneme conversion. • The number of vowels for each syllable is not changed. • Phonological changes of the first consonant are only affected by the final consonant of the preceding syllable and the following vowel of the same syllable. • Phonological changes of the final consonant are only affected by the first consonant of the following syllable. • Phonological changes of the vowel are not affected by the following consonant. The boundary graphemes of the OOV mor- phemes are phonologically changed according to the POS tag and the boundary graphemes of the preceding and following morphemes. On the other hand, the inner grapheme conversion is not affected by the POS tag, but only by the adjacent graphemes within the same mor- pheme. The CCV conversion rules can model the fact easily, but the conventional CC conver- sion rules (Park and Kwon, 1995) cannot model the influence of the vowels. 
4.4 Phoneme Connectivity Check
To verify the boundary phonemes' connectability to one another, the separate phoneme connectivity table encodes the phonologically connectable pairs of morphemes which have phonologically changed boundary graphemes. This phoneme connectivity table indicates the grammatical sound combinations in Korean phonology using the defined left and right connectivity information.
The morpheme-to-phoneme conversion can generate many phoneme sequence candidates for a single morpheme. We put all the phoneme sequence candidates in a phoneme graph from which a correct phoneme sequence path can be selected for the input sentence. The phoneme connectivity check performs this selection and prunes the ungrammatical phoneme sequences in the graph.

5 Implementation and Experiment Results
We implemented simple phrase-break detection rules from a break- and POS-tagged corpus collected by recording and transcribing broadcast news. The rules reflect the fact that the average length of phrases in Korean is 5.6 words and that over 90% of breaks occur after the 6 specific POS tags described in the text.
We constructed a 1,992-entry morpheme phonetic pattern dictionary for OOV morpheme processing using standard Korean phonological rules. The morpheme phonetic dictionary was constructed only for the morphemes that are difficult to handle with these standard rules. The two dictionaries are indexed using the POS tag and morpheme pattern for fast access. To model the boundary phonemes' connectability to one another, the phoneme connectivity table encodes 626 pairs of phonologically connectable morphemes.
The 2,030-entry rule set for CCV conversion was automatically learned from 9,773 phonetically transcribed sentences. An independent set of 4,973 phonetically transcribed sentences was used to test the performance of the grapheme-to-phoneme conversion. Of the 4,973 sentences, only 2.5% were incorrectly processed (120 sentences out of 4,973), and only 0.1% of the graphemes in the sentences were actually incorrectly converted.

6 Conclusions
This paper presents a new grapheme-to-phoneme conversion method using phoneme connectivity and CCV conversion rules for unlimited vocabulary Korean TTS. For efficient conversion, the new ideas of the morpheme phonetic and morpheme phonetic pattern dictionaries are introduced, and the system demonstrates remarkable conversion performance for unlimited vocabulary texts. Our main contributions include presenting the morphologically and phonologically conditioned conversion model which is essential for morphologically and phonologically complex agglutinative languages. Another contribution is the grapheme-to-phoneme conversion model combined with declarative phonological rules, which is well suited to the given task. We also designed the new CCV unit of grapheme-to-phoneme conversion for the unlimited vocabulary task. The experiments show that the grapheme-to-phoneme conversion performance is 97.5% in sentence conversion and 99.9% in individual grapheme conversion. We are now working on incorporating this grapheme-to-phoneme conversion into the developing TTS systems.

References
F. Bechet and M. El-Beze. 1997. Automatic assignment of part-of-speech to out-of-vocabulary words for text-to-speech processing. In Proceedings of EUROSPEECH '97, pages 983-986.
Alan W. Black and Paul Taylor. 1997. Assigning phrase breaks from part-of-speech sequences. In Proceedings of EUROSPEECH '97, pages 995-998.
Michel Divay and Anthony J. Vitale. 1997. Algorithms for grapheme-phoneme translation for English and French: Applications. Computational Linguistics, 23(4).
Korean Ministry of Education. 1995. Korean Rule Collections. Taehan Publishers. (in Korean).
Geunbae Lee, Jeongwon Cha, and Jong-Hyeok Lee. 1997. Hybrid POS tagging with generalized unknown-word handling. In Proceedings of IRAL '97, pages 43-50.
S.H. Park and H.C. Kwon. 1995. Implementation of a phonological alteration module for a Korean text-to-speech system. In Proceedings of the Conference on Korean and Korean Information Processing. (in Korean).
Jan P.H. van Santen, Richard W. Sproat, Joseph P. Olive, and Julia Hirschberg. 1997. Progress in Speech Synthesis. Springer-Verlag.
1998
111
Role of Verbs in Document Analysis
Judith Klavans* and Min-Yen Kan**
Center for Research on Information Access* and Department of Computer Science**
Columbia University
New York, NY 10027, USA

Abstract
We present results of two methods for assessing the event profile of news articles as a function of verb type. The unique contribution of this research is the focus on the role of verbs, rather than nouns. Two algorithms are presented and evaluated, one of which is shown to accurately discriminate documents by type and semantic properties, i.e. the event profile. The initial method, using WordNet (Miller et al. 1990), produced multiple cross-classification of articles, primarily due to the bushy nature of the verb tree coupled with the sense disambiguation problem. Our second approach, using English Verb Classes and Alternations (EVCA) (Levin 1993), showed that monosemous categorization of the frequent verbs in WSJ made it possible to usefully discriminate documents. For example, our results show that articles in which communication verbs predominate tend to be opinion pieces, whereas articles with a high percentage of agreement verbs tend to be about mergers or legal cases. An evaluation is performed on the results using Kendall's tau (τ). We present convincing evidence for using verb semantic classes as a discriminant in document classification. [footnote 1: The authors acknowledge earlier implementations by James Shaw, and very valuable discussion from Vasileios Hatzivassiloglou, Kathleen McKeown and Nina Wacholder. Partial funding for this project was provided by NSF award #IRI-9618797 STIMULATE: Generating Coherent Summaries of On-Line Documents: Combining Statistical and Symbolic Techniques (co-PIs McKeown and Klavans), and by the Columbia University Center for Research on Information Access.]

1 Motivation
We present techniques to characterize document type and event by using semantic classification of verbs. The intuition motivating our research is illustrated by an examination of the role of nouns and verbs in documents. The listing below shows the ontological categories which express the fundamental conceptual components of propositions, using the framework of Jackendoff (1983). Each category permits the formation of a wh-question, e.g. for [THING] "what did you buy?" can be answered by the noun "a fish". The wh-questions for [ACTION] and [EVENT] can only be answered by verbal constructions, e.g. in the question "what did you do?", where the response must be a verb, e.g. jog, write, fall, etc.

  [THING] [DIRECTION] [ACTION] [PLACE] [MANNER] [EVENT] [AMOUNT]

The distinction in the ontological categories of nouns and verbs is reflected in information extraction systems. For example, given the noun phrases fares and US Air that occur within a particular article, the reader will know what the story is about, i.e. fares and US Air. However, the reader will not know the [EVENT], i.e. what happened to the fares or to US Air. Did airfare prices rise, fall or stabilize? These are the verbs most typically applicable to prices, and which embody the event.

1.1 Focus on the Noun
Many natural language analysis systems focus on nouns and noun phrases in order to identify information on who, what, and where. For example, in summarization, Barzilay and Elhadad (1997) and Lin and Hovy (1997) focus on multi-word noun phrases. For information extraction tasks, such as the DARPA-sponsored Message Understanding Conferences (1992), only a few projects use verb phrases (events), e.g. Appelt et al. (1993), Lin (1993).
In contrast, the named entity task, which identifies nouns and noun phrases, has generated numerous projects as evidenced by a host of papers in recent conferences (e.g. Wacholder et al. 1997, Palmer and Day 1997, Neumann et al. 1997). Although rich information on nominal participants, actors, and other entities is provided, the named entity task provides no information on what happened in the document, i.e. the event or action. Less progress has been made on ways to utilize verbal information efficiently. In earlier systems with stemming, many of the verbal and nominal forms were conflated, sometimes erroneously. With the development of more sophisticated tools, such as part-of-speech taggers, more accurate verb phrase identification is possible. We present in this paper an effective way to utilize verbal information for document type discrimination.

1.2 Focus on the Verb
Our initial observations suggested that both the occurrence and the distribution of verbs in news articles provide meaningful insights into both article type and content. Exploratory analysis of parsed Wall Street Journal data [footnote 2: Penn TreeBank (Marcus et al. 1994) from the Linguistic Data Consortium.] suggested that articles characterized by movement verbs such as drop, plunge, or fall have a different event profile from articles with a high percentage of communication verbs, such as report, say, comment, or complain. However, without associated nominal arguments, it is impossible to know whether the [THING] that drops refers to airfare prices or projected earnings.
In this paper, we assume that the set of verbs in a document, when considered as a whole, can be viewed as part of the conceptual map of the events and actions in a document, in the same way that the set of nouns has been used as a concept map for entities. This paper reports on two methods using verbs to determine an event profile of the document, while also reliably categorizing documents by type. Intuitively, the event profile refers to the classification of an article by the kind of event. For example, the article could be a discussion event, a reporting event, or an argument event.
To illustrate, consider a sample article from WSJ of average length (12 sentences) with a high percentage of communication verbs. The profile of the article shows that there are 19 verbs: 11 (57%) are communication verbs, including add, report, say, and tell. Other verbs include be skeptical, carry, produce, and close. Representative nouns include Polaroid Corp., Michael Ellmann, Wertheim Schroder Co., Prudential-Bache, savings, operating results, gain, revenue, cuts, profit, loss, sales, analyst, and spokesman.
In this case, the verbs clearly contribute the information that this article is a report with more opinions than new facts. The preponderance of communication verbs, coupled with proper noun subjects and human nouns (e.g. spokesman, analyst), suggests a discussion article. If verbs are ignored, this fact would be overlooked. Matches on frequent nouns like gain and loss do not discriminate this article from one which announces a gain or loss as breaking news; indeed, according to our results, a breaking news article would feature a higher percentage of motion verbs rather than verbs of communication.

1.3 On Genre Detection
Verbs are an important factor in providing an event profile, which in turn might be used in categorizing articles into different genres. Turning to the literature in genre classification, Biber (1989) outlines five dimensions which can be used to characterize genre. Properties for distinguishing dimensions include verbal features such as tense, agentless passives and infinitives. Biber also refers to three verb classes: private, public, and suasive verbs. Karlgren and Cutting (1994) take a computationally tractable set of these properties and use them to compute a score to recognize text genre using discriminant analysis. The only verbal feature used in their study is present-tense verb count. As Karlgren and Cutting show, their techniques are effective in genre categorization, but they do not claim to show how genres differ. Kessler et al. (1997) discuss some of the complexities in automatic detection of genre using a set of computationally efficient cues, such as punctuation, abbreviations, or presence of Latinate suffixes. The taxonomy of genres and facets developed in Kessler et al. is useful for a wide range of types, such as found in the Brown corpus. Although some of their discriminators could be useful for news articles (e.g. presence of a second person pronoun tends to indicate a letter to the editor), the indicators do not appear to be directly applicable to a finer classification of news articles.
News articles can be divided into several standard categories typically addressed in journalism textbooks. We base our article category ontology, shown in lowercase, on Hill and Breen (1977), in uppercase:
1. FEATURE STORIES: feature;
2. INTERPRETIVE STORIES: editorial, opinion, report;
3. PROFILES;
4. PRESS RELEASES: announcements, mergers, legal cases;
5. OBITUARIES;
6. STATISTICAL INTERPRETATION: posted earnings;
7. ANECDOTES;
8. OTHER: poems.
The goal of our research is to identify the role of verbs, keeping in mind that event profile is but one of many factors in determining text type. In our study, we explored the contribution of verbs as one factor in document type discrimination; we show how article types can be successfully classified within the news domain using verb semantic classes.

2 Initial Observations
We initially considered two specific categories of verbs in the corpus: communication verbs and support verbs. In the WSJ corpus, the two most common main verbs are say, a communication verb, and be, a support verb. In addition to say, other high-frequency communication verbs include report, announce, and state. In journalistic prose, as seen by the statistics in Table 1, at least 20% of the sentences contain communication verbs such as say and announce; these sentences report a point of view or indicate an attributed comment. In these cases, the subordinated complement represents the main event, e.g. in "Advisors announced that IBM stock rose 36 points over a three year period," there are two actions: announce and rise. In sentences with a communication verb as main verb we considered both the main and the subordinate verb; this decision augmented our verb count an additional 20% and, even more importantly, further captured information on the actual event in an article, not just the communication event. As shown in Table 1, support verbs, such as go ("go out of business") or get ("get along"), constitute 30%, and other content verbs, such as fall, adapt, recognize, or vow, make up the remaining 50%. If we exclude all support-type verbs, 70% of the verbs yield information in answering the question "what happened?" or "what did X do?"

Table 1: Approximate frequency of verbs by type from the Wall Street Journal (main and selected subordinate verbs, n = 10,295).

  Verb Type      Sample Verbs             %
  communication  say, announce, ...       20%
  support        have, get, go, ...       30%
  remainder      abuse, claim, offer, ... 50%
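The percentages in Table 1 suggest a simple way to compute a first-pass event profile: count main verbs (plus complements of communication verbs, which the counts above also include) per class and normalize. A minimal sketch, with tiny illustrative verb lists standing in for the real classification:

COMMUNICATION = {'say', 'report', 'announce', 'state', 'add', 'tell'}
SUPPORT = {'be', 'have', 'get', 'go'}

def event_profile(verbs):
    # verbs: lemmatized main verbs (and complements of communication
    # verbs) extracted from one article.
    counts = {'communication': 0, 'support': 0, 'content': 0}
    for v in verbs:
        if v in COMMUNICATION:
            counts['communication'] += 1
        elif v in SUPPORT:
            counts['support'] += 1
        else:
            counts['content'] += 1
    total = len(verbs) or 1
    return {k: c / total for k, c in counts.items()}

# For the sample article of Section 1.2, 11 of 19 verbs are communication
# verbs, so a profile like {'communication': 0.57, ...} flags a discussion piece.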
Turning to the literature in genre classification, Biber (1989) outlines five dimensions which can be used to characterize genre. Properties for dis- tinguishing dimensions include verbal features such as tense, agentless passives and infinitives. Biber also refers to three verb classes: private, public, and suasive verbs. Karlgren and Cut- ting (1994) take a computationally tractable set of these properties and use them to compute a score to recognize text genre using discriminant analysis. The only verbal feature used in their study is present-tense verb count. As Karlgren and Cutting show, their techniques are effective in genre categorization, but they do not claim to show how genres differ. Kessler et al. (1997) discuss some of the complexities in automatic detection of genre using a set of computation- ally efficient cues, such as punctuation, abbrevi- ations, or presence of Latinate suffixes. The tax- onomy of genres and facets developed in Kessler et al. is useful for a wide range of types, such as found in the Brown corpus. Although some of their discriminators could be useful for news articles (e.g. presence of second person pronoun tends to indicate a letter to the editor), the in- dicators do not appear to be directly applicable to a finer classification of news articles. News articles can be divided into several stan- dard categories typically addressed in journal- ism textbooks. We base our article category ontology, shown in lowercase, on Hill and Breen (1977), in uppercase: 1. FEATURE STORIES : feature; 2. INTERPRETIVE STORIES: editorial, opinion, report; 3. PROFILES; 4. PRESS RELEASES: announcements, mergers, legal cases; 5. OBITUARIES; 6. STATISTICAL INTERPRETATION: posted earnings; 7. ANECDOTES; 8. OTHER: poems. The goal of our research is to identify the role of verbs, keeping in mind that event profile is but one of many factors in determining text type. In our study, we explored the contribu- tion of verbs as one factor in document type dis- crimination; we show how article types can be successfully classified within the news domain using verb semantic classes. 2 Initial Observations We initially considered two specific categories of verbs in the corpus: communication verbs and support verbs. In the WSJ corpus, the two most common main verbs are say, a communication verb, and be, a support verb. In addition to say, other high frequency communication verbs include report, announce, and state. In journal- istic prose, as seen by the statistics in Table 1, at least 20% of the sentences contain commu- nication verbs such as say and announce; these sentences report point of view or indicate an attributed comment. In these cases, the subor- dinated complement represents the main event, e.g. in "Advisors announced that IBM stock rose 36 points over a three year period," there are two actions: announce and rise. In sen- tences with a communication verb as main verb we considered both the main and the subor- dinate verb; this decision augmented our verb count an additional 20% and, even more im- portantly, further captured information on the actual event in an article, not just the commu- nication event. As shown in Table 1, support verbs, such as go ("go out of business") or get ("get along"), constitute 30%, and other con- tent verbs, such as fall, adapt, recognize, or vow, make up the remaining 50%. If we exclude all support type verbs, 70% of the verbs yield in- formation in answering the question "what hap- pened?" or "what did X do?" 
3 Event Profile: WordNet and EVCA Since our first intuition of the data suggested that articles with a preponderance of verbs of 682 Verb Type Sample Verbs % communication say, announce .... 20% support have, get, go, ... 30% remainder abuse, claim, offer, ... 50% Table 1: Approximate Frequency of verbs by type from the Wall Street Journal (main and selected subordinate verbs, n = 10,295). a certain semantic type might reveal aspects of document type, we tested the hypothesis that verbs could be used as a predictor in provid- ing an event profile. We developed two algo- rithms to: (1) explore WordNet (WN-Verber) to cluster related verbs and build a set of verb chains in a document, much as Morris and Hirst (1991) used Roget's Thesaurus or like Hirst and St. Onge (1998) used WordNet to build noun chains; (2) classify verbs according to a se- mantic classification system, in this case, us- ing Levin's (1993) English Verb Classes and Alternations (EVCA-Yerber) as a basis. For source material, we used the manually-parsed Linguistic Data Consortium's Wall Street Jour- nal (WSJ) corpus from which we extracted main and complement of communication verbs to test the algorithms on. Using WordNet. Our first technique was to use WordNet to build links between verbs and to provide a semantic profile of the docu- ment. WordNet is a general lexical resource in which words are organized into synonym sets, each representing one underlying lexical concept (Miller et al. 1990). These synonym sets - or synsets - are connected by different semantic relationships such as hypernymy (i.e. plunging is a way of descending), synonymy, antonymy, and others (see Fellbaum 1990). The determina- tion of relatedness via taxonomic relations has a rich history (see Resnik 1993 for a review). The premise is that words with similar meanings will be located relatively close to each other in the hierarchy. Figure 1 shows the verbs cite and post, which are related via a common ancestor inform, ..., let know. The WN-Verber tool. We used the hypernym relationship in WordNet because of its high cov- erage. We counted the number of edges needed to find a common ancestor for a pair of verbs. Given the hierarchical structure of WordNet, the lower the edge count, in principle, the closer the verbs are semantically. Because WordNet common ancestor inform ..... let know t e s t i f Y ~ ~ o u ~ c ~ .... abduct ..... cite attest .... report post sound Figure 1: Taxonomic Relations for cite and post in WordNet. allows individual words (via synsets) to be the descendent of possibly more than one ances- tor, two words can often be related by more than one common ancestor via different paths, possibly with the same relationship (grandpar- ent and grandparent, or with different relations (grandparent and uncle). Results from WN-Verber. We ran all arti- cles longer than 10 sentences in the WSJ cor- pus (1236 articles) through WN-Verber. Output showed that several verbs - e.g. go, take, and say - participate in a very large percentage of the high frequency synsets (approximate 30%). This is due to the width of the verb forest in WordNet (see Fellbaum 1990); top level verb synsets tend to have a large number of descen- dants which are arranged in fewer generations, resulting in a flat and bushy tree structure. For example, a top level verb synset, inform, ..., give information, let know has over 40 children, whereas a similar top level noun synset, entity, only has 15 children. 
As a result, using fewer than two levels resulted in groupings that were too limited to aggregate verbs effectively. Thus, for our system, we allowed up to two edges to intervene between a common ancestor synset and each of the verbs' respective synsets, as in Figure 2.

[Figure 2: Acceptable and unacceptable configurations for relating two verbs v1 and v2 in our system: a configuration is acceptable only when at most two edges separate each verb's synset from the common ancestor.]

In addition to the problem of the flat nature of the verb hierarchy, our results from WN-Verber are degraded by ambiguity; similar effects have been reported for nouns. Verbs with differences between high- and low-frequency senses caused certain verbs to be incorrectly related; for example, have and drop are related by the synset meaning "to give birth", although this sense of drop is rare in WSJ.
The results of WN-Verber in Table 2 reflect the effects of bushiness and ambiguity. The five most frequent synsets are given in column 1; column 2 shows some typical verbs which participate in the clustering; column 3 shows the type of article which tends to contain these synsets. Most articles (864/1236 = 70%) end up in the top five nodes. This illustrates the ineffectiveness of these most frequent WordNet synsets in discriminating between article types.

Table 2: Frequent synsets and article types.

  Synset                                        Sample Verbs in Synset    Article types (listed in order)
  Act (interact, act together, ...)             have, relate, give, tell  announcements, editorials, features
  Communicate (communicate,
    intercommunicate, ...)                      give, get, inform, tell   announcements, editorials, features, poems
  Change (change)                               have, modify, take        poems, editorials, announcements, features
  Alter (alter, change)                         convert, make, get        announcements, poems, editorials
  Inform (inform, round on, ...)                inform, explain, describe announcements, poems, features

Evaluation using Kendall's tau. We sought independent confirmation to assess the correlation between two variables' ranks for the WN-Verber results. To evaluate the effects of one synset's frequency on another, we used Kendall's tau (τ) rank order statistic (Kendall 1970). For example, was it the case that verbs under the synset act tend not to occur with verbs under the synset think? If so, do articles with this property fit a particular profile? In our results, we have information about synset frequency, where each of the 1236 articles in the corpus constitutes a sample. Table 3 shows the results of calculating Kendall's τ with considerations for ranking ties, for all (10 choose 2) = 45 pairwise combinations of the top 10 most frequently occurring synsets. Correlations can range from -1.0, reflecting inverse correlation, to +1.0, showing direct correlation, i.e. the presence of one class increases as the presence of the correlated class increases. A τ value of 0 would show that the two variables' values are independent of each other.
Results show a significant positive correlation between the synsets. The range of correlation is from .850 between the communication verb synset (give, get, inform, ...) and the act verb synset (have, relate, give, ...) to .238 between the think verb synset (plan, study, give, ...) and the change state verb synset (fall, come, close, ...). These correlations show that frequent synsets do not behave independently of each other and thus confirm that the WordNet results are not an effective way to achieve document discrimination.
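The tie-corrected τ used here is standard; scipy's implementation (tau-b), for instance, reproduces this kind of computation. A minimal sketch, where the two input lists are per-article frequencies of a pair of synsets (toy numbers, not the paper's data):

from scipy.stats import kendalltau

# Per-article frequencies of two synsets across a small toy sample;
# in the paper, each of the 1236 WSJ articles contributes one pair.
act_freq  = [5, 2, 0, 7, 3, 1]
comm_freq = [6, 1, 1, 8, 2, 0]

tau, p_value = kendalltau(act_freq, comm_freq)  # tau-b corrects for ties
print(round(tau, 3))  # a value near +1 means the two synsets co-vary strongly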
Although the WordNet results were not discriminatory, we were still convinced that our initial hypothesis on the role of verbs in determining event profile was worth pursuing. We believe that these results are a by-product of lexical ambiguity and of the richness of the WordNet hierarchy. We thus decided to pursue a new approach to test our hypothesis, one which turned out to provide us with clearer and more robust results.

         act   com   chng  alter  infm  exps  thnk  judg  trnf
state   .407  .296  .672  .461   .286  .269  .238  .355  .268
trnsf   .437  .436  .251  .436   .251  .404  .369  .359
judge   .444  .414  .435  .450   .340  .348  .427
exprs   .444  .414  .435  .397   .322  .432
think   .444  .414  .435  .397   .398
infrm   .614  .649  .341  .380
alter   .501  .454  .619

Table 3: Kendall's τ for frequent WordNet synsets.

Utilizing EVCA. A different approach to testing the hypothesis was to use another semantic categorization method; we chose the semantic classes of Levin's EVCA as the basis for our next analysis.³ Levin's seminal work is based on the time-honored observation that verbs which participate in similar syntactic alternations tend to share semantic properties. Thus, the behavior of a verb with respect to the expression and interpretation of its arguments can be said to be, in large part, determined by its meaning. Levin has meticulously set out a list of syntactic tests (about 100 in all), which predict membership in no fewer than 48 classes, each of which is divided into numerous sub-classes. The rigor and thoroughness of Levin's study permitted us to encode our algorithm, EVCA-Verber, on a subset of the EVCA classes, ones which were frequent in our corpus. First, we manually categorized the 100 most frequent verbs, as well as 50 additional verbs, which together cover 56% of the verbs by token in the corpus. We subjected each verb to a set of strict linguistic tests, as shown in Table 4, and verified primary verb usage against the corpus.

³ Strictly speaking, our classification is based on EVCA. Although many of our classes are precisely defined in terms of EVCA tests, we did impose some extensions. For example, support verbs are not an EVCA category.

Verb Class (sample verbs)                 Sample Test
Communication (add, say, announce, ...)   (1) Does this involve a transfer of ideas? (2) X verbed "something."
Motion (rise, fall, decline, ...)         (1) *"X verbed without moving."
Agreement (agree, accept, concur, ...)    (1) "They verbed to join forces." (2) Involves more than one participant.
Argument (argue, debate, ...)             (1) "They verbed (over) the issue." (2) Indicates conflicting views. (3) Involves more than one participant.
Causative (cause)                         (1) X verbed Y (to happen/happened). (2) X brings about a change in Y.

Table 4: EVCA verb class tests.

Results from EVCA-Verber. In order to be able to compare article types and emphasize their differences, we selected the articles that had the highest percentage of a particular verb class from each of the ten verb classes; we chose five articles from each EVCA class, yielding a total of 50 articles for analysis from the full set of 1,236 articles. We observed that each class discriminated between different article types, as shown in Table 5. In contrast to Table 2, the article types are well discriminated by verb class. For example, a concentration of communication-class verbs (say, report, announce, ...) indicated that the article type was a general announcement of short or medium length, or a longer feature article with many opinions in the text.
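A minimal sketch of how such an event profile and the per-class article selection might be computed. The tiny verb lexicon here stands in for the 150 hand-classified verbs, and the article data are invented; nothing below is the authors' code.

```python
from collections import Counter

# Stand-in for the manually built verb-to-class lexicon (hypothetical subset)
EVCA_CLASS = {
    'say': 'communication', 'announce': 'communication', 'add': 'communication',
    'rise': 'motion', 'fall': 'motion', 'decline': 'motion',
    'agree': 'agreement', 'accept': 'agreement',
    'argue': 'argument', 'cause': 'causative',
}

def event_profile(verbs):
    """Share of each EVCA class among an article's covered verbs."""
    classes = [EVCA_CLASS[v] for v in verbs if v in EVCA_CLASS]
    total = len(classes) or 1
    return {c: n / total for c, n in Counter(classes).items()}

def top_articles(articles, verb_class, k=5):
    """The k articles with the highest percentage of `verb_class` verbs,
    mirroring the selection of five articles per class."""
    key = lambda a: event_profile(a['verbs']).get(verb_class, 0.0)
    return sorted(articles, key=key, reverse=True)[:k]

article = {'id': 1, 'verbs': ['say', 'rise', 'fall', 'announce', 'fall']}
print(event_profile(article['verbs']))  # {'communication': 0.4, 'motion': 0.6}
```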
Articles high in motion verbs were also announcements, but differed from the communication ones in that they were commonly postings of company earnings reaching a new high or dropping from the last quarter. Agreement and argument verbs appeared in many of the same articles, involving issues of some controversy. However, we noted that the articles with agreement verbs were a superset of the argument ones in that, in our corpus, argument verbs did not appear in articles concerning joint ventures and mergers. Articles marked by causative-class verbs tended to be a bit longer, possibly reflecting prose on both the cause and effect of a particular action. We also used EVCA-Verber to investigate articles marked by the absence of members of each verb class, such as articles lacking any verbs in the motion verb class. However, we found that the absence of a verb class was not discriminatory.

Verb Class (sample verbs)                 Article types (listed by frequency)
Communication (add, say, announce, ...)   issues, reports, opinions, editorials
Motion (rise, fall, decline, ...)         posted earnings, announcements
Agreement (agree, accept, concur, ...)    mergers, legal cases, transactions (without buying and selling)
Argument (argue, indicate, contend, ...)  legal cases, opinions
Causative (cause)                         opinions, features, editorials

Table 5: EVCA-based verb class results.

Evaluation of EVCA verb classes. To strengthen the observation that articles dominated by verbs of one class reflect distinct article types, we verified that the verb classes behaved independently of each other. Correlations for the EVCA classes are shown in Table 6. These show a markedly lower level of correlation between verb classes than the results for WordNet synsets, the range being from .265, between motion and aspectual verbs, to -.026, for motion verbs and agreement verbs. These low values of τ for pairs of verb classes reflect the independence of the classes. For example, the communication and experience verb classes are weakly correlated; this, we surmise, may be due to the different ways opinions can be expressed, i.e. as factual quotes using communication-class verbs or as beliefs using experience-class verbs.

        comun  motion  agree  argue  exp   aspect  cause
appear  .122   .076    .077   .072   .182  .112    .037
cause   .093   .083    .000   .000   .073  .096
aspect  .246   .265    .034   .110   .189
exp     .260   .130    .054   .054
argue   .162   .045    .033
agree   .071   -.026

Table 6: Kendall's τ for EVCA-based verb classes.

4 Results and Future Work

Basis for WordNet and EVCA comparison. This paper reports results from two approaches, one using WordNet and the other based on EVCA classes. However, the basis for comparison must be made explicit. In the case of WordNet, all verb tokens (n = 10K) were considered in all senses, whereas in the case of EVCA, a subset of less ambiguous verbs was manually selected. As reported above, we covered 56% of the verbs by token. Indeed, when we attempted to add more verbs to the EVCA categories, at the 59% mark we reached a point of difficulty in adding new verbs due to ambiguity, e.g. verbs such as get. Thus, although our results using EVCA are revealing in important ways, it must be emphasized that the comparison has some imbalance which puts WordNet in an unnaturally negative light. In order to accurately compare the two approaches, we would need to process either the same less ambiguous verb subset with WordNet, or the full set of all verbs in all senses with EVCA.
Although the results reported in this paper permitted the validation of our hypothesis, unless a fair comparison between resources is performed, conclusions about WordNet as a resource versus EVCA class distinctions should not be inferred.

Verb Patterns. In addition to considering verb type frequencies in texts, we have observed that verb distribution and patterns might also reveal subtle information in text. Verb class distribution within the document, and within particular sub-sections, also carries meaning. For example, we have observed that when sentences with movement verbs such as rise or fall are followed by sentences with cause and then a telic aspectual verb such as reach, this indicates that a value rose to a certain point due to the actions of some entity. Identification of such sequences will enable us to assign functions to particular sections of contiguous text in an article, in much the same way that text segmentation programs seek to identify topics from distributional vocabulary (Hearst, 1994; Kan et al., 1998). We can also use specific sequences of verbs to help in determining methods for performing semantic aggregation of individual clauses in text generation for summarization.

Future Work. Our plans are to extend the current research in terms of verb coverage and in terms of article coverage. For verbs, we plan to: (1) increase the verbs that we cover to include phrasal verbs; (2) increase coverage of verbs by categorizing additional high-frequency verbs into EVCA classes; and (3) examine the effects of increased coverage on determining article type. For articles, we plan to explore a general parser so we can test our hypothesis on additional texts and examine how our conclusions scale up. Finally, we would like to combine our techniques with other indicators to form a more robust system, such as that envisioned in Biber (1989) or suggested in Kessler et al. (1997).

Conclusion. We have outlined a novel approach to document analysis for news articles which permits discrimination of the event profile of news articles. The goal of this research is to determine the role of verbs in document analysis, keeping in mind that event profile is one of many factors in determining text type. Our results show that Levin's EVCA verb classes provide reliable indicators of article type within the news domain. We have applied the algorithm to WSJ data and have discriminated articles with five EVCA semantic classes into categories such as features, opinions, and announcements. This approach to document type classification using verbs has not been explored previously in the literature. Our results on verb analysis, coupled with what is already known about NP identification, convince us that future combinations of information will be even more successful in the categorization of documents. Results such as these are useful in applications such as passage retrieval, summarization, and information extraction.

References

D. Appelt, J. Hobbs, J. Bear, D. Israel, and M. Tyson. 1993. FASTUS: A finite state processor for information extraction from real world text. In Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI), Chambery, France.

Regina Barzilay and Michael Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of the Intelligent Scalable Text Summarization Workshop (ISTS'97), ACL, Madrid, Spain.

Douglas Biber. 1989. A typology of English texts. Language, 27:3-43.

Christiane Fellbaum. 1990.
English verbs as a semantic net. International Journal of Lexicography, 3(4):278-301.

Marti A. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics.

Evan Hill and John J. Breen. 1977. Reporting & Writing the News. Little, Brown and Company, Boston, Massachusetts.

Graeme Hirst and David St-Onge. 1998. Lexical chains as representations of context for the detection and correction of malapropisms. In WordNet: An Electronic Lexical Database and Some of its Applications.

Ray Jackendoff. 1983. Semantics and Cognition. MIT Press, Cambridge, Massachusetts.

Min-Yen Kan, Judith L. Klavans, and Kathleen R. McKeown. 1998. Linear segmentation and segment relevance. Unpublished manuscript.

Jussi Karlgren and Douglass Cutting. 1994. Recognizing text genres with simple metrics using discriminant analysis. In Fifteenth International Conference on Computational Linguistics (COLING '94), Kyoto, Japan.

Maurice G. Kendall. 1970. Rank Correlation Methods. Griffin, London, England, 4th edition.

Brett Kessler, Geoffrey Nunberg, and Hinrich Schütze. 1997. Automatic detection of text genre. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, Madrid, Spain.

Beth Levin. 1993. English Verb Classes and Alternations. University of Chicago Press, Chicago, Illinois.

Chin-Yew Lin and Eduard Hovy. 1997. Identifying topics by position. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, pages 283-290, Washington, D.C., April.

Dekang Lin. 1993. University of Manitoba: Description of the NUBA system as used for MUC-5. In Proceedings of the Fifth Conference on Message Understanding (MUC-5), pages 263-275, Baltimore, Maryland. ARPA.

Mitch Marcus et al. 1994. The Penn Treebank: Annotating predicate argument structure. ARPA Human Language Technology Workshop.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography (special issue), 3(4):235-312.

Jane Morris and Graeme Hirst. 1991. Lexical coherence computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1):21-42.

1992. Message Understanding Conference (MUC).

Günter Neumann, Rolf Backofen, Judith Baur, Marcus Becker, and Christian Braun. 1997. An information extraction core system for real world German text processing. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, pages 209-216, Washington, D.C., April.

David D. Palmer and David S. Day. 1997. A statistical profile of the named entity task. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, pages 190-193, Washington, D.C., April.

Philip Resnik. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania.

Nina Wacholder, Yael Ravin, and Misook Choi. 1997. Disambiguation of proper names in text. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, volume 1, pages 202-209, Washington, D.C., April.
1998
112
A FLEXIBLE EXAMPLE-BASED PARSER BASED ON THE SSTC*

Mosleh Hmoud Al-Adhaileh & Tang Enya Kong
Computer Aided Translation Unit, School of Computer Sciences,
Universiti Sains Malaysia, 11800 Penang, Malaysia
[email protected], [email protected]

* The work reported in this paper is supported by the IRPA research programs, under project number 04-02-05-6001, funded by the Ministry of Science, Technology and Environment, Malaysia.

Abstract

In this paper we sketch an approach for natural language parsing. Our approach is an example-based approach, which relies mainly on examples that are already parsed to their representation structures, and on the knowledge that we can get from these examples the information required to parse a new input sentence. In our approach, examples are annotated with the Structured String-Tree Correspondence (SSTC) annotation schema, where each SSTC describes a sentence, a representation tree, as well as the correspondence between substrings in the sentence and subtrees in the representation tree. In the process of parsing, we first try to build subtrees for phrases in the input sentence which have been successfully found in the example-base (a bottom-up approach). These subtrees are then combined together to form a single rooted representation tree, based on an example with a similar representation structure (a top-down approach).

Keywords: example-based parsing, SSTC.

1. INTRODUCTION

In natural language processing (NLP), one key problem is how to design an effective parsing system. Natural language parsing is the process of analysis that takes sentences in a natural language and converts them to some representation form suitable for further interpretation towards whatever applications may be required, for example translation, text abstraction or question-answering. The generated representation tree structure can be a phrase structure tree, a dependency tree or a logical structure tree, as required by the application involved. Here we design an approach for parsing natural language to its representation structure which depends on related examples already parsed in the example-base. This approach is called example-based parsing, as opposed to the traditional approaches to natural language parsing, which are normally based on rewriting rules. Here, linguistic knowledge extracted directly from the example-base is used to parse a natural language sentence (i.e. using past language experience instead of rules). For a new sentence, to build its analysis (i.e. its representation structure tree): ideally, if the sentence is already in the example-base, its analysis is found there too, but in general the input sentence will not be found in the example-base. In such a case, a method is used to retrieve closely related examples and to use the knowledge from these examples to build the analysis for the input sentence. In general, this approach relies on the assumption that if two strings (phrases or sentences) are "close", their analyses should be "close" too. If the analysis of the first one is known, the analysis of the other can be obtained by making some modifications to the analysis of the first one.

The example-based approach has become a common technique for NLP applications, especially in MT, as reported in [1], [2] or [3]. However, a main problem normally arises in the current approaches which indirectly limits their application in the development of large-scale and practical example-based systems.
Namely, there is a lack of flexibility in creating the representation tree, due to the restriction that correspondences between nodes (terminal or non-terminal) of the representation tree and words of the sentence must be one-to-one; some approaches even restrict it to a projective manner according to a certain traversal order. This restriction normally results in inefficient usage of the example-base. In this paper, we first discuss certain cases where projective representation trees are inadequate for characterizing the representation structures of some natural linguistic phenomena, i.e. featurisation, lexicalisation and crossed dependencies. Next, we propose to overcome the problem by introducing a flexible annotation schema called the Structured String-Tree Correspondence (SSTC), which describes a sentence, a representation tree, and the correspondence between substrings in the sentence and subtrees in the representation tree. Finally, we present an algorithm to parse natural language sentences based on the SSTC annotation schema.

2. NON-PROJECTIVE CORRESPONDENCES IN NATURAL LANGUAGE SENTENCES

In this section, we present some cases where a projective representation tree is found to be inadequate for characterizing the representation tree of some natural language sentences. The cases illustrated here are featurisation, lexicalisation and crossed dependencies.

2.1 Featurisation

Featurisation occurs when a linguist decides that a particular substring in the sentence should not be represented as a subtree in the representation tree but perhaps as a collection of features. For example, as illustrated in Figure 1, this would be the case for prepositions in arguments, which can be interpreted as part of the predicate and not the argument, and should be featurised into the predicate (e.g. "up" in "picks-up"); the particle "up" is featurised as part of the feature properties of the verb "pick".

[Figure 1: Featurisation. In "He picks up the ball", the particle "up" is folded into the verb node picks-up.]

2.2 Lexicalisation

Lexicalisation is the case where a particular subtree in the representation tree presents the meaning of some part of the string which is not overtly realized in phonological form. Lexicalisation may result from the correspondence of a subtree in the tree to an empty substring in the sentence, or of a substring in the sentence to more than one subtree in the tree. Figure 2 illustrates the sentence "John eats the apple and Mary the pear", where "eats" in the sentence corresponds to more than one node in the tree.

[Figure 2: Lexicalisation. In "John eats the apple and Mary the pear", the single occurrence of "eats" corresponds to two verb nodes under the coordination.]

2.3 Crossed dependencies

The most complicated case of string-tree correspondence is when dependencies are intertwined with each other. It is a very common phenomenon in natural language. In crossed dependencies, a subtree in the tree corresponds to a single substring in the sentence, but the words of the substring are distributed over the whole sentence in a discontinuous manner, in relation to the subtree they correspond to.
An example of crossed dependencies occurs in sentences of the form aⁿ v bⁿ cⁿ (n > 0); Figure 3 illustrates the representation tree for the string "aa v bb cc" (also written a.1 a.2 v b.1 b.2 c.1 c.2 to show the positions). This is akin to the 'respectively' problem in English sentences like "John and Mary give Paul and Ann trousers and dresses respectively" [4].

[Figure 3: Crossed dependencies. Under the root v, the subtrees for (a.1 b.1 c.1) and (a.2 b.2 c.2) each correspond to a discontinuous substring of "aa v bb cc".]

Sometimes a sentence contains a mixture of these non-projective correspondences; Figure 4 illustrates the sentence "He picks the ball up", which contains both featurisation and crossed dependencies. Here, the particle "up" is separated from its verb "picks" by the noun phrase "the ball" in the string, and "up" is featurised into the verb "picks" (e.g. "up" in "picks-up").

[Figure 4: Mixture of featurisation and crossed dependencies in "He picks the ball up".]

3. STRUCTURED STRING-TREE CORRESPONDENCE (SSTC)

The correspondence between the string on one hand, and its representation of meaning on the other, is defined in terms of finer subcorrespondences between substrings of the sentence and subtrees of the tree. Such a correspondence is made of two interrelated correspondences, one between nodes and substrings, and the other between subtrees and substrings (the substrings being possibly discontinuous in both cases). The notation used in the SSTC to denote a correspondence consists of a pair of intervals X/Y attached to each node in the tree, where X (SNODE) denotes the interval containing the substring that corresponds to the node, and Y (STREE) denotes the interval containing the substring that corresponds to the subtree having the node as root [4]. Figure 5 illustrates the sentence "all cats eat mice" with its corresponding SSTC. It is a simple projective correspondence. An interval is assigned to each word in the sentence, i.e. (0-1) for "all", (1-2) for "cats", (2-3) for "eat" and (3-4) for "mice". A substring in the sentence that corresponds to a node in the representation tree is denoted by assigning the interval of the substring to the SNODE of the node, e.g. the node "cats" with SNODE interval (1-2) corresponds to the word "cats" in the string with the same interval. The correspondences between subtrees and substrings are denoted by the interval assigned to the STREE of each node, e.g. the subtree rooted at the node "eat" with STREE interval (0-4) corresponds to the whole sentence "all cats eat mice".

[Figure 5: An SSTC recording the sentence "all cats eat mice" and its dependency tree, together with the correspondences between substrings of the sentence and subtrees of the tree: eat (2-3/0-4) dominates cats (1-2/0-2) and mice (3-4/3-4), with all (0-1/0-1) under cats.]

4. USES OF SSTC ANNOTATION IN EXAMPLE-BASED PARSING

In order to enhance the quality of example-based systems, sentences in the example-base are normally annotated with their constituency or dependency structures, which in turn allows example-based parsing to be established at the structural level. To facilitate such structural annotation, here we annotate the examples based on the Structured String-Tree Correspondence (SSTC). The SSTC is a general structure that can associate, to a string in a language, an arbitrary tree structure as desired by the annotator to be the interpretation structure of the string; more importantly, it provides the facility to specify the correspondence between the string and the associated tree, which can be interpreted for both analysis and synthesis in NLP.
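The interval notation carries over directly into a small data structure. The sketch below is our own illustration, not code from the paper; intervals are kept as sets of spans so that discontinuous correspondences, such as the featurised picks-up of Figure 4 (SNODE 1-2+4-5), fit the same representation.

```python
from dataclasses import dataclass, field

@dataclass
class SSTCNode:
    word: str
    cat: str                       # syntactic category: 'v', 'n', 'det', ...
    snode: frozenset               # X: span(s) of the substring for the node
    stree: frozenset               # Y: span(s) covered by the whole subtree
    children: list = field(default_factory=list)

    def discontinuous(self):
        return len(self.snode) > 1

def iv(*spans):                    # interval-set constructor
    return frozenset(spans)

# Figure 5: the projective SSTC for "all cats eat mice"
all_ = SSTCNode('all',  'det', iv((0, 1)), iv((0, 1)))
cats = SSTCNode('cats', 'n',   iv((1, 2)), iv((0, 2)), [all_])
mice = SSTCNode('mice', 'n',   iv((3, 4)), iv((3, 4)))
eat  = SSTCNode('eat',  'v',   iv((2, 3)), iv((0, 4)), [cats, mice])

# Figure 4: "He picks the ball up" with "up" featurised into the verb
picks_up = SSTCNode('picks-up', 'v', iv((1, 2), (4, 5)), iv((0, 5)))
print(eat.discontinuous(), picks_up.discontinuous())   # False True
```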
These features are very much desired in the design of an annotation scheme, in particular for the treatment of linguistic phenomena which are non-standard, e.g. crossed dependencies [5].

Since the examples in the example-base are described in terms of SSTCs, each consisting of a sentence (the text), a dependency tree¹ (the linguistic representation) and the mapping between the two (the correspondence), example-based parsing is performed by taking a new input sentence, retrieving related examples (i.e. examples that contain the same words as the input sentence) from the example-base, and using them to compute the representation tree for the input sentence, guided by the correspondence between the string and the tree, as discussed in the following sections. Figure 6 illustrates the general schema for example-based NL parsing based on the SSTC schema.

¹ Each node is tagged with a syntactic category to enable substitution at the category level.

[Figure 6: Example-based natural language parsing based on the SSTC schema: the input sentence and the SSTC example-base feed the example-based parser, which outputs an SSTC for the input sentence.]

4.1 The parsing algorithm

The example-based approach in MT [1], [2] or [3] relies on the assumption that if two sentences are "close", their analyses should be "close" too ("close": the distance is not too large; "modification": edit operations such as insert, delete and replace [6]). If the analysis of the first one is known, the analysis of the other can be obtained by making some modifications to the analysis of the first one. In most cases, an identical sentence will not occur in the example-base, so the system utilizes examples closely related to the given input sentence (i.e. examples with a structure similar to the input sentence, or containing some of its words). For that, it is necessary to construct several sub-SSTCs (called substitutions hereafter) for phrases in the input sentence, according to their occurrence in the examples from the example-base. These substitutions are then combined together to form a complete SSTC as the output.

Suppose the system intends to parse the sentence "the old man picks the green lamp up", depending on the following set of examples representing the example-base:

(1) "He picks the ball up" (0-1 1-2 2-3 3-4 4-5):
      picks[v]-up[p] (1-2+4-5/0-5)
        He[n] (0-1/0-1)
        ball[n] (3-4/2-4)
          the[det] (2-3/2-3)

(2) "The green signal turns on" (0-1 1-2 2-3 3-4 4-5):
      turns[v] (3-4/0-5)
        signal[n] (2-3/0-3)
          the[det] (0-1/0-1)
          green[adj] (1-2/1-2)
        on[adv] (4-5/4-5)

(3) "The lamp is off" (0-1 1-2 2-3 3-4):
      is[v] (2-3/0-4)
        lamp[n] (1-2/0-2)
          the[det] (0-1/0-1)
        off[adv] (3-4/3-4)

(4) "The old man died" (0-1 1-2 2-3 3-4):
      died[v] (3-4/0-4)
        man[n] (2-3/0-3)
          the[det] (0-1/0-1)
          old[adj] (1-2/1-2)

The example-base is first processed to retrieve some knowledge related to each word in the example-base, forming a knowledge index. Figure 7 shows the knowledge index constructed from the example-base given above. The knowledge retrieved for each word consists of:

1. Example number: the number of one of the examples which contains this word with this knowledge. Note that each example in the example-base is assigned a number as its identifier.
2. Frequency: the frequency of occurrence in the example-base of this word with the same knowledge.
3. Category: the syntactic category of this word.
4. Type: the type of this word in the dependency tree (0: terminal, 1: non-terminal).
   - Terminal word: a word at the bottom level of the tree structure, namely a word without any sons under it (i.e. STREE = SNODE in the SSTC annotation).
   - Non-terminal word: a word which is linked to other word(s) at the lower level, namely a word that has sons (i.e. STREE ≠ SNODE in the SSTC annotation).
5. Status: the status of this word in the dependency tree (0: root word, 1: non-root word, 2: friend word).
   - Friend word: in the case of featurisation, if a word is featurised into another word, it is called a friend of that word; e.g. the word "up" is a friend of the word "picks" in Figure 1.
6. Parent category: the syntactic category of the parent node of this word in the dependency tree.
7. Position: the position of the parent node in the sentence (0: after this word, 1: before this word).
8. Next knowledge: a pointer to the next possible knowledge entry for this word. Note that a word might have more than one knowledge entry; e.g. "man" could be a verb or a noun.

Based on the constructed knowledge index in Figure 7, the system builds the following table of knowledge for the input sentence:

The input sentence: the  old  man  picks  the  green  lamp  up
                    0-1  1-2  2-3  3-4    4-5  5-6    6-7   7-8

word    SNODE  ex.  freq  cat  type  status  parent  position  next
the     0-1    1    4     det  0     1       n       0         nil
old     1-2    4    1     adj  0     1       n       0         nil
man     2-3    4    1     n    1     1       v       0         nil
picks   3-4    1    1     v    1     0       -       -         nil
the     4-5    1    4     det  0     1       n       0         nil
green   5-6    2    1     adj  0     1       n       0         nil
lamp    6-7    3    1     n    1     1       v       0         nil
up      7-8    1    1     p    1     2       v       1         nil

Note that for each word in the input sentence, the system builds a record containing the word, its SNODE interval, and a linked list of the possible knowledge entries for the word as recorded in the knowledge index. For example, the record for the word <the> reads: the word <the>, SNODE (0-1); one of the examples that contains the word with this knowledge is example 1; this knowledge is repeated 4 times in the example-base; the category of the word is <det>; it is a terminal node and a non-root node; the parent category is <n>; and the parent appears after it in the sentence.
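A hypothetical encoding of one knowledge-index entry (fields 1-8 above) and of the per-word input records; the field names and the representation are our own, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Knowledge:
    example_no: int                 # 1. an example showing this usage
    frequency: int                  # 2. occurrences in the example-base
    category: str                   # 3. 'det', 'n', 'v', 'p', ...
    nonterminal: bool               # 4. type: True iff STREE != SNODE
    status: int                     # 5. 0 root, 1 non-root, 2 friend
    parent_cat: Optional[str]       # 6. None for root words
    parent_after: Optional[bool]    # 7. position of the parent node
    next: Optional['Knowledge'] = None   # 8. further readings of the word

# The record for <the> spelt out in the running text: example 1, observed
# 4 times, a terminal non-root determiner whose [n] parent follows it.
the_entry = Knowledge(example_no=1, frequency=4, category='det',
                      nonterminal=False, status=1,
                      parent_cat='n', parent_after=True)
record = {'word': 'the', 'snode': (0, 1), 'knowledge': the_entry}
```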
I green ~" ~ generator The remainder non-terminal words, which are not connected to any terminal word, will be treated as separate substitutions. From the input sentence the system builds the following substitutions respectively : man[n] picks[v] lamp[n] up[p] (2-3/0-3) (3-4/0-8) (6-7/4-7) (7-8/-) theldet] old[adj] the[de(] green[adj] (0-1/0-1) (1-2/1-2) (4-5/4-5~ (5-6/5-6) (1) (2) (3) (4) Note that this approach is quite similar to the generation of constituents in bottom-up chart parsing except that the problem of handling multiple overlapping constituents is not addressed here. 4.1.2 Substitutions combination In order to combine the substitutions to form a complete SSTC, the system first finds non-terminal words of input sentence, which appear as root word of some dependency trees in the example SSTCs. If more than one example are found (in most cases), the system will calculate the distance between the input sentence and the examples, and the closest example 691 (namely one with minimum distance) will be chosen to proceed further. In our example, the word "picks" is the only word in the sentence which can be the root word, so example (1) which containing "pick" as root will be used as the base to construct the output SSTC. The system first generates the substitutions for example (1) based on the same assumptions mentioned earlier in substitutions generation, which are : heln] Picks[v] ball[n] uplPl (0-1/0-1) (1-2/0-5) (3-4~2-4) (4-5/-) I the[det] (2-3/2-3) (1) (2) (3) (4) Distance calculation: Here the system utilizes distance calculation to determine the plausible example, which SSTC structure will be used as a base to combine the substitutions at the input sentence. We define a heuristic to calculate the distance, in terms of editing operations. Editing operations are insert (E --> p), deletion (p--)E) and replacing (a "-) s). Edition distances, which have been proposed in many works [7], [8] and [9], reflect a sensible notion, and it can be represented as metrics under some hypotheses. They defined the edition distances as number of editing operations to transfer one word to another form, i.e. how many characters needed to be edited based on insertion, deletion or replacement. Since words are strings of characters, sentences are strings of words, editing distances hence are not confined to words, they may be used on sentences [6]. With the similar idea, we define the edition distance as: (i) The distance is calculated at level of substitutions (i.e. only the root nodes of the substitutions will be considered, not all the words in the sentences). (ii) The edit operations are done based on the syntactic category of the root nodes, (i.e. the comparison between the input sentence and an example is based on the syntactic category of the root nodes of their substitutions, not based on the words). The distance is calculated based on the number of editing operations (deletions and insertion) needed to transfer the input sentence substitutions to the example substitutions, by assigning weight to each of these operations: 1 to insertion and 1 to deletion. e.g. : a) S 1: The old man eats an apple. $2: He eats a sweet cake. man [n] eats [v] f' aplle in) the~[adj] ea~~ ~an [det] He In] Iv] cake ln] a ldet] sweet [adj] In (a), the distance between S 1 and $2 is 0. b) He (nl boy[nl I The [detl S 1: He eats an apple in the garden. $2: The boy who drinks tea eats the cake. eats [v] ~ ~ garden [n] who~[~l] d r i ~ : : ~ ~ ~ l n ] I the [det] In (b), the distance between S1 and $2 is (3+2)=5. 
Note that when a substitution is decided to be deleted from the example, all the words of the related substitutions (i.e. the root of the substitutions and all other words that may link to it as brothers, or son/s), are deleted too. This series is determined by referring to an example containing this substitution in the example-base. For example in (b) above, the substitution rooted with "who" must be deleted, hence substitutions "drinks" and "tea" must be deleted too, similarly "in" must be deleted hence "garden" must be deleted too. Before making the replacement, the system must first check that the root nodes categories for substitutions in both the example and the input sentence are the same, and that these substitutions are occurred in the same order (i.e. the distance is 0). If there exist additional substitutions in the input sentence (i.e. the distance ~: 0), the system will either combine more than one substitution into a single substitution based on the knowledge index before replacement is carried out or treat it as optional substitution which will be added as additional subtree under the root. On the other hand, additional substitutions appear in the example will be treated as optional substitutions and hence can be removed. Additional substitutions are determined during distance calculation. Replacement: Next the substitutions in example (1) will be replaced by the corresponding substitutions generated from the input sentence to form a final SSTC. The replacement 692 process is done by traversing the SSTC tree structure for the example in preorder traversal, and each substitution in the tree structure replaced with its corresponding substitution in the input sentence. This approach is analogous to top down parsing technique. Figure 8, illustrates the parsing schema for the input sentence " The old malt picks the green lamp up". Input sentence The old man picks the green lamp up substitutions Ii m I (I) ~ theldeq oldladj] ( 2 ) ~ I the[det] greenladjl [(4)k~ ~ p.- pickslvl up [Pl (1-2+4-5/0-5) /\ He [hi balllnl (0-1/0-1) (3-4/2-4) I theldetl (2-3/2-3) He picks the ball up 0-1 1-2 2-3 3-4 4-5 SSTC base [ i;i structure ~,,,~ ...... • II.J Replacement ]l ~ -q I SSTC example substitutions I,t l ,olnl I - ( 2 ) ~ I uptp)I c4) I ! Output SSTC ~, structure picks[v] uplp] man[n](2-3/0-3) lamp[n](6-7/4-7) /\ /\ the[det] oldladj] the[det] green[adj] (O-I/0-l) (1-2/1-2) (4-5/4-5) (5-6/5-6) The old man picks the green lamp up 0-1 I-2 2-3 3-4 4-5 5-6 6-7 7-8 I . . . . . . . . . . . . . . Figure 8: The parsing schema based on the SSTC for the sentence "the old man picks the green lamp up" using example ( 1 ). 5. CONCLUSION In this paper, we sketch an approach for parsing NL string, which is an example-based approach relies on the examples that already parsed to their representation structures, and on the knowledge that we can get from these examples information needed to parse the input sentence. A flexible annotation schema called Structured String-Tree Correspondence (SSTC) is introduced to express linguistic phenomena such as featurisation, lexicalisation and crossed dependencies. We also present an overview of the algorithm to parse natural language sentences based on the SSTC annotation schema. However, to obtain a full version of the parsing algorithm, there are several other problems which needed to be considered further, i.e. 
the handling of multiple substitutions, an efficient method to calculate the distance between the input sentence and the examples, and lastly a detailed formula to compute the resultant SSTC obtained from the combination process especially when deletion of optional substitutions are involved. References: [1] M.Nagao, "A Framework of a mechanical translation between Japanese and English by analogy principle", in; A. Elithorn, R. Benerji, (Eds.), Artificial and Human Intelligence, Elsevier: Amsterdam. [2] V.Sadler & Vendelmans, "Pilot implementation of a bilingual knowledge bank", Proc. of Coling-90, Helsinki, 3, 1990, 449-451. [3] S. Sato & M.Nagao, "Example-based Translation of technical Terms", Proc. of TMI-93, Koyoto, 1993, 58-68. [4] Y. Zaharin & C. Boitet, "Representation trees and string-tree correspondences", Proc. of Coling-88, Budapest, 1988, 59-64. [5] E. K. Tang & Y. Zaharin, "Handling Crossed Dependencies with the STCG", Proc. of NLPRS'95, Seoul, 1995, [6] Y.Lepage & A.Shin-ichi, "Saussurian analogy: a theoritical account and its application", Proc. of Coling-96, Copenhagen, 2, 1996, 717-722. [7] V. I. Levenshtein, "Binary codes capable of correcting deletions, insertions and reversals", Dokl. Akad. Nauk SSSR, 163, No. 4, 1965, 845-848. English translation hz Soviet Physics-doklady, 10, No. 8, 1966, 707-710. [8] Robert A. Wagner & Michael J. Fischer, " The String-to String Correction Problem", Journal for the Association of Computing Machinery, 21, No. 1, 1974, 168-173. [9] Stanley M. Selkow, "The Tree-to-Tree Editing Problem", Information Processing Letters, 6, No. 6, 1977, 184-186. 693
1998
113
Large Scale Collocation Data and Their Application to Japanese Word Processor Technology

Yasuo Koyama, Masako Yasutake, Kenji Yoshimura and Kosho Shudo
Institute for Information and Control Systems, Fukuoka University, Fukuoka, 814-0180 Japan
koymm@aisott.co.jp, [email protected], [email protected], [email protected]

Abstract

Word processors and computers used in Japan employ a Japanese input method based on keyboard strokes and Kana (phonetic) to Kanji (ideographic, Chinese) character conversion technology. The key factor in Kana-to-Kanji conversion technology is how to raise the accuracy of the conversion through homophone processing, since Japanese has a great many homophonic Kanjis. In this paper, we report the results of our Kana-to-Kanji conversion experiments, which embody homophone processing based on large scale collocation data. It is shown that approximately 135,000 collocations yield a 9.1% rise in conversion accuracy compared with a prototype system which has no collocation data.

1. Introduction

Word processors and computers used in Japan ordinarily employ a Japanese input method based on keyboard strokes and Kana (phonetic) to Kanji (ideographic, Chinese) character conversion technology. The Kana-to-Kanji conversion is performed by morphological analysis of the input Kana string, which contains no spaces between words. Word- or phrase-segmentation is carried out by the analysis to identify the substrings of the input which have to be converted from Kana to Kanji. A Kana-Kanji mixed string, which is the ordinary form of Japanese written text, is obtained as the final result. The major issues for this technology lie in raising the accuracy of the segmentation and of the homophone processing which selects the correct Kanji from among many homophonic candidates.

The conventional methodology for processing homophones has used functions that give priority to the word which was used most recently, or to high-frequency words. In fact, however, this method sometimes tends to cause inadequate conversions, due to its lack of consideration of the semantic consistency of word concurrence. While it is difficult, for cost vs. performance reasons, to employ syntactic or semantic processing in earnest in a word processor, the following attempts to improve the conversion accuracy have been reported: employing case frames to check the semantic consistency of combinations of words [Oshima, Y. et al., 1986]; employing a neural network to describe the consistency of the concurrence of words [Kobayashi, T. et al., 1992]; and building a concurrence dictionary for a specific topic or field and giving priority to the words in the dictionary when the topic is identified [Yamamoto, K. et al., 1992]. In all of these studies, however, many problems remain unsolved on the way to a practical system.

Besides these semantic or quasi-semantic gadgets, we think it much more practical and effective to make extensive use of collocations. But how many collocations contribute to the accuracy of Kana-to-Kanji conversion has not been known. In this paper, we present some results of our experiments on Kana-to-Kanji conversion, focusing on the usage of large scale collocation data. In Chapter 2, descriptions of the collocations used in our system and their classification are given. In Chapter 3, the technological framework of our Kana-to-Kanji conversion systems is outlined. In Chapter 4, the method and the results of the experiments are given, along with some discussion. In Chapter 5, concluding remarks are given.

2. Collocation Data

Unlike recent works on the automatic extraction of collocations from corpora [Church, K. W. et al., 1990; Ikehara, S. et al., 1996; etc.], our data have been collected manually, through intensive investigation of various texts carried out over a period of years. This is because no stochastic framework assures the accuracy of the extraction, namely the necessity and sufficiency of the data set. The collocations used in our Kana-to-Kanji conversion system are of two kinds: (1) idiomatic expressions, whose meanings are difficult to compose from the typical meanings of the individual component words [Shudo, K. et al., 1988]; and (2) stereotypical expressions, in which the concurrence of the component words is seen in texts with high frequency. The collocations are also classified into two classes by a grammatical criterion: one is the class of functional collocations, which work as functional words such as particles (postpositionals) or auxiliary verbs; the other is the class of conceptual collocations, which work as nouns, verbs, adjectives, adverbs, etc. The latter is further divided into two kinds: uninterruptible collocations, whose word-concurrence relationships are so strong that they can be dealt with as single words, and interruptible collocations, which are occasionally used separately. In the following, the parenthesized numbers are the numbers of expressions adopted in the system.

2.1 Functional Collocations (2,174)

We call expressions which work like a particle relational collocations, and expressions which work like an auxiliary verb at the end of a predicate auxiliary predicative collocations [Shudo, K. et al., 1980].

relational collocations (760): e.g. ni/tuite (about)
auxiliary predicative collocations (1,414): e.g. nakereba/naranai (must)

2.2 Uninterruptible Conceptual Collocations (54,290)

four-Kanji compounds (2,231): e.g. gadeninsui (every miller draws water to his own mill)
adverb + particle type (3,089): e.g. atafuta/to (disconcertedly)
adverb + suru type (1,043): e.g. akuseku/suru (toil and moil)
noun type (21,128): e.g. akano/tanin (perfect stranger)
verb type (13,225): e.g. otsuriga/kuru (be enough to make the change)
adjective type (2,394): e.g. uraganashii (mournful)
adjective verb type (397): e.g. gokigen/naname (in a bad mood)
adverb and other types (8,185): e.g. meni/miete (remarkably)
proverb type (2,598): e.g. otteha/koni/shitagae (when old, obey your children)

2.3 Interruptible Conceptual Collocations (78,251)

noun type (7,627): e.g. akugyouno/mukui (fruit of an evil deed)
verb type (64,087): e.g. ushirogamiwo/hikareru (feel as if one's heart were left behind)
adjective type (3,617): e.g. taidoga/ookii (act in a lordly manner)
adjective verb type (2,018): e.g. yakushaga/ue (be more able)
others (902): e.g. atoni/hikenu (can not give up)

3. Kana-to-Kanji Conversion Systems

We developed four different Kana-to-Kanji conversion systems, phasing in the collocation data described in Chapter 2. The technological framework of the systems is based on the extended bunsetsu (e-bunsetsu) model [Shudo, K.
et al., 1980] for the unit of segmentation of the input Kana string, and on the minimum cost method [Yoshimura, K. et al., 1987] combined with Viterbi's algorithm [Viterbi, A. J., 1967] for reducing the ambiguity of the segmentation. A bunsetsu is the basic postpositional or predicative
System B is System A equipped addition- ally with functional collocations (2,174) and unin- terruptible conceptual collocations except for four- Kanji-compound and proverb type collocations (49,461). System C is System B equipped addition- ally with four-Kanji-compound (2,231) and proverb type collocations (2,598). Further, System D is System C equipped additionally with interruptible conceptual collocations (78,251). 4. Experiments 4.1 Text Data for Evaluation Prior to the experiments of Kana-to-Kanji conver- sion, we prepared a large volume of text data by hand which is formally a set of triples whose first component a is a Kana string (a sentence) with no space, The second component b is the correct seg- mentation result of a, indicating each boundary between bunsetsus with "/" or ".". '7" and .... means obligatory and optional boundary, respec- tively. The third component c is the correct conver- sion result of a, which is a Kana-Kanji mixed string. ex. { a: {S-;[9[s-[~7b~l,~-Ct,~To niwanibaragasaiteiru 696 (roses are in bloom in a garden) b: IZab)[7-/[~?~/~ [,~.(,~70 niwani/baraga/saite, iru c: I~I~.I#~#J~II~I,~T..I,x,'~ } The introduction of the optional boundary assures the flexible evaluation. For example, each ofl~lA "C/t,~ saite/iru (be in bloom) and I~I,~'CIA~ saiteiru is accepted as a correct result. The data fde is divided into two sub-files, fl and 12, depending on the number of bunsetsus in the Kana string a. fl has 10,733 triples, whose a has less than five bunsetsus and t2 has 12,192 triples, whose a has more than four bunsetsus. 4.2 Method of Evaluation Each a in the text data is fed to the conversion sys- tem. The system outputs two forms of the least cost result: b', Kana string segmented to bunsetsus by "/", and c', Kana-Kanji mixed string corresponding to b and c of the correct data, respectively. Each of the following three cases is counted for the evalua- tion. SS (Segmentation Success): b TM b CS (Complete Success): b TM b and ¢'= ¢ TS (Tolerative Success): b'= b and ¢'~ ¢ There are many kinds of notational fluctuation in Japanese. For example, the conjugational suffix of some kind of Japanese verb is not always necessi- tated, therefore,~l,,I I'{'f,~fi I'I'Y and ~.1: are all acceptable results for input ~ L)~ I~ uriage (sales). Besides, a single word has sometimes more than one Kanji notations, e.g. "~g hama (beach) and ;~ hama (beach) are both acceptable, and so on. c'- ¢ in the case of TS means that e' coincides with ¢ completely or excepting the part which is hetero- morphic in the above sense. For this, each of our conversion system has a dictionary which contains approximately 35,000 fluctuated notations of con- ceptual words. 4.3 Results of Experiments Results of the experiments are given in Table 1 and Table 2 for input file fl and 12, respectively. Comparing the statistics of system A with D, we can conclude that the introduction of approximately 135,000 collocation data causes 8.1% and 10.5 % raise of CS and TS rate, respectively, in case of re- latively short input strings (fl). The raise of SS rate for t"1 is 2.7%. In case of the longer input strings (t2) whose average number of bunsetsus is approxi- mately 12.6, the raise ofCS, TS and SS rate is 2.4 %, 5.2 % and 5.7 %, respectively. As a consequence, the raise ofCS, TS and SS rate is 6.2 %, 9.1% and 3.8 % on the average, respectively. 
      System A       System B       System C       System D
SS    9,656 (90.0%)  9,912 (92.4%)  9,927 (92.5%)  9,954 (92.7%)
CS    5,085 (47.4%)  5,638 (52.5%)  5,677 (52.9%)  5,953 (55.5%)
TS    6,226 (58.0%)  6,971 (64.9%)  7,024 (65.4%)  7,355 (68.5%)

Table 1: Results of the experiments for the 10,733 short input strings of f1 (average number of Kana characters per input: 13.7).

      System A       System B       System C       System D
SS    8,345 (68.4%)  8,978 (73.6%)  8,988 (73.7%)  9,037 (74.1%)
CS    2,422 (19.9%)  2,660 (21.8%)  2,673 (21.9%)  2,717 (22.3%)
TS    3,965 (32.5%)  4,555 (37.4%)  4,568 (37.5%)  4,601 (37.7%)

Table 2: Results of the experiments for the 12,192 long input strings of f2 (average number of Kana characters per input: 42.7).

      System D'      WXG
SS    9,949 (92.7%)  9,804 (91.3%)
CS    6,180 (57.6%)  5,877 (54.8%)
TS    7,646 (71.2%)  7,290 (67.9%)

Table 3: Comparison of System D' with WXG for f1.

      System D'      WXG
SS    8,928 (73.2%)  8,815 (72.3%)
CS    2,738 (22.5%)  2,694 (22.1%)
TS    4,649 (38.1%)  4,543 (37.3%)

Table 4: Comparison of System D' with WXG for f2.
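Before turning to the comparison with a commercial product, the least-cost search of Chapter 3 can be made concrete with a small dynamic-programming sketch. Everything below is our illustration: the romanized lexicon entries and costs are invented, the c-cost here depends only on the best path's previous segment (a faithful implementation would carry the previous segment's category in the search state), and the d-cost is omitted.

```python
import math

def least_cost_segmentation(kana, lexicon, concat_bonus, max_seg=30):
    """lexicon: segment -> b-cost contribution (standard value 2, +1 per affix).
    concat_bonus: (previous segment, segment) -> c-cost (e.g. -1)."""
    n = len(kana)
    best = [math.inf] * (n + 1)        # best[i]: least cost of kana[:i]
    back = [0] * (n + 1)               # back[i]: start of the last segment
    best[0] = 0.0
    for i in range(n):
        if math.isinf(best[i]):
            continue
        prev = kana[back[i]:i] if i > 0 else None
        for j in range(i + 1, min(n, i + max_seg) + 1):
            seg = kana[i:j]
            if seg not in lexicon:
                continue
            cost = best[i] + lexicon[seg] + concat_bonus.get((prev, seg), 0)
            if cost < best[j]:
                best[j], back[j] = cost, i
    segs, i = [], n                    # read the least-cost path back
    while i > 0:
        segs.append(kana[back[i]:i])
        i = back[i]
    return best[n], segs[::-1]

lexicon = {'hitoha': 2, 'kiga': 2, 'kiku': 2, 'nikosita': 2, 'kotoha': 2,
           'arimasen': 2, 'kigakikunikositakotohaarimasen': 2}
print(least_cost_segmentation('hitohakigakikunikositakotohaarimasen',
                              lexicon, concat_bonus={}))
# (4.0, ['hitoha', 'kigakikunikositakotohaarimasen']): the reading that
# keeps the collocations as single segments beats the six-segment path (12)
```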
Among collocations described in this paper, the idiomatic expressions are quite burdensome in the developmera of NLP, since thW do not follow the principle of composi- lionality of the memaing Generally speaking the more extensive collocational d__~___ it deals with, the less the "rule syst~n" of the rule based NLP system is burdened. This means the great importance of the enrichment of collocalional data Whereas it is inevitable that the ~oi- awiness lies in the human judgment and selection of collocations, we believe that our collocation rl~ is far more refined than the automalicany extracted one from corpora which has been recently reported [Church, K. W. etal, 1990, Ikeham, S. etal, 1996, etc.]. We believe that the approach descrlqxxi here is important for the evolution of NLP product in general as well. References Shudo, K. et ~, 1980. Morphological Aspect of Japanese Language Processing, in Proc. of 8 th Int~a,-~Con£ on Comps_ __a~__'onal Linguistics(COLING80) Oshima, Y. et al., 1986. A Disarnbiguation Method in Kana-to-Kanji Conversion Using Case Frame Gram- rn,'~, in Trans. oflPSJ, 27-7. (in Japanese) Kobayashi, T. et al. ,1986. RealiTation of Kana-to-Kanji Conversion Using Neural Networks. in Toshiba Review, 47-11. (in J~anese) Yoshimura, K. et a1.,1987. Morphological Analysis of Ja- panese S~tences using the Least Cost Metho~ in IPSJ SIG NL.60. (in J nese) Shudo, K. et al. ,1988. On the Idiomatic Expressions in Japanese Language. in IPSJ SIG NL-66. (in Japanese) Church, K.W. et al, 1990. Word Association Norms, Mutual Information, and Lexicography. in Comput- ational Linguistics, 16. Yamamoto, K. et al. ,1992. Kana-to-Kanji Conversion Using Co-occtm~ce Groups. in Proc. of44th Con£ of IPSJ. (in Japanese) Ikehara, S. et al., 1996. A Statistical Method for Extracting Uninterrupted and Interrupted Collocations l~om Very Large Corpora_ in Proc. of 16th Internat. Conf. on Computational Linguistics (COLING 96) Viterbi,A.,J., 1967,F_gor Bounds for Convolutional Codes and an Asymptotically Optimal Decoding Algorithm. in ~ Trans. on Infommfion Theory 13. 698
Compacting the Penn Treebank Grammar

Alexander Krotov, Mark Hepple, Robert Gaizauskas and Yorick Wilks
Department of Computer Science, Sheffield University
211 Portobello Street, Sheffield S1 4DP, UK
{alexk, hepple, robertg, yorick}@dcs.shef.ac.uk

Abstract
Treebanks, such as the Penn Treebank (PTB), offer a simple approach to obtaining a broad-coverage grammar: one can simply read the grammar off the parse trees in the treebank. While such a grammar is easy to obtain, a square-root rate of growth of the rule set with corpus size suggests that the derived grammar is far from complete and that much more treebanked text would be required to obtain a complete grammar, if one exists at some limit. However, we offer an alternative explanation in terms of the underspecification of structures within the treebank. This hypothesis is explored by applying an algorithm to compact the derived grammar by eliminating redundant rules, that is, rules whose right-hand sides can be parsed by other rules. The size of the resulting compacted grammar, which is significantly less than that of the full treebank grammar, is shown to approach a limit. However, such a compacted grammar does not yield very good performance figures. A version of the compaction algorithm taking rule probabilities into account is proposed, which is argued to be more linguistically motivated. Combined with simple thresholding, this method can be used to give a 58% reduction in grammar size without significant change in parsing performance, and can produce a 69% reduction with some gain in recall, but a loss in precision.

1 Introduction
The Penn Treebank (PTB) (Marcus et al., 1994) has been used for a rather simple approach to deriving large grammars automatically: one where the grammar rules are simply 'read off' the parse trees in the corpus, with each local subtree providing the left- and right-hand sides of a rule. Charniak (Charniak, 1996) reports precision and recall figures of around 80% for a parser employing such a grammar. In this paper we show that the huge size of such a treebank grammar (see below) can be reduced in size without appreciable loss in performance, and that, in fact, an improvement in recall can be achieved.

Our approach can be generalised in terms of Data-Oriented Parsing (DOP) methods (see (Bonnema et al., 1997)) with a tree depth of 1. However, the number of trees produced with a general DOP method is so large that Bonnema (Bonnema et al., 1997) has to resort to restricting the tree depth, using a very domain-specific corpus such as ATIS or OVIS, and parsing very short sentences of average length 4.74 words. Our compaction algorithm can easily be extended for use within the DOP framework but, because of the huge size of the derived grammar (see below), we chose to use the simplest PCFG framework for our experiments.

We are concerned with the nature of the rule set extracted, and how it can be improved, with regard both to linguistic criteria and processing efficiency. In what follows, we report the worrying observation that the growth of the rule set continues at a square-root rate throughout processing of the entire treebank (suggesting, perhaps, that the rule set is far from complete). Our results are similar to those reported in (Krotov et al., 1994).[1]
We discuss an alternative possible source of this rule-growth phenomenon, partial bracketing, and suggest that it can be alleviated by compaction, where rules that are redundant (in a sense to be defined) are eliminated from the grammar.

[1] For the complete investigation of the grammar extracted from the Penn Treebank II, see (Gaizauskas, 1995).

Figure 1: Rule Set Growth for Penn Treebank II (number of rules, 0 to 20,000, against percentage of the corpus processed).

Our experiments on compacting a PTB treebank grammar resulted in two major findings: one, that the grammar can be compacted to about 7% of its original size, and the rule-number growth of the compacted grammar stops at some point. The other is that a 58% reduction can be achieved with no loss in parsing performance, whereas a 69% reduction yields a gain in recall, but a loss in precision.

This, we believe, gives further support to the utility of treebank grammars and to the compaction method. For example, compaction methods can be applied within the DOP framework to reduce the number of trees. Also, by partially lexicalising the rule extraction process (i.e., by using some more frequent words as well as the part-of-speech tags), we may be able to achieve parsing performance similar to the best results in the field obtained in (Collins, 1996).

2 Growth of the Rule Set
One could investigate whether there is a finite grammar that should account for any text within a class of related texts (i.e. a domain-oriented sub-grammar of English). If there is, the number of extracted rules will approach a limit as more sentences are processed, i.e. as the rule number approaches the size of such an underlying and finite grammar.

We had hoped that some approach to a limit would be seen using PTB II (Marcus et al., 1994), which is larger and more consistent in its bracketing than PTB I. As shown in Figure 1, however, the rule-number growth continues unabated even after more than 1 million part-of-speech tokens have been processed.

3 Rule Growth and Partial Bracketing
Why should the set of rules continue to grow in this way? Putting aside the possibility that natural languages do not have finite rule sets, we can think of two possible answers. First, it may be that the full "underlying grammar" is much larger than the rule set that has so far been produced, requiring a much larger treebanked corpus than is now available for its extraction. If this were true, then the outlook would be bleak for achieving near-complete grammars from treebanks, given the resource demands of producing hand-parsed text. However, the radical incompleteness of grammar that this alternative implies seems incompatible with the promising parsing results that Charniak reports (Charniak, 1996).

A second answer is suggested by the presence in the extracted grammar of rules such as (1).[2] This rule is suspicious from a linguistic point of view, and we would expect that the text from which it has been extracted should more properly have been analysed using rules (2,3), i.e. as a coordination of two simpler NPs.

  NP -> DT NN CC DT NN   (1)
  NP -> NP CC NP         (2)
  NP -> DT NN            (3)

Our suspicion is that this example reflects a widespread phenomenon of partial bracketing within the PTB.
Such partial bracketing will arise during the hand-parsing of texts, with (human) parsers adding brackets where they are confident that some string forms a given constituent, but leaving out many brackets where they are less confident of the constituent structure of the text. This will mean that many rules extracted from the corpus will be 'flatter' than they should be, corresponding properly to what should be the result of using several grammar rules, showing only the top node and leaf nodes of some unspecified tree structure (where the 'leaf nodes' here are category symbols, which may be nonterminal). For the example above, a tree structure that should properly have been given as (4) has instead received only the partial analysis (5), from the flatter 'partial-structure' rule (1).

[2] PTB POS tags are used here, i.e. DT for determiner, CC for coordinating conjunction (e.g. 'and'), NN for noun.

  i.  (NP (NP DT NN) CC (NP DT NN))   (4)
  ii. (NP DT NN CC DT NN)             (5)

4 Grammar Compaction
The idea of partiality of structure in treebanks and their grammars suggests a route by which treebank grammars may be reduced in size, or compacted as we shall call it, by the elimination of partial-structure rules. A rule that may be eliminable as a partial-structure rule is one that can be 'parsed' (in the familiar sense of context-free parsing) using other rules of the grammar. For example, the rule (1) can be parsed using the rules (2,3), as the structure (4) demonstrates. Note that, although a partial-structure rule should be parsable using other rules, it does not follow that every rule which is so parsable is a partial-structure rule that should be eliminated. There may be defensible rules which can be parsed. This is a topic to which we will return at the end of the paper (Sec. 6). For most of what follows, however, we take the simpler path of assuming that the parsability of a rule is not only necessary, but also sufficient, for its elimination.

Rules which can be parsed using other rules in the grammar are redundant in the sense that eliminating such a rule will never have the effect of making a sentence unparsable that could previously be parsed.[3]

[3] Thus, wherever a sentence has a parse P that employs the parsable rule R, it also has a further parse that is just like P except that any use of R is replaced by a more complex substructure, i.e. a parse of R.

The algorithm we use for compacting a grammar is straightforward. A loop is followed whereby each rule R in the grammar is addressed in turn. If R can be parsed using other rules (which have not already been eliminated), then R is deleted (and the grammar without R is used for parsing further rules). Otherwise R is kept in the grammar. The rules that remain when all rules have been checked constitute the compacted grammar.

An interesting question is whether the result of compaction is independent of the order in which the rules are addressed. In general, this is not the case, as is shown by the following rules, of which (8) and (9) can each be used to parse the other, so that whichever is addressed first will be eliminated, whilst the other will remain.

  B -> C     (6)
  C -> B     (7)
  A -> B B   (8)
  A -> C C   (9)

Order-independence can be shown to hold for grammars that contain no unary or epsilon ('empty') rules, i.e. rules whose right-hand sides have one or zero elements. The grammar that we have extracted from PTB II, and which is used in the compaction experiments reported in the next section, is one that excludes such rules.
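To make the loop concrete, here is a minimal sketch in code (our own illustration, not the authors' implementation). A rule is a pair (lhs, rhs), with rhs a tuple of category symbols; unary and epsilon rules are assumed absent, as in the extracted grammar.

    def covers(body, i, j, chart):
        # Can the category sequence `body` tile the span rhs[i..j]?
        if not body:
            return i > j
        return any(body[0] in chart[i][k]
                   and covers(body[1:], k + 1, j, chart)
                   for k in range(i, j + 1))

    def parsable(rule, rules):
        # Ordinary bottom-up context-free parsing of rule's rhs, using
        # only the other rules; True if the rhs derives rule's own lhs.
        lhs, rhs = rule
        n = len(rhs)
        chart = [[set() for _ in range(n)] for _ in range(n)]
        for i, sym in enumerate(rhs):
            chart[i][i].add(sym)
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span - 1
                for cat, body in rules:
                    if covers(body, i, j, chart):
                        chart[i][j].add(cat)
        return lhs in chart[0][n - 1]

    def compact(grammar):
        rules, kept = list(grammar), []
        for i, rule in enumerate(rules):
            # Available rules: those already kept plus the not-yet-visited
            # ones; rules already eliminated stay eliminated.
            if not parsable(rule, kept + rules[i + 1:]):
                kept.append(rule)
            # otherwise the rule is parsable by the others and is deleted
        return kept

    # The partial-structure rule (1) is parsed by (2) and (3) and so
    # eliminated, while (2) and (3) survive:
    print(compact([('NP', ('DT', 'NN', 'CC', 'DT', 'NN')),
                   ('NP', ('NP', 'CC', 'NP')),
                   ('NP', ('DT', 'NN'))]))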
For further discussion, and for the proof of the order independence, see (Krotov, 1998). Unary and sister rules were collapsed with the sister nodes, e.g. the structure (S (NP -NULL-) (VP VB (NP (QP ...))) .) will produce the following rules: S -> VP ., VP -> VB QP and QP -> ... .[4]

[4] See (Gaizauskas, 1995) for discussion.

5 Experiments
We conducted a number of compaction experiments.[5] First, the complete grammar was parsed as described in Section 4. Results exceeded our expectations: the set of 17,529 rules reduced to only 1,667 rules, a better than 90% reduction.

[5] For these experiments, we used two parsers: Stolcke's BOOGIE (Stolcke, 1995) and Sekine's Apple Pie Parser (Sekine and Grishman, 1995).

To investigate in more detail how the compacted grammar grows, we conducted a third experiment involving a staged compaction of the grammar. Firstly, the corpus was split into 10% chunks (by number of files) and the rule sets extracted from each. The staged compaction proceeded as follows: the rule set of the first 10% was compacted, and then the rules for the next 10% added and the resulting set again compacted, and then the rules for the next 10% added, and so on. Results of this experiment are shown in Figure 2.

Figure 2: Compacted Grammar Size (number of rules, 0 to 2,000, against percentage of the corpus processed).

At 50% of the corpus processed, the compacted grammar size actually exceeds the level it reaches at 100%, and then the overall grammar size starts to go down as well as up. This reflects the fact that new rules are either redundant, or make "old" rules redundant, so that the compacted grammar size seems to approach a limit.

6 Retaining Linguistically Valid Rules
Even though parsable rules are redundant in the sense that has been defined above, it does not follow that they should always be removed. In particular, there are times where the flatter structure allowed by some rule may be more linguistically correct, rather than simply a case of partial bracketing. Consider, for example, the (linguistically plausible) rules (10,11,12). Rules (11) and (12) can be used to parse (10), but it should not be eliminated, as there are cases where the flatter structure it allows is more linguistically correct.

  VP -> VB NP PP   (10)
  VP -> VB NP      (11)
  NP -> NP PP      (12)

  i.  (VP VB NP PP)    ii. (VP VB (NP NP PP))   (13)

We believe that a solution to this problem can be found by exploiting the data provided by the corpus. Frequency-of-occurrence data for rules can be collected from the corpus and used to assign probabilities to rules, and hence to the structures they allow, so as to produce a probabilistic context-free grammar for the rules. Where a parsable rule is correct rather than merely partially bracketed, we then expect this fact to be reflected in rule and parse probabilities (reflecting the occurrence data of the corpus), which can be used to decide when a rule that may be eliminated should be eliminated. In particular, a rule should be eliminated only when the more complex structure allowed by other rules is more probable than the simpler structure that the rule itself allows.

We developed a linguistic compaction algorithm employing the ideas just described. However, we cannot present it here due to space limitations. The preliminary results of our experiments are presented in Table 1. Simple thresholding (removing rules that only occur once) was also used, to achieve the maximum compaction ratio. For labelled as well as unlabelled evaluation of the resulting parse trees we used the evalb software by Satoshi Sekine. See (Krotov, 1998) for the complete presentation of our methodology and results.

As one can see, the fully compacted grammar yields poor recall and precision figures. This can be because collapsing of the rules often produces too much substructure (hence lower precision figures) and also because many longer rules in fact encode valid linguistic information. However, linguistic compaction combined with simple thresholding achieves a 58% reduction without any loss in performance, and a 69% reduction even yields higher recall.

                          Full     Simply       Fully       Linguistically compacted
                                   thresholded  compacted   Grammar 1    Grammar 2
  Labelled evaluation
    Recall                70.55%   70.78%       30.93%      71.55%       70.76%
    Precision             77.89%   77.66%       19.18%      72.19%       77.21%
  Unlabelled evaluation
    Recall                73.49%   73.71%       43.61%      74.72%       73.67%
    Precision             81.44%   80.87%       27.04%      75.39%       80.39%
  Grammar size            15,421   7,278        1,122       4,820        6,417
  Reduction (% of full)   0%       53%          93%         69%          58%

Table 1: Preliminary results of evaluating the grammar compaction method.

7 Conclusions
We see the principal results of our work to be the following:
• the result showing continued square-root growth in the rule set extracted from the PTB II;
• the analysis of the source of this continued growth in terms of partial bracketing and the justification this provides for compaction via rule-parsing;
• the result that the compacted rule set does approach a limit at some point during staged rule extraction and compaction, after a sufficient amount of input has been processed;
• that, though the fully compacted grammar produces lower parsing performance than the extracted grammar, a 58% reduction (without loss) can still be achieved by using linguistic compaction, and a 69% reduction yields a gain in recall, but a loss in precision.

The latter result in particular provides further support for the possible future utility of the compaction algorithm. Our method is similar to that used by Shirai (Shirai et al., 1995), but the principal differences are as follows. First, their algorithm does not employ full context-free parsing in determining the redundancy of rules, considering instead only direct composition of the rules (so that only parses of depth 2 are addressed). We proved that the result of compaction is independent of the order in which the rules in the grammar are parsed in those cases involving 'mutual parsability' (discussed in Section 4), but Shirai's algorithm will eliminate both rules, so that coverage is lost. Secondly, it is not clear that compaction will work in the same way for English as it did for Japanese.

References
Remko Bonnema, Rens Bod, and Remko Scha. 1997. A DOP model for semantic interpretation. In Proceedings of the European Chapter of the ACL, pages 159-167.
Eugene Charniak. 1996. Tree-bank grammars. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), pages 1031-1036. MIT Press, August.
Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the ACL.
Robert Gaizauskas. 1995. Investigations into the grammar underlying the Penn Treebank II. Research Memorandum CS-95-25, University of Sheffield.
Alexander Krotov, Robert Gaizauskas, and Yorick Wilks. 1994. Acquiring a stochastic context-free grammar from the Penn Treebank. In Proceedings of the Third Conference on the Cognitive Science of Natural Language Processing, pages 79-86, Dublin.
Alexander Krotov. 1998. Notes on compacting the Penn Treebank grammar. Technical Memo, Department of Computer Science, University of Sheffield.
M. Marcus, G. Kim, M. A. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. In Proceedings of the ARPA Speech and Natural Language Workshop.
Satoshi Sekine and Ralph Grishman. 1995. A corpus-based probabilistic grammar with only two non-terminals. In Proceedings of the Fourth International Workshop on Parsing Technologies.
Kiyoaki Shirai, Takenobu Tokunaga, and Hozumi Tanaka. 1995. Automatic extraction of Japanese grammar from a bracketed corpus. In Proceedings of the Natural Language Processing Pacific Rim Symposium, Korea, December.
Andreas Stolcke. 1995. An efficient probabilistic context-free parsing algorithm that computes prefix probabilities. Computational Linguistics, 21(2):165-201.
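The linguistic compaction algorithm itself is omitted above for space. Purely as an illustration of the criterion described in Section 6, that a parsable rule should be eliminated only when the structure built by other rules is more probable than the flat structure the rule licenses, one might sketch it as follows; everything here (names, record formats) is our own assumption, not the authors' code.

    import math
    from collections import defaultdict

    # Rule probabilities by relative frequency from treebank counts:
    # P(lhs -> rhs) = count(lhs -> rhs) / count(lhs).
    def pcfg_logprobs(rule_counts):
        totals = defaultdict(int)
        for (lhs, rhs), c in rule_counts.items():
            totals[lhs] += c
        return {r: math.log(c / totals[r[0]])
                for r, c in rule_counts.items()}

    def should_eliminate(rule, logprob, best_parse_logprob):
        """best_parse_logprob: log-probability of the most probable parse
        of the rule's right-hand side by the remaining rules (None if the
        rhs is not parsable at all)."""
        if best_parse_logprob is None:
            return False                 # not redundant: keep the rule
        return best_parse_logprob > logprob[rule]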
Generation that Exploits Corpus-Based Statistical Knowledge

Irene Langkilde and Kevin Knight
Information Sciences Institute, University of Southern California
Marina del Rey, CA 90292
ilangkil@isi.edu and knight@isi.edu

Abstract
We describe novel aspects of a new natural language generator called Nitrogen. This generator has a highly flexible input representation that allows a spectrum of input from syntactic to semantic depth, and shifts the burden of many linguistic decisions to the statistical post-processor. The generation algorithm is compositional, making it efficient, yet it also handles non-compositional aspects of language. Nitrogen's design makes it robust and scalable, operating with lexicons and knowledge bases of one hundred thousand entities.

1 Introduction
Language generation is an important subtask of applications like machine translation, human-computer dialogue, explanation, and summarization. The recurring need for generation suggests the usefulness of a general-purpose, domain-independent natural language generator (NLG). However, "plug-in" generators available today, such as FUF/SURGE (Elhadad and Robin, 1998), MUMBLE (Meteer et al., 1987), KPML (Bateman, 1996), and CoGenTex's RealPro (Lavoie and Rambow, 1997), require inputs with a daunting amount of linguistic detail. As a result, many client applications resort instead to simpler template-based methods.

An important advantage of templates is that they sidestep linguistic decision-making, and avoid the need for large, complex knowledge resources and processing. For example, the following structure could be a typical result from a database query on the type of food a venue serves:

  ((:obj-type venue) (:obj-name Top_of_the_Mark)
   (:attribute food-type) (:attrib-value American))

By using a template like

  <obj-name>'s <attribute> is <attrib-value>.

the structure could produce the sentence "Top of the Mark's food type is American."

Templates avoid the need for detailed linguistic information about lexical items, part-of-speech tags, number, gender, definiteness, tense, sentence organization, sub-categorization structure, semantic relations, etc., that more general NLG methods need to have specified in the input (or supply defaults for). Such information is usually not readily inferrable from an application's database, nor is it always readily available from other sources, with the breadth of coverage or level of detail that is needed. Thus, using a general-purpose generator can be formidable (Reiter, 1995). However, templates only work in very controlled or limited situations. They cannot provide the expressiveness, flexibility or scalability that many real domains need.

A desirable solution is a generator that abstracts away from templates enough to provide the needed flexibility and scalability, and yet still requires only minimal semantic input (and maintains reasonable efficiency). This generator would take on the responsibility of finding an appropriate linguistic realization for an underspecified semantic input. This solution is especially important in the context of machine translation, where the surface syntactic organization of the source text is usually different from that of the target language, and the deep semantics are often difficult to obtain or represent completely as well. In Japanese to English translation, for example, it is often hard to determine from a Japanese text the number or gender of a noun phrase, the English equivalent of a verb tense, or the deep semantic meaning of sentential arguments. There are many other obvious syntactic divergences as well.

Thus, shifting such linguistic decisions to the generator is significantly helpful for client applications. However, at the same time, it imposes enormous needs for knowledge on the generator program. Traditional large-scale NLG already requires immense amounts of knowledge, as does any large-scale AI enterprise. NLG operating on a scale of 200,000 entities (concepts, relations, and words) requires large and sophisticated lexicons, grammars, ontologies, collocation lists, and morphological tables. Acquiring and applying accurate, detailed knowledge of this breadth poses difficult problems.

Figure 1: Combining Symbolic and Statistical Knowledge in a Natural Language Generator (Knight and Hatzivassiloglou, 1995). A meaning is mapped by a symbolic generator, drawing on a lexicon and grammar, to a word lattice of possible renderings, from which a corpus-trained statistical extractor selects the output English string.

(Knight and Hatzivassiloglou, 1995) suggested overcoming this knowledge acquisition bottleneck in NLG by tapping the vast knowledge inherent in English text corpora. Experiments showed that corpus-based knowledge greatly reduced the need for deep, hand-crafted knowledge. This knowledge, in the form of n-gram (word-pair) frequencies, could be applied to a set of semantically related sentences to help sort good ones from bad ones. A corpus-based statistical ranker takes a set of sentences packed efficiently into a word lattice (a state-transition diagram with links labeled by English words), and extracts the best path from the lattice as output, preferring fluent sentences over contorted ones. A generator can take advantage of this by producing a lattice that encodes various alternative possibilities when the information needed to make a linguistic decision is not available.

Such a system organization, shown in Figure 1, is robust against underspecified and even ambiguous input meaning structures. Traditionally, underspecification is handled with rigid defaults (e.g., assume present tense, use the alphabetically-first synonyms, use nominal arguments, etc.). However, the word lattice structure permits all the different possibilities to be encoded as different phrasings, and the corpus-based statistical extractor can select a good sentence from these possibilities.

The questions that still remain are: What kind of input representation is minimally necessary? What kinds of linguistic decisions can the statistics reliably make, and which instead need to be made symbolically? How should symbolic knowledge be applied to the input to efficiently produce word lattices from the input?

This paper describes Nitrogen, a generation system that computes word lattices from a meaning representation to take advantage of corpus-based statistical knowledge. Nitrogen performs sentence realization and some components of sentence planning, namely, mapping domain concepts to content words, and to some extent, mapping semantic relations to grammatical ones. It contributes:
• A flexible input representation based on conceptual meanings and the relations between them.
• A new grammar formalism for defining the mapping of meanings onto word lattices.
• A new efficient algorithm to do this mapping.
• A large grammar, lexicon, and morphology of English, addressing linguistic phenomena such as knowledge acquisition bottlenecks and underspecified/ambiguous input.

This paper is organized as follows. First, we describe our Abstract Meaning Representation language (AMR). Then we outline the generation algorithm and describe how various knowledge sources apply to render an AMR into English, including lexical, morphological, and grammatical knowledge bases. We describe the structure of these knowledge bases and give examples. We also present a technique that adds powerful flexibility to the grammar formalism. We finish with a discussion of the strengths and weaknesses of our generation system.

2 Abstract Meaning Representation
The AMR language is composed of concepts from the SENSUS knowledge base (Knight and Luk, 1994), including all of WordNet 1.5 (Miller, 1990), and keywords relating these concepts to each other.[1] An AMR is a labeled directed graph, or feature structure, derived from the PENMAN Sentence Plan Language (Penman, 1989). The most basic AMR is of the form (label / concept), e.g.:[2]

  (m1 / |dog<canid|)

The slash is shorthand for a type (or instance) feature, and in logic notation this AMR might be written as instance(m1, dog). This AMR can represent "the dog," "the dogs," "a dog," or "dog," etc. A concept can be modified using keywords:

  (m2 / |dog<canid| :quant plural)

[1] Strings can be used in place of concepts. If the string is not a recognized word/phrase, then the generator will add this ambiguity to the word lattice for the statistical extractor to resolve by proposing all possible part-of-speech tags. We prefer to use concepts because they make the AMR more language-independent, and enable semantic reasoning and inference.
[2] Concept names appear between vertical bars. We use a set of short, unique concept names derived from the structure of WordNet by Jonathan Graehl, available from http://www.isi.edu/natural-language/GAZELLE.html

This narrows the meaning to "the dogs," or "dogs." Concepts can be associated with each other in a nested fashion to form more complex meanings. These relations between conceptual meanings are also expressed through keywords. It is through them that our formalism exhibits an appealing flexibility. A client has the freedom to express the relations at various semantic and syntactic levels, using whichever level of representation is most convenient.[3] We have currently implemented shallow semantic versions of roles such as :agent, :patient, :sayer, :sensor, etc., as well as deep syntactic roles such as :oblique1, :oblique2, and :oblique3 (which correspond to deep subject, object, and indirect object respectively, and serve as an abstraction for passive versus active voice), and the straightforward syntactic roles :subject, :direct-object, :indirect-object, etc. We explain further how this is implemented later in the paper.

[3] This flexibility has another advantage from a research point of view. We consider the appropriate level of abstraction an important problem in interlingua-style machine translation. The flexibility of this representation allows us to experiment with various levels of abstraction without changing the underlying system. Further, it has opened up to us the possibility of implementing interlingua-based semantic transfer, where the interlingua serves as the transfer mechanism, rather than being a single, fixed peak point of abstraction.

Below is an example of a slightly more complex meaning. The root concept is eating, and it has an agent and a patient, which are dogs and a bone (or bones), respectively.

  (m3 / |eat,take in|
      :agent (m4 / |dog<canid| :quant plural)
      :patient (m5 / |os,bone|))

Possible output includes "The dogs ate the bone," "Dogs will eat a bone," "The dogs eat bones," "Dogs eat bone," and "The bones were eaten by dogs."

3 Lexical Knowledge
The Sensus concept ontology is mapped to an English lexicon that is consulted to find words for expressing the concepts in an AMR. The lexicon is a list of 110,000 tuples of the form:

  (<word> <part-of-speech> <rank> <concept>)

Examples:

  (("eat" VERB 1 |eat,take in|)
   ("eat" VERB 2 |eat>eat lunch|)
   ...)

The <rank> field orders the concepts by sense frequency for the given word, with a lower number signifying a more frequent sense.

Like other types of knowledge used in Nitrogen, the lexicon is very simple. It contains no information about features like transitivity, sub-categorization, gradability (for adjectives), or countability (for nouns), etc. Such features are needed in other generators to produce correct grammatical constructions. Our statistical post-processor instead more softly (and robustly) ranks different grammatical realizations according to their likelihood.

At the lexical level, several important issues in word choice arise. WordNet maps a concept to one or more synonyms. However, some words may be less appropriate than others, or may actually be misleading in certain contexts. An example is the concept |sell<cozen|, to which the lexicon maps the words "betray" and "sell." However, it is not very common to use the word "sell" in the sense of "A traitor sells out on his friends." In the sentence "I cannot |sell<cozen| their trust" the word "sell" is misleading, or at least sounds very strange; "betray" is more appropriate.

This word choice problem occurs frequently, and we deal with it by taking advantage of the word-sense rankings that the lexicon offers. According to the lexicon, the concept |sell<cozen| expresses the second most frequent sense of the word "betray," but only the sixth most frequent sense of the word "sell." To minimize the lexical choice problem, we have adopted a policy of rejecting words whose primary sense is not the given concept when better words are available.[4]

[4] A better "soft" technique would be to accept all words returned by the lexicon for a given concept, but associate with each word a preference score using a method such as Bayes' Rule and probabilities computed from a corpus such as SEMCOR, allowing the statistical extractor to choose the best alternative. We plan to implement this in the future.

Another issue in word choice relates to the broader issue of preserving ambiguities in MT. In source language analysis, it is often difficult to determine which concept is intended by a certain word. The AMR allows several concepts to be listed together in a disjunction. For example,

  (m6 / (*OR* |sell<cozen| |cheat on| |bewray| |betray,fail| |rat on|))

The lexical lookup will attempt to preserve the ambiguity of this *OR*. If it happens that several or all of the concepts in a disjunction can be expressed using the same word, then the lookup will return only that word or words in preference to the other possibilities. For the example above, the lookup returns only the word "betray." This also reduces the complexity of the final sentence lattices.

4 Morphological Knowledge
The lexicon contains words in their root form, so morphological inflections must be generated. The system also performs derivational morphology, such as adjective -> noun and noun -> verb (ex: "translation" -> "translate"), to give the generator more syntactic flexibility in expressing complex AMRs. This flexibility ensures that the generator can find a way to express a complex meaning represented by nested AMRs, but is also useful for solving problems of syntactic divergence in MT.

Both kinds of morphology are handled the same way. Rules and exception tables are merged into a single, concise knowledge base. Here, for example, is a portion of the table for pluralizing nouns:

  ("-child" "children")
  ("-person" "people" "persons")
  ("-a" "as" "ae")        ; formulas/formulae
  ("-x" "xes" "xen")      ; boxes/oxen
  ("-man" "mans" "men")   ; humans/footmen
  ("-Co" "os" "oes")

The last line means: if a noun ends in a consonant followed by "-o," then we compute two plural forms, one ending in "-os" and one ending in "-oes," and put both possibilities in the word lattice for the post-generation statistical extractor to choose between later. Deciding between these usually requires a large word list. However, the statistical extractor already has a strong preference for "photos" and "potatoes" over "photoes" and "potatos," so we do not need to create such a list. Here again corpus-based statistical knowledge greatly simplifies the task of symbolic generation.

Derivational morphology raises the issue of meaning shift between different part-of-speech forms (such as "depart" -> "departure"/"department"). Errors of this kind are infrequent, and are corrected in the morphology tables.

5 Generation Algorithm
An AMR is transformed into word lattices by the keyword-based grammar rules described in Section 7. By contrast, other generators organize their grammar rules around syntactic categories. A keyword-based organization helps achieve simplicity in the input specification, since syntactic information is not required from a client. This simplification can make Nitrogen more readily usable by client applications that are not inherently linguistically oriented. The decisions about how to syntactically realize a given meaning can be left largely up to the generator.

The top-level keywords of an AMR are used to match it with a rule (or rules). The algorithm is compositional, avoiding a combinatorial explosion in the number of rules needed for the various keyword combinations. A matching rule splits the AMR apart, associating a sub-AMR with each keyword, and lumping the relations left over into a sub-AMR under the :rest role using the same root as the original AMR. Each sub-AMR is itself recursively matched against the keyword rules, until the recursion bottoms out at a basic AMR, which matches with the instance rule.

Lexical and morphological knowledge is used to build the initial word lattices associated with a concept when the recursion bottoms out. Then the instance rule builds basic noun and verb groups from these, as well as basic word lattices for other syntactic categories. As the algorithm climbs out of the recursion, each rule concatenates together lattices for each of the sub-AMRs to form longer phrases. The rhs specifies the needed syntactic category for each sub-lattice and the surface order of the concatenation, as well as the syntactic category for the new resulting lattice. Concatenation is performed by attaching the end state of one sub-lattice to the start state of the next. Upon emerging from the top-level rule, the lattice with the desired syntactic category, by default S (sentence), is selected and handed to the statistical extractor for ranking.

The next sections describe further how lexical and morphological knowledge are used to build the initial word lattices, how underspecification is handled, and how the grammar is encoded.

6 The Instance Rule
The instance rule is the most basic rule, since it is applied to every concept in the AMR. This rule builds the initial word lattices for each lexical item and for basic noun and verb groups. Each concept in the AMR is eventually handed to the instance rule, where word lattices are constructed for all available parts of speech.

The relational keywords that apply at the instance level are :polarity, :quant, :tense, and :modal. In cases where a meaning is underspecified and does not include these keywords, the instance rule uses a recasting mechanism (described below) to add some of them. If not specified, the system assumes positive polarity, both singular and plural quantities, all possible time frames, and no modality.

Japanese nouns are often ambiguous with respect to number, so generating both singular and plural possibilities and allowing the statistical extractor to choose the best one results in better translation quality than rigidly choosing a single default as traditional generation systems do. Allowing number to be unspecified in the input is also useful for general English generation as well. There are many instances when the number of a noun is dictated more by usage convention or grammatical constraint than by semantic content. For example, "The company has (a plan/plans) to establish itself in February," or "This child won't eat any carrots" ("carrots" must be plural by grammatical constraint). It is easier for a client program if the input is not required to specify number in these cases, but is allowed to rely on the statistical extractor to supply the best one.

In translation, there is frequently no direct correspondence between tenses of different languages, so in Nitrogen tense can be coarsely specified as either past, present, or future, but need not be specified at all. If not specified, Nitrogen generates lattices for the most common English tenses, and allows the statistical extractor to choose the most likely one.

The instance rule is factored into several sub-instance rules with three main categories: nouns, verbs, and miscellaneous. The noun instance rules are further subdivided into two rules, one for plural noun phrases, and the other for singular. The verb instance rules are factored into two categories relating to modality and tense.

Polarity can apply across all three main instance categories (noun, verb, and other), but only affects the level it appears in. When applied to nouns or adjectives, the result is "non-" prepended to the word, which conveys the general intention, but is not usually very grammatical. Negative polarity is usually most fluently expressed in the verb rules with the word "not," e.g., "does not eat."[5]

[5] We plan to generate more fluent expressions for negative polarity on nouns and adjectives, for example, "unhappy" instead of "non-happy."

7 Grammar Formalism
The grammatical specifications in the keyword rules constitute the main formalism of the generation system. The rules map semantic and syntactic roles to grammatical word lattices. These roles include: :agent, :patient, :domain, :range, :source, :destination, :spatial-locating, :temporal-locating, :accompanier; :oblique1, :oblique2, :oblique3; :subject, :object, :mod, etc.

A simplified version of the rule that applies to an AMR with :agent and :patient roles is:

  ((x1 :agent) (x2 :patient) (x3 :rest) ->
   (s (seq (x1 np nom-pro) (x3 v-tensed) (x2 np acc-pro)))
   (s (seq (x2 np nom-pro) (x3 v-passive) (wrd "by") (x1 np acc-pro)))
   (np (seq (x3 np acc-pro nom-pro) (wrd "of") (x2 np acc-pro)
            (wrd "by") (x1 np acc-pro)))
   (s-ger (seq ...))
   (inf (seq ...)))

The left-hand side is used to match an AMR with :agent and :patient roles at the top level. The :rest keyword serves as a catch-all for other roles that appear at the top level. Note that the rule specifies two ways to build a sentence, one an active-voice version and the other passive. Since at this level the input may be underspecified regarding which voice to use, the statistical extractor is expected to choose later the most fluent version. Note also that this rule builds lattices for other parts of speech, in addition to sentences (ex: "the consumption of the bone by the dogs"). In this way the generation algorithm works bottom-up, building lattices for the leaves (innermost nested levels of the input) first, to be combined at outer levels according to the relations between the leaves. For example, the AMR below will match this rule:

  (m7 / |eat,take in|
      :time present
      :agent (d / |dog,canid| :quant plural)
      :patient (b / |os,bone| :quant sing))

Below are some sample lattices that result from applying the rule above to this AMR:[6]

  (S (or (seq (or (wrd "the") (wrd "*empty*"))
              (wrd "dog") (wrd "+plural")
              (wrd "may") (wrd "eat")
              (or (wrd "the") (wrd "a") (wrd "an") (wrd "*empty*"))
              (wrd "bone"))
         (seq (or (wrd "the") (wrd "a") (wrd "an") (wrd "*empty*"))
              (wrd "bone")
              (wrd "may") (wrd "be")
              (or (wrd "being") (wrd "*empty*"))
              (wrd "eat") (wrd "+pastp")
              (wrd "by")
              (or (wrd "the") (wrd "*empty*"))
              (wrd "dog") (wrd "+plural"))))

  (NP (seq (or (wrd "the") (wrd "a") (wrd "an") (wrd "*empty*"))
           (wrd "possibility") (wrd "of")
           (or (wrd "the") (wrd "a") (wrd "an") (wrd "*empty*"))
           (wrd "consumption") (wrd "of")
           (or (wrd "the") (wrd "a") (wrd "an") (wrd "*empty*"))
           (wrd "bone") (wrd "by")
           (or (wrd "the") (wrd "*empty*"))
           (wrd "dog") (wrd "+plural")))

  (S-GER (seq ...))
  (INF (seq ...))

[6] The grammar rules can insert the special token *empty*, here indicating an option for the null determiner. Before running, the statistical extractor removes all *empty* transitions by determinizing the word lattice. Note also the insertion of morphological tokens like +plural. Inflectional morphology rules also apply during this determinizing stage.

Note the variety of symbolic output that is produced with these excessively simple rules. Each relation is mapped not to one but to many different realizations, covering regular and irregular behavior exhibited in natural language. Purposeful over-generation becomes a strength.

The :rest keyword in the rule head provides a handy mechanism for decoupling the possible keyword combinations. By means of this mechanism, keywords which generate relatively independent word lattices can be organized into separate rules, avoiding combinatorial explosion in the number of rules which need to be written.

7.1 Recasting Mechanism
The recasting mechanism used in the grammar formalism gives it unique power and flexibility. The recasting mechanism enables the generator to transform one semantic representation into another (such as deep to shallow, or instance to sub-instance) and to accept as input a specification anywhere along this spectrum, permitting meaning to be encoded at whatever level is most convenient. The recasting mechanism also makes it possible to handle non-compositional aspects of language.

One area in which we use this mechanism is in the :domain rule. Take for example the sentence, "It is necessary that the dog eat." It is sometimes most convenient to represent this as:

  (m8 / |obligatory<necessary|
      :domain (m9 / |eat,take in|
                  :agent (m10 / |dog,canid|)))

and at other times as:

  (m11 / |have the quality of being|
       :domain (m12 / |eat,take in|
                    :agent (d / |dog,canid|))
       :range (m13 / |obligatory<necessary|))

but we can define them to be semantically equivalent. In our system, both are accepted, and the first is automatically transformed into the second.

Other ways to say this sentence include "The dog is required to eat," or "The dog must eat." However, the grammar formalism cannot express this, because it would require inserting the word lattice for |obligatory<necessary| within the lattice for m9 or m12, but the formalism can only concatenate lattices. The recasting mechanism solves this problem by recasting the above AMR as:

  (m14 / |eat,take in|
       :modal (m15 / |obligatory<necessary|)
       :agent (m16 / |dog,canid|))

which makes it possible to form these sentences. The syntax for recasting the first AMR to the second is:

  ((x1 :rest) (x2 :domain) ->
   (? (x1 (:new (/ |have the quality of being|)
                (:domain x2)
                (:range x1))
      ?)))

and for recasting the second into the third:

  ((x1 :rest) (x2 :domain) (x3 :range) ->
   (? (x2 (:add (:modal (x3 (:add (:extra x1)))))
      ?))
   (s (seq (x2 np nom-pro) (x1 v-tensed) (x3 adj np acc-pro)))
   (s (seq (wrd "it") (x1 v-tensed) (x3 adj np acc-pro)
           (wrd "that") (x2 s)))
   ...)

The :new and :add keywords signal an AMR recast. The list after the keyword contains the instructions for doing the recast. In the first case, the :new keyword means: build an AMR with a new root, |have the quality of being|, and two roles, one labeled :domain and assigned sub-AMR x2, the other labeled :range and assigned sub-AMR x1. The question mark causes a direct splice of the results from the recast.

In the second case, the :add keyword means: insert into the sub-AMR of x2 a role labeled :modal, and assign to it the sub-AMR of x3, which is itself recast to include the roles in the sub-AMR of x1 but not its root. (This is in case there are other roles, such as polarity or time, which need to be included in the new AMR.)

In fact, recasting makes it possible to nest modals within modals to any desired depth, and even to attach polarity and tense at any level. For example, "It is not possible that it is required that you are permitted to go" can also be (more concisely) stated as "It cannot be required that you be permitted to go," or "It is not possible that you must be permitted to go," or "You cannot have to be permitted to go." This is done by a grammar rule expressing the most nested modal concept as a modal verb and the remaining modal concepts as a combination of regular verbs or adjective phrases. Our grammar includes a fairly complete model of obligation, possibility, permission, negation, tense, and all of their possible interactions.

8 Discussion
We have presented a new generation grammar formalism capable of mapping meanings onto word lattices. It includes novel mechanisms for constructing and combining word lattices, and for re-writing meaning representations to handle a broad range of linguistic phenomena. The grammar accepts inputs along a continuum of semantic depth, requiring only a minimal amount of syntactic detail, making it attractive for a variety of purposes.

Nitrogen's grammar is organized around semantic input patterns rather than the syntax of English. This distinguishes it from both unification grammar (Elhadad, 1993a; Shieber et al., 1989) and systemic-network grammar (Penman, 1989). Meanings can be expressed directly, or else be recast and recycled back through the generator. This recycling ultimately allows syntactic constraints to be localized, even though the grammar is not organized around English syntax.

Nitrogen's algorithm operates bottom-up, efficiently encoding multiple analyses in a lattice data structure to allow structure sharing, analogous to the way a chart is used in bottom-up parsing. In contrast, traditional generation control mechanisms work top-down, either deterministically (Meteer et al., 1987; Penman, 1989) or by backtracking to previous choice points (Elhadad, 1993b). This unnecessarily duplicates work at run time, unless sophisticated control directives are included in the search engine (Elhadad and Robin, 1992). Recently, (Kay, 1996) has explored a bottom-up approach to generation as well, using a chart rather than a word lattice.

Nitrogen's generation is robust and scalable. It can generate output even for unexpected or incomplete input, and is designed for broad coverage. It does not require the detailed, difficult-to-obtain knowledge bases that other NLG systems require, since it relies instead on corpus-based statistics to make a wide variety of linguistic decisions. Currently the quality of the output is limited by the use of only word-bigram statistical information, which cannot handle long-distance agreement, or distinguish likely collocations from unlikely grammatical structure. However, we plan to remedy these problems by using statistical information extracted from the Penn Treebank corpus (Marcus et al., 1994) to rank tagged lattices and parse forests.

Nitrogen's rule matching is much less expensive than graph unification, and lattices generated for sub-AMRs are cached and reused in subsequent references. The semantic roles used in the grammar formalism cover most common syntactic phenomena, though our grammar does not yet generate questions, or infer pronouns from explicit coreference.

Nitrogen has been used extensively as part of a semantics-based Japanese-English MT system (Knight et al., 1995). Japanese analysis provides AMRs, which Nitrogen transforms into word lattices on the order of hundreds of nodes and thousands of arcs. These lattices compactly encode a number of syntactic variants that usually reaches into the trillions and beyond. Most of these are somewhat ungrammatical or awkward, yet the statistical extractor rather successfully narrows them down to the top N best paths. An online demo is available at http://www.isi.edu/natural-language/mt/nitrogen/

References
J. Bateman. 1996. KPML development environment: multilingual linguistic resource development and sentence generation. Technical report, German Centre for Information Technology (GMD).
M. Elhadad and J. Robin. 1992. Controlling content realization with functional unification grammars. In R. Dale, E. Hovy, D. Roesner, and O. Stock, editors, Aspects of Automated Natural Language Generation. Springer Verlag.
M. Elhadad and J. Robin. 1998. SURGE: a comprehensive plug-in syntactic realization component for text generation. At http://www.cs.bgu.ac.il/research/projects/surge/.
M. Elhadad. 1993a. FUF: The universal unifier, user manual, version 5.2. Technical Report CUCS-038-91, Columbia University.
M. Elhadad. 1993b. Using Argumentation to Control Lexical Choice: A Unification-Based Implementation. Ph.D. thesis, Columbia University.
M. Kay. 1996. Chart generation. In Proc. ACL.
K. Knight and V. Hatzivassiloglou. 1995. Two-level, many-paths generation. In Proc. ACL.
K. Knight and S. Luk. 1994. Building a large-scale knowledge base for machine translation. In Proc. AAAI.
K. Knight, I. Chander, M. Haines, V. Hatzivassiloglou, E. Hovy, M. Iida, S. K. Luk, R. Whitney, and K. Yamada. 1995. Filling knowledge gaps in a broad-coverage MT system. In Proc. IJCAI.
Benoit Lavoie and Owen Rambow. 1997. RealPro: a fast, portable sentence realizer. In ANLP'97.
M. Marcus, G. Kim, M. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. In ARPA Human Language Technology Workshop.
M. Meteer, D. McDonald, S. Anderson, D. Forster, L. Gay, A. Huettner, and P. Sibun. 1987. Mumble-86: Design and implementation. Technical Report COINS 87-87, U. of Massachusetts at Amherst, Amherst, MA.
G. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4). (Special Issue).
Penman. 1989. The Penman documentation. Technical report, USC/Information Sciences Institute.
Ehud Reiter. 1995. NLG vs. templates. In Proc. ENLGW '95.
S. Shieber, G. van Noord, R. Moore, and F. Pereira. 1989. A semantic-head-driven generation algorithm for unification-based formalisms. In Proc. ACL.
Methods and Practical Issues in Evaluating Alignment Techniques Philippe Langlais CTT/KTH SE-I0044 Stockholm CERI-LIA, AGROPARC BP 1228 F-84911 Avignon Cedex 9 Philippe.Langlais~speech.kth.se Michel Simard RALI-DIRO Univ. de Montrdal Qudbec, Canada H3C 3J7 shnardm~IRO.UMontreal.CA Jean Vdronis LPL, Univ. de Provence 29, Av. R. Schuman F-13621 Aix-en-Provence Cedex 1 veronis~univ-aix.fr Abstract This paper describes the work achieved in the first half of a 4-year cooperative research project (ARCADE), financed by AUPELF-UREF. The project is devoted to the evaluation of paral- lel text alignment techniques. In its first period ARCADE ran a competition between six sys- tems on a sentence-to-sentence alignment task which yielded two main types of results. First, a large reference bilingual corpus comprising of texts of different genres was created, each pre- senting various degrees of difficulty with respect to the alignment task. Second, significant methodological progress was made both on the evaluation protocols and metrics, and the algoritbm.q used by the dif- ferent systems. For the second phase, which is now underway, ARCADE has been opened to a larger number of teams who will tackle the problem of word-level alignment. 1 Introduction In the last few years, there has been a growing interest in parallel text alignment techniques. These techniques attempt to map various tex- tual units to their translation and have proven useful for a wide range of applicatious and tools. A simple example of such a tool is probably the TransSearch bilingual concordancing system (Isabelle et al., 1993), which allows a user to query a large archive of existing translations in order to find ready-made solutions to specific translation problems. Such a tool has proved ex- tremely useful not only for translators, but also for bilingual lexicographers (Langlois, 1996) and terminologists (Dagan and Church, 1994). More sophisticated applications based on alignment technology have also been the object of recent work, such as the automatic building of bilin- gual lexical resources (Melamed, 1996; Klavans and Tzoukermann, 1995), the automatic verifi- cation of translations (Macklovitch, 1995), the automatic dictation of translations (Brousseau et al., 1995) and even interactive machine trans- lation (Foster et al., 1997). Enthusiasm for this relatively new field was sparked early on by the apparent demonstra- tion that very simple techniques could yield al- most perfect results. For instance, to produce sentence alignments, Brown et al. (1991) and Gale and Church (1991) both proposed meth- ods that completely ignored the lexical content of the texts and both reported accuracy lev- els exceeding 98%. Unfortunately performance tends to deteriorate significantly when aligners are applied to corpora which are widely differ- ent from the training corpus, and/or where the alignments are not straightforward. For instance graphics, tables, "floating" notes and missing segments, which are very common in real texts, all result in a dramatic loss of efficiency. The truth is that, while text alignment is mostly an easy problem, especially when consid- ered at the sentence level, there are situations where even humans have a hard time making the right decision. In fact, it could be argued that, ultimately, text alignment is no easier than the more general problem of natural language understanding. 
In addition, most research efforts were directed towards the easiest problem, that of sentence-to-sentence alignment (Brown et al., 1991; Gale and Church, 1991; Debili, 1992; Kay and l~scheisen, 1993; Simard et al., 1992; Simard and Plamondon, 1996). Alignment at the word and term level, which is extremely useful for applications such as lexieal resource extraction, is still a largely unexplored research area(Melamed, 1997). In order to live up to the expectations of the 711 various application fields, alignment technology will therefore have to improve substantially. As was the case with several other language processing techniques (such as information retrieval, document understanding or speech recognition), it is likely that a systematic evalu- ation will enable such improvements. However, before the ARCADE project started, no for- real evaluation exercise was underway; and worse still, there was no multilingnal aligned reference corpus to serve as a "gold standard" (as the Brown corpus did, for example, for part of speech tagging), nor any established methodology for the evaluation of alignment systems. 2 Organization ARCADE is an evaluation exercise financed by AUPELF-UREF, a network of (at least partially) French-speaking universities. It was launched in 1995 to promote research in the field of multilingual alignment. The first 2-year period (96-97) was dedicated to two main tasks: 1) producing a reference bilingual corpus (French-English) aligned at sentence level; 2) evaluating several sentence alignment systems through an ARPA-like competition. In the first phase of ARCADE, two types of teams were involved in the project: the corpus providers (LPL and RALI) and the (RALI, LO- ILIA, ISSCO, IRMC and LIA). General coor- dination was handled by J. V~ronis (LPL); a discussion group was set up and moderated by Ph. Langlais (LIA & KTH). 3 Reference corpus One of the main results of ARCADE has been to produce an aligned French-English corpus, combining texts of different genres and various degrees of difficulty for the alignment task. It is important to mention that until ARCADE, most alignment systems had been tested on ju- dicial and technical texts which present rela- tively few difficulties for a sentence-level align- ment. Therefore, diversity in the nature of the texts was preferred to the collection of a large quantity of similar data. 3.1 Format ARCADE contributed to the development and testing of the Corpus Encoding Standard (CES), which was initiated during the MUL- TEXT project (Ide et al., 1995). The CES is based on SGML and it is an extension of the now internationally-accepted recommendations of the Text Encoding Initiative (Ide and Vdronis, 1995). Both the JOG and BAF parts of the ARCADE corpus (described below) are encoded in CES format. 3:2 JOC The JOC corpus contains texts which were pub- lished in 1993 as a section of the C Series of the Official Journal of the European Community in all of its official languages. This corpus, which was collected and prepared during the MLCC and MULTEXT projects, contains, in 9 parallel versions, questions asked by members of the Eu- ropean Parliament on a variety of topics and the corresponding answers from the European Com- mission. JOC contains approximately 10 million words (ca. 1.1 million words per language). The part used for JOC was composed of one fifth of the French and English sections (ca. 200 000 words per language). 3.3 BAF The BAF corpus is also a set of parallel French- English texts of about 400 000 words per lan- guage. 
It includes four text genres: 1) INST, four institutional texts (including transcription of speech from the Hansard corpus) totaling close to 300 000 words per language, 2) SCIENCE, five scientific articles of about 50 000 words per language, 3) TECH, technical documentation of about 40 000 words per language and 4) VERNE, the Jules Verne novel "De la terre à la lune" (ca. 50 000 words per language). This last text is very interesting because the translation of literary texts is much freer than that of other types of texts. Furthermore, the English version is slightly abridged, which adds the problem of detecting missing segments. The BAF corpus is described in greater detail in (Simard, 1998).

4 Evaluation measures

We first propose a formal definition of parallel text alignment, as defined in (Isabelle and Simard, 1996). Based on that definition, the usual notions of recall and precision can be used to evaluate the quality of a given alignment with respect to a reference. However, recall and precision can be computed for various levels of granularity: an alignment at a given level (e.g. sentences) can be measured in terms of units of a lower level (e.g. words, characters). Such a fine-grained measure is less sensitive to segmentation problems, and can be used to weight errors according to the number of sub-units they span.

4.1 Formal definition

If we consider a text S and its translation T as two sets of segments S = {s1, s2, ..., sn} and T = {t1, t2, ..., tm}, an alignment A between S and T can be defined as a subset of the Cartesian product ℘(S) × ℘(T), where ℘(S) and ℘(T) are respectively the set of all subsets of S and T. The triple (S, T, A) will be called a bitext. Each of the elements (ordered pairs) of the alignment will be called a bisegment. This definition is fairly general. However, in the evaluation exercise described here, segments were sentences and were supposed to be contiguous, yielding monotonic alignments. For instance, let us consider the following alignment, which will serve as the reference alignment in the subsequent examples, over the segments:

s1: Phrase numéro un.
s2: Phrase numéro deux qui ressemble à la 1ère.
t1: The first sentence.
t2: The 2nd sentence.
t3: It looks like the first.

The formal representation of it is: Ar = {({s1}, {t1}), ({s2}, {t2, t3})}.

4.2 Recall and precision

Let us consider a bitext (S, T, Ar) and a proposed alignment A. The alignment recall with respect to the reference Ar is defined as:

recall = |A ∩ Ar| / |Ar|

It represents the proportion of bisegments in A that are correct with respect to the reference Ar. The silence corresponds to 1 − recall. The alignment precision with respect to the reference Ar is defined as:

precision = |A ∩ Ar| / |A|

It represents the proportion of bisegments in A that are right with respect to the number of bisegments proposed. The noise corresponds to 1 − precision. We will also use the F-measure (Rijsbergen, 1979), which combines recall and precision in a single efficiency measure (harmonic mean of precision and recall):

F = 2 × (recall × precision) / (recall + precision)

Let us assume the following proposed alignment, which pairs s1 with t1, nothing with t2, and s2 with t3. The formal representation of this alignment is: A = {({s1}, {t1}), ({}, {t2}), ({s2}, {t3})}. We note that: A ∩ Ar = {({s1}, {t1})}. Alignment recall and precision with respect to Ar are 1/2 = 0.50 and 1/3 = 0.33 respectively. The F-measure is 0.40.
Improving both recall and precision are antagonistic goals: efforts to improve one often result in degrading the other. Depending on the applications, different trade-offs can be sought. For example, if the bisegments are used to automatically generate a bilingual dictionary, maximizing precision (i.e. omitting doubtful couples) is likely to be the preferred option. Recall and precision as defined above are rather unforgiving. They do not take into account the fact that some bisegments could be partially correct. In the previous example, the bisegment ({s2}, {t3}) does not belong to the reference, but can be considered as partially correct: t3 does match a part of s2. To take partial correctness into account, we need to compute recall and precision at the sentence level instead of the alignment level. Assuming the alignment A = {a1, a2, ..., am} and the reference Ar = {ar1, ar2, ..., arm}, with ai = (asi, ati) and arj = (arsj, artj), we can derive the following sentence-to-sentence alignments:

A′ = ∪i (asi × ati)
A′r = ∪j (arsj × artj)

Sentence-level recall and precision can thus be defined in the following way:

recall = |A′ ∩ A′r| / |A′r|
precision = |A′ ∩ A′r| / |A′|

In the example above: A′ = {(s1, t1), (s2, t3)} and A′r = {(s1, t1), (s2, t2), (s2, t3)}. Sentence-level recall and precision for this example are therefore 2/3 = 0.66 and 1 respectively, as compared to the alignment-level recall and precision, 0.50 and 0.33 respectively. The F-measure becomes 0.80 instead of 0.40.

4.3 Granularity

In the definitions above, the sentence is the unit of granularity used for the computation of recall and precision at both levels. This results in two difficulties. First, the measures are very sensitive to sentence segmentation errors. Secondly, they do not reflect the seriousness of misalignments. It seems reasonable that errors involving short sentences should be less penalized than errors involving longer ones, at least from the perspective of some applications. These problems can be avoided by taking advantage of the fact that a unit of a given granularity (e.g. sentence) can always be seen as a (possibly discontinuous) sequence of units of finer granularity (e.g. character). Thus, when an alignment A is compared to a reference alignment Ar using the recall and precision measures computed at the char-level, the values obtained are inversely proportional to the quantity of text (i.e. number of characters) in the misaligned sentences, instead of the number of these misaligned sentences. For instance, in the example used above, we would have at sentence level:

• using word granularity (punctuation marks are considered as words):
|A′| = 4×4 + 0×4 + 9×6 = 70
|A′r| = 4×4 + 9×10 = 106
|A′ ∩ A′r| = 4×4 + 9×6 = 70
recall = 70/106 = 0.66
precision = 1
F = 0.80

• using character granularity (excluding spaces):
|A′| = 15×17 + 0×15 + 36×20 = 975
|A′r| = 15×17 + 36×35 = 1515
|A′ ∩ A′r| = 15×17 + 36×20 = 975
recall = 975/1515 = 0.64
precision = 1
F = 0.78
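To make the measures concrete, here is a small worked sketch (ours, not part of the ARCADE tooling) that reproduces the numbers of the running example; segment lengths in words or characters are supplied by the caller:

```python
from itertools import product

def alignment_level(proposed, reference):
    """Recall/precision/F over whole bisegments (the Align measures)."""
    correct = sum(1 for b in proposed if b in reference)
    recall, precision = correct / len(reference), correct / len(proposed)
    return recall, precision, 2 * recall * precision / (recall + precision)

def unit_level(proposed, reference, length=lambda u: 1):
    """Expand each bisegment into source-unit/target-unit pairs, weighted by
    length(s) * length(t); with length == 1 this is the Sent measure, with
    word or character counts the Word and Char measures."""
    def expand(alignment):
        return {(s, t): length(s) * length(t)
                for src, tgt in alignment for s, t in product(src, tgt)}
    a, ar = expand(proposed), expand(reference)
    inter = sum(w for pair, w in a.items() if pair in ar)
    recall, precision = inter / sum(ar.values()), inter / sum(a.values())
    return recall, precision, 2 * recall * precision / (recall + precision)

# Running example; word counts of the segments (punctuation counted as words).
words = {"s1": 4, "s2": 9, "t1": 4, "t2": 4, "t3": 6}
reference = [(("s1",), ("t1",)), (("s2",), ("t2", "t3"))]
proposed  = [(("s1",), ("t1",)), ((), ("t2",)), (("s2",), ("t3",))]

print(alignment_level(proposed, reference))        # 0.50, 0.33, 0.40
print(unit_level(proposed, reference))             # 0.67, 1.00, 0.80
print(unit_level(proposed, reference, words.get))  # 70/106, 1.00, 0.80
```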
5 Systems tested

Six systems were tested, two of which were submitted by the RALI.

RALI/Jacal This system uses as a first step a program that reduces the search space only to those sentence pairs that are potentially interesting (Simard and Plamondon, 1996). The underlying principle is the automatic detection of isolated cognates (i.e. cognates for which no other similar word exists in a window of given size). Once the search space is reduced, the system aligns the sentences using the well-known sentence-length model described in (Gale and Church, 1991).

RALI/Salign The second method proposed by RALI is based on a dynamic programming scheme which uses a score function derived from a translation model similar to that of (Brown et al., 1990). The search space is reduced to a beam of fixed width around the diagonal (which would represent the alignment if the two texts were perfectly synchronized).

LORIA The strategy adopted in this system differs from that of the other systems since sentence alignment is performed after the preliminary alignment of larger units (whenever possible, using mark-up), such as paragraphs and divisions, on the basis of the SGML structure. A dynamic programming scheme is applied to all alignment levels in successive steps.

IRMC This system involves a preliminary, rough word alignment step which uses a transfer dictionary and a measure of the proximity of words (Débili et al., 1994). Sentence alignment is then achieved by an algorithm which optimizes several criteria such as word-order conservation and synchronization between the two texts.

LIA Like Jacal, the LIA system uses a pre-processing step involving cognate recognition which restricts the search space, but in a less restrictive way. Sentence alignment is then achieved through dynamic programming, using a score function which combines sentence length, cognates, transfer dictionary and frequency of translation schemes (1-1, 1-2, etc.).

ISSCO Like the LORIA system, the ISSCO aligner is sensitive to the macro-structure of the document. It examines the tree structure of an SGML document in a first pass, weighting each node according to the number of characters contained within the subtree rooted at that node. The second pass descends the tree, first by depth, then by breadth, while aligning sentences using a method resembling that of Gale & Church.

6 Results

Four sets of recall/precision measures were computed for the alignments achieved by the six systems for each text type previously described above: Align (alignment-level), Sent (sentence-level), Word (word-level) and Char (character-level). The global efficiency of the different systems (average F-values) for each text type is given in Figure 1.

[Figure 1: Global efficiency (average F-values for the Align, Sent, Word and Char measures) of the different systems (Jacal, Salign, LORIA, IRMC, ISSCO, LIA), by text type (logarithmic scale).]

First, note that the Char measures are higher than the Align measures. This seems to confirm that systems tend to fail when dealing with shorter sentences. In addition, the reference alignment for the BAF corpus combines several 1-1 alignments in a single n-n alignment, for practical reasons owing to the sentence segmentation process. This results in decreased Align measures. The corpus on which all systems scored highest was the JOC. This corpus is relatively simple to align, since it contains 94% of 1-1 alignments, reflecting a translation strategy based on speed and absolute fidelity.
In addition, this corpus contains a large amount of data that remains unchanged during the translation process (proper names, dates, etc.) and which can be used as anchor points by some systems. Note that the LORIA system achieves a slightly better performance than the others on this corpus, mainly because it is able to carry out a structure-alignment since paragraphs and divisions are explicitly marked. The worst results were achieved on the VERNE corpus. This is also the corpus for which the results showed the most scattering across systems (22% to 90% char-precision). These poor results are linked to the literary nature of the corpus, where translation is freer and more interpretative. In addition, since the English version is slightly abridged, the occasional omissions result in de-synchronization in most systems. Nevertheless, the LIA system still achieves a satisfactory performance (90% char-recall and 94% char-precision), which can be explained by the efficiency of its word-based pre-alignment step, as well as the scoring function used to rank the candidate bisegments. Significant discrepancies are also noted between the Align and Char recalls on the TECH corpus. This document contained a large glossary as an appendix, and since the terms are sorted in alphabetic order, they are ordered differently in each language. This portion of text was not manually aligned in the reference. The size of this bisegment (250-250) drastically lowers the Char-recall. Aligning two glossaries can be seen as a document-structure alignment task rather than a sentence-alignment task. Since the goal of the evaluation was sentence alignment, the TECH corpus results were not taken into account in the final grading of the systems. The overall ranking for all systems (excluding the TECH corpus results) is given in Figure 2, in terms of the Sent and Char F-measures. The LIA system obtains the best average results and shows good stability across texts, which is an important criterion for many applications.

[Figure 2: Final ranking of the systems (average F-values for the Align, Char, Sent and Word measures): LIA, JACAL, SALIGN, LORIA, ISSCO, IRMC.]

7 Conclusion and future work

The ARCADE evaluation exercise has allowed for significant methodological progress on parallel text alignment. The discussions among participants on the question of a testing protocol resulted in the definition of several evaluation measures and an assessment of their relative merits. The comparative study of the systems' performance also yielded a better understanding of the various techniques involved. As a significant spin-off, the project has produced a large aligned bilingual corpus, composed of several types of texts, which can be used as a gold standard for future evaluation. Grounded on the experience gained in the first test campaign, the second (1998-1999) has been opened to more teams and plans to tackle more difficult problems, such as word-level alignment.¹

Acknowledgments

This work has been partially funded by AUPELF-UREF. We are indebted to Lucie Langlois and Elliott Macklovitch for their fruitful comments on this paper.

¹ For more information check the Web site at http://www.lpl.univ-aix.fr/projects/arcade

References

J. Brousseau, C. Drouin, G. Foster, P. Isabelle, R. Kuhn, Y. Normandin, and P. Plamondon. 1995. French Speech Recognition in an Automatic Dictation System for Translators: the TransTalk Project. In Proceedings of Eurospeech 95, Madrid, Spain.

P. F. Brown, J. Cocke, S. A.
Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A Statistical Approach to Machine Translation. In Computational Linguistics, volume 16, pages 79-85, June.

P. F. Brown, J. C. Lai, and R. L. Mercer. 1991. Aligning Sentences in Parallel Corpora. In 29th Annual Meeting of the Association for Computational Linguistics, pages 169-176, Berkeley, CA, USA.

Ido Dagan and Kenneth W. Church. 1994. Termight: Identifying and Translating Technical Terminology. In Proceedings of ANLP-94, Stuttgart, Germany.

F. Débili, E. Sammouda, and A. Zribi. 1994. De l'appariement des mots à la comparaison de phrases. In 9ème Congrès de Reconnaissance des Formes et Intelligence Artificielle, Paris, Janvier.

F. Debili. 1992. Aligning Sentences in Bilingual Texts French-English and French-Arabic. In COLING, pages 517-525, Nantes, 23-28 Août.

George Foster, Pierre Isabelle, and Pierre Plamondon. 1997. Target-Text Mediated Interactive Machine Translation. Machine Translation, 12(1-2).

W. A. Gale and Kenneth W. Church. 1991. A Program for Aligning Sentences in Bilingual Corpora. In 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, CA.

N. Ide and J. Véronis. 1995. The Text Encoding Initiative: background and context. Kluwer Academic Publishers, Dordrecht, 342 p.

N. Ide, G. Priest-Dorman, and J. Véronis. 1995. Corpus Encoding Standard. Report. Accessible on the World Wide Web: http://www.lpl.univ-aix.fr/projects/multext/CES/CES1.html.

Pierre Isabelle and Michel Simard. 1996. Propositions pour la représentation et l'évaluation des alignements de textes parallèles. http://www-rali.iro.umontreal.ca/arc-a2/PropEval.

Pierre Isabelle, Marc Dymetman, George Foster, Jean-Marc Jutras, Elliott Macklovitch, François Perrault, Xiaobo Ren, and Michel Simard. 1993. Translation Analysis and Translation Automation. In Proceedings of TMI-93, Kyoto, Japan.

M. Kay and M. Röscheisen. 1993. Text-translation alignment. Computational Linguistics, 19(1):121-142.

Judith Klavans and Evelyne Tzoukermann. 1995. Combining Corpus and Machine-readable Dictionary Data for Building Bilingual Lexicons. Machine Translation, 10(3).

Lucie Langlois. 1996. Bilingual Concordances: A New Tool for Bilingual Lexicographers. In Proceedings of AMTA-96, Montreal, Canada.

Elliott Macklovitch. 1995. TransCheck - or the Automatic Validation of Human Translations. In Proceedings of the MT Summit V, Luxembourg.

I. Dan Melamed. 1996. Automatic Construction of Clean Broad-coverage Translation Lexicons. In Proceedings of AMTA-96, Montreal, Canada.

I. Dan Melamed. 1997. A portable algorithm for mapping bitext correspondence. In 35th Conference of the Association for Computational Linguistics, Madrid, Spain.

C. J. van Rijsbergen. 1979. Information Retrieval, 2nd edition. Butterworths, London.

M. Simard and P. Plamondon. 1996. Bilingual sentence alignment: Balancing robustness and accuracy. In Proceedings of the Second Conference of the Association for Machine Translation in the Americas (AMTA), Montreal, Quebec.

M. Simard, G. F. Foster, and P. Isabelle. 1992. Using Cognates to Align Sentences in Bilingual Corpora. In Fourth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI), pages 67-81, Montréal, Canada.

M. Simard. 1998. The BAF: A Corpus of English-French Bitext. In First International Conference on Language Resources and Evaluation, Granada, Spain.
1998
117
A Framework for Customizable Generation of Hypertext Presentations

Benoit Lavoie and Owen Rambow
CoGenTex, Inc.
840 Hanshaw Road, Ithaca, NY 14850, USA
{benoit, owen}@cogentex.com

Abstract

In this paper, we present a framework, PRESENTOR, for the development and customization of hypertext presentation generators. PRESENTOR offers intuitive and powerful declarative languages specifying the presentation at different levels: macro-planning, micro-planning, realization, and formatting. PRESENTOR is implemented and is portable cross-platform and cross-domain. It has been used with success in several application domains including weather forecasting, object modeling, system description and requirements summarization.

1 Introduction

Presenting information through text and hypertext has become a major area of research and development. Complex systems must often deal with a rapidly growing amount of information. In this context, there is a need for presentation techniques facilitating a rapid development and customization of the presentations according to particular standards or preferences. Typically, the overall task of generating a presentation is decomposed into several subtasks including: macro-planning or text planning (determining output content and structure), micro-planning or sentence planning (determining abstract target language resources to express content, such as lexical items and syntactic constructions, and aggregating the representations), realization (producing the text string) and formatting (determining the formatting marks to insert in the text string). Developing an application to present the information for a given domain is often a time-consuming operation requiring the implementation from scratch of the domain communication knowledge (Kittredge et al., 1991) required for the different generation subtasks. In this technical note and demo we present a new presentation framework, PRESENTOR, whose main purpose is to facilitate the development of presentation applications. PRESENTOR has been used with success in different domains including object model description (Lavoie et al., 1997), weather forecasting (Kittredge and Lavoie, 1998) and system requirements summarization (Ehrhart et al., 1998; Barzilay et al., 1998). PRESENTOR has the following characteristics, which we believe are unique in this combination:

• PRESENTOR modules are implemented in Java and C++. It is therefore easily portable cross-platform.
• PRESENTOR modules use declarative knowledge interpreted at run-time which can be customized by non-programmers without changing the modules.
• PRESENTOR uses rich presentation plans (or exemplars) (Rambow et al., 1998) which can be used to specify the presentation at different levels of abstraction (rhetorical, conceptual, syntactic, and surface form) and which can be used for deep or shallow generation.

In Section 2, we describe the overall architecture of PRESENTOR. In Section 3 to Section 6, we present the different specifications used to define domain communication knowledge and linguistic knowledge. Finally, in Section 7, we describe the outlook for PRESENTOR.

2 PRESENTOR Architecture

The architecture of PRESENTOR illustrated in Figure 1 consists of a core generator with several associated knowledge bases.
The core generator has a pipeline architecture which is similar to that of many existing systems (Reiter, 1994): an incoming request is received by the generator interface, triggering sequentially the macro-planning, micro-planning, realization and finally the formatting of a presentation which is then returned by the system.

[Figure 1: Architecture of PRESENTOR: a request enters the core generator, whose macro-planner, micro-planner and realizer (RealPro) modules draw on a domain data manager and configurable knowledge bases to produce the presentation.]

This pipeline architecture minimizes the interdependencies between the different modules, facilitating the upgrade of each module with minimal impact on the overall system. It has been proposed that a pipeline architecture is not an adequate model for NLG (Rubinoff, 1992). However, we are not aware of any example from practical applications that could not be implemented with this architecture. One of the innovations of PRESENTOR is in the use of a common presentation structure which facilitates the integration of the processing by the different modules. The macro-planner creates a structure and the other components add to it. All modules use declarative knowledge bases distinguished from the generator engine. This facilitates the reuse of the framework for new application domains with minimal impact on the modules composing the generator. As a result, PRESENTOR can allow non-programmers to develop their own generator applications. Specifically, PRESENTOR uses the following types of knowledge bases:

• Environment variables: an open list of variables with corresponding values used to specify the configuration.
• Exemplars: a library of schema-like structures (McKeown, 1985; Rambow and Korelsky, 1992) specifying the presentation to be generated at different levels of abstraction (rhetorical, conceptual, syntactic, surface form).
• Rhetorical dictionary: a knowledge base indicating how to realize rhetorical relations linguistically.
• Conceptual dictionary: a knowledge base used to map language-independent conceptual structures to language-specific syntactic structures.
• Linguistic grammar: transformation rules specifying the transformation of syntactic structures into surface word forms and punctuation marks.
• Lexicon: a knowledge base containing the syntactic and morphological attributes of lexemes.
• Format style: formatting specifications associated with different elements of the presentation (not yet implemented).

As an example, let us consider a simple case illustrated in Figure 2 taken from a design summarization domain. Hyperlinks integrated in the presentation allow the user to obtain additional generated presentations.

[Figure 2: Presentation Sample. A hypertext view shows a data base hierarchy (Project ProjAF-2 > System DBSys > Site Ramstein > Host Gauss > Soft FDBMgr; Site Syngapour > Host Jakarta > Soft FDBClt) together with the generated text: "Description of FDBMgr. FDBMgr is a software component which is deployed on host Gauss. FDBMgr runs as a server and a daemon and is written in C (ANSI) and JAVA. ..."]

The next sections present the different types of knowledge used by PRESENTOR to define and construct the presentation of Figure 2.

3 Exemplar Library

An exemplar (Rambow et al., 1998; White and Caldwell, 1998) is a type of schema (McKeown, 1985; Rambow and Korelsky, 1992) whose purpose is to determine, for a given presentation request, the general specification of the presentation regarding its macro-structure, its content and its format.
One main distinction be- tween the exemplars of PRESENTOR and ordi- nary schemas is that they integrate conceptual, syntactic and surface form specifications of the content, and can be used for both deep and shal- low generation, and combining both generality and simplicity. An exemplar can contain dif- 719 ferent type of specifications, each of which is optional except for the name of the exemplar: • Name: Specification of the name of the ex- emplar. • Parameters: Specification of the arguments passed in parameters when the exemplar is called. • Conditions of evaluation: Specification of the conditions under which the exemplar can be evaluated. • Data: Specification of domain data instan- tiated at run-time. • Constituency: Specification of the presenta- tion constituency by references to other exem- plars. • Rhetorical dependencies: Specification of the rhetorical relations between constituents. ] • Features specification: Open list of features (names and values) associated with an element of presentation. These features can be used in other knowledge bases such as grammar, lexi- con, etc. • Formatting specification: Specification of HTML tags associated with the presentation structure constructed from the exemplar. • Conceptual content specification: Specifica- tion of content at the conceptual level. • Syntactic content specification: Specifica- tion of content at the lexico-syntactic level. • Surface form content specification: Specifi- cation of the content (any level of granularity) at the surface level. • Documentation: Documentation of the ex- emplar for maintenance purposes. Once defined, exemplars can be clustered into reusable libraries. Figure 3 illustrates an exemplar, soft- description, to generate the textual descrip- tion of Figure 2, Here, the description for a given object $SOFT, referring to a piece of soft- ware, is decomposed into seven constituents to introduce a title, two paragraph breaks, and some specifications for the software type, its host(s), its usage(s) and its implementation lan- ] guage(s). In this specification, all the con- stituents are evaluated. The result of this evaluation creates seven presentation segments added as constituents (daughters) to the cur- rent growth point in the presentation structure being generated. Referential identifiers (ref 1, ref2, ..., ref4) assigned to some constituents are also being used to specify a rhetorical rela- tion of elaboration and to specify syntactic con- junction. Exemplar: [ Name: soft-description Param: [ $SOFT ] Const: [ AND [ title ( $SOFT ) paragraph-break ( ) object-type ( SSOFT ) : refl soft-host ( $SOFT ) : ref2 paragraph-break ( ) soft-usage ( $SOFT ) : ref3 soft-language ( $SOFT ) : ref4 ] Rhet: [ ( refl R-ELABORATION ref2 ) ( ref3 CONJUNCTION ref4 ) ] Desc: [ Describe the software ] Figure 3: Exemplar for Software Description Figure 4 illustrates an exemplar specifying the conceptual specification of an object type. The notational convention used in this paper is to represent variables with labels preceded by a $ sign, the concepts are upper case English labels preceded by a # sign, and conceptual re- lations are lower case English labels preceded by a # sign. In Figure 4 the conceptual content specification is used to built a conceptual tree structure indicating the state concept #HAS- TYPE has as an object $OBJECT which is of type $TYPE. This variable is initialized by a call to the function ikrs.getData( $OBJECT #type ) defined for the application domain. 
4 Conceptual Dictionary

PRESENTOR uses a conceptual dictionary for the mapping of conceptual domain-specific representations to linguistic domain-independent representations. This mapping (transition) has the advantage that the modules processing conceptual representations can be unabashedly domain-specific, which is necessary in applications, since a broad-coverage implementation of a domain-independent theory of conceptual representations and their mapping to linguistic representations is still far from being realistic.

Linguistic representations found in the conceptual dictionary are deep-syntactic structures (DSyntSs) which conform to those that REALPRO (Lavoie and Rambow, 1997), PRESENTOR's sentence realizer, takes as input. The main characteristics of a deep-syntactic structure, inspired in this form by I. Mel'čuk's Meaning-Text Theory (Mel'čuk, 1988), are the following:

• The DSyntS is an unordered dependency tree with labeled nodes and labeled arcs.
• The DSyntS is lexicalized, meaning that the nodes are labeled with lexemes (uninflected words) from the target language.
• The DSyntS is a syntactic representation, meaning that the arcs of the tree are labeled with syntactic relations such as "subject" (represented in DSyntSs as I), rather than conceptual or semantic relations such as "agent".
• The DSyntS is a deep syntactic representation, meaning that only meaning-bearing lexemes are represented, and not function words.

Conceptual representations (ConcSs) used by PRESENTOR are inspired by the characteristics of the DSyntSs in the sense that both types of representations are unordered tree structures with labelled arcs specifying the roles (conceptual or syntactic) of each node. However, in a ConcS, concepts are used instead of lexemes, and conceptual relations are used instead of syntactic relations. The similarities of the representations for the ConcSs and DSyntSs facilitate their mapping and the sharing of the functions that process them.

Figure 5 illustrates a simple case of lexicalization for the state concept #HAS-TYPE introduced in the exemplar defined in Figure 4. If the goal is a sentence, BE1 is used with $OBJECT as its first (I) syntactic actant and $TYPE as its second (II). If the goal is a noun phrase, a complex noun phrase is used (e.g., software component FDBMgr). The lexicalization can be controlled by the user by modifying the appropriate lexical entries.

Lexicalization-rule: [
  Concept: #HAS-TYPE
  Cases: [
    Case: [ #HAS-TYPE ( #object $OBJ #type $TYPE ) ]
      <--> [ BE1 ( I $OBJ II $TYPE ) ]  [goal: S]
    Case: [ #HAS-TYPE ( #object $OBJ #type $TYPE ) ]
      <--> [ $TYPE ( APPEND $OBJ ) ]    [goal: NP]
  ]
]

Figure 5: Conceptual Dictionary Entry
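A conceptual-dictionary lookup of this kind amounts to matching a concept and a syntactic goal against the rule's cases. The sketch below is our own illustration with simplified tree structures (the paper does not show PRESENTOR's actual rule interpreter); it hard-codes the two cases of Figure 5 for #HAS-TYPE.

```python
def lexicalize_has_type(obj_dsynt, type_dsynt, goal):
    """Map the ConcS #HAS-TYPE(#object OBJ, #type TYPE) to a DSyntS.
    Trees are represented as (lexeme, {relation: subtree})."""
    if goal == "S":                    # clause: "OBJ is a TYPE"
        return ("BE1", {"I": obj_dsynt, "II": type_dsynt})
    if goal == "NP":                   # complex NP, e.g. "software component FDBMgr"
        lexeme, deps = type_dsynt
        return (lexeme, {**deps, "APPEND": obj_dsynt})
    raise ValueError(f"no case for goal {goal!r}")

fdbmgr = ("FDBMGR", {})
softcomp = ("SOFTWARE-COMPONENT", {})
print(lexicalize_has_type(fdbmgr, softcomp, "S"))   # BE1 with actants I and II
print(lexicalize_has_type(fdbmgr, softcomp, "NP"))  # noun phrase with APPEND
```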
5 Rhetorical Dictionary

PRESENTOR uses a rhetorical dictionary to indicate how to express the rhetorical relations connecting clauses using syntax and/or lexical means (cue words). Figure 6 shows a rule used to combine clauses linked by an elaboration relationship. This rule combines the clauses FDBMgr is a software component and FDBMgr is deployed on host Gauss into FDBMgr is a software component which is deployed on host Gauss.

Rhetorical-rule: [
  Relation: R-ELABORATION
  Cases: [
    Case: [ R-ELABORATION ( nucleus $V ( I $X II $Y )
                            satellite $Z ( I $X ) ) ]
      <--> [ $V ( I $X II $Y ( ATTR $Z ) ) ]
  ]
]

Figure 6: Rhetorical Dictionary Entry

6 Lexicon and Linguistic Grammar

The lexicon defines different linguistic characteristics of lexemes such as their categories, government patterns, morphology, etc., which are used for the realization process. The linguistic grammars of PRESENTOR are used to transform a deep-syntactic representation into
a linearized list of all the lexemes and punctuation marks composing a sentence. The format of the declarative lexicon and of the grammar rules is that of the REALPRO realizer, which we discussed in (Lavoie and Rambow, 1997). We omit further discussion here.

7 Status

PRESENTOR is currently implemented in Java and C++, and has been used with success in projects in different domains. We intend to add a declarative specification of formatting style in the near future. A serious limitation of the current implementation is the fact that the configurability of PRESENTOR at the micro-planning level is restricted to the lexicalization and the linguistic realization of rhetorical relations. Pronominalization rules remain hard-coded heuristics in the micro-planner but can be guided by features introduced in the presentation representations. This is problematic since pronominalization is often domain specific and may require changing the heuristics when porting a system to a new domain.

CoGenTex has developed a complementary alternative to PRESENTOR, EXEMPLARS (White and Caldwell, 1998), which gives better programmatic control over the processing of the representations than PRESENTOR does. While EXEMPLARS focuses on programmatic extensibility, PRESENTOR focuses on declarative representation specification. Both approaches are complementary and work is currently being done in order to integrate their features.

Acknowledgments

The work reported in this paper was partially funded by AFRL under contract F30602-92-C-0015 and SBIR F30602-92-C-0124, and by USAFMC under contract F30602-96-C-0076. We are thankful to R. Barzilay, T. Caldwell, J. DeCristofaro, R. Kittredge, T. Korelsky, D. McCullough, and M. White for their comments and criticism made during the development of PRESENTOR.

References

Barzilay, R., Rambow, O., McCullough, D., Korelsky, T., and Lavoie, B. (1998). DesignExpert: A Knowledge-Based Tool for Developing System-Wide Properties. In Proceedings of the 9th International Workshop on Natural Language Generation, Ontario, Canada.

Ehrhart, L., Rambow, O., Webber, F., McEnerney, J., and Korelsky, T. (1998). DesignExpert: Developing System-Wide Properties with Knowledge-Based Tools. Submitted.

Kittredge, R. and Lavoie, B. (1998). MeteoCogent: A Knowledge-Based Tool For Generating Weather Forecast Texts. In Proceedings of American Meteorological Society AI Conference (AMS-98), Phoenix, AZ.

Kittredge, R., Korelsky, T., and Rambow, O. (1991). On the Need for Domain Communication Knowledge. Computational Intelligence, 7(4).

Lavoie, B., Rambow, O., and Reiter, E. (1997). Customizable Descriptions of Object-Oriented Models. In Proceedings of the Conference on Applied Natural Language Processing (ANLP'97), Washington, DC.

Lavoie, B. and Rambow, O. (1997). RealPro - A Fast, Portable Sentence Realizer. In Proceedings of the Conference on Applied Natural Language Processing (ANLP'97), Washington, DC.

Mann, W. and Thompson, S. (1987). Rhetorical Structure Theory: A Theory of Text Organization. ISI technical report RS-87-190.

McKeown, K. (1985). Text Generation. Cambridge University Press.

Mel'čuk, I. A. (1988). Dependency Syntax: Theory and Practice. State University of New York Press, New York.

Rambow, O., Caldwell, D. E., Lavoie, B., McCullough, D., and White, M. (1998). Text Planning: Communicative Intentions and the Conventionality of Linguistic Communication. In preparation.

Rambow, O. and Korelsky, T. (1992). Applied Text Generation. In Third Conference on Applied Natural Language Processing, pages 40-47, Trento, Italy.

Reiter, E. (1994). Has a Consensus NL Generation Architecture Appeared, and is it Psycholinguistically Plausible? In Proceedings of the 7th International Workshop on Natural Language Generation, pages 163-170, Maine.

Rubinoff, R. (1992). Integrating Text Planning and Linguistic Choice by Annotating Linguistic Structures. In Aspects of Automated Natural Language Generation, pages 45-56, Trento, Italy.

White, M. and Caldwell, D. E. (1998). EXEMPLARS: A Practical Extensible Framework for Real-Time Text Generation. In Proceedings of the 9th International Workshop on Natural Language Generation, Ontario, Canada.
1998
118
Automatic Acquisition of Language Model based on Head-Dependent Relation between Words

Seungmi Lee and Key-Sun Choi
Department of Computer Science
Center for Artificial Intelligence Research
Korea Advanced Institute of Science and Technology
e-mail: {leesm, kschoi}@world.kaist.ac.kr

Abstract

Language modeling is to associate a sequence of words with a priori probability, which is a key part of many natural language applications such as speech recognition and statistical machine translation. In this paper, we present a language modeling based on a kind of simple dependency grammar. The grammar consists of head-dependent relations between words and can be learned automatically from a raw corpus using the reestimation algorithm which is also introduced in this paper. Our experiments show that the proposed model performs better than n-gram models at 11% to 11.5% reductions in test corpus entropy.

1 Introduction

Language modeling is to associate a priori probability to a sentence. It is a key part of many natural language applications such as speech recognition and statistical machine translation. Previous works for language modeling can be broadly divided into two approaches; one is n-gram-based and the other is grammar-based. N-gram model estimates the probability of a sentence as the product of the probability of each word in the sentence. It assumes that the probability of the nth word is dependent on the previous n-1 words. The n-gram probabilities are estimated by simply counting the n-gram frequencies in a training corpus. In some cases, class (or part of speech) n-grams are used instead of word n-grams (Brown et al., 1992; Chang and Chen, 1996). N-gram model has been widely used so far, but it has always been clear that n-gram cannot represent long distance dependencies. In contrast with n-gram model, grammar-based approach assigns syntactic structures to a sentence and computes the probability of the sentence using the probabilities of the structures. Long distance dependencies can be represented well by means of the structures. The approach usually makes use of phrase structure grammars such as probabilistic context-free grammar and recursive transition network (Lari and Young, 1991; Sneff, 1992; Chen, 1996). In the approach, however, a sentence which is not accepted by the grammar is assigned zero probability. Thus, the grammar must have broad coverage so that any sentence will get non-zero probability. But acquisition of such a robust grammar has been known to be very difficult. Due to the difficulty, some works try to use an integrated model of grammar and n-gram compensating each other (McCandless, 1994; Meteer and Rohlicek, 1993). Given a robust grammar, grammar-based language modeling is expected to be more powerful and compact in model size than n-gram-based one. In this paper we present a language modeling based on a kind of simple dependency grammar. The grammar consists of head-dependent relations between words and can be learned automatically from a raw corpus using the reestimation algorithm which is also introduced in this paper. Based on the dependencies, a sentence is analyzed and assigned syntactic structures by which long distance dependencies are represented. Because the model can be thought of as a linguistic bi-gram model, the smoothing functions of n-gram models can be applied to it. Thus, the model can be robust, adapt easily to new domains, and be effective. The paper is organized as follows.
We introduce some definitions and notations for the dependency grammar and the reestimation algorithm in section 2, and explain the algorithm in section 3. In section 4, we show the experimental results for the suggested model compared to n-gram models. Finally, section 5 concludes this paper.

2 A Simple Dependency Grammar

In this paper, we assume a kind of simple dependency grammar which describes a language by a set of head-dependent relations between words. A sentence is analyzed by establishing dependency links between individual words in the sentence. A dependency analysis, D, of a sentence can be represented with arrows pointing from head to dependent as depicted in Figure 1. For structural generality, we assume that there is always a marking tag, "EOS" (End of Sentence), at the end of a sentence and that it has the head word of the sentence as its own dependent ("gave" in Figure 1).

[Figure 1: An example dependency analysis for "I gave him a book EOS".]

A D is a set of inter-word dependencies which satisfy the following conditions: (1) every word in the sentence has its head in the sentence except the head word of the sentence; (2) every word can have only one head; (3) there is neither crossing nor cycle of dependencies. The probabilistic model of the simple dependency grammar is given by

p(sentence) = Σ_D p(D)
p(D) = Π_{(x→y)∈D} p(x→y)

where

p(x→y) = p(y|x) = freq(x→y) / Σ_z freq(x→z).

Complete-Link and Complete-Sequence

Here, we define complete-link and complete-sequence, which represent partial Ds for substrings. They are used to construct overall Ds and used as the basic structures for the reestimation algorithm in section 3. A set of dependency relations on a word sequence wi,j (we use wi for the ith word in a sentence and wi,j for the word sequence from wi to wj (i < j)) is a complete-link when the following conditions are satisfied:

• there is (wi → wj) or (wi ← wj) exclusively.
• Every inner word has a head in the word sequence.
• Neither crossing nor cycle of dependency relations is allowed.

A complete-link has direction. A complete-link on wi,j is said to be "rightward" if the outermost relation is (wi → wj), and "leftward" if the relation is (wi ← wj). A unit complete-link is defined on a string of two adjacent words, wi,i+1.

[Figure 2: Example complete-links, over phrases such as "her second child" and "the bus".]

In Figure 2, (a) is a rightward complete-link, and both of (b) and (c) are leftward ones.

[Figure 3: Example complete-sequences, over phrases such as "bird in the cage", "the bus" and "book".]

A complete-sequence is a sequence of 0 or more adjacent complete-links that have the same direction. A unit complete-sequence is defined on a string of one word. It is a 0 sequence of complete-links. The direction of a complete-sequence is determined by the direction of the component complete-links. In Figure 3, (a) is a rightward complete-sequence composed of two complete-links, and (b) is a leftward one. (c) is a complete-sequence composed of zero complete-links, and it can be both leftward and rightward. The word "complete" means that the dependency relations on the inner words are completed and that consequently there is no need to process further on them. From now on, we use Lr(i,j)/Ll(i,j) for rightward/leftward complete-links and Sr(i,j)/Sl(i,j) for rightward/leftward complete-sequences on wi,j. Any complete-link on wi,j can be viewed as the following combination:

• Lr(i,j): {(wi → wj), Sr(i,m), Sl(m+1,j)}
• Ll(i,j): {(wi ← wj), Sr(i,m), Sl(m+1,j)}

for an m (i ≤ m < j).
Otherwise, the set of dependencies does not satisfy the conditions of no crossing, no cycle and no multiple heads, and is not a complete-link any more. Similarly, any complete-sequence on wi,j can be viewed as the following combination:

• Sr(i,j): {Sr(i,m), Lr(m,j)} for an m (i ≤ m < j)
• Sl(i,j): {Ll(i,m), Sl(m,j)} for an m (i < m ≤ j)

In the case of complete-sequence, we can prevent multiple constructions of the same complete-sequence by the above combinational restriction.

[Figure 4: Abstract representation of D.]

Figure 4 shows an abstract representation of a D of an n-word sentence. When wk (1 ≤ k ≤ n) is the head of the sentence, any D of the sentence can be represented by an Sl(1, EOS) uniquely, by the assumption that there is always the dependency relation (wk ← wEOS).

3 Reestimation Algorithm

The reestimation algorithm is a variation of the Inside-Outside algorithm (Jelinek et al., 1990) adapted to dependency grammar. In this section we first define the inside-outside probabilities of complete-links and complete-sequences, and then describe the reestimation algorithm based on them. (A little more detailed explanation of the expressions can be found in (Lee and Choi, 1997).) In the following, β indicates inside probability and α is for outside probability. The superscripts l and s are used for "complete-link" and "complete-sequence" respectively. The subscripts indicate direction: r for "rightward" and l for "leftward". The inside probabilities of complete-links (Lr(i,j), Ll(i,j)) and complete-sequences (Sr(i,j), Sl(i,j)) are as follows:

β^l_r(i,j) = Σ_{m=i}^{j-1} p(wi → wj) β^s_r(i,m) β^s_l(m+1,j)
β^l_l(i,j) = Σ_{m=i}^{j-1} p(wi ← wj) β^s_r(i,m) β^s_l(m+1,j)
β^s_r(i,j) = Σ_{m=i}^{j-1} β^s_r(i,m) β^l_r(m,j)
β^s_l(i,j) = Σ_{m=i+1}^{j} β^l_l(i,m) β^s_l(m,j)

The basis probabilities are:

β^l_r(i,i+1) = p(Lr(i,i+1)) = p(wi → wi+1)
β^l_l(i,i+1) = p(Ll(i,i+1)) = p(wi ← wi+1)
β^s_r(i,i) = β^s_l(i,i) = 1

β^s_l(1, EOS) = p(w1,n) is the sentence probability, because every dependency analysis D is represented by an Sl(1, EOS) and β^s_l(1, EOS) is the sum of the probability of every Sl(1, EOS). The outside probabilities for complete-links (Lr(i,j), Ll(i,j)) and complete-sequences (Sr(i,j), Sl(i,j)) are as follows:

α^l_r(i,j) = Σ_{v=1}^{i} α^s_r(v,j) β^s_r(v,i)
α^l_l(i,j) = Σ_{h=j}^{EOS} α^s_l(i,h) β^s_l(j,h)
α^s_r(i,j) = Σ_{h=j+1}^{EOS} [ α^s_r(i,h) β^l_r(j,h) + α^l_r(i,h) β^s_l(j+1,h) p(wi → wh) + α^l_l(i,h) β^s_l(j+1,h) p(wi ← wh) ]
α^s_l(i,j) = Σ_{v=1}^{i-1} [ α^s_l(v,j) β^l_l(v,i) + α^l_r(v,j) β^s_r(v,i-1) p(wv → wj) + α^l_l(v,j) β^s_r(v,i-1) p(wv ← wj) ]

The basis probability is α^s_l(1, EOS) = 1. Given a training corpus, the initial grammar is just a list of all pairs of unique words in the corpus. The initial pairs represent the tentative head-dependent relations of the words. And the initial probabilities of the pairs can be given randomly. The training starts with the initial grammar. The training corpus is analyzed with the grammar and the occurrence frequency of each dependency relation is calculated. Based on the frequencies, the probabilities of the dependency relations are recalculated by

p(wp → wc) = C(wp → wc) / Σ_z C(wp → wz).

The process continues until the entropy of the training corpus becomes the minimum. The frequency of occurrence C(wi → wj) is calculated by

C(wi → wj) = Σ_D p(D | w1,n) θ(wi → wj, D, w1,n) = (1 / p(w1,n)) α^l_r(i,j) β^l_r(i,j)

where θ(wi → wj, D, w1,n) is 1 if the dependency relation (wi → wj) is used in the D, and 0 otherwise. Similarly, the occurrence frequency of the dependency relation (wi ← wj) is computed by (1 / p(w1,n)) α^l_l(i,j) β^l_l(i,j).
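For concreteness, the inside recursions above can be computed bottom-up over span lengths. The following sketch is our own illustration (not the authors' code): p is a dict of link probabilities p[(head, dependent)], words includes the EOS tag, and the function returns β^s_l(1, EOS), i.e. the sentence probability.

```python
from collections import defaultdict

def sentence_probability(words, p):
    """Inside algorithm for the simple dependency grammar.
    bl_r/bl_l: inside probabilities of rightward/leftward complete-links;
    bs_r/bs_l: inside probabilities of rightward/leftward complete-sequences.
    Positions are 1-indexed; the last position is the EOS tag."""
    n = len(words)
    bl_r, bl_l = defaultdict(float), defaultdict(float)
    bs_r, bs_l = defaultdict(float), defaultdict(float)
    for i in range(1, n + 1):          # basis: unit complete-sequences
        bs_r[i, i] = bs_l[i, i] = 1.0
    for span in range(1, n):
        for i in range(1, n - span + 1):
            j = i + span
            # complete-links: one outermost link plus inner complete-sequences
            inner = sum(bs_r[i, m] * bs_l[m + 1, j] for m in range(i, j))
            bl_r[i, j] = p.get((words[i - 1], words[j - 1]), 0.0) * inner
            bl_l[i, j] = p.get((words[j - 1], words[i - 1]), 0.0) * inner
            # complete-sequences: extend by a complete-link of the same direction
            bs_r[i, j] = sum(bs_r[i, m] * bl_r[m, j] for m in range(i, j))
            bs_l[i, j] = sum(bl_l[i, m] * bs_l[m, j] for m in range(i + 1, j + 1))
    return bs_l[1, n]                  # beta^s_l(1, EOS) = p(sentence)

# Toy usage with made-up link probabilities keyed by (head, dependent).
words = ["I", "gave", "him", "EOS"]
p = {("gave", "I"): 0.5, ("gave", "him"): 0.5, ("EOS", "gave"): 1.0}
print(sentence_probability(words, p))  # 0.25 = 0.5 * 0.5 * 1.0
```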
4 Preliminary experiments We have experimented with three language models, tri-gram model (TRI), bi-gram model (BI), and the proposed model (DEP) on a raw corpus extracted from KAIST corpus 3. The raw corpus consists of 1,589 sentences with 13,139 words, describing animal life in nature. We randomly divided the corpus into two parts: a training set of 1,445 sentences and a test set of 144 sentences. And we made 15 partial training sets which include the first s sentences in the whole training set, for s ranging from 100 to 1,445 sentences. We trained the three language models for each partial training set, and tested the training and the test corpus entropies. TRI and BI was trained by counting the oc- currence of tri-grams and bi-grams respectively. DEP was trained by running the reestimation algorithm iteratively until it converges to an op- timal dependency grammar. On the average, 26 iterations were done for the training sets. Smoothing is needed for language modeling due to the sparse data problem. It is to com- pensate for the overestimated and the under- estimated probabilities. Smoothing method it- self is an important factor. But our goal is not to find out a better smoothing method. So we fixed on an interpolation method and applied it for the three models. It can be represented as (McCandless, 1994) ..., w,-x) = ,\P,(wilw,-,+l, ..., wi_l) +(1 - ..., where = C(wl, ..., w,-1) C(w,, ..., + K," The Ks is the global smoothing factor. The big- ger the Ks, the larger the degree of smoothing. For the experiments we used 2 for Ks. We take the performance of a language model to be its cross-entropy on test corpus, 1 s IVl E-l°g2Pm(Si) i=1 3KAIST (Korean Advanced Institute of Science and Technology) corpus has been under construction since 1994. It consists of raw text collection(45,000,000 words), POS-tagged collection(6,750,000 words), and tree-tagged collection(30,000 sentences) at present. where the test corpus contains a total of IV] words and is composed of S sentences. 3.4 i | | i | ! I 3.23 2.8 >" 2.6 O. 2.4 u~ 2.2 ~ (DEP model) o 2 a (TRI model) i 1.8 1.6 1.4 0 200 400 600 800 1000 1200 1400 600 No. of training sentences Figure 5: Training corpus entropies Figure 5 shows the training corpus entropies of the three models. It is not surprising that DEP performs better than BI. DEP can be thought of as a kind of linguistic bi-gram model in which long distance dependencies can be rep- resented through the head-dependent relations between words. TRI shows better performance than both BI and DEP. We think it is because TRI overfits the training corpus, judging from the experimental results for the test corpus. 9.5 i I I I I I I 8.5 uJ 7.5 .=( (TRI model) 7 / (DEP model) o 6.5 a i I I I I I 0 200 400 600 800 1000 1200 1400 1600 No. of training sentences Figure 6: Test corpus entropies For the test corpus, BI shows slightly bet- ter performance than TRI as depicted in Fig- ure 6. Increase in the order of n-gram from two to three shows no gains in entropy reduc- tion. DEP, however, Shows still better per- formance than the n-gram models. It shows about 11.5% entropy reduction to BI and about 11% entropy reduction to TRI. Figure 7 shows the entropies for the mixed corpus of training and test sets. From the results, we can see that head-dependent relations between words are more useful information than the naive n- gram sequences, for language modeling. 
Figure 5 shows the training corpus entropies of the three models.

[Figure 5: Training corpus entropies of the DEP, BI and TRI models as a function of the number of training sentences.]

It is not surprising that DEP performs better than BI. DEP can be thought of as a kind of linguistic bi-gram model in which long distance dependencies can be represented through the head-dependent relations between words. TRI shows better performance than both BI and DEP. We think it is because TRI overfits the training corpus, judging from the experimental results for the test corpus. For the test corpus, BI shows slightly better performance than TRI as depicted in Figure 6. Increase in the order of n-gram from two to three shows no gains in entropy reduction. DEP, however, shows still better performance than the n-gram models. It shows about 11.5% entropy reduction over BI and about 11% entropy reduction over TRI.

[Figure 6: Test corpus entropies of the TRI, BI and DEP models as a function of the number of training sentences.]

Figure 7 shows the entropies for the mixed corpus of training and test sets. From the results, we can see that head-dependent relations between words are more useful information than the naive n-gram sequences for language modeling. We can see also that the reestimation algorithm can properly find out the hidden head-dependent relations between words from a raw corpus.

[Figure 7: Mixed corpus entropies of the BI, TRI and DEP models as a function of the number of training sentences.]

[Figure 8: Model size (number of parameters) of the DEP, TRI and BI models as a function of the number of training sentences.]

Related to the size of the model, however, DEP has many more parameters than TRI and BI, as depicted in Figure 8. This can be a serious problem when we create a language model from a large body of text. In the experiments, however, DEP used the grammar acquired automatically as it is. In the grammar, many inter-word dependencies have probabilities near 0. If we exclude such dependencies, as was experimented for n-grams by Seymore and Rosenfeld (1996), we may get a much more compact DEP model with a very slight increase in entropy.

5 Conclusions

In this paper, we presented a language model based on a kind of simple dependency grammar. The grammar consists of head-dependent relations between words and can be learned automatically from a raw corpus by the reestimation algorithm which is also introduced in this paper. By the preliminary experiments, it was shown that the proposed language model performs better than n-gram models in test corpus entropy. This means that the reestimation algorithm can find out the hidden information of head-dependent relations between words in a raw corpus, and that this information is more useful than the naive word sequences of n-gram, for language modeling. We are planning to experiment the performance of the proposed language model on large corpora, for various domains, and with various smoothing methods. For the size of the model, we are planning to test the effects of excluding the dependency relations with near-zero probabilities.

References

P. F. Brown, V. J. Della Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer. 1992. "Class-Based n-gram Models of Natural Language". Computational Linguistics, 18(4):467-480.

C. Chang and C. Chen. 1996. "Application Issues of SA-class Bigram Language Models". Computer Processing of Oriental Languages, 10(1):1-15.

S. F. Chen. 1996. "Building Probabilistic Models for Natural Language". Ph.D. thesis, Harvard University, Cambridge, Massachusetts.

F. Jelinek, J. D. Lafferty, and R. L. Mercer. 1990. "Basic Methods of Probabilistic Context Free Grammars". Technical report, IBM T.J. Watson Research Center.

K. Lari and S. J. Young. 1991. "Applications of stochastic context-free grammars using the inside-outside algorithm". Computer Speech and Language, 5:237-257.

S. Lee and K. Choi. 1997. "Reestimation and Best-First Parsing Algorithm for Probabilistic Dependency Grammar". In WVLC-5, pages 11-21.

M. K. McCandless. 1994. "Automatic Acquisition of Language Models for Speech Recognition". Master's thesis, Massachusetts Institute of Technology.

M. Meteer and J. R. Rohlicek. 1993. "Statistical Language Modeling Combining N-gram and Context-free Grammars". In ICASSP-93, volume II, pages 37-40, January.

K. Seymore and R. Rosenfeld. 1996. "Scalable Trigram Backoff Language Models". Technical Report CMU-CS-96-139, Carnegie Mellon University.

S. Sneff. 1992. "TINA: A natural language system for spoken language applications". Computational Linguistics, 18(1):61-86.
1998
119
Entity-Based Cross-Document Coreferencing Using the Vector Space Model

Amit Bagga
Box 90129, Dept. of Computer Science
Duke University
Durham, NC 27708-0129
[email protected]

Breck Baldwin
Institute for Research in Cognitive Sciences
University of Pennsylvania
3401 Walnut St. 400C
Philadelphia, PA 19104
[email protected]

Abstract

Cross-document coreference occurs when the same person, place, event, or concept is discussed in more than one text source. Computer recognition of this phenomenon is important because it helps break "the document boundary" by allowing a user to examine information about a particular entity from multiple text sources at the same time. In this paper we describe a cross-document coreference resolution algorithm which uses the Vector Space Model to resolve ambiguities between people having the same name. In addition, we also describe a scoring algorithm for evaluating the cross-document coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the MUC-6 (within document) coreference task.

1 Introduction

Cross-document coreference occurs when the same person, place, event, or concept is discussed in more than one text source. Computer recognition of this phenomenon is important because it helps break "the document boundary" by allowing a user to examine information about a particular entity from multiple text sources at the same time. In particular, resolving cross-document coreferences allows a user to identify trends and dependencies across documents. Cross-document coreference can also be used as the central tool for producing summaries from multiple documents, and for information fusion, both of which have been identified as advanced areas of research by the TIPSTER Phase III program. Cross-document coreference was also identified as one of the potential tasks for the Sixth Message Understanding Conference (MUC-6) but was not included as a formal task because it was considered too ambitious (Grishman 94). In this paper we describe a highly successful cross-document coreference resolution algorithm which uses the Vector Space Model to resolve ambiguities between people having the same name. In addition, we also describe a scoring algorithm for evaluating the cross-document coreference chains produced by our system and we compare our algorithm to the scoring algorithm used in the MUC-6 (within document) coreference task.

2 Cross-Document Coreference: The Problem

Cross-document coreference is a distinct technology from Named Entity recognizers like IsoQuest's NetOwl and IBM's Textract because it attempts to determine whether name matches are actually the same individual (not all John Smiths are the same). Neither NetOwl nor Textract have mechanisms which try to keep same-named individuals distinct if they are different people. Cross-document coreference also differs in substantial ways from within-document coreference. Within a document there is a certain amount of consistency which cannot be expected across documents. In addition, the problems encountered during within document coreference are compounded when looking for coreferences across documents, because the underlying principles of linguistics and discourse context no longer apply across documents. Because the underlying assumptions in cross-document coreference are so distinct, they require novel approaches.

3 Architecture and the Methodology

Figure 1 shows the architecture of the cross-document system developed.
The system is built upon the University of Pennsylvania's within-document coreference system, CAMP, which participated in the Seventh Message Understanding Conference (MUC-7) within-document coreference task (MUC-7 1998).

Our system takes as input the coreference-processed documents output by CAMP. It then passes these documents through the SentenceExtractor module, which extracts, for each document, all the sentences relevant to a particular entity of interest. The VSM-Disambiguate module then uses a vector space model algorithm to compute similarities between the sentences extracted for each pair of documents.

[Figure 1: Architecture of the Cross-Document Coreference System. CAMP produces coreference chains for each document; SentenceExtractor produces a summary per document; VSM-Disambiguate compares the summaries to produce cross-document coreference chains.]

John Perry, of Weston Golf Club, announced his resignation yesterday. He was the President of the Massachusetts Golf Association. During his two years in office, Perry guided the MGA into a closer relationship with the Women's Golf Association of Massachusetts.

Figure 2: Extract from doc.36

[Figure 3: Coreference Chains for doc.36]

Oliver "Biff" Kelly of Weymouth succeeds John Perry as president of the Massachusetts Golf Association. "We will have continued growth in the future," said Kelly, who will serve for two years. "There's been a lot of changes and there will be continued changes as we head into the year 2000."

Figure 4: Extract from doc.38

[Figure 5: Coreference Chains for doc.38]

Details about each of the main steps of the cross-document coreference algorithm are given below.

• First, for each article, CAMP is run on the article. It produces coreference chains for all the entities mentioned in the article. For example, consider the two extracts in Figures 2 and 4. The coreference chains output by CAMP for the two extracts are shown in Figures 3 and 5.

• Next, for the coreference chain of interest within each article (for example, the coreference chain that contains "John Perry"), the SentenceExtractor module extracts all the sentences that contain the noun phrases which form the coreference chain. In other words, the SentenceExtractor module produces a "summary" of the article with respect to the entity of interest. These summaries are a special case of the query-sensitive techniques being developed at Penn using CAMP. Therefore, for doc.36 (Figure 2), since at least one of the three noun phrases ("John Perry," "he," and "Perry") in the coreference chain of interest appears in each of the three sentences in the extract, the summary produced by SentenceExtractor is the extract itself. On the other hand, the summary produced by SentenceExtractor for the coreference chain of interest in doc.38 is only the first sentence of the extract, because the only element of the coreference chain appears in this sentence.

• For each article, the VSM-Disambiguate module uses the summary extracted by the SentenceExtractor and computes its similarity with the summaries extracted from each of the other articles. Summaries having similarity above a certain threshold are considered to be regarding the same entity.

4 University of Pennsylvania's CAMP System

The University of Pennsylvania's CAMP system resolves within-document coreferences for several different classes including pronouns and proper names (Baldwin 95). It ranked among the top systems in the coreference task during the MUC-6 and the MUC-7 evaluations.
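A minimal sketch (not the authors' code) of the SentenceExtractor step described above; string containment stands in for the real mention offsets, and all names are hypothetical.

def extract_summary(sentences, chain_mentions):
    """Keep every sentence containing a mention of the chain of interest."""
    return [s for s in sentences
            if any(m in s for m in chain_mentions)]

doc36 = ["John Perry, of Weston Golf Club, announced his resignation yesterday.",
         "He was the President of the Massachusetts Golf Association.",
         "During his two years in office, Perry guided the MGA into a closer "
         "relationship with the Women's Golf Association of Massachusetts."]
print(extract_summary(doc36, {"John Perry", "He", "Perry"}))  # all 3 sentences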
The coreference chains output by CAMP enable us to gather all the information about the entity of interest in an article. This information about the entity is gathered by the SentenceExtractor module and is used by the VSM-Disambiguate module for disambiguation purposes. Consider the extract for doc.36 shown in Figure 2. We are able to include the fact that the John Perry mentioned in this article was the president of the Massachusetts Golf Association only because CAMP recognized that the "he" in the second sentence is coreferent with "John Perry" in the first. And it is this fact which actually helps VSM-Disambiguate decide that the two John Perrys in doc.36 and doc.38 are the same person.

5 The Vector Space Model

The vector space model used for disambiguating entities across documents is the standard vector space model used widely in information retrieval (Salton 89). In this model, each summary extracted by the SentenceExtractor module is stored as a vector of terms. The terms in the vector are in their morphological root form and are filtered for stop-words (words that have no information content like a, the, of, an, ...). If S1 and S2 are the vectors for the two summaries extracted from documents D1 and D2, then their similarity is computed as:

Sim(S_1, S_2) = \sum_{\text{common terms } t_j} w_{1j} \times w_{2j}

where t_j is a term present in both S1 and S2, w_{1j} is the weight of the term t_j in S1, and w_{2j} is the weight of t_j in S2.

The weight of a term t_j in the vector S_i for a summary is given by:

w_{ij} = \frac{tf \times \log(N/df)}{\sqrt{s_{i1}^2 + s_{i2}^2 + \cdots + s_{in}^2}}

where tf is the frequency of the term t_j in the summary, N is the total number of documents in the collection being examined, and df is the number of documents in the collection that the term t_j occurs in. The denominator \sqrt{s_{i1}^2 + s_{i2}^2 + \cdots + s_{in}^2} is the cosine normalization factor and is equal to the Euclidean length of the vector S_i.

The VSM-Disambiguate module, for each summary S_i, computes the similarity of that summary with each of the other summaries. If the similarity computed is above a pre-defined threshold, then the entities of interest in the two summaries are considered to be coreferent.

6 Experiments

The cross-document coreference system was tested on a highly ambiguous test set which consisted of 197 articles from 1996 and 1997 editions of the New York Times. The sole criterion for including an article in the test set was the presence or absence of a string in the article which matched the "/John.*?Smith/" regular expression. In other words, all of the articles either contained the name John Smith or contained some variation with a middle initial/name. The system did not use any New York Times data for training purposes. The answer keys regarding the cross-document chains were manually created, but the scoring was completely automated.

6.1 Analysis of the Data

There were 35 different John Smiths mentioned in the articles. Of these, 24 only had one article which mentioned them. The other 173 articles were regarding the 11 remaining John Smiths. The background of these John Smiths, and the number of articles pertaining to each, varied greatly. Descriptions of a few of the John Smiths are: chairman and CEO of General Motors, assistant track coach at UCLA, the legendary explorer and main character in Disney's Pocahontas, and former president of the Labour Party of Britain.
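To make the vector space model of Section 5 concrete, here is a minimal sketch of the tf-idf weighting and dot-product similarity; stop-word filtering and morphological rooting are omitted, and the document frequencies below are hypothetical.

import math
from collections import Counter

def weights(summary_tokens, df, n_docs):
    """tf * log(N/df) weights with cosine (Euclidean-length) normalization."""
    tf = Counter(summary_tokens)
    raw = {t: c * math.log(n_docs / df[t]) for t, c in tf.items() if df.get(t)}
    norm = math.sqrt(sum(w * w for w in raw.values())) or 1.0
    return {t: w / norm for t, w in raw.items()}

def sim(s1, s2):
    """Dot product over the terms shared by the two weight vectors."""
    return sum(w * s2[t] for t, w in s1.items() if t in s2)

# Hypothetical document frequencies over a 197-article collection:
df = {"golf": 5, "president": 40, "massachusetts": 8}
v1 = weights(["golf", "president", "massachusetts"], df, 197)
v2 = weights(["golf", "massachusetts"], df, 197)
print(sim(v1, v2))  # compare against a pre-defined threshold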
7 Scoring the Output

In order to score the cross-document coreference chains output by the system, we had to map the cross-document coreference scoring problem to a within-document coreference scoring problem. This was done by creating a meta document consisting of the file names of each of the documents that the system was run on. Assuming that each of the documents in the data set was about a single John Smith, the cross-document coreference chains produced by the system could now be evaluated by scoring the corresponding within-document coreference chains in the meta document.

We used two different scoring algorithms for scoring the output. The first was the standard algorithm for within-document coreference chains which was used for the evaluation of the systems participating in the MUC-6 and the MUC-7 coreference tasks. The shortcomings of the MUC scoring algorithm when used for the cross-document coreference task forced us to develop a second algorithm. Details about both these algorithms follow.

7.1 The MUC Coreference Scoring Algorithm

(The exposition of this scorer has been taken nearly entirely from (Vilain 95).)

The MUC algorithm computes precision and recall statistics by looking at the number of links identified by a system compared to the links in an answer key. In the model-theoretic description of the algorithm that follows, the term "key" refers to the manually annotated coreference chains (the truth) while the term "response" refers to the coreference chains output by a system. An equivalence set is the transitive closure of a coreference chain. The algorithm, developed by (Vilain 95), computes recall in the following way.

First, let S be an equivalence set generated by the key, and let R_1 ... R_m be equivalence classes generated by the response. Then we define the following functions over S:

• p(S) is a partition of S relative to the response. Each subset of S in the partition is formed by intersecting S and those response sets R_i that overlap S. Note that the equivalence classes defined by the response may include implicit singleton sets; these correspond to elements that are mentioned in the key but not in the response. For example, say the key generates the equivalence class S = {A B C D}, and the response is simply <A-B>. The relative partition p(S) is then {A B}, {C} and {D}.

• c(S) is the minimal number of "correct" links necessary to generate the equivalence class S. It is clear that c(S) is one less than the cardinality of S, i.e., c(S) = |S| - 1.

• m(S) is the number of "missing" links in the response relative to the key set S. As noted above, this is the number of links necessary to fully reunite the components of the p(S) partition. We note that this is simply one fewer than the number of elements in the partition, that is, m(S) = |p(S)| - 1.

[Figure 6: Truth]
[Figure 7: Response: Example 1]

Looking in isolation at a single equivalence class in the key, the recall error for that class is just the number of missing links divided by the number of correct links, i.e., m(S)/c(S). Recall in turn is

Recall = \frac{c(S) - m(S)}{c(S)} = \frac{(|S| - 1) - (|p(S)| - 1)}{|S| - 1} = \frac{|S| - |p(S)|}{|S| - 1}

Precision is computed by switching the roles of the key and response in the above formulation.
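A minimal sketch of the MUC recall computation just described, under the usual convention that the score is aggregated over all key equivalence sets; chains are represented as sets of mention identifiers, and the function names are mine.

def muc_recall(key_sets, response_sets):
    num = den = 0
    for s in key_sets:
        # Partition of S relative to the response; mentions absent from
        # every response set form implicit singletons.
        overlaps = [s & r for r in response_sets if s & r]
        covered = set().union(*overlaps) if overlaps else set()
        p_s = len(overlaps) + len(s - covered)
        num += len(s) - p_s        # |S| - |p(S)|
        den += len(s) - 1          # |S| - 1
    return num / den if den else 1.0

def muc_precision(key_sets, response_sets):
    return muc_recall(response_sets, key_sets)  # roles switched

key = [{"A", "B", "C", "D"}]
resp = [{"A", "B"}]
print(muc_recall(key, resp))   # (4 - 3) / (4 - 1) = 0.333...
print(muc_precision(key, resp))  # 1.0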
7.2 Shortcomings of the MUC Scoring Algorithm

While the (Vilain 95) scorer provides intuitive results for coreference scoring, it does not work as well in the context of evaluating cross-document coreference. There are two main reasons.

1. The algorithm does not give any credit for separating out singletons (entities that occur in chains consisting only of one element, the entity itself) from other chains which have been identified. This follows from the convention in coreference annotation of not identifying those entities that are markable as possibly coreferent with other entities in the text. Rather, entities are only marked as being coreferent if they actually are coreferent with other entities in the text. This shortcoming could be easily enough overcome with different annotation conventions and with minor changes to the algorithm, but it is worth noting.

2. All errors are considered to be equal. The MUC scoring algorithm penalizes the precision numbers equally for all types of errors. It is our position that, for certain tasks, some coreference errors do more damage than others.

[Figure 8: Response: Example 2]

Consider the following examples: suppose the truth contains two large coreference chains and one small one (Figure 6), and suppose Figures 7 and 8 show two different responses. We will explore two different precision errors. The first error will connect one of the large coreference chains with the small one (Figure 7). The second error occurs when the two large coreference chains are related by the errant coreferent link (Figure 8). It is our position that the second error is more damaging because, compared to the first error, the second error makes more entities coreferent that should not be. This distinction is not reflected in the (Vilain 95) scorer, which scores both responses as having a precision score of 90% (Figure 9).

7.3 Our B-CUBED Scoring Algorithm

(The main idea of this algorithm was initially put forth by Alan W. Biermann of Duke University.)

Imagine a scenario where a user recalls a collection of articles about John Smith, finds a single article about the particular John Smith of interest and wants to see all the other articles about that individual. In commercial systems with news data, precision is typically the desired goal in such settings. As a result we wanted to model the accuracy of the system on a per-document basis and then build a more global score based on the sum of the user's experiences.

Consider the case where the user selects document 6 in Figure 8. This is a good outcome, with all the relevant documents being found by the system and no extraneous documents. If the user selected document 1, then there are 5 irrelevant documents in the system's output, so precision is quite low. The goal of our scoring algorithm then is to model the precision and recall, on average, when looking for more documents about the same person based on selecting a single document.

Instead of looking at the links produced by a system, our algorithm looks at the presence/absence of entities in the chains produced. Therefore, we compute the precision and recall numbers for each entity in the document. The numbers computed with respect to each entity in the document are then combined to produce final precision and recall numbers for the entire output. For an entity, i, we define the precision and recall with respect to that entity in Figure 10. The final precision and recall numbers are computed by the following two formulae:

Final Precision = \sum_{i=1}^{N} w_i \times Precision_i

Final Recall = \sum_{i=1}^{N} w_i \times Recall_i

where N is the number of entities in the document, and w_i is the weight assigned to entity i in the document.
For all the examples and the experiments in this paper we assign equal weights to each entity, i.e., w_i = 1/N. We have also looked at the possibility of using other weighting schemes. Further details about the B-CUBED algorithm, including a model-theoretic version of the algorithm, can be found in (Bagga 98a).

Consider the response shown in Figure 7. Using the B-CUBED algorithm, the precision for entity-6 in the document equals 2/7 because the chain output for the entity contains 7 elements, 2 of which are correct, namely {6,7}. The recall for entity-6, however, is 2/2 because the chain output for the entity has 2 correct elements in it and the "truth" chain for the entity only contains those 2 elements. Figure 9 shows the final precision and recall numbers computed by the B-CUBED algorithm for the examples shown in Figures 7 and 8. The figure also shows the precision and recall numbers for each entity (ordered by entity numbers).

Figure 9: Scores of both algorithms on the examples (the per-entity terms of the original table are garbled in the source; the final scores are):

Output       MUC Algorithm          B-CUBED Algorithm (equal weights for every entity)
Example 1    P: 9/10 (90%), R: 100%   P: 76%, R: 100%
Example 2    P: 9/10 (90%), R: 100%   P: 58%, R: 100%

Precision_i = (number of correct elements in the output chain containing entity_i) / (number of elements in the output chain containing entity_i)    (1)

Recall_i = (number of correct elements in the output chain containing entity_i) / (number of elements in the truth chain containing entity_i)    (2)

Figure 10: Definitions of precision and recall for an entity i

7.4 Overcoming the Shortcomings of the MUC Algorithm

The B-CUBED algorithm does overcome the two main shortcomings of the MUC scoring algorithm discussed earlier. It implicitly overcomes the first shortcoming of the MUC-6 algorithm by calculating the precision and recall numbers for each entity in the document (irrespective of whether an entity is part of a coreference chain). Consider the responses shown in Figures 7 and 8. We had mentioned earlier that the error of linking the two large chains in the second response is more damaging than the error of linking one of the large chains with the smaller chain in the first response. Our scoring algorithm takes this into account and computes a final precision of 58% and 76% for the two responses respectively. In comparison, the MUC algorithm computes a precision of 90% for both responses (Figure 9).

8 Results

Figure 11 shows the precision, recall, and F-Measure (with equal weights for both precision and recall) using the B-CUBED scoring algorithm. The Vector Space Model in this case constructed the space of terms only from the summaries extracted by SentenceExtractor. In comparison, Figure 12 shows the results (using the B-CUBED scoring algorithm) when the vector space model constructed the space of terms from the articles input to the system (it still used the summaries when computing the similarity). The importance of using CAMP to extract summaries is verified by comparing the highest F-Measures achieved by the system for the two cases. The highest F-Measure for the former case is 84.6% while the highest F-Measure for the latter case is 78.0%. In comparison, for this task, named-entity tools like NetOwl and Textract would mark all the John Smiths the same. Their performance using our scoring algorithm is 23% precision and 100% recall.
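A minimal sketch of the B-CUBED computation defined in Figure 10, with equal weights w_i = 1/N; chains are represented as sets of entity (document) identifiers, and the chain layout below follows the Figure 6/7 examples. It reproduces the 76% precision of Example 1.

def chain_of(entity, chains):
    return next(c for c in chains if entity in c)  # assumes full coverage

def b_cubed(truth, output):
    entities = set().union(*truth)
    n = len(entities)
    precision = recall = 0.0
    for e in entities:
        t, o = chain_of(e, truth), chain_of(e, output)
        correct = len(t & o)          # correct elements of the output chain
        precision += correct / len(o)
        recall += correct / len(t)
    return precision / n, recall / n

# Example 1 (Figure 7): truth has chains {1-5}, {6,7}, {8-12}; the
# response merges the small chain {6,7} into {8-12}.
truth = [set(range(1, 6)), {6, 7}, set(range(8, 13))]
resp = [set(range(1, 6)), {6, 7} | set(range(8, 13))]
print(b_cubed(truth, resp))  # (0.7619..., 1.0): 76% precision, 100% recall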
[Figure 11: Precision, recall, and F-measure vs. threshold, using the B-CUBED algorithm with training on the summaries.]

[Figure 12: Precision, recall, and F-measure vs. threshold, using the B-CUBED algorithm with training on entire articles.]

Figures 13 and 14 show the precision, recall, and F-Measure calculated using the MUC scoring algorithm. Also, the baseline case, when all the John Smiths are considered to be the same person, achieves 83% precision and 100% recall. The high initial precision is mainly due to the fact that the MUC algorithm assumes that all errors are equal.

We have also tested our system on other classes of cross-document coreference, like names of companies and events. Details about these experiments can be found in (Bagga 98b).

[Figure 13: Precision, recall, and F-measure vs. threshold, using the MUC algorithm with training on the summaries.]

[Figure 14: Precision, recall, and F-measure vs. threshold, using the MUC algorithm with training on entire articles.]

9 Conclusions

As a novel research problem, cross-document coreference provides a different perspective from related phenomena like named entity recognition and within-document coreference. Our system takes summaries about an entity of interest and uses various information retrieval metrics to rank the similarity of the summaries. We found it quite challenging to arrive at a scoring metric that satisfied our intuitions about what was good system output vs. bad, but we have developed a scoring algorithm that is an improvement for this class of data over other within-document coreference scoring algorithms. Our results are quite encouraging, with potential performance being as good as 84.6% (F-Measure).

10 Acknowledgments

The first author was supported in part by a Fellowship from IBM Corporation, and in part by the Institute for Research in Cognitive Science at the University of Pennsylvania.

References

Bagga, Amit, and Breck Baldwin. Algorithms for Scoring Coreference Chains. To appear at The First International Conference on Language Resources and Evaluation Workshop on Linguistic Coreference, May 1998.

Bagga, Amit, and Breck Baldwin. How Much Processing Is Required for Cross-Document Coreference? To appear at The First International Conference on Language Resources and Evaluation Workshop on Linguistic Coreference, May 1998.
Baldwin, Breck, et al. University of Pennsylvania: Description of the University of Pennsylvania System Used for MUC-6. Proceedings of the Sixth Message Understanding Conference (MUC-6), pp. 177-191, November 1995.

Grishman, Ralph. Whither Written Language Evaluation? Proceedings of the Human Language Technology Workshop, pp. 120-125, March 1994, San Francisco: Morgan Kaufmann.

Proceedings of the Seventh Message Understanding Conference (MUC-7), April 1998.

Salton, Gerard. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer, 1989, Reading, MA: Addison-Wesley.

Vilain, Marc, et al. A Model-Theoretic Coreference Scoring Scheme. Proceedings of the Sixth Message Understanding Conference (MUC-6), pp. 45-52, November 1995, San Francisco: Morgan Kaufmann.
1998
12
SOLVING ANALOGIES ON WORDS: AN ALGORITHM
Yves Lepage
ATR Interpreting Telecommunications Research Labs, Hikaridai 2-2, Seika-tyo, Soraku-gun, Kyoto 619-0288, Japan
lepage@itl.atr.co.jp

Introduction

To introduce the algorithm presented in this paper, we take a path that is inverse to the historical development of the idea of analogy (see (Hoffman 95)). This is necessary, because a certain incomprehension is faced when speaking about linguistic analogy, i.e., it is generally given a broader and more psychological definition. Also, with our proposal being computational, it is impossible to ignore works about analogy in computer science, which has come to mean artificial intelligence.

1 A Survey of Works on Analogy

This paper is not intended to be an exhaustive study. For a more comprehensive study on the subject, see (Hoffman 95).

1.1 Metaphors, or Implicit Analogies

Beginning with works in psychology and artificial intelligence, (Gentner 83) is a milestone study of a possible modeling of analogies such as "an atom is like the solar system" adequate for artificial intelligence. In these analogies, two domains are mapped, one onto the other, thus modeling of the domain becomes necessary:

sun -> nucleus, planet -> electron

In addition, properties (expressed by clauses, formulae, etc.) are transferred from one domain onto the other, and their number somehow determines the quality of the analogy:

attracts(sun, planet) -> attracts(nucleus, electron)
moremassive(sun, planet) -> moremassive(nucleus, electron)

However, Gentner's explicit description of sentences such as "an A is like a B" as analogies is subject to criticism. Others (e.g. (Steinhart 94)) prefer to call these sentences metaphors (if the fact that properties are carried over characterises such sentences, then etymologically they are metaphors: in Greek, pherein: to carry; meta-: between, among, with, after; "metaphor" means to transfer, to carry over), the validity of which rests on sentences of the kind "A is to B as C is to D", for which the name analogy (in Greek, logos, -logia: ratio, proportion, reason, discourse; ana-: top-down, again, anew; "analogy" means the same proportions, similar ratios) is reserved. In other words, some metaphors are supported by analogies. For instance, the metaphor "an atom is like the solar system" relies on the analogy "an electron is to the nucleus, as a planet is to the sun" (this complies with Aristotle's definitions in the Poetics).

The answer of the AI community is complex because they have headed directly to more complex problems. For them, in analogies or metaphors (Hall 89):

• two different domains appear;
• for both domains, modeling of a knowledge base is necessary;
• mapping of objects and transfer of properties are different operations;
• the quality of analogies has to be evaluated as a function of the strength (number, truth, etc.) of properties transferred.

We must drastically simplify all this and enunciate a simpler problem (whose resolution may not necessarily be simple). This can be achieved by simplifying data types, and consequently the characteristics of the problem.

1.2 Multiplicity vs Unicity of Domains

In the field of natural language processing, there have been plenty of works on pronunciation of English by analogy, some being very much concerned with reproducing human behavior (see (Damper & Eastmond 96)).
Here is an illustration of the task from (Pirelli & Federici 94):

vane --g--> /vejn/
 |f
 v
sane --h--> x = /sejn/

Similarly to AI approaches, two domains appear (graphemic and phonemic). Consequently, the functions f, g and h are of different types because their domains and ranges are of different data types. Similarly to AI again, a common feature in such pronouncing systems is the use of data bases of written and phonetic forms. Regarding his own model, (Yvon 94) comments that:

The [...] model crucially relies upon the existence of numerous paradigmatic relationships in lexical data bases.

Paradigmatic relationships being relationships in which four words intervene, they are in fact morphological analogies: "reaction is to reactor, as faction is to factor".

reactor --f--> reaction
  |g              |g
  v               v
factor  --f--> faction

Contrasting sharply with AI approaches, morphological analogies apply in only one domain, that of words. As a consequence, the number of relationships between analogical terms decreases from three (f, g and h) to two (f and g). Moreover, because all four terms intervening in the analogy are from the same domain, the domains and ranges of f and g are identical. Finally, morphological analogies can be regarded as simple equations independent of any knowledge about the language in which they are written. This standpoint eliminates the need for any knowledge base or dictionary.

reactor --f--> reaction
  |g              |g
  v               v
factor  --f--> x?

1.3 Unicity vs Multiplicity of Changes

Solving morphological analogies remains difficult because several simultaneous changes may be required to transform one word into a second (for instance, doer -> undo requires the deletion of the suffix -er and the insertion of the prefix un-). This problem has yet to be solved satisfactorily. For example, in (Yvon 94), only one change at a time is allowed, and multiple changes are captured by successive applications of morphological analogies (cascade model). However, there are cases in the morphology of some languages where multiple changes at the same time are mandatory, for instance in Semitic languages.

"One change at a time" is also found in (Nagao 84) for a translation method, called translation by analogy, where the translation of an input sentence is an adaptation of translations of similar sentences retrieved from a data base. The difficulty of handling multiple changes is remedied by feeding the system with new examples differing by only one word commutation at a time. (Sadler and Vendelmans 90) proposed a different solution with an algebra on trees: differences on strings are reflected by adding or subtracting trees. Although this seems a more convincing answer, the use of data bases would resume, as would the multiplicity of domains.

Our goal is a true analogy-solver, i.e., an algorithm which, on receiving three words as input, outputs a word analogical to the input. For that, we thus have to answer the hard problem of: (1) performing multiple changes, (2) using a unique data type (words), (3) without dictionary nor any external knowledge.

1.4 Analogies on Words

We have finished our review of the problem and ended up with what was the starting point of our work. In linguistic works, analogy is defined by Saussure, after Humboldt and Baudouin de Courtenay, as the operation by which, given two forms of a given word, and only one form of a second word, the missing form is coined (Latin: oratōr (orator, speaker) and honor (honour), nominative singular; ōrātōrem and honōrem, accusative singular): "honor is to honōrem as ōrātor is to ōrātōrem", noted ōrātōrem : ōrātor = honōrem : honor.
This is the same definition as the one given by Aristotle himself, "A is to B as C is to D", postulating identity of types for A, B, C, and D. However, while analogy has been mentioned and used, algorithmic ways to solve analogies seem never to have been proposed, maybe because the operation is so "intuitive". We (Lepage & Ando 96) recently gave a tentative computational explanation which was not always valid because false analogies were captured. It did not constitute an algorithm either.

The only work on solving analogies on words seems to be Copycat ((Hofstadter et al. 94) and (Hoffman 95)), which solves such puzzles as: abc : abbccc = ijk : x. Unfortunately it does not seem to use a truly dedicated algorithm; rather, following the AI approach, it uses a formalisation of the domain with such functions as "previous in alphabet", "rank in alphabet", etc.

2 Foundations of the Algorithm

2.1 The First Term as an Axis

(Itkonen and Haukioja 97) give a program in Prolog to solve analogies in sentences, as a refutation of Chomsky, according to whom analogy would not be operational in syntax, because it delivers non-grammatical sentences. That analogy would apply also to syntax was advocated decades ago by Hermann Paul and Bloomfield. Chomsky's claim is unfair, because it supposes that analogy applies only on the symbol level. Itkonen and Haukioja show that analogy, when controlled by some structural level, does deliver perfectly grammatical sentences. What is of interest to us is the essence of their method, which is the seed for our algorithm:

Sentence D is formed by going through sentences B and C one element at a time and inspecting the relations of each element to the structure of sentence A (plus the part of sentence D that is ready).

Hence, sentence A is the axis against which sentences B and C are compared, and by opposition to which output sentence D is built.

reader : unreadable = doer : x  =>  x = undoable

The method will thus be: (a) look for those parts which are not common to A and B on one hand, and not common to A and C on the other, and (b) put them together in the right order.

2.2 Common Subsequences

Looking for common subsequences of A and B (resp. A and C) solves problem (a) by complementation. (Wagner & Fischer 74) is a method to find longest common subsequences by computing edit distance matrices, yielding the minimal number of edit operations (insertion, deletion, substitution) necessary to transform one string into another. For instance, the following matrices give the distance between like and unlike on one hand, and between like and known on the other hand, in their bottom right cells: dist(like, unlike) = 2 and dist(like, known) = 5.

        u  n  l  i  k  e            k  n  o  w  n
    l   1  2  2  3  4  5        l   1  2  3  4  5
    i   2  2  3  2  3  4        i   2  2  3  4  5
    k   3  3  3  3  2  3        k   2  3  3  4  5
    e   4  4  4  4  3  2        e   3  3  4  4  5

2.3 Similitude between Words

We call similitude between A and B the length of their longest common subsequence. It is also equal to the length of A minus the number of its characters deleted or replaced to produce B. This number we call pdist(A, B), because it is a pseudo-distance, which can be computed exactly as the edit distances, except that insertions cost 0:

sim(A, B) = |A| - pdist(A, B)

For instance, pdist(unlike, like) = 2, while pdist(like, unlike) = 0.
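A minimal sketch (not the paper's code) of the pseudo-distance just defined: the Wagner and Fischer dynamic program in which insertions into B cost 0 while deletions and substitutions cost 1. The matrices it fills are shown next.

def pdist(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i                  # deleting a prefix of A costs its length
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1,                          # delete
                          d[i][j - 1],                              # insert, free
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitute
    return d[m][n]

def sim(a, b):
    return len(a) - pdist(a, b)      # similitude = |A| - pdist(A, B)

print(pdist("unlike", "like"), pdist("like", "unlike"))  # 2 0
print(pdist("like", "known"))                            # 3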
The corresponding pseudo-distance matrices are:

pdist(unlike, like):            pdist(like, unlike):

        l  i  k  e                    u  n  l  i  k  e
    u   1  1  1  1                l   1  1  0  0  0  0
    n   2  2  2  2                i   2  2  1  0  0  0
    l   2  2  2  2                k   3  3  2  1  0  0
    i   3  2  2  2                e   4  4  3  2  1  0
    k   4  3  2  2
    e   5  4  3  2

Characters inserted into B or C may be left aside, precisely because they are those characters of B and C, absent from A, that we want to assemble into the solution, D.

As A is the axis in the resolution of analogy, graphically we make it the vertical axis around which the computation of pseudo-distances takes place. For instance, for like : unlike = known : x, the two matrices share the vertical axis A = like, with unlike written to its right and known (mirrored) to its left; in standard orientation they are:

pdist(like, known):             pdist(like, unlike):

        k  n  o  w  n                 u  n  l  i  k  e
    l   1  1  1  1  1             l   1  1  0  0  0  0
    i   2  2  2  2  2             i   2  2  1  0  0  0
    k   2  2  2  2  2             k   3  3  2  1  0  0
    e   3  3  3  3  3             e   4  4  3  2  1  0

2.4 The Coverage Constraint

It is easy to verify that there is no solution to an analogy if some characters of A appear neither in B nor in C. The contrapositive says that, for an analogy to hold, any character of A has to appear in either B or C. Hence, the sum of the similitudes of A with B and C must be greater than or equal to its length:

sim(A, B) + sim(A, C) >= |A|, or, equivalently, |A| >= pdist(A, B) + pdist(A, C)

When the length of A is greater than the sum of the pseudo-distances, some subsequences of A are common to all strings in the same order. Such subsequences have to be copied into the solution D. We call com(A, B, C, D) the sum of the lengths of such subsequences. The delicate point is that this sum depends precisely on the solution D being currently built by the algorithm. To summarise, for the analogy A : B = C : D to hold, the following constraint must be verified:

|A| = pdist(A, B) + pdist(A, C) + com(A, B, C, D)

3 The Algorithm

3.1 Computation of Matrices

Our method relies on the computation of two pseudo-distance matrices between the three first terms of the analogy. A result by (Ukkonen 85) says that it is sufficient to compute a diagonal band, plus two extra bands on each of its sides, in the edit distance matrix in order to get the exact distance, if the value of the overall distance is known to be less than some given threshold. This result applies to pseudo-distances, and is used to reduce the computation of the two pseudo-distance matrices. The width of the extra bands is obtained by trying to satisfy the coverage constraint with the value of the current pseudo-distance in the other matrix.

proc compute_matrices(A, B, C, pdAB, pdAC)
    compute pseudo-distance matrices with extra bands of pdAB/2 and pdAC/2
    if |A| >= pdist(A, B) + pdist(A, C)
        main_component
    else
        compute_matrices(A, B, C,
                         max(|A| - pdist(A, C), pdAB + 1),
                         max(|A| - pdist(A, B), pdAC + 1))
    end if
end proc compute_matrices
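A minimal sketch of the pre-checks from Sections 2.4 and 3.1, reusing the pdist function sketched earlier; since the coverage constraint is necessary but not sufficient, the function name hedges accordingly.

def may_have_solution(a, b, c):
    """Necessary conditions for A : B = C : x to have a solution."""
    if not set(a) <= set(b) | set(c):   # some character of A is in neither B nor C
        return False
    return len(a) >= pdist(a, b) + pdist(a, c)

print(may_have_solution("like", "unlike", "known"))  # True: 4 >= 0 + 3
print(may_have_solution("like", "in", "on"))         # False: 'l', 'k', 'e' missing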
3.2 Main Component

Once enough of the matrices has been computed, the principle of the algorithm is to follow the paths along which longest common subsequences are found, simultaneously in both matrices, copying characters into the solution accordingly. At all times, the positions in both matrices must be on the same horizontal line, i.e. at the same position in A, in order to ensure the right order while building the solution, D.

Determining the paths is done by comparing the current cell in the matrix with its three previous ones (horizontal, vertical or diagonal), according to the technique in (Wagner & Fischer 74). As a consequence, paths are followed from the end of the words down to their beginning. The nine possible combinations (three directions in two matrices) can be divided into two groups: either the directions are the same in both matrices, or they are different.

The following sketches the algorithm. com(A, B, C, D) has been initialised to |A| - (pdist(A, B) + pdist(A, C)); iA, iB and iC are the current positions in A, B and C; dirAB (resp. dirAC) is the direction of the path in matrix A x B (resp. A x C) from the current position; "copy" means to copy a character from a word at the beginning of D and to move to the previous character in that word.

if constraint(iA, iB, iC, com(A, B, C, D))
    case dirAB = dirAC = diagonal:
        if A[iA] = B[iB] = C[iC]
            decrement com(A, B, C, D)
        end if
        copy B[iB] + C[iC] - A[iA]
        (In this case, we move in the three words at the same time. Also, the
        character arithmetic factors, in view of generalisations, different
        operations: if the three current characters in A, B and C are equal,
        copy this character; otherwise copy that character from B or C that
        is different from the one in A. If all three are different, this is
        a failure.)
    case dirAB = dirAC = horizontal:
        copy a character from the word with
        min(pdist(A[1..iA], B[1..iB]), pdist(A[1..iA], C[1..iC]))
        (The word with less similitude with A is chosen, so as to make up
        for its delay.)
    case dirAB = dirAC = vertical:
        move only in A (change horizontal line)
    case dirAB != dirAC:
        if dirAB = horizontal
            copy B[iB]
        else if dirAB = vertical
            move in A and C
        else
            same thing, exchanging B and C
        end if
end if

3.3 Early Termination in Case of Failure

Complete computation of both matrices is not necessary to detect a failure. It is obvious when a letter of A appears neither in B nor in C. This may already be detected before any matrix computation. Also, checking the coverage constraint allows the algorithm to stop as soon as non-satisfying moves have been performed.

3.4 An Example

We will show how the analogy like : unlike = known : x is solved by the algorithm. The algorithm first verifies that all letters of like are present either in unlike or known. Then, the minimum computation is done for the pseudo-distance matrices, i.e. only the minimal diagonal band is computed.

[Matrix display: the minimal diagonal bands of the two pseudo-distance matrices, with A = like as the vertical axis and unlike and known on either side; garbled in the source.]

As the coverage constraint is verified, the main component is called. It follows the paths noted by circled values in the matrices.

[Matrix display: the same matrices with the followed paths marked by circled values; garbled in the source.]

The succession of moves triggers the following copies into the solution:

dirAB        dirAC        copy
diagonal     diagonal     n
diagonal     diagonal     w
diagonal     diagonal     o
diagonal     diagonal     n
horizontal   horizontal   k
horizontal   diagonal     n
horizontal   diagonal     u

At each step, the coverage constraint being verified, finally the solution x = unknown is output.

4 Properties and Coverage

4.1 Trivial Cases, Mirroring

Trivial cases of analogies are, of course, solved by the algorithm, like: A : A = A : x => x = A, or A : A = C : x => x = C. Also, by construction, A : B = C : x and A : C = B : x deliver the same solution. With this construction, mirroring poses no problem. If we note mirror(A) the mirror of word A, then A : B = C : D holds if and only if mirror(A) : mirror(B) = mirror(C) : mirror(D) holds.

4.2 Prefixing, Suffixing, Parallel Infixing

Appendix A lists a number of examples, actually solved by the algorithm, from simple to complex, which illustrate the algorithm's performance.

4.3 Reduplication and Permutation

The previous form of the algorithm does not produce reduplication. This would be necessary if we wanted to obtain, for example, plurals in Indonesian (orang (human being) singular, orang-orang plural; burung (bird)):

orang : orang-orang = burung : x  =>  x = burung-burung

In this case, our algorithm delivers x = orang-burung, because preference is given to leaving prefixes unchanged.
However, the algorithm may be easily modified so that it applies repeatedly so as to obtain the desired solution. (Similarly, it is easy to apply the algorithm in a transducer-like way so that it modifies, by analogy, parts of an input string.)

Permutation is not captured by the algorithm. An example (q with a and u) in Proto-Semitic is: yaqtilu : yuqtilu = qatal : qutal.

4.4 Language-independence/Code-dependence

Because the present algorithm performs computation only on the symbol level, it may be applied to any language. It is thus language-independent. This is fortunate, as analogy in linguistics certainly derives from a more general psychological operation ((Gentner 83), (Itkonen 94)), which seems to be universal among human beings. Examples in Section A illustrate the language independence of the algorithm.

Conversely, the symbols determine the granularity of the analogies computed. Consequently, a commutation not reflected in the coding system will not be captured. This may be illustrated by a Japanese example in three different codings: the native writing system, the Hepburn transcription and the official, strict recommendation (kunrei).

Kanji/Kana: 待つ : 待ちます = 働く : x
Hepburn: matsu : machimasu = hataraku : x
Kunrei: matu : matimasu = hataraku : x  =>  x = hatarakimasu

The algorithm does not solve the first two analogies (solutions: 働きます, hatarakimasu) because it does not solve the elementary analogies つ : ち = く : き and tsu : chi = ku : ki, which are beyond the symbol level. (One could imagine extending the algorithm by parametrising it with such predefined analogical relations.)

More generally speaking, the interaction of analogy with coding seems the basis of a frequent reasoning principle:

f(A) : f(B) = f(C) : x  =>  A : B = C : f^(-1)(x)

Only the first analogy holds on the symbol level and, as is, is solved by our algorithm; f is an encoding function for which an inverse exists. A striking application of this principle is the resolution of some Copycat puzzles, like: abc : abd = ijk : x => x = ijl. Using a binary ASCII representation, which reflects sequence in the alphabet, our algorithm produces:

011000010110001001100011 : 011000010110001001100100 = 011010010110101001101011 : x
=> x = 011010010110101001101100, i.e. ijl

Set in this way, even analogies of geometrical type can be solved under a convenient representation. An adequate description (or coding), with no reduplication, uses terms like obj(big), obj(small), obj=circle and obj=square; the coded analogy and the solution x, which our algorithm actually produces, substitute obj=square for obj=circle in the same two-object configuration (the display is garbled in the source).

In other words, coding is the key to many analogies. More generally, we follow (Itkonen and Haukioja 97) when they claim that analogy is an operation against which formal representations should also be assessed. But for that, of course, we needed an automatic analogy-solver.

Conclusion

We have proposed an algorithm which solves analogies on words, i.e., when possible, it coins a fourth word when given three words. It relies on the computation of pseudo-distances between strings. The verification of a constraint, relevant for analogy, limits the computation of matrix cells and permits early termination in case of failure.

This algorithm has been proved to handle many different cases in many different languages. In particular, it handles parallel infixing, a property necessary for the morphological description of Semitic languages. Reduplication is an easy extension.

This algorithm is independent of any language, but not coding-independent: it constitutes a trial at inspecting how much can be achieved using only pure computation on symbols, without any external knowledge. We are inclined to advocate that much in the matter of usual analogies is a question of symbolic representation, i.e. a question of encoding into a form solvable by a purely symbolic algorithm like the one we proposed.
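A minimal sketch of the coding principle stated above, with f mapping letters to their 8-bit ASCII codes so that alphabetic succession becomes visible on the symbol level; solve_analogy stands for the algorithm of this paper and is assumed, not implemented here.

def encode(word):                       # f: letters -> binary ASCII
    return "".join(format(ord(ch), "08b") for ch in word)

def decode(bits):                       # f^(-1): binary ASCII -> letters
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

def solve_coded(a, b, c, solve_analogy):
    """Apply f, solve on the symbol level, then apply f^(-1)."""
    x = solve_analogy(encode(a), encode(b), encode(c))
    return decode(x) if x is not None else None

# With a working solver, solve_coded("abc", "abd", "ijk", solver) == "ijl".
print(encode("abc"))   # 011000010110001001100011, as in the text above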
A Examples

The following examples show actual resolution of analogies by the algorithm. They illustrate what the algorithm achieves on real linguistic examples.

A.1 Insertion or deletion of prefixes or suffixes

Latin: oratorem : orator = honorem : x  =>  x = honor
French: répression : répressionnaire = réaction : x  =>  x = réactionnaire
Malay: tinggal : ketinggalan = duduk : x  =>  x = kedudukan
Chinese: [characters garbled in the source]

A.2 Exchange of prefixes or suffixes

English: wolf : wolves = leaf : x  =>  x = leaves
Malay: kawan : mengawani = keliling : x  =>  x = mengelilingi
Malay: keras : mengeraskan = kena : x  =>  x = mengenakan
Polish: wyszedłeś : wyszłaś = poszedłeś : x  =>  x = poszłaś

A.3 Infixing and umlaut

Japanese: [characters garbled in the source]
German: lang : längste = scharf : x  =>  x = schärfste
German: fliehen : er floh = schließen : x  =>  x = er schloß
Polish: zgubiony : zgubieni = zmartwiony : x  =>  x = zmartwieni
Akkadian: ukaššad : uktanaššad = ušakšad : x  =>  x = uštanakšad

A.4 Parallel infixing

Proto-Semitic: yasriqu : sariq = yanqimu : x  =>  x = naqim
Arabic: huzila : huzāl = sudi'a : x  =>  x = sudā'
Arabic: arsala : mursilun = aslama : x  =>  x = muslimun

References

Robert I. Damper & John E.G. Eastmond. Pronouncing Text by Analogy. Proceedings of COLING-96, Copenhagen, August 1996, pp. 268-269.

Dedre Gentner. Structure-Mapping: A Theoretical Framework for Analogy. Cognitive Science, 1983, vol. 7, no. 2, pp. 155-170.

Rogers P. Hall. Computational Approaches to Analogical Reasoning: A Comparative Analysis. Artificial Intelligence, Vol. 39, No. 1, May 1989, pp. 39-120.

Douglas Hofstadter and the Fluid Analogies Research Group. Fluid Concepts and Creative Analogies. Basic Books, New York, 1994.

Robert R. Hoffman. Monster Analogies. AI Magazine, Fall 1995, vol. 11, pp. 11-35.

Esa Itkonen. Iconicity, analogy, and universal grammar. Journal of Pragmatics, 1994, vol. 22, pp. 37-53.

Esa Itkonen and Jussi Haukioja. A rehabilitation of analogy in syntax (and elsewhere). In András Kertész (ed.), Metalinguistik im Wandel: die kognitive Wende in Wissenschaftstheorie und Linguistik. Frankfurt a/M, Peter Lang, 1997, pp. 131-177.

Yves Lepage & Ando Shin-Ichi. Saussurian analogy: a theoretical account and its application. Proceedings of COLING-96, Copenhagen, August 1996, pp. 717-722.

Nagao Makoto. A Framework of a Mechanical Translation between Japanese and English by Analogy Principle. In Artificial & Human Intelligence, Alick Elithorn and Ranan Banerji (eds.), Elsevier Science Publishers, NATO 1984.

Vito Pirelli & Stefano Federici. "Derivational" paradigms in morphonology. Proceedings of COLING-94, Kyoto, August 1994, Vol. I, pp. 234-240.

Victor Sadler and Ronald Vendelmans. Pilot implementation of a bilingual knowledge bank. Proceedings of COLING-90, Helsinki, 1990, vol. 3, pp. 449-451.

Eric Steinhart. Analogical Truth Conditions for Metaphors. Metaphor and Symbolic Activity, 1994, 9(3), pp. 161-178.

Esko Ukkonen. Algorithms for Approximate String Matching. Information and Control, 64, 1985, pp. 100-118.

Robert A. Wagner and Michael J. Fischer. The String-to-String Correction Problem. Journal of the Association for Computing Machinery, Vol. 21, No. 1, January 1974, pp. 168-173.
François Yvon. Paradigmatic Cascades: a Linguistically Sound Model of Pronunciation by Analogy. Proceedings of ACL-EACL-97, Madrid, 1997, pp. 428-435.
1998
120
AN ALGORITHM FOR SOLVING ANALOGIES BETWEEN WORDS
Yves LEPAGE

Abstract (translated from the French résumé)

A review of previous work on analogy in psychology, artificial intelligence and natural language processing precedes the presentation of an algorithm for solving, at the morphological level, analogies between words. This algorithm coins a fourth word from three given words, when this is possible. For instance, given fable, fabuleux and miracle, the algorithm indeed coins miraculeux. Much more difficult cases are correctly solved by the algorithm, in particular cases of multiple infixing, necessary to account for the morphology of Semitic languages. We give the characteristics of the algorithm and mention some possible applications.

Abstract (translated from the Polish streszczenie)

After describing previous work on the question of analogy within psychology, artificial intelligence and computational linguistics, we present an algorithm for solving analogies between words at the morphological level. This algorithm creates, when possible, a fourth term on the basis of three given terms. For instance, given śpiewać, śpiewaczka and działać, the algorithm correctly creates działaczka. The algorithm solves more complicated analogy problems, as in the case of the morphology of Semitic languages, where several affixes may appear inside words at the same time. We describe the algorithm and its possible applications.

Abstract (translated from the German Zusammenfassung)

After a description of earlier work on analogy within psychology, artificial intelligence and natural language processing, an algorithm for solving word analogies at the morphological level is proposed. This algorithm derives, when possible, a fourth word from three given words. For example, aussähest is derived from nehmen, ausnähmest and sehen. More complex cases are also handled correctly, even in the morphology of Semitic languages, in which parallel infixing occurs. The algorithm is described and possible applications are pointed out.

(A Japanese abstract follows in the original; it is garbled in this source.)
1998
121
Characterizing and Recognizing Spoken Corrections in Human-Computer Dialogue
Gina-Anne Levow
MIT AI Laboratory, Room 769, 545 Technology Sq, Cambridge, MA 02139
[email protected]

Abstract

Miscommunication in speech recognition systems is unavoidable, but a detailed characterization of user corrections will enable speech systems to identify when a correction is taking place and to more accurately recognize the content of correction utterances. In this paper we investigate the adaptations of users when they encounter recognition errors in interactions with a voice-in/voice-out spoken language system. In analyzing more than 300 pairs of original and repeat correction utterances, matched on speaker and lexical content, we found overall increases in both utterance and pause duration from original to correction. Interestingly, corrections of misrecognition errors (CME) exhibited significantly heightened pitch variability, while corrections of rejection errors (CRE) showed only a small but significant decrease in pitch minimum. CME's demonstrated much greater increases in measures of duration and pitch variability than CRE's. These contrasts allow the development of decision trees which distinguish CME's from CRE's and from original inputs at 70-75% accuracy based on duration, pitch, and amplitude features.

1 Introduction

The frequent recognition errors which plague speech recognition systems present a significant barrier to widespread acceptance of this technology. The difficulty of correcting system misrecognitions is directly correlated with user assessments of system quality. The increased probability of recognition errors immediately after an error compounds this problem. Thus, it becomes crucially important to characterize the differences between original utterances and user corrections of system recognition failures, both in order to recognize when a user attempts a correction, indicating a prior recognition error, and to improve recognition accuracy on these problematic utterances. Analysis of data drawn from a field trial of a telephone-based voice-in/voice-out conversational system demonstrates significant differences between original inputs and corrections in measures of duration, pause, and pitch. These differences in turn aid in the development of decision trees which distinguish between new input and user corrections.

2 Overview

We begin by describing the framework in which the utterances used for these experiments were collected. We then describe the acoustic features used in the data analysis and decision tree design, and motivate this choice by considering some related work. A report of the results of the descriptive analysis follows and demonstrates the significant differences among original inputs and different types of corrections. We then turn to the issues involved in the actual construction and assessment of decision trees which identify spoken corrections. The best of these classifiers achieves 75% success rates. Finally, we conclude with a plan to use these results to improve recognition accuracy on spoken corrections and ease recovery from human-computer miscommunication.

3 System, Subjects, and Errors

The utterances used in the design and analysis of the decision tree classifiers were drawn from approximately 60 hours of user interactions in a field trial of the Sun Microsystems SpeechActs system (Yankelovich et al., 1995).
SpeechActs provides a voice-only interface to common desktop applications such as e-mail, calendar, weather reports, stock quotes, and time and currency conversions. The system allows conversational interactions, with speech recognition provided by BBN's HARK recognizer and speech synthesis through Centigram's TruVoice system. A sample interaction appears below.

SpeechActs: You have 14 new messages and 47 old messages. The first new message is from Eric Baatz regarding "report completed."
User: Let me hear it.
SpeechActs: "The first draft is ready for your comments. Eric."
User: Reply and include current message.
SpeechActs: Preparing message to Eric Baatz. Begin recording after the tone. When finished, pause several seconds.
User: Eric, I'll get those to you this afternoon.
SpeechActs: Say cancel, send, or review.
User: Send.
SpeechActs: Message sent. What now?
User: Next message.
SpeechActs: New message two from Stuart Adams, subject "Can we meet at 3:00 today?"
User: Switch to calendar...

The field trial involved a group of nineteen subjects. Four of the participants were members of the system development staff, fourteen were volunteers drawn from Sun Microsystems' staff, and a final class of subjects consisted of one-time guest users. There were three female and sixteen male subjects.

All interactions with the system were recorded and digitized in standard telephone audio quality format, at 8kHz sampling in 8-bit mu-law encoding, during the conversation. In addition, speech recognition results, parser results, and synthesized responses were logged. A paid assistant then produced a correct verbatim transcript of all user utterances and, by comparing the transcription to the recognition results, labeled each utterance with one of four accuracy codes as described below.

OK: recognition correct; action correct
Error Minor: recognition not exact; action correct
Error: recognition incorrect; action incorrect
Rejection: no recognition result; no action

Overall there were 7752 user utterances recorded, of which 1961 resulted in a label of either 'Error' or 'Rejection', giving an error rate of 25%. 1250 utterances, almost two-thirds of the errors, produced outright rejections, while 706 errors were substitution misrecognitions. The remainder of the errors were due to system crashes or parser errors. The probability of experiencing a recognition failure after a correct recognition was 16%, but immediately after an incorrect recognition it was 44%, 2.75 times greater. This increase in error likelihood suggests a change in speaking style which diverges from the recognizer's model. The remainder of this paper will identify common acoustic changes which characterize this error correction speaking style. This description leads to the development of a decision tree classifier which can label utterances as corrections or original input.
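A minimal sketch, not from the paper, of how the error statistics above can be recomputed from a chronological sequence of per-utterance accuracy labels.

def error_stats(labels):
    """`labels` is a chronological list of codes such as
    'OK', 'Error Minor', 'Error', 'Rejection'."""
    bad = {"Error", "Rejection"}
    after_ok = after_bad = fail_after_ok = fail_after_bad = 0
    for prev, cur in zip(labels, labels[1:]):
        if prev in bad:
            after_bad += 1
            fail_after_bad += cur in bad
        else:
            after_ok += 1
            fail_after_ok += cur in bad
    return (sum(l in bad for l in labels) / len(labels),   # overall error rate
            fail_after_ok / after_ok if after_ok else 0.0,
            fail_after_bad / after_bad if after_bad else 0.0)

# With the field-trial logs, these three numbers were 25%, 16% and 44%.
print(error_stats(["OK", "Error", "Rejection", "OK", "OK"]))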
4 Related Work

Since full voice-in/voice-out spoken language systems have only recently been developed, little work has been done on error correction dialogs in this context. Two areas of related research that have been investigated are the identification of self-repairs and disfluencies, where the speaker self-interrupts to change an utterance in progress, and some preliminary efforts in the study of corrections in speech input.

In analyzing and identifying self-repairs, (Bear et al., 1992) and (Heeman and Allen, 1994) found that the most effective methods relied on identifying shared textual regions between the reparandum and the repair. However, these techniques are limited to those instances where a reliable recognition string is available; in general, that is not the case for most speech recognition systems currently available. Alternative approaches, described in (Nakatani and Hirschberg, 1994) and (Shriberg et al., 1997), have emphasized acoustic-prosodic cues, including duration, pitch, and amplitude, as discriminating features.

The few studies that have focussed on spoken corrections of computer misrecognitions, (Oviatt et al., 1996) and (Swerts and Ostendorf, 1995), also found significant effects of duration, and in Oviatt et al., pause insertion and lengthening played a role.
5.2 Pause

A pause is any region of silence internal to an utterance and longer than 10 milliseconds in duration. Silences preceding unvoiced stops and affricates were not coded as pauses due to the difficulty of identifying the onset of consonants of these classes. Pause-based features include number of pauses, average pause duration, total pause duration, and silence as a percentage of total utterance duration. An example of pause insertion and lengthening appears in Figure 1.

5.3 Pitch

To derive pitch features, we first apply the F0 (fundamental frequency) analysis function from the Entropic ESPS Waves+ system (Secrest and Doddington, 1993) to produce a basic pitch track. Most of the related work reported above had found relationships between the magnitude of pitch features and discourse function rather than presence of accent type, used more heavily by (Pierrehumbert and Hirschberg, 1990), (Hirschberg and Litman, 1993). Thus, we chose to concentrate on pitch features of the former type. A trained analyst examines the pitch track to remove any points of doubling or halving due to pitch tracker error, non-speech sounds, and excessive glottalization of > 5 sample points. We compute several derived measures using simple algorithms to obtain F0 maximum, F0 minimum, F0 range, final F0 contour, slope of maximum pitch rise, slope of maximum pitch fall, and sum of the slopes of the steepest rise and fall. Figure 2 depicts a basic pitch contour.

[Figure 2: Contrasting Falling (top) and Rising (bottom) Pitch Contours]

5.4 Amplitude

Amplitude, measuring the loudness of an utterance, is also computed using the ESPS Waves+ system. Mean amplitudes are computed over all voiced regions with amplitude > 30dB. Amplitude features include utterance mean amplitude, mean amplitude of last voiced region, amplitude of loudest region, standard deviation, and difference from mean to last and maximum to last.

6 Descriptive Acoustic Analysis

Using the features described above, we performed some initial simple statistical analyses to identify those features which would be most useful in distinguishing original inputs from repeat corrections, and corrections of rejection errors (CRE) from corrections of misrecognition errors (CME). The results for the most interesting features, duration, pause, and pitch, are described below.

6.1 Duration

Total utterance duration is significantly greater for corrections than for original inputs. In addition, increases in correction duration relative to mean duration for the utterance prove significantly greater for CME's than for CRE's.

6.2 Pause

Similarly to utterance duration, total pause length increases from original to repeat. For original-repeat pairs where at least one pause appears, paired t-tests on log-transformed data reveal significantly greater pause durations for corrections than for original inputs.

6.3 Pitch

While no overall trends reached significance for pitch measures, CRE's and CME's, when considered separately, did reveal some interesting contrasts between corrections and original inputs within each subset and between the two types of corrections. Specifically, male speakers showed a small but significant decrease in pitch minimum for CRE's. CME's produced two unexpected results. First, they displayed a large and significant increase in pitch variability from original to repeat as measured by the slope of the steepest rise, while CRE's exhibited a corresponding decrease in rising slopes. In addition, they also showed significant increases in steepest rise measures when compared with CRE's.
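For concreteness, the pause features and the slope-based pitch measures might be computed as in the sketch below. This is our illustration only, with an assumed alignment format, an F0 track given as (time, Hz) points over voiced regions, and one possible reading of the "sum of the slopes of the steepest rise and fall".

# Sketch only: pause and pitch-slope features; formats are assumptions.

def pause_features(alignment, min_pause_ms=10):
    """alignment: list of (label, start_ms, end_ms); 'sil' marks silence."""
    pauses = [e - s for label, s, e in alignment
              if label == "sil" and e - s > min_pause_ms]
    total_ms = alignment[-1][2] - alignment[0][1]
    return {
        "num_pauses": len(pauses),
        "avg_pause_ms": sum(pauses) / len(pauses) if pauses else 0.0,
        "total_pause_ms": sum(pauses),
        "pct_silence": sum(pauses) / total_ms,
    }

def slope_features(f0_track):
    """f0_track: list of (time_s, hz) points from a corrected pitch track."""
    slopes = [(h2 - h1) / (t2 - t1)
              for (t1, h1), (t2, h2) in zip(f0_track, f0_track[1:])
              if t2 > t1]
    rises = [s for s in slopes if s > 0]
    falls = [s for s in slopes if s < 0]
    max_rise = max(rises) if rises else 0.0
    max_fall = min(falls) if falls else 0.0
    return {
        "max_rise_slope": max_rise,
        "max_fall_slope": max_fall,
        # one interpretation: magnitudes of steepest rise and fall summed
        "rise_plus_fall": max_rise + abs(max_fall),
    }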
7 Discussion

The acoustic-prosodic measures we have examined indicate substantial differences not only between original inputs and repeat corrections, but also between the two correction classes, those in response to rejections and those in response to misrecognitions. Let us consider the relation of these results to those of related work and produce a clearer overall picture of spoken correction behavior in human-computer dialogue.

7.1 Duration and Pause: Conversational to Clear Speech

Durational measures, particularly increases in duration, appear as a common phenomenon among several analyses of speaking style [(Oviatt et al., 1996), (Ostendorf et al., 1996), (Shriberg et al., 1997)]. Similarly, increases in number and duration of silence regions are associated with disfluencies (Shriberg et al., 1997), self-repairs (Nakatani and Hirschberg, 1994), and more careful speech (Ostendorf et al., 1996) as well as with spoken corrections (Oviatt et al., 1996). These changes in our correction data fit smoothly into an analysis of error corrections as invoking shifts from conversational to more "clear" or "careful" speaking styles. Thus, we observe a parallel between the changes in duration and pause from original to repeat correction, described as conversational to clear in (Oviatt et al., 1996), and from casual conversation to carefully read speech in (Ostendorf et al., 1996).

7.2 Pitch

Pitch, on the other hand, does not fit smoothly into this picture of corrections taking on clear speech characteristics similar to those found in carefully read speech. First of all, (Ostendorf et al., 1996) did not find any pitch measures to be useful in distinguishing speaking mode on the continuum from a rapid conversational style to a carefully read style. Second, pitch features seem to play little role in corrections of rejections. Only a small decrease in pitch minimum was found, and this difference can easily be explained by the combination of two simple trends. First, there was a decrease in the number of final rising contours, and second, there were increases in utterance length that, even under constant rates of declination, will yield lower pitch minima. Third, this feature produces a divergence in behavior of CME's from CRE's. While CRE's exhibited only the change in pitch minimum described above, corrections of misrecognition errors displayed some dramatic changes in pitch behavior. Since we observed that simple measures of pitch maximum, minimum, and range failed to capture even the basic contrast of rising versus falling contour, we extended our feature set with measures of slope of rise and slope of fall. These measures may be viewed both as an attempt to create a simplified form of Taylor's rise-fall-continuation model (Taylor, 1995) and as an attempt to provide quantitative measures of pitch accent. Measures of pitch accent and contour had shown some utility in identifying certain discourse relations [(Pierrehumbert and Hirschberg, 1990), (Hirschberg and Litman, 1993)]. Although changes in pitch maxima and minima were not significant in themselves, the increases in rise slopes for CME's in contrast to flattening of rise slopes in CRE's combined to form a highly significant measure. While not defining a specific overall contour as in (Taylor, 1995), this trend clearly indicates increased pitch accentuation.
Future work will seek to describe not only the magnitude, but also the form of these pitch accents and their relation to those outlined in (Pierrehumbert and Hirschberg, 1990).

7.3 Summary

It is clear that many of the adaptations associated with error corrections can be attributed to a general shift from conversational to clear speech articulation. However, while this model may adequately describe corrections of rejection errors, corrections of misrecognition errors obviously incorporate additional pitch accent features to indicate their discourse function. These contrasts will be shown to ease the identification of these utterances as corrections and to highlight their contrastive intent.

8 Decision Tree Experiments

The next step was to develop predictive classifiers of original vs repeat corrections and CME's vs CRE's informed by the descriptive analysis above. We chose to implement these classifiers with decision trees (using Quinlan's (Quinlan, 1992) C4.5) trained on a subset of the original-repeat pair data. Decision trees have two features which make them desirable for this task. First, since they can ignore irrelevant attributes, they will not be misled by meaningless noise in one or more of the 38 duration, pause, pitch, and amplitude features coded. Since these features are probably not all important, it is desirable to use a technique which can identify those which are most relevant. Second, decision trees are highly intelligible; simple inspection of trees can identify which rules use which attributes to arrive at a classification, unlike more opaque machine learning techniques such as neural nets.

8.1 Decision Trees: Results & Discussion

The first set of decision tree trials attempted to classify original and repeat correction utterances, for both correction types. We used a set of 38 attributes: 18 based on duration and pause measures, 6 on amplitude, five on pitch height and range, and 13 on pitch contour. Trials were made with each of the possible subsets of these four feature classes on over 600 instances with seven-way cross-validation. The best results, 33% error, were obtained using attributes from all sets. Duration measures were most important, providing an improvement of at least 10% in accuracy over all trees without duration features.

The next set of trials dealt with the two error correction classes separately. One focussed on distinguishing CME's from CRE's, while the other concentrated on differentiating CME's alone from original inputs. The test attributes and trial structure were the same as above. The best error rate for the CME vs. CRE classifier was 30.7%, again achieved with attributes from all classes, but depending most heavily on durational features. Finally, the most successful decision trees were those separating original inputs from CME's. These trees obtained an accuracy rate of 75% (25% error) using similar attributes to the previous trials. The most important splits were based on pitch slope and durational features.
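As a rough sketch of this trial structure, the following uses scikit-learn; note that the original experiments used Quinlan's C4.5 whereas sklearn's DecisionTreeClassifier implements CART, and the random matrix below merely stands in for the 38 coded attributes and the actual utterance labels.

# Sketch only: seven-way cross-validation of a decision tree over
# prosodic features. Placeholder data, not the paper's 302 pairs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 38))        # 38 duration/pause/pitch/amplitude features
y = rng.choice(["o", "r"], size=600)  # original vs repeat correction

clf = DecisionTreeClassifier()        # CART here; C4.5 in the original work
scores = cross_val_score(clf, X, y, cv=7)  # seven-way cross-validation
print("error rate: %.3f" % (1 - scores.mean()))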
An exemplar of this type of decision tree is shown below.

normduration1 > 0.2335 : r (39.0/4.9)
normduration1 <= 0.2335 :
|   normduration2 <= 20.471 :
|   |   normduration3 <= 1.0116 :
|   |   |   normduration1 > -0.0023 : o (51/3)
|   |   |   normduration1 <= -0.0023 :
|   |   |   |   pitchslope > 0.265 : o (19/4)
|   |   |   |   pitchslope <= 0.265 :
|   |   |   |   |   pitchlastmin <= 25.2214 : r (11/2)
|   |   |   |   |   pitchlastmin > 25.2214 :
|   |   |   |   |   |   minslope <= -0.221 : r (18/5)
|   |   |   |   |   |   minslope > -0.221 : o (15/5)
|   |   normduration3 > 1.0116 :
|   |   |   normduration4 > 0.0615 : r (7.0/1.3)
|   |   |   normduration4 <= 0.0615 :
|   |   |   |   normduration3 <= 1.0277 : r (8.0/3.5)
|   |   |   |   normduration3 > 1.0277 : o (19.0/8.0)
|   normduration2 > 20.471 :
|   |   pitchslope <= 0.281 : r (24.0/3.7)
|   |   pitchslope > 0.281 : o (7.0/2.4)

These decision tree results in conjunction with the earlier descriptive analysis provide evidence of strong contrasts between original inputs and repeat corrections, as well as between the two classes of corrections. They suggest that different error rates after correct and after erroneous recognitions are due to a change in speaking style that we have begun to model.

In addition, the results on corrections of misrecognition errors are particularly encouraging. In current systems, all recognition results are treated as new input unless a rejection occurs. User corrections of system misrecognitions can currently only be identified by complex reasoning requiring an accurate transcription. In contrast, the method described here provides a way to use acoustic features such as duration, pause, and pitch variability to identify these particularly challenging error corrections without strict dependence on a perfect textual transcription of the input and with relatively little computational effort.

9 Conclusions & Future Work

Using acoustic-prosodic features such as duration, pause, and pitch variability to identify error corrections in spoken dialog systems shows promise for resolving this knotty problem. We further plan to explore the use of more accurate characterization of the contrasts between original and correction inputs to adapt standard recognition procedures to improve recognition accuracy in error correction interactions. Helping to identify and successfully recognize spoken corrections will improve the ease of recovering from human-computer miscommunication and will lower this hurdle to widespread acceptance of spoken language systems.

References

J. Bear, J. Dowding, and E. Shriberg. 1992. Integrating multiple knowledge sources for detection and correction of repairs in human-computer dialog. In Proceedings of the ACL, pages 56-63, University of Delaware, Newark, DE.

D. Colton. 1995. Course manual for CSE 553 speech recognition laboratory. Technical Report CSLU-007-95, Center for Spoken Language Understanding, Oregon Graduate Institute, July.

P.A. Heeman and J. Allen. 1994. Detecting and correcting speech repairs. In Proceedings of the ACL, pages 295-302, New Mexico State University, Las Cruces, NM.

Julia Hirschberg and Diane Litman. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501-530.

C.H. Nakatani and J. Hirschberg. 1994. A corpus-based study of repair cues in spontaneous speech. Journal of the Acoustical Society of America, 95(3):1603-1616.

M. Ostendorf, B. Byrne, M. Bacchiani, M. Finke, A. Gunawardana, K. Ross, S. Roweis, E. Shriberg, D. Talkin, A. Waibel, B. Wheatley, and T. Zeppenfeld. 1996. Modeling systematic variations in pronunciation via a language-dependent hidden speaking mode. In Proceedings of the International Conference on Spoken Language Processing, supplementary paper.
S.L. Oviatt, G. Levow, M. MacEachern, and K. Kuhn. 1996. Modeling hyperarticulate speech during human-computer error resolution. In Proceedings of the International Conference on Spoken Language Processing, volume 2, pages 801-804.

Janet Pierrehumbert and Julia Hirschberg. 1990. The meaning of intonational contours in the interpretation of discourse. In P. Cohen, J. Morgan, and M. Pollack, editors, Intentions in Communication, pages 271-312. MIT Press, Cambridge, MA.

J.R. Quinlan. 1992. C4.5: Programs for Machine Learning. Morgan Kaufmann.

B. G. Secrest and G. R. Doddington. 1993. An integrated pitch tracking algorithm for speech systems. In ICASSP 1993.

E. Shriberg, R. Bates, and A. Stolcke. 1997. A prosody-only decision-tree model for disfluency detection. In Eurospeech '97.

M. Swerts and M. Ostendorf. 1995. Discourse prosody in human-machine interactions. In Proceedings of the ESCA Tutorial and Research Workshop on Spoken Dialog Systems - Theories and Applications.

Paul Taylor. 1995. The rise/fall/continuation model of intonation. Speech Communication, 15:169-186.

N. Yankelovich, G. Levow, and M. Marx. 1995. Designing SpeechActs: Issues in speech user interfaces. In CHI '95 Conference on Human Factors in Computing Systems, Denver, CO, May.
1998
122
The Berkeley FrameNet Project

Collin F. Baker, Charles J. Fillmore and John B. Lowe
{collinb, fillmore, jblowe}@icsi.berkeley.edu
International Computer Science Institute
1947 Center St., Suite 600, Berkeley, Calif., 94704

Abstract

FrameNet is a three-year NSF-supported project in corpus-based computational lexicography, now in its second year (NSF IRI-9618838, "Tools for Lexicon Building"). The project's key features are (a) a commitment to corpus evidence for semantic and syntactic generalizations, and (b) the representation of the valences of its target words (mostly nouns, adjectives, and verbs) in which the semantic portion makes use of frame semantics. The resulting database will contain (a) descriptions of the semantic frames underlying the meanings of the words described, and (b) the valence representation (semantic and syntactic) of several thousand words and phrases, each accompanied by (c) a representative collection of annotated corpus attestations, which jointly exemplify the observed linkings between "frame elements" and their syntactic realizations (e.g. grammatical function, phrase type, and other syntactic traits). This report will present the project's goals and workflow, and information about the computational tools that have been adapted or created in-house for this work.

1 Introduction

The Berkeley FrameNet project 1 is producing frame-semantic descriptions of several thousand English lexical items and backing up these descriptions with semantically annotated attestations from contemporary English corpora. 2

1 The project is based at the International Computer Science Institute (1947 Center Street, Berkeley, CA). A fuller bibliography may be found in (Lowe et al., 1997).

2 Our main corpus is the British National Corpus. We have access to it through the courtesy of Oxford University Press; the POS-tagged and lemmatized version we use was prepared by the Institut für Maschinelle Sprachverarbeitung of the University of Stuttgart. The European collaborators whose participation has made this possible are Sue Atkins, Oxford University Press, and Ulrich Heid, IMS-Stuttgart.

These descriptions are based on hand-tagged semantic annotations of example sentences extracted from large text corpora and systematic analysis of the semantic patterns they exemplify by lexicographers and linguists. The primary emphasis of the project therefore is the encoding, by humans, of semantic knowledge in machine-readable form. The intuition of the lexicographers is guided by and constrained by the results of corpus-based research using high-performance software tools.

The semantic domains to be covered are: HEALTH CARE, CHANCE, PERCEPTION, COMMUNICATION, TRANSACTION, TIME, SPACE, BODY (parts and functions of the body), MOTION, LIFE STAGES, SOCIAL CONTEXT, EMOTION and COGNITION.

1.1 Scope of the Project

The results of the project are (a) a lexical resource, called the FrameNet database, 3 and (b) associated software tools. The database has three major components (described in more detail below):

• Lexicon containing entries which are composed of: (a) some conventional dictionary-type data, mainly for the sake of human readers; (b) FORMULAS which capture the morphosyntactic ways in which elements of the semantic frame can be realized within the phrases or sentences built up around the word; (c) links to semantically ANNOTATED
EXAMPLE SENTENCES which illustrate each of the potential realization patterns identified in the formula; 4 and (d) links to the FRAME DATABASE and to other machine-readable resources such as WordNet and COMLEX.

• Frame Database containing descriptions of each frame's basic conceptual structure and giving names and descriptions for the elements which participate in such structures. Several related entries in this database are schematized in Fig. 1.

• Annotated Example Sentences which are marked up to exemplify the semantic and morpho-syntactic properties of the lexical items. (Several of these are schematized in Fig. 2). These sentences provide empirical support for the lexicographic analysis provided in the frame database and lexicon entries.

3 The database will ultimately contain at least 5,000 lexical entries together with a parallel annotated corpus, these in formats suitable for integration into applications which use other lexical resources such as WordNet and COMLEX. The final design of the database will be selected in consultation with colleagues at Princeton (WordNet), ICSI, and IMS, and with other members of the NLP community.

4 In cases of accidental gaps, clearly marked invented examples may be added.

These three components form a highly relational and tightly integrated whole: elements in each may point to elements in the other two. The database will also contain estimates of the relative frequency of senses and complementation patterns calculated by matching the senses and patterns in the hand-tagged examples against the entire BNC corpus.

1.2 Conceptual Model

The FrameNet work is in some ways similar to efforts to describe the argument structures of lexical items in terms of case-roles or theta-roles, 5 but in FrameNet, the role names (called frame elements or FEs) are local to particular conceptual structures (frames); some of these are quite general, while others are specific to a small family of lexical items. For example, the TRANSPORTATION frame, within the domain of MOTION, provides MOVERS, MEANS of transportation, and PATHS; 6 subframes associated with individual words inherit all of these while possibly adding some of their own. Fig. 1 shows some of the subframes, as discussed below.

5 The semantic frames for individual lexical units are typically "blends" of more than one basic frame; from our point of view, the so-called "linking" patterns proposed in LFG, HPSG, and Construction Grammar operate on higher-level frames of action (giving agent, patient, instrument), motion and location (giving theme, location, source, goal, path), and experience (giving experiencer, stimulus, content), etc. In some but not all cases, the assignment of syntactic correlates to frame elements could be mediated by mapping them to the roles of one of the more abstract frames.

6 A detailed study of motion predicates would require a finer-grained analysis of the Path element, separating out Source and Goal, and perhaps Direction and Area, but for a basic study of the transportation predicates such refined analysis is not necessary. In any case, our work includes the separate analysis of the frame semantics of directional and locational expressions.
frame(TRANSPORTATION)
  frame_elements(MOVER(S), MEANS, PATH)
  scene(MOVER(S) move along PATH by MEANS)

frame(DRIVING)
  inherit(TRANSPORTATION)
  frame_elements(DRIVER (=MOVER), VEHICLE (=MEANS), RIDER(S) (=MOVER(S)), CARGO (=MOVER(S)))
  scenes(DRIVER starts VEHICLE, DRIVER controls VEHICLE, DRIVER stops VEHICLE)

frame(RIDING_1)
  inherit(TRANSPORTATION)
  frame_elements(RIDER(S) (=MOVER(S)), VEHICLE (=MEANS))
  scenes(RIDER enters VEHICLE, VEHICLE carries RIDER along PATH, RIDER leaves VEHICLE)

Figure 1: A subframe can inherit elements and semantics from its parent

The DRIVING frame, for example, specifies a DRIVER (a principal MOVER), a VEHICLE (a particularization of the MEANS element), and potentially CARGO or RIDER as secondary movers. In this frame, the DRIVER initiates and controls the movement of the VEHICLE. For most verbs in this frame, DRIVER or VEHICLE can be realized as subjects; VEHICLE, RIDER, or CARGO can appear as direct objects; and PATH and VEHICLE can appear as oblique complements. Some combinations of frame elements, or Frame Element Groups (FEGs), for some real corpus sentences in the DRIVING frame are shown in Fig. 2.

A RIDING_1 frame has the primary mover role as RIDER, and allows as VEHICLE those driven by others. 7 In grammatical realizations of this frame, the RIDER can be the subject; the VEHICLE can appear as a direct object or an oblique complement; and the PATH is generally realized as an oblique.

7 A separate frame RIDING_2 that applies to the English verb ride selects means of transportation that can be straddled, such as bicycles, motorcycles, and horses.
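The inheritance in Figure 1 can be made concrete with a small sketch. The encoding below (Python classes, element maps, method names) is our illustrative assumption, not the project's actual frame database format.

# Sketch only: subframes inheriting frame elements from a parent.
class Frame:
    def __init__(self, name, parent=None, elements=None, scenes=()):
        self.name = name
        self.parent = parent
        # maps local FE names to the parent element they specialize
        self.elements = elements or {}
        self.scenes = list(scenes)

    def all_elements(self):
        inherited = dict(self.parent.all_elements()) if self.parent else {}
        inherited.update(self.elements)
        return inherited

transportation = Frame("TRANSPORTATION",
                       elements={"MOVER": None, "MEANS": None, "PATH": None},
                       scenes=["MOVER(S) move along PATH by MEANS"])
driving = Frame("DRIVING", parent=transportation,
                elements={"DRIVER": "MOVER", "VEHICLE": "MEANS",
                          "RIDER": "MOVER", "CARGO": "MOVER"},
                scenes=["DRIVER starts VEHICLE", "DRIVER controls VEHICLE",
                        "DRIVER stops VEHICLE"])
# DRIVING sees the inherited MOVER, MEANS, PATH plus its own specializations
print(sorted(driving.all_elements()))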
that a person as direct object probably represents the RIDER, while a non-human proper noun is probably the VEHICLE. For practical lexicography, the contribution of the FrameNet database will be its presentation SFor causatives, the object of the support verb is included; for details, see (Fillmore and Atkins, forthcoming). of the full range of use possibilities for individ- ual words, documented with corpus data, the model examples for each use, and the statistical information on relative frequency. 2 Organization and Workflow 2.1 Overview The computational side of the FrameNet project is directed at efficiently capturing human in- sights into semantic structure. The majority of the work involved is marking text with se- mantic tags, specifying (again by hand) the structure of the frames to be treated, and writ- ing dictionary-style entries based the results of annotation and a priori descriptions. With the exception of the example sentence extrac- tion component, all the software modules are highly interactive and have substantial user in- terface requirements. Most of this functionality is provided by WWW-based programs written in PERL. Four processing steps are required produce the FrameNet database of frame semantic rep- resentations: (a) generating initial descriptions of semantic and syntactic patterns for use in corpus queries and annotation ("Preparation"), (b) extracting good example sentences ("Sub- corpus Extraction"), (c) marking (by hand) the constituents of interest ("Annotation"), and (d) building a database of lexical semantic represen- tations based on the annotations and other data ("Entry Writing"). These are discussed briefly below and shown in Fig. 3. 2.2 Workflow and Personnel As work on the project has progressed, we have defined several explicit roles which project participants play in the various steps, these roles are referred to as Vanguard (1.1 in Fig. 3), Annotators (3.1) and Rearguard (4.1). These are purely functional designations: the same person may play different roles at dif- ferent times. 9 1. Preparation. The Vanguard (1.1) pre- pares the initial descriptions of frames, includ- ing lists of frames and frame elements, and adds these to the Frame Database (5.1) using the Frame Description tool (1.2). The Vanguard 90f course there are other staff members who write code and maintain the databases. This behind-the- scenes work is not shown in Fig. 3. 88 Vanguard 1.1 Annotators 3.1 # ~ alembic ~..,~-~'~ ] [SGMLannotation ,/f ~.~ [program 3.2 b [ ~ [ ~nnom,e? ~ ~] Entry LT:[.,,, D,,:; / ~,,,,,. 5.3 J / TooI I Extraction .. ~ I - " 2.2.2[~.,,~ I xKwIC c".'Tju'/ I "1 Rearguard 4.1 Figure 3: Workflow, Roles, Data Structures and Software also selects the major vocabulary items for the frame (the target words) and the syntactic pat- terns that need to be checked for each word, which are entered in the Lexical Database (5.2) by means of the Lexical Database Tool (1.3). 2. Subcorpus Extraction. Based on the Vanguard's work, the subcorpus extraction tools (2.2) produce a representative collection of sentences containing these words. This selection of examples is achieved through a hybrid process partially controlled by the pre- liminary lexical description of each lemma. Sen- tences containing the lemma are extracted from from a corpus and classified into subcorpora by syntactic pattern (2.2.1) using a CASCADE FILTER (2.2.2, 2.2.5, 2.2.6) representing a par- tial regular-expression grammar of English over part-of-speech tags (cf. 
Gahl (forthcoming)), formatted for annotation (2.2.4) , and automat- ically sampled (2.2.3) down to an appropriate number. (If these heuristics fail to find appropriate examples by means of syntactic patterns, sen- tences are selected using INTERACTIVE SELEC- TION TOOLS (2.3)). 3. Annotation. Using the annotation soft- ware (3.2) and the tagsets (3.2.1) derived from the Frame Database, the Annotators (3.1) mark selected constituents in the extracted subcor- pora according to the frame elements which they realize, and identify canonical examples, novel patterns, and problem sentences. 1° 4. Entry Writing. The Rearguard (4.1) reviews the skeletal lexical record created by the Vanguard, the annotated example sentences (5.3), and the FEGs extracted from them, and builds both the entries for the lemmas in the Lexical Database (5.2) and the frame descrip- tions in the Frame Database (5.1), using the Entry Writing Tools (4.2). l°We are building a "constituent type identifier" which will semi-automatically assign Grammatical Function (GF), and Phrase Type (PT) attributes to these FE- marked constituents, eliminating the need for Annota- tors to mark these. 89 3 Implementation 3.1 Data Model The data structures described above are im- plemented in SGML. n Each is described by a DTD, and these DTDs are structured to provide the necessary links between the components. 3.2 Software The software suite currently supporting database development is an aggregate of existing software tools held together with PERL/CGI-based "glue". In order to get the project started, we have depended on off-the- shelf software which in some cases is not ideal for our purposes. Nevertheless, using these programs allowed us to get the project up and running within just a few months. We describe below in approximate order of application the programs used and their state of completion. • Frame Description Tool (1.2) (in development) An interactive, web-based tool. • Lexical Description Tool (1.3) (prototype) An interactive, web-based tool. • CQP (2.2.1) is a high-performance Corpus Query Processor, developed at IMS Stuttgart (IMS, 1997). The cascade filter, which partitions lemma- specific subcorpora by syntactic patterns, is built using a preprocessor (written in PERL, 2.2.2) which generates CQP's native query language. • XKWIC (2.3) is an X-window, interactive tool, also from IMS, which facilitates manipulating cor- pora and subcorpora. • Subcorpora are prepared for annotation by a program ("arf" for Annotation Ready Formatter, 2.2.4) which wraps SGML tags around sentences, target words, comments and other distinguishable text elements. Another program, "whittle" (2.2.3), combines subcorpora in a preselected order, remov- ing very long and very short sentences, and sampling to reduce large subcorpora. • Alembic (3.2) (Mitre, 1998), allows the inter- active markup (in SGML) of text files according to predefined tagsets (3.2.1). It is used to introduce frame element annotations into the subcorpora. • Sgmlnorm, etc. (from James Clark's SGML tool set) are used to validate and manage the SGML files. • Entry Writing Tools (4.2) (in development) • Database management tools to manage the cat- alog of subcorpora, schedule the work, render the nEventually, we plan to migrate to an XML data model, which appears to provide more flexibility while reducing complexity. Also, the FrameNet software is be- ing developed on Unix, but we plan to provide cross- platform capabilities by making our tool suite web-based and XML-compatible. 
SGML files into HTML for convenient viewing on the web, etc. are being written in PERL. RCS main- tains version control over most files. 4 Conclusion At the time of writing, there is something in place for each of the major software compo- nents, though in some cases these are little more than stubs or "toy" implementations. Nearly 10,000 sentences exemplifying just under 200 lemmas have been annotated; there are over 20,000 frame element tokens marked in these example sentences. About a dozen frames have been specified, which refer to 47 named frame elements. Most of these annotations have been accomplished in the last few months since the software for corpus extraction, frame descrip- tion, and annotation became operational. We expect the inventory to increase rapidly. If the proportions cited hold constant as the Framenet database grows, the final database of 5,000 lex- ical units may contain 250,000 annotated sen- tences and over half a million tokens of frame elements. References Charles J. Fillmore and B. T. S. Atkins. forth- coming. FrameNet and lexicographic rele- vance. In Proceedings of the First Inter- national Conference On Language Resources And Evaluation, Granada, Spain, P8-30 May 1998. Susanne Gahl. forthcoming. Automatic extrac- tion of subcorpora based on subcategoriza- tion frames from a part of speech tagged cor- pus. In Proceedings o/ the 1998 COLING- A CL conference. Institut f'dr maschinelle Sprachverarbeitung IMS. 1997. IMS corpus toolbox web page at stuttgart, http://www.ims.uni- stuttgart.de/~oli/CorpusToolbox/. John B. Lowe, Collin F. Baker, and Charles J. Fillmore. 1997. A frame-semantic approach to semantic annotation. In Tagging Text with Lexical Semantics: Why, What, and How? Proceedings of the Workshop, pages 18-24. Special Interest Group on the Lexicon, Asso- ciation for Computational Linguistics, April. Mitre. 1998. Alembic Work- bench web page at Mitre corp. http: //www.mitre.org/resources/ centers/ advanced_info/g04h/workbench.html. 90
1998
13
Processing Unknown Words in HPSG

Petra Barg and Markus Walther*
Seminar für Allgemeine Sprachwissenschaft
Heinrich-Heine-Universität Düsseldorf
Universitätsstr. 1, D-40225 Düsseldorf, Germany
{barg, walther}@ling.uni-duesseldorf.de

Abstract

The lexical acquisition system presented in this paper incrementally updates linguistic properties of unknown words inferred from their surrounding context by parsing sentences with an HPSG grammar for German. We employ a gradual, information-based concept of "unknownness" providing a uniform treatment for the range of completely known to maximally unknown lexical entries. "Unknown" information is viewed as revisable information, which is either generalizable or specializable. Updating takes place after parsing, which only requires a modified lexical lookup. Revisable pieces of information are identified by grammar-specified declarations which provide access paths into the parse feature structure. The updating mechanism revises the corresponding places in the lexical feature structures iff the context actually provides new information. For revising generalizable information, type union is required. A worked-out example demonstrates the inferential capacity of our implemented system.

1 Introduction

It is a remarkable fact that humans can often understand sentences containing unknown words, infer their grammatical properties and incrementally refine hypotheses about these words when encountering later instances. In contrast, many current NLP systems still presuppose a complete lexicon. Notable exceptions include Zernik (1989), Erbach (1990), Hastings & Lytinen (1994). See Zernik for an introduction to the general issues involved.

This paper describes an HPSG-based system which can incrementally learn and refine properties of unknown words after parsing individual sentences. It focusses on extracting linguistic properties, as compared to e.g. general concept learning (Hahn, Klenner & Schnattinger 1996). Unlike Erbach (1990), however, it is not confined to simple morpho-syntactic information but can also handle selectional restrictions, semantic types and argument structure. Finally, while statistical approaches like Brent (1991) can gather e.g. valence information from large corpora, we are more interested in full grammatical processing of individual sentences to maximally exploit each context.

*This work was carried out within the Sonderforschungsbereich 282 'Theorie des Lexikons' (project B3), funded by the German Federal Research Agency DFG. We thank James Kilbury and members of the B3 group for fruitful discussion.

The following three goals serve to structure our model. It should i) incorporate a gradual, information-based conceptualization of "unknownness". Words are not unknown as a whole, but may contain unknown, i.e. revisable pieces of information. Consequently, even known words can undergo revision to e.g. acquire new senses. This view replaces the binary distinction between open and closed class words. It should ii) maximally exploit the rich representations and modelling conventions of HPSG and associated formalisms, with essentially the same grammar and lexicon as compared to closed-lexicon approaches. This is important both to facilitate reuse of existing grammars and to enable meaningful feedback for linguistic theorizing. Finally, it should iii) possess domain-independent inference and lexicon-updating capabilities. The grammar writer must be able to fully declare which pieces of information are open to revision.

The system was implemented using MicroCUF, a simplified version of the CUF typed unification formalism (Dörre & Dorna 1993) that we implemented in SICStus Prolog. It shares both the feature logic and the definite clause extensions with its big brother, but substitutes a closed-world type system for CUF's open-world regime. A feature of our type system implementation that will be significant later on is that type information in internal feature
The gram- mar writer must be able to fully declare which pieces of information are open to revision. The system was implemented using MicroCUF, a simplified version of the CUF typed unification formalism (DOrre & Dorna 1993) that we imple- mented in SICStus Prolog. It shares both the feature logic and the definite clause extensions with its big brother, but substitutes a closed-world type system for CUF's open-world regime. A feature of our type system implementation that will be significant later on is that type information in internal feature struc- 91 tures (FSs) can be easily updated. The HPSG grammar developed with MicroCUF models a fragment of German. Since our focus is on the lexicon, the range of syntactic variation treated is currently limited to simplex sentences with canon- ical word order. We have incorporated some recent developments of HPSG, esp. the revisions of Pol- lard & Sag (1994, ch. 9), Manning & Sag (1995)'s proposal for an independent level of argument struc- ture and Bouma (1997)'s use of argument structure to eliminate procedural lexical rules in favour of re- lational constraints. Our elaborate ontology of se- mantic types - useful for non-trivial acquisition of selectional restrictions and nominal sorts - was de- rived from a systematic corpus study of a biological domain (Knodel 1980, 154-188). The grammar also covers all valence classes encountered in the corpus. As for the lexicon format, we currently list full forms only. Clearly, a morphology component would sup- ply more contextual information from known affixes but would still require the processing of unknown stems. 2 Incremental Lexical Acquisition When compared to a previous instance, a new sen- tential context can supply either identical, more spe- cial, more general, or even conflicting information along a given dimension. Example pairs illustrating the latter three relationships are given under (1)-(3) (words assumed to be unknown in bold face). (1) a. Im Axon tritt ein Ruhepotential auf. 'a rest potential occurs in the axon' b. Das Potential wandert tiber das Axon. 'the potential travels along the axon' (2) a. Das Ohr reagiert auf akustische Reize. 'the ear reacts to acoustic stimuli' b. Ein Sinnesorgan reagiert auf Reize. 'a sense organ reacts to stimuli' (3) a. Die Nase ist ftir Geriiche sensibel. 'the nose is sensitive to smells' b. Die sensible Nase reagiert auf Gertiche. 'the sensitive nose reacts to smells' In contrast to (la), which provides the information that the gender of Axon is not feminine (via im), the context in (lb) is more specialized, assigning neuter gender (via das). Conversely, (2b) differs from (2a) in providing a more general selectional restriction for the subject of reagiert, since sense organs include ears as a subtype. Finally, the adjective sensibel is used predicatively in (3a), but attributively in (3b). The usage types must be formally disjoint, because some German adjectives allow for just one usage (ehemalig 'former, attr.', schuld 'guilty, pred.'). On the basis of contrasts like those in (1)-(3) it makes sense to statically assign revisable informa- tion to one of two classes, namely specializable or generalizable. 1 Apart from the specializable kinds 'semantic type of nouns' and 'gender', the inflec- tional class of nouns is another candidate (given a morphological component). 
Generalizable kinds of information include 'selectional restrictions of verbs and adjectives', 'predicative vs attributive usage of adjectives' as well as 'case and form of PP arguments' and 'valence class of verbs'. Note that specializable and generalizable information can cooccur in a given lexical entry. A particular kind of information may also figure in both classes, as e.g. semantic type of nouns and selectional restrictions of verbs are both drawn from the same semantic ontology. Yet the former must be invariantly specialized - independent of the order in which contexts are processed -, whereas selectional restrictions on NP complements should only become more general with further contexts.

2.1 Representation

We require all revisable or updateable information to be expressible as formal types. 2 As relational clauses can be defined to map types to FSs, this is not much of a restriction in practice. Figure 1 shows a relevant fragment.

prd > pred, attr
non_fem > masc, neut (alongside fem)
nom_sem > sense_organ (> nose, ear), sound, smell

Figure 1: Excerpt from type hierarchy

Whereas the combination of specializable information translates into simple type unification (e.g. non_fem ∧ neut = neut), combining generalizable information requires type union (e.g. pred ∨ attr = prd).

1 The different behaviour underlying this classification has previously been noted by e.g. Erbach (1990) and Hastings & Lytinen (1994) but received either no implementational status or no systematic association with arbitrary kinds of information.

2 In HPSG types are sometimes also referred to as sorts.
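The contrast between the two operations can be pictured by encoding each type as the set of its maximal subtypes, so that unification is set intersection and union is set union. This encoding is our illustration only, not MicroCUF's actual implementation.

# Sketch only: types as sets of their maximal subtypes; unification
# is intersection, union is set union. Illustrative encoding.
TYPES = {
    "pred": {"pred"}, "attr": {"attr"}, "prd": {"pred", "attr"},
    "u_g": {"u_g"},
    "fem": {"fem"}, "masc": {"masc"}, "neut": {"neut"},
    "non_fem": {"masc", "neut"},
    "nose": {"nose"}, "ear": {"ear"}, "smell": {"smell"},
    "sense_organ": {"nose", "ear"},
}

def unify(a, b):                      # empty set models failure (bottom)
    return TYPES[a] & TYPES[b]

def union(a, b):
    return TYPES[a] | TYPES[b]

print(unify("non_fem", "neut"))       # {'neut'}
print(union("pred", "attr"))          # {'pred', 'attr'} == TYPES['prd']
print(union("u_g", "attr"))           # still marked unknown: {'u_g', 'attr'}
print(unify("sense_organ", "smell"))  # set(), i.e. the bottom type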
ctxt is the repository of contex- tually unified information, where conflicts result in ungrammaticality, gen holds generalizable informa- tion. Since all gen values contain u_g as a type dis- junct, they are always unifiable and thus not restric- tive during the parse. To nevertheless get correct gen values we perform type union after parsing, i.e. dur- ing lexicon update. We will see below how this works out. 3Actually, the situation is more symmetrical, as we need a dual type u_s to correctly mark "unknown" specializable infor- mation. This prevents incorrect updating of known information. However, u_~ is unnecessary for the examples presented below. The last representational issue is how to identity revisable information in (substructures ol) the parse FS. For this purpose the grammar defines revisability clauses like the following: (4) a. generalizable([~], [~) := synsemlloelcatl head [adj gen b. specializable([[I) := [ [cat lhead noun "1] [synsem J oc [cont i ind 1 gend 2.2 Processing The first step in processing sentences with unknown or revisable words consists of conventional parsing. Any HPSG-compatible parser may be used, subject to the obvious requirement that lexical lookup must not fail if a word's phonology is unknown. A canon- ical entry for such unknown words is defined as the disjunction of maximally underspecified generic lex- ical entries for nouns, adjectives and verbs. The actual updating of lexical entries consists of four major steps. Step 1 projects the parse FS derived from the whole sentence onto all participating word tokens. This results in word FSs which are contextu- ally enriched (as compared to their original lexicon state) and disambiguated (choosing the compatible disjunct per parse solution if the entry was disjunc- tive). It then filters the set of word FSs by unification with the right-hand side of revisability clauses like in (4). The output of step 1 is a list of update candidates for those words which were unifiable. Step 2 determines concrete update values for each word: for each matching generalizable clause we take the type union of the gen value of the old, lexical state of the word (LexGen) with the ctxt value of its parse projection (Ctxt): TU = LexGenUCtzt. For each matching specializable(Spec) clause we take the parse value Spec. Step 3 checks whether updating would make a dif- ference w.r.t, the original lexical entry of each word. The condition to be met by generalizable information is that TU D LexGen, for specializable information we similarly require Spec C LexSpec. In step 4 the lexical entries of words surviving step 3 are actually modified. We retract the old lexical en- try, revise the entry and re-assert it. For words never encountered before, revision must obviously be pre- ceded by making a copy of the generic unknown en- try, but with the new word's phonology. Revision it- self is the destructive modification of type informa- 93 tion according to the values determined in step 2, at the places in a word FS pointed to by the revis- ability clauses. This is easy in MicroCUF, as types are implemented via the attributed variable mecha- nism of SICStus Prolog, which allows us to substi- tute the type in-place. In comparison, general updat- ing of Prolog-encoded FSs would typically require the traversal of large structures and be dangerous if structure-sharing between substituted and unaffected parts existed. Also note that we currently assume DNF-expanded entries, so that updates work on the contextually selected disjunct. 
This can be motivated by the advantages of working with presolved struc- tures at run-time, avoiding description-level opera- tions and incremental grammar recompilation. 2.3 A Worked-Out Example We will illustrate how incremental lexical revision works by going through the examples under (5)-(7). (5) Die Nase ist ein Sinnesorgan. 'the nose is a sense organ' (6) Das Ohr perzipiert. 'the ear perceives' (7) Eine verschnupfte Nase perzipiert den Gestank. 'a bunged up nose perceives the stench' The relevant substructures corresponding to the lex- ical FSs of the unknown noun and verb involved are depicted in fig. 2. The leading feature paths synsemlloclcont for Nase and synsemlloclcatlarg-st for perzipiert have been omitted. After parsing (5) the gender of the unknown noun Nase is instantiated to fern by agreement with the determiner die. As the specializable clause (4b) matches and the gend parse value differs from its lexical value gender, gender is updated to fern. Fur- thermore, the object's semantic type has percolated to the subject Nase. Since the objecrs sense_organ type differs from generic initial nom_sem, Nase's ctxt value is updated as well. In place of the still nonex- isting entry for perzipiert, we have displayed the rel- evant part of the generic unknown verb entry. Having parsed (6) the system then knows that perzipiert can be used intransitively with a nomi- native subject referring to ears. Formally, an HPSG mapping principle was successful in mediating be- tween surface subject and complement lists and the argument list. Argument list instantiations are them- selves related to corresponding types by a further Nase after (5) gend fem ] gen u.g | etxt sense.organJ perzipiert gen u-g ] ctxt arg.~trucl after (6) gend fem ] gen u_g | ctxt sense.organJ after (7) gend fem ] gen u.g / ctxt nose I gen u..gVnpnom ] ctxt arg.struc | args([IoclcontLctxtnom_~em] ]rgenu_gvear]] -]J\l gen u-gVnpnomVnpnom.npacc ] ctxt arg.struc I [, , . [gen u_gVsense~rgan]] I /[,OC ICOmLctxtnom_sem j],\] \ 'oc Icon, g= UogV';en :l I / Figure 2: Updates on lexical FSs mapping. On the basis of this type classification of argument structure patterns, the parse derived the ctxt value npnom. Since gen values are generaliz- able, this new value is unioned with the old lexi- cal gen value. Note that ctxt is properly unaffected. The first (subject) element on the aros list itself is targeted by another revisability clause. This has the side-effect of further instantiating the underspecified lexical FS. Since selectional restrictions on nominal subjects must become more general with new con- textual evidence, the union of ear and the old value u_g is indeed appropriate. Sentence (7) first of all provides more specific evi- dence about the semantic type of partially known Nase by way of attributive modification through ver- schnupfte. The system detects this through the differ- ence between lexical ctxt value sense_organ and the parse value nose, so that the entry is specialized ac- cordingly. Since the subject's synsem value is coin- dexed with the first aros element, [etxt nose] simulta- neously appears in the FS ofperzipiert. However, the revisability clause matching there is of class general- izable, so union takes place, yielding ear V nose = sense_organ (w.r.t. the simplified ontology of fig. 1 used in this paper). An analogous match with the second element of ar9 s identifies the necessary up- date to be the unioning-in of smell, the semantic type of Gestank. 
Finally, the system has learned that an accusative NP object can cooccur with perzipiert, so the argument structure type of gen receives another update through union with npnom_npacc. 94 3 Discussion The incremental lexical acquisition approach de- scribed above attains the goals stated earlier. It re- alizes a gradual, information-based conceptualiza- tion of unknownness by providing updateable formal types - classified as either generalizable or special- izable - together with grammar-defined revisability clauses. It maximally exploits standard HPSG rep- resentations, requiring moderate rearrangements in grammars at best while keeping with the standard assumptions of typed unification formalisms. One noteworthy demand, however, is the need for a type union operation. Parsing is conventional modulo a modified lexical lookup. The actual lexical revision is done in a domain-independent postprocessing step guided by the revisability clauses. Of course there are areas requiring further consid- eration. In contrast to humans, who seem to leap to conclusions based on incomplete evidence, our ap- proach employs a conservative form of generaliza- tion, taking the disjunction of actually observed val- ues only. While this has the advantage of not leading to overgeneralization, the requirement of having to encounter all subtypes in order to infer their com- mon supertype is not realistic (sparse-data problem). In (2) sense_organ as the semantic type of the first argument ofperzipiert is only acquired because the simplified hierarchy in fig. 1 has nose and ear as its only subtypes. Here the work of Li & Abe (1995) who use the MDL principle to generalize over the slots of observed case frames might prove fruitful. An important question is how to administrate alternative parses and their update hypotheses. In Das Aktionspotential erreicht den Dendriten 'the action potential reaches the dendrite(s)', Dendriten is ambiguous between acc.sg, and dat.pl., giving rise to two valence hypotheses npnom_npacc and npnom_npdat for erreicht. Details remain to be worked out on how to delay the choice between such alternative hypotheses until further contexts provide enough information. Another topic concerns the treatment of 'cooc- currence restrictions'. In fig. 2 the system has in- dependently generalized over the selectional restric- tions for subject and object, yet there are clear cases where this overgenerates (e.g. *Das Ohr perzipiert den Gestank 'the ear perceives the stench'). An idea worth exploring is to have a partial, extensible list of type cooccurrences, which is traversed by a recursive principle at parse time. A more general issue is the apparent antagonism 95 between the desire to have both sharp grammatical predictions and continuing openness to contextual revision. If after parsing (7) we transfer the fact that smells are acceptable objects to perzipiert into the re- stricting ctxt feature, a later usage with an object of type sound falls. The opposite case concerns newly acquired specializable values. If in a later context these are used to update a gen value, the result may be too general. It is a topic of future research when to consider information certain and when to make re- visable information restrictive. References Bouma, G. (1997). Valence Alternation without Lexi- cal Rules. In: Papers from the seventh CLIN Meet- ing 1996, Eindhoven, 25--40. Brent, M. R. (1991). Automatic Acquisition of Subcat- egorization Frames From Untagged Text. In: Pro- ceedings of 29th ACL, Berkeley, 209-214. 
D0rre, J. & M. Dorna (1993). CUF - A Formalism for Linguistic Knowledge Representation. In: J. DOrre (Exl.), ComputationaI Aspects of Constraint-Based Linguistic Description. IMS, Universitat Stuttgart. Deliverable R1.2.A, DYANA-2 - ESPRIT Project 6852. Erbach, G. (1990). Syntactic Processing of Un- known Words. IWBS Report 131, Institute for Knowledge-Based Systems (IWBS), IBM Stuttgart. Hahn, U., M. Klenner & K. Schnattinger (1996). Learning from Texts - A Terminological Meta- Reasoning Perspective. In: S. Wermter, E. Riloff & G. Scheler (Ed.), Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Lan- guage Processing, 453--468. Berlin: Springer. Hastings, P. M. & S. L. Lytinen (1994). The Ups and Downs of Lexical Acquisition. In: Proceedings of AAAI'94, 754-759. Knodel, H. (1980). Linder Biologie - Lehrbuch far die Oberstufe. Stuttgart: J.B. Metzlersche Verlags- buchhandlung. Li, H. & N. Abe (1995). Generalizing Case Frames Us- ing a Thesaurus and the MDL Principle. In: Pro- ceedings of Recent Advantages in Natural Lan- guage Processing, Velingrad, Bulgaria, 239-248. Manning, C. & I. Sag (1995). Dissociations between argument structure and grammatical relations. Ms., Stanford University. Pollard, C. & I. Sag (1994). Head-Driven Phrase Structure Grammar. Chicago University Press. Zernik, U. (1989). Paradigms in Lexical Acquisition. In: U. Zernik (Ed.), Proceedings of the First Inter- national Lexical Acquisition Workshop, Detroit.
1998
14
Semi-Automatic Recognition of Noun Modifier Relationships

Ken Barker and Stan Szpakowicz
School of Information Technology and Engineering
University of Ottawa
Ottawa, Canada K1N 6N5
{kbarker, szpak}@site.uottawa.ca

Abstract

Semantic relationships among words and phrases are often marked by explicit syntactic or lexical clues that help recognize such relationships in texts. Within complex nominals, however, few overt clues are available. Systems that analyze such nominals must compensate for the lack of surface clues with other information. One way is to load the system with lexical semantics for nouns or adjectives. This merely shifts the problem elsewhere: how do we define the lexical semantics and build large semantic lexicons? Another way is to find constructions similar to a given complex nominal, for which the relationships are already known. This is the way we chose, but it too has drawbacks. Similarity is not easily assessed, similar analyzed constructions may not exist, and if they do exist, their analysis may not be appropriate for the current nominal.

We present a semi-automatic system that identifies semantic relationships in noun phrases without using precoded noun or adjective semantics. Instead, partial matching on previously analyzed noun phrases leads to a tentative interpretation of a new input. Processing can start without prior analyses, but the early stage requires user interaction. As more noun phrases are analyzed, the system learns to find better interpretations and reduces its reliance on the user. In experiments on English technical texts the system correctly identified 60-70% of relationships automatically.

1 Introduction

Any system that extracts knowledge from text cannot ignore complex noun phrases. In technical domains especially, noun phrases carry much of the information. Part of that information is contained in words; cataloguing the semantics of single words for computational purposes is a difficult task that has received much attention. But part of the information in noun phrases is contained in the relationships between components.

We have built a system for noun modifier relationship (NMR) analysis that assigns semantic relationships in complex noun phrases. Syntactic analysis finds noun phrases in a sentence and provides a flat list of premodifiers and postmodifying prepositional phrases and appositives. The NMR analyzer first brackets the flat list of premodifiers into modifier-head pairs. Next, it assigns NMRs to each pair. NMRs are also assigned to the relationships between the noun phrase and each postmodifying phrase.

2 Background

2.1 Noun Compounds

A head noun along with a noun premodifier is often called a noun compound. Syntactically a noun compound acts as a noun: a modifier or a head may again be a compound. The NMR analyzer deals with the semantics of a particular kind of compound, namely those that are transparent and endocentric.

The meaning of a transparent compound can be derived from the meaning of its elements. For example, laser printer is transparent (a printer that uses a laser). Guinea pig is opaque: there is no obvious direct relationship to guinea or to pig. An endocentric compound is a hyponym of its head. Desktop computer is endocentric because it is a kind of computer. Bird brain is exocentric because it does not refer to a kind of brain, but rather to a kind of person (whose brain resembles that of a bird).

Since the NMR analyzer is intended for technical texts, the restriction to transparent endocentric compounds should not limit the utility of the system. Our experiments have found no opaque or exocentric compounds in the test texts.
Since the NMR analyzer is intended for technical texts, the restriction to transparent endocentric compounds should not limit the utility of the system. Our experiments have found no opaque or exocentric compounds in the test texts.

2.2 Semantic Relations in Noun Phrases

Most of the research on relationships between nouns and modifiers deals with noun compounds, but these relationships also hold between nouns and adjective premodifiers or postmodifying prepositional phrases. Lists of semantic labels have been proposed, based on the theory that a compound expresses one of a small number of covert semantic relations.

Levi (1978) argues that semantics and word formation make noun-noun compounds a heterogeneous class. She removes opaque compounds and adds nominal non-predicating adjectives. For this class Levi offers nine semantic labels. According to her theory, these labels represent underlying predicates deleted during compound formation. George (1987) disputes the claim that Levi's non-predicating adjectives never appear in predicative position.

Warren (1978) describes a multi-level system of semantic labels for noun-noun relationships. Warren (1984) extends the earlier work to cover adjective premodifiers as well as nouns. The similarity of the two lists suggests that many adjectives and premodifying nouns can be handled by the same set of semantic relations.

2.3 Recognizing Semantic Relations

Programs that uncover the relationships in modifier-noun compounds often base their analysis on the semantics of the individual words (or a composition thereof). Such systems assume the existence of some semantic lexicon.

Leonard's system (1984) assigns semantic labels to noun-noun compounds based on a dictionary that includes taxonomic and meronymic (part-whole) information, information about the syntactic behaviour of nouns and about the relationships between nouns and verbs. Finin (1986) produces multiple semantic interpretations of modifier-noun compounds. The interpretations are based on precoded semantic class information and domain-dependent frames describing the roles that can be associated with certain nouns. Ter Stal's system (1996) identifies concepts in text and unifies them with structures extracted from a hand-coded lexicon containing syntactic information, logical form templates and taxonomic information.

In an attempt to avoid the hand-coding required in other systems, Vanderwende (1993) automatically extracts semantic features of nouns from online dictionaries. Combinations of features imply particular semantic interpretations of the relationship between two nouns in a compound.

3 Noun Modifier Relationship Labels

Table 1 lists the NMRs used by our analyzer. The list is based on similar lists found in literature on the semantics of noun compounds. It may evolve as experimental evidence suggests changes.

Table 1: The noun modifier relationships
  Agent (agt), Beneficiary (benf), Cause (caus), Container (ctn), Content (cont),
  Destination (dest), Equative (equa), Instrument (inst), Located (lcd), Location (loc),
  Material (matr), Object (obj), Possessor (poss), Product (prod), Property (prop),
  Purpose (purp), Result (resu), Source (src), Time (time), Topic (top)

For each NMR, we give a paraphrase and example modifier-noun compounds. Following the tradition in the study of noun compound semantics, the paraphrases act as definitions and can be used to check the acceptability of different interpretations of a compound.
The paraphrases serve as definitions in this section and help with interpretation during user interactions (as illustrated in section 6). In the analyzer, awkward paraphrases with adjectives could be improved by replacing adjectives with their WordNet pertainyms (Miller, 1990), giving, for example, "charity benefits from charitable donation" instead of "charitable benefits from charitable donation".

Agent: compound is performed by modifier (student protest, band concert, military assault)
Beneficiary: modifier benefits from compound (student price, charitable donation)
Cause: modifier causes compound (exam anxiety, overdue fine)
Container: modifier contains compound (printer tray, flood water, film music, story idea)
Content: modifier is contained in compound (paper tray, eviction notice, oil pan)
Destination: modifier is destination of compound (game bus, exit route, entrance stairs)
Equative: modifier is also head (composer arranger, player coach)
Instrument: modifier is used in compound (electron microscope, diesel engine, laser printer)
Located: modifier is located at compound (building site, home town, solar system)
Location: modifier is the location of compound (lab printer, internal combustion, desert storm)
Material: compound is made of modifier (carbon deposit, gingerbread man, water vapour)
Object: modifier is acted on by compound (engine repair, horse doctor)
Possessor: modifier has compound (national debt, student loan, company car)
Product: modifier is a product of compound (automobile factory, light bulb, colour printer)
Property: compound is modifier (blue car, big house, fast computer)
Purpose: compound is meant for modifier (concert hall, soup pot, grinding abrasive)
Result: modifier is a result of compound (storm cloud, cold virus, death penalty)
Source: modifier is the source of compound (foreign capital, chest pain, north wind)
Time: modifier is the time of compound (winter semester, late supper, morning class)
Topic: compound is concerned with modifier (computer expert, safety standard, horror novel)

4 Noun Modifier Bracketing

Before assigning NMRs, the system must bracket the head noun and the premodifier sequence into modifier-head pairs. Example (2) shows the bracketing for noun phrase (1).

(1) dynamic high impedance microphone
(2) (dynamic ((high impedance) microphone))

The bracketing problem for noun-noun-noun compounds has been investigated by Liberman & Sproat (1992), Pustejovsky et al. (1993), Resnik (1993) and Lauer (1995) among others. Since the NMR analyzer must handle premodifier sequences of any length with both nouns and adjectives, it requires more general techniques. Our semi-automatic bracketer (Barker, 1998) allows for any number of adjective or noun premodifiers.

After bracketing, each non-atomic element of a bracketed pair is considered a subphrase of the original phrase. The subphrases for the bracketing in (2) appear in (3), (4) and (5).

(3) high impedance
(4) high_impedance microphone
(5) dynamic high_impedance_microphone

Each subphrase consists of a modifier (possibly compound, as in (4)) and a head (possibly compound, as in (5)). The NMR analyzer assigns an NMR to the modifier-head pair that makes up each subphrase.

Once an NMR has been assigned, the system must store the assignment to help automate future processing. Instead of memorizing complete noun phrases (or even complete subphrases) and analyses, the system reduces compound modifiers and compound heads to their own local heads and stores these reduced pairs with their assigned NMR.
This allows it to analyze different noun phrases that have only reduced pairs in common with previous phrases. For example, (6) and (7) have the reduced pair (8) in common. If (6) has already been analyzed, its analysis can be used to assist in the analysis of (7); see section 5.1.

(6) (dynamic ((high impedance) microphone))
(7) (dynamic (cardioid (vocal microphone)))
(8) (dynamic microphone)

5 Assigning NMRs

Three kinds of construction require NMR assignments: the modifier-head pairs from the bracketed premodifier sequence; postmodifying prepositional phrases; appositives. These three kinds of input can be generalized to a single form: a triple consisting of modifier, head and marker (M, H, Mk). For premodifiers, Mk is the symbol nil, since no lexical item links the premodifier to the head. For postmodifying prepositional phrases Mk is the preposition. For appositives, Mk is the symbol appos. The (M, H, Mk) triples for examples (9), (10) and (11) appear in Table 2.

(9) monitor cable plug
(10) large piece of chocolate cake
(11) my brother, a friend to all young people

Table 2: (M, H, Mk) triples for (9), (10) and (11)
  Modifier        Head         Marker
  monitor         cable        nil
  monitor_cable   plug         nil
  chocolate       cake         nil
  large           piece        nil
  chocolate_cake  large_piece  of
  young           people       nil
  young_people    friend       to
  friend          brother      appos

To assign an NMR to a triple (M, H, Mk), the system looks for previous triples whose distance to the current triple is minimal. The NMRs assigned to previous similar triples comprise lists of candidate NMRs. The analyzer then finds what it considers the best NMR from these lists of candidates to present to the user for approval. Appositives are automatically assigned Equative.

5.1 Distance Between Triples

The distance between two triples is a measure of the degree to which their modifiers, heads and markers match. Table 3 gives the eight different values for distance used by NMR analysis. The analyzer looks for previous triples at the lower distances before attempting to find triples at higher distances. For example, it will try to find identical triples before trying to find triples whose markers do not match.

Several things about the distance measures require explanation. First, a preposition is more similar to a nil marker than to a different preposition. Unlike a different preposition, the nil marker is not known to be different from the marker in an overtly marked pair. Next, no evidence suggests that triples with matching M are more similar or less similar than triples with matching H (distances 3 and 6).

Triples with matching prepositional marker (distance 4) are considered more similar than triples with matching M or H only. A preposition is an overt indicator of the relationship between M and H (see Quirk, 1985: chapter 9) so a correlation is more likely between the preposition and the NMR than between a given M or H and the NMR.

If the current triple has a prepositional marker not seen in any previous triple (distance 5), the system finds candidate NMRs in its NMR marker dictionary. This dictionary was constructed from a list of about 50 common atomic and phrasal prepositions. The various meanings of each preposition were mapped to NMRs by hand. Since the list of prepositions is small, dictionary construction was not a difficult knowledge engineering task (requiring just twenty hours of work of a secondary school student).

Table 3: Measures of distance between triples
  dist  current triple   previous triple            example (current / previous)
  0     (M, H, Mk)       (M, H, Mk)                 wall beside a garden / wall beside a garden
  1     (M, H, <prep>)   (M, H, nil)                wall beside a garden / garden wall
  2     (M, H, Mk)       (M, H, _)                  wall beside a garden / wall around a garden
  3     (M, H, Mk)       (M, _, Mk) or (_, H, Mk)   pile of garbage / pile of sweaters
  4     (M, H, <prep>)   (_, _, <prep>)             pile of garbage / house of bricks
  5     (M, H, <prep>)   (_, _, _)                  ice in the cup / nmrm(in, [ctn,inst,loc,src,time])
  6     (M, H, Mk)       (M, _, _) or (_, H, _)     wall beside a garden / garden fence
  7     (M, H, Mk)       (_, _, _)                  wall beside a garden / pile of garbage
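As a hedged illustration, the distance measure of Table 3 above can be transcribed almost directly into code. The sketch below assumes triples are plain Python tuples (modifier, head, marker), with None standing for the nil marker; the function name is not from the paper.

def triple_distance(current, previous):
    (m, h, k), (pm, ph, pk) = current, previous
    is_prep = k is not None and k != "appos"   # prepositional marker?
    if (m, h, k) == (pm, ph, pk):
        return 0                               # identical triples
    if is_prep and (m, h) == (pm, ph) and pk is None:
        return 1                               # preposition vs. nil marker
    if (m, h) == (pm, ph):
        return 2                               # M and H match, markers differ
    if k == pk and (m == pm or h == ph):
        return 3                               # matching M or H, same marker
    if is_prep and k == pk:
        return 4                               # same prepositional marker
    if is_prep:
        return 5                               # unseen preposition: consult the NMR marker dictionary
    if m == pm or h == ph:
        return 6                               # matching M or H only
    return 7                                   # no match at all

The analyzer would then search previously stored triples in order of increasing distance, stopping at the first non-empty level.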
5.2 The Best NMRs

The lists of candidate NMRs consist of all those NMRs previously assigned to (M, H, Mk) triples at a minimum distance from the triple under analysis. If the minimum distance was 3 or 6, there may be two candidate lists: LM contains the NMRs previously assigned to triples with matching M, LH those with matching H. The analyzer attempts to choose a set R of candidates to suggest to the user as the best NMRs for the current triple.

If there is one list L of candidate NMRs, R contains the NMR (or NMRs) that occur most frequently in L. For two lists LM and LH, R could be found in several ways. We could take R to contain the most frequent NMRs in LM ∪ LH. This absolute frequency approach has a bias towards NMRs in the larger of the two lists. Alternatively, the system could prefer NMRs with the highest relative frequency in their lists. If there is less variety in the NMRs in LM than in LH, M might be a more consistent indicator of NMR than H. Consider example (12).

(12) front line

Compounds with the modifier front may always have been assigned Location. Compounds with the head line may have been assigned many different NMRs. If line has been seen as a head more often than front as a modifier, one of the NMRs assigned to line may have the highest absolute frequency in LM ∪ LH. But if Location has the highest relative frequency, this method correctly assigns Location to (12). There is a potential bias, however, for smaller lists (a single NMR in a list always has the highest relative frequency).

To avoid these biases, we could combine absolute and relative frequencies. Each NMR i is assigned a score si calculated as:

  si = freq(i ∈ LM)^2 / |LM| + freq(i ∈ LH)^2 / |LH|

R would contain the NMR(s) with the highest score. This combined formula was used in the experiment described in section 7.

5.3 Premodifiers as Classifiers

Since NMR analysis deals with endocentric compounds we can recover a taxonomic relationship from triples with a nil marker. Consider example (13) and its reduced pairs in (14):

(13) ((laser printer) stand)
(14) (laser printer) (printer stand)

These pairs produce the following output:

  laser_printer_stand isa stand
  laser_printer isa printer

6 User Interaction

The NMR analyzer is intended to start processing from scratch. A session begins with no previous triples to match against the triple at hand. To compensate for the lack of previous analyses, the system relies on the help of a user, who supplies the correct NMR when the system cannot determine it automatically.

In order to supply the correct NMR, or even to determine if the suggested NMR is correct, the user must be familiar with the NMR definitions. To minimize the burden of this requirement, all interactions use the modifier and head of the current phrase in the paraphrases from section 3.
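Returning briefly to the candidate scoring of section 5.2, here is a minimal sketch of the combined absolute/relative frequency formula, assuming LM and LH arrive as plain lists of NMR labels; the function name is illustrative.

def best_nmrs(l_m, l_h):
    # s_i = freq(i in LM)^2/|LM| + freq(i in LH)^2/|LH|
    def score(nmr):
        s = 0.0
        if l_m:
            s += l_m.count(nmr) ** 2 / len(l_m)
        if l_h:
            s += l_h.count(nmr) ** 2 / len(l_h)
        return s
    candidates = set(l_m) | set(l_h)
    if not candidates:
        return []
    top = max(score(n) for n in candidates)
    return [n for n in candidates if score(n) == top]

# best_nmrs(["loc", "loc"], ["loc", "src", "inst", "top"]) returns ["loc"],
# matching the intuition behind example (12).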
If the appropriate NMR is not among those suggested by the system, the user can furthermore request the complete list of paraphrases with the current modifier and head.

6.1 An Example

Figure 1 shows the interaction for phrases (15)-(18). The system starts with no previously analyzed phrases. The NMR marker dictionary maps the preposition of to twelve NMRs: Agent, Cause, Content, Equative, Located, Material, Object, Possessor, Property, Result, Source, Topic.

(15) small gasoline engine
(16) the repair of diesel engines
(17) diesel engine repair shop
(18) an auto repair center

User input is shown bold underlined. At any prompt the user may type 'list' to view the complete list of NMR paraphrases for the current modifier and head.

7 Evaluation

We present the results of evaluating the NMR analyzer in the context of a large knowledge acquisition experiment (see Barker et al., 1998). The NMR analyzer is one part of a larger interactive semantic analysis system. The experiment evaluated the semantic analysis of Atkinson (1990). We refer to it as the small engines experiment. Other experiments have shown similar results.

We consider three evaluation criteria. First, we evaluate the analyzer's ability to learn to make better suggestions to the user as more noun phrases are analyzed. Second, we evaluate its coverage by comparing the number of relationships assigned with the total number of such relationships in the text (i.e., the number it should have assigned). Third, we assess the burden that semi-automatic analysis places on the user.

7.1 Improvement in System Performance

Since the system starts with no previous noun phrase analyses, the user is responsible for supplying NMRs at the beginning of a session. To measure the rate of learning, we compare the cumulative number of assignments required from the user to the cumulative number of correct assignments suggested by the system.

In the small engines experiment, 886 modifier-noun pairs were assigned an NMR. We consider the system's assignment correct when the correct label is among its suggestions. According to this definition, 608 of the 886 NMRs (69%) were assigned correctly by the system. For most of these assignments (97.5%) the system offered a single suggestion. It had multiple (on average 3.3) suggestions only 22 times.

Phrase (15): small gasoline engine
There is a relationship between gasoline and gasoline_engine.
Please enter a valid NMR label: inst
Do you accept the NMR Instrument: gasoline is used in gasoline_engine Y
There is a relationship between small and small_gasoline_engine.
Please enter a valid NMR label: prop
Do you accept the NMR Property: small_gasoline_engine is small Y

Phrase (16): the repair of diesel engines
There is a relationship between diesel and diesel_engine.
NMR Analyzer's best suggestions for this input:
(1) prop: diesel_engine is diesel
(2) inst: diesel is used in diesel_engine
Please enter a number between 1 and 2: 2
Do you accept the NMR Instrument: diesel is used in diesel_engine Y
There is a relationship between diesel_engine and repair.
NMR Analyzer's best suggestions for this input:
(1) agt: repair is performed by diesel_engine
(2) caus: diesel_engine causes repair
(7) obj: diesel_engine is acted on by repair
(12) top: repair is concerned with diesel_engine
Please enter a number between 1 and 12: 7
Do you accept the NMR Object: diesel_engine is acted on by repair Y

Phrase (17): diesel engine repair shop
Do you accept the NMR Instrument: diesel is used in diesel_engine Y
Do you accept the NMR Object: diesel_engine is acted on by diesel_engine_repair Y
There is a relationship between diesel_engine_repair and diesel_engine_repair_shop.
Please enter a valid NMR label: purp
Do you accept the NMR Purpose: diesel_engine_repair_shop is meant for diesel_engine_repair Y

Phrase (18): an auto repair center
Do you accept the NMR Object: auto is acted on by auto_repair Y
Do you accept the NMR Purpose: auto_repair_center is meant for auto_repair Y

Figure 1: NMR analysis interaction for (15)-(18)

Figure 2 shows the cumulative number of NMR assignments supplied by the user versus those determined correctly by the system. After about 100 assignments, the system was able to make the majority of assignments automatically. The curves in the figure show that the system learns to make better suggestions as more phrases are analyzed.

[Figure 2: Cumulative NMR assignments. The plot tracks the cumulative number of user-supplied versus system-determined assignments against the number of modifier-noun pairs.]

7.2 NMR Coverage

The NMR analyzer depends on a parser to find noun phrases in a text. If parsing is not 100% successful, the analyzer will not see all noun phrases in the input text. It is not feasible to find manually the total number of relationships in a text, even in one of only a few hundred sentences. To measure coverage, we sampled 100 modifier-noun pairs at random from the small engines text and found that 87 of them appeared in the analyzer's output. At 95% confidence, we can say that the system extracted between 79.0% and 92.2% of the modifier-noun relationships in the text.

7.3 User Burden

User burden is a fairly subjective criterion. To measure burden, we assigned an "onus" rating to each interaction during the small engines experiment. The onus is a number from 0 to 3. 0 means that the correct NMR was obvious, whether suggested by the system or supplied by the user. 1 means that selecting an NMR required a few moments of reflection. A rating of 2 means that the interaction required serious thought, but we were
Any number of such files can be reloaded at the beginning of subsequent sessions, "seeding" the new sessions. It is neces- sary to establish the extent to which the triples and assignments from one text or domain are useful in the analysis of noun phrases from another domain. Acknowledgements This work is supported by the Natural Sciences and Engineering Research Council of Canada. References Atkinson, Henry F. (1990). Mechanics of Small En- gines. New York: Gregg Division, McGraw-Hill. Barker, Ken (1998). "A Trainable Bracketer for Noun Modifiers". The Twelfth Canadian Conference on Artificial Intelligence. Barker, Ken, Terry Copeck, Sylvain Delisle & Stan Szpakowicz (1997). "Systematic Construction of a Versatile Case System." Journal of Natural Lan- guage Engineering 3(4), December 1997. Barker, Ken, Sylvain Delisle & Stan Szpakowicz (1998). "Test-Driving TANKA: Evaluating a Semi- Automatic System of Text Analysis for Knowledge Acquisition." The Twelfth Canadian Conference on Artificial Intelligence. Finin, Timothy W. (1986). "Constraining the Interpre- tation of Nominal Compounds in a Limited Con- text." In Analyzing Language in Restricted Domains: Sublanguage Description and Processing, R. Grish- man & R. Kittredge, eds., Lawrence Erlbaum, Hillsdale, pp. 163-173. George, Steffi (1987). On "Nominal Non-Predicating" Adjectives in English. Frankfurt am Main: Peter Lang. Lauer, Mark (1995). "Corpus Statistics Meet the Noun Compound: Some Empirical Results." Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge. 47-54. Leonard, Rosemary (1984). The Interpretation of Eng- lish Noun Sequences on the Computer. Amsterdam: North-Holland. Levi, Judith N. (1978). The Syntax and Semantics of Complex Nominals. New York: Academic Press. Liberman, Mark & Richard Sproat (1992). "Stress and Structure of Modified Noun Phrases." Lexical Mat- ters (CSLI Lecture Notes, 24). Stanford: Center for the Study of Language and Information. Miller, George A., ed. (1990). "WordNet: An On-Line Lexical Database." International Journal of Lexicog- raphy 3(4). Pustejovsky, James, S. Bergler & P. Anick (1993). "Lexical Semantic Techniques for Corpus Analysis." Computational Linguistics 19(2). 331-358. Quirk, Randolph, Sidney Greenbaum, Geoffrey Leech & Jan Svartvik (1985). A Comprehensive Grammar of the English Language. London: Longman. Resnik, Philip Stuart (1993). "Selection and Informa- tion: A Class-Based Approach to Lexical Relation- ships." Ph.D. thesis, IRCS Report 93-42, University of Pennsylvania. ter Stal, Wilco (1996). "Automated Interpretation of Nominal Compounds in a Technical Domain." Ph.D. thesis, University of Twente, The Netherlands. Vanderwende, Lucy (1993). "SENS: The System for Evaluating Noun Sequences." In Natural Language Processing: The PLNLP Approach, K. Jensen, G. Heidorn & S. Richardson, eds., Kluwer Academic Publishers, Boston, pp. 161-173. Warren, Beatrice (1978). Semantic Patterns of Noun- Noun Compounds. G/Steborg: Acta Universitatis Gothoburgensis. Warren, Beatrice (1984). Classifying Adjectives. GSte- borg: Acta Universitatis Gothoburgensis. 102
Redundancy: helping semantic disambiguation

Caroline Barrière
School of Information Technology and Engineering
University of Ottawa
Ottawa, Canada, K1N 7Z3
[email protected]

Abstract

Redundancy is a good thing, at least in a learning process. To be a good teacher you must say what you are going to say, say it, then say what you have just said. Well, three times is better than one. To acquire and learn knowledge from text for building a lexical knowledge base, we need to find a source of information that states facts, and repeats them a few times using slightly different sentence structures. A technique is needed for gathering information from that source and identifying the redundant information. The extraction of the commonality is an active learning of the knowledge expressed. The proposed research is based on a clustering method developed by Barrière and Popowich (1996) which performs a gathering of related information about a particular topic. Individual pieces of information are represented via the Conceptual Graph (CG) formalism and the result of the clustering is a large CG embedding all individual graphs. In the present paper, we suggest that the identification of the redundant information within the resulting graph is very useful for disambiguation of the original information at the semantic level.

1 Introduction

The construction of a Lexical Knowledge Base (LKB), if performed automatically (or semi-automatically), attempts to extract knowledge from text. The extraction can be viewed as a learning process. Simplicity, clarity and redundancy of the information given in the source text are key features for a successful acquisition of knowledge. We assume success is attained when a sentence from the source text expressed in natural language can be transformed into an unambiguous internal representation. Using a conceptual graph (CG) representation (Sowa, 1984) of sentences means that a successful acquisition of knowledge corresponds to transforming each sentence from the source text into a set of unambiguous concepts (correct word senses found) and unambiguous relations (correct semantic relations between concepts).

This paper will look at the idea of making good use of the redundancy found in a text to help the knowledge acquisition task. Things are not always understood when they are first encountered. A sentence expressing new knowledge might be ambiguous (at the level of the concepts it introduces and/or at the level of the semantic relations between those concepts). A search through previously acquired knowledge might help disambiguate the new sentence or it might not. A repetition of the exact same sentence would be of no help, but a slightly different format of expression might reveal necessary aspects for the comprehension. This is the avenue explored in this paper, which will unfold as follows. Section 2 will present briefly a possible good source of knowledge and a gathering/clustering technique. Section 3 will present how the redundancy resulting from the clustering process can be used in solving some types of semantic ambiguity. Section 4 will emphasize the importance of semantic relations for the process of semantic disambiguation. Section 5 will conclude.

2 Source of information and clustering technique

To acquire and learn knowledge from text for building a lexical knowledge base, we need to find a source of information that states facts, and repeats them a few times using slightly different sentence structures.
A technique is needed for gathering information from that source and identifying the redundant information. These two aspects are discussed hereafter: (1) the choice of a source of information and (2) the information gathering technique.

2.1 Choice of source of information

When we think of learning about words, we think of textbooks and dictionaries. Redundancy might be present but not always simplicity. Any text is written at a level which assumes some common knowledge among potential readers. In a textbook on science, the author will define the scientific terms but not the general English vocabulary. In an adult's dictionary, all words are defined, but a certain knowledge of the "world" (common sense, typical situations) is assumed as common adult knowledge, so the emphasis of the definitions might not be on simple cases but on more ambiguous or infrequent cases. To learn the basic vocabulary used in day to day life, a very simple children's first dictionary is a good place to start. In (Barrière, 1997), such a dictionary is used for an application of LKB construction in which no prior semantic knowledge was assumed. In the same research the author explains how to use a multi-stage process to transform the sentences from the dictionary into conceptual graph representations. This dictionary, the American Heritage First Dictionary (AHFD; copyright ©1994 by Houghton Mifflin Company, reproduced by permission), is an example of a good source of knowledge in terms of simplicity, clarity and redundancy. Some definitions introduce concepts that are mentioned again in other definitions.

2.2 Gathering of information

Barrière and Popowich (1996) presented the idea of concept clustering for knowledge integration. First, a Lexical Knowledge Base (LKB) is built automatically and contains all the nouns and verbs of the AHFD, each word having its definition represented using the CG formalism. Here is a brief summary of the clustering process from there. It is not a statistical clustering but more a "graph matching" type of clustering. A trigger word is chosen and the CG representation of its defining sentences makes up the initial CCKG (concept clustering knowledge graph). The trigger word can be any word, but preferably it should be a semantically significant word. A word is semantically significant if it occurs less than a maximal number of times in the text, therefore excluding general words such as place, or person.

The clustering is really an iterative forward and backward search within the LKB to find definitions of words that are somewhat "related" to the trigger word. A forward search looks at the definition of the words used in the trigger word's definition. A backward search looks at the definition of the words that use the trigger word to be defined. A word becomes part of the cluster if its CG representation shares a common subgraph of a minimal size with the CCKG. The process is then extended to perform forward and backward searches based on the words in the cluster and not only on the trigger word. The cluster becomes a set of words related to the trigger word, and the CCKG presents the trigger word within a large context by showing all the links between all the words of the cluster. The CCKG is a merge of all individual CGs from the words in the cluster.
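Before looking at example clusters, here is a schematic sketch of the iterative forward/backward clustering loop just described. It assumes each word's CG is flattened to a set of (concept, relation, concept) triples stored in a dict, and a crude triple-overlap count stands in for the paper's common-subgraph size; none of these names or simplifications come from the paper.

def build_cluster(trigger, graphs, threshold):
    def shared_size(g, h):
        return len(g & h)                  # stand-in for common-subgraph size
    cluster, cckg = {trigger}, set(graphs[trigger])
    frontier = [trigger]
    while frontier:
        word = frontier.pop()
        # Forward search: words appearing in this word's definition graph.
        forward = {c for (a, _, b) in graphs[word] for c in (a, b)}
        # Backward search: words whose definition graph uses this word.
        backward = {w for w, g in graphs.items()
                    if any(word in (a, b) for (a, _, b) in g)}
        for cand in (forward | backward) - cluster:
            if cand in graphs and shared_size(set(graphs[cand]), cckg) >= threshold:
                cluster.add(cand)
                cckg |= set(graphs[cand])  # the CCKG merges all member CGs
                frontier.append(cand)
    return cluster, cckg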
Table 1 shows examples of clusters found by using the clustering technique on the AHFD. If a word is followed by _#, it means the sense # of that word. The CCKGs corresponding to the clusters are not illustrated as it would require much space to show all the links between all the words in the clusters. (The reader might wonder why such words as [rainbow] are associated with [needle_1] or why [kangaroo] is associated with [stomach]. The AHFD tells the child that "A rainbow looks like a ribbon of many colors across the sky." and "Kangaroo mothers carry their babies in a pocket in front of their stomachs." The threshold used to define the minimal size of the common subgraph necessary to include a new word in the cluster is established experimentally. Changing that threshold will change the size of the resulting cluster, therefore affecting which words will be included. The clustering technique, and a derived extended clustering technique, are explained in much detail in (Barrière and Fass, 1998).)

Table 1: Multiple clusters from different trigger words
  needle_1: {needle_1, sew, cloth, thread, wool, handkerchief, pin, ribbon, string, rainbow}
  sew: {sew, cloth, needle_1, needle_2, thread, button, patch_1, pin, pocket, wool, ribbon, rug, string, nest, prize, rainbow}
  kitchen: {kitchen, stove, refrigerator, pan}
  stove: {stove, pan, kitchen, refrigerator, pot, clay}
  stomach: {stomach, kangaroo, pain, swallow, mouth}
  airplane: {airplane, wing, airport, fly_2, helicopter, jet, kit, machine, pilot, plane}
  elephant: {elephant, skin, trunk_1, ear, zoo, bark, leather, rhinoceros}
  soap: {soap, dirt, mix, bath, bubble, suds, wash, boil, steam}
  wash: {wash, soap, bath, bathroom, suds, bubble, boil, steam}

The clustering method described is based on the principle that information is acquired from a machine readable dictionary (the AHFD), and therefore each word is associated with some knowledge pertaining to it. To extend this clustering technique to a knowledge base containing non-classified pieces of information, we would need to use some indexing scheme allowing access to all the sentences containing a particular word in them.

3 Semantic disambiguation

We propose in this section a way to attempt to solve different types of semantic ambiguities by using the redundancy of information resulting from the clustering technique as briefly described in the previous section. Going through an example, we will look at three types of semantic ambiguity: anaphora resolution, word sense disambiguation, and relation disambiguation.

In Figure 1, Definition 3.1 shows one sentence in the definition of mail_1 (taken from the AHFD, as all other definitions in Figure 1) with its corresponding CG representation. Definition 3.2 shows one sentence in the definition of stamp, also with its CG representation. Using the clustering technique briefly described in the previous section, the two words are put together into a cluster triggered by the concept [mail_1]. Result 3.1 shows the maximal join between the two previous graphs around the shared concept [mail_1]. (A maximal join is an operation defined within the CG formalism to gather knowledge from two graphs around a concept that they both share.) Combining the information from stamp and mail_1 puts the redundant information in evidence. The reduction process for eliminating this redundancy will solve some ambiguities. This process is based on the idea of finding "compatible" concepts within a graph. Two concepts are compatible if their semantic distance is small. That distance is often based on
the relative positions of concepts within the concept hierarchy (Delugach, 1993; Foo et al., 1992; Resnik, 1995). For the present discussion we assume that two concepts are compatible if they share a semantically significant common supertype, or if one concept is a supertype of the other.

In Result 3.1, the concept [send] is present twice, and also the concept [letter] is present in two compatible forms: [letter] and [message]. The compatibility comes from the presence in the type hierarchy (built automatically from information extracted from the AHFD) of one sense of [letter], [letter_2], as being a subtype of [message]. These compatible forms actually allow the disambiguation of concept [letter] into [letter_2]. This should update the definition of stamp shown in Definition 3.2. The other sense of [letter], [letter_1], is a subtype of [symbol].

The pronoun they in Result 3.1 must refer to some word, either previously mentioned in the sentence, or assumed known (as a default) in the LKB. Both (agent) relations attached to concept [send] lead to compatible concepts: [they] and [person]. We can therefore go back to the graph definition of [stamp] in which the pronoun [they] could have referred to the concepts [letters], [packages], [people] or [stamps], and now disambiguate it to [people].

Result 3.2 shows the internal join which establishes coreference links (shown by *x, *y, *z) between compatible concepts that are in an identical relation with another concept. The reduced join, after the redundancy is eliminated, is shown in Result 3.3.

Two types of disambiguation (anaphora resolution and word sense disambiguation) were shown up to now. The third type of disambiguation is at the level of the semantic relations. For this type of ambiguity, we must briefly introduce the idea of a relation hierarchy, which is described and justified in more detail in (Barrière, 1998). A relation hierarchy, as presented in (Sowa, 1984), is simply a way to establish an order between the possible relations. The idea is to include relations that correspond to the English prepositions (it could be the prepositions of any language studied) at the top of the hierarchy, and consider them generalizations of possible deeper semantic relations.
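A small sketch of the concept compatibility test assumed above, under the simplifying assumption that the type hierarchy is a tree encoded as a child-to-parent dict; the predicate `significant` stands in for the paper's notion of a semantically significant type, and all names are illustrative.

def supertypes(concept, parent):
    # Walk up the type hierarchy from a concept towards the root.
    chain = []
    while concept in parent:
        concept = parent[concept]
        chain.append(concept)
    return chain

def compatible(c1, c2, parent, significant):
    sup1, sup2 = supertypes(c1, parent), supertypes(c2, parent)
    if c1 == c2 or c1 in sup2 or c2 in sup1:
        return True                        # one concept subsumes the other
    # Otherwise, require a semantically significant common supertype.
    return any(significant(t) for t in set(sup1) & set(sup2))

# With parent = {"letter_2": "message"}, the pair ("letter_2", "message") is
# compatible, which licenses identifying [letter] with [message] in
# Result 3.2 of Figure 1 below.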
[send]-> (object)-> [letter:plural]-> (and)- > [package:plu ral] <-(on) <-[put] <-(goal) <-[buy]->(agent)-> [person:plural] -> (object)->[stamp:plural] - > (agent)- > [they] -> (through)->[mail_l] Result 3.1 - Maximal join between mail_l and stamp [send]-> (object)-> [letter:plural]-> (and)- > [package:plu ral] <-(on) <-[put] <-(goal) <-[buy]->(agent)-> [person:plural] -> (object)->[stamp:plural] -> (agent)-> [they] -> (through)-> [mail_l] <-(through)<-[send]-> (object)-> [message:plural] -> (agent)->[person:plural] Result 3.2 - Internal Join on Graph maiL1/stamp [send *y]->(object)->[letter:plural *z]->(and)->[package:plural] <-(on) <-[put] <-(goal) <-[buy]->(agent)-> [person:plural] ->(object)->[stamp:plural] - > (agent)- > [they *x] ->(through)->[mail_l]<-(through)<-[send *y]->(object)->[message:plural *z] -> (agent)->[person:plural *x] Result 3.3 - After reduction of graph mail_l/stamp [letter_2:plural]-> (and)-> [package:plu ral] <-(object) <-[send]-> (agent)-> [person:plural] -> (through)->[mail_l] <-(on) <-[put] <-(goal) <-[buy]-> (agent)->[person:plural] -> (object)-> [stamp: plural] Figure 1: Example of ambiguity reduction 106 Definition 3.3 - CARD_2 : You send cards to people in the mail. [send]-> (agent)- > [you] - > (object)-> [card_2:plu ral] -> (to)-> [person:plural] -> (in)->[mail_l] Result 3.4 - Graph mail_l/stamp joined to card (after internal join) [letter_2:plural]-> (and)- > [package:plural] <-(object) <-[send *y]->(agent)->[person:plural *z] -> (through)-> [mail-l] <-(in)<-[send *y]-> (to)-> [person :plural] -> (agent)-> [you *z] ->(object)->[card_2:plural] <-(on) <-[put] <-(goal) <-[buy]-> (agent)-> [person :plural] -> (object)->[stamp:plural] Result 3.5 - After reduction of graph mail_l/stamp/card_2 [letter.2:plural]-> (and)- > [package: plural]- > (and)-> [card-2:plural] <-(object) <-[send]-> (agent)- > [person:plural] ->(manner)->[mail_l] ->(to)-> [person:plural] <-(on)<-[put] <-(goal)<-[buy]-> (agent)-> [person :plural] -> (object)-> [sta m p:plu ral] Figure 1: Example of ambiguity reduction (continued) This relation hierarchy is important for the comparison of graphs expressing similar ideas but using different sentence patterns that are reflected in the graphs by different prepositions becoming relations. Let us look in Figure 1 at Definition 3.3 which gives a sentence in the defi- nition of [card_2] and Result 3.4 which gives the maximal join with graph mail_l/stamp from re- sult 3.3 around concept [mail_l]. Subgraphs [send]->(in)->[mail_l] and [send]- > (through)-> [mail_l] have compatible concepts on both sides of two different relations. These two prepositions are both supertypes of a re- stricted set of semantic relations. On Figure 2 which shows a small part of the relation hierar- chy, we highlighted the compatibility between through and in. It shows that the two prepo- sitions interact at manner (at location as well but more indirectly). Therefore, we can estab- lish the similarity of those two relations via the manner relation, and the ambiguity is resolved as shown in Result 3.5. Note that the concept [person] is present many time among the different graphs in Fig- ure 1. This gives the reader an insight into the complexity behind clustering. It all relies on compatibility of concepts and relations. Com- patibility of concepts alone might be sufficient if the concepts are highly semantically significant, but for general concepts like [person], [place], [animal] we cannot assume so. 
In the graph pre- sented in Result 3.5, there are buyers of stamps, receivers and senders of letters and they are all people, but not necessarily the same ones. We saw the redundancy resulting from the clustering process and how to exploit this re- dundancy for semantic disambiguation. We see how redundancy at the concept level without the relations can be very misleading, and the following section emphasize the importance of semantic relations. 107 with ['~ I throughl at on accompanimen/ part-o/~ ,~~ instrument y ~ ~ point-in-time about Imannerl Ilocationl destination direction source Figure 2: Small part of relation taxonomy. 4 The importance of semantic relations Clusters are and have been used in different applications for information retrieval and word sense disambiguation. Clustering can be done statistically by analyzing text corpora (Wilks et al., 1989; Brown et al., 1992; Pereira et al., 1995) and usually results in a set of words or word senses. In this paper, we are using the clustering method used in (Barri~re and Popowich, 1996) to present our view on re- dundancy and disambiguation. The clustering brings together a set of words but also builds a CCKG which shows the actual links (semantic relations) between the members of the cluster. We suggest that those links are essential in an- alyzing and disambiguating texts. When links are redundant in a graph (that is we find two identical links between two compatible concepts at each end) we are able to reduce semantic am- biguity relating to anaphora and word sense. The counterpart to this, is that redundancy at the concept level allows us to disambiguate the semantic relations. To show our argument of the importance of links, we present an example. Example 4.1 shows a situation where an ambiguous word chicken (sense 1 for the animal and sense 2 for the meat) is used in a graph and needs to be disambiguated. If two graphs stored in a LKB contain the word chicken in a disam- biguated form they can help solving the ambi- guity. In Example 4.1, Graph 4.1 and Graph 4.2 have two isolated concepts in common: eat and chicken. Graph 4.1 and Graph 4.3 have the same two concepts in common, but the addi- tion of a compatible relation, creating the com- mon subgraph [eat]->(object)->[chicken], makes them more similar. The different relations be- tween words have a large impact on the meaning of a sentence. In Graph 4.1, the word chicken can be disambiguated to chicken_2. Example 4.1 - John eats chicken with a fork. Graph 4.1 - [eat]-> (agent)-> [John] ->(with)->[fork] -> (object)- > [chicken] John's chicken eats grain. Graph 4.2 - [eat]-> (agent)-> [chicken_l] <-(poss) <-[John] -> (object)->[grain] 108 John likes to eat chicken at noon. Graph 4.3 - [like]-> (agent)-> [John] -> (goal)->[eat]-> (object)->[chicken_2] ->(tirne)-> [noon] Only if we look at the relations between words can we understand how different each statement is. It's all in the links... Of course those links might not be necessary at all levels of text anal- ysis. If we try to cluster documents based on keywords, well we don't need to go to such a deep level of understanding. But when we are analyzing one text and trying to understand the meaning it conveys, we are probably within a narrow domain and the relations between words take all their importance. 
For example, if we are trying to disambiguate the word baseball (the sport or the ball), both senses of the words will occur in the same context, therefore using clusters of words that identify a context will not allow us to disambiguate between both senses. On the other hand, having a CCKG showing the relations between the baseball_l (ball), the bat, the player and the baseball_2 (sport), will express the desired information. 5 Conclusion We presented the problem of semantic disam- biguation as solving ambiguities at the concept level (word sense and anaphora) but also at the link level (the relations between concepts). We showed that when gathering information around a particular subject via a clustering method, we tend to cumulate similar facts ex- pressed in slightly different ways. That redun- dancy is expressed by multiple copies of com- patible/identical concepts and relations in the resulting graph which is called a CCKG (Con- cept Clustering Knowledge Graph). The re- dundancy within the links (relations) helps dis- ambiguate the concepts they connect and the redundancy within the concepts helps disam- biguate the links connecting them. Clustering has been used a lot in previous research but only at the concept level; we propose that it is essen- tial to understand the links between the con- cepts in the cluster if we want to disambiguate between elements that share a similar context of usage. References C. Barri6re and D. Fass. 1998. Dictionary vali- dation through a clustering technique. To be published in the Proceedings of Euralex'98: Eight EURALEX International Congress on Lexicography, Belgium, August 1998. C. Barri~re and F. Popowich. 1996. Concept clustering and knowledge integration from a children's dictionary. In Proc. o] the 16 ~h COLING, Copenhagen, Danemark, August. C. Barri~re. 1997. From a Children's First Dic- tionary to a Lexical Knowledge Base of Con- ceptual Graphs. Ph.D. thesis, Simon Fraser University, June. C. Barri~re. 1998. The relation hierarchy: one key to representing natural language using conceptual graphs. Submitted at ICCS98: International Conference on Con- ceptual Structures, to be held in Montpellier, France, August 1998. P. Brown, V.J. Della Pietra, P.V. deSouza, J.C. Lai, and R.L. Mercer. 1992. Class-based 11- gram models of natural language. Computa- tional Linguistics, 18(4):467-480. H. S. Delugach. 1993. An exploration into semantic distance. In H. D. Pfeiffer and T. E. Nagle, editors, Conceptual Structures: Theory and Implementation, pages 119-124. Springer, Berlin, Heidelberg. N. Foo, B.J. Garner, A. Rao, and E. Tsui. 1992. Semantic distance in conceptual graphs. In T.E. Nagle, J.A. Nagle, L.L. Gerholz, and P.W.Eklund, editors, Conceptual Structures: Current Research and Practice, chapter 7, pages 149-154. Ellis Horwood. F. Pereira, N. Tishby, and L. Lee. 1995. Distri- butional clustering of english words. In Proc. of the 33 th A CL, Cambridge,MA. Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proc. o] the 1.~ th IJCAI, volume 1, pages 448-453, Montreal, Canada. J. Sowa. 1984. Conceptual Structures in Mind and Machines. Addison-Wesley. Y. Wilks, D. Fass, G-M Guo, J. McDonald, T. Plate, and B. Slator. 1989. A tractable machine dictionary as a resource for computa- tional semantics. In Bran Boguraev and Ted Briscoe, editors, Computational Lexicography for Natural Language Processing, chapter 9, pages 193-231. Longman Group UK Limited. 109
1998
16
An Efficient Kernel for Multilingual Generation in Speech-to-Speech Dialogue Translation Tilman Becker and Wolfgang Finkler and Anne Kilger and Peter Poller German Research Center for Artificial Intelligence (DFKI GmbH) Stuhlsatzenhausweg 3 D-66123 Saarbriicken Germany becker~dfki.de, finkler~dfki.de, kilgerOdfki.de, poller~dfki.de Abstract We present core aspects of a fully implemented generation component in a multilingual speech- to-speech dialogue translation system. Its de- sign was particularly influenced by the neces- sity of real-time processing and usability for multiple languages and domains. We devel- oped a general kernel system comprising a mi- croplanning and a syntactic realizer module. Tile microplanner performs lexical and syntac- tic choice, based on constraint-satisfaction tech- niques. The syntactic realizer processes HPSG grammars reflecting the latest developments of the underlying linguistic theory, utilizing their pre-processing into the TAG formalism. The declarative nature of the knowledge bases, i.e., the microplanning constraints and the HPSG grammars allowed an easy adaption to new do- mains and languages. The successful integra- tion of our component into the translation sys- tem Verbmobil proved the fulfillment of the spe- cific real-time constraints. 1 Introduction In this paper we present core aspects of the mul- tilingual natural language generation compo- nent VM-GECO 1 that has been integrated into the research prototype of Verbmobil (Wahlster, 1993; Bub et al., 1997), a system for sponta- neous speech-to-speech dialog translation. In order to achieve multilinguality as ele- gantly as possible we found that a clear modu- lar separation between a language-independent general kernel generator and language-specific parts which consist of syntactic and lexical knowledge sources was a very promising ap- proach. Accordingly, our generation component 1VM-GECO is an acronym for "VerbMobil GEnera- tion COmponents." consists of one kernel generator and language- specific knowledge sources for the languages used in Verbmobih German and English with current work on Japanese. Additionally, the kernel generator itself can be modularized furthermore into two separate components. The task of the so-called mi- eroplanning component is to plan an utterance on a phrase- or sentence-level (Hovy, 1996) in- cluding word-choice (section 2). It generates an annotated dependency structure which is used by the syntactic generation component to re- alize an appropriate surface string for it (sec- tion 3). The main goal of this further modular- ization is a stepwise constraining of the search- space of alternative linguistic realizations, using abstracted views on different choice criteria. Multilingual generation in dialog translation imposes strong requirements on the generation module. A very prominent problem is the non- wellformedness (incorrectness, irrelevance, and inconsistency) of spontaneous input. It forces the realization of robust generation to be able to cope with erroneous and incomplete input data so that the quality of the generated out- put may vary between syntactically correct sen- tences and semantically understandable utter- ances. On the level of knowledge sources this is achieved by using a highly declarative HPSG grammar which very closely reflects the latest developments of the underlying linguistic the- ory (Pollard and Sag, 1994) and covers phe- nomena of spoken language. 
This HPSG is compiled into a TAG grammar in an offtine pre-Processing step (Kasper et al., 1995) which keeps the declarative nature of the grammar in- tact (section 3). Maybe the most important requirement on the generation module of a speech-to-speech translation system is real-time processing. The 110 above mentioned features of VM-GECO con- tribute to the efficiency of the generation com- ponent. The TAG-formalism is well known for the existence of efficient syntactic generation al- gorithms (Kilger and Finkler, 1995). In general, all knowledge sources of all mod- ules are declarative. The main advantage is that this allows for an easier adaptation of the generation component to other domains, lan- guages and semantic representation languages besides the easier extendability of the current system. The feasibility of the language adap- tation was proved in the Verbmobil project it- self where the (originally English) generator was recently extended to cover German and is cur- rently adapted for Japanese. The adaptation to another domain and also to another specifi- cation language for intermediate structures was shown in another translation project which uses in contrast to Verbmobil an interlingua based approach (section 4.1). 2 The Microplanner A generation system for target language utter- ances in an approach to speech-to-speech trans- lation has to work on input elements represent- ing intermediate results of recognition, analy- sis, and transfer components. In that setting, several of the tasks of a complete natural lan- guage generation system such as selection and organization of the contents to be expressed are outside of the control of our generator. They have been decided by the human user of the translation system or they have been negoti- ated and computed by a transfer component. Nevertheless, there remain a number of different but highly interrelated subtasks of the genera- tion process where decisions have to be made in order to determine and realize the trans- lation result to be sent to a speech synthesis component. The diverse subtasks -- often col- lectively denoted as microplanning (cf. (Levelt, 1989; Hovy, 1996)) -- comprise the planning of a rough structure of the target language ut- terance, the determination of sentence borders, sentence type, topicalization, theme-rheme or- ganization of sentential units, focus control, uti- lization of nominalized, or infinitival style, as well as triggering the generation of anaphora and lexical choice. In addition, they have to address the problem of expressibility of the se- lected contents in a text realization component, i.e., bridging the generation gap (see (Meteer, 1990)). The input to our microplanning component consists of semantic representations encoded in a minimal recursive structure following a vari- ant of UDRT. Each individual indicated by some input utterance is formally represented by a discourse referent. Information about the in- dividual is encoded within the DRS-conditions. Relations between descriptions of different dis- course referents lead to a hierarchical semantic structure (see Figure 1 for a graphical represen- tation of fragments of an example input to the generator). Discourse referents are depicted as boxes headed by individual names in; conditions are illustrated within those boxes. [] mm _ [] /Im ==> {1151416 IS} [] temp_loc {i213} workjcceptable 12 arg3 {i214} perspective {i2 I1) ;em_Groul 1,3 I,,o. 
13 I ~ ~ ' ~ Itll demonstrative {i3 It2 ht 1) J Figure 1: Example Input to the Generator Besides these input terms from the transfer component, the generator may access knowl- edge about the dialogue act, the dialogue his- tory as well as some prosodic information of the user's utterance. The output of the microplanner is a sentence plan that serves as input for the syntactic real- ization component. It describes a dependency tree over lexical items annotated with syntac- tic, semantic, and pragmatic information which is relevant to produce an acceptable utterance and guide the speech synthesis component. 2.1 Design of the Microplanning Kernel An important design principle of our generator is the demand to cope with multidirectional de- pendencies among decisions of the diverse sub- tasks of microplanning without preferring one 111 order of decisions over others. E.g., the choice of an interrogative sentence requires an (at least elliptical) verbal phrase as a major constituent of the sentence; nominalization or the choice of passive voice depends on the result of word choice, etc. Therefore, we conceived microplan- ning as a constraint-satisfaction problem (Ku- mar, 1992) representing undirected relations be- tween variables. Thereby, variables are created for elements in the input to the generator. They are connected by means of weighted constraints. The domains of the variables correspond to ab- stractions of possible alternatives for syntactic realizations of the semantic elements including sets of specifications of lexical items and syntac- tic features. A solution of the constraint system is a globally consistent instantiation of the vari- ables and is guaranteed to be a valid input for the syntactic generation module. Since there might be locally optimal mappings that lead to contradiction on a global level, the microplan- net generally uses these weighted constraints to direct a backtracking or propagation process. One the one hand, the advantages of utiliz- ing a constraint system lie in the declarativ- ity of the knowledge sources allowing for an easier adaptation of the system to other do- mains and languages. We benefited from this design decision and realized microplanning for English and German by means of merely estab- lishing new rule sets for lexical and syntactic choice. The core engine for constraint process- ing was reused without modification. On the other hand, having defined a suitable represen- tation of the problem to be solved, a constraint- based approach also establishes a testbed for examining the pros and cons of different eval- uation methods, including backtracking, con- straint propagation, heuristics for the order of the instantiation of variable values, to name a few means of dealing with competition among alternatives and to find a solution. The microplanner makes use of the minimal recursive structure of its semantic input term (see Fig. 1) by triggering activities by bundles of conditions, discourse referents, and holes repre- senting underspecified scope relations in the in- put. These three input categories are reflected by different microplanning rule sets that are ap- plied conjointly during the process of microplan- ning. The rules are represented as pattern- condition-action triples. 
A pattern is to be matched with part of the input, a condition describes additional context-dependent requirements to be fulfilled by the input, and the action part describes a bundle of syntactic features realizing lexical entities and their relations to complements and modifiers. A microplanning rule for the combination of the semantic predicates WORK_ACCEPTABLE, ARG3, and PERSPECTIVE which get realized as a finite verb, i.e., representing a 3:1 mapping of semantic predicates to a syntactic specification, is shown in Figure 2.

    ;; standard finite verb with 2 complements
    ((WORK_ACCEPTABLE (L I)                      ;; pattern
      ARG3 (L I I2)
      PERSPECTIVE (L I I3))
     ($not ($sem-match NOM (L I)))               ;; condition
     ((WORK_ACCEPTABLE (CAT V)                   ;; action
        (HEAD (OR SUIT_V1 SUIT_V2))
        (FORM ordinary)
        (TENSE $get-tense I)
        (VOICE $get-voice I))
      (I2 (GENDER (NOT MAS FEM)))
      (REGENT-DEP-FUNC WORK_ACCEPTABLE I2 AGENT)
      (REGENT-DEP-FUNC WORK_ACCEPTABLE I3 PATIENT)
      (KEY KEY-V))
     ;; nominalized form ...
     )

Figure 2: Example Microplanning Content Rule

In the condition part of the verbal mapping the existence of a NOM-condition within the semantic input information is tested. It would forbid the verbal form by demanding a nominalized form. The action part describes the result of lexical selection (the lemma "suit") plus generic functions for computing relevant syntactic features like tense and voice. I2, which stands for the ARG3 of WORK_ACCEPTABLE and is defined by a database of linking information as the semantic agent, is characterized as allowing neither gender masc(uline) nor fem(inine), for preventing "he suits" in the sense of "he is okay". Entries starting with KEY define identifiers used for computing the preference value of a microplanning rule with respect to the given situation. In an additional database, KEYs are associated with weights for predefined situation characteristics such as time pressure or register. The microplanning content rules are not directly entered by a rule writer but are compiled off-line from several knowledge sources for lexical choice rules, rules for syntactic decisions and linking rules, thereby filtering out contradictory combinations without requiring on-line runtime.

Regarding the sets of alternatives that result from the application of the microplanning rules, the most direct way of realizing a constraint net seems to be the definition of one variable for each condition, discourse referent, and hole, leading to a variable net as shown in Figure 3.

Figure 3: Variable Net for Microplanning

For our task, it is not enough to define binary matching constraints between each pair of variables that purely test the compatibility of the described syntactic features. Some syntactic specifications may contain identifications of further entities, e.g., discourse referents and syntactic identifiers, which influence the result of the compatibility test between a pair of variables referring to these identifiers. Thus, the constraint net is not easily subdivided into subnets that can be efficiently evaluated. The large number of combinations of alternative values is handled by known means for CSP such as uniting variables with 1-value domains and applying matching mechanisms to their values, computation of 2-consistency by matching value pairs and filtering out inconsistent ones, storing and reusing knowledge about binary incompatibility, and performing intelligent backtracking. The result of the constraint solving process for the input shown in Fig. 1 is given in Fig. 4.
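Of these CSP techniques, the 2-consistency filtering can be sketched as follows; the domains, edges and compatibility test are invented placeholders, and the real system additionally caches binary incompatibilities.

    def revise(domains, x, y, compatible):
        """Drop values of x lacking any compatible partner in y."""
        changed = False
        for vx in list(domains[x]):
            if not any(compatible(vx, vy) for vy in domains[y]):
                domains[x].remove(vx)
                changed = True
        return changed

    def two_consistency(domains, edges, compatible):
        arcs = [(x, y) for x, y in edges] + [(y, x) for x, y in edges]
        queue = list(arcs)
        while queue:
            x, y = queue.pop()
            if revise(domains, x, y, compatible):
                # re-examine arcs pointing into x
                queue.extend((a, b) for (a, b) in arcs
                             if b == x and a != y)
        return domains

    # toy example: a head choice and a dependent whose form must agree
    domains = {"head": ["verbal", "nominal"], "dep": ["accusative-np"]}
    ok = {("verbal", "accusative-np")}            # allowed combinations
    two_consistency(domains, [("head", "dep")],
                    lambda a, b: (a, b) in ok or (b, a) in ok)
    print(domains)   # -> {'head': ['verbal'], 'dep': ['accusative-np']}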
Figure 4: Microplanning Result for the Example (a dependency tree rooted in L21-QUEST with (intention wh-question), dominating L5-WORK_ACCEPTABLE (head suit_v1 or suit_v2, voice active, tense future), L6-TEMP_LOC (head when1, wh-focus t), L10-PRON, L13-PRON and L15-TEMP_LOC (head then_adv))

3 The Realizer

The syntactic realizer [2] proceeds from the microplanning result as shown in Figure 5. It produces a derived phrase structure from which the output string is read off. The realizer is based on a fully lexicalized grammar in the sense that every lexical item selects for a finite set of possible phrase structures (called elementary trees). In particular, we use a Feature-Based Lexicalized Tree-Adjoining Grammar (FB-LTAG, see (Vijay-Shanker and Joshi, 1988; Schabes et al., 1988)) that is derived from an HPSG grammar (see section 4 for some more details). The elementary trees (see Figure 9) can be seen as maximal partial projections. A derivation of an utterance is constructed by combining appropriate elementary trees with the two elementary TAG operations of adjunction and substitution.

[2] A more detailed description is contained in (Becker, 1998).

For each node (i.e., lexical item) in the dependency tree, the tree selection phase determines the set of relevant TAG trees. A first tree retrieval step maps every object of the dependency tree into a set of applicable elementary TAG trees. The main tree selection phase uses information from the microplanner output to further refine the set of retrieved trees. The combination phase finds a successful combination of trees to build a (derived) phrase structure tree. The final inflection phase uses the information in the feature structures of the leaves (i.e., the words) to apply appropriate morphological functions. An initial preprocessing phase is needed to accommodate the handling of auxiliaries, which are not determined in microplanning. They are derived from the tense, aspect and sentence mood information as supplied by microplanning.

Figure 5: Steps of the syntactic generator (expand auxiliaries; tree retrieval; tree selection; combination by adjoining and substitution; inflection)

The two core phases are the tree selection and the combination phase. The tree selection is driven by the HPSG instance or word class that is supplied by the microplanner. It is mapped to a lexical type by a lexicon that is automatically compiled from the HPSG grammar. The lexical types are then mapped to a tree family, i.e., a set of elementary TAG trees representing all possible minimally complete phrase structures that can be built from the instance. The additional information in the dependency tree is then used to add further feature values to the trees. This additional information acts as a filter for selecting appropriate trees in two stages: Some values are incompatible with values already present in the trees. These trees can therefore be filtered immediately from the set. E.g., a syntactic structure for an imperative clause is marked as such by a feature and can be discarded if a declarative sentence is to be generated. Additional features can prevent the combination with other trees during the combination phase. This is the case, e.g., with agreement features.

The combination phase completely belongs to the core machinery. It can be exchanged with more efficient algorithms without change of the grammar or lexicon.
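The filtering stage of tree selection can be pictured with a toy example; the entries below anticipate the example discussed next and are otherwise invented, standing in for the tree families compiled from the HPSG grammar.

    TOY_LEXICON = {   # lexical type -> candidate elementary trees
        "MV_NP_TRANS_LE": [
            {"name": "MV_NP_TRANS_LE.1", "cl-mode": "wh-question"},
            {"name": "MV_NP_TRANS_LE.2", "cl-mode": "imperative"},
        ],
    }

    def tree_retrieval(lexical_type):
        return list(TOY_LEXICON.get(lexical_type, []))

    def tree_selection(trees, intention):
        # discard trees whose clause-mode feature clashes with the
        # microplanner's INTENTION information
        return [t for t in trees if t["cl-mode"] == intention]

    kept = tree_selection(tree_retrieval("MV_NP_TRANS_LE"), "wh-question")
    print([t["name"] for t in kept])    # -> ['MV_NP_TRANS_LE.1']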
The combination phase explores the search space of all possible combinations of trees from the candidate sets for each lexical item (instance). Since there is sufficient information available from the microplanner result and from the trees, a well-guided best-first search strategy can be employed in the current system. As part of the tree selection phase, based on the rich annotation of the input structure, the tree sets are sorted locally such that preferred trees are tested first. Then a modified backtracking algorithm traverses the dependency tree in a bottom-up fashion. [3] At each node and for each subtree in the dependency tree, a candidate for the phrase structure of the subtree is constructed. Then all possible adjunction or substitution sites are computed, possibly sorted (e.g., allowing for preferences in word order), and the best candidate for a combined phrase structure is returned. Since the combination of two partial phrase structures by adjunction or substitution might fail due to incompatible feature structures, a backtracking algorithm must be used. A partial phrase structure for a subtree of the dependency tree is finally checked for completeness. These tests include the unifiability of all top and bottom feature structures and the satisfaction of all other constraints (e.g., obligatory adjunctions or open substitution nodes), since no further adjunctions or substitutions will occur in this subtree.

[3] The algorithm stores intermediate results with a memoization technique.

The necessity of a spoken dialog translation system to robustly produce output calls for some relaxations in these tests. E.g., 'obligatory' arguments may be missing in the utterance. This can be caused by ellipsis in sentences such as "Ok, we postpone." or by false segmentations in the analysis, such as segmenting "Wir sollten (we should) das Treffen verschieben (the meeting postpone)." into two segments "Wir sollten" and "das Treffen verschieben". In order to generate "postpone the meeting" for the second segment, the tests in the syntactic generator must accept a phrase with a missing subject if no other complete phrase can be generated.

Figure 6 shows a combination of the tree retrieval and the tree selection phases. In the tree retrieval phase for L5-WORK_ACCEPTABLE, first the HEAD information is used to determine the lexical types of the possible realizations SUIT_V1 and SUIT_V2, namely MV_NP_TRANS_LE and MV_EXPL_PREP_TRANS_LE respectively. [4] These types are then mapped to their respective sets of elementary trees, a total of 25 trees. In the tree selection phase, this number is reduced to six. For example, the tree MV_NP_TRANS_LE.2 in Figure 9 has a feature CL-MODE with the value IMPERATIVE. Now, the microplanner output for the root entity LGVI contains the information (INTENTION WH-QUESTION). The INTENTION information is unified with all appropriate CL-MODE features, which in this case fails. Therefore the tree MV_NP_TRANS_LE.2 is discarded in the tree selection phase.

[4] MV_NP_TRANS_LE is an abbreviation for "Main Verb, NP object, TRANSitive Lexical Entry" used in sentences like "Monday suits me."

The combination phase uses the best-first bottom-up algorithm described above to determine one suitable tree for every entity and also a target node in the tree that is selected for the governing entity.
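A minimal sketch of this bottom-up, best-first regime is given below; the toy dependency tree and the combinability test are invented, and memoization and real feature unification are omitted for brevity.

    # toy dependency node: (candidate trees sorted best-first, children)
    TOY_TREE = ([(0.9, "suit.v1"), (0.4, "suit.v2")],
                [([(1.0, "pron")], []),
                 ([(1.0, "pron")], [])])

    def combinable(parent, child):
        """Stand-in for feature compatibility at an adjunction or
        substitution site; fails only for one invented clash."""
        return (parent, child) != ("suit.v2", "pron")

    def best_phrase(node):
        candidates, children = node
        for _, label in candidates:          # locally best-first
            parts = []
            for child in children:
                sub = best_phrase(child)     # bottom-up recursion
                if sub is None or not combinable(label, sub[0]):
                    break                    # backtrack to next candidate
                parts.append(sub)
            else:
                return (label, parts)        # complete phrase for subtree
        return None                          # robustness relaxations go here

    print(best_phrase(TOY_TREE))
    # -> ('suit.v1', [('pron', []), ('pron', [])])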
For the above example, the selected trees and their combination nodes are shown in Figure 7. [5]

    ;; traverse for: L5-WORK_ACCEPTABLE
       returned MV_NP_TRANS_LE
       returned MV_EXPL_PREP_TRANS_LE
       total: 6 trees
    ;; traverse for: L13-PRON
       returned PERS_PRO_LE
       total: 1 tree
    ;; traverse for: L10-PRON
       returned PERS_PRO_LE
       total: 1 tree
    ;; traverse for: L6-TEMP_LOC
       returned WH_ADVERB_WORD_LE
       total: 2 trees
    ;; traverse for: L15-TEMP_LOC
       returned NP_ADV_WORD_LE
       total: 5 trees
    ;; traverse for: LGVI
       returned WILL_AUX_POS_LE
       total: 2 trees

Figure 6: An excerpt from the tree retrieval and selection phase.

Figure 7: The trees finally selected for the entities of the example sentence (L6-TEMP_LOC "when", LGVI "will", L13-PRON "it", L5-SUIT "suit", L10-PRON "you", L15-TEMP_LOC "then").

[5] Note that the node labels shown in Figures 7 and 8 are only a concession to readability. The TAG requirement that in an auxiliary tree the foot node must have the same category label as the root node is fulfilled.

Figure 8 shows the final phrase structure for the example. The inflection function selects the base form of "suit" according to the BSE value of the VFORM feature and correctly uses "will." Information about the sentence mode WH-QUESTION can be used to annotate the resulting string for the speech-synthesis module.

Figure 8: The final phrase structure for "When will it suit you then?" (roughly [S [ADV when] [S/ADV [V will] [NP it] [VP [VP [V suit] [NP you]] [ADV then]]]])

Figure 9: Some of the trees for transitive verbs (MV_NP_TRANS_LE.1 through MV_NP_TRANS_LE.4). They are compiled from the corresponding lexical type MV_NP_TRANS_LE as defined in the HPSG grammar. Trees 3 and 4 differ only with respect to their feature structures, which are not shown in this figure.

4 Results

Our approach to separate a generation module into a language-independent kernel and language-specific knowledge sources has been successfully implemented in a dialogue translation system. Furthermore, the mentioned adaptability to other generation tasks has also been proved by an adaptation of the generation module to a new application domain and also to a completely different semantic representation language by adapting the microplanning knowledge sources to the new formalism.

VM-GECO is fully implemented (in Common Lisp) and integrated into the speech-to-speech translation system Verbmobil for two output languages, English and German. The adaptation to Japanese generation will be performed in the current project phase. Our experience from adding German makes us confident that this can be done straightforwardly by creating the appropriate knowledge sources without modifications of the kernel generator. To give the reader a more detailed impression of the implementation of the generation component, we present some characteristic data of the English generator. The numbers for the German system, especially for lexicon and processing time, are similar.

The underlying English grammar is a lexicalized TAG which consists of 2844 trees. These trees were transformed during an offline pre-processing step from 2961 HPSG lexical entries of the linguistically well motivated English HPSG grammar written at CSLI.
On the other hand, the microplanner's knowledge sources consist of 2730 partially pre-processed microplanning rules which are utilized in an integrated handling of structural and lexical decisions based on constraint propagation. The microplanning rules are of course especially adapted to the underlying semantic representation formalism. Furthermore, the underlying lexicon covers the word list that has been constructed from a large corpus of the application domain of the Verbmobil system, i.e., negotiation dialogues in spontaneous speech.

The TAG grammar resulting from the compilation step allows for highly efficient, lexically driven, robust syntactic generation mainly consisting of tree adjoinings, substitutions, and feature unifications. The average overall generation time per sentence (up to length 24) is 0.7 seconds on a SUN ULTRA-1 machine; 68% of the runtime is needed for the microplanning while the remaining 32% of the runtime is needed for syntactic generation.

4.1 Reusing the Kernel

Besides the usability for multiple languages in Verbmobil, our kernel generation component has also proven its adaptability to a very different semantic representation language (systematically and terminologically) in another, still ongoing, multilingual (currently 12 languages) translation project. The project utilizes an interlingua-based approach to semantic representations of utterances. The goal of this project is to overcome the international language barrier, exemplarily addressed by improvement of the transparency of a large corpus consisting of international law texts. Our part in this project is the realization and implementation of the German generation component. Because of our language-independent core generator, the adaptation of the generation component to this semantic representation decreased to the adaptation of the structural and lexical knowledge bases of the microplanning component and appropriate domain-specific extensions of the lexicon of the syntactic generator. With an average sentence length of 15 words, the average runtime per sentence on a SUN ULTRA-2 is less than 0.5 seconds. Currently, even the longest sentence (40 words) needs under 2 seconds runtime.

Within Verbmobil, the generation component will also be used for text generation when producing protocols as described in (Alexandersson and Poller, 1998).

References

J. Alexandersson and P. Poller. 1998. Towards multilingual protocol generation for spontaneous speech dialogues. In 9th INLGW, Niagara-on-the-Lake, Canada.
T. Becker. 1998. Fully lexicalized head-driven syntactic generation. In 9th INLGW, Niagara-on-the-Lake, Canada.
Th. Bub, W. Wahlster, and A. Waibel. 1997. Verbmobil: The combination of deep and shallow processing for spontaneous speech translation. In Proceedings of ICASSP '97.
E. Hovy. 1996. An overview of automated natural language generation. In X. Huang, editor, Proc. of the Intl. Symposium on NL Generation and the Processing of the Chinese Language, INP(C)-96, Shanghai, China.
R. Kasper, B. Kiefer, K. Netter, and K. Vijay-Shanker. 1995. Compilation of HPSG to TAG. In 33rd ACL, Cambridge, Mass.
A. Kilger and W. Finkler. 1995. Incremental generation for real-time applications. Research Report RR-95-11, DFKI GmbH, Saarbrücken, Germany, July.
V. Kumar. 1992. Algorithms for constraint-satisfaction problems: A survey. AI Magazine, 13(1):32-44.
W.J.M. Levelt. 1989. Speaking: From Intention to Articulation. The MIT Press, Cambridge, MA.
M.W. Meteer. 1990. The "Generation Gap" - The Problem of Expressibility in Text Planning. Ph.D. thesis, Amherst, MA. BBN Report No. 7347.
C. Pollard and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. Studies in Contemporary Linguistics. University of Chicago Press, Chicago.
Y. Schabes, A. Abeillé, and A. K. Joshi. 1988. Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammars. In COLING-88, pages 578-583, Budapest, Hungary.
K. Vijay-Shanker and A. K. Joshi. 1988. Feature structure based tree adjoining grammars. In COLING-88, pages 714-719, Budapest, Hungary.
W. Wahlster. 1993. Verbmobil: Translation of face-to-face dialogues. In MT Summit IV, Kobe, Japan.
Consonant Spreading in Arabic Stems

Kenneth R. BEESLEY
Xerox Research Centre Europe
Grenoble Laboratory
6, chemin de Maupertuis
38240 MEYLAN, France
[email protected]

Abstract

This paper examines the phenomenon of consonant spreading in Arabic stems. Each spreading involves a local surface copying of an underlying consonant, and, in certain phonological contexts, spreading alternates productively with consonant lengthening (or gemination). The morphophonemic triggers of spreading lie in the patterns or even in the roots themselves, and the combination of a spreading root and a spreading pattern causes a consonant to be copied multiple times. The interdigitation of Arabic stems and the realization of consonant spreading are formalized using finite-state morphotactics and variation rules, and this approach has been successfully implemented in a large-scale Arabic morphological analyzer which is available for testing on the Internet.

1 Introduction

Most formal analyses of Semitic languages, including Arabic, defend the reality of abstract, unpronounceable morphemes called ROOTS, consisting usually of three, but sometimes two or four, consonants called RADICALS. The classic examples include ktb (ك ت ب), [1] appearing in a number of words having to do with writing, books, schools, etc.; and drs (د ر س), appearing in words having to do with studying, learning, teaching, etc. Roots combine non-concatenatively with PATTERNS to form STEMS, a process known informally as INTERDIGITATION or INTERCALATION. We shall look first at Arabic stems in general before examining GEMINATION and SPREADING, related phenomena wherein a single underlying radical is realized multiple times in a surface string. Semitic morphology, including stem interdigitation and spreading, is adequately and elegantly formalizable using finite-state rules and operations.

[1] The Arabic-script examples in this paper were produced using the ArabTeX package for TeX and LaTeX by Prof. Dr. Klaus Lagally of the University of Stuttgart.

    daras        'study'          verb
    duris        'be studied'     verb
    darras       'teach'          verb
    duruus       'lessons'        noun
    diraasa(t)   'study'          noun
    darraas      'eager student'  noun
    madrasa(t)   'school'         noun
    madaaris     'schools'        noun
    madrasiyy    'scholastic'     adj-like
    tadriis      'instruction'    noun

Figure 1: Some stems built on root drs

1.1 Arabic Stems

The stems in Figure 1 [2] share the drs root morpheme, and indeed they are traditionally organized under a drs heading in printed lexicons like the authoritative Dictionary of Modern Written Arabic of Hans Wehr (1979). A root morpheme like drs interdigitates with a pattern morpheme, or, in some analyses, with a pattern and a separate vocalization morpheme, to form abstract stems. Because interdigitation involves pattern elements being inserted between the radicals of the root morpheme, Semitic stem formation is a classic example of non-concatenative morphotactics. Separating and identifying the component morphemes of words is of course the core task of morphological analysis for any language, and analyzing Semitic stems is a classic challenge for any morphological analyzer.

[2] The taa' marbuuta, notated here as (t), is the feminine ending pronounced only in certain environments. Long consonants and long vowels are indicated here with gemination.

1.2 Interdigitation as Intersection

Finite-state morphology is based on the claim that both morphotactics and phonological/orthographical variation rules, i.e.
the relation of underlying forms to surface forms, can be formalized using finite-state automata (Kaplan and Kay, 1981; Karttunen, 1991; Kaplan and Kay, 1994). Although the most accessible computer implementations (Koskenniemi, 1983; Antworth, 1990; Karttunen, 1993) of finite-state morphotactics have been limited to building words via the concatenation of morphemes, the theory itself does not have this limitation. In Semitic morphotactics, root and pattern morphemes (and, according to one's theory, perhaps separate vocalization morphemes) are naturally formalized as regular languages, and stems are formed by the intersection, rather than the concatenation, of these regular languages. Such analyses have been laid out elsewhere (Kataja and Koskenniemi, 1988; Beesley, 1998a; Beesley, 1998b) and cannot be repeated here. For present purposes, it will suffice to view morphophonemic (underlying) stems as being formed from the intersection of a root and a pattern, where patterns contain vowels and C slots into which root radicals are, intuitively speaking, "plugged", as in the following Form I perfect active and passive verb examples.

    Root:     d r s    k t b    q t l
    Pattern:  CaCaC    CaCaC    CaCaC
    Stem:     daras    katab    qatal

    Root:     d r s    k t b    q t l
    Pattern:  CuCiC    CuCiC    CuCiC
    Stem:     duris    kutib    qutil

Prefixes and suffixes concatenate onto the stems in the usual way to form complete, but still morphophonemic, words; and finite-state variation rules are then applied to map the morphophonemic strings into strings of surface phonemes or orthographical characters. For an overview of this approach, see Karttunen, Kaplan and Zaenen (1992).

Following Harris (1941) and Hudson (1986), and unlike McCarthy (1981), we also allow the patterns to contain non-radical consonants as in the following perfect active Form VII, Form VIII and Form X examples.

              Form VII   Form VIII   Form X
    Root:     k t b      k t b       k t b
    Pattern:  nCaCaC     CtaCaC      staCCaC
    Stem:     nkatab     ktatab      staktab

In this formalization, noun patterns work exactly like verb patterns, as in the following examples:

    Root:     k t b     k t b    k t b
    Pattern:  CiCaaC    CuCuC    maCCuuC
    Stem:     kitaab    kutub    maktuub
    Gloss:    "book"    "books"  "letter"

Where such straightforward intersection of roots and patterns into stems would appear to break down is in cases of gemination and spreading, where a single root radical appears multiple times in a surface stem.

2 Arabic Consonant Gemination and Spreading

2.1 Gemination in Forms II and V

Some verb and noun stems exhibit a double realization (a copying) of an underlying radical, resulting in gemination [3] or spreading at the surface level. Looking at gemination first, it is best known from verb stems known in the European tradition as Forms II and V, where the middle radical is doubled. Kay's (1987) pattern notation uses a G symbol before the C slot that needs to be doubled. [4]

[3] Gemination in Arabic words can alternatively be analyzed as consonant lengthening, as in Harris (1941) and as implied by Holes (1995). This solution is very attractive if the goal is to generate fully-voweled orthographical surface strings of Arabic, but for the phonological examples in this paper we adopt the gemination representation as used by phonologists like McCarthy (1981).
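The intersection of section 1.2 is a finite-state operation performed at compile time, but its effect on these examples can be mimicked procedurally; the following toy function is an illustration only, not the actual implementation.

    def interdigitate(root, pattern):
        """Toy stem formation: fill each C slot with the next radical."""
        radicals = list(root)                 # e.g. "ktb" -> ['k','t','b']
        return "".join(radicals.pop(0) if symbol == "C" else symbol
                       for symbol in pattern)

    assert interdigitate("drs", "CaCaC") == "daras"
    assert interdigitate("ktb", "CuCiC") == "kutib"
    assert interdigitate("ktb", "staCCaC") == "staktab"
    assert interdigitate("ktb", "maCCuuC") == "maktuub"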
[4] Kay's stem-building mechanism, using a multi-tape transducer implemented in Prolog, sees G on the pattern tape and writes a copy of the middle radical on the stem tape without consuming it. Then the following C does the same but consumes the radical symbol in the usual way. Kay's analysis in fact abstracts out the vocalization, placing it on a separate transducer tape, but this difference is not important here. For extensions of this multi-tape approach see Kiraz (1994; 1996). The current approach differs from the multi-tape approaches in formalizing roots, patterns and vocalizations as regular languages and by computing ("linearizing") the stems at compile time via intersection of these regular languages (Beesley, 1998a; Beesley, 1998b).

    Root:     k t b     d r s
    Pattern:  CaGCaC    CaGCaC
    Stem:     kattab    darras

In the same spirit, but with a different mechanism, our Form II and Form V patterns contain an X symbol that appears after the consonant slot to be copied.

    Root:     k t b     d r s
    Pattern:  CaCXaC    CaCXaC
    Stem:     katXab    darXas

As in all cases, the stem is formed by straightforward intersection, resulting in abstract stems like darXas. The X symbol is subsequently realized via finite-state variation rules as a copy of the preceding consonant in a phonological grammar (/darras/) or, in an orthographical system such as ours, as an optionally written shadda diacritic. Finite-state rules to effect such limited local copying are trivially written. [5]

[5] See, for example, the rules of Antworth (1990) for handling the limited reduplication seen in Tagalog.

2.2 Gemination/Spreading in Form IX

Spreading, which appears to involve consonant copying over intervening phonemes, is not so different from gemination; and indeed it is common in "spreading" verb stems for the spreading to alternate productively with gemination. The best known example of Arabic consonant spreading is the verbal stem known as Form IX (the same behavior is also seen in Form XI, Form XIV, Form QIV and in several noun forms). A typical example is the root dhm (د ه م), which in Form IX has the meaning "become black". Spreading is not terribly common in Modern Standard Arabic, but it occurs in enough verb and noun forms to deserve, in our opinion, full treatment. In our lexicon of about 4930 roots, 20 have Form IX possibilities (see Figure 2). Most of them (but not all) share the general meaning of being or becoming a certain color.

    byd    'become white'
    hmr    'turn red', 'blush'
    hwl    'be cross-eyed', 'squint'
    dhm    'become black'
    rbd    'become ashen', 'glower'
    rfd.   'drip'
    zrq    'be blue in color'
    zwr    'alienate'
    smr    'become brown'
    swd    'become black'
    šqr    'be of fair complexion'
    šmt.   'turn gray'
    s.fr   'turn yellow/pale'
    s.hb   'become reddish'
    ġbr    'be dust-colored'
    qtm    'be dark-colored'
    kmd    'become smutty/dark'

Figure 2: Roots that combine with Form IX patterns

McCarthy (1981) and others (Kay, 1987; Kiraz, 1994; Bird and Blackburn, 1991) postulate an underlying Form IX stem for dhm that looks like dhamam, with a spreading of the final m radical; other writers like Beeston (1968) list the stem as dhamm, with a geminated or lengthened final radical. In fact, both forms do occur in full surface words, as shown in Figure 3, and the difference is productively and straightforwardly phonological. For perfect endings like +a ('he') and +at ('she'), the final consonant is geminated (or "lengthened", depending on your formal point of view). If, however, the suffix begins with a consonant, as in +tu ('I') or +ta ('you, masc.
sg.'), then the separated or true spreading occurs.

    dhamm+a    'he turned black'
    dhamam+tu  'I turned black'

Figure 3: Form IX Gemination vs. Spreading

From a phonological view, and reflecting the notation of Beeston, it is tempting to formalize the underlying Form IX perfect active pattern as CCaCX so that it intersects with root dhm to form dhamX. When followed by a suffix beginning with a vowel such as +a or +at, phonologically oriented variation rules would realize the X as a copy of the preceding consonant (/dhamm/). Arabic abhors consonant clusters, and it resorts to various "cluster busting" techniques to eliminate them. The final phonological realization would include an epenthetical /?i/ on the front, to break up the dh cluster, and would treat the copied m as the onset of a syllable that includes the suffix: /?id-ham-ma/. When followed by a suffix beginning with a consonant, as in dhamX+tu, the three-consonant cluster would need to be broken up by another epenthetic vowel as in /?id-ha-mam-tu/. However, for reasons to become clearer below when we look at biliteral roots, we defined an underlying Form IX perfect active pattern CCaCaX leading to abstract stems like dhamaX.

2.3 Other Cases of Final Radical Gemination/Spreading

Other verb forms where the final radical is copied include the rare Forms XI and XIV. Root lhj (ل ه ج) intersects with the Form XI perfect active pattern CCaaCaX to form the abstract stem lhaajaX ("curdle"/"coagulate"), leading to surface forms like /?il-haaj-ja/ and /?il-haa-jaj-tu/ that vary exactly as in Form IX. The same holds for root s.hb (ص ه ب), which takes both Form IX (s.habaX) and Form XI (s.haabaX), both meaning "become reddish". In our lexicon, one root q's takes Form XIV, with patterns like the perfect active CCanCaX and imperfect active CCanCiX ("be pigeon-breasted"). Other similar Form XIV examples probably exist but are not reflected in the current dictionary.

Aside from the verbal nouns and participles of Forms IX, XI and XIV, other noun-like patterns also involve the spreading of the final radical. These include CiCCiiX and CaCaaCiiX, taken by roots nhr, meaning "skilled/experienced", and r'd, meaning "coward/cowardly". The CaCaaCiiX pattern also serves as the broken (i.e. irregular) plural for CuCCuuX stems for the roots z'r, meaning "ill-tempered", shr, meaning "thrush/blackbird", lġd, meaning "chin", and thr and t.xr, both meaning "cloud". When an X appears after a long vowel as in t.uxruuX, it is always realized as a full copy of the previous consonant as in /tuxruur/, no matter what follows.

2.4 Middle Radical Gemination/Spreading

Just as Forms II and V involve gemination of the middle radical, other forms including Form XII involve the separated spreading of the middle radical. A preceding diphthong, like a preceding long vowel, causes X to be realized as a full copy of the preceding consonant, as shown in the following examples.

    Root:     hdb
    Pattern:  CCawXaC
    Stem:     hdawXab
    Surface:  hdawdab
    Form:     Form XII perfect active
    Gloss:    "be vaulted", "be embossed"

    Root:     xšn
    Pattern:  CCawXiC
    Stem:     xšawXin
    Surface:  xšawšin
    Form:     Form XII imperfect active
    Gloss:    "be rough"

    Root:     xdb
    Pattern:  muCCawXiC
    Stem:     muxdawXib
    Surface:  muxdawdib
    Form:     Form XII active participle
    Gloss:    "become green"

    Root:     xdr
    Pattern:  CCiiXaaC
    Stem:     xdiiXaar
    Surface:  xdiidaar
    Form:     Form XII verbal noun
    Gloss:    "become green"

A number of nouns have broken plurals that also involve spreading of the middle radical, contrasting with gemination in the singular.

    xff   "bat"      xufXaaf    singular   gemination
    xff   "bats"     xafaaXiif  plural     spreading
    dbr   "hornet"   dabXuur    singular   gemination
    dbr   "hornets"  dabaaXiir  plural     spreading

A few other patterns show the same behavior. While not especially common, there are more roots that take middle-radical-spreading noun patterns than take the better-known Form IX verb patterns.

3 Biliteral Roots

As pointed out in McCarthy (1981, p. 396-7), the gemination vs. spreading behavior of Form IX stems is closely paralleled by Form I stems involving traditionally analyzed "biliteral" or "geminating" roots such as tm (also characterized as tmm) and sm (possibly smm) and many others of the same ilk. As shown in Figure 4, these roots show Form I gemination with suffixes beginning with a vowel vs. full spreading when the suffix begins with a consonant.

    tamm+a    tamam+tu

Figure 4: Biliteral Form I Stems

However Form IX is handled, these parallels strongly suggest that the exact same underlying forms and variation rules should also handle the Form I of biliteral roots. However, the Form I perfect active pattern, in the current notation, is simply CaCaC (or, idiosyncratically for some roots, CaCuC or CaCiC). As shown in Figure 5, there is no evidence, for normal triliteral roots like ktb, that any kind of copying is specified by the Form I pattern itself. Keeping CaCaC as the Form I perfect active pattern, the behavior of biliteral roots falls out effortlessly if they are formalized not as sm and tm, nor as smm and tmm, but as smX and tmX, with the copying-trigger X as the third radical of the root itself. Such roots intersect in the normal way with triliteral patterns as in Figure 6, and they are mapped to appropriate surface strings using the same rules that realize Form IX stems.

    Root:     k t b      k t b
    Pattern:  CaCaC      CaCaC
    Lexical:  katab+a    katab+tu
    Surface:  kataba     katabtu

Figure 5: Ordinary Form I behavior

    Root:     t m X      t m X
    Pattern:  CaCaC      CaCaC
    Lexical:  tamaX+a    tamaX+tu
    Surface:  tamma      tamamtu

Figure 6: Biliteral tm formalized as tmX

4 Rules

The TWOLC rule (Karttunen and Beesley, 1992) that maps an X, coming either from roots like tmX or from patterns like Form IX CCaCaX, into a copy of the previous consonant is the following, where Cons is a grammar-level variable ranging freely over consonants, LongVowel is a grammar-level variable ranging freely over long vowels and diphthongs, and C is an indexed local variable ranging over the enumerated set of consonants.

    X:C <=> :C \:Cons+ _ %+: Cons ;
            :C LongVowel _ ;
            :C X: ? _ ;
            where C in (b t θ j ħ x d ð r z s š
                        ṣ ḍ ṭ ẓ ʕ ġ f q k l m n h w y) ;
Kenneth R. Beesley. 1998a. Arabic morphologi- cal analysis on the Internet. In ICEMCO-98, Cambridge, April 17-18. Centre for Middle Eastern Studies. Proceedings of the 6th Inter- national Conference and Exhibition on Multi- lingual Computing. Paper number 3.1.1; no pagination. Kenneth R. Beesley. 1998b. Arabic stem mor- photactics via finite-state intersection. Paper presented at the 12th Symposium on Ara- bic Linguistics, Arabic Linguistic Society, 6-7 March, 1998, C, hampaign, IL. A. F. L. Beeston. 1968. Written Arabic: an approach to the basic structures. Cambridge University Press, Cambridge. Steven Bird and Patrick Blackburn. 1991. A logical approach to Arabic phonology. In EACL-91, pages 89-94. Timothy A. Buckwalter. 1990. Lexicographic notation of Arabic noun pattern morphemes and their inflectional features. In Proceed- ings of the Second Cambridge Conference on Bilingual Computing in Arabic and English, September 5-7. No pagination. Zelig Harris. 1941. Linguistic structure of He- brew. Journal of the American Oriental So- ciety, 62:143-167. Clives Holes. 1995. Modern Arabic: Structures, Functions and Varieties. Longman, London. Grover Hudson. 1986. Arabic root and pattern morphology without tiers. Journal of Lin- guistics, 22:85-122. Reply to McCarthy:1981. Ronald M. Kaplan and Martin Kay. 1981. Phonological rules and finite-state transduc- ers. In Linguistic Society of America Meeting Handbook, Fifty-Sixth Annual Meeting, New York, December 27-30. Abstract. Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-378. Lauri Karttunen and Kenneth R. Beesley. 1992. Two-level rule compiler. Technical Report ISTL-92-2, Xerox Palo Alto Research Center, Palo Alto, CA, October. Lauri Karttunen, Ronald M. Kaplan, and Annie Zaenen. 1992. Two-level morphology with composition. In COLING'92, pages 141-148, Nantes, France, August 23-28. Lauri Karttunen. 1991. Finite-state con- straints. In Proceedings of the Interna- tional Conference on Current Issues in Com- putational Linguistics, Penang, Malaysia, June 10-14. Universiti Sains Malaysia. Lauri Karttunen. 1993. Finite-state lexicon compiler. Technical Report ISTL-NLTT- 1993-04-02, Xerox Palo Alto Research Center, Palo Alto, CA, April. Laura Kataja and Kimmo Koskenniemi. 1988. Finite-state description of Semitic morphol- ogy: A case study of Ancient Akkadian. In COLING'88, pages 313-315. Martin Kay. 1987. Nonconcatenative finite- state morphology. In Proceedings of the Third Conference of the European Chapter of the Association for Computational Linguistics, pages 2-10. George Kiraz. 1994. Multi-tape two-level mor- phology: a case study in Semitic non-linear morphology. In COLING'94, volume 1, pages 180-186. George Anton Kiraz. 1996. Computing prosodic morphology. In COLING'96. Kimmo Koskenniemi. 1983. Two-level mor- phology: A general computational model for word-form recognition and production. Pub- lication 11, University of Helsinki, Depart- ment of General Linguistics, Helsinki. John J. McCarthy. 1981. A prosodic theory of nonconcatenative morphology. Linguistic In- quiry, 12(3):373-418. Hans Wehr. 1979. A Dictionary of Modern Written Arabic. Spoken Language Services, Inc., Ithaca, NY, 4 edition. Edited by J. Mil- ton Cowan. 123
P arsing Am biguous Structures using Con trolled Disjunctions and Unary Quasi-T rees Philipp e Blac he LPL CNRS  Av en ue Rob ert Sc h uman F- Aix-en-Pro v ence [email protected] Abstract The problem of parsing am biguous structures concerns (i) their represen tation and (ii) the sp eci cation of mec hanisms allo wing to dela y and con trol their ev aluation. W e rst prop ose to use a particular kind of disjunctions called c ontr ol le d disjunctions: these form ulae allo ws the representation and the implemen tation of sp eci c constrain ts that can o ccur b et w een am biguous v alues. But an ecien t con trol of am biguous structures also has to tak e in to accoun t lexical as w ell as syn tactic information concerning this ob ject. W e then prop ose the use of unary quasi-tr e es sp ecifying constrain ts at these di eren t lev els. The t w o devices allo w an ecien t implemen tation of the con trol of the am biguit y . Moreo v er, they are indep enden t from a particular formalism and can b e used whatev er the linguistic theory .  In tro duction Most of the approac hes dealing with am biguit y are disam biguating tec hniques. This preliminary constatation seems trivial and relies on a simple presup osition: the ambiguous structures need to b e disam biguated. Ho w ev er, this is not true from sev eral resp ects. Mac hine translation is a go o d example: the am biguit y of a sen tence in the source language needs v ery often to b e preserv ed and translated in to the target one (cf. (W edekind )). Another remark, in the same p ersp ectiv e: most of the disam biguating tec hniques rely on a single linguistic lev el. In other w ords, they generally mak e use of lexical or syntactic or seman tic information, exclusiv ely . But a natural pro cessing of natural language should not w ork in this w a y . All the linguistic lev els of NLP (i.e. phonetic, phonologic, lexical, syn tactic, seman tic and pragmatic) ha v e to b e tak en in to accoun t at the same time. In other w ords, pro cessing am biguit y w ould ha v e to b e parallel, not sequential. The problem is then to use am biguous structures during the parse without blo c king the analysis. In a rst appro ximation, suc h a problem comes to parse using undersp eci ed structures. W e will see that this constitutes a part of the solution. The third and last preliminary remark focuses on the con trol strategies for the ev aluation of am biguous structures. These strategies can rely on the formal prop erties of the am biguous structure (for example the simpli cation of a disjunctiv e form ula), on the con textual relations, etc. But the am biguous ob jects can themselv es b ear imp ortan t information sp ecifying some restrictions. W e will dev elop in this pap er sev eral examples illustrating this p oin t. The approac h describ ed here mak e an in tensiv e use of this kind of constrain ts, also called c ontr ol r elations. W e presen t in this pap er a tec hnique called c ontr ol le d disjunctions allo wing to represen t and implemen t an ecien t con trol of ambiguous structures at the lexical and phrasestructure lev el. W e illustrate this tec hnique using the HPSG framew ork, but it could b e used in all kind of feature-based represen tations. This approac h relies (i) on the represen tation of constrain ts relations b et w een the feature v alues and (ii) on the propagation of suc h relations. W e insist on the fact that this is not a disam biguating tec hnique, but a con trol of the ev aluation of am biguous structures. 
In order to increase the n umb er of constrain ts con trolling an am biguous structure, w e generalize the use of con trol remobile =           ca t     head adj  maj A dj mod N  _  noun  maj Noun Nf orm  v alence n h i _   spr Det  o     cont " index  Gen n  masc _ fem _  masc o  #           Figure : Con trol relation within a lexical en try lations at the phrase-structure lev el. W e prop ose for that a particular represen tation of hierarc hical relations for am biguous ob jects called unary quasi-tr e es. This pap er is threefold. In a rst section, w e presen t the limits of the classical represen tation of am biguit y and in particular the tec hnique of named disjunctions. The second section describ es the con trolled disjunction metho d applied to the lexical lev el. W e describ e in the third section the generalization of this tec hnique to the phrase-structure lev el using unary quasi-trees and w e sho w ho w this approac h is useful for an online con trol of the am biguit y during the parse.  Am biguit y and Disjunctions Sev eral tec hniques ha v e b een prop osed for the in terpretation and the con trol of disjunctiv e structures. F or example, dela ying the ev aluation of the disjunctiv e form ulae un til obtaining enough information allo ws partial disam biguation (cf. (Karttunen)). Another solution consists in conv erting the disjunctiv e form ulae in to a conjunctiv e form (using negation) as prop osed b y (Nak aza w a) or (Maxw ell ). W e can also mak e use of the prop erties of the form ula in order to eliminate inconsistencies. This approac h, describ ed in (Maxw ell ), relies on the con v ersion of the original disjunctiv e form ulae in to a set of c ontexte d c onstr aints whic h allo ws, b y the in tro duction of prop ositional v ariables (i) to con v ert the form ulae in to a conjunctiv e form, and (ii) to isolate a subset of form ulae, the disjunctive r esidue (the negation of the unsatis able constrain ts). The problem of the satis abilit y of the initial form ula is then reduced to that of the disjunctiv e residue. This approac h is fruitful and sev eral metho ds rely on this idea to refer form ulae with an index (a prop ositional v ariable, an in teger, etc.). It is the case in particular with name d disjunctions (see (D orre 0), (Krieger ) or (Gerdemann )) whic h prop ose a compact represen tation of con trol phenomena and cov ariancy . A named disjunction (noted hereafter ND) binds sev eral disjunctiv e form ulae with an index (the name of the disjunction). These form ulae ha v e the same arit y and their disjuncts are ordered. They are link ed b y a co v ariancy relation: when one disjunct in a ND is selected (i.e. in terpreted to true), then all the disjuncts o ccurring at the same p osition in to the other form ulae of the ND also ha v e to b e true. The example () presen ts the lexical en try of the german determiner den. The co v ariation is indicated b y three disjunctiv e form ulae comp osing the named disjunction indexed b y . den =     spec     case  ac c _  dat index " gen  masc _  num  sing _  plu #         () But the named disjunction tec hnique also has some limits. In particular, NDs ha v e to represen t all the relations b et w een form ulae in a co v arian t w a y . This leads to a lot of redundancy and a loss of the compactness in the sense that the disjuncts don't con tain an ymore the p ossible v alues but all the p ossible v ariancies according to the other form ulae. 
Some tec hniques has b een prop osed in order to eliminate this dra wbac k and in particular: the dep endency gr oup r epr esentation (see (Grith )) and the c ontr ol le d disjunctions (see (Blac he )). The former relies on an enric hmen t of the Maxw ell and Kaplan's con texted constrain ts. In this approac h, constrain ts are comp osed of the conjunction of base constrain ts (corresp onding to the initial disjunctiv e form) plus a con trol form ula represen ting the w a y in whic h v alues are c ho osen. The second approac h, describ ed in the next section, consists in a sp eci c represen tation of con trol relations relying on a clear distinction b et w een (i) the p ossible v alues (the disjuncts) and (ii) the relations b et w een these am biguous v alues and other elemen ts of the structure. This approac h allo ws a direct implemen tation of the implication relations (i.e. the orien ted con trols) instead of simple co v ariancies.  Con trolled Disjunctions The con trolled disjunctions (noted hereafter CD) implemen t the relations existing b et w een am biguous feature v alues. The example of the gure () describ es a non co v arian t relation b et w een gender and head features. More precisely , this relation is orien ted: if the ob ject is a noun, then the gender is masculine and if the ob ject is feminine, then it is an adjectiv e. The relation b et w een these v alues can b e represen ted as implications: noun ) masc and f em ) ad j . The main in terest of CDs is the represen tation of the v ariancy b et w een the p ossible v alues and the con trol of this v ariancy b y complex form ulae. Con trolled disjunctions reference the form ulae with names and all the form ula are ordered. So, w e can refer directly to one of the disjuncts (or to a set of link ed disjuncts) with the name of the disjunction and its rank. F or clarit y , w e represen t, as in the gure (), the consequen t of the implication with a pair indexing the an teceden t. This pair indicates the name of the disjunction and the rank of the disjunct. In this example, noun h;i implemen ts noun ) masc: the pair h; i references the elemen t of the disjunction n um b er  at the  st p osition. mobile =        ca t    head  n noun h  ;i , adj o v alence j spr h   [Det], [] i    index  gen  n masc, fem h  ;i o         () As sho wn in this example, CDs can represen t co v arian t disjunction (e.g. the disjunction n um b er ) or simple disjunctions (disjunction n um b er ).    u =  a _ i a _ i a _ i b _ i b _ i b v =  c _ i d _ i d _ i c _ i c _ i d w =  f _ i e _ i f _ i e _ i f _ i e    () The example ()  presen ts the case of an am biguit y that cannot b e totally con trolled b y a ND. This structure indicates a set of v ariancies. But the co v ariancy represen tation only implemen ts a part of the relations. In fact, sev eral \complex" implications (i.e. with a conjunction as an teceden t) con trol these form ulae as follo ws : fa ^ c ) f ; b ^ d ) e; c ^ e ) b; d ^ f ) ag These implications (the \con trolling form ulae") are constrain ts on the p ositions of the disjuncts in the CD. The form ula in the example () presen ts a solution using CDs and totally implemen ting all the relations. In this represen tation, (i = ) ^ (j = ) ) (k = ) implemen ts the implication a ^ c ) f . The set of constrain ts is indicated in to brac k ets. The feature structure, constrained b y this set, simply con tains the elemen tary v ariations.  
> < > : (i = ) ^ (j = ) ) (k = ) (i = ) ^ (j = ) ) (k = ) (j = ) ^ (k = ) ) (i = ) (j = ) ^ (k = ) ) (i = ) > = > ; !     a _ i b  c _ j d  e _ k f    () F rom an implemen tation p oin t of view, the con trolled disjunctions can easily b e implemen ted with languages using dela ying devices. An implemen tation using functions in Life has b een describ ed in (Blac he ).  This problem w as giv en b y John Grith. mobile =             phon synsem j ::: j head  noun _  x dtrs        < : adj dtr _   comp dtr _  subj dtr  = ;       phon synsem j ::: j head   ad j _  noun dtrs " head dtr   phon mobile synsem j ::: j head   #                       Figure : UQT in a HPSG form ferme =            phon synsem j ::: j head  noun _  x _  v er b dtrs        > > < > > : adj dtr _   comp dtr _  subj dtr  _  head dtr > > = > > ;      phon synsem j ::: j head   ad j _  noun _  v er b dtrs " head dtr  phon ferme synsem j ::: j head   #                       Figure : UQT of the lexical en try ferme  Generalization to the Phrase-Structure Lev el . Unary Quasi-T rees (Vija y-Shank er ) prop oses the use of trees description called quasi-tr e es whithin the framew ork of T A G. Suc h structures rely on the generalization of hierarc hical relations b et w een constituen ts. These trees b ear some particular no des, called quasi-no des, whic h are constituted b y a pair of categories of the same t yp e. These categories can refer or not to the same ob jet. If not, a subtree will b e inserted b et w een them in the nal structure. Suc h an approac h is particularly in teresting for the description of generalizations. The basic principle in T A G consists in preparing subtrees whic h are part of the nal syn tactic structure. These subtrees can b e of a lev el greater than one: in this case, the tree predicts the hierarc hical relations b et w een a category and its ancestors. Quasi-trees generalize this approac h using a meta-lev el represen tation allo wing the description of the general shap e of the nal syn tactic tree. The idea of the unary quasi-tr e es relies basically on the same generalization and w e prop ose to indicate at the lexical lev el some generalities ab out the syn tactic relations. A t the di erence with the quasi-trees, the only kind of information represen ted here concerns hierarc h y . No other information lik e sub categorization is presen t there. This explain the fact that w e use unary trees. Sev eral prop erties c haracterizes unary quasi-trees (noted hereafter UQTs):  An UQT is in terpreted from the leaf (the lexical lev el) to the ro ot (the prop ositional one).  A relation b et w een t w o no des and ( dominating ) indicates, in a simple PSG represen tation, that there exists a deriv ation of the form )  B suc h that  B .  Eac h no de has only one daugh ter.  An unary quasi-tree is a description of tree and eac h no de can b e substituted b y a subtree  .  But at the di erence with the quasi-trees, a no de is not represen ted b y a pair and no distinction is done b et w een quasi-r o ot and quasi-fo ot (see (Vija yShank er )).                     
phon synsem j ::: j head noun dtrs                  < : adj dtr _   comp dtr _  subj dtr  = ;      phon synsem j ::: j head   ad j _  noun dtrs " head dtr  phon b el le synsem j ::: j head   #       < :  comp dtr _  subj dtr  _  adj dtr = ;      phon synsem j ::: j head   noun _  ad j dtrs " head dtr  phon ferme synsem j ::: j head   #                                          Figure : UQT with an em b edded am biguit y  The no des can b e constituted b y a set of ob jects  . If more than one ob ject comp ose a no de, this set in in terpreted as a disjunction. Suc h no des are called ambiguous no des. A categorial am biguit y is then represen ted b y an unary quasitree in whic h eac h no de is a set of objects.  Eac h no de is a disjunctiv e form ula b elonging to a co v arian t disjunction.  An UQT is limited to three lev els: lexical, phrase-structure and prop ositional. mobile Adj N AP NP NP XP () The example () sho ws the UQT corresp onding to the w ord mobile with an am biguit y adjectiv e/noun. F or clarit y's sak e, the tree is presen ted upside-do wn, with the leaf at the top and the ro ot at the b ottom. This example indicates that:  an adjectiv e is a daugh ter of an AP whic h is to its turn a daugh ter of a NP ,  a noun is a daugh ter of a NP whic h is to its turn a daugh ter of an unsp eci ed phrase XP .  These ob jects, as for the quasi-trees, can b e constituted b y atomic sym b ols or feature structures, according to the linguistic formalism. As indicated b efore, eac h no de represen ts a disjunctiv e form ula and the set of no des constitutes a co v arian t disjunction. This information b eing systematic, it b ecomes implicit in the represen tation of the UQTs (i.e. no names are indicated). So, the p osition of a v alue in to a no de is relev an t and indicates the related v alues in to the tree. This kind of represen tation can b e systematized to the ma jor categories and w e can prop ose a set of elemen tary hierarc hies, as sho wn in the gure () used to construct the UQTs. Adj N V Prep AP NP VP SP NP XP S SN/SV () It is in teresting to note that the notion of UQT can ha v e a represen tation in to di eren t formalisms, ev en not based on a tree represen tation. The gure () sho ws for example an HPSG implemen tation of the UQT describ ed in the gure (). In this example, w e can see that the am biguit y is not systematically propagated to all the lev els: at the second lev el (substructure  ), b oth v alues b elong to a same feature (head-d a ughter). The co v ariation here concerns di eren t features at di eren t lev els. There is for example a co v ariation b et w een the head features of the second lev el and the t yp e of the daugh ter at the third lev el. Moreo v er, w e can see that the noun can b e projected in to a NP , but this NP can b e either a complemen t or a sub ject daugh ter. This ambiguit y is represen ted b y an em b edded v ariation (in this case a simple disjunction). The example describ ed in the gure () sho ws a frenc h lexical item that can b e categorized as an adjectiv e, a noun or a v erb (resp. translated as ferm, farm or to close). In comparison with the previous example, adding the v erb sub case simply consists in adding the corresp onding basic tree to the structure. In this case, the co v arian t part of the structure has three sub cases. This kind of represen tation can b e considered as a description in the sense that it w orks as a constrain t on the corresp onding syn tactic structure. . 
Using UQTs

The UQTs represent the ambiguities at the phrase-structure level. Such a representation has several interests. We focus in this section more particularly on factorization and on the representation of different kinds of constraints in order to control the parsing process.

The example of the figure above presents an ambiguity which "disappears" at the third level of the UQT. This (incomplete) NP contains two elements with a classical adj/noun ambiguity. In this case, both combinations are possible, but the root type is always nominal. This is an example of an ambiguous structure that doesn't need to be disambiguated (at least at the syntactic level): the parser can use this structure directly. [Footnote: We can also notice that covariation implements the relation between the categories in order to inhibit the noun/noun or adj/adj possibilities (cf. the corresponding controlled disjunction).]

As seen before, the controlled disjunctions can represent very precisely different kinds of relations within a structure. Applying this technique to the UQTs allows the representation of dynamic relations relying on the context. Such constraints use the selection relations existing between two categories. In case of ambiguity, they can be applied to an ambiguous group in order to eliminate inconsistencies and control the parsing process. In this case, the goal is not to disambiguate the structure, but (i) to delay the evaluation and maintain the ambiguity, and (ii) to reduce the set of solutions.

The figure below shows an example of the application of this technique. The selection constraints are applied between some values of the UQTs. These relations are represented by arcs between the nodes at the lexical level. They indicate the possibility of cooccurrence of two juxtaposed categories. The constraints represented by arrows indicate subcategorization. If such a constraint is applied to an ambiguous area, then it can be propagated using the selection constraints within this area. In this example, there is a selection relation between the root S of the UQT describing "possède" and the node value NP at the second level of the UQT describing "ferme". This information is propagated to the rest of the UQT and then to the previous element, using the relation existing between the values N of "ferme" and Adj of "belle". All these constraints are represented using controlled disjunctions: each controller value bears the references of the controlled one, as described earlier.

The interest of this kind of constraint is that it constitutes a local network which defines in some way a controlled ambiguous area. The parsing process itself can generate new selection constraints to be applied to an entire area (for example the selection of an NP by a verb). In this case, this constraint can be propagated through the network and eliminate inconsistent solutions (and eventually totally disambiguate the structure). This pre-parsing strategy relies on a kind of head-corner method. But the main goal here, as for the lexical level, is to provide constraints controlling the disambiguation of the structures, not a complete parsing strategy.

Conclusion

Controlled disjunctions allow a precise representation of the relations occurring between feature values.
Such relations can be defined statically, in the lexicon. They can also be introduced dynamically during the parse, using the unary quasi-tree representation, which allows the description of relations between categories together with their propagation. These relations can be seen as constraints used to control the parsing process in case of ambiguity. An efficient treatment of the ambiguity relies on the possibility of delaying the evaluation of ambiguous structures (i.e. delaying the expansion into a disjunctive normal form). But such a treatment is efficient if we can (i) extract as much information as possible from the context and (ii) continue the parse using ambiguous structures. The use of CDs and UQTs constitutes an efficient solution to this problem.

[Figure: Constraint networks on ambiguous areas, over the UQTs of the sentence "La serrure de la porte que la belle possède ferme mal", glossed "The lock of the door that the farm possesses closes badly"; the category networks (Pro, Det, N, V, Adj, Prep, ProR, Adv and their NP/AP/VP/S projections) are not recoverable from the source.]

References

Philippe Blache. 1997. "Disambiguating with Controlled Disjunctions." In Proceedings of the International Workshop on Parsing Technologies.

Jochen Dörre & Andreas Eisele. 1990. "Feature Logic with Disjunctive Unification." In Proceedings of COLING'90.

Dale Gerdemann. 1995. "Term Encoding of Typed Feature Structures." In Proceedings of the Fourth International Workshop on Parsing Technologies.

John Griffith. 1996. "Modularizing Contexted Constraints." In Proceedings of COLING'96.

Lauri Karttunen. 1984. "Features and Values." In Proceedings of COLING'84.

Robert Kasper & William Rounds. 1990. "The Logic of Unification in Grammar." Linguistics and Philosophy, 13.

Hans-Ulrich Krieger & John Nerbonne. 1993. "Feature-Based Inheritance Networks for Computational Lexicons." In T. Briscoe, V. de Paiva and A. Copestake, editors, Inheritance, Defaults and the Lexicon. Cambridge University Press, Cambridge.

John T. Maxwell III & Ronald M. Kaplan. 1991. "A Method for Disjunctive Constraint Satisfaction." In M. Tomita, editor, Current Issues in Parsing Technology. Kluwer Academic Publishers, Norwell, USA.

Tsuneko Nakazawa, Laura Neher & Erhard Hinrichs. 1988. "Unification with Disjunctive and Negative Values for GPSG Grammars." In Proceedings of ECAI'88.

Gertjan van Noord & Gosse Bouma. 1994. "Adjuncts and the Processing of Lexical Rules." In Proceedings of COLING'94.

K. Vijay-Shanker. 1992. "Using Descriptions of Trees in a Tree Adjoining Grammar." Computational Linguistics, 18(4).

Jürgen Wedekind & Ronald Kaplan. 1996. "Ambiguity-Preserving Generation with LFG- and PATR-style Grammars." Computational Linguistics, 22(4).
The production of code-mixed discourse

David SANKOFF
Centre de recherches mathématiques, Université de Montréal
CP 6128 Succursale Centre-Ville, Montréal, Québec H3C 3J7
[email protected]

Abstract

We propose a comprehensive theory of code-mixed discourse, encompassing equivalence-point and insertional code-switching, palindromic constructions and lexical borrowing. The starting point is a production model of code-switching accounting for empirical observations about switch-point distribution (the equivalence constraint), well-formedness of monolingual fragments, conservation of constituent structure and lack of constraint between successive switch points, without invoking any "code-switching grammar". Code-switched sentence production makes alternate reference to two virtual monolingual sentences, one in each language, and is based on conservative conditions on language labeling of constituents, together with a constraint against real-time "look-ahead" from one code-switch to the next. Selective weakening of model conditions can produce (i) the type of palindromic (or portmanteau) construction occasionally occurring, e.g., in switches between prepositional and postpositional languages, (ii) the switching by "insertion" of very specific kinds of constituent reported, e.g., for French noun phrases in switching with Arabic and, most important, (iii) lexical borrowing. Borrowing can create ambiguity as to language membership of sentence items, but the model predicts where this can be resolved, and the confirmation of these predictions, based on empirical studies of inflectional morphology, validates key aspects of the model.

Introduction

Communities of bilinguals tend to evolve a conversational mode where elements of both languages appear in the same interaction and even in the same sentence, despite the fact that all participants may be competent in either of the two languages. Whether this mode is used in preference to monolingual discourse depends on the type of interaction, the participants, the subject of conversation and many other factors. The grammatical nature of code-mixed discourse, however, tends to be very specific to the community and varies widely among bilingual communities, even among communities which share the same pair of languages. Empirical research has isolated four clearly distinct processes which may be responsible for mixing to different extents in different communities -- code-switching, nonce borrowing, specialized incorporation and interference. None of these processes requires the deformation, alteration or convergence of either of the two constituent languages at the syntactic, lexical, morphological, phonological, or semantic levels at the moment the mixing occurs. Except for code-switching, however, they may all lead in the long term to lexical expansion in one or both of the languages.

This paper is a contribution to a coherent formal account of code-mixing which integrates all of these processes, though only code-switching and borrowing will be considered here. This is based on a series of empirical studies which now allows us to distinguish between them structurally and quantitatively. Our starting point will be a recent formal characterization of equivalence-point code-switching (Sankoff, 1998).

[Footnote 1: The analysis of code-mixing is a controversial subject with respect to several aspects: Is all code-mixing -- borrowing, switching, interference -- really the same process? Do languages involved in code-mixing tend to converge syntactically? Are patterns of code-mixing predictable or explicable by theories of (monolingual) grammar? We assume a negative response to all these questions and refer to the literature for more detailed discussion.]
We will then extend this to two rarer code-switching mechanisms, and finally to lexical borrowing, the most frequent type of code-mixing.

1 Code-switching

1.1 "The facts"

The modern motivation for studying code-switching was initially to explain the observation that in bilingual communities, speakers tend to switch from one language to another intrasententially at certain syntactic boundaries and not at others (Gumperz & Hernandez, 1969). The first general explanation to account for this distribution was Poplack's (1978, 1980) argument that switching should be favored at the kinds of syntactic boundaries which occur in both languages, thus avoiding word order that might seem unnatural according to one or both grammars: the equivalence constraint (see also Lipski, 1977; Pfaff, 1979). Despite criticism of this approach (Rivas, 1981; Woolford, 1983; Di Sciullo et al., 1986; Pandit, 1990; Myers-Scotton, 1993; Belazi et al., 1994; Mahootian & Santorini, 1994), it has been successfully used to account for code-switching in Spanish-English (Poplack, 1978, 1980), Finnish-English (Poplack et al., 1987b), Arabic-French (Naït M'Barek & Sankoff, 1988), Tamil-English (Sankoff et al., 1990), Fongbe-French (Meechan & Poplack, 1995), Wolof-French (Poplack & Meechan, 1995), Igbo-English (Eze, 1997) and many other bilingual communities.

Other fundamental facts about code-switched sentences include the well-formedness of monolingual fragments within such sentences -- true whether a fragment constitutes a complete constituent or stretches across two or more (possibly incomplete) constituents, the conservation of constituent structure, and the unpredictability of switching -- even if we can determine where a code-switch can occur and where it cannot, there is no way of knowing in advance for any site whether a switch will occur there or not. In particular, if a switch occurs at some point in a sentence, this does not constrain any potential site(s) later in the sentence either to contain another switch or not to -- there are no forced switches.

1.2 A production approach

We do not assume that the mechanisms of switching from one language to another can be deduced entirely from the general principles of monolingual grammars. Thus we do not analyze the distribution of intrasentential switch points in terms of a grammar of any of the types ordinarily used for accounting for single languages, but by means of a left-to-right process that refers to two well-formed monolingual sentences (i.e. each satisfying the constraints of an "ordinary" monolingual grammar for one of the two languages) in producing monolingual sentence fragments and in evaluating potential switch points between these fragments.

Our model is based on the assumption that bilinguals are fully competent in their two languages, that there is no convergence of the two monolingual codes even during bilingual discourse, and that a code-switched sentence consists of fragments of two monolingual sentences (each one a translation of the other) pieced together.
This is done first through an otherwise unconstrained production model that simply copies part of one monolingual process followed by part of the other in such a way that constituent structure is conserved. The resulting process satisfies neither equivalence nor unpredictability. By adding two very simple rules for labeling some constituents according to their language, the existence of a consistent labeling -- which can be easily monitored during real-time linear production -- turns out to guarantee both equivalence and, for most situations, unpredictability.

[Footnote 2: While formal theories of grammar may well account for monolingual language in terms of general linguistic principles, there is no reason to believe that processes which juxtapose two languages can be explained in exactly the same way. The reasons implicit or explicit in attempts to do so have to do with explanatory economy, either of individual linguistic competence or of linguistic theories. This seems specious since both are based on a notion of a "hard-wired" human linguistic faculty evolving in prehistoric monolingualism. Experience in widely diverse speech communities suggests instead that code-mixing strategies, including code-switching, evolve in the life-time of particular communities, are only partly dependent on linguistic typology of the two languages and exhibit widely different patterns of adapting monolingual resources for incorporating linguistic innovation.]

1.3 Hierarchy and linearity

That monolingual fragments are not co-extensive with entire constituents is problematic for any model relying on hierarchical relations for deciding well-formedness, since such models are designed primarily to ensure well-formedness of entire constituents, monolingual or bilingual. They cannot ensure that adjacent same-language parts of neighboring constituents are compatible (i.e. yield a well-formed fragment when juxtaposed), since these parts may not even be in the same language as the rest of the constituents that contain them (Muysken, 1995). For example, an earlier model (Sankoff & Mainville, 1986), using the context-free grammars of two languages to account for code-switched sentences satisfying the equivalence constraint, could not ensure the well-formedness of monolingual fragments, for the very reason Muysken has pointed out.

This problem is at the core of the conflict between hierarchical and linear modes of explanation. We will resolve it by ascribing ultimate responsibility for the well-formedness of monolingual fragments to production-level processes. These fragments, arbitrary substrings of pre-constructed well-formed monolingual sentences, are pieced together during linear production in a way which corresponds to the other general observations about code-switching and which is essentially neutral with respect to theories of monolingual grammar.

1.4 The syntactic model

We are interested in seeing how two hierarchically structured languages resolve their word-order differences during intrasentential code-switching, without the confounding effects of other linguistic phenomena. Thus we will construct a model where the only differences between two languages have to do with word order and the (phonological) form of lexical items, and work out the logical consequences for code-switching of various assumptions and constraints on the production of bilingual sentences.
Linearity and hierarchy are the key structural aspects here, and the simplest class of recursive grammars accommodating these properties is that of context-free grammars.

[Footnote 3: In focusing on the relationship between word order and hierarchy, we are choosing a model not adapted to the treatment of tags and moveable elements such as many adverbials (whose "switchability" is uncontroversial), nor of remote relationships such as discontinuous constituents, internal co-reference, etc., whose effects on code-switching, if any, have never been systematically documented. On the other hand, context-sensitive phenomena such as subcategorization (Bentahila & Davies, 1983), cliticization or certain deletion processes also escape the scope of context-free modeling, as do other null elements, agreement rules and other features which may, in particular communities, be important to understanding switch sites (Muysken, 1995).]

Consider a context-free grammar consisting of a set C of "categories" or non-terminal symbols, including one distinguished symbol s (for "sentence"); a set of terminal symbols T ("lexical slots"), none of which are also in C; a lexicon L, which is basically a set of words and an indication of the kind of lexical slot each word can fill; and a set of rewrite rules R of the form c → v1 ... vn, where c is a symbol in C and the string v1 ... vn on the right-hand side consists of one or more symbols in T or C. A sentence is derived by writing s, then rewriting s by the string u1 ... um on the right-hand side of any rule in R of the form s → u1 ... um, then rewriting any ui which is non-terminal by some rule of the form ui → w1 ... wp, and so on. Whenever a lexical slot appears in the string, it can be filled with words of the appropriate category from L. When there are no more non-terminal symbols (and R must be such that this is always possible), the derivation stops, and the current string is just a sentence generated by the grammar.

We will make use of phrase structure tree representation for sentences and their constituents. Each symbol appearing in the derivation is represented by a node of the tree; it dominates the constituent of which it is the highest node, and may be used to represent that constituent. (Each terminal node is itself a constituent.) Note that the words in each constituent form a (contiguous) substring of the sentence. We define these, and all other contiguous substrings of a well-formed sentence string, to be well-formed fragments.

In order to speak of code-switching between two different grammars, it is necessary to have some connection between the categories of one and the categories of the other. We make the strong assumptions of lexical translatability and categorial congruence, meaning that there is a one-to-one correspondence between the lexicon LA of language A and the lexicon LB of language B, though the words are all recognizable as coming from one language or the other. We use the same categories C and lexical slots T for both languages.

[Footnote 4: In natural languages, such correspondences will usually be imperfect (Muysken, 1995), but this is peripheral to our interest in word order.]
Furthermore we assume grammatical congruence: there is a one-to-one connection between the rules RA of language A and RB of language B -- if RA contains a rule c → v1 ... vn, then RB must contain a rule c → u1 ... un, where each symbol in v1 ... vn has its counterpart in u1 ... un, and vice versa, though the order of the terms in one string will not in general correspond to the order of the terms in the other. Finally, for convenience, we will assume fixed word order; that is, if c → v1 ... vn in a given grammar, then there may be other rules rewriting c in that grammar, but none where the right-hand side contains exactly the same set of symbols v1, ..., vn.

To simplify our presentation in this article, we do not allow ambiguity in our grammars. Not only must each monolingual sentence be derivable in exactly one way, but each rule may contain any one symbol only once on its right-hand side. These conditions may be relaxed, as long as there is a way of identifying corresponding symbols in corresponding rules in the two grammars.

In comparing the structure of two sentences, we say that they have the same constituent structure if there is a one-to-one correspondence between their constituents such that if x in one sentence corresponds to y in the other, then the (unordered) set of subconstituents of x corresponds to the set of subconstituents of y. The consequences of our assumptions are summarized in:

Theorem 1. Every sentence in language A has a unique counterpart in language B with the same constituent structure and whose lexical items are translations of those in the sentence of language A.

[Footnote 5: Proofs of all theorems are given in Sankoff (1998).]

Examples (1a,b) are two (fictitious) sentences in English and French which we may imagine to be counterparts of each other, according to some grammatical analysis of the two languages, in the sense of Theorem 1. Despite differences in word order, the constituent structure is identical and the lexical items are word-for-word translations (without quibbling about the questionable lexical status of the reflexive clitic and the genitive particle, and the somewhat different internal structure of the determiner in the PP).

(1a) [Phrase-structure tree over S, NP, VP, PP, with lexical slots D, N, V, pro:] The brothers wash themselves with some of Grandma's remarkable soap

(1b) [The corresponding tree:] Les frères se lavent avec du savon remarquable de grand-maman

1.5 The production model

In the model, the production of a code-switched sentence presupposes the existence of two virtual sentences, one in language A and one in language B, counterparts of each other as in Theorem 1. For each of (2a,b,c,d) the pair of virtual sentences is the one illustrated in (1a,b).

[Footnote 6: Our examples of English/French mixing in this paper are fabricated, and their well-formedness (or not) asserted, solely to illustrate our arguments; they do not constitute empirical data.]

(2a) The brothers wash themselves | avec du savon remarquable de grand-maman

(2b) Les frères se lavent | with some of Grandma's remarkable soap

(2c) The brothers | se lavent avec | some of Grandma's remarkable soap

(2d) The | frères | wash themselves | avec | some of Grandma's remarkable soap

Given the two virtual sentences in languages A and B, the code-switched sentence is produced by taking part of one of them, followed by part of the other, and so on, without using any word (or its translation) more than once, until every lexical element (or its translation) has been used up.
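To make this production step concrete, here is a minimal sketch under stated simplifying assumptions: the two virtual sentences are given as word lists, and the alignment dictionary pairing each word with its translation is a hypothetical stand-in for the correspondence guaranteed by Theorem 1; the word lists themselves are illustrative, not the paper's data.

    # Minimal sketch (our illustration, not the paper's) of the unconstrained
    # production model: alternately copy unused words from either virtual
    # sentence, spending each alignment slot (a word plus its translation) once.

    import random

    virtual_A = ["the", "brothers", "wash", "themselves", "with", "soap"]
    virtual_B = ["les", "freres", "se", "lavent", "avec", "savon"]
    a_to_b = {0: 0, 1: 1, 2: 3, 3: 2, 4: 4, 5: 5}   # hypothetical word alignment
    b_to_a = {b: a for a, b in a_to_b.items()}

    def produce_unconstrained(seed=None):
        rng = random.Random(seed)
        spent = set()                                # alignment slots already used
        out = []
        while len(spent) < len(virtual_A):
            if rng.random() < 0.5:                   # copy a word from language A
                i = rng.choice([i for i in range(len(virtual_A)) if i not in spent])
                out.append(virtual_A[i]); spent.add(i)
            else:                                    # copy a word from language B
                j = rng.choice([j for j in range(len(virtual_B))
                                if b_to_a[j] not in spent])
                out.append(virtual_B[j]); spent.add(b_to_a[j])
        return " ".join(out)

    print(produce_unconstrained(0))   # an arbitrary jumble, like example (3)

Nothing here enforces contiguity or constituent structure, which is exactly why a scrambled output like example (3) below is still derivable; the constraints introduced next cut this space down.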
The idea of using virtual sentences is an extension of concepts implicit in Poplack's original discovery (1978, 1980) of the importance of equivalence sites to code-switching, and is what distinguishes it from attempts to account for code-switching using purely distributional data: examples of sentences thought to be either well-formed or not. It implies the comparison of the sentence actually produced with what "could have been said" in either of the monolingual modes. Though the comparative data are of course not directly accessible, since only one sentence is uttered, controlled inference about unrealized possibilities is consistent with rigorous methodology (cf. the notion of the linguistic variable (Labov, 1969; Sankoff, 1988)).

Postulating two complete virtual sentences, however, is an analytical convenience. All that would really be needed in a more realistic (and complicated) analysis are the parts of each sentence that are actually used, plus some additional details about the constituent within which a switch occurs. The consequences of this device, and its realism, lie largely in the way the monolingual fragments are produced "on the fly" by the monolingual grammars, however these grammars are conceived in theory. Indeed, we need not refer to any particular linguistic theory for this aspect.

The above process produces not only plausible code-switched sentences such as those in (2), but also any combination of elements in any order, as in (3).

(3) de grand-maman | the with | remarquable frères | soap themselves wash some of

To arrive at an empirically and conceptually satisfactory model, we must add constraints. The first constraint is motivated by the empirical observation that each monolingual fragment in bilingual discourse tends to be well-formed in its lexifier language. We will assume that the production of the sentence starts with a word in (either) one of the virtual sentences, and copies successive words from left to right in that sentence without skipping any until there is a code-switch to some word in the other virtual sentence. From this point in the other virtual sentence, production continues from left to right, and so on. When the left-to-right production arrives at a word (such as se after frères in example 4) which has already been used in the current or the other language (also some of after with), or at the end of one of the virtual sentences, there must be a switch to the other virtual sentence or, if all the words have been used (after remarkable), the production must stop.

(4) du savon | wash themselves with | les frères | Grandma's remarkable

Theorem 2. The monolingual fragments in a code-switched sentence produced by left-to-right copying are well-formed.

The left-to-right assumption ensures that monolingual fragments are well-formed, as illustrated in (4), as well as (2). By itself, however, it does not constrain how the alternating fragments in one language and the other are related; indeed it allows them to be juxtaposed in any order, as long as the fragment languages alternate; sister elements in the same constituent in a virtual sentence may find themselves remote from each other in the code-switched sentence, as with remarkable and savon in example (4).

As mentioned in Section 1.1, however, empirical research confirms that constituent structure, insofar as content and embedding or nesting relations are concerned, is conserved even if the constituent contains a code-switch. To conform to this observation, we make a second assumption, that once the production process enters or switches into a constituent, it must exhaust all the lexical slots in the constituent, in one language or both, before returning into a higher-level constituent or entering a sister constituent. In other words, each time it enters a deeper, or more nested, subconstituent, it cannot exit from it until that subconstituent is exhausted. Note that this assumption is independent of whether or not the production follows the left-to-right process described above, so that the sentence (5) satisfies nested first, but not left-to-right.

(5) (wash ((some of (Grandma's soap remarkable)) with) themselves) | (frères les)

Theorem 3. Lexicalizing constituents according to nested-first is a sufficient condition for
To conform to this observation, we make a second assumption, that once the production process enters or switches into a constituent, it must exhaust all the lexical slots in the con- stituent, in one language or both, before return- ing into a higher-level constituent or entering a sister constituent. In other words, each time it enters a deeper, or more nested, subconstituent, it cannot exit from it until that subconstituent is exhausted. Note that this assumption is in- dependent of whether or not the production fol- lows the left-to-right process described above, so that the sentence (5) satisfies nested first, but not left-to-right. (5) (wash((some of (Grandma's soap remark- able)) with) themselves) [ (fr~res les) Theorem 3 Lexicalizing constituents accord- ing to nested-first is a sufficient condition for 12 conserving the same constituent structure in the code-switched sentence as in the two virtual sen- tences. The nested first condition and the left-to-right assumption are independent, in the sense that neither implies the other, as is clear from (4) and (5). Neither excludes the other, and to- gether, as in (6) and (2), they produce code- switched sentences with well-formed monolin- gual fragments and the same constituent struc- ture as the virtual sentences. (6a) (se lavent I (with (some of (Grandma's re- markable soap)))) I (les fr~res) (6b) (fr~res I the) I ((avec I (some of (Grandma's I savon remarquable))) I wash themselves ) 1.6 The language of constituents and subconstituents. The model in Section 1.5, though it produces sentences like those in (2) and (6) with some de- sirable properties, is not complete. With only the two conditions, certain configurations may occur, such as those in (6), that are clearly unrealistic. For example, if the monolingual fragment being copied includes a word which must be positioned finally in a constituent, like fr~res in (6b), and if the words of the con- stituent in one virtual sentence or the other have not yet been used up, then an immediate code-switch, to the in this instance, is obliga- tory to satisfy nested first- the monolingual fragment cannot continue, even though it may have a natural continuation into another con- stituent. This, and other instances of forced code-switching, are clearly not phenomena ob- served in real bilingual discourse. Another type of construction not found in natural bilingual corpora but permitted in the simple production model might include a code- switched sentence which begins with a fragment of the virtual sentence in language A which would never occur in sentence-initial position in monolingual discourse in language A, such as se lavent in (6a). More generally, if there are sev- eral sister subconstituents in a constituent, they may be permuted in any order, as long as there is a switch between each adjacent pair. Still an- other anomalous output from this model, even if the two monolingual grammars contrast com- pletely, is a code-switch between every adjacent pair of words in the sentence. Thus the output of the production process as is hitherto formulated seems unduly constrained from the production point of view (by forcing switches) and not constrained enough from the perspective of the output structure. 
We thus come to the main point of this section: short of the equivalence constraint itself, can we motivate some constraint to account for the observation that switching occurs almost exclusively at equivalence points (a notion to be formalized in Section 1.6.1) and that virtually all equivalence points seem to be eligible switch points? And can this be done in such a way that, during production, the model speaker avoids any switching which will obligatorily require compensatory switches later on in the sentence construction (switch planning or forcing)? Furthermore, can we do this without referring to facts of particular languages, properties of particular grammatical categories, or even the mechanisms of particular theories of monolingual grammar?

Our approach to this problem is to postulate a limited degree of structural monitoring. Monitoring the monolingual fragments for well-formedness is uncontroversial, and we need not enter into the details. What we propose for monitoring at the switch points is the "language label" of the constituent in which the switch occurs and of its immediate subconstituents.

We first ask: what parts of the constituent structure of the code-switched sentence may with certainty be ascribed to one language or the other only, and hence should be labeled accordingly? Certainly (i) all terminal symbols -- lexical slots -- are labeled according to whether the word filling the slot comes from LA or LB, since all words are identifiable as to their language, by definition in our model of congruent grammars. (ii) At the constituent level, any constituent all of whose immediate subconstituents have the same label should itself have this label. Anything else would be inconsistent. We will propose a third criterion for labeling constituents, but we first prove the following:

Theorem 4. Any non-terminal node carrying a label must have at least one immediate descendant node which has this same label or is unlabeled, and one descendant (possibly a lexical slot) which has the same label.

Different instantiations of this model may actually specify that certain subconstituents "inherit" the label -- in one theory the determiner may inherit the label of the noun phrase, in another theory it may be the noun itself.
Thus in our hypothetical example with a sentence-initial English verb phrase, requirements (ii) and (iii) conflict, so the sentence is not well-formed. And in exam- ple (7) condition (iii) cannot be satisfied with respect to any of the categories lexicalized by fr&es, the, savon, remarquable, wash and them- selves, nor the PP node -- according to the ex- tremely constrained grammars responsible for examples (la,b). On the other hand, each of examples (2a,b,c,d) satisfy all of the conditions (i)-(iii). All that a speaker monitors is the language label of the constituent in which a potential switch occurs and that of its subconstituents. No additional labeling is warranted within this framework, though other treatments of code-switching all have their own particular way of assigning a label to each and every con- stituent (Rivas, 1981; Woolford, 1983; Joshi, 1985; Di Sciullo et al., 1986; Myers-Scotton, 1993). In particular, we would claim that there is no conceptual justification or need for postu- lating an underlying (or "matrix") language for the entire sentence itself when this is not moti- vated by criteria (ii) and (iii). 1.6.1 The equivalence constraint. In the string of words which constitute a code- switched sentence in our model, there is no problem in identifying where a fragment in lan- guage A stops and one in language B starts. On the constituent level, however, it is not as ob- vious where this switch should be located and sometimes even whether or not there is an inter- constituent switch, as in (8). (8) J S(E)._ .._......~) ---- ~- VP(E) D~E) ~ V(E) NI~E) [ AD(~) N(E~ [ D~E) N(E) / l, I:o,,o / I The original Duke [ de Lorraine [ had 10 000 men This is an example of string-level code-switching with and without corresponding constituent- level switches. The rule NP--+ ADJ+NP in En- glish and NP-~ NP+ADJ in French results in the lowest NP being labeled E, by requirement (iii). The higher NPs, the VP, the object NP and the S are labeled E, and the PP labeled F because all of their subconstituents are (re- quirement (ii)). The code-switch between Duke and de is also a constituent-level switch between Duke and the PP, but the switch between Lor- raine and had is not reflected by a switch be- tween the highest NP and its sister VP since they are both labeled E. In general, what constitutes a code-switch be- tween two adjacent sister constituents? The only reasonable answer is that one constituent is labeled A and the other B. What happens if two differently labeled sister constituents are sepa- rated by one or more unlabeled constituents? Once again it is clear that there has been a code- 14 switch at the constituent level, but the site can- not be pinned down, other than by saying that it occurred in the interval between the two la- beled constituents. Note that there also must be switches at some lower levels within each of the intervening unlabeled constituents; other- wise they could not be unlabeled, by criterion (ii). We can now state the equivalence constraint. Consider the corresponding rules for ordering the n subconstituents of a given constituent in language A and language B. If the sets of the first i symbols on the right-hand side of the rules are different in the two grammars (and hence the set of the last n - i symbols are also differ- ent, since the two rules are congruent), then the equivalence constraint prohibits a code-switch between the i-th and i + 1-st subconstituents. 
Otherwise the boundary between the two sub- constituents is an equivalence point and a code- switch is permitted. In the case n = 2, this reduces to the prohibition of a code-switch be- tween the two subconstituents unless the two languages order them in the same way. 1.7 Proof of the equivalence constraint. The production model in Section 1.5 and the labeling rules in Section 1.6 ensure that mono- lingual fragments are well-formed, constituent structures are correct and that no constituent labeled X appears within a higher constituent in a rank order position not permitted in lan- guage X. This does not mean that the equiv- alence constraint holds. Consider for example the rules c --~ xyz and c --+ zyx in languages A and B respectively. Then the model as it is now constituted would permit the constituent order xAyBz A, with two constituent-level code- switches, both of which violate the equivalence constraint. (cf Grandma's I remarquable I soap or savon I remarkable I de grand-maman.) In one important case, however, equivalence always holds. Monolingual grammars have binary constituent-subconstituent structure if there are at most two symbols on the right-hand side of every rule. Then the following holds: Theorem 5 Given a well-formed code-switched sentence where the monolingual grammars have binary constituent-subconstituent structure. If two sister subconstituents are labeled A and B, respectively, the code-switch between them satis- ties the equivalence constraint. As we have seen, however, the theorem may not hold if rules may have more than two terms on their right-hand sides. The counter-example shown above, for example, represents the inser- tion of a language B subconstituent into an oth- erwise language A constituent. This requires two code-switches, one before and one after the inserted subconstituent. If a code-mixing strat- egy were to be based on the insertion of con- stituents in this way, every code-switch before an insertion would require the speaker to plan for an appropriate second code-switch later on in the sentence. While many types of relatively complex for- ward planning must be incorporated into mono- lingual production models, the distribution of code-switches in bilingual corpora is more con- sistent with a hypothesis of the independence of successive code-switches: no forced switches (or no planning). Where a switch takes place in be- tween two constituents, well-formedness of the code-switched sentence cannot depend further switches later on in the left-to-right order 7. Theorem 6 Given a well-formed code-switched sentence. If two sister subeonstituents are la- beled A and B, respectively, and there is no la- beled subconstituent between them, then under no forced switches, there must be a code-switch satisfying the equivalence constraint in the in- terval between the labeled subconstituents. 2 Relaxing the constraints. 2.1 Repetition-translation. The model in Section 1 precludes forms such as *se lavent [ themselves where correspond- ing items from both virtual sentences (in this case se and themselves) appear. Such repeat- translations (also called portmanteau or palin- dromic constructions do occur, albeit rarely, in some corpora. Example (9) is drawn from the Finnish- English code-mixing corpus of Poplack et al. (1987b). We assume, following these authors' arguments, that this sentence consists of three fragments, with code-switches immediately be- fore and after the English preposition to. 
The rThere are some exceptions: these will be discussed in Section 2.2. 15 ellative case-marked kidneyst~i and illative aor- taan are formed of borrowings from English and behave as native items (e.g. there is no En- glish determiner preceding them as would be expected within English fragments; rather they manifest null determiners and case-marking characteristic of Finnish -- see Section 3). (9) Mutta se oli kidneyst~i I to [ aortaan. but it was kidney-el, aorta-il. 'But it was from the kidney to the aorta.' The interesting aspect of this example is that to and the ellative marker -an play identical roles and only one should have appeared in the ad- positional phrase containing aorta according to our model. The same, highly bilingual, speaker produced a similar example in (10). (10) Ja sitten, uh, miss~i h/in n- [ at [ yliopistossa and then where she n- university-in. otti, niin kuin, [ art history. took-3p., like, 'And then, uh, where did she- at university she took, like, art history.' Again, the inessive marker -ssa has the same function as the English preposition at and only one of them should have appeared, according to our model. This type of construction, involving the re- dundant use of functionally identical words, is rare -- examples (9) and (10) are the only two in the Finnish-English corpus -- and, as can be seen in (10), tends to evidence production- level difficulties (hesitations, autocorrection, etc.). Nishimura (1986) presents several simi- lar examples involving adpositional phrases for Japanese-English code-mixing. It is also possible to find occasional instances of redundant verb use in SVO/SOV mixing -- producing a SVOV structure. Examples (11) and (12) below are drawn from the Tamil- English code-mixing corpus of Sankoff et al. (1990). (11) They gave me a research grant [ kodutaa. gave-3p.-pl.-past 'They gave me a research grant.' (12) I was talking to I oru orutanooda peesindu iruntein. one person-com, talk-cont, be-lp.-sg.-past 'I was talking to a person.' Still other examples from the same corpus com- bine redundant verb plus complementizer for propositional complements: (13) I think it's the European influence I nu ninaikirein. that think-lp.-sg.-pres. 'I think that its the European influence.' Precisely how does this violate the conditions of our model, and can they be relaxed to ac- commodate it? Clearly the first postulate vio- lated (Section 1.5) is that the "...code-switched sentence is produced by taking part of one of [the virtual sentences], followed by part of the other, and so on, without using any word (or its translation) more than once, until every lex- ical element (or its translation) has been used up." What if we changed the latter part of this to "... and so on, until every lexical element (and/or its translation) has been used once?" This general extension allows for the use of any element as well as its translation in the same sentence. More limited extensions, where only specified lexical classes may occur in both lan- guages, would be more consistent with actual speech behaviour. With these kinds of changes, there is no dif- ficulty in retaining the left-to-right and nested first assumptions, as well as consituent label- ing criteria (i) and (ii). Condition (iii), formu- lated as it is in terms of the rank order of con- stituents, must be worded differently to capture the basic idea that a constituent is labeled ac- cording to language A whenever it is out place according to a rule of language B, and vice- versa. 
"Out of place" can no longer be detected by simply checking the rank order, but by ascer- taining whether any sister constituent preced- ing the candidate constituent is prohibited from preceding it by the appropriate rule of language B, and similarly for constituents following the candidate constituent. An analogous procedure can be used to verify the equivalence constraint. Typically, one sister constituent of the re- peated translated constituents will receive two conflicting labelings from criterion (iii) in this situation. The noun in the Finnish adpositional 16 phrase, the object in SVOV constructions, the proposition complement of both the English and Tamil verbs, all receive two labels this way. To weaken the model in order to accept such sen- tences, we must discard conflicting criterion (iii) labelings due to "out of place" configurations with respect to the repeated translated con- stituent. Criteria (i) and (ii) still operate. The equivalence constraint will not hold with respect to this sister constituent, but can be verified elsewhere. 2.2 Insertional code-switching. In some bilingual communities, the code-mixing mode of discourse may include the possibility of inserting one specific type of constituent into positions where it would not occur monolin- gually. In the Tamil-English corpus in Section 2.1, examples (14) and (15), as well as (13), il- lustrate the placement of an English proposition preceding the Tamil propositional complemen- tizer, instead of in its obligatory English posi- tion following that. (14) Even there, I am really lucky [ nu collanum. that say must 'Even there, one must say that I am really lucky.' (15) It corrodes your confidence I nu, enakku oru feeling s that I-dat. a 'I have a feeling that it corrodes your confidence.' It is important to note that in this commu- nity this pattern is confined to the very par- ticular category of propositional complements. All other code-switches satisfy the equivalence constraint; none of the numerous other word- order conflicts between Tamil and English give rise to an insertional code-switching possibility. A different type of constituent insertion has been characterized quantitatively by Na'it M'Barek & Sankoff (1988). This involves the insertion, by bilingual Moroccans, of a full French noun phrase, including determiners and quantifiers, in all contexts where an Arabic noun phrase would be appropriate. This in- cludes, among other contexts, post-verbal sub- jects as in (16), which are not possible in French. s feeling is treated as a loanword, for reasons discussed in Section 3. Determiner-initial French noun phrases also ap- pear after demonstratives as in (17) and (18), producing demonstrative-determiner sequences, and after the indefinite wa.hd as in (18), produc- ing determiner-determiner sequences, neither of which is a French pattern. (16) 7aw [les demandes arrived the applications 'The applications arrived.' (17) migi b.hal duk [ les avions 14gers This is not like these the airplanes light 'It's not like these light planes.' 17 (18) ~a s'adresse surtout [ l'wa.hd [une this is targeted mostly at one a certaine classe [ .hant walla [le luxe[ certain class because has become luxury bezzaf f' [ les h6tels. much in the hotels 'This is targeted mostly at a certain class because the hotels have become too luxurious.' The sentences we have shown containing ex- amples of constituent insertion are all excluded from the model based on the equivalence con- straint. 
For example, the English propositional complement preceding the Tamil complementizer nu would be labeled for Tamil by criterion (iii) because of its non-English position, but this would conflict with the English label it would receive from criterion (ii), since it is a normal English sentence containing only English lexical items. Similarly, the post-verbal subject consisting entirely of a normal French noun phrase receives conflicting labels from its position, corresponding to Arabic rules, and from its own constituent elements.

It must be stressed that not all bilingual communities that develop code-mixing modes of discourse make use of constituent insertion; those that do (e.g. Tamil-English or Arabic-French) use it very sparingly, in the sense that typically only one type of constituent (English propositions or French noun phrases) may be inserted in contexts (before nu, postverbally) where it would not be found in monolingual discourse.

It is not difficult in this case to relax the model conditions so that such sentences are permitted. As in Section 2.1, the specified category is simply allowed to escape labeling by criterion (iii), and can be labeled by its own subconstituents (criterion (ii)). There is no danger that this will result in anomalous labeling higher in the phrase structure, since a sister constituent (the Tamil complementizer, the Arabic verb) will already have the contrary label, and the constituent containing them is thus prevented from receiving a label by way of criterion (ii). Note, however, that only those insertions which are not in conflict with the fundamental production conditions left-to-right and most nested can be considered well-formed. In addition, the equivalence constraint still holds wherever the specified category is not one of the constituents directly involved.

3 The borrowing process

The equivalence constraint formalized in Section 1.6.1 has been verified as a general tendency in several communities -- Puerto Rican Spanish and English in New York (Poplack, 1980), Finnish and English (Poplack et al., 1987b), Tamil and English (Sankoff et al., 1990), Wolof and French, and Fongbe and French (Meechan & Poplack, 1995; Poplack & Meechan, 1995), Igbo and English (Eze, 1997), and others. However, there are actually relatively few data on which it has been independently tested, since most of the voluminous literature on code-switching, especially that on insertional switching, is based on data which represent, we would claim, lexical borrowing (e.g. Eliasson, 1991; Mahootian & Santorini, 1994; Backus, 1996). While code-switching essentially involves the reconciliation of the word orders of both languages, only the word order of the recipient language is pertinent to borrowing. Thus attempts to understand code-switching based on a mixture of borrowing and true switching data are likely to be misleading.

In the model constructed above, the borrowing process is not relevant. Loanwords, including ad hoc, "nonce", or momentary uses, are not excluded, but simply considered to be syntactically integrated, i.e. to behave as native lexical items with respect to word order. How can this working hypothesis be validated? In Sections 3.1 and 3.2, we demonstrate an answer to this problem.

3.1 Properties of loanwords
Many loanwords have long histories in the recipient language, are used by monolinguals (often with no consciousness of their foreign etymology), are widespread, and are accepted by monolingual dictionaries and other linguistic arbiters. None of these non-structural characteristics, however, is necessary to the borrowing process. In many communities, bilinguals have access to essentially the entire content-word lexicon of one language as potential loanwords into the other, perhaps for a single usage only. What is important is that when these words are borrowed, the structural linguistic characteristics of their usage are the same as with established loanwords. What are these characteristics? Some of them are: integration into the recipient language at the syntactic, morphological, semantic and phonological levels; use as a single item independent of other donor-language material; and restriction to nouns, verbs, adjectives, etc., to the exclusion of determiners, pronouns, prepositions and other grammatical words.

Often, during the study of a bilingual corpus, we discover a pattern of words from a specific lexical category in language A appearing in mixed discourse in contexts where they seem to violate the equivalence constraint, but when considered as language B words, i.e. borrowings, there is no violation, e.g. kidney in (9), feeling in (15). Thus these words seem syntactically integrated into language B. To confirm their borrowed status, we verify the other properties of loanwords.

Phonological integration turns out to be an unreliable indicator, for two reasons (cf. Poplack et al., 1987a). One is that bilinguals, in contrast to monolinguals, tend to be aware of the etymology of loanwords and, in some communities, will often reflect this knowledge in their pronunciation of borrowed items. Second, in some communities, the learning context results in phonologies for languages A and B which converge in unpredictable and diverse ways from speaker to speaker.

Semantic integration refers to a shift in function or meaning of a loanword from donor-language characteristics to recipient-language characteristics. This may often be documented for established loanwords that have had time to evolve within the host language, or for borrowings between languages whose functional categories are structured very differently; but in general, where bilingual borrowings are most frequently from and into the category of nouns, the criterion of semantic integration, though satisfied, may not always be revealing.

The criterion of isolated occurrence of loanwords in recipient-language contexts is more universal and is usually relatively easy to apply. The main difficulties come from compound words and other multi-word forms whose status as single lexical items is not always clear. Statistically, these should not constitute a major problem. There is also the possibility of coincidence. If nouns are often borrowed and adjectives are often borrowed, then occasionally a noun-adjective combination will appear to have been borrowed together, when this is just the result of chance. This will also be relatively rare.

The lexical/grammatical (or content/function) contrast is also useful.
It is true that among the world's languages, loanwords have on occasion included prepositions, pronouns, determiners, and other grammatical categories, but these are exceptional, and the overwhelming tendency is for borrowing, and especially one-time borrowing by bilinguals, to affect nouns, and to a lesser extent, verbs, adjectives and adverbs. Moreover, in specific communities, bilingual borrowing may be focused on particular categories more than in other communities, and these patterns may be useful for analytical purposes.

Finally, it is the criterion of morphological integration which is of great interest. Loanwords, established or momentary, are inflected exclusively through recipient-language morphological rules. Insofar as such marking is non-null and is different for language A and language B, words borrowed from the former into the latter should display exclusively language B inflectional morphology.

3.2 Case marking of English-origin nouns in Tamil

In the Tamil corpus referred to in Sections 2.1 and 2.2, many English-origin nouns occur in preverbal position, where the verb is an inflected Tamil form. Tamil being an SOV language, this is just where Tamil direct (and indirect) objects appear. Examining these English-origin nouns, we first note that these occur most frequently in isolation, and occasionally as compounds or as familiar adjective-noun combinations, but never preceded by English prepositions, articles, quantifiers or demonstratives, as would frequently be the case if these were parts of well-formed English fragments resulting from code-switching.

Second, whereas the preponderance of preverbal native Tamil objects are actually pronouns, from 45-70% depending on the case, no English pronouns whatsoever appear in this context, as would be expected from borrowings, but not if these were code-switches into English fragments -- which would normally include at least the occasional pronoun.

Third, it is the inflectional morphology on these nouns which is the most revealing. They either have null morphology or Tamil inflections. Since in Tamil the numerically frequent accusatives and datives are prescribed to take non-null case-marking, we examine marking rates quantitatively. In fact, as in Table 1, many (non-pronominal) Tamil forms are unmarked, especially accusatives. The English-origin forms show remarkably parallel rates, especially when the accusative-dative contrast is considered. This morphological integration into Tamil is exactly what would be expected of borrowings, and certainly not of well-formed English fragments produced by code-switching.

Table 1: Variable accusative and dative marking on English-origin and native Tamil objects.

                                 marker present   marker absent     N
  ACCUSATIVE
    English origin                    29%              71%         108
    Native Tamil (no pronouns)        39%              61%          51
  DATIVE
    English origin                    86%              14%          91
    Native Tamil                      99%               1%         230

In summary, the criteria of syntactic integration, isolation, lexical category, and morphological integration all confirm the loanword status of the preverbal English-origin nouns, and justify our considering them as Tamil nouns for the purposes of applying our model of code-
The uniqueness statement in Theorem 1 has to be modified to take this into account, but this leads to no difficulty. We presume of course that the borrowed word in B can be distinguished from its "etymological" origin in A by the tests and criteria illustrated in Section 3.2.

Discussion

The core of this work is our model of equivalence-point code-switching. This avoids issues of grammatical theory by focusing on the "real-time" production of a code-mixed sentence drawing on the output of two monolingual grammars. Our model is built on an earlier formulation of the equivalence constraint (Sankoff & Mainville, 1986). It is the production aspect here, however, that allows us to achieve the all-important well-formedness of monolingual fragments, not strictly guaranteed in the earlier work, and to model the essential unpredictability of code-switching. The present version (Sankoff, 1998) has a more economical protocol for constituent labeling and a more complete account of the coincidence (or lack thereof) between word-level switching and constituent-level switching. We have shown here how to weaken the strong conditions leading to Theorems 5 and 6 to account for other types of code-switching that have been reported.

We have allowed some degree of asymmetry in the model. Borrowing can be unidirectional with respect to specific categories. The same is true for constituent insertion. Further development will require weakening the one-to-one correspondence of the sets of rules, and of the grammatical and lexical categories of the two languages. For example, the use of specialized incorporation devices, like inflection-carrying dummy verbs, or techniques for marking borrowed adverbs, typically belong to one language and not the other.

We have not treated the topic of interference in this presentation. Interference differs from borrowing in several respects, in particular on the level of intentionality -- interference is more likely to be avoided or corrected by speakers, and is more likely to show up in communities where the domain of monolingualism, and the frequency of use of the affected language, are restricted. Nevertheless, interference has interesting consequences for our model in that it tends to affect pre-sentential discourse markers, tags, conjunctions, prepositions and other grammatical morphemes rather than lexical items, as is the case for loanwords. Both borrowing and interference can lead to the long-term establishment of lexical items, so that when interference is frequent, the distinction between the two becomes of interest.

Acknowledgements

This work was supported in part by grants from the Natural Sciences and Engineering Research Council and the Social Sciences and Humanities Research Council. The author is a Fellow of the Canadian Institute for Advanced Research.

References

Backus, A. (1996). Two in One. Bilingual Speech of Turkish Immigrants in the Netherlands. Tilburg: Tilburg University Press.

Belazi, H. M., Rubin, E. J., and Toribio, A. J. (1994). Code switching and X-bar theory: The functional head constraint. Linguistic Inquiry, 25, 221-237.

Bentahila, A., and Davies, E. E. (1983). The syntax of Arabic-French code-switching. Lingua, 59, 301-330.

Di Sciullo, A.-M., Muysken, P., and Singh, R. (1986). Government and code-mixing. Journal of Linguistics, 22, 1-24.

Eliasson, S. (1991). Models and constraints in code-switching theory. Papers from the Workshop on Constraints, Conditions and Models, pp. 17-50.
Strasbourg: European Science Foundation.

Eze, E. (1997). Aspects of language contact: a variationist perspective on code-switching and borrowing in Igbo-English bilingual discourse. Ph.D. dissertation, University of Ottawa.

Gumperz, J., and Hernandez, E. (1969). Cognitive aspects of bilingual communication. Working Paper Number 28, Language Behavior Research Laboratory. University of California, Berkeley.

Joshi, A. K. (1985). Processing of sentences with intrasentential code-switching. In D. R. Dowty, L. Karttunen and A. M. Zwicky (eds.), Natural Language Parsing, pp. 190-205. Cambridge: Cambridge University Press.

Labov, W. (1969). Contraction, deletion and inherent variability of the English copula. Language, 45, 715-762.

Lipski, J. (1977). Code-switching and the problem of bilingual competence. In M. Paradis (ed.), Aspects of Bilingualism, pp. 250-263. Columbia, SC: Hornbeam Press.

Mahootian, S., and Santorini, B. (1994). Adnominal adjectives, codeswitching and lexicalized TAG. In A. Abeille, S. Aslanides and O. Rambow (eds.), 3e colloque international sur les grammaires d'arbres adjoints (TAG+3), Technical Report TALANA-RT-94-01, pp. 73-76.

Meechan, M., and Poplack, S. (1995). Orphan categories in bilingual discourse: Adjectivization strategies in Wolof-French and Fongbe-French. Language Variation and Change, 7, 169-194.

Muysken, P. (1995). Code-switching and grammatical theory. In L. Milroy and P. Muysken (eds.), One speaker, two languages, pp. 177-198. Cambridge: Cambridge University Press.

Myers-Scotton, C. (1993). Dueling Languages. Oxford: Clarendon Press.

Naït M'Barek, M., and Sankoff, D. (1988). Le discours mixte arabe/français: des emprunts ou des alternances de langue? Revue Canadienne de Linguistique, 33(2), 143-154.

Nishimura, M. (1986). Intrasentential code-switching: The case of language assignment. In J. Vaid (ed.), Language Processing in Bilinguals: psycholinguistic and neuropsychological perspectives, pp. 123-43. Hillsdale, NJ: Lawrence Erlbaum.

Pandit, I. (1990). Grammaticality in code-switching. In R. Jacobson (ed.), Codeswitching as a worldwide phenomenon, pp. 33-69. New York: Peter Lang.

Pfaff, C. (1979). Constraints on language mixing: intrasentential code-switching and borrowing in Spanish/English. Language, 55, 291-318.

Poplack, S. (1978). Syntactic structure and social function of code-switching. In R. Duran (ed.), Latino Discourse and Communicative Behavior, pp. 169-184. New Jersey: Ablex.

Poplack, S. (1980). Sometimes I'll start a sentence in Spanish Y TERMINO EN ESPAÑOL: Toward a typology of code-switching. Linguistics, 18, 581-618.

Poplack, S., and Meechan, M. (1995). Patterns of language mixture: Nominal structure in Wolof-French and Fongbe-French bilingual discourse. In L. Milroy and P. Muysken (eds.), One speaker, two languages, pp. 199-232. Cambridge: Cambridge University Press.

Poplack, S., Sankoff, D., and Miller, C. (1987a). The social correlates and linguistic processes of lexical borrowing and assimilation. Linguistics, 26(1), 47-104.

Poplack, S., Wheeler, S., and Westwood, A. (1987b). Distinguishing language contact phenomena: Evidence from Finnish-English bilingualism. In P. Lilius and M. Saari (eds.), The Nordic languages and modern linguistics 6, pp. 33-56. University of Helsinki Press.

Rivas, A. (1981). On the application of transformations to bilingual sentences. Manuscript. Amherst, MA: Dept. of Spanish and Portuguese, University of Massachusetts.

Sankoff, D.
(1988). Sociolinguistics and syntactic variation. In F. Newmeyer (ed.), Linguistics: the Cambridge Survey. IV. Language: the socio-cultural context, pp. 140-161. Cambridge: Cambridge University Press.

Sankoff, D. (1998). A formal production-based explanation of the facts of code-switching. Bilingualism, Language and Cognition, 1 (in press).

Sankoff, D., and Mainville, M. (1986). Code-switching of context-free grammars. Theoretical Linguistics, 13, 75-90.

Sankoff, D., Poplack, S., and Vanniarajan, S. (1990). The case of the nonce loan in Tamil. Language Variation and Change, 2(1), 71-101.

Woolford, E. (1983). Bilingual code-switching and syntactic theory. Linguistic Inquiry, 14, 520-536.
Trigger-Pair Predictors in Parsing and Tagging

Ezra Black, Andrew Finch, Hideki Kashioka
ATR Interpreting Telecommunications Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan 619-02
{black,finch,kashioka}@atr.itl.co.jp

Abstract

In this article, we apply to natural language parsing and tagging the device of trigger-pair predictors, previously employed exclusively within the field of language modelling for speech recognition. Given the task of predicting the correct rule to associate with a parse-tree node, or the correct tag to associate with a word of text, and assuming a particular class of parsing or tagging model, we quantify the information gain realized by taking account of rule or tag trigger-pair predictors, i.e. pairs consisting of a "triggering" rule or tag which has already occurred in the document being processed, together with a specific "triggered" rule or tag whose probability of occurrence within the current sentence we wish to estimate. This information gain is shown to be substantial. Further, by utilizing trigger pairs taken from the same general sort of document as is being processed (e.g. same subject matter or same discourse type)--as opposed to predictors derived from a comprehensive general set of English texts--we can significantly increase this information gain.

1 Introduction

If a person or device wished to predict which words or grammatical constructions were about to occur in some document, intuitively one of the most helpful things to know would seem to be which words and constructions occurred within the last half-dozen or dozen sentences of the document. Other things being equal, a text that has so far been larded with, say, mountaineering terms, is a good bet to continue featuring them. An author with the habit of ending sentences with adverbial clauses of confirmation, e.g. "as we all know", will probably keep up that habit as the discourse progresses.

Within the field of language modelling for speech recognition, maintaining a cache of words that have occurred so far within a document, and using this information to alter probabilities of occurrence of particular choices for the word being predicted, has proved a winning strategy (Kuhn et al., 1990). Models using trigger pairs of words, i.e. pairs consisting of a "triggering" word which has already occurred in the document being processed, plus a specific "triggered" word whose probability of occurrence as the next word of the document needs to be estimated, have yielded perplexity reductions of 29-38% over the baseline trigram model, for a 5-million-word Wall Street Journal training corpus (Rosenfeld, 1996). (See Section 2 for a definition of perplexity.)

This paper introduces the idea of using trigger-pair techniques to assist in the prediction of rule and tag occurrences, within the context of natural-language parsing and tagging. Given the task of predicting the correct rule to associate with a parse-tree node, or the correct tag to associate with a word of text, and assuming a particular class of parsing or tagging model, we quantify the information gain realized by taking account of rule or tag trigger-pair predictors, i.e. pairs consisting of a "triggering" rule or tag which has already occurred in the document being processed, plus a specific "triggered" rule or tag whose probability of occurrence within the current sentence we wish to estimate.

In what follows, Section 2 provides a basic overview of trigger-pair models.
Section 3 describes the experiments we have performed, which to a large extent parallel successful modelling experiments within the field of language modelling for speech recognition. In the first experiment, we investigate the use of trigger pairs to predict both rules and tags over our full corpus of around a million words. The subsequent experiments investigate the additional information gains accruing from trigger-pair modelling when we know what sort of document is being parsed or tagged. We present our experimental results in Section 4, and discuss them in Section 5. In Section 6, we present some example trigger pairs; and we conclude, with a glance at projected future research, in Section 7.

2 Background

Trigger-pair modelling research has been pursued within the field of language modelling for speech recognition over the last decade (Beeferman et al., 1997; Della Pietra et al., 1992; Kupiec, 1989; Lau, 1994; Lau et al., 1993; Rosenfeld, 1996). Fundamentally, the idea is a simple one: if you have recently seen a word in a document, then it is more likely to occur again, or, more generally, the prior occurrence of a word in a document affects the probability of occurrence of itself and other words.

More formally, from an information-theoretic viewpoint, we can interpret the process as the relationship between two dependent random variables. Let the outcome (from the alphabet of outcomes $A_Y$) of a random variable $Y$ be observed and used to predict a random variable $X$ (with alphabet $A_X$). The probability distribution of $X$, in our case, is dependent on the outcome of $Y$.

The average amount of information necessary to specify an outcome of $X$ (measured in bits) is called its entropy $H(X)$ and can also be viewed as a measure of the average ambiguity of its outcome (a more intuitive view of entropy is provided through perplexity (Jelinek et al., 1977), which is a measure of the number of choices, on average, there are for a random variable; it is defined to be $2^{H(X)}$):

$$H(X) = -\sum_{x \in A_X} P(x) \log_2 P(x) \quad (1)$$

The mutual information between $X$ and $Y$ is a measure of the entropy (ambiguity) reduction of $X$ from the observation of the outcome of $Y$. This is the entropy of $X$ minus its a posteriori entropy, having observed the outcome of $Y$:

$$I(X;Y) = H(X) - H(X|Y) = \sum_{x \in A_X} \sum_{y \in A_Y} P(x,y) \log_2 \frac{P(x,y)}{P(x)P(y)} \quad (2)$$

The dependency information between a word and its history may be captured by the trigger pair (for a thorough description of trigger-based modelling, see Rosenfeld (1996)). A trigger pair is an ordered pair of words $t$ and $w$. Knowledge that the trigger word $t$ has occurred within some window of words in the history changes the probability estimate that word $w$ will occur subsequently.

Selection of these triggers can be performed by calculating the average mutual information between word pairs over a training corpus. In this case, the alphabet $A_X = \{w, \bar{w}\}$, the presence or absence of word $w$; similarly, $A_Y = \{t, \bar{t}\}$, the presence or absence of the triggering word in the history. This is a measure of the effect that knowledge of the occurrence of the triggering word $t$ has on the occurrence of word $w$, in terms of the entropy (and therefore perplexity) reduction it will provide. Clearly, in the absence of other context (i.e. in the case of the a priori distribution of $X$), this information will be additional.
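To make the trigger-selection computation concrete, the following is a minimal sketch (our own illustration, not code from the paper) of how the average mutual information of equation (2) can be estimated for one candidate trigger pair from simple co-occurrence counts; the function name and count interface are assumptions, and the same computation applies unchanged when the "words" are tags or rules.

```python
import math

def average_mutual_information(n_total, n_t, n_w, n_tw):
    """Estimate I(X;Y) in bits for a candidate trigger pair (t, w).

    n_total: total number of (history, target) observation pairs
    n_t:     observations whose history window contains t
    n_w:     observations where the target token is w
    n_tw:    observations with both (t in history, target == w)

    Hypothetical interface; this is the standard 2x2 average mutual
    information of equation (2) over presence/absence of t and w.
    """
    eps = 1e-12  # guard against log(0) for unseen combinations
    total = float(n_total)
    # Joint distribution over {t present/absent} x {w present/absent}
    joint = {
        (1, 1): n_tw / total,
        (1, 0): (n_t - n_tw) / total,
        (0, 1): (n_w - n_tw) / total,
        (0, 0): (n_total - n_t - n_w + n_tw) / total,
    }
    p_t = {1: n_t / total, 0: 1.0 - n_t / total}
    p_w = {1: n_w / total, 0: 1.0 - n_w / total}
    mi = 0.0
    for (yt, xw), pxy in joint.items():
        if pxy > eps:
            mi += pxy * math.log2(pxy / (p_t[yt] * p_w[xw] + eps))
    return mi
```

Candidate triggers for a target token can then be ranked by this quantity, keeping only the best one per token, as in the experiments below (one trigger per tag or rule).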
However, once related contextual information is included (for example by building a trigram model, or by using other triggers for the same word), this is no longer strictly true.

Once the trigger pairs are chosen, they may be used to form constraint functions to be used in a maximum-entropy model, alongside other constraints. Models of this form are extremely versatile, allowing the combination of short- and long-range information. To construct such a model, one transforms the trigger pairs into constraint functions $f(t, w)$:

$$f(t, w) = \begin{cases} 1 & \text{if } t \in \text{history and next word} = w \\ 0 & \text{otherwise} \end{cases} \quad (3)$$

The expected values of these functions are then used to constrain the model, usually in combination with other constraints such as similar functions embodying uni-, bi- and trigram probability estimates.

Beeferman et al. (1997) model more accurately the effect of distance between triggering and triggered word, showing that for non-self-triggers (i.e. words which trigger words other than themselves), the triggering effect decays exponentially with distance. For self-triggers (i.e. words which trigger themselves), the effect is the same except that the triggering effect is lessened within a short range of the word. Using a model of these distance effects, they are able to improve the performance of a trigger model.

We are unaware of any work on the use of trigger pairs in parsing or tagging. In fact, we have not found any previous research in which extrasentential data of any sort are applied to the problem of parsing or tagging.

3 The Experiments

3.1 Experimental Design

In order to investigate the utility of using long-range trigger information in tagging and parsing tasks, we adopt the simple mutual-information approach used in (Rosenfeld, 1996). We carry over into the domain of tags and rules an experiment from Rosenfeld's paper, the details of which we outline below. The idea is to measure the information contributed (in bits, or, equivalently, in terms of perplexity reduction) by using the triggers. Using this technique requires special care to ensure that information "added" by the triggers is indeed additional information. For this reason, in all our experiments we use the unigram model as our base model, and we allow only one trigger for each tag (or rule) token. (By rule assignment, we mean the task of assigning a rule-name to a node in a parse tree, given that the constituent boundaries have already been defined.)

We derive these unigram probabilities from the training corpus and then calculate the total mutual information gained by using the trigger pairs, again with respect to the training corpus.

When using trigger pairs, one usually restricts the trigger to occur within a certain window defined by its distance to the triggered token. In our experiments, the window starts at the sentence prior to that containing the token and extends back W (the window size) sentences. The choice to use sentences as the unit of distance is motivated by our intention to incorporate triggers of this form into a probabilistic treebank-based parser and tagger, such as (Black et al., 1998; Black et al., 1997; Brill, 1994; Collins, 1996; Jelinek et al., 1994; Magerman, 1995; Ratnaparkhi, 1997). All such parsers and taggers of which we are aware use only intrasentential information in predicting parses or tags, and we wish to remove this information, as far as possible, from our results (this is not completely possible, since correlations, even if slight, will exist between intra- and extrasentential information).

The window was not allowed to cross a document boundary. The perplexity of the task before taking the trigger-pair information into account for tags was 224.0 and for rules was 57.0.
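The constraint function in (3) and the sentence-based window just described are straightforward to realize in code. Below is a small sketch under our own naming assumptions: a trigger feature of the form of equation (3), plus the window over the W sentences preceding the current one (starting at the sentence prior to the one containing the token, and never crossing a document boundary, which is handled here by passing documents one at a time).

```python
def trigger_feature(t, w):
    """Return the maximum-entropy constraint function f(t, w) of equation (3)."""
    def f(history, next_token):
        # history: set of tokens (tags or rules) seen in the window
        return 1 if t in history and next_token == w else 0
    return f

def window_history(document, sent_index, window_size):
    """Collect the tokens in the W sentences preceding sentence sent_index.

    document: list of sentences, each a list of tokens (tags or rule names).
    The window starts at the sentence prior to the current one, as in the
    experimental design above.
    """
    start = max(0, sent_index - window_size)
    history = set()
    for sentence in document[start:sent_index]:
        history.update(sentence)
    return history

# Example: f fires when tag "NP1LOCNM" occurred in the preceding window
# and the next tag is "NP1STATENM" (a pair from Table 4 below).
f = trigger_feature("NP1LOCNM", "NP1STATENM")
```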
The characteristics of the training corpus we employ are given in Table 1. The corpus is a subset of the ATR/Lancaster General-English Treebank (Black et al., 1996): specifically, a roughly 900,000-word subset of the full treebank (about 1.05 million words), from which all 150,000 words were excluded that were treebanked by the two least accurate ATR/Lancaster treebankers (expected hand-parsing error rate 32%, versus less than 10% overall for the three remaining treebankers). It consists of a sequence of sentences which have been tagged and parsed by human experts in terms of the ATR English Grammar, a broad-coverage grammar of English with a high level of analytic detail (Black et al., 1996; Black et al., 1997).

Table 1: Characteristics of Training Set (Subset of ATR/Lancaster General-English Treebank)
1868 documents
80299 sentences
904431 words (tag instances)
1622664 constituents (rule instances)
1873 tags utilized
907 rules utilized
11.3 words per sentence, on average

For instance, the tagset is both semantic and syntactic, and includes around 2000 different tags, which classify nouns, verbs, adjectives and adverbs via over 100 semantic categories. As examples of the level of syntactic detail, exhaustive syntactic and semantic analysis is performed on all nominal compounds; and the full range of attachment sites is available within the Grammar for sentential and phrasal modifiers, and these are used precisely in the Treebank.

The Treebank actually consists of a set of documents, from a variety of sources. Crucially for our experiments (see below), the idea informing the selection of (the roughly 2000) documents for inclusion in the Treebank (see Black et al., 1996) was to pack into it the maximum degree of document variation along many different scales--document length, subject area, style, point of view, etc.--but without establishing a single, predetermined classification of the included documents.

In the first experiment, we examine the effectiveness of using trigger pairs over the entire training corpus. At the same time we investigate the effect of varying the window size. In additional experiments, we observe the effect of partitioning our training dataset into a few relatively homogeneous subsets, on the hypothesis that this will decrease perplexity. It seems reasonable that in different text varieties, different sets of trigger pairs will be useful, and that tokens which do not have effective triggers within one text variety may have them in another (related work in topic-specific trigram modelling (Lau, 1994) has led to a reduction in perplexity).

To investigate the utility of partitioning the dataset, we construct a separate set of trigger pairs for each class. These triggers are only active for their respective class and are independent of each other. Their total mutual information is compared to that derived in exactly the same way from a random partition of our corpus into the same number of classes, each comprised of the same number of documents.

Our training data partitions naturally into four subsets, shown in Table 2 as Partitioning 1 ("Source"). Partitioning 2, "List Structure", puts all documents which contain at least some HTML-like "List" markup (e.g.
LI (=List Item)) in one subset, and all other documents in the other subset. (All documents in our training set are marked up in HTML-like annotation.) By merging Partitionings 1 and 2 we obtain Partitioning 3, "Source Plus List Structure". Partitioning 4 is "Source Plus Document Type", and contains 9 subsets, e.g. "Letters; diaries" (subset 8) and "Novels; stories; fables" (subset 7). With 13 subsets, Partitioning 5, "Source Plus Domain", includes e.g. "Social Sciences" (subset 9) and "Recreation" (subset 1). Partitionings 4 and 5 were effected by actual inspection of each document, or at least of its title and/or summary, by one of the authors. The reason we included Source within most partitionings was to determine the extent to which information gains were additive (for instance, compare the results for Partitionings 1, 2, and 3 in this regard).

4 Experimental Results

4.1 Window Size

Figure 1 shows the effect of varying the window size from 1 to 500 for both rule and tag tokens.

[Figure 1: Mutual information gain as a function of window size (1-500 sentences), plotted separately for tags and rules.]

The optimal window size for tags was approximately 12 sentences (about 135 words) and for rules it was approximately 6 sentences (about 68 words). These values were used for all subsequent experiments. It is interesting to note that the curves are of similar shape for both rules and tags and that the optimal value is not the largest window size. Related effects for words are reported in (Lau, 1994; Beeferman et al., 1997). In the latter paper, an exponential model of distance is used to penalize large distances between triggering word and triggered word. The variable window used here can be seen as a simple alternative to this.

One explanation for this effect in our data is, in the case of tags, that topic changes occur in documents. In the case of rules, the effect would seem to indicate a short span of relatively intense stylistic carryover in text. For instance, it may be much more important, in predicting rules typical of list structure, to know that similar rules occurred a few sentences ago, than to know that they occurred dozens of sentences back in the document.

4.2 Class-Specific Triggers

Table 3 shows the improvement in perplexity over the base (unigram) tag and rule models for both the randomly-split and the hand-partitioned training sets. In every case, the meaningful split yielded significantly more information than the random split. (Of course, the results for randomly-split training sets are roughly the same as for the unpartitioned training set (Figure 1).)

5 Discussion

The main result of this paper is to show that, analogous to the case of words in language modelling, a significant amount of extrasentential information can be extracted from the long-range history of a document, using trigger pairs for tags and rules. Although some redundancy of information is inevitable, we have taken care to exclude as much information as possible that is already available to (intrasentential-data-based, i.e. all known) parsers and taggers.
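Since perplexity is $2^{H}$, an information gain of I bits corresponds to a relative perplexity reduction of $1 - 2^{-I}$; the paper reports bits and percentages directly without spelling out this conversion, so the helper below is our own convenience function for relating the two.

```python
def perplexity_reduction(information_gain_bits):
    """Relative perplexity reduction implied by an entropy reduction in bits.

    Perplexity is 2**H(X), so reducing entropy by I bits scales perplexity
    by 2**(-I), i.e. a relative reduction of 1 - 2**(-I).
    """
    return 1.0 - 2.0 ** (-information_gain_bits)

# Example: the 0.31-bit gain for tags at the optimal window size (Section 5)
# corresponds to roughly a 19% perplexity reduction over the unigram base.
print(f"{perplexity_reduction(0.31):.1%}")
```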
Quantitatively, the studies of (Rosenfeld, 1996) yielded a total mutual information gain of 0.38 bits, using Wall Street Journal data, with one trigger per word. In a parallel experiment, using the same technique, but on the ATR/Lancaster corpus, the total mutual information of the triggers for tags was 0.41 bits. This figure increases to 0.52 bits when tags further away than 135 tags (the approximate equivalent in words to the optimal window size in sentences) are excluded from the history. For the remainder of our experiments, we do not use as part of the history the tags/rules from the sentence containing the token to be predicted. This is motivated by our wish to exclude the intrasentential information which is already available to parsers and taggers. In the case of tags, using the optimal window size, the gain was 0.31 bits, and for rules the information gain was 0.12 bits.

Although these figures are not as large as for the case where intrasentential information is incorporated, they are sufficiently close to encourage us to exploit this information in our models. For the case of words, the evidence shows that triggers derived in the same manner as the triggers in our experiments can provide a substantial amount of new information when used in combination with sophisticated language models. For example, (Rosenfeld, 1996) used a maximum-entropy model trained on 5 million words, with only trigger, uni-, bi- and trigram constraints, to measure the test-set perplexity reduction with respect to a "compact" backoff trigram model, a well-respected model in the language-modelling field. When the top six triggers for each word were used, test-set perplexity was reduced by 25%. Furthermore, when a more sophisticated version of this model (trained on 38 million words, and also employing distance-2 N-gram constraints, a unigram cache and a conditional bigram cache; this model reduced perplexity over the baseline trigram model by 32%) was applied in conjunction with the SPHINX II speech recognition system (Huang et al., 1993), a 10-14% reduction in word error rate resulted (Rosenfeld, 1996). We see no reason why this effect should not carry over to tag and rule tokens, and are optimistic that long-range trigger information can be used in both parsing and tagging to improve performance.

For words (Rosenfeld, 1996), self-triggers--words which triggered themselves--were the most frequent kind of triggers (68% of all word triggers were self-triggers). This is also the case for tags and rules. For tags, 76.8% were self-triggers, and for rules, 96.5% were self-triggers. As in the case of words, the set of self-triggers provides the most useful predictive information.

Table 2: Training Set Partitions

Partitioning 1: Source
1: Assoc. Press, WSJ -- 8851 sentences
2: Canadian Hansards -- 5002
3: General English -- 23105
4: Travel-domain dialogues -- 43341

Partitioning 2: List Structure
1: Contains lists -- 14147
2: Contains no lists -- 66152

Partitioning 3: Source + List Structure
1: Assoc. Press, WSJ -- 8851
2: Canadian Hansards -- 5002
3: Contains lists (General) -- 11998
4: Contains no lists (General) -- 11117
5: Travel-domain dialogues -- 43341

Partitioning 4: Source + Document Type
1: Legislative (incl. Source 2) -- 5626
2: Transcripts (incl. Source 4) -- 44287
3: News (incl. most of Source 1) -- 8614
4: Polemical essays -- 5160
5: Reports; FAQs; listings -- 11440
6: Idiom examples -- 666
7: Novels; stories; fables -- 741
8: Letters; diaries -- 1997
9: Legal cases; constitutions -- 1768

Partitioning 5: Source + Domain
1: Recreation -- 3545
2: Business -- 2055
3: Science, Technology -- 4018
4: Humanities -- 2224
5: Daily Living -- 896
6: Health, Education -- 1649
7: Government, Politics -- 1768
8: Travel -- 2667
9: Social Sciences -- 3617
10: Idiom example sentences -- 666
11: Canadian Hansards -- 5002
12: Assoc. Press, WSJ -- 8851
13: Travel dialogues -- 43341

Table 3: Perplexity reduction using class-specific triggers to predict tags and rules

Partitioning | Tags: meaningful partition | Tags: random | Rules: meaningful partition | Rules: random
1: Source | 28.40% | 16.66% | 15.44% | 6.30%
2: List Structure | 20.39% | 18.71% | 10.55% | 7.46%
3: Source Plus List Structure | 28.74% | 17.12% | 15.61% | 6.50%
4: Source Plus Document Type | 30.11% | 18.15% | 16.20% | 6.82%
5: Source Plus Domain | 31.55% | 19.39% | 16.60% | 7.34%

Table 4: Selected Tag Trigger-Pairs, ATR/Lancaster General-English Treebank

# | Triggering Tag | Triggered Tag | I.e. Words Like These: | Trigger Words Like These:
1 | NP1LOCNM | NP1STATENM | Hill, County, Bay, Lake | Utah, Maine, Alaska
2 | JJSYSTEM | NP1ORG | national, federal, political | Party, Council, Department
3 | IIDESPITE | CFYET | despite | yet (conjunction)
4 | PN1PERSON | LEBUT22 | everyone, one, anybody | (not) only, (not) just
5 | ... | MPRICE | ........... (sequences of points) | $452,983,000, $10,000, $19.95
6 | IIAT(SF) | MPHONE22 | at (sentence-final, +/- ":") | 913-3434 (follows area code)
7 | IIFROM(SF) | MZIP | from (sentence-final, +/- ":") | 22314-1698 (postal zipcode)
8 | NNUNUM | NN1MONEY | 25%, 12", 9.4m3 | profit, price, cost

Table 5: Selected Rule Trigger-Pairs, ATR/Lancaster General-English Treebank

# | A Construction Like This: | Triggers A Construction Like This:
1 | Interrupter Phrase -> * Or - (Example: *,-) | Sentence -> Interrupter P + Phrasal (Non-S) (Example: * DIG. AM/FM TUNER)
2 | VP -> Verb + Interrupter Phrase + Obj/Compl (Example: starring--surprise, surprise--men) | Interrupter Phrase -> , + Interrupter + , (Example: , according to participants,)
3 | Noun Phrase -> Simple Noun Phrase + Num (Example: Lows around 50) | Num -> Num + PrepP with Numerical Obj (Example: (Snow level) 6000 to 7000)
4 | Verb Phrase -> Adverb Phrase + Verb Phrase (Example: just need to understand it) | Auxiliary VP -> Modal/Auxiliary Verb + Not (Example: do not)
5 | Question -> Be + NP + Object/Complement (Example: Is it possible?) | Quoted Phrasal -> " + Phrasal Constit + " (Example: "Mutual funds are back.")

Table 6: Selected Tag Trigger-Pairs, ATR/Lancaster General-English Treebank: Contrasting Trigger-Pairs Arising From Partitioned vs. Unpartitioned Training Sets

# | Triggering Tag | Triggered Tag | I.e. Words Like These: | Trigger Words Like These: | Training set
1 | VVNSEND | NP1STATENM | shipped, distributed | Utah, Maine, Alaska | document class Recreation
2 | NP1LOCNM | NP1STATENM | Hill, County, Bay, Lake | Utah, Maine, Alaska | unpartitioned
3 | VVOALTER | NN2SUBSTANCE | inhibit, affect, modify | tumors, drugs, agents | document class Health and Education
4 | JJPHYS-ATT | NN2SUBSTANCE | fragile, brown, choppy | pines, apples, chemicals | unpartitioned
5 | NN1TIME | NN2MONEY | period, future, decade | expenses, fees, taxes | document class Business
6 | NP1POSTFRMNM | NN2MONEY | Inc., Associates, Co. | loans, damages, charges | unpartitioned
7 | DD1 | DDQ | this, that, another, each | which | document class Travel Dialogues
8 | DDQ | DDQ | which | which | unpartitioned
6 Some Examples

We will now explicate a few of the example trigger pairs in Tables 4-6. Table 4 Item 5, for instance, captures the common practice of using a sequence of points, e.g. "...........", to separate each item of a (price) list and the price of that item. Items 6 and 7 are similar cases (e.g. "contact/call (someone) at:" + phone number; "available from:" + source, typically including address, hence zipcode). These correlations typically occur within listings, and, crucially for their usefulness as triggers, typically occur many at a time.

When triggers are drawn from a relatively homogeneous set of documents, correlations emerge which seem to reflect the character of the text type involved. So in Table 6 Item 5, the proverbial equation of time and money emerges as more central to Business and Commerce texts than the different but equally sensible linkup, within our overall training set, between business corporations and money.

Turning to rule triggers, Table 5 Item 1 is more or less a syntactic analog of the tag examples Table 4 Items 5-7, just discussed. What seems to be captured is that a particular style of listing things, e.g. * + listed item, characterizes a document as a whole (if it contains lists); further, listed items are not always of the same phrasal type, but are prone to vary syntactically. The same document that contains the list item "* DIG. AM/FM TUNER", for instance, which is based on a Noun Phrase, soon afterwards includes "* WEATHER PROOF" and "* ULTRA COMPACT", which are based on Adjective Phrases.

Finally, as in the case of the tag trigger examples of Table 6, text-type-particular correlations emerge when rule triggers are drawn from a relatively homogeneous set of documents. A trigger pair of constructions specific to Class 1 of the Source partitioning, which contains only Associated Press newswire and Wall Street Journal articles, is the following: a sentence containing both a quoted remark and an attribution of that remark to a particular source triggers a sentence containing simply a quoted remark, without attribution. (E.g. "The King was in trouble," Wall wrote, triggers "This increased the King's bitterness.".) This correlation is essentially absent in other text types.

7 Conclusion

In this paper, we have shown that, as in the case of words, there is a substantial amount of information outside the sentence which could be used to supplement tagging and parsing models. We have also shown that knowledge of the type of document being processed greatly increases the usefulness of triggers. If this information is known, or can be predicted accurately from the history of a given document being processed, then model interpolation techniques (Jelinek et al., 1980) could be employed, we anticipate, to exploit this to useful effect.

Future research will concentrate on incorporating trigger-pair information, and extrasentential information more generally, into more sophisticated models of parsing and tagging. An obvious first extension to this work, for the case of tags, will be, following (Rosenfeld, 1996), to incorporate the triggers into a maximum-entropy model using trigger pairs in addition to unigram, bigram and trigram constraints.
Later we intend to incorporate trigger information into a probabilistic English parser/tagger which is able to ask complex, detailed questions about the contents of a sentence. From the results presented here we are optimistic that the additional, extrasentential information provided by trigger pairs will benefit such parsing and tagging systems.

References

D. Beeferman, A. Berger, and J. Lafferty. 1997. A Model of Lexical Attraction and Repulsion. In Proceedings of the ACL-EACL'97 Joint Conference, Madrid.

E. Black, S. Eubank, H. Kashioka, J. Saia. 1998. Reinventing Part-of-Speech Tagging. Journal of Natural Language Processing (Japan), 5:1.

E. Black, S. Eubank, H. Kashioka. 1997. Probabilistic Parsing of Unrestricted English Text, With A Highly-Detailed Grammar. In Proceedings, Fifth Workshop on Very Large Corpora, Beijing/Hong Kong.

E. Black, S. Eubank, H. Kashioka, R. Garside, G. Leech, and D. Magerman. 1996. Beyond skeleton parsing: producing a comprehensive large-scale general-English treebank with full grammatical analysis. In Proceedings of the 16th Annual Conference on Computational Linguistics, pages 107-112, Copenhagen.

E. Brill. 1994. Some Advances in Transformation-Based Part of Speech Tagging. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 722-727, Seattle, Washington. American Association for Artificial Intelligence.

M. Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz.

S. Della Pietra, V. Della Pietra, R. Mercer, S. Roukos. 1992. Adaptive language modeling using minimum discriminant information. Proceedings of the International Conference on Acoustics, Speech and Signal Processing, I:633-636.

X. Huang, F. Alleva, H.-W. Hon, M.-Y. Hwang, K.-F. Lee, and R. Rosenfeld. 1993. The SPHINX-II speech recognition system: an overview. Computer Speech and Language, 2:137-148.

F. Jelinek, R. L. Mercer, L. R. Bahl, J. K. Baker. 1977. Perplexity--a measure of difficulty of speech recognition tasks. In Proceedings of the 94th Meeting of the Acoustical Society of America, Miami Beach, FL.

F. Jelinek and R. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Pattern Recognition in Practice, E. S. Gelsema and N. L. Kanal, eds., pages 381-402, Amsterdam: North Holland.

F. Jelinek, J. Lafferty, D. Magerman, R. Mercer, A. Ratnaparkhi, S. Roukos. 1994. Decision tree parsing using a hidden derivation model. In Proceedings of the ARPA Workshop on Human Language Technology, pages 260-265, Plainsboro, New Jersey. Advanced Research Projects Agency.

R. Kuhn, R. De Mori. 1990. A Cache-Based Natural Language Model for Speech Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(6):570-583.

J. Kupiec. 1989. Probabilistic models of short and long distance word dependencies in running text. In Proceedings of the DARPA Workshop on Speech and Natural Language, pages 290-295.

R. Lau, R. Rosenfeld, S. Roukos. 1993. Trigger-based language models: a maximum entropy approach. Proceedings of the International Conference on Acoustics, Speech and Signal Processing, II:45-48.

R. Lau. 1994. Adaptive Statistical Language Modelling. Master's thesis, Massachusetts Institute of Technology, MA.

D. Magerman. 1995. Statistical decision-tree models for parsing.
In 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283, Cambridge, Massachusetts. Association for Computational Linguistics.

A. Ratnaparkhi. 1997. A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. In Proceedings, Second Conference on Empirical Methods in Natural Language Processing, Providence, RI.

R. Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modelling. Computer Speech and Language, 10:187-228.
Spoken Dialogue Interpretation with the DOP Model

Rens Bod
Department of Computational Linguistics, University of Amsterdam
Spuistraat 134, 1012 VB Amsterdam
[email protected]

Abstract

We show how the DOP model can be used for fast and robust processing of spoken input in a practical spoken dialogue system called OVIS. OVIS, Openbaar Vervoer Informatie Systeem ("Public Transport Information System"), is a Dutch spoken language information system which operates over ordinary telephone lines. The prototype system is the immediate goal of the NWO (Netherlands Organization for Scientific Research) Priority Programme "Language and Speech Technology". In this paper, we extend the original DOP model to context-sensitive interpretation of spoken input. The system we describe uses the OVIS corpus (10,000 trees enriched with compositional semantics) to compute from an input word-graph the best utterance together with its meaning. Dialogue context is taken into account by dividing up the OVIS corpus into context-dependent subcorpora. Each system question triggers a subcorpus by which the user answer is analyzed and interpreted. Our experiments indicate that the context-sensitive DOP model obtains better accuracy than the original model, allowing for fast and robust processing of spoken input.

1. Introduction

The Data-Oriented Parsing (DOP) model (cf. Bod 1992, 1995; Bod & Kaplan 1998; Scha 1992; Sima'an 1995, 1997; Rajman 1995) is a probabilistic parsing model which does not single out a narrowly predefined set of structures as the statistically significant ones. It accomplishes this by maintaining a large corpus of analyses of previously occurring utterances. New utterances are analyzed by combining subtrees from the corpus. The occurrence-frequencies of the subtrees are used to estimate the most probable analysis of an utterance.

To date, DOP has mainly been applied to corpora of trees labeled with syntactic annotations. Let us illustrate this with a very simple example. Suppose that a corpus consists of only two trees:

(1) [S [NP John] [VP [V likes] [NP Mary]]] and [S [NP Peter] [VP [V hates] [NP Susan]]]

To combine subtrees, a node-substitution operation indicated as ∘ is used. Node-substitution identifies the leftmost nonterminal frontier node of one tree with the root node of a second tree (i.e., the second tree is substituted on the leftmost nonterminal frontier node of the first tree). A new input sentence such as Mary likes Susan can thus be parsed by combining subtrees from this corpus, as in (2):

(2) [S [NP ] [VP [V likes] [NP ]]] ∘ [NP Mary] ∘ [NP Susan] = [S [NP Mary] [VP [V likes] [NP Susan]]]

Other derivations may yield the same parse tree; for instance:

(3) [S [NP ] [VP [V ] [NP Susan]]] ∘ [NP Mary] ∘ [V likes] = [S [NP Mary] [VP [V likes] [NP Susan]]]

DOP computes the probability of substituting a subtree t on a specific node as the probability of selecting t among all subtrees in the corpus that could be substituted on that node. This probability is equal to the number of occurrences of t, divided by the total number of occurrences of subtrees t' with the same root label as t. Let rl(t) return the root label of t; then:

P(t) = #(t) / Σ_{t': rl(t') = rl(t)} #(t')

The probability of a derivation is computed by the product of the probabilities of the subtrees it consists of. The probability of a parse tree is computed by the sum of the probabilities of all derivations that produce that parse tree.

Bod (1992) demonstrated that DOP can be implemented using conventional context-free parsing techniques.
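To make the probability definitions concrete, here is a minimal sketch (our own illustration, not the OVIS implementation) of relative-frequency subtree probabilities and the resulting derivation probability; subtrees are represented as nested tuples, and root(t) plays the role of rl(t) above.

```python
from collections import Counter

def root(subtree):
    """Root label of a subtree; a subtree is a (label, children...) tuple."""
    return subtree[0]

class DOPModel:
    def __init__(self, corpus_subtrees):
        # corpus_subtrees: every subtree extracted from every corpus tree
        self.counts = Counter(corpus_subtrees)
        self.root_totals = Counter()
        for t, n in self.counts.items():
            self.root_totals[root(t)] += n

    def p(self, t):
        """P(t) = #(t) / sum of #(t') over subtrees t' with the same root."""
        return self.counts[t] / self.root_totals[root(t)]

    def p_derivation(self, subtrees):
        """Probability of a derivation t1 ∘ ... ∘ tn: product of P(ti)."""
        prob = 1.0
        for t in subtrees:
            prob *= self.p(t)
        return prob

# Toy usage with the subtrees of derivation (2); open substitution
# sites are labels with no children, e.g. ("NP",).
t1 = ("S", ("NP",), ("VP", ("V", "likes"), ("NP",)))
t2 = ("NP", "Mary")
t3 = ("NP", "Susan")
model = DOPModel([t1, t2, t3, ("NP", "John"), ("V", "likes")])
print(model.p_derivation([t1, t2, t3]))
```

The probability of a full parse tree would then be the sum of p_derivation over all derivations producing it, exactly as the text states.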
However, the computation of the most probable parse of a sentence is NP-hard (Sima'an 1996). The most probable parse can be estimated by iterative Monte Carlo sampling (Bod 1995), but efficient algorithms exist only for sub-optimal solutions such as the most likely derivation of a sentence (Bod 1995, Sima'an 1995) or the "labelled recall parse" of a sentence (Goodman 1996).

So far, the syntactic DOP model has been tested on the ATIS corpus and the Wall Street Journal corpus, obtaining significantly better test results than other stochastic parsers (Charniak 1996). For example, Goodman (1998) compares the results of his DOP parser to a replication of Pereira & Schabes (1992) on the same training and test data. While the Pereira & Schabes method achieves 79.2% zero-crossing brackets accuracy, DOP obtains 86.1% on the same data (Goodman 1998: p. 179, table 4.4). Thus the DOP method outperforms the Pereira & Schabes method with an accuracy-increase of 6.9%, or an error-reduction of 33%. Goodman also performs a statistical analysis using a t-test, showing that the differences are statistically significant beyond the 98th percentile.

In Bod et al. (1996), it was shown how DOP can be generalized to semantic interpretation by using corpora annotated with compositional semantics. In the current paper, we extend the DOP model to spoken dialogue understanding, and we show how it can be used as an efficient and robust NLP component in a practical spoken dialogue system called OVIS. OVIS, Openbaar Vervoer Informatie Systeem ("Public Transport Information System"), is a Dutch spoken language information system which operates over ordinary telephone lines. The prototype system is the immediate goal of the NWO Priority Programme "Language and Speech Technology".

The backbone of any DOP model is an annotated language corpus. In the following section, we therefore start with a description of the corpus that was developed for the OVIS system, the "OVIS corpus". We then show how this corpus can be used by DOP to compute the most likely meaning M of a word string W: argmax_M P(M, W). Next we demonstrate how the dialogue context C can be integrated so as to compute argmax_M P(M, W | C). Finally, we interface DOP with speech and show how the most likely meaning M of an acoustic utterance A given dialogue context C is computed: argmax_M P(M, A | C). The last section of this paper deals with the experimental evaluation of the model.

2. The OVIS corpus: trees enriched with compositional frame semantics

The OVIS corpus currently consists of 10,000 syntactically and semantically annotated user utterances that were collected on the basis of a pilot version of the OVIS system (the pilot version is based on a German system developed by Philips Dialogue Systems in Aachen (Aust et al. 1995), adapted to Dutch). The user utterances are answers to system questions such as From where to where do you want to travel?, At what time do you want to travel from Utrecht to Leiden?, Could you please repeat your destination?.

For the syntactic annotation of the OVIS user utterances, a tag set of 40 lexical/syntactic categories was developed. This tag set was deliberately kept small so as to improve the robustness of the DOP parser. A correlate of this robustness is that the parser will overgenerate, but as long as the probability model can accurately select the correct utterance-analysis from all possible analyses, this overgeneration is not problematic. Robustness is further achieved by a special category, called ERROR.
This category is used for stutters, false starts, and repairs. No grammar is used to determine the correct syntactic annotation; there is a small set of guidelines, that has the degree of detail necessary to avoid an "anything goes" attitude in the annotator, but leaves room for the annotator's perception of the structure of the utterance (see Bonnema et al. 1997).

The semantic annotations are based on the update language defined for the OVIS dialogue manager by Veldhuijzen van Zanten (1996). This language consists of a hierarchical frame structure with slots and values for the origin and destination of a train connection, for the time at which the user wants to arrive or depart, etc. The distinction between slots and values can be regarded as a special case of the ground and focus distinction (Vallduvi 1990). Updates specify the ground and focus of the user utterances. For example, the utterance Ik wil niet vandaag maar morgen naar Almere (literally: "I want not today but tomorrow to Almere") yields the following update:

(4) user.wants.(([#today];[!tomorrow]);destination.place.town.almere)

An important property of this update language is that it allows encoding of speech-act information (v. Noord et al. 1997). The "#" in the update means that the information between the square brackets (representing the focus of the user utterance) must be retracted, while the "!" denotes the corrected information.

This update language is used to semantically enrich the syntactic nodes of the OVIS trees by means of the following annotation convention:

- Every meaningful lexical node is annotated with a slot and/or value from the update language which represents the meaning of the lexical item.
- Every meaningful non-lexical node is annotated with a formula schema which indicates how its meaning representation can be put together out of the meaning representations assigned to its daughter nodes.

In the examples below, these schemata use the variable d1 to indicate the meaning of the leftmost daughter constituent, d2 to indicate the meaning of the second daughter constituent, etc. For instance, the full (syntactic and semantic) annotation for the above sentence Ik wil niet vandaag maar morgen naar Almere is given in figure (5).

(5) [Tree diagram: the annotated tree for Ik wil niet vandaag maar morgen naar Almere. The top node S carries the schema d1.d2 over PER (user) ik and a VP (d1.d2); the VP dominates V (wants) wil and an MP covering niet vandaag maar morgen naar Almere, whose sub-MPs carry the semantics #today (niet vandaag), !tomorrow (maar morgen) and destination.place.town.almere (naar almere).]

Note that the top-node meaning of (5) is compositionally built up out of the meanings of its sub-constituents. Substituting the meaning representations into the corresponding variables yields the update expression (4). The OVIS annotations are in contrast with other corpora and systems (e.g. Miller et al. 1996), in that our annotation convention exploits the Principle of Compositionality of Meaning. (To maintain our annotation convention in the face of phenomena such as non-standard quantifier scope or discontinuous constituents may create complications in the syntactic or semantic analyses assigned to certain sentences and their constituents. It is therefore not clear yet whether our current treatment ought to be viewed as completely general, or whether a more sophisticated treatment in the vein of van den Berg et al. (1994) should be worked out.)

Figure (6) gives an example of the ERROR category for the annotation of the ill-formed sentence Van Voorburg naar van Venlo naar Voorburg ("From Voorburg to from Venlo to Voorburg"):

(6) [Tree diagram: the false start Van Voorburg naar is dominated by an ERROR node which carries no semantics; the remaining MPs carry origin.place.town.venlo (van Venlo) and destination.place.town.voorburg (naar Voorburg), combined by the schema (d1;d2).]

Note that the ERROR category has no semantic annotation; in the top-node semantics of Van Voorburg naar van Venlo naar Voorburg, the meaning of the false start Van Voorburg naar is thus absent:

(7) (origin.place.town.venlo; destination.place.town.voorburg)

The manual annotation of 10,000 OVIS utterances may seem a laborious and error-prone process. In order to expedite this task, a flexible and powerful annotation workbench (SEMTAGS) was developed by Bonnema (1996). SEMTAGS is a graphical interface, written in C using the XVIEW toolkit. It offers all functionality needed for examining, evaluating, and editing syntactic and semantic analyses. SEMTAGS is mainly used for correcting the output of the DOP parser. After the first 100 OVIS utterances were annotated and checked by hand, the parser used the subtrees of these annotations to produce analyses for the next 100 OVIS utterances. These new analyses were checked and corrected by the annotator using SEMTAGS, and were added to the total set of annotations. This new set of 200 analyses was then used by the DOP parser to predict the analyses for a next subset of OVIS utterances. In this incremental, bootstrapping way, 10,000 OVIS utterances were annotated in approximately 600 hours (supervision included). For further information on OVIS and how to obtain the corpus, see http://earth.let.uva.nl/~rens.

3. Using the OVIS corpus for data-oriented semantic analysis

An important advantage of a corpus annotated according to the Principle of Compositionality of Meaning is that the subtrees can directly be used by DOP for computing syntactic/semantic representations for new utterances. The only difference is that we now have composite labels which do not only contain syntactic but also semantic information. By way of illustration, we show how a representation for the input utterance Ik wil van Venlo naar Almere ("I want from Venlo to Almere") can be constructed out of subtrees from the trees in figures (5) and (6):

(8) [S:d1.d2 [PER:user ik] [VP:d1.d2 [V:wants wil] [MP:(d1;d2) [MP ] [MP ]]]] ∘ [MP:d1.d2 [P:origin.place van] [NP:town.venlo venlo]] ∘ [MP:d1.d2 [P:destination.place naar] [NP:town.almere almere]] = [S:d1.d2 [PER:user ik] [VP:d1.d2 [V:wants wil] [MP:(d1;d2) [MP:d1.d2 [P:origin.place van] [NP:town.venlo venlo]] [MP:d1.d2 [P:destination.place naar] [NP:town.almere almere]]]]]

which yields the following top-node update semantics:

(9) user.wants.(origin.place.town.venlo; destination.place.town.almere)

The probability calculations for the semantic DOP model are similar to the original DOP model.
That is, the probability of a subtree t is equal to the number of occurrences of t in the corpus divided by the number of occurrences of all subtrees t' that can be substituted on the same node as t. The probability of a derivation D = t1 ∘ ... ∘ tn is the product of the probabilities of its subtrees ti. The probability of a parse tree T is the sum of the probabilities of all derivations D that produce T. And the probability of a meaning M and a word string W is the sum of the probabilities of all parse trees T of W whose top-node meaning is logically equivalent to M (see Bod et al. 1996).

As with the most probable parse, the most probable meaning M of a word string W cannot be computed in deterministic polynomial time. Although the most probable meaning can be estimated by iterative Monte Carlo sampling (see Bod 1995), the computation of a sufficiently large number of random derivations is currently not efficient enough for a practical application. To date, only the most likely derivation can be computed in near to real-time (by a best-first Viterbi optimization algorithm). We therefore assume that most of the probability mass for each top-node meaning is focussed on a single derivation. Under this assumption, the most likely meaning of a string is the top-node meaning generated by the most likely derivation of that string (see also section 5).

4. Extending DOP to dialogue context: context-dependent subcorpora

We now extend the semantic DOP model to compute the most likely meaning of a sentence given the previous dialogue. In general, the probability of a top-node meaning M and a particular word string Wi given a dialogue context Ci = Wi-1, Wi-2 ... W1 is given by P(M, Wi | Wi-1, Wi-2 ... W1). Since the OVIS user utterances are typically answers to previous system questions, we assume that the meaning of a word string Wi does not depend on the full dialogue context but only on the previous (system) question Wi-1. Under this assumption,

P(M, Wi | Ci) = P(M, Wi | Wi-1)

For DOP, this formula means that the update semantics of a user utterance Wi is computed on the basis of the subcorpus which contains all OVIS utterances (with their annotations) that are answers to the system question Wi-1. This gives rise to the following interesting model for dialogue processing: each system question triggers a context-dependent domain (a subcorpus) by which the user answer is analyzed and interpreted. Since the number of different system questions is a small closed set (see Veldhuijzen van Zanten 1996), we can create off-line for each subcorpus the corresponding DOP parser.

In OVIS, the following context-dependent subcorpora can be distinguished (a simple dispatch sketch follows this list):

(1) place subcorpus: utterances following questions like From where to where do you want to travel?, What is your destination?, etc.
(2) date subcorpus: utterances following questions like When do you want to travel?, When do you want to leave from X?, When do you want to arrive in Y?, etc.
(3) time subcorpus: utterances following questions like At what time do you want to travel?, At what time do you want to leave from X?, At what time do you want to arrive in Y?, etc.
(4) yes/no subcorpus: utterances following y/n-questions like Did you say that ...?, Thus you want to arrive at ...?

Note that a subcorpus can contain utterances whose topic goes beyond the previous system question.
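The question-triggered architecture can be sketched as a simple dispatch: each system question type selects a parser trained off-line on the corresponding subcorpus. The classify_question helper and the parser interface below are our own assumptions for illustration; the paper only fixes the four subcorpora listed above.

```python
# Hypothetical sketch of context-dependent parser selection.
# Each value in `parsers` is assumed to wrap a DOP parser trained
# off-line on one of the four subcorpora.

def classify_question(system_question):
    """Map a system question to one of the four subcorpus keys (assumed helper)."""
    q = system_question.lower()
    if "time" in q:
        return "time"
    if "when" in q:
        return "date"
    if "did you say" in q or "thus you want" in q:
        return "yes_no"
    return "place"

class ContextSensitiveDOP:
    def __init__(self, parsers):
        # parsers: dict mapping subcorpus key -> trained DOP parser
        self.parsers = parsers

    def interpret(self, system_question, word_graph):
        key = classify_question(system_question)
        # The selected parser computes the most likely derivation of the
        # word-graph and returns its top-node update semantics.
        return self.parsers[key].best_meaning(word_graph)
```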
For example, if the system asks From where to where do you want to travel?, and the user answers with From Amsterdam to Groningen tomorrow morning, then the date-expression tomorrow morning ends up in the place-subcorpus.

It is interesting to note that this context-sensitive DOP model can easily be generalized to domain-dependent interpretation: a corpus is clustered into subcorpora, where each subcorpus corresponds to a topic-dependent domain. A new utterance is interpreted by the domain in which it gets highest probability. Since small subcorpora tend to assign higher probabilities to utterances than large subcorpora (because relative frequencies of subtrees in small corpora tend to be higher), it follows that a language user strives for the smallest, most specific domain in which the perceived utterance can be analyzed, thus establishing a most specific common ground.

5. Interfacing DOP with speech

So far, we have dealt with the estimation of the probability P(M, W | C) of a meaning M and a word string W given a dialogue context C. However, in spoken dialogue processing, the word string W is not given. The input for DOP in the OVIS system are word-graphs produced by the speech recognizer (these word-graphs are generated by our project partners from the University of Nijmegen).

A word-graph is a compact representation for all sequences of words that the speech recognizer hypothesizes for an acoustic utterance A (see e.g. figure 10). The nodes of the graph represent points in time, and a transition between two nodes i and j represents a word w that may have been uttered between the corresponding points in time. For convenience we refer to transitions in the word-graph using the notation <i, j, w>. The word-graphs are optimized to eliminate epsilon transitions. Such transitions represent periods of time when the speech recognizer hypothesizes that no words are uttered. Each transition is associated with an acoustic score. This is the negative logarithm (of base 10) of the acoustic probability P(a | w) for a hypothesized word w, normalized by the length of w. Reconverting these acoustic scores into their corresponding probabilities, the acoustic probability P(A | W) for a hypothesized word string W can be computed by the product of the probabilities associated to each transition in the corresponding word-graph path. Figure (10) shows an example of a simplified word-graph for the uttered sentence Ik wil graag vanmorgen naar Leiden ("I'd like to go this morning to Leiden"):

(10) [Word-graph diagram: a chain of transitions labelled ik (46.31), wil (64.86), graag, van, Maarn, naar, Leiden, each carrying its acoustic score, plus a competing transition labelled vanmorgen (258.80) spanning the same stretch as van Maarn.]

The probabilistic interface between DOP and speech word-graphs thus consists of the interface between the DOP probabilities P(M, W | C) and the word-graph probabilities P(A | W) so as to compute the probability P(M, A | C) and argmax_M P(M, A | C). We start by rewriting P(M, A | C) as:

P(M, A | C) = Σ_W P(M, W, A | C) = Σ_W P(M, W | C) · P(A | M, W, C)

The probability P(M, W | C) is computed by the dialogue-sensitive DOP model as explained in the previous section. To estimate the probability P(A | M, W, C) on the basis of the information available in the word-graphs, we must make the following independence assumption: the acoustic utterance A depends only on the word string W, and not on its context C and meaning M (cf. Bod & Scha 1994).
The probabilistic interface between DOP and speech word-graphs thus consists of the interface between the DOP probabilities P(M, W | C) and the word-graph probabilities P(A | W) so as to compute the probability P(M, A | C) and argmax_M P(M, A | C). We start by rewriting P(M, A | C) as:

P(M, A | C) = Σ_W P(M, W, A | C)
            = Σ_W P(M, W | C) · P(A | M, W, C)

The probability P(M, W | C) is computed by the dialogue-sensitive DOP model as explained in the previous section. To estimate the probability P(A | M, W, C) on the basis of the information available in the word-graphs, we must make the following independence assumption: the acoustic utterance A depends only on the word string W, and not on its context C and meaning M (cf. Bod & Scha 1994). Under this assumption:

P(M, A | C) = Σ_W P(M, W | C) · P(A | W)

To make fast computation feasible, we furthermore assume that most of the probability mass for each meaning and acoustic utterance is focused on a single word string W (this will allow for efficient Viterbi best-first search):

P(M, A | C) = P(M, W | C) · P(A | W)

Thus, the probability of a meaning M for an acoustic utterance A given a context C is computed by the product of the DOP probability P(M, W | C) and the word-graph probability P(A | W). As to the parsing of word-graphs, it is well-known that parsing algorithms for word strings can easily be generalized to word-graphs (e.g. van Noord 1995). For word strings, the initialization of the chart usually consists of entering each word w_i into chart entry <i, i+1>. For word-graphs, a transition <i, j, w> corresponds to a word w between positions i and j, where j is not necessarily equal to i+1 as is the case for word strings (see figure 10). It is thus easy to see that for word-graphs the initialization of the chart consists of entering each word w from transition <i, j, w> into chart entry <i, j>. Next, parsing proceeds with the subtrees that are triggered by the dialogue context C (provided that all subtrees are converted into equivalent rewrite rules; see Bod 1992, Sima'an 1995). The most likely derivation is computed by a bottom-up best-first CKY parser adapted to DOP (Sima'an 1995, 1997). This parser has a time complexity which is cubic in the number of word-graph nodes and linear in the grammar size. The top-node meaning of the tree resulting from the most likely derivation is taken as the best meaning M for an utterance A given context C.
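The chart initialization just described can be sketched as follows (our own illustration):

```python
from collections import defaultdict

def initialize_chart(transitions):
    """Enter each word w from transition <i, j, w> into chart entry <i, j>.
    A word string w1 ... wn is the special case with transitions (i, i+1, wi)."""
    chart = defaultdict(list)
    for i, j, w in transitions:
        chart[(i, j)].append(w)
    return chart

# Two competing word-graph paths over the same time points:
graph = [(0, 1, "ik"), (1, 2, "wil"), (2, 3, "van"), (3, 4, "Maarn"),
         (2, 4, "vanmorgen")]
print(initialize_chart(graph)[(2, 4)])   # ['vanmorgen']
```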
6. Evaluation

In our experimental evaluation of DOP we were interested in the following questions: (1) Is DOP fast enough for practical spoken dialogue understanding? (2) Can we constrain the OVIS subtrees without losing accuracy? (3) What is the impact of dialogue context on the accuracy? For all experiments, we used a random split of the 10,000 OVIS trees into a 90% training set and a 10% test set. The training set was divided up into the four subcorpora described in section 4, which served to create the corresponding DOP parsers. The 1000 word-graphs for the test set utterances were used as input. For each word-graph, the previous system question was known to determine the particular DOP parser, while the user utterances were kept apart. As to the complexity of the word-graphs: the average number of transitions per word is 4.2, and the average number of words per word-graph path is 4.6. All experiments were run on an SGI Indigo with a MIPS R10000 processor and 640 Mbyte of core memory. To establish the semantic accuracy of the system, the best meanings produced by the DOP parser were compared with the meanings in the test set. Besides an exact match metric, we also used a more fine-grained evaluation for the semantic accuracy. Following the proposals in Boros et al. (1996) and van Noord et al. (1997), we translated each update meaning into a set of semantic units, where a unit is a triple <CommunicativeFunction, Slot, Value>. For instance, the update

user.wants.travel.destination.
  ([# place.town.almere];
   [! place.town.alkmaar])

translates as:

<denial, destination_town, almere>
<correction, destination_town, alkmaar>

Both the updates in the OVIS test set and the updates produced by the DOP parser were translated into semantic units of the form given above. The semantic accuracy was then evaluated in three different ways: (1) match, the percentage of updates which were exactly correct (i.e. which exactly matched the updates in the test set); (2) precision, the number of correct semantic units divided by the number of semantic units which were produced; (3) recall, the number of correct semantic units divided by the number of semantic units in the test set. As to question (1), we already suspect that it is not efficient to use all OVIS subtrees. We therefore performed experiments with versions of DOP where the subtree collection is restricted to subtrees with a certain maximum depth. The following table shows, for four different maximum depths (where the maximum number of frontier words is limited to 3), the number of subtree types in the training set, the semantic accuracy in terms of match, precision and recall (as percentages), and the average CPU time per word-graph in seconds.

subtree-depth  #subtrees  match  precision  recall  CPU time
      1            3191    76.2     79.4     82.1     0.21
      2           10545    78.5     83.0     84.3     0.86
      3           32140    79.8     84.7     86.2     2.76
      4           64486    80.6     85.8     86.9     6.03

Table 1: Experimental results on OVIS word-graphs

The experiments show that at subtree-depth 4 the highest accuracy is achieved, but that only for subtree-depths 1 and 2 are the processing times fast enough for practical applications. Thus there is a trade-off between efficiency and accuracy: the efficiency deteriorates if the accuracy improves. We believe that a match of 78.5% and a corresponding precision and recall of 83.0% and 84.3% respectively (for the fast processing times at depth 2) is promising enough for further research. Moreover, by testing DOP directly on the word strings (without the word-graphs), a match of 97.8% was achieved. This shows that linguistic ambiguities do not play a significant role in this domain. The actual problem is the ambiguity in the word-graphs (i.e. the multiple paths). Secondly, we are concerned with the question as to whether we can impose constraints on the subtrees other than their depth, in such a way that the accuracy does not deteriorate and perhaps even improves. To answer this question, we kept the maximal subtree-depth constant at 3, and employed the following constraints:

• Eliminating once-occurring subtrees: this led to a considerable decrease for all metrics; e.g. match decreased from 79.8% to 75.5%.

• Restricting subtree lexicalization: restricting the maximum number of words in the subtree frontiers to 3, 2 and 1 respectively showed a consistent decrease in semantic accuracy similar to the restriction of the subtree depth in table 1. The match dropped from 79.8% to 76.9% if each subtree was lexicalized with only one word.

• Eliminating subtrees with only non-head words: this also led to a decrease in accuracy; the most stringent metric decreased from 79.8% to 77.1%. Evidently, there can be important relations in OVIS that involve non-head words.

Finally, we are interested in the impact of dialogue context on semantic accuracy. To test this, we neglected the previous system questions and created one DOP parser for the whole training set. The semantic accuracy metric match dropped from 79.8% to 77.4% (for depth 3). Moreover, the CPU time per sentence deteriorated by a factor of 4 (which is mainly due to the fact that larger training sets yield slower DOP parsers).
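The match/precision/recall metrics used throughout this section can be computed over sets of semantic-unit triples, as in this sketch (our own illustration):

```python
def semantic_accuracy(produced, gold):
    """Precision/recall over sets of <CommunicativeFunction, Slot, Value>
    triples; match is all-or-nothing per update."""
    correct = len(produced & gold)
    precision = correct / len(produced) if produced else 0.0
    recall = correct / len(gold) if gold else 0.0
    return produced == gold, precision, recall

gold = {("denial", "destination_town", "almere"),
        ("correction", "destination_town", "alkmaar")}
produced = {("denial", "destination_town", "almere")}
print(semantic_accuracy(produced, gold))   # (False, 1.0, 0.5)
```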
The following result nicely illustrates how the dialogue context can contribute to better predictions for the correct meaning of an utterance. In parsing the word-graph corresponding to the acoustic utterance Donderdag acht februari ("Thursday eight February"), the DOP model without dialogue context assigned highest probability to a derivation yielding the word string Dordrecht acht februari and its meaning. The uttered word Donderdag was thus interpreted as the town Dordrecht, which was indeed among the other hypothesized words in the word-graph. If the DOP model took into account the dialogue context, the previous system question When do you want to leave? was known and thus triggered the subtrees from the date-subcorpus only, which now correctly assigned the highest probability to Donderdag acht februari and its meaning, rather than to Dordrecht acht februari.

7. Conclusions

We showed how the DOP model can be used for efficient and robust processing of spoken input in the OVIS spoken dialogue system. The system we described uses syntactically and semantically analyzed subtrees from the OVIS corpus to compute from an input word-graph the best utterance together with its meaning. We showed how dialogue context is integrated by dividing up the OVIS corpus into context-dependent subcorpora. Each system question triggers a subcorpus by which the user utterance is analyzed and interpreted. Efficiency was achieved by computing the most probable derivation rather than the most probable parse, and by restricting the depth and lexicalization of the OVIS subtrees. Robustness was achieved by the shallow syntactic/semantic annotations, including the use of the productive ERROR label for repairs and false starts. The experimental evaluation showed that DOP's blending of lexical relations with syntactic-semantic structure yields promising results. The experiments also indicated that elimination of subtrees diminishes the semantic accuracy, even when intuitively unimportant subtrees with only non-head words are discarded. Neglecting dialogue context also diminished the accuracy. As future research, we want to investigate further optimization techniques for DOP, including finite-state approximations. We want to enrich the OVIS utterances with discourse annotations, such as co-reference links, in order to cope with anaphora resolution. We will also extend the annotations with feature structures and/or functional structures associated with the surface structures so as to deal with more complex linguistic phenomena (see Bod & Kaplan 1998).

Acknowledgments

We are grateful to Khalil Sima'an for the use of his DOP parser, and to Remko Bonnema for the use of SEMTAGS and the relevant semantic interfaces. We also thank Remko Bonnema, Ronald Kaplan, Remko Scha and Khalil Sima'an for helpful discussions and comments. The OVIS corpus was annotated by Mike de Kreek and Sascha Schütz. This research was supported by NWO, the Netherlands Organization for Scientific Research (Priority Programme Language and Speech Technology).

References

H. Aust, M. Oerder, F. Seide and V. Steinbiss, 1995. "The Philips automatic train timetable information system", Speech Communication, 17, pp. 249-262.

M. van den Berg, R. Bod and R. Scha, 1994. "A Corpus-Based Approach to Semantic Interpretation", Proceedings Ninth Amsterdam Colloquium, Amsterdam, The Netherlands.

R. Bod, 1992. "A Computational Model of Language Performance: Data Oriented Parsing", Proceedings COLING-92, Nantes, France.

R. Bod, 1995. Enriching Linguistics with Statistics: Performance Models of Natural Language, ILLC Dissertation Series 1995-14, University of Amsterdam.

R. Bod and R. Scha, 1994. "Prediction and Disambiguation by means of Data-Oriented Parsing", Proceedings Twente Workshop on Language Technology (TWLT8), Twente, The Netherlands.

R. Bod, R. Bonnema and R. Scha, 1996. "A Data-Oriented Approach to Semantic Interpretation", Proceedings Workshop on Corpus-Oriented Semantic Analysis, ECAI-96, Budapest, Hungary.

R. Bod and R. Kaplan, 1998. "A Probabilistic Corpus-Driven Model for Lexical-Functional Analysis", this proceedings.

R. Bonnema, 1996. Data-Oriented Semantics, Master's Thesis, Department of Computational Linguistics, University of Amsterdam, The Netherlands.

R. Bonnema, R. Bod and R. Scha, 1997. "A DOP Model for Semantic Interpretation", Proceedings ACL/EACL-97, Madrid, Spain.

M. Boros et al., 1996. "Towards understanding spontaneous speech: word accuracy vs. concept accuracy", Proceedings ICSLP'96, Philadelphia (PA).

E. Charniak, 1996. "Tree-bank Grammars", Proceedings AAAI-96, Menlo Park (CA).

J. Goodman, 1996. "Efficient Algorithms for Parsing the DOP Model", Proceedings Empirical Methods in Natural Language Processing, Philadelphia (PA).

J. Goodman, 1998. Parsing Inside-Out, Ph.D. thesis, Harvard University, Massachusetts.

S. Miller et al., 1996. "A fully statistical approach to natural language interfaces", Proceedings ACL'96, Santa Cruz (CA).

G. van Noord, 1995. "The intersection of finite state automata and definite clause grammars", Proceedings ACL'95, Boston, Massachusetts.

G. van Noord, G. Bouma, R. Koeling and M. Nederhof, 1997. Robust Grammatical Analysis for Spoken Dialogue Systems, unpublished manuscript.

F. Pereira and Y. Schabes, 1992. "Inside-Outside Reestimation from Partially Bracketed Corpora", Proceedings ACL'92, Newark, Delaware.

M. Rajman, 1995. "Approche Probabiliste de l'Analyse Syntaxique", Traitement Automatique des Langues, 36(1-2).

R. Scha, 1992. "Virtuele Grammatica's en Creatieve Algoritmen", Gramma/TTT 1(1).

K. Sima'an, 1995. "An optimized algorithm for Data Oriented Parsing", in R. Mitkov and N. Nicolov (eds.), Recent Advances in Natural Language Processing 1995, volume 136 of Current Issues in Linguistic Theory, John Benjamins, Amsterdam.

K. Sima'an, 1996. "Computational Complexity of Probabilistic Disambiguation by means of Tree Grammars", Proceedings COLING-96, Copenhagen, Denmark.

K. Sima'an, 1997. "Explanation-Based Learning of Data-Oriented Parsing", in T. Ellison (ed.), CoNLL97: Computational Natural Language Learning, ACL'97, Madrid, Spain.

E. Vallduvi, 1990. The Informational Component, Ph.D. thesis, University of Pennsylvania, PA.

G. Veldhuijzen van Zanten, 1996. Semantics of update expressions, Technical Report 24, NWO Priority Programme Language and Speech Technology, The Hague.
A Probabilistic Corpus-Driven Model for Lexical-Functional Analysis

Rens Bod
Department of Computational Linguistics, University of Amsterdam
Spuistraat 134, NL-1012 VB Amsterdam
[email protected]

Ronald Kaplan
Xerox Palo Alto Research Center
3333 Coyote Hill Road, Palo Alto, California 94304
[email protected]

Abstract

We develop a Data-Oriented Parsing (DOP) model based on the syntactic representations of Lexical-Functional Grammar (LFG). We start by summarizing the original DOP model for tree representations and then show how it can be extended with corresponding functional structures. The resulting LFG-DOP model triggers a new, corpus-based notion of grammaticality, and its probability models exhibit interesting behavior with respect to specificity and the interpretation of ill-formed strings.

1. Introduction

Data-Oriented Parsing (DOP) models of natural language embody the assumption that human language perception and production works with representations of past language experiences, rather than with abstract grammar rules (cf. Bod 1992, 95; Scha 1992; Sima'an 1995; Rajman 1995). DOP models therefore maintain large corpora of linguistic representations of previously occurring utterances. New utterances are analyzed by combining (arbitrarily large) fragments from the corpus; the occurrence-frequencies of the fragments are used to determine which analysis is the most probable one. In accordance with the general DOP architecture outlined by Bod (1995), a particular DOP model is described by specifying settings for the following four parameters:

• a formal definition of a well-formed representation for utterance analyses,
• a set of decomposition operations that divide a given utterance analysis into a set of fragments,
• a set of composition operations by which such fragments may be recombined to derive an analysis of a new utterance, and
• a definition of a probability model that indicates how the probability of a new utterance analysis is computed on the basis of the probabilities of the fragments that combine to make it up.

Previous instantiations of the DOP architecture were based on utterance-analyses represented as surface phrase-structure trees ("Tree-DOP"; e.g. Bod 1993; Rajman 1995; Sima'an 1995; Goodman 1996; Bonnema et al. 1997). Tree-DOP uses two decomposition operations that produce connected subtrees of utterance representations: (1) the Root operation selects any node of a tree to be the root of the new subtree and erases all nodes except the selected node and the nodes it dominates; (2) the Frontier operation then chooses a set (possibly empty) of nodes in the new subtree different from its root and erases all subtrees dominated by the chosen nodes. The only composition operation used by Tree-DOP is a node-substitution operation that replaces the left-most nonterminal frontier node in a subtree with a fragment whose root category matches the category of the frontier node. Thus Tree-DOP provides tree-representations for new utterances by combining fragments from a corpus of phrase structure trees. A Tree-DOP representation R can typically be derived in many different ways. If each derivation D has a probability P(D), then the probability of deriving R is the sum of the individual derivation probabilities:

P(R) = Σ_{D derives R} P(D)
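Before turning to the derivation process, the two decomposition operations can be illustrated with a minimal sketch (our own, not from the paper; trees are encoded as nested lists):

```python
import copy

def root(tree, path):
    # Root: make the node reached by `path` (child indices) the new root.
    node = tree
    for i in path:
        node = node[1 + i]        # tree = [label, child_1, ..., child_n]
    return copy.deepcopy(node)

def frontier(tree, paths):
    # Frontier: erase the subtrees below the selected (non-root) nodes.
    tree = copy.deepcopy(tree)
    for path in paths:
        node = tree
        for i in path:
            node = node[1 + i]
        del node[1:]              # keep only the nonterminal label
    return tree

t = ["S", ["NP", "Kim"], ["VP", "eats"]]
print(root(t, [1]))        # ['VP', 'eats']
print(frontier(t, [[1]]))  # ['S', ['NP', 'Kim'], ['VP']]
```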
A Tree-DOP derivation D = <t_1, t_2, ..., t_k> is produced by a stochastic branching process. It starts by randomly choosing a fragment t_1 labeled with the initial category (e.g. S). At each subsequent step, a next fragment is chosen at random from among the set of competitors for composition into the current subtree. The process stops when a tree results with no nonterminal leaves. Let CP(t | CS) denote the probability of choosing a tree t from a competition set CS containing t. Then the probability of a derivation is

P(<t_1, t_2, ..., t_k>) = Π_i CP(t_i | CS_i)

where the competition probability CP(t | CS) is given by

CP(t | CS) = P(t) / Σ_{t' ∈ CS} P(t')

Here, P(t) is the fragment probability for t in a given corpus. Let T_{i-1} = t_1 ∘ t_2 ∘ ... ∘ t_{i-1} be the subanalysis just before the i-th step of the process, let LNC(T_{i-1}) denote the category of the leftmost nonterminal of T_{i-1}, and let r(t) denote the root category of a fragment t. Then the competition set at the i-th step is

CS_i = {t : r(t) = LNC(T_{i-1})}

That is, the competition sets for Tree-DOP are determined by the category of the leftmost nonterminal of the current subanalysis. This is not the only possible definition of competition set. As Manning and Carpenter (1997) have shown, the competition sets can be made dependent on the composition operation. Their left-corner language model would also apply to Tree-DOP, yielding a different definition for the competition sets. But the properties of such Tree-DOP models have not been investigated. Experiments with Tree-DOP on the Penn Treebank and the OVIS corpus show a consistent increase in parse accuracy when larger and more complex subtrees are taken into account (cf. Bod 1993, 95, 98; Bonnema et al. 1997; Sekine & Grishman 1995; Sima'an 1995). However, Tree-DOP is limited in that it cannot account for underlying syntactic (and semantic) dependencies that are not reflected directly in a surface tree. All modern linguistic theories propose more articulated representations and mechanisms in order to characterize such linguistic phenomena. DOP models for a number of richer representations have been explored (van den Berg et al. 1994; Tugwell 1995), but these approaches have remained context-free in their generative power. In contrast, Lexical-Functional Grammar (Kaplan & Bresnan 1982; Kaplan 1989), which assigns representations consisting of a surface constituent tree enriched with a corresponding functional structure, is known to be beyond context-free. In the current work, we develop a DOP model based on representations defined by LFG theory ("LFG-DOP"). That is, we provide a new instantiation for the four parameters of the DOP architecture. We will see that this basic LFG-DOP model triggers a new, corpus-based notion of grammaticality, and that it leads to a different class of probability models which exhibit interesting properties with respect to specificity and the interpretation of ill-formed strings.
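As a concrete illustration of the Tree-DOP probability model, the following sketch (our own; fragments are (root, body) pairs with toy corpus frequencies, under which the relative-frequency normalization cancels) computes CP(t | CS):

```python
def competition_probability(frag, freq):
    """CP(t | CS) = P(t) / sum of P(t') for the t' with the same root
    category. With relative-frequency estimates the corpus-size
    normalizer cancels, so raw frequencies can be used directly."""
    same_root = [f for f in freq if f[0] == frag[0]]   # frag = (root, body)
    return freq[frag] / sum(freq[f] for f in same_root)

freq = {("S", "NP VP"): 2, ("S", "Kim VP"): 1, ("NP", "Kim"): 1}
print(competition_probability(("S", "NP VP"), freq))   # 2/3
```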
2. A DOP model based on Lexical-Functional representations

Representations

The definition of a well-formed representation for utterance-analyses follows from LFG theory, that is, every utterance is annotated with a c-structure, an f-structure and a mapping φ between them. The c-structure is a tree that describes the surface constituent structure of an utterance; the f-structure is an attribute-value matrix marking the grammatical relations of subject, predicate and object, as well as providing agreement features and semantic forms; and φ is a correspondence function that maps nodes of the c-structure into units of the f-structure (Kaplan & Bresnan 1982; Kaplan 1989). The following figure shows a representation for the utterance Kim eats. (We leave out some features to keep the example simple.)

(1) [Figure: the c-structure [S [NP Kim] [VP eats]], whose NP node is φ-linked to the SUBJ f-structure and whose S and VP nodes are φ-linked to the outermost f-structure:
    [ SUBJ  [ PRED 'Kim'
              NUM  SG ]
      TENSE PRES
      PRED  'eat(SUBJ)' ] ]

Note that the φ correspondence function gives an explicit characterization of the relation between the superficial and underlying syntactic properties of an utterance, indicating how certain parts of the string carry information about particular units of underlying structure. As such, it will play a crucial role in our definition for the decomposition and composition operations of LFG-DOP. In (1) we see for instance that the NP node maps to the subject f-structure, and the S and VP nodes map to the outermost f-structure. It is generally the case that the nodes in a subtree carry information only about the f-structure units that the subtree's root gives access to. The notion of accessibility is made precise in the following definition:

An f-structure unit f is φ-accessible from a node n iff either n is φ-linked to f (that is, f = φ(n)) or f is contained within φ(n) (that is, there is a chain of attributes that leads from φ(n) to f).

All the f-structure units in (1) are φ-accessible from, for instance, the S node and the VP node, but the TENSE and top-level PRED are not φ-accessible from the NP node. According to LFG theory, c-structures and f-structures must satisfy certain formal well-formedness conditions. A c-structure/f-structure pair is a valid LFG representation only if it satisfies the Nonbranching Dominance, Uniqueness, Coherence and Completeness conditions (Kaplan & Bresnan 1982). Nonbranching Dominance demands that no c-structure category appears twice in a nonbranching dominance chain; Uniqueness asserts that there can be at most one value for any attribute in the f-structure; Coherence prohibits the appearance of grammatical functions that are not governed by the lexical predicate; and Completeness requires that all the functions that a predicate governs appear as attributes in the local f-structure.

Decomposition operations

Many different DOP models are compatible with the system of LFG representations. In this paper we outline a basic LFG-DOP model which extends the operations of Tree-DOP to take correspondences and f-structure features into account. The decomposition operations for this model will produce fragments of the composite LFG representations. These will consist of connected subtrees whose nodes are in φ-correspondence with sub-units of f-structures. We extend the Root and Frontier decomposition operations of Tree-DOP so that they also apply to the nodes of the c-structure while respecting the fundamental principles of c-structure/f-structure correspondence. When a node is selected by the Root operation, all nodes outside of that node's subtree are erased, just as in Tree-DOP. Further, for LFG-DOP, all φ links leaving the erased nodes are removed and all f-structure units that are not φ-accessible from the remaining nodes are erased. Root thus maintains the intuitive correlation between nodes and the information in their corresponding f-structures. For example, if Root selects the NP in (1), then the f-structure corresponding to the S node is erased, giving (2) as a possible fragment:

(2) [Figure: the c-structure [NP Kim], φ-linked to the f-structure
    [ PRED 'Kim'
      NUM  SG ] ]

In addition the Root operation deletes from the remaining f-structure all semantic forms that are local to f-structures that correspond to erased c-structure nodes, and it thereby also maintains the fundamental two-way connection between words and meanings.
Thus, if Root selects the VP node so that the NP is erased, the subject semantic form 'Kim' is also deleted:

(3) [Figure: the c-structure [VP eats], φ-linked to the f-structure
    [ SUBJ  [ NUM SG ]
      TENSE PRES
      PRED  'eat(SUBJ)' ] ]

As with Tree-DOP, the Frontier operation then selects a set of frontier nodes and deletes all subtrees they dominate. Like Root, it also removes the φ links of the deleted nodes and erases any semantic form that corresponds to any of those nodes. Frontier does not delete any other f-structure features. This reflects the fact that all features are φ-accessible from the fragment's root even when nodes below the frontier are erased. For instance, if the VP in (1) is selected as a frontier node, Frontier erases the predicate 'eat(SUBJ)' from the fragment:

(4) [Figure: the c-structure [S [NP Kim] [VP]], with the f-structure
    [ SUBJ  [ PRED 'Kim'
              NUM  SG ]
      TENSE PRES ] ]

Note that the Root and Frontier operations retain the subject's NUM feature in the VP-rooted fragment (3), even though the subject NP is not present. This reflects the fact, usually encoded in particular grammar rules or lexical entries, that verbs of English carry agreement features for their subjects. On the other hand, fragment (4) retains the predicate's TENSE feature, reflecting the possibility that English subjects might also carry information about their predicate's tense. Subject-tense agreement as encoded in (4) is a pattern seen in some languages (e.g. the split-ergativity pattern of languages like Hindi, Urdu and Georgian) and thus there is no universal principle by which fragments such as (4) can be ruled out. But in order to represent directly the possibility that subject-tense agreement is not a dependency of English, we also allow an S fragment in which the TENSE feature is deleted, as in (5).

(5) [Figure: the c-structure [S [NP Kim] [VP]], with the f-structure
    [ SUBJ  [ PRED 'Kim'
              NUM  SG ] ] ]

Fragment (5) is produced by a third decomposition operation, Discard, defined to construct generalizations of the fragments supplied by Root and Frontier. Discard acts to delete combinations of attribute-value pairs subject to the following restriction: Discard does not delete pairs whose values φ-correspond to remaining c-structure nodes. This condition maintains the essential correspondences of LFG representations: if a c-structure and an f-structure are paired in one fragment provided by Root and Frontier, then Discard also pairs that c-structure with all generalizations of that fragment's f-structure. Fragment (5) results from applying Discard to the TENSE feature in (4). Discard also produces fragments such as (6), where the subject's number in (3) has been deleted:

(6) [Figure: the c-structure [VP eats], with the f-structure
    [ SUBJ  [ ]
      TENSE PRES
      PRED  'eat(SUBJ)' ] ]

Again, since we have no language-specific knowledge apart from the corpus, we have no basis for ruling out fragments like (6). Indeed, it is quite intuitive to omit the subject's number in fragments derived from sentences with past-tense verbs or modals. Thus the specification of Discard reflects the fact that LFG representations, unlike LFG grammars, do not indicate unambiguously the c-structure source (or sources) of their f-structure feature values.

The composition operation

In LFG-DOP the operation for combining fragments, again indicated by ∘, is carried out in two steps. First the c-structures are combined by left-most substitution subject to the category-matching condition, just as in Tree-DOP. This is followed by the recursive unification of the f-structures corresponding to the matching nodes. The result retains the φ correspondences of the fragments being combined.
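The recursive unification step of the composition operation can be sketched as follows (our own illustration, with f-structures as nested dicts; a None result signals a Uniqueness clash):

```python
def unify(f1, f2):
    """Recursively unify two f-structures; return None on a clash."""
    result = dict(f1)
    for attr, v2 in f2.items():
        if attr not in result:
            result[attr] = v2
        elif isinstance(result[attr], dict) and isinstance(v2, dict):
            sub = unify(result[attr], v2)
            if sub is None:
                return None
            result[attr] = sub
        elif result[attr] != v2:
            return None                   # conflicting atomic values
    return result

kim = {"SUBJ": {"PRED": "Kim", "NUM": "SG"}}
ate = {"SUBJ": {}, "TENSE": "PAST", "PRED": "eat(SUBJ)"}
print(unify(kim, ate))
# {'SUBJ': {'PRED': 'Kim', 'NUM': 'SG'}, 'TENSE': 'PAST', 'PRED': 'eat(SUBJ)'}
```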
A derivation for an LFG-DOP representation R is a sequence of fragments the first of which is labeled with S and for which the iterative application of the composition operation produces R. We show in (7) the effect of the LFG composition operation using two fragments from representations of an imaginary corpus containing the sentences Kim eats and People ate. The VP-rooted fragment is substituted for the VP in the first fragment, and the second f-structure unifies with the first f-structure, resulting in a representation for the new sentence Kim ate.

(7) [Figure: the fragment [S [NP Kim] [VP]] with f-structure [SUBJ [PRED 'Kim', NUM SG]] is composed with the fragment [VP ate] with f-structure [SUBJ [ ], TENSE PAST, PRED 'eat(SUBJ)'], yielding a representation of Kim ate with f-structure
    [ SUBJ  [ PRED 'Kim'
              NUM  SG ]
      TENSE PAST
      PRED  'eat(SUBJ)' ] ]

This representation satisfies the well-formedness conditions and is therefore valid. Note that in LFG-DOP, as in Tree-DOP, the same representation may be produced by several derivations involving different fragments. Another valid representation for the sentence Kim ate could be composed from a fragment for Kim that does not preserve the number feature, leading to a representation which is unmarked for number. The probability models we discuss below have the desirable property that they tend to assign higher probabilities to more specific representations. The following derivation produces a valid representation for the intuitively ungrammatical sentence People eats:

(8) [Figure: the fragment [S [NP people] [VP]] with f-structure [SUBJ [PRED 'people', NUM PL]] is composed with a [VP eats] fragment whose subject number has been discarded, with f-structure [SUBJ [ ], TENSE PRES, PRED 'eat(SUBJ)'], yielding a representation of People eats with f-structure
    [ SUBJ  [ PRED 'people'
              NUM  PL ]
      TENSE PRES
      PRED  'eat(SUBJ)' ] ]

This system of fragments and composition thus provides a representational basis for a robust model of language comprehension in that it assigns at least some representations to many strings that would generally be regarded as ill-formed. A correlate of this advantage, however, is the fact that it does not offer a direct formal account of metalinguistic judgments of grammaticality. Nevertheless, we can reconstruct the notion of grammaticality by means of the following definition:

A sentence is grammatical with respect to a corpus if and only if it has at least one valid representation with at least one derivation whose fragments are produced only by Root and Frontier and not by Discard.

Thus the system is robust in that it assigns three representations (singular, plural, and unmarked as the subject's number) to the string People eats, based on fragments for which the number feature of people, eats, or both has been discarded. But unless the corpus contains non-plural instances of people or non-singular instances of eats, there will be no Discard-free derivation and the string will be classified as ungrammatical (with respect to the corpus).

Probability models

As in Tree-DOP, an LFG-DOP representation R can typically be derived in many different ways. If each derivation D has a probability P(D), then the probability of deriving R is again the probability of producing it by any of its derivations. This is the sum of the individual derivation probabilities:

(9) P(R) = Σ_{D derives R} P(D)

An LFG-DOP derivation is also produced by a stochastic branching process which at each step makes a random selection from a competition set of competing fragments.
Let CP(f | CS) denote the probability of choosing a fragment f from a competition set CS containing f; then the probability of a derivation D = <f_1, f_2, ..., f_k> is

(10) P(<f_1, f_2, ..., f_k>) = Π_i CP(f_i | CS_i)

where as in Tree-DOP, CP(f | CS) is expressed in terms of fragment probabilities P(f) by the formula

(11) CP(f | CS) = P(f) / Σ_{f' ∈ CS} P(f')

Tree-DOP is the special case where there are no conditions of validity other than the ones that are enforced at each step of the stochastic process by the composition operation. This is not generally the case and is certainly not the case for the Completeness Condition of LFG representations: Completeness is a property of a final representation that cannot be evaluated at any intermediate steps of the process. However, we can define probabilities for the valid representations by sampling only from such representations in the output of the stochastic process. The probability of sampling a particular valid representation R is given by

(12) P(R | R is valid) = P(R) / Σ_{R' is valid} P(R')

This formula assigns probabilities to valid representations whether or not the stochastic process guarantees validity. The valid representations for a particular utterance u are obtained by a further sampling step and their probabilities are given by:

(13) P(R | R is valid and yields u) = P(R) / Σ_{R' is valid and yields u} P(R')

The formulas (9) through (13) will be part of any LFG-DOP probability model. The models will differ only in how the competition sets are defined, and this in turn depends on which well-formedness conditions are enforced on-line during the stochastic branching process and which are evaluated by the off-line validity sampling process. One model, which we call M1, is a straightforward extension of Tree-DOP's probability model. This computes the competition sets only on the basis of the category-matching condition, leaving all other well-formedness conditions for off-line sampling. Thus for M1 the competition sets are defined simply in terms of the categories of a fragment's c-structure root node. Suppose that F_{i-1} = f_1 ∘ f_2 ∘ ... ∘ f_{i-1} is the current subanalysis at the beginning of step i in the process, that LNC(F_{i-1}) denotes the category of the leftmost nonterminal node of the c-structure of F_{i-1}, and that r(f) is now interpreted as the root-node category of f's c-structure component. Then the competition set for the i-th step is

(14) CS_i = {f : r(f) = LNC(F_{i-1})}

Since these competition sets depend only on the category of the leftmost nonterminal of the current c-structure, the competition sets group together all fragments with the same root category, independent of any other properties they may have or that a particular derivation may have. The competition probability for a fragment can be expressed by the formula

(15) CP(f) = P(f) / Σ_{f' : r(f') = r(f)} P(f')

We see that the choice of a fragment at a particular step in the stochastic process depends only on the category of its root node; other well-formedness properties of the representation are not used in making fragment selections. Thus, with this model the stochastic process may produce many invalid representations; we rely on sampling of valid representations and the conditional probabilities given by (12) and (13) to take the Uniqueness, Coherence, and Completeness Conditions into account.
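Formulas (12) and (13) describe an off-line renormalization over valid outputs; a Monte Carlo version of this validity sampling can be sketched as follows (our own illustration; sample_representation, is_valid and yields are hypothetical stand-ins for the stochastic branching process and the LFG well-formedness checks, and representations are assumed hashable):

```python
from collections import Counter

def conditional_probabilities(sample_representation, is_valid, yields, u, n=10000):
    """Estimate P(R | R is valid and yields u) by sampling representations
    from the branching process and renormalizing over the valid ones."""
    counts = Counter()
    for _ in range(n):
        r = sample_representation()
        if is_valid(r) and yields(r) == u:
            counts[r] += 1
    total = sum(counts.values())
    return {r: c / total for r, c in counts.items()} if total else {}
```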
Another possible model (M2) defines the competition sets so that they take a second condition, Uniqueness, into account in addition to the root node category. For M2 the competing fragments at a particular step in the stochastic derivation process are those whose c-structures have the same root node category as LNC(F_{i-1}) and also whose f-structures are consistently unifiable with the f-structure of F_{i-1}. Thus the competition set for the i-th step is

(16) CS_i = {f : r(f) = LNC(F_{i-1}) and f is unifiable with the f-structure of F_{i-1}}

Although it is still the case that the category-matching condition is independent of the derivation, the unifiability requirement means that the competition sets vary according to the representation produced by the sequence of previous steps in the stochastic process. Unifiability must be determined at each step in the process to produce a new competition set, and the competition probability remains dependent on the particular step:

(17) CP(f_i | CS_i) = P(f_i) / Σ_{f : r(f) = r(f_i) and f is unifiable with F_{i-1}} P(f)

On this model we again rely on sampling and the conditional probabilities (12) and (13) to take just the Coherence and Completeness Conditions into account. In model M3 we define the stochastic process to enforce three conditions, Coherence, Uniqueness and category-matching, so that it only produces representations with well-formed c-structures that correspond to coherent and consistent f-structures. The competition probabilities for this model are given by the obvious extension of (17). It is not possible, however, to construct a model in which the Completeness Condition is enforced during the derivation process. This is because the satisfiability of the Completeness Condition depends not only on the results of previous steps of a derivation but also on the following steps (see Kaplan & Bresnan 1982). This nonmonotonic property means that the appropriate step-wise competition sets cannot be defined and that this condition can only be enforced at the final stage of validity sampling. In each of these three models the category-matching condition is evaluated on-line during the derivation process while other conditions are either evaluated on-line or off-line by the after-the-fact sampling process. LFG-DOP is crucially different from Tree-DOP in that at least one validity requirement, the Completeness Condition, must always be left to the post-derivation process. Note that a number of other models are possible which enforce other combinations of these three conditions.

3. Illustration and properties of LFG-DOP

We illustrate LFG-DOP using a very small corpus consisting of the two simplified LFG representations shown in (18):

(18) [Figure: the representations of the corpus sentences John fell, with f-structure [SUBJ [PRED 'John', NUM SG], PRED 'fall(SUBJ)'], and People walked, with f-structure [SUBJ [PRED 'people', NUM PL], PRED 'walk(SUBJ)']]

The fragments from this corpus can be composed to provide representations for the two observed sentences plus two new utterances, John walked and People fell. This is sufficient to demonstrate that the probability models M1 and M2 assign different probabilities to particular representations. We have omitted the TENSE feature and the lexical categories N and V to reduce the number of the fragments we have to deal with. Applying the Root and Frontier operators systematically to the first corpus representation produces the fragments in the first column of (19), while the second column shows the additional f-structure that is associated with each c-structure by the Discard operation. A total of 12 fragments are produced from this representation, and by analogy 12 fragments with either PL or unmarked NUM values will also result from People walked.
Note that the [S NP VP] fragment with the unspecified NUM value is produced for both sentences and thus its corpus frequency is 2. There are 14 other S-rooted fragments, 4 NP-rooted fragments, and 4 VP-rooted fragments; each of these occurs only once. These fragments can be used to derive three different representations for John walked (singular, plural, and unmarked as the subject's number). To facilitate the presentation of our derivations and probability calculations, we denote each fragment by an abbreviated name that indicates its c-structure root-node category, the sequence of its frontier-node labels, and whether its subject's number is SG, PL, or unmarked (indicated by U). Thus the first fragment in (19) is referred to as S/John-fell/SG and the unmarked fragment that Discard produces from it is referred to as S/John-fell/U. Given this naming convention, we can specify one of the derivations for John walked by the expression S/NP-VP/U ∘ NP/John/SG ∘ VP/walked/U, corresponding to an analysis in which the subject's number is marked as SG. The fragment VP/walked/U of course comes from People walked, the second corpus sentence, and does not appear in (19).

(19) [Figure: the fragments produced from the representation of John fell. The first column shows the fragments produced by Root and Frontier, with f-structures containing PRED 'John', NUM SG and PRED 'fall(SUBJ)' as appropriate; the second column shows, for each of them, the additional f-structure with unmarked NUM that Discard produces.]

Model M1 evaluates only the Tree-DOP root-category condition during the stochastic branching process, and the competition sets are fixed independent of the derivation. The probability of choosing the fragment S/NP-VP/U, given that an S-rooted fragment is required, is always 2/16, its frequency divided by the sum of the frequencies of all the S fragments. Similarly, the probability of then choosing NP/John/SG to substitute at the NP frontier node is 1/4, since the NP competition set contains 4 fragments each with frequency 1. Thus, under model M1 the probability of producing the complete derivation S/NP-VP/U ∘ NP/John/SG ∘ VP/walked/U is 2/16 × 1/4 × 1/4 = 2/256. This probability is small because it indicates the likelihood of this derivation compared to other derivations for John walked and for the three other analyzable strings. The computation of the other M1 derivation probabilities for John walked is left to the reader. There are 5 different derivations for the representation with SG number and 5 for the PL number, while there are only 3 ways of producing the unmarked number U. The conditional probabilities for the particular representations (SG, PL, U) can be calculated by (9) and (13), and are given below.

P(NUM=SG | valid and yield = John walked) = .353
P(NUM=PL | valid and yield = John walked) = .353
P(NUM=U | valid and yield = John walked) = .294

We see that the two specific representations are equally likely and each of them is more probable than the representation with unmarked NUM. Model M2 produces a slightly different distribution of probabilities. Under this model, the consistency requirement is used in addition to the root-category matching requirement to define the competition sets at each step of the branching process. This means that the first fragment that instantiates the NUM feature to either SG or PL constrains the competition sets for the following choices in a derivation. Thus, having chosen the NP/John/SG fragment in the derivation S/NP-VP/U ∘ NP/John/SG ∘ VP/walked/U, only 3 VP fragments instead of 4 remain in the competition set at the next step, since the VP/walked/PL fragment is no longer available.
The probability for this derivation under model M2 is therefore 2/16 × 1/4 × 1/3 = 2/192, slightly higher than the probability assigned to it by M1. Table 1 shows the complete set of derivations and their M2 probabilities for John walked.

S/NP-VP/U ∘ NP/John/SG ∘ VP/walked/U    SG    2/16 × 1/4 × 1/3
S/NP-VP/SG ∘ NP/John/SG ∘ VP/walked/U   SG    1/16 × 1/3 × 1/3
S/NP-VP/SG ∘ NP/John/U ∘ VP/walked/U    SG    1/16 × 1/3 × 1/3
S/NP-walked/U ∘ NP/John/SG              SG    1/16 × 1/4
S/John-VP/SG ∘ VP/walked/U              SG    1/16 × 1/3

P(NUM=SG and yield = John walked) = 35/576 = .061
P(NUM=SG | valid and yield = John walked) = 70/182 = .38

S/NP-VP/U ∘ NP/John/U ∘ VP/walked/PL    PL    2/16 × 1/4 × 1/4
S/NP-VP/PL ∘ NP/John/U ∘ VP/walked/PL   PL    1/16 × 1/3 × 1/3
S/NP-VP/PL ∘ NP/John/U ∘ VP/walked/U    PL    1/16 × 1/3 × 1/3
S/NP-walked/PL ∘ NP/John/U              PL    1/16 × 1/3
S/John-VP/U ∘ VP/walked/PL              PL    1/16 × 1/4

P(NUM=PL and yield = John walked) = 33.5/576 = .058
P(NUM=PL | valid and yield = John walked) = 67/182 = .37

S/NP-VP/U ∘ NP/John/U ∘ VP/walked/U     U     2/16 × 1/4 × 1/4
S/NP-walked/U ∘ NP/John/U               U     1/16 × 1/4
S/John-VP/U ∘ VP/walked/U               U     1/16 × 1/4

P(NUM=U and yield = John walked) = 22.5/576 = .039
P(NUM=U | valid and yield = John walked) = 45/182 = .25

Table 1: Model M2 derivations, subject number features, and probabilities for John walked

The total probability for the derivations that produce John walked is .158, and the conditional probabilities for the three representations are:

P(NUM=SG | valid and yield = John walked) = .38
P(NUM=PL | valid and yield = John walked) = .37
P(NUM=U | valid and yield = John walked) = .25

For model M2 the unmarked representation is less likely than under M1, and now there is a slight bias in favor of the value SG over PL. The SG value is favored because it is carried by substitutions for the left-most word of the utterance and thus reduces competition for subsequent choices. The value PL would be more probable for the sentence People fell.
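The figures in Table 1 can be checked mechanically; the following snippet (our own verification) sums the derivation probabilities with exact fractions:

```python
from fractions import Fraction as F

sg = [F(2,16)*F(1,4)*F(1,3), F(1,16)*F(1,3)*F(1,3),
      F(1,16)*F(1,3)*F(1,3), F(1,16)*F(1,4), F(1,16)*F(1,3)]
pl = [F(2,16)*F(1,4)*F(1,4), F(1,16)*F(1,3)*F(1,3),
      F(1,16)*F(1,3)*F(1,3), F(1,16)*F(1,3), F(1,16)*F(1,4)]
u  = [F(2,16)*F(1,4)*F(1,4), F(1,16)*F(1,4), F(1,16)*F(1,4)]

total = sum(sg) + sum(pl) + sum(u)            # 91/576, about .158
for name, ds in (("SG", sg), ("PL", pl), ("U", u)):
    print(name, round(float(sum(ds) / total), 2))   # SG .38, PL .37, U .25
```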
Thus both models give higher probability to the more specific representations. Moreover, M1 assigns the same probability to SG and PL, whereas M2 doesn't. M2 reflects a left-to-right bias (which might be psycholinguistically interesting, a so-called primacy effect), whereas M1 is, like Tree-DOP, order independent. It turns out that all LFG-DOP probability models (M1, M2 and M3) display a preference for the most specific representation. This preference partly depends on the number of derivations: specific representations tend to have more derivations than generalized (i.e., unmarked) representations, and consequently tend to get higher probabilities, other things being equal. However, this preference also depends on the number of feature values: the more feature values, the longer the minimal derivation length must be in order to get a preference for the most specific representation (Cormons, forthcoming). The bias in favor of more specific representations, and consequently fewer Discard-produced feature generalizations, is especially interesting for the interpretation of ill-formed input strings. Bod & Kaplan (1997) show that in analyzing an intuitively ungrammatical string like These boys walks, there is a probabilistic accumulation of evidence for the plural interpretation over the singular and unmarked one (for all models M1, M2 and M3). This is because both These and boys carry the PL feature while only walks is a source for the SG feature, leading to more derivations for the PL reading of These boys walks. In case of "equal evidence", as in the ill-formed string Boys walks, model M1 assigns the same probability to PL and SG, while models M2 and M3 prefer the PL interpretation due to their left-to-right bias.

4. Conclusion and computational issues

Previous DOP models were based on context-free tree representations that cannot adequately represent all linguistic phenomena. In this paper, we gave a DOP model based on the more articulated representations provided by LFG theory. LFG-DOP combines the advantages of two approaches: the linguistic adequacy of LFG together with the robustness of DOP. LFG-DOP triggers a new, corpus-based notion of grammaticality, and its probability models exhibit a preference for the most specific analysis containing the fewest number of feature generalizations. The main goal of this paper was to provide the theoretical background of LFG-DOP. As to the computational aspects of LFG-DOP, the problem of finding the most probable representation of a sentence is NP-hard even for Tree-DOP. This problem may be tackled by Monte Carlo sampling techniques (as in Tree-DOP, cf. Bod 1995) or by computing the Viterbi n best derivations of a sentence. Other optimization heuristics may consist of restricting the fragment space, for example by putting an upper bound on the fragment depth, or by constraining the decomposition operations. To date, a couple of LFG-DOP implementations are either operational (Cormons, forthcoming) or under development, and corpora with LFG representations have recently been developed (at XRCE France and Xerox PARC). Experiments with these corpora will be presented in due time.

Acknowledgments

We thank Joan Bresnan, Mary Dalrymple, Mark Johnson, Martin Kay, John Maxwell, Remko Scha, Khalil Sima'an, Andy Way and three anonymous reviewers for helpful comments. We are most grateful to Boris Cormons whose comments were particularly helpful. This research was supported by NWO, the Dutch Organization for Scientific Research. The initial stages of this work were carried out while the second author was a Fellow of the Netherlands Institute for Advanced Study (NIAS). Subsequent stages were also carried out while the first author was a Consultant at Xerox PARC.

References

M. van den Berg, R. Bod and R. Scha, 1994. "A Corpus-Based Approach to Semantic Interpretation", Proceedings Ninth Amsterdam Colloquium, Amsterdam, The Netherlands.

R. Bod, 1992. "A Computational Model of Language Performance: Data Oriented Parsing", Proceedings COLING-92, Nantes, France.

R. Bod, 1993. "Using an Annotated Corpus as a Stochastic Grammar", Proceedings EACL'93, Utrecht, The Netherlands.

R. Bod, 1995. Enriching Linguistics with Statistics: Performance Models of Natural Language, ILLC Dissertation Series 1995-14, University of Amsterdam.

R. Bod, 1998. "Spoken Dialogue Interpretation with the DOP Model", this proceedings.

R. Bod and R. Kaplan, 1997. "On Performance Models for Lexical-Functional Analysis", paper presented at the Computational Psycholinguistics Conference 1997, Berkeley (CA).

R. Bonnema, R. Bod and R. Scha, 1997. "A DOP Model for Semantic Interpretation", Proceedings ACL/EACL-97, Madrid, Spain.

B. Cormons, forthcoming. Analyse et désambiguïsation: Une approche purement à base de corpus (Data-Oriented Parsing) pour le formalisme des Grammaires Lexicales Fonctionnelles, PhD thesis, Université de Rennes, France.

J. Goodman, 1996. "Efficient Algorithms for Parsing the DOP Model", Proceedings Empirical Methods in Natural Language Processing, Philadelphia, Pennsylvania.
R. Kaplan, 1989. "The Formal Architecture of Lexical-Functional Grammar", Journal of Information Science and Engineering, vol. 5, 305-322.

R. Kaplan and J. Bresnan, 1982. "Lexical-Functional Grammar: A Formal System for Grammatical Representation", in J. Bresnan (ed.), The Mental Representation of Grammatical Relations, The MIT Press, Cambridge, MA.

C. Manning and B. Carpenter, 1997. "Probabilistic parsing using left corner language models", Proceedings IWPT'97, Boston (Mass.).

M. Rajman, 1995. "Approche Probabiliste de l'Analyse Syntaxique", Traitement Automatique des Langues, vol. 36(1-2).

R. Scha, 1992. "Virtuele Grammatica's en Creatieve Algoritmen", Gramma/TTT 1(1).

S. Sekine and R. Grishman, 1995. "A Corpus-based Probabilistic Grammar with Only Two Non-terminals", Proceedings Fourth International Workshop on Parsing Technologies, Prague, Czech Republic.

K. Sima'an, 1995. "An optimized algorithm for Data Oriented Parsing", in R. Mitkov and N. Nicolov (eds.), Recent Advances in Natural Language Processing 1995, John Benjamins, Amsterdam.

D. Tugwell, 1995. "A State-Transition Grammar for Data-Oriented Parsing", Proceedings European Chapter of the ACL'95, Dublin, Ireland.
Anchoring Floating Quantifiers in Japanese-to-English Machine Translation

Francis Bond†, Daniela Kurz‡ and Satoshi Shirai†

† NTT Communication Science Laboratories
2-4 Hikari-dai, Seika-cho, Soraku-gun, Kyoto, Japan, 619-0237
{bond, shirai}@cslab.kecl.ntt.co.jp

‡ Department of Computational Linguistics, University of the Saarland
Postfach 1150, D-66041 Saarbrücken, Germany
kurz@coli.uni-sb.de

Abstract

In this paper we present an algorithm to anchor floating quantifiers in Japanese, a language in which quantificational nouns and numeral-classifier combinations can appear separated from the noun phrase they quantify. The algorithm differentiates degree and event modifiers from nouns that quantify noun phrases. It then finds a suitable anchor for such floating quantifiers. To do this, the algorithm considers the part of speech of the quantifier and the target, the semantic relation between them, the case marker of the antecedent and the meaning of the verb that governs the two constituents. The algorithm has been implemented and tested in a rule-based Japanese-to-English machine translation system, with an accuracy of 76% and a recall of 97%.

1 Introduction

One interesting phenomenon in Japanese is the fact that quantifiers can appear in two main positions, as pre-modifier in a noun phrase (1), or 'floating' as adjuncts to the verb phrase, typically in pre-verbal position (2).1,2

(1) watashi-wa 3-ko-no kēki-wo tabeta
    I-TOP 3-CL-ADN cake-ACC ate
    I ate three cakes

(2) watashi-wa kēki-wo 3-ko tabeta
    I-TOP cake-ACC 3-CL ate
    I ate three cakes

Quantifier 'float' of numeral-classifier combinations is widely discussed in the linguistic literature.3 Much of the discussion focuses on identifying the conditions under which a quantifier can appear in the adjunct position. The explanations range from configurational (Inoue, 1983; Miyagawa, 1989) to discourse based (Downing, 1996; Alam, 1997); we shall discuss these further below. There has been almost no discussion of other floating quantifiers, such as quantificational nouns.

We call the process of identifying the noun phrase being quantified by a floating quantifier 'anchoring' the quantifier. The necessity of anchoring floating quantifiers for many natural language processing tasks is widely recognized (Asahioka et al., 1990; Bond et al., 1996), and is important not only for machine translation but for the interpretation of Japanese in general. However, although there are several NLP systems that incorporate some solution to the problem of floating quantifiers, to the best of our knowledge, no algorithm for anchoring floating quantifiers has been given. We propose such an algorithm in this paper. The algorithm uses information about case-marking, sentence structure, part-of-speech, and noun and verb meaning. The algorithm has been implemented and tested within the Japanese-to-English machine translation system ALT-J/E (Ikehara et al., 1991).

The next section describes the phenomenon of quantifier float in more detail. We then propose our algorithm to identify and anchor floating quantifiers in Section 3. The results of implementing the algorithm in ALT-J/E are discussed in Section 4 and some remaining problems identified. The conclusion summarises the implementation of the algorithm and highlights some of its strengths.

1 Quantifiers are shown in bold, the noun phrases they quantify are underlined.
2 This phenomenon exists in other languages, such as Korean. We will, however, restrict our discussion to Japanese in this paper.
3 The name 'float' comes from early transformational accounts, where the quantifier was said to 'float' out of the noun phrase. Although this analysis has largely been abandoned, and we disagree with it, we shall continue with accepted practice and call a quantifier in the adjunct position a floating quantifier.
2 Quantifier float in Japanese

First we will give a definition of quantifiers. Semantically, quantifiers are elements that serve to quantify, or enumerate, some target. The target can be an entity, in which case the number of objects is quantified, or an action, in which case the number of events (i.e. iterations of the action) are quantified. The quantification can be by a cardinal number, or by a more vague expression, like several or many. In Japanese, quantifiers (Q) are mainly realised in two ways: numeral-classifier combinations (XC) and quantificational nouns (N). Note that these nouns are often treated as adverbs, as they typically function as adjuncts that modify verbs, a function prototypically carried out by adverbs. They can however head noun phrases, and take some case-markers, so we classify them as nouns.

Numeral classifiers form a closed class, although a large one. Japanese and Korean both have two or three hundred numeral classifiers (not counting units), although typically individual speakers use far fewer, between 30 and 80 (Downing, 1995, 346). Syntactically, numeral classifiers are a subclass of nouns. The main property distinguishing them from prototypical nouns is that they cannot stand alone. Typically they postfix to numerals, forming a quantifier phrase, although they can also combine with the quantificational prefix sū "some" or the interrogative nani "what":

(3) 2-hiki "2 animals" (Numeral)
(4) sū-hiki "some animals" (Quantifier)
(5) nan-biki "how many animals" (Interrogative)

Semantically, classifiers both classify and quantify the referent of the noun phrase they collocate with. Quantificational nouns, such as takusan "much/many", subete "all" and ichibu "some", only quantify their targets; there is no classification involved.

Numeral classifier combinations appear in seven major patterns of use (following Asahioka et al. (1990)) as shown below (T refers to the quantified target noun phrase, m is a case-marker):

Type         Form       XC  N
pre-nominal  Q-no T-m   +   +
appositive   TQ-m       +   -
floating     T-m Q      +   +
             Q T-m
partitive    T-no Q-m   +   +
attributive  QT-m       +   -
anaphoric    Q-m        +   -
predicative  T-wa Q-da  +   -

Table 1: Types of quantifier constructions

Noun quantifiers cannot appear in the appositive, attributive, anaphoric and predicative complement patterns. In the pre-nominal construction the relation between the target noun phrase and quantifier is explicit. For numeral-classifier combinations the quantification can be of the object denoted by the noun phrase itself as in (6); or of a sub-part of it as in (7) (see Bond and Paik (1997) for a fuller discussion). For nouns, only the object denoted by the noun itself can be quantified.

(6) 3-tsū-no tegami
    3-CL-ADN letter
    3 letters

(7) 3-mai-no tegami
    3-CL-ADN letter
    a 3 page letter

In the partitive construction the quantifier restricts a subset of a known amount: e.g., tegami-no 3-tsū "three of the letters". This is a very different construal to the pre-nominal construction. Only rational quantificational nouns can appear in the partitive construction. The floating construction, on the other hand, has the same quantificational meaning as the pre-nominal. Two studies indicate that there are pragmatic differences (Downing, 1996; Kim, 1995). Pre-nominal constructions typically are used to introduce important referents, with non-existential predicates, while floating constructions typically introduce new number information. In addition floating constructions are used
Two studies indicate that there are pragmatic differences (Downing, 1996; Kim, 1995). Pre-nominal constructions typically are used to introduce important referents, with non- existential predicates, while floating construc- tions typically introduce new number informa- tion. In addition floating constructions are used 153 when the nominal has other modifiers, and are more common in spoken text. We will restrict the following discussion to the difference between the pre-nominal and floating uses. 2.1 Restrictions on quantifier float There have been many attempts to describe the situations under which the floating construction is possible, almost all of which only consider numeral-classifier constructions. The earliest generative approaches suggested that the target in the floating construction must be either subject or object. Inoue (1983) pointed out that quasi-objects, noun phrases marked with the accusative case-marker but failing other tests for objecthood, could also be targets. Miyagawa (1989) gives a comprehensive con- figurational explanation, where the target and quantifier must mutually c-command each other (that is, neither the target nor the quantifier dominates the other, and the first branching node that dominates either one, dominates the other). The restriction to nominative and accusative targets is explained by proposing a difference in structure. Verb arguments sub- categorized for in the lexicon are noun phrases, where the case-marker is a clitic and thus can be c-commanded, whereas adjuncts are headed by their markers, to form post-positional phrases which are thus not available as targets. The c-command relation is applied to both the noun phrases themselves and traces. Quan- tifiers can be scrambled (moved from their base position after their target) leaving a trace if the target is an affected Theme NP, and the target and quantifier are governed by the verb that assigns this thematic role. Thus quantifiers as- sociated with affected themes can move within the sentence. Affected themes are things that axe "changed, created, converted, extinguished, consumed, destroyed or gotten-rid of". Miyagawa (1989, 57) proposes a syntactic test for affectiveness: affected themes can occure in the intransitive resultative construction -te-aru. Alam (1997) looks at the problem from a dif- ferent angle, and proposes that only quantifiers which are interpreted "distributively or as a quantified event" can float, as they take wide scope beyond the NP. A quantified noun phrase will also quantify the event if the noun phrase measures-out the event, where "direct internal arguments undergoing change in the event de- scribed by the verb measure out the event" a very similar description to that of affected theme. However, Jackendoff (1996) has shown that a wide variety of arguments can measure out processes, not just subjects and objects, but also the complements of prepositional phrases. Which case-roles measure out the process can be pragmatically determined as well as lexically stipulated, so it is not a simple matter to deter- mine which arguments are relevent. The excellent distributional analysis of Down- ing (1996) shows that actual cases of float- ing tend to be absolutive, that is quantifiers largely float from intransitive subjects (67%) or direct objects of transitive verbs (24%) rather than from transitive subjects (4%) or indirect objects (1%). 
On the question of why quantifiers appear outside of the noun phrases they quantify, there have been two explanations: discourse-new information floats to the pre-verb focus position (Downing, 1996; Kim, 1995), and quantifiers float from noun phrases that 'measure out' an event (Alam, 1997).

We speculate that there may be a performance-based reason. Hawkins (1994) has shown that many phenomena claimed to be discourse related are in fact largely due to performance. However, we have not yet compiled sufficient empirical evidence to show this conclusively.

3 An algorithm to identify and anchor floating quantifiers

The proposed algorithm is outlined in Figure 1. In our implementation it is applied to each of one or more candidate outputs of a Japanese dependency parser as part of the semantic ranking.

    For each unit sentence:
      Identify potential floating quantifiers (QP)
        [Numeral-classifier or Quantificational Noun]
      Identify potential anchors (NP)
        [nominative or accusative]
      Discard bad combinations
        [semantic anomalies, degree modifiers, event modifiers]
      Rank remaining combinations
        Prefer accusative
        Prefer anchor on the left
        Prefer closest
      Anchor the best candidate pair(s)

Figure 1: Algorithm to anchor floating quantifiers

3.1 Identify potential floating quantifiers

The first step is to identify potential floating quantifiers. Every adjunct case element headed by a noun is checked. All numeral classifier combinations are potential candidates. An adjunct must meet two conditions to be considered a floating quantificational noun, one semantic and one syntactic. The semantic criterion is that one of the noun's senses must be subsumed by quanta, few/some or all-part. The syntactic criterion is that the part of speech subcategory must be one of degree or quantifier adverbial (a category that actually includes both true adverbs and adverb-like nouns). We use the Goi-Taikei (Ikehara et al., 1997) to test for the senses and Miyazaki et al. (1995) for the syntactic classification.

3.2 Identify potential anchors

All noun phrases that matched a case-slot marked with -ga (nominative) or -o (accusative) are accepted as potential anchors. This is the traditional criterion given for potential anchors. Note that even if the surface marker is different, for example when the case-marker is overwritten by a focus-marker such as -wa "topic", the 'canonical' case-marker will be found by our parser.

Noun phrases marked with -ni (dative) have been shown to be permissible candidates, but we do not allow them. Such sentences are, however, rare outside linguistics papers. We found no such candidates in the sentences we examined, and Downing (1996, 239) found only one in ninety-six examples. When we tried allowing dative noun phrases, it significantly reduced the performance of our algorithm: every dative noun phrase selected was wrong. If we could determine which noun phrases measure out the action, then they should also be considered as candidates, but we have no way to identify them at present.

3.3 Discard bad combinations

Some combinations of anchor and quantifier can be ruled out. We have identified three cases: semantically anomalous cases; sentences where the quantifier modifies the verb as a degree modifier; and sentences where the quantifier modifies the verb as a frequency modifier.

3.3.1 Semantically anomalous cases

Singular noun phrases. In Japanese, pronouns and names are typically marked with a collectiviser (such as -tachi) if there are multiple referents (see e.g.
Martin (1988, 143-154)). A pronoun or name not so marked characteristically has a singular interpretation. For names this can be overridden by a numeral-classifier combination (8), although it is rare, but not by a quantificational noun (9).

(8) Matsuo-san-ga 3-nin shabetta
    Matsuo-HON-NOM 3-CL spoke
    3 Matsuos spoke

(9) Matsuo-san-ga takusan shabetta
    Matsuo-HON-NOM many spoke
    Matsuo spoke a lot

In all the texts we examined, we found no examples of names modified by floating numeral-classifier combinations. We therefore block all pronouns and names not modified by a collectiviser from serving as anchors to floating quantifiers.

In Japanese, there is not a clear division between pronouns and common nouns, particularly kin-terms such as ojisan "grandfather/old man". Pronouns can be modified in the same way as common nouns, and kin-terms are often used to refer to non-kin. Pronouns modified by quantifiers need to be translated by more general terms as in (10).

(10) kanojo-tachi-ga 3-nin kita
     she-COL-NOM 3-CL came
     ? 3 she came
     The 3 girls came

Classifier semantic restrictions. For numeral classifiers, the selectional restrictions of the classifier can be used to disallow certain combinations. For example, -kai "event" can only be used to modify event-nouns such as shokuji "meal" or jishin "earthquake". However, the semantics are very complicated, and there is a great deal of variation, as a classifier can select not just for the object denoted by its target but also a sub-part of it. In addition, classifiers can be used to select meanings figuratively, coercing a new interpretation of their head. Bond and Paik (1997) suggest a way of dealing with this in the generative lexical framework of Pustejovsky (1995) but it requires more information about the conceptual structure of noun phrases than is currently available.

For the time being, we use a simple table of forbidden combinations. For example, pointo "point" will not be used to quantify nouns denoting agents, places or abstract entities.

3.3.2 Degree modification

Noun quantifiers can be used as degree modifiers as well as quantifying some referent. If the predicate is used to state a property of the potential anchor, then a noun quantifier will characteristically be a degree modifier. We use the verbal semantic attributes given in the Goi-Taikei (Ikehara et al., 1997) to test for this relationship. Anchoring will be blocked either if the potential anchor is nominative and the verbal semantic attribute is one of attribute transfer, existence, attribute or result, or if the anchor is accusative and the verbal semantic attribute is physical/attribute transfer. Sentence (11) shows this constraint in action:

(11) kodomo-ga sukoshi samui
     child-NOM a little cold
     * A few children are cold
     The child is a little cold

3.3.3 Event modification

The final case we need to consider is where the noun quantifier can quantify the event or the affected theme of the event, such as (12). In Japanese, either reading is possible when the quantifier is in pre-verbal position. Anchoring the quantifier is equivalent to choosing the theme reading.

(12) kare-wa keeki-wo takusan tabeta
     he-TOP cake-ACC much ate
     He ate cake a lot (event)
     He ate a lot of cake (theme)

Examining our corpus showed the theme reading to be the default. Of course, if the event is modified elsewhere, for example by a temporal modifier, then different readings are possible.
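Taken together, the identification and blocking steps in Sections 3.1-3.3 can be summarised in a short sketch. This is only an illustration of the checks described above, not the implemented system: the dictionary representation and all constant names are hypothetical stand-ins for the Goi-Taikei senses, the Miyazaki et al. (1995) part-of-speech subcategories, and the table of forbidden classifier-noun combinations.

```python
# Sketch of candidate identification and blocking (Sections 3.1-3.3).
# All dictionary keys and constants below are hypothetical stand-ins.

QUANTA_SENSES = {"quanta", "few/some", "all-part"}   # semantic test (3.1)
QUANT_SUBPOS = {"degree", "quantifier-adverbial"}    # syntactic test (3.1)
FORBIDDEN = {("pointo", "agent"), ("pointo", "place"),
             ("pointo", "abstract")}                 # forbidden combinations
DEGREE_BLOCK_NOM = {"attribute transfer", "existence", "attribute", "result"}
DEGREE_BLOCK_ACC = {"physical/attribute transfer"}

def is_floating_quantifier(adjunct):
    """Numeral-classifier combinations always qualify; quantificational
    nouns must pass both the semantic and the syntactic criterion."""
    if adjunct["type"] == "num-cl":
        return True
    return (bool(adjunct["senses"] & QUANTA_SENSES)
            and adjunct["subpos"] in QUANT_SUBPOS)

def is_potential_anchor(np):
    """Only canonical nominative (-ga) or accusative (-o) NPs qualify."""
    return np["case"] in ("ga", "o")

def blocked(np, quantifier, verb_attribute):
    """Discard bad anchor-quantifier combinations (Section 3.3)."""
    # 3.3.1: pronouns and names without a collectiviser are singular
    if np["class"] in ("pronoun", "name") and not np.get("collectiviser"):
        return True
    # 3.3.1: classifier selectional restrictions, as a forbidden-pair table
    if (quantifier["type"] == "num-cl"
            and (quantifier["classifier"], np["sort"]) in FORBIDDEN):
        return True
    # 3.3.2: noun quantifier read as a degree modifier of the predicate
    if quantifier["type"] == "noun":
        if np["case"] == "ga" and verb_attribute in DEGREE_BLOCK_NOM:
            return True
        if np["case"] == "o" and verb_attribute in DEGREE_BLOCK_ACC:
            return True
    return False
```

For event modification (Section 3.3.3), such a sketch would simply fall through to the theme reading, mirroring the default discussed above.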
As the system in which our implementation was tested lacks a system for event quantification, we were not able to implement any constraint for this phenomenon. We therefore implemented the theme reading as our default. Note that, for stative verbs with permanent readings such as shiru "know", there is almost no difference between the two readings (13).

(13) watashi-wa ratengo-wo sukoshi shitte-iru
     I-TOP Latin-ACC a little know
     I know a little Latin
     I know Latin a little

3.4 Rank and select candidates

If there are more than two combinations, the following heuristics are used to choose which one or ones to select (a small sketch of this scoring is given at the end of this section):

Prefer accusative: a combination with an accusative anchor gets two points. This is to allow for the absolutive bias.

Prefer left anchor: if the anchor is to the left of the quantifier, score it with one point. Quantifiers tend to float to the right of their anchors.

Prefer closest: subtract one for each intervening quantifier. Closer targets are better.

Finally, select the highest scoring combination and eliminate any combinations that include the chosen quantifier and anchor. If there is still a combination left (e.g. there were two quantifiers and two targets) then select it as well. These heuristics rule out crossing combinations in the rare instances of two quantifiers and two candidates.

3.5 Anchoring

Once the best combinations are chosen, the quantifier can be anchored to its target. We consider the best way to represent this would be by showing the semantic relation in a separate level from the syntax, in a similar way to the architecture outlined by Jackendoff (1997). Our implementation is in a machine translation system and we simply rewrite the sentence so that the floating quantifier becomes a pre-nominal modifier of its target, marked with the adnominal case-marker -no. The resulting modifier is labeled as 'anchored', to allow special processing during the transfer phase.

4 Results and Discussion

The algorithm was tested on a 3700 sentence machine translation test set of Japanese sentences with English translations, produced by a professional human translator. A description of the test set and its design is given in Ikehara et al. (1994). Overall, 56 possible combinations were found and 37 anchored in the 3700 sentences (Table 2). Of these, 9 were anchored that should not have been, and 1 was not anchored that should have been. The accuracy (correctly anchored/anchored) was 76% (28/37), and the recall (correctly anchored/should be anchored) was 97% (28/29).

    Floating Quantifiers:   Anchored       Not anchored
                            Good    Bad    Good    Bad
    Nouns (N):              12      2      7       0
    Num-Cls (XC):           16      7      11      1
    Total:                  28      9      18      1

Table 2: Test results

The major source of errors was from parsing errors in the system as a whole. All of the badly anchored numeral-classifier combinations were caused by this. In these cases the algorithm has not degraded the system performance; it would have been a bad result anyway. There were three problems with the algorithm itself. In one case an anaphoric quantifier was mistaken for a floating quantifier; in another the verbal semantic attribute check for degree modification gave a bad result. Finally, there was one case where the default blocking for semantic anomalies blocked a good combination.
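For concreteness, the scoring and selection of Section 3.4 might be rendered as follows, continuing the hypothetical dictionary representation of the earlier sketch; the point values are exactly those given above, and `pos` is assumed to be a word index in the sentence.

```python
# Sketch of ranking and selection (Section 3.4); not the actual system.

def score(anchor, quantifier, pool):
    s = 2 if anchor["case"] == "o" else 0          # prefer accusative: +2
    if anchor["pos"] < quantifier["pos"]:          # prefer left anchor: +1
        s += 1
    lo, hi = sorted((anchor["pos"], quantifier["pos"]))
    s -= sum(1 for _, q in pool                    # prefer closest: -1 per
             if q is not quantifier and lo < q["pos"] < hi)  # intervening Q
    return s

def select(combinations):
    """Pick the best (anchor, quantifier) pair, drop pairs sharing its
    members, and repeat; in the two-by-two case this also rules out
    crossing combinations."""
    chosen, pool = [], list(combinations)
    while pool:
        best = max(pool, key=lambda c: score(c[0], c[1], pool))
        chosen.append(best)
        pool = [(a, q) for a, q in pool
                if a is not best[0] and q is not best[1]]
    return chosen
```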
Translation of floating quantifiers. Note that anchoring a floating quantifier is only the first step toward translating it. Special handling is sometimes needed to translate the anchored quantifiers. For example, Japanese has some universal pronouns that can stand alone as full noun phrases (14) or act as floating quantifiers (15): e.g., minna "everyone", zen'in "all members". When they are anchored, the information about the denotation of the head carried by the pronoun is redundant, and should not be translated. A special rule is required for this.

(14) minna-ga sorou
     everyone-NOM gather
     All members gather.

(15) membaa-ga minna sorou
     members-NOM everyone gather
     All the members gather.
     *Everyone's members gather.

Further work. The proposed algorithm forms a solid base for extensions in various ways.

1. Combine it with a fuller system of event semantics.
2. Make the treatment of classifier-target semantics more detailed, so that inbuilt semantic restrictions can be used instead of a table of forbidden combinations.
3. Use the results of the algorithm to help choose between candidate parses and integrate it with the resolution of zero pronouns.
4. Test the algorithm on other languages, for example Korean.

5 Conclusion

We have presented an algorithm to anchor floating quantifiers in Japanese. The algorithm proceeds as follows. First, identify potential floating quantifiers: either numeral classifier combinations or quantificational nouns. Then identify potential anchors: all accusative or nominative noun phrases. Inappropriate combinations are deleted, either because of a semantic mismatch between the target and quantifier, or because the quantifier is interpreted as a degree or event modifier. Finally, possible combinations are ranked, with the accusative candidate being the best choice, then the closest and leftmost. The algorithm is robust and uses the full power of currently available detailed semantic dictionaries.

Acknowledgments

The authors thank Tim Baldwin, Yukie Kuribayashi, Kyonghee Paik and the members of the NTT Machine Translation Research Group for their discussion and comments on this paper and earlier versions. The research was carried out while Daniela Kurz visited the NTT Communication Science Laboratories. Francis Bond is currently also enrolled part time as a doctoral candidate at the University of Queensland's Center for Language Teaching & Research.

References

Yukiko Sasaki Alam. 1997. Numeral classifiers as adverbs of quantification. In Ho-Min Sohn and John Haig, editors, Japanese/Korean Linguistics, volume 6, pages 381-397. CSLI.

Yoshimi Asahioka, Hideki Hirakawa, and Shin-ya Amano. 1990. Semantic classification and an analyzing system of Japanese numerical expressions. IPSJ SIG Notes 90-NL-78, 90(64):129-136, July. (In Japanese).

Francis Bond and Kyonghee Paik. 1997. Classifying correspondence in Japanese and Korean. In 3rd Pacific Association for Computational Linguistics Conference: PACLING-97, pages 58-67. Meisei University, Tokyo, Japan.

Francis Bond, Kentaro Ogura, and Satoru Ikehara. 1996. Classifiers in Japanese-to-English machine translation. In 16th International Conference on Computational Linguistics: COLING-96, pages 125-130, Copenhagen, August. (cmp-lg/9608014).

Pamela Downing and Michael Noonan, editors. 1995. Word Order in Discourse, volume 30 of Typological Studies in Language. John Benjamins.

Pamela Downing. 1995. The anaphoric use of classifiers in Japanese. In Downing and Noonan (1995), pages 345-375.

Pamela Downing. 1996. Numeral Classifier Systems, the case of Japanese. John Benjamins, Amsterdam.

John A. Hawkins. 1994.
A performance theory of order and constituency, volume 73 of Cambridge Studies in Linguistics. Cambridge University Press, Cambridge.

Satoru Ikehara, Satoshi Shirai, Akio Yokoo, and Hiromi Nakaiwa. 1991. Toward an MT system without pre-editing: effects of new methods in ALT-J/E. In Third Machine Translation Summit: MT Summit III, pages 101-106, Washington DC. (cmp-lg/9510008).

Satoru Ikehara, Satoshi Shirai, and Kentaro Ogura. 1994. Criteria for evaluating the linguistic quality of Japanese to English machine translations. Journal of Japanese Society for Artificial Intelligence, 9(4):569-579. (In Japanese).

Satoru Ikehara, Masahiro Miyazaki, Satoshi Shirai, Akio Yokoo, Hiromi Nakaiwa, Kentaro Ogura, Yoshifumi Ooyama, and Yoshihiko Hayashi. 1997. Goi-Taikei - A Japanese Lexicon. Iwanami Shoten, Tokyo. 5 volumes.

Kazuko Inoue, editor. 1983. Nihongo-no Kihonkouzou (Basic Japanese Structure). Sanseido, Tokyo. (In Japanese).

Ray Jackendoff. 1996. The proper treatment of measuring out, telicity and perhaps even quantification in English. Natural Language and Linguistic Theory, 14:305-354.

Ray Jackendoff. 1997. The Architecture of the Language Faculty. MIT Press.

Alan Hyun-Oak Kim. 1995. Word order at the noun phrase level in Japanese: quantifier constructions and discourse functions. In Downing and Noonan (1995), pages 199-246.

Samuel E. Martin. 1988. A Reference Grammar of Japanese. Tuttle.

Shigeru Miyagawa. 1989. Structure and Case Marking in Japanese, volume 22 of Syntax and Semantics. Academic Press, Amsterdam.

Masahiro Miyazaki, Satoshi Shirai, and Satoru Ikehara. 1995. A Japanese syntactic category system based on the constructive process theory and its use. Journal of Natural Language Processing, 2(3):3-25, July. (In Japanese).

James Pustejovsky. 1995. The Generative Lexicon. MIT Press.

Summary (translated from the German original)

In this paper we describe an algorithm for the resolution of floating quantifiers in Japanese. Japanese is a language in which quantifying adverbs or numeral-classifier combinations can be separated from the noun phrase for which they quantify, that is, they need not stand in immediate linear adjacency. The algorithm distinguishes degree and event modifiers from adverbials that quantify noun phrases and resolves the correct antecedent for each floating quantifier. For anchoring to the correct noun phrase, the following parameters are taken into account: the part of speech of the quantifier and of the antecedent, the semantic relation between the two, the case marking of the antecedent, and the semantics of the verb that governs both the quantifier and its antecedent. The algorithm was implemented and evaluated in a rule-based Japanese-to-English translation system.
Managing information at linguistic interfaces*

Johan Bos and C.J. Rupp
Computerlinguistik, Universitaet des Saarlandes, D-66041 Saarbruecken
{bos,cj}@coli.uni-sb.de

Bianka Buschbeck-Wolf and Michael Dorna
Institut fuer Maschinelle Sprachverarbeitung (IMS), Universitaet Stuttgart, D-70174 Stuttgart
{bianka,michl}@ims.uni-stuttgart.de

* This work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 R4 and 01 IV 701 N3. The responsibility for this article lies with the authors. Thanks to our many Verbmobil colleagues who helped to design and develop the results presented here, and also to the anonymous reviewers for giving useful hints to improve this paper.

Abstract

A large spoken dialogue translation system imposes both engineering and linguistic constraints on the way in which linguistic information is communicated between modules. We describe the design and use of interface terms, whose formal, functional and communicative role has been tested in a sequence of integrated systems and which have proven adequate to these constraints.

1 Introduction

This paper describes the development and usage of interface representations for linguistic information within the Verbmobil project: a large distributed project for speech-to-speech translation of negotiation dialogues, between English, German and Japanese. We take as our reference point the Verbmobil Research Prototype (Bub et al., 1997), but this is only one of a sequence of fully integrated running systems. The functional, formal and communicative role of interface terms within these systems, once instigated, has already been maintained through changes in the overall architecture and constitution of the Verbmobil consortium.

There are two aspects to our story:

- the practical and software engineering constraints imposed by the distributed development of a large natural language system;
- the linguistic requirements of the translation task at the heart of the system.

The prominence of the engineering requirements is further heightened by the fact that we are dealing with a spoken dialogue system which must strive towards real time interactions. We proceed by describing the requirements of the Verbmobil Research Prototype, the actual contents of the Verbmobil Interface Term (henceforth VIT), the semantic formalism encoded within VITs and processing aspects. We conclude that VITs fulfill the joint goals of a functional interface term and an adequate linguistic representation within a single data structure.

1.1 Modularity

We are concerned here with the interface representations exchanged between modules that make use of traditional linguistic concepts. Figure 1 shows a simplified form of the Verbmobil Research Prototype architecture (in that we have excluded the modules that employ alternative techniques and express no interest in linguistic information); the modules in the grey shaded area make use of VITs as linguistic interface terms and are loosely termed the "linguistic" modules of the system, in contrast to the other modules shown, which are chiefly concerned with the processing of acoustic signals. The linguistic design criteria for VITs derive mainly from the syntactic and semantic analysis module, labelled SynSem, the generation and the transfer modules. One point that should be made at the outset is that these linguistic modules really are modules, rather than, say, organisational constructs in some larger constraints system. In practice, these modules are developed at different sites, may be implemented in different programming languages or use different internal linguistic formalisms, and, indeed, there may be interchangeable modules for the same function.
In practice, these modules are developed at different sites, may be implemented in different programming languages or use differ- ent internal linguistic formalisms, and, indeed, there may be interchangeable modules for the same func- tion. This level of modularity, in itself, provides suf- ficient motivation for a common interface represen- ~ln that we have excluded the modules that employ alter- native techniques and express no interest in linguistic informa- tion. 160 I Recogrtitionl-~ Prosody "~; SynSem ~: ~::z ~,!::~ :: : ;:':~" ~::: Synthesis I I J .............. ~;~ ..... ~2~ . . . . . . . . . . . ............. ~ .... , ....... Figure 1: A Simplified diagram of the Vorbraobil Research Prototype Architecture tation among the linguistic modules, allowing the definition of a module's functionality in terms of its I/O behaviour, but also providing a theory in- dependent linguafranca for discussions which may involve both linguists and computer scientists. The Verbraobil community is actually large enough to require such off-line constructs, too. 1.2 Encoding of Linguistic Information A key question in the design of an interface lan- guage is what information must be carried and to what purpose. The primary definition criterion within the linguistic modules has been the transla- tion task. The actual translation operation is per- formed in the transfer module as a mapping between semantic representations of the source and target languages, see (Dorna and Emele, 1996). However, the information requirements of this process are flexible, since information from various levels of analysis are used in disambiguation within the trans- fer module, including prosody and dialogue struc- ture. To a large extent the information requirements di- vide into two parts: • the expressive adequacy of the semantic repre- sentation; • representing other types of linguistic informa- tion so as to meet the disambiguation require- ments with the minimum of redundancy. The design of the semantic representations en- coded within VITs has been guided by an ongoing movement in representational semantic formalisms which takes as a starting point the fact that certain key features of a purely logical semantics are not fully defined in natural language utterances and typ- ically play no part in translation operations. This has been most clearly demonstrated for cases where quantifier scope ambiguities are exactly preserved under translation. The response to these observa- tions is termed underspecification and various such underspecified formalisms have been defined. In one sense underspecification is, in itself, a form of information management, in that only the informa- tion that is actually present in an utterance is repre- sented, further disambiguation being derived from the context. In the absence of such contextual in- formation further specificity can only be based on guesswork. While the management of information in the VIT semantics consists of leaving unsaid what cannot be adequately specified, the amount of information and also the type of information in the other partitions of the VIT (see Section 2.1) has been determined by the simple principle of providing information on justified demand. The information provided is also quite varied but the unifying property is that the requirements are defined by the specific needs of transfer, in distinguishing cases that are truly am- biguous under translation or need to be resolved. For example: (1) Geht das bei ihnen? a. Is it possible for you? b. 
Is it possible at your place?

In (1), the German preposition bei displays an ambiguity between the experiencer reading (1a) and the spatial interpretation (1b). The resolution of this ambiguity requires in the first instance three pieces of information: the type of the verb predicate, the sort of the internal argument of bei and the sort of the subject. This, in turn, requires the resolution of the reference of the anaphor das, where morphosyntactic constraints come into play. If the referent has the sort time then the experiencer reading (1a) can be selected. This is the more usual result in the Verbmobil context. Should the referent be sortally specified as a situation, further information will be required to determine the dialogue stage, i.e. whether the time of the appointment is being negotiated or its place. Only in the latter case is the spatial reading (1b) relevant.

(2) Dann wuerde das doch gehen
    a. Then, it WOULD be possible, after all.
    b. It would be possible, wouldn't it?

Consider the discourse particle doch in (2), which can be disambiguated with prosodic information (we indicate prosodic accent with SMALL CAPITALS). When doch is stressed and the utterance has falling intonation, it functions as a pointer to a previous dialogue stage. Something that was impossible before turned out to be feasible at the utterance time. Then, doch is translated into after all and the auxiliary takes over the accent (2a). If (2) has a rising intonation and the particle is not stressed, it signals the speaker's expectation of the hearer's approving response. In English, this meaning is conveyed by a question tag (2b). Lieske et al. (1997) provide a more detailed account of the use of prosodic information in Verbmobil.

In addition to the information that is explicitly represented in the specified fields of a VIT, including the surface word order that can be inferred from the segment identification, and the resolution of underspecified ambiguities in context, transfer might require further information, such as domain-specific world knowledge, speech act or discourse stage information. This information can be obtained on demand from the resolution component (see Figure 1). This flexible approach to the information required for transfer is termed cascaded disambiguation (Buschbeck-Wolf, 1997) and is balanced against the fact that each level of escalation implies a correspondingly greater share of the permissible runtime.
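As a schematic illustration of cascaded disambiguation, the resolution of the bei ambiguity in (1) might be arranged as below. The sort labels and the resolver interface are hypothetical; the point is only that the expensive call to the resolution component is made after the cheap sortal cues have failed to decide.

```python
# Sketch of cascaded disambiguation for example (1); hypothetical labels.

def disambiguate_bei(referent_sort, resolve_dialogue_stage):
    """Return 'experiencer' for reading (1a) or 'spatial' for (1b)."""
    if referent_sort == "time":
        return "experiencer"          # the usual result in Verbmobil
    if referent_sort == "situation":
        # escalate: only now pay for the call to the resolution component
        if resolve_dialogue_stage() == "negotiating_place":
            return "spatial"
        return "experiencer"
    return "spatial"                  # assumed default for spatial sorts

print(disambiguate_bei("time", lambda: "negotiating_time"))  # experiencer
```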
2 The Verbmobil Interface Term

The VIT encodes various pieces of information produced and used in the linguistic modules. The content of a VIT corresponds to a single segment (or utterance) in a dialog turn. This partitioning of turns enables the linguistic components to work incrementally.

2.1 Multiple Levels of Information

A VIT is a record-like data structure whose fields are filled with semantic, scopal, sortal, morpho-syntactic, prosodic, discourse and other information (see Table 1).

    Slot Name         Description
    VIT ID            combines a unique tag for the turn segment described by the current
                      VIT and the word lattice path used in its linguistic analysis
    Index             a triple consisting of the entry points for traversing the VIT
                      representation
    Conditions        labelled conditions describing the possibly underspecified semantic
                      content of an utterance
    Constraints       scope and grouping constraints, e.g. used for underspecified
                      quantifier and operator scope representation
    Sorts             sortal specifications for instance variables introduced in labelled
                      conditions
    Discourse         additional semantic and pragmatic information, e.g. discourse roles
                      for individual instances
    Syntax            morpho-syntactic features, e.g. number and gender of individual
                      instances
    Tense and Aspect  morpho-syntactic tense combined with aspect and sentence mood
                      information, e.g. used for computing surface tense
    Prosody           prosodic information such as accenting and sentence mood

Table 1: A list of VIT slots

These slots can be seen as analysis layers collecting different types of linguistic information that is produced by several modules. The information within and between the layers is linked together using constant symbols, called "labels", "instances" and "holes". These constants could be interpreted as skolemized logical variables which each denote a node in a graph. Besides purely linguistic information, a VIT contains a unique segment identifier that encodes the time span of the analyzed speech input, the analyzed path of the original word lattice, the producer of the VIT, which language is represented, etc. This identifier is used, for example, to synchronize the processing of analyses from different parsers. For processing aspects of VITs see Section 3.

2.2 VIT Semantics

The core semantic content of the VIT is contained in the two slots Conditions and Constraints. The conditions represent the predicates of the semantic content and the constraints the semantic dependency structure over those predicates. This partitioning between semantic content and semantic structure is modelled on the kind of representational metalanguage employed in UDRS semantics (Reyle, 1993) to express underspecification. The semantic representation is, thus, a metalanguage expression containing metavariables, termed labels, that may be assigned to object language constructs. Moreover, such a metalanguage is minimally recursive (we owe the term minimal recursion to Copestake et al. (1995), but the mechanism they describe was already in use in UDRSs), in that recursive structure is expunged from the surface level by the use of metavariables over the recursive constituents of the object language. In UDRSs quantifier dependencies and other scope information are underspecified because the constraints provide incomplete information about the assignment of object language structures to labels. However, a constraint set may be monotonically extended to provide a complete resolution. VIT semantics follows a similar strategy but somewhat extends the expressivity of the metalanguage.

There are two constructs in the VIT semantic metalanguage which provide for an extension in expressivity relative to UDRSs. These have both been adopted from immediate precursors within the project, such as Bos et al. (1996), and further refined. The first of these is the mechanism of holes and plugging, which originates in the hole semantics of Bos (1996). This requires a distinction between two types of metavariable employed: labels and holes. Labels denote the instantiated structures, primarily the individual predicates. Holes, on the other hand, mark the underspecified argument positions of propositional arguments and scope domains. Resolution consists of an assignment of labels to holes, a plugging.
X WHG Strin E [word(jedes.24.[i5]). word(treffen.25.[19]). word(mit.26.[112]). word(ihnen.27.[114]). word(hat.28.[ll]). ,ord(ein.29.[117]). word(interessantes.30.[121]). word(thema°31.[122])]). X Index index(12.11.i3). Conditions [decl(12.h23). jed(1S,i6,18,h7), treffen(19,i6), mit(112,i6,i13), pron(l14.i13). haben(11.i3). arg3(ll,i3,i6), arg2(ll,i3,i16), ein(llT.il6.120.hl9). interessant(121.i16). thema(122.i16)]. Constraints [in_g(122.120). in_g(121.120). in_g(114.18). in_g(112.18). in_g(19.18). leq(15,h23), leq(ll.h7). leq(117.h23). leq(11.h19)]. Z Sorts [s_sort(i6,meeting_sit), s_sort(i13,human), s_sorr(il6,info_content)], X Discourse [prontype(i13,he,std), dir(112.no)]. Syntax [num(i6.sg). pers(i6,3), pers(il3.1). num(i13.pl). num(i16.sg). pets(J16.3)]. Tense and Aspect [ta_mood(i3.ind). ta_tense(i3.pres). ta_perf(i3.nonperf)]. Prosody [pros_accent(15). pros_accent(121). pros_mood(12.decl)] ) Figure 2: A Verbmobil Interface Term 163 bels to holes, a plugging. In general, the constraint set contains partial constraints on such a plugging, expressed by the "less than or equal" relation (leq) but no actual equations, leq(L ,H) should be inter- preted as a disjunction that the label is either subor- dinate to the hole or equal to it. Alternatively, this relation can be seen as imposing a partial ordering. A valid plugging must observe all such constraints. The other extension in expressivity was the in- troduction of additional constraints between labels. These form another type of abstraction away from logical structure, in that conjunctive structure is also represented by constraints. The purpose of this fur- ther abstraction is to allow lexical predicates to be linked into additional structures, beyond that re- quired for a logical semantics. For example, focus structure may group predicates that belong to dif- ferent scope domains. More immediately, prosodic information, expressed at the lexical level, can be linked into the VIT via lexical predicates. The con- straints expressing conjunctive structure relate basic predicate labels and group labels which correspond to a conjunctive structure in the object language, such as a DRS, intuitively a single box. Grouping constraints take the form in_g (L, G), where the "in group" relation (in_g) denotes that the label L is a member of the group G. The content of a group is, thus, defined by the set of such grouping constraints. n g = A li such that in_g(//, g) 6 Constraints i=1 These two forms of abstraction from logical struc- ture have a tendency to expand the printed form of a VIT, relative to a recursive term structure, but in one respect this is clearly a more compact represen- tation, since lexical predicates occur only once but may engage in various types of structure. 2.3 An Example Figure 2 shows the VIT produced by the analysis of the German sentence: Jedes Treffen mit lhnen hat ein interessantes Thema ("Every meeting with you has an interesting topic"). The instances which cor- respond to object language variables are represented by the sequence {il, i2, ...}, holes by {hi, h2, ...} and labels, including group and predi- cate labels, by {ll, 12 .... }. The base label of a predicate appears in its first argument position. 
2.3 An Example

Figure 2 shows the VIT produced by the analysis of the German sentence Jedes Treffen mit Ihnen hat ein interessantes Thema ("Every meeting with you has an interesting topic").

    vit(vitID(sid(116,a,ge,2,181,2,ge,y,syntaxger),       % Segment ID
              [word(jedes,24,[l5]),                       % WHG String
               word(treffen,25,[l9]),
               word(mit,26,[l12]),
               word(ihnen,27,[l14]),
               word(hat,28,[l1]),
               word(ein,29,[l17]),
               word(interessantes,30,[l21]),
               word(thema,31,[l22])]),
        index(l2,l1,i3),                                  % Index
        [decl(l2,h23), jed(l5,i6,l8,h7),                  % Conditions
         treffen(l9,i6), mit(l12,i6,i13),
         pron(l14,i13), haben(l1,i3),
         arg3(l1,i3,i6), arg2(l1,i3,i16),
         ein(l17,i16,l20,h19), interessant(l21,i16),
         thema(l22,i16)],
        [in_g(l22,l20), in_g(l21,l20), in_g(l14,l8),      % Constraints
         in_g(l12,l8), in_g(l9,l8),
         leq(l5,h23), leq(l1,h7), leq(l17,h23),
         leq(l1,h19)],
        [s_sort(i6,meeting_sit), s_sort(i13,human),       % Sorts
         s_sort(i16,info_content)],
        [prontype(i13,he,std), dir(l12,no)],              % Discourse
        [num(i6,sg), pers(i6,3), pers(i13,1),             % Syntax
         num(i13,pl), num(i16,sg), pers(i16,3)],
        [ta_mood(i3,ind), ta_tense(i3,pres),              % Tense and Aspect
         ta_perf(i3,nonperf)],
        [pros_accent(l5), pros_accent(l21),               % Prosody
         pros_mood(l2,decl)])

Figure 2: A Verbmobil Interface Term

The instances, which correspond to object language variables, are represented by the sequence {i1, i2, ...}, holes by {h1, h2, ...}, and labels, including group and predicate labels, by {l1, l2, ...}. The base label of a predicate appears in its first argument position. The predicates haben, arg2 and arg3 share the same label because they form the representation of a single predication, in so-called neo-Davidsonian notation (e.g. (Parsons, 1991)). The two groups l20 and l8 form the restrictions of the existential quantifier, ein, and the universal, jed, respectively. Two of the scoping constraints place the quantifiers' labels below the top hole, the argument of the mood operator (decl). The other two link the quantifiers' respective scopes to the bottom label, in this case the main verb, but no constraints are imposed on the relative scope of the quantifiers. The whole structure is best viewed as a (partial) subordination hierarchy, as in Figure 3.

[Figure 3: A graphical representation of the scoping constraints, with decl(l2,h23) at the top, jed(l5,i6,l8,h7) and ein(l17,i16,l20,h19) below it, and haben(l1,i3) with arg3(l1,i3,i6) and arg2(l1,i3,i16) at the bottom.]

A complete resolution would result from an assignment of the labels {l1, l5, l17} to the three holes {h23, h19, h7}. Taking into account the implicit constraint that any argument to a predicate is automatically subordinate to its label, there are in fact only two possibilities: the pluggings p1 and p2 given below,

         h23   h19   h7
    p1   l5    l1    l17
    p2   l17   l5    l1

corresponding to the two relative scopings of the quantifiers:

    Ax(TREFFEN(x) & MIT(x,z) -> Ey(THEMA(y) & INTERESSANT(y) & HABEN(x,y)))

and
3.2 ADT Package In general, linguistic analysis components are very sensitive to changes in input data caused by modifi- 4In typical AI languages, such as Lisp and Prolog, lists are built-in, and they can be ported easily to other programming languages. cations of analyses or by increasing coverage. Ob- viously, there is a need for some kind of robust- ness at the interface level, especially in large dis- tributed software projects like Verbmobil with par- allel development of different components. There- fore, components that communicate with each other should abstract over data types used at their inter- faces. This is really a further projection of standard software engineering practice into the implementa- tion of linguistic modules. In this spirit, all access to and manipulation of the information in a VIT is mediated by an abstract data type (ADT) package (Doma, 1996). The ADT package can be used to build a new VIT, to fill it with information, to copy and delete information within a VIT, to check the contents (see Section 3.3 below), to get specific information, to print a VIT, etc. To give an example of abstraction, there is no need to know where specific information is stored for later lookup. This is done by the ADT package that manages the adding of a piece of information to the appropriate slot. This means that the external treatment of the VIT as an interface term is entirely independent of the internal implementation and data structure within any of the modules and vice versa. 5 3.3 Consistency Checking As a side effect of adopting an extensive ADT pack- age we were able to provide a variety of check- ing and quality control functions. They are espe- cially useful at interfaces between linguistic mod- ules to check format and content errors. At the for- mat checking level language-specific on-line dic- tionaries are used to ensure compatibility between the components. A content checker is used to test language-independent structural properties, such as missing or wrongly bound variables, missing or in- consistent information, and cyclicity. As far as we are aware, this is the first time that the results of linguistic components dealing with se- mantics can be systematically checked at module interfaces. It has been shown that this form of test- ing is well-suited for error detection in components with rapidly growing linguistic coverage. It is worth noting that the source language lexical coverage in the Verbmobil Research Prototype is around 2500 words, rising to 10K at the end of the second phase 6. Furthermore, the complex information produced by 5The kind of data structure used by the communication ar- chitecture (Amtrup and Benra, 1996) is, similarly, transparent to the modules. 6In the year 2000. 165 linguistic components even makes automatic output control necessary. The same checking can be used to define a quality rating, e.g. for correctness, interpretability, etc. of the content of a VIT. Such results are much better and more productive in improving a system than common, purely quantitative, measures based on failure or success rates. 4 Conclusion We have described the interface terms used to carry linguistic information in the Verbmobil system. The amount and type of information that they carry are adapted to the various constraints that arose dur- ing the distributed development of such a large sys- tem over a periods of several years. 
The VIT format has been integrated into several running systems, including the most successful, the Verbmobil Re- search Prototype. Subsequent modifications in the second phase of Verbmobil have been possible in a systematic and monotonic fashion. Similarly, the adaption of several new syntactic and semantic anal- ysis modules has not presented any major or un- seen problems. Among the tasks currently under de- velopment is the use of VIT representations in the generation of dialogue protocols, in the language of each participant. We feel that VITs have proved themselves ad- equate to the considerable system requirements through their continued, efffective use. We also find it reassuring that, despite the priority given to the engineering requirement, we have been able to em- bed within this interface language representations that are, at least, equivalent to the state of the art in Computational Semantics. References Hiyan Alshawi, David M. Carter, Bj6ru Gamback, and Manny Rayner. 1991. Translation by Quasi Logical Form Transfer. In Proceedings of the 29th Annual Meeting of the Association for Com- putational Linguistics (ACL'91), pages 161-168, Berkeley, CA. Hiyan Alshawi, editor. 1992. The Core Language Engine. ACL-MIT Press Series in Natural Lan- guages Processing. MIT Press, Cambridge, MA. Jan W. Amtrup and J6rg Benra. 1996. Communica- tion in Large distributed AI Systems for Natural Language Processing. In Proceedings of the 16th International Conference on Computational Lin- guistics (Coling'96), pages 35-40, Copenhagen, Denmark. J. Bos, B. Gamb~ick, C. Lieske, Y. Mori, M. Pinkal, and K. Worm. 1996. Compositional Semantics in Verbmobil. In Proceedings of the 16th Interna- tional Conference on Computational Linguistics (Coling'96), pages 131-136, Copenhagen, Den- mark. Johan Bos. 1996. Predicate Logic Unplugged. In Paul Dekker and Martin Stokhof, editors, Pro- ceedings of the Tenth Amsterdam Colloquium, pages 133-143, ILLC/Department of Philosophy, University of Amsterdam. T. Bub, W. Wahlster, and A. Waibel. 1997. Vorbmobil: The Combination of Deep and Shal- low Processing for Spontaneous Speech Trans- lation. In Proceedings of the 22nd Interna- tional Conference on Acoustics, Speech, and Sig- nal Processing (ICASSP'97), Munich, Germany, April. Bianka Buschbeck-Wolf. 1997. Resolution on De- mand. Verbmobil Report 196, IMS, Universit~it Stuttgart, Germany. A. Copestake, D. Flickinger, R. Malouf, S. Riehe- mann, and I. Sag. 1995. Translation using Mini- mal Recursion Semantics. In Proceedings of the 6th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI'95), Leuven, Belgium. Michael Dorna and Martin C. Emele. 1996. Semantic-based Transfer. In Proceedings of the 16th International Conference on Computational Linguistics (Coling'96), Copenhagen, Denmark. Michael Doma. 1996. The ADT-Package for the Verbmobil Interface Term. Verbmobil Report 104, IMS, Universitat Stuttgart, Germany. C. Lieske, J. Bos, B. Gamb~ick M. Emele, and C.J. Rupp. 1997. Giving prosody a meaning. In Proceedings of the 5th European Conference on Speech Communication alut Technology (Eu- roSpeech'97), Rhodes, Greece, September. T. Parsons. 1991. Events in the Semantics of En- glish. MIT Press, Cambridge, Mass. Uwe Reyle. 1993. Dealing with Ambiguities by Underspecification: Construction, Represen- tation and Deduction. Journal of Semantics, 10(2):123-179. 166
Deriving the Predicate-Argument Structure for a Free Word Order Language*

Cem Bozsahin
Department of Computer Engineering
Middle East Technical University
06531 Ankara, Turkey
bozsahin@ceng.metu.edu.tr

* Thanks to Mark Steedman for discussion and material, and to the anonymous reviewer of an extended version whose comments led to significant revisions. This research is supported by TUBITAK (EEEAG-90) and NATO Science Division (TU-LANGUAGE).

Abstract

In relatively free word order languages, grammatical functions are intricately related to case marking. Assuming an ordered representation of the predicate-argument structure, this work proposes a Combinatory Categorial Grammar formulation of relating surface case cues to categories and types for correctly placing the arguments in the predicate-argument structure. This is achieved by treating case markers as type shifters. Unlike other CG formulations, type shifting does not proliferate or cause spurious ambiguity. Categories of all argument-encoding grammatical functions follow from the same principle of category assignment. Normal order evaluation of the combinatory form reveals the predicate-argument structure. The application of the method to Turkish is shown.

1 Introduction

Recent theorizing in linguistics brought forth a level of representation called the Predicate-Argument Structure (PAS). PAS acts as the interface between lexical semantics and d-structure in GB (Grimshaw, 1990), functional structure in LFG (Alsina, 1996), and complement structure in HPSG (Wechsler, 1995). PAS is the sole level of representation in Combinatory Categorial Grammar (CCG) (Steedman, 1996). All formulations assume a prominence-based structured representation for PAS, although they differ in the terms used for defining prominence. For instance, Grimshaw (1990) defines the thematic hierarchy as:

Agent > Experiencer > Goal / Location / Source > Theme

whereas LFG accounts make use of the following (Bresnan and Kanerva, 1989):

Agent > Beneficiary > Goal / Experiencer > Inst > Patient/Theme > Locative.

As an illustration, the predicate-argument structures of the agentive verb murder and the psychological verb fear are (Grimshaw, 1990, p.8):

    murder (x (y))         fear (x (y))
           Agent Theme          Exp Theme

To abstract away from language-particular case systems and the mapping of thematic roles to grammatical functions, I assume the Applicative Hierarchy of Shaumyan (1987) for the definition of prominence:

Primary Term > Secondary Term > Tertiary Term > Oblique Term.

Primacy of a term over another is defined by the former having a wider range of syntactic features than the latter. In an accusative language, subjects are less marked (hence primary) than objects; all verbs take subjects but only transitive verbs take objects. Terms (= arguments) can be denoted by the genotype indices on NPs, such as NP1, NP2 for primary and secondary terms (Shaumyan uses T1 and T2, but we prefer NP1, NP2 for easier exposition in later formulations). An NP2 would be a direct object (NPacc) in an accusative language, or an ergative-marked NP (NPerg) in an ergative language. This level of description also simplifies the formulation of grammatical function changing; the primary term of a passivized predicate (PASS p) is the secondary term of the active p. I follow Shaumyan and Steedman (1996) also in the ordered representation of the PAS (1). The reader is referred to (Shaumyan, 1987) for linguistic justification of this ordering.

(1) Pred ... <Sec. Term> <Primary Term>

Given this representation, the surface order of
Term> <Primary Term> Given this representation, the surface order of t Shaumyan uses T 1 , T 2, but we prefer NPI, NP2 for easier exposition in later formulations. 167 constituents is often in conflict with the order in the PAS. For instance, English as a configurational SVO language has the mapping: (2) SS: S ~ O PAS: ~ NP2~""..~P1 However, in a non-configurational language, per- mutations of word order are possible, and grammat- ical functions are often indicated not by configura- tions but by case marking. For instance, in Turkish, all six permutations of the basic SOV order are pos- sible, and Japanese allows two verb-final permuta- tions of underlying SOV. The relationship between case marking and scrambling is crucial in languages with flexible word order. A computational solution to the problem must rely on some principles of par- simony for representing categories and types of ar- guments and predicates, and efficiency of process- ing. In a categorial formulation, grammatical functions of preverbal and postverbal NPs in (2) can be made explicit by type shifting 2 the subject to S/(S\NP1) and the object to (S\NP1)\((S\NP1)/NP2). These categories follow from the order-preserving type shifting scheme (Dowty, 1988): (3) NP ~ T/(T~NP) or TVT/NP) To resolve the opposition between surface order and the PAS in a free word order language, one can let the type shifted categories of terms proliferate, or reformulate CCG in such a way that arguments of the verbs are sets, rather than lists whose arguments are made available one at a time. The former alter- native makes the spurious ambiguity problem of CG parsing (Karttunen, 1989) even more severe. Multi- set CCG (Hoffman, 1995) is an example of the set- oriented approach. It is known to be computation- ally tractable but less efficient than the polynomial time CCG algorithm of Vijay-Shanker and Weir (1993). I try to show in this paper that the tradi- tional curried notation of CG with type shifting can be maintained to account for Surface Form+-~PAS mapping without leading to proliferation of argu- ment categories or to spurious ambiguity. Categorial framework is particularly suited for this mapping due to its lexicalism. Grammatical functions of the nouns in the lexicon are assigned 2aka. type raising, lifting, or type change by case markers, which are also in the lexicon. Thus, grammatical function marking follows nat- urally from the general CCG schema comprising rules of application (A) and composition (B). The functor-argument distinction in CG helps to model prominence relations without extra levels of repre- sentation. CCG schema (Steedman (1988; 1990)) is summarized in (4). Combinator notation is pre- ferred here because they are the formal primitives operating on the PAS (cf. (Curry and Feys, 1958) for Combinatory Logic). Application is the only primitive of the combinatory system; it is indicated by juxtaposition in the examples and denoted by • in the normal order evaluator (§4). B has the reduction rule B f ga>_f (ga). (4) X/Y: f Y: a =~A> X: fa Y: a X\ Y: f ==¢'A< X: f a x/Y: f r/z: g :----.8> x/z: Bfg Y z:a x r:.f x z: Bfg x/Y: y rxz:9 x\z: Big v/z: g XkV: y Sx< x/z: Byg 2 Grammatical Functions, Type Shifting, and Composition In order to derive all permutations of a ditransi- tive construction in Turkish using (3), the dative- marked indirect object (NP3) must be type shifted in 48 (4!2) different ways so that coordination with the left-adjacent and the right-adjacent constituent is possible. 
This is due to the fact that the result category T is always a conjoinable type, and the ar- gument category T/NP3 (and T~NP3) must be al- lowed to compose with the result category of the adjacent functor. However, categories of arguments can be made more informative about grammatical functions and word order. The basic principle is as follows: The category assigned for argument n must contain all and only the term information about NPi for all i < n. An NP2 type must contain in its cat- egory word order information about NP1 and NP2 but not NP3. This can be generalized as in (5): (5) Category assignment for argument n: S Tr/Ta or Tr\Ta C(n) ! NPn 168 Ta = Lexical category of an NPn- governing element (e.g., a verb) in the lan- guage whose highest genotype argument is NPn. Tr = The category obtained from Ta by re- moving NPn. Case markers in Turkish are suffixes attached to noun groups. 3 The types of case markers in the lex- icon can be defined as: (6) Lexical type assignment for the case marker (-case) encoding argument n: -case: = C(n): T(C(n) )x\N: x where T(C) denotes the semantic type for cate- gory C: (7) a. T(NPn) = I (lower type for NPn) b. T(C) = T (if C is a type shifted category as in (3)) c. T(C) = BBT (if C is a type shifted and composed category) (5) and (6) are schemas that yield three lexical categories per -case: one for lower type, and two for higher types which differ only in the directionality of the main function due to (5). For instance, for the accusative case suffix encoding NP2, we have: -ACC := NP2:Ix\N:x := ((SINP1)/(SINPIlNP2)):Tx\N:x := ((SINP1)\(SINPIlNP2)):Tx\N:x Type shifting alone is too constraining if the verbs take their arguments in an order different from the Applicative Hierarchy (§ 1). For instance, the cat- egory of Turkish ditransitives is SINPIlNP31NP2. Thus the verb has the wrapping semantics Cv' where C is the permutator with the reduction rule Cfga>fag. Type shifting an NP3 yields (SINP1 INP2)/(SINP1 ]NP~ INP3) in which the argu- ment category is not lexically licensed. (5) is order- preserving in a language-particular way; the result category always corresponds to a lexical category in the language if the argument category does too. For arguments requiring a non-canonical order, we need type shifting and composition (hence the third clause in (7)): 3As suggested in (Bozsahin and Gocmen, 1995), morpho- logical and syntactic composition can be distinguished by asso- ciating several attachment calculi with functors and arguments (e.g., affixation, concatenation, clitics, etc,) NP3:x T=~ (SINP1)/(SINPIlNP3):Tx ~ (SINP, INP2)/(SINP, INP31NP2): B(Tx) = BBTx Once syntactic category of the argument is fixed, its semantics is uniquely determined by (7). The combinatory primitives operating on the PAS are I (7a), T (7b--c), and B (7c). T has the reduction rule Tar>f a, and If>f. The use ofT or B signifies that the term's category is a functor; its correct place in the PAS is yet to be determined. I indicates that the term is in the right place in the partially derived PAS. According to (5), there is a unique result- argument combination for a higher type NP3, com- pared to 24 using (3). (5) differs from (3) in another significant aspect: Tr and Ta may contain direction- ally underspecified categories if licensed by the lex- icon. Directional underspecification is needed when arguments of a verb can scramble to either side of the verb. It is necessary in Turkish and Warlpiri but not in Japanese or Korean. 
The neutral slash | is a lexical operator; it is instantiated to either \ or / during parsing. A crucial use of underspecification is shown in (8). SV composition could not follow through if the verbs had backward-looking categories; composition of the type shifted subject and the verb in this case would only yield a backward-looking S\NP2 by the schema (4).

(8) Adam       kurmus     ama  cocuk      topladi    masa-yi
    man.NOM    set        but  child.NOM  gather     table-ACC
    S/(S|NP1)  S|NP1|NP2       S/(S|NP1)  S|NP1|NP2  NP2
    -----------------B>        -----------------B>
    S/NP2                      S/NP2
    --------------------&--------------------
    S/NP2
    ---------------------------------------A>
    S
    'The man had set the table but the child is cleaning it.'

The schema in (5) makes the arguments available in higher types, and allows lower (NPn) types only if higher types fail (as in NP2 in (8)). There are two reasons for this: higher types carry more information about the surface order of the language, and they are sufficient to cover bounded phenomena. Section 3 shows how higher types correctly derive the PAS in various word orders. Lower types are indispensable for unbounded constructions such as relativization and coordination. The choice is due to a concern for economy. If lower types were allowed freely, they would yield the correct PAS as well:

(9) S         IO        DO        V
    NP1: Is'  NP3: Ii'  NP2: Io'  DV: Cv'
                        -----------------A<
                        S|NP1|NP3: (Cv')(Io')
              ---------------------------A<
              S|NP1: (Cv')(Io')(Ii')
    ------------------------------------A<
    S: (Cv')(Io')(Ii')(Is') >= v'i'o's'

In parsing this is achieved as follows: an NPk can only be the argument in a rule of application, and schema (5) is the only way to obtain NPk from a noun group. Thus it suffices to check in the application rules that if the argument category is NPk, then the functor's result category (e.g., X in X/Y) has none of the terms with genotype indices lower than k. NP2 in (8) is licensed because the adjacent functor is S/NP2. NP2 in (9) is not licensed because the adjacent functor has NP1.

For noun-governed grammatical functions such as the genitive (NP5), (5) licenses result categories that are underspecified with respect to the genotype index. This is indeed necessary because the resulting NP can be further inflected on case and assume a genotype index. For Turkish, the type shifted category is C(5) = NPagr/(NPagr\NP5). Hence the genitive suffix bears the category C(5)\N. Agreement features enforce the possessor-possessed agreement on person and number via unification (as in UCG (Calder et al., 1988)):

    kalem    -in           uc      -u
    pencil   -GEN.3s       tip     -POSS.3s
    N: p'    C(5)\N: T     N: t'   (NPagr\NP5)\N: poss
    --------------A<       -----------------------A<
    NPagr/(NPagr\NP5): Tp'  NPagr\NP5: poss t'
    ---------------------------------------A>
    NPagr: Tp'(poss t') >= (poss t')p'
    'The tip of the pencil'

3 Word Order and Scrambling

Due to space limitations, the following abbreviated categories are employed in derivations:

    IV = S|NP1    TV = S|NP1|NP2    DV = S|NP1|NP3|NP2

The categories licensed by (5) can then be written as IV/TV and IV\TV for NP2, TV/DV and TV\DV for NP3, etc. (10a-b) show the verb-final variations in the word order. The bracketings in the PAS and juxtaposition are left-associative; (fa)b is the same as fab.

(10) a. Mehmet     kitab-i      oku-du
        M.NOM      book-ACC     read-PAST
        S/IV: Tm'  IV/TV: Tb'   TV: r'
                   -------------------A>
                   IV: Tb'r'
        -----------------------------A>
        S: Tm'(Tb'r') >= r'b'm'
        'Mehmet read the book.'
Grammar rewriting can be done using predictive combinators (Wittenburg, 1987), but they cannot handle the crossing compositions that are essential to our method. Other normal form parsers, e.g. that of Hepple and Morrill (1989), have the same problem. All grammar rules in (4) in fact check the labels of the constituent categories, which show how the category is derived. The labels are as in (Eisner, 1996): -FC, the output of forward composition, of which forward crossing composition is a special case; -BC, the output of backward composition, of which backward crossing composition is a special case; and -OT, a lexical or type-shifted category. The goal is to block, e.g., X/Y-FC Y/Z-{FC,BC,OT} =B=> X/Z and X/Y-FC Y-{FC,BC,OT} =A=> X in (10a). The S/TV composition would bear the label -FC, which cannot be an input to forward application. In (10b), the backward composition follows through, since it has the category-label S/TV-BC, which the forward application rule does not block. We use Eisner's method to rewrite all rules in (4). (11a-b) show the normal form parses for postverbal scrambling, and (11c-d) for verb-medial cases.

4Eisner (1996, p.81) in fact suggested that the labeling system can be implemented in the grammar by templates, or in the processor by labeling the chart entries.

(11) a. oku-du     Mehmet   kitab-ı
        read-PAST  M.NOM    book-ACC
        TV: r'  S/IV: Tm'  IV\TV: Tb'
        >Bx: S\TV: B(Tm')(Tb')
        <A: S: B(Tm')(Tb')r' >= r' b' m'
        'Mehmet read the book.'
     b. oku-du  kitab-ı  Mehmet
        TV: r'  IV\TV: Tb'  S\IV: Tm'
        <A: IV: Tb' r'
        <A: S: Tm'(Tb' r') >= r' b' m'
     c. kitab-ı  oku-du  Mehmet
        IV/TV: Tb'  TV: r'  S\IV: Tm'
        >A: IV: Tb' r'
        <A: S: Tm'(Tb' r') >= r' b' m'
     d. Mehmet  oku-du  kitab-ı
        S/IV: Tm'  TV: r'  IV\TV: Tb'
        <A: IV: Tb' r'
        >A: S: Tm'(Tb' r') >= r' b' m'

Controlled lexical redundancy of higher types, e.g., having both (and only) IV/TV and IV\TV licensed by the lexicon for an NP2, does not lead to alternative derivations in (10-11). Assume that A/B B\C, where A/B and B\C are categories produced by (5), gives a successful parse using the output A\C. A\B B\C and A\B B/C are not composable types according to (4). The other possible configuration, A/B B/C, yields an A/C which looks for C in the other direction. Multiple derivations appear to be possible if there is an order-changing composition over C, such as C/C (e.g., a VP modifier IV/IV). (12) shows two possible configurations with a C on the right. (12b) is blocked by label check, because A/C-FC C =A=> A is not licensed by the grammar. If C were to the left, only (12a) would succeed. Similar reasoning can be used to show the uniqueness of derivation in other patterns of directions.

(12) a. C/C  A/B  B\C  C
        >Bx: A\C-FC
        <Bx: A/C-BC
        >A: A-OT
     b. C/C  A/B  B/C  C
        >B: A/C-FC
        >A: *** (blocked)

Constrained type shifting avoids the problem with freely available categories in Eisner's normal form parsing scheme. However, some surface characteristics of the language, such as lack of case marking in certain constructions, put the burden of type shifting on the processor (Bozsahin, 1997). Lower-type arguments such as NP2 pose a different kind of ambiguity problem. Although they are required in unbounded constructions, they may yield alternative derivations of local scrambling cases in a labelled CCG. For instance, when NP2 is peripheral in a ditransitive construction and the verb can form a constituent with all the other arguments (S\NP2 or S/NP2), the parser allows NP2. This is unavoidable unless the parser is made aware of the local and non-local context.
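The label check just described is easy to state operationally. The following is a small sketch of the admissibility test for the two rules discussed above; the encoding of labels and the function names are our own illustration, not the paper's code.

```python
# A sketch of the -FC/-BC/-OT label check used for normal-form parsing.
FC, BC, OT = "-FC", "-BC", "-OT"

def forward_application(functor_label):
    """X/Y Y => X is licensed only if the functor is not an -FC output;
    the result of application is labeled -OT (cf. (12a))."""
    return None if functor_label == FC else OT

def forward_composition(left_label):
    """X/Y Y/Z => X/Z-FC, likewise blocked for an -FC left input."""
    return None if left_label == FC else FC

# (10a): composing S/IV with IV/TV yields S/TV-FC, whose forward
# application to TV is then blocked, removing the spurious parse:
label = forward_composition(OT)            # S/TV gets -FC
print(label, forward_application(label))   # -FC None
# (10b): S/TV-BC from backward crossing composition remains applicable:
print(forward_application(BC))             # -OT
```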
In other words, this method solves the spurious ambiguity problem between higher types, but not among higher and lower types. One can try to remedy this problem by making the availability of types dependent on some measure of prominence, e.g., allowing subjects only in higher types to account for subject-complement asymmetries. But, as pointed out by Eisner (1996, p.85), this is not spurious ambiguity in the technical sense, just multiple derivations due to alternative lexical category assignments. Eliminating ambiguity in such cases remains to be solved.

4 Revealing the PAS

The output of the parser is a combinatory form. The combinators in this form may arise from the CCG schema, i.e., the compositor B and the substitutor S (Steedman, 1987). They may also be projected from the PAS of a lexical item, such as the duplicator W (with the reduction rule Wfa > faa) for reflexives, and B^(n+1)C for predicate composition with the causative suffix. For instance, the combinatory form for (13a) is the expression (13b).

(13) a. Adam     çocuğ-a    kitab-ı   oku-t-tu
        man.NOM  child-DAT  book-ACC  read-CAUS-PAST
        : m'     : c'       : b'      : B3·CAUSE·C·r'
        'The man had the child read the book.'
     b. T·m'·(B·(T·b')·(T·c')·(B3·CAUSE·C·r'))

[(13b) is also displayed in the paper as a binary combinator tree.]

Although B works in a binary manner in CCG to achieve abstraction, it requires 3 arguments for full evaluation (its order is 3). Revealing the PAS amounts to stripping off all combinators from the combinatory form by evaluating the reducible expressions (redexes). Bfg is not a redex, but Bfga is. In other words, the derivations by the parser must saturate the combinators in order to reveal the PAS, which should contain no combinators. The PAS is the semantic normal form of a derivation.

The sequence of evaluation is the normal order, which corresponds to reducing the leftmost-outermost redex first (Peyton Jones, 1987). In tree-theoretic terms, this is a depth-first reduction of the combinator tree in which the rearrangement is controlled by the reduction rule of the leftmost combinator, e.g., T·m'·X >= X·m', where X is the parenthesized subexpression in (13b). Reduction by T rearranges the tree, and further reductions eventually reveal the PAS:

    B·(T·b')·(T·c')·(B3·CAUSE·C·r')·m'        (1)
  >= T·b'·(T·c'·(B3·CAUSE·C·r'))·m'           (2)
  >= T·c'·(B3·CAUSE·C·r')·b'·m'               (3)
  >= B3·CAUSE·C·r'·c'·b'·m'                   (4)
  >= CAUSE·(C·r'·c'·b')·m'                    (5)
  >= CAUSE·(r'·b'·c')·m'                      (6)

By the second Church-Rosser theorem, normal order evaluation will terminate if the combinatory form has a normal form. But Combinatory Logic has the same power as the λ-calculus, and suffers from the same undecidability results. For instance, WWW has no normal form, because the reductions never terminate. Some terminating reductions, such as C·I·I·b >= b·I, yield no combinator-free normal form either. It is an open question whether such forms can be projected from a natural language lexicon. In an expression X·Y where X is not a redex, the evaluator recursively evaluates X to reduce it as much as possible, because X may contain other redexes, as in (5) above. Recursion is terminated either by obtaining the normal form, as in (6) above, or by an equivalence check. For instance, (C·(I·a)·b)·Y recurses on the left subexpression to obtain (C·a·b), then gives up on this subexpression, since the evaluator returns the same expression without further evaluation.
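The following is a minimal sketch of such a normal-order reducer for the combinators used above. Terms are encoded as nested application pairs; this encoding, and the simplification of reducing only the head spine, are our own assumptions, not the paper's implementation.

```python
# Normal-order (leftmost-outermost) reduction of combinatory forms.
ARITY = {"I": 1, "T": 2, "W": 2, "B": 3, "C": 3}

def spine(t):
    """Flatten left-nested applications: ((f, a), b) -> [f, a, b]."""
    out = []
    while isinstance(t, tuple):
        t, arg = t
        out.append(arg)
    return [t] + out[::-1]

def app(head, args):
    for a in args:
        head = (head, a)
    return head

def step(t):
    """One leftmost-outermost step; None if the head is not a redex."""
    h, *args = spine(t)
    n = ARITY.get(h, 0) if isinstance(h, str) else 0
    if n == 0 or len(args) < n:
        return None
    x, rest = args[:n], args[n:]
    if h == "I":   red = x[0]                     # I f     > f
    elif h == "T": red = (x[1], x[0])             # T a f   > f a
    elif h == "W": red = ((x[0], x[1]), x[1])     # W f a   > f a a
    elif h == "B": red = (x[0], (x[1], x[2]))     # B f g a > f (g a)
    else:          red = ((x[0], x[2]), x[1])     # C f g a > f a g
    return app(red, rest)

def normalize(t, limit=100):   # bounded, since CL reductions may loop
    for _ in range(limit):
        nxt = step(t)
        if nxt is None:
            return t
        t = nxt
    return t

# The combinatory form T m' (T b' r') of (10a) reduces to r' b' m':
print(normalize(app("T", ["m", (("T", "b"), "r")])))  # (('r','b'),'m')
```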
5 Conclusion

If an ordered representation of the PAS is assumed, as many theories nowadays do, its derivation from the surface string requires that the category assignment for case cues be rich enough in word order and grammatical function information to correctly place the arguments in the PAS. This work shows that these categories and their types can be uniquely characterized in the lexicon and tightly controlled in parsing. The spurious ambiguity problem is kept under control by normal form parsing on the syntactic side, with the use of labelled categories in the grammar. Thus, the PAS of a derivation can be determined uniquely even in the presence of type shifting. The same strategy can account for deriving the PAS in unbounded constructions and non-constituent coordination (Bozsahin, 1997).

The parser's output (the combinatory form) is reduced to a PAS by normal order evaluation. Model-theoretic interpretation can proceed in parallel with derivations, or as a post-evaluation stage which takes the PAS as input. Quantification and scrambling in free word order languages interact in many ways, and future work will concentrate on this aspect of semantics.

References

Alex Alsina. 1996. The Role of Argument Structure in Grammar. CSLI, Stanford, CA.
Cem Bozsahin and Elvan Gocmen. 1995. A categorial framework for composition in multiple linguistic domains. In Proceedings of the Fourth International Conference on Cognitive Science of NLP, Dublin.
Cem Bozsahin. 1997. Grammatical functions and word order in Combinatory Grammar. ms.
Joan Bresnan and Jonni M. Kanerva. 1989. Locative inversion in Chichewa: A case study of factorization in grammar. Linguistic Inquiry, 20:1-50.
Jonathan Calder, Ewan Klein, and Henk Zeevat. 1988. Unification categorial grammar. In Proceedings of the 12th International Conference on Computational Linguistics, Budapest.
Haskell B. Curry and Robert Feys. 1958. Combinatory Logic I. North-Holland, Amsterdam.
David Dowty. 1988. Type raising, functional composition, and non-constituent conjunction. In Richard T. Oehrle, Emmon Bach, and Deirdre Wheeler, editors, Categorial Grammars and Natural Language Structures. D. Reidel, Dordrecht.
Jason Eisner. 1996. Efficient normal-form parsing for combinatory categorial grammar. In Proceedings of the 34th Annual Meeting of the ACL, pages 79-86.
Jane Grimshaw. 1990. Argument Structure. MIT Press, Cambridge, MA.
Mark Hepple and Glyn Morrill. 1989. Parsing and derivational equivalence. In Proceedings of the 4th EACL, Manchester.
Beryl Hoffman. 1995. The Computational Analysis of the Syntax and Interpretation of "Free" Word Order in Turkish. Ph.D. thesis, University of Pennsylvania.
Lauri Karttunen. 1989. Radical lexicalism. In Mark Baltin and Anthony Kroch, editors, Alternative Conceptions of Phrase Structure. Chicago University Press.
Simon L. Peyton Jones. 1987. The Implementation of Functional Programming Languages. Prentice-Hall, New York.
Sebastian Shaumyan. 1987. A Semiotic Theory of Language. Indiana University Press.
Mark Steedman. 1987. Combinatory grammars and parasitic gaps. Natural Language and Linguistic Theory, 5:403-439.
Mark Steedman. 1988. Combinators and grammars. In Richard T. Oehrle, Emmon Bach, and Deirdre Wheeler, editors, Categorial Grammars and Natural Language Structures. D. Reidel, Dordrecht.
Mark Steedman. 1990. Gapping as constituent coordination. Linguistics and Philosophy, 13:207-263.
Mark Steedman. 1996. Surface Structure and Interpretation. MIT Press, Cambridge, MA.
K. Vijay-Shanker and David J. Weir. 1993. Parsing some constrained grammar formalisms. Computational Linguistics, 19:591-636.
Stephen Wechsler. 1995. The Semantic Basis of Argument Structure. CSLI, Stanford, CA.
Kent Wittenburg. 1987. Predictive combinators. In Proceedings of the 25th Annual Meeting of the ACL, pages 73-79.
1998
25
Separating Surface Order and Syntactic Relations in a Dependency Grammar

Norbert Bröker
Universität Stuttgart
Azenbergstr. 12
D-70174 Stuttgart
NOBI@IMS.UNI-STUTTGART.DE

Abstract

This paper proposes decoupling the dependency tree from word order, such that surface ordering is not determined by traversing the dependency tree. We develop the notion of a word order domain structure, which is linked but structurally dissimilar to the syntactic dependency tree. The proposal results in a lexicalized, declarative, and formally precise description of word order, features which previous proposals for dependency grammars lack. Contrary to other lexicalized approaches to word order, our proposal does not require lexical ambiguities for ordering alternatives.

1 Introduction

Recently, the concept of valency has gained considerable attention. Not only do all linguistic theories refer to some reformulation of the traditional notion of valency (in the form of θ-grid, subcategorization list, argument list, or extended domain of locality); there is also a growing number of parsers based on binary relations between words (Eisner, 1997; Maruyama, 1990). Given this interest in the valency concept, and the fact that word order is one of the main differences between phrase-structure based approaches (henceforth PSG) and dependency grammar (DG), it is valid to ask whether DG can capture word order phenomena without recourse to phrasal nodes, traces, slashed categories, etc. A very early result on the weak generative equivalence of context-free grammars and DGs suggested that DGs are incapable of describing surface word order (Gaifman, 1965). This result has recently been criticized as applying only to impoverished DGs which do not properly represent formally the expressivity of contemporary DG variants (Neuhaus & Bröker, 1997).

Our position will be that dependency relations are motivated semantically (Tesnière, 1959), and need not be projective (i.e., may cross if projected onto the surface ordering). We argue for so-called word order domains, consisting of partially ordered sets of words and associated with nodes in the dependency tree. These order domains constitute a tree defined by set inclusion, and surface word order is determined by traversing this tree. A syntactic analysis therefore consists of two linked, but dissimilar trees.

Sec. 2 will briefly review approaches to word order in DG. In Sec. 3, word order domains will be defined, and Sec. 4 introduces a modal logic to describe dependency structures. Sec. 5 applies our approach to the German clause, and Sec. 6 relates it to some PSG approaches.

2 Word Order in DG

A very brief characterization of DG is that it recognizes only lexical, not phrasal nodes, which are linked by directed, typed, binary relations to form a dependency tree (Tesnière, 1959; Hudson, 1993). The following overview of DG flavors shows that various mechanisms (global rules, general graphs, procedural means) are generally employed to lift the limitation of projectivity, and discusses some shortcomings of these proposals.

Functional Generative Description (Sgall et al., 1986) assumes a language-independent underlying order, which is represented as a projective dependency tree. This abstract representation of the sentence is mapped via ordering rules to the concrete surface realization. Recently, Kruijff (1997) has given a categorial-style formulation of these ordering rules.
He assumes associative categorial operators, permuting the arguments to yield the surface ordering. One difference to our proposal is that we argue for a representational account of word order (based on valid structures representing word order), eschewing the non-determinism introduced by unary operators; the second difference is the avoidance of an underlying structure, which stratifies the theory and makes incremental processing difficult.

Meaning-Text Theory (Mel'čuk, 1988) assumes seven strata of representation. The rules mapping from the unordered dependency trees of surface-syntactic representations onto the annotated lexeme sequences of deep-morphological representations include global ordering rules which allow discontinuities. These rules have not yet been formally specified (Mel'čuk & Pertsov, 1987, p. 187f).

Word Grammar (WG, Hudson (1990)) is based on general graphs instead of trees. The ordering of two linked words is specified together with their dependency relation, as in the proposition "object of verb follows it". Extraction of, e.g., objects is analyzed by establishing an additional dependency called visitor between the verb and the extractee, which requires the reverse order, as in "visitor of verb precedes it". This results in inconsistencies, since an extracted object must follow the verb (being its object) and at the same time precede it (being its visitor). The approach compromises the semantic motivation of dependencies by adding purely order-induced dependencies. WG is similar to our proposal in that it also distinguishes a propositional meta language describing the graph-based analysis structures.

Dependency Unification Grammar (DUG, Hellwig (1986)) defines a tree-like data structure for the representation of syntactic analyses. Using morphosyntactic features with special interpretations, a word defines abstract positions into which modifiers are mapped. Partial orderings and even discontinuities can thus be described by allowing a modifier to occupy a position defined by some transitive head. The approach requires that the parser interpret several features specially, and it cannot restrict the scope of discontinuities.

Slot Grammar (McCord, 1990) employs a number of rule types, some of which are exclusively concerned with precedence. So-called head/slot and slot/slot ordering rules describe the precedence in projective trees, referring to arbitrary predicates over heads and modifiers. Extractions (i.e., discontinuities) are merely handled by a mechanism built into the parser.

3 Word Order Domains

Summarizing the previous discussion, we require the following of a word order description for DG:
• not to compromise the semantic motivation of dependencies,
• to be able to restrict discontinuities to certain constructions and delimit their scope,
• to be lexicalized without requiring lexical ambiguities for the representation of ordering alternatives,
• to be declarative (i.e., independent of an analysis procedure), and
• to be formally precise and consistent.
The subsequent definition of an order domain structure and its linking to the dependency tree satisfy these requirements.

3.1 The Order Domain Structure

A word order domain is a set of words, generalizing the notion of positions in DUG. The cardinality of an order domain may be restricted to at most one element, at least one element, or, by conjunction, to exactly one element.
Each word is associated with a sequence of order domains, one of which must contain the word itself, and each of these domains may require that its elements have certain features. Order domains can be partially ordered based on set inclusion: if an order domain d contains word w (which is not associated with d), every word w' contained in a domain d' associated with w is also contained in d; therefore, d' is a subset of d for each d' associated with w. This partial ordering induces a tree on order domains, which we call the order domain structure.

Take the example of German "Den Mann hat der Junge gesehen" ("the man.ACC - has - the boy.NOM - seen"). Its dependency tree is shown in Fig. 1, with word order domains indicated by dashed circles. The finite verb, "hat", defines a sequence of domains, <d1, d2, d3>, which roughly correspond to the topological fields in the German main clause. The nouns "Mann" and "Junge" and the participle "gesehen" each define one order domain (d4, d5, d6, resp.). Set inclusion gives rise to the domain structure in Fig. 2, where the individual words are attached by dashed lines to their including domains (d1 and d4 collapse, being identical).1

[Figure 1: Dependency Tree and Order Domains for "Den Mann hat der Junge gesehen"]
[Figure 2: Order Domain Structure for "Den Mann hat der Junge gesehen"]

3.2 Surface Ordering

How is the surface order derived from an order domain structure? First of all, the ordering of domains is inherited by their respective elements, i.e., "Mann" precedes (any element of) d2, "hat" follows (any element of) d1, etc. Ordering within a domain, e.g., of "hat" and d6, or of d5 and d6, is based on precedence predicates (adapting the precedence predicates of WG). There are two different types: one ordering a word with respect to any other element of the domain it is associated with (e.g., "hat" w.r.t. d6), and another ordering two modifiers, referring to the dependency relations they occupy (d5 and d6, referring to subj and vpart). A verb like "hat" introduces two precedence predicates, requiring other words to follow itself and the participle to follow subject and object, resp.:2

    "hat" => (<• ∧ ⟨vpart⟩>{subj,obj})

Informally, the first conjunct is satisfied by any domain in which no word precedes "hat", and the second conjunct is satisfied by any domain in which no subject or object follows a participle. The domain structure in Fig. 2 satisfies these restrictions, since nothing follows the participle, and because "den Mann" is not an element of d2, which contains "hat". This is an important interaction of order domains and precedence predicates: order domains define scopes for precedence predicates. In this way, we take into account that dependency trees are flatter than PS-based ones3 and avoid the formal inconsistencies noted above for WG.

1Note that in this case, we have not a single rooted tree, but rather an ordered sequence of trees (by virtue of ordering d1, d2, and d3) as domain structure. In general, we assume the sentence period to govern the finite verb and to introduce a single domain for the complete sentence.
2For details of the notation, please refer to Sec. 4.
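The two predicate types just introduced can be checked locally within a domain. The following is a sketch of such checks; the encoding of domains as word lists in surface order and of dependents as labelled pairs is our illustrative assumption.

```python
# Checking the two kinds of precedence predicates inside one domain.
def check_initial(domain, word):
    """'<.': no word may precede `word` in the domain it occupies."""
    return domain.index(word) == 0

def check_label_order(domain, deps, before_label, after_labels):
    """'<label> > {labels}': within this domain, every dependent bearing
    `before_label` follows all dependents bearing one of `after_labels`."""
    pos = {w: i for i, w in enumerate(domain)}
    return all(pos[w1] > pos[w2]
               for w1, lab1 in deps if lab1 == before_label and w1 in pos
               for w2, lab2 in deps if lab2 in after_labels and w2 in pos)

# d2 of Fig. 2, simplified to words: "hat" opens the domain, and the
# vpart dependent follows the subj/obj dependents:
d2 = ["hat", "der Junge", "gesehen"]
deps = [("der Junge", "subj"), ("gesehen", "vpart")]
print(check_initial(d2, "hat"))                               # True
print(check_label_order(d2, deps, "vpart", {"subj", "obj"}))  # True
```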
3.3 Linking Domain Structure and Dependency Tree

Order domains easily extend to discontinuous dependencies. Consider the non-projective tree in Fig. 1. Assuming that the finite verb governs the participle, no projective dependency between the object "den Mann" and the participle "gesehen" can be established. We allow non-projectivity by loosening the linking between dependency tree and domain structure: a modifier (e.g., "Mann") may not only be inserted into a domain associated with its direct head ("gesehen"), but also into a domain of a transitive head ("hat"), which we will call the positional head.

The possibility of inserting a word into a domain of some transitive head raises the questions of how to require contiguity (as needed in most cases), and how to limit the distance between the governor and the modifier in the case of discontinuity. From a descriptive viewpoint, the syntactic construction is often cited as determining the possibility and scope of discontinuities (Bhatt, 1990; Matthews, 1981). In PS-based accounts, the construction is represented by phrasal categories, and extraction is limited by bounding nodes (e.g., Haegeman (1994), Becker et al. (1991)). In dependency-based accounts, the construction is represented by the dependency relation, which is typed or labelled to indicate constructional distinctions which are configurationally defined in PSG. Given this correspondence, it is natural to employ dependencies in the description of discontinuities as follows: for each modifier of a certain head, a set of dependency types is defined which may link the direct head and the positional head of the modifier ("gesehen" and "hat", resp.). If this set is empty, both heads are identical and a contiguous attachment results. The impossibility of extraction from, e.g., a finite verb phrase may follow from the fact that the dependency embedding finite verbs, propo, may not appear on any path between a direct and a positional head.4

3Note that each phrasal level in PS-based trees defines a scope for linear precedence rules, which only apply to sister nodes.
4One reviewer pointed out that some verbs may allow extractions, i.e., that this restriction is lexical, not universal. This fact can easily be accommodated, because the possibility of discontinuity (and the dependency types across which the modifier may be extracted) is described in the lexical entry of the verb. In fact, a universal restriction could not even be stated, because the treatment is completely lexicalized.
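The licensing condition just stated reduces to a path check. Below is a minimal sketch of it; the data layout (a map from each word to its head and dependency label) is our own illustrative assumption.

```python
# Extraction licensing: a modifier may float from its direct head to a
# positional head only if every dependency on the connecting path
# belongs to the modifier's licensed set of dependency types.
def path_labels(heads, direct_head, positional_head):
    """Dependency labels from the direct head up to the positional head;
    `heads` maps a word to (its head, the label of that dependency)."""
    labels, node = [], direct_head
    while node != positional_head:
        node, label = heads[node]
        labels.append(label)
    return labels

def float_licensed(heads, direct_head, positional_head, allowed):
    return all(l in allowed
               for l in path_labels(heads, direct_head, positional_head))

# "den Mann" attaches to a domain of "hat" across the vpart dependency:
heads = {"gesehen": ("hat", "vpart")}
print(float_licensed(heads, "gesehen", "hat", {"vpart"}))  # True
print(float_licensed(heads, "gesehen", "hat", set()))      # False: only
# a contiguous attachment (direct head = positional head) is licensed.
```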
4 The Description Language

This section sketches a logical language describing the dependency structure. It is based on modal logic and owes much to the work of Blackburn (1994). As he argues, standard Kripke models can be regarded as directed graphs with node annotations. We will use this interpretation to represent dependency structures. Dependencies and the mapping from dependency tree to order domain structure are described by modal operators, while simple properties such as word class, features, and cardinality of order domains are described by modal propositions.

4.1 Model Structures

In the following, we assume a set of words W, ordered by a precedence relation ≺, a set of dependency types D, a set of atomic feature values A, and a set of word classes C. We define a family of dependency relations Rd ⊆ W × W, d ∈ D, and for convenience abbreviate the union of all Rd, d ∈ D, as RD.

Def: A dependency tree is a tuple (W, wr, RD, VA, VC), where RD forms a tree over W rooted in wr, VA: W → 2^A maps words to sets of features, and VC: W → C maps words to word classes.

Def: An order domain (over W) m is a set of words from W where for all w1, w2, w3 ∈ W: (w1 ≺ w2 ≺ w3 ∧ w1 ∈ m ∧ w3 ∈ m) → w2 ∈ m.

Def: An order domain structure (over W) M is a set of order domains where for all m, m' ∈ M: m ∩ m' = ∅ ∨ m ⊆ m' ∨ m' ⊆ m.

Def: A dependency structure T is a tuple (W, wr, RD, VA, VC, M, VM), where (W, wr, RD, VA, VC) is a dependency tree, M is an order domain structure over W, and VM maps words to order domain sequences. Additionally, we require four more conditions: (1) each word w is contained in exactly one of the domains from VM(w); (2) all domains in VM(w) are pairwise disjoint; (3) each word (except wr) is contained in at least two domains, one of which is associated with a (transitive) head; and (4) the (partial) ordering of domains (as described by VM) is consistent with the precedence of the words contained in the domains (see (Bröker, 1997) for more details).

4.2 The Language LD

Fig. 3 defines the logical language LD used to describe dependency structures. Although they have been presented differently, dependency structures can easily be rewritten as (multimodal) Kripke models: the dependency relation Rd is represented as the modality ⟨d⟩, and the mapping from a word to its ith order domain as the modality □i.5 All other formulae denote properties of nodes and can be formulated as unary predicates, most evidently for word class and feature assignment. For the precedence predicates <• and <δ, there are inverses >• and >δ. For presentation, the relation places ⊆ W × W has been introduced, which holds between two words iff the first argument is the positional head of the second argument. A more elaborate definition of dependency structures and LD defines two more dimensions: a feature graph mapped off the dependency tree much like the proposal of Blackburn (1994), and a conceptual representation based on terminological logic, linking content words with reference objects and dependencies with conceptual roles.

5The modality □i can be viewed as an abbreviation composed of a mapping from a word to its ith order domain and from that domain to all its elements.
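To illustrate the Kripke-model reading of LD, the following toy satisfaction checker covers two formula types, word-class atoms and the dependency modality ⟨d⟩. The model encoding is our own minimal assumption, not the paper's system.

```python
# A toy satisfaction relation for two kinds of L_D formulae.
def sat(model, w, formula):
    kind = formula[0]
    if kind == "class":                # T,w |= c  iff  c = VC(w)
        return model["VC"][w] == formula[1]
    if kind == "dep":                  # T,w |= <d>phi iff some d-dependent
        _, d, phi = formula            # w' of w satisfies phi
        return any(sat(model, w2, phi)
                   for (w1, lab, w2) in model["R"] if w1 == w and lab == d)
    raise ValueError(kind)

model = {"VC": {"hat": "Vfin", "gesehen": "Vpart"},
         "R": [("hat", "vpart", "gesehen")]}
# "hat" satisfies Vfin and <vpart>Vpart:
print(sat(model, "hat", ("class", "Vfin")) and
      sat(model, "hat", ("dep", "vpart", ("class", "Vpart"))))  # True
```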
^Vw' • m : (w = w' Vw -< w')) :¢~ ~3w',w",w ''' • W : places(w',w) Aplaces(w', w") A w'" R6w A w'" "< w $6 :¢~ 3w',w" • ~42 : wRvwA places(w", w) A w" R;w' o !{ w'• (,,,11^ ] o~single :¢t, w' ~Bw" : (w"RT)w'n,, < 1 w" e - k obfilled I 1 t:3~a :¢* Vw' • Oi(V.M(w)) : T,w' k a ~¢A¢ :¢~,T,w~¢andT, w~¢ --,¢ :¢¢, not T, w ~ ¢ Figure 3: Syntax and Semantics of Ev Formulae Vfin ~ ol(single A filled) A OLinitial [1] A O L (middle A norel) [2] A 0 3 single A D L (final A norel) [3] A V2 ¢~ (middleA <, A[3~norel) [4] A VEnd ¢~ (middleA >,) [5] A Vl ¢~ (initial A norel) [6] Figure 4: Domain Description of finite verbs "hat" A Vfin [7] A (subj)("Junge" A 1"0) [8] A(vpart) ("gesehen" A S0 [9] A ~final A >{subj,obj} [i0] A (obj)("Mann" A t{vpart})) [11] Figure 5: Hierachical Structure final fields by infinite verb parts such as sepa- rable prefixes or participles. We will generalize this field structure to verb-initial and verb-final clauses as well, without going into the linguistic motivation due to space limits. The formula in Fig.4 states that all finite verbs (word class Vfin 6 C) define three order domains, of which the first requires exactly one element with the feature initial [1], the second allows an unspecified number of elements with features middle and norel [2], and the third al- lows at most one element with features final and norel [3]. The features initial, middle, and final 6 .4 serve to restrict placement of certain phrases in specific fields; e.g., no reflex- ive pronouns can appear in the final field. The norel 6 .4 feature controls placement of a rela- tive NP or PP, which may appear in the initial field only in verb-final clauses. The order types are defined as follows: In a verb-second clause (feature V2), the verb is placed at the beginning (<.) of the middle field (middle), and the el- ement of the initial field cannot be a relative phrase (o~norel in [4]). In a verb-final clause (VEnd), the verb is placed at the end (>.) of the middle field, with no restrictions for the initial field (relative clauses and non-relative verb-final clauses are subordinated to the noun and con- junction, resp.) [5]. In a verb-initial clause (Vl), the verb occupies the initial field [6]. The formula in Fig.5 encodes the hierarchical structure from Fig.1 and contains lexical restric- tions on placement and extraction (the surface is used to identify the word). Given this, the order type of "hat" is determined as follows: The par- ticiple may not be extraposed (~final in [10]; a restriction from the lexical entry of "hat"), it must follow "hat" in d2. Thus, the verb can- not be of order type VEnd, which would require it to be the last element in its domain (>. in [5]). "Mann" is not adjacent to "gesehen", but may be extracted across the dependency vpart (${vpart} in [11]), allowing its insertion into a domain defined by "hat". It cannot precede "hat" in d2, because "hat" must either begin d2 (due to <. in [4]) or itself go into dl. But dl al- lows only one phrase (single), leaving only the domain structure from Fig.2, and thus the order type V2 for "hat". 178 6 Comparison to PSG Approaches One feature of word order domains is that they factor ordering alternatives from the syntactic tree, much like feature annotations do for mor- phological alternatives. Other lexicalized gram- mars collapse syntactic and ordering informa- tion and are forced to represent ordering alterna- tives by lexical ambiguity, most notable L-TAG (Schabes et al., 1988) and some versions of CG (Hepple, 1994). 
6 Comparison to PSG Approaches

One feature of word order domains is that they factor ordering alternatives out of the syntactic tree, much like feature annotations do for morphological alternatives. Other lexicalized grammars collapse syntactic and ordering information and are forced to represent ordering alternatives by lexical ambiguity, most notably L-TAG (Schabes et al., 1988) and some versions of CG (Hepple, 1994). This is not necessary in our approach, which drastically reduces the search space for parsing.

This property is shared by the proposal of Reape (1993) to associate HPSG signs with sequences of constituents, also called word order domains. Surface ordering is determined by the sequence of constituents associated with the root node. The order domain of a mother node is the sequence union of the order domains of the daughter nodes, which means that the relative order of elements in an order domain is retained, but material from several domains may be interleaved, resulting in discontinuities. Whether an order domain allows interleaving with other domains is a parameter of the constituent. This approach is very similar to ours in that order domains separate word order from the syntactic tree, but there is one important difference: word order domains in HPSG do not completely free the hierarchical structure from ordering considerations, because discontinuity is specified per phrase, not per modifier. For example, two projections are required for an NP, the lower one for the continuous material (determiner, adjective, noun, genitival and prepositional attributes) and the higher one for the possibly discontinuous relative clause. This dependence of hierarchical structure on ordering is absent from our proposal.

We may also compare our approach with the projection architecture of LFG (Kaplan & Bresnan, 1982; Kaplan, 1995). There is a close similarity of the LFG projections (c-structure and f-structure) to the dimensions used here (order domain structure and dependency tree, respectively). C-structure and order domains represent surface ordering, whereas f-structure and dependency tree show the subcategorization or valence requirements. What is more, these projections or dimensions are linked in both accounts by an element-wise mapping. The difference between the two architectures lies in the linkage of the projections or dimensions: LFG maps f-structure off c-structure. In contrast, the dependency relation is taken to be primitive here, and ordering restrictions are taken to be indicators or consequences of dependency relations (see also Bröker (1998b, 1998a)).

7 Conclusion

We have presented an approach to word order for DG which combines traditional notions (semantically motivated dependencies, topological fields) with contemporary techniques (logical description language, model-theoretic semantics). Word order domains are sets of partially ordered words associated with words. A word is contained in an order domain of its head, or may float into an order domain of a transitive head, resulting in a discontinuous dependency tree while retaining a projective order domain structure. Restrictions on the floating are expressed in a lexicalized fashion in terms of dependency relations. An important benefit is that the proposal is lexicalized without reverting to lexical ambiguity to represent order variation, thus profiting even more from the efficiency considerations discussed by Schabes et al. (1988). It is not yet clear what the generative capacity of such lexicalized discontinuous DGs is, but at least some index languages (such as a^n b^n c^n) can be characterized. Neuhaus & Bröker (1997) have shown that recognition and parsing of such grammars is NP-complete. A parser operating on the model structures is described in (Hahn et al., 1997).

References

Becker, T., A. Joshi & O. Rambow (1991). Long-distance scrambling and tree-adjoining grammar.
In Proc. 5th Conf. of the European Chapter of the ACL, pp. 21-26.
Bhatt, C. (1990). Die syntaktische Struktur der Nominalphrase im Deutschen. Studien zur deutschen Grammatik 38. Tübingen: Narr.
Blackburn, P. (1994). Structures, Languages and Translations: The Structural Approach to Feature Logic. In C. Rupp, M. Rosner & R. Johnson (Eds.), Constraints, Language and Computation, pp. 1-27. London: Academic Press.
Bröker, N. (1997). Eine Dependenzgrammatik zur Kopplung heterogener Wissenssysteme auf modallogischer Basis. Dissertation, Deutsches Seminar, Universität Freiburg.
Bröker, N. (1998a). How to define a context-free backbone for DGs: An experiment in grammar conversion. In Proc. of the COLING-ACL'98 workshop "Processing of Dependency-based Grammars". Montreal/CAN, Aug 15, 1998.
Bröker, N. (1998b). A Projection Architecture for Dependency Grammar and How it Compares to LFG. In Proc. 1998 Int'l Lexical-Functional Grammar Conference. (accepted as alternate paper) Brisbane/AUS: Jun 30-Jul 2, 1998.
Eisner, J. (1997). Bilexical Grammars and a Cubic-Time Probabilistic Parser. In Proc. of Int'l Workshop on Parsing Technologies, pp. 54-65. Boston/MA: MIT.
Gaifman, H. (1965). Dependency Systems and Phrase Structure Systems. Information and Control, 8:304-337.
Haegeman, L. (1994). Introduction to Government and Binding. Oxford/UK: Basil Blackwell.
Hahn, U., P. Neuhaus & N. Bröker (1997). Message-Passing Protocols for Real-World Parsing - An Object-Oriented Model and its Preliminary Evaluation. In Proc. Int'l Workshop on Parsing Technology, pp. 101-112. Boston/MA: MIT, Sep 17-21, 1997.
Hellwig, P. (1986). Dependency Unification Grammar. In Proc. 11th Int'l Conf. on Computational Linguistics, pp. 195-198.
Hepple, M. (1994). Discontinuity and the Lambek Calculus. In Proc. 15th Int'l Conf. on Computational Linguistics, pp. 1235-1239. Kyoto/JP.
Hudson, R. (1990). English Word Grammar. Oxford/UK: Basil Blackwell.
Hudson, R. (1993). Recent developments in dependency theory. In J. Jacobs, A. v. Stechow, W. Sternefeld & T. Vennemann (Eds.), Syntax. Ein internationales Handbuch zeitgenössischer Forschung, pp. 329-338. Berlin: Walter de Gruyter.
Kaplan, R. (1995). The formal architecture of Lexical-Functional Grammar. In M. Dalrymple, R. Kaplan, J. Maxwell & A. Zaenen (Eds.), Formal Issues in Lexical-Functional Grammar, pp. 7-27. Stanford University.
Kaplan, R. & J. Bresnan (1982). Lexical-Functional Grammar: A Formal System for Grammatical Representation. In J. Bresnan (Ed.), The Mental Representation of Grammatical Relations, pp. 173-281. Cambridge, MA: MIT Press.
Kruijff, G.-J. (1997). A Basic Dependency-Based Logical Grammar. Draft Manuscript. Prague: Charles University.
Maruyama, H. (1990). Structural Disambiguation with Constraint Propagation. In Proc. 28th Annual Meeting of the ACL, pp. 31-38. Pittsburgh/PA.
Matthews, P. (1981). Syntax. Cambridge Textbooks in Linguistics. Cambridge/UK: Cambridge Univ. Press.
McCord, M. (1990). Slot Grammar: A System for Simpler Construction of Practical Natural Language Grammars. In R. Studer (Ed.), Natural Language and Logic, pp. 118-145. Berlin, Heidelberg: Springer.
Mel'čuk, I. (1988). Dependency Syntax: Theory and Practice. Albany/NY: State Univ. Press of New York.
Mel'čuk, I. & N. Pertsov (1987). Surface Syntax of English: A Formal Model within the MTT Framework. Philadelphia/PA: John Benjamins.
Neuhaus, P. & N. Bröker (1997). The Complexity of Recognition of Linguistically Adequate Dependency Grammars. In Proc. 35th Annual Meeting of the ACL and 8th Conf. of the EACL, pp. 337-343. Madrid, July 7-12, 1997.
Reape, M. (1993). A Formal Theory of Word Order: A Case Study in West Germanic. Doctoral Dissertation. Univ. of Edinburgh.
Schabes, Y., A. Abeillé & A. Joshi (1988). Parsing Strategies with 'Lexicalized' Grammars: Application to TAGs. In Proc. 12th Int'l Conf. on Computational Linguistics, pp. 578-583.
Sgall, P., E. Hajičová & J. Panevová (1986). The Meaning of the Sentence in its Semantic and Pragmatic Aspects. Dordrecht/NL: D. Reidel.
Tesnière, L. (1959). Eléments de syntaxe structurale. Paris: Klincksieck.
1998
26
The Logical Structure of Binding

António Branco
DFKI and Univ. of Lisbon
Dep. Informática, Fac. Ciências, Campo Grande, 1700 Lisboa, Portugal
Antonio.Branco@di.fc.ul.pt

Abstract

A logical recasting of Binding Theory is performed as an enhancing step for the purpose of its full and lean declarative implementation. A new insight on sentential anaphoric processes is presented, which may suggestively be captured by the slogan: binding conditions are the effect of phase quantification on the universe of discourse referents.

Introduction

Due to its central role in natural language and its intriguing properties, reference and anaphor resolution has been a central topic for NLP research. Given the intensive attention devoted to this subject, it can however be said that sentential anaphor processing has been quite overlooked when compared to the amount of research effort put into tackling non-sentential anaphoric dependencies. This tends to be so because there seems to be a more or less implicit assumption that no substantial difference exists between the two processes.1

While this may arguably be true for the heuristics involved in picking out a given antecedent from a list of suitable candidates, a more subtle point asks to be made when we focus on the syntactic conditions which sentential anaphoric relations comply with, but from which non-sentential ones are exempt. In theoretical linguistics these grammatical conditions are grouped under the heading of Binding Theory. In computational linguistics, however, though there have been a few papers directly concerned with the implementation of this theory, mainstream research tends to disregard its conceptual, grammatical or practical modularity. When it comes to defining the algorithm for setting up the list of suitable candidates from which the antecedent should be chosen, binding conditions, holding just at the sentential level, are most often put on a par with any other kind of conditions, morphological, semantic, pragmatic, etc., which hold for anaphoric relations at both the sentential and the non-sentential level.

The interesting point to be made in this connection is that, if the modularity of grammatical knowledge is to be ensured in a sound reference resolution system, more attention should be paid to previous attempts at implementing Binding Theory. It would then become evident that this theory, in its current formulation, appears as a piece of formalised grammatical knowledge which however escapes a full and lean declarative implementation. In fact, implementation efforts concerning Binding Theory2 bring to light what tends to be eclipsed by mainstream clean theoretical formulations of it. Behind the apparent declarative aspect of its definition under the form of a set of binding principles (plus definitions of associated concepts, e.g. o-command, o-bound, local domain, etc.), there is a set of procedures which turn out to be an essential part of the theory: after parsing is completed, (i) indexation: assign indices to NPs; (ii) filtering: store the indexed tree if the indexation respects binding principles, reject it otherwise; (iii) recursion: repeat (i) with a new assignment until all possible assignments are exhausted.

1As entry points into the bibliography, vd. references in Grosz et al. (95) and Botley et al. (96).
2Vd. Chomsky (81), Correa (88), Ingria et al. (89), Fong (90), Giorgi et al. (90), Pianesi (91).
This sort of resistance to declarative encompassing is also apparent when one considers how Binding Theory is handled in grammatical theories developed on top of constraint-based formalisms and particularly concerned with computational implementability, like LFG or HPSG. As to HPSG, it has passed quite unnoticed that its Binding Theory is the only piece of the grammar fragment not encoded in its own formalism. In the Appendix of the foundational book (Pollard and Sag (94)), where the fragment of grammar developed along its 700 pp. is encoded in the adopted formalism, Binding Theory escapes such encoding. Bredenkamp (96) and Backofen et al. (96), elaborating subsequently on this issue, implied that some kind of essential limitation of the formalism might have been reached and that HPSG Binding Theory is still waiting to be accommodated into HPSG grammars. As to the LFG formulation of Binding Theory, it requires the integration of inside-out equations, a special-purpose extension to the general declarative formalism. And even though initial scepticism about their tractability was dissipated by Kaplan and Maxwell (88), the recent survey of Backofen et al. (96) reports that no implemented formalism, and no implemented grammar, is known to handle LFG Binding Theory.

In this connection, the central aim of the research presented here is to render possible a lean declarative implementation of Binding Theory in constraint-based formalisms without resorting to specific complex mechanisms. This involves two steps. First, as a sort of enhancing step back, a new account of Binding Theory is set up. Second, by the exhibition of an example, the new shape of the theory is shown to support full declarative implementation in the basic HPSG formalism. Due to space constraints, this paper is mostly concerned with the first step, while the latter receives just a rough sketch in the last section, being developed in future papers.

1 Preliminaries

1.1 The Square of Opposition

Recent cross-linguistic research, e.g. Xue, Pollard and Sag (94) and Branco and Marrafa (97), has shown that the binding ability of long-distance reflexives is not reducible to recursive concatenation of short-distance relations, as has been assumed in GB accounts, but that it is ruled by a fourth binding principle:

(1) Principle Z
    An o-commanded anaphoric pronoun must be o-bound.

(2)  Z: x is bound --(compatible)-- B: x is locally free
     (diagonals Z-C and B-A: contradictory; A implies Z, C implies B)
     C: x is free --(contrary)-- A: x is locally bound

This new perspective on long-distance reflexives had an important impact on the whole shape of Binding Theory. Branco and Marrafa noted further that the four principles can be arranged in a classical Aristotelian square of oppositions, as in (2). This suggests that Binding Theory may have an unsuspected underlying quantificational structure. The present paper aims at showing that there is such a structure and at determining its basic lines.

1.2 Phase Quantification

Barwise and Cooper (81)'s seminal work gave rise to a fruitful research tradition where Generalised Quantifier Theory has been applied to the analysis of natural language quantification. These authors suggested that a universal characterisation of NL nominal quantification could be formally given by means of formal properties defined in that theory. The property to live on was postulated as being the most prominent one, admittedly constituting the common specific nature of all nominal quantifiers.
Later, Loebner (87) suggested a criterion to ascertain the quantificational nature of natural language expressions in general. That is the property that, for a one-place second-order operator Q expressed by a given expression, there be a corresponding dual operator. This duality-based perspective on the essence of natural language quantification permitted the extension of quantification well beyond the classic cases of nominal quantification supported by the determiners all, some, most, many, etc., namely by also covering the realms of temporality and possibility. Moreover, items like still/already and others (enough/too, scaling adjectives, many/few, etc.), though they do not lend themselves to be straightforwardly analysed in terms of set quantification, can also be arranged in a square of duality. The formalization of the semantics of these aspectual items by Loebner led to the enlarging of the notion of quantification through the introduction of the new concept of phase quantification. He noted that still and already express duals and that they are corners of a square of duality. Let P be 'she is asleep' and ¬P 'she is awake', durative propositions which are the arguments of the semantic operators corresponding to already and still. Then:

(3) She is already asleep iff it is not the case that she is still awake.
    ALREADY P iff ¬ STILL ¬P

Further similar tests can be made in order to show that these aspectual items enter the following square of duality:

(4)      still --(inner negation)-- not yet
         (diagonals: dual; vertical edges: outer negation)
     no longer --(inner negation)-- already

In order to get a formalization of (4), Loebner noted that already should be taken as conveying the information that there is a phase of not-P which has started before a given reference time t0 and might be followed by at most one phase P which reaches till t0. This can be displayed on a time axis by means of the diagram in (5).

(5) [The paper displays four time-axis diagrams around the parameter point t0: for still P, a positive semiphase P starting before t0 and holding at t0, followed by at most one phase ¬P; for not yet P, the mirror image, with ¬P holding at t0; for no longer P and already P, the arrangements where t0 falls into the second semiphase (¬P and P, respectively).]

Similar diagrams for the meaning of the other aspectual phase quantifiers of this square of duality are easily interpretable. Inner negation results in exchanging the positive and the negative semiphases, while outer negation concerns the decision whether the parameter t0 falls into the first or the second semiphase. Phase quantifiers in general (already, scaling adjectives, etc.) were thus characterised as requiring two ingredients: (i) a property P, which defines a positive phase in a sequence of two opposite phases; (ii) a parameter point. The four types of quantifiers differ just in presupposing that either the positive or the negative semiphase comes first, and in stating that the parameter point falls into the first or into the second semiphase.

Next, Loebner showed that the semantics of phase quantifiers sketched in the diagrams above can be formalised in such a way that a square of duality formed by the generalised quantifiers λX.some'(D,X) / λX.every'(D,X) turns out to be subjacent to the square of duality of already/still. In order to do it, he just needed the auxiliary notion of the starting point of the relevant semiphase. This is rendered as the infimum of the set of the closest predecessors of the parameter point pt which form an uninterrupted linear sequence with property P, or ¬P (termed GSI(R,pt) by Loebner):

(6) GSI(R,pt) =def inf{x | x < pt ∧ R(x) ∧ ∀y((x < y < pt ∧ R(y)) → ∀z(x < z < y → R(z)))}
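Before turning to the formal renderings in (7), here is a small sketch of (6) and of the resulting still/already semantics over a discrete, linearly ordered set of points. The encoding of the scale as a range of integers, and the fallback value when the GSI set is empty, are our illustrative assumptions.

```python
# Loebner's phase quantification over discrete, linearly ordered points.
def gsi(R, pt, points):
    """(6): infimum of the closest predecessors of pt that form an
    uninterrupted linear R-sequence."""
    xs = [x for x in points
          if x < pt and R(x)
          and all(all(R(z) for z in points if x < z < y)
                  for y in points if x < y < pt and R(y))]
    return min(xs) if xs else pt    # fallback for an empty set: our hack

def still(P, t0, points):      # every'(x. GSI(P,t0) < x <= t0, P)
    return all(P(x) for x in points if gsi(P, t0, points) < x <= t0)

def already(P, t0, points):    # some'(x. GSI(not-P,t0) < x <= t0, P)
    notP = lambda x: not P(x)
    return any(P(x) for x in points if gsi(notP, t0, points) < x <= t0)

# 'asleep' holds from point 5 on; at t0 = 7 she is already asleep, and
# also still asleep, since the P-phase reaches t0:
asleep = lambda t: t >= 5
pts = range(10)
print(already(asleep, 7, pts), still(asleep, 7, pts))   # True True
```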
The semantics of the four phase quantifiers above can then be rendered in the following way, making pt = t0 for the parameter point and R = P or R = ¬P:

(7) still:      λP.every'(λx.(GSI(P,t0) < x ≤ t0), P)
    already:    λP.some'(λx.(GSI(¬P,t0) < x ≤ t0), P)
    not yet:    λP.no'(λx.(GSI(¬P,t0) < x ≤ t0), P)
    no longer:  λP.not every'(λx.(GSI(P,t0) < x ≤ t0), P)

2 The Logic of Binding

Taking Loebner's view on quantification, our goal in this section is to make apparent the quantificational structure of binding by showing that, on a par with the square of opposition of (2), binding principles form a square of duality. We are thus going to argue that binding principles are but the reflex of the phase-quantificational nature of the corresponding nominal expressions: reflexives, pronouns, long-distance reflexives and R-expressions will be shown to express phase quantifiers acting on the grammatical obliqueness axis.

2.1 Phase quantification ingredients

In order to show that the above-referred nominals express phase quantifiers, the relevant components involved in phase quantification should be identified. The relevant scale here is not the continuous linear order of moments of time, as for still/already, but a discrete partial order made of discourse referents (cf. DRT) arranged according to the relative obliqueness of grammatical functions. In multiclausal constructions there is the corresponding subordination of different clausal obliqueness hierarchies (for the sake of comparability with the diagrams in (5), which involve the time arrow, Hasse diagrams for obliqueness are displayed with a 90-degree right turn):

(8) Kim said Lee saw Max.
    [Hasse diagram: k -- l -- m]

Note also that the relation "less oblique than" may not be linear:

(9) Kim said Lee, who saw Max, hit Norma.
    [Hasse diagram: k -- l, with l branching to m and to n]

The sequence of two opposite semiphases is defined by a property P. Contrary to what happens with already, where operator (quantifier) and operand (durative proposition) are rendered by different expressions, in binding phase quantification the operand P is also contributed by the nominal expressing the operator, i.e. expressing the binding phase quantifier. For a given nominal N, P is determined by the relative position of N in the scale. For a discourse referent r corresponding to N, semiphase P is a linear stretch containing only elements that are less than or equal to r in the obliqueness order, that is, discourse referents corresponding to nominals o-commanding N. Moreover, if semiphase P is presupposed to precede semiphase ¬P, P is such that the last successor in it is local wrt r; and if semiphase ¬P is presupposed to precede semiphase P, P is such that the first predecessor in it is local wrt r. In both cases the closest neighbour of semiphase ¬P has to be local wrt r, where the notion of locality has the usual sense given in the definition of the binding principles:

(10) P(x) iff_def x ≤ r ∧ ∀y[(¬P(y) ∧ (x ≺ y ∨ y ≺ x)) → x is local wrt r]

As to the parameter point, in binding phase quantification it is the discourse referent a which is the antecedent of r.

2.2 Binding phase quantifiers

We can now formalise the phase quantification subjacent to nominals. Let us start with an anaphoric expression N like himself:

(11) Kim said Lee thinks Maxi hit himselfi.
     *Kimi said Lee thinks Max hit himselfi.

     QA: λP.some'(λx.(GSI(¬P,a) < x ≤ a), P)
     [diagram: semiphase ¬P (x1, ..., k, l) followed by semiphase P, with the antecedent a in P]

N can thus be interpreted as presupposing that a semiphase ¬P precedes a semiphase P and requiring that the parameter point occur in the latter; that is, the antecedent a is to be found in semiphase P, among the discourse referents corresponding to the local o-commanders of r, the discourse referent corresponding to N.3 This is captured by the definition of the phase quantifier QA. Satisfaction of QA(P) obtains iff, between the bottom of the uninterrupted linear sequence ¬P most close to the parameter point/antecedent a and a inclusive, there is at least one discourse referent in P. Given ¬P.P, this amounts to requiring that a be in P, and that a be a local o-commander of r.
!atter~ ttiat is, the antecedent a ~s to be round in .s.em~pn~e r among the discourse referents corresponding to Uae local o- commanders of r, the disc referent correspgnd.ing tq N 3. This is captured by_ the definition oI tide pna:s.e .quantifier QA. Sanstaction. of QA(P) obtains iH between the bottom ot tide uninterrupted linear sequence -t-' most close to me parameter p.omt/antecedent a and a inclusive there is at'least one ~liscourse referent in P. Given -P.P, this amounts to requiring that a be in P, and that a be a local o- commander of r. 3 Next, it is then easy to see how the phase quantificational force or a pronominal expression N should be formalised: (12) *Kim said Lee thinks Max/hit him/. Kim said Lee/thinks Max hit him/. QB:XP.no'(Xx.(GSI(~P, a) < x < a),P) _p ~a ~:~ p Here the parameter point a occurs in semiphase -P, which amounts to the antecedent being picked 9utside t,n.e set of loc~ o-commanders. QB(P). Is satisnea itt no discourse reterent between the bottom ot me uninterrupted, linear sequence -P re.ore c.lose to the oarameter i~olnt/antecedent a and a Inclusive Is In r'. Given.-P.P, this.amount.s to requiring that a be ,in semiplmse ~1 ~, and mat a be not a local o-commanoer of r. Like in diagram of (11), ~P is taken here as the complement set oIP. All discourse reterents which are not "local o-commanders of r are in it, either o- commanding r or not. Notice that set -P includes also discourse referents Xl.vX n introduced by previous sentences or the extra-linguistic context, which in constructions similar to (l'2)b. accounts for possible aeictic readings of the pronoun. Below, when studying .R.-expressions~we,wlll see why. the possible non linearity ot me ot~li.qu.eness orizler will led. us. to consider that -1: is sljglatly more complex than just me complement se_t ot r'._ Coming now to long-distance reflexives, ruled by. the fourth binding principle in (1), we get the following formalisation: (13)[O amigo de Kim]i disse que ele pr6prioi acha que Lee wu Max. (Portuguese) [Kim's friend]/saidLDRi thinks Lee saw Max. *[O amigo de Kimi] disse que ele pr6prioi acha que Lee viu Max. [Kim'si friend] said LDRi thinks Lee saw Max. Qz:XP.every'(X x.(GSI(P, a)<x_<a),P) ~a P _p O I xn k 3For the sake of simplicity, agreement requirements between N and its antecedent are overlooked here. 183 Here, like for short-distance reflexives in (11), a is required to occur in P though the presupposition now is .t.13at semiphase P is fpIlowe.d by~ ~m.ipnase ?,r'. laKmg.mto account the de/m.mon oI t- m t~u), me antecedent of N is thus required to Dean o-comma3a.ger Qocal or n.ot) of N. Thesemantics PL P.13ase quantiner ~Z ~s such tpat, tor QZ(r') to .De saUsned, between me bottom oI the uninterrupted linear sequence V more close to the parameter point/antecei[lent a and a inclusive every ..discourse referent is in P. This amounts to requmng that a be in semiphase P, and that a be an o-commander or r. Finally R-expressions call to be formalised as the fourth phase quantifier of (7): (14) [Kim'si friend] said Kimi thinks Lee saw Max. *[Kim's friend]/said Kimi thinks Lee saw Max. Qc:hP.not every'(Xx.(GSI(P,a)<x< a),P) P -P o m 0 I xn k a) The parameter point a is required to occur in -P, which means that a cannot be an o-commander (local or not) of r. This renders the same condition as expressed by Principle C, that R-expressions be free, though it also encodes an uncommon assumption agout the referential autonomy of R-expressions. 
Here, like for other more obvious dependent reference• nominals, the interpretation .of l,~-expressions is. taken as being dependent on the interpretation ot other expressions or on the salience of discourse referents made available by the communicative context. Taking an extreme example in order to support the plausibility of. this view and awkwardly, ab'6reviate a deep philosophical discussion, one should notice that even a proper name is not a unique label of a given individual~ once knowing who is the person called John (out ot those we know that are named John) depends on the context. Note that like in previous diagrams, -P is taken in (14) just as the complement set of P. However, QC asks finally for a serious ponderation o) this and a more accurate definition of -P for phase quantincation in non linear orders, where it is possible that not all elements are comparable.. . . . . . Por t~c(P ) to be satisfied, between the t~ottom o[ i- and the parameter point/ antecedent a inclusive not every discourse referent is in P. Since we have here the p.resupposition P.-P, andgiven P is an uninterru.pted linear sequence, this would-amount to requiring that a be in -P. It is wortb noting then that i.f we keep -P simply as the complemen.t set of r', the interpretation o! ~- expressions is however not adequately predicted by ~c(P). (15) John said Kimj thinks Lee saw Max. P -P a-...~l o n P -P ~: ......... m Let D be Ix: GSI(P,a)<x<. a}~t.he domain of .Qc. Taking (15)b., it is easy to check that in constructiops like (.IS)a, D is always empty. In fact, it is not the case that G S.I(P,a)<a as a=xl- is not comparab.le to any element ot 1-', andafortiori it is not comparable to the bottom.of P. Consequently, every'(D,P) is trivially true whatever discourse referent xn we take as antecedent for r, and not every'(D,P) is trivially false. The interpretation of.(1.5)a, sketched in (15)b. would thus be incorrectly ruled out. What these considerations seem then to suggest is that, when ph.ase quantification opera.tes o.n non linear orders, negatmn ot the ooerand r' ~s slightly more complex ttian sim_ple Boplean negation rendering the complement se.t.W..e are thus.taugm tla.at negation qf.P involves also the lilting ot the comolement set o~ L', P_L, with _1_ equal to r, the top of P, when P.-P . It is easy to check with diagra..m (15)c. that this specification of-P makes it possible to satisfy Qc(P) in exactly the correct constructions. 2.3 The Binding Square of Duality Fol!owing Loebner's claim that logical duality is the cardinal property to recognise the quant~hcational character 9f nat.ural language expressions, we are thus led to the vmw that the interpretaUon or pon quantincational dennite nominals ~s...ruled by their phase quantincational Iorce over the obliqueness order. 
Since the defining formulas of binding quantifiers result from (7) just by assigning P the definition in (10) and taking the parameter point pt to be the antecedent a, it is with no surprise that we get the following square of duality for binding quantifiers:

(16) [Square of duality: QZ and QA are duals, as are QB and QC; QZ/QB and QA/QC are related by inner negation; QZ/QC and QA/QB by outer negation.]

3 Consequences

This new conception of binding seems to have important consequences not only in terms of the understanding of dependent reference mechanisms captured by Binding Theory but also in terms of our conception of generalised quantification in natural language, of the twofold semantic capacity of nominal expressions, referential and quantificational, and maybe even of the nature of grammar devices. Here we cannot do but limit ourselves to hint how a few central issues usually associated to binding are handled under this new viewpoint, before we proceed to briefly consider its consequences for the implementation of Binding Theory in constraint-based grammars.

3.1 Further insights into binding...

Parameterization. It is well known that though binding principles are assumed to hold universally in all languages, the final "grammatical geometry" between nominals and their antecedents may be different from language to language. Dalrymple (93) pointed out that this is due to language-specific conditions impinging upon (i) the eligibility of the antecedent (whether it is a subject or not) and (ii) the range of the local domain (whether it is finite, tensed, etc.). As to (i), Branco and Marrafa (97) showed that it is a consequence of a lexical property of the predicates, whose obliqueness hierarchy may be either linear or non-linear. As to (ii), this variation may be accommodated in the definition of property P in (10), in particular in the definition of local w.r.t. r, to provide for each particular language. Both solutions are perfectly confluent with the usual standpoint that binding variations across languages are the result of parameterization.

Lexical gaps.4 It is also well known that although the four binding principles are claimed to be universal, there are languages which have not all the corresponding four types of nominals. For instance, English is not known to have long-distance reflexives. The answer for this becomes now quite simple: like what happens in other squares of duality, it is possible that not every corner of the square is lexicalized. Loebner (87) discusses the issue at length. In English, for instance, it is noted that the square of duality concerning deontic possibility involving right happens to have only two lexicalized corners, right and duty.

Exemption and logophoricity. Also worth considering here is the borderline case where the maximum shrink of semiphase P occurs, i.e. when P is the singleton whose sole element is r, the discourse referent whose interpretation is to be anchored by finding an antecedent for it. Given the definition of binding phase quantifiers, the maximum shrink of P into a singleton affects significantly only the quantifiers where the parameter point/antecedent a is to be found in P, namely QA and QZ. In these cases, for a to be in P and the quantification to be satisfied, a can only be r, r being thus its own antecedent.
Consequently, although the quantification is satisfied, a meaningful anchoring of the discourse referent r is still to be accomplished, since by the sole effect of quantification satisfaction r is just anchored to itself. Admittedly, an overarching interpretability requirement imposes that the significant anchoring of nominals be consummated, which induces in the present case an exceptional logophoric effect: for the anaphor (short or long-distance) to be interpreted, and given that satisfaction of its binding constraint is ensured, it should thus freely find an antecedent outside any specific restriction. This constitutes thus an explanation for the exemption restrictions in the definitions of Principles A and Z and the so-called logophoric effects associated to exempt anaphors. Restrictions which appeared until now to be mere stipulations receive in this approach a principled justification.

3.2 ... for a lean implementation

The new conception of Binding Theory presented in this paper is currently being integrated in an HPSG grammar implemented in ProFIT 1.54. Space limits restrict us here to a very brief rationale of that implementation, which will be fully presented in future papers. The interesting point to note in this connection is that the new insight into binding phenomena elicited by the discovery of their quantificational nature seems to constitute a breakthrough for the desideratum of giving Binding Theory a lean declarative implementation. Adopting a principle-based semantics in line with Frank and Reyle (95), the central goal is not anymore to filter coindexations between NPs in post-processing but rather to identify the relevant sets of discourse referents against which satisfaction of the binding phase quantification expressed by NPs is checked.

4 Though it is empirically not necessary, for the sake of uniformity, when -P.P, the order-theoretic dual of this specification of -P can be assumed.

In practical terms that involves, first, collecting discourse referents into set values of specific features, requiring a minor extension to the HPSG feature declaration. Second, given the possible non-local nature of the elements of a given set, in order to avoid termination problems, some mechanism of delaying constraint satisfaction has to be ensured.

Conclusions

The research reported here presents a cogent argument for the quantificational nature of sentential dependent reference relations among nominals. This radically new conception of binding appears as a decisive step towards a full lean declarative encompassing of Binding Theory in constraint-based grammars. It may have also opened new intriguing directions for the research on natural language generalised quantification, on the apparent twofold semantic capacity of nominals, referential and quantificational, or on the nature of grammar devices.

Acknowledgements

Special thanks are due to Palmira Marrafa and Hans Uszkoreit for their advice and discussion and to Berthold Crysmann for his detailed comments.

References
Chomsky (81), Lectures on Government and Binding, Foris, Dordrecht.
Correa (88), "A Binding Rule for Government-binding Parsing", COLING'88 Proceedings.
Backofen, Becker, Calder, Capstick, Dini, Dürre, Erbach, Estival, Manandhar, Mineur, van Noord, Oepen and Uszkoreit (96), Final Report of EAGLES Formalisms Working Group.
Barwise and Cooper (81), Generalized Quantifiers and Natural Language, L&P 4, 159-219.
Botley, Glass, McEnery and Wilson, eds.
(96), Proceedings of Discourse Anaphora and Resolution Colloquium, Lancaster University.
Branco and Marrafa (97), "Long-Distance Reflexives and the Binding Square of Opposition", 4th International Conf. on HPSG.
Bredenkamp (96), Towards a Binding Theory for HPSG, PhD dissertation, Univ. of Essex.
Dalrymple (93), The Syntax of Anaphoric Binding, CSLI, Stanford.
Erbach (95), ProFIT 1.54 User's Guide, DFKI.
Fong (90), "Free Indexation: Combinatorial Analysis and a Compositional Algorithm", Proceedings of ACL Meeting, 105-110.
Frank and Reyle (95), "Principle Based Semantics for HPSG", Proceedings of EACL'95 Meeting.
Giorgi, Pianesi and Satta (90), "A Computational Approach to Binding Theory", Proceedings of COLING'90, 1-6.
Grosz, Joshi and Weinstein (95), "Centering: A Framework for Modelling the Local Coherence of Discourse", Computational Linguistics 21.
Ingria and Stallard (89), "A Computational Mechanism for Pronominal Reference", Proceedings of ACL Meeting, 262-271.
Kaplan and Maxwell (88), "An Algorithm for Functional Uncertainty", Proc. of COLING'88.
Loebner (87), "Quantification as a Major Module of Natural Language Semantics", in Groenendijk, Jongh and Stokhof, eds., Studies in DRT and the Theory of Generalized Quantifiers, Foris, Dordrecht.
Pianesi (91), "Indexing and Referential Dependencies within Binding Theory", Proceedings of EACL Conference, 39-44.
Pollard and Sag (94), Head-Driven Phrase Structure Grammar, CSLI, Stanford.
Xue, Pollard and Sag (94), "A New Perspective on Chinese Ziji", Proceedings of the West Coast Conference on Formal Linguistics, vol. 13, CSLI, Stanford.
Beyond N-Grams: Can Linguistic Sophistication Improve Language Modeling?

Eric Brill, Radu Florian, John C. Henderson, Lidia Mangu
Department of Computer Science, Johns Hopkins University, Baltimore, Md. 21218 USA
{brill,rflorian,jhndrsn,lidia}@cs.jhu.edu

Abstract
It seems obvious that a successful model of natural language would incorporate a great deal of both linguistic and world knowledge. Interestingly, state of the art language models for speech recognition are based on a very crude linguistic model, namely conditioning the probability of a word on a small fixed number of preceding words. Despite many attempts to incorporate more sophisticated information into the models, the n-gram model remains the state of the art, used in virtually all speech recognition systems. In this paper we address the question of whether there is hope in improving language modeling by incorporating more sophisticated linguistic and world knowledge, or whether the n-grams are already capturing the majority of the information that can be employed.

Introduction
N-gram language models are very crude linguistic models that attempt to capture the constraints of language by simply conditioning the probability of a word on a small fixed number of predecessors. It is rather frustrating to language engineers that the n-gram model is the workhorse of virtually every speech recognition system. Over the years, there have been many attempts to improve language models by utilizing linguistic information, but these methods have not been able to achieve significant improvements over the n-gram.

The insufficiency of Markov models has been known for many years (see Chomsky (1956)). It is easy to construct examples where a trigram model fails and a more sophisticated model could succeed. For instance, in the sentence The dog on the hill barked, the word barked would be assigned a low probability by a trigram model. However, a linguistic model could determine that dog is the head of the noun phrase preceding barked and therefore assign barked a high probability, since P(barked|dog) is high. Using different sources of rich linguistic information will help speech recognition if the phenomena they capture are prevalent and they involve instances where the recognizer makes errors.1

In this paper we first give a brief overview of some recent attempts at incorporating linguistic information into language models. Then we discuss experiments which give some insight into what aspects of language hold most promise for improving the accuracy of speech recognizers.

1 Linguistically-Based Models
There is a continuing push among members of the speech recognition community to remedy the weaknesses of linguistically impoverished n-gram language models. It is widely believed that incorporating linguistic concepts can lead to more accurate language models and more accurate speech recognizers. One of the first attempts at linguistically-based modelling used probabilistic context-free grammars (PCFGs) directly to compute language modeling probabilities (Jelinek (1992)). Another approach retrieved n-gram statistics from a handwritten PCFG and combined those statistics with traditional n-grams elicited from a corpus (Jurafsky (1995)).

1 This is one of the problems with perplexity as a measure of language model quality: if the better model simply assigns higher probability to the elements the recognizer already gets correct, the model will look better in terms of perplexity, but will do nothing to improve recognizer accuracy.
Research has been carried out in adaptively modifying language models using knowledge of the subject matter being discussed (Seymore (1997)). This research depends on the prevalence of jargon and domain-specific language. Linguistically motivated language models were investigated for two consecutive years at the Summer Speech Recognition Workshop, held at Johns Hopkins University. In 1995, experiments were run adding part-of-speech (POS) tags to the language models (Brill (1996)). In the 1996 Summer Speech Recognition Workshop, recognizer improvements were attempted by exploiting the long-distance dependencies provided by a dependency parse (Chelba (1997)). The goal was to exploit the predictive power of predicate-argument structures found in parse trees. In Della Pietra (1994) and Fong (1995), link grammars were used, again in an attempt to improve the language model by providing it with long-distance dependencies not captured in the n-gram statistics.2 Although much work has been done exploring how to create linguistically-based language models, improvement in speech recognizer accuracy has been elusive.

2 Experimental Framework
In an attempt to gain insight into what linguistic knowledge we should be exploring to improve language models for speech recognition, we ran experiments where people tried to improve the output of speech recognition systems and then recorded what types of knowledge they used in doing so. We hoped to both assess how much gain might be expected from very sophisticated models and to determine just what information sources could contribute to this gain.

People were given the ordered list of the ten most likely hypotheses for an utterance according to the recognizer. They were then asked to choose from the ten-best list the hypothesis that they thought would have the lowest word error rate, in other words, to try to determine which hypothesis is closest to the truth. Often, the truth is not present in the 10-best list. An example 5-best list from the Wall Street Journal corpus is shown in Figure 1. Four subjects were used in this experiment, and each subject was presented with 75 10-best lists from three different speech recognition systems (225 instances total per subject). From this experiment, we hoped to gauge what the upper bound is on how much we could improve upon state of the art by using very rich models.3

For our experiments, we used three different speech recognizers, trained respectively on Switchboard (spontaneous speech), Broadcast News (recorded news broadcasts) and Wall Street Journal data.4 The word error rates of the recognizers for each corpus are shown in the first line of Table 1. The human subjects were presented with the ten-best lists. Sentences within each ten-best list were aligned to make it easier to compare them. In addition to choosing the most appropriate selection from the 10-best list, subjects were also allowed to posit a string not in the list by editing any of the strings in the 10-best list in any way they chose. For each sample, subjects were asked to determine what types of information were used in deciding. This was done by presenting the subjects with a set of check boxes, and asking them to check all that applied. A list of the options presented to the human can be found in Figure 2.

2 For a more comprehensive review of the historical involvement of natural language parsing in language modelling, see Stolcke (1997).
Subjects were provided with a detailed explanation, as well as examples, for each of these options.5

3 Note that what we are really measuring is an upper bound on improvement under the paradigm of n-best postprocessing. This is a common technique in speech recognition, but it results in the postprocessor not having access to the entire set of hypotheses, or to full acoustic information.
4 HTK software was used to build all recognizers.
5 This program is available at http://www.cs.jhu.edu/labs/nlp

2 Net Human Improvement
The first question to ask is whether people are able to improve upon the speech recognizer's output by postprocessing the n-best lists. For each corpus, we have four measures: (1) the recognizer's word error rate, (2) the oracle error rate, (3) human error rate when choosing among the 10-best (human selection) and (4) human error rate when allowed to posit any word sequence (human edit). The oracle error rate is the upper bound on how well anybody could do when restricted to choosing between the 10 best hypotheses: the oracle always chooses the string with the lowest word error rate. Note that if the human always picked the highest-ranking hypothesis, then her accuracy would be equivalent to that of the recognizer. Below we show the results for each corpus, averaged across the subjects:

                 Switchboard  Broadcast News  Wall Street Journal
Recognizer       43.9%        27.2%           13.2%
Oracle           32.7%        22.6%           7.9%
Human Selection  42.0%        25.9%           10.1%
Human Edit       41.0%        25.2%           9.2%

Table 1: Word Error Rate: Recognizer, Oracle and Human

In the following table, we show the results as a function of what percentage of the difference between recognizer and oracle the humans are able to attain. In other words, when the human is not restricted to the 10-best list, he is able to advance 75.5% of the way between recognizer and oracle word error rate on the Wall Street Journal.

                 Switchboard  Broadcast News  Wall Street Journal
Human Selection  17.0%        28.3%           58.5%
Human Edit       25.9%        43.5%           75.5%

Table 2: Human Gain Relative to Recognizer and Oracle

There are a number of interesting things to note about these results. First, they are quite encouraging, in that people are able to improve the output on all corpora. As the accuracy of the recognizer improves, the relative human improvement increases. While people can attain over three-quarters of the possible word error rate reduction over the recognizer on Wall Street Journal, they are only able to attain 25.9% of the possible reduction in Switchboard. This is probably attributable to two causes. The more varied the language is in the corpus, the harder it is for a person to predict what was said. Also, the higher the recognizer word error rate, the less reliable the contextual cues will be which the human uses to choose a lower error rate string. In Switchboard, over 40% of the words in the highest ranked hypothesis are wrong. Therefore, the human is basing her judgement on much less reliable contexts in Switchboard than in the much lower word error rate Wall Street Journal, resulting in less net improvement. For all three corpora, allowing the person to edit the output, as opposed to being limited to pick one of the ten highest ranked hypotheses, resulted in significant gains: over 50% for Switchboard and Broadcast News, and 30% for Wall Street Journal. This indicates that within the paradigm of n-best list postprocessing, one should strongly consider methods for editing, rather than simply choosing.
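The gain figures in Table 2 follow mechanically from the error rates. Below is a minimal sketch of the computation; the WER implementation and data layout are ours, not the paper's, and corpus-level WER would aggregate edit errors over all utterances rather than average per-sentence rates.

```python
def wer(hyp, ref):
    """Word error rate of hyp against ref via word-level edit distance."""
    h, r = hyp.split(), ref.split()
    d = [[0] * (len(r) + 1) for _ in range(len(h) + 1)]
    for i in range(len(h) + 1):
        d[i][0] = i
    for j in range(len(r) + 1):
        d[0][j] = j
    for i in range(1, len(h) + 1):
        for j in range(1, len(r) + 1):
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + (h[i - 1] != r[j - 1]))
    return d[len(h)][len(r)] / max(len(r), 1)

def oracle_wer(nbest, ref):
    """The oracle always picks the lowest-WER hypothesis in the list."""
    return min(wer(h, ref) for h in nbest)

def relative_gain(rec, human, oracle):
    """Fraction of the recognizer-to-oracle gap attained by the human."""
    return (rec - human) / (rec - oracle)

# e.g. Wall Street Journal, human edit:
# relative_gain(13.2, 9.2, 7.9) = 4.0 / 5.3 = 0.755, i.e. the 75.5% above.
```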
In examining the relative gain over the recognizer the human was able to achieve as a function of sentence length, for the three different corpora, we observed that the general trend is that the longer the sentence is, the greater the net gain is. This is because a longer sentence provides more cues, both syntactic and semantic, that can be used in choosing the highest quality word sequence. We also observed that, other than the case of very low oracle error rate, the more difficult the task is, the lower the net human gain. So both across corpora and corpus-internally, we find this relationship between quality of recognizer output and ability of a human to improve upon recognizer output.

3 Usefulness of Linguistic Information
In discussions with the participants after they ran the experiment, it was determined that all participants essentially used the same strategy. When all hypotheses appeared to be equally bad, the highest-ranking hypothesis was chosen. This is a conservative strategy that will ensure that the person does no worse than the recognizer on these difficult cases. In other cases, people tried to use linguistic knowledge to pick a hypothesis they felt was better than the highest ranked hypothesis.

In Figure 2, we show the distribution of proficiencies that were used by the subjects. We show for each of the three corpora the percentage of 10-best instances for which the person used each type of knowledge (along with the ranking of these percentages), as well as the net gain over the recognizer accuracy that people were able to achieve by using this information source. For all three corpora, the most common (and most useful) proficiency was that of closed class word choice, for example confusing the words in and and, or confusing than and that. It is encouraging that although world knowledge was used frequently, there were many linguistic proficiencies that the person used as well. If only world knowledge accounted for the person's ability to improve upon the recognizer's output, then we might be faced with an AI-complete problem: speech recognizer improvements are possible, but we would have to essentially solve AI before the benefit could be realized.

One might conclude that although people were able to make significant improvements over the recognizer, we may still have to solve linguistics before these improvements could actually be realized by any actual computer system. However, we are encouraged that algorithms could be created that can do quite well at mimicking a number of proficiencies that contributed to the human's performance improvement. For instance, determiner choice was a factor in roughly 25% of the examples for the Wall Street Journal. There already exist algorithms for choosing the proper determiner with fairly high accuracy (Knight (1994)). Many of the cases involved confusion between a relatively small set of choices: closed class word choice, determiner choice, and preposition choice. Methods already exist for choosing the proper word from a fixed set of possibilities based upon the context in which the word appears (e.g. Golding (1996)).

Conclusion
In this paper, we have shown that humans, by postprocessing speech recognizer output, can make significant improvements in accuracy over the recognizer. The improvements increase with the recognizer's accuracy, both within a particular corpus and across corpora.
This demonstrates that there is still a great deal to gain without changing the recognizer's internal models, simply by operating on the recognizer's output. This is encouraging news, as it is typically a much simpler matter to do postprocessing than to attempt to integrate a knowledge source into the recognizer itself. We have presented a description of the proficiencies people used to make these improvements and how much each contributed to the person's success in improving over the recognizer accuracy. Many of the gains involved linguistic proficiencies that appear to be solvable (to a degree) using methods that have been recently developed in natural language processing. We hope that by honing in on the specific high-yield proficiencies that are amenable to being solved using current technology, we will finally advance beyond n-grams.

There are four primary foci of future work. First, we want to expand our study to include more people. Second, now that we have some picture as to the proficiencies used, we would like to do a more refined study at a lower level of granularity by expanding the repertoire of proficiencies the person can choose from in describing her decision process. Third, we want to move from what to how: we now have some idea what proficiencies were used and we would next like to establish, to the extent we can, how the human used them. Finally, eventually we can only prove the validity of our claims by actually using what we have learned to improve speech recognition, which is our ultimate goal.

References
Brill E, Harris D, Lowe S, Luo X, Rao P, Ristad E and Roukos S. (1996). A hidden tag model for language. In "Research Notes", Center for Language and Speech Processing, The Johns Hopkins University, Chapter 2.
Chelba C, Eagle D, Jelinek F, Jimenez V, Khudanpur S, Mangu L, Printz H, Ristad E, Rosenfeld R, Stolcke A and Wu D. (1997) Structure and Performance of a Dependency Language Model. In Eurospeech '97, Rhodes, Greece.
Chomsky N. (1956) Three models for the description of language. IRE Trans. on Inform. Theory, IT-2, 113-124.
Della Pietra S, Della Pietra V, Gillett J, Lafferty J, Printz H and Ures L. (1994) Inference and Estimation of a Long-Range Trigram Model. In Proceedings of the Second International Colloquium on Grammatical Inference, Alicante, Spain.
Fong E and Wu D. (1995) Learning restricted probabilistic link grammars. IJCAI Workshop on New Approaches to Learning for Natural Language Processing, Montreal.
Golding A and Roth D. (1996) Applying Winnow to Context-Sensitive Spelling Correction. In Proceedings of ICML '96.
Jelinek F, Lafferty J.D. and Mercer R.L. (1992) Basic Methods of Probabilistic Context-Free Grammars. In "Speech Recognition and Understanding. Recent Advances, Trends, and Applications", Volume F75, 345-360. Berlin: Springer Verlag.
Jurafsky D., Wooters C, Segal J, Stolcke A, Fosler E, Tajchman G and Morgan N. (1995) Using a stochastic context-free grammar as a language model for speech recognition. In ICASSP '95.
Knight K and Chandler I. (1994). Automated Postediting of Documents. Proceedings, Twelfth National Conference on Artificial Intelligence.
Seymore K. and Rosenfeld R. (1997) Using Story Topics for Language Model Adaptation. In Eurospeech '97, Rhodes, Greece.
Stolcke A. (1997) Linguistic Knowledge and Empirical Methods in Speech Recognition. In AI Magazine, Volume 18, 25-31, No. 4.
(1) people consider what they want but we won't comment he said
(2) people to say what they want but we won't comment he said
(3) people can say what they want but we won't comment he said
(4) people consider what they want them we won't comment he said
(5) people to say what they want them we won't comment he said

Figure 1: A sample 5-best list from the WSJ corpus. The third hypothesis is the correct one.

                                   Switchboard            Broadcast News         Wall Street Journal
                                   %clicked  WER red.     %clicked  WER red.     %clicked  WER red.
Argument Structure                 1.3 (14)  0.18 (10)    2.0 (12)  0.10 (11)    5.3 (12)  0.40 (8)
Closed Class Word Choice           25.7 (1)  1.62 (1)     40.2 (1)  1.14 (1)     46.4 (1)  2.40 (1)
Complete Sent. vs. Not             16.5 (2)  1.03 (2)     11.0 (6)  0.32 (8)     29.1 (2)  1.52 (2)
Determiner Choice                  1.7 (12)  0.06 (13)    17.6 (3)  0.41 (5)     24.8 (3)  0.93 (5)
Idioms/Common Phrases              3.5 (6)   0.19 (9)     6.6 (8)   0.35 (6)     8.6 (8)   0.57 (7)
Modal Structure                    2.6 (8)   0.13 (11)    3.0 (11)  0.09 (12)    2.3 (15)  0.04 (14)
Number Agreement                   4.4 (5)   0.32 (8)     3.7 (10)  0.22 (9)     4.0 (14)  0.08 (13)
Open Class Word Choice             8.3 (3)   0.71 (3)     19.3 (2)  0.60 (2)     9.6 (7)   0.40 (8)
Parallel Structure                 0.9 (15)  0.39 (6)     0.7 (15)  0.04 (15)    5.6 (10)  0.25 (11)
Part of Speech Confusion           2.2 (9)   0.06 (13)    2.0 (12)  0.07 (13)    7.6 (9)   0.04 (15)
Pred-Argument/Semantic Agreement   2.2 (9)   0.13 (11)    2.0 (12)  0.06 (14)    5.6 (10)  0.34 (10)
Preposition Choice                 3.5 (6)   0.58 (5)     17.3 (4)  0.44 (4)     15.9 (5)  0.82 (6)
Tense Agreement                    1.7 (12)  0.06 (13)    4.0 (9)   0.16 (10)    5.3 (12)  0.13 (12)
Topic                              2.2 (9)   0.39 (6)     9.3 (7)   0.34 (7)     15.2 (6)  1.03 (4)
World Knowledge                    6.1 (4)   0.65 (4)     12.3 (5)  0.57 (3)     19.5 (4)  1.35 (3)

Figure 2: Analysis of Proficiencies Used and their Effectiveness. For each corpus: the % of 10-best instances in which the proficiency was clicked (rank in parentheses), and the absolute WER reduction attained using it (rank in parentheses).
Classifier Combination for Improved Lexical Disambiguation

Eric Brill and Jun Wu
Department of Computer Science, Johns Hopkins University, Baltimore, Md. 21218 USA
{brill,junwu}@cs.jhu.edu

Abstract
One of the most exciting recent directions in machine learning is the discovery that the combination of multiple classifiers often results in significantly better performance than what can be achieved with a single classifier. In this paper, we first show that the errors made from three different state of the art part of speech taggers are strongly complementary. Next, we show how this complementary behavior can be used to our advantage. By using contextual cues to guide tagger combination, we are able to derive a new tagger that achieves performance significantly greater than any of the individual taggers.

Introduction
Part of speech tagging has been a central problem in natural language processing for many years. Since the advent of manually tagged corpora such as the Brown Corpus and the Penn Treebank (Francis (1982), Marcus (1993)), the efficacy of machine learning for training a tagger has been demonstrated using a wide array of techniques, including: Markov models, decision trees, connectionist machines, transformations, nearest-neighbor algorithms, and maximum entropy (Weischedel (1993), Black (1992), Schmid (1994), Brill (1995), Daelemans (1995), Ratnaparkhi (1996)). All of these methods seem to achieve roughly comparable accuracy.

The fact that most machine-learning-based taggers achieve comparable results could be attributed to a number of causes. It is possible that the 80/20 rule of engineering is applying: a certain number of tagging instances are relatively simple to disambiguate and are therefore being successfully tagged by all approaches, while another percentage is extremely difficult to disambiguate, requiring deep linguistic knowledge, thereby causing all taggers to err. Another possibility could be that all of the different machine learning techniques are essentially doing the same thing. We know that the features used by the different algorithms are very similar, typically the words and tags within a small window from the word being tagged. Therefore it could be possible that they all end up learning the same information, just in different forms.

In the field of machine learning, there have been many recent results demonstrating the efficacy of combining classifiers.1 In this paper we explore whether classifier combination can result in an overall improvement in lexical disambiguation accuracy.

1 Different Tagging Algorithms
The experiments described in this paper are based on four popular tagging algorithms, all of which have readily available implementations. These taggers are described below.

1.1 Unigram Tagging
This is by far the simplest of tagging algorithms. Every word is simply assigned its most likely part of speech, regardless of the context in which it appears. Surprisingly, this simple tagging method achieves fairly high accuracy. Accuracies of 90-94% are typical. In the unigram tagger used in our experiments, for words that do not appear in the lexicon we use a collection of simple manually-derived heuristics to guess the proper tag for the word.

1.2 N-Gram Tagging
N-gram part of speech taggers (Bahl (1976), Church (1992), Weischedel (1993)) are perhaps the most widely used of tagging algorithms. The basic model is that given a word sequence W, we try to find the tag sequence T that maximizes P(T|W).

1 See Dietterich (1997) for a good summary of these techniques.
This can be done using the Viterbi algorithm to find the T that maximizes P(T)*P(W|T). In our experiments, we use a standard trigram tagger using deleted interpolation (Jelinek (1980)) and used suffix information for handling unseen words (as was done in Weischedel (1993)).

1.3 Transformation-Based Tagging
In transformation-based tagging (Brill (1995)), every word is first assigned an initial tag. This tag is the most likely tag for a word if the word is known, and is guessed based upon properties of the word if the word is not known. Then a sequence of rules are applied that change the tags of words based upon the contexts they appear in. These rules are applied deterministically, in the order they appear in the list. As a simple example, if race appears in the corpus most frequently as a noun, it will initially be mistagged as a noun in the sentence: We can race all day long. The rule Change a tag from NOUN to VERB if the previous tag is a MODAL would be applied to the sentence, resulting in the correct tagging. The environments used for changing a tag are the words and tags within a window of three words. For our experiments, we used a publicly available implementation of transformation-based tagging,2 retrained on our training set.

1.4 Maximum-Entropy Tagging
The maximum-entropy framework is a probabilistic framework in which a model is found that is consistent with the observed data and is maximally agnostic with respect to all parameters for which no data exists. It is a nice framework for combining multiple constraints. Whereas the transformation-based tagger enforces multiple constraints by having multiple rules fire, the maximum-entropy tagger can have all of these constraints play a role at setting the probability estimates for the model's parameters. In Ratnaparkhi (1996), a maximum entropy tagger is presented. The tagger uses essentially the same parameters as the transformation-based tagger, but employs them in a different model. For our experiments, we used a publicly available implementation of maximum-entropy tagging,3 retrained on our training set.

2 Tagger Complementarity
All experiments presented in this paper were run on the Penn Treebank Wall Street Journal corpus (Marcus (1993)). The corpus was divided into approximately 80% training and 20% testing, giving us approximately 1.1 million words of training data and 265,000 words of test data. The test set was not used in any way in training, so the test set does contain unknown words. In Figure 1 we show the relative accuracies of the four taggers. In parentheses we include tagger accuracy when only ambiguous and unknown words are considered.4

Tagger      Accuracy (%)   Num Errors
Unigram     93.26 (87.9)   17856
Trigram     96.36 (93.8)   9628
Transform.  96.61 (94.3)   8980
Max. Ent.   96.83 (94.7)   8400

Figure 1: Relative Tagger Accuracies

Next, we examine just how different the errors of the taggers are. We define the complementary rate of taggers A and B as:

    Comp(A,B) = (1 - (# of common errors) / (# of errors in A)) * 100

2 http://www.cs.jhu.edu/~brill
3 http://www.cis.upenn.edu/~adwait
4 It is typical in tagging papers to give results in ambiguity resolution over all words, including words that are unambiguous. Correctly tagging words that only can have one label contributes to the accuracy. We see in Figure 1 that when accuracy is measured on truly ambiguous words, the numbers are lower. In this paper we stick to the convention of giving results for all words, including unambiguous ones.
In other words, Comp(A,B) measures the percentage of time when tagger A is wrong that tagger B is correct. In Figure 2 we show the complementary rates between the different taggers. For instance, when the maximum entropy tagger is wrong, the transformation-based tagger is right 37.7% of the time, and when the transformation-based tagger is wrong, the maximum entropy tagger is right 41.7% of the time.

          Unigram  Trigram  Transf.  MaxEnt
Unigram   0        32.1     20.0     34.9
Trigram   63.4     0        34.6     33.5
Transf.   59.7     39.0     0        37.7
MaxEnt    69.4     42.0     41.7     0

Figure 2: Comp(A,B). Row = A, Column = B

The complementary rates are quite high, which is encouraging, since this sets the upper bound on how well we can do in combining the different classifiers. If all taggers made the same errors, or if the errors that lower-accuracy taggers made were merely a superset of higher-accuracy tagger errors, then combination would be futile. In addition, a tagger is much more likely to have misclassified the tag for a word in instances where there is disagreement with at least one of the other classifiers than in the case where all classifiers agree. In Figure 3 we see, for instance, that while the overall error rate for the Maximum Entropy tagger is 3.17%, in cases where there is disagreement between the four taggers the Maximum Entropy tagger error rate jumps to 27.1%. And discarding the unigram tagger, which is significantly less accurate than the others, when there is disagreement between the Maximum Entropy, Transformation-based and Trigram taggers, the Maximum Entropy tagger error rate jumps up to 43.7%. These cases account for 58% of the total errors the Maximum Entropy tagger makes (4833/8400).

                                Max.Ent        Transform     Trigram       Unigram
Overall Error Rate              3.17% (8400)   3.39 (8980)   3.64 (9628)   6.74 (17856)
Error Rate When Disagreement    27.1 (5535)    29.9 (6115)   33.1 (6763)   73.4 (14991)
Error Rate When Disagreement
  (excluding unigram)           43.7 (4833)    49.0 (5413)   54.9 (6061)   -

Figure 3: Disagreement Is A Strong Indication of Error

Next, we check whether tagger complementarity is additive. In Figure 4, the first row shows the additive error rate an oracle could achieve on the test set if the oracle could pick between the different outputs of the taggers. For example, when the oracle can examine the output of the Maximum Entropy, Transformation-Based and Trigram taggers, it could achieve an error rate of 1.62%. The second row shows the additive error rate reduction the oracle could achieve. If the oracle is allowed to choose between all four taggers, a 55.5% error rate reduction is obtained over the Maximum Entropy tagger error rate. If the unigram output is discarded, the oracle improvement drops down to 48.8% over the Maximum Entropy tagger error rate.

                         MaxEnt  +Transf.  +Trigram  +Unigram
% of time all are wrong  3.17    1.98      1.62      1.41
% Oracle Improvement     -       37.7      48.8      55.5

Figure 4: Complementarity Is Additive

From these results, we can conclude that there is at least hope that improvements can be gained by combining the output of different taggers. We can also conclude that the improvements we expect are somewhat additive, meaning the more taggers we combine, the better results we should expect.

3 Tagger Combination
The fact that the errors the taggers make are strongly complementary is very encouraging. If all taggers made the exact same errors, there would obviously be no chance of improving accuracy through classifier combination. However, note that the high complementary rate between tagger errors in itself does not necessarily imply that there is anything to be gained by classifier combination.
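Quantifying that complementarity comes down to simple set arithmetic over the token positions each tagger gets wrong. A minimal sketch follows; the error-set encoding is our own, not the paper's.

```python
def comp(errors_a, errors_b):
    """Comp(A,B) of section 2: the % of A's errors on which B is correct.
    errors_a, errors_b: sets of token indices each tagger mislabels."""
    return (1 - len(errors_a & errors_b) / len(errors_a)) * 100

def oracle_errors(*error_sets):
    """Tokens every tagger gets wrong: the residual errors of an oracle
    free to pick any tagger's output per token (Figure 4, first row)."""
    out = error_sets[0].copy()
    for e in error_sets[1:]:
        out &= e
    return out

# On the data above, comp(maxent, transform) should give about 37.7,
# and len(oracle_errors(maxent, transform, trigram)) / n_tokens ~ 1.62%.
```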
We ran experiments to determine whether the outputs of the different taggers could be effectively combined. We first explored combination via simple majority-wins voting. Next, we attempted to automatically acquire contextual cues that learned both which tagger to believe in which contexts and what tags are indicated by different patterns of tagger outputs. Both the word environments and the tagger outputs for the word being tagged and its neighbors are used as cues for predicting the proper tag.

3.1 Simple Voting
The simplest combination scheme is to have the classifiers vote. The part of speech that appeared as the choice of the largest number of classifiers is picked as the answer, with some method being specified for breaking ties. We tried simple voting, using the Maximum Entropy, Transformation-Based and Trigram taggers. In case of ties (all taggers disagree), the Maximum Entropy tagger output is chosen, since this tagger had the highest overall accuracy (this was determined by using a subset of the training set, not by using the test set). The results are shown in Figure 5. Simple voting gives a net reduction in error of 6.9% over the best of the three taggers. This difference is significant at a >99% confidence level.

Tagger         Error Rate  Num Errors
Max Ent        3.2%        8400
Simple Voting  3.0%        7823

Figure 5: Results of Simple Voting
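A minimal sketch of this voting scheme, with ties broken in favour of the Maximum Entropy output; the per-token sequence encoding is assumed, not from the paper.

```python
from collections import Counter

def vote(maxent_tag, transform_tag, trigram_tag):
    """Majority-wins over three taggers; when all three disagree, the
    MaxEnt output wins, as it had the highest held-out accuracy."""
    counts = Counter([maxent_tag, transform_tag, trigram_tag])
    tag, n = counts.most_common(1)[0]
    return tag if n >= 2 else maxent_tag

def combine(maxent_seq, transform_seq, trigram_seq):
    # Apply the vote token by token over aligned tagger outputs.
    return [vote(m, tf, tg) for m, tf, tg in
            zip(maxent_seq, transform_seq, trigram_seq)]
```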
In the first case, given an instance in the test set, we find the most specific matching example in the training set, using the prespecified back-off ordering, and see what the most probable tag was in the training set for that environment. This is then chosen as the tag for the word. Note that this method is capable of learning to assign a tag that none of the taggers assigned. For instance, it could be the case that when the Unigram tagger thinks the tag should be X, and the Trigram and Maximum Entropy taggers think it should be Y, then the true tag is most frequently Z. In the second experiment, we use contexts to specify which tagger to trust, rather than which tag to output. Again the most specific context is found, but here we check which tagger has the highest probability of being correct in this particular context. For instance, we may learn that the Trigram tagger is most accurate at tagging the word up or that the Unigram tagger does best at tagging the word (Daelemans(1996)). 194 race when the word that follows is and. The results are given in Figure 7. We see that while simple voting achieves an error reduction of 6.9%, using contexts to choose a tag gives an error reduction of 9.8% and using contexts to choose a tagger gives an error reduction of 10.4%. Tagger Error Rate Num Errors Max Ent 3.2% 8400 Simple Voting 3.0% 7823 Context: Pick Tag 2.9% 7580 Context: Pick Tagger 2.8% 7529 Figure 7 Error Rate Reduction For Different Tagger Combination Methods Conclusion In this paper, we showed that the error distributions for three popular state of the art part of speech taggers are highly complementary. Next, we described experiments that demonstrated that we can exploit this complementarity to build a tagger that attains significantly higher accuracy than any of the individual taggers. In the future, we plan to expand our repertoire of base taggers, to determine whether performance continues to improve as we add additional systems. We also plan to explore different methods for combining classifier outputs. We suspect that the features we have chosen to use for combination are not the optimal set of features. We need to carefully study the different algorithms to find possible cues that can indicate where a particular tagger performs well. We hope that by following these general directions, we can further exploit differences in classifiers to improve accuracy in lexical disambiguation. References Black E., Jelinek F., Lafferty J, Mercer R. and Roukos S. (1992). Decision Tree Models Applied to the Labeling of Text with Parts-of-Speech. Darpa Workshop on Speech and Natural Language, Harriman, N.Y. Brill, E. (1995). Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging. Computational Linguistics. Daelemans W. (1996). MBT: A Memory-Based Part of Speech Tagger-Generator. Proceedings of the Workshop on Very Large Corpora, Copenhagen Dietterich T. (1997). Machine-Learning Research: Four Current Directions. AI Magazine. Winter 1997, pp97-136. Francis W. and Kucera H. (1982) Frequency analysis of English usage: Lexicon and grammar. Houghton Mifflin. Jelinek F and Mercer R (1980). Interpolated Estimation of Markov Source Parameters from Sparse Data. In Pattern Recognition in Practice, E. Gelsema and L. Kanal, Eds. Amsterdam: North- Holland. Marcus M., Santorini B. and Marcinkiewicz M. (1993) Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics. Ratnaparkhi A. (1996). 
A Maximum Entropy Part-of- Speech Tagger. Proceedings of the First Empirical Methods in Natural Language Processing Conference. Philadelphia, Pa. Schmid H. (1994). Part of Speech Tagging With Neural Networks. Proceedings of COLING, Yokohama, Japan. Weischedel R., Meteer M., Schwartz R., Ramshaw L. and Palmueci, J. (1993). Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics. 195
Towards a single proposal in spelling correction

Eneko Agirre, Koldo Gojenola, Kepa Sarasola
Dept. of Computer Languages and Systems, University of the Basque Country, 649 P.K., E-20080 Donostia, Basque Country
[email protected]

Atro Voutilainen
Department of General Linguistics, University of Helsinki, P.O. Box 4, FIN-00014 Helsinki, Finland
[email protected]

Abstract
The study presented here relies on the integrated use of different kinds of knowledge in order to improve first-guess accuracy in non-word context-sensitive correction for general unrestricted texts. State of the art spelling correction systems, e.g. ispell, apart from detecting spelling errors, also assist the user by offering a set of candidate corrections that are close to the misspelled word. Based on the correction proposals of ispell, we built several guessers, which were combined in different ways. Firstly, we evaluated all possibilities and selected the best ones in a corpus with artificially generated typing errors. Secondly, the best combinations were tested on texts with genuine spelling errors. The results for the latter suggest that we can expect automatic non-word correction for all the errors in a free running text with 80% precision and a single proposal 98% of the times (1.02 proposals on average).

Introduction
The problem of devising algorithms and techniques for automatically correcting words in text remains a research challenge. Existing spelling correction techniques are limited in their scope and accuracy. Apart from detecting spelling errors, many programs assist users by offering a set of candidate corrections that are close to the misspelled word. This is true for most commercial word-processors as well as the Unix-based spelling-corrector ispell1 (1993). These programs tolerate lower first guess accuracy by returning multiple guesses, allowing the user to make the final choice of the intended word. In contrast, some applications will require fully automatic correction for general-purpose texts (Kukich 1992).

It is clear that context-sensitive spelling correction offers better results than isolated-word error correction. The underlying task is to determine the relative degree of well-formedness among alternative sentences (Mays et al. 1991). The question is what kind of knowledge (lexical, syntactic, semantic, ...) should be represented, utilised and combined to aid in this determination. This study relies on the integrated use of three kinds of knowledge (syntagmatic, paradigmatic and statistical) in order to improve first guess accuracy in non-word context-sensitive correction for general unrestricted texts. Our techniques were applied to the corrections posed by ispell.

Constraint Grammar (Karlsson et al. 1995) was chosen to represent syntagmatic knowledge. Its use as a part of speech tagger for English has been highly successful. Conceptual Density (Agirre and Rigau 1996) is the paradigmatic component chosen to discriminate semantically among potential noun corrections. This technique measures "affinity distance" between nouns using Wordnet (Miller 1990). Finally, general and document word-occurrence frequency-rates complete the set of knowledge sources combined. We knowingly did not use any model of common misspellings, the main reason being that we did not want to use knowledge about the error source. This work focuses on language models, not error models (typing errors, common misspellings, OCR mistakes, speech recognition mistakes, etc.).

The system was evaluated against two sets of texts: artificially generated errors from the Brown corpus (Francis and Kucera 1967) and genuine spelling errors from the Bank of English.2 The remainder of this paper is organised as follows. Firstly, we present the techniques that will be evaluated and the way to combine them. Section 2 describes the experiments and shows the results, which are evaluated in section 3. Section 4 compares other relevant work in context sensitive correction.

1 Ispell was used for the spell-checking and correction candidate generation. Its assets include broad coverage and excellent reliability.
2 http://titania.cobuild.collins.co.uk/boe_info.html
This work focuses on language models, not error models (typing errors, common misspellings, OCR mistakes, speech recognition mistakes, etc.). The system was evaluated against two sets of texts: artificially generated errors from the Brown corpus (Francis and Kucera 1967) and genuine spelling errors from the Bank of EnglishL The remainder of this paper is organised as 2 http://titania.cobuild.collins.co.uk/boe_info.html 22 follows. Firstly, we present the techniques that will be evaluated and the way to combine them. Section 2 describes the experiments and shows the results, which are evaluated in section 3. Section 4 compares other relevant work in context sensitive correction. 1 The basic techniques 1.1 Constraint Grammar (CG) Constraint Grammar was designed with the aim of being a language-independent and robust tool to disambiguate and analyse unrestricted texts. CG grammar statements are close to real text sentences and directly address parsing problems such as ambiguity. Its application to English (ENGCG 3) resulted a very successful part of speech tagger for English. CG works on a text where all possible morphological interpretations have been assigned to each word-form by the ENGTWOL morphological analyser (Voutilainen and Heikkil~i 1995). The role of CG is to apply a set of linguistic constraints that discard as many alternatives as possible, leaving at the end almost fully disambiguated sentences, with one morphological or syntactic interpretation for each word-form. The fact that CG tries to leave a unique interpretation for each word-form makes the formalism adequate to achieve our objective. Application of Constraint Grammar The text data was input to the morphological analyser. For each unrecognised word, ispell was applied, placing the morphological analyses of the correction proposals as alternative interpretations of the erroneous word (see example 1). EngCG-2 morphological disambiguation was applied to the resulting texts, ruling out the correction proposals with an incompatible POS (cf. example 2). We must note that the broad coverage lexicons of ispell and ENGTWOL are independent. This caused the correspondence between unknown words and ispell's proposals not to be one to one with those of the EngCG-2 morphological analyser, especially in compound words. Such problems were solved considering that a word was correct if it was covered by any of the lexicons. 1.2 Conceptual Density (CD) 3 A recent version of ENGCG, known as EngCG-2, can be tested at http://www.conexor.fi/analysers.html The discrimination of the correct category is unable to distinguish among readings belonging to the same category, so we also applied a word- sense disambiguator based on Wordnet, that had already been tried for nouns on free-running text. In our case it would choose the correction proposal semantically closer to the surrounding context. It has to be noticed that Conceptual Density can only be applied when all the proposals are categorised as nouns, due to the structure of Wordnet. <our> "our" PRON PL ... <bos> ; INCORRECT OR SPELLING ERROR "boss" N S "boys" N P "bop" V S "Bose" <Proper> Example 1. Proposals and morphological analysis for the misspelling bos <our> "our" PRON PL ... <bos> ; INCORRECT OR SPELLING ERROR "boss" N S "boys" N P ,,t.t.nj~,__,, ~i "Bose" <Proper> <are> ... Example 2. 
CG leaves only nominal proposals 1.3 Frequency statistics (DF & BF) Frequency data was calculated as word-form frequencies obtained from the document where the error was obtained (Document frequency, DF) or from the rest of the documents in the whole Brown Corpus (Brown frequency, BF). The experiments proved that word-forms were better suited for the task, compared to frequencies on lemmas. 1.4 Other interesting heuristics (HI, H2) We eliminated proposals beginning with an uppercase character when the erroneous word did not begin with uppercase and there were alternative proposals beginning with lowercase. In example 1, the fourth reading for the misspelling "bos" was eliminated, as "Bose" would be at an editing distance of two from the misspelling (heuristic HI). This heuristic proved very reliable, and it was used in all experiments. After obtaining the first results, we also noticed that words with less than 4 characters like "si", "teh", ... (misspellings for "is" and "the") produced too many proposals, difficult to disambiguate. As they were one of the main error sources for our method, we also evaluated the results excluding them 23 (heuristic H2). 1.5 Combination of the basic techniques using votes We considered all the possible combinations among the different techniques, e.g. CG+BF, BF+DF, and CG+DF. The weight of the vote can be varied for each technique, e.g. CG could have a weight of 2 and BF a weight of 1 (we will represent this combination as CG2+BF1). This would mean that the BF candidate(s) will only be chosen if CG does not select another option or if CG selects more than one proposal. Several combinations of weights were tried. This simple method to combine the techniques can be improved using optimization algorithms to choose the best weights among fractional values. Nevertheless, we did some trials weighting each technique with its expected precision, and no improvement was observed. As the best combination of techniques and weights for a given set of texts can vary, we separated the error corpora in two, trying all the possibilities on the first half, and testing the best ones on the second half (c.f. section 2.1). 2 The experiments Based on each kind of knowledge, we built simple guessers and combined them in different ways. In the first phase, we evaluated all the possibilities and selected the best ones on part of the corpus with artificially generated errors. Finally, the best combinations were tested against the texts with genuine spelling errors. 2.1 The error corpora We chose two different corpora for the experiment. The first one was obtained by systematically generating misspellings from a sample of the Brown Corpus, and the second one was a raw text with genuine errors. While the first one was ideal for experimenting, allowing for automatic verification, the second one offered a realistic setting. As we said before, we are testing language models, so that both kinds of data are appropriate. The corpora with artificial errors, artificial corpora for short, have the following features: a sample was extracted from SemCor (a subset of the Brown Corpus) selecting 150 paragraphs at random. This yielded a seed corpus of 505 sentences and 12659 tokens. To simulate spelling errors, a program named antispell, which applies Damerau's rules at random, was run, giving an average of one spelling error for each 20 words (non-words were left untouched). Antispell was run 8 times on the seed corpus, creating 8 different corpora with the same text but different errors. 
Nothing was done to prevent two errors in the same sentence, and some paragraphs did not have any error. The corpus of genuine spelling errors, which we also call the "real" corpus for short, was magazine text from the Bank of English Corpus, which probably was not previously spell-checked (it contained many misspellings), so it was a good source of errors. Added to the difficulty of obtaining texts with real misspellings, there is the problem of marking the text and selecting the correct proposal for automatic evaluation. As mentioned above, the artificial-error corpora were divided in two subsets. The first one was used for training purposes 4. Both the second half and the "real" texts were used for testing. 2.2 Data for each corpora The two corpora were passed trough ispell, and for each unknown word, all its correction proposals were inserted. Table 1 shows how, if the misspellings are generated at random, 23.5% of them are real words, and fall out of the scope of this work. Although we did not make a similar counting in the real texts, we observed that a similar percentage can be expected. words ~rrors aon real-word errors ispell proposals ~vords with multiple proposals Long word errors (H2) proposals for long words (H2) long word errors (H2) with multiple proposals l~'half 2 ~ half"real" 47584 4758439732 1772 1811 1354 1403 365 7242 8083 1257 810 852 15~ 968 98C 33 2245 2313 80~ 430 425 124 Table 1. Number of errors and proposals For the texts with genuine errors, the method used in the selection of the misspellings was the following: after applying ispell, no correction was found for 150 words (mainly proper nouns and foreign words), and there were about 300 which 4 In fact, there is no training in the statistical sense. It just involves choosing the best alternatives for voting. 5 As we focused on non-word words, there is not a count of real-word errors. 24 Basic techniques random baseline random+H2 CG CG+H2 BF BF+H2 DF DF+H2 CD Combinations CG 1 +DF2 CGI+DF2+H2 CGI+DFI+BF1 100.00 54.36 1.00 71.49 71.59 1.00 99.85 86.91 2.33 71.42 95.86 1.70 96.23 86.57 1.00 68.69 92.15 1.00 90.55 89.97 1.02 62.92 96.13 1.01 6.06 79.27 1.01 CGI+DFI+BFI+H2 CGI+DFI+BFI+CD 1 CGI+DFI+BFI+CDI+H2 Table 2. Results for several combinations (I" 99.93 90.39 1.17 71.49 96.38 1.12 99.93 89.14 1.03 71.49 94.73 1.03 99.93 89.14 1.02 71.49 94.63 1.02 half) Basic techniques random baseline random+H2 CG CG+H2 BF BF+H2 DF DF+H2 CD Combinations CGI+DF2 CGI+DF2+H2 CGI+DFI+BF1 CGI+DFI+BFI+H2 CGI+DFI+BFI+CD1 CGI+DFI+BFI+CD+H2 100.00 23.70 1.00 52.70 36.05 1.00 99.75 78.09 3.23 52.57 90.68 2.58 93.70 76.94 1.00 48.04 81.38 1.00 84.20 81.96 1.03 38.48 89.49 1.03 8.27 75.28 1.01 99.88 83.93 1.28 52.70 91.86 1.43 99.88 81.83 1.04 52.70 88.14 1.06 99.88 81.83 1.04 52.70 87.91 1.05 multiple Table 3. Results on errors with proposals (1" half) were formed by joining two consecutive words or by special affixation rules (ispell recognised them correctly). This left 369 erroneous word-forms. After examining them we found that the correct word-form was among ispell's proposals, with very few exceptions. Regarding the selection among the different alternatives for an erroneous word-form, we can see that around half of them has a single proposal. This gives a measure of the work to be done. For example, in the real error corpora, there were 158 word-forms with 1046 different proposals. This means an average of 6.62 proposals per word. 
2.3 Results

We mainly considered three measures:

• coverage: the number of errors for which the technique yields an answer.
• precision: the number of errors with the correct proposal among the selected ones.
• remaining proposals: the average number of selected proposals.

2.3.1 Search for the best combinations

Table 2 shows the results on the training corpora. We omit many combinations that we tried, for the sake of brevity. As a baseline, we show the results when the selection is done at random. Heuristic H1 is applied in all the cases, while tests are performed with and without heuristic H2.

                      Cover. %  Prec. %  #prop.
Basic techniques
random baseline       100.00    54.36    1.00
random+H2             71.49     71.59    1.00
CG                    99.85     86.91    2.33
CG+H2                 71.42     95.86    1.70
BF                    96.23     86.57    1.00
BF+H2                 68.69     92.15    1.00
DF                    90.55     89.97    1.02
DF+H2                 62.92     96.13    1.01
CD                    6.06      79.27    1.01
Combinations
CG1+DF2               99.93     90.39    1.17
CG1+DF2+H2            71.49     96.38    1.12
CG1+DF1+BF1           99.93     89.14    1.03
CG1+DF1+BF1+H2        71.49     94.73    1.03
CG1+DF1+BF1+CD1       99.93     89.14    1.02
CG1+DF1+BF1+CD1+H2    71.49     94.63    1.02

Table 2. Results for several combinations (1st half)

If we focus on the errors for which ispell generates more than one correction proposal (cf. table 3), we get a better estimate of the contribution of each guesser. There were 8.26 proposals per word in the general case, and 3.96 when H2 is applied.

                      Cover. %  Prec. %  #prop.
Basic techniques
random baseline       100.00    23.70    1.00
random+H2             52.70     36.05    1.00
CG                    99.75     78.09    3.23
CG+H2                 52.57     90.68    2.58
BF                    93.70     76.94    1.00
BF+H2                 48.04     81.38    1.00
DF                    84.20     81.96    1.03
DF+H2                 38.48     89.49    1.03
CD                    8.27      75.28    1.01
Combinations
CG1+DF2               99.88     83.93    1.28
CG1+DF2+H2            52.70     91.86    1.43
CG1+DF1+BF1           99.88     81.83    1.04
CG1+DF1+BF1+H2        52.70     88.14    1.06
CG1+DF1+BF1+CD1       99.88     81.83    1.04
CG1+DF1+BF1+CD1+H2    52.70     87.91    1.05

Table 3. Results on errors with multiple proposals (1st half)

The results for all the techniques are well above the random baseline. The single best techniques are DF and CG. CG shows good results on precision, but fails to choose a single proposal. H2 raises the precision of all techniques at the cost of losing coverage. CD is the weakest of all techniques, and we did not test it with the other corpora. Regarding the combinations, CG1+DF2+H2 gets the best precision overall, but it only gets 52% coverage, with 1.43 remaining proposals. Nearly 100% coverage is attained by the combinations without H2, with highest precision for CG1+DF2 (83% precision, 1.28 proposals).

2.3.2 Validation of the best combinations

In the second phase, we evaluated the best combinations on another corpus with artificial errors. Tables 4 and 5 show the results, which agree with those obtained in 2.3.1. They show slightly lower percentages but always in parallel.

                      Cover. %  Prec. %  #prop.
Basic techniques
random baseline       100.00    53.67    1.00
random+H2             69.85     71.53    1.00
DF                    90.31     89.50    1.02
DF+H2                 61.51     95.60    1.01
Combinations
CG1+DF2               99.64     90.06    1.19
CG1+DF2+H2            69.85     95.71    1.22
CG1+DF1+BF1           99.64     87.77    1.03
CG1+DF1+BF1+H2        69.85     93.16    1.03
CG1+DF1+BF1+CD1       99.64     87.91    1.03
CG1+DF1+BF1+CD1+H2    69.85     93.27    1.02

Table 4. Validation of the best combinations (2nd half)

                      Cover. %  Prec. %  #prop.
Basic techniques
random baseline       100.00    23.71    1.00
random+H2             50.12     34.35    1.00
DF                    84.04     81.42    1.03
DF+H2                 36.32     87.66    1.04
Combinations
CG1+DF2               99.41     83.59    1.31
CG1+DF2+H2            50.12     90.12    1.50
CG1+DF1+BF1           99.41     79.81    1.05
CG1+DF1+BF1+H2        50.12     84.24    1.06
CG1+DF1+BF1+CD1       99.41     80.05    1.05
CG1+DF1+BF1+CD1+H2    50.12     84.47    1.06

Table 5. Results on errors with multiple proposals (2nd half)
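For concreteness, the three measures reported in Tables 2-7 can be computed as in the sketch below. The per-error records are a hypothetical representation of a guesser's output, not the authors' data format.

```python
# Coverage, precision and remaining proposals over a list of errors.
def evaluate(errors):
    """errors: list of (selected_proposals, correct_word) pairs, one per
    non-word error; selected_proposals may be empty when a technique
    makes no choice for that error."""
    answered = [e for e in errors if e[0]]
    coverage = len(answered) / len(errors)
    hits = [sel for sel, gold in answered if gold in sel]
    precision = len(hits) / len(answered) if answered else 0.0
    remaining = (sum(len(sel) for sel, _ in answered) / len(answered)
                 if answered else 0.0)
    return coverage, precision, remaining

errs = [(["boot", "boat"], "boat"), (["the"], "the"), ([], "said")]
print(evaluate(errs))  # coverage 0.67, precision 1.0, 1.5 proposals left
```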
2.3.3 Corpus of genuine errors

As a final step we evaluated the best combinations on the corpus with genuine typing errors. Table 6 shows the overall results obtained, and table 7 the results for errors with multiple proposals. For the latter there were 6.62 proposals per word in the general case (2 less than in the artificial corpus), and 4.84 when heuristic H2 is applied (one more than in the artificial corpus). These tables are commented on further in the following section.

                  Cover. %  Prec. %  #prop.
Basic techniques
random baseline   100.00    69.92    1.00
random+H2         89.70     75.47    1.00
CG                99.19     84.15    1.61
CG+H2             89.43     90.30    1.57
DF                70.19     93.05    1.02
DF+H2             61.52     97.80    1.00
BF                98.37     80.99    1.00
BF+H2             88.08     85.54    1.00
Combinations
CG1+DF2           100.00    87.26    1.42
CG1+DF2+H2        89.70     90.94    1.43
CG1+DF1+BF1       100.00    80.76    1.02
CG1+DF1+BF1+H2    89.70     84.89    1.02

Table 6. Best combinations ("real" corpus)

                  Cover. %  Prec. %  #prop.
Basic techniques
random baseline   100.00    29.75    1.00
random+H2         76.54     34.52    1.00
CG                98.10     62.58    2.45
CG+H2             75.93     73.98    2.52
DF                30.38     62.50    1.13
DF+H2             12.35     75.00    1.05
BF                96.20     54.61    1.00
BF+H2             72.84     60.17    1.00
Combinations
CG1+DF2           100.00    70.25    1.99
CG1+DF2+H2        76.24     75.81    2.15
CG1+DF1+BF1       100.00    55.06    1.04
CG1+DF1+BF1+H2    76.54     59.68    1.05

Table 7. Results on errors with multiple proposals ("real" corpus)

3 Evaluation of results

This section reviews the results obtained. The results for the "real" corpus are evaluated first, and the comparison with the other corpora comes later. Concerning the application of each of the simple techniques separately:6

• Any of the guessers performs much better than random.
• DF has a high precision (75%) at the cost of a low coverage (12%). The difference in coverage compared to the artificial error corpora (84%) is mainly due to the smaller size of the documents in the real error corpus (around 50 words per document). For medium-sized documents we expect a coverage similar to that of the artificial error corpora.
• BF offers lower precision (54%) with the gain of a broad coverage (96%).
• CG presents 62% precision with nearly 100% coverage, but at the cost of leaving many proposals (2.45).
• The use of CD works only with a small fraction of the errors, giving modest results. The fact that it was only applied a few times prevents us from making further conclusions.

6 If not explicitly noted, the figures and comments refer to the "real" corpus, table 7.

Combining the techniques, the results improve:

• The CG1+DF2 combination offers the best results in coverage (100%) and precision (70%) for all tests. As can be seen, CG raises the coverage of the DF method, at the cost of also increasing the number of proposals (1.9) per erroneous word. Had the coverage of DF been higher, the number of proposals for this combination would also have decreased, for instance, close to that of the artificial error corpora (1.28).
• The CG1+DF1+BF1 combination provides the same coverage with nearly one interpretation per word, but decreases precision to 55%.
• If full coverage is not necessary, the use of the H2 heuristic raises the precision at least 4% for all combinations.

When comparing these results with those of the artificial errors, the precisions in tables 2, 4 and 6 can be misleading. The reason is that the coverage of some techniques varies and the precision varies accordingly. For instance, coverage of DF is around 70% for real errors and 90% for artificial errors, while precisions are 93% and 89% respectively (cf. tables 6 and 2). This increase in precision is not due to a better performance of DF,7 but can be explained because the lower the coverage, the higher the proportion of errors with a single proposal, and therefore the higher the precision.

The comparison between tables 3 and 7 is more clarifying. The performance of all techniques drops in table 7. Precision of CG and BF drops 15 and 20 points. DF goes down 20 points in precision and 50 points in coverage. This latter degradation is not surprising, as the length of the documents in this corpus is only 50 words on average. Had we had access to medium-sized documents, we would expect a coverage similar to that of the artificial error corpora. The best combinations hold for the "real" texts, as before. The highest precision is for CG1+DF2 (with and without H2).
The number of proposals left is higher in the "real" texts than in the artificial ones (1.99 to 1.28). This can be explained because DF does not manage to cover all errors, and that leaves many CG proposals untouched.

7 In fact the contrary is deduced from tables 3 and 7.

We think that the drop in performance for the "real" texts was caused by different factors. First of all, we already mentioned that the size of the documents strongly affected DF. Secondly, the nature of the errors changes: the algorithm to produce spelling errors was biased in favour of frequent words, mostly short ones. We will have to analyse this question further, especially regarding the origin of the natural errors. Lastly, BF was trained on the Brown corpus of American English, while the "real" texts come from the Bank of English. Presumably, this could also have negatively affected the performance of these algorithms.

Looking back at table 6, the figures reveal what the output of the correction system would be. Either we get a single proposal 98% of the time (1.02 proposals left on average) with 80% precision for all non-word errors in the text (CG1+DF1+BF1), or we can get a higher precision of 90% with 89% coverage and an average of 1.43 proposals (CG1+DF2+H2).

4 Comparison with other context-sensitive correction systems

There is not much literature about automatic spelling correction with a single proposal. Menezo et al. (1996) present a spelling/grammar checker that adjusts its strategy dynamically, taking into account different lexical agents (dictionaries, ...), the user and the kind of text. Although no quantitative results are given, this is in accord with using document and general frequencies. Mays et al. (1991) present the initial success of applying word trigram conditional probabilities to the problem of context-based detection and correction of real-word errors.

Yarowsky (1994) experiments with the use of decision lists for lexical ambiguity resolution, using context features like local syntactic patterns and collocational information, so that multiple types of evidence are considered in the context of an ambiguous word. In addition to word-forms, the patterns involve POS tags and lemmas. The algorithm is evaluated on a missing accent restoration task for Spanish and French text, against a predefined set of a few words, giving an accuracy over 99%.

Golding and Schabes (1996) propose a hybrid method that combines part-of-speech trigrams and context features in order to detect and correct real-word errors. They present an experiment where their system has substantially higher performance than the grammar checker in MS Word, but its coverage is limited to eighteen particular confusion sets composed of two or three similar words (e.g.: weather, whether).

The last three systems rely on a previously collected set of confusion sets (sets of similar words or accentuation ambiguities). On the contrary, our system has to choose a single proposal for any possible spelling error, and it is therefore impossible to collect the confusion sets (i.e. sets of proposals for each spelling error) beforehand. We also need to correct as many errors as possible, even if the amount of data for a particular case is scarce.

Conclusion

This work presents a study of different methods that build on the correction proposals of ispell, aiming at giving a single correction proposal for misspellings. One of the difficult aspects of the problem is that of testing the results.
For that reason, we used both a corpus with artificially generated errors for training and testing, and a corpus with genuine errors for testing. Examining the results, we observe that the results improve as more context is taken into account. The word-form frequencies serve as a crude but helpful criterion for choosing the correct proposal. The precision increases as closer contexts, like document frequencies and Constraint Grammar, are incorporated.

From the results on the corpus of genuine errors we can conclude the following. Firstly, the correct word is among ispell's proposals 100% of the time, which means that all errors can be recovered. Secondly, the expected output from our present system is that it will correct the spelling errors automatically with either 80% precision at full coverage, or 90% precision at 89% coverage, leaving an average of 1.43 proposals.

Two of the techniques proposed, Brown Frequencies and Conceptual Density, did not yield useful results. CD only works for a very small fraction of the errors, which prevents us from making further conclusions. There are reasons to expect better results in the future. First of all, the corpus with genuine errors contained very short documents, which caused the performance of DF to degrade substantially. Further tests with longer documents should yield better results. Secondly, we collected frequencies from an American English corpus to correct British English texts. Once this language mismatch is solved, better performance should be obtained. Lastly, there is room for improvement in the techniques themselves. We knowingly did not use any model of common misspellings. Although we expect limited improvement, stronger methods to combine the techniques can also be tried.

Continuing with our goal of attaining a single proposal as reliably as possible, we will focus on short words and we plan to also include more syntactic and semantic context in the process by means of collocational information. This step opens different questions about the size of the corpora needed for accessing the data and the space needed to store the information.

Acknowledgements

This research was supported by the Basque Government, the University of the Basque Country and the CICYT (Comisión Interministerial de Ciencia y Tecnología).

References

Agirre E. and Rigau G. (1996) Word sense disambiguation using conceptual density. In Proc. of COLING-96, Copenhagen, Denmark.

Golding A. and Schabes Y. (1996) Combining trigram-based and feature-based methods for context-sensitive spelling correction. In Proc. of the 34th ACL Meeting, Santa Cruz, CA.

Ispell (1993) International Ispell Version 3.1.00, 10/08/93.

Francis W. N. and Kucera H. (1967) Computational Analysis of Present-Day American English. Brown Univ. Press.

Karlsson F., Voutilainen A., Heikkilä J. and Anttila A. (1995) Constraint Grammar: a Language Independent System for Parsing Unrestricted Text. Mouton de Gruyter.

Koskenniemi K. (1983) Two-level Morphology: A General Computational Model for Word-Form Recognition and Production. University of Helsinki.

Kukich K. (1992) Techniques for automatically correcting words in text. In ACM Computing Surveys, Vol. 24, N. 4, December, pp. 377-439.

Mays E., Damerau F. and Mercer R. (1991) Context based spelling correction. Information Processing & Management, Vol. 27, N. 5, pp. 517-522.

Miller G. (1990) Five papers on WordNet. Special Issue of the Int. Journal of Lexicography, Vol. 3, N. 4.

Menezo J., Genthial D. and Courtin J.
(1996) Reconnaissances pluri-lexicales dans CELINE, un système multi-agents de détection et correction des erreurs. NLP + IA 96, Moncton, N.B., Canada.

Yarowsky D. (1994) Decision lists for lexical ambiguity resolution. In Proceedings of the 32nd ACL Meeting, Las Cruces, NM, pp. 88-95.
Terminology Finite-State Preprocessing for Computational LFG

Caroline Brun
Xerox Research Centre Europe
6, chemin de Maupertuis
38240 Meylan, France
[email protected]

Abstract

This paper presents a technique to deal with multiword nominal terminology in a computational Lexical Functional Grammar. This method treats multiword terms as single tokens by modifying the preprocessing stage of the grammar (tokenization and morphological analysis), which consists of a cascade of two-level finite-state automata (transducers). We present here how we build the transducers to take terminology into account. We tested the method by parsing a small corpus with and without this treatment of multiword terms. The number of parses and the parsing time decrease without affecting the relevance of the results. Moreover, the method improves the perspicuity of the analyses.

1 Introduction

The general issue we are dealing with here is to determine whether there is an advantage to treating multiword expressions as single tokens, by recognizing them before parsing. Possible advantages are the reduction of ambiguity in the parse results, perspicuity in the structure of analyses, and reduction in parsing time. The possible disadvantage is the loss of valid analyses. There is probably no single answer to this issue, as there are many different kinds of multiword expressions. This work follows the integration1 of (French) fixed multiword expressions like a priori, and time expressions, like le 12 janvier 1988, in the preprocessing stage.

1 This integration has been done by Frédérique Segond.

Terminology is an interesting kind of multiword expression because such expressions are almost but not completely fixed, and there is an intuition that you won't lose many good analyses by treating them as single tokens. Moreover, terminology can be semi- or fully automatically extracted. Our goal in the present paper is to compare efficiency and syntactic coverage of a French LFG grammar on a technical text, with and without terminology recognition in the preprocessing stage. The preprocessing consists mainly of two stages: tokenization and morphological analysis. Both stages are performed by use of finite-state lexical transducers (Karttunen, 1994). In the following, we describe the insertion of terminology in these finite-state transducers, as well as the consequences of such an insertion on the syntactic analysis, in terms of number of valid analyses produced, parsing time and nature of the results. We are part of a project which aims at developing LFG grammars (Bresnan and Kaplan, 1982) in parallel for French, English and German (Butt et al., To appear). The grammar is developed in a computational environment called XLE (Xerox Linguistic Environment) (Maxwell and Kaplan, 1996), which provides automatic parsing and generation, as well as an interface to the preprocessing tools we are describing.

2 Terminology Extraction

The first stage of this work was to extract terminology from our corpus. This corpus is a small French technical text of 742 sentences (7000 words). As we have at our disposal parallel aligned English/French texts, we use the English translation to decide when a potential term is actually a term. The terminology we are dealing with is mainly nominal. To perform this extraction task, we use a tagger (Chanod and Tapanainen, 1995) to disambiguate the French text, and then extract the following syntactic patterns, N Prep N, N N, N A, A N, which are good candidates to be terms. These candidates are considered as terms when the corresponding English translation is a unit, or when their translation differs from a word-to-word translation. For example, we extract the following terms:

(1) vitesses rampantes (creepers)
    boîte de vitesse (gearbox)
    arbre de transmission (drive shaft)
    tableau de bord (instrument panel)

This simple method allowed us to extract a set of 210 terms which are then integrated in the preprocessing stages of the parser, as we are going to explain in the following sections. We are aware that this semi-automatic process works because of the small size of our corpus. A fully automatic method (Jacquemin, 1997) could be used to extract terminology. But the material extracted was sufficient to perform the experiment of comparison we had in mind.
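A rough sketch of the candidate extraction step: scan the POS-disambiguated text for the patterns N Prep N, N N, N A and A N. The tag names and the tagged input are our own simplification of the tagger's output, not the actual tagset.

```python
# Extract term candidates matching N Prep N, N N, N A and A N
# from a POS-tagged sentence (illustrative tags, not the real tagset).
def candidates(tagged):
    """tagged: list of (word, tag) pairs, tags in {'N', 'A', 'PREP', ...}."""
    out = []
    tags = [t for _, t in tagged]
    for i in range(len(tagged)):
        if i + 2 < len(tagged) and tags[i:i + 3] == ['N', 'PREP', 'N']:
            out.append(' '.join(w for w, _ in tagged[i:i + 3]))
        if i + 1 < len(tagged) and tags[i:i + 2] in (['N', 'N'], ['N', 'A'], ['A', 'N']):
            out.append(' '.join(w for w, _ in tagged[i:i + 2]))
    return out

sent = [('la', 'DET'), ('boîte', 'N'), ('de', 'PREP'), ('vitesse', 'N')]
print(candidates(sent))  # ['boîte de vitesse']
```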
To perform this extraction task, we use a tagger (Chanod and Tapanainen, 1995) to disambiguate the French text, and then extract the following syntactic patterns, N Prep N, N N, N A, A N, which are good candidates to be terms. These candidates 196 are considered as terms when the correspond- ing English translation is a unit, or when their translation differs from a word to word trans- lation. For example, we extract the following terms: (1) vitesses rampantes (creepers) boite de vitesse (gearbox) arbre de transmission (drive shaft) tableau de bord (instrument panel) This simple method allowed us to extract a set of 210 terms which are then integrated in the preprocessing stages of the parser, as we are go- ing to explain in the following sections. We are aware that this semi-automatic process works because of the small size of our corpus. A fully automatic method (Jacquemin, 1997) could be used to extract terminology. But the material extracted was sufficient to perform the experiment of comparison we had in mind. 3 Grammar Preprocessing In this section, we present how tokenization and morphological analysis are handled in the sys- tem and then how we integrate terminology pro- cessing in these two stages. 3.1 Tokenization The tokenization process consists of splitting an input string into tokens, (Grefenstette and Tapanainen, 1994), (Ait-Mokthar, 1997), i.e. determining the word boundaries. If there is one and only one output string the tokenization is said to be deterministic, if there is more than one output string, the tokenization is non deter- ministic. The tokenizer of our application is non deterministic (Chanod and Tapanainen, 1996), which is valuable for the treatment of some am- biguous input string 2, but in this paper we deal with fixed multiword expressions. The tokenization is performed by applying a two-level finite-state transducer on the input string. For example, applying this transducer on the sentence in 2 gives the following result, the token boundary being the @ sign. (2) Le tracteur est ~ l'arr~t. (The tractor is stationary.) Le@tracteur@est@~@l'@arr~t@.@ 2for example bien que in French In this particular case, each word is a token. But several words can be a unit, for exam- ple compounds, or multiword expressions. Here are some examples of the desired tokenization, where terms are treated as units: (3) La bore de vitesse est en deux sections. (the gearbox is in two sections) La'.~boRe de vitesse~est~en~deux@sections~.~ (4) Ce levier engage l'arbre de transmission. (This lever engages the drive shaft.) Ce@levier~engage@l'~arbre de transmission@.@ We need such an analysis for the terminology extracted from the text. This tokenization is realized in two logical steps. The first step is performed by the basic transducer and splits the sentence in a sequence of single word. Then a second transducer containing a list of multiword expressions is applied. It recognizes these ex- pressions and marks them as units. When more than one expression in the list matches the in- put, the longest matching expression is marked. We have included all the terms and their mor- phological variations in this last transducer, so that they are analyzed as single tokens later on in the process. The problem now is to associate a morphological analysis to these units. 3.2 Morphological Analysis The morphological analyzer used during the parsing process, just after the tokenization process, is a two-level finite-state transducer (Chanod, 1994). 
This lexical transducer links the surface form of a string to its morphological analysis, i.e. its canonical form and some char- acterizing morphological tags. Some examples are given in 5. (5) >veut vouloir+IndP+SG+P3+Verb >animaux animal+Masc+PL+Noun animal+Masc+PL+Adj The compound terms have to be integrated into this transducer. This is done by developing a local regular grammar which describes the com- pound morphological variation, according to the inflectional model proposed in (Kartunnen et al., 1992). The hypothesis is that only the two main parts 197 of the compounds are able to vary. i.e. N1 or A1, and N2 or A2. in the patterns .VI prep N2, N1 N2, A1 N2, and ,VI A2. In our corpus, we identify two kinds of morphological variations: • The first part varies in number: gyrophare de toit. gyrophares de toit rdgime moteur, rggirnes moteur • Both parts vary in number: roue motrice, roues motrices This is of course not general for French com- pounds; there are other variation patterns, how- ever it is reliable enough for the technical man- ual we are dealing with. Other inflectional schemes and exceptions are described in (Kar- tunnen et al., 1992) and (Quint, 1997), and can be easily added to the regular grammar if needed. A cascade of regular rules is applied on the dif- ferent parts of the compound to build the mor- phological analyzer of the whole compound. For example, roue rnotrice is marked with the dia- critic +DPL, for double plural and then, a first rule which just copies the morphological tags from the end to the middle is applied if the di- acritic is present in the right context: roue 0 0 -motrice+DPL+Fem+PL roue+Fem+PL-mortice 0 +Fem+PL Figure l: First rule A second rule is applied to the output of the preceding one and "realizes" the tags on surface. roue +Fem+PL-motrice +Fern +PL IIIIII roue 0 s -motrice 0 s Figure 2: Second rule The composition of these two layers gives us the direct mapping between surface inflected forms and morphological analysis. The same kind of rules are used when only the first part of the compound varies, but in this case the second rule just deletes the tags of the second word. The two morphological analyzers for the two variations are both unioned into the basic mor- phological analyzer for French we use for mor- phology. The result is the transducer we use fol- lowing tokenization and completing input pre- processing. An example of compound analysis is given here: (6) > roues motrices roue motrice+Fem+PL+Noun > r~gimes moteur r~gime moteur+Masc+PL+Noun The morphological analysis developed here for terminology allows multiword terms to be treated as regular nouns within the parsing pro- cess. Constraints on agreement remain valid, for example for relative or adjectival attachment. 4 Parsing with the Grammar One of the problems one encounters with pars- ing using a high level grammar is the multi- plicity of (valid) analyses one gets as a result. While syntactically correct, some of these anal- yses should be removed for semantic reasons or in a particular context. One of the challenges is to reduce the parse number, without affecting the relevance of the results and without remov- ing the desired parses. There are several ways to perform such a task, as described for example in (Segond and Copperman, 1997); we show here that finite state preprocessing for compounds is compatible with other possibilities. 
4.1 Experiment and Results The experiment reported here is very simple: it consists of parsing the technical corpus before and after integration of the morphological terms in the preprocessing components, using exactly the same grammar rules, and comparing the re- sults obtained. As the compounds are mainly nominal, they will be analyzed just as regular nouns by the grammar rules. For example, if we parse the NP: (7) La bofte de vitesse (the gearbox) before integration we get the structures shown in Fig.3, and after integration we get the simple structures shown in Fig.4. The following tables show the results obtained on the whole corpus: 198 DETP I D I la NP t NPdet NPpp NPap PP N P NP I I I bohe de NPdct I NPpp I NPap I N t vitesse "PRED 'boRe' SPEC [ SPEC-FORM PRED 'de< (t OBJ)>' 'vitesse'] oaJ |sPeC null I AD'IUNCT IPCASE de [ t, P,+...~ :+ J [. PSEM IOC PTYPE sem PERS 3 GEND fem NUM sg } Figure 3: Before Terminology Integration NP i NPdet DETP NPdet t I D NPpp I i la NPap I N I bolte de vitesse PHED 'botte de vitesse' ] /sPEc LS~c-Po~ d: LPBRS 3 GEND fem NUM sg Figure 4: After Terminology Integration • Before Terminology Integration: Number of Token Parse Time sentences Average average Average with terms 358 10.59 4.21 1.706 without terms 384 8.98 3.77 1.025 • After Terminology Integration: Number of Token Parse Time sentences average average Average with terms 358 8.86 2.79 0.987 without terms 384 8.98 3.77 1.025 The results are straightforward: one ob- serves a significant reduction in the number of parses as well as in the parsing time, and no change at all for sentences which do not contain technical terms. Looking closer at the results shows that the parses ruled out by this method are semantically undesirable. We discuss these results in the next section. 4.2 Analysis of Results The good results we obtained in terms of parse number and parsing time reduction were pre- dictable. As the nominal terminology groups flouns, prepositional phrases and adjectival / phrases together in lexical units, there is a sig- nificant reduction of the number of attachments. For example, the adjective hydraulique in the sentence: (8) Le voyant de levier de distributeur hydrau- lique s'allume. (The control valve lever warning light comes on.) can syntactically attach to voyant, levier, and distributeur which leads to 3 analyses. But in the domain the corpus is concerned with, dis- tributeur hydraulique is a term. Parsing it as a nominal unit gives only one parse, which is the desired one. Moreover, grouping terms in unit resolves some lexical ambiguity in the prepro- cessing stage: for example, in ceinture de sdcu- rit4, the word ceinture is a noun but may be a verb in other contexts. Parsing ceinture de sdcu- rite" as a nominal term avoids further syntactic disambiguation. Of course, one has to be very careful with the terminology integration in order to prevent a loss of valid analyses. In this experiment, no valid analyses were ruled out, because the semi- automatic method we used for extraction and integration allowed us to choose accurate terms. The reduction in the number of attachments is the main source of the decrease in the number of parses. As the number of attachments and of lexical ambiguities decreases, the number of grammar rules applied to compute the results decreases 199 as well. The parsing time is reduced as a conse- quence. The gain of efficiency is interesting in this ap- proach, but perhaps more valuable is the per- spicuity of the results. 
For example, in a trans- lation application it is clear that the represen- tation given in Fig. 4, is more relevant and di- rectly exploitable than the one given in Fig. 3, because in this case there is a direct mapping between the semantic predicate in French and English. 5 Conclusion and possible extensions The experiment presented in this paper shows the advantage of treating terms as single to- kens in the preprocessing stage of a parser. It is an example of interaction between low level finite-state tools and higher level grammars. Its shows the benefit from such' a cooperation for the treatment of terminology and its implica- tion on the syntactic parse results. One can imagine other interactions, for example, to use a "guesser ''3 transducer which can easily pro- cess unknown words, and give them plausible mophological analyses according to rules about productive endings. There are ambiguity sources other than termi- nology, but this method of ambiguity reduction is compatible with others, and improves the per- spicuity of the results. It has been shown to be valuable for other syntactic phenomena like time expressions, where local regular rules can compute the morphological variation of such ex- pressions. In general, lexicalization of (fixed) multiword expressions, like complex preposition or adverbial phrases, compounds , dates, numer- als, etc., is valuable for parsing because it avoids creation of "had hoc" and unproductive syntac- tic rules like ADV ..~ N Coord N to parse corps et rime {body and soul), and unusual lexicon entries like fur to get au fur et d mesure (as one goes along). Ambiguity reduction and better rele- vance of results are direct consequences of such a treatment. This experiment, which has been conducted on a small corpus containing few terms, will be ex- tended with an automatic extraction and inte- gration process on larger scale corpora and other languages. ZAlready used in tagging applications 6 Acknowledgments I would like to thanks my colleagues at XRCE, especially Max Copperman and Fr~d~rique Segond for their help and valuable comments. References Salah Ait-Mokthar. 1997. Du texte ascii au texte lemmatis6 : la pr6syntaxe en une seule 6tape. In Proceedings TALN97, Grenoble, France. Joan Bresnan and Ronald M. Kaplan. 1982. The Mental Representation of Grammatical Relations. The MIT Press, Cambridge, MA. Miriam Butt, Tracy Holloway King, Maria-Eugenia Nifio, and Fr~d~rique Segond. To appear. A Gram- mar Writer's Cookbook. CSLI Publications/Univer- sity of Chicago Press, Stanford University. Jean-Pierre Chanod and Pasi Tapanainen. 1995. Tagging French - comparing a statistical and a constraint-based method. In Proceedings of the Sev- enth Conference of the European Chapter, pages 149-156, Dublin. Association for Computational Linguistic. Jean-Pierre Chanod and Pasi Tapanainen. 1996. A non-deterministic tokeniser for finite-state parsing. In Proceedings ECAI96, Prague, Czech Republic. Jean-Pierre Chanod. 1994. Finite-state composi- tion of French verb morphology. Technical Report MLTT-0O4, Rank Xerox Research Centre, Grenoble. Gregory Grefenstette and Pasi Tapanainen. 1994. What is a word, what is a sentence? problems of tokenisation. In Proceedings of the Third Interna- tional Conference on Computational Lexicography, pages 79-87, Budapest. Research Institute for Lin- guistic Hungarian Academy of Sciences. Christian Jacquemin. 1997. Variation termi- nologique : Reconnaissance et acquistion automa- tique de termes et de leur variante en corpus. 
Habilitation à diriger les recherches.

Lauri Karttunen, Ronald M. Kaplan, and Annie Zaenen. 1992. Two-level morphology with composition. In Proceedings of the 14th International Conference on Computational Linguistics (COLING '92), August.

Lauri Karttunen. 1994. Constructing lexical transducers. In Proceedings of the 15th International Conference on Computational Linguistics (COLING '94), August.

John T. Maxwell and Ron Kaplan. 1996. An efficient parser for LFG. In Proceedings of LFG96, Grenoble, France.

Julien Quint. 1997. Morphologie à deux niveaux des noms du français. Master thesis, Xerox European Research Centre, Grenoble.

Frédérique Segond and Max Copperman. 1997. Lexicon filtering. In Proceedings of RANLP97, Budapest.
Named Entity Scoring for Speech Input

John D. Burger  David Palmer  Lynette Hirschman
The MITRE Corporation
202 Burlington Road
Bedford, MA 01730, USA
[email protected]  [email protected]  [email protected]

Abstract

This paper describes a new scoring algorithm that supports comparison of linguistically annotated data from noisy sources. The new algorithm generalizes the Message Understanding Conference (MUC) Named Entity scoring algorithm, using a comparison based on explicit alignment of the underlying texts, followed by a scoring phase. The scoring procedure maps corresponding tagged regions and compares these according to tag type and tag extent, allowing us to reproduce the MUC Named Entity scoring for identical underlying texts. In addition, the new algorithm scores for content (transcription correctness) of the tagged region, a useful distinction when dealing with noisy data that may differ from a reference transcription (e.g., speech recognizer output). To illustrate the algorithm, we have prepared a small test data set consisting of a careful transcription of speech data and manual insertion of SGML named entity annotation. We report results for this small test corpus on a variety of experiments involving automatic speech recognition and named entity tagging.

1. Introduction: The Problem

Linguistically annotated training and test corpora are playing an increasingly prominent role in natural language processing research. The Penn TREEBANK and the SUSANNE corpora (Marcus 93, Sampson 95) have provided corpora for part-of-speech taggers and syntactic processing. The Message Understanding Conferences (MUCs) and the Tipster program have provided corpora for newswire data annotated with named entities1 in multiple languages (Merchant 96), as well as for higher level relations extracted from text. The value of these corpora depends critically on the ability to evaluate hypothesized annotations against a gold standard reference or key. To date, scoring algorithms such as the MUC Named Entity scorer (Chinchor 95) have assumed that the documents to be compared differ only in linguistic annotation, not in the underlying text.2 This has precluded applicability to data derived from noisy sources.

For example, if we want to compare named entity (NE) processing for a broadcast news source, created via automatic speech recognition and NE tagging, we need to compare it to data created by careful human transcription and manual NE tagging. But the underlying texts--the recognizer output and the gold standard transcription--differ, and the MUC algorithm cannot be used. Example 1 shows the reference transcription from a broadcast news source, and below it, the transcription produced by an automatic speech recognition system. The excerpt also includes reference and hypothesis NE annotation, in the form of SGML tags, where <P> tags indicate the name of a person, <L> that of a location, and <O> an organization.3

We have developed a new scoring algorithm that supports comparison of linguistically annotated data from noisy sources. The new algorithm generalizes the MUC algorithm, using a comparison based on explicit alignment of the underlying texts. The scoring procedure then maps corresponding tagged regions and compares these according to tag type and tag extent. These correspond to the components currently used by the MUC scoring algorithm. In addition, the new algorithm also compares the content of the tagged region, measuring correctness of the transcription within the region, when working with noisy data (e.g., recognizer output).

2. Scoring Procedure

The scoring algorithm proceeds in five stages:

1. Preprocessing to prepare data for alignment
2. Alignment of lexemes in the reference and hypothesis files
3. Named entity mapping to determine corresponding phrases in the reference and hypothesis files
4. Comparison of the mapped entities in terms of tag type, tag extent and tag content
5. Final computation of the score

1 MUC "named entities" include person, organization and location names, as well as numeric expressions.
2 Indeed, the Tipster scoring and annotation algorithms require, as part of the Tipster architecture, that the annotation preserve the underlying text including white space. The MUC named entity scoring algorithm uses character offsets to compare the mark-up of two texts.
3 The SGML used in Tipster evaluations is actually more explicit than that used in this paper, e.g., <ENAMEX TYPE=PERSON> rather than <P>.
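Several of the sketches below need the entity spans encoded by this mark-up. The following small reader for the flat <P>/<L>/<O> notation is our own helper, not part of the scorer; it returns word-indexed (type, first, last) spans.

```python
# Read flat SGML-style NE mark-up into word-indexed entity spans.
import re

def parse_tagged(text):
    words, entities, open_tag = [], [], None
    for tok in re.findall(r'</?[PLO]>|\S+', text):
        if tok in ('<P>', '<L>', '<O>'):
            open_tag = (tok[1], len(words))          # (type, start index)
        elif tok in ('</P>', '</L>', '</O>'):
            entities.append((open_tag[0], open_tag[1], len(words) - 1))
            open_tag = None
        else:
            words.append(tok.upper())
    return words, entities                           # (type, first, last)

w, e = parse_tagged("AT THE <L> NEW YORK </L> DESK I'M <P> PHILIP BOROFF </P>")
print(e)  # [('L', 2, 3), ('P', 6, 7)]
```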
In addition, the new algorithm also compares the content of the tagged region, measuring correctness of the transcription within the region, when working with noisy data (e.g., recognizer output). 2. Scoring Procedure The scoring algorithm proceeds in five stages: 1. Preprocessing to prepare data for alignment 2. Alignment of lexemes in the reference and hypothesis files 3. Named entity mapping to determine corresponding phrases in the reference and hypothesis files 4. Comparison of the mapped entities in terms of tag type, tag extent and tag content 5. Final computation of the score t MUC "named entities" include person, organization and location names, as well as numeric expressions. -'Indeed, the Tipster scoring and annotation algorithms require, as part of the Tipster architecture, that the annotation preserve the underlying text including white space. The MUC named entity scoring algorithm uses character offsets to compare the mark-up of two texts. 3The SGML used in Tipster evaluations is actually more explicit than that used in this paper, e.g., <ENAMEX TYPE=PERSON> rather than <P>. 201 ref: ATTHE <L> NEW YORK </L> DESK I'M <P>PHILIPBOROFF</P> <L> hyp:ATTHE <L> NEWARK </L> BASK ON FILM FORUM Example 1: Aligned and tagged text 2.1 Stage 1: Preprocessing The algorithm takes three files as input: the human-transcribed reference file with key NE phrases, the speech recognizer output, which includes coarse-grained timestamps used in the alignment process, and the recogizer output tagged with NE mark-up. The first phase of the scoring algorithm involves reformatting these input files to allow direct comparison of the raw text. This is necessary be- cause the transcript file and the output of the speech recognizer may contain information in addition to the lexemes. For example, for the Broadcast News corpus provided by the Lin- guistic Data Consortium, 4 the transcript file contains, in addition to mixed-case text rep- resenting the words spoken, extensive SGML and pseudo-SGML annotation including seg- ment timestamps, speaker identification, back- ground noise and music conditions, and comments. In the preprocessing phase, this ref: AT THE NEW YORK DESK I'M PHILIP BOROFF hyp: AT THE NEWARK BASK ON FILM FORUM MISSES MISSISSIPPI </L> REPUBLICAN MISSES THE REPUBLICAN 2.2 Stage 2: Lexeme Alignment A key component of the scoring process is the actual alignment of individual lexemes in the reference and hypothesis documents. This task is similar to the alignment that is used to evaluate word error rates of speech recognizers: we match lexemes in the hypothesis text with their corresponding lexemes in the reference text. The standard alignment algorithm used for word error evaluation is a component of the NIST SCLite scoring package used in the Broadcast News evaluations (Garofolo 97). For each lexeme, it provides four possible classifications of the alignment: correct, substitution, insertion, and deletion. This classification has been successful for evaluating word error. However, it restricts alignment to a one-to-one mapping between hypothesis and reference texts. It is very common for multiple lexemes in one text to correspond to a single lexeme in the other, in addition to multiple-to-multiple correspon- MISSISSIPPI REPUBLICAN THE REPUBLICAN ref: AT THE NEW YORK DESK I'M PHILIP BOROFF MISSISSIPPI REPUBLICAN hyp: At" THE N~-~/~U< BASK ON FILM FORUM MISSES THE REPUBLICAN Example 2: SCLite alignment (top) vs. 
2.2 Stage 2: Lexeme Alignment

A key component of the scoring process is the actual alignment of individual lexemes in the hypothesis and reference documents. This task is similar to the alignment that is used to evaluate word error rates of speech recognizers: we match lexemes in the hypothesis text with their corresponding lexemes in the reference text.

The standard alignment algorithm used for word error evaluation is a component of the NIST SCLite scoring package used in the Broadcast News evaluations (Garofolo 97). For each lexeme, it provides four possible classifications of the alignment: correct, substitution, insertion, and deletion. This classification has been successful for evaluating word error. However, it restricts alignment to a one-to-one mapping between hypothesis and reference texts. It is very common for multiple lexemes in one text to correspond to a single lexeme in the other, in addition to multiple-to-multiple correspondences. For example, compare New York and Newark in Example 1. Capturing these alignment possibilities is especially important in evaluating NE performance, since the alignment facilitates phrase mapping and comparison of tagged regions.

In the current implementation of our scoring algorithm, the alignment is done using a phonetic alignment algorithm (Fisher 93). In direct comparison with the standard alignment algorithm in the SCLite package, we have found that the phonetic algorithm produces more intuitive results. This can be seen clearly in Example 2, which repeats the reference and hypothesis texts of the previous example. The top alignment is that produced by the SCLite algorithm; the bottom by the phonetic algorithm. Since this example contains several instances of potential named entities, it also illustrates the impact of different alignment algorithms (and alignment errors) on phrase mapping and comparison. We will compare the effect of the two algorithms on the NE score in Section 3.

ref: AT THE NEW YORK DESK I'M PHILIP BOROFF MISSISSIPPI REPUBLICAN
hyp: AT THE NEWARK BASK ON FILM FORUM MISSES THE REPUBLICAN

Example 2: SCLite alignment (top) vs. phonetic alignment (bottom)

Even the phonetic algorithm makes alignment mistakes. This can be seen in Example 3, where, as before, SCLite's alignment is shown above that of the phonetic algorithm. Once again, we judge the latter to be a more intuitive alignment--nonetheless, OTTAWA would arguably align better with the three word sequence LOT OF WHAT. As we shall see, these potential misalignments are taken into account in the algorithm's mapping and comparison phases.

ref: INVESTING AND TRADING WITH CUBA FROM OTTAWA THIS IS
hyp: INVESTING IN TRAINING WOULD KEEP OFF A LOT OF WHAT THIS IS

Example 3: Imperfect alignments (SCLite top, phonetic bottom)
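The phonetic, many-to-many algorithm is beyond a short sketch, but for concreteness here is the standard one-to-one baseline it is compared against: edit-distance alignment with the four SCLite-style classifications.

```python
# One-to-one word alignment by edit distance, labelling each position
# correct/substitution/insertion/deletion (the SCLite-style baseline).
def align(ref, hyp):
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 or j == 0:
                d[i][j] = i + j
            else:
                sub = d[i-1][j-1] + (ref[i-1] != hyp[j-1])
                d[i][j] = min(sub, d[i-1][j] + 1, d[i][j-1] + 1)
    ops, i, j = [], n, m
    while i or j:                                   # backtrace
        if i and j and d[i][j] == d[i-1][j-1] + (ref[i-1] != hyp[j-1]):
            ops.append(('corr' if ref[i-1] == hyp[j-1] else 'sub',
                        ref[i-1], hyp[j-1]))
            i, j = i - 1, j - 1
        elif i and d[i][j] == d[i-1][j] + 1:
            ops.append(('del', ref[i-1], None)); i -= 1
        else:
            ops.append(('ins', None, hyp[j-1])); j -= 1
    return ops[::-1]

print(align('AT THE NEW YORK DESK'.split(), 'AT THE NEWARK BASK'.split()))
```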
Potential mapped pmrs are those that overlap--that is, if some word(s) in a hypothesis NE have been aligned with some word(s) in a reference NE, the reference and hypothesis NEs may be mapped to one another. If more than one potential mapping is possible, this is currently resolved in simple left-to-right fashion: the first potential mapping pair is chosen. A more sophisticated algorithm, such as that used in the MUC scorer, will eventually be used that attempts to optimize the pairings, in order to give the best possible final score. In the general case, there will be reference NEs that do not map to any hypothesis NE, and vice versa. As we shall see below, the unmapped reference NEs are completely missing from the hypothesis, and thus will correspond to recall errors. Similarly, unmapped hypothesis NEs are completely spurious: they precision errors. 2.4 Stage 4: Comparison Once the mapping phase reference-hypothesis NEs, compared for correctness. will be scored as has found pairs of these pa~rs are As indicated above, we compare along three independent components: type, extent and content. The first two components correspond to MUC scoring and preserve backward compatibility. Thus our FROM OTTAWA THIS IS WHAT THIS 'i IS algorithm can be used to generate MUC-style NE scores, given two texts that differ only in annotation. Type is the simplest of the three components: A hypothesis type is correct only if it is the same as the corresponding reference typer. Thus, in Example 4, hypothesis 1 has an incorrect type, while hypothesis 2 is correct. Extent comparison makes further use of the information from the alignment phase. Strict extent comparison requires the first word of the hypothesis NE to align with the first word of the reference NE, and similarly for the last word. Thus, in Example 4, hypotheses 1 and 2 are correct in extent, while hypotheses 3 and 4 are not. Note that in hypotheses 2 and 4 the alignment phase has indicated a split between the single reference word GINGRICH and the two hypothesis words GOOD RICH (that is, there is a one- to two-word alignment). In contrast, hypothesis 3 shows the alignment produced by SCLite, which allows only one-to-one alignment. In this case, just as in Example 4, extent is judged to be incorrect, since the final words of the reference and hypothesis NEs do not align. This strict extent comparison can be weakened by adjusting an extent tolerance. This is defined as the degree to which the first and/or last word of the hypothesis need not align exactly with the corresponding word of the reference NE. For example, if the extent tolerance is 1, then hypotheses 3 and 4 would both be correct in the extent component. The main reason for a non- zero tolerance is to allow for possible discrepancies in the lexeme alignment process-- thus the tolerance only comes into play if there are word errors adjacent to the boundary in question (either the beginning or end of the NE). Here, because both GOOD and RICH are errors, hypotheses 3, 4 and 6 are given the benefit of the doubt when the extent tolerance is 1. For Ref: <P> NEWT "GINGRiCH " </P> Hypl: <0> NEWT GOODRICH </0> Hyp2: <P> NEWT GOOD RICH </P> Hyp3: <P> NEWT GOOD RICH </P> Hyp4: <P> NEWT GOOD</P> RICH Hyp5: NEWT <P> GINGRICH " </P> Hyp6: NEW <P> . GINGRICH </P> Example 4 203 hypothesis 5, however, extent is judged to be incorrect, no matter what the extent tolerance is, due to the lack of word errors adjacent to the boundaries of the entity. 
Content is the score component closest to the standard measures of word error. Using the word alignment information from the earlier phase, a region of intersection between the reference and the hypothesis text is computed, and there must be no word errors in this region. That is, each hypothesis word must align with exactly one reference word, and the two must be identical. The intuition behind using the intersection or overlap region is that otherwise extent errors would be penalized twice. Thus in hypothesis6, even though NEWT is in the reference NE, the substitution error (NEW) does not count with respect to content comparison, because only the region containing GINGRICH is examined. Note that the extent tolerance described above is not used to determine the region of intersection. Table 1 shows the score results for each of these score components on all six of the hypotheses in Example 4. The extent component is shown for two different thresholds, 0 and 1 (the latter being the default setting in our implementation). 2.5 Stage 5: Final Computation After the mapped pairs are compared along all three components, a final score is computed. We use precision and recall, in order to distinguish between errors of commission (spurious responses) and those of omission (missing responses). For a particular pair of reference and hypothesis NE compared in the previous phase, each component that is incorrect is a substitution error, counting against both recall and precision, because a required reference element was missing, and a spurious hypothesis element was present. Each of the reference NEs that was not mapped to a hypothesis NE in the mapping phase also contributes errors: one recall error for each score component missing from the hypothesis text. Similarly, an unmapped hypothesis NE is completely spurious, and thus contributes three precision errors: one for each of the score components. Finally, we combine the precision and recall scores into a balanced F-measure. This is a combination of precision and recall, such that F-- 2PR /(P + R). F-measure is a single metric, a convenient way to compare systems or texts along one dimension 7. 7Because F-measure combines recall and precision, it effectively counts substitution errors twice. Makhoul et al. (1998) have proposed an alternate slot error metric 1 0 2 1 3 1 4 1 5 1 6 1 Extent 1 Content (0) Extent (1) 1 l 1 1 0 l 1 0 0 0 0 0 0 1 0 1 Table 1 3. Experiments and Results To validate our scoring algorithm, we developed a small test set consisting of the Broadcast News development test for the 1996 HUB4 evaluation (Garofolo 97). The reference transcription (179,000 words) was manually annotated with NE information (6150 entities). We then performed a number of scoring experiments on two sets of transcription/NE hypotheses generated automatically from the same speech data. The first data that we scored was the result of a commonly available speech recognition system, which was then automatically tagged for NE by our system Alembic (Aberdeen 95). The second set of data that was scored was made availabe to us by BBN, and was the result of the BYBLOS speech recognizer and IdentiFinder TM NE extractor (Bikel 97, Kubala 97, 98). In both cases, the NE taggers were run on the reference transcription as well as the corresponding recognizer's output. 
These data were scored using the original MUC scorer as well as our own scorer run in two modes: the three-component mode described above, with an extent threshold of 1, and a "MUC mode", intended to be backward- compatible with the MUC scorer, s We show the results in Table 2. First, we note that when the underlying texts are identical, (columns A and I) our new scoring algorithm in MUC mode produces the same result as the MUC scorer. In normal mode, the scores for the reference text are, of course, higher, because there are no content errors. Not surprisingly, we note lower NE performance on recognizer output. Interestingly, for both the Alembic system (S+A) and the BBN system that counts substitution errors only once. SOur scorer is configurable in a variety of ways. In particular, the extent and content components can be combined into a single component, which is judged to be correct only if the individual extent and content are correct. In this mode, and with the extent threshold described above set to zero, the scorer effectively replicates the MUC algorithm. 204 Metric Word correctness MUC scorer MITRE scorer (MUC mode) MITRE scorer Reference text ] Recognizer output A I S+A B+I 1.00 1.00 0.47 0.80 0.65 0.85 . . . . . 0.65 0.85 0.40 0.71 !0.75 0.91 0.43 0.76 Table 2 (B+I), the degradation is less than we might expect: given the recognizer word error rates shown, one might predict that the NE performance on recognizer output would be no better than the NE performance on the reference text times the word recognition rate. One might thus expect scores around 0.31 (i.e., 0.65x0.47) for the Alembic system and 0.68 (i.e., 0.85×0.80) for the BBN system. However, NE performance is well above these levels for both systems, in both scoring modes. We also wished to determine how sensitive the NE score was to the alignment phase. To explore this, we compared the SCLite and phonetic alignment algorithms, run on the S+A data, with increasing levels of extent tolerance, as shown in Table 3. As we expected, the NE scores converged as the extent tolerance was relaxed. This suggests that in the case where a phonetic alignment algorithm is unavailable (as is currently the case for languages other than English), robust scoring results might still be achieved by relaxing the extent tolerance. 4. Conclusion We have generalized the MUC text-based named entity scoring procedure to handle non-identical underlying texts. Our algorithm can also be used to score other kinds of non-embedded SGML mark-up, e.g., part-of-speech, word segmentation or noun- and verb-group. Despite its generality, the algorithm is backward- compatible with the original MUC algorithm. The distinction made by the algorithm between extent and content allows speech understanding systems to achieve a partial score on the basis of identifying a region as containing a name, even if the recognizer is unable to correctly identify the content words. Encouraging this sort of partial correctness is important because it allows for applications that might, for example, index radio or video broadcasts using named entities, allowing a user to replay a particular region in order to listen to the corresponding content. This flexibility also makes it possible to explore information sources such as prosodics for identifying regions of interest even when it may Extent ~ SC-Lite Phonetic Tolerance Alignment Alignment 1 0.42 0.43 2 0.44 0.45 3 0.45 0.45 Table 3 be difficult to achieve a completely correct transcript, e.g., due to novel words. 
Acknowledgements

Our thanks go to BBN/GTE for providing comparative data for the experiments discussed in Section 3, as well as fruitful discussion of the issues involved in speech understanding metrics.

References

J. Aberdeen, J. Burger, D. Day, L. Hirschman, P. Robinson, M. Vilain (1995). "MITRE: Description of the Alembic System as Used for MUC-6", in Proceedings of the Sixth Message Understanding Conference.

D. Bikel, S. Miller, R. Schwartz, R. Weischedel (1997). "NYMBLE: A High-Performance Learning Name-finder", in Proceedings of the Fifth Conference on Applied Natural Language Processing.

N. Chinchor (1995). "MUC-5 Evaluation Metrics", in Proceedings of the Fifth Message Understanding Conference.

W.M. Fisher, J.G. Fiscus (1993). "Better Alignment Procedures for Speech Recognition Evaluation". ICASSP Vol. II.

J. Garofolo, J. Fiscus, W. Fisher (1997) "Design and Preparation of the 1996 Hub-4 Broadcast News Benchmark Test Corpora", in Proceedings of the 1997 DARPA Speech Recognition Workshop.

F. Kubala, H. Jin, S. Matsoukas, L. Nguyen, R. Schwartz, J. Makhoul (1997) "The 1996 BBN Byblos Hub-4 Transcription System", in Proceedings of the 1997 DARPA Speech Recognition Workshop.

F. Kubala, R. Schwartz, R. Stone, R. Weischedel (1998) "Named Entity Extraction from Speech", in Proceedings of the Broadcast News Transcription and Understanding Workshop.

J. Makhoul, F. Kubala, R. Schwartz (1998) "Performance Measures for Information Extraction", unpublished manuscript, BBN Technologies, GTE Internetworking.

M. Marcus, S. Santorini, M. Marcinkiewicz (1993) "Building a large annotated corpus of English: the Penn Treebank", Computational Linguistics, 19(2).

R. Merchant, M. Okurowski (1996) "The Multilingual Entity Task (MET) Overview", in Proceedings of TIPSTER Text Program (Phase II).

G.R. Sampson (1995) English for the Computer, Oxford University Press.
Automated Scoring Using A Hybrid Feature Identification Technique

Jill Burstein†, Karen Kukich†, Susanne Wolff†, Chi Lu†, Martin Chodorow‡, Lisa Braden-Harder††, and Mary Dee Harris‡‡
†Educational Testing Service, Princeton NJ, ‡Hunter College, New York City, NY, ††Butler-Hill Group, Reston, VA, and ‡‡Language Technology, Inc, Austin, TX

Abstract

This study exploits statistical redundancy inherent in natural language to automatically predict scores for essays. We use a hybrid feature identification method, including syntactic structure analysis, rhetorical structure analysis, and topical analysis, to score essay responses from test-takers of the Graduate Management Admissions Test (GMAT) and the Test of Written English (TWE). For each essay question, a stepwise linear regression analysis is run on a training set (sample of human-scored essay responses) to extract a weighted set of predictive features for each test question. Score prediction for cross-validation sets is calculated from the set of predictive features. Exact or adjacent agreement between the Electronic Essay Rater (e-rater) score predictions and human rater scores ranged from 87% to 94% across the 15 test questions.

1. Introduction

This paper describes the development and evaluation of a prototype system designed for the purpose of automatically scoring essay responses. The paper reports on evaluation results from scoring 13 sets of essay data from the Analytical Writing Assessments of the Graduate Management Admissions Test (GMAT) (see the GMAT Web site at http://www.gmat.org/ for sample questions) and 2 sets of essay data from the Test of Written English (TWE) (see http://www.toefl.org/tstprpmt.html for sample TWE questions). Electronic Essay Rater (e-rater) was designed to automatically analyze essay features based on writing characteristics specified at each of six score points in the scoring guide used by human raters for manual scoring (also available at http://www.gmat.org/). The scoring guide indicates that an essay that stays on the topic of the question, has a strong, coherent and well-organized argument structure, and displays a variety of word use and syntactic structure will receive a score at the higher end of the six-point scale (5 or 6). Lower scores are assigned to essays as these characteristics diminish.

One of our main goals was to design a system that could score an essay based on features specified in the scoring guide for manual scoring. E-rater features include rhetorical structure, syntactic structure, and topical analysis. For each essay question, a stepwise linear regression analysis is run on a set of training data (human-scored essay responses) to extract a weighted set of predictive features for each test question. Final score prediction for cross-validation uses the weighted predictive feature set identified during training. Score prediction accuracy is determined by measuring agreement between human rater scores and e-rater score predictions. In accordance with human interrater "agreement" standards, human and e-rater scores also "agree" if there is an exact match or if the scores differ by no more than one point (adjacent agreement).
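The training and prediction loop described above reduces to fitting a linear model on human-scored essays and rounding its predictions to the 1-6 scale. In this sketch, plain least squares stands in for the stepwise feature selection actually used, and the feature matrix is random toy data rather than real essay features.

```python
# Fit a linear score model, predict on held-out essays, and measure
# exact-or-adjacent agreement (a simplified stand-in for the stepwise
# regression used by e-rater).
import numpy as np

def fit(features, scores):                  # features: (n_essays, n_feats)
    X = np.hstack([features, np.ones((len(scores), 1))])
    w, *_ = np.linalg.lstsq(X, np.asarray(scores, float), rcond=None)
    return w

def predict(w, features):
    X = np.hstack([features, np.ones((len(features), 1))])
    return np.clip(np.rint(X @ w), 1, 6)    # round to the six-point scale

def agreement(pred, human):                 # exact or adjacent match rate
    return np.mean(np.abs(pred - np.asarray(human)) <= 1)

rng = np.random.default_rng(0)
F = rng.normal(size=(40, 3))
y = np.clip(np.rint(F @ [1, .5, .2] + 3.5), 1, 6)   # synthetic human scores
w = fit(F[:30], y[:30])
print(agreement(predict(w, F[30:]), y[30:]))
```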
The next three sections present a conceptual rationale and a description of feature identification in essay responses.

2.1 Syntactic Features

The scoring guides indicate that one feature used to evaluate an essay is syntactic variety. All sentences in the essays were parsed using the Microsoft Natural Language Processing tool (MSNLP) (see MSNLP (1997)) so that syntactic structure information could be accessed. The identification of syntactic structures in essay responses yields information about the syntactic variety in an essay with regard to the identification of clause or verb types. A program was implemented to identify the number of complement clauses, subordinate clauses, infinitive clauses, relative clauses and occurrences of the subjunctive modal auxiliary verbs would, could, should, might and may, for each sentence in an essay. Ratios of syntactic structure types per essay and per sentence were also used as measures of syntactic variety.

2.2 Rhetorical Structure Analysis

GMAT essay questions are of two types: Analysis of an Issue (issue) and Analysis of an Argument (argument). The GMAT issue essay asks the writer to respond to a general question and to provide "reasons and/or examples" to support his or her position on an issue introduced by the test question. The GMAT argument essay focuses the writer on the argument in a given piece of text, using the term argument in the sense of a rational presentation of points with the purpose of persuading the reader. The scoring guides indicate that an essay will receive a score based on the examinee's demonstration of a well-developed essay. In this study, we try to identify the organization of an essay through automated analysis and identification of the rhetorical (or argument) structure of the essay.

Argument structure in the rhetorical sense may or may not correspond to paragraph divisions. One can make a point in a phrase, a sentence, two or three sentences, a paragraph, and so on. For automated argument identification, e-rater identifies 'rhetorical' relations, such as Parallelism and Contrast, that can appear at almost any level of discourse. This is part of the reason that human readers must also rely on cue words to identify new arguments in an essay. Literature in the field of discourse analysis supports our approach. It points out that rhetorical cue words and structures can be identified and used for computer-based discourse analysis (Cohen (1984), Mann and Thompson (1988), Hovy, et al (1992), Hirschberg and Litman (1993), Vander Linden and Martin (1995), and Knott (1996)). E-rater follows this approach by using rhetorical cue words and structure features, in addition to other topical and syntactic information. We adapted the conceptual framework of conjunctive relations from Quirk, et al (1985), in which cue terms such as "In summary" and "In conclusion" are classified as conjuncts used for summarizing. Cue words such as "perhaps" and "possibly" are considered to be "belief" words used by the writer to express a belief in developing an argument in the essay. Words like "this" and "these" may often be used to flag that the writer has not changed topics (Sidner (1986)). We also observed that in certain discourse contexts structures such as infinitive clauses mark the beginning of a new argument.
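To make the cue-word mechanism concrete, here is a minimal illustrative sketch in Python. It is not ETS's APA implementation, and the tiny cue lexicon below is invented for illustration; the actual system stores a much larger lexicon and combines it with syntactic and paragraph-based distributional rules.

    # Toy cue-word flagging (illustrative only; lexicon entries are invented).
    CUE_LEXICON = {
        "in summary": "summary",     # conjuncts used for summarizing
        "in conclusion": "summary",
        "perhaps": "belief",         # writer expresses a belief
        "possibly": "belief",
        "this": "same-topic",        # flags that the topic has not changed
        "these": "same-topic",
    }

    def cue_categories(sentence):
        """Return the cue categories detected in one sentence."""
        lowered = sentence.lower()
        return [cat for cue, cat in CUE_LEXICON.items() if cue in lowered]

    print(cue_categories("In conclusion, this policy is possibly flawed."))
    # -> ['summary', 'belief', 'same-topic']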
E-rater's automated argument partitioning and annotation program (APA) outputs an annotated version of each essay in which the argument units of the essays are labeled with regard to their status as "marking the beginning of an argument" or "marking argument development." APA also outputs a version of the essay that has been partitioned "by argument," instead of "by paragraph," as it was originally partitioned by the test-taker. APA uses rules for argument annotation and partitioning based on the syntactic and paragraph-based distribution of cue words, phrases and structures to identify rhetorical structure. Relevant cue words and terms are stored in a cue word lexicon.

2.3 Topical Analysis

Good essays are relevant to the assigned topic. They also tend to use a more specialized and precise vocabulary in discussing the topic than poorer essays do. We should therefore expect a good essay to resemble other good essays in its choice of words and, conversely, a poor essay to resemble other poor ones. E-rater evaluates the lexical and topical content of an essay by comparing the words it contains to the words found in manually graded training examples for each of the six score categories. Two programs were implemented that compute measures of content similarity, one based on word frequency (EssayContent) and the other on word weight (ArgContent), as in information retrieval applications (Salton (1988)).

In EssayContent, the vocabulary of each score category is converted to a single vector whose elements represent the total frequency of each word in the training essays for that category. In effect, this merges the essays for each score. (A stop list of some function words is removed prior to vector construction.) The system computes cosine correlations between the vector for a given test essay and the six vectors representing the trained categories; the category that is most similar to the test essay is assigned as the evaluation of its content. An advantage of using the cosine correlation is that it is not sensitive to essay length, which may vary considerably.

The other content similarity measure is computed separately by ArgContent for each argument in the test essay and is based on the kind of term weighting used in information retrieval. For this purpose, the word frequency vectors for the six score categories, described above, are converted to vectors of word weights. The weight for word i in score category s is:

    w_i,s = (freq_i,s / max_freq_s) * log(n_essays_total / n_essays_i)

where freq_i,s is the frequency of word i in category s, max_freq_s is the frequency of the most frequent word in s (after a stop list of words has been removed), n_essays_total is the total number of training essays across all six categories, and n_essays_i is the number of training essays containing word i. The first part of the weight formula represents the prominence of word i in the score category, and the second part is the log of the word's inverse document frequency. For each argument in the test essay, a vector of word weights is also constructed. Each argument is evaluated by computing cosine correlations between its weighted vector and those of the six score categories, and the most similar category is assigned to the argument. As a result of this analysis, e-rater has a set of scores (one per argument) for each test essay.
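As a rough illustration of the two topical measures (our sketch, not the production code; the stop list below is a stand-in), the following Python fragment builds the merged frequency vector for a score category, selects the most similar category by cosine correlation, and computes the word weight defined above:

    import math
    from collections import Counter

    STOP = {"the", "a", "an", "of", "to", "and"}   # stand-in stop list

    def category_vector(essays):
        """Merge a score category's training essays into one frequency vector."""
        counts = Counter()
        for text in essays:
            counts.update(w for w in text.lower().split() if w not in STOP)
        return counts

    def cosine(u, v):
        dot = sum(u[w] * v[w] for w in u if w in v)
        norm = math.sqrt(sum(x * x for x in u.values())) * \
               math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    def essay_content(test_essay, category_vectors):
        """EssayContent: assign the score category closest to the test essay."""
        test_vec = category_vector([test_essay])
        return max(category_vectors, key=lambda s: cosine(test_vec, category_vectors[s]))

    def arg_weight(freq_is, max_freq_s, n_essays_total, n_essays_i):
        """ArgContent weight w_i,s for word i in score category s."""
        return (freq_is / max_freq_s) * math.log(n_essays_total / n_essays_i)

ArgContent builds one weighted vector per argument in the same way and again picks the most similar score category by cosine correlation.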
In a preliminary study, we looked at how well the minimum, maximum, mode, median, and mean of the set of argument scores agreed with the judgments of human raters for the essay as a whole. The greatest agreement was obtained from an adjusted mean of the argument scores that compensated for an effect of the number of arguments in the essay. For example, essays which contained only one or two arguments tended to receive slightly lower scores from the human raters than the mean of the argument scores, and essays which contained many arguments tended to receive slightly higher scores than the mean of the argument scores. To compensate for this, an adjusted mean is used as e-rater's ArgContent:

    ArgContent = (Σ arg_scores + n_args) / (n_args + 1)

3. Training and Testing

In all, e-rater's syntactic, rhetorical, and topical analyses yielded a total of 57 features for each essay. The training sets for each test question consisted of 5 essays for score 0, 15 essays for score 1, and 50 essays each for scores 2 through 6. To predict the score assigned by human raters, a stepwise linear regression analysis was used to compute the optimal weights for these predictors based on manually scored training essays. For example, Figure 1, below, shows the predictive feature set generated for the ARG1 test question (see results in Table 1). The predictive feature set for ARG1 illustrates how criteria specified for manual scoring described earlier, such as argument topic and development (using the ArgContent score and argument development terms), syntactic structure usage, and word usage (using the EssayContent score), are represented by e-rater. After training, e-rater analyzed new test essays, and the regression weights were used to combine the measures into a predicted score for each one. This prediction was then compared to the scores assigned by two human raters to check for exact or adjacent agreement.

1. ArgContent Score
2. EssayContent Score
3. Total Argument Development Words/Phrases
4. Total Pronouns Beginning Arguments
5. Total Complement Clauses Beginning Arguments
6. Total Summary Words Beginning Arguments
7. Total Detail Words Beginning Arguments
8. Total Rhetorical Words Developing Arguments
9. Subjunctive Modal Verbs

Figure 1: Predictive Feature Set for ARG1 Test Question

3.1 Results

Table 1 shows the overall results for 8 GMAT argument questions, 5 GMAT issue questions and 2 TWE questions. There was an average of 638 response essays per test question. E-rater and human rater mean agreement across the 15 data sets was 89%. In many cases, agreement was as high as that found between the two human raters. The items that were tested represented a wide variety of topics (see http://www.gmat.org/ for GMAT sample questions and http://www.toefl.org/tstprpmt.html for sample TWE questions). The data also represented a wide variety of English writing competency. In fact, the majority of test-takers from the 2 TWE data sets were nonnative English speakers. Despite these differences in topic and writing skill, e-rater performed consistently well across items.

Table 1: Mean Percentage and Standard Deviation for E-rater (E) and Human Rater (H) Agreement & Human Interrater Agreement for 15 Cross-Validation Tests

           H1~H2   H1~E   H2~E
    Mean    90.4   89.1   89.0
    S.D.     2.1    2.3    2.7
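A compact sketch of the training-and-prediction pipeline just described (ours, and deliberately simplified: plain least squares stands in for the stepwise feature-selection procedure, and feature extraction is assumed already done):

    import numpy as np

    def adjusted_arg_content(arg_scores):
        """E-rater's ArgContent: (sum of argument scores + n_args) / (n_args + 1)."""
        n = len(arg_scores)
        return (sum(arg_scores) + n) / (n + 1)

    def fit_weights(X, y):
        """X: 2-D array of per-essay feature vectors; y: human scores."""
        X1 = np.hstack([np.ones((len(X), 1)), X])   # add intercept column
        w, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return w

    def predict_score(w, features):
        raw = w[0] + np.dot(w[1:], features)
        return int(min(6, max(0, round(raw))))      # clip to the 0-6 scale

    def agree(pred, human):
        """Exact or adjacent agreement, as used in the evaluation."""
        return abs(pred - human) <= 1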
To determine the features that were the most reliable predictors of essay score, we examined the regression models built during training. A feature type was considered to be a reliable predictor if it proved to be significant in at least 12 of the 15 regression analyses. Using this criterion, the most reliable predictors were the ArgContent and EssayContent scores, the number of cue words or phrases indicating the development of an argument, the number of syntactic verb and clause types, and the number of cue words or phrases indicating the beginning of an argument.

4. Discussion and Conclusions

This study shows how natural language processing methods and statistical techniques can be used for the evaluation of text. The study indicates that rhetorical, syntactic, and topical information can be automatically extracted and used for machine-based score prediction of essay responses. These three types of information model features specified in the manual scoring guides. This study also shows that e-rater adapts well to many different topical domains and populations of test-takers.

The information used for automated score prediction by e-rater can also be used as building blocks for automated generation of diagnostic and instructional summaries. Clauses and sentences annotated by APA as "the beginning of a new argument" might be used to identify main points of an essay (Marcu (1997)). In turn, identifying the main points in the text of an essay could be used to generate feedback reflecting essay topic and organization. Other features could be used to automatically generate statements that explicate the basis on which e-rater generates scores. Such statements could supplement manually created qualitative feedback about an essay.

6. References

Cohen, Robin (1984). "A computational theory of the function of clue words in argument understanding." In Proceedings of the 1984 International Computational Linguistics Conference, California, 251-255.

Hirschberg, Julia and Diane Litman (1993). "Empirical Studies on the Disambiguation of Cue Phrases." Computational Linguistics 19(3), 501-530.

Hovy, Eduard, Julia Lavid, Elisabeth Maier (1992). "Employing Knowledge Resources in a New Text Planner Architecture." In Aspects of Automated NL Generation, Dale, Hovy, Rosner and Stoch (Eds.), Springer-Verlag Lecture Notes in AI no. 587, 57-72.

GMAT (1997). http://www.gmat.org/

Knott, Alistair (1996). "A Data-Driven Methodology for Motivating a Set of Coherence Relations." Ph.D. Dissertation, available at www.cogsci.ed.ac.uk/~alik/publications.html, under the heading Unpublished Stuff.

Mann, William C. and Sandra A. Thompson (1988). "Rhetorical Structure Theory: Toward a functional theory of text organization." Text 8(3), 243-281.

Marcu, Daniel (1997). "From Discourse Structures to Text Summaries." In Proceedings of the Intelligent Scalable Text Summarization Workshop, Association for Computational Linguistics, Universidad Nacional de Educacion a Distancia, Madrid, Spain.

MSNLP (1997). http://research.microsoft.com/nlp/

Quirk, Randolph, Sidney Greenbaum, Geoffrey Leech, and Jan Svartvik (1985). A Comprehensive Grammar of the English Language. Longman, New York.

Sidner, Candace (1986). "Focusing in the Comprehension of Definite Anaphora." In Readings in Natural Language Processing, Barbara Grosz, Karen Sparck Jones, and Bonnie Lynn Webber (Eds.), Morgan Kaufmann Publishers, Los Altos, California, 363-394.

Salton, Gerard (1988). Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Addison-Wesley, Reading, Mass.

TOEFL (1997). http://www.toefl.org/tstprpmt.html

Vander Linden, Keith and James H.
Martin (1995). "Expressing Rhetorical Relations in Instructional Text: A Case Study in Purpose Relation." Computational Linguistics 21(1), 29-57.
Building Parallel LTAG for French and Italian

Marie-Hélène Candito
TALANA & UFRL, Université Paris 7, case 7003, 2, place Jussieu, 75251 Paris Cedex 05, France
[email protected]

Abstract

In this paper we view Lexicalized Tree Adjoining Grammars as the compilation of a more abstract and modular layer of linguistic description: the metagrammar (MG). MG provides a hierarchical representation of lexico-syntactic descriptions and principles that capture the well-formedness of lexicalized structures, expressed using syntactic functions. This makes it possible for a tool to compile an instance of MG into an LTAG, automatically performing the relevant combinations of linguistic phenomena. We then describe the instantiation of an MG for Italian and French. The work for French was performed starting with an existing LTAG, which has been augmented as a result. The work for Italian was performed by systematic contrast with the French MG. The automatic compilation gives two parallel LTAGs, compatible for multilingual NLP applications.

1. Introduction

Lexicalized Tree Adjoining Grammar (LTAG) is a formalism integrating lexicon and grammar (Joshi, 87; Schabes et al, 88): its description units are lexicalized syntactic trees, the elementary trees. The formalism is associated with a tree-rewriting process that links sentences with syntactic structures (in either direction), by combining the elementary trees with two operations, adjunction and substitution. We assume the following linguistic features for LTAG elementary trees (Kroch & Joshi, 85; Abeillé, 91; Frank, 92):

• lexicalization: elementary trees are anchored by at least one lexical item.
• semantic coherence: the set of lexical items on the frontier of an elementary tree forms exactly one semantic unit¹.
• large domain of locality: the elementary trees anchored by a predicate contain positions for the arguments of the predicate.

This last feature is known as the predicate-argument cooccurrence principle (PACP). Trees anchored by a predicate represent the minimal structure so that positions for all arguments are included. These argumental positions are extended either by receiving substitution or by adjoining at a node. Adjunction is used to factor out recursion. Figure 1 shows two elementary trees anchored by the French verbal form mange (eat-pres-sg), whose arguments in the active voice are a subject NP and a direct object NP². The first tree shows all arguments in canonical position. The second tree shows a relativized subject and a pronominal object (accusative clitic). The argumental nodes are numbered, according to their oblicity order, by an index starting at 0 in the unmarked case (active). So for instance in passive trees, the subject is number 1, not 0.

[Figure 1: two elementary trees anchored by mange -- one with subject and object NPs in canonical position, one with a relativized subject (qui) and an accusative clitic object]

Though the LTAG units used during derivation are lexicalized trees, the LTAG internal representation makes use of "pre-lexicalized" structures, that we will call tree sketches, whose anchor is not instantiated and that are shared by several lexicalized trees. The set of tree sketches thus forms a syntactic database, in which lexical items pick up the structures they can anchor. Families group together tree sketches that are likely to be selected by the same lexeme: the tree sketches may show different surface realizations of the arguments (pronominal clitic realization, extraction of an argument, subject inversion...)
or different diatheses -- matchings between semantic arguments and syntactic functions -- (active, passive, middle...) or both.

¹ Thus semantically void lexical forms (functional words) do not anchor elementary trees on their own. And words composing an idiomatic expression are multiple anchors of the same elementary tree.
² The trees are examples from a French LTAG (Abeillé, 91), with no VP node (but this is irrelevant here). The ↓ means the node must receive substitution. The * means the node must adjoin into another tree.

The lexical forms select their tree sketches by indicating one or several families, and features. The features may rule out some tree sketches of the selected family, either because of a morphological clash (e.g. the passive trees are only selected by past participles) or because of idiosyncrasies. For instance, the French verb peser (to weigh) can roughly be encoded as selecting the transitive family, but it disallows the passive diathesis.

It remains that tree sketches are large linguistic units. Each represents a combination of linguistic descriptions that are encoded separately in other formalisms. For instance, a tree sketch is in general of depth > 1, and thus corresponds to a piece of derivation in a formalism using CF rewrite rules (cf. (Kasper et al, 95) for the presentation of an LTAG as a compiled HPSG). This causes redundancy in the set of tree sketches, which makes it difficult to write or maintain an LTAG. Several authors (Vijay-Shanker et al, 92 -- hereafter (VSS92); Becker, 93; Evans et al, 95) have proposed practical solutions to represent an LTAG in a compact way. The idea is to represent canonical trees using an inheritance network and to derive marked syntactic constructions from base tree sketches using lexico-syntactic rules.

(Candito, 96), building on (VSS92), defines an additional layer of linguistic description, called the metagrammar (MG), that imposes a general organization for syntactic information and formalizes the well-formedness of lexicalized structures. MG not only provides a general overview of the grammar, but also makes it possible for a tool to perform automatically the combination of smaller linguistic units into a tree sketch. This process of tree sketch building is comparable to a context-free derivation -- in the generation direction -- that would build a minimal clause. A first difference is that CF derivation is performed for each sentence to generate, while the tree sketches are built out of an MG at compile time. Another difference is that while CF derivation uses very local units (CF rules), MG uses partial descriptions of trees (Rogers & Vijay-Shanker, 94), more suitable for the expression of syntactic generalizations.

MG offers a common, principle-based frame for syntactic description, to fill in for different languages or domains. In section 2 we present the linguistic and formal characteristics of MG (in a slightly modified version), in section 3 the compilation into an LTAG, and in section 4 we describe the instantiation of the MG for French and Italian. Finally we give some possible applications in section 5.

2. The metagrammar

Formally the MG takes up the proposal of (VSS92) to represent grammar as a multiple inheritance network, whose classes specify syntactic structures as partial descriptions of trees (Rogers & Vijay-Shanker, 94).
While trees specify for any pair of nodes either a precedence relation or a path of parent relations, these partial descriptions of trees are sets of constraints that may leave underspecified the relation existing between two nodes. The relation between two nodes may be further specified, either directly or by inference, by adding constraints, either in sub-classes or in lateral classes in the inheritance network. In the MG, nodes of partial descriptions are augmented with feature structures: one for the feature structures of the future tree sketches and one for the features that are specific to the MG, called meta-features. These are, for instance, the possible parts of speech of a node or the index (cf. Section 1) in the case of argumental nodes. So a class of an instantiated MG may specify the following slots:

• the (ordered) list of direct parent classes
• a partial description of trees
• feature structures associated with nodes³

Contrary to (VSS92), nodes are global variables within the whole inheritance network, and classes can add features to nodes without involving them in the partial description. Inheritance of partial descriptions is monotonic.

The aim is to be able to build pre-lexicalized structures respecting the PACP, and to group together structures likely to pertain to the same lexeme. In order to achieve this, MG makes use of syntactic functions to express either monolingual or cross-linguistic generalizations (cf. the work in LFG, Meaning-Text Theory or Relational Grammar (RG) -- see (Blake, 90) for an overview). Positing syntactic functions, characterized by syntactic properties, allows one to set parallels between constructions of different languages that are different on the surface (in word order or morpho-syntactic marking) but that share a representation in terms of functional dependencies. Within a language, it allows one to abstract from the different surface realizations of a given function and from the different diatheses a predicate can show.

³ Actually the tree description language -- which we will not detail here -- involves constants that name nodes of satisfying trees. Several constants may be equal and thus name the same node. The equality is either inferred or explicitly stated in the description.

So in MG, subcategorization (hereafter subcat) of predicates is expressed as a list of syntactic functions, and their possible categories. Following RG, an initial subcat is distinguished, namely the one for the unmarked case, and it is modifiable by redistribution of the functions associated with the arguments of the predicate. Technically, this means that argumental nodes in partial descriptions bear a meta-feature "initial-function" and a meta-feature "function". The "function" value is by default the "initial-function" value, but can be revised by redistribution. Redistributions, in a broad sense, comprise:

• pure redistributions that do not modify the number of arguments (e.g. full passive);
• reductions of the number of arguments (e.g. agentless passive);
• augmentations of the number of arguments (mainly causative).

In MG, structures sharing the same initial subcat can be grouped to form a set of structures likely to be selected by the same lexeme. For verbal predicates, a minimal clause is partly represented with an ordered list of successive subcats, from the initial one to the final one. Minimal clauses sharing a final subcat may differ in the surface realizations of the functions.
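As an illustration of how a redistribution rewrites function assignments while argumental indices stay fixed (which is why the subject of a passive tree keeps index 1), here is a toy Python encoding; it is our own sketch, not the MG formalism itself:

    # A subcat maps each argument's (fixed) index to its current function.
    TRANSITIVE = {0: "subject", 1: "object"}        # initial subcat

    def agentless_passive(subcat):
        """Subject deleted, object promoted to subject (a reduction)."""
        out = {i: f for i, f in subcat.items() if f != "subject"}
        out[1] = "subject"
        return out

    def full_passive(subcat):
        """Subject demoted to an agent by-phrase, object promoted
        (a pure redistribution: the number of arguments is unchanged)."""
        out = dict(subcat)
        out[0] = "agent-object"
        out[1] = "subject"
        return out

    print(full_passive(TRANSITIVE))       # {0: 'agent-object', 1: 'subject'}
    print(agentless_passive(TRANSITIVE))  # {1: 'subject'}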
The MG represents this repartition of information by imposing a three-dimension inheritance network⁴:

• dimension 1: initial subcat
• dimension 2: redistributions of functions
• dimension 3: surface realizations of syntactic functions.

⁴ More precisely, a hierarchy is defined for each category of predicate. Dimension 2 is primarily relevant for verbal predicates. Further, remaining structures, for instance for argument-less lexemes or for auxiliaries and raising verbs, are represented in an additional network, by classes that may inherit shared properties, but that are totally written by hand.

In an instantiated MG for a given language, each terminal class of dimension 1 describes a possible initial subcat and partially describes the verbal morpho-syntax (the verb may appear with a frozen clitic, or a particle in English). Each terminal class of dimension 2 describes a list of ordered redistributions (including the case of no-redistribution). The redistributions may impose a verbal morphology (e.g. the auxiliary for passive). Each terminal class of dimension 3 represents the surface realization of a function (independently of the initial function). For some inter-dependent realizations, a class may represent the realizations of several functions (for instance for clitics in Romance languages).

Terminal classes of the hand-written hierarchy are pieces of information that can be combined to form a tree sketch that respects the PACP. For a given language, some of the terminal classes are incompatible. This is stated either by the content of the classes themselves or within an additional set of language-dependent constraints (compatibility constraints). For instance, a constraint is set for French to block cooccurrence of an inverted subject with an object in canonical position (while this is possible for Italian).

3. Compilation of MG to LTAG

The compilation is a two-step process, illustrated in Figure 2. First the compiler automatically creates additional classes of the inheritance network: the "crossing classes". Then each crossing class is translated into one or several tree sketches.

[Figure 2 diagram: the hand-written hierarchy (dimension 1: initial subcat; dimension 2: redistributions of functions; dimension 3: surface realizations of functions), plus language-dependent compatibility constraints, feeds the automatic generation of crossing classes, which are then translated into LTAG families]
Finally, the compatibility constraints must be respected (cf Section 2). 3.2 Translation into LTAG families While crossing classes specify a partial description with feature structures, LTAG use trees. So the compiler takes the "representative" tree(s) of the partial description (see Rogers & Vijay-Shanker, 94 for a formal definition). Intuitively these representative trees are trees minimally satisfying the description. There can be several for one description. For example, the relative order of several nodes may be underspecified in a description, and the representative trees show every possible order. A family is generated by grouping all the trees computed from crossing classes that share the same class of dimension 1. 4. Metagrammars for French and Italian : a contrast We have instantiated the metagrammar for French, starting with an existing LTAG (Abeill6, 91). The recompilation MG---~LTAG insures coherence (a phenomena is consistently handled through the whole grammar) and completeness (all valid crossings are performed). The coverage of the grammar has been extended 5. Then we have adapted the French MG to Italian, to obtain a "parallel" LTAG for Italian, close with respect to linguistic analyses. The general organization of the MG gives a methodology for systematic syntactic contrast. We describe some pieces of the inheritance network for French and Italian, with particular emphasis on dimension 2 and, in dimension 3, on the surface realizations of the subject. 4.1 Dimension 1 We do not give a description of the content of this dimension, but rather focus on the differences between the two languages. A first difference in dimension 1 is that for Italian, there exist verbs without argument 6 (atmospheric verbs), while for French, a subject is obligatory, though maybe impersonal. Another difference, is known as the unaccusative hypothesis (see (Renzi, 88, vol I) for an account). It follows from syntactic evidence, that the unique argument of avere- selecting intransitives (eg. (I)) and essere- selecting intransitives (the unaccusatives, eg. (2)) has different behavior when post-verbal: (1) *Ne hanno telefonato tre. (of-them have phoned three) Three of them have phoned (2) Ne sono rimaste tre. (of-them are remained three) Three of them have remained. We represent unaccusatives as selecting an initial object and no initial subject. A redistribution in dimension 2 promotes this initial object into a special subject (showing subject properties and some object proTperties, like the he-licensing shown in (2)). This redistribution is also used for specifying passive and middle, which both trigger unaccusative behavior (see next section). s The number of tree sketches passed from 800 to 1100 lwithout causative trees). An alternative analysis would be to consider that these verbs select a subject pronoun, that is not realized in Italian (pro-drop language). 7 We take a simpler approach than RG, which accounts for most of the Italian data. Unhandled are the auxiliary change for verbs, when goal-phrases are added (see (Dini, 95) for an analysis in HPSG). 214 4.2 Dimension 2 The MG for French and Italian cover the following types of redistribution s : passive, middle, causative and impersonal (only for French). Causative verbs plus infinitives are analysed in Romance as complex predicates. Due to a lack of space will not describe their encoding in MG here. Figure 3 shows the inheritance links of dimension 2 for French (without causative). Terminal classes are shown without frame. 
[Figure 3 diagram: inheritance hierarchy of dimension-2 classes]
Figure 3: Dimension 2 for French (without causative)

The verbal morphology is affected by redistributions, so it appears in the hierarchy. The hierarchy comprises the case of no-redistribution, which inherits an active morphology: it simply states that the anchor of the future tree sketch is also the verb that receives inflexions for tense, agreement... Referring to the notion of a hierarchy of syntactic functions (à la Keenan-Comrie), we can say that the redistributions shown comprise a subject demotion (which can be a deletion) and a promotion of an element to subject. For the active impersonal (3), the subject is demoted to object (class SUBJECT→OBJECT), and the impersonal il is introduced as subject (class IMPERS→SUBJECT).

(3) Il est arrivé trois lettres pour vous. (IL is arrived three letters for you) There arrived three letters for you.

Passive is characterized by a particular morphology (auxiliary bearing inflections + past participle) and the demotion of the subject (which is either deleted, class SUBJECT→EMPTY, or demoted to a by-phrase, class SUBJECT→AGT-OBJ), but not necessarily by a promotion of the object to subject (class OBJECT→SUBJECT) (cf. (Comrie, 77)). In French, the alternative to object promotion is the introduction of the impersonal subject (class IMPERS→SUBJECT)⁹. This gives four possibilities -- agentless personal (4), full personal (5), agentless impersonal (6), full impersonal -- but this last possibility is not well attested.

(4) Le film sera projeté mardi prochain. The movie will be shown next Tuesday.
(5) La voiture a été doublée par un vélo. The car was overtaken by a bike.
(6) Il a été décrété l'état d'urgence. (IL was declared the state of emergency) The state of emergency was declared.

Middle is characterized by a deletion of the subject, and a middle morphology (a reflexive clitic se). Here also we have the alternative OBJECT→SUBJECT (7) or IMPERS→SUBJECT (8). The interpretation is generic or deontic in French.

(7) Le thé se sert à 5h. (Tea SE serves at 5.) One should serve tea at 5.
(8) Il se dit des horreurs ici. (IL SE says horrible things here) Horrible things are pronounced in here.

Now let us contrast this hierarchy with the one for Italian. Figure 4 shows dimension 2 for Italian.

[Figure 4 diagram: inheritance hierarchy including the OBJECT→EXTENDED-SUBJECT and PERSONAL PASSIVE classes]
Figure 4: Dimension 2 for Italian (without causative)

In Italian, what is called impersonal (9a) is a special realization of the subject (by a clitic si), meaning either people, one or we (cf. Monachesi, 95). The French equivalent is the nominative clitic on (9b).

(9a) it. Si partì. (SI left) People / we left.
(9b) fr. On partit.

⁸ The locative alternation (John loaded the truck with oranges / John loaded oranges into the truck) is not covered at present, but can easily be added. It requires choosing an initial subcat for the verb.
⁹ So we do not analyse the impersonal passive as a passive to which the impersonal applies. This allows us to account for the (rare) cases of impersonal passives with no personal passive counterpart.

This impersonal si is thus coded as a realization of subject, in dimension 3, and we have no IMPERS→SUBJECT promotion for the Italian dimension 2. The impersonal si can appear with all redistributions except the middle. The Italian middle is similar to French, with a reflexive clitic si. Indeed impersonal si, with transitive verbs and a singular object (10), is ambiguous with a middle analysis (and subject inversion).

(10) Si mangia il gelato.
(SI eat-3sg the ice-cream) The ice-cream is eaten.

With a plural nominal object, some speakers do not accept the impersonal (with a singular verb (11a)) but only the middle (with verb agreement (11b)).

(11a) Si mangia le mele. (SI eat-3sg the apples)
(11b) Si mangiano le mele. (SI eat-3pl the apples)

Another difference with French redistributions is that when the object is promoted, in passive or middle, it is as a subject showing unaccusative behavior (e.g. ne-licensing, cf. section 4.1). To represent this, we use the class OBJECT→EXTENDED-SUBJECT, which is also used for the spontaneous promotion of the initial object of unaccusative intransitives (cf. section 4.1). So for Italian, passive (agentless or full) and middle (11b) comprise a subject demotion (a mandatory deletion for middle) and the promotion OBJECT→EXTENDED-SUBJECT, while for intransitive unaccusatives, this promotion is spontaneous. Other differences between French and Italian concern the interaction of causative with other redistributions: passive and middle can apply after causative in Italian, but not in French.

4.3 Dimension 3

We describe in dimension 3 the classes for the surface realizations of the subject. This function is special as it partially imposes the mode of the clause. The subject is empty for infinitives and imperatives¹⁰. Adnominal participial clauses are represented as auxiliary trees that adjoin on an N; the subject is the foot node of the auxiliary tree (we do not detail here the different participial clauses). For French (Figure 5), when realized, the subject is either sentential, nominal or pronominal (clitic). Nominal subjects may be in preverbal position or inverted, relativized or cleft. These last two realizations also inherit classes describing relative clauses and cleft clauses. Sentential subjects are here only preverbal. Clitic subjects are preverbal (post-verbal subject clitics are not shown here, as their analysis is special). Note that in dimension 2, the class IMPERS→SUBJECT specifies that the subject is clitic, and dominates the word il. This will only be compatible with the clitic subject realization.

¹⁰ See (Abeillé, 91) for the detail of the linguistic analyses chosen for French. We describe here the hierarchical organization.

[Figure 5 diagram: hierarchy of subject realization classes]
Figure 5: Subject realizations for French

For Italian (Figure 6), the hierarchy for subjects is almost the same: a class for non-realized subjects is added, since Italian is a pro-drop language, and pronominal subjects are not realized. But we mentioned in section 4.2 the special case of the impersonal subject clitic si. To handle this clitic, the Italian class for clitic subjects introduces the si.

Figure 6: Subject realizations for Italian (differences with French in bold)

5. Applications

The two LTAGs for French and Italian are easy to maintain, due to the hierarchical representation in MG. They can be customized for language domains, by cutting subgraphs of the inheritance network in MG. The MG for French is currently used to maintain the French LTAG. It has also been used to generate the tree sketches for the text generator G-TAG (Danlos & Meunier, 96), based on TAG. The generator makes use of a characterization of tree sketches as a set of features -- called t-features -- such as <passive>, <infinitival-clause>... This characterization has been straightforward to obtain with the representation of the tree sketches in MG.
Further, the two MGs for French and Italian can provide a basis for transfer between syntactic structures for Machine Translation. LTAG elementary trees correspond to a semantic unit, with (extendable) positions for the semantic arguments, if any. (Abeillé, et al, 90) propose to pair elementary trees for the source and target languages and to match in these pairs the argumental positions of the predicate. Once these links are established, the synchronous TAG procedure can be used for translation. The argumental position correspondence is straightforward to state within the MG framework. We plan to define an automatic procedure of tree-to-tree matching using MG representations for source and target languages, once the initial functions of arguments are matched for pairs of predicates. This procedure will make use of sets of t-features to characterize tree sketches (as in G-TAG) derived at MG→LTAG compilation time. Correspondences between t-features or sets of t-features have to be defined.

References

A. Abeillé, 1991: Une grammaire lexicalisée d'arbres adjoints pour le français. Ph.D. thesis, Univ. Paris 7.

A. Abeillé, Y. Schabes, A. Joshi, 1990: Using Lexicalized TAG for Machine Translation. COLING'90.

T. Becker, 1993: HyTAG: a new type of Tree Adjoining Grammars for Hybrid Syntactic Representation of Free Order Languages. Ph.D. thesis, Univ. of Saarbrücken.

M-H. Candito, 1996: A principle-based hierarchical representation of LTAG. COLING'96.

B. Comrie, 1977: In defense of spontaneous demotion: the impersonal passive. Syntax and Semantics "Grammatical Functions", Cole & Saddock (eds.).

L. Danlos, F. Meunier, 1996: G-TAG, un formalisme pour la génération de textes : présentation et applications industrielles. ILN'96, Nantes.

L. Dini, 1995: Unaccusative behaviors. Quaderni di Linguistica, 9/95.

R. Evans, G. Gazdar, D. Weir, 1995: Encoding Lexicalized TAG in a non-monotonic inheritance hierarchy. ACL'95.

R. Frank, 1992: Syntactic Locality and Tree Adjoining Grammar: Grammatical, Acquisition and Processing Perspectives. Ph.D. thesis, Univ. of Pennsylvania.

R. Kasper, B. Kiefer, K. Netter, K. Vijay-Shanker, 1995: Compilation of HPSG to TAG. ACL'95.

I. Mel'cuk, 1988: Dependency Syntax: Theory and Practice. State Univ. Press of NY, Albany (NY).

P. Monachesi, 1996: A grammar of Italian clitics. Ph.D. thesis, Univ. of Tilburg.

L. Renzi, 1988: Grande grammatica di consultazione (3 vol.). Il Mulino, Bologna.

J. Rogers, K. Vijay-Shanker, 1994: Obtaining trees from their descriptions: an application to Tree Adjoining Grammars. Computational Intelligence, vol. 10, no. 4.

Y. Schabes, A. Joshi, A. Abeillé, 1988: Parsing strategies with lexicalized grammars: Tree Adjoining Grammars. COLING'88.

K. Vijay-Shanker, Y. Schabes, 1992: Structure sharing in Lexicalized TAG. COLING'92.
Error-Driven Pruning of Treebank Grammars for Base Noun Phrase Identification

Claire Cardie and David Pierce
Department of Computer Science, Cornell University, Ithaca, NY 14853
cardie, [email protected]

Abstract

Finding simple, non-recursive, base noun phrases is an important subtask for many natural language processing applications. While previous empirical methods for base NP identification have been rather complex, this paper instead proposes a very simple algorithm that is tailored to the relative simplicity of the task. In particular, we present a corpus-based approach for finding base NPs by matching part-of-speech tag sequences. The training phase of the algorithm is based on two successful techniques: first the base NP grammar is read from a "treebank" corpus; then the grammar is improved by selecting rules with high "benefit" scores. Using this simple algorithm with a naive heuristic for matching rules, we achieve surprising accuracy in an evaluation on the Penn Treebank Wall Street Journal.

1 Introduction

Finding base noun phrases is a sensible first step for many natural language processing (NLP) tasks: Accurate identification of base noun phrases is arguably the most critical component of any partial parser; in addition, information retrieval systems rely on base noun phrases as the main source of multi-word indexing terms; furthermore, the psycholinguistic studies of Gee and Grosjean (1983) indicate that text chunks like base noun phrases play an important role in human language processing. In this work we define base NPs to be simple, nonrecursive noun phrases -- noun phrases that do not contain other noun phrase descendants. The bracketed portions of Figure 1, for example, show the base NPs in one sentence from the Penn Treebank Wall Street Journal (WSJ) corpus (Marcus et al., 1993). Thus, the string the sunny confines of resort towns like Boca Raton and Hot Springs is too complex to be a base NP; instead, it contains four simpler noun phrases, each of which is considered a base NP: the sunny confines, resort towns, Boca Raton, and Hot Springs.

When [it] is [time] for [their biannual powwow], [the nation]'s [manufacturing titans] typically jet off to [the sunny confines] of [resort towns] like [Boca Raton] and [Hot Springs].

Figure 1: Base NP Examples

Previous empirical research has addressed the problem of base NP identification. Several algorithms identify "terminological phrases" -- certain base noun phrases with initial determiners and modifiers removed: Justeson & Katz (1995) look for repeated phrases; Bourigault (1992) uses a handcrafted noun phrase grammar in conjunction with heuristics for finding maximal length noun phrases; Voutilainen's NPTool (1993) uses a handcrafted lexicon and constraint grammar to find terminological noun phrases that include phrase-final prepositional phrases. Church's PARTS program (1988), on the other hand, uses a probabilistic model automatically trained on the Brown corpus to locate core noun phrases as well as to assign parts of speech. More recently, Ramshaw & Marcus (In press) apply transformation-based learning (Brill, 1995) to the problem. Unfortunately, it is difficult to directly compare approaches. Each method uses a slightly different definition of base NP. Each is evaluated on a different corpus. Most approaches have been evaluated by hand on a small test set rather than by automatic comparison to a large test corpus annotated by an impartial third party.
A notable exception is the Ramshaw & Marcus work, which evaluates their transformation-based learning approach on a base NP corpus derived from the Penn Treebank WSJ, and achieves precision and recall levels of approximately 93%. This paper presents a new algorithm for identifying base NPs in an arbitrary text. Like some of the earlier work on base NP identification, ours is a trainable, corpus-based algorithm. In contrast to other corpus-based approaches, however, we hypothesized that the relatively simple nature of base NPs would permit their accurate identification using correspondingly simple methods. Assume, for example, that we use the annotated text of Figure 1 as our training corpus. To identify base NPs in an unseen
Note also that the treebank approach to base NP identification obtains good results in spite of a very simple algorithm for "parsing" base NPs. This is ex- tremely encouraging, and our evaluation suggests at least two areas for immediate improvement. First, by replacing the naive match heuristic with a proba- bilistic base NP parser that incorporates lexical pref- erences, we would expect a nontrivial increase in re- call and precision. Second, many of the remaining base NP errors tend to follow simple patterns; these might be corrected using localized, learnable repair rules. The remainder of the paper describes the specifics of the approach and its evaluation. The next section presents the training and application phases of the treebank approach to base NP identification in more detail. Section 3 describes our general approach for pruning the base NP grammar as well as two instan- tiations of that approach. The evaluation and a dis- cussion of the results appear in Section 4, along with techniques for reducing training time and an initial investigation into the use of local repair heuristics. 2 The Treebank Approach Figure 2 depicts the treebank approach to base NP identification. For training, the algorithm requires a corpus that has been annotated with base NPs. More specifically, we assume that the training corpus is a sequence of words wl, w2,..., along with a set of base NP annotations b(il&), b(i~j~),..., where b(ij) indicates that the NP brackets words i through j: [NP Wi, ..., W j]. The goal of the training phase is to create a base NP grammar from this training corpus: 1. Using any available part-of-speech tagger, as- sign a part-of-speech tag ti to each word wi in the training corpus. 2. Extract from each base noun phrase b(ij) in the training corpus its sequence of part-of-speech tags tl .... ,tj to form base NP rules, one rule per base NP. 3. Remove any duplicate rules. The resulting "grammar" can then be used to iden- tify base NPs in a novel text. 1. 2. Assign part-of-speech tags tl, t2,.., to the input words wl, w2, • • • Proceed through the tagged text from left to right, at each point matching the NP rules against the remaining part-of-speech tags ti,ti+l,.., in the text. 219 Training Phase Training Corpus When lit] is [time] for [their biannual powwowl. [ the nation I's I manufacturing titans I typically jet offto [the sunny confinesl of Ireson townsl like [Boca Ratonl and IHot Springs[. Tagged Text When/W'RB [it/PRP] is/VBZ [time/NN] for/IN [their/PRP$ biannual/JJ powwow/NN] ./. [the/DT nation/NN] 's/POS Imanufacmring/VBG titans/NNSI typically/RB jet/VBP off/RP to/TO Ithe/DT snnny/JJ confines/NNSI of/IN I resort/NN towns/NNS ] like/IN I Boca/NNP Raton/NNPI and/CC IHot/NNP Spring~NNPI. ~lP Rules <PRP> <NN> <PRP$ JJ NN> <DT NN> <VBG NNS> <DT JJ NNS> <NN NNS> <NNP NNP> Application Phase Novel Text , Not this year. National Association of Manufacturers settled on the Hoosier capital of Indianapolis for its next meeting. And the city decided to treat its guests more like royalty or rock sta~ than factory owners. Tagged Text Not/RB this/DT year/NN J. National/NNP Association/NNP of/IN ManufacturerffNNP settled/VBD on/IN the/DT Hoosier/NNP capital/NN of/IN lndianapoli~NNP for/IN its/PRP$ nexV'JJ meeting/NN J. And/CC the/DT city/NN decided/VBD to/TO treaV'VB its/PRP$ guesl.,;/NNS more/J JR like/IN royahy/NN or/CC rock/NN star,4NNS than/IN factory/NN owners/NNS ./. NP Bracketed Text Not [this year]. 
I National Association ] of I Manufacturers I settled on Ithe Hoosier capitall of [Indianapolisl for l its next meetingl. And Ithe cityl decided to treat [its guestsl more like [royaltyl or/rock starsl than [factory ownerq. Figure 2: The Treebank Approach to Base NP Identification 3. If there are multiple rules that match beginning at ti, use the longest matching rule R. Add the new base noun phrase b(i,i+]R[-1) to the set of base NPs. Continue matching at ti+lR[. With the rules stored in an appropriate data struc- ture, this greedy "parsing" of base NPs is very fast. In our implementation, for example, we store the rules in a decision tree, which permits base NP iden- tification in time linear in the length of the tagged input text when using the longest match heuristic. Unfortunately, there is an obvious problem with the algorithm described above. There will be many unhelpful rules in the rule set extracted from the training corpus. These "bad" rules arise from four sources: bracketing errors in the corpus; tagging er- rors; unusual or irregular linguistic constructs (such as parenthetical expressions); and inherent ambigu- ities in the base NPs -- in spite of their simplicity. For example, the rule (VBG NNS), which was ex- tracted from manufacturing/VBG titans/NNS in the example text, is ambiguous, and will cause erroneous bracketing in sentences such as The execs squeezed in a few meetings before [boarding/VBG buses/NNS~ again. In order to have a viable mechanism for iden- tifying base NPs using this algorithm, the grammar must be improved by removing problematic rules. The next section presents two such methods for au- tomatically pruning the base NP grammar. 3 Pruning the Base NP Grammar As described above, our goal is to use the base NP corpus to extract and select a set of noun phrase rules that can be used to accurately identify base NPs in novel text. Our general pruning procedure is shown in Figure 3. First, we divide the base NP cor- pus into two parts: a training corpus and a pruning corpus. The initial base NP grammar is extracted from the training corpus as described in Section 2. Next, the pruning corpus is used to evaluate the set of rules and produce a ranking of the rules in terms of their utility in identifying base NPs. More specif- ically, we use the rule set and the longest match heuristic to find all base NPs in the pruning corpus. Performance of the rule set is measured in terms of labeled precision (P): p _- # of correct proposed NPs # of proposed NPs We then assign to each rule a score that denotes the "net benefit" achieved by using the rule during NP parsing of the improvement corpus. The ben- efit of rule r is given by B~ = C, - E, where C~ 220 Training Corpus Pruning Corpus Improved Rule Set Final Rule Set Figure 3: Pruning the Base NP Grammar is the number of NPs correctly identified by r, and E~ is the number of precision errors for which r is responsible. 1 A rule is considered responsible for an error if it was the first rule to bracket part of a refer- ence NP, i.e., an NP in the base NP training corpus. Thus, rules that form erroneous bracketings are not penalized if another rule previously bracketed part of the same reference NP. For example, suppose the fragment containing base NPs Boca Raton, Hot Springs, and Palm Beach is bracketed as shown below. resort towns like [NP1 Boca/NNP Raton/NNP, Hot/NNP] [NP2 Springs/NNP], and [NP3 Palm/NNP Beach/NNP] Rule (NNP NNP , NNP) brackets NP1; (NNP / brackets NP2; and (NNP NNP / brackets NP~. 
Rule (NNP NNP , NNP / incorrectly identifies Boca Ra- ton, Hot as a noun phrase, so its score is -1. Rule (NNP) incorrectly identifies Springs, but it is not held responsible for the error because of the previ- ous error by (NNP NNP, NNP / on the same original NP Hot Springs: so its score is 0. Finally, rule (NNP NNP) receives a score of 1 for correctly identifying Palm Beach as a base NP. The benefit scores from evaluation on the pruning corpus are used to rank the rules in the grammar. With such a ranking, we can improve the rule set by discarding the worst rules. Thus far, we have investigated two iterative approaches for discarding rules, a thresholding approach and an incremental approach. We describe each, in turn, in the subsec- tions below. 1 This same benefit measure is also used in the R&M study, but it is used to rank transformations rather than to rank NP rules. 3.1 Threshold Pruning Given a ranking on the rule set, the threshold algo- rithm simply discards rules whose score is less than a predefined threshold R. For all of our experiments, we set R = 1 to select rules that propose more cor- rect bracketings than incorrect. The process of eval- uating, ranking, and discarding rules is repeated un- til no rules have a score less than R. For our evalua- tion on the WSJ corpus, this typically requires only four to five iterations. 3.2 Incremental Pruning Thresholding provides a very coarse mechanism for pruning the NP grammar. In particular, because of interactions between the rules during bracketing, thresholding discards rules whose score might in- crease in the absence of other rules that are also be- ing discarded. Consider, for example, the Boca Ra- ton fragments given earlier. In the absence of (NNP NNP , NNP), the rule (NNP NNP / would have re- ceived a score of three for correctly identifying all three NPs. As a result, we explored a more fine-grained method of discarding rules: Each iteration of incre- mental pruning discards the N worst rules, rather than all rules whose rank is less than some thresh- old. In all of our experiments, we set N = 10. As with thresholding, the process of evaluating, rank- ing, and discarding rules is repeated, this time until precision of the current rule set on the pruning cor- pus begins to drop. The rule set that maximized precision becomes the final rule set. 3.3 Human Review In the experiments below, we compare the thresh- olding and incremental methods for pruning the NP grammar to a rule set that was pruned by hand. When the training corpus is large, exhaustive re- view of the extracted rules is not practical. This is the case for our initial rule set, culled from the WSJ corpus, which contains approximately 4500 base NP rules. Rather than identifying and dis- carding individual problematic rules, our reviewer identified problematic classes of rules that could be removed from the grammar automatically. In partic- ular, the goal of the human reviewer was to discard rules that introduced ambiguity or corresponded to overly complex base NPs. Within our partial parsing framework, these NPs are better identified by more informed components of the NLP system. Our re- viewer identified the following classes of rules as pos- sibly troublesome: rules that contain a preposition, period, or colon; rules that contain WH tags; rules that begin/end with a verb or adverb; rules that con- tain pronouns with any other tags; rules that contain misplaced commas or quotes; rules that end with adjectives. 
Rules covered under any of these classes 221 were omitted from the human-pruned rule sets used in the experiments of Section 4. 4 Evaluation To evaluate the treebank approach to base NP iden- tification, we created two base NP corpora. Each is derived from the Penn Treebank WSJ. The first corpus attempts to duplicate the base NPs used the Ramshaw & Marcus (R&M) study. The second cor- pus contains slightly less complicated base NPs -- base NPs that are better suited for use with our sentence analyzer, Empire. 2 By evaluating on both corpora, we can measure the effect of noun phrase complexity on the treebank approach to base NP identification. In particular, we hypothesize that the treebank approach will be most appropriate when the base NPs are sufficiently simple. For all experiments, we derived the training, prun- ing, and testing sets from the 25 sections of Wall Street Journal distributed with the Penn Treebank II. All experiments employ 5-fold cross validation. More specifically, in each of five runs, a different fold is used for testing the final, pruned rule set; three of the remaining folds comprise the training corpus (to create the initial rule set); and the final partition is the pruning corpus (to prune bad rules from the ini- tial rule set). All results are averages across the five folds. Performance is measured in terms of precision and recall. Precision was described earlier -- it is a standard measure of accuracy. Recall, on the other hand, is an attempt to measure coverage: # of correct proposed NPs P = # of proposed NPs # of correct proposed NPs R = # of NPs in the annotated text Table 1 summarizes the performance of the tree- bank approach to base NP identification on the R&M and Empire corpora using the initial and pruned rule sets. The first column of results shows the performance of the initial, unpruned base NP grammar. The next two columns show the perfor- mance of the automatically pruned rule sets. The final column indicates the performance of rule sets that had been pruned using the handcrafted pruning heuristics. As expected, the initial rule set performs quite poorly. Both automated approaches provide significant increases in both recall and precision. In addition, they outperform the rule set pruned using handcrafted pruning heuristics. 2Very briefly, the Empire sentence analyzer relies on par- tial parsing to find simple constituents like base NPs and verb groups. Machine learning algorithms then operate on the output of the partial parser to perform all attachment de- cisions. The ultimate output of the parser is a semantic case frame representation of the functional structure of the input sentence. R&M (1998) ]" R&M (1998) with [ without lexical templates lexical templates 93.1P/93.5R ~ 90.5P/90.7R Treebank ] Approach 89.4p/9o.9a ] Table 2: Comparison of Treebank Approach with Ramshaw & Marcus (1998) both With and Without Lexical Templates, on the R&M Corpus Throughout the table, we see the effects of base NP complexity -- the base NPs of the R&M cor- pus are substantially more difficult for our approach to identify than the simpler NPs of the Empire cor- pus. For the R&M corpus, we lag the best pub- lished results (93.1P/93.5R) by approximately 3%. This straightforward comparison, however, is not en- tirely appropriate. Ramshaw & Marcus allow their learning algorithm to access word-level information in addition to part-of-speech tags. The treebank ap- proach, on the other hand, makes use only of part-of- speech tags. 
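Both automated pruning loops whose results appear in Table 1 can be stated compactly. The sketch below is illustrative only; it assumes a helper `evaluate` that brackets the pruning corpus with the current rule set and returns the overall labeled precision together with a benefit score for every rule.

```python
def threshold_prune(rules, pruning_corpus, R=1):
    # Repeat evaluation and discard all rules scoring below R
    # until no rule falls below the threshold.
    while True:
        _, scores = evaluate(rules, pruning_corpus)  # assumed helper
        survivors = [r for r in rules if scores[r] >= R]
        if len(survivors) == len(rules):
            return rules
        rules = survivors

def incremental_prune(rules, pruning_corpus, N=10):
    # Discard the N worst rules per iteration; stop as soon as
    # precision on the pruning corpus begins to drop.
    best_rules, best_precision = rules, -1.0
    while True:
        precision, scores = evaluate(rules, pruning_corpus)
        if precision < best_precision:
            return best_rules
        best_rules, best_precision = rules, precision
        rules = sorted(rules, key=lambda r: scores[r])[N:]  # drop N worst
```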
Table 2 compares Ramshaw & Marcus' (In press) results with and without lexical knowl- edge. The first column reports their performance when using lexical templates; the second when lexi- cal templates are not used; the third again shows the treebank approach using incremental pruning. The treebank approach and the R&M approach without lecial templates are shown to perform comparably (-1.1P/+0.2R). Lexicalization of our base NP finder will be addressed in Section 4.1. Finally, note the relatively small difference be- tween the threshold and incremental pruning meth- ods in Table 1. For some applications, this minor drop in performance may be worth the decrease in training time. Another effective technique to speed up training is motivated by Charniak's (1996) ob- servation that the benefit of using rules that only occurred once in training is marginal. By discard- ing these rules before pruning, we reduce the size of the initial grammar -- and the time for incremental pruning -- by 60%, with a performance drop of only -0.3P/-0.1R. 4.1 Errors and Local Repair Heuristics It is informative to consider the kinds of errors made by the treebank approach to bracketing. In particular, the errors may indicate options for incor- porating lexical information into the base NP finder. Given the increases in performance achieved by Ramshaw & Marcus by including word-level cues, we would hope to see similar improvements by exploit- ing lexical information in the treebank approach. For each corpus we examined the first 100 or so errors and found that certain linguistic constructs consistently cause trouble. (In the examples that follow, the bracketing shown is the error.) 222 Base NP I Initial I Threshold Incremental I Human Corpus Rule Set Pruning Pruning Review Empire I 23.OP/46.5RI 91.2P/93.1R 92.TP/93.7RI 90.3P/9O.5R R&M 19.0P/36.1R 87.2P/90.0R 89.4P/90.9R 81.6P/g5.0R Table h Evaluation of the Treebank Approach Using the Mitre Part-of-Speech Tagger (P = precision; R = recall) BaseNP I Threshold I Threshold I Incremental I Incremental I Corpus Improvement T Local Repair Improvement + Local Repair Empire [ 91.2P/93.1R 92.8P/93.7R 92.7P/93.7R 93.7P/94.0R 87.2P/90.0R I 89.2P/gO.6R I 89"4P/90"gR I 90.7P/91.IR I R&M I Table 3: Effect of Local Repair Heuristics * Conjunctions. Conjunctions were a major prob- lem in the R&M corpus. For the Empire corpus, conjunctions of adjectives proved dif- ficult: [record/N2~ [third-quarter/JJ and/CC nine-month/JJ results/NN5~. • Gerunds. Even though the most difficult VBG constructions such as manufacturing titans were removed from the Empire corpus, there were others that the bracketer did not handle, like [chiej~ operating [officer]. Like conjunctions, gerunds posed a major difficulty in the R&M corpus. • NPs Containing Punctuation. Predictably, the bracketer has difficulty with NPs containing pe- riods, quotation marks, hyphens, and parenthe- ses. • Adverbial Noun Phrases. Especially temporal NPs such as last month in at [83.6~] of[capacity last month]. • Appositives. These are juxtaposed NPs such as of [colleague Michael Madden] that the brack- eter mistakes for a single NP. • Quantified NPs. NPs that look like PPs are a problem: at/IN [least/JJS~ [the/DT right/JJ jobs/NNS~; about/IN [25/CD million/CD]. Many errors appear to stem from four underly- ing causes. 
First, close to 20% can be attributed to errors in the Treebank and in the Base NP cor- pus, bringing the effective performance of the algo- rithm to 94.2P/95.9R and 91.5P/92.TR for the Em- pire and R&M corpora, respectively. For example, neither corpus includes WH-phrases as base NPs. When the bracketer correctly recognizes these NPs, they are counted as errors. Part-of-speech tagging errors are a second cause. Third, many NPs are missed by the bracketer because it lacks the appro- priate rule. For example, household products busi- ness is bracketed as [household/NN products/NNS~ [business/Nh~. Fourth, idiomatic and specialized ex- pressions, especially time, date, money, and numeric phrases, also account for a substantial portion of the errors. These last two categories of errors can often be de- tected because they produce either recognizable pat- terns or unlikely linguistic constructs. Consecutive NPs, for example, usually denote bracketing errors, as in [household/NN products/NNS~ [business/Nh~. Merging consecutive NPs in the correct contexts would fix many such errors. Idiomatic and special- ized expressions might be corrected by similarly local repair heuristics. Typical examples might include changing [effective/JJ Monday/NNP] to effective [Monday]; changing [the/DT balance/NN due/J J] to [the balance] due; and changing were/VBP [n't/RB the/DT only/RS losers/NNS~ to were n't [the only losers]. Given these observations, we implemented three local repair heuristics. The first merges consecutive NPs unless either might be a time expression. The second identifies two simple date expressions. The third looks for quantifiers preceding of NP. The first heuristic, for example, merges [household products] [business] to form [household products business], but leaves increased [15 ~ [last Friday] untouched. The second heuristic merges [June b~ , [1995] into [June 5, 1995]; and [June], [1995] into [June, 1995]. The third finds examples like some of[the companies] and produces [some] of [the companies]. These heuristics represent an initial exploration into the effectiveness of employing lexical information in a post-processing phase rather than during grammar induction and bracketing. While we are investigating the latter in current work, local repair heuristics have the ad- vantage of keeping the training and bracketing algo- rithms both simple and fast. The effect of these heuristics on recall and preci- sion is shown in Table 3. We see consistent improve- ments for both corpora and both pruning methods, 223 achieving approximately 94P/R for the Empire cor- pus and approximately 91P/R for the R&M corpus. Note that these are the final results reported in the introduction and conclusion. Although these experi- ments represent only an initial investigation into the usefulness of local repair heuristics, we are very en- couraged by the results. The heuristics uniformly boost precision without harming recall; they help the R&M corpus even though they were designed in response to errors in the Empire corpus. In addi- tion, these three heuristics alone recover 1/2 to 1/3 of the improvements we can expect to obtain from lexicalization based on the R&M results. 5 Conclusions This paper presented a new method for identifying base NPs. Our treebank approach uses the simple technique of matching part-of-speech tag sequences, with the intention of capturing the simplicity of the corresponding syntactic structure. 
It employs two existing corpus-based techniques: the initial noun phrase grammar is extracted directly from an an- notated corpus; and a benefit score calculated from errors on an improvement corpus selects the best subset of rules via a coarse- or fine-grained pruning algorithm. The overall results are surprisingly good, espe- cially considering the simplicity of the method. It achieves 94% precision and recall on simple base NPs. It achieves 91% precision and recall on the more complex NPs of the Ramshaw & Marcus cor- pus. We believe, however, that the base NP finder can be improved further. First, the longest-match heuristic of the noun phrase bracketer could be re- placed by more sophisticated parsing methods that account for lexical preferences. Rule application, for example, could be disambiguated statistically using distributions induced during training. We are cur- rently investigating such extensions. One approach closely related to ours -- weighted finite-state trans- ducers (e.g. (Pereira and Riley, 1997)) -- might pro- vide a principled way to do this. We could then consider applying our error-driven pruning strategy to rules encoded as transducers. Second, we have only recently begun to explore the use of local re- pair heuristics. While initial results are promising, the full impact of such heuristics on overall perfor- mance can be determined only if they are system- atically learned and tested using available training data. Future work will concentrate on the corpus- based acquisition of local repair heuristics. In conclusion, the treebank approach to base NPs provides an accurate and fast bracketing method, running in time linear in the length of the tagged text.. The approach is simple to understand, im- plement, and train. The learned grammar is easily modified for use with new corpora, as rules can be added or deleted with minimal interaction problems. Finally, the approach provides a general framework for developing other treebank grammars (e.g., for subject/verb/object identification) in addition to these for base NPs. Acknowledgments. This work was supported in part by NSF (]rants IRI-9624639 and GER-9454149. We thank Mitre for providing their part-of-speech tag- ger. References D. Bourigault. 1992. Surface Grammatical Anal- ysis for the Extraction of Terminological Noun Phrases. In Proceedings, COLING-92, pages 977- 981. Eric Brill. 1995. Transformation-Based Error- Driven Learning and Natural Language Process- ing: A Case Study in Part-of-Speech Tagging. Computational Linguistics, 21(4):543-565. E. Charniak. 1996. Treebank Grammars. In Pro- ceedings of the Thirteenth National Conference on Artificial Intelligence, pages 1031-1036, Portland, OR. AAAI Press / MIT Press. K. Church. 1988. A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Pro- ceedings of the Second Conference on Applied Nat- ural Language Processing, pages 136-143. Associ- ation for Computational Linguistics. J. P. Gee and F. Grosjean. 1983. Performance struc- tures: A psycholinguistic and linguistic appraisal. Cognitive Psychology, 15:411-458. John S. Justeson and Slava M. Katz. 1995. Techni- cal Terminology: Some Linguistic Properties and an Algorithm for Identification in Text. Natural Language Engineering, 1:9-27. M. Marcus, M. Marcinkiewicz, and B. Santorini. 1993. Building a Large Annotated Corpus of En- glish: The Penn Treebank. Computational Lin- guistics, 19(2):313-330. Fernando C. N. Pereira and Michael D. Riley. 1997. 
Speech Recognition by Composition of Weighted Finite Automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing. MIT Press.
Lance A. Ramshaw and Mitchell P. Marcus. In press. Text chunking using transformation-based learning. In Natural Language Processing Using Very Large Corpora. Kluwer. Originally appeared in WVLC95, 82-94.
A. Voutilainen. 1993. NPTool, A Detector of English Noun Phrases. In Proceedings of the Workshop on Very Large Corpora, pages 48-57. Association for Computational Linguistics.
Exploiting Syntactic Structure for Language Modeling Ciprian Chelba and Frederick Jelinek Center for Language and Speech Processing The Johns Hopkins University, Barton Hall 320 3400 N. Charles St., Baltimore, MD-21218, USA {chelba,jelinek} @jhu.edu Abstract The paper presents a language model that devel- ops syntactic structure and uses it to extract mean- ingful information from the word history, thus en- abling the use of long distance dependencies. The model assigns probability to every joint sequence of words-binary-parse-structure with headword an- notation and operates in a left-to-right manner -- therefore usable for automatic speech recognition. The model, its probabilistic parameterization, and a set of experiments meant to evaluate its predictive power are presented; an improvement over standard trigram modeling is achieved. 1 Introduction The main goal of the present work is to develop a lan- guage model that uses syntactic structure to model long-distance dependencies. During the summer96 DoD Workshop a similar attempt was made by the dependency modeling group. The model we present is closely related to the one investigated in (Chelba et al., 1997), however different in a few important aspects: • our model operates in a left-to-right manner, al- lowing the decoding of word lattices, as opposed to the one referred to previously, where only whole sen- tences could be processed, thus reducing its applica- bility to n-best list re-scoring; the syntactic structure is developed as a model component; • our model is a factored version of the one in (Chelba et al., 1997), thus enabling the calculation of the joint probability of words and parse structure; this was not possible in the previous case due to the huge computational complexity of the model. Our model develops syntactic structure incremen- tally while traversing the sentence from left to right. This is the main difference between our approach and other approaches to statistical natural language parsing. Our parsing strategy is similar to the in- cremental syntax ones proposed relatively recently in the linguistic community (Philips, 1996). The probabilistic model, its parameterization and a few experiments that are meant to evaluate its potential for speech recognition are presented. /// /~tract NP the_DT contract NN ~,dc,l VBI)with INa DT Ioss_NN of_IN 7_CD ~: ~::, X~,'~ after Figure 1: Partial parse 2 The Basic Idea and Terminology Consider predicting the word after in the sentence: the contract ended with a loss of 7 cents after trading as low as 89 cents. A 3-gram approach would predict after from (7, cents) whereas it is intuitively clear that the strongest predictor would be ended which is outside the reach of even 7-grams. Our assumption is that what enables humans to make a good prediction of after is the syntactic structure in the past. The linguistically correct partial parse of the word his- tory when predicting after is shown in Figure 1. The word ended is called the headword of the con- stituent (ended (with (...) )) and ended is an ex- posed headword when predicting after -- topmost headword in the largest constituent that contains it. The syntactic structure in the past filters out irrel- evant words and points to the important ones, thus enabling the use of long distance information when predicting the next word. Our model will attempt to build the syntactic structure incrementally while traversing the sen- tence left-to-right. 
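To make the exposed-headword context concrete before the model is defined, the small sketch below encodes the partial parse of Figure 1 as headword-annotated constituents and reads off the context that will actually drive prediction; the encoding and labels are hypothetical simplifications.

```python
# A constituent is (label, headword, children); a leaf is (POStag, word, []).
# Rough encoding of Figure 1's partial parse at the point of predicting "after":
prefix = [
    ("SB", "<s>", []),
    ("S'", "ended",            # (the contract) ended (with a loss of 7 cents)
     [("NP", "contract", []), ("VBD", "ended", []), ("PP", "with", [])]),
]

# The exposed heads are the (headword, label) pairs of the topmost
# constituents built so far; the most recent ones form the context.
heads = [(head, label) for (label, head, _) in prefix]
h_0, h_minus1 = heads[-1], heads[-2]
print(h_0)  # ('ended', "S'"): the strong predictor of "after",
            # versus the trigram context ('7', 'cents')
```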
The model will assign a probabil- ity P(W, T) to every sentence W with every possible POStag assignment, binary branching parse, non- terminal label and headword annotation for every constituent of T. Let W be a sentence of length n words to which we have prepended <s> and appended </s> so that Wo =<s> and w,+l =</s>. Let Wk be the word k-prefix Wo...wk of the sentence and WkTk 225 "i .... " °'' (¢:s>. SB) ....... (wp. t p) (w {p÷l }. L_( I~-I }) ........ (wk. t_k) w_( k*l }.... </s.~ T [-ra} <s> h_(-2 } h (-! } h_O ......... Figure 2: A word-parse k-prefix Figure 4: Before an adjoin operation _ (<,'s>, TOP) (<s>, SB) (w_l, ~_1) ..................... ('*'_n, t_n) (</~, SE) Figure 3: Complete parse the word-parse k-prefix. To stress this point, a word-parse k-prefix contains -- for a given parse -- only those binary subtrees whose span is com- pletely included in the word k-prefix, excluding w0 =<s>. Single words along with their POStag can be regarded as root-only trees. Figure 2 shows a word-parse k-prefix; h_0 .. h_{-m} are the ex- posed heads, each head being a pair(headword, non- terminal label), or (word, POStag) in the case of a root-only tree. A complete parse -- Figure 3 -- is any binary parse of the (wl,tl)...(wn,t,) (</s>, SE) sequence with the restriction that (</s>, TOP') is the only allowed head. Note that ((wl,tl)...(w,,t,)) needn't be a constituent, but for the parses where it is, there is no restriction on which of its words is the headword or what is the non-terminal label that accompanies the headword. The model will operate by means of three mod- ules: • WORD-PREDICTOR predicts the next word wk+l given the word-parse k-prefix and then passes control to the TAGGER; • TAGGER predicts the POStag of the next word tk+l given the word-parse k-prefix and the newly predicted word and then passes control to the PARSER; • PARSER grows the already existing binary branching structure by repeatedly generating the transitions: (unary, NTlabel), (adjoin-left, NTlabel) or (adjoin-right, NTlabel) until it passes control to the PREDICTOR by taking a null transition. NTlabel is the non-terminal label assigned to the newly built constituent and {left ,right} specifies where the new headword is inherited from. The operations performed by the PARSER are illustrated in Figures 4-6 and they ensure that all possible binary branching parses with all possible T'_{.m÷l l <-<s.~. <s> h'{-I } = h_(-2 } h'_0= (h_{-I }.word, NTlabel) Figure 5: Result of adjoin-left under NTlabel headword and non-terminal label assignments for the wl ... wk word sequence can be generated. The following algorithm formalizes the above description of the sequential generation of a sentence with a complete parse. Transition t; // a PARSER transition predict (<s>, SB); do{ //WORD-PREDICTORand TAGGER predict (next_word, POStag); //PARSER do{ if(h_{-l}.word != <s>){ if(h_O.word == </s>) t = (adjoin-right, TOP'); else{ if(h_O.tag== NTlabel) t = [(adjoin-{left,right}, NTlabel), null]; else t = [(unary, NTlabel), (adjoin-{left,right}, NTlabel), null]; } } else{ if (h_O.tag == NTlabel) t = null; else t = [(unary, NTlabel), null] ; } }while(t != null) //done PARSER }while ( ! (h_0. word==</s> && h_{- 1 }. 
word==<s>) ) t = (adjoin-right, TOP); //adjoin <s>_SB; DONE; The unary transition is allowed only when the most recent exposed head is a leaf of the tree -- a regular word along with its POStag -- hence it can be taken at most once at a given position in the 226 T'_l.m+l } <-<s> h' {- I }=h {-2} h'_0 = (h_0.word, NTlab¢l) Figure 6: Result of adjoin-right under NTlabel input word string. The second subtree in Figure 2 provides an example of a unary transition followed by a null transition. It is easy to see that any given word sequence with a possible parse and headword annotation is generated by a unique sequence of model actions. This will prove very useful in initializing our model parameters from a treebank -- see section 3.5. 3 Probabilistic Model The probability P(W, T) of a word sequence W and a complete parse T can be broken into: P(W, T) = 1-I "+xr P(wk/Wk-aTk-x) " P(tk/Wk-lTk-x,wk)" k=X I. N~ ~I P(Pki /Wk-xTk-a' Wk, tk,pkx ... pLX)](1) i=X where: • Wk-lTk-x is the word-parse (k - 1)-prefix • wk is the word predicted by WORD-PREDICTOR * tk is the tag assigned to wk by the TAGGER • Nk -- 1 is the number of operations the PARSER executes before passing control to the WORD- PREDICTOR (the Nk-th operation at position k is the null transition); Nk is a function of T • pi k denotes the i-th PARSER operation carried out at position k in the word string; p~ 6 {(unary, NTlabel), (adjoin-left, NTlabel), (adjoin-right, NTlabel), null}, pk 6 { (adjoin-left, NTlabel), (adjoin-right, NTlabel)}, 1 < i < Nk , p~ =null, i = Nk Our model is based on three probabilities: P(wk/Wk-lTk-1) (2) P(tk/wk, Wk-lTk-x) (3) P(p~/wk,tk,Wk--xTk--l,p~.. k "Pi--X) C a) As can be seen, (wk, tk, Wk-xTk-x,p~...pki_x) is one of the Nk word-parse k-prefixes WkTk at position k in the sentence, i = 1, Nk. To ensure a proper probabilistic model (1) we have to make sure that (2), (3) and (4) are well de- fined conditional probabilities and that the model halts with probability one. Consequently, certain PARSER and WORD-PREDICTOR probabilities must be given specific values: • P(null/WkTk) = 1, if h_{-1}.word = <s> and h_{0} ~ (</s>, TOP') -- that is, before predicting </s> -- ensures that (<s>, SB) is adjoined in the last step of the parsing process; • P((adjoin-right, TOP)/WkTk) = 1, if h_O = (</s>, TOP') and h_{-l}.word = <s> and P((adjoin-right, TOP')/WkTk) = 1, if h_0 = (</s>, TOP') and h_{-1}.word ~ <s> ensure that the parse generated by our model is con- sistent with the definition of a complete parse; • P((unary, NWlabel)/WkTk) = 0, if h_0.tag POStag ensures correct treatment of unary produc- tions; • 3e > O, VWk-lTk-l,P(wk=</s>/Wk-xTk-1) >_ e ensures that the model halts with probability one. The word-predictor model (2) predicts the next word based on the preceding 2 exposed heads, thus making the following equivalence classification: P(wk/Wk-lTk-1) = P(wk/ho, h-l) After experimenting with several equivalence clas- sifications of the word-parse prefix for the tagger model, the conditioning part of model (3) was re- duced to using the word to be tagged and the tags of the two most recent exposed heads: P(tk/Wk, Wk-lTk-1) = P(tk/wk, ho.tag, h-l.tag) Model (4) assigns probability to different parses of the word k-prefix by chaining the elementary oper- ations described above. The workings of the parser module are similar to those of Spatter (Jelinek et al., 1994). 
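Chaining these three conditional models along a derivation yields the joint probability P(W, T) of equation (1). A minimal sketch of that accumulation, with the model components abstracted as assumed probability-lookup functions:

```python
import math

def log_prob(derivation, p_word, p_tag, p_parser):
    # derivation: (module, action, context) triples in the order produced
    # by the generation algorithm; each p_* returns a conditional probability.
    logp = 0.0
    for module, action, context in derivation:
        if module == "WORD-PREDICTOR":
            logp += math.log(p_word(action, context))    # P(w_k | h_0, h_-1)
        elif module == "TAGGER":
            logp += math.log(p_tag(action, context))     # P(t_k | w_k, tags)
        else:                                            # PARSER
            logp += math.log(p_parser(action, context))  # P(p_i^k | context)
    return logp  # log P(W, T)
```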
The equivalence classification of the WkTk word-parse we used for the parser model (4) was the same as the one used in (Collins, 1996): p (pk / Wk Tk ) = p (pk / ho , h-x) It is worth noting that if the binary branching structure developed by the parser were always right- branching and we mapped the POStag and non- terminal label vocabularies to a single type then our model would be equivalent to a trigram language model. 3.1 Modeling Tools All model components -- WORD-PREDICTOR, TAGGER, PARSER -- are conditional probabilis- tic models of the type P(y/xl,x2,...,xn) where y, Xx,X2,...,Xn belong to a mixed bag of words, POStags, non-terminal labels and parser operations (y only). For simplicity, the modeling method we chose was deleted interpolation among relative fre- quency estimates of different orders fn(') using a 227 recursive mixing scheme: P(y/xl, . . . ,xn) = A(xl,...,x,)-P(y/xl,...,x,_x) + (1 -- ~(Xl,...,Xn))" fn(y/Xl,...,Xn), (5) f -l (Y) = uniform(vocabulary(y)) (6) As can be seen, the context mixing scheme dis- cards items in the context in right-to-left order. The A coefficients are tied based on the range of the count C(xx,...,Xn). The approach is a standard one which doesn't require an extensive description given the literature available on it (Jelinek and Mer- cer, 1980). 3.2 Search Strategy Since the number of parses for a given word prefix Wt grows exponentially with k, I{Tk}l ,,. O(2k), the state space of our model is huge even for relatively short sentences so we had to use a search strategy that prunes it. Our choice was a synchronous multi- stack search algorithm which is very similar to a beam search. Each stack contains hypotheses -- partial parses -- that have been constructed by the same number of predictor and the same number of parser operations. The hypotheses in each stack are ranked according to the ln(P(W, T)) score, highest on top. The width of the search is controlled by two parameters: • the maximum stack depth -- the maximum num- ber of hypotheses the stack can contain at any given state; • log-probability threshold -- the difference between the log-probability score of the top-most hypothesis and the bottom-most hypothesis at any given state of the stack cannot be larger than a given threshold. Figure 7 shows schematically the operations asso- ciated with the scanning of a new word Wk+l. The above pruning strategy proved to be insufficient so we chose to also discard all hypotheses whose score is more than the log-probability threshold below the score of the topmost hypothesis. This additional pruning step is performed after all hypotheses in stage k' have been extended with the null parser transition and thus prepared for scanning a new word. 3.3 Word Level Perplexity The conditional perplexity calculated by assigning to a whole sentence the probability: P(W/T*) = fi P(wk+l/WkT~), (7) k=O where T* = argrnaxTP(W, T), is not valid because it is not causal: when predicting wk+l we use T* which was determined by looking at the entire sen- tence. To be able to compare the perplexity of our (k) \ 0 parser ot~ k predict. [ p parser op k predict. p+l parser k predict. P_k parser k predict. (k') \ ~ 0 parser opt_ "~+1 predict. [" - 1 k+l predict. p+ 1 parser ~ +1 predict]z/ ~ P_k parser ~_ predict V - _k+ 1 parse~e~ 1 predict.~" word predictor and tagger - (k+l) oq I I I )~_-~--)]pparser op I - - =-~+1 predict. I !--~+1 parser] -- 7 - - >~+..} predic[. I i i i ---!--~lP kparser! ---: , - :-~+ 1 predict. 
I nullparser transitions parser adjoin/unary transitions Figure 7: One search extension cycle model with that resulting from the standard tri- gram approach, we need to factor in the entropy of guessing the correct parse T~ before predicting wk+l, based solely on the word prefix Wk. The probability assignment for the word at posi- tion k + 1 in the input sentence is made using: P(Wk+l/Wk) = ~TheS~ P(Wk+x/WkTk) " p(Wk,Tk), (8) p(Wk,Tk) = P(W Tk)/ P(WkTk) (9) TkESk which ensures a proper probability over strings W*, where Sk is the set of all parses present in our stacks at the current stage k. Another possibility for evaluating the word level perplexity of our model is to approximate the prob- ability of a whole sentence: N P(W) = Z P(W, T (k)) (10) k=l where T (k) is one of the "N-best" -- in the sense defined by our search -- parses for W. This is a deficient probability assignment, however useful for justifying the model parameter re-estimation. The two estimates (8) and (10) are both consistent in the sense that if the sums are carried over all 228 possible parses we get the correct value for the word level perplexity of our model. 3.4 Parameter Re-estimation The major problem we face when trying to reesti- mate the model parameters is the huge state space of the model and the fact that dynamic programming techniques similar to those used in HMM parame- ter re-estimation cannot be used with our model. Our solution is inspired by an HMM re-estimation technique that works on pruned -- N-best -- trel- lises(Byrne et al., 1998). Let (W, T(k)), k = 1... N be the set of hypothe- ses that survived our pruning strategy until the end of the parsing process for sentence W. Each of them was produced by a sequence of model actions, chained together as described in section 2; let us call the sequence of model actions that produced a given (W, T) the derivation(W, T). Let an elementary event in the derivation(W, T) be :, (m,) .~(m,)~ where: * l is the index of the current model action; * ml is the model component -- WORD- PREDICTOR, TAGGER, PARSER -- that takes action number l in the derivation(W, T); , y~mt) is the action taken at position I in the deriva- tion: if mt = WORD-PREDICTOR, then y~m,) is a word; if mt -- TAGGER, then y~m~) is a POStag; if mt = PARSER, then y~m~) is a parser-action; • ~m~) is the context in which the above action was taken: if rat = WORD-PREDICTOR or PARSER, then _~,na) = (ho.tag, ho.word, h-1 .tag, h-l.word); if rat = TAGGER, then ~mt) = (word-to-tag, ho.tag, h-l.tag). The probability associated with each model ac- tion is determined as described in section 3.1, based on counts C (m) (y(m), x_("0), one set for each model component. Assuming that the deleted interpolation coeffi- cients and the count ranges used for tying them stay fixed, these counts are the only parameters to be re-estimated in an eventual re-estimation procedure; indeed, once a set of counts C (m) (y(m), x_(m)) is spec- ified for a given model ra, we can easily calculate: • the relative frequency estimates fn(m)/,,(m) Ix(m) ~ for all context orders kY I_n / n = 0...maximum-order(model(m)); • the count c(m)(x_ (m)) used for determining the A(x_ (m)) value to be used with the order-n context x(m)" This is all we need for calculating the probability of an elementary event and then the probability of an entire derivation. 
One training iteration of the re-estimation proce- dure we propose is described by the following algo- rithm: N-best parse development data; // counts.El // prepare counts.E(i+l) for each model component c{ gather_counts development model_c; } In the parsing stage we retain for each "N-best" hy- pothesis (W, T(k)), k = 1... N, only the quantity ¢(W, T(k)) p(W,T(k))/ N = ~-~k=l P(W, T(k)) and its derivation(W,T(k)). We then scan all the derivations in the "development set" and, for each occurrence of the elementary event (y(m), x_(m)) in derivation(W,T(k)) we accumulate the value ¢(W,T (k)) in the C(m)(y(m),x__ (m)) counter to be used in the next iteration. The intuition behind this procedure is that ¢(W,T (k)) is an approximation to the P(T(k)/w) probability which places all its mass on the parses that survived the parsing process; the above proce- dure simply accumulates the expected values of the counts c(m)(y(m),x (m)) under the ¢(W,T (k)) con- ditional distribution. As explained previously, the C(m) (y(m), X_(m)) counts are the parameters defining our model, making our procedure similar to a rigor- ous EM approach (Dempster et al., 1977). A particular -- and very interesting -- case is that of events which had count zero but get a non-zero count in the next iteration, caused by the "N-best" nature of the re-estimation process. Consider a given sentence in our "development" set. The "N-best" derivations for this sentence are trajectories through the state space of our model. They will change from one iteration to the other due to the smooth- ing involved in the probability estimation and the change of the parameters -- event counts -- defin- ing our model, thus allowing new events to appear and discarding others through purging low probabil- ity events from the stacks. The higher the number of trajectories per sentence, the more dynamic this change is expected to be. The results we obtained are presented in the ex- periments section. All the perplexity evaluations were done using the left-to-right formula (8) (L2R- PPL) for which the perplexity on the "development set" is not guaranteed to decrease from one itera- tion to another. However, we believe that our re- estimation method should not increase the approxi- mation to perplexity based on (10) (SUM-PPL) -- again, on the "development set"; we rely on the con- sistency property outlined at the end of section 3.3 to correlate the desired decrease in L2R-PPL with that in SUM-PPL. No claim can be made about the change in either L2R-PPL or SUM-PPL on test data. 229 Y_! Y_k Y_n Y l Y_k Y_n Figure 8: Binarization schemes 3.5 Initial Parameters Each model component -- WORD-PREDICTOR, TAGGER, PARSER -- is trained initially from a set of parsed sentences, after each parse tree (W, T) undergoes: • headword percolation and binarization -- see sec- tion 4; • decomposition into its derivation(W, T). Then, separately for each m model component, we: • gather joint counts cCm)(y(m),x (m)) from the derivations that make up the "development data" using ¢(W,T) = 1; • estimate the deleted interpolation coefficients on joint counts gathered from "check data" using the EM algorithm. These are the initial parameters used with the re- estimation procedure described in the previous sec- tion. 4 Headword Percolation and Binarization In order to get initial statistics for our model com- ponents we needed to binarize the UPenn Tree- bank (Marcus et al., 1995) parse trees and perco- late headwords. 
The procedure we used was to first percolate headwords using a context-free (CF) rule- based approach and then binarize the parses by us- ing a rule-based approach again. The headword of a phrase is the word that best represents the phrase, all the other words in the phrase being modifiers of the headword. Statisti- cally speaking, we were satisfied with the output of an enhanced version of the procedure described in (Collins, 1996) -- also known under the name "Magerman & Black Headword Percolation Rules". Once the position of the headword within a con- stituent -- equivalent with a CF production of the type Z --~ Y1.--Yn , where Z, Y1,...Yn are non- terminal labels or POStags (only for Y/) -- is iden- tified to be k, we binarize the constituent as follows: depending on the Z identity, a fixed rule is used to decide which of the two binarization schemes in Figure 8 to apply. The intermediate nodes created by the above binarization schemes receive the non- terminal label Z ~. 5 Experiments Due to the low speed of the parser -- 200 wds/min for stack depth 10 and log-probability threshold 6.91 nats (1/1000) -- we could carry out the re- estimation technique described in section 3.4 on only 1 Mwds of training data. For convenience we chose to work on the UPenn Treebank corpus. The vocab- ulary sizes were: * word vocabulary: 10k, open -- all words outside the vocabulary are mapped to the <unk> token; • POS tag vocabulary: 40, closed; • non-terminal tag vocabulary: 52, closed; • parser operation vocabulary: 107, closed; The training data was split into "development" set -- 929,564wds (sections 00-20) -- and "check set" -- 73,760wds (sections 21-22); the test set size was 82,430wds (sections 23-24). The "check" set has been used for estimating the interpolation weights and tuning the search parameters; the "develop- ment" set has been used for gathering/estimating counts; the test set has been used strictly for evalu- ating model performance. Table 1 shows the results of the re-estimation tech- nique presented in section 3.4. We achieved a reduc- tion in test-data perplexity bringing an improvement over a deleted interpolation trigram model whose perplexity was 167.14 on the same training-test data; the reduction is statistically significant according to a sign test. iteration DEV set TEST set number L2R-PPL L2R-PPL E0 24.70 167.47 E1 22.34 160.76 E2 21.69 158.97 E3 21.26 158.28 3-gram 21.20 167.14 Table 1: Parameter re-estimation results Simple linear interpolation between our model and the trigram model: Q(wk+l/Wk) = )~" P(Wk+I/Wk-I,Wk) + (1 -- A)" P(wk+l/Wk) yielded a further improvement in PPL, as shown in Table 2. The interpolation weight was estimated on check data to be )~ = 0.36. An overall relative reduction of 11% over the trigram model has been achieved. 6 Conclusions and Future Directions The large difference between the perplexity of our model calculated on the "development" set -- used 230 II iteration number II Eo E3 [l 3-gram TEST set L2R-PPL 167.47 158.28 167.14 TEST set 3-gram interpolated PPL 152.25 II 148.90 167.14 II Table 2: Interpolation with trigram results for model parameter estimation -- and "test" set -- unseen data -- shows that the initial point we choose for the parameter values has already captured a lot of information from the training data. The same problem is encountered in standard n-gram language modeling; however, our approach has more flexibility in dealing with it due to the possibility of reestimat- ing the model parameters. 
We believe that the above experiments show the potential of our approach for improved language models. Our future plans include: • experiment with other parameterizations than the two most recent exposed heads in the word predictor model and parser; • estimate a separate word predictor for left-to- right language modeling. Note that the correspond- ing model predictor was obtained via re-estimation aimed at increasing the probability of the "N-best" parses of the entire sentence; • reduce vocabulary of parser operations; extreme case: no non-terminal labels/POS tags, word only model; this will increase the speed of the parser thus rendering it usable on larger amounts of train- ing data and allowing the use of deeper stacks -- resulting in more "N-best" derivations per sentence during re-estimation; • relax -- flatten -- the initial statistics in the re- estimation of model parameters; this would allow the model parameters to converge to a different point that might yield a lower word-level perplexity; • evaluate model performance on n-best sentences output by an automatic speech recognizer. 7 Acknowledgments This research has been funded by the NSF IRI-19618874 grant (STIMULATE). The authors would like to thank Sanjeev Khu- danpur for his insightful suggestions. Also to Harry Printz, Eric Ristad, Andreas Stolcke, Dekai Wu and all the other members of the dependency model- ing group at the summer96 DoD Workshop for use- ful comments on the model, programming support and an extremely creative environment. Also thanks to Eric Brill, Sanjeev Khudanpur, David Yarowsky, Radu Florian, Lidia Mangu and Jun Wu for useful input during the meetings of the people working on our STIMULATE grant. References w. Byrne, A. Gunawardana, and S. Khudanpur. 1998. Information geometry and EM variants. Technical Report CLSP Research Note 17, De- partment of Electical and Computer Engineering, The Johns Hopkins University, Baltimore, MD. C. Chelba, D. Engle, F. Jelinek, V. Jimenez, S. Khu- danpur, L. Mangu, H. Printz, E. S. Ristad, R. Rosenfeld, A. Stolcke, and D. Wu. 1997. Struc- ture and performance of a dependency language model. In Proceedings of Eurospeech, volume 5, pages 2775-2778. Rhodes, Greece. Michael John Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceed- ings of the 34th Annual Meeting of the Associ- ation for Computational Linguistics, pages 184- 191. Santa Cruz, CA. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. In Journal of the Royal Statistical Society, volume 39 of B, pages 1-38. Frederick Jelinek and Robert Mercer. 1980. Inter- polated estimation of markov source parameters from sparse data. In E. Gelsema and L. Kanal, ed- itors, Pattern Recognition in Practice, pages 381- 397. F. Jelinek, J. Lafferty, D. M. Magerman, R. Mercer, A. Ratnaparkhi, and S. Roukos. 1994. Decision tree parsing using a hidden derivational model. In ARPA, editor, Proceedings of the Human Lan- guage Technology Workshop, pages 272-277. M. Marcus, B. Santorini, and M. Marcinkiewicz. 1995. Building a large annotated corpus of En- glish: the Penn Treebank. Computational Lin- guistics, 19(2):313-330. Colin Philips. 1996. Order and Structure. Ph.D. thesis, MIT. Distributed by MITWPL. 231
Proper Name Translation in Cross-Language Information Retrieval Hsin-Hsi Chen, Sheng-Jie Huang, Yung-Wei Ding, and Shih-Chung Tsai Department of Computer Science and Information Engineering National Taiwan University Taipei, TAIWAN, R.O.C. [email protected] Abstract Recently, language barrier becomes the major problem for people to search, retrieve, and understand WWW documents in different languages. This paper deals with query translation issue in cross-language information retrieval, proper names in particular. Models for name identification, name translation and name searching are presented. The recall rates and the precision rates for the identification of Chinese organization names, person names and location names under MET data are (76.67%, 79.33%), (87.33%, 82.33%) and (77.00%, 82.00%), respectively. In name translation, only 0.79% and 1.11% of candidates for English person names and location names, respectively, have to be proposed. The name searching facility is implemented on an MT sever for information retrieval on the WWW. Under this system, user can issue queries and read documents with his familiar language. Introduction World Wide Web (WWW) is the most useful and powerful information dissemination system on the Internet. For the multilingual feature, the language barrier becomes the major problem for people to search, retrieve, and understand WWW documents in different languages. That decreases the dissemination power of WWW to some extent. The researches of cross-language information retrieval abbreviated as CLIR (Oard and Dorr, 1996; Oard 1997) aim to tackle the language barriers. There are several important issues in CLIR: 232 (1) Queries and documents are in different languages, so that translation is required. (2) Words in a query may be ambiguous, thus disambiguation is required. (3) Queries are usually short, thus expansion is required. (4) Word boundary in queries of some languages (Chen and Lee, 1996) is not clear, thus segmentation is required. (5) A document may be in more than one language, thus language identification is required. This paper focuses on query translation issue, proper name in particular. The percentage of user queries containing proper names is very high. The paper (Thompson and Dozier, 1997) reported an experiment over periods of several days in 1995. It showed 67.8%, 83.4%, and 38.8% of queries to Wall Street Journal, Los Angeles Times, and Washington Post, respectively, involve name searching. In CLIR, three tasks are needed: name identification, name translation, and name searching. Because proper names are usually unknown words, it is hard to find in monolingual dictionary not to mention bilingual dictionary. Coverage is one of the major problems in dictionary-based approaches (Ballesteros and Croft, 1996; Davis, 1997; Hull and Grefenstette, 1996). Corpus-based approaches (Brown, 1996; Oard 1996; Sheridan and Ballerini, 1996) set up thesaurus from large- scale corpora. They provide narrow but specific coverage of the language, and are complementary to broad and shallow coverage in dictionaries. However, domain shifts and term align accuracy are major limitations of corpus-based approaches. Besides, proper names are infrequent words relative to other content words in corpora. In information retrieval, most frequent and less frequent words are regarded as unimportant words and may be neglected. This paper will propose methods to extract and classify proper names from Chinese queries (Section 1). 
Then, Chinese proper names are translated into English proper names (Section 2). Finally, the translated queries are sent to an MT sever for information retrieval on the WWW (Bian and Chen, 1997). The retrieved English home pages are presented in Chinese and/or English. 1 Name Extraction and Classification People, affairs, time, places and things are five basic entities in a document. If we can catch the fundamental entities, we can understand the document to some degree. These entities are also the targets that users are interested in. That is, users often issue queries to retrieve such kinds of entities. The basic entities often appear in proper names, which are major unknown words in natural language texts. Thus name extraction is indispensable for both natural language understanding and information retrieval. In famous message understanding system evaluation and message understanding conferences (MUC) and the related multilingual entity tasks (MET), named entity, which covers named organizations, people, and locations, along with date/time expressions and monetary and percentage expressions, is one of tasks for evaluating technologies. In MUC-6 named entity task, the systems developed by SRA (Krupka, 1995) and BBN (Weischedel, 1995) on the person name recognition portion have very high recall and precision scores (over 94%). In Chinese language Processing, Chert and Lee (1996) present various strategies to identify and classify three types of proper nouns, i.e., Chinese person names, Chinese transliterated person names and organization names. In large-scale experiments, the average precision rate is 88.04% and the average recall rate is 92.56% for the identification of Chinese person names. The above approaches can be employed to collect Chinese and English proper name sets from WWW (very large-scale corpora). Identification of proper names in queries is different from that in large-scale texts. The major difference is that query is always short. Thus its context is much shorter than full texts and some technologies involving larger contexts are useless. The following paragraphs depict the methods we adopt in the identification of Chinese proper names. A Chinese person name is composed of surname and name parts. Most Chinese surnames are single character and some rare ones are two characters. A married woman may place her husband's surname before her surname. Thus there are three possible types of surnames, i.e., single character, two characters and two surnames together. Most names are two characters and some rare ones are one character. Theoretically, every character can be considered as names rather than a fixed set. Thus the length of Chinese person names range from 2 to 6 characters. Three kinds of recognition strategies shown below are adopted: (1) name-formulation statistics (2) context cues, e.g., titles, positions, speech-act verbs, and so on (3) cache Name-formulation statistics form the baseline model. It proposes possible candidates. The context cues add extra scores to the candidates. Cache records the occurrences of all the possible candidates in a paragraph. If a candidate appears more than once, it has high tendency to be a person name. Transliterated person names denote foreigners. Compared with Chinese person names, the length of transliterated names is not restricted to 2 to 6 characters. The following strategies are adopted to recognize transliterated names: (1) character condition Two special character sets are setup. 
The first character of transliterated names and the remaining characters must belong to these two sets, respectively. The character condition is a loose restriction. The string that satisfies the character condition may denote a location, a building, an address, etc. It should be employed with other cues (refer to (2)- (4)). (2) titles Titles used in Chinese person names are 233 also applicable to transliterated person names. (3) name introducers Some words can introduce transliterated names when they are used at the first time. (4) special verbs The same set of speech-act verbs used in Chinese person names are also used for transliterated person names. Cache mechanism is also helpful in the identification of transliterated names. A candidate that satisfies the character condition and one of the cues will be placed in the cache. At the second time, the cues may disappear, but we can recover the transliterated person name by checking cache. The structure of organization names is more complex than that of person names. Basically, a complete organization name can be divided into two parts, i.e., name and keyword. Organization names, country names, person names and location names can be placed into the name part of organization names. Person names can be found by the approaches specified in the last paragraph. Location names will be touched later. Transliterated names may appear in the name part. We use the same character sets mentioned in the last paragraph. If a sequence of characters meet the character condition, the sequence and the keyword form an organization name. Common content words may be inserted in between the name part and the keyword part. In current version, at most two content words are allowed. Besides, we utilize the feature of multiple occurrences of organization names in a document and propose n-gram model to deal with this problem. Although cache mechanism and n-gram use the same feature, i.e., multiple occurrences, their concepts are totally different. For organization names, we are not sure when a pattern should be put into cache because its left boundary is hard to decide. The structure of location names is similar to that of organization names. A complete location name is composed of a person name (or a location name) and a location keyword. For the treatment of location names without keywords, we introduce some locative verbs. Cache is also useful and N-gram model is employed to recover those names that do not meet the character condition. We test our system with three sets of MET data (i.e., MET-1 formal run, MET-2 training, and MET-2 dry run). The recall rates and the precision rates for the identification of Chinese organization names, person names and location names are (76.67%, 79.33%), (87.33%, 82.33%) and (77.00%, 82.00%), respectively. 2 Proper Name Translation Chinese and English are the source language and the target language, respectively, in our query translation. The alphabets of these two languages are totally different. Wade-Giles (WG) and Pinyin are two famous systems to romanize Chinese (Lu, 1995). The proper name translation problem can be formulated as: (1) Collect English proper name sets from WWW. (2) Identify Chinese proper names from queries. (3) Romanize the Chinese proper names. (4) Select candidates from suitable proper name sets. In this way, the translation problem is transferred to a phonic string matching problem. 
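A minimal sketch of this formulation in code, using a crude in-order character-overlap score as the phonic similarity: `romanize` and the per-class `name_set` are assumed to exist, and the scoring below is only the baseline that the following paragraphs refine step by step.

```python
def ordered_overlap(roman, candidate):
    # Count characters of the romanized name that match, in order,
    # inside the candidate, normalized by the candidate's length.
    i, matched = 0, 0
    for ch in roman:
        j = candidate.find(ch, i)
        if j >= 0:
            matched, i = matched + 1, j + 1
    return matched / len(candidate)

def translate(chinese_name, romanize, name_set):
    # romanize: assumed WG or Pinyin lookup; name_set: English proper
    # names of the appropriate class (person, location, ...).
    roman = romanize(chinese_name).replace(".", "")  # drop reading dots
    return sorted(name_set,
                  key=lambda c: ordered_overlap(roman, c.lower()),
                  reverse=True)                      # best candidate first
```

For the Aeschylus example that follows, this baseline matches three characters of 'aissuchilessu' inside 'aeschylus' and scores 3/9 = 0.33.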
If an English proper name denotes a Chinese entity, e.g., Lee Teng-hui denotes 李登輝 (President of R.O.C.), the matching is simple. Otherwise, the matching is not trivial. For example, we issue the query 阿爾卑斯山 in Chinese to retrieve information about the Alps. The Pinyin romanization of this name is a.er.bei.si.shan (a dot is inserted between the romanizations of the Chinese characters for clear reading; the dots are dropped when strings are matched). The string "aerbeisishan" is not similar to the string "alps". We develop several language models incrementally to tackle the translation problem.

The first issue we consider is how many common characters there are in a romanized Chinese proper name and an English proper name candidate. Here the order is significant. For example, the WG romanization of a Chinese query is 'ai.ssu.chi.le.ssu', and the corresponding proper name is Aeschylus. Three characters (a, s, s) are matched in order:

  aeschylus
  aissuchilessu

We normalize by the length of the candidate (i.e., 9) and get a score of 0.33. In an experiment, there are 1,534 pairs of Chinese-English person names. We conduct a mate matching: each Chinese proper name is used as a query, and we try to find the corresponding English proper name among the 1,534 candidates. The performance is evaluated by how many candidates must be proposed to cover the correct translation; in other words, the average rank of the correct translations is reported. The performances of the baseline model under the WG and Pinyin systems are 40.06 and 31.05, respectively.

The major problem of the baseline model is that if a character is matched incorrectly, the characters that follow it no longer contribute to the matching. In the above example, the characters of chi are helpless for translation. To reduce this error propagation, we consider the syllables of the candidate in advance, and matching is done in syllables instead of over the whole word. For example, Aeschylus contains three syllables. The matching is shown as follows:

  aes   chy  lus
  aissu chi  lessu

The score is increased to 0.67 (6/9). In the similar experiment, the performance of the new language model is improved: the average ranks are 35.65 and 27.32 for the WG and Pinyin systems, respectively.

Observing the performance differences between the WG and Pinyin systems, we find they use different phones to denote the same sounds. The following shows examples:
(1) consonants: p vs. b, t vs. d, k vs. g, ch vs. j, ch vs. q, hs vs. x, ch vs. zh, j vs. r, ts vs. z, ts vs. c
(2) vowels: -ien vs. -ian, -ieh vs. -ie, -ou vs. -o, -o vs. -uo, -ung vs. -ong, -ueh vs. -ue, -uei vs. -ui, -iung vs. -iong, -i vs. -yi
A new language model integrates these alternatives. The average rank of the mate match is 25.39, better than those of the separate romanization systems.

In the above ranking, each matching character is given an equal weight. We postulate that the first letter of each romanized Chinese character is more important than the others. For example, c in chi is more important than h and i, so it should receive a higher score. The new scoring function is:

  score = [ Σi ( fi * (eli / (2 * cli) + 0.5) + oi * 0.5 ) ] / el

where
  el  : length of the English proper name,
  eli : length of syllable i in the English proper name,
  cli : number of Chinese characters corresponding to syllable i,
  fi  : number of matched first letters in syllable i,
  oi  : number of matched other letters in syllable i.

We reduplicate the above example as follows.
The first letters are shown in capitals:

  aes   chy  lus
  AiSsu Chi  LeSsu

The corresponding parameters are listed below:

  el1=3, cl1=2, f1=2, o1=0,
  el2=3, cl2=1, f2=1, o2=1,
  el3=3, cl3=2, f3=2, o3=0,
  el=9.

The new score of this candidate is 0.83. Under the new experiment, the average rank is 20.64. If the first letter of a romanized Chinese character is not matched, we give it a penalty. The average rank of the enhanced model is 16.78.

Table 1. The Performance of Person Name Translation

  524   497   107   143   44   22   197

We further consider the pronunciation rules in English. For example, ph usually has an f sound. If all the similar rules are added to the language model, the average rank is enhanced to 12.11. Table 1 summarizes the distribution of the ranks of the correct candidates: each column gives the number of candidates whose correct translation falls within a range of ranks, with the leftmost column corresponding to rank 1. About one-third have rank 1. On the average, only 0.79% of the candidates have to be proposed to cover the correct solution. This shows that the method is quite effective.

We also make two extra experiments. Given a query, the best model is adopted to find English locations. There are 1,574 candidates in this test. The average rank is 17.40. In other words, 1.11% of the candidates have been
1998
36
A Concept-based Adaptive Approach to Word Sense Disambiguation
Jen Nan Chen, Department of Computer Science, National Tsing Hua University, Hsinchu 30043, Taiwan, [email protected]
Jason S. Chang, Department of Computer Science, National Tsing Hua University, Hsinchu 30043, Taiwan, [email protected]

Abstract
Word sense disambiguation for unrestricted text is one of the most difficult tasks in the field of computational linguistics. The crux of the problem is to discover a model that relates the intended sense of a word to its context. This paper describes a general framework for adaptive conceptual word sense disambiguation. Central to this WSD framework are the sense division and semantic relations based on topical analysis of dictionary sense definitions. The process begins with an initial disambiguation step using an MRD-derived knowledge base. An adaptation step follows to combine the initial knowledge base with knowledge gleaned from the partially disambiguated text. Once the knowledge base is adjusted to suit the text at hand, it is applied to the text again to finalize the disambiguation result. Definitions and example sentences from LDOCE are employed as training materials for WSD, while passages from the Brown corpus and the Wall Street Journal are used for testing. We report on several experiments illustrating the effectiveness of the adaptive approach.

1 Introduction
Word sense disambiguation for unrestricted text is one of the most difficult tasks in the field of computational linguistics. The crux of the problem is to discover a model that relates the intended sense of a word to its context. It seems very difficult, if not impossible, to statistically acquire enough word-based knowledge about a language to build a robust system capable of automatically disambiguating senses in unrestricted text. For such a system to be effective, a great deal of balanced material must be assembled in order to cover the many idiosyncratic aspects of the language. There are three issues in a lexicalized statistical word sense disambiguation (WSD) model: data sparseness, lack of a level of abstraction, and a static learning strategy. First, word-based models have a plethora of parameters that are difficult to estimate reliably even with a very large corpus; under-trained models lead to low precision. Second, word-based models lack the degree of abstraction that is crucial for a broad-coverage system. Third, a static WSD model is unlikely to be robust and portable, since it is very difficult to make a single static model relevant to a wide variety of unrestricted texts. Recent WSD systems have been developed using word-based models for specific limited domains, disambiguating senses that appear in usually easy contexts with many typical salient words (Leacock, Towell, and Voorhees 1996). For unrestricted text, however, the context tends to be very diverse and difficult to capture with a lexicalized model; therefore a corpus-trained system is unlikely to port to new domains and run off the shelf. Generality and adaptiveness are therefore key to a robust and portable WSD system. A concept-based model for WSD requires fewer parameters and has an element of generality built in (Liddy and Paik 1993). Conceptual classes make it possible to generalize from word-specific contexts in order to disambiguate a word sense appearing in a context that is particularly unfamiliar in terms of word recurrences.
An adaptive system armed with an initial lexical and conceptual knowledge base extracted from machine-readable dictionaries (MRDs) has two strong advantages over static lexicalized models trained using a corpus. First, the initial knowledge is rich and unbiased, so that a substantial portion of the text can be disambiguated precisely. Second, based on the result of initial disambiguation, an adaptation step is taken to make the knowledge base more relevant to the task at hand, leading to broader and more precise WSD.

[Figure 1: General framework for WSD using MRD. The figure shows an initialized knowledge base derived from an MRD and a machine-readable thesaurus (e.g., bank-GEO with lexical context {river, lake, land, ...} and conceptual context {GEO, MOTION, ...}; bank-MONEY with {money, account, bill, ...} and {MONEY, COMMERCE, ...}); a partially tagged text (e.g., "a deer/ANIMAL near the river bank/GEO", with "A bank vole" left untagged); an adapted knowledge base enriched with contextual words such as deer and topics such as ANIMAL; and the final WSD result, in which "A bank/GEO vole/ANIMAL" is resolved.]

Figure 1 lays out the general framework for an adaptive conceptual WSD approach, under which this research is being carried out. The learning process described here begins with a step of knowledge acquisition from MRDs. With the acquired knowledge, the system reads the input text and starts the step of initial disambiguation. An adaptive step follows to combine the initial knowledge base with knowledge gleaned from the partially disambiguated text. Subsequently, the knowledge base is adjusted to suit the text at hand. The adjusted knowledge base is then applied to the text again to finalize the disambiguation result. For instance, Figure 1 shows the initial contextual representation (CR) extracted from the Longman Dictionary of Contemporary English (Proctor 1978, LDOCE) for the GEO-bank sense, containing both lexical and conceptual information: {land, river, lake, ...} ∪ {GEO, MOTION, ...}. The initial CR is informative enough to disambiguate a passage in the input text containing a deer near the river bank. The initial disambiguation step produces the sense tagging deer/ANIMAL and bank/GEOGRAPHY, but certain instances of bank are left untagged for lack of relevant WSD knowledge. For instance, the GEO-bank sense in the context of vole is unresolved, since there is no information linking an ANIMAL context to the GEOGRAPHY sense of bank. The adaptation step adds deer and ANIMAL to the contextual representation for GEO-bank. The enriched CR therefore contains information capable of disambiguating the instance of bank in the context of vole to produce the final disambiguation result.
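To make the data flow concrete, here is a minimal sketch of the knowledge base and the adaptation step just described. Representing a CR as a pair of word and topic sets is an assumption of this sketch, as are all identifier names; it is not the authors' code.

from collections import defaultdict

# CR(W, S): for each (word, sense), a set of lexical context words and
# a set of conceptual classes (thesaurus topics).
cr_words = defaultdict(set)
cr_topics = defaultdict(set)

# Initial knowledge base extracted from the MRD (cf. Figure 1).
cr_words[("bank", "GEO")] = {"land", "river", "lake"}
cr_topics[("bank", "GEO")] = {"GEO", "MOTION"}

def adapt(word, sense, context_words, context_topics):
    """Enrich CR(W, S) with knowledge gleaned from a confidently
    disambiguated instance in the partially tagged text."""
    cr_words[(word, sense)] |= set(context_words)
    cr_topics[(word, sense)] |= set(context_topics)

# "a deer/ANIMAL near the river bank/GEO" was resolved initially;
# adaptation adds deer and ANIMAL to the CR of GEO-bank, which later
# allows "A bank vole" to be resolved from its ANIMAL context.
adapt("bank", "GEO", ["deer", "river"], ["ANIMAL", "GEO"])
print(sorted(cr_topics[("bank", "GEO")]))  # ['ANIMAL', 'GEO', 'MOTION']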
2 Acquiring Conceptual Knowledge from MRD
In this section we apply the TopSense algorithm (Chen and Chang 1998) to acquire CRs for MRD senses. The current implementation of TopSense uses the topical information in the Longman Lexicon of Contemporary English (McArthur 1992, LLOCE) to represent WSD knowledge for LDOCE senses. In the following subsections we describe how that is done.

2.1 Contextual Representation from MRDs
A dictionary is a text whose subject matter is a language. The purpose of a dictionary is to provide definitions of word senses, and in the process it supplies knowledge not just about the language, but about the world (Wilks et al. 1990). A good-sized dictionary usually has a large vocabulary and good coverage of word senses useful for WSD. However, short MRD definitions and examples per se lack the level of abstraction needed to function effectively as a contextual representation of a word sense. On the other hand, a thesaurus organizes word senses into a fixed set of coarse semantic categories and thus could potentially be useful as the basis of a conceptual CR of a word sense. To get the best of the two worlds of dictionary and thesaurus, we propose to link each MRD sense to thesaurus categories to produce a conceptual representation of its context. Content words extracted directly from the definition sentence of a word sense can be put to use as the word-level contextual representation of that particular word sense. One way of producing such a conceptual CR is to link MRD senses to their relevant thesaurus senses and categories. These links furnish the MRD senses with the information necessary for building a conceptual CR. We describe one such approach, under which each MRD sense is linked to a relevant thesaurus sense according to its defining words. The linked thesaurus sense, unlike the isolated MRD sense, falls within a certain semantic category. Consequently, we can establish relations between defining words and semantic categories that eventually lead to a conceptual CR. With the word lists in a thesaurus category cast as a document representing a certain subject matter or topic, the task of constructing a conceptual representation of context for a certain MRD sense bears a striking resemblance to the document retrieval task in information retrieval (IR) research. Relatively well-established IR techniques of weighting terms and ranking documents are applied to build a list of the topics most relevant to the definition of each MRD sense. This list of ranked topics, for a particular word sense, forms a vectorized conceptual representation of context in the space of all possible topics.

2.2 Illustrative Example
One example is given in this subsection to illustrate how TopSense works.

Example 1. Conceptual representation of an LDOCE sense: crane.1.n.1, a machine for lifting and moving heavy objects by means of a very strong rope or wire fastened to a movable arm (JIB).

For the topics most relevant to this fine-grained sense, we get the ranked list Hd (EQUIPMENT), Ha (MATERIALS), Ma (MOVING). Furthermore, the definition and examples of a particular sense on the surface level are seldom sufficient information to represent the context of the sense. For instance, the words machine, lift, move, heavy, object, strong, rope, wire, fasten, movable, arm, jib in the definition of the sense crane.1.n.1 are hardly enough contextual information to resolve a crane.1.n.1 instance in the Brown corpus shown below:

Unsinkable slowed and stopped, hundreds of brilliant white flares swayed eerily down from the black, the air raid sirens ashore rose in a keening shriek, the anti-aircraft guns coughed and chattered - and above it all motors roared and the bombs came whispering and wailing and crashing down among the ships at anchor at Bari. They had come from airports in the Balkans, these hundred-odd Junkers 88's.
They had winged over the Adriatic, they had taken Bari by complete surprise and now they were battering her, attacking with deadly skill. They had ruined the radar warning system with their window, they had made themselves invisible above their flares. And they also had the lights of the city, the port wall lanterns, and a shore crane's spotlight to guide on. However, with a level of abstraction made possible by using a thesaurus, it is not difficult to build a conceptual CR of word sense, which is intuitively more effective for WSD. For instance, based on LLOCE topics, the conceptual CR (EQUIPMENT, MATERIALS, MOVING) derived from the definition of crane.l.n.1, is general enough to characterize many salient words appearing in the context of the crane.l.n.1 instance, including motor (EQUIPMENT), lantern (EQUIPMENT), and flare (EQUIPMENT, MATERIALS). 3 The Adaptive WSD Algorithm We sum up the above descriptions and outline the procedure for the algorithm in this section. In what follows an adaptive disambiguation algorithm based on class-based approach will be described. Next, we give an illustrative example to show how the proposed algorithm works for unrestricted text. 3.1 The algorithm The proposed algorithm starts with the step of initial disambiguation using the contextual representation CR(W, S) derived from the MRD for the sense S of the head entry W. A step of adaptation followed to produce a knowledge base from the partially disambiguated text. Finally, the undisambiguated part is disambiguated according to the newly acquired knowledge base. The following algorithm gives a formal and detailed description of adaptive WSD. Algorithm AdaptSense Step I: Preprocess the context and produce a list of lemmatized content words CON(W) in a polysemous word W's context. Step 2: For each sense S of W, compute the similarity between the context representation CR(W, S) and topical context CON(W). Sim (CR(W, S), CON(W)) E (w,., + w, ) where teM E w,,+ E w,' tGCR(W.S) " t~CON(W) M = CR(W, S') N CON(W), Wt, s = weight of a contextual word t with sense S in CR(W, S), 1 W t = weight oft in CON(W) = .[]~.l X, = distance from t to W in number of words. Step 3: For each word W, choose a relevant sense Sw if passes a preset threshold then construct triples T={(W, S, CON(W))}. Step4: Compute a new set of contextual representation CR(W,S) = { u [ ueCON(W) and (W, S, CON(W))e T } Step S: Infer remaining less relevant sense for W in CON 3.2 Illustrative Example Consider the following passage from the Brown corpus: ... Of cattle in a pasture without throwin' 'em together for the purpose was called a "pasture count". The counters rode through the pasture countin' each bunch of grazin' cattle, and drifted it back so that it didn't get mixed with the uncounted cattle ahead. This method of countin' was usually done at the request, and in the presence, of a representative of the bank that held the papers against the herd. The notes and mortgages were spoken of as "cattle paper". A "book count" was the sellin' of cattle by the books, commonly resorted to in the early days, sometimes much to the profit of the seller. This led to the famous sayin' in the Northwest of the "books won't freeze". This became a common byword durin' the ... In our experiment, we observed that hold and paper are related to both MONEY and ROAD sense in the initial knowledge base. 240 Thus, this instance of bank is left unresolved in the initial disambiguation step. 
3.2 Illustrative Example
Consider the following passage from the Brown corpus:

... Of cattle in a pasture without throwin' 'em together for the purpose was called a "pasture count". The counters rode through the pasture countin' each bunch of grazin' cattle, and drifted it back so that it didn't get mixed with the uncounted cattle ahead. This method of countin' was usually done at the request, and in the presence, of a representative of the bank that held the papers against the herd. The notes and mortgages were spoken of as "cattle paper". A "book count" was the sellin' of cattle by the books, commonly resorted to in the early days, sometimes much to the profit of the seller. This led to the famous sayin' in the Northwest of the "books won't freeze". This became a common byword durin' the ...

In our experiment, we observed that hold and paper are related to both the MONEY and ROAD senses in the initial knowledge base. Thus, this instance of bank is left unresolved in the initial disambiguation step. The adaptation step then discovers that both hold and paper co-occur with some MONEY-bank instances in the partially disambiguated text. Therefore, the system is able to correctly resolve this bank instance to the MONEY sense.

4 Experiments and Discussions
4.1 Experiment
In our experiment, we use as material text windows of 50 words to the left and 50 words to the right of thirteen polysemous words in the Brown corpus and in a sample of Wall Street Journal articles. All instances of these thirteen words are first disambiguated by two human judges. For the thirteen words under investigation, only nominal senses are considered. The experimental results show that the adaptive algorithm correctly disambiguated 71% and 77% of the test cases in the Brown corpus and the WSJ sample, respectively. Tables 1(a) and 1(b) provide further details. However, there is still room for improvement in the area of precision. Evidence has shown that by exploiting the constraint of so-called "one sense per discourse" (Gale, Church and Yarowsky 1992b) and the strategy of bootstrapping (Yarowsky 1995), it is possible to boost coverage while maintaining about the same level of precision.

4.2 Discussions
Although it is often difficult to compare studies on different text domains, genres, and experimental setups, the approach presented here seems to compare favorably with the experimental results reported in previous WSD research. Luk (1995) experiments with the same words we use except the word bank and reports that there are in total 616 instances of these words in the Brown corpus (slightly less than the 749 instances we have experimented on). The author reports that 60% of the instances are resolved correctly using the definition-based concept co-occurrence (DBCC) approach. Leacock et al. (1996) report a precision rate of 76% for disambiguating the word line in a sample of WSJ articles. One of the limiting factors of this approach is the quality of the sense definitions in the MRD. Short and vague definitions tend to lead to the inclusion of inappropriate topics in the contextual representation. Using an inferior CR, it is not possible to produce enough precise samples in the initial step for subsequent adaptation.

Table 1(a) Disambiguation results for thirteen ambiguous words in the Brown corpus.
Word | # of senses | # of instances | # correct without adaptation | # correct with adaptation
bank | 8 | 97 | 68 | 71
bass | 2 | 16 | 16 | 16
bow | 5 | 12 | 3 | 3
cone | 2 | 14 | 14 | 14
duty | 2 | 75 | 67 | 69
galley | 3 | 4 | 4 | 4
interest | 4 | 346 | 213 | 228
issue | 4 | 141 | 67 | 88
mole | 2 | 4 | 2 | 2
sentence | 2 | 32 | 30 | 30
slug | 5 | 8 | 4 | 6
star | 6 | 46 | 28 | 29
taste | 3 | 51 | 36 | 36
Total | | 846 | 552 | 596
Precision | | | 65.2% | 70.5%

Table 1(b) Disambiguation results for the thirteen ambiguous words in Wall Street Journal articles (eight of the thirteen words occurred in the WSJ sample).
# of instances | # correct without adaptation | # correct with adaptation
370 | 350 | 353
2 | 2 | 2
25 | 19 | 22
221 | 123 | 127
260 | 181 | 177
12 | 11 | 12
7 | 3 | 2
6 | 3 | 3
Total: 903 | 692 | 698
Precision | 76.6% | 77.3%

The experiments and evaluation show that adaptation is most effective when a high-frequency word with topically contrasting senses is involved. For low-frequency senses such as the EARTH, ROW, and ROAD senses of bank, the approach does not seem to be very effective. For instance, the following passage containing an instance of bank has the ROW sense, but our algorithm fails to disambiguate it:

... They slept - Mynheer with a marvelously high-pitched snoring, the damn seahorse ivory teeth watching him from a bedside table.
In the ballroom below, the dark had given way to moonlight coming in through the bank of french windows. It was a delayed moon, but now the sky had cleared of scudding black and the stars sugared the silver-gray sky. Martha Schuyler, old, slow, careful of foot, came down the great staircase, dressed in her best lace-drawn black silk, her jeweled shoe buckles held forward.

A non-topical sense like ROW-bank can appear in many situations and is thus very difficult to capture using a topical contextual representation; a local contextual representation might be more effective. Infrequent and non-topical senses are problematic due to data sparseness. However, that is not specific to the adaptive approach; all other approaches in the literature suffer the same predicament. Even with static knowledge acquired from a very large corpus, these senses were disambiguated at a considerably lower rate.

5 Related approaches
In this section, we review the recent WSD literature from the perspective of the types of contextual knowledge and the different representational schemes.

5.1 Topical vs. Local Representation of Context
5.1.1 Topical Context
With a topical representation of context, the context of a given sense is viewed as a bag of words without structure. Gale, Church and Yarowsky (1992a) experiment on acquiring topical context from a substantial bilingual training corpus and report good results.

5.1.2 Local Context
Local context includes structured information on word order, distance, and syntactic features. For instance, the local context of "a line from" does not suggest the same sense for the word line as "a line for" does. Brown et al. (1990) use the trigram model as a way of resolving sense ambiguity for lexical selection in statistical machine translation. This model makes the assumption that only the previous two words have any effect on the translation, and thus the word sense, of the next word. The model attacks the problem of lexical ambiguity and produces satisfactory results, under some strong assumptions. A major problem with the trigram model is that of long-distance dependency. Dagan and Itai (1994) indicate that two languages are more informative than one; an English corpus is very helpful in disambiguating polysemous words in Hebrew text. Local contexts in the form of lexical relations are identified in a very large corpus. Brown et al. (1991) describe a statistical algorithm for partitioning word senses into two groups. The authors use mutual information to find a contextual feature that most reliably indicates which of the senses of a French ambiguous word is used. The authors report a 20% improvement in the performance of a machine translation system when the words are first disambiguated this way.

5.2 Static vs. Adaptive Strategy
Of the recent WSD systems proposed in the literature, almost all have the property that the knowledge is fixed when the system completes the training phase. That means the acquired knowledge never expands during the course of disambiguation. Gale et al. (1992a) report that if one had obtained a set of training materials with errors of no more than twenty to thirty percent, one could iterate the training material selection just once or twice and have training sets with less than ten percent errors. The adaptive approach is somewhat similar to their idea of incremental learning and to the bootstrapping approach proposed by Yarowsky (1995). However, both of those approaches are still static models, which are changed only in the training phase.
6 Conclusions
We have described a new adaptive approach to word sense disambiguation. Under this learning strategy, a contextual representation for each word sense is first built from the sense definition in an MRD and represented as a weighted vector of concepts, where the concepts are represented as word lists in a thesaurus. The knowledge base is then applied to the text for WSD in an adaptive fashion to improve disambiguation precision. We have demonstrated that this approach has the potential to outperform established static approaches. This performance is achieved despite the fact that no lengthy training time or very large corpus is required. It is evident that the WSD algorithms proposed herein are simple, take up little time and space, and, most importantly, require no human intervention in any phase of WSD. Sense tagging of training material, knowledge acquisition from training data, and disambiguation are all done automatically.

Acknowledgements
This work is partially supported by ROC NSC grants 84-2213-E-007-023 and NSC 85-2213-E-007-042. We are grateful to Betty Teng and Nora Liu from Longman Asia Limited for permission to use their lexicographical resources for research purposes. Finally, we would like to thank the anonymous reviewers for many constructive and insightful suggestions.

References
Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer (1991). Word-sense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pp. 264-270.
Chen, J. N. and J. S. Chang (1998). Topical clustering of MRD senses based on information retrieval techniques. Special Issue on Word Sense Disambiguation, Computational Linguistics, 24(1), pp. 61-95.
Dagan, I. and A. Itai (1994). Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4), pp. 563-596.
Gale, W. A., K. W. Church, and D. Yarowsky (1992a). Using bilingual materials to develop word sense disambiguation methods. In Proceedings of the 4th International Conference on Theoretical and Methodological Issues in Machine Translation, pp. 101-112.
Gale, W. A., K. W. Church, and D. Yarowsky (1992b). One sense per discourse. In Proceedings of the Speech and Natural Language Workshop, pp. 233-237.
Leacock, C., G. Towell, and E. M. Voorhees (1996). Towards building contextual representations of word senses using statistical models. In B. Boguraev and J. Pustejovsky, editors, Corpus Processing for Lexical Acquisition. MIT Press, Cambridge, MA.
Liddy, E. D. and W. Paik (1993). Document filtering using semantic information from a machine readable dictionary. In Proceedings of the Workshop on Very Large Corpora, pp. 20-29.
Luk, A. K. (1995). Statistical sense disambiguation with relatively small corpora using dictionary definitions. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pp. 181-188.
McArthur, T. (1992). Longman Lexicon of Contemporary English. Longman Group (Far East) Ltd., Hong Kong.
Proctor, P. (ed.) (1978). Longman Dictionary of Contemporary English. Harlow: Longman Group.
Wilks, Y. A., D. C. Fass, C. M. Guo, J. E. McDonald, T. Plate, and B. M. Slator (1990). Providing tractable dictionary tools. Machine Translation, 5, pp. 99-154.
Yarowsky, D. (1995). Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pp. 189-196.
1998
37
PAT-Trees with the Deletion Function as the Learning Device for Linguistic Patterns
Keh-Jiann Chen, Wen Tsuei, and Lee-Feng Chien
CKIP, Institute of Information Science, Academia Sinica, Nankang, Taipei 115, Taiwan

Abstract
In this study, a learning device based on the PAT-tree data structure was developed. The original PAT-trees were enhanced with a deletion function to emulate human learning competence. The learning process works as follows. The linguistic patterns from the text corpus are inserted into the PAT-tree one by one. Since the memory is limited, the important and new patterns will, hopefully, be retained in the PAT-tree while the old and unimportant patterns are released from the tree automatically. The proposed PAT-trees with the deletion function have the following advantages. 1) They are easy to construct and maintain. 2) Any prefix sub-string and its frequency count can be searched for very quickly through the PAT-tree. 3) The space requirement for a PAT-tree is linear in the size of the input text. 4) The insertion of a new element can be carried out at any time without being blocked by the memory constraints, because free space is released through the deletion of unimportant elements. Experiments on learning high-frequency bi-grams were carried out under different memory size constraints, and high recall rates were achieved. The results show that the proposed PAT-trees can be used as on-line learning devices.

1. Introduction
Human beings remember useful and important information and gradually forget old and unimportant information in order to accommodate new information. Under the constraint of memory capacity, it is important to have a learning mechanism that utilizes memory to store and retrieve information efficiently and flexibly, without loss of important information. We do not know exactly how human memory functions, but the issue of creating computers with similar competence is one of the most important problems being studied. We are especially interested in computer learning of linguistic patterns without the problem of running out of memory. To implement such a learning device, a data structure equipped with the following functions is needed: a) accept and store the on-line input of character/word patterns; b) efficiently access and retrieve stored patterns; c) accept unlimited amounts of data and at the same time retain the most important as well as the most recent input patterns. To meet the above needs, the PAT-tree data structure was considered a possible candidate to start with. The original design of the PAT-tree can be traced back to 1968, when Morrison [Morrison, 68] proposed a data structure called the "Practical Algorithm to Retrieve Information Coded in Alphanumeric" (PATRICIA). It is a variation of the binary search tree with binary representation of keys. In 1987, Gonnet [Gonnet, 87] introduced semi-infinite strings and stored them in PATRICIA trees. A PATRICIA tree constructed over all the possible semi-infinite strings of a text is then called a PAT-tree. Many kinds of searching functions can be easily performed on a PAT-tree, such as prefix searching, range searching, longest repetition searching, and so on. A modification of the PAT-tree was made to fit the needs of Chinese processing in 1996 by Hung [Hung, 96], in which finite strings were used instead of semi-infinite strings. Since finite strings are not unique in a text, as semi-infinite strings are, frequency counts are stored in the tree nodes.
In addition to its searching functions, the frequencies of any prefix sub-strings can be accessed very easily in the modified PAT-tree. Hence, statistical evaluations between sub-strings, such as probabilities, conditional probabilities, and mutual information, can be computed. It is easy to insert new elements into PAT-trees, but memory constraints have made them unable to accept unlimited amounts of information, hence limiting their potential use as learning devices. In reality, only important or representative data should be retained; old and unimportant data can be replaced by new data. Thus, on top of the original PAT-tree, a deletion mechanism was implemented, which allows memory to be released for storing the most recent inputs when the original memory is exhausted. With this mechanism, the PAT-tree is enhanced and has the ability to accept unlimited amounts of information. Once evaluation functions for data importance are available, the PAT-tree has the potential to be an on-line learning device. We review the original PAT-tree and its properties in section 2. In section 3, we describe the PAT-tree with deletion in detail. In section 4, we give the results obtained after different deletion criteria were tested to see how the device performed on learning word bi-gram collocations under different sizes of memory. Some other possible applications and a brief conclusion are given in the last section.

2. The Original PAT-tree
In this section, we review the original version of the PAT-tree and provide enough background information for the following discussion.

2.1 Definition of PAT-tree
2.1.1 PATRICIA
Before defining the PAT-tree, we first show how PATRICIA works. PATRICIA is a special kind of trie [Fredkin 60]. In a trie, there are two different kinds of nodes: branch decision nodes and element nodes. Branch decision nodes are the search decision-makers, and the element nodes contain the real data. For strings, if branch decisions are made on each bit, a complete binary tree is formed whose depth equals the number of bits of the longest strings. For example, suppose there are 6 strings in the data set, each 4 bits long. Then the complete binary search tree is the one shown in Fig. 2.1.

[Fig. 2.1: The complete binary tree of the 6 data strings]

Apparently, it is very wasteful: many element nodes and branch nodes are null. If those nodes are removed, a tree called a "compressed digital search trie" [Flajolet 86], as shown in Fig. 2.2, is formed. It is more efficient, but an additional field to denote the comparing bit for the branching decision must be included in each decision node. In addition, the search result may not exactly match the input key, since only some of the bits are compared during the search process. Therefore, a match between the search result and the search key is required. Morrison [Morrison, 68] improved the trie structure further. Instead of classifying nodes into branch nodes and element nodes, Morrison combined the two kinds of nodes into a uniform representation, called an augmented branch node. The structure of an augmented branch node is the same as that of a decision node of the trie, except that an additional field for storing elements is included. Whenever an element is to be inserted, it is inserted "up" into a branch node instead of creating a new element node as a leaf node. For example, the compressed digital search trie shown in Fig. 2.2 has the equivalent PATRICIA shown in Fig. 2.3.
It is noticed that each element is stored in an upper node or in itself. How the data elements are inserted will be discussed in the next section. Another difference here is the additional root node. This is because in a binary tree, the number of leaf nodes is always greater than that of the internal nodes by one. Whether a leaf node has been reached is determined by the upward links.

[Fig. 2.2: Compressed digital search trie. Fig. 2.3: PATRICIA]

2.1.2 PAT-tree
Gonnet [Gonnet, 87] extended PATRICIA to handle semi-infinite strings. The data structure is called a PAT-tree. It is exactly like PATRICIA except that the storage for finite strings is replaced by the starting positions of the semi-infinite strings in the text. Suppose there is a text T with n basic units, T = u1u2...un. Consider the prefix sub-strings of T that start from certain positions and go on as far to the right as necessary, such as u1u2...un..., u2u3...un..., u3u4...un..., and so on. Since each of these strings has an end to the left but none to the right, they are so-called semi-infinite strings. Note here that whenever a semi-infinite string extends beyond the end of the text, null characters are appended; these null characters are different from any basic unit in the text. Then all the semi-infinite strings starting from different positions are different. Owing to the additional field for comparing bits in each decision node of PATRICIA, PATRICIA can handle branch decisions for the semi-infinite strings (since, after all, only a finite number of sensible decisions is needed to separate all the elements of semi-infinite strings in each input set). A PAT-tree is constructed by storing all the starting positions of semi-infinite strings in a text using PATRICIA. There are many useful functions which can easily be implemented on PAT-trees, such as prefix searching, range searching, longest repetition searching, and so on.

Insert(to-end substring Sub, PAT tree rooted at R) {
  // Search Sub in the PAT tree
  p ← R; n ← Left(p);
  while ( CompareBit(n) > CompareBit(p) ) {
    p ← n;
    if the bit of Sub at position CompareBit(p) is 0
      n ← Left(p);
    else
      n ← Right(p);
  }
  if ( Data(n) = Sub ) {
    // Sub is already in the PAT tree; just increase
    // the count. No need to insert.
    Occurrence(n) ← Occurrence(n) + 1;
    return;
  }
  // Find the appropriate position to insert Sub into the
  // PAT tree (Sub will be inserted between p and n)
  b ← the first bit where Data(n) and Sub differ;
  p ← R; n ← Left(p);
  while ( (CompareBit(n) > CompareBit(p)) and (CompareBit(p) < b) ) {
    p ← n;
    if the bit of Sub at position CompareBit(p) is 0
      n ← Left(p);
    else
      n ← Right(p);
  }
  // Insert Sub into the PAT tree, between p and n
  // Initiate a new node
  NN ← new node;
  CompareBit(NN) ← b;
  Data(NN) ← Sub;
  Occurrence(NN) ← 1;
  // Insert the new node
  if the b-th bit of Sub is 0 {
    Left(NN) ← NN;
    Right(NN) ← n;
  } else {
    Left(NN) ← n;
    Right(NN) ← NN;
  }
  if n is the Left of p
    Left(p) ← NN;
  else
    Right(p) ← NN;
}
Algorithm 2.1 PAT-tree insertion

Hung [Hung, 96] took advantage of prefix searching in Chinese processing and revised the PAT-tree. All the different basic unit positions were exhaustively visited as in a PAT-tree, but the strings did not go right to the end of the text; they stopped only at the ends of the sentences. We call these finite strings "to-end sub-strings". In this way, the saved strings will not necessarily be unique.
Thus, the frequency counts of the strings must be added. A field denoting the frequency of a prefix was also added to the tree node. With these changes, the PAT-tree is more than a tool for searching prefixes; it also provides their frequencies. The data structure of a complete node of a PAT-tree is as follows.

Node: a record of
  Decision bit: an integer to denote the decision bit.
  Frequency: the frequency count of the prefix sub-string.
  Data element: a data string or a pointer to a semi-infinite string.
  Data count: the frequency count of the data string.
  Left: the left pointer, pointing downward to the left sub-tree or upward to a data node.
  Right: the right pointer, pointing downward to the right sub-tree or upward to a data node.
End of the record.

The construction process for a PAT-tree is nothing more than a consecutive insertion process for the input strings. The detailed insertion procedure is given in Algorithm 2.1 and the searching procedure in Algorithm 2.2.

SearchForFrequencyOf(Pattern) {
  p ← R /* the root of the PAT-tree */; n ← Left(p);
  while ( (CompareBit(n) > CompareBit(p)) and (CompareBit(n) ≤ total bits of Pattern) ) {
    p ← n;
    if the bit of Pattern at position CompareBit(p) is 0
      n ← Left(p);
    else
      n ← Right(p);
  }
  if ( Data(n) ≠ Pattern ) return 0;
  if ( CompareBit(n) > total bits of Pattern )
    return TerminalCounts(n);
  else
    return Occurrence(n);
}
Algorithm 2.2 Search for the frequency of a pattern in a PAT-tree

The advantages of PAT-trees are as follows: (1) they are easy to construct and maintain; (2) any prefix sub-string and its frequency count can be found very quickly using a PAT-tree; (3) the space requirement for a PAT-tree is linear in the size of the input text. A compact runnable rendering of this node structure is sketched below.
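The following Python sketch mirrors the node record, Algorithm 2.1, and exact-key lookup; it is a hedged illustration, not the authors' implementation. Keys are assumed to be fixed-length bit strings, and the per-node prefix Frequency bookkeeping is omitted for brevity.

class Node:
    def __init__(self, bit, data):
        self.bit = bit            # decision-bit position (1-based)
        self.data = data          # stored key (a to-end substring, as bits)
        self.count = 1            # occurrence count of the stored key
        self.left = self.right = self

def bit_of(key, i):               # i-th bit, 1-based; out-of-range reads '0'
    return key[i - 1] if i <= len(key) else '0'

def insert(root, key):
    p, n = root, root.left        # descend until an upward link is followed
    while n.bit > p.bit:
        p, n = n, (n.left if bit_of(key, n.bit) == '0' else n.right)
    if n.data == key:
        n.count += 1              # already present: just bump the count
        return
    b = 1
    while bit_of(key, b) == bit_of(n.data, b):
        b += 1                    # first bit where key and Data(n) differ
    p, n = root, root.left        # re-descend to the insertion point
    while n.bit > p.bit and n.bit < b:
        p, n = n, (n.left if bit_of(key, n.bit) == '0' else n.right)
    nn = Node(b, key)
    nn.left, nn.right = (nn, n) if bit_of(key, b) == '0' else (n, nn)
    if p.left is n:
        p.left = nn
    else:
        p.right = nn

def occurrence(root, key):
    p, n = root, root.left
    while n.bit > p.bit:
        p, n = n, (n.left if bit_of(key, n.bit) == '0' else n.right)
    return n.count if n.data == key else 0

# Build a tiny tree over 4-bit keys; the root is a header holding the first key.
keys = ["0010", "0011", "1000", "1011", "0010"]
root = Node(0, keys[0])
for k in keys[1:]:
    insert(root, k)
print(occurrence(root, "0010"))   # -> 2
print(occurrence(root, "0110"))   # -> 0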
3. PAT-tree with the deletion function
The block diagram of the PAT-tree with the deletion function is shown in Figure 3.1.

[Fig. 3.1: The block diagram of PAT-tree construction — PAT-tree construction or extension, evaluation, and deletion components.]

Implementing the deletion function requires two functions. One is the evaluation function, which evaluates the data elements to find the least important element. The second is the release of the least important element from the PAT-tree and the return of the freed node.

3.1 The Evaluation function
Due to the limited memory capacity of a PAT-tree, old and unimportant elements have to be identified and then deleted from the tree in order to accommodate new elements. Evaluation is based on the following two criteria: a) the oldness of the elements, and b) the importance of the elements. The evaluation of an element has to be balanced between these criteria. The oldness of an element is judged by how long the element has resided in the PAT-tree. It seems that a new field in each node of a PAT-tree would be needed to store the time when the element was inserted: when the n-th element is inserted, the time is n. A resident element becomes old as new elements are gradually inserted into the tree. However, old elements might become more and more important if they reoccur in the input text. The frequency count of an element is a simple criterion for measuring the importance of an element. Of course, different importance measures can be employed, such as the mutual information or conditional probability between a prefix and a suffix; nonetheless, the frequency count is a very simple and useful measurement. To simplify matters, a unified criterion is adopted, under which no additional storage is needed to register time. A time lapse is enforced before a node is revisited and evaluated, and hopefully the frequency counts of important elements will be increased during the time lapse. This is implemented by way of a circular-like array of tree nodes. A PAT-tree is constructed by inserting new elements; the insertion process takes a free node for each element from the array, in increasing order of the indexes, until the array is exhausted. The deletion process is then triggered. The evaluation process scans the elements according to the array index sequence, which is different from the tree order, finds the least important element among the first k elements, and deletes it. The freed node is used to store the newly arriving element, and the position after the deleted node becomes the starting index of the next k nodes for evaluation. In this way, it is guaranteed that the minimal time lapse before the same node is revisited is at least the size of the PAT-tree divided by k. In section 4, we describe experiments carried out on the learning of high-frequency word bi-grams. The above-mentioned time lapse and the frequency measurement of importance were used as the evaluation criteria to determine the learning performance under different memory constraints.

3.2 The Deletion function
Deleting a node from a PAT-tree is a bit complicated, since the proper structure of the PAT-tree has to be maintained after the deletion process: the pointers and the last decision node have to be modified. The deletion procedure is illustrated step by step by the example in Fig. 3.2. Suppose that the element in node x has to be deleted, i.e., node x has to be returned free. The last decision node y is then no longer necessary, since it holds the last decision bit that distinguishes DATA(x) from the strings in the left subtree of y. Therefore, DATA(x) and DECISION(y) can be removed, and the pointers have to be reset properly. In step 1, a) DATA(x) is replaced by DATA(y), b) the backward pointer in z pointing to y is replaced by x, and c) the pointer of the parent node of y which points to y is replaced by the left pointer of y. After step 1, the PAT-tree structure is properly reset; however, the node y has been deleted instead of x. This does not affect searching of the PAT-tree, but it would damage the evaluation algorithm, which has to keep the time lapse properly. Therefore, in step 2, the whole data record in x is copied to y, and the pointer of the parent node of x that points to x is reset to y. Of course, it is not necessary to divide the deletion process into the above two steps; this is just for the sake of clear illustration. In the actual implementation, the management of those pointers has to be handled carefully. Since there is no backward pointer to a parent decision node, the relevant nodes and their ancestor relations have to be accessed and retained after searching DATA(x) and DATA(y).

[Fig. 3.2: The deletion process — delete this term; copy the data.]

A policy-level sketch of the evaluate-and-delete cycle is given below.
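This small, self-contained sketch illustrates only the circular evaluation-and-deletion policy described above; a plain dict stands in for the PAT-tree proper, and all names are illustrative assumptions.

class BoundedStore:
    def __init__(self, capacity, k):
        self.capacity, self.k = capacity, k
        self.slots = []            # circular array of [pattern, count] cells
        self.index = {}            # pattern -> slot number
        self.cursor = 0            # where the next evaluation scan starts

    def insert(self, pattern):
        if pattern in self.index:                  # reoccurrence: bump count
            self.slots[self.index[pattern]][1] += 1
            return
        if len(self.slots) < self.capacity:        # a free node is available
            self.index[pattern] = len(self.slots)
            self.slots.append([pattern, 1])
            return
        # Memory exhausted: scan the next k slots from the cursor and
        # evict the one with the minimal frequency count.
        window = [(self.cursor + i) % self.capacity for i in range(self.k)]
        victim = min(window, key=lambda s: self.slots[s][1])
        del self.index[self.slots[victim][0]]
        self.slots[victim] = [pattern, 1]
        self.index[pattern] = victim
        self.cursor = (victim + 1) % self.capacity  # guarantees the time lapse

store = BoundedStore(capacity=10000, k=200)
# for bigram in bigram_stream: store.insert(bigram)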
4. Learning word collocations by PAT-trees
The following simple experiments were carried out in order to determine the learning performance of the PAT-tree under different memory constraints. We wanted to find out how well the high-frequency word bi-grams were retained when the total number of different word bi-grams was much greater than the size of the PAT-tree.

4.1 The testing environment
We used the Sinica corpus as our testing data. The Sinica corpus is a 3,500,000-word Chinese corpus in which the words are delimited by blanks and tagged with their parts-of-speech [Chen 96]. To simplify the experimental process, the word length was limited to 4 characters; words with more than four characters were truncated. A preprocessor, called the reader, read the word bi-grams from the corpus sequentially and did the truncation. The reader then fed the bi-grams to the construction process for the PAT-tree. There were 2,172,634 bi-grams, of which 1,180,399 were different. Since the number of nodes in the PAT-trees was much less than the number of input bi-grams, the deletion process was carried out and some bi-grams were removed from the PAT-tree. The recall rates of bi-grams of each frequency under the different memory constraints were examined to determine how well the PAT-tree performed at learning important information.

4.2 Experimental Results
Table 4.1 Recall rates (%) when deleting the minimum of the next 200 nodes, for PAT-tree sizes of 1/64 to 8/64 of the number of different bi-grams.
Frequency | 1/64 | 2/64 | 3/64 | 4/64 | 5/64 | 6/64 | 7/64 | 8/64
>250 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>75 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>60 | 99.9 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>50 | 99.7 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>40 | 98.5 | 99.6 | 100 | 100 | 100 | 100 | 100 | 100
>35 | 96.4 | 99.8 | 100 | 100 | 100 | 100 | 100 | 100
>30 | 94.6 | 99.6 | 100 | 100 | 100 | 100 | 100 | 100
>25 | 91.6 | 98.7 | 99.93 | 100 | 100 | 100 | 100 | 100
>20 | 85.46 | 97.02 | 99.63 | 99.95 | 100 | 100 | 100 | 100
>15 | 76.1 | 92.87 | 98.37 | 99.61 | 99.89 | 99.94 | 99.9 | 100
>10 | 62.3 | 83.2 | 93.19 | 96.95 | 98.5 | 99.3 | 99.9 | 99.9
>5 | 39.4 | 60.95 | 74.9 | 83.1 | 88.55 | 91.86 | 94.18 | 96.31
>3 | 23.52 | 43.56 | 57.01 | 66.4 | 73.9 | 78.78 | 83.06 | 86.65
>2 | 14.8 | 29.34 | 43.55 | 52.22 | 59.45 | 65.37 | 70.55 | 74.81
>1 | 6.51 | 12.97 | 19.44 | 25.6 | 31.85 | 38.04 | 44.62 | 48.7

Different time lapses and PAT-tree sizes were tested to see how they performed, by comparing the results with the ideal cases. The ideal cases were obtained using a procedure in which the input bi-grams were pre-sorted according to their frequency counts. The bi-grams were inserted in descending order of their frequencies, each bi-gram being inserted n times, where n is its frequency. Under such an ideal case, according to the deletion criterion, the PAT-tree retains as many high-frequency bi-grams as it can.

Table 4.2 Recall rates (%) when input bi-grams are inserted in descending order of their frequencies (the ideal case).
Frequency | 1/64 | 2/64 | 3/64 | 4/64 | 5/64 | 6/64 | 7/64 | 8/64
>250 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>75 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>60 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>50 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>40 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>35 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>30 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>25 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>20 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>15 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>10 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
>5 | 46.12 | 92.24 | 100 | 100 | 100 | 100 | 100 | 100
>3 | 24 | 48 | 72 | 96 | 100 | 100 | 100 | 100
>2 | 15 | 30 | 45 | 60 | 75 | 90 | 100 | 100
>1 | 6.55 | 13.1 | 19.65 | 26.2 | 32.74 | 39.29 | 46.2 | 52.3

The deletion process worked as follows. A fixed number of nodes were checked, starting from the last modified node, and the one with the minimal frequency was chosen for deletion. Since the pointer was moving forward along the index of the array, a time lapse was guaranteed before a node was revisited; hopefully, the high-frequency bi-grams would reoccur during that time lapse. Different forward steps, such as 100, 150, 200, 250, and 300, were tested, and the results show that deletion of the least important element within the next 200 nodes led to the best result. A sketch of the recall computation behind these tables is given below.
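Under the same assumptions as the earlier sketch, the per-threshold recall figures of Tables 4.1 and 4.2 can be computed as follows; the helper is hypothetical, and true_counts would come from a full pass over the corpus.

from collections import Counter

def recall_by_threshold(retained, true_counts, thresholds):
    """retained: set of patterns kept in the bounded store;
    true_counts: Counter of pattern -> total corpus frequency."""
    rows = {}
    for t in thresholds:
        target = [p for p, c in true_counts.items() if c > t]
        kept = sum(1 for p in target if p in retained)
        rows[t] = 100.0 * kept / len(target) if target else 100.0
    return rows

# e.g. recall_by_threshold(set(store.index), Counter(all_bigrams),
#                          [250, 100, 75, 60, 50, 40, 35, 30, 25, 20, 15, 10, 5, 3, 2, 1])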
However, the performance results of the different steps were not very different. Table 4.1 shows the testing results of step size 200 with different PAT-tree sizes, and Table 4.2 shows the results under the ideal cases. Comparing the results between Table 4.1 and Table 4.2, it can be seen that the recall rates of the important bi-grams under the normal learning process were satisfactory. Each row denotes the recall rates, under different sizes of PAT-tree, of the bi-grams whose frequency is greater than the given value. For instance, row 10 of Table 4.1 shows that the bi-grams with frequency greater than 20 were retained at rates of 85.46%, 97.02%, 99.63%, 99.95%, 100%, 100%, 100%, and 100% when the size of the PAT-tree was 1/64, 2/64, ..., 8/64 of the total number of different bi-grams, respectively.

5. Conclusion
The most appealing features of the PAT-tree with deletion are the efficient searching for patterns and its on-line learning property. It has the potential to be a good on-line training tool. Due to the fast-growing WWW, the supply of electronic texts is almost unlimited and provides on-line training data for natural language processing. The following are a few possible applications of PAT-trees with deletion.
a) Learning of high-frequency patterns from an unlimited amount of input patterns. The patterns might be character/word n-grams or collocations. Thus, new words can be extracted, and a language model of variable-length n-grams can be trained.
b) The most recently inserted patterns are retained in the PAT-tree for a while, as if it had a short-term memory. Therefore, it can adjust the language model on-line to adapt to the current input text.
c) Multiple PAT-trees can be applied to learn the characteristic patterns of texts from different domains or of different styles. These can be utilized as signatures for the automatic classification of texts.
With the deletion mechanism, the memory limitation is reduced to some extent. The performance of the learning process also relies on good evaluation criteria, and different applications require different evaluation criteria. Therefore, under the current PAT-tree system, the evaluation function is left open for user design. Suffix search can be done by constructing a PAT-tree over the reversed text. Wildcard search can be done by traversing sub-trees: when a wildcard is encountered, an indefinite number of decision bits should be skipped. To cope with the limitation of core memory, secondary memory might be required. In order to speed up memory access, a PAT-tree can be split into a PAT-forest: at any time, only the top-level sub-tree and a demanded lower-level PAT-tree reside in core memory, and the lower-level PAT-trees are swapped in on demand.

References
de la Briandais, R. 1959. File searching using variable length keys. AFIPS Western JCC, pp. 295-298, San Francisco, Calif.
Chen, Keh-Jiann, Chu-Ren Huang, Li-Ping Chang and Hui-Li Hsu. 1996. Sinica Corpus: Design Methodology for Balanced Corpora. 11th Pacific Asia Conference on Language, Information, and Computation (PACLIC 11), pp. 167-176.
Flajolet, P. and R. Sedgewick. 1986. Digital search trees revisited. SIAM J. Computing, 15:748-767.
Frakes, William B. and Ricardo Baeza-Yates. 1992. Information Retrieval, Data Structures and Algorithms. Prentice-Hall.
Fredkin, E. 1960. Trie memory. CACM, 3:490-499.
Gonnet, G. 1987. PAT 3.1: An Efficient Text Searching System, User's Manual. UW Centre for the New OED, University of Waterloo.
Hung, J. C. 1996.
Dynamic Language Modeling for Mandarin Speech Retrieval for Home Page Information. Master's thesis, National Taiwan University.
Morrison, D. 1968. PATRICIA - Practical Algorithm to Retrieve Information Coded in Alphanumeric. JACM, 15:514-534.
1998
38
Hybrid Approaches to Improvement of Translation Quality in Web-based English-Korean Machine Translation
Sung-Kwon Choi, Han-Min Jung, Chul-Min Sim, Taewan Kim, Dong-In Park
MT Lab., SERI, 1 Eoun-dong, Yuseong-gu, Taejon, 305-333, Korea
{skchoi, jhm, cmsim, twkim, dipark}@seri.re.kr
Jun-Sik Park, Key-Sun Choi
Dept. of Computer Science, KAIST, 373-1 Kusong-dong, Yuseong-gu, Taejon, 305-701, Korea
[email protected] [email protected]

Abstract
The previous English-Korean MT system, a transfer-based MT system applied only to written text, enumerated the following brief list of problems that had not seemed easy to solve in the near future: 1) processing of non-continuous idiomatic expressions; 2) reduction of too many ambiguities in English syntactic analysis; 3) robust processing of failed or ill-formed sentences; 4) selection of the correct word correspondence among several alternatives; 5) generation of Korean sentence style. These problems can be considered factors that influence the translation quality of a machine translation system. This paper describes symbolic and statistical hybrid approaches to solving the problems of the previous English-to-Korean machine translation system in terms of the improvement of translation quality. The solutions are now successfully applied in the web-based English-Korean machine translation system "FromTo/EK", which has been under development since 1997.

Introduction
The transfer-based English-to-Korean machine translation system "MATES/EK", developed from 1988 to 1992 at KAIST (Korea Advanced Institute of Science and Technology) and SERI (Systems Engineering Research Institute), enumerated the following list of problems that did not seem easy to solve in the near future in terms of the evolution of the system (Choi et al., 1994):
• processing of non-continuous idiomatic expressions
• generation of Korean sentence style
• reduction or ranking of too many ambiguities in English syntactic analysis
• robust processing of failed or ill-formed sentences
• selecting the correct word correspondence among several alternatives
These problems lower translation quality as measured by criteria such as fidelity, intelligibility, and style (Hutchins and Somers, 1992). They are problems that MATES/EK, as well as other MT systems, has faced. This paper describes the symbolic and statistical hybrid approaches used to solve these problems and to improve the translation quality of web-based English-to-Korean machine translation.

1 System Overview
The English-to-Korean machine translation system "FromTo/EK" has been under development since 1997, solving the problems of its predecessor MATES/EK and expanding its coverage to the WWW.

[Figure 1: The System Configuration of FromTo/EK — user interface, translation engine, and knowledge and dictionary components.]

FromTo/EK has basically the same formalism as MATES/EK: it does English sentence analysis, transforms the result (parse
The black boxes in the Figure 1 mean the modules that have existed in MATES/EK, while the white ones are the new modules that have been developed to improve the translation quality. Next chapters describe the new modules in detail. 2 Domain Recognizer and Korean sentence style In order to identify the domain of text and connect it to English terminology lexicon and Korean sentence style in Korean generation, we have developed a domain recognizer. We adapted a semi-automated decision tree induction using C4.5 (Quinlan, 1993) among diverse approaches to text categorization such as decision tree induction (Lewis et. al., 1994) and neural networks (Ng et. aL, 1997), because a semi-automated approach showed perhaps the best performance in domain identification according to (Ng et. al., 1997). Twenty-five domains were manually chosen from the categories of awarded Web sites. We collected 0.4 million Web pages by using Web search robot and counted the frequency of words to extract features for domain recognition. The words that appeared more than 200 times were used as features. Besides we added some manually chosen words to features because the features extracted automatically were not able to show the high accuracy. Given an input text, our domain recognizer assigns one or more domains to an input text. The domains can raise the translation quality by activating the corresponding domain-specific terminology and selecting the correct Korean sentence style. For example, given a "driver", it may be screw driver, taxi driver or device driver program. After domain recognizer determines each domain of input text, "driver" can be translated into its appropriate Korean equivalent. The domain selected by the domain recognizer is able to have a contribution to generate a better Korean sentence style because Korean sentence style can be represented in various ways by the verbal endings relevant to the domain. For example, the formal domains such as technology 252 and law etc. make use of the plain verbal ending like 'ta' because they have carateristics of formality, while the informal domains such as weather, food and fashion etc. are related to the polite verbal ending 'supnita' because they have carateristics of politeness. 3 Compound Unit Recognition parsing mechanism. Partial parser operates on cyclic trie and simple CFG rules for the fast syntactic constraint check. The experimental result showed our syntactic verification increased the precision of CU recognition to 99.69%. 4 Competitive Learning Grammar One of the problems of rule-based translation has been the idiomatic expression which has been dealt mainly with syntactic grammar rules (Katoh and Aizawa, 1995) "Mary keeps up with her brilliant classmates." and "I prevent him from going there." are simple examples of uninterupted and interupted idiomatic expressions expectively. In order to solve idiomatic expressions as well as collocations and frozen compound nouns, we have developed the compound unit(CU) recognizer (Jung et. al., 1997). It is a plug-in model locating between morphological and syntactic analyzer. Figure 2 shows the structure of CU recognizer. English ------~. Morphological Analyzer ~ , ~ S)'atac " ' . . . . CFG Grammar ,~ Figure 2 : System structure of CU recognizer The recognizer searches all possible CUs in the input sentence using co-occurrence constraint string/POS and syntactic constraint and makes the CU index. Syntactic verifier checks the syntactic verification of variable constituents in CU. 
4 Competitive Learning Grammar

For ranking the many parse trees that arise from ambiguities in English syntactic analysis, we use a mechanism that inserts competitive probabilities into the grammar rules. To decide the correct parse tree ranking, we compare two partial parse trees in a competitive relation on the same node level and, based on the intuition of linguists, add α (currently 0.01) to the better one and subtract α from the worse one. This raises the better parse tree higher than the worse one in the ranking list of parse trees.

5 Robust Translation

In order to deal with long sentences and with parsing-failed or ill-formed sentences, we activate robust translation. It consists of two steps: first long sentence segmentation, and then fail softening.

5.1 Long Sentence Segmentation

Grammar rules are generally weak at covering long sentences. If no grammar rules can process a long sentence, no parse tree for the whole sentence can be produced. Long sentence segmentation produces simple fragments from long sentences before parsing fails. We use the POS sequence of the input sentence as the clue for segmentation. If the length of the input exceeds a pre-defined threshold, currently 21 words for segmentation level I and 25 for level II, the sentence is divided into two or more parts; each POS trigram is applied separately at level I or II. After segmentation, each part of the input sentence is analyzed and translated. The following example shows an extremely long sentence (45 words) and its segmentation result.

[Input sentence] "Were we to assemble a Valkyrie to challenge IBM, we could play Deep Blue in as many games as IBM wanted us to in a single match, in fact, we could even play multiple games at the same time. Now - - wouldn't that be interesting?"

[Long sentence segmentation]
"Were we to assemble a Valkyrie to challenge IBM, / (noun PUNCT pron)
we could play Deep Blue in as many games as IBM wanted us to in a single match, / (noun PUNCT adv)
in fact, / (noun PUNCT pron)
we could even play multiple games at the same time, / (adv PUNCT adv)
Now - - / (PUNCT PUNCT aux)
wouldn't that be interesting?"

5.2 Fail Softening

For robust translation we have a module, 'fail softening', that processes the failed parse trees in case of parsing failure. Fail softening finds a set of edges that covers the whole input sentence and builds a parse tree from it using a virtual sentence tag. We use left-to-right and right-to-left scanning with a "longer-edge-first" policy. If one scanning direction yields no covering set of edges for the input sentence, the other direction is preferred. If both produce a set of edges, a "smaller-set-first" policy is applied to select the preferred set; that is, the set with fewer edges wins (e.g., if n(LR)=6 and n(RL)=5, the right-to-left set is selected as the first-ranked parse tree, where n(LR) is the number of left-to-right scanned edges and n(RL) is the number of right-to-left scanned edges). We use a virtual sentence tag to connect the selected set of edges. One piece of future work is a mechanism that weights each edge by syntactic preference.
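The edge-set choice in fail softening can be made concrete with a small sketch. The following Python fragment assumes that edges are represented as (start, end) spans already extracted from the failed chart; it implements only the covering check and the "smaller-set-first" policy described above, while the edge extraction itself and the "longer-edge-first" scanning are not shown.

```python
# Sketch of the fail-softening edge-set choice: prefer a scanning
# direction that covers the sentence, and when both directions
# produce a covering set of edges, prefer the smaller set.
# Edge extraction from the failed chart is stubbed out here.

def covers(edges, n):
    """True if the edges, taken in order, cover positions 0..n."""
    pos = 0
    for start, end in edges:
        if start != pos:
            return False
        pos = end
    return pos == n

def select_edge_set(lr_edges, rl_edges, n):
    """Apply the 'smaller-set-first' policy between left-to-right
    and right-to-left edge sets; fall back to whichever covers."""
    lr_ok, rl_ok = covers(lr_edges, n), covers(rl_edges, n)
    if lr_ok and rl_ok:
        return lr_edges if len(lr_edges) <= len(rl_edges) else rl_edges
    if lr_ok:
        return lr_edges
    if rl_ok:
        return rl_edges
    return None  # no covering set in either direction

# Example: n(LR)=6 and n(RL)=5, so the right-to-left set wins.
lr = [(0, 2), (2, 3), (3, 5), (5, 7), (7, 8), (8, 10)]
rl = [(0, 3), (3, 5), (5, 7), (7, 8), (8, 10)]
print(select_edge_set(lr, rl, 10))  # the 5-edge set
```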
6 Large Collocation Dictionary

We select the correct word equivalent in the transfer phase by using a lexical semantic marker as an information constraint together with a large collocation dictionary. The lexical semantic marker is applied to terminal nodes for the relational representation, while the collocation information is applied to non-terminal nodes. The large collocation dictionary has been collected from two resources: the EDR dictionary and Web documents.

7 Test and Evaluation

The semi-automated decision tree of our domain recognizer uses as features twenty to sixty keywords, representative words extracted from the twenty-five domains. To raise the accuracy of the domain identifier, manually chosen words were also added as features. For learning, a thousand sentences from each of the twenty-five domains were used as training sets. We tested 250 sentences, ten sentences extracted from each of the twenty-five domains; these test sentences were not part of the training sets. The domain identifier outputs the two top domains as its result. The first top domain is accurate for 113 sentences (45%); when the second top domain is also considered, the accuracy rises to 75%.

In FromTo/EK, the analysis dictionary consists of about 70,000 English words, 15,000 English compound units, 80,000 English-Korean bilingual words, and 50,000 bilingual collocations. The domain dictionary has 5,000 words for computer science that were extracted from IEEE reports.

In order to make the evaluation as objective as possible, we compared FromTo/EK with MATES/EK on 1,708 sentences from the September 1991 issue of IEEE Computer magazine, on which MATES/EK had been tested in 1994 and whose sentences are less than 26 words long. Table 1 shows the evaluation criteria.

Table 1: The evaluation criteria

  Degree       Meaning
  4 (Perfect)  The meaning of the sentence is perfectly clear.
  3 (Good)     The meaning of the sentence is almost clear.
  2 (OK)       The meaning of the sentence can be understood after several readings.
  1 (Poor)     The meaning of the sentence can be guessed only after a lot of readings.
  0 (Fail)     The meaning of the sentence cannot be guessed at all.

Using these criteria, three randomly selected master's students compared and evaluated the translation results of the 1,708 sentences from MATES/EK and from FromTo/EK. We considered degrees 4, 3, and 2 in Table 1 to be successful translation results. Figure 3 shows the evaluation result.

[Figure 3: The evaluation of 1,708 sentences, plotting translation quality of FromTo/EK and MATES/EK against sentence length]

Figure 3 shows the translation quality of both FromTo/EK and MATES/EK according to sentence length. More than 84% of the sentences translated by FromTo/EK can be understood by a human being.

8 Conclusion

In this paper we described hybrid approaches to the resolution of various problems that MATES/EK, the predecessor of FromTo/EK, had to overcome. The approaches improve the translation quality of web-based documents. FromTo/EK is still growing, aiming at better Web-based machine translation and scaling up its dictionaries and grammatical coverage for better translation quality.

References

Choi K.S., Lee S.M., Kim H.G., and Kim D.B. (1994) An English-to-Korean Machine Translator: MATES/EK. COLING94, pp. 129-133.

Hutchins W.J. and Somers H.L. (1992) An Introduction to Machine Translation. Academic Press.

Jung H.M., Yuh S.H., Kim T.W., and Park D.I. (1997) Compound Unit Recognition for Efficient English-Korean Translation. Proceedings of ACH-ALLC.
Katoh N. and Aizawa T. (1995) Machine Translation of Sentences with Fixed Expression. Proceedings of the 4th Applied Natural Language Processing.

Lewis D.D. and Ringuette M. (1994) A comparison of two learning algorithms for text categorization. Symposium on Document Analysis and Information Retrieval, pp. 81-93.

Ng H., Goh W., and Low K. (1997) Feature Selection, Perceptron Learning, and a Usability Case Study for Text Categorization. Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.

Quinlan J. (1993) C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers.
A Simple Hybrid Aligner for Generating Lexical Correspondences in Parallel Texts

Lars AHRENBERG, Mikael ANDERSSON & Magnus MERKEL
NLPLAB, Department of Computer and Information Science
Linköping University, S-581 83 Linköping, Sweden
[email protected], [email protected], [email protected]

Abstract

We present an algorithm for bilingual word alignment that extends previous work by treating multi-word candidates on a par with single words, and by combining some simple assumptions about the translation process to capture alignments for low-frequency words. Like most other alignment algorithms it uses co-occurrence statistics as a basis, but it differs in the assumptions it makes about the translation process. The algorithm has been implemented in a modular system that allows the user to experiment with different combinations and variants of these assumptions. We give performance results from two evaluations, which compare well with results reported in the literature.

Introduction

In recent years much progress has been made in the area of bilingual alignment for the support of tasks such as machine translation, machine-aided translation, bilingual lexicography and terminology. For instance, Melamed (1997a) reports that his word-to-word model for translational equivalence produced lexicon entries with 99% precision and 46% recall when trained on 13 million words of the Hansard corpus, where recall was measured as the fraction of words from the bitext that were assigned some translation. Using the same model but less data, a French/English software manual of 400,000 words, Resnik and Melamed (1997) reported 94% precision with 30% recall.

While these figures are indeed impressive, more telling figures can only be obtained by measuring the effect of the alignment system on some specific task. Dagan and Church (1994) report that their Termight system helped double the speed at which terminology lists could be compiled at the AT&T Business Translation Services. It is also clear that the usability of bilingual concordances would be greatly improved if the system could indicate both items of a translation pair and if phrases could be looked up with the same ease and precision as single words (Macklovitch and Hannan 1996).

For the language pairs that are of particular interest to us, English vs. other Germanic languages, the ability to handle multi-word units adequately is crucial (cf. Jones and Alexa 1997). In English a large number of technical terms are multi-word compounds, while the corresponding terms in other Germanic languages are often single-word compounds. We illustrate with a few examples from an English/Swedish computer manual:

Table 1: Equivalent compounds in an English/Swedish bitext

  English           Swedish
  file manager      filhanterare
  network server    nätverksserver
  operating system  operativsystem
  setup directory   installationskatalog

Also, many common adverbials and prepositions are multi-word units, which may or may not be translated as such.

Table 2: Equivalent adverbials and prepositions

  English      Swedish
  after all    när allt kommer omkring
  in spite of  trots
  in general   i allmänhet

1. The Problem

The problem we consider is how to find word and phrase alignments for a bitext that is already aligned at the sentence level. Results should be delivered in a form that could easily be checked and corrected by a human user. Although we primarily use the system for bitexts with an English and a Scandinavian half, the system should preferably be useful for many different language pairs.
Thus we don't rely on the existence of POS-taggers or lemmatizers for the languages involved, but wish to provide mechanisms that a user can easily adapt to new languages.

The organisation of the paper is as follows: In section 2 we relate this approach to previous work, in section 3 we motivate and spell out our assumptions about the behaviour of lexical units in translation, in section 4 we present the basic features of the algorithm, and in section 5 we present results from an evaluation and try to compare these to the results of others.

2. Previous work

Most algorithms for bilingual word alignment to date have been based on the probabilistic translation models first proposed by Brown et al. (1988, 1990), especially Model 1 and Model 2. These models explicitly exclude multi-word units from consideration (Models 3-5 include multi-word units in one direction). Melamed (1997b), however, proposes a method for the recognition of multi-word compounds in bitexts that is based on the predictive value of a translation model. A trial translation model that treats certain multi-word sequences as units is compared with a base translation model that treats the same sequences as multiple single-word units. A drawback with Melamed's method is that compounds are defined relative to a given translation and not with respect to language-internal criteria. Thus, if the method is used to construct a bilingual concordance, there is a risk that compounds and idioms that translate compositionally will not be found. Moreover, it is computationally expensive and, since it constructs compounds incrementally, adding one word at a time, it requires many iterations and much processing to find linguistic units of the proper size.

Kitamura and Matsumoto (1996) present results from aligning multi-word and single-word expressions with a recall of 80 per cent if partially correct translations are included. Their method is iterative and is based on the use of the Dice coefficient. Smadja et al. (1996) also use the Dice coefficient as their basis for aligning collocations between English and French. Their evaluation shows results of 73 per cent accuracy (precision) on average.

3. Underlying assumptions

As Fung and Church (1994) we wish to estimate the bilingual lexicon directly. Unlike Fung and Church, our texts are already aligned at sentence level and the lexicon is viewed not merely as word associations, but as associations between lexical units of the two languages.

We assume that texts have structure at many different levels. At the most concrete level a text is simply a sequence of characters. At the next level a text is a sequence of word tokens, where word tokens are defined as sequences of alphanumeric character strings that are separated from one another by a finite set of delimiters such as spaces and punctuation marks. While many characters can be used either as word delimiters or as non-delimiters, we prefer to uphold a consistent difference between delimiters and non-delimiters, for the ease of implementation that it allows. At the same time, however, the tokenizer recognizes common abbreviations with internal punctuation marks and regularizes clitics to words (e.g. "can't" is regularized to "can not").

At the next level up a text can be viewed as a partially ordered bag of lexical units. It is a bag because the same unit can occur several times in a single sentence. It is partially ordered because a lexical unit may extend across other lexical units, as in "He turned the offer down." and "Tabs were kept on him."
We say that words express lexical units, and that units are expressed by words. A unit may be expressed by a multi-word sequence, while a given word can express at most one lexical unit. (This latter assumption is actually too strict for Germanic languages, where morphological compounding is a productive process, but we make it nevertheless, as we have no means to identify compounds reliably. Moreover, the borderline between a lexicalized compound and a compositional compound is hard to draw consistently anyway.)

It is often hard to tell the difference between a lexical unit and a lexical complex. We assume that recurrent collocations that pass certain structural and contextual tests are candidate expressions for lexical units. If such collocations are found to correspond to something in the other half of the bitext on the basis of co-occurrence measures, they are regarded as expressions of lexical units. This will include compound names such as 'New York', 'Henry Kissinger' and 'World War II' and compound terms such as 'network server directory'. Thus, as with the compositional compounds just discussed, we prefer high recall to high precision in identifying multi-word units.

The expressions of a lexical unit form an equivalence class. An equivalence class for a single-word unit includes its morphological variants. An equivalence class for a multi-word unit should include syntactic variants as well. For instance, the lexical unit 'turn down' should include 'turned down' and 'turning down' as well as expressions where the particle is separated from the verb by some appropriate phrase, as in the example above. The current system, though, does not provide for syntactic variants.

Our aim is to establish relations not only between corresponding words and word sequences in the bitext, but also between corresponding lexical units. A problem is then that the algorithm cannot recognize lexical units directly, but only their expressions. It helps to include lexical units in the underlying model, however, as they have explanatory value. Moreover, the algorithm can be made to deliver its output in the form of correspondences between equivalence classes of expressions belonging to the same lexical unit.

For the purpose of generating the alignment and the dictionary we divide the lexical units into three classes: 1. irrelevant units, 2. closed class units, 3. open class units. The same categories apply to expressions.

Irrelevant units are simply those that we don't want to include. They have to be listed explicitly. The reason for not including some items may vary with the purpose of alignment. Even if we wish the alignment to be as complete as possible, it might be useful to exclude certain units that we suspect may confuse the algorithm. For instance, the do-support found in English usually has no counterpart in other languages. Thus, the different forms of 'do' may be excluded from consideration from the start.

As for the translation relation we make the following assumptions:

1. A lexical unit in one half of the bitext corresponds to at most one lexical unit in the other half. This can be seen as a generalization of the one-to-one assumption for word-to-word translation used by Melamed (1997a) and is exploited for the same purpose, i.e. to exclude large numbers of candidate alignments when good initial alignments have been found.
2. Open class and closed class lexical units are usually translated, and there are a limited number of lexical units in the other language that are commonly used to translate them. While deliberately vague, this assumption is what motivates our search for frequent pairs <source expression, target expression> with high mutual information. It also motivates our choice of regarding additions and deletions of lexical units in translation as haphazard, apart from the case of a restricted set of irrelevant units that we assume can be known in advance.

3. Open class units can only be aligned with open class units, and closed class units can only be aligned with closed class units. This assumption seems generally correct and has the effect of reducing the number of candidate alignments significantly. Closed class units have to be listed explicitly. The assumption is that we know the two languages sufficiently well to be able to come up with an appropriate list of closed class units and expressions. Multi-word closed class units are listed separately. Closed class units can be further classified for the purposes of alignment (see below).

4. If some expression for the lexical unit Us is found corresponding to some expression for the lexical unit Ut, then assume that any expression of Us may correspond to any expression of Ut. This assumption is in accordance with the often made observation that morphological properties are not invariants in translation. It is used to make the algorithm more greedy by accepting infrequent alignments that are morphological variants of high-rating ones.

5. If one half of an aligned sentence pair is the expression of a single lexical unit, then assume that the other half is also. This is definitely a heuristic, but it has been shown to be very useful for technical texts involving English and Scandinavian, where terms are often found in lists or table cells (Tiedemann 1997). This heuristic is useful for finding alignments regardless of frequencies. Similarly, if there is only one non-aligned (relevant open class) word left in a partially aligned sentence, assume that it corresponds to the remaining (relevant open class) words of the corresponding sentence.

6. Position matters, i.e. while word order is not an invariant of translation it is not random either. We implement the contribution of position as a distribution of weights over the candidate pairs of expressions drawn from a given pair of sentences. Expressions that are close in relative position receive higher weights, while expressions that are far from each other receive lower weights.

4. The Approach

4.1 Input

A bitext aligned at the sentence level.

4.2 Output

There are two types of output data: (i) a table of link types in the form of a bilingual dictionary, where each entry has the form <s, t1, ..., tn>, s being the source expression type and t1, ..., tn the target expression types that were found to correspond to s; and (ii) a table of link instances <<s,t>,<i,j>> sorted by sentence pairs, where s is some expression from the source text, t is an expression from the translated text, and i and j are the (within-sentence) positions of the first word of s and t, respectively.

4.3 Preprocessing

Both halves of the bitext are regularized. When open class multi-word units are to be included, they are generated in a preprocessing stage for both the source and target texts and assembled in a table. For this purpose, we use the phrase extracting program described in Merkel et al. (1994).
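To make the two output formats of section 4.2 concrete, here is a minimal Python sketch of the link-type and link-instance tables; the sample entries are invented for illustration and do not come from the system's actual output.

```python
# Sketch of the two output tables from section 4.2: a dictionary of
# link types and a table of link instances per sentence pair. The
# sample entries below are invented.
from collections import defaultdict

# (i) link types: source expression -> set of target expressions
link_types = defaultdict(set)
link_types["file manager"].add("filhanterare")
link_types["in general"].add("i allmänhet")

# (ii) link instances: sentence pair id -> list of
# ((source expr, target expr), (source position, target position))
link_instances = defaultdict(list)
link_instances[17].append((("file manager", "filhanterare"), (4, 3)))

for s, targets in link_types.items():
    print(s, "->", sorted(targets))
print(link_instances[17])
```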
4.4 Basic operation

The basic algorithm combines the K-vec approach, described by Fung and Church (1994), with the greedy word-to-word algorithm of Melamed (1997a). In addition, open class expressions are handled separately from closed class expressions, and sentences consisting of a single expression are handled in the manner of Tiedemann (1997). The algorithm is iterative, repeating the same process of generating translation pairs from the bitext and then reducing the bitext by removing the pairs that have been found before the next iteration starts. The algorithm stops when no more pairs can be generated, or when a given number of iterations have been completed. In each iteration, the following operations are performed:

(i) For each open class expression in the source half of the bitext (with frequency higher than 3), the open class expressions in corresponding sentences of the other half are ranked according to their likelihood as translations of the given source expression. We estimate the probability that a candidate target expression is a translation by counting co-occurrences of the expressions within sentence pairs and overall occurrences in the bitext as a whole. Then the t-score, used by Fung and Church, is calculated, and the candidates are ranked on the basis of this value (a code sketch of this ranking follows the list):

    t = (prob(Vs,Vt) - prob(Vs) prob(Vt)) / sqrt((1/K) prob(Vs,Vt))

In our case K is the number of sentence pairs in the bitext. The target expression giving the highest t-score is selected as a translation provided the following two conditions are met: (a) this t-score is higher than a given threshold, and (b) the overall frequency of the pair is sufficiently high. (These are the same conditions that are used by Fung and Church.) This operation yields a list of translation pairs involving open class expressions.

(ii) The same as in (i) but this time with the closed class expressions. A difference from the previous stage is that only target candidates of the proper sub-category or sub-categories for the source expression are considered. Conjunctions and personal pronouns are, for example, specified for both the target and the source languages. This strategy helps to limit the search space when closed-class expressions are linked.

(iii) Open class expressions that constitute a sentence on their own (not counting irrelevant word tokens) generate translation pairs with the open class expressions of the corresponding sentence.

(iv) When all (relevant) source expressions have been tried in this manner, a number of translation pairs have been obtained; these are entered in the output table and then removed from the bitext. This will affect t-scores by reducing marginal frequencies and will also cause fewer candidate pairs to be considered in the sequel. The reduced bitext is input for the next iteration.
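A minimal sketch of the t-score ranking in step (i), assuming the co-occurrence and occurrence counts have already been collected from the sentence-aligned bitext; the counts and candidate expressions below are toy values.

```python
# Minimal sketch of the t-score ranking used in step (i), following
# the formula of Fung and Church (1994). Counts are toy values; in
# the real system they come from scanning the sentence-aligned bitext.
from math import sqrt

def t_score(pair_count, src_count, tgt_count, K):
    """t-score for a (source, target) expression pair, where counts
    are numbers of sentence pairs and K is the bitext size."""
    p_joint = pair_count / K
    p_src = src_count / K
    p_tgt = tgt_count / K
    if p_joint == 0:
        return 0.0
    return (p_joint - p_src * p_tgt) / sqrt(p_joint / K)

K = 10000                  # sentence pairs in the bitext
candidates = {             # target expr -> (co-occurrences, occurrences)
    "filhanterare": (42, 50),
    "fil": (30, 400),
}
src_count = 60             # occurrences of "file manager"
ranked = sorted(
    ((t_score(co, src_count, occ, K), tgt)
     for tgt, (co, occ) in candidates.items()),
    reverse=True)
for score, tgt in ranked:
    print(f"{tgt}: t = {score:.2f}")
```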
4.5 Variants

The basic algorithm is enhanced by a number of modules that can be combined freely by the user. These modules are:

• a morphological module that groups expressions that are identical modulo specified sets of suffixes;
• a weight module that affects the likelihood of a candidate translation according to its position in the sentence;
• a phrase module that includes multi-word expressions generated in the pre-processing stage as candidate expressions for alignment.

4.5.1 The morphological module

The morphological module collects open class translation pairs that are similar to the ones found by the basic algorithm. More precisely, if the pair (X, Y) has been generated as a translation pair in some iteration, other candidate pairs with X as the first element are searched. A pair (X, Z) is considered to be a translation pair iff there exist strings W, F and G such that Y = WF, Z = WG, and F and G have been defined as different suffixes of the same paradigm. The data needed for this module consists of simple suffix lists for regular paradigms of the languages involved. For example, [0, s, ed, ing] is a suffix list for regular English verbs. They have to be defined by the user in advance.

When the morphological module is used, it is possible to reverse the direction of the linking process at a certain stage. After each iteration of linking expressions from source to target, the different inflectional variants of the target word are used as input data and these candidates are then linked from target to source. This strategy makes it possible to link low-frequency source expressions belonging to the same suffix paradigm.

4.5.2 The weight module

The weight module distributes weights over the target expressions depending on their position relative to the given source expression. The weights must be provided by the user in the form of lists of numbers (greater than or equal to 0). The weight for a pair is calculated as the sum of the weights for the instances of that pair. This weight is then used to adjust the co-occurrence probabilities, by using the weight instead of the co-occurrence frequency as input to the t-score formula. The threshold is adjusted accordingly; in the current configuration of weights, the threshold is increased by 1. In the weight module it is also possible to specify the maximal distance between a source and a target expression, measured as their relative position in the sentences.

4.5.3 The phrase module

When the phrase module is invoked, multi-word expressions are also considered as potential elements of translation pairs. The multi-word expressions to be considered are generated in a special pre-processing phase and stored in a phrase table. T-scores for candidate translation pairs involving multi-word expressions are calculated in the same way as for single words. When weights are used, the weight of a multi-word expression is considered equal to that of its first word. It can happen that the t-scores for two pairs <s,t1> and <s,t2>, where t1 is a multi-word expression and t2 is a word that is part of t1, will be identical or almost identical. In this case we prefer the multi-word expression over the single-word candidate if it has a t-value over the threshold and is one of the top six target candidates. When a multi-word expression is found to be an element of a translation pair, the expressions that overlap with it, whether multi-word or single-word expressions, are removed from the current agenda and not considered until the next iteration.
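The suffix-paradigm test of section 4.5.1 can be sketched as follows. The paradigm list and the example strings are illustrative, and the real module of course operates over candidate pairs from the bitext rather than a fixed word list.

```python
# Sketch of the suffix-paradigm matching in section 4.5.1: given an
# accepted pair (X, Y), accept (X, Z) whenever Y and Z share a stem W
# and their endings F, G belong to the same user-defined paradigm.
# The paradigm list and examples are illustrative.

PARADIGMS = [["", "s", "ed", "ing"]]   # regular English verbs; "" is the 0 suffix

def same_paradigm(y, z):
    """True if y = W+F and z = W+G for suffixes F, G of one paradigm."""
    for paradigm in PARADIGMS:
        for f in paradigm:
            if not y.endswith(f):
                continue
            stem = y[: len(y) - len(f)] if f else y
            for g in paradigm:
                if g != f and z == stem + g:
                    return True
    return False

# If (X, Y) = ("länka", "link") was accepted by the basic algorithm,
# candidate pairs ("länka", Z) with Z a variant of "link" are accepted too.
for z in ["links", "linked", "linking", "linguistics"]:
    print(z, same_paradigm("link", z))
# links/linked/linking -> True; linguistics -> False
```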
5. Evaluation

The algorithm was tested on two different texts: one novel (66,693 source words) and one computer program manual (169,779 source words), both translated from English into Swedish. The tests were run on a Sun UltraSparc 1 workstation with 320 MB RAM and took 55 minutes for the novel and four and a half hours for the program manual.

The tests were run with three different configurations on each text: (i) the baseline (B) configuration, which is the t-score measure alone; (ii) all modules except the weight module (AM-W), with a link distance constraint set to 10; and (iii) all modules (AM), including morphology, weights and phrases. The t-score threshold used was 1.65 for B and AM-W, and 2.7 for AM; the minimum frequency of source expressions was set to 3. Closed-class expressions were linked in all configurations. In the baseline configuration no distinction was made between closed-class and open-class expressions. In the AM-W and AM tests the closed-class expressions were divided into different subcategories, and the linking direction was reversed at the end of each of the six iterations, which improves the chances of linking low-frequency source expressions. The characteristics of the source texts used are shown in Table 3.

Table 3: Characteristics of the two source texts

                                         Novel   Prog. Man.
  Size in running words                 66,693      169,779
  No of word types                       9,917        3,828
  Word types, frequency 3 or higher      2,870        2,274
  Word types, frequency 2 or 1           7,047        1,554
  Multi-word expression types
  (found in pre-processing)                243          981

The novel contains a high number of low-frequency words, whereas the program manual contains a higher proportion of words that the algorithm actually tested, as the frequency threshold was set to 3. The results from the tests are shown in Table 4. The evaluation was done on an extract from the automatically produced dictionary: all expressions starting with the letters N, O and P were evaluated for all three configurations of each text.

Table 4: Results from two bitexts, using the t-score (B), all modules except the weights (AM-W), and all modules (AM)

                                     Novel                 Program Manual
                               B    AM-W      AM        B    AM-W      AM
  Linked source expressions  1,575  2,467   2,895    1,631   2,748   2,878
  Linked multi-word expr.        0    177     187        0     683     734
  Link types in total        2,059  4,833   5,754    2,740   7,241   7,487
  Links in evaluated sample    234    573     709      318     953   1,005
  Correct links in sample      207    530     639      199     655     753
  Errors in sample              21     19      30       51     137     122
  Partial links in sample        6     24      40       68     161     130
  Precision                 88.46%  92.50%  90.13%  62.58%  68.73%  74.93%
  Precision (only errors)   91.03%  96.68%  95.77%  83.96%  85.62%  87.86%
  Token recall               50.9%   54.6%  56.70%   60.2%   67.1%   67.3%
  Type recall, freq 3+      54.88%  72.06%  82.65%  73.88%  82.10%  85.53%
  Type recall, freq 2 or 1       0   3.15%   4.87%       0  12.74%  12.74%

The results from the novel show that recall is almost tripled in the sample, from 234 linked source expressions in the B configuration to 709 with the AM configuration. Precision values for the novel lie in the range from 90.13 to 92.50 per cent when partial links are judged as errors, and slightly higher if they are not. The use of weights seems to make precision somewhat lower for the novel, which perhaps could be explained by the fact that the novel is a much more varied text type. For the program manual the recall results are as good as for the novel (three times as many linked source types for the AM configuration compared to the baseline). Precision is increased, but perhaps not to the level we anticipated at first. Multi-word expressions are linked with a relatively high recall (above 70%), but the precision of these links is not as high as for single words. Our evaluations of the links show that one major problem lies in the quality of the multi-word expressions that are fed into the alignment program.
As the program works iteratively and in the current version starts with the multi-word expressions, any errors at this stage will have consequences in later iterations. We have run each module separately and observed that the addition of each module improves the baseline configuration by itself.

To compare our results to those from other approaches is difficult. Not only are we dealing with different language pairs but also with different texts and text types. There is also the issue of different evaluation criteria. A pure word-to-word alignment cannot be compared to an approach where lexical units (both single-word expressions and multi-word expressions) are linked. Neither can the combined approach be compared to a pure phrase alignment program, because the aims of the alignment are different. However, as far as we can judge given these difficulties, the results presented in this paper are on a par with previous work for precision and possibly an improvement on recall, because of how we handle low-frequency variants in the morphology module and how we use the single-word-line strategy. The handling of closed-class expressions has also been improved due to the division of these expressions into subcategories, which limits the search space considerably.

Acknowledgements

This work is part of the project "Parallel corpora in Linköping, Uppsala and Göteborg" (PLUG), jointly funded by Nutek and HSFR under the Swedish National research programme in Language Technology.

References

Brown, P.F., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer, & P. Roossin. (1988) "A Statistical Approach to Language Translation." Proceedings of the 12th International Conference on Computational Linguistics, Budapest.

Brown, P.F., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer, & P. Roossin. (1990) "A Statistical Approach to Machine Translation." Computational Linguistics 16(2).

Dagan, I., & K. W. Church. (1994) "Termight: Identifying and Translating Technical Terminology." Proceedings of the Conference on Applied Natural Language Processing, Stuttgart.

Fung, P., & K. W. Church. (1994) "K-vec: A New Approach for Aligning Parallel Texts." Proceedings of the 15th International Conference on Computational Linguistics, Kyoto.

Jones, D. & M. Alexa (1997) "Towards automatically aligning German compounds with English word groups." In New Methods in Language Processing (eds. Jones D. & H. Somers). UCL Press, London.

Kitamura, M. & Y. Matsumoto (1996) "Automatic Extraction of Word Sequence Correspondences in Parallel Corpora." In Proceedings of the Fourth Annual Workshop on Very Large Corpora (WVLC-4), Copenhagen.

Macklovitch, E., & Marie-Louise Hannan. (1996) "Line Up: Advances in Alignment Technology and Their Impact on Translation Support Tools." In Proceedings of the Second Conference of the Association for Machine Translation in the Americas, Montreal.

Melamed, I. D. (1997a) "A Word-to-Word Model of Translational Equivalence." Proceedings of the 35th Conference of the Association for Computational Linguistics, Madrid.

Melamed, I. D. (1997b) "Automatic Discovery of Non-Compositional Compounds in Parallel Data." Paper presented at the 2nd Conference on Empirical Methods in Natural Language Processing, Providence.

Merkel, M., B. Nilsson, & L. Ahrenberg. (1994) "A Phrase-Retrieval System Based on Recurrence." In Proceedings of the Second Annual Workshop on Very Large Corpora (WVLC-2), Kyoto.
(1997) "Semi-Automatic Acquisition of Domain-Specific Translation Lexicons." In Proceedings of the 7th ACL Conference on Applied Natural Language Processing. Washington DC. Smadja F., K. McKeown, & V. Hatzivassiloglou, (1996) "Franslating Collocations for Bilingual Lexicons: A Statistical Approach." In Computational Linguistics, Vol. 22No. 1. Tiedemann, J6rg. (1997) "Automatic Lexicon ExWaction fi'om Aligned Bilingual Corpora." Diploma Thesis, Otto- von-Guericke-Universit~t Magdeburg. 35
Dialogue Management in Vector-Based Call Routing

Jennifer Chu-Carroll and Bob Carpenter
Lucent Technologies Bell Laboratories
600 Mountain Avenue, Murray Hill, NJ 07974, U.S.A.
E-mail: {jencc,carp}@research.bell-labs.com

Abstract

This paper describes a domain-independent, automatically trained call router which directs customer calls based on their response to an open-ended "How may I direct your call?" query. Routing behavior is trained from a corpus of transcribed and hand-routed calls and then carried out using vector-based information retrieval techniques. Based on the statistical discriminating power of the n-gram terms extracted from the caller's request, the caller is 1) routed to the appropriate destination, 2) transferred to a human operator, or 3) asked a disambiguation question. In the last case, the system dynamically generates queries tailored to the caller's request and the destinations with which it is consistent. Our approach is domain independent and the training process is fully automatic. Evaluations over a financial services call center handling hundreds of activities with dozens of destinations demonstrate a substantial improvement on existing systems by correctly routing 93.8% of the calls after punting 10.2% of the calls to a human operator.

1 Introduction

The call routing task involves directing a user's call to the appropriate destination within a call center or providing some simple information, such as loan rates. In current systems, the user's goals are typically gleaned via a touch-tone system employing a rigid hierarchical menu. The primary disadvantages of navigating menus for users are the time it takes to listen to all the options and the difficulty of matching their goals to the options; these problems are compounded by the necessity of descending a nested hierarchy of choices to zero in on a particular activity. Even simple requests such as "I'd like my savings account balance" may require users to navigate as many as four or five nested menus with four or five options each. We have developed an alternative to touch-tone menus that allows users to interact with a call router in natural spoken English dialogues just as they would with a human operator.

Human operators respond to a caller request by 1) routing the call to an appropriate destination, or 2) querying the caller for further information to determine where to route the call. Our automatic call router has these two options as well as a third option of sending the call to a human operator. The rest of this paper provides both a description and an evaluation of an automatic call router driven by vector-based information retrieval techniques. After introducing our fundamental routing technique, we focus on the disambiguation query generation module. Our disambiguation module is based on the same statistical training as routing, and dynamically generates queries tailored to the caller's request and the destinations with which it is consistent. The main advantages of our system are that 1) it is domain independent, 2) it is trained fully automatically to both route and disambiguate requests, and 3) its performance is sufficient for use in the field, substantially improving on that of previous systems.

2 Related Work

Call routing is similar to topic identification (McDonough et al., 1994) and document routing (Harman, 1995) in identifying which one of n topics (destinations) most closely matches a caller's request. Call routing is distinguished from these activities by requiring a single destination, but allowing a request to be refined in an interactive dialogue. We are further interested in carrying out the routing using natural, conversational language.

The only work on call routing to date that we are aware of is that by Gorin et al. (to appear). They select salient phrase fragments from caller requests, such as "made a long distance" and "the area code for". These phrase fragments are used to determine the most likely destination(s), which they refer to as call type(s), for the request, either by computing the a posteriori probability for each call type or by passing the fragments through a neural network classifier. Abella and Gorin (1997) utilized the Boolean formula minimization algorithm for combining the resulting set of call types based on a hand-coded hierarchy of call types. Their intention is to utilize the outcome of this algorithm to select from a set of dialogue strategies for response generation.

3 Corpus Analysis

To examine human-human dialogue behavior in call routing, we analyzed a set of 4497 transcribed telephone calls involving customers interacting with human operators, looking at both the semantics of caller requests and dialogue actions for response generation. The call center provides financial services in hundreds of categories in the general areas of banking, credit cards, loans, insurance and investments; we concentrated on the 23 destinations for which we had at least 10 calls in the corpus.

Table 1: Semantic Types of Caller Requests

                   Name   Activity   Indirect
  # of calls        949      3271        277
  % of all calls   21.1%     72.7%       6.2%

3.1 Semantics of Caller Requests

The operator provides an open-ended prompt of "How may I direct your call?" We classified user responses into three categories. First, callers may explicitly provide a destination name, either by itself or embedded in a complete sentence, such as "may I have consumer lending?" Second, callers may describe the activity they would like to perform. Such requests may be unambiguous, such as "I'd like my checking account balance", or ambiguous, such as "car loans please", which in our call center can be resolved to either consumer lending, which handles new car loans, or to loan services, which handles existing car loans. Third, a caller can provide an indirect request, in which they describe their goal in a roundabout way, often including irrelevant information. This often occurs when the caller either is unfamiliar with the call center hierarchy or does not have a concrete idea of how to achieve the goal, as in "ah I'm calling 'cuz ah a friend gave me this number and ah she told me ah with this number I can buy some cars or whatever but she didn't know how to explain it to me so I just called you you know to get that information."

Table 1 shows the distribution of caller requests in our corpus with respect to these semantic types. Our analysis shows that in the vast majority of calls, the request was based on destination name or activity. Since there is a fairly small number (dozens to hundreds) of activities being handled by each destination, requests based on name and activity are expected to be more predictable and thus more suitable for handling by an automatic call router. Thus, our goal is to automatically route those calls based on name and activity, while leaving the indirect or inappropriate requests to human call operators.

3.2 Dialogue Actions for Response Generation

We also analyzed the operator's responses to caller requests to determine the dialogue actions needed for response generation in our automatic call router. We found that in the call routing task, the call operator either notifies the customer of the routing destination or asks a disambiguating query. (In cases where the operator generated an acknowledgment, such as "uh-huh", midway through the caller's request, we analyzed the next operator utterance.)
Call routing is distinguished from these activities by requiring a single destination, but allowing a request to be refined in an in- teractive dialogue. We are further interested in carrying out the routing using natural, conversational language. The only work on call routing to date that we are aware of is that by Gorin et al. (to appear). They se- lect salient phrase fragments from caller requests, such as made a long distance and the area code for. These phrase fragments are used to determine the most likely destina- tion(s), which they refer to as call type(s), for the request either by computing the a posteriori probability for each call type or by passing the fragments through a neural network classifier. Abella and Gorin (1997) utilized the Boolean formula minimization algorithm for combining the resulting set of call types based on a hand-coded hi- erarchy of call types. Their intention is to utilize the out- come of this algorithm to select from a set of dialogue strategies for response generation. 3 Corpus Analysis To examine human-human dialogue behavior in call routing, we analyzed a set of 4497 transcribed telephone calls involving customers interacting with human opera- tors, looking at both the semantics of caller requests and 256 Name Activity Indirect # of calls 949 3271 277 % of all calls 21.1% 72.7% 6.2% Table 1: Semantic Types of Caller Requests dialogue actions for response generation. The call cen- ter provides financial services in hundreds of categories in the general areas of banking, credit cards, loans, insur- ance and investments; we concentrated on the 23 desti- nations for which we had at least 10 calls in the corpus. 3.1 Semantics of Caller Requests The operator provides an open-ended prompt of "How may I direct your call?" We classified user responses into three categories. First, callers may explicitly pro- vide a destination name, either by itself or embedded in a complete sentence, such as "may I have consumer lending?" Second, callers may describe the activity they would like to perform. Such requests may be unambigu- ous, such as "l'd like my checking account balance", or ambiguous, such as "car loans please", which in our call center can be resolved to either consumer lending, which handles new car loans, or to loan services, which handles existing car loans. Third, a caller can provide an indirect request, in which they describe their goal in a round- about way, often including irrelevant information. This often occurs when the caller either is unfamiliar with the call center hierarchy or does not have a concrete idea of how to achieve the goal, as in "ah I'm calling 'cuz ah a friend gave me this number and ah she told me ah with this number I can buy some cars or whatever but she didn't know how to explain it to me so l just called you you know to get that information." Table 1 shows the distribution of caller requests in our corpus with respect to these semantic types. Our analysis shows that in the vast majority of calls, the request was based on destination name or activity. Since there is a fairly small number (dozens to hundreds) of activities be- ing handled by each destination, requests based on name and activity are expected to be more predictable and thus more suitable for handling by an automatic call router. Thus, our goal is to automatically route those calls based on name and activity, while leaving the indirect or inap- propriate requests to human call operators. 
3.2 Dialogue Actions for Response Generation We also analyzed the operator's responses to caller re- quests to determine the dialogue actions needed for re- sponse generation in our automatic call router. We found that in the call routing task, the call operator either no- tifies the customer of the routing destination or asks a disambiguating query.l lln cases where the operator generates an acknowledgment, such as uh-huh, midway through the caller's request, we analyzed the next operator utterance. Notification # of calls 3608 % of all calls 80.2% Query NP I Others 657 232 14.6% 5.2% Table 2: Call Operator Dialogue Actions Caller Reslxmse Caller Request -I I andidale Destinations R,,,a,z t_!..~+o.,~,L.o.,~ 0 Notificathm~ ~ ential Query DisambiRuating Yes f Query ~ Human Query ~ - Operator Figure 1: Call Router Architecture Table 2 shows the frequency that each dialogue ac- tion should be employed based strictly on the presence of ambiguity in the caller requests in our corpus. We fur- ther analyzed those calls considered ambiguous within our call center and noted that 75% of such ambiguous re- quests involve underspecified noun phrases, such as re- questing car loans without specifying whether it is an existing or new car loan. The remaining 25% of the ambiguous requests involve underspecified verb phrases, such as asking to transfer funds without specifying the types of accounts to and from which the transfer will oc- cur, or missing verb phrases, such as asking for direct deposit without specifying whether the caller wants to set up or change an existing direct deposit. 4 Dialogue Management in Call Routing Our call router consists of two components: the rout- ing module and the disambiguation module. The rout- ing module takes a caller request and determines a set of destinations to which the call can reasonably be routed. If there is exactly one such destination, the call is routed there and the customer notified; if there are multiple des- tinations, the disambiguation module is invoked in an at- tempt to formulate a query; and if there is no appropriate destination or if a reasonable disambiguation query can- not be generated, the call is routed to an operator. Fig- ure I shows a diagram outlining this process. 4.1 The Routing Module Our approach is novel in its application of information retrieval techniques to select candidate destinations for a call. We treat call routing as an instance of document routing, where a collection of judged documents is used for training and the task is to judge the relevance of a set of test documents (Schiitze et al., 1995). More specifi- 257 cally, each destination in our call center is represented as a collection of documents (transcriptions of calls routed to that destination), and given a caller request, we judge the relevance of the request to each destination. 4.1.1 The Training Process Document Construction Our training corpus consists of 3753 calls each of which is hand-routed to one of 23 destinations. 2 Our first step is to create one (virtual) document per destination, which contains the text of the callers' contributions to all calls routed to that destina- tion. Morphological Filtering We filter each (virtual) doc- ument through the morphological processor of the Bell Labs' Text-to-Speech synthesizer (Sproat, 1997) to ex- tract the root form of each word in the corpus. Next, the root forms of caller utterances are filtered through two lists, the ignore list and the stop list, in order to build a better n-gram model. 
The ignore list consists of noise words, such as uh and um, which sometimes get in the way of proper n-gram extraction, as in "I'd like to speak to someone about a car uh loan". With noise word filtering, we can properly extract the bigram "car, loan". The stop list enumerates words that do not discriminate between destinations, such as the, be, and afternoon. We modified the standard stop list distributed with the SMART information retrieval system (Salton, 1971) to include domain specific terms and proper names that occurred in the training corpus. Note that when a stop word is filtered out of the caller utterance, a place- holder is inserted to prevent the words preceding and fol- lowing the stop word to form n-grams. For instance, af- ter filtering the stop words out of "I want to check on an account", the utterance becomes "<sw> <sw> <sw> check <sw> <sw> account". Without the placeholders, we would extract the bigram "check, account", just as if the caller had used the term checking account. Term Extraction We extract the n-gram terms that oc- cur more frequently than a pre-determined threshold and do not contain any stop words. Our current system uses unigrams that occurred at least twice and bigrams and trigrams that occurred at least three times in the corpus. No 4-grams occurred three times. Term-Document Matrix Once the set of relevant terms is determined, we construct an m x n term- document frequency matrix A whose rows represent the m terms, whose columns represent the n destinations, and where an entry At,a is the frequency with which term t occurs in calls to destination d. It is often advantageous to weight the raw counts to fine tune the contribution of each term to routing. We begin by normalizing the row vectors representing terms by making them each of unit length. Thus we di- vide each row At in the original matrix by its length, 2These 3753 calls are a subset of the corpus of 4497 calls used in our corpus analysis. We excluded those ambiguous calls that were not resolved by the operator. A 2 1/2 (El<e<n t,e) . Our second weighting is based on the n-oti-on that a term that only occurs in a few docu- ments is more important in discriminating among docu- ments than a term that occurs in nearly every document. We use the inverse document frequency (IDF) weighting scheme (Sparck Jones, 1972) whereby a term is weighted inversely to the number of documents in which it occurs, by means oflDF(t) = log 2 n/d(t) where t is a term, n is the total number of documents in the corpus, and d(t) is the number of documents containing the term t. Thus we obtain a weighted matrix B, whose elements are given by Bt,a = At,a x IDF(t)/(~-~x<e< n A2,e)x/2. Vector Representation To reduce the dimensional- ity of our vector representations for terms and doc- uments, we applied the singular value decomposition (Deerwester et al., 1990) to the m x n matrix B of weighted term-document frequencies. Specifically, we take B = USV T, where U is an m x r orthonormal ma- trix (where r is the rank of B), V is an n x r orthonor- mal matrix, and S is an r x r diagonal matrix such that Sl,1 ~_~ 82,2 ~> "'" ~> Sr,r ~ O. We can think of each row in U as an r-dimensional vector that represents a term, whereas each row in V is an r-dimensional vector representing a document. With appropriate scaling of the axes by the singular values on the diagonal of S, we can compare documents to documents and terms to terms using their corresponding points in this new r-dimensional space (Deerwester et al., 1990). 
For instance, to employ the dot product of two vectors as a measure of their similarity as is com- mon in information retrieval (Salton, 1971), we have the matrix BTB whose elements contain the dot product of document vectors. Because S is diagonal and U is or- thonormal, BTB = VSZV T = VS(VS) T. Thus, ele- ment i, j in BTB, representing the dot product between document vectors i and j, can be computed by taking the dot product between the i and j rows of the matrix VS. In other words, we can consider rows in the matrix VS as vectors representing documents for the purpose of document/document comparison. An element of the original matrix Bi,j, representing the degree of associa- tion between the ith term and the jth document, can be recovered by multiplying the ith term vector by the jth scaled document vector, namely Bij = Ui((VS)j) T. 4.1.2 Call Routing Given the vector representations of terms and documents (destinations) in r-dimensional space, how do we deter- mine to which destination a new call should be routed? Our process for vector-based call routing consists of the following four steps: Term Extraction Given a transcription of the caller's utterance (either from a keyboard interface or from the output of a speech recognizer), the first step is to extract the relevant n-gram terms from the utterance. For in- stance, term extraction on the request "I want to check the balance in my savings account" would result in 258 one bigram term, "saving, account", and two unigrams, "check" and "balance". Pseudo-Document Generation Given the extracted terms from a caller request, we can represent the request as an m-dimensional vector Q where each component Qi represents the number of times that the ith term occurred in the caller's request. We then create an r-dimensional pseudo-document vector D = QU, following the stan- dard methodology of vector-based information retrieval (see (Deerwester et al., 1990)). Note that D is simply the sum of the term vectors Ui for all terms occurring in the caller's request, weighted by their frequency of oc- currence in the request, and is scaled properly for docu- ment/document comparison. Scoring Once the vector D for the pseudo-document is determined, we compare it with the document vectors by computing the cosine between D and each scaled docu- ment vectors in VS. Next, we transform the cosine score for each destination using a sigmoid function specifically fitted for that destination to obtain a confidence score that represents the router's confidence that the call should be routed to that destination. The reason for the mapping from cosine scores to con- fidence scores is because the absolute degree of similar- ity between a request and a destination, as given by the cosine value between their vector representations, does not translate directly into the likelihood for correct rout- ing. Instead, some destinations may require a higher co- sine value, i.e., a closer degree of similarity, than others in order for a request to be correctly associated with those destinations. Thus we collected, for each destination, a set of cosine value/routing value pairs over all calls in the training data, where the routing value is 1 if the call should be routed to that destination and 0 otherwise. Then for each destination, we used the least squared error method in fitting a sigmoid function, 1/(1 + e-(a~+b)), to the set of cosine/routing pairs. We tested the routing performance using cosine vs. confidence values on 307 unseen unambiguous requests. 
In each case, we selected the destination with the high- est cosine/confidence score to be the target destination. Using strict cosine scores, 92.2% of the calls are routed to the correct destination. On the other hand, using sig- moid confidence fitting, 93.5% of the calls are correctly routed. This yields a relative reduction in error rate of 16.7%. Decision Making The outcome of the routing module is a set of destinations whose confidence scores are above a pre-determined threshold. These candidate destinations represent those to which the caller's request can reason- ably be routed. If there is only one such destination, then the call is routed and the caller notified; if there are two or more possible destinations, the disambiguation mod- ule is invoked in an attempt to formulate a query; other- wise, the the call is routed to an operator. To determine the optimal value for the threshold, we I 0.8 0.6 0.4 0.2 0 0 Uppcd~nd L~,wcrb~mnd 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Thr~h~dd Figure 2: Router Performance vs. Threshold ran a series of experiments to compute the upperbound and lowerbound of the router's performance varying the threshold from 0 to 0.9 at 0.1 intervals. The lowerbound represents the percentage of calls that are routed cor- rectly, while the upperbound indicates the percentage of calls that have the potential to be routed correctly after disambiguation (see section 5 for details on upperbound and lowerbound measures). The results in Figure 2 show 0.2 to be the threshold that yields optimal performance. 4.2 The Disambiguation Module The disambiguation module attempts to formulate an ap- propriate query to solicit further information from the caller in order to determine a unique destination to which the call should be routed. To generate an appropriate query, the caller's request and the candidate destinations must both be taken into account. We have developed a vector-based method for dynamically generating dis- ambiguation queries by first selecting a set of terms and then forming a wh or yes-no question from these selected terms. The terms selected by the disambiguation mechanism are those terms related to the original request that can likely be used to disambiguate among the candidate des- tinations. These terms are chosen by filtering all terms based on the following three criteria: I. Closeness: We choose terms that are close (by the cosine measure) to the differences between the scaled pseudo-document query vector, D, and vec- tors representing the candidate destinations in VS. The intuition is that adding terms close to the differ- ences will disambiguate the original query. 2. Relevance: From the close terms, we construct a set of relevant terms which are terms that further specify a term in the original request. A close term is considered relevant if it can be combined with a term in the request to form a valid n-gram term, and the relevant term will be the resulting n-gram tenn. For instance, if "car,loan" is in the original request, then both "new" and "new, car" would produce the relevant term "new, car, loan" 3. Disambiguating power: Finally, we restrict at- tention to relevant terms that can be added to the 259 original request to result in an unambiguous rout- ing using the routing mechanism described in Sec- tion 4.1.2. If none of the relevant terms satisfy this criterion, then we include all relevant terms in the set of disambiguating terms. 
Thus, instead of giving up the disambiguation process when no one term is predicted to resolve the ambiguity, the system may pose a question which further specifies the request and then select a disambiguating term based on this refined (although still ambiguous) request. The result of this filtering process is a finite set of terms which are relevant to the original ambiguous query and, when added to it, are likely to resolve the ambigu- ity. If a significant number of these terms share a head word, such as loan, the system asks the wh-question "for what type of loan ?'" Otherwise, the term that occurred most frequently in the training data is selected, based on the heuristic that a more common term is likely to be relevant than an obscure term, and a yes-no question is formed based on this term. A third alternative would be to ask a disjunctive question, but we have not yet ex- plored this possibility. Figure 1 shows that after the sys- tem poses its query, it attempts to route the refined re- quest, which is the original request augmented with the caller response to the system's query. In the case of wh- questions, n-gram terms are extracted from the caller's response. In the case of yes-no questions, the system de- termines whether ayes or no answer is given. 3 In the for- mer case, the disambiguating term used to form the query is considered the caller response, while in the latter case, the response is treated as in responses to wh-questions. Note that our disambiguation mechanism, like our ba- sic routing technique, is fully domain-indepefident. It utilizes a set of n-gram terms, as well as term and doc- ument vectors that were obtained by the training of the call router. Thus, porting the call router to a new domain requires no change in the disambiguation module. 4.3 Example To illustrate our call router, consider the request "loans please." This request is ambiguous because our call center handles mortgage loans separately from all other types of loans, and for all other loans, existing loans and new loans are again handled by different departments. Given this request, the call router first extracts the rel- evant n-gram terms, which in this case results in the uni- gram "'loan". It then computes a pseudo-document vec- tor that represents this request, which is compared in turn with the 23 vectors representing all destinations in the call center. The cosine values between the request and each destination are then mapped into confidence values. 3 In our current system, a response is considered a yes response only if it explicitly contains the word yes. However, as discussed in (Green and Carberry, 1994; Hockey et al., 1997), responses to yes-no questions may not explicitly contain a yes or no term. We leave incorporating a more sophisticated response understanding model, such as (Green and Carberry, 1994), into our system for future work. Using a confidence threshold of 0.2, we have two can- didate destinations, Loan Servicing and Consumer Lend- ing; thus the disambiguation module is invoked. Our disambiguation module first selects from all n- gram terms those whose term vectors are close to the dif- ference between the request vector and either of the two candidate destination vectors. This results in a list of 60 close terms, the vast majority of which are semantically close to "loan", such as "auto, loan", "payoff", and "owe". Next, the relevant terms are constructed from the set of close terms. 
This results in a list of 27 relevant terms, including "auto,loan" and "loan,payoff", but excluding "owe", since neither "loan,owe" nor "owe,loan" constitutes a valid bigram. The third step is to select those relevant terms with disambiguating power, resulting in 18 disambiguating terms. Since 11 of these disambiguating terms share a head noun loan, a wh-question is generated based on this head word, resulting in the query "for what type of loan?"

Suppose that in response to the system's query, the user answers "car loan". The router then adds the new bigram "car,loan" to the original request and attempts to route the refined request. This refined request is again ambiguous between Loan Servicing and Consumer Lending, since the caller did not specify whether it was an existing or new car loan. Again, the disambiguation module selects the close, relevant, and disambiguating terms, resulting in a unique term "exist,car,loan". Thus, the system generates the yes-no question "is this about an existing car loan?"4 If the user responds "yes", then the trigram term "exist,car,loan" is added to the refined request and the call routed to Loan Servicing; if the user says "no, it's a new car loan", then "new,car,loan" is extracted from the response and the call routed to Consumer Lending.

5 Evaluation

5.1 The Routing Module

We performed an evaluation of the routing module of our call router on a fresh set of 389 calls to a human operator.5 Out of the 389 requests, 307 were unambiguous and routed to their correct destinations, and 82 were ambiguous and annotated with a list of candidate destinations. Unfortunately, in this test set, only the caller's first utterance was transcribed. Thus we have no information about where the ambiguous calls were eventually routed.

The routing decision made for each call is classified into one of 8 groups, as shown in Figure 3. For instance, group 1a contains those calls which are 1) actually unambiguous, 2) considered unambiguous by the router, and 3) routed to the correct destination. On the other hand, group 3b contains those calls which are 1) actually ambiguous, 2) considered by the router to be unambiguous, and 3) routed to a destination which is not one of the potential destinations.

4 Our current system uses simple template filling for response generation by utilizing a manually constructed mapping from n-gram terms to their inflected forms, such as from "exist,car,loan" to "an existing car loan".

5 The calls in the test set were recorded separately from our training corpus. In this paper, we focus on evaluation based on transcriptions of the calls. A companion paper compares call routing performance on transcriptions to the output of a speech recognizer (Carpenter and Chu-Carroll, submitted).

[Figure 3: Classification of Router Outcome. A decision tree asking, in turn, whether the request is actually unambiguous, whether the call is routed by the router, and whether the routing is correct (or contains/overlaps with the possible destinations), yielding the eight leaf classes 1a, 1b, 2a, 2b, 3a, 3b, 4a, and 4b.]

       Unambiguous Requests   Ambiguous Requests   All Requests
  LB   1a/(1+2)               4a/(3+4)             (1a+4a)/all
  UB   (1a+2a)/(1+2)          (3a+4a)/(3+4)        (1a+2a+3a+4a)/all

Table 3: Calculation of Upperbounds and Lowerbounds

We evaluated the router's performance on three subsets of our test data: unambiguous requests alone, ambiguous requests alone, and all requests combined.
For each set of data, we calculated a lowerbound performance, which measures the percentage of calls that are correctly routed, and an upperbound performance, which measures the percentage of calls that are either correctly routed or have the potential to be correctly routed. Table 3 shows how the upperbounds and lowerbounds are computed, based on the classification in Figure 3, for each of the three data sets. For instance, for unambiguous requests (classes 1 and 2), the lowerbound is the number of calls actually routed to the correct destination (1a) divided by the number of total unambiguous requests, while the upperbound is the number of calls actually routed to the correct destination (1a) plus the number of calls which the router finds to be ambiguous between the correct destination and some other destination(s) (2a), divided by the number of unambiguous queries. The calls in category 2a are considered to be potentially correct because it is likely that the call will be routed to the correct destination after disambiguation.

Table 4 shows the upperbound and lowerbound performance for each of the three test sets.

       Unambiguous Requests   Ambiguous Requests   All Requests
  LB   80.1%                  58.5%                75.6%
  UB   96.7%                  98.8%                97.2%

Table 4: Router Performance with Threshold = 0.2

These results show that the system's overall performance will fall somewhere between 75.6% and 97.2%. The actual performance of the system is determined by two factors: 1) the performance of the disambiguation module, which determines the correct routing rate of the 16.6% of the unambiguous calls that were considered ambiguous by the router (class 2a), and 2) the percentage of calls that were routed correctly out of the 40.4% of ambiguous calls that were considered unambiguous and routed by the router (class 3a). Note that the performance figures in Table 4 are the result of 100% automatic routing, since no request in our test set failed to evoke at least one candidate destination. In the next sections, we discuss the performance of the disambiguation module, which determines the overall system performance, and show how allowing calls to be punted to operators affects the system's performance.
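As a concrete reading of Table 3, both bounds follow directly from the eight class counts of Figure 3. The sketch below is our own illustration (not code from the paper), and the upperbound formulas for ambiguous requests follow the pattern reconstructed in Table 3.

    def router_bounds(c):
        # c maps the eight outcome classes of Figure 3 ('1a'..'4b') to counts.
        unamb = c['1a'] + c['1b'] + c['2a'] + c['2b']   # classes 1 and 2
        amb   = c['3a'] + c['3b'] + c['4a'] + c['4b']   # classes 3 and 4
        total = unamb + amb
        lb = {'unambiguous': c['1a'] / unamb,
              'ambiguous':   c['4a'] / amb,
              'all':         (c['1a'] + c['4a']) / total}
        ub = {'unambiguous': (c['1a'] + c['2a']) / unamb,
              'ambiguous':   (c['3a'] + c['4a']) / amb,
              'all':         (c['1a'] + c['2a'] + c['3a'] + c['4a']) / total}
        return lb, ub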
5.2 The Disambiguation Module

To evaluate our disambiguation module, we needed dialogues which satisfy two criteria: 1) the caller's first utterance is ambiguous, and 2) the operator asked a follow-up question to disambiguate the query and subsequently routed the call to the appropriate destination. We used 157 calls that meet these two criteria as our test set. Note that this test set is disjoint from the test set used in the evaluation of the router (Section 5.1), since none of the transcribed calls in the latter test set satisfied criterion (2).

For each ambiguous call, the first user utterance was given to the router as input. The outcome of the router was classified as follows:

1. Unambiguous: in this case the call was routed to the selected destination. This routing was considered correct if the selected destination was the same as the actual destination and incorrect otherwise.

2. Ambiguous: in this case the router attempted to initiate disambiguation. The outcome of the routing of these calls was determined as follows:

(a) Correct, if a disambiguation query was generated which, when answered, led to the correct destination.

(b) Incorrect, if a disambiguation query was generated which, when answered, could not lead to a correct destination.

(c) Reject, if the router could not form a sensible query or was unable to gather sufficient information from the user after its queries, and routed the call to an operator.

Table 5 shows the number of calls that fall into each of the 5 categories. Out of the 157 calls, the router automatically routed 115 of them either with or without disambiguation (73.2%). Furthermore, 87.0% of these routed calls were routed to the correct destination. Notice that out of the 52 ambiguous calls that the router considered unambiguous, 40 were routed correctly (76.9%).

  Routed As Unambiguous        Routed As Ambiguous
  Correct    Incorrect         Correct    Incorrect    Reject
  40         12                60         3            42

Table 5: Performance of Disambiguation Module on Ambiguous Calls

This is simply because our vector-based router is able to distinguish between cases where an ambiguous query is equally likely to be routed to more than one destination, and situations where the likelihood of one potential destination overwhelms that of the other(s). In the latter case, the router routes the call to the most likely destination instead of initiating disambiguation, which has been shown to be an effective strategy; not surprisingly, human operators are also prone to guess the destination based on likelihood and route callers without disambiguation.

5.3 Overall Performance

Combining results from Section 5.2 for ambiguous calls with results from Section 5.1 for unambiguous calls leads to the overall performance of the call router in Table 6. The table shows the number of calls that will be correctly routed, incorrectly routed, and rejected, if we apply the performance of the disambiguation module (Table 5) to the calls that fall into each class in the evaluation of the routing module (Section 5.1).

          Correct   Incorrect   Reject
  Class 1   63.2%      1.3%       0%
  Class 2    7.5%      1.7%     5.3%
  Class 3    6.5%      2.2%       0%
  Class 4    7.0%      0.4%     4.9%
  Total     84.2%      5.6%    10.2%

Table 6: Overall Performance of Call Router

Our results show that out of the 389 calls in our test set, 89.8% of the calls will be automatically routed by the call router. Of these calls, 93.8% (which constitutes 84.2% of all calls) will be routed to their correct destinations. This is substantially better than the results obtained by Gorin et al., who report an 84% correct routing rate with a 10% false rejection rate (routed to an operator when the call could have been automatically routed) on 14 destinations (Gorin et al., to appear).6

6 Conclusions

We described and evaluated a domain-independent, automatically trained call router that takes one of three actions in response to a caller's request. It can route the call to a destination within the call center, attempt to formulate a disambiguating query, or route the call to a human operator. The routing module of the call router selects a set of candidate destinations based on n-gram terms extracted from the caller request and a vector-based comparison between these n-gram terms and each possible destination.

6 Gorin et al.'s results are measured without the possibility of system queries. To provide a fair comparison, we evaluated our routing module on all 389 calls in our test set using the scoring method described in (Gorin et al., to appear) (which corresponds roughly to our upperbound measure), and achieved a 94.1% correct routing rate to 23 destinations when all calls are automatically routed (no false rejection), a substantial improvement over their system.
If disambiguation is necessary, a yes-no question or wh-question is dynamically generated from among known n-gram terms in the domain based on closeness, relevance, and disambiguating power, thus tailoring the disambiguating query to the original request and the candidate destinations. Finally, our system performs substantially better than the best previously existing system, achieving an overall 93.8% correct routing rate for automatically routed calls when rejecting 10.2% of all calls.

Acknowledgments

We would like to thank Christer Samuelsson and Jim Hieronymus for helpful discussions, and Diane Litman for comments on an earlier draft of this paper.

References

A. Abella and A. Gorin. 1997. Generating semantically consistent inputs to a dialog manager. In Proc. EUROSPEECH, pages 1879-1882.

B. Carpenter and J. Chu-Carroll. submitted. Natural language call routing: A robust, self-organizing approach.

S. Deerwester, S. Dumais, G. Furnas, T. Landauer, and R. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41:391-407.

A. Gorin, G. Riccardi, and J. Wright. to appear. How may I help you? Speech Communication.

N. Green and S. Carberry. 1994. A hybrid reasoning model for indirect answers. In Proc. ACL, pages 58-65.

D. Harman. 1995. Overview of the fourth Text REtrieval Conference. In Proc. TREC.

B. Hockey, D. Rossen-Knill, B. Spejewski, M. Stone, and S. Isard. 1997. Can you predict responses to yes/no questions? yes, no, and stuff. In Proc. EUROSPEECH, pages 2267-2270.

J. McDonough, K. Ng, P. Jeanrenaud, H. Gish, and J. R. Rohlicek. 1994. Approaches to topic identification on the switchboard corpus. In Proc. ICASSP, pages 385-388.

G. Salton. 1971. The SMART Retrieval System. Prentice Hall.

H. Schütze, D. Hull, and J. Pedersen. 1995. A comparison of classifiers and document representations for the routing problem. In Proc. SIGIR.

K. Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28:11-20.

R. Sproat, editor. 1997. Multilingual Text-to-Speech Synthesis: The Bell Labs Approach. Kluwer.
Machine Translation vs. Dictionary Term Translation - a Comparison for English-Japanese News Article Alignment

Nigel Collier, Hideki Hirakawa and Akira Kumano
Communication and Information Systems Laboratories
Research and Development Center, Toshiba Corporation
1 Komukai Toshiba-cho, Kawasaki-shi, Kanagawa 210-8582, Japan
{nigel, hirakawa, kmn}@eel.rdc.toshiba.co.jp

Abstract

Bilingual news article alignment methods based on multi-lingual information retrieval have been shown to be successful for the automatic production of so-called noisy-parallel corpora. In this paper we compare the use of machine translation (MT) to the commonly used dictionary term lookup (DTL) method for Reuter news article alignment in English and Japanese. The results show the trade-off between improved lexical disambiguation provided by machine translation and extended synonym choice provided by dictionary term lookup, and indicate that MT is superior to DTL only at medium and low recall levels. At high recall levels DTL has superior precision.

1 Introduction

In this paper we compare the effectiveness of full machine translation (MT) and simple dictionary term lookup (DTL) for the task of English-Japanese news article alignment using the vector space model from multi-lingual information retrieval. Matching texts depends essentially on lexical coincidence between the English text and the Japanese translation, and we see that the two methods show the trade-off between reduced transfer ambiguity in MT and increased synonymy in DTL.

Corpus-based approaches to natural language processing are now well established for tasks such as vocabulary and phrase acquisition, word sense disambiguation and pattern learning. The continued practical application of corpus-based methods is critically dependent on the availability of corpus resources. In machine translation we are concerned with the provision of bilingual knowledge, and we have found that the types of language domains which users are interested in, such as news, current affairs and technology, are poorly represented in today's publically available corpora. Our main area of interest is English-Japanese translation, but there are few clean parallel corpora available in large quantities. As a result we have looked at ways of automatically acquiring large amounts of parallel text for vocabulary acquisition.

The World Wide Web and other Internet resources provide a potentially valuable source of parallel texts. Newswire companies for example publish news articles in various languages and various domains every day. We can expect a coincidence of content in these collections of text, but the degree of parallelism is likely to be less than is the case for texts such as the United Nations and parliamentary proceedings. Nevertheless, we can expect a coincidence of vocabulary, in the case of names of people and places, organisations and events. This time-sensitive bilingual vocabulary is valuable for machine translation and makes a significant difference to user satisfaction by improving the comprehensibility of the output.

Our goal is to automatically produce a parallel corpus of aligned articles from collections of English and Japanese news texts for bilingual vocabulary acquisition. The first stage in this process is to align the news texts. Previously (Collier et al., 1998) adapted multi-lingual (also called "translingual" or "cross-language") information retrieval (MLIR) for this purpose and showed the practicality of the method.
In this paper we extend their investigation by comparing the performance of machine translation and conventional dictionary term translation for this task.

2 MLIR Methods

There has recently been much interest in the MLIR task (Carbonell et al., 1997) (Dumais et al., 1996) (Hull and Grefenstette, 1996). MLIR differs from traditional information retrieval in several respects which we will discuss below. The most obvious is that we must introduce a translation stage in between matching the query and the texts in the document collection. Query translation, which is currently considered to be preferable to document collection translation, introduces several new factors to the IR task:

• Term transfer mistakes - analysis is far from perfect in today's MT systems and we must consider how to compensate for incorrect translations.

• Unresolved lexical ambiguity - occurs when analysis cannot decide between alternative meanings of words in the target language.

• Synonym selection - when we use an MT system to translate a query, generation will usually result in a single lexical choice, even though alternative synonyms exist. For matching texts, the MT system may not have chosen the same synonym in the translated query as the author of the matching document.

• Vocabulary limitations - are an inevitable factor when using bilingual dictionaries.

Most of the previous work in MLIR has used simple dictionary term translation within the vector space model (Salton, 1989). This avoids the synonym selection constraints imposed by sentence generation in machine translation systems, but fails to resolve lexical transfer ambiguity. Since all possible translations are generated, the correctly matching term is assumed to be contained in the list, and term transfer mistakes are not an explicit factor.

Two important issues need to be considered in dictionary term based MLIR. The first, raised by Hull et al. (Hull and Grefenstette, 1996), is that generating multiple translations breaks the term independence assumption of the vector space model. A second issue, identified by (Davis, 1996), is whether vector matching methods can succeed given that they essentially exploit linear (term-for-term) relations in the query and target document. This becomes important for languages such as English and Japanese where high-level transfer is necessary.

Machine translation of the query, on the other hand, uses high-level analysis and should be able to resolve much of the lexical transfer ambiguity supplied by the bilingual dictionary, leading to significant improvements in performance over DTL, e.g. see (Davis, 1996). We assume that the MT system will select only one synonym where a choice exists, so term independence in the vector space model is not a problem. Term transfer mistakes clearly depend on the quality of analysis, but may become a significant factor when the query contains only a few terms and little surrounding context.

Surprisingly, to the best of our knowledge, no comparison has been attempted before between DTL and MT in MLIR. This may be due either to the unreliability of MT, or because queries in MLIR tend to be short phrases or single terms and MT is considered too challenging. In our application of article alignment, where the query contains sentences, it is both meaningful and important to compare the two methods.
3 News Article Alignment

The goal of news article alignment is the same as that in MLIR: we want to find relevant matching documents in the source language corpus collection for those queries in the target language corpus collection. The main characteristics which make news article alignment different to MLIR are:

• Number of query terms - the number of terms in a query is very large compared to the usual IR task;

• Small search space - we can reduce the search to those documents within a fixed range of the publication date;

• Free text retrieval - we cannot control the search vocabulary as is the case in some information retrieval systems;

• High precision - is required because the quality of the bilingual knowledge which we can acquire is directly related to the quality of article alignment.

We expect the end product of article alignment to be a noisy-parallel corpus. In contrast to clean-parallel texts, we are just beginning to explore noisy-parallel texts as a serious option for corpus-based NLP, e.g. (Fung and McKeown, 1996). Noisy-parallel texts are characterised by heavy reformatting at the translation stage, including large sections of untranslated text and textual reordering. Methods which seek to align single sentences are unlikely to succeed with noisy-parallel texts, and we seek to match whole documents rather than sentences before bilingual lexical knowledge acquisition. The search effort required to align individual documents is considerable and makes manual alignment both tedious and time consuming.

4 System Overview

In our collections of English and Japanese news articles we find that the Japanese texts are much shorter than the English texts, typically only two or three paragraphs, and so it was natural to translate from Japanese into English and to think of the Japanese texts as queries. The goal of article alignment can be reformulated as an IR task by trying to find the English document(s) in the collection (corpus) of news articles which most closely correspond to the Japanese query. The overall system is outlined in Figure 1 and discussed below.

[Figure 1: System Overview]

4.1 Dictionary term lookup method

DTL takes each term in the query and performs dictionary lookup to produce a list of possible translation terms in the document collection language. Duplicate terms were not removed from the translation list. In our simulations we used a 65,000 term common word bilingual dictionary and 14,000 terms from a proper noun bilingual dictionary which we consider to be relevant to international news events.

The disadvantage of term vector translation using DTL arises from the shallow level of analysis. This leads to the incorporation of a range of polysemes and homographs in the translated query, which reduces the precision of document retrieval. In fact, the greater the depth of coverage in the bilingual lexicon, the greater this problem will become.
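The DTL translation step can be pictured as the following minimal sketch. It is our own illustration rather than the authors' code, and the dictionary structure (a mapping from a Japanese term to its list of English translations) is an assumption.

    def dtl_translate(japanese_terms, bilingual_dict):
        # Produce, for each Japanese content word, the full list of English
        # translation candidates; duplicates are deliberately kept, mirroring
        # the setup described above.
        translated = []
        for term in japanese_terms:
            candidates = bilingual_dict.get(term, [])
            if candidates:
                translated.append(candidates)  # one disjunctive list per term
        return translated

    # Hypothetical example entry:
    # dtl_translate(["kikyuu"], {"kikyuu": ["balloon"]})  ->  [["balloon"]]

Keeping all candidates, rather than choosing one, is what preserves synonym coverage at the cost of admitting homograph noise.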
4.2 Machine translation method

Full machine translation (MT) is another option for the translation stage, and it should allow us to reduce the transfer ambiguity inherent in the DTL model through linguistic analysis. The system we use is Toshiba Corporation's ASTRANSAC (Hirakawa et al., 1991) for Japanese to English translation.

The translation model in ASTRANSAC is the transfer method, following the standard process of morphological analysis, syntactic analysis, semantic analysis and selection of translation words. Analysis uses ATNs (Augmented Transition Networks) on a context free grammar. We modified the system so that it used the same dictionary resources as the DTL method described above.

4.3 Example query translation

Figure 2 shows an example sentence taken from a Japanese query together with its English translation produced by the MT and DTL methods. We see that in both translations there is missing vocabulary (e.g. the katakana name in the source sentence is not translated); since the two methods both use the same dictionary resource this is a constant factor and we can ignore it for comparison purposes.

[Figure 2: Cross-method comparison of a sample sentence taken from a Japanese query with its translation in English. The original Japanese text, and the katakana name it contains, are not reproducible here. Translation using MT: "Although the American who aims at an independent world round by the balloon, and Mr. [katakana name] are flying the India sky on 19th, it can seem to attain a simple world round." Translation using DTL (candidate lists run together): "independent individual singlehanded single separate sole alone balloon round one round one revolution world earth universe world-wide international base found ground depend turn hang approach come draw drop cause due twist choose call according to based on owing to by by means of under due to through from accord owe round one round one revolution go travel drive sail walk run American aim direct toward shoot for have direct India Republic of India Rep. of India Mr. Miss Ms. Messrs. Mrs. Mmes. Mses. Esq. American sky skies upper air upper regions high up in the sky up in the air an altitude a height in the sky of over set arrangement arrange world earth universe world-wide universal international simple innocent naive unsophisticated inexperienced fly hop flight aviation round one round one revolution go travel drive sail walk run seem appear encaustic signs sign indications attain achieve accomplish realise fulfill achievement attainment".]

As expected, we see that MT has correctly resolved some of the lexical ambiguities, such as the Japanese word for 'world', whereas DTL has included the spurious homonym terms "earth, universe, world-wide, universal, international".

In the case of synonymy, we notice that MT has decided on "independent" as the translation of the Japanese source word, while DTL also includes the synonyms "individual, singlehanded, single, separate, sole", etc. The author of the correctly matching English text actually chose the term 'singlehanded', so synonym expansion will provide us with a better match in this case. The choice of synonyms is quite dependent on author preference and style considerations which MT cannot be expected to second-guess.

The limitations of MT analysis give us some selection errors; for example, we see that one phrase is translated as "flying the India sky ...", whereas the natural translation would be 'flying over India', even though 'over' is registered as a possible translation of the Japanese word in the dictionary.

5 Corpus

The English document collection consisted of Reuter daily news articles taken from the internet for the period December 1996 to May 1997. In total we have 6782 English articles, with an average of about 45 articles per day. After pre-processing to remove hypertext and formatting characters, we are left with approximately 140,000 paragraphs of English text.

In contrast to the English news articles, the Japanese articles, which are also produced daily by Reuters, are very short.
The Japanese is a translated summary of an English article, but considerable reformatting has taken place. In many cases the Japanese translation seems to draw on multiple sources, including some which do not appear on the public newswire at all. The 1488 Japanese articles cover the same period as the English articles.

6 Implementation

The task of text alignment takes a list of texts {Q_0, ..., Q_n} in a target language and a list of texts {D_0, ..., D_m} in a source language and produces a list I of aligned pairs. A pair < Q_x, D_y > is in the list if Q_x is a partial or whole translation of D_y. In order to decide on whether the source and target language text should be in the list of aligned pairs, we translate Q_x into the source language to obtain Q'_x using bilingual dictionary lookup. We then match texts from {Q_0, ..., Q_n} and {D_0, ..., D_m} using standard models from Information Retrieval. We now describe the basic model.

Terminology

An index of t terms is generated from the document collection (English corpus) and the query set (Japanese translated articles). Each document has a description vector D = (w_{d1}, w_{d2}, ..., w_{dt}), where w_{dk} represents the weight of term k in document D. The set of documents in the collection is N, and n_k represents the number of documents in which term k appears. tf_{dk} denotes the term frequency of term k in document D. A query Q is formulated as a query description vector Q = (w_{q1}, w_{q2}, ..., w_{qt}).

6.1 Model

We implemented the standard vector-space model with cosine normalisation, inverse document frequency (idf) and lexical stemming using the Porter algorithm (Porter, 1980) to remove suffix variations between surface words.

The cosine rule is used to compensate for variations in document length and the number of terms when matching a query Q from the Japanese text collection and a document D from the English text collection:

Cos(Q, D) = \sum_{k=1}^{t} w_{qk} w_{dk} / (\sum_{k=1}^{t} w_{qk}^2 \cdot \sum_{k=1}^{t} w_{dk}^2)^{1/2}   (1)

We combined term weights in the document and query with a measure of the importance of the term in the document collection as a whole. This gives us the well-known inverse document frequency (tf+idf) score:

w_{xk} = tf_{xk} \cdot \log(|N| / n_k)   (2)

Since \log(|N| / n_k) favours rarer terms, idf is known to improve precision.
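A minimal sketch of this scoring scheme, in our own illustrative form (vectors as plain Python dicts; stemming and stop-word removal omitted):

    import math

    def tfidf_vector(tf, df, n_docs):
        # tf: term -> raw frequency in this text; df: term -> document frequency.
        # Equation (2): weight each term by its frequency times log(|N| / n_k).
        return {t: f * math.log(n_docs / df[t]) for t, f in tf.items() if t in df}

    def cosine_score(q, d):
        # Equation (1): inner product normalised by both vector lengths.
        num = sum(w * d.get(t, 0.0) for t, w in q.items())
        den = math.sqrt(sum(w * w for w in q.values()) *
                        sum(w * w for w in d.values()))
        return num / den if den else 0.0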
7 Experiment

In order to automatically evaluate fractional recall and precision it was necessary to construct a representative set of Japanese articles with their correct English article alignments. We call this a judgement set. Although it is a significant effort to evaluate alignments by hand, this is possibly the only way to obtain an accurate assessment of the alignment performance. Once alignment has taken place, we compared the threshold-filtered set of English-Japanese aligned articles with the judgement set to obtain recall-precision statistics.

The judgement set consisted of 100 Japanese queries with 454 relevant English documents. Some 24 Japanese queries had no corresponding English document at all. This large percentage of irrelevant queries can be thought of as 'distractors' and is a particular feature of this alignment task. This set was then given to a bilingual checker who was asked to score each aligned article pair according to (1) the two articles are translations of each other, (2) the two articles are strongly contextually related, (3) no match. We removed type 3 correspondences so that the judgement set contained pairs of articles which at least shared the same context, i.e. referred to the same news event.

Following inspection of matching articles we used the heuristic that the search space for each Japanese query was one day either side of the day of publication. On average this was 135 articles. This is small by the standards of conventional IR tasks, but given the large number of distractor queries, the requirement for high precision and the need to translate queries, the task is challenging.

We will define recall and precision in the usual way as follows:

recall = (no. of relevant items retrieved) / (no. of relevant items in collection)   (3)

precision = (no. of relevant items retrieved) / (no. of items retrieved)   (4)

Results for the model with MT and DTL are shown in Figure 3. We see that in the basic tf+idf model, machine translation provides significantly better article matching performance for medium and low levels of recall. For high recall levels DTL is better. Lexical transfer disambiguation appears to be important for high precision, but synonym choices are crucial for good recall.

[Figure 3: Model 1: Recall and precision for English-Japanese article alignment. +: DTL; x: MT.]

Overall the MT method obtained an average precision of 0.72 in the 0.1 to 0.9 recall range, and DTL has an average precision of 0.67. This 5 percent overall improvement can be partly attributed to the fact that the Japanese news articles provided sufficient surrounding context to enable word sense disambiguation to be effective. It may also show that synonym selection is not so detrimental where a large number of other terms exist in the query. However, given these advantages, we still see that DTL performs almost as well as MT, and better at higher recall levels. In order to maximise recall, the synonym lists provided by DTL seem to be important. Moreover, on inspection of the results we found that for some weakly matching document-query pairs in the judgement set, a mistranslation of an important or rare term may significantly bias the matching score.

8 Conclusion

We have investigated the performance of MLIR with the DTL and MT models for news article alignment using English and Japanese texts. The results in this paper have shown, surprisingly, that MT does not have a clear advantage over the DTL model at all levels of recall. The trade-off between lexical transfer ambiguity and synonymy implies that we should seek a middle strategy: a sophisticated system would perhaps perform homonym disambiguation and then leave alternative synonyms in the translation query list. This should maximise both precision and recall and will be a target for our future work. Furthermore, we would like to extend our investigation to other MLIR test sets to see how MT performs against DTL when the number of terms in the query is smaller.

Acknowledgements

We gratefully acknowledge the kind permission of Reuters for the use of their newswire articles in our research. We especially thank Miwako Shimazu for evaluating the judgement set used in our simulations.

References

J. Carbonell, Y. Yang, R. Frederking, R. Brown, Y. Geng, and D. Lee. 1997. Translingual information retrieval: A comparative evaluation. In Fifteenth International Joint Conference on Artificial Intelligence (IJCAI-97), Nagoya, Japan, 23rd-29th August.

N. Collier, A. Kumano, and H. Hirakawa. 1998. A study of lexical and discourse factors in bilingual text alignment using MLIR. Trans. of Information Processing Society of Japan (to appear).

M. Davis. 1996.
New experiments in cross-language text retrieval at NMSU's computing research lab. In Fifth Text Retrieval Conference (TREC-5).

S. Dumais, T. Landauer, and M. Littman. 1996. Automatic cross-language retrieval using latent semantic indexing. In G. Grefenstette, editor, Working notes of the workshop on cross-linguistic information retrieval, ACM SIGIR.

P. Fung and K. McKeown. 1996. A technical word and term translation aid using noisy parallel corpora across language groups. Machine Translation - Special Issue on New Tools for Human Translators, pages 53-87.

H. Hirakawa, H. Nogami, and S. Amano. 1991. EJ/JE machine translation system ASTRANSAC - extensions towards personalization. In Proceedings of the Machine Translation Summit III, pages 73-80.

D. Hull and G. Grefenstette. 1996. Querying across languages: A dictionary-based approach to multilingual information retrieval. In Proceedings of the 19th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Zurich, Switzerland, pages 49-57, 18-22 August.

M. Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130-137.

G. Salton. 1989. Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by Computer. Addison-Wesley Publishing Company, Inc., Reading, Massachusetts.
An Experiment in Hybrid Dictionary and Statistical Sentence Alignment

Nigel Collier, Kenji Ono and Hideki Hirakawa
Communication and Information Systems Laboratories
Research and Development Center, Toshiba Corporation
1 Komukai Toshiba-cho, Kawasaki-shi, Kanagawa 210-8582, Japan
{nigel, ono, hirakawa}@eel.rdc.toshiba.co.jp

Abstract

The task of aligning sentences in parallel corpora of two languages has been well studied using pure statistical or linguistic models. We developed a linguistic method based on lexical matching with a bilingual dictionary and two statistical methods based on sentence length ratios and sentence offset probabilities. This paper seeks to further our knowledge of the alignment task by comparing the performance of the alignment models when used separately and together, i.e. as a hybrid system. Our results show that for our English-Japanese corpus of newspaper articles, the hybrid system using lexical matching and sentence length ratios outperforms the pure methods.

1 Introduction

There have been many approaches proposed to solve the problem of aligning corresponding sentences in parallel corpora. With a few notable exceptions, however, much of this work has focussed on either corpora containing European language pairs or clean-parallel corpora where there is little reformatting. In our work we have focussed on developing a method for robust matching of English-Japanese sentences, based primarily on lexical matching. The method combines statistical information from byte length ratios. We show in this paper that this hybrid model is more effective than its constituent parts used separately.

The task of sentence alignment is a critical first step in many automatic applications involving the analysis of bilingual texts, such as extraction of bilingual vocabulary, extraction of translation templates, word sense disambiguation, word and phrase alignment, and extraction of parameters for statistical translation models. Many software products which aid human translators now contain sentence alignment tools as an aid to speeding up editing and terminology searching.

Various methods have been developed for sentence alignment, which we can categorise as either lexical, such as (Chen, 1993), based on a large-scale bilingual lexicon; statistical, such as (Brown et al., 1991), (Church, 1993), (Gale and Church, 1993), (Kay and Röscheisen, 1993), based on distributional regularities of words or byte-length ratios and possibly inducing a bilingual lexicon as a by-product; or hybrid, such as (Utsuro et al., 1994), (Wu, 1994), based on some combination of the other two. Neither of the pure approaches is entirely satisfactory, for the following reasons:

• Text volume limits the usefulness of statistical approaches. We would often like to be able to align small amounts of text, or texts from various domains which do not share the same statistical properties.

• Bilingual dictionary coverage limitations mean that we will often encounter problems establishing a correspondence in non-general domains.

• Dictionary-based approaches are founded on an assumption of lexical correspondence between language pairs. We cannot always rely on this for non-cognate language pairs, such as English and Japanese.

• Texts are often heavily reformatted in translation, so we cannot assume that the corpus will be clean, i.e. contain many one-to-one sentence mappings.
In this case, statistical methods which rely on structure correspondence, such as byte-length ratios, may not perform well.

These factors suggest that some hybrid method may give us the best combination of coverage and accuracy when we have a variety of text domains, text sizes and language pairs. In this paper we seek to fill a gap in our understanding and to show how the various components of the hybrid method influence the quality of sentence alignment for Japanese and English newspaper articles.

2 Bilingual Sentence Alignment

The task of sentence alignment is to match corresponding sentences in a text from one language to sentences in a translation of that text in another language. Of particular interest to us is the application to Asian language pairs. Previous studies such as (Fung and Wu, 1994) have commented
The judgement set consists of 380 English sentences and 453 Japanese sentences. On average each English article has 8 lines and each Japanese article 9 lines. The articles themselves form a boundary within which to align constituent sentences. The corpus is quite well behaved. We observe many 1:1 corre- spondences, but also a large proportion of 1:2 and 1:3 correspondences as well as reorderings. Omis- sions seem to be quite rare, so we didn't see many m:0 or 0:n correspondences. An example news article is shown in Figure 1 which highlights several interesting points. Al- though the news article texts are clean and in machine-tractable format we still found that it was a significant challenge to reliably identify sentence boundaries. A simple illustration of this is shown by the first Japanese line J1 which usually corresponds to the first two English lines E1 and E2. This is a result of our general-purpose sentence segmenta- tion algorithm which has difficulty separating the Japanese title from the first sentence. Sentences usually corresponded linearly in our corpus, with few reorderings, so the major chal- lenge was to identify multiple correspondences and zero correspondences. We can see an example of a zero correspondence as E5 has no translation in the Japanese text. A l:n correspondence is shown by E7 aligning to both J5 and J6. 4 Alignment Models In our investigation we examined the performance of three different matching models (lexical matching, byte-length ratios and offset probabilities). The ba- sic models incorporate dynamic programming to find the least cost alignment path over the set of English and Japanese sentences. Cost being determined by the model's scores. The alignment space includes all possible combinations of multiple matches upto and including 3:3 alignments. The basic models are now outlined below. 4.1 Model 1: Lexical vector matching The lexical approach is perhaps the most robust for aligning texts in cognate language pairs, or where there is a large amount of reformatting in trans- lation. It has also been shown to be particularly successful within the vector space model in multilin- gual information retrieval tasks, e.g. (Collier et al., 1998a),(Collier et al., 1998b), for aligning texts in non-cognate languages at the article level. The major limitation with lexical matching is clearly the assumption of lexical correspondence - 269 El. Taiwan ruling party sees power struggle in China E2. TAIPEI , Feb 9 ( Reuter ) - Taiwan's ruling Nationalist Party said a struggle to succeed Deng Xiaoping as China's most powerful man may have already begun. E3. "Once Deng Xiaoping dies, a high tier power struggle among the Chinese communists is in- evitable," a Nationalist Party report said. E4. China and Taiwan have been rivals since the Nationalists lost the Chinese civil war in 1949 and fled to Taiwan. E5. Both Beijing and Taipei sometimes portray each other in an unfavourable light. E6. The report said that the position of Deng's chosen successor, President 3iang Zemin, may have been subtly undermined of late. E7. It based its opinion on the fact that two heavyweight political figures have recently used the phrase the "solid central collective leadership and its core" instead of the accepted "collective leader- ship centred on Jiang Zemin" to describe the current leadership structure. E8. "Such a sensitive statement should not be an unintentional mistake ... E9. 
Does this mean the power struggle has gradually surfaced while Deng Xiaoping is still alive ?," said the report , distributed to journalists. El0. "At least the information sends a warning signal that the 'core of Jiang' has encountered some subtle changes," it added . 31. ~'~l~l~.~l~:~, ~P[~:,~.-,i~-~~'a~t.~l~j~'~."~/~:i~.fl~.~:/t'~'H:]~ [~'~ 9 13 ~ -I' 9--] ~'~'~ J2. ~l~:, ~~.~6t:i~.~L,/~_@~©~"e, r l-e).,j,~~, ~,~-~.~.~e, ~,~@~,, J3. q~l~-~i'~t~, 1~7)", 1 9 4 9~l:-q~I~l~,~e)~l:-I~( , ~'~I:-~-9~A~, ti~ ~lz~b,5o Js. ~©~I~:, ~~t2.,,L~, ~L~©~-e, ~ t £ ~ ~ _ ~ , "~:~-~ J6..: h.~ el:t. "i~-:v,~ t: ¢.5 q~:~l~J" ~ ~,~' 5 ~z~t~h."¢ ~ I::o Figure 1: Example English-Japanese news article pair which is particularly weak for English and Asian language pairs where structural and semantic dif- ferences mean that transfer often occurs at a level above the lexicon. This is a motivation for incor- porating statistics into the alignment process, but in the initial stage we wanted to treat pure lexical matching as our baseline performance. We translated each Japanese sentence into En- glish using dictionary term lookup. Each Japanese content word was assigned a list of possible English translations and these were used to match against the normalised English words in the English sen- tences. For an English text segment E and the En- glish term list produced from a Japanese text seg- ment J, which we considered to be a possible unit of correspondence, we calculated similarity using Dice's coefficient score shown in Equation 1. This rather simple measure captures frequency, but not positional information, q_]m weights of words are their frequencies inside a sentence. 2fEj (1) Dice(E, .1) - fE + fJ where lea is the number of lexical items which match in E and J, fE is tile number of lexical items in E and fj is the number of lexical items in J. The translation lists for each Japanese word are used disjunctively, so if one word in the list matches then we do not consider the other terms in the list. In this way we maintain term independence. 270 Our transfer dictionary contained some 79,000 En- glish words in full form together with the list of translations in Japanese. Of these English words some 14,000 were proper nouns which were directly relevant to the vocabulary typically found in interna- tional news stories. Additionally we perform lexical normalisation before calculating the matching score and remove function words with a stop list. 4.2 Model 2: Byte-length ratios For Asian language pairs we cannot rely entirely on dictionary term matching. Moreover, algorithms which rely on matching cognates cannot be applied easily to English and some Asian language. We were motivated by statistical alignment models such as (Gale and Church, 1991) to investigate whether byte-length probabilities could improve or replace the lexical matching based method. The underlying assumption is that characters in an English sentence are responsible for generating some fraction of each character in the corresponding Japanese sentence. We derived a probability density function by mak- ing the assumption that English .and Japanese sen- tence length ratios are normally distributed. The parameters required for the model are the mean, p and variance, ~, which we calculated from a training set of 450 hand-aligned sentences. These are then entered into Equation 2 to find the probability of any two sentences (or combinations of sentences for multiple alignments) being in an alignment relation given that they have a length ratio of x. 
The byte length ratios were calculated as the length of the Japanese text segment divided by the length of the English text segment. So in this way we can incorporate multiple sentence correspondences into our model. Byte lengths for English sentences are calculated according to the number of non-white space characters, with a weighting of 1 for each valid character including punctuation. For the Japanese text we counted 2 for each non-white space char- acter. White spaces were treated as having length 0. The ratios for the training set are shown as a histogram in Figure 2 and seem to support the as- sumption of a normal distribution. The resulting normal curve with ~r = 0.33 and /1 = 0.76 is given in Figure 3, and this can then be used to provide a probability score for any English and Japanese sentence being aligned in the Reuters' corpus. Clearly it is not enough simply to assume that our sentence pair lengths follow the normal distribution. We tested this assumption using a standard test, by plotting the ordered ratio scores against the values calculated for the normal curve in Figure 3. If the ~,o °-4 .s 2 -1 Ill,. o ~ 4 S e Figure 2: Sentence length ratios in training set 1.4 1.a 1 O.S o.e 0.4 0.2 o.. 4 + + 3 4 5 i *~1 II +1 Figure 3: Sentence le, gth ratio normal curve distribution is indeed normal then we would expect the plot in Figure 4 to yi,?ld a straight line. We can see that this is the case l:',r most, although not all, of the observed scores. Although the curve in Figure 4 shows that our training set deviated from the normal distribution at i ! i o.m 0.,, o.,, o., +,2,,o ,.2 ,..,+ ,.,, ,.° Figure 4: Sentence length ratio normal check curve 271 I -2 ~ t Oodl i 0-6 -4 Figure 5: Sentence offsets in training set the extremes we nevertheless proceeded to continue with our simulations using this model considering that the deviations occured at the extreme ends of the distribution where relatively few samples were found. The weakness of this assumption however does add extra evidence to doubts which have been raised, e.g. (Wu, 1994), about whether the byte- length model by itself can perform well. 4.3 Model 3: Offset ratios We calculated the offsets in the sentence indexes for English and Japanese sentences in an alignment re- lation in the hand-aligned training set. An offset difference was calculated as the Japanese sentence index minus the English sentence index within a bilingual news article pair. The values are shown as a histogram in Figure 5. As with the byte-length ratio model, we started from an assumption that sentence correspondence offsets were normally distributed. We then cal- culated the mean and variance for our sample set shown in Figure 5 and used this to form a normal probability density function (where a = 0.50 and /J - 1.45) shown in Figure 6. The test for normality of the distribution is the same as for byte-length ratios and is given in Figure 7. We can see that the assumption of normality is particularly weak for the offset distribution, but we are motivated to see whether such a noisy probabil- ity model can improve alignment results. 5 Experiments In this section we present the results of using dif- ferent combinations of the three basic methods. We combined the basic methods to make hybrid models simply by taking the product of the scores for the models given above. Although this is simplistic we felt that in the first stage of our investigation it was better to give equal weight to each method. 
The seven methods we tested are coded as follows: 0.11 O.l ~t5 .2 0 SD 4 m Figure 6: Sentence offsets normal curve f "mO Figure 7: Sentence offscts normal check curve DICE: sentence alignmelit using bilingual dictionary and Dice's coefficient scores; LEN: sentence align- ment using sentence length ratios; OFFSET: sen- tence alignment using offs,:t probabilities. We performed sentence alignment on our test set of 380 English sentences and 453 Japanese sentences. The results are shown as recall and precision which we define in the usual way as follows: recall = #correctly matched sentences retrieved #matched sentences in the test collection (a) precision = #correctly matched sentences retrieved matched sentences retrieved (4) The results are shown in Table 1. We see that the baseline method using lexical matching with a bilin- gual lexicon, DICE, performs better than either of the two statistical methods LEN or OFFSET used separately. Offset probabilities in particular per- formed poorly showing tltat we cannot expect the correctly matching sentence to appear constantly in 272 the same highest probability position. -Method Rec. (%) Pr. (%) DICE (baseline) 84 85 LEN 82 83 OFFSET 50 57 LEN+OFFSET 70 70 DICE+LEN 89 87 DICE+OFFSET 80 80 DICE+LEN+OFFSET 88 85 Table 1: Sentence alignment results as recall and precision. Considering the hybrid methods, we see signifi- cantly that DICE+LEN provides a clearly better re- sult for both recall and precision to either DICE or LEN used separately. On inspection we found that DICE by itself could not distinguish clearly between many candidate sentences. This occured for two rea- sons. 1. As a result of the limited domain in which news articles report, there was a strong lexical over- lap between candidate sentences in a news arti- cle. 2. Secondly, where the lexical overlap was poor be- tween the English sentence and the Japanese translation, this leads to low DICE scores. The second reason can be attributed to low cov- erage in the bilingual lexicon with the domain of the news articles. If we had set a minimum thresh- old limit for overlap frequency then we would have ruled out many correct matches which were found. In both cases LEN provides a decisive clue and en- ables us to find the correct result more reliably. Fur- thermore, we found that LEN was particularly ef- fective at identifying multi-sentence correspondences compared to DICE, possibly because some sentences are very small and provide weak evidence for lexi- cal matching, whereas when they are combined with neighbours they provide significant evidence for the LEN model. Using all methods together however in DICE+LEN+OFFSET seems less promising and we believe that the offset probabilities are not a reliable model. Possibly this is due to lack of data in the training stage when we calculated ~ and p, or the data set may not in fact be normally distributed as indicated by Figure 7. Finally, we noticed that a consistent factor in the English and Japanese text pairs was that the first two lines of the English were always matched to the first line of the Japanese. This was because the En- glish text separated the title and first line, whereas our sentence segmenter could not do this for the Japanese. This factor was consistent for all the 50 article pairs in our test collection and may have led to a small deterioration in the results, so the figures we present are the minimum of what we can expect when sentence segmentation is performed correctly. 
6 Conclusion The assumption that a partial alignment at the word level from lexical correspondences can clearly in- dicate full sentence alignment is flawed when the texts contain many sentences with similar vocabu- lary. This is the case with the news stories used in our experiments and even technical vocabulary and proper nouns are not adequate to clearly discrimi- nate between alternative alignment choices because the vocabulary range inside the news article is not large. Moreover, the basic assumption of the lexical approach, that the coverage of the bilingual dictio- nary is adequate, cannot be relied on if we require robustness. This has shown the need for some hybrid model. For our corpus of newspaper articles, the hybrid model has been shown to clearly improve sentence alignment results compared with the pure models used separately. In the future we would like to make extensions to the lexical model by incorporating term weighting methods from information retrieval such as inverse document frequency which may help to identify more important terms for matching. In order to test the generalisability of our method we also want to extend our investigation to parallel cor- pora in other domains. Acknowledgements We would like to thank Reuters and Gakken for al- lowing us to use the corpus of news stories in our work. We are grateful to Miwako Shimazu for hand aligning the judgement sct used in the experiments and to Akira Kumano and Satoshi Kinoshita for useful discussions. Finally we would also like ex- press our appreciation to the anonymous reviewers for their helpful comments. References P. Brown, J. Lai, and R. Mercer. 1991. Aligning sen- tences in parallel corpora. In P9th Annual Meeting of the Association for Computational Linguistics, Berkeley, California, USA. S. Chen. 1993. Aligning sentences in bilingual cor- pora using lexical information. 31st Annual Meet- ing of the Association of Computational Linguis- tics, Ohio, USA, 22-26 June. K. Church. 1993. Char_align: a program for align- ing parallel texts at the character level. In 31st Annual Meeting of the Association for Computa- tional Linguistics, Ohio, USA, pages 1-8, 22-26 June. 273 N. Collier, H. Hirakawa, and A. Kumano. 1998a. Creating a noisy parallel corpus from newswire articles using multi-lingual information retrieval. Trans. of Information Processing Society of Japan (to appear). N. Collier, H. Hirakawa, and A. Kumano. 1998b. Machine translation vs. dictionary term transla- tion - a comparison for English-Japanese news article alignment. In Proceedings of COLING- ACL'98, University of Montreal, Canada, 10th August. P. Fung and D. Wu. 1994. Statistical augmenta- tion of a Chinese machine readable dictionary. In Second Annual Workshop on Very Large Corpora, pages 69-85, August. W. Gale and K. Church. 1991. A program for align- ing sentences in bilingual corpora. In Proceedings of the 29th Annual Conference of the Association for Computational Linguistics (ACL-91}, Berke- ley, California, pages 177-184. W. Gale and K. Church. 1993. A program for align- ing sentences in a bilingual corpora. Computa- tional Linguistics, 19(1):75-102. M. Kay and M. Rbshcheisen. 1993. Text-translation alignment. Computational Linguistics, 19:121- 142. T. Utsuro, H. Ikeda, M. Yamane, Y. Matsumoto, and N. Nagao. 1994. Bilingual text match- ing using bilingual dictionary and statistics. In COLING-94, 15th International Conference, Ky- oto, Japan, volume 2, August 5-9. D. Wu. 1994. 
Alignment of Multiple Languages for Historical Comparison

Michael A. Covington
Artificial Intelligence Center
The University of Georgia
Athens, GA 30602-7415 U.S.A.
[email protected]

Abstract

An essential step in comparative reconstruction is to align corresponding phonological segments in the words being compared. To do this, one must search among huge numbers of potential alignments to find those that give a good phonetic fit. This is a hard computational problem, and it becomes exponentially more difficult when more than two strings are being aligned. In this paper I extend the guided-search alignment algorithm of Covington (Computational Linguistics, 1996) to handle more than two strings. The resulting algorithm has been implemented in Prolog and gives reasonable results when tested on data from several languages.

1 Background

The Comparative Method for reconstructing languages consists of at least the following steps:

1. Choose sets of words in the daughter languages that appear to be cognate;
2. Align the phonological segments that appear to correspond (e.g., skip the [k] when aligning German [kni:] with English [niy] 'knee');¹
3. Find regular correspondence sets (proto-allophones, Hoenigswald 1950);
4. Classify the proto-allophones into proto-phonemes with phonological rules (sound laws).

¹ These phonetic transcriptions may or may not be phonemic. Because of the way the Comparative Method works, synchronic allophony is, in general, factored out along with diachronic allophony as the reconstruction proceeds.

The results of each step can be used to refine guesses made at previous steps. For example, a regular correspondence, once discovered, can be used to refine one's choice of alignments and even putative cognates.

Parts of the Comparative Method have been computerized by Frantz (1970), Hewson (1974), Wimbish (1989), and Lowe and Mazaudon (1994), but none of them have tackled the alignment step. Covington (1996) presents a workable alignment algorithm for comparing two languages. In this paper I extend that algorithm to handle more than two languages at once.

2 Multiple-string alignment

The alignment step is hard to automate because there are too many possible alignments to choose from. For example, French le [lə] and Spanish el [el] can be lined up at least three ways:

  e l      e l -      - e l
  l ə      - l ə      l ə -

Of these, the second is etymologically correct, and the third would merit consideration if one did not know the etymology.

The number of alignments rises exponentially with the length of the strings and the number of strings being aligned. Two ten-letter strings have anywhere from 26,797 to 8,079,453 different alignments, depending on exactly what alignments are considered distinct (Covington 1996, Covington and Canfield 1996). As for multiple strings, if two strings have A alignments, then n strings have roughly A^(n-1) alignments, assuming the alignments are generated by aligning the first two strings, then aligning the third string against the second, and so forth. In fact, the search space isn't quite that large because some combinations are equivalent to others, but it is clearly too large to search exhaustively.

Table 1: Evaluation metric used by Covington (1996).
  Badness   Conditions
     0      Exact match of consonants or glides
     5      Exact match of vowels (nonzero so the aligner will prefer to
            match consonants, given a choice)
    10      Match of 2 vowels that differ only in length, or [i] and [y],
            or [u] and [w]
    30      Match of 2 dissimilar vowels
    60      Match of 2 dissimilar consonants
   100      Match of 2 unrelated segments
    40      Skip preceded by another skip in the same string
    50      Skip not preceded by another skip in the same string

Fortunately the comparative linguist is not looking for all possible alignments, only the ones that are likely to manifest regular sound correspondences; that is, those with a reasonable degree of phonetic similarity. Thus, phonetic similarity can be used to constrain the search.

3 Applying an evaluation metric

The phonetic similarity criterion used by Covington (1996) is shown in Table 1. It is obviously just a stand-in for a more sophisticated, perhaps feature-based, system of phonology. The algorithm computes a "badness" or "penalty" for each step (column) in the alignment, summing the values to judge the badness of the whole alignment, thus:

  e l        100 + 100 = 200
  l e

  e l -      50 + 0 + 50 = 100
  - l e

The alignment with the lowest total badness is the one with the greatest phonetic similarity. Note that two separate skips count exactly the same as one complete mismatch; thus the alignments

  e      - e
  l      l -

are equally valued. In fact, a "no-alternating-skips rule" prevents the second one from being generated; deciding whether [e] and [l] correspond is left for another, unstated, part of the comparison process. I will explain below why this is not satisfactory.

Naturally, the alignment with the best overall phonetic similarity is not always the etymologically correct one, although it is usually close; we are looking for a good phonetic fit, not necessarily the best one.

4 Generalizing to three or more languages

When a guided search is involved, aligning strings from three or more languages is not simply a matter of finding the best alignment of the first two, then adding a third, and then a fourth, and so on. Thus, an algorithm to align two strings cannot be used iteratively to align more than two. The reason is that the best overall alignment of three or more strings is not necessarily the best alignment of any given pair in the set.

Fox (1995:68) gives a striking example, originally from Haas (1969). The best alignment of the Choctaw and Cree words for 'squirrel' appears to be:

  Choctaw  f a n i
  Cree     - i ! u

Here the correspondence [a]:[i] is problematic. Add the Koasati word, though, and it becomes clear that the correct alignment is actually:

  Choctaw  - f a n i
  Koasati  i p - ! u
  Cree     i - - ! u

Any algorithm that started by finding the best alignment of Choctaw against Cree would miss this solution.

A much better strategy is to evaluate each column of the alignment (I'll call it a "step") before generating the next column. That is, evaluate the first step, and then the second step, and so on. At each step, the total badness is computed by comparing each segment to all of the other segments. Thus the total badness of

  a
  b
  c

is badness(a, b) + badness(b, c) + badness(a, c). That way, no string gets aligned against another without considering the rest of the strings in the set.

Another detail has to do with skips. Empirically, I found that the badness of

  f
  p
  -

comes out too high if computed as badness(f, p) + badness(p, -) + badness(f, -); that is, the algorithm is too reluctant to take skips.
The reason, intuitively, is that in this alignment step there is really only one skip, not two separate skips (one skipping [f] and one skipping [p]). This becomes even more apparent when more than three strings are being aligned. Accordingly, when computing badness I count each skip only once (assessing it 50 points), then ignore skips when comparing the segments against each other. I have not implemented the rule from Covington (1996) that gives a reduced penalty for adjacent skips in the same string to reflect the fact that affixes tend to be contiguous.

5 Searching the set of alignments

The standard way to find the best alignment of two strings is a matrix-based technique known as dynamic programming (Ukkonen 1985, Waterman 1995). However, dynamic programming cannot accommodate rules that look ahead along the string to recognize assimilation or metathesis, a possibility that needs to be left open when implementing comparative reconstruction. Additionally, generalization of dynamic programming to multiple strings does not entirely appear to be a solved problem (cf. Kececioglu 1993).

Accordingly, I follow Covington (1996) in recasting the problem as a tree search. Consider the problem of aligning [el] with [le]. Covington (1996) treats this as a process that steps through both strings and, at each step, performs either a "match" (accepting a character from both strings), a "skip-1" (skipping a character in the first string), or a "skip-2" (skipping a character in the second string). That results in the search tree shown in Fig. 1 (ignoring Covington's "no-alternating-skips rule").

The search tree can be generalized to multiple strings by breaking up each step into a series of operations, one on each string, as shown in Fig. 2. Instead of three choices, match, skip-1, and skip-2, there are really 2 x 2: accept or skip on string 1, and then accept or skip on string 2. One of the four combinations is disallowed: you can't have a step in which no characters are accepted from any string. Similarly, if there were three strings, there would be three two-way decisions, leading to eight (= 2^3) states, one of which would be disallowed. Using search trees of this type, the decisions necessary to align any number of strings can be strung together in a satisfactory way.

6 Alternating skips

Covington (1996) considers the alignments

  e      - e
  l      l -

equivalent and generates only the first of them, leaving it to some later step in the comparison process to decide whether [e] and [l] really correspond. The rule is:

NO-ALTERNATING-SKIPS RULE: If there is a skip in one string, there cannot be a skip in the other string at the next step.

Although this tactic narrows the search space, I do not think this is linguistically satisfactory; after all, aligning [e] with [l] and skipping them in tandem are quite different linguistic claims. Consider for example the final segment of Spanish [dos] and Italian [due] 'two'; it is correct to skip the [s] and the [e] in tandem because they come from different Latin endings. It is not historically correct to pair [s] with [e] in a correspondence set.

Figure 1: Part of a 3-way-branching search tree for generating potential alignments (Covington 1996, ignoring no-alternating-skips rule).
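To make the column scoring of Sections 3 and 4 concrete, here is a small Python sketch (the actual program is in Prolog, so this is an illustrative reimplementation, not the author's code). The flat vowel set and similarity pairs stand in for the conditions of Table 1; each skip is counted once at 50 points and then ignored in the pairwise comparisons, as described above.

```python
VOWELS = set("aeiou")          # illustrative; a real system would use features
SIMILAR = {("i", "y"), ("y", "i"), ("u", "w"), ("w", "u")}

def badness(a, b):
    """Badness of matching segments a and b (neither is a skip), per Table 1."""
    if a == b:
        return 0 if a not in VOWELS else 5
    if (a, b) in SIMILAR:
        return 10
    if a in VOWELS and b in VOWELS:
        return 30
    if a not in VOWELS and b not in VOWELS:
        return 60
    return 100                  # two unrelated segments

def column_badness(column):
    """Badness of one alignment step over n strings ('-' marks a skip).
    Each skip is counted once (50 points) and then skips are ignored when
    the remaining segments are compared pairwise."""
    total = 50 * column.count("-")
    segs = [c for c in column if c != "-"]
    for i in range(len(segs)):
        for j in range(i + 1, len(segs)):
            total += badness(segs[i], segs[j])
    return total
```

For the column (f, p, -) this yields 50 + badness(f, p) = 110, rather than the 160 produced by the naive pairwise computation criticized above.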
Figure 2: Search tree factored into 2-way branchings with a disallowed state at each step. This tree generalizes to handle more than 2 strings.

Also, the no-alternating-skips rule does not generalize easily to multiple strings. I therefore replace it with a different restriction:

ORDERED-ALTERNATING-SKIPS RULE: A skip can be taken in strings i and j in successive steps only if i <= j.

That lets us generate

  - e    (String 1)
  l -    (String 2)

but not

  e -
  - l

which is undeniably equivalent. It also ensures that there is only one way of skipping several consecutive segments; we get

  - - - a b c
  d e f - - -

but not

  - a - b - c        a b c - - -
  d - e - f -        - - - d e f

or numerous other equivalent combinations of skips.

7 Pruning the search

The goal of the algorithm is, of course, to generate not the whole search tree, but only the parts of it likely to contain the best alignments, thereby narrowing the intractably large search space into something manageable. Following Covington (1996), I implemented a very simple pruning strategy. The program keeps track of the badness of the best complete alignment found so far. Every branch in the search tree is abandoned as soon as its total badness exceeds that value. Thus, bad alignments are abandoned when they have only partly been generated.

A second part of the strategy is that the computer always tries matches before it tries skips. As a result, if not much material needs to be skipped, a good alignment is found very quickly. For example, three four-character strings have 10,536 alignments (generated my way), but when comparing Spanish tres, French trois, and English three,² the algorithm finds its "best" alignment,

  t r - e s
  t r w a -
  θ r - i y

after completing only ten other alignments, although it also pursues several hundred branches of the tree part of the way. (Here the match of [s] with [y] is problematic, but the computer can't know that; it also finds a number of alternative alignments.)

² Admittedly an odd set to compare because of the different depth of branching, but they are cognates and each has four segments.

Table 2: Some alignments found by the prototype program.

  Spanish/Italian/French 'three':
    t r - e s
    t r - e -
    t r w a -

  Spanish/Italian/French 'four':
    k w a - t r o
    k w a t t r o
    k - a - t r -

  Spanish/Italian/French 'five':
    θ i ŋ k - o
    č i ŋ k w e
    s ɛ̃ - k - -

  Koasati/Cree/Choctaw 'squirrel':
    i p - ! u
    i - - ! u
    - f a n i

8 Results and evaluation

The algorithm has been prototyped in LPA Prolog, and Table 2 shows some of the alignments it found. None of these took more than five seconds on a 133-MHz Pentium, and the Prolog program was written for versatility, not speed.

As comparative linguists know, the alignment that gives the best phonetic fit (by any criterion) is not always the etymologically correct one. This is evident with my algorithm. For instance, comparing the Sanskrit, Greek, and Latin words for 'field,' the algorithm finds the correct alignment,

  a g e r - -
  a g - r o s
  a ǰ - r a s      (badness = 365)

but then discards it in favor of a seemingly better alignment:

  a g e r - -
  a g - r o s
  a - ǰ r a s      (badness = 345)

It doesn't know, of course, that [g]:[ǰ] is a phonetically probable correspondence.
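The search and pruning strategy just described can likewise be sketched in a few lines of Python. This is a hedged reconstruction, not the LPA Prolog prototype: it takes the column_badness function from the previous sketch as a parameter, tries matches before skips, abandons any branch whose badness reaches the best complete alignment found so far, and applies one reasonable reading of the ordered-alternating-skips rule.

```python
import itertools

def align(strings, column_badness):
    """Branch-and-bound alignment of n strings.  At each step every string
    either contributes its next segment or a skip ('-'); the all-skip step
    is disallowed."""
    n = len(strings)
    best_score, best_cols = [float("inf")], [None]

    def search(pos, prev_skips, score, cols):
        if score >= best_score[0]:
            return                              # prune this branch
        if all(pos[i] == len(strings[i]) for i in range(n)):
            best_score[0], best_cols[0] = score, cols
            return
        # enumerate accept/skip decisions, fewest skips first, so that
        # matches are tried before skips
        options = sorted(itertools.product((True, False), repeat=n),
                         key=lambda opt: opt.count(False))
        for accept in options:
            if not any(accept):
                continue                        # all-skip step disallowed
            column, new_pos, skips, ok = [], list(pos), set(), True
            for i in range(n):
                if accept[i]:
                    if pos[i] >= len(strings[i]):
                        ok = False              # cannot accept past string end
                        break
                    column.append(strings[i][pos[i]])
                    new_pos[i] += 1
                else:
                    column.append("-")
                    skips.add(i)
            if not ok:
                continue
            # one reading of the ordered-alternating-skips rule: a skip in
            # string j may directly follow a skip in string i only if i <= j
            if prev_skips and skips and min(skips) < max(prev_skips):
                continue
            search(tuple(new_pos), skips,
                   score + column_badness(column), cols + [column])

    search((0,) * n, set(), 0, [])
    return best_score[0], best_cols[0]
```

A call such as align(["tres", "trwa", "θriy"], column_badness) returns the lowest-badness alignment with its score; delivering all alignments within, say, 120% of the best, as proposed below, only requires relaxing the pruning threshold.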
Worse, occasionally the present algorithm doesn't consider the etymologically correct alignment at all because something that looks better has already been found. For example, taking the Avestan, Greek, and Latin words for '100', the algorithm settles on

  - - s a t o m
  h e k a t o n
  k e n - t u m      (badness 610)

without ever considering the etymologically correct alignment:

  - - s a - t o m
  h e k a - t o n
  - - k e n t u m      (badness 690)

The penalties for skips may still be too high here, but the real problem is, of course, that the algorithm is looking for the one best alignment, and that's not what comparative reconstruction needs. Instead, the computer should prune the search tree less eagerly, pursuing any alignment whose badness is, say, no more than 120% of the lowest found so far, and delivering all solutions that are reasonably close to the best one found during the entire procedure. Indeed, the availability of multiple potential alignments is the keystone of Kay's (1964) proposal to implement the Comparative Method, which could not be implemented at the time Kay proposed it because of the lack of an efficient search algorithm. The requisite modification is easily made and I plan to pursue it in subsequent work.

References

Covington, Michael A. (1996) An algorithm to align words for historical comparison. Computational Linguistics 22:481-496.

Covington, Michael A., and Canfield, E. Rodney (1996) The number of distinct alignments of two strings. Unpublished manuscript, University of Georgia.

Fox, Anthony (1995) Linguistic reconstruction: an introduction to theory and method. Oxford: Oxford University Press.

Frantz, Donald G. (1970) A PL/1 program to assist the comparative linguist. Communications of the ACM 13:353-356.

Haas, Mary R. (1969) The prehistory of languages. The Hague: Mouton.

Hewson, John (1974) Comparative reconstruction on the computer. In John M. Anderson and Charles Jones, eds., Historical linguistics I: syntax, morphology, internal and comparative reconstruction, 191-197. Amsterdam: North Holland.

Hoenigswald, Henry (1950) The principal step in comparative grammar. Language 26:357-364. Reprinted in Martin Joos, ed., Readings in Linguistics I, 4th ed., 298-302. Chicago: University of Chicago Press, 1966.

Kay, Martin (1964) The logic of cognate recognition in historical linguistics. (Memorandum RM-4224-PR.) Santa Monica: The RAND Corporation.

Kececioglu, John (1993) The maximum weight trace problem in multiple sequence alignment. In Combinatorial pattern matching: 4th annual symposium, ed. A. Apostolico et al., 106-119. Berlin: Springer.

Lowe, John B., and Mazaudon, Martine (1994) The Reconstruction Engine: a computer implementation of the comparative method. Computational Linguistics 20:381-417.

Ukkonen, Esko (1985) Algorithms for approximate string matching. Information and Control 64:100-118.

Waterman, Michael S. (1995) Introduction to computational biology: maps, sequences and genomes. London: Chapman & Hall.

Wimbish, John S. (1989) WORDSURV: a program for analyzing language survey word lists. Dallas: Summer Institute of Linguistics.
Veins Theory: A Model of Global Discourse Cohesion and Coherence

Dan CRISTEA
Dept. of Computer Science
University «A.I. Cuza»
Iaşi, Romania
[email protected]

Nancy IDE
Dept. of Computer Science
Vassar College
Poughkeepsie, NY, USA
[email protected]

Laurent ROMARY
Loria-CNRS
Vandoeuvre-lès-Nancy, France
[email protected]

Abstract

In this paper, we propose a generalization of Centering Theory (CT) (Grosz, Joshi, Weinstein (1995)) called Veins Theory (VT), which extends the applicability of centering rules from local to global discourse. A key facet of the theory involves the identification of «veins» over discourse structure trees such as those defined in RST, which delimit domains of referential accessibility for each unit in a discourse. Once identified, reference chains can be extended across segment boundaries, thus enabling the application of CT over the entire discourse. We describe the processes by which veins are defined over discourse structure trees and how CT can be applied to global discourse by using these chains. We also define a discourse «smoothness» index which can be used to compare different discourse structures and interpretations, and show how VT can be used to abstract a span of text in the context of the whole discourse. Finally, we validate our theory by analyzing examples from corpora of English, French, and Romanian.

Introduction

As originally postulated, Centering Theory (CT) (Grosz, Joshi, and Weinstein (1995)) accounts for references between adjacent units but is restricted to local reference (i.e., within segment boundaries). Recently, CT-based work has emerged which considers the relation of global discourse structure and anaphora, all of which proposes extensions to centering in order to apply it to global discourse.

We approach the relationship between global structure and anaphora resolution from a different, but related, perspective. We identify domains of referential accessibility for each discourse unit over discourse structure trees such as those defined in Rhetorical Structure Theory (RST; Mann and Thompson (1987)) and show how CT can then be applied to global discourse by using these domains. As such, our approach differs from Walker's (1996), whose account of referentiality within the cache memory model does not rely on discourse structure, but rather on cue phrases and matching constraints together with constraints on the size of the cache imposed to reflect the plausible limits of the attentional span. Our approach is closer to that of Passonneau (1995) and Hahn and Strube (1997), who both use a stack-based model of discourse structure based on Grosz and Sidner's (1986) focus spaces. Such a model is equivalent to a dynamic processing model of a tree-like structure reflecting the hierarchical nesting of discourse segments, and thus has significant similarities to discourse structure trees produced by RST (see Moser and Moore (1996)). However, using the RST notion of nuclearity, we go beyond previous work by revealing a "hidden" structure in the discourse tree, which we call veins, that enables us to determine the referential accessibility domain for each discourse unit and ultimately to apply CT globally, without extensions to CT or additional data structures.

In this paper, we describe Veins Theory (VT) by showing how veins are defined over discourse structure trees, and how CT can be applied to global discourse by using them.
We use centering transitions (Brennan, Friedman and Pollard (1987)) to define a «smoothness» index, which is used to compare different discourse structures and interpretations. Because veins define the domains of referential access for each discourse unit, we further demonstrate how VT may potentially be used to determine the «minimal» parts of a text required to resolve references in a given utterance or, more generally, to understand it out of the context of the entire discourse. Finally, we validate our theory by analyzing examples from corpora of English, French, and Romanian.

1 The vein concept

We define veins over discourse structure trees of the kind used in RST. Following that theory, we consider the basic units of a discourse to be non-overlapping spans of text (i.e., sharing no common text), usually reduced to a clause and including a single predicate; and we assume that various rhetorical, cohesive, and coherence relations hold between individual units or groups of units.¹

¹ Note that unlike RST, Veins Theory (VT) is not concerned with the type of relations which hold among discourse units, but considers only the topological structure and the nuclear/satellite status (see below) of discourse units.

We represent discourse structures as binary trees, where terminal nodes represent discourse units and non-terminal nodes represent discourse relations. A polarity is established among the children of a relation, which identifies at least one node, the nucleus, considered essential for the writer's purpose; non-nuclear nodes, which include spans of text that increase understanding but are not essential to the writer's purpose, are called satellites. Vein expressions defined over a discourse tree are sub-sequences of the sequence of units making up the discourse. In our discussion, the following notations are used:

• each terminal node (leaf node, discourse unit) has an attached label;
• mark(x) is a function that takes a string of symbols x and returns each symbol in x marked in some way (e.g., with parentheses);
• simpl(x) is a function that eliminates all marked symbols from its argument, if they exist; e.g. simpl(a(bc)d(e)) = ad;
• seq(x, y) is a sequencing function that takes as input two non-intersecting strings of terminal node labels, x and y, and returns that permutation of xy (x concatenated with y) that is given by the left-to-right reading of the sequence of labels in x and y on the terminal frontier of the tree. The function maintains the parentheses, if they exist, and seq(nil, y) = y.

Heads

1. The head of a terminal node is its label.
2. The head of a non-terminal node is the concatenation of the heads of its nuclear children.

Vein expressions

1. The vein expression of the root is its head.
2. For each nuclear node whose parent node has vein v, the vein expression is:
   • if the node has a left non-nuclear sibling with head h, then seq(mark(h), v);
   • otherwise, v.
3. For each non-nuclear node of head h whose parent node has vein v, the vein expression is:
   • if the node is the left child of its parent, then seq(h, v);
   • otherwise, seq(h, simpl(v)).

Note that the computation of heads is bottom-up, while that of veins is top-down. Consider example 1:

1. According to engineering lore,
2. the late Ermal C. Fraze,
3. founder of Dayton Reliable Tool & Manufacturing Company in Ohio,
2a. came up with a practical idea for the pop-top lid
4. after attempting with halting success to open a beer can on the bumper of his car.

The structure of this discourse fragment is given in Figure 1.
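The two computations can be stated compactly in code. The sketch below is an illustrative Python reimplementation of the definitions above, not the authors' program; it assumes that terminal nodes are labeled with integers in text order, so that seq() reduces to a merge sorted by unit number, and it models marking with a boolean flag.

```python
class Node:
    def __init__(self, nuclear, children=None, label=None):
        self.nuclear = nuclear          # True for nucleus, False for satellite
        self.children = children or []  # [] for terminal nodes
        self.label = label              # unit number, for terminal nodes
        self.head = None                # list of (unit, marked) pairs
        self.vein = None

def seq(x, y):
    return sorted(x + y, key=lambda e: e[0])   # left-to-right text order

def mark(x):
    return [(u, True) for (u, _) in x]

def simpl(x):
    return [(u, m) for (u, m) in x if not m]

def compute_heads(node):                # bottom-up
    if not node.children:
        node.head = [(node.label, False)]
    else:
        node.head = []
        for c in node.children:
            compute_heads(c)
            if c.nuclear:
                node.head = seq(node.head, c.head)
    return node.head

def compute_veins(node, parent_vein=None, left_sibling=None):   # top-down
    if parent_vein is None:
        node.vein = node.head                        # rule 1: the root
    elif node.nuclear:                               # rule 2: nuclear nodes
        if left_sibling is not None and not left_sibling.nuclear:
            node.vein = seq(mark(left_sibling.head), parent_vein)
        else:
            node.vein = parent_vein
    else:                                            # rule 3: satellites
        if left_sibling is None:                     # left child of its parent
            node.vein = seq(node.head, parent_vein)
        else:
            node.vein = seq(node.head, simpl(parent_vein))
    prev = None
    for c in node.children:
        compute_veins(c, node.vein, prev)
        prev = c
```

On a tree built for Example 1, calling compute_heads(root) and then compute_veins(root) should reproduce the H and V expressions shown at each node of Figure 1.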
The central gray line traces the principal vein of the tree, which starts at the root and descends along the nuclear nodes. Auxiliary veins are attached to the principal vein. The vein expressions corresponding to each node indicate its domain of accessibility, as defined in the following section. Accordingly, in this example, unit 1 is accessible from unit 2, but not unit 3.

Figure 1: Tree structure and veins for Example 1.

2 Accessibility

The domain of accessibility of a unit is defined as the string of units appearing in its vein expression and prefixing that unit itself. More formally, for each terminal node u, if vein(u) is its vein, then accessibility from u is given by acc(u) = pref(u, unmark(vein(u))), where:

• vein is the function that computes the vein;
• unmark(x) is a function that removes the markers from all symbols of its argument;
• pref is a function that retains the prefix of the second argument up to and including the first argument (e.g., if α and β are strings of labels and u is a label, pref(u, αuβ) = αu).

Conjecture C1: References from a given unit are possible only in its domain of accessibility.

In particular, we can say the following:

1. In most cases, if B is a unit and b∈B is a referential expression, then either b directly realizes a center that appears for the first time in the discourse, or it refers back to another center realized by a referential expression a∈A, such that A∈acc(B).² Such cases instantiate direct references.
2. If (1) is not applicable, then if A, B, and C are units, c∈C is a referential expression that refers to b∈B, and B is not on the vein of C (i.e., it is not visible from C), then there is an item a∈A, where A is a unit on the common vein of B and C, such that both b and c refer to a. In this case we say that c is an indirect reference to a.³
3. If neither (1) nor (2) is applicable, then the reference in C can be understood without the referee, as if the corresponding entity were introduced in the discourse for the first time. Such references are inferential references.

² If a and b are referential expressions, where the center (directly) realized by b is the same as the one (directly) realized by a, or where it is a role of the center (directly) realized by a, we will say that b refers (back) to a, or b is a bridge reference to a.
³ On the basis of their common semantic representations.

Note that VT is applicable even when the division into units is coarser than in our examples. For instance, Example 1 in its entirety could be taken to comprise a single unit; if it appeared in the context of a larger discourse, it would still be possible to compute its veins (although, of course, the veins would likely be shorter because there are fewer units to consider). It can be proven formally (Cristea, 1998) that when passing from a finer granularity to a coarser one the accessibility constraints are still obeyed. This observation is important in relation to other approaches that search for stability with respect to granularity (see for instance, Walker, 1996).

3 Global coherence

This section shows how VT can predict the inference load for processing global discourse, thus providing an account of discourse coherence.
A corollary of Conjecture C1 is that CT can be applied along the accessibility domains defined by the veins of the discourse structure, rather than to sequentially placed units within a single discourse segment. Therefore, in VT reference domains for any node may include units that are sequentially distant in the text stream, and thus long-distance references (including those requiring "return-pops" (Fox, 1987) over segments that contain syntactically feasible referents) can be accounted for. Thus our model provides a description of global discourse cohesion, which significantly extends the model of local cohesion provided by CT.

CT defines a set of transition types for discourse (Grosz, Joshi, and Weinstein (1995); Brennan, Friedman and Pollard (1987)). A smoothness score for a discourse segment can be computed by attaching an elementary score to each transition between sequential units according to Table 2, summing up the scores for each transition in the entire segment, and dividing the result by the number of transitions in the segment. This provides an index of the overall coherence of the segment.

  Table 2: Smoothness scores for transitions

  CENTER CONTINUATION        4
  CENTER RETAINING           3
  CENTER SHIFTING (SMOOTH)   2
  CENTER SHIFTING (ABRUPT)   1
  NO Cb                      0

A global CT smoothness score can be computed by adding up the scores for the sequence of units making up the whole discourse, and dividing the result by the total number of transitions (number of units minus one). In general, this score will be slightly lower than the average of the scores for the individual segments, since accidental transitions at segment boundaries might also occur. Analogously, a global VT smoothness score can be computed using accessibility domains to determine transitions rather than sequential units.

Conjecture C2: The global smoothness score of a discourse when computed following VT is at least as high as the score computed following CT.

That is, we claim that long-distance transitions computed using VT are systematically smoother than accidental transitions at segment boundaries. Note that this conjecture is consistent with results reported by authors like Passonneau (1995) and Walker (1996), and provides an explanation for their results.

We can also consider anaphora resolution using Cb's computed using accessibility domains. Because a unit can simultaneously occur in several accessibility domains, unification can be applied using the Cf list of one unit and those of possibly several subsequent (although not necessarily adjacent) units. A graph of Cb-unifications can be derived, in which each edge of the graph represents a Cb computation and therefore a unification process.

4 Minimal text

The notion that text summaries can be created by extracting the nuclei from RST trees is well known in the literature (Mann and Thompson (1988)). Most recently, Marcu (1997) has described a method for text summarization based on nuclearity and selective retention of hierarchical fragments. Because his salient units correspond to heads in VT, his results are predicted in our model. That is, the union of heads at a given level in the tree provides a summary of the text at a degree of detail dependent on the depth of that level.

In addition to summarizing entire texts, VT can be used to summarize a given unit or sub-tree of that text.
In effect, we reverse the problem addressed by text summarization efforts so far: instead of attempting to summarize an entire discourse at a given level of detail, we select a single span of text and abstract the minimal text required to understand this span alone when considered in the context of the entire discourse. This provides a kind of focused abstraction, enabling the extraction of sub-texts from larger documents. Because vein expressions for each node include all of the nodes in the discourse within its domain of reference, they identify exactly which parts of the discourse tree are required in order to understand and resolve references for the unit or subtree below that node.

5 Corpus analysis

Because of the lack of large-scale corpora annotated for discourse, our study currently involves only a small corpus of English, Romanian, and French texts. The corpus was prepared using an encoding scheme for discourse structure (Cristea, Ide, and Romary, 1998) based on the Corpus Encoding Standard (CES) (Ide (1998)). The following texts were included in our analysis:

• three short English texts, RST-analyzed by experts and subsequently annotated for reference and Cf lists by the authors;
• a fragment from de Balzac's «Le Père Goriot» (French), previously annotated for co-reference (Bruneseaux and Romary (1997)); RST and Cf list annotation made by the authors;
• a fragment from Alexandru Mitru's «Legendele Olimpului»⁴ (Romanian); structure, reference, and Cf lists annotated by one of the authors.

⁴ «The Legends of Olimp»

The encoding marks referring expressions, links between referring expressions (co-reference or functional), units, relations between units (if known), nuclearity, and the units' Cf lists in terms of referring expressions. We have developed a program⁵ that does the following: builds the tree structure of units and relations between them, adds to each referring expression the index of the unit it occurs in, computes the heads and veins for all nodes in the structure, determines the accessibility domains of the terminal nodes (units), and counts the number of direct and indirect references.

⁵ Written in Java.

Hand-analysis was then applied to determine which references are inferential and therefore do not conform to Conjecture C1, as summarized in Table 5. Among the 318 references in the text, only three references not conforming to Conjecture C1 were found (all of them appear in one of the English texts). However, if the BACKGROUND relation is treated as bi-nuclear,⁶ all three of these references become direct.

⁶ Other bi-nuclear relations are JOIN and SEQUENCE.

  Table 5: Verifying Conjecture C1

  Source    No. of  Total no.  Direct on the   Indirect on the  Inference    How many
            units   of refs    vein (case 1)   vein (case 2)    (case 3)     obey C1
  English     62       97       75   77.3%      14  14.4%        5  5.2%      94   96.9%
  French      48      110       98   89.1%      11  10.0%        1  0.9%     110  100.0%
  Romanian    66      111      104   93.7%       2   1.8%        5  4.5%     111  100.0%
  Total      176      318      277   87.1%      27   8.5%       11  3.5%     315   99.1%

To verify Conjecture C2, Cb's and transitions were first marked following the sequential order of the units (according to classical CT), and a smoothness score was computed. Then, following VT, accessibility domains were used to determine maximal chains of accessibility strings, Cb's and transitions were re-computed following these strings, and a VT smoothness score was similarly computed. The results are summarized in Table 6. They show that the score for VT is better than that for CT in all cases, thus validating Conjecture C2.

  Table 6: Verifying Conjecture C2

  Source    No. of        CT score   Average CT score   VT score   Average VT score
            transitions              per transition                per transition
  English       59           76           1.25              84          1.38
  French        47          109           2.32             116          2.47
  Romanian      65          142           2.18             152          2.34
  Total        173          327           1.89             352          2.03

An investigation of the number of long-distance resolutions yielded the results shown in Table 7. Such resolutions could not have been predicted using CT.

  Table 7: Long distance reference resolution

  Source     No. of long-distance   No. of new
             Cb unifications        referents found
  English            6                    2
  French            11                    1
  Romanian          18                    3
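For concreteness, the smoothness scores of Table 6 reduce to the following computation (a sketch: the labelling of transitions as CONTINUATION, RETAINING, etc. is assumed to come from a standard centering module, and for the VT score the same function is simply applied to the transition sequences induced by accessibility domains instead of textual order).

```python
# Elementary scores per Table 2.
TRANSITION_SCORE = {
    "CENTER_CONTINUATION": 4,
    "CENTER_RETAINING": 3,
    "SMOOTH_SHIFT": 2,
    "ABRUPT_SHIFT": 1,
    "NO_CB": 0,
}

def smoothness(transitions):
    """Average transition score over a sequence of transition labels,
    one label per transition between successive units."""
    if not transitions:
        return 0.0
    return sum(TRANSITION_SCORE[t] for t in transitions) / len(transitions)
```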
The results are summarized in Table 6. They show that the score for VT is better than that forCT in all cases, thus validating. Conjecture C2. An investigation of the number of long-distance resolutions yielded the results shown in Table 7. Such resolutions could not have been predicted using CT. Table 7: Long distance reference resolution Source No of long distance No of new referents Cb unifications found English 6 2 French 11 1 Romanian 18 3 4 ~The Legends of Olimp~ 5 Written in Java. 6 Other bi-nuclear relations are JOIN and SEQUENCE. 284 6. Discussion and related work VT is not a model of anaphora resolution; rather, its accessibility domains provide a means to constrain the resolution of anaphora. The fundamental assumption underlying VT is that an inter-unit reference is possible only if the two units are in a structural relation with one another, even if they are distant from one another in the text stream. Furthermore, inter- unit-references are primarily to nuclei rather than to satellites, reflecting the intuition that nuclei assert the writer's mare ideas and provide the main <<threads>> of the discourse (Mann and Thompson [1988]. This Is shown m the computation of veins over (binary) discourse trees where each pair of descendants of a parent node are either both nuclear or the nuclear node is on the left (a left-polarized tree). In such trees, any reference from a nuclear unit must be to entities contained in linguistic expressions appearing in previously occurring nuclei (although perhaps not any nucleus). On the other hand, satellites are dependent on their nuclei for their meaning and hence may refer to entities introduced within them. The definition of veins formalizes these relationship, s. Given the mapping of Grosz and Sidners (1986) stack- based model of discourse structure to RST structure trees outlined by Moser and Moore (1996), the domains of referentiality defined for left-polarized trees using VT are consistent with those defined using the stack-based model (e.g. Passonneau (1995), Hahn and Strtibe (1997)). However, in cases where the discourse structure is not left-polarized, VT provides a more natural account of referential accessibility than the stack- based model. In non left-polarized trees, at least one satellite precedes its nucleus in the discourse and is therefore its left sibling in the binary discourse tree. The vein definition formalizes the intuition that in a sequence of units A B C, where A and C are satellites of B, B can refer to entities in A (its left satellite), but the subsequent right satellite, C, cannot refer to A due to the interposition of nuclear unit B. In stack-based approaches to referentiality, such configurations pose problems: because B dominates 7 A it must appear below it on the stack, even though it is processed after A. Even if the processing difficulties are overcome, this situation leads to the postulation of cataphoric references when a satellite precedes its nucleus, which is counter- intuitive. Acknowledgements Our thanks go to Daniel Marcu who pointed some weak parts and provided RST analysis and to the TELR1 program who facilitated the second meeting of the three authors. 7 We use Grosz and Sidner's (1986) terminology here, but note the equivalence of dominance in G&S and nucleus/satellite relations in RST pointed out by Moser and Moore (1996). 285 References Brennan, S.E., Walker Friedman, M. and Pollard, C.J. (1987). A Centering Approach to Pronouns. 
Proceedings of the 25th Annual Meeting of the ACL, Stanford, 155-162.

Bruneseaux, F. and Romary, L. (1997). Codage des références et coréférences dans les dialogues homme-machine. Proceedings of ACH/ALLC, Kingston (Ontario).

Cristea, D. (1998). Formal Proofs in Incremental Discourse Processing and Veins Theory. Research Report TR98-2, Dept. of Computer Science, University "A.I. Cuza", Iaşi.

Cristea, D., Ide, N. and Romary, L. (1998). Marking-up Multiple Views of a Text: Discourse and Reference. Proceedings of the First International Conference on Language Resources and Evaluation, Granada, Spain.

Fox, B. (1987). Discourse Structure and Anaphora. Written and Conversational English. No. 48 in Cambridge Studies in Linguistics, Cambridge University Press.

Grosz, B.J., Joshi, A.K. and Weinstein, S. (1995). Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2), 203-225.

Grosz, B. and Sidner, C. (1986). Attention, Intention and the Structure of Discourse. Computational Linguistics, 12, 175-204.

Hahn, U. and Strube, M. (1997). Centered Segmentation: Scaling Up the Centering Model to Global Discourse Structure. Proceedings of EACL/ACL97, Madrid, 104-111.

Ide, N. (1998). Corpus Encoding Standard: Encoding Practices and a Data Architecture for Linguistic Corpora. Proceedings of the First International Conference on Language Resources and Evaluation, Granada, Spain. See also http://www.cs.vassar.edu/CES/.

Mann, W.C. and Thompson, S.A. (1988). Rhetorical structure theory: A theory of text organization. Text, 8:3, 243-281.

Marcu, D. (1997). The rhetorical parsing, summarisation and generation of natural language texts. Ph.D. thesis, Dept. of Computer Science, University of Toronto.

Moser, M. and Moore, J. (1996). Toward a Synthesis of Two Accounts of Discourse Structure. Computational Linguistics, 22:3, 409-420.

Passonneau, R.J. (1995). Using Centering to Relax Gricean Informational Constraints on Discourse Anaphoric Noun Phrases. Research report, Bellcore.

Walker, M.A. (1996). The Cache Memory Model. Computational Linguistics, 22:2, 255-264.

Walker, M.A., Joshi, A.K. and Prince, E.F. (1997). Centering in Naturally-Occurring Discourse: An Overview. In Walker, M.A., Joshi, A.K. and Prince, E.F. (eds.): Centering in Discourse, Oxford University Press.
Automatic Semantic Tagging of Unknown Proper Names

Alessandro CUCCHIARELLI
Università di Ancona
Istituto di Informatica
Via Brecce Bianche
60131 Ancona, Italia
alex@inform.unian.it

Danilo LUZI
Università di Ancona
Istituto di Informatica
Via Brecce Bianche
60131 Ancona, Italia
luzi@inform.unian.it

Paola VELARDI
Università di Roma 'La Sapienza'
Dip. di Scienze dell'Informazione
Via Salaria 113
00198 Roma, Italia
velardi@dsi.uniroma1.it

Abstract

Implemented methods for proper names recognition rely on large gazetteers of common proper nouns and a set of heuristic rules (e.g. Mr. as an indicator of a PERSON entity type). Though the performance of current PN recognizers is very high (over 90%), it is important to note that this problem is by no means a "solved problem". Existing systems perform extremely well on newswire corpora by virtue of the availability of large gazetteers and rule bases designed for specific tasks (e.g. recognition of Organization and Person entity types as specified in recent Message Understanding Conferences, MUC). However, large gazetteers are not available for most languages and applications other than newswire texts and, in any case, proper nouns are an open class. In this paper we describe a context-based method to assign an entity type to unknown proper names (PNs). Like many others, our system relies on a gazetteer and a set of context-dependent heuristics to classify proper nouns. However, due to the unavailability of large gazetteers in Italian, over 20% of detected PNs cannot be semantically tagged. The algorithm that we propose assigns an entity type to an unknown PN based on the analysis of syntactically and semantically similar contexts already seen in the application corpus. The performance of the algorithm is evaluated not only in terms of precision, following the tradition of MUC conferences, but also in terms of Information Gain, an information-theoretic measure that takes into account the complexity of the classification task.

Introduction

In terms of syntactic categories, proper nouns are lexical NPs that can be formed by primitive proper names (Adolfo_Battaglia), groups of proper nouns of different semantic categories (San Paolo di Brescia), and also of non-proper nouns (Banca dei regolamenti internazionali). In the latter case, capital letters are optional, making the problem of PN item identification even more complex.

In the literature, it is accepted that an adequate treatment of proper nouns requires the use of a context-sensitive grammar (McDonald, 1996). McDonald points out that the context sensitivity requirement involves two complementary types of evidence: internal and external. The internal evidence can be derived from the sequence of words in a text (proper nouns and trigger words, such as Inc., &, Ltd., Company, etc.), and is gained in almost all state-of-the-art PN recognizers by the use of large gazetteers and lists of trigger words. The external evidence is the context of a proper noun, which provides classificatory criteria to reinforce internal evidence, if any, or supplies some classificatory evidence. In fact, proper names form an open class, making the incompleteness of gazetteers an obvious problem.

The methods for recognition of proper nouns (PNs) described in the literature closely reflect this view of the problem. PN identification typically includes:

• a gazetteer lookup, which locates simple and complex nominals identifying common PNs, such as companies, person names, locations, etc.;
• a set of patterns or rules, stated in terms of part-of-speech, syntactic or lexical features (e.g. Mr. as an indicator of a PERSON entity type), orthographic features (e.g. capitalization), etc.

Proper noun recognition has recently attracted much attention, especially in the area of Information Extraction, where this problem is known as the Named Entity recognition task. The highest performing systems include large numbers of hand-coded rules, or patterns, such as VIE (Humphreys et al. 1996), the UMass system (Fisher et al. 1997) and Proteus (Grishman et al. 1992), but lately a high performance has been obtained by the use of statistical methods. For example, Nymble (Bikel et al. 1997) learns names using a trained approach based on a variant of Hidden Markov Models. However, a 90% success rate is reached at the price of manually tagging around half a million words. Since PNs are mostly domain-specific, presumably a comparable effort is needed when shifting to different domains.

High performances of the existing systems are in large part the result of many years of studies and research in the area of IE from newswire English texts, promoted and funded by the Message Understanding Conference (MUC) organizers. Yet, there is no evidence that a similar performance could be obtained in other languages and domains, if not at the price of a similar effort for rule writing (or manual training), and for the compilation of a high-coverage gazetteer. A recent study (Palmer and Day, 1997) established that the baseline performances of the PN recognition task for several languages and application domains vary between 34% and 71%. The lower bound is calculated by considering a simple algorithm that recognizes PNs on the basis of a list of frequent proper nouns seen in a training set.

The method we propose in this paper combines symbolic and statistical approaches to classify unknown PNs using context evidence previously extracted from the application corpus. The method can be used to overcome the limitation of small gazetteers and poorly encoded rule bases. Our method is untrained: what is needed is a learning (raw) corpus, a surface syntactic analyzer, a dictionary of synonyms, a list of category names for classifying PNs (we used the categories proposed in the forthcoming MUC-7), and a "start-up" gazetteer and rule base, used to acquire an initial model of typical PN contexts. In the next section, we describe the method in detail. Section 3 is dedicated to a discussion of experimental results.

2 The Method

The problem of PN recognition has been considered in our group in the context of the European project ECRAN, aimed at improving domain adaptability of IE systems through the integrated use of corpora and MRDs. A first version of the Named Entity (NE) recognizer, in Italian, closely reproduced the architecture of the VIE recognizer, developed at the University of Sheffield (Humphreys et al. 1996). Proper noun recognition is initially performed in two steps:

1) common proper nouns are identified using a gazetteer, structured in files and related lists of trigger words for each proper noun category (e.g. "Gulf" for LOCATIONs, or "Association" for ORGANIZATIONs);

2) a context-sensitive grammar of about 250 rules is used to parse proper nouns in contexts. The majority of rules uses internal evidence to identify and classify proper nouns made of complex NPs. For example, the following rule is used to recognize street names:

  rule(tagged_location_np(s_form:[via,' ',F2,' ',F3], sem:A^B),
       [nome(s_form:via, sem:_^_),
        organ_names_np(s_form:F2, sem:_^_),
        num(s_form:F3)])

  Ex: "via Giorgio Marini 34"

When running these first two modules on a one million word corpus of economic news (extracted from the newspaper Il Sole 24 Ore), we obtained the following performances: 84% precision, 85% recall, and about 20% proper nouns correctly identified as such, but NOT classified. Unknown proper nouns are identified initially by the Brill part-of-speech tagger (Brill, 1995). Complex unknown nominals (e.g. Quick Take 200) are partly detected by simple heuristics. One of the motivations for such a high percentage of unknowns and relatively low performance (as compared with state-of-the-art PN recognizers) is that at the present state of implementation the gazetteer has a
For example the following rule is used to recognize street names: rule(tagged_location_np(s form: [via, ", F2 ,' ',F3],sem:A^B), [nome(s..form:via, sem:_a_), organ_names_np( s_form: F2,sem:_^_), num(s_form:F3)]) Ex: " via Giorgio Marini 34 " When running these first two modules on a one million word corpus of economic news (extracted from the newspaper II Sole 24 Ore), we obtained the following performances: 84% precision, 85% recall, about 20% proper nouns correctly identified as such, but NOT classified. Unknown proper nouns are identified initially by the Brill part-of-speech tagger (Brill, 1995). Complex unknown nominals (e.g. Quick Take 200) are partly detected by simple heuristics. One of the motivations for such a high percentage of unknowns and relatively low performance (as compared with state-of- art PN recognizers) is that at the present state of implementation the gazetteer has a 287 limited coveragel; yet, the problem of unknowns is generally recognized as crucial in real-world applications, because oroDer nouns are an open class. We have therefore devised a method to reinforce external evidence, using a corpus-driven algorithm to incrementally update the gazetteer and classification of unknown PNs in running texts. The algorithm to classify unknown proper nouns uses the following linguistic resources: a (raw text) learning corpus in the same domain as the application, a shallow corpus parser, a "seed" gazetteer, and a dictionary of synonyms. The shallow parser (Basili et al. 1994), extracts from the learning corpus elementary syntactic relations such as subject-object, noun-preposition-noun, etc. A syntactic link (hereafter esl) is represented as: esli(wj, mod(typei, wk)) where w i is the head word, Wk is the modifier, ~-ad typei is the type of syntactic relation (e.g. PP(of), PP(for), SUB J-Verb, Verb-DirectObject, etc.). The learning corpus is previously morphologically and syntactically processed. Step 1 and 2 described at the beginning of this section are used to detect PNs. A database of esls including known PNs 2 is then created and used by the algorithm to assign a category to unknown PNs. The algorithm works as follows: let PN_U be an unknown proper noun, i.e. a single word or a complex nominal. Let Cpn = (Cpnl, Cpn2 ..... CpnN) be the set of semantic categories for proper nouns (e.g. Person, Organization, Product etc.). Finally, let ESL be the set of elementary syntactic links (esl) extracted from the 1 The context sensitive grammar closely reflects, with extension, that developed for a similar application in the English VIE system. Therefore, low performance is likely due to the low-coverage gazzetteer. The absence of available linguistic resources in languages other than English is a well known problem. 2Note that the database is not manually inspected for correctness (POS tagging and parsing errors). However, the parser assigns to each detected esl a statistical measure of confidence, called plausibility (Basili et al. 1994b). learning corpus that include PN_U as one of its arguments. For each esli in ESL let: es li(w j, m od(typei, Wk)) = esli(x, PN U) where x=wj or Wk and PN_U =Wk or wj, typei is the syntactic type of esl (e.g. N-di- N, N_N, V-per-N ecc), and further let: pl(esli (x, PN_U) be the plausibility of a detected esl. The plausibility is a measure of the statistical evidence of a detected syntactic link (Basili et al, 1994b), that depends upon local (i.e. at the sentence level) syntactic ambiguity and global corpus evidence. 
Finally, let: - ESLA be a set of esls defined as follows: for each esli(x,PN_U) in ESL put in ESLA the set of eslj(x,PNj), in the corpus, with type=typei, x in the same position of esli, and PNi a known proper noun, in the same position as PN_U in esli, ESLB be the set of eslk defined as follows: for each esli(x,PN_U) in ESL put in ESLB the set of eslj(w,PNj), in the corpus, with type=typei, w in the same position of x in esli, Sim(w,x)> 8, and PNj a known proper noun, in the same position as PNU in esl i. Sim(w,x) is a similarity measure between x and w. In our first experiments, Sim(w,x)> 8 iff w is a synonym of x. For each semantic category Con i compute evidence(Cpnj) as shown ih-Figure 1, where: amb(esl(x, PNi)) is a measure of the ambiguity of x and PNj in esli; - tx and 13 are experimentally determined weights (currently, t~=0.7 and 13=0.3). The selected category for PN_U is: C=argmax( evidence( Cpnk) )=maxj( evidence( Cpnj) ) The underlying hypothesis is that, in a given application corpus, a PN has a unique sense. This is a reasonable restriction supported by empirical evidence (see also (Gale et al. 1992)). An alternative solution would be to select the "best performing" tags, and then apply 288 • (1)ev idence (Cpn j) = ~, (pl(esl i (x, PNj)) * amb(esl i (x, PN~))) (~ esll EESL a ,C(PNI)=Ct,,, j Epl(esli(x,PNj) esl i ~ESL ~ ,anyPN + E (pl (esl i (w, PNj)) * arab (esl i (x, PNj))) ~ eslj ,~ESL a .C(PN j )=Cp./ Epl(esl i(w,PNj) eslj EESL s ,anyPN Figure 1 - The evidence(Cpnj) computation formula some WSD algorithm to predict the precise sense• in running texts. 3 Discussion of the Experiment In our experiment, we used a corpus of one million words extracted from articles in the II Sole 24 Ore economic newspaper. A database of 76055 esls including proper nouns was obtained. Table 1 shows the distribution of esls by category, and the prior probability (i.e. relative distribution) of each category. Category N ° ESLi Prior Prob. ORGANIZ 26418 0.347 LOCATION 25087 0.330 PERSON 20558 0.270 DATE 544 0.007 TIME 879 0.011 MONEY 1076 0.014 PERCENT 520 0.007 PRODUCT 2671 0.035 OTHERS 1112 0.015 Tot.ESL 76055 Table 1 - PN distribution by category The semantic categories in Table 1, with the addition of Product, are those that will be used for Named Entity task evaluation in the forthcoming MUC-7 contest. In Figure 2, a complete experiment is reported. In the figure, an esl is represented as a list, for example (0.5 G_N_P_N Quick_Take_200 0 1 in documento). The detected esl is 'Quick_Take_200 in documento ' (Quick_Take_200 in document), the syntactic type is G N P N (noun- preposition-noun), the plausibility is 0.5, the initial category of Quick_Take_200 is 0 (= unknown) and its ambiguity is initially set to 1. It is seen in the figure that some detected esls do not contribute to the computation of (1) (e.g. acquisire con Quick Take_200 to acquire with Quick_Take_200) while some other esl turns out to be particularly informative (e.g. qualita' di Quick_Take_200 quality of Quick_Take_200) For the name Quick_Take_200 (a software product), the category 8 is finally selected (PRODUCT, as shown in the figure). An extended experiment was designed as follows: We selected from the corpus 35 PNs for each of the . following categories: Organization, Person, Location and Product 3. The PNs are selected by ranges of frequency in the corpus, except for Producs, that are very rare in our excerpt of the II Sole 24 Ore: here we selected the 35 top frequency PNs. 
We then removed each of the 140 PNs from the gazetteer, one at a time, and attempted a re-classification using our algorithm. To evaluate the performance we used, in addition to the classical Precision measure, the Information Gain (Kononenko and Bratko, 1991). The Information Gain is an information-theoretic measure that takes into account the complexity of the classification task.

  PROPER NAME: Quick_Take_200

  0.5   G_N_P_N  Quick_Take_200 0 1  in documento
  1.0   G_N_V    Quick_Take_200 0 1  nil dotare
        ESLB= 1.0  G_N_V  Apple 1 1  nil fornire
        ESLB= 1.0  G_N_V  Power_Pc 1 1  nil fornire
        ESLB= 1.0  G_N_V  Tank_Francaise_Chronoreflex 8 1
  0.1   G_V_P_N  acquisire con Quick_Take_200 0 1
  0.333 G_N_P_N  qualita' di Quick_Take_200 0 1
        ESLA= 1.0  G_N_P_N  qualita' di ... 8 1
  [...]

  Coefficient α: 0.7
  Coefficient β: 0.3

  CLASS        SUM_ESLA   SUM_ESLB   EVIDENCE
  1 ORG          0.000      2.658      0.109
  2 LOC          0.333      0.750      0.205
  3 PERSON       0.000      1.666      0.068
  4 DATE         0.000      0.000      0.000
  5 TIME         0.000      0.000      0.000
  6 MONEY        0.000      0.000      0.000
  7 PERCENT      0.000      0.000      0.000
  8 PRODUCT      1.000      1.833      0.600
  9 OTHERS       0.000      0.367      0.015

  SUM_ESLA = 1.333   SUM_ESLB = 7.274

  Max evidence category is: PRODUCT
  Selected category: PRODUCT

Figure 2 – A complete example

If P(C) is the prior (a-priori) probability⁴ that an instance c is a member of class C, and P'(C) is the probability of c ∈ C, as computed by the classifier in a given test t_i, the Information Gain I(t_i) is defined as:

  I(t_i) = log(1 − P(C)) − log(1 − P'(C))   if P(C) > P'(C)
  I(t_i) = log(P'(C)) − log(P(C))           if P'(C) > P(C)

That is, if the classification is wrong, I(t_i) is a penalty as high as the classification task was an easy one (i.e. the prior probability of C was high). If the classification is correct, I(t_i) is a prize as high as the classification task was complex (i.e. the prior probability of C was low). Over a test set of T cases, I is given by:

  I = (1/T) Σ_{i=1}^{T} I(t_i)

⁴ The prior probability can be easily computed in a learning set as the ratio between the number of training instances belonging to a class C and the total number of training instances. In our experiment, the prior probabilities are listed in Table 1.
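The Information Gain computation itself is a few lines; the sketch below assumes probabilities strictly between 0 and 1, and uses base-2 logarithms so that the scores read in bits (the base is not stated here, but the 0.5-bit comparison cited below suggests bits).

```python
import math

def info_gain(prior, posterior):
    """I(t_i) for one test case: prior = P(C), posterior = P'(C),
    both assumed strictly between 0 and 1."""
    if posterior > prior:      # correct(ish) decision: prize grows with task difficulty
        return math.log2(posterior) - math.log2(prior)
    else:                      # wrong decision: penalty grows with task easiness
        return math.log2(1 - prior) - math.log2(1 - posterior)

def average_info_gain(cases):
    """cases: list of (prior, posterior) pairs over a test set of T cases."""
    return sum(info_gain(p, q) for (p, q) in cases) / len(cases)
```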
Table 2 illustrates the results. It is seen that unknown PNs in the three major categories (those for which there is evidence in the corpus and in the gazetteer) have a very high probability of being correctly classified (up to 100% for Organizations). On the contrary, we obtain poor performance with Products. However, Product is interesting because:
- there are no more than 50-60 product names in the gazetteer (which we manually added for the purpose of this experiment);
- there are no contextual rules for Products in the context-sensitive grammar.

Thus, both prior probability and prior knowledge on Products are close to zero. This is numerically evidenced by the Information Gain: though we are not learning much about Products, the Information Gain is higher than for the other categories, and also as an absolute value (in (Kononenko and Bratko, 1991) a 0.5 bit improvement is among the highest measured values in a comparative experiment). In addition, the relative precision of classifying PNs as Product is 100%. This means that most products are misclassified, but, if something is classified as Product, this information can be reliably used to enrich the gazetteer.

Category    Precision   Inf. Gain
ORGANIZ     100.00%     0.11
LOCATION     91.43%     0.14
PERSON       80.00%     0.23
PRODUCT      22.86%     0.65

Table 2 - Precision and Information Gain of the method

Table 3 reports an experiment on a small corpus extracted from another portion of Il Sole 24 Ore, indexed as "New Products".

Category   N. esls   Prior Prob.
ORGANIZ       735     0.160
LOCATION      583     0.126
PERSON        902     0.196
DATE            7     0.001
TIME            8     0.001
MONEY          31     0.007
PERCENT       114     0.025
PRODUCT      2184     0.473
OTHERS        262     0.057
Tot. ESL     4615

           Precision   Inf. Gain
PRODUCT      88.57%     0.12

Table 3 - Experiment with a small "New Product" corpus

Here, the prior probability of Products is obviously higher, though - due to the poor gazetteer - there is an elevated number of unrecognized products. In this corpus we selected and then removed 35 product names, and the system now correctly classifies 31 of them. Notice that in this experiment the gazetteer and the PN grammar are the same as before; the only difference is that the corpus provides more evidence (contexts) concerning those products that have been recognized as such. Notice, on the other hand, that the Information Gain is now very low.

4 Conclusions and Future Work

Our current implementation of a PN analyzer still has a limited performance, caused by a variety of problems that range from the unsatisfactory performance of state-of-the-art POS taggers on inflected languages, to the limited availability of linguistic resources in Italian, such as PN gazetteers. The algorithm that we propose has indeed the purpose of overcoming the limitations of gazetteers and manually defined contextual rules for PN recognition. In (Cucchiarelli et al., 1998) we also show how to extend our method to incrementally update the initial gazetteer.

The performance of the proposed algorithm is more than satisfactory. A comparison with existing systems is difficult because in the literature global PN recognition performances are reported, without considering the semantic classification of unknowns as a subtask. The only exception is (Wacholder et al., 1997), where the reported performance for the sole semantic disambiguation task of PNs is 79%. In that paper, however, semantic disambiguation is performed among a lower number of classes(5). The performance of our system is clearly affected by the dimension of the initial seed gazetteer and contextual rules.
If the sets ESLA and ESLB are large enough, obviously more examples of similar contexts are found, even for unknown PNs with a single occurrence. In our test experiment, we always managed to find at least one or two similar contexts of an unknown PN, but in some cases they were misleading and caused a wrong classification, especially for Products. However, it may be possible to increase the evidence provided by the set ESLB by including contexts in which the words are not strictly synonyms, but belong to the same semantic category. One such experiment requires a word taxonomy, like for example WordNet. WordNet is currently unavailable in Italian (the first known results of the EuroWordNet project are too preliminary), therefore we plan to reproduce our experiment in English.

(5) One of the advantages of Information Gain is that, if widely adopted, this measure facilitates the comparison among learning methods with different complexity of the classification task.

Another strategy to improve performance in the absence of substantial evidence is the definition of general (not contextual) rules to capture unknown complex nominals. For example, looking at the Product experiment in more detail, we found that product names are often formed by very complex nominals, e.g. Fiat_Marea_Weekend_2000 (the name of a car model). Capturing complex nominals in the absence of anchors and specific contextual rules (here the only anchor is Fiat, which appears in the gazetteer as an Organization name) may be difficult, and if a complex nominal is not captured as a unit, the resulting syntactic context may be misleading (e.g. N_ADJ(Fiat_Marea_Weekend, 2000)). We believe that finding class-independent heuristics for capturing complex nominals is a more "general" way of improving the performance of the method, rather than adding specific rules for specific entity types and enriching the gazetteer.

Acknowledgments

The authors would like to thank Mr. Enzo Peracchia for his support in the software development and for aiding with the experiments. This research has been funded under the EC project ECRAN LE-2110.

References

Basili, R., Pazienza, M.T., Velardi, P. (1994) A (not-so) shallow parser for collocational analysis. Proc. of COLING '94, Kyoto, Japan, 1994.
Basili, R., Marziali, A., Pazienza, M.T. (1994b) Modelling syntax uncertainty in lexical acquisition from texts. Journal of Quantitative Linguistics, vol. 1, n. 1, 1994.
Bikel, D., Miller, S., Schwartz, R. and Weischedel, R. (1997) Nymble: a High-Performance Learning Name-finder. In Proc. of the 5th Conference on Applied Natural Language Processing, Washington, 1997.
Brill, E. (1995) Transformation-based Error-Driven Learning and Natural Language Processing: A case study of Part of Speech Tagging. Computational Linguistics, vol. 21, n. 4, 1995.
Cucchiarelli, A., Luzi, D., Velardi, P. (1998) Using Corpus Evidence for Automatic Gazetteer Extension. In Proc. of the First Conference on Language Resources and Evaluation, Granada, May 1998.
ECRAN: Extraction of Content: Research at Near Market. http://www2.echo.lu/langeng/en/le1/ecra~ecran.html
Fisher, D., Soderland, S., McCarthy, J., Feng, F. and Lenhart, W. (1996) Description of the UMass system as used for MUC-6. http://ciir.cs.umass.edu/info/psfiles/tepubs/tepubs.html
Gale, W., Church, K.W. and Yarowsky, D. (1992) One sense per discourse. In Proc. of the DARPA Speech and Natural Language Workshop, Harriman, NY, February 1992.
Grishman, R., Macleod, C. and Meyers, A. (1992) NYU: description of the Proteus System as used for MUC-4. In Proc.
of the Fourth Message Understanding Conference (MUC-4), June 1992.
Humphreys (1996) VIE Technical Specifications, 1996/10/1815. ILASH, University of Sheffield.
Kononenko, I. and Bratko, I. (1991) Information-based Evaluation Criterion for Classifier's Performance. Machine Learning 6, pp. 67-80, 1991.
Mani, I., McMillian, R., Luperfoy, S., Lusher, E., Laskowski, S. (1996) Identifying Unknown Proper Names in Newswire Text. In Corpus Processing for Lexical Acquisition, J. Pustejovsky and B. Boguraev (eds.), MIT Press, 1996.
McDonald, D. (1996) Internal and External Evidence in the Identification and Semantic Categorization of Proper Names. In Corpus Processing for Lexical Acquisition, J. Pustejovsky and B. Boguraev (eds.), MIT Press, 1996.
Paik, W., Liddy, E., Yu, E. and McKenna, M. (1996) Categorizing and standardizing proper nouns for efficient Information Retrieval. In Corpus Processing for Lexical Acquisition, J. Pustejovsky and B. Boguraev (eds.), MIT Press, 1996.
Palmer, D. and Day, D. (1997) A Statistical Profile of the Named Entity Task. In Proc. of the 5th Conference on Applied Natural Language Processing, Washington, 1997.
Wacholder, N., Ravin, Y. and Choi, M. (1997) Disambiguation of Proper Names in Text. In Proc. of the 5th Conference on Applied Natural Language Processing, Washington, 1997.
Investigating regular sense extensions based on intersective Levin classes

Hoa Trang Dang, Karin Kipper, Martha Palmer, Joseph Rosenzweig
Department of Computer and Information Sciences and the Institute for Research in Cognitive Science
University of Pennsylvania
400A, 3401 Walnut Street/6228
Philadelphia, PA 19104, USA
htd/kipper/mpalmer/[email protected]

Abstract

In this paper we specifically address questions of polysemy with respect to verbs, and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases. We see verb classes as the key to making generalizations about regular extensions of meaning. Current approaches to English classification, Levin classes and WordNet, have limitations in their applicability that impede their utility as general classification schemes. We present a refinement of Levin classes, intersective sets, which are a more fine-grained classification and have more coherent sets of syntactic frames and associated semantic components. We have preliminary indications that the membership of our intersective sets will be more compatible with WordNet than the original Levin classes. We also have begun to examine related classes in Portuguese, and find that these verbs demonstrate similarly coherent syntactic and semantic properties.

1 Introduction

The difficulty of achieving adequate hand-crafted semantic representations has limited the field of natural language processing to applications that can be contained within well-defined subdomains. The only escape from this limitation will be through the use of automated or semi-automated methods of lexical acquisition. However, the field has yet to develop a clear consensus on guidelines for a computational lexicon that could provide a springboard for such methods, although attempts are being made (Pustejovsky, 1991), (Copestake and Sanfilippo, 1993), (Lowe et al., 1997), (Dorr, 1997).* One of the most controversial areas has to do with polysemy. What constitutes a clear separation into senses for any one verb, and how can these senses be computationally characterized and distinguished? The answer to this question is the key to breaking the bottleneck of semantic representation that is currently the single greatest limitation on the general application of natural language processing techniques.

In this paper we specifically address questions of polysemy with respect to verbs, and how regular extensions of meaning can be achieved through the adjunction of particular syntactic phrases. We base these regular extensions on a fine-grained variation on Levin classes, intersective Levin classes, as a source of semantic components associated with specific adjuncts. We also examine similar classes in Portuguese, and the predictive powers of alternations in this language with respect to the same semantic components. The difficulty of determining a suitable lexical representation becomes multiplied when more than one language is involved and attempts are made to map between them. Preliminary investigations have indicated that a straightforward translation of Levin classes into other languages is not feasible (Jones et al., 1994), (Nomura et al., 1994), (Saint-Dizier, 1996). However, we have found interesting parallels in how Portuguese and English treat regular sense extensions.

* The authors would like to acknowledge the support of DARPA grant N66001-94C-6043, ARO grant DAAH04-94G-0426, and CAPES grant 0914/95-2.
2 Classifying verbs

Two current approaches to English verb classifications are WordNet (Miller et al., 1990) and Levin classes (Levin, 1993). WordNet is an online lexical database of English that currently contains approximately 120,000 sets of noun, verb, adjective, and adverb synonyms, each representing a lexicalized concept. A synset (synonym set) contains, besides all the word forms that can refer to a given concept, a definitional gloss and - in most cases - an example sentence. Words and synsets are interrelated by means of lexical and semantic-conceptual links, respectively. Antonymy or semantic opposition links individual words, while the super-/subordinate relation links entire synsets. WordNet was designed principally as a semantic network, and contains little syntactic information.

Levin verb classes are based on the ability of a verb to occur or not occur in pairs of syntactic frames that are in some sense meaning preserving (diathesis alternations) (Levin, 1993). The distribution of syntactic frames in which a verb can appear determines its class membership. The fundamental assumption is that the syntactic frames are a direct reflection of the underlying semantics. Levin classes are supposed to provide specific sets of syntactic frames that are associated with the individual classes.

The sets of syntactic frames associated with a particular Levin class are not intended to be arbitrary, and they are supposed to reflect underlying semantic components that constrain allowable arguments. For example, break verbs and cut verbs are similar in that they can all participate in the transitive and in the middle construction, John broke the window, Glass breaks easily, John cut the bread, This loaf cuts easily. However, only break verbs can also occur in the simple intransitive, The window broke, *The bread cut. In addition, cut verbs can occur in the conative, John valiantly cut/hacked at the frozen loaf, but his knife was too dull to make a dent in it, whereas break verbs cannot, *John broke at the window. The explanation given is that cut describes a series of actions directed at achieving the goal of separating some object into pieces. It is possible for these actions to be performed without the end result being achieved, but where the cutting manner can still be recognized, i.e., John cut at the loaf. Where break is concerned, the only thing specified is the resulting change of state where the object becomes separated into pieces. If the result is not achieved, there are no attempted breaking actions that can still be recognized.

2.1 Ambiguities in Levin classes

It is not clear how much WordNet synsets should be expected to overlap with Levin classes, and preliminary indications are that there is a wide discrepancy (Dorr and Jones, 1996), (Jones and Onyshkevych, 1997), (Dorr, 1997). However, it would be useful for the WordNet senses to have access to the detailed syntactic information that the Levin classes contain, and it would be equally useful to have more guidance as to when membership in a Levin class does in fact indicate shared semantic components. Of course, some Levin classes, such as braid (bob, braid, brush, clip, coldcream, comb, condition, crimp, crop, curl, etc.) are clearly not intended to be synonymous, which at least partly explains the lack of overlap between Levin and WordNet.
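As an illustration of the frame-based view of class membership just described, the sketch below encodes the break/cut contrast as sets of frames. The frame labels and the tiny lexicon are our own invented stand-ins, not an existing resource:

```python
# Hypothetical frame inventory: each verb maps to the set of
# diathesis frames it can occur in (after Levin, 1993).
FRAMES = {
    "break": {"transitive", "middle", "intransitive"},
    "shatter": {"transitive", "middle", "intransitive"},
    "cut": {"transitive", "middle", "conative"},
    "hack": {"transitive", "middle", "conative"},
}

def verbs_matching(required, forbidden=frozenset()):
    """A Levin-style class: verbs taking all `required` frames
    and none of the `forbidden` ones."""
    return {v for v, fs in FRAMES.items()
            if required <= fs and not (forbidden & fs)}

break_class = verbs_matching({"transitive", "middle", "intransitive"},
                             forbidden={"conative"})
cut_class = verbs_matching({"transitive", "middle", "conative"},
                           forbidden={"intransitive"})
print(break_class)  # {'break', 'shatter'}
print(cut_class)    # {'cut', 'hack'}
```

Defining a class by frames a verb does *not* take is exactly the move the next paragraph questions.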
The association of sets of syntactic frames with individual verbs in each class is not as straightforward as one might suppose. For instance, carry verbs are described as not taking the conative, *The mother carried at the baby, and yet many of the verbs in the carry class (push, pull, tug, shove, kick) are also listed in the push/pull class, which does take the conative. This listing of a verb in more than one class (many verbs are in three or even four classes) is left open to interpretation in Levin. Does it indicate that more than one sense of the verb is involved, or is one sense primary, and the alternations for that class should take precedence over the alternations for the other classes in which the verb is listed? The grounds for deciding that a verb belongs in a particular class because of the alternations that it does not take are elusive at best.

3 Intersective Levin classes

We augmented the existing database of Levin semantic classes with a set of intersective classes, which were created by grouping together subsets of existing classes with overlapping members. All subsets were included which shared a minimum of three members. If only one or two verbs were shared between two classes, we assumed this might be due to homophony, an idiosyncrasy involving individual verbs rather than a systematic relationship involving coherent sets of verbs. This filter allowed us to reject the potential intersective class that would have resulted from combining the remove verbs with the scribble verbs, for example. The sole member of this intersection is the verb draw. On the other hand, the scribble verbs do form an intersective class with the performance verbs, since paint and write are also in both classes, in addition to draw. The algorithm we used is given in Figure 1 (an illustrative sketch in code is given below).

1. Enumerate all sets S = {c1, ..., cn} of semantic classes such that |c1 ∩ ... ∩ cn| ≥ ε, where ε is a relevance cut-off.
2. For each such S = {c1, ..., cn}, define an intersective class I_S such that a verb v ∈ I_S iff v ∈ c1 ∩ ... ∩ cn, and there is no S' = {c'1, ..., c'm} such that S ⊂ S' and v ∈ c'1 ∩ ... ∩ c'm (subset criterion).

Figure 1: Algorithm for identifying relevant semantic-class intersections

We then reclassified the verbs in the database as follows. A verb was assigned membership in an intersective class if it was listed in each of the existing classes that were combined to form the new intersective class. Simultaneously, the verb was removed from the membership lists of those existing classes.
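The following sketch is our own reading of the procedure in Figure 1, with ε = 3 as in the paper; the class names and members are illustrative only:

```python
from itertools import combinations

EPSILON = 3  # relevance cut-off: minimum number of shared members

def intersective_classes(classes, epsilon=EPSILON):
    """classes: dict mapping class name -> set of verbs.
    Returns a dict mapping frozensets of class names -> verb sets."""
    # Step 1: enumerate combinations with a large enough intersection.
    candidates = {}
    names = sorted(classes)
    for r in range(2, len(names) + 1):
        for combo in combinations(names, r):
            shared = set.intersection(*(classes[c] for c in combo))
            if len(shared) >= epsilon:
                candidates[frozenset(combo)] = shared
    # Step 2 (subset criterion): keep a verb only in the most specific
    # combination containing it, dropping it from proper subsets.
    result = {}
    for combo, verbs in candidates.items():
        kept = {v for v in verbs
                if not any(combo < other and v in members
                           for other, members in candidates.items())}
        if kept:
            result[combo] = kept
    return result

# Toy example in the spirit of the split/push-pull/carry discussion:
classes = {
    "split":     {"pull", "push", "shove", "tug", "kick", "cut", "rip"},
    "push/pull": {"pull", "push", "shove", "tug", "kick", "press"},
    "carry":     {"pull", "push", "shove", "tug", "kick", "carry", "haul"},
}
for combo, verbs in intersective_classes(classes).items():
    print(sorted(combo), sorted(verbs))
```

On this toy input only the triple {carry, push/pull, split} survives, since every shared verb also belongs to the larger combination; this mirrors the triple-listed verbs discussed in the next subsection.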
3.1 Using intersective Levin classes to isolate semantic components

Some of the large Levin classes comprise verbs that exhibit a wide range of possible semantic components, and could be divided into smaller subclasses. The split verbs (cut, draw, kick, knock, push, rip, roll, shove, slip, split, etc.) do not obviously form a homogeneous semantic class. Instead, in their use as split verbs, each verb manifests an extended sense that can be paraphrased as "separate by V-ing," where "V" is the basic meaning of that verb (Levin, 1993). Many of the verbs (e.g., draw, pull, push, shove, tug, yank) that do not have an inherent semantic component of "separating" belong to this class because of the component of force in their meaning. They are interpretable as verbs of splitting or separating only in particular syntactic frames (I pulled the twig and the branch apart, I pulled the twig off (of) the branch, but not *I pulled the twig and the branch). The adjunction of the apart adverb adds a change of state semantic component with respect to the object which is not present otherwise.

These fringe split verbs appear in several other intersective classes that highlight the force aspect of their meaning. Figure 2 depicts the intersection of split, carry and push/pull.

[Figure 2: Intersective class formed from Levin carry, push/pull and split verbs - verbs in parentheses are not listed by Levin in all the intersecting classes but participate in all the alternations]

The intersection between the push/pull verbs of exerting force, the carry verbs and the split verbs illustrates how the force semantic component of a verb can also be used to extend its meaning so that one can infer a causation of accompanied motion. Depending on the particular syntactic frame in which they appear, members of this intersective class (pull, push, shove, tug, kick, draw, yank)* can be used to exemplify any one (or more) of the component Levin classes.

1. Nora pushed the package to Pamela. (carry verb: implies causation of accompanied motion, no separation)
2. Nora pushed at/against the package. (verb of exerting force: no separation or causation of accompanied motion implied)
3. Nora pushed the branches apart. (split verb: implies separation, no causation of accompanied motion)
4. Nora pushed the package. (verb of exerting force: no separation implied, but causation of accompanied motion possible)
5. *Nora pushed at the package to Pamela.

* Although kick is not listed as a verb of exerting force, it displays all the alternations that define this class. Similarly, draw and yank can be viewed as carry verbs although they are not listed as such. The list of members for each Levin verb class is not always complete, so to check if a particular verb belongs to a class it is better to check that the verb exhibits all the alternations that define the class. Since intersective classes were built using membership lists rather than the set of defining alternations, they were similarly incomplete. This is an obvious shortcoming of the current implementation of intersective classes, and might affect the choice of 3 as a relevance cut-off in later implementations.

Although the Levin classes that make up an intersective class may have conflicting alternations (e.g., verbs of exerting force can take the conative alternation, while carry verbs cannot), this does not invalidate the semantic regularity of the intersective class. As a verb of exerting force, push can appear in the conative alternation, which emphasizes its force semantic component and ability to express an "attempted" action where any result that might be associated with the verb (e.g., motion) is not necessarily achieved; as a carry verb (used with a goal or directional phrase), push cannot take the conative alternation, which would conflict with the core meaning of the carry verb class (i.e., causation of motion). The critical point is that, while the verb's meaning can be extended to either "attempted" action or directed motion, these two extensions cannot co-occur - they are mutually exclusive. However the simultaneous potential of mutually exclusive extensions is not a problem. It is exactly those verbs that are triple-listed in the split/push/carry intersective class (which have force exertion as a semantic component) that can take the conative. The carry verbs that are not in the intersective class (carry, drag, haul, heft, hoist, lug, tote, tow) are more "pure" examples of the carry class and always imply the achievement of causation of motion. Thus they cannot take the conative alternation.
3.2 Comparisons to WordNet

Even though the Levin verb classes are defined by their syntactic behavior, many reflect semantic distinctions made by WordNet, a classification hierarchy defined in terms of purely semantic word relations (synonyms, hypernyms, etc.). When examining in detail the intersective classes just described, which emphasize not only the individual classes, but also their relation to other classes, we see a rich semantic lattice much like WordNet. This is exemplified by the Levin cut verbs and the intersective class formed by the cut verbs and split verbs. The original intersective class (cut, hack, hew, saw) exhibits alternations of both parent classes, and has been augmented with chip, clip, slash, snip since these cut verbs also display the syntactic properties of split verbs. WordNet distinguishes two subclasses of cut, differentiated by the type of result:

1. Manner of cutting that results in separation into pieces (chip, clip, cut, hack, hew, saw, slash, snip), having cut, separate with an instrument as an immediate hypernym.
2. Manner of cutting that doesn't separate completely (scrape, scratch), having cut into, incise as an immediate hypernym, which in turn has cut, separate with an instrument as an immediate hypernym.

This distinction appears in the second-order Levin classes as membership vs. non-membership in the intersective class with split. Levin verb classes are based on an underlying lattice of partial semantic descriptions, which are manifested indirectly in diathesis alternations. Whereas high level semantic relations (synonym, hypernym) are represented directly in WordNet, they can sometimes be inferred from the intersection between Levin verb classes, as with the cut/split class.

However, other intersective classes, such as the split/push/carry class, are no more consistent with WordNet than the original Levin classes. The most specific hypernym common to all the verbs in this intersective class is move, displace, which is also a hypernym for other carry verbs not in the intersection. In addition, only one verb (pull) has a WordNet sense corresponding to the change of state - separation semantic component associated with the split class. The fact that the split sense for these verbs does not appear explicitly in WordNet is not surprising since it is only an extended sense of the verbs, and separation is inferred only when the verb occurs with an appropriate adjunct, such as apart. However, apart can also be used with other classes of verbs, including many verbs of motion. To explicitly list separation as a possible sense for all these verbs would be extravagant when this sense can be generated from the combination of the adjunct with the force (potential cause of change of physical state) or motion (itself a special kind of change of state, i.e., of position) semantic component of the verb. WordNet does not currently provide a consistent treatment of regular sense extension (some are listed as separate senses, others are not mentioned at all). It would be straightforward to augment it with pointers indicating which senses are basic to a class of verbs and which can be generated automatically, and include corresponding syntactic information.
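The kind of check described above (finding the most specific hypernym shared by the verbs of an intersective class) is easy to reproduce with the modern NLTK interface to WordNet. The rough sketch below is our own illustration against today's WordNet, whose sense inventory differs from the 1990 version cited in the paper; searching over all sense choices per lemma is a brute-force simplification:

```python
from itertools import product
from nltk.corpus import wordnet as wn

def most_specific_common_hypernym(verbs):
    """Deepest hypernym shared by some sense choice of every verb.
    Verb taxonomies in WordNet are shallow and fragmented, so the
    result may be empty for heterogeneous verb sets."""
    sense_sets = [wn.synsets(v, pos=wn.VERB) for v in verbs]
    best, best_depth = [], -1
    for choice in product(*sense_sets):  # pick one sense per verb
        # Fold pairwise lowest-common-hypernym over the sense choice.
        lch = choice[0].lowest_common_hypernyms(choice[1])
        for s in choice[2:]:
            lch = [h for prev in lch
                   for h in prev.lowest_common_hypernyms(s)]
        for h in lch:
            if h.min_depth() > best_depth:
                best, best_depth = [h], h.min_depth()
    return best

print(most_specific_common_hypernym(["push", "pull", "shove", "tug"]))
```

Whether this returns something like move/displace, as the paper reports for the split/push/carry class, depends on the WordNet release used.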
3.3 Sense extension for manner of motion

Figure 3 shows intersective classes involving two classes of verbs of manner of motion (run and roll verbs) and a class of verbs of existence (meander verbs). Roll and run verbs have semantic components describing a manner of motion that typically, though not necessarily, involves change of location. In the absence of a goal or path adjunct they do not specify any direction of motion, and in some cases (e.g., float, bounce) require the adjunct to explicitly specify any displacement at all. The two classes differ in that roll verbs relate to manners of motion characteristic of inanimate entities, while run verbs describe manners in which animate entities can move. Some manner of motion verbs allow a transitive alternation in addition to the basic intransitive. When a roll verb occurs in the transitive (Bill moved the box across the room), the subject physically causes the object to move, whereas the subject of a transitive run verb merely induces the object to move (the coach ran the athlete around the track). Some verbs can be used to describe motion of both animate and inanimate objects, and thus appear in both roll and run verb classes. The slide class partitions this roll/run intersection into verbs that can take the transitive alternation and verbs that cannot (drift and glide cannot be causative, because they are not typically externally controllable). Verbs in the slide/roll/run intersection are also allowed to appear in the dative alternation (Carla slid the book to Dale, Carla slid Dale the book), in which the sense of change of location is extended to change of possession.

When used intransitively with a path prepositional phrase, some of the manner of motion verbs can take on a sense of pseudo-motional existence, in which the subject does not actually move, but has a shape that could describe a path for the verb (e.g., The stream twists through the valley). These verbs are listed in the intersective classes with meander verbs of existence.

[Figure 3: Intersections between roll and run verbs of motion and meander verbs of existence, showing the slide and meander intersective classes]

4 Cross-linguistic verb classes

The Portuguese verbs we examined behaved much more similarly to their English counterparts than we expected. Many of the verbs participate in alternations that are direct translations of the English alternations. However, there are some interesting differences in which sense extensions are allowed.

4.1 Similar sense extensions

We have made a preliminary study of the Portuguese translation of the carry verb class. As in English, these verbs seem to take different alternations, and the ability of each to participate in an alternation is related to its semantic content. Table 1 shows how these Portuguese verbs naturally cluster into two different subclasses, based on their ability to take the conative and apart alternations as well as path prepositions. These subclasses correspond very well to the English subclasses created by the intersective class. The conative alternation in Portuguese is mainly contra (against), and the apart alternation is mainly separando (separating). For example, Eu puxei o ramo e o galho separando-os (I pulled the twig and the branch apart), and Ele empurrou contra a parede (He pushed against the wall).

English   Portuguese                 Conat.   Apart   Path
carry     levar                      no       no      yes
drag      arrastar                   no       yes     yes
haul      fretar                     no       no      yes
heft      levantar com dificuldade   no       no      yes
hoist     icar                       no       no      yes
lug       levar com dificuldade      no       no      yes
tote      levar facilmente           no       no      yes
tow       rebocar                    no       no      yes
shove     empurrar com violencia     yes      yes     yes
push      empurrar                   yes      yes     yes
draw      puxar                      yes      yes     yes
pull      puxar                      yes      yes     yes
kick      chutar                     yes      yes     yes
tug       puxar com forca            yes      yes     yes
yank      arrancar                   yes      yes     yes

Table 1: Portuguese carry verbs with their alternations

4.2 Changing class membership

We also investigated the Portuguese translation of some intersective classes of motion verbs. We selected the slide/roll/run, meander/roll and roll/run intersective classes. Most verbs have more than one translation into Portuguese, so we chose the translation that best described the meaning or that had the same type of arguments as described in Levin's verb classes.

The elements of the slide/roll/run class are rebater (bounce), flutuar (float), rolar (roll) and deslizar (slide). The resultative in Portuguese cannot be expressed in the same way as in English. It takes a gerund plus a reflexive, as in A porta deslizou abrindo-se (The door slid opening itself). Transitivity is also not always preserved in the translations. For example, flutuar does not take a direct object, so some of the alternations that are related to its transitive meaning are not present. For these verbs, we have the induced action alternation by using the light verb fazer (make) before the verb, as in Maria fez o barco flutuar (Mary floated the boat).

As can be seen in Table 2, the alternations for the Portuguese translations of the verbs in this intersective class indicate that they share similar properties with the English verbs, including the causative/inchoative. The exception to this, as just noted, is flutuar (float). The result of this is that flutuar should move out of the slide class, which puts it with derivar (drift) and planar (glide) in the closely related roll/run class. As in English, derivar and planar are not externally controllable actions and thus don't take the causative/inchoative alternation common to other verbs in the roll class. Planar doesn't take a direct object in Portuguese, and it shows the induced action alternation the same way as flutuar (by using the light verb fazer). Derivar is usually said as "estar a deriva" ("to be adrift"), showing its non-controllable action more explicitly.

5 Discussion

We have presented a refinement of Levin classes, intersective classes, and discussed the potential for mapping them to WordNet senses. Whereas each WordNet synset is hierarchicalized according to only one aspect (e.g., Result, in the case of cut), Levin recognizes that verbs in a class may share many different semantic features, without designating one as primary. Intersective Levin sets partition these classes according to more coherent subsets of features (force, force+motion, force+separation), in effect highlighting a lattice of semantic features that determine the sense of a verb. Given the incompleteness of the list of members of Levin classes, each verb must be examined to see whether it exhibits all the alternations of a class. This might be approximated by automatically extracting the syntactic frames in which the verb occurs in corpus data, rather than manual analysis of each verb, as was done in this study.
We have also examined a mapping between the English verbs that we have discussed and their Portuguese translations, which have several of the same properties as the corresponding verbs in English. Most of these verbs take the same alternations as in English and, by virtue of these alternations, achieve the same regular sense extensions.

There are still many questions that require further investigation. First, since our experiment was based on a translation from English to Portuguese, we can expect that other verbs in Portuguese would share the same alternations, so the classes in Portuguese should by no means be considered complete. We will be using resources such as dictionaries and on-line corpora to investigate potential additional members of our classes. Second, since the translation mappings may often be many-to-many, the alternations may depend on which translation is chosen, potentially giving us different clusters, but it is uncertain to what extent this is a factor, and it also requires further investigation. In this experiment, we have tried to choose the Portuguese verb that is most closely related to the description of the English verb in the Levin class.

[Table 2: Portuguese slide/roll/run verbs (rebater 'bounce', flutuar 'float', rolar 'roll', deslizar 'slide') and roll/run verbs (derivar 'drift', planar 'glide') with their alternations: dative, *conative, caus./inch., middle, accept. coref., resultative, adject. part., ind. action, locat. invers., measure, *adj. perf., *cogn. object, zero nom. The individual yes/no cells are garbled in this extraction.]

We expect these cross-linguistic features to be useful for capturing translation generalizations between languages as discussed in the literature (Palmer and Rosenzweig, 1996), (Copestake and Sanfilippo, 1993), (Dorr, 1997). In pursuing this goal, we are currently implementing features for motion verbs in the English Tree-Adjoining Grammar, TAG (Bleam et al., 1998). TAGs have also been applied to Portuguese in previous work, resulting in a small Portuguese grammar (Kipper, 1994). We intend to extend this grammar, building a more robust TAG grammar for Portuguese, that will allow us to build an English/Portuguese transfer lexicon using these features.

References

Tonia Bleam, Martha Palmer, and Vijay Shanker. 1998. Motion verbs and semantic features in TAG. In TAG+98, Philadelphia, PA. Submitted.
Ann Copestake and Antonio Sanfilippo. 1993. Multilingual lexical representation. In Proceedings of the AAAI Spring Symposium: Building Lexicons for Machine Translation, Stanford, California.
Bonnie J. Dorr and Doug Jones. 1996. Acquisition of semantic lexicons: Using word sense disambiguation to improve precision. In Proceedings of SIGLEX, Santa Cruz, California.
Bonnie J. Dorr. 1997. Large-scale dictionary construction for foreign language tutoring and interlingual machine translation. Machine Translation, 12:1-55.
Doug Jones and Boyan Onyshkevych. 1997. Comparisons of Levin and WordNet. Presentation in working session of Semantic Tagging Workshop, ANLP-97.
Douglas Jones, Robert Berwick, Franklin Cho, Zeeshan Khan, Karen Kohl, Naoyuki Nomura, Anand Radhakrishnan, Ulrich Sauerland, and Bryan Ulicny. 1994. Verb classes and alternations in Bangla, German, English, and Korean.
Technical report, Massachusetts Institute of Technology.
Karin Kipper. 1994. Uma investigacao de utilizacao do formalismo das gramaticas de adjuncao de arvores para a lingua portuguesa. Master's Thesis, CPGCC, UFRGS.
B. Levin. 1993. English Verb Classes and Alternations.
J.B. Lowe, C.F. Baker, and C.J. Fillmore. 1997. A frame-semantic approach to semantic annotation. In Proceedings 1997 SIGLEX Workshop/ANLP97, Washington, D.C.
G. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. Miller. 1990. Five papers on WordNet. Technical Report 43, Cognitive Science Laboratory, Princeton University, July.
Naoyuki Nomura, Douglas A. Jones, and Robert C. Berwick. 1994. An architecture for a universal lexicon: A case study on shared syntactic information in Japanese, Hindi, Bengali, Greek, and English. In Proceedings of COLING, pages 243-249, Santa Cruz, California.
Martha Palmer and Joseph Rosenzweig. 1996. Capturing motion verb generalizations with synchronous TAGs. In AMTA-96, Montreal, Quebec, October.
James Pustejovsky. 1991. The generative lexicon. Computational Linguistics, 17(4).
Patrick Saint-Dizier. 1996. Semantic verb classes based on 'alternations' and on WordNet-like semantic criteria: A powerful convergence. In Proceedings of the Workshop on Predicative Forms in Natural Language and Lexical Knowledge Bases, Toulouse, France.
Learning a syntagmatic and paradigmatic structure from language data with a bi-multigram model

Sabine Deligne and Yoshinori Sagisaka
ATR-ITL, dept. 1, 2-2 Hikaridai Seika cho, Soraku gun, Kyoto fu 619-0224, Japan.

Abstract

In this paper, we present a stochastic language modeling tool which aims at retrieving variable-length phrases (multigrams), assuming bigram dependencies between them. The phrase retrieval can be intermixed with a phrase clustering procedure, so that the language data are iteratively structured at both a paradigmatic and a syntagmatic level in a fully integrated way. Perplexity results on ATR travel arrangement data with a bi-multigram model (assuming bigram correlations between the phrases) come very close to the trigram scores with a reduced number of entries in the language model. Also the ability of the class version of the model to merge semantically related phrases into a common class is illustrated.

1 Introduction

There is currently an increasing interest in statistical language models, which in one way or another aim at exploiting word dependencies spanning over a variable number of words. Though all these models commonly relax the assumption of fixed-length dependency of the conventional ngram model, they cover a wide variety of modeling assumptions and of parameter estimation frameworks. In this paper, we focus on a phrase-based approach, as opposed to a gram-based approach: sentences are structured into phrases and probabilities are assigned to phrases instead of words. Regardless of whether they are gram or phrase based, models can be either deterministic or stochastic. In the phrase-based framework, non-determinism is introduced via an ambiguity on the parse of the sentence into phrases. In practice, it means that even if phrase abc is registered as a phrase, the possibility of parsing the string as, for instance, [ab][c] still remains. By contrast, in a deterministic approach, all co-occurrences of a, b and c would be systematically interpreted as an occurrence of phrase [abc].

Various criteria have been proposed to derive phrases in a purely statistical way(1): data likelihood, leaving-one-out likelihood (Ries et al., 1996), mutual information (Suhm and Waibel, 1994), and entropy (Masataki and Sagisaka, 1996). The use of the likelihood criterion in a stochastic framework allows EM principled optimization procedures, but it is prone to overlearning. The other criteria tend to reduce the risk of overlearning, but their optimization relies on heuristic procedures (e.g. word grouping via a greedy algorithm (Matsunaga and Sagayama, 1997)) for which convergence and optimality are not theoretically guaranteed. The work reported in this paper is based on the multigram model, which is a stochastic phrase-based model, the parameters of which are estimated according to a likelihood criterion using an EM procedure. The multigram approach was introduced in (Bimbot et al., 1995), and in (Deligne and Bimbot, 1995) it was used to derive variable-length phrases under the assumption of independence of the phrases. Various ways of theoretically releasing this assumption were given in (Deligne et al., 1996). More recently, experiments with 2-word multigrams embedded in a deterministic variable ngram scheme were reported in (Siu, 1998).

(1) i.e. without using grammar rules like in Stochastic Context Free Grammars.
In section 2 of this paper, we further formulate a model with bigram (more generally n̄-gram) dependencies between the phrases, by including a paradigmatic aspect which enables the clustering of variable-length phrases. It results in a stochastic class-phrase model, which can be interpolated with the stochastic phrase model, in a similar way to deterministic approaches. In sections 3 and 4, the phrase and class-phrase models are evaluated in terms of perplexity values and model size.

2 Theoretical formulation of the multigrams

2.1 Variable-length phrase distribution

In the multigram framework, the assumption is made that sentences result from the concatenation of variable-length phrases, called multigrams. The likelihood of a sentence is computed by summing the likelihood values of all possible segmentations of the sentence into phrases. The likelihood computation for any particular segmentation depends on the model assumed to describe the dependencies between the phrases. We call bi-multigram model the model where bigram dependencies are assumed between the phrases. For instance, by limiting to 3 words the maximal length of a phrase, the bi-multigram likelihood of the string "a b c d" is:

p([a] | #) p([b] | [a]) p([c] | [b]) p([d] | [c])
p([a] | #) p([b] | [a]) p([cd] | [b])
p([a] | #) p([bc] | [a]) p([d] | [bc])
p([a] | #) p([bcd] | [a])
p([ab] | #) p([c] | [ab]) p([d] | [c])
p([ab] | #) p([cd] | [ab])
p([abc] | #) p([d] | [abc])

To present the general formalism of the model in this section, we assume n̄-gram correlations between the phrases, and we note n the maximal length of a phrase (in the above example, n̄ = 2 and n = 3). Let W denote a string of words, and {S} the set of possible segmentations of W. The likelihood of W is:

L(W) = Σ_{S ∈ {S}} L(W, S)    (1)

and the likelihood of a segmentation S of W is:

L(W, S) = Π_r p(s_(r) | s_(r-n̄+1) ... s_(r-1))    (2)

with s_(r) denoting the phrase of rank (r) in the segmentation S. The model is thus fully defined by the set of n̄-gram probabilities on the set {s_i}_i of all the phrases which can be formed by combining 1, 2, ... up to n words of the vocabulary. Maximum likelihood (ML) estimates of these probabilities can be obtained by formulating the estimation problem as a ML estimation from incomplete data (Dempster et al., 1977), where the unknown data is the underlying segmentation S. Let Q(k, k+1) be the following auxiliary function computed with the likelihoods of iterations k and k+1:

Q(k, k+1) = Σ_{S ∈ {S}} L^(k)(S | W) log L^(k+1)(W, S)    (3)

It has been shown in (Dempster et al., 1977) that if Q(k, k+1) > Q(k, k), then L^(k+1)(W) > L^(k)(W). Therefore the reestimation equation of p(s_{i_n̄} | s_{i_1} ... s_{i_{n̄-1}}), at iteration (k+1), can be derived by maximizing Q(k, k+1) over the set of parameters of iteration (k+1), under the set of constraints Σ_{s_{i_n̄}} p(s_{i_n̄} | s_{i_1} ... s_{i_{n̄-1}}) = 1, hence:

p^(k+1)(s_{i_n̄} | s_{i_1} ... s_{i_{n̄-1}}) = [ Σ_{S ∈ {S}} c(s_{i_1} ... s_{i_{n̄-1}} s_{i_n̄}, S) L^(k)(S | W) ] / [ Σ_{S ∈ {S}} c(s_{i_1} ... s_{i_{n̄-1}}, S) L^(k)(S | W) ]    (4)

where c(s_{i_1} ... s_{i_n̄}, S) is the number of occurrences of the combination of phrases s_{i_1} ... s_{i_n̄} in the segmentation S. Reestimation equation (4) can be implemented by means of a forward-backward algorithm, such as the one described for bi-multigrams (n̄ = 2) in the appendix of this paper. In a decision-oriented scheme, the reestimation equation reduces to:

p^(k+1)(s_{i_n̄} | s_{i_1} ... s_{i_{n̄-1}}) = c(s_{i_1} ... s_{i_{n̄-1}} s_{i_n̄}, S*^(k)) / c(s_{i_1} ... s_{i_{n̄-1}}, S*^(k))    (5)

where S*^(k), the segmentation maximizing L^(k)(S | W), is retrieved with a Viterbi algorithm.
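For readers who prefer code to summation signs, here is a small sketch of equations (1), (2) and (5) for the bi-multigram case (n̄ = 2), enumerating segmentations exhaustively as in the "a b c d" example above. This is our own toy illustration; a real implementation would use the forward-backward recursions given in the appendix instead of explicit enumeration.

```python
from collections import Counter

N = 3  # maximal phrase length

def segmentations(words):
    """All ways to cut `words` into phrases of length <= N."""
    if not words:
        yield []
        return
    for k in range(1, min(N, len(words)) + 1):
        head = tuple(words[:k])
        for rest in segmentations(words[k:]):
            yield [head] + rest

def seg_likelihood(seg, p):
    """Eq. (2): product of bigram phrase probabilities; '#' marks
    the sentence start. p maps (previous, phrase) -> probability."""
    like, prev = 1.0, "#"
    for phrase in seg:
        like *= p.get((prev, phrase), 0.0)
        prev = phrase
    return like

def sentence_likelihood(words, p):
    """Eq. (1): sum over all segmentations."""
    return sum(seg_likelihood(s, p) for s in segmentations(words))

def viterbi_reestimate(words, p):
    """Eq. (5): relative frequencies in the single best segmentation."""
    best = max(segmentations(words), key=lambda s: seg_likelihood(s, p))
    bigrams = Counter(zip(["#"] + best[:-1], best))
    context = Counter(["#"] + best[:-1])
    return {(u, v): c / context[u] for (u, v), c in bigrams.items()}
```

For the 4-word string "a b c d" with N = 3, segmentations yields exactly the seven parses listed above, and sentence_likelihood sums their seven products.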
Since each iteration improves the model in the sense of increasing the likelihood L^(k)(W), it eventually converges to a critical point (possibly a local maximum).

2.2 Variable-length phrase clustering

Recently, class-phrase based models have gained some attention (Ries et al., 1996), but usually they assume a previous clustering of the words. Typically, each word is first assigned a word-class label "<C_k>", then variable-length phrases [C_{k1} C_{k2} ... C_{kn}] of word-class labels are retrieved, each of which leads to define a phrase-class label which can be denoted as "<[C_{k1} C_{k2} ... C_{kn}]>". But in this approach only phrases of the same length can be assigned the same phrase-class label. For instance, the phrases "thank you for" and "thank you very much for" cannot be assigned the same class label. We propose to address this limitation by directly clustering phrases instead of words. For this purpose, we assume bigram correlations between the phrases (n̄ = 2), and we modify the learning procedure of section 2.1, so that each iteration consists of 2 steps:

- Step 1, phrase clustering: {p^(k)(s_j | s_i)} → {p^(k)(C_{q(s_j)} | C_{q(s_i)}), p^(k)(s_j | C_{q(s_j)})}
- Step 2, bi-multigram reestimation: {p^(k)(C_{q(s_j)} | C_{q(s_i)}), p^(k)(s_j | C_{q(s_j)})} → {p^(k+1)(s_j | s_i)}

Step 1 takes a phrase distribution as an input, assigns each phrase s_j to a class C_{q(s_j)}, and outputs the corresponding class distribution. In our experiments, the class assignment is performed by maximizing the mutual information between adjacent phrases, following the line described in (Brown et al., 1992), with only the modification that candidates to clustering are phrases instead of words. The clustering process is initialized by assigning each phrase to its own class. The loss in average mutual information when merging 2 classes is computed for every pair of classes, and the 2 classes for which the loss is minimal are merged. After each merge, the loss values are updated and the process is repeated till the required number of classes is obtained. Step 2 consists in reestimating a phrase distribution using the bi-multigram reestimation equation (4) or (5), with the only difference that the likelihood of a parse, instead of being computed as in Eq. (2), is now computed with the class estimates, i.e. as:

L(W, S) = Π_r p(C_{q(s_(r))} | C_{q(s_(r-1))}) p(s_(r) | C_{q(s_(r))})    (6)

This is equivalent to reestimating p^(k+1)(s_j | s_i) from p^(k)(C_{q(s_j)} | C_{q(s_i)}) × p^(k)(s_j | C_{q(s_j)}), instead of p^(k)(s_j | s_i) as was the case in section 2.1.

Overall, step 1 ensures that the class assignment based on the mutual information criterion is optimal with respect to the current estimates of the phrase distribution, and step 2 ensures that the phrase distribution optimizes the likelihood computed according to (6) with the current estimates of the class distribution. The training data are thus iteratively structured in a fully integrated way, at both a paradigmatic level (step 1) and a syntagmatic level (step 2).
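The greedy merging of step 1 can be sketched as follows. This is a naive illustration of the Brown et al. (1992) procedure applied to phrases, recomputing the mutual information from scratch for every candidate merge; efficient implementations update the loss values incrementally, as the paper notes.

```python
import math
from collections import defaultdict

def mutual_information(bigram_counts, assign):
    """Average mutual information between adjacent phrase classes,
    given phrase-pair counts and a phrase -> class assignment."""
    pair = defaultdict(float)
    left, right = defaultdict(float), defaultdict(float)
    total = sum(bigram_counts.values())
    for (u, v), n in bigram_counts.items():
        cu, cv = assign[u], assign[v]
        pair[(cu, cv)] += n
        left[cu] += n
        right[cv] += n
    return sum((n / total) * math.log(n * total / (left[a] * right[b]))
               for (a, b), n in pair.items())

def greedy_cluster(bigram_counts, phrases, n_classes):
    """Step 1: start with one class per phrase, then repeatedly merge
    the pair of classes whose merge loses the least mutual information."""
    assign = {p: i for i, p in enumerate(phrases)}
    while len(set(assign.values())) > n_classes:
        classes = sorted(set(assign.values()))
        best_pair, best_mi = None, -math.inf
        for i, a in enumerate(classes):
            for b in classes[i + 1:]:
                trial = {p: (a if c == b else c) for p, c in assign.items()}
                mi = mutual_information(bigram_counts, trial)
                if mi > best_mi:
                    best_pair, best_mi = (a, b), mi
        a, b = best_pair
        assign = {p: (a if c == b else c) for p, c in assign.items()}
    return assign
```

Merging the pair that maximizes the post-merge mutual information is the same as merging the pair with minimal loss, since the pre-merge value is constant within an iteration.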
However, the performance of non-class models can be enhanced by interpolating their estimates with the class estimates. We first recall the way linear interpolation is performed with conventional word ngram models, and then we extend it to the case of our stochastic phrase-based approach. Usually, lin- ear interpolation weights are computed so as to max- imize the likelihood of cross evaluation data (Jelinek and Mercer, 1980). Denoting by A and (1 - A) the interpolation weights, and by p+ the interpolated es- timate, it comes for a word bigram model: I i) = a p(w i I w,) + (l-a) p(Cq(wj) I cq(w,)) I with A having been iteratively estimated on a cross evaluation corpus l,V¢~o,, as: 1 A (k) p(wj [ wi) A(k+l) - - T¢',.o,, Z c(wiwj) p(~)(wj I wi) (8) ij where Tcro,, is the number of words in Weros,, and c(wiwj) the number of co-occurences of the words wi and wj in Wero,~. In the case of a stochastic phrase based model - where the segmentation into phrases is not known a priori - the above computation of the interpolation weights still applies, however, it has to be embedded in dynamic programming to solve the ambiguity on the segmentation: A(k+l) _ 1 S-" e(sis~] S *(k)) A(k) p(sj I si) c(S'(~)) ~ p(~)(si I si) s,2 (9) where S "(k) the most likely segmentation of Wero,s given the current estimates p(~)(sj I si) can be re- trieved with a Viterbi algorithm, and where c(S*(k)) is the number of sequences in the segmentation S "(k). A more accurate, but computationally more involved solution would be to compute A (~+1) as the ~(k) p(sj I s~) expectation of 1 over the set of segmentations {S} on Wcross, us- ing for this purpose a forward-backward algorithm. However in the experiments reported in section 4, we use Eq (9) only. 3 Experiments with phrase based models 3.1 Protocol and database Evaluation protocol A motivation to learn bi- gram dependencies between variable length phrases is to improve the predictive capability of conven- tional word bigram models, while keeping the num- ber of parameters in the model lower than in the word trigram case. The predictive capability is usu- ally evaluated with the perplexity measure: PP = e-rXtogC(w) where T is the number of words in W. The lower PP is, the more accurate the prediction of the model is. In the case of a stochastic model, there are ac- tually 2 perplexity values PP and PP* computed respectively from ~"]~s £(W,S) and £(W,S*). The difference PP* - PP is always positive or zero, and measures the average degree of ambiguity on a parse S of W, or equivalently the loss in terms of predic- tion accuracy, when the sentence likelihood is ap- proximated with the likelihood of the best parse, as is done in a speech recognizer. 302 In section 3.2, we first evaluate the loss (PP" - PP) using the forward-backward estimation procedure, and then we study the influence of the estimation procedure itself, i.e. Eq. (4) or (5), in terms of per- plexity and model size (number of distinct 2-uplets of phrases in the model). Finally, we compare these results with the ones obtained with conventional n- gram models (the model size is thus the number of distinct n-uplets of words observed), using for this purpose the CMU-Cambridge toolkit (Clarkson and Rosenfeld, 1997). Training protocol Experiments are reported for phrases having at most n = 1, 2, 3 or 4 words (for n =1, bi-multigrams correspond to conventional bi- grams). 
The bi-multigram probabilities are initial- ized using the relative frequencies of all the 2-uplets of phrases observed in the training corpus, and they are reestimated with 6 iterations. The dictionaries of phrases are pruned by discarding all phrases occur- ing less than 20 times at initialization, and less than 10 times after each iteration s, except for the 1-word phrases which are kept with a number of occurrences set to 1. Besides, bi-multigram and n-gram prob- abilities are smoothed with the backoff smoothing technique (Katz, 1987) using Witten-Bell discount- ing (Witten and Bell, 1991) 3. Database Experiments are run on ATR travel ar- rangement data (see Tab. 1). This database con- sists of semi-spontaneous dialogues between a hotel clerk and a customer asking for travel/accomodation informations. All hesitation words and false starts were mapped to a single marker "*uh*". Train test Nb sentences 13 650 2 430 Nb tokens 167 000 29 000 (1% OOV) Vocabulary 3 525 + 280 OOV Table 1: ATR Travel Arrangement Data 3.2 Results Ambiguity on a parse (Table 2) The difference (PP" - PP) usually remains within about 1 point of perplexity, meaning that the average ambiguity on a parse is low, so that relying on the single best parse should not decrease the accuracy of the prediction very much. Influence of the estimation procedure (Ta- ble 3) As far as perplexity values are concerned, 2Using different pruning thresholds values did not dra- matically affect the results on our data, provided that the threshold at initialization is in the range 20-40, and that the threshold of the iterations is less than 10. 3The Witten-Bell discounting was chosen, because it yielded the best perplexity scores with conventional n-grams on our test data. mmmmmmmmm Table 2: Ambiguity on a parse. the estimation scheme seems to have very little in- fluence, with only a slight advantage in using the forward-backward training. On the other hand, the size of the model at the end of the training is about 30% less with the forward-backward training: ap- proximately 40 000 versus 60 000, for a same test perplexity value. The bi-multigram results tend to indicate that the pruning heuristic used to discard phrases does not allow us to fully avoid overtrain- ing, since perplexities with n =3, 4 (i.e. dependen- cies possibly spanning over 6 or 8 words) are higher than with n =2 (dependencies limited to 4 words). Test perplexity values PP" n 1 2 3 4 F.-B. 56.0 45.1 45.4 46.3 Viterbi 56.0 45.7 45.9 46.2 Model size n 1 2 3 4 F.-B. 32505 42347 43672 43186 Viterbi 32505 65141 67258 67295 Table 3: Influence of the estimation procedure: forward-backward (F.-B.) or Viterbi. Comparison with n-grams (Table 4) The low- est bi-multigram perplexity (43.9) is still higher than the trigram score, but it is much closer to the tri- gram value (40.4) than to the bigram one (56.0) 4 The number of entries in the bi-multigram model is much less than in the trigram model (45000 versus 75000), which illustrates the ability of the model to select most relevant phrases. I [';~--] I .~ ,] ~ l,l,li.~.~ I, [~-I w.i~i n (and n) 1 2 3 4 n-gram 314.2 56.0 40.4 39.8 bimultigrams 56.0 43.9 44.2 45.0 Model size n (and n) 1 2 3 4 n-gram 3526 32505 75511 112148 bimultigrams 32505 42347 43672 43186 Table 4: Comparison with n-grams: Test perplexity values and model size. 4Besides, the trig-ram score depends on the discounted scheme: with a linear discounting, the trlg'ram perplexity on our test data was 48.1. 
303 4 Experiments with class-phrase based models 4.1 Protocol and database Evaluation protocol In section 4.2, we compare class versions and interpolated versions of the bi- gram, trigram and bi-multigram models, in terms of perplexity values and of model size. For bigrams (resp. trigrams) of classes, the size of the model is the number of distinct 2-uplets (resp. 3-uplets) of word-classes observed, plus the size of the vocab- ulary. For the class version of the bi-multigrams, the size of the model is the number of distinct 2- uplets of phrase-classes, plus the number of distinct phrases maintained. In section 4.3, we show samples from classes of up to 5-word phrases, to illustrate the potential benefit of clustering relatively long and variable-length phrases for issues related to language understanding. Training protocol All non-class models are the same as in section 3. The class-phrase models are trained with 5 iterations of the algorithm described in section 2.2: each iteration consists in clustering the phrases into 300 phrase-classes (step 1), and in reestimating the phrase distribution (step 2) with Eq. (4). The bigrams and trigrams of classes are es- timated based on 300 word-classes derived with the same clustering algorithm as the one used to cluster the phrases. The estimates of all the class ditribu- tions are smoothed with the backoff technique like in section 3. Linear interpolation weights between the class and non-class models are estimated based on Eq. (8) in the case of the bigram or trigram mod- els, and on Eq.(9) in the case of the bi-multigram model. Database The training and test data used to train and evaluate the models are the same as the ones described in Table 1. We use an additional set of 7350 sentences and 55000 word tokens to estimate the interpolation weights of the interpolated models. 4.2 Results The perplexity scores obtained with the non-class, class and interpolated versions of a bi-multigram model (limiting to 2 words the size of a phrase), and of the bigram and trigram models are in Ta- ble 5. Linear interpolation with the class based mod- els allows us to improve each model's performance by about 2 points of perplexity: the Viterbi perplex- ity score of the interpolated bi-multigrams (43.5) re- mains intermediate between the bigram (54.7) and trigram (38.6) scores. However in the trigram case, the enhancement of the performance is obtained at the expense of a great increase of the number of entries in the interpolated model (139256 entries). In the bi-multigram case, the augmentation of the model size is much less (63972 entries). As a re- sult, the interpolated bi-multigram model still has fewer entries than the word based trigram model (75511 entries), while its Viterbi perplexity score comes even closer to the word trigram score (43.5 versus 40.4). Further experiments studying the in- fluence of the threshold values and of the number of classes still need to be performed to optimize the performances for all models. Test perplexity values PP" non-class bigrams 56.04 bimultigrams 45.1 trigrams 40.4 class 66.3 57.4 49.3 Model size non-class bigrams 32505 bimultigrams 42347 75511 trigrams class 20471 21625 63745 interpolated 54.7 43.5 38.6 interpolated 52976 63972 139256 Table 5: Comparison of class-phrase bi-multigrams and of class-word bigrams and trigrams: Test per- plexity values and model size. 
4.3 Examples

Clustering variable-length phrases may provide a natural way of dealing with some of the language disfluencies which characterize spontaneous utterances, like the insertion of hesitation words for instance. To illustrate this point, examples of phrases which were merged into a common cluster during the training of a model allowing phrases of up to n = 5 words are listed in Table 6 (the phrases containing the hesitation marker "*uh*" are in the upper part of the table). It is often the case that phrases differing mainly because of a speaker hesitation are merged together. Table 6 also illustrates another motivation for phrase retrieval and clustering, apart from word prediction, which is to address issues related to topic identification, dialogue modeling and language understanding (Kawahara et al., 1997). Indeed, though the clustered phrases in our experiments were derived fully blindly, i.e. with no semantic/pragmatic information, intra-class phrases often display a strong semantic correlation. To make this approach effectively usable for speech understanding, constraints derived from semantic or pragmatic knowledge (like the speech act tag of the utterance, for instance) could be placed on the phrase clustering process.

{ yes_that_will ; *uh*_that_would }
{ yes_that_will_be ; *uh*_yes_that's }
{ *uh*_by_the ; and_by_the }
{ yes_*uh*_i ; i_see_i }
{ okay_i_understand ; *uh*_yes_please }
{ could_you_recommend ; *uh*_is_there }
{ *uh*_could_you_tell ; and_could_you_tell }
{ so_that_will ; yes_that_will ; yes_that_would ; *uh*_that_would }
{ if_possible_i'd_like ; we_would_like ; *uh*_i_want }
{ that_sounds_good ; *uh*_i_understand }
{ *uh*_i_really ; *uh*_i_don't }
{ *uh*_i'm_staying ; and_i'm_staying }
{ all_right_we ; *uh*_yes_i }
{ good_morning ; good_afternoon ; hello }
{ sorry_to_keep_you_waiting ; hello_front_desk ; thank_you_very_much ; thank_you_for_calling ; you're_very_welcome ; yes_that's_correct ; yes_that's_right }
{ non_smoking ; western_style ; first_class ; japanese_style }
{ familiar_with ; in_charge_of }
{ could_you_tell_me ; do_you_know }
{ how_long ; how_much ; what_time ; *uh*_what_time ; *uh*_how_much ; and_how_much ; and_what_time }
{ explain ; tell_us ; tell_me ; tell_me_about ; tell_me_what ; tell_me_how ; tell_me_how_much ; tell_me_the ; give_me ; give_me_the ; give_me_your ; please_tell_me }
{ are_there ; are_there_any ; if_there_are ; if_there_is ; if_you_have ; if_there's ; do_you_have ; do_you_have_a ; do_you_have_any ; we_have_two ; is_there ; is_there_any ; is_there_a ; is_there_anything ; *uh*_is_there ; *uh*_do_you_have }
{ tomorrow_morning ; nine_o'clock ; eight_o'clock ; seven_o'clock ; three_p.m. ; august_tenth ; in_the_morning ; six_p.m. ; six_o'clock }
{ we'd_like ; i'd_like ; i_would_like }
{ that'll_be_fine ; that's_fine ; i_understand }
{ kazuko_suzuki ; mary ; mary_phillips ; thomas_nelson ; suzuki ; amy_harris ; john ; john_phillips }
{ fine ; no_problem ; anything_else }
{ return_the_car ; pick_it_up }
{ todaiji ; kofukuji ; brooklyn ; enryakuji ; hiroshima ; las_vegas ; salt_lake_city ; chicago ; kinkakuji ; manhattan ; miami ; kyoto_station ; this_hotel ; our_hotel ; your_hotel ; the_airport ; the_hotel }

Table 6: Examples of phrases assigned to a common cluster, with a model allowing up to 5-word phrases (clusters are delimited with curly brackets)

5 Conclusion

An algorithm to derive variable-length phrases assuming bigram dependencies between the phrases has been proposed for a language modeling task. It has been shown how a paradigmatic element could be integrated within this framework, making it possible to assign common labels to phrases having a different length. Experiments on a task-oriented corpus have shown that structuring sentences into phrases results in large reductions in the bigram perplexity value, while still keeping the number of entries in the language model much lower than in a trigram model, especially when these models are interpolated with class-based models. These results might be further improved by finding a more efficient pruning strategy, allowing the learning of even longer dependencies without overtraining, and by further experimenting with the class version of the phrase-based model. Additionally, the semantic relevance of the clusters of phrases motivates the use of this approach in the areas of dialogue modeling and language understanding. In that case, semantic/pragmatic information could be used to constrain the clustering of the phrases.

Appendix: Forward-backward algorithm for the estimation of the bi-multigram parameters

Equation (4) can be implemented at a complexity of O(n²T), with n the maximal length of a sequence and T the number of words in the corpus, using a forward-backward algorithm. Basically, it consists in rearranging the order of the summations of the numerator and denominator of Eq. (4): the likelihood values of all the segmentations where sequence s_j occurs after sequence s_i, with sequence s_i ending at the word at rank (t), are summed up first; then the summation is completed by summing over t. The cumulated likelihood of all the segmentations where s_j follows s_i, and s_i ends at (t), can be directly computed as the product of a forward and of a backward variable. The forward variable represents the likelihood of the first t words, where the last l_i words are constrained to form a sequence:

    α(t, l_i) = L( w_(1) ... w_(t) ; [w_(t-l_i+1) ... w_(t)] forms one sequence )

The backward variable represents the conditional likelihood of the last (T - t) words, knowing that they are preceded by the sequence [w_(t-l_i+1) ... w_(t)]:

    β(t, l_i) = L( w_(t+1) ... w_(T) | [w_(t-l_i+1) ... w_(t)] )

Assuming that the likelihood of a parse is computed according to Eq. (2), the reestimation equation (4) can then be rewritten as shown in Table 7:

    p^(k+1)(s_j | s_i) =
      [ Σ_{t=1..T} α(t, l_i) p^(k)(s_j | s_i) β(t + l_j, l_j) δ_i(t - l_i + 1) δ_j(t + 1) ]
      / [ Σ_{t=1..T} α(t, l_i) β(t, l_i) δ_i(t - l_i + 1) ]

where l_i and l_j refer respectively to the lengths of the sequences s_i and s_j, and where the Kronecker function δ_k(t) equals 1 if the word sequence starting at rank t is s_k, and 0 if not.

Table 7: Forward-backward reestimation

The variables α and β can be calculated according to the following recursion equations (assuming a start and an end symbol at ranks t = 0 and t = T+1).

For 1 <= t <= T+1 and 1 <= l_i <= n:

    α(t, l_i) = Σ_{l=1..n} α(t - l_i, l) p( [w_(t-l_i+1) ... w_(t)] | [w_(t-l_i-l+1) ... w_(t-l_i)] )

with α(0, 1) = 1, α(0, 2) = ... = α(0, n) = 0.

For 0 <= t <= T and 1 <= l_j <= n:

    β(t, l_j) = Σ_{l=1..n} β(t + l, l) p( [w_(t+1) ... w_(t+l)] | [w_(t-l_j+1) ... w_(t)] )

with β(T+1, 1) = 1, β(T+1, 2) = ... = β(T+1, n) = 0.
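The forward recursion above translates almost directly into code. The sketch below is an illustration, not the authors' implementation: p(curr, prev) is a hypothetical callable standing in for the current phrase-bigram estimates p^(k)(s_j | s_i), and the backward pass and the Table 7 update follow the same pattern, mirrored from right to left.

```python
def forward(words, p, n):
    """alpha[t][l]: likelihood of the first t words, with the last l
    words constrained to form one phrase.  Implements
        alpha(t, l_i) = sum_l alpha(t - l_i, l)
                        * p(phrase(t-l_i+1 .. t) | phrase ending at t-l_i)
    with alpha(0, 1) = 1 for the start symbol at rank 0."""
    T = len(words)
    start = ("<s>",)
    alpha = [[0.0] * (n + 1) for _ in range(T + 1)]
    alpha[0][1] = 1.0
    for t in range(1, T + 1):
        for li in range(1, min(n, t) + 1):
            curr = tuple(words[t - li:t])        # candidate phrase ending at t
            total = 0.0
            for l in range(1, n + 1):
                if alpha[t - li][l] == 0.0:      # unreachable predecessor
                    continue
                prev = start if t - li == 0 else tuple(words[t - li - l:t - li])
                total += alpha[t - li][l] * p(curr, prev)
            alpha[t][li] = total
    return alpha
```

Summing α(T, l) times the probability of the end symbol given the final phrase, over l, gives the likelihood of the sentence over all segmentations, which is the quantity from which the perplexity PP of section 3.2 is derived.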
In the case where the likelihood of a parse is computed with the class assumption, i.e. according to (6), the term p^(k)(s_j | s_i) in the reestimation equation shown in Table 7 should be replaced by its class equivalent, i.e. by p^(k)(C_q(j) | C_q(i)) p^(k)(s_j | C_q(j)). In the recursion equation of α, the term p( [w_(t-l_i+1) ... w_(t)] | [w_(t-l_i-l+1) ... w_(t-l_i)] ) is replaced by the corresponding class bigram probability multiplied by the class conditional probability of the sequence [w_(t-l_i+1) ... w_(t)]. A similar change affects the recursion equation of β, with p( [w_(t+1) ... w_(t+l)] | [w_(t-l_j+1) ... w_(t)] ) being replaced by the corresponding class bigram probability multiplied by the class conditional probability of the sequence [w_(t+1) ... w_(t+l)].

References

F. Bimbot, R. Pieraccini, E. Levin, and B. Atal. 1995. Variable-length sequence modeling: Multigrams. IEEE Signal Processing Letters, 2(6), June.
P.F. Brown, V.J. Della Pietra, P.V. de Souza, J.C. Lai, and R.L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.
P. Clarkson and R. Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. Proceedings of EUROSPEECH 97.
S. Deligne and F. Bimbot. 1995. Language modeling by variable length sequences: theoretical formulation and evaluation of multigrams. Proceedings of ICASSP 95.
S. Deligne, F. Yvon, and F. Bimbot. 1996. Introducing statistical dependencies and structural constraints in variable-length sequence models. In Grammatical Inference: Learning Syntax from Sentences, Lecture Notes in Artificial Intelligence 1147, pages 156-167. Springer.
A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum-likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1-38.
F. Jelinek and R.L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. Proceedings of the workshop on Pattern Recognition in Practice, pages 381-397.
S. M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Trans. on Acoustics, Speech, and Signal Processing, 35(3):400-401, March.
T. Kawahara, S. Doshita, and C. H. Lee. 1997. Phrase language models for detection and verification-based speech understanding. Proceedings of the 1997 IEEE workshop on Automatic Speech Recognition and Understanding, pages 49-56, December.
H. Masataki and Y. Sagisaka. 1996. Variable-order n-gram generation by word-class splitting and consecutive word grouping. Proceedings of ICASSP 96.
S. Matsunaga and S. Sagayama. 1997. Variable-length language modeling integrating global constraints. Proceedings of EUROSPEECH 97.
K. Ries, F. D. Buo, and A. Waibel. 1996. Class phrase models for language modeling. Proceedings of ICSLP 96.
M. Siu. 1998. Learning local lexical structure in spontaneous speech language modeling. Ph.D. thesis, Boston University.
B. Suhm and A. Waibel. 1994. Towards better language models for spontaneous speech. Proceedings of ICSLP 94.
I.H. Witten and T.C. Bell. 1991. The zero-frequency problem: estimating the probabilities of novel events in adaptive text compression. IEEE Trans. on Information Theory, 37(4):1085-1094, July.
1998
47
Experiments with Learning Parsing Heuristics

Sylvain DELISLE
Département de mathématiques et d'informatique
Université du Québec à Trois-Rivières
Trois-Rivières, Québec, Canada, G9A 5H7
[email protected]

Sylvain LÉTOURNEAU, Stan MATWIN
School of Information Technology and Engineering, University of Ottawa
Ottawa, Ontario, Canada, K1N 6N5
[email protected], [email protected]

Abstract

Any large language processing software relies in its operation on heuristic decisions concerning the strategy of processing. These decisions are usually "hard-wired" into the software in the form of hand-crafted heuristic rules, independent of the nature of the processed texts. We propose an alternative, adaptive approach in which machine learning techniques learn the rules from examples of sentences in each class. We have experimented with a variety of learning techniques on a representative instance of this problem within the realm of parsing. Our approach led to the discovery of new heuristics that perform significantly better than the current hand-crafted heuristic. We discuss the entire cycle of application of machine learning and suggest a methodology for the use of machine learning as a technique for the adaptive optimisation of language-processing software.

1 Introduction

Any language processing program (in our case, a top-down parser which outputs only the first tree it could find) must make decisions as to what processing strategy, or rule ordering, is most appropriate for the problem (i.e. string) at hand. Given the size and the intricacy of the rule base and the goal (to optimise a parser's precision, or recall, or even its speed), this becomes a complex decision problem. Without precise knowledge of the kinds of texts that will be processed, these decisions can at best be educated guesses. In the parser we used, they were performed with the help of hand-crafted heuristic rules, which are briefly presented in section 2. Even when the texts are available to fine-tune the parser, it is not obvious how these decisions are to be made from texts alone. Indeed, the decisions may often be expressed as rules whose representation is in terms which are not directly or easily available from the text (e.g. non-terminals of the grammar of the language in which the texts are written). Hence, any technique that may automatically or semi-automatically adapt such rules to the corpus at hand will be valuable. As is often the case, there may be a linguistic shift in the kinds of texts that are processed, especially if the linguistic task is as general as parsing. It is then interesting to adapt the "version" of the parser to the corpus at hand. We report on an experiment that targets this kind of adaptability. We use machine learning as an artificial intelligence technique that achieves adaptability. We cast the task described above as a classification task: which, among the parser's top-level rules, is most appropriate to launch the parsing of the current input string? Although we restricted ourselves to a subset of a parser, our objective is broader than just applying an existing learning system to this problem.
What is interesting is: a) definition of the attributes in which examples are given, so that the attributes are both obtainable automatically from the text and lead to good rules (this is called "feature engineering"); b) selection of the most interesting learned rules; c) incorporation of the learned rules in the parser; d) evaluation of the performance of the learned rules after they have been incorporated in the parser. It is the lessons from the whole cycle that we followed in the work that we report here, and we suggest it as a methodology for an adaptive optimisation of language processing programs.

2 The existing hand-crafted heuristics

The rule-based parser we used was DIPETT [Delisle 1994]: it is a top-down, depth-first parser, augmented with a few look-ahead mechanisms, which returns the first analysis (parse tree). The fact that our parser produces only a single analysis, the "best" one according to its hand-crafted heuristics, is part of the motivation for this work. When DIPETT is given an input string, it first selects the top-level rules it is to attempt, as well as their ordering in this process. Ideally, the parser would find an optimal order that minimises parsing time and maximises parsing accuracy by first selecting the most promising rules. For example, there is no need to treat a sentence as multiply coordinated or compound when the data contains only one verb. DIPETT has three top-level rules for declarative statements: i) MULT_COOR for multiple (normally, three or more) coordinated sentences; ii) COMPOUND for compound sentences, that is, correlative and simple coordination (of, normally, two sentences); iii) NON_COMPOUND for simple and complex sentences, that is, a single main clause with zero or more subordinate clauses ([Quirk et al. 1985]). To illustrate the data that we worked with and the classes for which we needed the rules, here are two sentences (from the Brown corpus) used in our experiments: "And know, while all this went on, that there was no real reason to suppose that the murderer had been a guest in either hotel." is a non-compound sentence, and "Even I can remember nothing but ruined cellars and tumbled pillars, and nobody has lived there in the memory of any living man." is a compound sentence.

The current hand-crafted heuristic ([Delisle 1994]) is based on three parameters, obtained after (non-disambiguating) lexical analysis and before parsing: 1) the number of potential verbs¹ in the data, 2) the presence of potential coordinators in the data, and 3) verb density (roughly speaking, it indicates how potential verbs are distributed). For instance, low density means that verbs are scattered throughout the input string; high density means that the verbs appear close to each other in the input string, as in a conjunction of verbs such as "Verb1 and Verb2 and Verb3". Given the input string's features we have just discussed, DIPETT's algorithm for top-level rule selection returns an ordered list of up to 3 of the rules COMPOUND, NON_COMPOUND, and MULT_COOR to be attempted when parsing this string. For the purposes of our experiment, we simplified the situation by neglecting the MULT_COOR rule since it was rarely needed when parsing real-life text.

¹ A "potential" verb may actually turn out to be, say, a noun, but only parsing can tell us how such a lexical ambiguity has been resolved. If the input were preprocessed by a tagger, the ambiguity might disappear.
Thus, the original problem went from a 3-class to a 2-class classification problem: COMPOUND or NON_COMPOUND.

3 Learning rules from sentences

Like any heuristic, the top-level rule selection mechanism just described is not perfect. Among the principal difficulties, the most important are: i) the accuracy of the heuristic is limited and ii) the internal choices are relatively complex and somewhat obscure from a linguist's viewpoint. The aim of this research was to use classification systems as a tool to help develop new knowledge for improving the parsing process. To preserve the broad applicability of DIPETT, we have emphasised the generality of the results and did not use any kind of domain knowledge. The sentences used to build the classifiers and evaluate the performance have been randomly selected from five unrelated real corpora.

Typical classification systems (e.g. decision trees, neural networks, instance-based learning) require the data to be represented by feature vectors. Developing such a representation for the task considered here is difficult. Since the top-level rule selection heuristic is one of the first steps in the parsing process, very little information for making this decision is available at this early stage of parsing. All the information available at this phase is provided by the (non-disambiguating) lexical analysis that is performed before parsing. This preliminary analysis provides four features: 1) number of potential verbs in the sentence, 2) presence of potential coordinators, 3) verb density, and 4) number of potential auxiliaries. As mentioned above, only the first three features are actually used by the current hand-crafted heuristic. However, preliminary experiments have shown that no interesting knowledge can be inferred by using only these four features. We then decided to improve our representation by the use of DIPETT's fragmentary parser: an optional parsing mode in which DIPETT does not attempt to produce a single structure for the current input string but, rather, analyses a string as a sequence of major constituents (i.e. noun, verb, prepositional and adverbial phrases). The new features obtained from fragmentary parsing are: the number of fragments, the number of "verbal" fragments (fragments that contain at least one verb), the number of tokens skipped, and the total percentage of the input recognised by the fragmentary parser. The fragmentary parser is a cost-effective solution to obtain a better representation of sentences because it is very fast (on average, less than one second of CPU time for any sentence) in comparison to full parsing. Moreover, the information obtained from the fragmentary parser is adequate for the task at hand because it represents well the complexity of the sentence to be parsed. In addition to the features obtained from the lexical analysis and those obtained from the fragmentary parser, we use the string length (number of tokens in the sentence) to describe each sentence. The attribute used to classify the sentences, provided by a human expert, is called rule-to-attempt and it can take two values: COMPOUND or NON_COMPOUND, according to the type of the sentence.
To summarise, we used the ten following features to represent each sentence:

1) string-length: number of tokens (integer);
2) num-potential-verbs: number of potential verbs (integer);
3) num-potential-auxiliary: number of potential auxiliaries (integer);
4) verb-density: a flag that indicates if all potential verbs are separated by coordinators (boolean);
5) num-potential-coordinators: number of potential coordinators (integer);
6) num-fragments: number of fragments used by the fragmentary parser (integer);
7) num-verbal-fragments: number of fragments that contain at least one potential verb (integer);
8) num-tokens-skip: number of tokens not considered by the fragmentary parser (integer);
9) %-input-recognized: percentage of the sentence recognized, i.e. not skipped (real);
10) rule-to-attempt: type of the sentence (COMPOUND or NON_COMPOUND).

We built the first data set by randomly selecting 300 sentences from four real texts: a software user manual, a tax guide, a junior science textbook on weather phenomena, and the Brown corpus. Each sentence was described in terms of the above features, which are of course acquired automatically by the lexical analyser and the fragmentary parser, except for rule-to-attempt, as mentioned above. After a preliminary analysis of these 300 sentences, we realised that we had unbalanced numbers of examples of compound and non-compound sentences: non-compounds are approximately five times more frequent than compounds. However, it is a well-known fact in machine learning that such unbalanced training sets are not suitable for inductive learning. For this reason, we re-sampled our texts to obtain roughly an equal number of non-compound and compound sentences (55 compounds and 56 non-compounds).

Our experiment consisted in running a variety of attribute classification systems: IMAFO ([Famili & Turney 1991]), C4.5 ([Quinlan 1993]), and different learning algorithms from MLC++ ([Kohavi et al. 1994]). IMAFO includes an enhanced version of ID3 and an interface to C4.5 (we used both engines in our experimentation). MLC++ is a machine learning library developed in C++; we experimented with many of the algorithms it includes. We concentrated mainly on learning algorithms that generate results in the form of rules. For this project, rules are more interesting than other forms of results because they are relatively easy to integrate in a rule-based parser and because they can be evaluated by experts in the domain. However, for accuracy comparison, we also used learning systems that do not generate rules in terms of the initial representation: neural networks and instance-based systems. We randomly divided our data set into a training set (2/3 of the examples, or 74 instances) and a testing set (1/3 of the examples, or 37 instances). Table 1 summarises the results obtained from the different systems in terms of error rates on the testing set. All systems gave results with an error rate below 20%.

  SYSTEM        Type of system         Error rate
  ID3           decision rules           16.2%
  C4.5          decision rules           18.9%
  IMAFO         decision rules           16.5%
  oneR          decision rule (one)      15.6%
  IB            instance-based           10.8%
  aha-ib        instance-based           18.9%
  naive-bayes   belief networks          16.2%
  perceptron    neural networks          13.5%

Table 1. Global results from learning.
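As an illustration of this experimental setup, the sketch below trains one decision-tree learner on the nine predictive features. It uses scikit-learn's DecisionTreeClassifier as a present-day stand-in for the C4.5/ID3/MLC++ learners the paper actually used; the feature names repeat the list above, and X and y are assumed to hold the labelled feature vectors of the 111 sentences.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["string-length", "num-potential-verbs", "num-potential-auxiliary",
            "verb-density", "num-potential-coordinators", "num-fragments",
            "num-verbal-fragments", "num-tokens-skip", "%-input-recognized"]

def train_rule_learner(X, y):
    """X: one row of the nine predictive feature values per sentence;
    y: the rule-to-attempt label ('COMPOUND' or 'NON_COMPOUND').
    The fitted tree can be printed as nested if-then rules comparable
    in form to those of Tables 2 and 3."""
    clf = DecisionTreeClassifier(min_samples_leaf=5, random_state=0)
    clf.fit(X, y)
    print(export_text(clf, feature_names=FEATURES))
    return clf
```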
The error rates presented in Table 1 for the first four systems (decision-rule systems) represent the average rates over all rules generated by these systems. However, not all rules were particularly interesting. We kept only some of them for further evaluation and integration in the parser. Our selection criteria were: 1) the estimated error rate, 2) the "reasonability" (only rules that made sense to a computational linguist were kept), 3) the readability (simple rules are preferred), and 4) the novelty (we discarded rules that were already in the parser). Tables 2 and 3 present the rules that satisfy all the above criteria: Table 2 focuses on rules to identify compound sentences, while Table 3 presents rules to identify non-compound sentences. The error rate for each rule is also given. These error rates were obtained by a 10-fold cross-validation test.

  Rules to identify COMPOUND sentences                                Error rate (%)
  num-potential-verbs <= 3 AND num-potential-coordinators > 0
    AND num-verbal-fragments > 1                                          10.5
  num-fragments > 7                                                        9.4
  num-fragments > 5 AND num-verbal-fragments <= 2                         23.9
  string-length <= 17 AND num-potential-coordinators > 0
    AND num-verbal-fragments > 1                                           5.4
  num-potential-verbs > 1 AND num-potential-verbs <= 3
    AND num-potential-coordinators > 0 AND num-fragments > 4               4.2
  num-potential-coordinators > 0 AND num-fragments >= 7                    4.3
  num-potential-coordinators > 0 AND num-verbal-fragments > 1             16.8
  num-potential-coordinators > 0 AND num-fragments < 7
    AND string-length < 18                                                 4.7

Table 2. Rules to identify COMPOUND sentences.

The error rates that we obtained are quite respectable for a two-class learning problem, given the volume of available examples. Moreover, the rules are justified and make sense. They are also very compact in comparison with the original hand-crafted heuristics. We will see in section 4 how these rules behave on unseen data from a totally different text.

  Rules to identify NON-COMPOUND sentences                            Error rate (%)
  num-potential-verbs <= 3 AND num-verbal-fragments <= 1                   8.3
  string-length > 10 AND num-potential-verbs <= 3
    AND num-fragments <= 4                                                 6.7
  string-length <= 21 AND num-potential-coordinators = 0                   5.6
  num-potential-coordinators = 0 AND num-fragments <= 7                    9.7

Table 3. Rules to identify NON-COMPOUND sentences.

Attribute classification systems such as those used in the experiment reported here are highly sensitive to the adequacy of the features used to represent the instances. For our task (parsing), these features were difficult to find and we had only a rough idea of their appropriateness. For this reason, we felt that better results could be obtained by transforming the original instance space into a more adequate space by creating new attributes. In machine learning research, this process is referred to as constructive learning, or constructive induction ([Wnek & Michalski 1994]). We even attempted to use principal component analysis (PCA) ([Johnson & Wichern 1992]) as a technique of choice for simple constructive learning, but we did not get very impressive results. We see two reasons for this. The primary reason is that the ratio between the number of examples and the number of attributes is not high enough for PCA to derive high-quality new attributes. The second reason is that the original attributes are already highly non-redundant. It is also important to note that the rules learned from such transformed attributes do not satisfy the reasonability criterion applied to the original representation. In fact, losing the understandability of the attributes is the usual consequence of almost all approaches that change the representation of instances.
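To make the form of the selected rules concrete, here are two of the Table 2 COMPOUND rules (estimated error rates 4.2% and 4.7%) rewritten as a single predicate over a sentence's feature record; disjoining the rules of a class in this way anticipates the C-Imp implementation of section 4.1 below. The dict-based encoding of the features is purely illustrative.

```python
def predicts_compound(s):
    """Apply two of the learned COMPOUND rules of Table 2 to a feature
    record `s`, keyed by the feature names of section 3."""
    rule_a = (1 < s["num-potential-verbs"] <= 3
              and s["num-potential-coordinators"] > 0
              and s["num-fragments"] > 4)
    rule_b = (s["num-potential-coordinators"] > 0
              and s["num-fragments"] < 7
              and s["string-length"] < 18)
    return rule_a or rule_b
```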
4 Evaluation of the new rules

We explained in section 3 how we derived new parsing heuristics with the help of machine learning techniques. The next step was to evaluate how well the new rules would perform if we replaced the parser's current hand-crafted heuristics with them. In particular, we wanted to evaluate the accuracy of the heuristics in correctly identifying the appropriate rule, COMPOUND or NON_COMPOUND, that should first be attempted by the parser. This goal was prompted by an earlier evaluation of DIPETT in which it was noted that a good proportion of questionable parses (i.e. either bad parses or correct but too time-consuming parses) were caused by a bad first attempt, such as attempting COMPOUND instead of NON_COMPOUND.

4.1 From new rules to new parsers

Our machine learning experiments led us to two classes of rules, obtained from a variety of classifiers and concerned only with the notion of compoundness: 1) those predicting a COMPOUND sentence, and 2) those predicting a NON_COMPOUND one. The problem was then to decide what should be done with the set of new rules. More precisely, before actually implementing the new rules and including them in the parser, we first had to decide on an appropriate strategy for exploiting them. We now describe the three implementations that we realised and evaluated.

The first implements only the rules for the COMPOUND class: one big rule which is a disjunct of all the learned rules for that class. And since there are only two alternatives, either COMPOUND or NON_COMPOUND, if none of the COMPOUND rules applies, the NON_COMPOUND class is predicted. This first implementation is referred to as C-Imp. The second implementation, referred to as NC-Imp, does exactly the opposite: it implements only the rules predicting the NON_COMPOUND class. The third implementation, referred to as NC_C-Imp, benefits from the first two implementations. The class of a new sentence is determined by combining the outputs from C-Imp and NC-Imp, according to the decision table given in Table 4.

  C-Imp   NC-Imp   Output of NC_C-Imp
    C       C              C
   NC      NC             NC
   NC       C             NC
    C      NC             NC

Table 4. Decision table used in the NC_C implementation.
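Table 4 amounts to a one-line function. The following sketch is simply that table in code; the vote strings "C" and "NC" stand for the outputs of the C-Imp and NC-Imp rule sets.

```python
def nc_c_imp(c_vote: str, nc_vote: str) -> str:
    """Combine the outputs of C-Imp and NC-Imp as in Table 4: agreement
    is passed through, and any disagreement defaults to 'NC', the a
    priori more frequent class."""
    return "C" if (c_vote, nc_vote) == ("C", "C") else "NC"
```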
The p-value gives the probability that DIPETT's original hand-crafted heuristics are better than the new heuristics. In other words, a small p-value means an increase in performance with a high probability. Implementation Original heur. C-Imp NC-Imp NC_C-Imp Err- Std. p-value rate dev. (%) 25.268 ±3.2 20.526 ±2.9 0.126 22.105 ±3.0 0.229 16.316 ±2.7 0.009 Table 5. Performances of the new implementations versus DIPETT's original heuristics. We observe that all new automatically-derived heuristics did beat DIPETT's hand-crafted heu- ristics and quite clearly. The results from the third implementation (i.e. NC_C-Imp) are especially remarkable: with a confidence of over 99%, we can 311 affirm that the NC_C-lmplementation will outperform DIPETT's original heuristic. We also note that the error rate drops by 35% of its value for the original heuristic. Similarly, with a confi- dence of 87.4%, we can affirm that the imple- mentation that uses only the C-rules (i.e. C-Imp) will perform better then DIPETT's current heu- ristics. These very good results are also amplified by the fact that the testing described in this evalua- tion was done on sentences totally independent from the ones used for training. Usually, in ma- chine learning research, the training and the tes- ting sets are sampled from the same original data set, and the kind of "out-of-sample" testing that we perform here has only recently come to the attention of the learning community ([Ezawa et al. 1996]). Our experiments have shown that it is possible to infer rules that perform very well and are highly meaningful in the eyes of an expert even if the training set is relatively small. This indicates that the representation of sentences that we chose for the problem was adequate. Finally, an other important output of our research is the identification of the most significant attributes to distinguish non-compound sentences from com- pound ones. This alone is valuable information to a computational linguist. Only five out of ten original attributes are used by the learned rules, and all of them are cheap to compute: two attri- butes are derived by fragmentary parsing (num- ber of verbal fragments and number of frag- ments), and three are lexical (number of potential verbs, length of the input string, and presence of potential coordinators). 5 Related Work There have been successful attempts at using ma- chine learning in search of a solution for linguis- tic tasks, e.g. discriminating between discourse and sentential senses of cues ([Litman 1996]) or resolution of coreferences in texts ([McCarthy & Lehnert 1995]). Like our work, these problems are cast as classification problems, and then ma- chine learning (mainly C4.5) techniques are used to induce classifiers for each class. What makes "these applications different from ours is that they have worked on surface linguistic or mixed surfa- ce linguistic and intonational representation, and that the classes are relatively balanced, while in our case the class of compound sentences is much less numerous than the class of non-composite sentences. Such unbalanced classes create prob- lems for the majority of inductive learning systems. A distinctive feature of our work is the fact that we used machine learning techniques to improve an existing rule-based natural language processor from the inside. This contrasts with approaches where there are essentially no explicit rules, such as neural networks (e.g. 
[Buo 1996]), or approaches where the machine learning algorithms attempt to infer--via deduction (e.g. [Samuelsson 1994]), induction (e.g. [Theeramunkong et al. 1997]; [Zelle & Mooney 1994]) under user coope- ration (e.g. [Simmons & Yu 1992]; [Hermjakob & Mooney 1997]), transformation-based error-driven learning (e.g. [Brill 1993]), or even decision trees (e.g. [Magerman 1995])--a grammar from raw or preprocessed data. In our work, we do not wish to acquire a grammar: we have one and want to de- vise a mechanism to make some of its parts adaptable to the corpus at hand or, to improve some aspect of its performance. Other researchers, such as [Lawrence et al. 1996], have compared neural networks and machine learning methods at the task of sentence classification. In this task, the system must classify a string as either grammatical or not. We do not content ourselves with results based on a grammatical/ungrammatical dichotomy. We are looking for heuristics, using relevant features, that will do better than the current ones and improve the overall performance of a natural language processor: this is a very difficult problem (see, e.g., [Huyck & Lytinen 1993]). One could also look at this problem as one of optimisation of a rule-based system. Work somewhat related to ours was conducted by [Samuelsson 1994] who used explanation-based generalisation to extract a subset of a grammar that would parse a given corpus faster than the original, larger grammar [Neumann 1997] also used EBL but for a generation task. In our case, we are not looking for a subset of the existing rules but, rather, we are looking for brand new rules that would replace and outperform the existing rules. We should also mention the work of [Soderland 1997] who also worked on the comparison of automatically learned and hand-crafted rules for text analysis. 312 6 Conclusion We have presented an experiment which demon- strates that machine learning may be used as a technique to optimise in an adaptive manner the high-level decisions that any parser must make in the presence of incomplete information about the properties of the text it analyses. The results show clearly that simple and understandable rules learned by machine learning techniques can sur- pass the performance of heuristics supplied by an experienced computational linguist. Moreover, these very encouraging results indicate that the representation that we chose and discuss was an adequate one for this problem. We feel that a methodology is at hand to extend and deepen this approach to language processing programs in general. The methodology consists of three main steps: I) feature engineering, 2) learning, using several different available learners, 3) evaluation, with the recommendation of using the "out-of- sample" approach to testing. Future work will fo- cus on improvements to constructive learning; on new ways of integrating the rules acquired by dif- ferent learners in the parser; and on the identifi- cation of criteria for selecting parser rules that have the best potential to benefit from the gene- ralisation of our results. Acknowledgements The work described here was supported by the Natural Sciences and Engineering Research Council of Canada. References Atkinson, H.F. (1990) Mechanics of Small Engines. New York: Gregg Division, McGraw-Hill. Brill E. (1993) "Automatic Grammar Induction and Parsing Free Text: A Transformation-Based Approach", Proc. of the 31st Annual Meeting of the ACL, pp.259-265. Buo F.D. 
(1996) "FeasPar--A Feature Structure Parser Learning to Parse Spontaneous Speech", Ph.D. Thesis, Fakultiit ftir Informatik, Univ. Karlsruhe, Germany. Delisle S. (1994) "Text Processing without a priori Domain Knowledge: Semi-Automatic Linguistic for Incremental Knowledge Acquisition", Ph.D. Thesis, Dept. of Compu- ter Science, Univ. of Ottawa. Published as technical report TR-94-02. Ezawa K., Singh M. & Norton S. (1996) "Learning Goal Oriented Bayesian Networks for Telecommunications Risk Management", Proc. of the 13th International Conf. on Machine Learning, pp. 139-147. Famili A. & Turney P. (1991) "Intelligently Helping the Human Planner in Industrial Process Planing", AI EDAM - AI for Engineering Design Analysis and Manufacturing, 5 (2), pp. 109-124. Hermjakob U. & Mooney R.J. (1997) "Learning Parse and Translation Decisions From Examples With Rich Context", Proc. of ACL-EACL Conf., pp.482-489. Huyck C.R. & Lytinen S.L. (1993) "Efficient Heuristic Natural Language Parsing", Proc. of the llth National Conf. on AI, pp.386-391. Johnson R.A. & Wichern D.W. (1992) Applied Multivariate Statistical Analysis, Prentice Hall. Kohavi R., John G., Long R., Manley D. & Pleger K. (1994) "MLC++: A machine learning library in C++", Tools with AI, IEEE Computer Society Press, pp.740-743. Lawrence S., Fong S. & Lee Giles C. (1996) "Natural Lan- guage Grammatical Inference: A Comparison of Recurrent Neural Networks and Machine Learning Methods", in S. Wermter, E. Riloff and G. Scheler (eds.), Symbolic, Connectionnist, and Statistical Approaches to Learning for Natural Language Processing, Lectures Notes in AI, Springer-Verlag, pp.33-47. Litman D. (1996) "Cue Phrase Classification Using Machine Learning', Journal of Al Research, 5, pp.53-95. Magerman D. (1995) "Statistical Decision-Tree Models for Parsing", Proc. of the 33rd Annual Meeting of the ACL, 276-283. McCarthy J. & Lehnert W.G. (1995) "Using Decision Trees for Coreference Resolution", Proc. of IJCAI-95, pp.1050- 1055. Neumann G. (1997) "Applying Explanation-based Learning to Control and Speeding-up Natural Language Generation", Proc. of ACL-EACL Conf., pp.214-221. Quinlan J.R. (1993) C4.5: Programs for Machine Learning, Morgan Kaufmann. Quirk R., Greenbaum S., Leech G. & Svartvik J. 0985) A Comprehensive Grammar of the English Language, Longman. Samuelsson C. (1994) "Grammar Specialization Through Entropy Thresholds", Proc. of the 32nd Annual Meeting of the ACL, pp.188-195. Simmons F.S. & Yu Y.H. (1992) "The Acquisition and Use of Context-dependent Grammars for English", Computational Linguistics, 18(4), pp.392-418. Soderland S.G. (1997) "Learning Text Analysis Rules for Domain-Specific Natural Language Processing", Ph.D. Thesis, Dept. of Computer Science, Univ. of Massachusetts. Theeramunkong T., Kawaguchi Y. & Okumura (1997) "Exploiting Contextual Information in Hypothesis Selection for Grammar Refinement", Proc. of the CEGDLE Workshop at ACL-EACL'97, pp.78-83. Wnek J. & Michalski R.S. (1994) "Hypothesis-driven cons- tructive induction in AQ17-HCI: a method and experi- ments", Machine Learning, 14(2), pp. 139-168. Zelle J.M. & Mooney R.J. (1994) "Inducing Deterministic Prolog Parsers from Treebanks: A Machine Learning Ap- proach", Proc. of the 12th National Conf. on AI, pp.748- 753. 313
1998
48
Experimentation in Learning Heuristics for Syntactic Analysis

Sylvain DELISLE
Département de mathématiques et d'informatique
Université du Québec à Trois-Rivières
Trois-Rivières, Québec, Canada, G9A 5H7
[email protected]

Sylvain LÉTOURNEAU, Stan MATWIN
School of Information Technology and Engineering, University of Ottawa
Ottawa, Ontario, Canada, K1N 6N5
[email protected], [email protected]

Natural language processing systems and programs must make decisions about the choice of the best strategies or rules to apply while solving a particular problem. For a syntactic parser built on a base of symbolic rules (the case we are interested in here), these decisions may consist in selecting the rules, or the ordering of the rules, that yield the fastest or the most precise syntactic analysis for a specific utterance, type of utterance, or even corpus. The complexity of such grammar-rule bases and their computational and linguistic subtleties make these decisions a difficult problem. We therefore set ourselves the objective of finding techniques that would make it possible to learn efficient decision-making heuristics in order to incorporate them into an existing syntactic parser. To achieve such adaptability, we adopted an automated learning approach supported by the use of automatic classification systems. Our work was carried out on a parser with broad syntactic coverage of written English and focused on a specific subset of it: the top level, which must decide with which rule(s), and in which order if there are several, to launch the syntactic analysis of the utterance being processed, according to whether this utterance seems to involve more or less complicated phenomena of structural coordination. This decision problem translates naturally into a classification problem, hence our use of automatic classification systems of several types: decision rules, instance-based systems, belief networks and neural networks. Note that our parser already had heuristic rules dedicated to this decision problem. They had been written by the first author without recourse to any automatic mechanism. We now wished to find new heuristics that would perform even better than the old ones and could therefore replace them. The methodology we used is the following. First, we defined the most relevant attributes for representing the examples (utterances). It was important to identify attributes that are easily computable in an automatic way and that would allow interesting new heuristics to be obtained. For example, the presence of coordinating conjunctions and the length of the utterance are two useful attributes. Second, we submitted the examples, expressed in terms of the selected attributes, to the classification systems in order to obtain rules. We then selected the most interesting rules, that is, those that were the most discriminating while remaining intelligible from a linguistic perspective. Third, we incorporated the selected rules into our syntactic parser in place of the old ones.
Finally, we evaluated the new version of the parser obtained thanks to these new rules and carried out a comparison with the old version. The results we obtained can be summarized as follows: we found new heuristics that are significantly better than the old ones and that, in particular, have an error rate 35% lower than that of the old ones. Moreover, these results were obtained on utterances completely independent of those used for training with the classification systems. These results demonstrate that automated learning techniques can contribute to the adaptive optimization of certain important decisions in syntactic analysis.
1998
49
Speech and Machine Translation: the RAPHAEL Recognition Module

Mohammad AKBAR
GEOD, CLIPS/IMAG
Université Joseph Fourier, BP. 53
38041 Grenoble cedex 9, France
[email protected]

Jean CAELEN
GEOD, CLIPS/IMAG
Université Joseph Fourier, BP. 53
38041 Grenoble cedex 9, France
[email protected]

Abstract

For speech translation, one needs a large-vocabulary recognition system for spontaneous speech that runs in real time. The RAPHAEL module was designed on the JANUS-III software platform developed at the ISL laboratory (Interactive Systems Laboratory) of the universities of Karlsruhe and Carnegie Mellon. The BREF-80 corpus (read texts extracted from the newspaper Le Monde) was used for the development, training and evaluation of the module. The results obtained are around 91% correct word recognition. The article describes the architecture of the recognition module and its integration with a machine translation module.

Introduction

The translation of written documents has made real progress in recent years. We are witnessing the emergence of new text translation systems that offer polished translation into various languages [1]. It seems feasible to adapt them to the translation of speech, provided their response time and robustness are improved: this is the "challenge" posed to these systems, but also to the speech recognition module. A speech translation system rests on the integration of speech recognition and synthesis modules with translation modules, so as to obtain a complete analysis and synthesis loop between the two interlocutors [Fig. 1]. The C-STAR II project [3] is an international project in which all the teams work on every aspect of such a system. To allow two people to communicate, two symmetrical series of processes are needed, one in each language: a recognition module to acquire and transcribe the utterances produced by a speaker in his or her own language; then a translation module that translates the transcription into the language of the addressee, or into a standard interchange format (IF, Interchange Format); and finally a speech synthesis module (together with a generation module, if the IF format is used) in the target language of the addressee.

[Fig. 1. The architecture of an instantaneous translation system. The figure itself is not recoverable from the source; it shows, on each side, speech recognition feeding instantaneous translation, text transmission between the two sides, and speech synthesis for the addressee.]

Within the framework of the C-STAR II project, we are in charge of the design and implementation of the large-vocabulary continuous speech recognition module for French. We collaborate with the GETA team of the CLIPS-IMAG laboratory and with the LATL laboratory for machine translation, and with the LAIP laboratory for speech synthesis. This consortium has set itself the objective of building a speech translation system for French. In this article we first present the architecture of the translation system and the JANUS-III development platform [2], then the different stages of the development of the RAPHAEL module and, finally, the first results obtained.

1 RAPHAEL for Translation

The architecture of the speech translation system is composed of three essential modules (speech recognition, translation, and speech synthesis) [Fig. 2].
In this project we use ARIANE and GB [3] for translation and LAIP-TTS [4] for synthesis.

[Fig. 2. The components of the system: speech recognition with RAPHAEL (CLIPS/IMAG-ISL), machine translation with ARIANE (GETA) and GB (LATL), and speech synthesis with LAIP-TTS (LAIP), with text and control passed between the modules. The figure itself is not recoverable from the source.]

The development of the RAPHAEL recognition module was carried out on the JANUS-III software platform. RAPHAEL outputs a word lattice over the TCP/IP protocol. The translator uses this result to produce a translated version, which is then sent to the speech synthesizer. In this article we are concerned only with the RAPHAEL recognition module. For the moment, the exchange strategy between the modules is entirely sequential. In order to improve the final result (above all from the point of view of robustness), we plan to integrate a second control layer allowing the rescoring of hypotheses, taking into account the confidence scores associated with the different words of the recognized utterance.

1.1 The JANUS-III platform

This translation platform was developed at the ISL laboratory of the universities Carnegie Mellon and Karlsruhe and contains all the components needed to develop a large-vocabulary phoneme-based recognition system built on Hidden Markov Models (HMMs) and neural networks. The ease of writing a recognition module in the Tcl/Tk language with JANUS-III allows us to adapt its capabilities to the needs of the application and to the characteristics of French. Of this platform, only the recognition engine is exploited directly; but the preparation of the databases, the training of the phoneme models, and the evaluation are also carried out in this programming environment. The PERL language is used extensively, in parallel, for processing the corpus text. The technical details of JANUS-III are given in [2], [5], [6]. Nevertheless, we briefly present a few points below.

2 The RAPHAEL Module

The architecture of the RAPHAEL recognition module is presented in [Fig. 3]. Speech analysis produces a sequence of acoustic parameter vectors. These vectors are used by an HMM-based search engine to estimate the sequence of uttered phonemes. A stochastic bigram-and-trigram language model and a dictionary of phonetic variants are exploited in parallel to restrict the search space¹. During the search, the phonetic dictionary provides the next phoneme(s). The probabilistic language model based on bigrams and trigrams is used at the transition between two words to provide a set of candidate words [Fig. 4].

¹ With 45 phonemes, on average a sequence of five phonemes theoretically turns into a decision tree of 45⁵ = 184,528,125 leaves!

[Fig. 3. Schema of the RAPHAEL phonemic recognition module. The figure itself is not recoverable from the source; its components are: digital signal processing and estimation of the acoustic parameters; a stochastic language model (bigram and trigram); a database of HMM parameters; HMMs for phonemic recognition; and a phonetic dictionary (the recognition vocabulary).]
2.1 Hidden Markov Models

To use HMMs, a preliminary training phase must be carried out, in which the probabilities of the transitions and of the emitted symbols for a given phoneme are adapted so that the probability of the associated process is maximal. The model parameters and the phonetic transcription of the corpus utterances are the two essential ingredients of training. RAPHAEL comprises 45 HMMs representing the 42 basic phonemes of French plus 3 models for silence and noise. With a few exceptions, the HMMs are composed of three states. The input parameter vector has dimension 12². Each HMM state has 16 Gaussian distributions. During training, we produce the phonetic transcription corresponding to each utterance of the corpus (this is done with the phonetic dictionary). For each utterance, the HMMs corresponding to the phonemes are concatenated to create one long chain. The Viterbi algorithm [5] then proposes an alignment of the utterance with this chain. With this alignment, the Baum-Welch algorithm [5] estimates the parameters of each HMM present in the chain. This procedure is repeated for all the utterances of the training corpus, several times over. The presence of different phonemic contexts allows this procedure to minimize the recognition error rate. Evaluating the error rate at the end of each iteration makes it possible to monitor the progress of training.

² The MFCC coefficients [5] of order 16 are computed over a 16 ms speech frame, with a 10 ms step. Speech is sampled at 16 kHz and 16 bits. The MFCCs, the signal energy, and their first and second derivatives (51 values) then undergo a principal component analysis (PCA) to reduce the dimension of the vector to 12. The PCA matrix is computed before the training phase, on a large recorded corpus.

2.2 Stochastic language model

To reduce the search space, a language model must be used. Although finite-state or recursive grammars can be used in voice-command systems with a restricted syntax, they are not capable of describing all the phenomena of spoken language (ellipses, hesitations, repetitions, etc.). For this reason, it is preferable to use a stochastic model which estimates, in a given context, the probability of word successions. In the current model, left contexts of orders one and two (bigram and trigram) are exploited at the same time. The bigram is used in the first search phase to create a word lattice; the trigram is then used to refine the result and determine the N most plausible sentences. The language model also takes care of agreement resolution in French. The parameters of this model were computed from recorded and transcribed corpora. In its current state, a vocabulary of 7,000 words has been selected.

[Fig. 4. Representation of the search algorithm. The figure itself is not recoverable from the source; it depicts phoneme models chained within a word according to the phonetic variants available in the dictionary and, at word boundaries, the stochastic language model and the phonetically transcribed vocabulary jointly proposing the next word hypotheses.]
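The two-pass use of the language model can be sketched as follows. This is an illustration only: trigram_logprob is a hypothetical callable standing in for the smoothed trigram estimates, and the first pass (building the word lattice with the bigram) is not shown.

```python
def rescore_nbest(nbest, trigram_logprob):
    """Re-rank the N-best hypotheses produced from the bigram lattice
    using trigram scores, returning the most plausible sentences first.
    `nbest` is a list of word lists; `trigram_logprob(w1, w2, w3)` is
    assumed to return a smoothed log-probability."""
    def score(words):
        padded = ["<s>", "<s>"] + words + ["</s>"]
        return sum(trigram_logprob(*padded[i:i + 3])
                   for i in range(len(padded) - 2))
    return sorted(nbest, key=score, reverse=True)
```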
2.3 Phonetic dictionary

The conversion of a string of phonetic hypotheses into an orthographic string is done with a phonetic dictionary. To cover a large number of different pronunciations, due to the different dialects of the language and to the habits of the speakers, this dictionary contains, for each word, a set of phonetic variants. To each word hypothesis proposed by the language model, this set of variants is attached. Thus, independently of the variant used in the utterance, we obtain the same orthographic transcription. We use this technique specifically to cover the variants produced by liaison, for example:

    Je suis parti de la maison.  (Z& sHi paRti ...)
    Je suis allé à la maison.    (Z& sHiz ale ...)

3 Training

The BREF-80 corpus [8], comprising 5,330 utterances by 80 speakers (44 women and 36 men)³, was used for the training and evaluation phases. A subset of BREF-80 comprising the utterances of 4 women and 4 men was used for evaluation⁴. The vocabulary was transcribed either manually or from the phonetic dictionary BDLEX-23000. The language model was estimated from BREF-80 and from a text corpus of about 10 million words extracted from the newspaper Le Monde. For the initialization of the HMMs, instead of using random values (the usual technique), we chose to use the models resulting from the GlobalPhone project [7]. For each phoneme of our model, we manually chose a phoneme in one of the languages supported by GlobalPhone (mainly German) and used its parameters as the initial values of our HMMs. These models were then adapted to French by means of the training algorithm described in 2.1. At the end of each iteration, and this for 3 iterations, the system was evaluated with the evaluation sub-corpus.

³ BREF-80 contains 3,747 different texts and about 150,000 words.
⁴ The training and evaluation sub-corpora have no utterance or speaker in common. In fact, we removed all the utterances common to these two sub-corpora. The training sub-corpus thus comprises 4,854 utterances and the evaluation sub-corpus 371 utterances. We removed 105 utterances to ensure the disjointness of the two sub-corpora.
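A minimal sketch of this variant mechanism, built on the liaison example above; the phonetic notation follows the paper's example strings, and the dictionary contents shown are illustrative rather than an excerpt from BDLEX-23000.

```python
# One orthographic entry, several phonetic variants: whichever variant
# matches the acoustics, the recognizer outputs the same written word.
PHONETIC_DICT = {
    "suis":  ["sHi", "sHiz"],   # "sHiz" covers the liaison before a vowel
    "parti": ["paRti"],
    "allé":  ["ale"],
}

def variants(word):
    """All (word, pronunciation) pairs attached to a word hypothesis."""
    return [(word, phones) for phones in PHONETIC_DICT.get(word, [])]
```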
In fact, in the 371 utterances of the evaluation sub-corpus we encounter about 300 out-of-vocabulary words. These words represent about 3.5% of the size of the vocabulary. They are not represented in the training corpus and their transcription does not exist in the phonetic dictionary.

Conclusion and perspectives

In this article we have briefly described, in terms of project progress, our large-vocabulary recognition system RAPHAEL and reported the first results obtained. Our goal is to improve the recognition rate through the use of contextual phonetic models and to enlarge the vocabulary used to more than 10,000 words. To reach this goal we will specialize the vocabulary in the tourism domain and use other corpora of spontaneous speech in this domain with a larger number of speakers. At the same time we will define a more elaborate exchange protocol with the translation module, in order to allow the communication of linguistic and statistical information to the translation module, still with the aim of improving the performance of our system.

Acknowledgements

We thank Alex Waibel for making JANUS-III available and Tanja Schultz for her scientific and technical support in the use of the results of the GlobalPhone project.

References
1 Hutchins W. J. (1986) Machine Translation: Past, Present, Future. Ellis Horwood, John Wiley & Sons, Chichester, England, 382 p.
2 Finke M., Geutner P., Hild H., Kemp T., Ries K., Westphal M. (1997) The Karlsruhe-Verbmobil Speech Recognition Engine. Proc. of ICASSP, Munich, Germany.
3 Boitet Ch. (1986) GETA's MT methodology and a blueprint for its adaptation to speech translation within C-STAR II. ATR International Workshop on Speech Translation, Kyoto, Japan.
4 Keller E. (1997) Simplification of TTS architecture versus operational quality. Proceedings of EuroSpeech'97, Rhodes, Greece.
5 Rabiner L., Juang B.H. (1993) Fundamentals of Speech Recognition. Prentice Hall, 507 p.
6 Haton J.P., Pierrel J.M., Perennou G., Caelen J., Gauvain J.L. (1991) Reconnaissance automatique de la parole. BORDAS, Paris, 239 p.
7 Schultz T., Waibel A. (1997) Fast Bootstrapping of LVCSR systems with multilingual phoneme sets. Proceedings of EuroSpeech'97, Rhodes, Greece.
8 Lamel L.F., Gauvain J.L., Eskenazi M. (1991) BREF, a Large Vocabulary Spoken Corpus for French. Proceedings of EuroSpeech'91, Genoa, Italy.
Multext-East: Parallel and Comparable Corpora and Lexicons for Six Central and Eastern European Languages

Ludmila DIMITROVA, Institute of Mathematics and Informatics, Sofia, Bulgaria, [email protected]
Nancy IDE, Dept. of Computer Science, Vassar College, Poughkeepsie, New York, USA, [email protected]
Vladimir PETKEVIC, Inst. of Theoretical and Computational Linguistics, Charles University, Prague, Czech Republic, [email protected]
Tomaz ERJAVEC, Institute Jozef Stefan, Ljubljana, Slovenia, [email protected]
Heiki-Jaan KAALEP, Dept. of General Linguistics, University of Tartu, Tartu, Estonia, [email protected]
Dan TUFIS, Romanian Academy Center for Artificial Intelligence, Bucharest, Romania, [email protected]

Abstract
The EU Copernicus project Multext-East has created a multi-lingual corpus of text and speech data, covering the six languages of the project: Bulgarian, Czech, Estonian, Hungarian, Romanian, and Slovene. In addition, wordform lexicons for each of the languages were developed. The corpus includes a parallel component consisting of Orwell's Nineteen Eighty-Four, with versions in all six languages tagged for part-of-speech and aligned to English (also tagged for POS). We describe the encoding format and data architecture designed especially for this corpus, which is generally usable for encoding linguistic corpora. We also describe the methodology for the development of a harmonized set of morphosyntactic descriptions (MSDs), which builds upon the scheme for western European languages developed within the EAGLES project. We discuss the special concerns for handling the six project languages, which cover three distinct language families.

Introduction
In order to provide resources to enable the efficient extraction of quantitative and qualitative information from corpora, several corpus development and distribution efforts have recently been established. However, few corpora exist for Central and Eastern European (CEE) languages, and corpus-processing tools that take into account the specific characteristics of these languages are virtually non-existent. The Multext-East Copernicus project¹ (Erjavec et al., 1997) was a spin-off of the LRE project Multext² (Ide and Véronis, 1994) intended to fill these gaps by developing significant resources for six CEE languages (Bulgarian, Czech, Estonian, Hungarian, Romanian, Slovene) that follow a consistent and principled encoding format and are maximally suited to easy processing by corpus-handling tools. To this end, Multext-East developed a corpus of parallel and comparable texts for the six CEE project languages, together with wordform lexicons and other language-specific resources. In the following sections we briefly describe the Multext-East corpora (text, speech) and the Multext-East lexicons and language-specific resources.

¹ http://nl.ijs.si/ME
² http://www.lpl.univ-aix.fr/projects/Multext/

1 The Multext-East corpora

1.1 Encoding format
Based on the principle that its corpus encoding format should be standardized and homogeneous both for interchange and for facilitating open-ended retrieval tasks, Multext-East adopted the Corpus Encoding Standard (CES)³ (Ide, 1998), which has been developed to be optimally suited for use in language engineering and corpus-based work. The CES is an application of SGML (ISO-8879, Standard Generalized Markup Language) and is based on the TEI Guidelines for Electronic Text Encoding and Interchange.

³ The CES was developed in a joint effort of the European projects Multext (LRE) and EAGLES (in particular, the EAGLES Text Representation subgroup), together with the Vassar/CNRS collaboration (supported by the U.S. National Science Foundation).
In addition to providing encoding conventions for elements relevant to corpus-based work, the CES provides a data architecture for linguistic corpora and their annotations. Each corpus component, comprising a single text and its annotations, is organized as a hyper-document, with various levels of annotation stored in separate SGML documents (each with a separate DTD). Low-density (i.e., above the token level) annotation is expressed indirectly in terms of inter-document links. Markup for different types of annotation (e.g., part of speech, alignment, etc.) is described by a separate Data Type Definition (DTD) specifically tailored to that information.

1.2 The parallel corpus
The Multext-East parallel corpus consists of seven translations of George Orwell's Nineteen Eighty-Four: besides the original English version, the corpus contains translations in the six project languages. There are three versions of each text in the parallel corpus, corresponding to different levels of annotation: a cesDoc encoding (SGML markup up to the sub-paragraph level, including markup for sentence boundaries); and a cesAna encoding, containing word-level morphosyntactic markup together with links to each sentence (and in some versions, to each word) in the cesDoc version. A fourth document, the cesAlign document, is associated with each of the non-English versions; it includes links between sentences in the cesDoc encoding of each version and the English version, thus providing a parallel alignment at the sentence level. The cesAna versions, which are the most linguistically informative, are marked up as shown below for the English phrase "smell of bugs":

<tok type=WORD from='Oen.1.6.15.1\62'>
<orth>smell</orth>
<disamb><base>smell</base><msd>Ncns</msd><ctag>NN</ctag></disamb>
<lex><base>smell</base><msd>Vmip-p</msd><ctag>VERB</ctag></lex>
<lex><base>smell</base><msd>Vmip1s</msd><ctag>VERB</ctag></lex>
<lex><base>smell</base><msd>Vmip2s</msd><ctag>VERB</ctag></lex>
<lex><base>smell</base><msd>Vmn</msd><ctag>VINF</ctag></lex>
<lex><base>smell</base><msd>Ncns</msd><ctag>NN</ctag></lex></tok>
<tok type=WORD from='Oen.1.6.15.1\68'>
<orth>of</orth>
<disamb><base>of</base><msd>Sp</msd><ctag>PREP</ctag></disamb>
<lex><base>of</base><msd>Sp</msd><ctag>PREP</ctag></lex></tok>
<tok type=WORD from='Oen.1.6.15.1\71'>
<orth>bugs</orth>
<disamb><base>bug</base><msd>Ncnp</msd><ctag>NNS</ctag></disamb>
<lex><base>bug</base><msd>Vmip3s</msd><ctag>VERB3</ctag></lex>
<lex><base>bug</base><msd>Ncnp</msd><ctag>NNS</ctag></lex></tok>

In this example, the position of each token in the parallel corpus is given in the from attribute, whose value specifies the hierarchical position of the token within the text (here, the token "smell" appears in part 1, chapter 6, paragraph 15, sentence 1, byte offset 62). All possible morphosyntactic interpretations of the token are given in the <lex> fields, each consisting of the base form, a morphosyntactic description (see Section 2), and an associated corpus tag. The <disamb> field contains the interpretation that has been identified as valid within the respective context; within this tag, the <ctag> element provides the corresponding corpus tag (see Section 2).⁴

⁴ In the Czech and Slovene versions, <ctag> is omitted because its contents are identical to the <msd> tag contents.
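As a rough illustration of how the cesAna markup can be consumed, the following sketch (ours, not a Multext-East tool) pulls the disambiguated reading out of each <tok> with regular expressions; real CES documents are SGML, not XML, and would call for a proper SGML parser.

    import re

    sample = """<tok type=WORD>
    <orth>smell</orth>
    <disamb><base>smell</base><msd>Ncns</msd><ctag>NN</ctag></disamb></tok>"""

    tok_re = re.compile(
        r"<orth>(?P<orth>[^<]+)</orth>.*?"
        r"<disamb><base>(?P<base>[^<]+)</base>"
        r"<msd>(?P<msd>[^<]+)</msd>", re.S)

    for m in tok_re.finditer(sample):
        print(m.group("orth"), m.group("base"), m.group("msd"))
    # prints: smell smell Ncns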
The disambiguation of each language version in the parallel corpus was accomplished using automatic POS tagging algorithms and then partially or entirely hand-validated. Table 1 provides the main characteristics per language of this corpus. In this table:

• tok = number of tokens
• words = number of lexical items (excluding punctuation)
• lex = number of MSD-based interpretations of the words in the text
• MSD/amb = average ratio of the number of lexical variants per word

The texts from the corpora were segmented using the corpus annotation toolset developed within the Multext project, augmented by language-specific resources developed by Multext-East. The Multext segmenter is a language-independent and configurable tokenizer whose output includes token, paragraph and sentence boundary markers. Punctuation, lexical items, numbers, and various alphanumeric sequences (such as dates and hours) are annotated with tags defined in a hierarchical, class-structured tagset. The language-specific behavior of the segmenter is enabled by its engine-driven design, in which all language-specific information is provided as data. Within Multext-East, resource data, including rules describing the form of sentence boundaries, word splitting (cliticized form decomposition), word compounding, quotations, numbers, dates, punctuation, capitalization, abbreviations, etc., was developed for the six project languages.

Once the input text was tokenized, a dictionary look-up procedure was used to assign each lexical token all its possible morphosyntactic descriptors (MSDs). The ambiguously MSD-annotated texts were then hand-disambiguated (entirely for some languages and partially for the others). This time-consuming and error-prone process was sped up significantly by a special XEMACS mode, developed within the project, which is aware of the morphosyntactic descriptors' significance and allows for natural language expansion of the linear encoding of the MSDs. The ambiguously MSD-annotated texts and the corresponding disambiguated texts provided the basis for building the cesAna encoded version of the multilingual parallel corpus.

The corpus also contains six language pair-wise alignments between each of the six project languages and English. The alignments were performed by three different automatic aligners (Multext-aligner, "vanilla-aligner", Silfide-aligner) with accuracy ranging between 75-90%, and then hand validated. Table 2 shows the distribution of sentence alignments for each pair of languages.

1.3 Multilingual comparable corpus
Multext-East also produced a multilingual comparable corpus, including two subsets of at least 100,000 words each for each of the six project languages. The texts include fiction, comprising a single novel or excerpts from several novels, and newspaper data. The data is comparable across the six languages in terms of the number and size of texts. The entire multilingual comparable corpus was prepared in CES format manually or using ad hoc tools.
Language    tokens   words    lex      MSD/amb  distinct  distinct
                                                words     lemmas
Bulgarian   101173   86020    156002   1.81     16348     8517
Czech       107769   79862    214368   2.68     19115     9161
English     118102   103997   214404   2.06     9745      7260
Estonian    94906    75433    147542   1.96     17870     8873
Hungarian   98426    80705    111945   1.39     20316     10387
Romanian    118063   101508   189695   1.87     15225     7433
Slovene     100358   90792    187562   2.07     17861     7916

Table 1: Corpus characteristics

Finno-Ugric Languages

Estonian-English             Hungarian-English
Align  Nr.   Proc            Align  Nr.   Proc
3-1    2     0.030321%       7-0    1     0.014997%
2-2    3     0.045482%       4-1    1     0.014997%
2-1    60    0.909642%       3-1    7     0.104979%
1-3    1     0.015161%       3-0    1     0.014997%
1-2    100   1.516070%       2-1    108   1.619676%
1-1    6426  97.422680%      1-6    1     0.014997%
1-0    1     0.015161%       1-5    1     0.014997%
0-2    1     0.015161%       1-2    46    0.689862%
0-1    2     0.030321%       1-1    6479  97.165567%
                             0-4    1     0.014997%
                             0-2    3     0.044991%
                             0-1    19    0.284943%

Romance Language

Romanian-English
Align  Nr.   Proc
3-1    3     0.046656%
2-4    1     0.015552%
2-3    3     0.046656%
2-2    2     0.031104%
2-1    85    1.321928%
2-0    1     0.015552%
1-5    1     0.015552%
1-3    14    0.217729%
1-2    259   4.027994%
1-1    6047  94.043546%
0-3    2     0.031104%
0-2    2     0.031104%
0-1    10    0.155521%

Slavic Languages

Bulgarian-English            Czech-English                Slovene-English
Align  Nr.   Proc            Align  Nr.   Proc            Align  Nr.   Proc
2-2    2     0.030017%       4-1    1     0.015029%       3-3    1     0.014970%
2-1    23    0.345190%       3-1    2     0.030057%       2-1    48    0.718563%
1-2    72    1.080594%       2-1    109   1.638112%       1-5    1     0.014970%
1-1    6558  98.424133%      1-3    2     0.030057%       1-2    53    0.793413%
0-1    8     0.120066%       1-2    81    1.217313%       1-1    6572  98.383234%
                             1-1    6438  96.753832%      1-0    2     0.029940%
                             0-1    21    0.315600%       0-1    3     0.044910%

Table 2: Distribution of sentence alignments

2 Morpho-lexical resources

Multext-East, in collaboration with EAGLES, evaluated, adapted and extended the EAGLES morphosyntactic specifications (rule format, lexical specifications, corpus tagset, etc.) to cover the six Multext-East languages (Erjavec and Monachini, 1997). Accommodating the different language families represented among the Multext-East languages demanded substantial assessment and modification of the pre-existing specifications, which were originally developed for western European languages only.

For corpus morpho-lexical processing purposes, the Multext-East consortium developed language-specific wordform dictionaries, which, for all languages except Estonian and Hungarian, contain the full inflectional paradigm for at least the lemmas appearing in the corpus. Each dictionary entry has the following structure:

wordform [TAB] lemma [TAB] MSD

where wordform represents an inflected form of the lemma, characterised by a combination of feature values encoded by a Morphosyntactic Description (MSD). The Multext-East lexicons and MSDs are fully described in Tufis, Ide, and Erjavec (1998). A general overview of the lexicons is shown in Table 3. The Entries column provides the number of dictionary entries, that is, triplets <wordform lemma MSD>. The Wordforms column gives the number of distinct wordforms appearing in the lexicon, irrespective of their lemma and MSD. The Lemma column gives the number of distinct lemmas in the lexicon, eliminating duplications that appear due to lemma homography. The difference between the Lemma and "=" fields provides an estimate of the number of homographic lemmas. The MSD field gives the total number of distinct MSDs used in the encoding of the lexicon stock. The last two columns in Table 3 (AMB_POS and AMB_MSD) provide information about the number of ambiguity classification clusters.
An ambiguity classification cluster provides the number of ways a homographic wordform can be classified. AMB_POS ("part of speech ambiguity") and AMB_MSD ("MSD ambiguity") provide the classification based on the part of speech and MSD, respectively. The number of ambiguity classes (based either on POS or MSD) is a key figure in estimating the space needed to construct a statistical language model (such as an HMM) useful for morphosyntactic disambiguation. This number was a key factor in the tagset design. For several of the project languages and for English, a set of corpus tags has also been developed which are appropriate for use with stochastic disambiguators. Where corpus tags have been developed, mapping rules from MSDs to corpus tags (n-to-1 mapping) are also provided as a resource.

Language   Entries  Wordforms  Lemmas  =      MSD   AmbPOS  AmbMSD
English    66469    43455      22571   25813  132   47      248
Romanian   440363   347960     33259   35421  674   90      981
Slovene    539213   191728     15671   15863  2044  48      1185
Czech      133803   41601      14458   14684  915   35      698
Bulgarian  333779   284211     18864   19071  185   42      400
Estonian   130409   89180      22054   23384  563   63      1012
Hungarian  59614    46886      15838   17380  603   62      890

Table 3: Multilingual Lexicon Overview
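Figures like AMB_POS and AMB_MSD can be derived mechanically from the wordform lexicons. The sketch below (ours, not a project tool) counts ambiguity classes from entries in the wordform-lemma-MSD format shown above, taking the first character of an MSD as its part of speech; the two-line lexicon is an invented example.

    from collections import defaultdict

    def ambiguity_classes(lexicon_lines):
        # lexicon_lines: "wordform<TAB>lemma<TAB>MSD" strings
        msds = defaultdict(set)
        for line in lexicon_lines:
            wordform, lemma, msd = line.split("\t")
            msds[wordform].add(msd)
        # a wordform is ambiguous if it has more than one MSD (or POS)
        amb_msd = {frozenset(v) for v in msds.values() if len(v) > 1}
        amb_pos = {frozenset(m[0] for m in v) for v in msds.values()
                   if len({m[0] for m in v}) > 1}
        return amb_pos, amb_msd

    pos, msd = ambiguity_classes(["smell\tsmell\tNcns", "smell\tsmell\tVmn"])
    print(len(pos), len(msd))   # prints: 1 1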
Conclusion
The multilingual resources (lexicons, rules, corpora) developed in Multext-East are among the most comprehensive resources currently available for most of the project languages. In addition to resource development, the work carried out in Multext-East has contributed significantly to defining general mechanisms for lexical specification, and it has provided a test of the extensibility of standards and tools beyond the languages for which they were originally developed. All Multext-East resources and tools are distributed, at cost, on CD-ROM through the TELRI project⁵ (see Erjavec, Lawson, and Romary, 1998).

⁵ http://www.ids-mannheim.de/telri/

Acknowledgements
This project was supported by EU Copernicus Project COP106. The U.S. portion was supported in part by NSF grant IRI-9413451. We would like to thank the following for their contribution to the project: G. Priest-Dorman, A.M. Barbu, C. Popescu, V. Patrascu, G. Rotariu, S. Bruda, J. Véronis, S. Harié, P. DiCristo, L. Sinapova, R. Pavlov, K. Simov, D. Popov, S. Vidinska, M. Hnatkova, J. Hajic, B. Hladka, A. Bizjak, P. Jakopin, M. Romih, O. Vukovic, and M. Boldea. We would also like to acknowledge Laboratoire Parole et Langage, CNRS, Aix-en-Provence, which coordinated the project.

References
Erjavec, T., Ide, N., and Tufis, D. (1998). Standardized Specifications, Development and Assessment of Large Morpho-Lexical Resources for Six Central and Eastern European Languages. First International Language Resources and Evaluation Conference, Granada, Spain.
Erjavec, T. and Monachini, M. (Eds.) (1997). Specifications and Notation for Lexicon Encoding. Deliverable D1.1F, Multext-East Project COP-106. http://nl.ijs.si/ME/CD/docs/mte-d11f/.
Erjavec, T., Ide, N., Petkevic, V. and Véronis, J. (1996). Multext-East: Multilingual Text Tools and Corpora for Central and Eastern European Languages. Proceedings of the Trans European Language Resource Infrastructure First Conference, Tihany, pp. 87-98.
Erjavec, T., Lawson, A. and Romary, L. (1998). East meets West: Producing Multilingual Resources in a European Context. First International Language Resources and Evaluation Conference, Granada, Spain.
Ide, N. (1998). Corpus Encoding Standard: SGML Guidelines for Encoding Linguistic Corpora. First International Language Resources and Evaluation Conference, Granada, Spain. See also http://www.cs.vassar.edu/CES/.
Ide, N. and Véronis, J. (1994). Multext (Multilingual Tools and Corpora). Proceedings of the 15th International Conference on Computational Linguistics, COLING'94, Kyoto, pp. 90-96.
Error Driven Word Sense Disambiguation

Luca Dini and Vittorio Di Tomaso, CELI
{dini,ditomaso}@celi.sns.it
Frédérique Segond, Xerox Research Centre Europe
[email protected]

Abstract
In this paper we describe a method for performing word sense disambiguation (WSD). The method relies on unsupervised learning and exploits functional relations among words as produced by a shallow parser. By exploiting an error driven rule learning algorithm (Brill 1997), the system is able to produce rules for WSD, which can be optionally edited by humans in order to increase the performance of the system.

1 Introduction
Although automatic word sense disambiguation (WSD) remains a much more difficult task than part of speech (POS) disambiguation, resources and automatic systems are starting to appear. Some of these systems are even mature enough to be evaluated. This paper presents an overview of a system for English WSD which will be evaluated in the context of the SENSEVAL project.¹

¹ http://www.itri.bton.ac.uk/events/senseval

We report on performing automatic WSD using a specially-adapted version of Brill's error driven unsupervised learning program (Brill, 1997), originally developed for POS tagging. In our experiment, like in Resnik (1997), we used both functional and semantic information in order to improve the learning capabilities of the system. Indeed, by having access to a syntactic and functional sketch of sentences, and by being able to stipulate which relations are important for sentence meaning, we overcame some of the traditional problems found in continuous bigram models, such as the occurrence of interpolated clauses and passive constructions.

Consider, for example, temporal expressions like Tuesday in The stock market Tuesday staged a technical recovery. Such expressions are quite frequent in newspaper text, often appearing near verbs. Without any functional information, the semantic rules produced by the algorithm will stipulate a strong semantic relation between the semantic class of words like Tuesday and the semantic class of verbs like stage. On the contrary, if we use information from a shallow parser, we know that Tuesday is an adverbial expression, probably part of the verb phrase, and that the really important relation to learn is the one between the subject and the verb.

In the following sections we describe (i) the resources we used (Penn Tree Bank, 45 upper level WordNet tags); (ii) the experiment we ran using rule induction techniques on functional relations (functional relation extraction, tag merging, corpus preparation and learning); (iii) the evaluation we performed on the semantically hand-tagged part of the Brown corpus; and, finally, we sketch out the general architecture we are in the process of implementing.

2 The Resources
We decided to take advantage of the syntactic structures already contained in the Penn Tree Bank (PTB) (Mitchell et al., 1995) in order to build a large set of functional relation pairs (much as in Resnik (1997)). These relations are then used to learn how to perform semantic disambiguation. To distinguish word meanings we use the top 45 semantic tags included in WordNet (Miller, 1990). The non-supervised Brill algorithm is used to learn and then to apply semantic disambiguation rules. The semantically hand-tagged Brown Corpus is used to evaluate the performance of automatically acquired rules.

2.1 Obtaining Functional Structures
We consider as crucial for semantic disambiguation the following functional relations: SUBJ/VERB, VERB/OBJ, VERB/PREP/PREP-OBJ, NOUN/PREP/PREP-OBJ. In order to extract them, we parsed the PTB structures using Zebu (Laubusch, 1994), an LALR(1) parser implemented in LISP. The parser scans the trees, collecting information about relevant functional relations and writing them out in an explicit format. For instance, the fragment you do something to the economy, after some intermediate steps which are described in Dini et al. (1998a) and Dini et al. (1998b), is transformed into:

HASOBJ do something
HASSBJ do you
PREPMOD do TO economy

2.2 Adding Lexical Semantics
The WordNet team has developed a general semantic tagging scheme where every set of synonymous senses, synsets, is tagged with one of 45 tags as in WordNet version 1.5. We use these tags to label all the content words contained in extracted functional relations. We associate each word with all its possible senses ordered in a canonical way. The semantically tagged version of the sample sentence given above is:

HASOBJ do/stative_social_motion_creation_body something/top
HASSBJ do/stative_social_motion_creation_body you/person
PREPMOD do/stative_social_motion_creation_body TO economy/group_cognition_attribute_act

2.3 Preparing the input
As a result of adding lexical semantics we get a triple <functional relation, wordi/tagseti, wordj/tagsetj>, but in its current formulation, the unsupervised learning algorithm is only able to learn relations holding among bigrams. Thus, it can learn either relations between a functional relation name (e.g. "HASOBJ") and a tagset, or between tagsets, without considering the relation between them. In both cases we report a loss of information which is fatal for the learning of proper rules for semantic disambiguation. There is an intuitive solution to this problem: most of the relations we are interested in are dyadic in nature. For example, adjectival modification is a relation holding between two heads (MOD(h1,h2)). Also relations concerning verbal arguments can be split, in a neo-davidsonian perspective, into more atomic relations such as "SUBJ(h1,h2)", "OBJ(h1,h2)". These relations can be translated into a "bigram format" by assuming that the relation itself is incorporated among the properties of the involved words (e.g. w1/IS-OBJ w2/IS-HEAD). Learnable properties of words are standardly expressed through tags. Thus, we can merge functional and semantic tags into a single tag (e.g. w1/IS-OBJ w2/IS-HEAD + w1/2_3 w2/4 becomes w1/IS-OBJ2_IS-OBJ3 w2/IS-HEAD4). The learner acquires constraints which relate functional and semantic information, as planned in this experiment. We obtain the following format, where every line of the input text represents what we label an FS-pair (Functional Semantic pair):

do/HASOBJ_42_41_38_36_29 something/HASOBJ-1
do/HASSBJ_42_41_38_36_29 you/HASSBJ-1

where relations labelled with -1 are just inverse relations (e.g. HAS-SUBJ-1 = IS-SUBJ-OF). Functional relations involving modification through prepositional phrases are ternary, as they involve the preposition, the governing head and the governed head. Crucially, however, only substantive heads receive semantic tags, which allows us to condense the preposition form in the FS tags as well. The representation of the modification structure of the phrase do to the economy becomes:

do/MOD-TO_42_41_38_36_29 economy/MOD-TO-1_14_9_7_4
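A toy sketch of this tag-merging step (ours, not the authors' code; the numeric sense tags follow the WordNet file numbers used in the examples above, and the single tag attached to "something" is purely illustrative):

    def merge_tags(word, func, senses):
        # e.g. ("do", "HASOBJ", [42, 41, 38, 36, 29])
        #   -> "do/HASOBJ_42_41_38_36_29"
        return "%s/%s_%s" % (word, func, "_".join(str(s) for s in senses))

    # one FS-pair becomes a two-token "sentence" for the bigram learner
    fs_pair = (merge_tags("do", "HASOBJ", [42, 41, 38, 36, 29]),
               merge_tags("something", "HASOBJ-1", [3]))  # [3] is invented
    print(" ".join(fs_pair))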
3 Unsupervised Learning for WSD
Sufficiently large texts should contain good cues to learn rules for WSD in terms of selectional preferences.²

² By selectional preferences we mean both the selection of semantic features of a dependent given a certain head and its inverse (i.e. selection of a head's semantic features by a dependent constituent).

The crucial assumption in using functional relations for WSD is that, when compositionality holds, selectional preferences can be checked through an intersection operation between the semantic features of the syntactically related lexical items. By looking at functional relations that contain at least one non-ambiguously tagged word, we can learn evidence for disambiguating ambiguous words appearing in the same context. So, if we know that in the sentence John went to Milan the word Milan is unambiguously tagged as place, we learn that in a structure GO to X, where GO is a verb of the same semantic class as the word go and X is a word containing place among its possible senses, then X is disambiguated as place.
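A schematic rendering of this intersection check (ours, purely illustrative; the sense inventory and the learned preference are invented):

    # learned from unambiguous cases: motion verbs select 'place'
    # for the object of their to-PP
    preferences = {"motion": {"place"}}

    def disambiguate(senses, governor_class):
        picked = senses & preferences.get(governor_class, set())
        return picked if picked else senses   # leave ambiguous if no cue

    print(disambiguate({"place", "person"}, "motion"))   # -> {'place'}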
Here we will describe only "original" and "goodlog", because "paper" dif- fers from "original" only for some implementa- tion details. In the method called "original", at every it- eration step the best scored disambiguation rule is learned, and the score of a rule is computed, according to Brill, in the following way: assume that Change the tag of a word from ~ to Y in context Cis arule (Y E ~). Call R the tag Z which maximizes the following function (where Z ranges over all the tags in ~ except Y, freq(Y) is the number of occurences of words unambigu- ously tagged with Y, freq(Z) is the number of occurences of words unambiguously tagged with Z, and incontext( Z, C) is the number of times a word unambiguously tagged with Z occurs in context C): freq(Y)*incontext( Z,C) R = argmaxz ]req(Z) The score assigned to the rule would then be: S: incontext(Y, C) - freq(Y)*incontext(R,C) freq(R) In short, a good transformation from ~ to Y is one for which alternative tags in ~ have either very low frequency in the corpus or they seldom appear in context C. At every iteration cycle, the algorithm simply computes the best scoring transformation. The method "goodlog" uses a probabilistic measure which minimizes the effects of tag fre- quenc, adopting this is the formula for giving a score to the rule that selects the best tag Y in a context C (Y and Z belong to the ambiguous tagset): S ,~ . i, tincontext(YC) * ]req(Z) ~ = argrnaxy(~)aos(logt ]req(Y) incontext(Z,C) ")) The differences in results between the different scoring methods are reported and commented on in section 4 in table 1. 4 Evaluation For the evaluation we used as test corpus the sub- set of the Brown corpus manually tagged with the 45 top-level WordNet tags. We started with the Penn Tree Bank representation and went through all the necessary steps to build FS-pairs for a tag or a word to appear in a rule: i) the minimal frequency of a tag; ii) the minimal frequency of a word in the corpus. We set the first parameter to 400 (that is, we asked the learner to consider only the 400 most frequent TagSets) and we ignored the second one (that is we asked the learner to consider all words in the corpus). 322 used by the applier. These FS pairs were then labelled according to the manual codification and used as a standard for evaluation. We also pro- duced, from the same source, a randomly tagged corpus for measuring the improvements of our system with respect to random choice. The results of comparing the randomly tagged corpus and the corpus tagged by our system using the methods "original" and "goodlog" are shown in table 1. As usual, Precision is I II Precision I RecMI I F-measure Adjusted I Random 0.45 0.44 0.44 0.28 500 Goodlog 0.97 0.25 0.40 0.91 "500 Original 0.78 0.30 0.44 0.50 Table 1: Precision and recall figures the number of correctly tagged words divided by the total number of tagged words; Recall is the number of correctly tagged words di- vided by the number of words in the test cor- pus (about 40000). F-measure is (2*Preci- sion*Recall)/(Precison+Recall). The column la- belled "Adjusted" reports the Precision taking into account non-ambiguous words. The ad- justed precision is computed in the following way: (Correct - unambiguous words) / ((Cor- rect + Uncorrect) - unambiguous words). On an absolute basis, our results improve on those of Resnik (1997). who used an information-theory model of selectional strength preference rather than an error-driven learning algorithm. 
In- deed, if we compare the "Adjusted" measure we obtained with a set of about 500 rules (50% precision), with the average reported by Resnik (1997) (41°~ precision), we obtain an advantage of 10 points, which, for a task suchas WSD, is noteworthy. For comparison with other experi- ments, refer to Resnik (1997). It is interesting to compare the figures pro- vided by "'goodlog" and "original". Since "good- log" smooths the influence of absolute tag fre- quency, the learned rules achieve much higher precision, even though they are less efficient in terms of the number of words they can disam- biguate. This is due to the fact that the most fre- quent words also tend to be the most ambiguous ones, thus the ones for which the task of WSD is most difficult (cf. Dini et al. (1998a)). 5 Towards SENSEVAL As mentioned above, the present system will be adopted in the context of the SENSEVAL project, where we will adopt the Xerox Incre- mental Finite State Parser, which is completely based on finite state technology. Thus, in the present pilot experiment, we are only interested in relations which could reasonably be captured by a shallow parser, and complex informative relations present in the Penn Tree Bank are simply disregarded during the parsing step de- scribed in section 2.1. Also, structures which are traditionally difficult to parse through Finite State Automata, such as incidental and paren- thetic clauses or coordinate structures, are dis- carded from the learning corpus. This might have caused a slight decrease in the performance of the system. Some additional decrease might have been caused by noise introduced by incorrect assign- ment of senses in context during the learning phase (see Schuetze et al. (1995)). In particu- lar, the system has to face the problem of sense assignment to named entities such as person or industry names. Since we didn't use any text preprocessor, we simply made the assumption that any word having no semantic tag in Word- Net, and which is not a pronoun, is assigned the label human. This assumption is certainly questionable and we adopted it only as a work- ing hypothesis. In the following rounds of this experiment we will plug in a module for named entity recognition in order to improve the per- formance of the system. Another issue that will be tackled in the SEN- SEVAL project concerns word independence. In this experiment we duplicated lexical heads when they were in a functional relation with different items. This permitted an easy adaptation to the input specification of the Brill learner, but it has drawbacks both in the learning and the applica- tion phase. During the learning phase the in- ability to capture the identity of the same lexical head subtracts evidence for the learning of new rules. For instance, assume that at an iteration cycle n the algorithm has learned that verbal in- formation is enough to disambiguate the word cat as animal in the wild cat mewed. Since the FS-pairs cat/mew and wild/cat are autonomous, at cycle n + 1 the learner will have no evidence to learn that the adjective wild tends to associate 323 with nouns of type animal. On the contrary, cat, as appearing in wild cat, will still be ambiguous. The consequences of assuming independence of lexical heads are even worse in the rule ap- plication phase. First, certain words are disam- biguated only in some of the instances in which they appear, thus producing a decrease in terms of recall. 
Second, there might be a case where the same word is tagged differently according to the relations into which it enters, thus causing a decrease in terms of precision. Both problems will be overcome by the new Java-based versions of the Brill learner and applier which have been developed at CELI. When considering the particular WSD task, it is evident that the information conveyed by ad- jectives and pre-nominal modifiers is at least as important as that conveyed by verbs, and it is statistically more prominent. In the corpus ob- tained from parsing the PTB, approximately of FS-pairs are represented by pre-nominal mod- ification (roughly analogous to the subject-verb FS-pairs and more frequent than the object-verb pairs, which amount to 1 of the whole corpus). But adjectives receive very poor lexical-semantic information from WordNet. This forced us to ex- clude them both fl'om the training and test cor- pora. This situation will again improve in the SENSEVAL experiment with the adoption of a different semantic lexicon. 6 Conclusion We presented a WSD system with reasonable results as well as suggestions for improving it. We will implement these improvements in the context of the SENSEVAL experiment and we plan to extend the system to other languages, with special attention to French and Italian. 6 In- deed, the availability of lexical resources provid- ing a word sense classification with roughly the same granularity of the 45 top classes of Wordnet makes our method applicable also to languages for which no sense tagged corpora has been pro- duced. In the long run, these extensions will lead, we hope, to better systems for foreign lan- guage understanding and machine translation. Acknowledgements We are grateful to Ken Beesley, Andrea Bolioli, Gregory Grefenstette, 6The system will be used in the MIETTA project (LE4-8343) for enhancing the performance of the infor- mation extraction and information retrieval module. David Hull, Hinrich Schuetze and Annie Zaenen for their comments and discussion on earlier ver- sions of this paper. Our gratitude also goes to Vincent Nainemoutou and Herve Poirier for pro- viding us with technical support. Any remaining errors are our own fault. References E. Brill and P. Resnik. 1994. A rule-based ap- proach to prepositional phrase attachment dis- ambiguation. In Proceedings of COLING. E. Brill. 1997. Unsupervised learning of dis- ambiguation rules for part of speech tagging. In Natural Language Processing Using Very Large Corpora. Kluwer Academic Press. Dini, L., V. Di Tomaso, F. Segond 1998. Error Driven Unsupervised Semantic Disambigua- tion. In Proceedings of TANLPS ECML-98. Chemnitz, Germany. Dini, L., V. Di Tomaso, F. Segond 1998. Word Sense Disambiguation with Functional Rela- tion. In Proceedings of LREC-98. Granada, Spain. J. Laubusch. 1994. Zebu: A tool for specifying reversible LARL(1) parsers. G. Miller. 1990. Wordnet: An on-line lexical database. Int. Journal of Lexicography. M. Mitchell, B. Santorini, and M.A. Marcinkiewicz. 1995. Building a large anno- tated corpus of English : the Penn Treebank. Computational Linguistics, (19) :313-330. P. Resnik and D. Yarowsky. 1997. A perspec- tive on word sense disambiguation methods and their evaluation. In Proceedings of ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?, Washington, D.C., USA. P. Resnik. 1997. Selectional preference and sense disambiguation. 
In Proceedings of ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?, Washington, D.C., USA. H. Schuetze, , and J. Pedersen. 1995. Informa- tion retrieval based on word senses. In Pro- ceedings 4th Annual Symposium on Document Analysis and Information Retrieval, Las Ve- gas, USA. D. Yarowsky. 1995. Unsupervised word sense disambiguation method rivaling supervised methods. In Proceedings of the A CL. 324
An Empirical Investigation of Proposals in Collaborative Dialogues

Barbara Di Eugenio, Pamela W. Jordan, Johanna D. Moore, Richmond H. Thomason
Learning Research & Development Center, and Intelligent Systems Program
University of Pittsburgh, Pittsburgh, PA 15260, USA
{dieugeni,jordan,jmoore,thomason}@isp.pitt.edu

Abstract
We describe a corpus-based investigation of proposals in dialogue. First, we describe our DRI-compliant coding scheme and report our inter-coder reliability results. Next, we test several hypotheses about what constitutes a well-formed proposal.

1 Introduction
Our project's long-range goal (see http://www.isp.pitt.edu/~intgen/) is to create a unified architecture for collaborative discourse, accommodating both interpretation and generation. Our computational approach (Thomason and Hobbs, 1997) uses a form of weighted abduction as the reasoning mechanism (Hobbs et al., 1993) and modal operators to model context. In this paper, we describe the corpus study portion of our project, which is an integral part of our investigation into recognizing how conversational participants coordinate agreement. From our first annotation trials, we found that the recognition of "classical" speech acts (Austin, 1962; Searle, 1975) by coders is fairly reliable, while recognizing contextual relationships (e.g., whether an utterance accepts a proposal) is not as reliable. Thus, we explore other features that can help us recognize how participants coordinate agreement.

Our corpus study also provides a preliminary assessment of the Discourse Resource Initiative (DRI) tagging scheme. The DRI is an international "grass-roots" effort that seeks to share corpora that have been tagged with the core features of interest to the discourse community. In order to use the core scheme, it is anticipated that each group will need to refine it for their particular purposes. A usable draft core scheme is now available for experimentation (see http://www.georgetown.edu/luperfoy/Discourse-Treebank/dri-home.html). Whereas several groups are working with the unadapted core DRI scheme (Core and Allen, 1997; Poesio and Traum, 1997), we have attempted to adapt it to our corpus and particular research questions.

First we describe our corpus and the issue of tracking agreement. Next we describe our coding scheme and our intercoder reliability outcomes. Last we report our findings on tracking agreement.

2 Tracking Agreement
Our corpus consists of 24 computer-mediated dialogues¹ in which two participants collaborate on a simple task of buying furniture for the living and dining rooms of a house (a variant of the task in (Walker, 1993)). The participants' main goal is to negotiate purchases; the items of highest priority are a sofa for the living room and a table and four chairs for the dining room. The problem solving task is complicated by several secondary goals: 1) Match colors within a room, 2) Buy as much furniture as you can, 3) Spend all your money. A point system is used to motivate participants to try to achieve as many goals as possible. Each subject has a budget and an inventory of furniture that lists the quantities, colors, and prices for each available item. By sharing this initially private information, the participants can combine budgets and select furniture from either's inventory. The problem is collaborative in that all decisions have to be consensual; funds are shared and purchasing decisions are joint.

¹ Participants work in separate rooms and communicate via the computer interface. The interface prevents interruptions.

In this context, we characterize an agreement as accepting a partner's suggestion to include a specific furniture item in the solution. In this paper we will focus on the issue of recognizing that a suggestion has been made (i.e. a proposal). The problem is not easy, since, as speech act theory points out (Austin, 1962; Searle, 1975), surface form is not a clear indicator of speaker intentions. Consider excerpt (1):²

(1) A: [35]: i have a blue sofa for 300.
       [36]: it's my cheapest one.
    B: [37]: I have 1 sofa for 350
       [38]: that is yellow
       [39]: which is my cheapest,
       [40]: yours sounds good.

² We broke the dialogues into utterances, partly following the algorithm in (Passonneau, 1994).

[35] is the first mention of a sofa in the conversation
In this context, we characterize an agreement as accepting a partner's suggestion to include a specific furniture item in the solution. In this paper we will focus on the issue of recognizing that a suggestion has been made (i.e. a proposal). The problem is not easy, since, as speech act theory points out (Austin, 1962; Searle, 1975), surface form is not a clear indi- cator of speaker intentions. Consider excerpt (1): 2 (1) A: [35]: i have a blue sofa for 300. [36]: it's my cheapest one. B: [37]: I have 1 sofa for 350 [38]: that is yellow [39]: which is my cheapest, [40]: yours sounds good. [35] is the first mention of a sofa in the conversa- x Participants work in separate rooms and communicate via the computer interface. The interface prevents interruptions. 2We broke the dialogues into utterances, partly following the algorithm in (Passonneau, 1994). 325 tion and thus cannot count as a proposal to include it in the solution. The sofa A offers for considera- tion, is effectively proposed only after the exchange of information in [37]--[39]. However, if the dialogue had proceeded as below, [35'] would count as a proposal: (2) B: [32']: I have 1 sofa for 350 [33']: that is yellow [34']: which is my cheapest. A: [35']: i have a blue sofa for 300. Since context changes the interpretation of [35], our goal is to adequately characterize the context. For this, we look for guidance from corpus and domain features. Our working hypothesis is that for both participants context is partly determined by the do- main reasoning situation. Specifically, if the suitable courses of action are highly limited, this will make an utterance more likely to be treated as a proposal; this correlation is supported by our corpus analysis, as we will discuss in Section 5. 3 Coding Scheme We will present our coding scheme by first describing the core DR/ scheme, followed by the adaptations for our corpus and research issues. For details about our scheme, see (Di Eugenio et al., 1997); for details about features we added to DR/, but that are not relevant for this paper, see (Di Eugenio et al., 1998). 3.1 The DRI Coding Scheme The aspects of the core DR/scheme that apply to our corpus are a subset of the dimensions under Forward- and Backward-Looking Functions. 3.1.1 Forward-Looking Functions This dimension characterizes the potential effect that an utterance Ui has on the subsequent dialogue, and roughly corresponds to the classical notion of an illocutionary act (Austin, 1962; Searle, 1975). As each Ui may simultaneously achieve multiple effects, it can be coded for three different aspects: State- ment, Influence-on-Hearer, Influence-on-Speaker. Statement. The primary purpose of Statements is to make claims about the world. Statements are sub- categorized as an Assert when Speaker S is trying to change Hearer H's beliefs, and as a Reassert if the claim has already been made in the dialogue. Influence-on-Hearer (I-on-H). A Ui tagged with this dimension influences H's future action. DR/dis- tinguishes between S merely laying out options for H's future action (Open-Option), and S trying to get H to perform a certain action (see Figure 1). Infe- R°quest includes all actions that request informa- tion, in both explicit and implicit forms. All other actions 3 are Action-Directives. 3Although this may cause future problems (Tuomela, . . . . . . . . . . . . . . . . . . . . . . . i' Is S discussing potential actions of H? ',--Is S ~'-th-g-to get H to d . . . . . thing? : Open-Op.on ....... ;;-/ ....... -%.o..o. 
Is 14 supposed to provide information'? [ .... 3 ( ^otio..Diroo.vo Figure 1: Decision Tree for Influence-on-Hearer Influence-on-Speaker (I-on-S). A Ui tagged with this dimension potentially commits S (in varying de- grees of strength) to some future course of action. The only distinction is whether the commitment is conditional on H's agreement (Offer) or not (Com- mit). With an Offer, S indicates willingness to com- mit to an action if H accepts it. Commits include promises and other weaker forms. 3.1.2 Backward Functions This dimension indicates whether Ui is unsolicited, or responds to a previous Uj or segment. 4 The tags of interest for our corpus are: • Answer: Ui answers a question. • Agreement: 1. Ui Accept/Rejects if it indicates S's attitude to- wards a belief or proposal embodied in its an- tecedent. 2. Ui Holds if it leaves the decision about the pro- posal embodied in its antecedent open pending further discussion. 3.2 Refinements to Core Features The core DRI manual often does not operationalize the tests associated with the different dimensions, such as the two dashed nodes in Figure 1 (the shaded node is an addition that we discuss below). This resulted in strong disagreements regarding Forward Functions (but not Backward Functions) during our initial trials involving three coders. Statement, In the current DR/manual, the test for Statement is whether Ui can be followed by "That's not true.". For our corpus, only syntactic imperatives or interrogatives were consistently fil- tered out by this purely semantic test. Thus, we refined it by appealing to syntax, semantics, and do- main knowledge: Ui is a Statement if it is declarative 1995), DRI considers joint actions as decomposable into in- dependent Influence-on-Speaker / Hearer dimensions. 4Space constraints prevent discussion of segments. 326 and it is 1) past; or 2) non past, and contains a sta- tive verb; or 3) non past, and contains a non-stative verb in which the implied action: • does not require agreement in the domain; • or is supplying agreement. For example, We could start in the living room is not tagged as a statement if meant as a suggestion, i.e. if it requires agreement. I-on-H and I-on-S. These two dimensions de- pend on the potential action underlying U~ (see the root node in Figure 1 for I-on-H). The initial dis- agreements with respect to these functions were due to the coders not being able to consistently identify such actions; thus, we provide a definition for ac- tions in our domain, s and heuristics that correlate types of actions with I-on-H/I-on-S. We have two types of potential actions: put fur- niture item X in room Y and remove furniture item X from room Y. We subcategorize them as specific and general. A specific action has all necessary pa- rameters specified (type, price and color of item, and room). General actions arise because all necessary parameters are not set, as in I have a blue sofa ut- tered in a null context. Heuristic for I-on-H (the shaded node in Fig- ure 1). If H's potential action described by Ui is specific, Ui is tagged as Action-Directive, otherwise as Open-Option. Heuristic for I-on-S. Only a Ui that describes S's specific actions is tagged with an 1-on-S tag. Finally, it is hard to offer comprehensive guidance for the test is S trying to get H to do something? in Figure 1, but some special cases can be isolated. 
For instance, when S refers to one action that the partic- ipants could undertake, but in the same turn makes it clear the action is not to be performed, then S is not trying to get H to do something. This happens in excerpt (1) in Section 2. A specific action (get B's $350 yellow sofa) underlies [38], which qualifies as an Action-Directive just like [35]. However, because of [40], it is clear that B is not trying to get A to use B's sofa. Thus, [38] is tagged as an Open-Option. 3.3 Coding for problem solving features In order to investigate our working hypothesis about the relationship between context and limits on the courses of action, we coded each utterance for fea- tures of the problem space. Since we view the prob- lem space as a set of constraint equations, we decided to code for the variables in these equations and the number of possible solutions given all the possible assignments of values to these variables. The variables of interest for our corpus are the ob- jects of type t in the goal to put an object in a room (e.g. varsola, vartabte or varchairs). For a solution to 5Our definition of actions does not apply to Into-Requests, as the latter are easy to recognize. 327 [[ Stat. [I-on-H II-on-S H Answer [Agr. II II "831 .72 I .72 II .79 I .54 II Table 1: Kappas for Forward and Backward Func- tions exist to the set of constraint equations, each varl in the set of equations must have a solution. For exam- ple, if 5 instances of sofas are known for varsola, but every assignment of a value to varsoIa violates the budget constraint, then varsola and the constraint equations are unsolvable. We characterize the solution size for the problem as determinate if there is one or more solutions and indeterminate otherwise. It is important to note that the set of possible values for each vari is not known at the outset since this information must be exchanged during the interaction. If S supplies ap- propriate values for vari but does not know what H has available for it then we say that no solution is possible at this time. It is also important to point out that during a dialogue, the solution size for a set of constraint equations may revert from determinate to indeterminate (e.g. when S asks what else H has available for a vari). 4 Analysis of the Coding Results Two coders each coded 482 utterances with the adapted DRI features (44% of our corpus). Table 1 reports values for the Kappa (K) coefficient of agree- ment (Carletta, 1996) for Forward and Backward Functions .6 The columns in the tables read as follows: if utter- ance Ui has tag X, do coders agree on the subtag? For example, the possible set of values for I-on-H are: NIL (Ui is not tagged with this dimension), Action-Directive, Open-Option, and Info-Request. The last two columns probe the subtypes of Back- ward Functions: was Ui tagged as an answer to the same antecedent? was Ui tagged as accepting, re. jecting, or holding the same antecedent? T K factors out chance agreement between coders; K=0 means agreement is not different from chance, and K=I means perfect agreement. To assess the import of the values 0 <: K < 1 beyond K's sta- tistical significance (all of our K values are signifi- cant at p=0.000005), the discourse processing com- munity uses Krippendorf's scale (1980) 8, which dis- eFor problem solving features, K for two doubly coded dialogues was > .8. Since reliability was good and time was short, we used one coder for the remaining dialogues. 
7In general, we consider 2 non-identical antecedents as equivalent if one is a subset of the other, e.g. if one is an utterance Uj and the other a segment containing Uj. SMore forgiving scales exist but have not yet been dis- cussed by the discourse processing community, e.g. the one in (Rietveld and van Hour, 1993). II Stat. I I-on-H I I-on-S II Answer I Agr. II I] "681 . 71 I N/Sa II .81 I .43 II aN/S means not significant Table 2: Kappas from (Core and Allen 97) counts any variable with K < .67, and allows tenta- tive conclusions when .67 < K < .8 K, and definite conclusions when K>.8. Using this scale, Table 1 suggests that Forward Functions and Answer can be recognized far more reliably than Agreement. To assess the DRI effort, clearly more experiments are needed. However, we believe our results show that the goal of an adaptable core coding scheme is reasonable. We think we achieved good results on Forward Functions because, as the DRI enterprise intended, we adapted the high level definitions to our domain. However, we have not yet done so for Agreement since our initial trial codings did not re- veal strong disagreements; now given our K results, refinement is clearly needed. Another possible con- tributing factor for the low K on Agreement is that these tags are much rarer than the Forward Func- tion tags. The highest possible value for K may be smaller for low frequency tags (Grove et al., 1981). Our assessment is supported by comparing our re- sults to those of Core and Allen (1997) who used the unadapted DRI manual -- see Table 2. Overall, our Forward Function results are better than theirs (the non significant K for I-on-S in Table 2 reveals prob- lems with coding for that tag), while the Backward Function results are compatible. Finally, our assess- ment may only hold for task-oriented collaborative dialogues. One research group tried to use the DRI core scheme on free-flow conversations, and had to radically modify it in order to achieve reliable coding (Stolcke et al., 1998). 5 Tracking Propose and Commit It appears we have reached an impasse; if human coders cannot reliably recognize when two partici- pants achieve agreement, the prospect of automat- ing this process is grim. Note that this calls into question analyses of agreements based on a single coder's tagging effort, e.g. (Walker, 1996). We think we can overcome this impasse by exploiting the relia- bility of Forward Functions. Intuitively, a U~ tagged as Action-Directive + Offer should correlate with a proposal -- given that all actions in our domain are joint, an Action-Directive tag always co-occurs with either Offer (AD+O) or Commit (AD÷C). Fur- ther, analyzing the antecedents of Commits should shed light on what was treated as a proposal in the dialogue. Clearly, we cannot just analyze the an- tecedents of Commit to characterize proposals, as a Det Indet Unknown AD+O 25 7 0 Open-Option 2 2 0 AD+C 10 2 0 Other 4 2 4 Table 3: Antecedents of Commit proposal may be discarded for an alternative. To complete our intuitive characterization of a proposal, we will assume that for a Ui to count as a well-formed proposal (WFP), the context must be such that enough information has already been ex- changed for a decision to be made. The feature so- lution size represents such a context. Thus our first testable characterization of a WFP is: 1.1 Ui counts as a WFP if it is tagged as Action- Directive + Offer and if the associated solution size is determinate. 
To gain some evidence in support of 1.1, we checked whether the hypothesized WFPs appear as antecedents of Commits? Of the 32 AD÷Os in Ta- ble 3, 25 have determinate solution size; thus, WFPs are the largest class among the antecedents of Com- mit, even if they only account for 43% of such an- tecedents. Another indirect source of evidence for hypothesis 1.1 arises by exploring the following ques- tions: are there any WFPs that are not committed to? if yes, how are they dealt with in the dialogue? If hypothesis 1.1 is correct, then we expect that each such Ui should be responded to in some fashion. In a collaborative setting such as ours, a partner can- not just ignore a WFP as if it had not occurred. We found that there are 15 AD+Os with determi- nate solution size in our data that are not commit- ted to. On closer inspection, it turns out that 9 out of these 15 are actually indirectly committed to. Of the remaining 6, four are responded to with a counterproposal (another AD+O with determinate solution size). Thus only two are not responded to in any fashion. Given that these 2 occur in a di- alogue where the participants have a distinctively non-collaborative style, it appears hypothesis 1.1 is supported. Going back to the antecedents of Commit (Ta- ble 3), let's now consider the 7 indeterminate AD÷Os. They can be considered as tentative pro- posals that need to be negotiated. 1° To further re- fine our characterization of proposals, we explore the hypothesis: 9Antecedents of Commits are not tagged. We recon- structed them from either variable tags or when Ui has both Commit and Accept tags, the antecedent of the Accept. 1°Becanse of our heuristics of tagging specific actions as ActionDirectives, these utterances are not Open-Options. 328 1.2 When the antecedent of a Commit is an AD+O and indeterminate, the intervening dialogue renders the solution size determinate. In 6 out of the 7 indeterminate antecedent AD+Os, our hypothesis is verified (see excerpt (1), where [35] is an AD+ 0 with indeterminate solution size, and the antecedent to the Commit in [40]). As for the other antecedents of Commit in Table 3, it is not surprising that only 4 Open-Options occur given the circumstances in which this tag is used (see Figure 1). These Open-Options appear to function as tentative proposals like indeterminate AD+ Os, as the dialogue between the Open-Option and the Com- mit develops according to hypothesis 1.2. We were instead surprised that AD+Cs are a very common category among the antecedents of Commit (20%); the second commit appears to simply reconfirm the commitment expressed by the first (Walker, 1993; Walker, 1996), and does not appear to count as a proposal. Finally, the Other column is a collection of miscellaneous antecedents, such as Info-Requests and cases where the antecedent is unclear, that need further analysis. For further details, see (Di Eugenio et al., 1998). 6 Future Work Future work includes, first, further exploring the fac- tors and hypotheses discussed in Section 5. We char- acterized WFPs as AD+Os with determinate solu- tion size: a study of the features of the dialogue pre- ceding the WFP will highlight how different options are introduced and negotiated. Second, whereas our coders were able to reliably identify Forward Func- tions, we do not expect computers to be able to do so as reliably, mainly because humans are able to take into account the full previous context. Thus, we are interested in finding correlations between Forward Functions and "simpler" tags. 
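As an aside on the reliability metric used in Section 4, the following is a minimal sketch (our own illustration, not the project's actual tooling) of how the kappa coefficient of agreement can be computed from two coders' tag sequences; it shows how K factors out the agreement expected by chance.

```python
from collections import Counter

def cohen_kappa(tags_a, tags_b):
    """K = (P_o - P_e) / (1 - P_e): observed agreement P_o corrected by
    the agreement P_e expected by chance from each coder's marginals."""
    assert len(tags_a) == len(tags_b) and tags_a
    n = len(tags_a)
    p_obs = sum(a == b for a, b in zip(tags_a, tags_b)) / n
    freq_a, freq_b = Counter(tags_a), Counter(tags_b)
    p_exp = sum((freq_a[t] / n) * (freq_b[t] / n)
                for t in set(freq_a) | set(freq_b))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical I-on-H codings of ten utterances by two coders
# (AD = Action-Directive, OO = Open-Option, IR = Info-Request):
coder1 = ["AD", "AD", "OO", "NIL", "IR", "AD", "NIL", "OO", "AD", "IR"]
coder2 = ["AD", "OO", "OO", "NIL", "IR", "AD", "NIL", "AD", "AD", "IR"]
print(round(cohen_kappa(coder1, coder2), 2))  # 0.72 for these toy codings
```

On Krippendorf's scale such a toy value would license only tentative conclusions.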
Acknowledgements
This material is based on work supported by the National Science Foundation under Grant No. IRI-9314961. We wish to thank Liina Pylkkänen for her contributions to the coding effort, and past and present project members Megan Moser and Jerry Hobbs.
References
John L. Austin. 1962. How to Do Things With Words. Oxford University Press, Oxford.
Jean Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2).
Mark G. Core and James Allen. 1997. Coding dialogues with the DAMSL annotation scheme. AAAI Fall Symposium on Communicative Actions in Human and Machines, Cambridge MA.
Barbara Di Eugenio, Pamela W. Jordan, and Liina Pylkkänen. 1997. The COCONUT project: Dialogue annotation manual. http://www.isp.pitt.edu/~intgen/research-papers.
Barbara Di Eugenio, Pamela W. Jordan, Richmond H. Thomason, and Johanna D. Moore. 1998. The Acceptance cycle: An empirical investigation of human-human collaborative dialogues. Submitted for publication.
William M. Grove, Nancy C. Andreasen, Patricia McDonald-Scott, Martin B. Keller, and Robert W. Shapiro. 1981. Reliability studies of psychiatric diagnosis, theory and practice. Archives General Psychiatry, 38:408-413.
Jerry Hobbs, Mark Stickel, Douglas Appelt, and Paul Martin. 1993. Interpretation as abduction. Artificial Intelligence, 63(1-2):69-142.
Klaus Krippendorff. 1980. Content Analysis: an Introduction to its Methodology. Beverly Hills: Sage Publications.
Rebecca J. Passonneau. 1994. Protocol for coding discourse referential noun phrases and their antecedents. Technical report, Columbia University.
Massimo Poesio and David Traum. 1997. Representing conversation acts in a unified semantic/pragmatic framework. AAAI Fall Symposium on Communicative Actions in Human and Machines, Cambridge MA.
T. Rietveld and R. van Hout. 1993. Statistical Techniques for the Study of Language and Language Behaviour. Mouton de Gruyter.
John R. Searle. 1975. Indirect Speech Acts. In P. Cole and J.L. Morgan, editors, Syntax and Semantics 3. Speech Acts. Academic Press.
A. Stolcke, E. Shriberg, R. Bates, N. Coccaro, D. Jurafsky, R. Martin, M. Meteer, K. Ries, P. Taylor, and C. Van Ess-Dykema. 1998. Dialog act modeling for conversational speech. AAAI Spring Symposium on Applying Machine Learning to Discourse Processing.
Richmond H. Thomason and Jerry R. Hobbs. 1997. Interrelating interpretation and generation in an abductive framework. AAAI Fall Symposium on Communicative Actions in Human and Machines, Cambridge MA.
Raimo Tuomela. 1995. The Importance of Us. Stanford University Press.
Marilyn A. Walker. 1993. Informational Redundancy and Resource Bounds in Dialogue. Ph.D. thesis, University of Pennsylvania, December.
Marilyn A. Walker. 1996. Inferring acceptance and rejection in dialogue by default rules of inference. Language and Speech, 39(2).
1998
52
Accumulation of Lexical Sets: Acquisition of Dictionary Resources and Production of New Lexical Sets DOAN-NGUYEN Hai GETA - CLIPS - IMAG BP 53, 38041 Grenoble, France Fax: (33) 4 76 51 44 05 - Tel: (33) 4 76 63 59 76 - E-mail: [email protected] Abstract This paper presents our work on accumulation of lexical sets which includes acquisition of dictionary resources and production of new lexical sets from this. The method for the acquisition, using a context-free syntax-directed translator and text modification techniques, proves easy-to-use, flexible, and efficient. Categories of production are analyzed, and basic operations are proposed which make up a formalism for specifying and doing production. About 1.7 million lexical units were acquired and produced from dictionaries of various ~pes and complexities. The paper also proposes a combinatorial and dynamic organization for lexical systems, which is based on the notion of virtual accumulation and the abstraction levels of lexical sets. Keywords: dictionary resources, lexical acquisition, lexical production, lexical accumulation, computational lexicography. Introduction Acquisition and exploitation of dictionary resources (DRs) (machine-readable, on-line dictionaries, computational lexicons, etc) have long been recognized as important and difficult problems. Although there was a lot of work on DR acquisition, such as Byrd & al (1987), Neff & Boguraev (1989), Bl~isi & Koch (1992), etc, it is still desirable to develop general, powerful, and easy-to-use methods and tools for this. Production of new dictionaries, even only crude drafts, from available ones, has been much less treated, and it seems that no general computational framework has been proposed (see eg, Byrd & al (1987), Tanaka & Umemura (1994), Don" & al (1995)). This paper deals with two problems: acquiring textual DRs by converting them into structured forms, and producing new lexical sets from those acquired. These two can be considered as two main activities of a more general notion: the accumulation of lexical sets. The term "lexical set" (LS) is used here to be a generic term for more specific ones such as "lexicon", "dictionary", and "lexical database". Lexical data accumulated will be represented as objects of the Common Lisp Object System (CLOS) (Steel 1990). This object-oriented high- level programming environment facilitates any further manipulations on them, such as presentation (eg in formatted text), exchange (eg in SGML), database access, and production of new lexical structures, etc; the CLOS object form is thus a convenient pivot form for storing lexical units. This environment also helps us develop our methods and tools easily and efficiently. In this paper, we will also discuss some other relevant issues: complexity measures for dictionaries, heuristic decisions in acquisition, the idea of virtual accumulation, abstraction levels on LSs, and a design for organization and exploitation of large lexical systems based on the notions of accumulation. 1 Acquisition Our method combines the use of a context-free syntax-directed translator and text modification techniques. 1.1 A syntax-directed translator for acquisition Transforming a DR into a structured form comprises parsing the source text and building the output structures. 
Our approach is different from those of other tools specialized for DR acquisition, eg Neff & Boguraev (1989) and Bl~.si & Koch (1992), in that it does not impose beforehand a default output construction mechanism, but rather lets the user build the output as he wants. This means the output structures are not to be bound tightly to the parsing grammar. Particularly, they can be different from the logic structure of the source, as it is sometimes needed in acquisition. The user can also keep any presentation information (eg typographic codes) as needed; our approach is thus between the two extremes in acquisition approaches: one is keeping all presentation information, and one is transferring it all into structural representation. Our tool consists of a syntax-directed translation (SDT) formalism called h-grammar, and its running engine. For a given dictionary, one writes an h-grammar describing the text of 330 its entry and the construction of the output. An h-grammar is a context-free grammar augmented with variables and actions. Its rules are of the form: A(ail ai2 ...; aol ao2 ...)-> B(bil bi2 ...; bol bo2 ...) C(cil ci2 ...; col co2 ...) .... A is a nonterminal; B, C .... may be a nonterminal, a terminal, the null symbol §, or an action, ail, ai2 .... are input variables, which will be initialized when the rule is called, aol, ao2 ..... bol, bo2 ..... col, co2 .... are output variables. bil, bi2 ..... cil, ci2 .... are input expressions (in LISP syntax), which may contain variables. When an item in the right-hand side of the rule is expanded, its input expressions are first computed. If the item is a nonterminal, a rule having it as the left-hand side is chosen to expand. If it is a terminal, a corresponding token is looked for in the parsed buffer and returned as the value of its (unique) output variable. If it is an action which is in fact a LISP function, the function is applied to the values of its input expressions, and the result values are assigned to its output variables (here we use the multiple- value function model of CLOS). Finally, the values of the output variables of the left-hand side nonterminal (aol, ao2 .... ) are collected and returned as the result of its expanding. With some predefined action functions, output structures can be constructed freely, easily, and flexibly. We usually choose to make them CLOS objects and store them in LISPO form. This is our text representation for CLOS objects, which helps to read, verify, correct, store and transfer the result easily. Finally, the running engine has several operational modes, which facilitate debugging the h-grammars and treating errors met in parsing. 1.2 Text modification in acquisition In general, an analyzer, such as the h-grammar tool above, is sufficient for acquisition. However, in practice, some precedent modification on the source text may often simplify much the analyzing phase. In contrast with many other approaches, we recognize the usefulness of text modification, and apply it systematically in our work. Its use can be listed as follows: (1) Facilitating parsing. By inserting some specific marks before and/or after some elements of the source, human work in grammar writing and machine work in parsing can be reduced significantly. (2) Obtaining the result immediately without parsing. In some simple cases, using several replacement operations in a text editor, we could obtain easily the LISPO form of a DR. The LISPification well-known in a lot of acquisition work is another example. 
(3) Retaining necessary information and stripping unnecessary one. In many cases, much of the typographic information in the source text is not needed for the parsing phase, and can be purged straightforwardly in an adequate text editor. (4) Pre-editing the source and post-editing the result, eg to correct some simple but common type of errors in them. It is preferable that text modification be carried out as automatically as possible. The main type of modification needed is replacement using a strong string pattern-matching (or precisely, regular expression) mechanism. The modification of a source may consist of many operations and they need to be tested several times; it is therefore advantageous to have some way to register the operations and to run them in batch on the source. An advanced word processor such as Microsoft Word TM, version 6, seems capable of satisfying those demands. For sources produced with formatting from a specific editing environment (eg Microsoft Word, HTML editors), making modification in the same or an equivalent environment may be very profitable, because we can exploit format-based operations (eg search/replace based on format) provided by the environment. 1.3 Some related issues 1.3.1 Complexity measures for dictionaries Intuitively, the more information types a dictionary has, the more complex it is, and the harder to acquire it becomes. We propose here a measure for this. Briefly, the structure complexity (SC) of a dictionary is equal to the sum of the number of elementary information types and the number of set components in its entry structure. For example, an English-French dictionary whose entries consist of an English headword, a part-of-speech, and a set of French translations, will have a SC of (1 + 1 + 1 )+ 1-4. Based on this measure, some others can be defined, eg the average SC, which gives the average number of information types present in an entry of a dictionary (because not all entries have all components filled). 1.3.2 Heuristics in acquisition Contrary to what one may often suppose, decisions made in analyzing a DR are not always totally sure, but sometimes only heuristic ones. For large texts which often contain many errors and ambiguities like DRs, precise analysis design may become too complicated, even impossible. 331 Imagine, eg, some pure text dictionary where the sense numbers of the entries are made from a number and a point, eg '1.', '2.'; and, moreover, such forms are believed not to occur in content strings without verification (eg, because the dictionary is so large). An assumption that such forms delimit the senses in an entry is very convenient in practice, but is just a heuristics. 1.4 Result and example Our method and tool have helped us acquire about 30 dictionaries with a total of more than 1.5 million entries. The DRs are of various languages, types, domains, formats, quantity, clarit.y, and complexity. Some typical examples are gwen in the following table. Dictionary Resource 1 DEC, vol. II (Mel'cuk & al 1988) French Official Terms (Drlrgation grnrrale ~ la langue franqaise) Free On-line Dictionary of Computing (D. Howe, http://wombat.doc.ic.ac.uk) English-Esperanto (D. Richardson, Esperanto League for North America) English-UNL (Universal Networking Language. The United Nations University) I. 
Kind's BABEL - Glossary of Computer Oriented Abbrevations and Acronyms SC Number of entries 79 100 19 3,500 15 10,800 11 6,000 6 220,000 6 3,400 We present briefly here the acquisition of a highly complex DR, the Microsoft Word source files of volume 2 of the "Dictionnaire explicatif et combinatoire du fran~ais contemporain " (DEC) (Mel'cuk & al 1988). Despite the numerous errors in the source, we were able to achieve a rather fine analysis level with a minimal manual cleaning of the source. For example, a lexical function expression such as Adv(1 )(Real 1 !IF6 + Real2IIF6 ) was analyzed into: (COMP ("Adv" NIL (O[¢I'IONAL 1) NIL NIL NIL) (PAREN (+ (COMP ("Real" NIL (1) 2 NIL NIL) ("F" 6)) (COMP ("Real" NIL (2) 2 NIL NIL) ("F" 6))))) Compared to the method of direct programming that we had used before on the same source, human work was reduced by half (1.5 vs 3 person-months), and the result was better (finer analysis and lower error rate). I All these DRs were used only for my personal research on acquisition, conforming to their authors' permission notes. 2 Production From available LSs it is interesting and .possible to produce new ones, eg, one can revert a bilingual dictionary A-B to obtain a B-A dictionary, or chain two dictionaries A-B and B- C to make an A-B-C, or only A-C (A, B, C are three languages). The produced LSs surely need more correction but they can serve at least as somewhat prepared materials, eg, dictionary drafts. Acquisition and production make the notion of lexical accumulation complete: the former is to obtain lexical data of (almost) the same linguistic structure as the source, the latter is to create data of totally new linguistic structures. Viewed as a computational linguistic problem, production has two aspects. The linguistic aspect consists in defining what to produce, ie the mapping from the source LSs to the target LSs. The quality of the result depends on the linguistic decisions. There were several experiments studying some specific issues, such as sense mapping or attribute transferring (Byrd & al (1987), Dorr & al (1995)). This aspect seems to pose many difficult lexicographic problems, and is not dealt with here. The computational aspect, in which we are interested, is how to do production. To be general, production needs a Turing machine computational power. In this perspective, a framework which can help us specify easily a production process may be very desirable. To build such a framework, we will examine several common categories of production, point out basic operations often used in them, and finally, establish and implement a formalism for specifying and doing production. 2.1 Categories of production Production can be done in one of two directions, or by combining both: "extraction" and "synthesis". Some common categories of production are listed below. (1) Selection of a subset by some criteria, eg selection of all verbs from a dictionary. (2) Extraction of a substructure, eg extracting a bilingual dictionary from a trilingual. (3) Inversion, eg of an English-French dictionary to obtain a French-English one. (4) Regrouping some elements to make a "bigger" structure, eg regrouping homograph entries into polysemous ones. (5) Chaining, eg two bilingual dictionaries A-B and B-C to obtain a trilingual A-B-C. (6) Paralleling, eg an English-French dictionary with another English-French, to make an English-[French( I ), French(2)] (for comparison or enrichment .... ). 
(7) Starring combination, eg of several bilingual dictionaries A-B, B-A, A-C, C-A, A-D, D-A, to make a multilingual one with A being the pivot language: (B, C, D)-A-(B, C, D).
Numeric evaluations can be included in production, eg in paralleling several English-French dictionaries, one can introduce a fuzzy logic number showing how well a French word translates an English one: the more dictionaries the French word occurs in, the bigger the number becomes.
2.2 Implementation of production
Studying the algorithms for the categories above shows they may make use of many common basic operations. As an example, the operation regroup set by function1 into function2 partitions set into groups of elements having the same value of applying function1, and applies function2 on each group to make a new element. It can be used to regroup homograph entries (ie those having the same headword forms) of a dictionary into polysemous ones, as follows:
regroup dictionary by headword into polysem
(polysem is some function combining the body of the homograph entries into a polysemous one.)
It can also be used in the inversion of an English-French dictionary EF-dict whose entries are of the structure <English-word, French-translations> (eg <love, {aimer, amour}>):
for-all EF-entry in EF-dict do split EF-entry into <French, English> pairs, eg split <love, {aimer, amour}> into {<aimer, love> <amour, love>}. Call the result FE-pairs.
regroup FE-pairs by French into FE-entry
(FE-entry is a function making French-English entries, eg making <aimer, {love, like}> from <aimer, like> and <aimer, love>.)
Our formalism for production was built with four groups of operations (see Doan-Nguyen (1996) for more details):
(1) Low-level operations: assignments, conditionals, and (rarely used) iterations.
(2) Data manipulation functions, eg string functions.
(3) Set and first-order predicate calculus operations, eg the for-all above.
(4) Advanced operations, which do complicated transformations on objects and sets, eg regroup, split above.
Finally, LSs were implemented as LISP lists for "small" sets, and CLOS object databases and LISPO sequential files for large ones.
2.3 Result and example
Within the framework presented above, about 10 dictionary drafts of about 200,000 entries were produced. As an example, an English-French-UNL² (EFU) dictionary draft was produced from an English-UNL (EU) dictionary, a French-English-Malay (FEM), and a French-English (FE). The FEM is extracted and inverted to give an English-French dictionary (EF-1), the FE is inverted to give another (EF-2). The EFU is then produced by paralleling the EU, EF-1, and EF-2. This draft was used as the base for compiling a French-UNL dictionary at GETA (Boitet & al 1998). We have not yet had an evaluation of the draft.
3 Virtual Accumulation and Abstraction of Lexical Sets
3.1 Virtual accumulation
Accumulation discussed so far is real accumulation: the LS acquired or produced is available in its whole and its elements are put in a "standard" form used by the lexical system. However, accumulation may also be virtual, ie LSs which are not entirely available may still be used and even integrated in a lexical system, and lexical units may rest in their original format and will be converted to the standard form only when necessary. This means, eg, one can include in his lexical system another's Web online dictionary which only supplies an entry to each request.
Particularly, in virtual acquisition, the resource is untouched, but equipped with an acquisition operation, which will provide the necessary lexical units in the standard form when it is called. In virtual production, not the whole new LS is to be produced, but only the required unit(s). One can, eg, supply dynamically German equivalents of an English word by calling a function looking up English-French and French-German entries (in corresponding dictionaries) and then chaining them. Virtual production may not be suitable, however, for some production categories such as inversion.
3.2 Abstraction of LSs
The framework of accumulation, real and virtual, presented so far allows us to design a very general and dynamic model for lexical systems. The model is based on some abstraction levels of LSs as follows.
(1) A physical support is a disk file, database, Web page, etc. This is the most elementary level.
[Footnote 2: UNL: Universal Networking Language (UNL 1996).]
(2) A LS support makes up the contents of a LS. It comprises a set of physical supports (as a long dictionary may be divided into several files), and a number of access ways which determine how to access the data in the physical supports (as a database may have several indexes). The data in its physical supports may not be in the standard form; in this case it will be equipped with a standardizing function on accessed data.
(3) A lexical set (LS) comprises a set of LS supports. Although having the same contents, they may be different in physical form and data format; hence this opens the possibility to query a LS from different supports. Virtual LSs are "sets" that do not have "real" supports; their entries are produced from some available sets when required, and there are no insert or delete activities for them.
(4) A lexical group comprises a number of LSs (real or virtual) that a user uses in a work, and a set of operations which he may need to do on them. A lexical group is thus a workstation in a lexical system, and this notion helps to view and develop the system modularly, combinatorially, and dynamically.
Based on these abstractions, a design of the organization for lexical systems can be proposed. Fundamentally, a lexical system has real LSs as basic elements. Its performance is augmented with the use of virtual LSs and lexical groups. A catalogue is used to register and manage the LSs and groups. A model of such an organization is shown in the figure below.
[Figure: a model of the lexical system organization — a catalogue of lexical sets and groups managing physical supports, real lexical sets, virtual lexical sets, and lexical groups.]
Conclusion and perspective
Although we have not yet been able to evaluate all the lexical data accumulated, our methods and tools for acquisition and production have shown themselves useful and efficient. We have also developed a rather complete notion of lexical data accumulation, which can be summarized as:
ACCUMULATION = (REAL + VIRTUAL) (ACQUISITION + PRODUCTION)
For the future, we would like to work on methods and environments for testing accumulated lexical data, for combining them with data derived from corpus-based methods, etc. Some more time and work will be needed to verify the usefulness and practicality of our lexical system design, of which the essential idea is the combinatorial and dynamic elaboration of lexical groups and virtual LSs. An experiment for this may be, eg, to build a dictionary server using Internet online dictionaries as resources.
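To make the regroup and split operations of Section 2.2 concrete, here is a minimal Python sketch of dictionary inversion (our own illustration; the actual formalism is implemented over LISP lists and CLOS objects):

```python
from collections import defaultdict

def regroup(items, key, combine):
    """'regroup set by function1 into function2': partition items into
    groups sharing the same key value, then combine each group."""
    groups = defaultdict(list)
    for item in items:
        groups[key(item)].append(item)
    return [combine(k, group) for k, group in groups.items()]

def invert(ef_dict):
    """Invert an English-French dictionary: split each entry into
    <French, English> pairs, then regroup the pairs by the French word."""
    fe_pairs = [(fr, en) for en, frs in ef_dict for fr in frs]
    return regroup(fe_pairs,
                   key=lambda pair: pair[0],
                   combine=lambda fr, ps: (fr, sorted({en for _, en in ps})))

ef_dict = [("love", ["aimer", "amour"]), ("like", ["aimer"])]
print(invert(ef_dict))
# [('aimer', ['like', 'love']), ('amour', ['love'])]
```

The same regroup, with the headword as the key, collapses homograph entries into polysemous ones.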
Acknowledgement
The author is grateful to the French Government for her scholarship, to Christian Boitet and Gilles Sérasset for the suggestion of the theme and their help, and to the authors of the DRs for their kind permission of use.
References
Bläsi C. & Koch H. (1992), Dictionary Entry Parsing Using Standard Methods. COMPLEX '92, Budapest, pp. 61-70.
Boitet C. & al (1998), Processing of French in the UNL Project (Year 1). Final Report, The United Nations University and l'Université J. Fourier, Grenoble, 216 p.
Byrd R. & al (1987), Tools and Methods for Computational Lexicology. Computational Linguistics, Vol 13, No. 3-4, pp. 219-240.
Doan-Nguyen H. (1996), Transformations in Dictionary Resources Accumulation Towards a Generic Approach. COMPLEX '96, Budapest, pp. 29-38.
Dorr B. & al (1995), From Syntactic Encodings to Thematic Roles: Building Lexical Entries for Interlingual MT. Machine Translation 9, pp. 221-250.
Mel'cuk I. & al (1988), Dictionnaire explicatif et combinatoire du français contemporain. Volume II. Les Presses de l'Université de Montréal, 332 p.
Neff M. & Boguraev B. (1989), Dictionaries, Dictionary Grammars and Dictionary Entry Parsing. 27th Annual Meeting of the ACL, Vancouver, pp. 91-101.
Steele G. (1990), Common Lisp - The Language. Second Edition. Digital Press, 1030 p.
Tanaka K. & Umemura K. (1994), Construction of a Bilingual Dictionary Intermediated by a Third Language. COLING '94, Kyoto, pp. 297-303.
UNL (1996), UNL - Universal Networking Language. The United Nations University, 74 p.
1998
53
Accumulation of Lexical Sets: Acquisition of Dictionary Resources and Production of New Lexical Sets DOAN-NGUYEN Hai GETA - CLIPS - IMAG BP 53, 38041 Grenoble, France Fax: (33) 4 76 51 44 05 - Tel: (33) 4 76 63 59 76 - E-mail: [email protected] Following is the abstract written in Vietnamese: T6m t~t Trong bM n~y, chang toi trlnh b~y mot cOng tr~nh v~ ffch lu~ cdc ta.p h.op tit v.tmg, bao gbm s.~ thu h'Oi cdc thi nguyen tit die'n v~ s.tr s~n sinh cdc t.~p h.op tit v.trng m6i. Phttong phdp thu h'Oi s{t du.ng mot bo. djch theo ca phdp phi nggt cdnh vh cdc k~ thu.at s~a ddi van bhn, da chang tO d~ ddng, linh ho.at vh hiCu quh. Trdn ca s(r phan rich cdc lo.ai hinh sdn sinh cdc t.~p h.op tit v.~ng m6i, chang tOi d~ ra cdc thao tdc ca bdn vh thid't la.p mot ca chd" hlnh thgtc giap mO th vh th.u'c hiCn vi¢c sdn sinh. V6i cdc ph~ong phdp vh cOng cu. tr~n, khohng 1,7 triCu don vi tit v.tmg da d~qc thu hbi vh shn sinh tit cdc ngubn tit did'n thuo.c nhi~u 1o.ai vh dO. ph~c tap khdc nhau. Cang trong bhi nhy, chang tOi d~ nghi. mO.t phttong cdch td chic dO.ng vh c6 tinh tO" h.op cho cdc he thong tit v.tmg, d.tta trdn khdi niCm ffch luy ho vh cdc cap tritu ttt.ong hod c{ta t~.p h.op tit v.ung. Dohn Nguydn Hhi Following is the abstract written in French: R6sum6 Cet article pr~sente notre travail sur l'accumulation d'ensembles lexicaux qui comprend l'acquisition de ressources dictionnairiques et la production de nouveaux ensembles lexicaux. La mdthode pour l'acquisition, utilisant un traducteur hors-contexte gouvern~ par la syntaxe et des techniques de modification de texte, se montre flexible, efficace et facile ?, utiliser. Des operations de base sont propos~es qui forment un formalisme de spdcification et d'impl~mentation de processus de production. Environ 1,7 million unit~s lexicales ont ~t~ r~cup~r~es et produites ?, partir de dictionnaires de types et de complexit~s diff~rents. Nous proposons aussi une organisation combinatoire et dynamique pour les syst~mes lexicaux, bas~e sur la notion d'accumulation virtuelle et sur des niveaux d'abstraction d'ensembles lexicaux. 335
1998
54
A Text Input Front-end Processor as an Information Access Platform Shinichi DOI, Shin-ichiro KAMEI and Kiyoshi YAMABANA C&C Media Research Laboratories, NEC Corporation 4-1-1, Miyazaki, Miyamae-ku, Kawasaki, KANAGAWA 216-8555 JAPAN [email protected], [email protected], [email protected] Abstract This paper presents a practical foreign language writing support tool which makes it much easier to utilize dictionary and example sentence resources. Like a Kana-Kanji conversion front-end processor used to input Japanese language text, this tool is also implemented as a front-end processor and can be combined with a wide variety of applications. A morphological analyzer automatically extracts key words from text as it is being input into the tool, and these words are used to locate information relevant to the input text. This information is then automatically displayed to the user. With this tool, users can concentrate better on their writing because much less interruption of their work is required for the consulting of dictionaries or for the retrieval of reference sentences. Retrieval and display may be conducted in any of three ways: 1) relevant information is retrieved and displayed automatically; 2) information is retrieved automatically but displayed only on user command; 3) information is both retrieved and displayed only on user command. The extent to which the retrieval and display of information proceeds automatically depends on the type of information being referenced; this element of the design adds to system efficiency. Further, by combining this tool with a stepped-level interactive machine translation function, we have created a PC support tool to help Japanese people write in English. 1. Introduction When creating text using word processing software on a personal computer, it is common to refer to books or documents relevant to the text, including various kinds of dictionaries and reference works. The tools used for accessing relevant information, such as CD-ROM dictionaries, text databases, and text retrieval software, however, often require user actions that may seriously interrupt the writing process itself. These may include executing retrieval software, inputting key words, or copying retrieved information into texts. The foreign language writing support tool we propose here automatically access information relevant to input texts. Like a Kana-Kanji conversion front-end processor used to input Japanese language text, this tool is also implemented as a front-end processor (FEP) and can be combined with a wide variety of applications. The extent to which the retrieval and display of information proceeds automatically depends on the type of information being referenced; this element of the design adds to system efficiency. In Section 2, we consider the requirements for efficient writing support tools and discuss the characteristics of our front-end processor and its automatic information access function. In Section 3, we introduce our English writing support tool, which has been developed to help Japanese people write in English on a PC. This. tool combines a front-end processor with the stepped- level interactive machine translation method we first proposed in Yamabana (1997). In Section 4, we describe the automatic information access function of the English writing support tool. 336 2. FEP-type Information Access Platform 2.1. 
Text input front-end processor with information access functions To allow users to concentrate better on their work, writing support tools with reference information access functions should: 1) provide for automatic access of reference information, i.e. access without explicit user commands, 2) enable users to utilize retrieved information with simple operations, and 3) be compatible with a wide variety of word processing applications. In developing our FEP-type support tool, we started with the text retrieval application proposed in Muraki (1997), which provides a morphological analyzer that automatically analyzes users' input and extracts key words to retrieve relevant text from a database. This application fulfills the first of the requirement listed above. We converted such a morphological analyzer into an FEP for use in our tool, which is placed between the keyboard and an application. When a user inputs texts into this tool, the morphological analyzer identifies each word and extracts key words automatically before the text is entered into the application. The key words are used to retrieve information relevant to the input texts. This information is displayed for easy editing and utilization. Because all of this can be achieved with standard hooks and the IME API of the Microsoft Windows 95 operating system, this tool can be combined with any Windows- compatible text-input application. In addition, it can be combined with any other front-end processor, including Kana-Kanji conversion FEPs, through the use of a technique we have recently developed. Figure 1 shows the tool architecture. 2.2. Controlling the extent of the automation of information retrieval and display The automatic retrieval and display function introduced in the previous subsection allows users to concentrate better on their writing Input by User I Any Kana-Kanji Conversion FEP [ FEP-type Information Access Platform Any Text-input Application Mo ho,o,ic yzor I Retrieved ~ key words Znfo ma,ionl In o ation tnovo I Fie'are 1 Architecture of the FEP-tvtm v v - Information Access Platform because much less interruption of their work is required for the consulting of dictionaries or for the retrieval of reference sentences. This function, however, might prevent users from concentrating on their writing if all the retrieved information were displayed in a new window, especially when the quantity of the retrieved information were large and the majority of it were not relevant from the users' point of view. To compensate for this disadvantage, we divided the information access function into three steps: 1) extracting key words from the input text, 2) using the key words to retrieve reference information, and 3) displaying the retrieved information, and we developed a function to control whether the each step is executed automatically or manually. We prepare three methods for retrieval and display as follows. A) Relevant information is retrieved and displayed automatically, without user command. B) Information is retrieved automatically but displayed only on user command. After automatic retrieval, only the quantity of information is displayed, and users can decide whether to display it. C) Information is both retrieved and displayed only on user command. Even in this case, because key words are automatically 337 extracted before retrieval, our tool requires much less user action than other information accessing tools. 
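A minimal sketch of how this three-way control might be dispatched (our own illustration in Python; the Mode names, the extract_keywords stand-in, and ListResource are assumptions, not the actual Windows 95 FEP implementation):

```python
from enum import Enum

class Mode(Enum):
    AUTO_AUTO = "A"      # retrieve and display automatically
    AUTO_MANUAL = "B"    # retrieve automatically, display on command
    MANUAL_MANUAL = "C"  # retrieve and display only on command

def extract_keywords(text):
    # Stand-in for the morphological analyzer, which identifies each word
    # and keeps content words before the text reaches the application.
    return [w for w in text.split() if len(w) > 3]

class ListResource:
    def __init__(self, sentences):
        self.sentences = sentences
    def lookup(self, keywords):
        return [s for s in self.sentences if any(k in s for k in keywords)]

def on_input(text, resource, mode):
    keywords = extract_keywords(text)      # step 1 is always automatic
    if mode is Mode.MANUAL_MANUAL:
        return keywords, None              # wait for an explicit command
    hits = resource.lookup(keywords)       # step 2: automatic retrieval
    if mode is Mode.AUTO_AUTO:
        print(hits)                        # step 3: automatic display
    else:
        print(f"found {len(hits)} relevant item(s)")  # signal quantity only
    return keywords, hits

resource = ListResource(["thank you for the present", "writing support tools"])
on_input("present wo arigato", resource, Mode.AUTO_MANUAL)
# -> found 1 relevant item(s)
```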
The extent to which the retrieval and display of information proceeds automatically depends on the type of information being referenced; this element of the design adds to system efficiency. 3. English Writing Support Tool "Eibun Meibun Meikingu" By combining the FEP-type information access platform with the stepped-level interactive machine translation method we proposed in Yamabana (1997), we have developed an English writing support tool to help Japanese people write in English on a PC. This tool, named "Eibun Meibun Meikingu ''l, consists of the following three components: 1) an English writing FEP, "Eisaku Pen ''2, which converts Japanese into English, 2) a CD-ROM dictionary consulting tool, "Shoseki Renzu ''3, and 3) a Japanese-to-English bilingual example sentence database, "Reibun Bainda TM. Figure 2 shows the architecture of "Eibun Meibun Meikingu". This tool is now available as a software package. 3.1. English writing FEP "Eisaku Pen" "Eisaku Pen" has an interactive interface similar to Kana-Kanji conversion FEPs, and initially replaces most of the Japanese vocabulary items with English equivalents but maintains Japanese grammatical constructions. When a user inputs Japanese text, a conversion window of "Eisaku Pen" is automatically popped-up and English equivalents are displayed in the order of original Japanese words. Figure 3 illustrates how text is 1 The Japanese terms Eibun, Meibun and Meikingu mean, respectively, 'English writing', 'beautiful writing' and 'making'. 2 The Japanese terms Eisaku and Pen mean, respectively, 'Creating English' and 'a pen'. 3 The Japanese terms Shoseki and Renzu mean, respectively, 'written materials' and 'a lens'• 4 The Japanese terms Reibun and Bainda mean, respectively, 'example sentences' and 'a binder'. 338 Any I Kana-Kanji Conversion FEP I I ! ............. c'------~., t I i o i • m • l . . . . . . . --°| r l o ~ o m . . . . !i l[n'qIishl m~n'q '~pp°rt" "~ c°nvenient r~t°°l -I" ~:~ I ! ~ tk English sentence [a-ll[~.v*-~ I~:!=r'a)2ZI English text [a-'lWt:g.ffJ] I~:!=r,a~2Zill English passage [~$1[~=~] I~:!=r'¢gS~iill ~'iften English [a-]'~=~J] II~,~t'~3~l -------I ' System i Dictionary , i Expression i ! J Japanese- i to-English , Conversion J Function , I Eisaku Pen i I ° - - . ~ . n . . . . . . ,-- . . . . . ,wo . . . . . . --. .r . "-" -i i Example ~hosek, Renzu. . I Ex ..... eo ~ • _ ....... I;-•' ! ~, ~Re_ip_u.n_Ba_{n_d.d_. AnyText-input Application ] ~ Figure 2 Architecture of the English Writing Support Tool "Eibun Meibun Meikingu" displayed. When a user inputs Japanese sentence "purezento wo arigato", where each word means 'present', objective marker and 'thank you' respectively, "purezento " and "arigato" are replaced with their English equivalents 'present' and 'thank you' and displayed automatically in the conversion window shown in the center of the 11 appreciate I~] I Figure 3 Illustration of "Eisaku Pen" figure. The window below is an alternatives window to display all the possible equivalents for "arigato", by selecting from which, users can easily change equivalents. In this alternatives window, "Eisaku Pen" provides part-of-speech of each alternative equivalents and supplementary information indicating the difference between their meanings or usage in order to make users' equivalent selection easier. 
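As an illustration of the data that could sit behind such conversion and alternatives windows (a sketch with invented toy entries; the real system dictionary described below is far larger):

```python
# Each Japanese key word maps to candidate English equivalents, each with
# a part-of-speech and a short usage note for the alternatives window.
alternatives = {
    "purezento": [("present", "noun", "a gift"), ("gift", "noun", "")],
    "arigato": [
        ("thank you", "interj.", "ordinary thanks"),
        ("thanks", "interj.", "casual"),
        ("appreciate", "verb", "formal; takes an object"),
    ],
}

def conversion_line(words, chosen=None):
    """Render the conversion window: equivalents in original word order."""
    chosen = chosen or {}
    return " ".join(alternatives[w][chosen.get(w, 0)][0]
                    for w in words if w in alternatives)

print(conversion_line(["purezento", "wo", "arigato"]))
# -> "present thank you" (the particle "wo" has no equivalent to display)
```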
After confirming the equivalents of input words, users can execute the Japanese-to-English conversion function, which transforms Japanese grammatical constructions into those of English, so that the whole sentence is converted into an English sentence, 'Thank you for a present.', by automatic word reordering and article insertion. This syntactic transformation proceeds step by step, in a bottom-up manner, combining smaller translation components into larger ones. Such a 'dictionary-based interactive translation' approach allows users to refine dictionary suggestions at different steps of the process. Finally, users can also easily change articles to obtain the result sentence: 'Thank you for the present.' The system dictionary of "Eisaku Pen" contains about 100,000 Japanese vocabulary entries and 15,000 idiomatic expressions. Since there was no source available to build an idiom dictionary of this size, we collected them manually, from scratch, following a method described in Tamura (1997).
3.2. CD-ROM dictionary consulting tool "Shoseki Renzu"
While using "Eisaku Pen", if users want to obtain more information on words or equivalents, "Shoseki Renzu" provides a function to consult CD-ROM dictionaries. For example, when users execute the CD-ROM dictionary consulting function of "Shoseki Renzu" in the situation of Figure 3, the currently selected alternative 'thank you' is regarded as a key word for dictionary consulting, and the contents of the dictionaries for 'thank you' are displayed. If users double-click on another word in a conversion window or an alternatives window, including the original Japanese word shown at the top of the window, that word is regarded as the key word for dictionary consulting.
3.3. Bilingual example sentence database "Reibun Bainda"
"Eibun Meibun Meikingu" also provides a function to retrieve and utilize bilingual example sentences. Example sentences relevant to the texts input by users are retrieved from the database of "Reibun Bainda", which contains 3,000 Japanese-to-English bilingual sentence pairs for letter writing. Figure 4 illustrates the Japanese-to-English sentence pairs retrieved when a user executes "Reibun Bainda" in the situation of Figure 3. Here, the currently selected original Japanese word "arigato" is regarded as the key word for retrieval, and the example sentences which are assigned the key word "arigato" beforehand, or which include the string "arigato" in the Japanese sentence, are retrieved from the bilingual example sentence database of "Reibun Bainda" and displayed in the window as illustrated in Figure 4. Japanese sentences are shown in the first column and translated English sentences in the second. The third column is for supplementary information indicating differences between the meanings or usage of the sentences. Users can easily send these sentences to text-input applications by a drag-and-drop operation using a mouse. In addition, by using "Eisaku Pen", users can easily edit a Japanese word and its English equivalents in example sentences synchronously.
[Figure 4, a screenshot of the retrieval window, is not reproduced here; its English column includes, eg, "Thank you for responding so promptly.", "We appreciate your quick response.", and "Your letter is acknowledged with many thanks."]
Figure 4: Illustration of bilingual sentences retrieved by "Reibun Bainda"
4.
Information Access Function of English Writing Support Tool Our tool currently accesses three types of information: 1) information, included in the system dictionary, regarding grammatical forms and idiomatic expressions; 2) straight CD-ROM dictionary information; and 3) Japanese-to- English example sentences in the database. The extent to which the retrieval and display of information proceeds automatically depends on the type of information being referenced; information of type 1) is retrieved and displayed automatically, that of type 2) is both retrieved and displayed manually, and that of type 3) is retrieved automatically but displayed manually. In the first case of translation equivalents and grammatical information retrieval, "Eisaku Pen" automatically retrieves and displays English words equivalent to the input Japanese texts without explicit user command because users always utilize the English equivalents in English writing. In the second case of CD-ROM dictionary consulting, "Shoseki Renzu" retrieves and displays contents of CD-ROM dictionaries on user command because this dictionary consulting function needs to be executed only when users require additional information. Our tool requires much less user action than other dictionary consulting tools because key words are automatically extracted before user command for retrieval and users don't always need to input key words. In the third case of bilingual sentence retrieval, "Reibun Bainda'" retrieves sentences automatically but displays only on user command. Because "Reibun Bainda" contains the example sentences in itself, relevant sentences are retrieved at high speed and the retrieval function doesn't interrupt users' writing process. Retrieved sentences, however, might include the ones not relevant to the input text from users' point of view, because similarity between sentences is judged with a simple method using key words. Therefore, the writing process might be interrupted if retrieved sentences were displayed automatically. To avoid this problem, the color of the icon of "Reibun Bainda" is changed after automatic retrieval, depending on the existence of relevant sentences, and users can decide whether to display the retrieved sentences. 5. Conclusion We present a practical foreign language writing support tool which makes it much easier to utilize dictionary and example sentence resources. This tool is implemented as a front-end processor and can be combined with a wide variety of applications. The extent to which the retrieval and display of information proceeds automatically depends on the type of information being referenced; this element of the design adds to system efficiency. We also describe our English writing support tool with a stepped-level interactive machine translation function, by which users can write English by accessing essential information resources including bilingual dictionaries and example sentences. Our tool is implemented as an English writing support tool, now under expansion to a general writing support tool. Another further work is enlarging resources our tool can access. We are also developing an example-based translation function which utilizes example sentences in "Reibun Bainda" for Japanese-to-English conversion function of "Eisaku Pen" and an automatic example sentence acquisition function which acquires users' input texts and their translation and adds them to "Reibun Bainda" automatically. References Muraki K., et al. 
(1997) Information Sharing Accelerated by Work History Based Contribution Management, Leads to Knowhow Sharing. In "Design of Computing Systems: Cognitive Considerations", Salvendy G., et al. ed., Elsevier Science B.V., Amsterdam, pp. 81-84.
Tamura S., et al. (1997) An Efficient Way to Build a Bilingual Idiomatic Lexicon with Wide Coverage for Newspaper Translation. NLPRS'97, Phuket, Thailand, pp. 479-484.
Yamabana K., et al. (1997) An Interactive Translation Support Facility for Non-Professional Users. ANLP-97, Washington, pp. 324-331.
1998
55
Syntactic and Semantic Transfer with F-Structures*
Michael Dorna*, Anette Frank†, Josef van Genabith‡ and Martin C. Emele*
*IMS, Universität Stuttgart, Azenbergstr. 12, D-70174 Stuttgart
†Xerox Research Centre Europe, 6, chemin de Maupertuis, F-38240 Meylan
‡Dublin City University, Computer Applications, Dublin 9, Ireland
{dorna,emele}@ims.uni-stuttgart.de, Anette.Frank@xrce.xerox.com, josef@compapp.dcu.ie
Abstract
We present two approaches for syntactic and semantic transfer based on LFG f-structures and compare the results with existing co-description and restriction operator based approaches, focusing on aspects of ambiguity preserving transfer, complex cases of syntactic structural mismatches as well as on modularity and reusability. The two transfer approaches are interfaced with an existing, implemented transfer component (Verbmobil), by translating f-structures into a term language, and by interfacing f-structure representations with an existing semantic based transfer approach, respectively.
1 Introduction
Target and source levels of representation in transfer-based machine translation (MT) are subject to often competing demands: on the one hand, they need to abstract away from particulars of language specific surface realization to ensure that transfer is as simple and straightforward as possible. On the other hand, they need to encode sufficiently fine-grained information to steer transfer. Furthermore, target and source representations should be linguistically well established and motivated levels of representation. Finally, from a computational perspective they need to be sensible representations for both parsing and generation. LFG f-structures are abstract, "high-level" syntactic representations which go some way towards meeting these often irreconcilable requirements.
*We would like to thank H. Kamp, M. Schiehlen and the anonymous reviewers for helpful comments on earlier versions of this article. Part of this work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 N3.
Correspondence-based transfer on f-structures has been proposed in (Kaplan et al., 1989). A closer look at translation problems involving structural mismatches between languages - in particular head switching phenomena (Sadler and Thompson, 1991) - led to the contention that transfer is facilitated at the level of semantic representation, where structural differences between languages are often neutralized. Structural misalignment is treated in semantics construction involving a restriction operator (Kaplan and Wedekind, 1993) where f-structures are related to (possibly sets of) disambiguated semantic representations.
Given the high potential of semantic ambiguities, the advantage of defining transfer on semantic representations could well be counterbalanced by the overhead generated by multiple disambiguated structures as input to transfer. This and the observation that many semantic (and syntactic) ambiguities can be preserved when translating into a target language that is ambiguous in similar ways, sheds light on the issue of the properties of representations for the task of defining transfer.
In principle, the problem of semantic ambiguity in transfer can be tackled in a number of ways. Packed ambiguity representation techniques (Maxwell III and Kaplan, 1993) could be integrated with the approach in (Kaplan and Wedekind, 1993).
In the linear logic based se- mantics of (Dalrymple et al., 1996) scope am- biguities are accounted for in terms of alterna- tive derivations of meaning assignments from a set of meaning constructors. Ambiguity pre- serving semantic transfer can be devised on sets of meaning constructors rather than dis- ambiguated meanings (Genabith et al., 1998). Transfer on packed representations is considered 341 in (Emele and Dorna, 1998). In the present paper we consider alternative ap- proaches to transfer on underspecified - syntac- tic or semantic - representations, focusing on is- sues of modularity, reusability and practicality, interfacing existing implemented approaches in a flexible way. At the same time, the propos- als readdress the issue of what is an appropriate level of representation for translation, in view of the known problems engendered by structural mismatches and semantic ambiguity. We first show how the underlying machinery of the semantic-based transfer approach de- veloped in Dorna and Emele (1996b) can be ported to syntactic f-structure representations. Second, we show how the underspecified seman- tic interpretation approach developed in Gen- abith and Crouch (1997) can be exploited to in- terface f-structure representations directly with the named semantic-based transfer approach. Third, we compare the two approaches with each other, and with co-description and restric- tion operator based approaches. 2 Syntactic Transfer This section presents a simple bidirectional translation between LFG f-structures and term representations which serve as input to and output of a transfer component developed within the Verbmobil project (Dorna and Emele, 1996a). The term representation is inspired by earlier work (Kay et al., 1994; Caspari and Schmid, 1994) which uses terms as a quasi- semantic representation for transfer and gener- ation. The translation between f-structures and terms is based on the correspondence between directed graphs representing f-structures and the func- tional interpretation of these graphs (cf. (John- son, 1991)). Given an arc labeled f which con- nects two nodes nl and n2 in a graph, the same can be expressed by a function f(nl) = n2. An f-structure is the set of such feature equations describing the associated graph. Instead of fea- ture equations f(nl) -- n2 we use the relational notation f(nl, n2). Using this idea f-structures can be converted into sets of terms and vice versa} F-structure 1For motivation why we prefer term representations PRED features and their "semantic form" values are given special treatment. Instead of introduc- ing PRED terms we build unary relations with the semantic form predicate name as functor (see Example (1)). The resulting representation is similar to a Neo-Davidsonian style event se- mantics (Parsons, 1991) but uses syntactic roles. For a formalization of the f-structure-term cor- respondence see Appendix A. l (I) a. /PRED ~o~.~,,(~SUBJ) /m LADJN { [PRED GERNE][~]} J b. Hans kocht gerne C. { kochen(nl), SUBJ (nl ,n2), Hans (n2), ADJN(nl,n3), gerne(n3) } Consider the simple head switching example in- volving the German attitude adverb gerne and the English verb like (see (lb) and (3b)). (la) is the LFG f-structure for the German sen- tence (lb). 2 (lc) is the set of terms representing (la). Transfer works on source language (SL) and tar- get language (TL) sets of terms representing predicates, roles, etc. like the ones shown in (lc). The mapping is encoded in transfer rules as in (2). 
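To make the term representation and the rewriting step concrete, the following Python sketch (our own illustration, not the Verbmobil component) applies one set-to-set rule of the kind shown in (2) below to the term set of (1c); upper-case arguments act as logical variables:

```python
import itertools

def unify(pattern, term, env):
    """Match a pattern term such as ('SUBJ', 'E', 'X') against a ground
    term, extending the bindings in env; upper-case args are variables."""
    if pattern[0] != term[0] or len(pattern) != len(term):
        return None
    env = dict(env)
    for p, t in zip(pattern[1:], term[1:]):
        if p.isupper():                      # a logical variable
            if env.setdefault(p, t) != t:
                return None
        elif p != t:                         # a constant must match exactly
            return None
    return env

def apply_rule(lhs, rhs, terms):
    """Find a subset of terms matching all of lhs; remove it and return
    the instantiated rhs together with the untouched remainder."""
    for combo in itertools.permutations(terms, len(lhs)):
        env = {}
        for pattern, term in zip(lhs, combo):
            env = unify(pattern, term, env)
            if env is None:
                break
        if env is not None:
            rest = [t for t in terms if t not in combo]
            out = [tuple([r[0]] + [env.get(a, a) for a in r[1:]]) for r in rhs]
            return out + rest
    return terms

source = [("kochen", "n1"), ("SUBJ", "n1", "n2"), ("Hans", "n2")]
print(apply_rule([("kochen", "E")], [("cook", "E")], source))
# [('cook', 'n1'), ('SUBJ', 'n1', 'n2'), ('Hans', 'n2')]
```

A full transfer run would apply such rules repeatedly until the source set is empty, as described next.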
For a rule to be applied, the set on the SL side must be a matching subset of the SL input set. If this is the case, we remove the covering set from the input and add the set on the other side of the rule to the TL output. Transfer is complete, if the SL set is empty. (2) a. "[ kochen(E) ]" <-> { cook(E) }. b. (SUBJ(E,X) } <-> { SUBJ(E,X) ]-. c. { Hans(X) } <-> { Hans(X) ]'. d. (ADJN(E,X) ,gerne(X) ]- # "[ SUBJ(E,Y) } <-> { Iike(X),XCOMP(X,E),SUBJ(X,Y) }. The transfer operator <-> is bidirectional. Up- per case letters in argument positions are logical variables which will be bound to nodes at run- time. Because of the variable sharings on both sides of a rule we work on the same nodes of a graph. The result is a graph rewriting process. over feature structures for transfer, see (Emele and Dorna, 1998). 2For presentational purposes we leave out morpho- syntactic information in f-structures here and in the fol- lowing examples. 342 The head switching rule (2d) shows two compo- nents on its lefthand side: the part to the right of # is a test on a copy of the original input. The test binds the variable Y at runtime when ap- plying the rule from left to right. In the reverse direction (and in general), TL tests are ignored. Applying the rule set in (2) to (lc), we get (3c). We now use the correspondence between f- structures and term representations to construct the TL f-structure. The result is (3a) represent- ing the English sentence (3b). "suBJ [PRED ] PRED LIKE(~ SUB J, I" XCOMP) /- (3) a. [SUBJ [PRED HANS]I~I]~/131 XCOMe [PRED ooo ( SUB.> jwj b. Hans likes cooking C. (like(n3) SUBJ(n3,n2), Hans(n2), XCOMP(n3,nl), cook(nl), SUBJ(nl,n2) } 3 Semantic Transfer Semantic-based transfer as detailed in (Dorna and Emele, 1996a; Dorna and Emele, 1996b) is based on rewriting underspecified seman- tic representations. The representations (Bos et al., 1996) are UDRS variants (Reyle, 1993). F-structures are abstract syntactic representa- tions. They do, however, encode basic predicate- argument relations, and this is essentially se- mantic information. It turns out that there are important structural similarities between f-structures and UDRSs: f-structures can be "read" as UDRSs and hence be assigned an underspecified truth-conditional interpretation (Genabith and Crouch, 1997). 3 Appendix B gives a relational formulation of the corre- spondence between f-structures and UDRSs. The UDRS representations are processed by semantic-based transfer. The resulting system is bi-directional. Consider again the simple head switching case discussed in (1) and (3) above. (4) shows the corresponding UDRSs. The structural mismatch between the two f- structures has disappeared on the level of UDRS representations and transfer is facilitated. 4 3A similar corespondence between f-structures and QLFs (Alshawi and Crouch, 1992) has been shown in (Genabith and Crouch, 1996). 4In the implementation, a Neo-Davidsonian style en- (4) z, "° Hans(x~]) ] ¢ ÷ l~] : I gerne(l~l ) l--li-51 : I like(x~], l~1) I 7 l[i]: I k°chen(x~]) I~t[i:l : [ c°°k(x~) l Hans kocht gerne Hans likes cooking 4 Embedded Head Switching and Multiple Adjuncts How do the two approaches fare with embed- ded head switching and multiple adjuncts? Due to space limits we will not discuss straightfor- ward cases where ambiguites represented in un- derspecified representations are carried over into the target language. Examples of this type in- volve quantificational and plural NPs, negation, or adjunct sets. 
Instead, we concentrate on complex cases where a source language ambiguity needs to be resolved in the target language.

4.1 Embedded Head-Switching

The syntactic transfer rules (2) are supplemented by (5). The complex rule for gerne in (5) overrides⁵ (2d) and the COMP rule in (5). For each additional level of embedding triggered by head switching adjuncts a special rule is needed.
(⁵For the treatment of overriding see, e.g., the specificity criterion in (Dorna and Emele, 1996a).)

(5) { vermuten(E) } <-> { suspect(E) }.
    { Ede(X) } <-> { Ede(X) }.
    { COMP(E,X) } <-> { COMP(E,X) }.
    { gerne(X), ADJN(E,X), COMP(E1,E) } # { SUBJ(E,Y) }
      <-> { like(X), XCOMP(X,E), SUBJ(X,Y), COMP(E1,X) }.

By contrast, on the level of UDRSs head switching has disappeared and transfer is facilitated. Figure 1 shows the transfer correspondence between terms and UDRSs.

[Figure 1: Embedded Head Switching Example. It relates "Ede vermutet, dass Hans gerne kocht" and "Ede suspects that Hans likes cooking" via the term sets { vermuten(n1), SUBJ(n1,n2), Ede(n2), COMP(n1,n3), kochen(n3), SUBJ(n3,n4), Hans(n4), ADJN(n3,n5), gerne(n5) } <-> { suspect(n1), SUBJ(n1,n2), Ede(n2), COMP(n1,n5), like(n5), SUBJ(n5,n4), Hans(n4), XCOMP(n5,n3), cook(n3), SUBJ(n3,n4) }, together with the corresponding f-structures and UDRSs.]

4.2 Multiple Adjuncts

Consider the sentences in (6).

(6) a. Oft kocht Hans gerne
    b. Hans kocht gerne oft
    c. Often Hans likes cooking
    d. Hans likes cooking often

(6a) is ambiguous between (6c) and (6d); (6b) can only mean (6d). (6c) and (6d) are not ambiguous. (6a) is represented by f-structure (7a).

(7) a. [PRED KOCHEN<SUBJ>, SUBJ [PRED HANS], ADJN {[PRED OFT], [PRED GERNE]}]
    b. { kochen(n1), SUBJ(n1,n2), Hans(n2), ADJN(n1,n3), oft(n3), ADJN(n1,n4), gerne(n4) }
    c. [a flat UDRS in which the oft, gerne and kochen conditions are scopally unordered]

The corresponding term representation is (7b) and, in the absence of further constraints, we get a flat, scopally underspecified UDRS (7c). Let (6a) be our translation candidate. For syntactic transfer, adding rules (9) to the ones introduced in (2) leads to (8a).

(8) a. { like(n4), SUBJ(n4,n2), Hans(n2), XCOMP(n4,n1), cook(n1), SUBJ(n1,n2), ADJN(n1,n3), often(n3) }
    b. [the corresponding English f-structure, with ADJN {[PRED OFTEN]} inside the XCOMP of LIKE]
    c. [the corresponding UDRS, with often scoping below like]

(9) { ADJN(E,X) } <-> { ADJN(E,X) }.
    { oft(E) } <-> { often(E) }.

(8a) corresponds to only one of the English translations of (6a), namely (6d).
As in the correspondence-based approach (Kaplan et al., 1989), often can only be assigned wide scope over like if the transfer formalism allows reference to and rewriting of partial nodes. In the present case the two terms kochen(n1), SUBJ(n1,n2) could then be rewritten as the complement of like, XCOMP(n4,n1), whereas ADJN(n1,n3) is rewritten as ADJN(n4,n3) or ADJN(n1,n3).⁶ The target f-structure for English must resolve the relative scope between like and often ((8b) and (10)).
(⁶As an alternative, we can get both readings if we define special rules for adverbials in head switching contexts, giving them wide or narrow scope relative to the head switching adverbial. A narrow scope rule is already given in (9). A wide scope rule would be { ADJN(E,X) } # { HS(E1), XCOMP(E1,E) } <-> { ADJN(E1,X) }, where HS(E1) is a "marker" on the switched adverbial's node E1.)

(10) [f-structure with PRED LIKE<SUBJ,XCOMP>, SUBJ [PRED HANS], XCOMP [PRED COOK<SUBJ>], and ADJN {[PRED OFTEN]} attached at the like level]

Semantic transfer on the source UDRS (7c) preserves the underspecification and leads to (11).

(11) [a flat UDRS in which the often, like and cook conditions are scopally unordered]

However, (11) is not in the direct f-structure-UDRS correspondence with (10) and (8b). Instead, the correspondences on the enumerations of the scoping possibilities of (11) yield (10) and (8b) as required.
By contrast, the reading of (6b) is restricted by the surface order in which the two adverbials occur. On the semantic level this is reflected in terms of corresponding subordination constraints (12). The target UDRS corresponds to f-structure (8b).

(12) [source and target UDRSs in which the gerne/like condition subordinates the oft/often condition]

In LFG, linearization effects can be captured in terms of f-precedence constraints <_f as in (13). Semantic subordination and f-precedence constraints can then be linked as in (14).

(13) [the gerne node f-precedes the oft node]
(14) if adjunct node [i] f-precedes adjunct node [j], then l[j] <= l[i]

With (14) the interaction of head switching and multiple adjuncts is correctly resolved in semantic-based transfer. Similarly, in syntactic transfer, the precedence constraint (13) can be used to steer translation to f-structure (8b).

5 Discussion

We have presented two alternative architectures for transfer in LFG. In both cases, transfer is driven by the transfer module developed and implemented by Dorna and Emele (1996a). In the case of syntactic transfer, transfer is defined on term representations of f-structures. In the case of semantic transfer, transfer is defined on UDRS translations of f-structures. F-structure, term and UDRS correspondences are defined in the Appendix. The transfer rules are bidirectional, as are the f-structure-term and f-structure-UDRS correspondences.
Co-description based approaches (Kaplan and Wedekind, 1993) require annotation of source and target lexica and grammars. By contrast, both approaches presented here support modular grammar development: they don't involve additional coding in the grammar specifications.
An important issue, noted above, is the problem of ambiguities and ambiguity preserving transfer. F-structures and UDRSs are underspecified syntactic and semantic representations, respectively. Both support ambiguity preserving transfer to differing degrees (NP scope, operators, adjuncts). F-structure based syntactic representations may come up against structural mismatches in transfer.
The original co-description based approach in (Kaplan et al., 1989) faced problems when it came to examples involving embedded head-switching and multiple adjuncts (Sadler and Thompson, 1991), which led to the introduction of a restriction operator to enable transfer on partial f-structures or semantic structures (Kaplan and Wedekind, 1993). One might suppose that the need to refer to partial structures is an artifact of the correspondence-based approach, which doesn't allow the mapping from a single node of the source f-structure to distinct nodes in the target f-structure without violation of the functional property of the correspondence. On closer inspection, though, the rewriting approach to syntactic f-structure-term translations presented above suffers from the very same problems that were met by the correspondence-based approach in (Kaplan et al., 1989).
By contrast, transfer on the semantic UDRS representations does not suffer from such problems. Head switching is dealt with in the construction of semantic representations. Underspecified semantic representations in the form of UDRSs (or related formalisms) offer the following advantages for transfer: they abstract away from cross-language configurational variation to facilitate transfer. Unlike the original restriction operator approach (Kaplan and Wedekind, 1993), whenever possible they avoid the detour of multiple transfer on disambiguated representations. At the same time they provide a flexible encoding of information essential to steer transfer.
Of course, semantics does not come for free, nor does it always blend as seamlessly with syntactic representations as one would hope for. Semantics has to be encoded in the grammar or defined in terms of correspondences as below. System design has to address the question where to do what at which cost. Semantic representations pay off when they are useful for a number of tasks: evaluation (as against a database), inference and transfer. Even more so when existing resources can be interfaced qua semantic representations: in our case the tested transfer methodology and resources developed in (Dorna and Emele, 1996a).

References

H. Alshawi and R. Crouch. 1992. Monotonic semantic interpretation. In Proceedings of ACL, pages 32-39, Newark, Delaware.
J. Bos, B. Gambäck, C. Lieske, Y. Mori, M. Pinkal, and K. Worm. 1996. Compositional Semantics in Verbmobil. Coling'96, pages 131-136, Copenhagen, Denmark.
R. Caspari and L. Schmid. 1994. Parsing und Generierung in TrUG. Verbmobil Report 40, Siemens AG, December.
M. Dalrymple, J. Lamping, F.C.N. Pereira, and V. Saraswat. 1996. A deductive account of quantification in LFG. In M. Kanazawa, C. Pinon, and H. de Swart, editors, Quantifiers, Deduction and Context, pages 33-57. CSLI Publications, No. 57.
M. Dorna and M. C. Emele. 1996a. Efficient Implementation of a Semantic-based Transfer Approach. ECAI'96, Budapest, Hungary.
M. Dorna and M. C. Emele. 1996b. Semantic-based Transfer. Coling'96, Copenhagen, Denmark.
M. C. Emele and M. Dorna. 1998. Ambiguity Preserving Transfer Using Packed Representations. Coling'98, Montreal, Canada.
J. van Genabith and R. Crouch. 1996. Direct and underspecified interpretations of LFG f-structures. In COLING 96, Copenhagen, Denmark, pages 262-267.
J. van Genabith and R. Crouch. 1997. On interpreting f-structures as UDRSs. In ACL-EACL-97, Madrid, Spain, pages 402-409.
J. van Genabith, A. Frank, and M. Dorna. 1998. Transfer Constructors. LFG Conference '98, Brisbane, Australia.
M. Johnson. 1991. Features and Formulae. Computational Linguistics, 17(2):131-151.
R. M. Kaplan and J. Wedekind. 1993. Restriction and Correspondence-based Translation. EACL'93, pages 193-202, Utrecht, The Netherlands.
R. Kaplan, K. Netter, J. Wedekind, and A. Zaenen. 1989. Translation by Structural Correspondences. EACL'89, pages 272-281, Manchester, UK.
M. Kay, M. Gawron, and P. Norvig. 1994. Verbmobil: a Translation System for Face-to-Face Dialogs. Number 33 in CSLI Lecture Notes. University of Chicago Press.
John T. Maxwell III and Ronald M. Kaplan. 1993. The interface between phrasal and functional constraints. Computational Linguistics, 19(4):571-590.
T. Parsons. 1991. Events in the Semantics of English. MIT Press, Cambridge, Mass.
U. Reyle. 1993. Dealing with Ambiguities by Underspecification: Construction, Representation and Deduction. Journal of Semantics, 10(2):123-179.
L. Sadler and H. S. Thompson. 1991. Structural Non-correspondence in Translation. EACL'91, pages 293-298, Berlin, Germany.

A F-Structures and Terms

A 2-place relation between f-structures and sets of terms is defined below. Boxed indices [i] are references to feature structures which are mapped into node constants n_i used in terms. The Gamma are features (grammatical functions), and the phi are f-structures. Predicates occur as Pi<> if they do not subcategorize for an argument, else as Pi<Gamma1,...,Gamman>.

1. (simple predicates) <[PRED Pi<>][i], {Pi(n_i)}>
2. (complex predicates) <[PRED Pi<Gamma1,...,Gamman>, Gamma1 phi1[i1], ..., Gamman phin[in]][i0], {Pi(n_i0), Gamma1(n_i0,n_i1), ..., Gamman(n_i0,n_in)} union T1 union ... union Tn>, provided <phi1[i1],T1> and ... and <phin[in],Tn> are in the relation
3. (set values) <[ADJN {alpha1[i1],...,alpham[im]}][i0], {ADJN(n_i0,n_i1), ..., ADJN(n_i0,n_im)} union T1 union ... union Tm>, provided <alpha1[i1],T1> and ... and <alpham[im],Tm> are in the relation

B F-Structures and UDRSs

In (Genabith and Crouch, 1997) the correspondence between f-structures and UDRSs was defined in terms of translation functions tau and tau^-1 between subsets of the f-structure and UDRS formalisms. Below we give a relational formulation of the correspondence with a treatment of simple (scopal) adjuncts.⁷
(⁷In LFG adjuncts do not subcategorize the material they modify, nor are they subcategorized by that material.)

[Defining clauses 1-7; the displays are only partly recoverable. Clause 1 relates a clausal f-structure [PRED Pi<Gamma1,...,Gamman>, Gamma1 ..., Gamman ..., ADJN {alpha1,...,alpham}][i] to a UDRS of the form {l[i]:Pi(gamma1,...,gamman), l[i]_bottom <= l[i]} union S union A1 union ... union Am union F1 union ... union Fn. Clause 6 holds iff there is a lexically specified map between subcategorizable grammatical functions in LFG semantic form and argument positions in the corresponding UDRT predicate, e.g. {like(x[1], l[2])} corresponds to LIKE<SUBJ[1], XCOMP[2]>.]

F-structures and UDRSs are in the correspondence relation iff their components are correspondingly related (clause 1). The relation links f-structure tags and UDRS labels. Clausal tags [i] introduce a local top [i]T and a local bottom [i]_bottom. The global top is T. For readability, tops and bottoms are suppressed in the example translations. gamma refers to discourse referents or labels. S in clause 1 is a set of subordination constraints induced lexically by embedding verbs (clause 6).
Clauses 2-4 relate quantificational, indefinite and proper name f-structure and UDRS components, clause 5 embedded clauses. Clause 7 translates simple adjuncts.
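As a concrete reading of the Appendix A translation, the following Python sketch (our own illustrative encoding, not part of the paper's machinery) turns an f-structure given as a nested dict into a term set: PRED values become unary relations, other features binary relations, and set-valued ADJN distributes over its members.

    from itertools import count

    def fstructure_to_terms(fs, counter=None):
        # Fresh node constants n1, n2, ... are generated during traversal.
        counter = counter or count(1)
        node = f"n{next(counter)}"
        terms = {(fs["PRED"], node)}                     # unary PRED relation
        for feat, val in fs.items():
            if feat == "PRED":
                continue
            vals = val if isinstance(val, list) else [val]   # ADJN is set-valued
            for v in vals:
                sub_node, sub_terms = fstructure_to_terms(v, counter)
                terms |= sub_terms
                terms.add((feat, node, sub_node))        # binary feature relation
        return node, terms

    hans_kocht_gerne = {
        "PRED": "kochen",
        "SUBJ": {"PRED": "Hans"},
        "ADJN": [{"PRED": "gerne"}],
    }
    _, terms = fstructure_to_terms(hans_kocht_gerne)
    print(terms)
    # yields the term set of example (1c):
    # {('kochen','n1'), ('SUBJ','n1','n2'), ('Hans','n2'),
    #  ('ADJN','n1','n3'), ('gerne','n3')}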
Group Theory and Linguistic Processing*

Marc Dymetman
Xerox Research Centre Europe
6, chemin de Maupertuis
38240 Meylan, France
Marc.Dymetman@xrce.xerox.com

(*This paper is an abridged version of Group Theory and Grammatical Description, TR-MLTT-033, XRCE, April 1998; available on the CMP-LG archive at the address: http://xxx.lanl.gov/abs/cmp-lg/9805002.)

1 Introduction

There is currently much interest in bringing together the tradition of categorial grammar, and especially the Lambek calculus (Lambek, 1958), with the more recent paradigm of linear logic (Girard, 1987) to which it has strong ties. One active research area concerns the design of non-commutative versions of linear logic (Abrusci, 1991; Rétoré, 1993) which can be sensitive to word order while retaining the hypothetical reasoning capabilities of standard (commutative) linear logic that make it so well-adapted to handling such phenomena as quantifier scoping (Dalrymple et al., 1995).
Some connections between the Lambek calculus and group structure have long been known (van Benthem, 1986), and linear logic itself has some aspects strongly reminiscent of groups (the producer/consumer duality of a formula A with its linear negation), but no serious attempt has been made so far to base a theory of linguistic description solely on group structure. This paper presents such a model, G-grammars (for "group grammars"), and argues that:

• The standard group-theoretic notion of conjugacy, which is central in G-grammars, is well-suited to a uniform description of commutative and non-commutative aspects of language;
• The use of conjugacy provides an elegant approach to long-distance dependency and scoping phenomena, both in parsing and in generation;
• G-grammars give a symmetrical account of the semantics-phonology relation, from which it is easy to extract, via simple group calculations, rewriting systems computing this relation for the parsing and generation modes.

2 Group Computation

A MONOID M is a set M together with a product M x M -> M, written (a,b) |-> ab, such that:

• This product is associative;
• There is an element 1 in M (the neutral element) with 1a = a1 = a for all a in M.

A GROUP is a monoid in which every element a has an inverse a^-1 such that a^-1 a = a a^-1 = 1.
A PREORDER on a set is a reflexive and transitive relation on this set. When the relation is also symmetrical, that is, R(x,y) implies R(y,x), then the preorder is called an EQUIVALENCE RELATION. When it is antisymmetrical, that is, R(x,y) and R(y,x) imply x = y, it is called a PARTIAL ORDER.
A preorder R on a group G will be said to be COMPATIBLE with the group product iff, whenever R(x,y) and R(x',y'), then R(xx',yy').

Normal submonoids of a group. We consider a compatible preorder notated x -> y on a group G. The following properties, for any x,y in G, are immediate:

x -> y iff x y^-1 -> 1;  x -> y iff y^-1 -> x^-1;  x -> 1 iff 1 -> x^-1;  x -> 1 implies y x y^-1 -> 1, for any y in G.

Two elements x, x' in a group G are said to be CONJUGATE if there exists y in G such that x' = y x y^-1. The fourth property above says that the set M of elements x in G such that x -> 1 is a set which contains along with an element all its conjugates, that is, a NORMAL subset of G. As M is clearly a submonoid of G, it will be called a NORMAL SUBMONOID of G.¹ Conversely, it is easy to show that with any normal submonoid M of G one can associate a preorder compatible with G. Indeed let's define x -> y as x y^-1 in M.
(¹In general M is not a subgroup of G. It is iff x -> y implies y -> x, that is, if the compatible preorder -> is an equivalence relation (and, therefore, a CONGRUENCE) on G. When this is the case, M is a NORMAL SUBGROUP of G. This notion plays a pivotal role in classical algebra. Its generalization to submonoids of G is basic for the algebraic theory of computation presented here.)
The relation -> is clearly reflexive and transitive, hence is a preorder. It is also compatible with G, for if x1 -> y1 and x2 -> y2, then x1 y1^-1, x2 y2^-1 and y1 (x2 y2^-1) y1^-1 are in M; hence x1 x2 y2^-1 y1^-1 = x1 y1^-1 y1 x2 y2^-1 y1^-1 is in M, implying that x1 x2 -> y1 y2, that is, that the preorder is compatible.
If S is a subset of G, the intersection of all normal submonoids of G containing S (resp. of all subgroups of G containing S) is a normal submonoid of G (resp. a normal subgroup of G) and is called the NORMAL SUBMONOID CLOSURE NM(S) of S in G (resp. the NORMAL SUBGROUP CLOSURE NG(S) of S in G).

The free group over V. We now consider an arbitrary set V, called the VOCABULARY, and we form the so-called SET OF ATOMS ON V, which is notated V union V^-1 and is obtained by taking both elements v in V and the formal inverses v^-1 of these elements. We now consider the set F(V) consisting of the empty string, notated 1, and of strings of the form x1 x2 ... xn, where each xi is an atom on V. It is assumed that such a string is REDUCED, that is, never contains two consecutive atoms which are inverses of each other: no substring v v^-1 or v^-1 v is allowed to appear in a reduced string.
When alpha and beta are two reduced strings, their concatenation alpha beta can be reduced by eliminating all substrings of the form v v^-1 or v^-1 v. It can be proven that the reduced string gamma obtained in this way is independent of the order of such eliminations. In this way, a product on F(V) is defined, and it is easily shown that F(V) becomes a (non-commutative) group, called the FREE GROUP over V (Hungerford, 1974).

Group computation. We will say that an ordered pair GCS = (V, R) is a GROUP COMPUTATION STRUCTURE if:

1. V is a set, called the VOCABULARY, or the set of GENERATORS;
2. R is a subset of F(V), called the LEXICON, or the set of RELATORS.²

(²For readers familiar with group theory, this terminology will evoke the classical notion of group PRESENTATION through generators and relators. The main difference with our definition is that, in the classical case, the set of relators is taken to be symmetrical, that is, to contain r^-1 if it contains r. When this additional assumption is made, our preorder becomes an equivalence relation.)
The submonoid closure NM(R) of R in F(V) is called the RESULT MONOID of the group computation structure GCS. The elements of NM(R) will be called COMPUTATION RESULTS, or simply RESULTS.
If r is a relator, and if alpha is an arbitrary element of F(V), then alpha r alpha^-1 will be called a QUASI-RELATOR of the group computation structure. It is easily seen that the set RN of quasi-relators is equal to the normal subset closure of R in F(V), and that NM(RN) is equal to NM(R).
A COMPUTATION relative to GCS is a finite sequence c = (r1,...,rn) of quasi-relators. The product r1 ... rn in F(V) is evidently a result, and is called the RESULT OF THE COMPUTATION c. It can be shown that the result monoid is entirely covered in this way: each result is the result of some computation. A computation can thus be seen as a "witness", or as a "proof", of the fact that a given element of F(V) is a result of the computation structure.³
(³The analogy with the view in constructive logics is clear. There what we call a result is called a formula or a type, and what we call a computation is called a proof.)
For specific computation tasks, one focusses on results of a certain sort, for instance results which express a relationship of input-output, where input and output are assumed to belong to certain object types. For example, in computational linguistics, one is often interested in results which express a relationship between a fixed semantic input and a possible textual output (generation mode) or conversely in results which express a relationship between a fixed textual input and a possible semantic output (parsing mode).
If GCS = (V,R) is a group computation structure, and if A is a given subset of F(V), then we will call the pair GCSA = (GCS, A) a GROUP COMPUTATION STRUCTURE WITH ACCEPTORS. We will say that A is the set of acceptors, or the PUBLIC INTERFACE, of GCSA. A result of GCS which belongs to the public interface will be called a PUBLIC RESULT of GCSA.

3 G-Grammars

We will now show how the formal concepts introduced above can be applied to the problems of grammatical description and computation. We start by introducing a grammar, which we will call a G-GRAMMAR (for "Group Grammar"), for a fragment of English (see Fig. 1).

Figure 1: A G-grammar for a fragment of English

j john^-1
l louise^-1
p paris^-1
m man^-1
w woman^-1
A^-1 r(A) ran^-1
A^-1 s(A,B) B^-1 saw^-1
E^-1 i(E,A) A^-1 in^-1
t(N) N^-1 the^-1
ev(N,X,P[X]) P[X]^-1 alpha^-1 X N^-1 every^-1 alpha
sm(N,X,P[X]) P[X]^-1 alpha^-1 X N^-1 some^-1 alpha
N^-1 tt(N,X,P[X]) P[X]^-1 alpha^-1 X alpha that^-1

A G-grammar is a group computation structure with acceptors over a vocabulary V = Vlog union Vphon consisting of a set of logical forms Vlog and a disjoint set of phonological elements (in the example, words) Vphon. Examples of phonological elements are john, saw, every; examples of logical forms are j, s(j,l), ev(m,x,sm(w,y,s(x,y))); these logical forms can be glossed respectively as "john", "john saw louise" and "for every man x, for some woman y, x saw y".
The grammar lexicon, or set of relators, R is given as a list of "lexical schemes". An example is given in Fig. 1. Each line is a lexical scheme and represents a set of relators in F(V). The first line is a ground scheme, which corresponds to the single relator j john^-1, and so are the next four lines. The sixth line is a non-ground scheme, which corresponds to an infinite set of relators, obtained by instantiating the term meta-variable A (notated in uppercase) to a logical form. So are the remaining lines.
We use Greek letters for expression meta-variables such as alpha, which can be replaced by an arbitrary expression of F(V); thus, whereas the term meta-variables A, B, ... range over logical forms, the expression meta-variables alpha, beta, ... range over products of logical forms and phonological elements (or their inverses) in F(V).⁴
(⁴Expression meta-variables are employed in the grammar for forming the set of conjugates alpha exp alpha^-1 of certain expressions exp (in our example, exp is ev(N,X,P[X]) P[X]^-1, sm(N,X,P[X]) P[X]^-1 or X). Conjugacy allows the enclosed material exp to move as a block in expressions of F(V); see sections 3 and 4.)
The notation P[X] is employed to express the fact that a logical form containing an argument identifier x is equal to the application of the abstraction P to x. The meta-variable X in P[X] ranges over such identifiers (x, y, z, ...), which are notated in lower-case italics (and are always ground). The meta-variable P ranges over logical form abstractions missing one argument (for instance lambda z.s(j,z)).
When matching meta-variables in logical forms, we will allow limited use of higher-order unification. For instance, one can match P[X] to s(j,x) by taking P = lambda z.s(j,z) and X = x.
The vocabulary and the set of relators that we have just specified define a group computation structure GCS = (V,R). We will now describe a set of acceptors A for this computation structure. We take A to be the set of elements of F(V) which are products of the following form:

S Wn^-1 Wn-1^-1 ... W1^-1

where S is a logical form (S stands for "semantics"), and where each Wi is a phonological element (W stands for "word"). The expression above is a way of encoding the ordered pair consisting of the logical form S and the phonological string W1 W2 ... Wn (that is, the inverse of the product Wn^-1 Wn-1^-1 ... W1^-1). A public result S Wn^-1 Wn-1^-1 ... W1^-1 in the group computation structure with acceptors ((V,R), A), the G-grammar, will be interpreted as meaning that the logical form S can be expressed as the phonological string W1 W2 ... Wn.
Let us give an example of a public result relative to the grammar of Fig. 1. We consider the relators (instantiations of relator schemes):

r1 = j^-1 s(j,l) l^-1 saw^-1
r2 = l louise^-1
r3 = j john^-1

and the quasi-relators:

r1' = j r1 j^-1
r2' = (j saw) r2 (j saw)^-1
r3' = r3

Then we have:

r1' r2' r3' = j j^-1 s(j,l) l^-1 saw^-1 j^-1 . j saw l louise^-1 saw^-1 j^-1 . j john^-1
            = s(j,l) louise^-1 saw^-1 john^-1

which means that s(j,l) louise^-1 saw^-1 john^-1 is the result of the computation (r1', r2', r3'). This result is obviously a public one, which means that the logical form s(j,l) can be verbalized as the phonological string john saw louise.

4 Generation

Applying directly, as we have just done, the definition of a group computation structure in order to obtain public results can be somewhat unintuitive. It is often easier to use the preorder ->. If, for a,b,c in F(V), abc is a relator, then abc -> 1, and therefore b -> a^-1 c^-1. Taking this remark into account, it is possible to write the relators of our G-grammar as the "rewriting rules" of Fig. 2; we use the notation => instead of -> to distinguish these rules from the parsing rules which will be introduced in the next section.

Figure 2: Generation-oriented rules

j => john
l => louise
p => paris
m => man
w => woman
r(A) => A ran
s(A,B) => A saw B
i(E,A) => E in A
t(N) => the N
ev(N,X,P[X]) => alpha^-1 every N X^-1 alpha P[X]
sm(N,X,P[X]) => alpha^-1 some N X^-1 alpha P[X]
tt(N,X,P[X]) => N that alpha^-1 X^-1 alpha P[X]

The rules of Fig. 2 have a systematic structure. The left-hand side of each rule consists of a single logical form, taken from the corresponding relator in the G-grammar; the right-hand side is obtained by "moving" all the remaining elements in the relator to the right of the arrow. Because the rules of Fig. 2 privilege the rewriting of a logical form into an expression of F(V), they are called generation-oriented rules associated with the G-grammar.
Using these rules, and the fact that the preorder is compatible with the product of F(V), the fact that s(j,l) louise^-1 saw^-1 john^-1 is a public result can be obtained in a simpler way than previously.
We have:

s(j,l) => j saw l    j => john    l => louise

by the seventh, first and second rules (properly instantiated), and therefore, by transitivity and compatibility of the preorder:

s(j,l) => j saw l => john saw l => john saw louise

which proves that s(j,l) => john saw louise, which is equivalent to saying that s(j,l) louise^-1 saw^-1 john^-1 is a public result. Some other generation examples are given in Fig. 3.

Figure 3: Generation examples

i(s(j,l),p)
  => s(j,l) in p
  => j saw l in p
  => john saw l in p
  => john saw louise in p
  => john saw louise in paris

ev(m,x,sm(w,y,s(x,y)))
  => alpha^-1 every m x^-1 alpha sm(w,y,s(x,y))
  => alpha^-1 every m x^-1 alpha beta^-1 some w y^-1 beta s(x,y)
  => alpha^-1 every man x^-1 alpha beta^-1 some woman y^-1 beta x saw y
  => alpha^-1 every man x^-1 alpha x saw some woman    (by taking beta = saw^-1 x^-1)
  => every man saw some woman                          (by taking alpha = 1)

sm(w,y,ev(m,x,s(x,y)))
  => beta^-1 some w y^-1 beta ev(m,x,s(x,y))
  => beta^-1 some w y^-1 beta alpha^-1 every m x^-1 alpha s(x,y)
  => beta^-1 some woman y^-1 beta alpha^-1 every man x^-1 alpha x saw y
  => beta^-1 some woman y^-1 beta every man saw y      (by taking alpha = 1)
  => every man saw some woman                          (by taking beta = saw^-1 man^-1 every^-1)

The first example is straightforward and works similarly to the one we have just seen: from the logical form i(s(j,l),p) one can derive the phonological string john saw louise in paris.

Long-distance movement and quantifiers. The second and third examples are parallel to each other and show the derivation of the same string every man saw some woman from two different logical forms. The penultimate and last steps of each example are the most interesting. In the penultimate step of the second example, beta is instantiated to saw^-1 x^-1. This has the effect of "moving" as a whole the expression some woman y^-1 to the position just before y, and therefore to allow for the cancellation of y^-1 and y. The net effect is thus to "replace" the identifier y by the string some woman; in the last step alpha is instantiated to the neutral element 1, which has the effect of replacing x by every man. In the penultimate step of the third example, alpha is instantiated to the neutral element, which has the effect of replacing x by every man; then beta is instantiated to saw^-1 man^-1 every^-1, which has the effect of replacing y by some woman.

Remark. In all cases in which an expression similar to alpha a1 ... am alpha^-1 appears (with the ai arbitrary vocabulary elements), it is easily seen that, by giving alpha an appropriate value in F(V), the a1 ... am can move arbitrarily to the left or to the right, but only together in solidarity; they can also freely permute cyclically, that is, by giving an appropriate value to alpha, the expression alpha a1 ... am alpha^-1 can take on the value ak ak+1 ... am a1 ... ak-1 (other permutations are in general not possible). The values given to the alpha, beta, etc., in the examples of this paper can be understood intuitively in terms of these two properties.
We see that, by this mechanism of concerted movement, quantified noun phrases can move to whatever place is assigned to them after the expansion of their "scope" predicate, a place which was unpredictable at the time of the expansion of the quantified logical form. The identifiers act as "target markers" for the quantified noun phrase: the only way to "get rid" of an identifier x is by moving x^-1, and therefore with it the corresponding quantified noun phrase, to a place where it can cancel with x.

5 Parsing

To the compatible preorder -> on F(V) there corresponds a "reverse" compatible preorder ~>, defined as a ~> b iff b -> a, or, equivalently, a^-1 -> b^-1. The normal submonoid M' in F(V) associated with ~> is the inverse monoid of the normal submonoid M associated with ->, that is, M' contains a iff M contains a^-1. It is then clear that one can present the relations:

j john^-1 -> 1
A^-1 r(A) ran^-1 -> 1
sm(N,X,P[X]) P[X]^-1 alpha^-1 X N^-1 some^-1 alpha -> 1
etc.

in the equivalent way:

john j^-1 ~> 1
ran r(A)^-1 A ~> 1
some N X^-1 alpha P[X] sm(N,X,P[X])^-1 alpha^-1 ~> 1
etc.

Suppose now that we move to the right of the ~> arrow all elements appearing on the left of it, but for the single phonological element of each relator. We obtain the rules of Fig. 4, which we call the "parsing-oriented" rules associated with the G-grammar.

Figure 4: Parsing-oriented rules

john ~> j
louise ~> l
paris ~> p
man ~> m
woman ~> w
ran ~> A^-1 r(A)
saw ~> A^-1 s(A,B) B^-1
in ~> E^-1 i(E,A) A^-1
the ~> t(N) N^-1
every ~> alpha ev(N,X,P[X]) P[X]^-1 alpha^-1 X N^-1
some ~> alpha sm(N,X,P[X]) P[X]^-1 alpha^-1 X N^-1
that ~> N^-1 tt(N,X,P[X]) P[X]^-1 alpha^-1 X alpha

By the same reasoning as in the generation case, it is easy to show that any derivation using these rules and leading to the relation PS ~> LF, where PS is a phonological string and LF a logical form, corresponds to a public result LF PS^-1 in the G-grammar. A few parsing examples are given in Fig. 5; they are the converses of the generation examples given earlier.

Figure 5: Parsing examples

john saw louise in paris
  ~> j A^-1 s(A,B) B^-1 l E^-1 i(E,C) C^-1 p
  ~> s(j,B) B^-1 l E^-1 i(E,p)
  ~> s(j,l) E^-1 i(E,p)
  ~> i(s(j,l),p)

every man saw some woman
  ~> alpha ev(N,x,P[x]) P[x]^-1 alpha^-1 x N^-1 m A^-1 s(A,B) B^-1 beta sm(M,y,Q[y]) Q[y]^-1 beta^-1 y M^-1 w
  ~> alpha ev(m,x,P[x]) P[x]^-1 alpha^-1 x A^-1 s(A,B) B^-1 beta sm(w,y,Q[y]) Q[y]^-1 beta^-1 y
  ~> x A^-1 ev(m,x,P[x]) P[x]^-1 s(A,B) B^-1 beta sm(w,y,Q[y]) Q[y]^-1 beta^-1 y
  ~> x A^-1 ev(m,x,P[x]) P[x]^-1 s(A,B) Q[y]^-1 sm(w,y,Q[y]) B^-1 y
  ~> ev(m,x,P[x]) P[x]^-1 s(x,y) Q[y]^-1 sm(w,y,Q[y])
and then either:
  ~> ev(m,x,P[x]) P[x]^-1 sm(w,y,s(x,y))  ~>  ev(m,x,sm(w,y,s(x,y)))
or:
  ~> ev(m,x,s(x,y)) Q[y]^-1 sm(w,y,Q[y])  ~>  sm(w,y,ev(m,x,s(x,y)))

In the first example, we first rewrite each of the phonological elements into the expression appearing on the right-hand side of the rules (and where the meta-variables have been renamed in the standard way to avoid name clashes). The rewriting has taken place in parallel, which is of course permitted (we could have obtained the same result by rewriting the words one by one). We then perform certain unifications: A is unified with j, C with p; then B is unified with l.⁵ Finally E is unified with s(j,l), and we obtain the logical form i(s(j,l),p). In this last step, it might seem feasible to unify E to i(E,p) instead, but that is in fact forbidden, for it would mean that the logical form i(E,p) is not a finite tree, as we do require. This condition prevents "self-cancellation" of a logical form with a logical form that it strictly contains.
(⁵Another possibility at this point would be to unify l with E rather than with B. This would lead to the construction of the logical form i(l,p), and, after unification of E with that logical form, would conduct to the output s(j,i(l,p)). If one wants to prevent this output, several approaches are possible. The first one consists in typing the logical forms with syntactic categories. The second one is to have some notion of logical-form well-formedness (or perhaps interpretability) disallowing the logical forms i(l,p) [louise in paris] or i(t(w),p) [(the woman) in paris], although it might allow the form t(i(w,p)) [the (woman in paris)].)

Quantifier scoping. In the second example, we start by unifying m with N and w with M; then we "move" P[x]^-1 next to s(A,B) by taking alpha = x A^-1;⁶ then again we "move" Q[y]^-1 next to s(A,B) by taking beta = B sm(w,y,Q[y])^-1; x is then unified with A and y with B. This leads to the expression:

ev(m,x,P[x]) P[x]^-1 s(x,y) Q[y]^-1 sm(w,y,Q[y])

where we now have a choice. We can either unify s(x,y) with Q[y], or with P[x]. In the first case, we continue by now unifying P[x] with sm(w,y,s(x,y)), leading to the output ev(m,x,sm(w,y,s(x,y))). In the second case, we continue by now unifying Q[y] with ev(m,x,s(x,y)), leading to the output sm(w,y,ev(m,x,s(x,y))). The two possible quantifier scopings for the input string are thus obtained, each corresponding to a certain order of performing the unifications.
(⁶We have assumed that the meta-variables corresponding to identifiers in P and Q have been instantiated to arbitrary, but different, values x and y. See (Dymetman, 1998) for a discussion of this point.)

Acknowledgments

Thanks to Christian Retoré, Eric de la Clergerie, Alain Lecomte and Aarne Ranta for comments and discussion.

References

V.M. Abrusci. 1991. Phase semantics and sequent calculus for pure non-commutative classical linear logic. Journal of Symbolic Logic, 56(4).
M. Dalrymple, J. Lamping, F. Pereira, and V. Saraswat. 1995. Linear logic for meaning assembly. In Proc. CLNLP, Edinburgh.
Marc Dymetman. 1998. Group computation and its applications to linguistic description. (In preparation.)
J.Y. Girard. 1987. Linear logic. Theoretical Computer Science, 50(1).
Thomas W. Hungerford. 1974. Algebra. Springer-Verlag.
J. Lambek. 1958. The mathematics of sentence structure. American Mathematical Monthly, 65:154-168.
C. Rétoré. 1993. Réseaux et séquents ordonnés. Ph.D. thesis, Univ. Paris 7.
Johan van Benthem. 1986. Essays in Logical Semantics. D. Reidel, Dordrecht, Holland.
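The reductions performed throughout the derivations above are ordinary free-group reductions. As a closing illustration, the following Python sketch (the encoding of atoms and the example word are ours) implements reduction in F(V) and reproduces the computation of the public result s(j,l) louise^-1 saw^-1 john^-1 from Section 3.

    # Atoms are pairs (symbol, +1) or (symbol, -1); a word is reduced by
    # cancelling adjacent inverse atoms with a stack.

    def inv(word):
        """Inverse of a word: reverse it and flip every exponent."""
        return [(s, -e) for s, e in reversed(word)]

    def reduce_word(word):
        """Cancel adjacent v v^-1 / v^-1 v pairs."""
        stack = []
        for atom in word:
            if stack and stack[-1] == (atom[0], -atom[1]):
                stack.pop()                    # cancellation
            else:
                stack.append(atom)
        return stack

    def conjugate(alpha, word):
        """alpha * word * alpha^-1, reduced."""
        return reduce_word(alpha + word + inv(alpha))

    # The product r1' r2' r3' of Section 3:
    # j j^-1 s(j,l) l^-1 saw^-1 j^-1 j saw l louise^-1 saw^-1 j^-1 j john^-1
    w = [("j", 1), ("j", -1), ("s(j,l)", 1), ("l", -1), ("saw", -1),
         ("j", -1), ("j", 1), ("saw", 1), ("l", 1), ("louise", -1),
         ("saw", -1), ("j", -1), ("j", 1), ("john", -1)]
    print(reduce_word(w))
    # [('s(j,l)', 1), ('louise', -1), ('saw', -1), ('john', -1)]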
Constraints over Lambda-Structures in Semantic Underspecification

Markus Egg and Joachim Niehren* and Peter Ruhrberg and Feiyu Xu
Department of Computational Linguistics / *Programming Systems Lab
Universität des Saarlandes, Saarbrücken, Germany
{egg, peru, feiyu}@coli.uni-sb.de    niehren@ps.uni-sb.de

Abstract

We introduce a first-order language for semantic underspecification that we call Constraint Language for Lambda-Structures (CLLS). A λ-structure can be considered as a λ-term up to consistent renaming of bound variables (α-equality); a constraint of CLLS is an underspecified description of a λ-structure. CLLS solves a capturing problem omnipresent in underspecified scope representations. CLLS features constraints for dominance, lambda binding, parallelism, and anaphoric links. Based on CLLS we present a simple, integrated, and underspecified treatment of scope, parallelism, and anaphora.

1 Introduction

A central concern of semantic underspecification (van Deemter and Peters, 1996) is the underspecification of the scope of variable binding operators such as quantifiers (Hobbs and Shieber, 1987; Alshawi, 1990; Reyle, 1993). This immediately raises the conceptual problem of how to avoid variable-capturing when instantiating underspecified scope representations. In principle, capturing may occur in all formalisms for structural underspecification which represent binding relations by the coordination of variables (Reyle, 1995; Pinkal, 1996; Bos, 1996; Niehren et al., 1997a). Consider for instance the verb phrase in (1):

(1) Manfred [VP knows every student]

An underspecified description of the compositional semantics of the VP in (1) might be given along the lines of (2):

(2) X = C1(∀x(student(x) → C2(know(Z,x))))

The meta-variable X in (2) denotes some tree representing a predicate logic formula which is underspecified for quantifier scope by means of two place holders C1 and C2 where a subject-quantifier can be filled in, and a place holder Z for the subject-variable. The binding of the object-variable x by the object-quantifier ∀x is coordinated through the name of the object-variable, namely 'x'. Capturing occurs when a new quantifier like ∃x is filled in C2, whereby the binding between x and ∀x is accidentally undone and replaced with a binding of x by ∃x.
Capturing problems raised by variable coordination may be circumvented in simple cases where all quantifiers in underspecified descriptions can be assumed to be named by distinct variables. However, this assumption becomes problematic in the light of parallelism between the interpretations of two clauses. Consider for instance the correction of (1) in (3):

(3) No, Hans [VP knows every student]

The description of the semantics of the VP in (3) is given in (4):

(4) Y = C3(∀y(student(y) → C4(know(Z',y))))

But a full understanding of the combined clauses (1) and (3) requires a grasp of the semantic identity of the two VP interpretations. Now, the VP interpretations (2) and (4) look very much alike but for the different object-variable, namely 'y' instead of 'x'. This illustrates that in cases of parallelism, like in corrections, different variables in parallel quantified structures have to be matched against each other, which requires some form of renaming to be done on them.
While this is unproblematic for fully specified structures, it presents serious problems with underspecified structures like (2) and (4), as there the names of the variables are crucial for ensuring the right bindings. Any attempt to integrate parallelism with scope underspecification thus has to cope with conflicting requirements on the choice of variable names. Avoiding capturing requires variables to be renamed apart, but parallelism needs parallel bound variables to be named alike.
We avoid all capturing and renaming problems by introducing the notion of λ-structures, which represent binding relations without naming variables. A λ-structure is a standard predicate logic tree structure which can be considered as a λ-term or some other logical formula up to consistent renaming of bound variables (α-equality). Instead of variable names, a λ-structure provides a partial function on tree-nodes for expressing variable binding. A graphical illustration of the λ-structure corresponding to the λ-term λx.like(x,x) is given in (5).

(5) [a tree with root ν0 labelled lam, its daughter ν1 labelled like, and two var daughters ν2 and ν3, each connected to ν0 by a binding edge]

Formally, the binding relation of the λ-structure in (5) is expressed through the partial function λ(5) defined by λ(5)(ν2) = ν0 and λ(5)(ν3) = ν0.
We propose a first-order constraint language for λ-structures called CLLS which solves the capturing problem of underspecified scope representations in a simple and elegant way. CLLS subsumes dominance constraints (Backofen et al., 1995) as known from syntactic processing (Marcus et al., 1983) and from tree-adjoining grammars (Vijay-Shanker, 1992; Rogers and Vijay-Shanker, 1994). Most importantly, CLLS constraints can describe the binding relation of a λ-structure in an underspecified manner (in contrast to λ-structures like (5), which are always fully specified). The idea is that λ-binding behaves like a kind of rubber band that can be arbitrarily enlarged but never broken. E.g., (6) is an underspecified CLLS-description of the λ-structure (5):

(6) X0 ◁* X1 ∧ X1:lam(X2) ∧ X2 ◁* X3 ∧ X3:like(X4,X5) ∧ X4:var ∧ X5:var ∧ λ(X4)=X1 ∧ λ(X5)=X1

The constraint (6) does not determine a unique λ-structure since it leaves, e.g., the space between the nodes X2 and X3 underspecified. Thus, (6) may eventually be extended, say, to a constraint that fully specifies the λ-structure for the λ-term in (7):

(7) λy.λz.and(person(y), like(y,z))

A λz intervenes between λy and an occurrence of y when (6) is extended to a representation of (7), without the danger of undoing their binding.
CLLS is sufficiently expressive for an integrated treatment of semantic underspecification, parallelism, and anaphora. To this purpose it provides parallelism constraints (Niehren and Koller, 1998) of the form X/X' ~ Y/Y', reminiscent of equality up-to constraints (Niehren et al., 1997a), and anaphoric binding constraints of the form ante(X)=X'.
As proved in (Niehren and Koller, 1998), CLLS extends the expressiveness of context unification (Niehren et al., 1997a). It also extends its linguistic coverage (Niehren et al., 1997b) by integrating an analysis of VP ellipses with anaphora as in (Kehler, 1995). Thus, the coverage of CLLS is comparable to Crouch (1995) and Shieber et al. (1996). We illustrate CLLS at a benchmark case for the interaction of scope, anaphora, and ellipsis (8):

(8) Mary read a book she liked before Sue did.

The paper is organized as follows. First, we introduce CLLS in detail and define its syntax and semantics. We illustrate CLLS in sec.
3 by applying it to the example (8), and compare it to related work in the last section.

2 A Constraint Language for λ-Structures (CLLS)

CLLS is an ordinary first-order language interpreted over λ-structures. λ-structures are particular predicate logic tree structures which we will introduce. We first exemplify the expressiveness of CLLS.

2.1 Elements of CLLS

A λ-structure is a tree structure extended by two additional relations (the binding and the linking relation). We represent λ-structures as graphs. Every λ-structure characterizes a unique λ-term or a logical formula up to consistent renaming of bound variables (α-equality). E.g., the λ-structure (10) characterizes the higher-order logic (HOL) formula (9):

(9) (many(language))(λx.speak(x)(john))

(10) [a tree whose root is labelled @, applying the subtree for many(language) to a lam node whose scope contains speak, a bound var node and john]

Two things are important here: the label '@' represents explicitly the operation of function application, and the binding of the variable x by the λ-operator λx is represented by an explicit binding relation λ between two nodes, labelled var and lam. As the binding relation is explicit, the variable and the binder need not be given a name or index such as x.
We can fully describe the above λ-structure by means of the constraints for immediate dominance and labeling X:f(X1,...,Xn) (e.g. X1:@(X2,X3) and X3:lam(X4), etc.) and binding constraints λ(X)=Y. It is convenient to display such constraints graphically, in the style of (6). The difference between graphs as constraints and graphs as λ-structures is important, since underspecified structures are always seen as descriptions of the λ-structures that satisfy them.

Dominance. As a means to underspecify λ-structures, CLLS employs constraints for dominance X ◁* Y. Dominance is defined as the transitive and reflexive closure of immediate dominance. We represent dominance constraints graphically as dotted lines. E.g., in (11) we have the typical case of undetermined scope. It is analysed by constraint (12), where two nodes X1 and X2 lie between an upper bound X0 and a lower bound X3. The graph can be linearized by adding either a constraint X1 ◁* X2 or X2 ◁* X1, resulting in the two possible scoping readings for the sentence (11).

(11) Every linguist speaks two Asian languages.

(12) [a constraint graph with X0 dominating the two quantifier nodes X1 and X2, both of which dominate the speak node X3]

Parallelism. (11) may be continued by an elliptical sentence, as in (13):

(13) Two European ones too.

We analyse elliptical constructions by means of a parallelism constraint of the form

(14) Xs/Xp ~ Yt/Yp

which has the intuitive meaning that the semantics Xs of the source clause (12) is parallel to the semantics Yt of the elliptical target clause, up to the exceptions Xp and Yp, which are the semantic representations of the so-called parallel elements in source and target clause. In this case the parallel elements are the two subject NPs.
(11) and (13) together give us a 'Hirschbühler sentence' (Hirschbühler, 1982), and our treatment in this case is descriptively equivalent to that of (Niehren et al., 1997b). Our parallelism constraints and their equality up-to constraints have been shown to be (non-trivially) intertranslatable (Niehren and Koller, 1998) if binding and linking relations in λ-structures are ignored.
For the interaction of binding with parallelism we follow the basic idea that binding relations should be isomorphic between two similar substructures. The cases where anaphora interact with ellipsis are discussed below.

Anaphoric links.
We represent anaphoric dependencies in λ-structures by another explicit relation between nodes, the linking relation. An anaphor (i.e. a node labelled ana) may be linked to an antecedent node, which may be labelled by a name or var, or even be another anaphor. Thus, links can form chains as in (15), where a constraint such as ante(X3)=X2 is represented by a dashed line from X3 to X2. The constraint (15) analyzes (16), where the second pronoun is regarded as linked to the first, rather than linked to the proper name:

(15) [a constraint graph for (16) with two anaphor nodes X2 and X3; X2 is linked to the node X1 for john, and X3 is linked to X2]

(16) John_i said he_i liked his_i mother

In a semantic interpretation of λ-structures, analogously to a semantics for lambda terms,¹ linked nodes get identical denotations. Intuitively, this means they are interpreted as if names, or variables with their binding relations, would be copied down the link chain. It is crucial though not to use such copied structures right away: the link relation gives precise control over strict and sloppy interpretations when anaphors interact with parallelism.
(¹We abstain from giving such a semantics here, as we would have to introduce types, which are of no concern here, to keep the semantics simple.)
E.g., (16) is the source clause of the many-pronouns puzzle, a problematic case of interaction of ellipsis and anaphora. (Xu, 1998), where our treatment of ellipsis and anaphora was developed, argues that link chains yield the best explanation for the distribution of strict/sloppy readings involving many pronouns.
The basic idea is that an elided pronoun can either be linked to its parallel pronoun in the source clause (referential parallelism) or be linked in a structurally parallel way (structural parallelism). This analysis agrees with the proposal made in (Kehler, 1993; Kehler, 1995). It covers a series of problematic cases in the literature such as the many-pronouns puzzle, cascaded ellipsis, or the five-reading sentence (17):

(17) John revised his paper before the teacher did, and so did Bill

The precise interaction of parallelism with binding and linking relations is spelled out in sec. 2.2.

2.2 Syntax and Semantics of CLLS

We start with a set of labels Σ = {@², lam¹, var⁰, ana⁰, before², mary⁰, read⁰, ...}, ranged over by fⁱ, with arity i, which may be omitted. The syntax of CLLS is given by:

φ ::= X:f(X1,...,Xn)  (fⁿ ∈ Σ)
   |  X ◁* Y
   |  λ(X)=Y
   |  ante(X)=Y
   |  X/X' ~ Y/Y'
   |  φ ∧ φ'

The semantics of CLLS is given in terms of first-order structures L, obtained from underlying tree structures by adding relations for each CLLS relation symbol in {◁*, λ(.)=., ante(.)=., ./.~./., :@, :lam, :var, ...}.
A (finite) tree structure underlying L is given by a set of nodes ν, ν', ..., connected by paths π, π', ... (possibly empty words over positive integers), and a labelling function l from nodes to labels. The number of daughters of a node matches the arity of its label. The relationship ν:f_L(ν1,...,νn) holds iff l(ν) = f and ν.i = νi for i = 1..n, where ν.π stands for the node that is reached from ν by following the path π (if defined). To express that a path π is defined on a node ν in L we write ν.π↓L. We write π ≤ π' for π being an initial segment of π'. The dominance relation ν ◁* ν' holds if there is a π with ν.π = ν'. If π is non-empty we have proper dominance ν ◁+ ν'.
A λ-structure L is a tree structure with two (partially functional) binary relations: λ_L(.)=., for binding, and ante_L(.)=., for anaphor-to-antecedent linking.
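Before stating the well-formedness conditions, it may help to see λ-structures as concrete data. The Python sketch below is our own illustrative encoding, not an implementation accompanying the paper: the tree, labelling, binding and linking components are represented directly.

    from dataclasses import dataclass, field

    @dataclass
    class LambdaStructure:
        labels: dict                    # node -> label, e.g. "lam", "var", "@"
        children: dict                  # node -> list of daughter nodes
        binder: dict = field(default_factory=dict)      # var node -> lam node
        antecedent: dict = field(default_factory=dict)  # ana node -> node

        def dominates(self, u, v):
            """Reflexive-transitive closure of the daughter relation."""
            if u == v:
                return True
            return any(self.dominates(w, v) for w in self.children.get(u, []))

    # The lambda-structure (5) for lam(like(var, var)),
    # with both variables bound at the root:
    ls = LambdaStructure(
        labels={0: "lam", 1: "like", 2: "var", 3: "var"},
        children={0: [1], 1: [2, 3]},
        binder={2: 0, 3: 0},
    )
    assert ls.dominates(0, 3)   # binders dominate their variables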
We assume that the following conditions hold: (1) binding only holds from variables (nodes labelled var) to λ-binders (nodes labelled lam); (2) every variable has exactly one binder; (3) variables are dominated by their binders; (4) only anaphors (nodes labelled ana) are linked to antecedents; (5) every anaphor has exactly one antecedent; (6) antecedents are terminal nodes; (7) there are no cyclic link chains; (8) if a link chain ends at a variable then each anaphor in the chain must be dominated by the binder of that variable.
The not so straightforward part of the semantics of CLLS is the notion of parallelism, which we define for any given λ-structure L as follows: ν1/ν1' ~ ν2/ν2' holds iff there is a path π0 such that:

1. π0 is the "exception path" from the top node of the parallel structures to the two exception positions: ν1' = ν1.π0 ∧ ν2' = ν2.π0.

2. the two contexts, which are the trees below ν1 and ν2 up to the trees below the exception positions ν1' and ν2', must have the same structure and labels:
   ∀π ¬π0 ≤ π ⇒ ((ν1.π↓L ⟺ ν2.π↓L) ∧ (ν1.π↓L ⇒ l(ν1.π) = l(ν2.π)))

3. there are no 'hanging' binders from the contexts to variables outside them:
   ∀ν∀ν' ¬(ν1 ◁* ν ◁+ ν1' ◁* ν' ∧ λ_L(ν')=ν)

4. binding is structurally isomorphic within the two contexts:
   ∀π∀π' (¬π0 ≤ π ∧ ν1.π↓L ∧ ¬π0 ≤ π' ∧ ν1.π'↓L) ⇒ (λ_L(ν1.π)=ν1.π' ⟺ λ_L(ν2.π)=ν2.π')

5. two variables in identical positions within their context and bound outside their context must be bound by the same binder:
   ∀ν∀π (¬π0 ≤ π ∧ ¬ν1 ◁* ν) ⇒ (λ_L(ν1.π)=ν ⟺ λ_L(ν2.π)=ν)

6. two anaphors in identical positions within their context must have isomorphic links within their context, or the target sentence anaphor is linked to the source sentence anaphor:
   ∀ν∀π ¬π0 ≤ π ∧ ν1.π↓L ∧ ante_L(ν1.π)=ν ⇒ (∃π'(ν=ν1.π' ∧ ¬π0 ≤ π' ∧ ante_L(ν2.π)=ν2.π') ∨ ante_L(ν2.π)=ν1.π)

3 Interaction of Quantifiers, Anaphora, and Ellipsis

In this section we will illustrate our analysis of a complex case of the interaction of scope, anaphora, and ellipsis. In the case of (8), both anaphora and quantification interact with ellipsis:

(8) Mary read a book she liked before Sue did.

(8) has three readings (see (Crouch, 1995) for a discussion of a similar example). In the first, the indefinite NP a book she liked takes wide scope over both clauses (a particular book liked by Mary is read by both Mary and Sue). In the two others, the operator before outscopes the indefinite NP. The two options result from the two possibilities of reconstructing the pronoun she in the ellipsis interpretation, viz. 'strict' (both read some book that Mary liked) and 'sloppy' (each read some book she liked herself).
The constraint for (8), displayed in (18), is an underspecified representation of the above three readings. It can be derived in a compositional fashion along the lines described in (Niehren et al., 1997b). Xs and Xt represent the semantics of the source and the target clause, while X16 and X21 stand for the semantics of the parallel elements (Mary and Sue) respectively. For readability, we represent the semantics of the complex NP a book she liked by a triangle dominated by X2, which only makes the anaphoric content X12 of the pronoun she within the NP explicit. The anaphoric relationship between the pronoun she and Mary is represented by the linking relation between X12 and X16. (X20 represents the semantics of the elided part of the target clause.)

(18) [the constraint graph for (8), including the parallelism constraint Xs/X16 ~ Xt/X21]

The first reading, with the NP taking wide scope, results when the relative scope between X1 and X15 is resolved such that X1 dominates X15. The corresponding solution of the constraint is visualized in (19).

(19) [solution graph: the indefinite NP outscopes before, and the var nodes of both clauses are bound by the NP's lam node]

The parallelism constraint Xs/X16 ~ Xt/X21 is satisfied in the solution because the node Xt dominates a tree that is a copy of the tree dominated by Xs. In particular, it contains a node labelled by var, which has to be parallel to X17, and therefore must be λ-linked to X3 too.
The other possible scoping is for X15 to dominate X1. The two solutions this gives rise to are drawn in (20) and (21). Here X1 and the interpretation of the indefinite NP directly below enter into the parallelism as a whole, as these nodes lie below the source node Xs. Thus, there are two anaphoric nodes: X12 in the source and its 'copy' in the target semantics. For the copy to be parallel to X12 it can either have a link to X12, so as to have the same referential value (strict reading, see (20)), or a link to X21 that is structurally parallel to the link from X12 to X16, and hence leads to the node of the parallel element Sue (sloppy reading, see (21)).

(20) [solution graph for the strict reading: the copied anaphor is linked to X12]
(21) [solution graph for the sloppy reading: the copied anaphor is linked to X21, i.e. Sue]

4 Related Work

CLLS allows a uniform and yet internally structured approach to semantic ambiguity. We use a single constraint formalism in which to describe different kinds of information about the meaning of an utterance. This avoids the problems of order dependence of processing that, for example, Shieber et al. (1996) get by interleaving two formalisms (for scope and for ellipsis resolution). Our approach follows Crouch (1995) in this respect, who also includes parallelism constraints in the form of substitution expressions directly in an underspecified semantic formalism (in his case the formalism of Quasi Logical Forms, QLF). We believe that the two approaches are roughly equivalent empirically. But in contrast to CLLS, QLF is not formalised as a general constraint language over tree-like representations of meaning. QLF has the advantage of giving a more direct handle on meanings themselves, at the price of its relatively complicated model-theoretic semantics. It seems harder though to come up with solutions within QLF that have an easy portability across different semantic frameworks.
We believe that the ideas from CLLS tie in quite easily with various other semantic formalisms, such as UDRT (Reyle, 1993) and MRS (Copestake et al., 1997), which use dominance relations similar to ours, and also with theories of Logical Form associated with GB-style grammars, such as (May, 1977). In all these frameworks one tends to use variable coordination (or coindexing) rather than the explicit binding and linking relations we have presented here. We hope that these approaches can potentially benefit from the presented idea of rubber bands for binding and linking, without having to make any dramatic changes.
Our definition of parallelism implements some ideas from Hobbs and Kehler (1997) on the behavior of anaphoric links. In contrast to their proposal, our definition of parallelism is not based on an abstract notion of similarity. Furthermore, CLLS is not integrated into a general theory of abduction.
We pursue a more modest aim at this stage, as CLLS needs to be connected to "material" deduction calculi for reasoning with such underspecified semantic representations in order to make progress on this front. We hope that some of the more ad hoc features of our definition of parallelism (e.g. axiom 5) may receive a justification or improvement in the light of such a deeper understanding.

Context Unification. CLLS extends the expressiveness of context unification (CU) (Niehren et al., 1997a), but it leads to a more direct and more structured encoding of semantic constraints than CU could offer. There are three main differences between CU and CLLS. 1) In CLLS variables are interpreted over nodes rather than whole trees. This gives us a direct handle on occurrences of semantic material, where CU could handle occurrences only indirectly and less efficiently. 2) CLLS avoids the capturing problem. 3) CLLS provides explicit anaphoric links, which could not be adequately modeled in CU. The insights of the CU-analysis in (Niehren et al., 1997b) carry over to CLLS, but the awkward second-order equations for expressing dominance in CU can be omitted (Niehren and Koller, 1998). This omission yields an enormous simplification and efficiency gain for processing.

Tractability. The distinguishing feature of our approach is that we aim to develop efficiently treatable constraint languages rather than to apply maximally general but intractable formalisms. We are confident that CLLS can be implemented in a simple and efficient manner. First experiments which are based on high-level concurrent constraint programming have shown promising results.

5 Conclusion

In this paper, we presented CLLS, a first-order language for semantic underspecification. It represents ambiguities in simple underspecified structures that are transparent and suitable for processing. The application of CLLS to some difficult cases of ambiguity has shown that it is well suited for the task of representing ambiguous expressions in terms of underspecification.

Acknowledgements

This work was supported by the SFB 378 (project CHORUS) at the Universität des Saarlandes. The authors wish to thank Manfred Pinkal, Gert Smolka, the commentators and participants at the Bad Teinach workshop on underspecification, and our anonymous reviewers.

References

Hiyan Alshawi. 1990. Resolving quasi logical form. Computational Linguistics, 16:133-144.
R. Backofen, J. Rogers, and K. Vijay-Shanker. 1995. A first-order axiomatization of the theory of finite trees. J. Logic, Language, and Information, 4:5-39.
Johan Bos. 1996. Predicate logic unplugged. In Proceedings 10th Amsterdam Colloquium, pages 133-143.
Ann Copestake, Dan Flickinger, and Ivan Sag. 1997. Minimal Recursion Semantics. An Introduction. Manuscript, available at ftp://csli-ftp.stanford.edu/linguistics/sag/mrs.ps.gz.
Richard Crouch. 1995. Ellipsis and quantification: A substitutional approach. In Proceedings EACL'95, pages 229-236, Dublin.
Paul Hirschbühler. 1982. VP deletion and across the board quantifier scope. In J. Pustejovsky and P. Sells, editors, NELS 12, Univ. of Massachusetts.
Jerry R. Hobbs and Andrew Kehler. 1997. A theory of parallelism and the case of VP-ellipsis. In Proceedings ACL'97, pages 394-401, Madrid.
J.R. Hobbs and S. Shieber. 1987. An algorithm for generating quantifier scoping. Computational Linguistics, 13:47-63.
Andrew Kehler. 1993. A discourse copying algorithm for ellipsis and anaphora resolution.
In Proceedings of EACL.
Andrew Kehler. 1995. Interpreting Cohesive Forms in the Context of Discourse Inference. Ph.D. thesis, Harvard University.
M. Marcus, D. Hindle, and M. Fleck. 1983. D-theory: Talking about talking about trees. In Proceedings of the 21st ACL, pages 129-136.
Robert May. 1977. The Grammar of Quantification. Doctoral dissertation, MIT, Cambridge, Mass.
Joachim Niehren and Alexander Koller. 1998. Dominance Constraints in Context Unification, January. http://www.ps.uni-sb.de/Papers/abstracts/Dominance.html.
J. Niehren, M. Pinkal, and P. Ruhrberg. 1997a. On equality up-to constraints over finite trees, context unification, and one-step rewriting. In Proceedings 14th CADE. Springer-Verlag, Townsville.
J. Niehren, M. Pinkal, and P. Ruhrberg. 1997b. A uniform approach to underspecification and parallelism. In Proceedings ACL'97, pages 410-417, Madrid.
Manfred Pinkal. 1996. Radical underspecification. In Proceedings 10th Amsterdam Colloquium, pages 587-606.
Uwe Reyle. 1993. Dealing with ambiguities by underspecification: construction, representation, and deduction. Journal of Semantics, 10:123-179.
Uwe Reyle. 1995. Co-indexing labelled DRSs to represent and reason with ambiguities. In S. Peters and K. van Deemter, editors, Semantic Ambiguity and Underspecification. CSLI Publications, Stanford.
J. Rogers and K. Vijay-Shanker. 1994. Extracting trees from their descriptions: an application to tree-adjoining grammars. Computational Intelligence, 10:401-421.
Stuart Shieber, Fernando Pereira, and Mary Dalrymple. 1996. Interaction of scope and ellipsis. Linguistics and Philosophy, 19:527-552.
Kees van Deemter and Stanley Peters. 1996. Semantic Ambiguity and Underspecification. CSLI, Stanford.
K. Vijay-Shanker. 1992. Using description of trees in tree adjoining grammar framework. Computational Linguistics, 18.
Feiyu Xu. 1998. Underspecified representation and resolution of ellipsis. Master's thesis, Universität des Saarlandes. http://www.coli.uni-sb.de/~feiyu/thesis.html.
1998
58
Spelling Correction Using Context*

Mohammad Ali Elmi and Martha Evens
Department of Computer Science, Illinois Institute of Technology
10 West 31 Street, Chicago, Illinois 60616 ([email protected])

Abstract

This paper describes a spelling correction system that functions as part of an intelligent tutor that carries on a natural language dialogue with its users. The process that searches the lexicon is adaptive, as is the system filter, to speed up the process. The basis of our approach is the interaction between the parser and the spelling corrector. Alternative correction targets are fed back to the parser, which does a series of syntactic and semantic checks, based on the dialogue context, the sentence context, and the phrase context.

1. Introduction

This paper describes how context-dependent spelling correction is performed in a natural language dialogue system under control of the parser. Our spelling correction system is a functioning part of an intelligent tutoring system called Circsim-Tutor [Elmi, 94] designed to help medical students learn the language and the techniques for causal reasoning necessary to solve problems in cardiovascular physiology. The users type in answers to questions and requests for information. In this kind of man-machine dialogue, spelling correction is essential. The input is full of errors. Most medical students have little experience with keyboards and they constantly invent novel abbreviations. After typing a few characters of a long word, users often decide to quit. Apparently, the user types a few characters and decides that (s)he has given the reader enough of a hint, so we get 'spec' for 'specification.' The approach to spelling correction is necessarily different from that used in word processing or other authoring systems, which submit candidate corrections and ask the user to make a selection. Our system must make automatic corrections and make them rapidly, since the system has only a few seconds to parse the student input, update the student model, plan the appropriate response, turn it into sentences, and display those sentences on the screen.

Our medical sublanguage contains many long phrases that are used in the correction process. Our filtering system is adaptive; it begins with a wide acceptance interval and tightens the filter as better candidates appear. Error weights are position-sensitive. The parser accepts several replacement candidates for a misspelled string from the spelling corrector and selects the best by applying syntactic and semantic rules. The selection process is dynamic and context-dependent. We believe that our approach has significant potential applications to other types of man-machine dialogues, especially speech-understanding systems. There are about 4,500 words in our lexicon.

*This work was supported by the Cognitive Science Program, Office of Naval Research under Grant No. N00014-94-1-0338, to Illinois Institute of Technology. The content does not reflect the position or policy of the government and no official endorsement should be inferred.

2. Spelling Correction Method

The first step in spelling correction is the detection of an error. There are two possibilities:
1. The misspelled word is an isolated word, e.g. 'teh' for 'the.' The Unix spell program is based on this type of detection.
2. The misspelled word is a valid word, e.g. 'of' in place of 'if.' The likelihood of errors that occur when words garble into other words increases as the lexicon gets larger [Peterson 86].
Golding and Schabes [96] present a system based on trigrams that addresses the problem of correcting spelling errors that result in a valid word. We have limited the detection of spelling errors to isolated words.

Once the word S is chosen for spelling correction, we perform a series of steps to find a replacement candidate for it. First, a set of words from the lexicon is chosen to be compared with S. Second, a configurable number of words that are close to S are considered as candidates for replacement. Finally, the context of the sentence is used for selecting the best candidate; syntactic and semantic information, as well as phrase lookup, can help narrow the number of candidates. The system allows the user to set the limit on the number of errors. When the limit is set to k, the program finds all words in the lexicon that have up to k mismatches with the misspelled word.

3. Algorithm for Comparing Two Words

This process, given the erroneous string S and the word from the lexicon W, makes the minimum number of deletions, insertions, and replacements in S to transform it to W. This number is referred to as the edit distance. The system ignores character case mismatch. The error categories are:

    Error type            Example
    reversed order        haert -> heart
    missing character     hert -> heart
    added character       hueart -> heart
    char. substitution    huart -> heart

We extended the edit distance by assigning weights to each correction which take into account the position of the character in error. The error weight of 90 is equivalent to an edit distance of one. If the error appears at the initial position, the error weight is increased by 10%. In character substitution, if the erroneous character is a neighboring key of the character on the keyboard, or if the character has a similar sound to that of the substituted character, the error weight is reduced by 10%.

3.1 Three Way Match Method. Our string comparison is based on the system developed by Lee and Evens [92]. When the character at location n of S does not match the character at location m of W, we have an error, and two other comparisons are made: comparison (1) is char(n) = char(m), comparison (2) is char(n+1) = char(m), and comparison (3) is char(n) = char(m+1). The outcomes of the three-way comparison, in the order the comparisons are made, classify the error:

    Comparison:           (1)  (2)  (3)
    no error               T
    reversed order         F    T    T
    missing character      F    F    T
    added character        F    T    F
    char. substitution     F    F    F

For example, to convert the misspelled string hoose to choose, the method declares missing character 'c' in the first position, since the character h in hoose matches the second character in choose. The three way match (3wm) is a fast and simple algorithm with a very small overhead. However, it has potential problems [Elmi, 94]. A few examples are provided to illustrate the problems, and then our extension to the algorithm is described. Let char(n) indicate the character at location n of the erroneous word, and char(m) indicate the character at location m of the word from the lexicon.

3.1.1 Added Character Error. If the character o of choose is replaced with an a, we get: chaose. The 3wm transforms chaose to choose in two steps: drops a and inserts an o.

Solution: When the 3wm detects an added character error, and char(n+1) = char(m+1) and char(n+2) != char(m+1), we change the error to character substitution type. The algorithm replaces 'a' with an 'o' in chaose to correct it to choose.

3.1.2 Missing Character Error. If o in choose is replaced with an s, we get the string: chosse. The 3wm converts chosse to choose in two steps: insert 'o' and drop the second s.
Solution: When the 3wm detects a missing character and char(n+1) = char(m+1), we check for the following conditions: char(n+1) = char(m+2), or char(n+2) = char(m+2). In either case we change the error to character substitution. The algorithm replaces 's' with 'o' in chosse to correct it to choose. Without the complementary conditions, the algorithm does not work properly for converting coose to choose: instead of inserting an h, it replaces o with an h and inserts an o before s.

3.1.3 Reverse Order Error. If a in canary is dropped, we get: cnary. The 3wm converts cnary to canary with two transformations: 1) reverse order 'na': canry, and 2) insert an 'a': canary. Similarly, if the character a is added to unary, we get the string: uanary. The 3wm converts uanary to unary with two corrections: 1) reverse order 'an': unaary, and 2) drop the second 'a': unary.

Solution: When the 3wm detects a reverse order and char(n+2) != char(m+2), we change the error to:
- Missing character error: if char(n+1) = char(m+2). Insert char(m) at location n of the misspelled word. The modified algorithm inserts 'a' in cnary to correct it to canary.
- Added character error: if char(n+2) = char(m+1). Drop char(n). The algorithm drops 'a' in uanary to correct it to unary.

3.1.4 Two Mismatching Characters. The final caveat in the three way match algorithm is that the algorithm cannot handle two or more consecutive errors. If the two characters at locations n and n+1 of S are extra characters, or the two characters at locations m and m+1 of W are missing in S, the indices lose synchronization, and we have a disaster. For example, the algorithm compares enabcyclopedic to encyclopedic and reports nine substitutions and two extra characters. Handling errors of this sort is problematic for many spelling corrector systems. For instance, both FrameMaker (Release 5) and Microsoft Word (Version 7.0a) detect enabcyclopedic as an error, but both fail to correct it to anything. Also, when we delete the two characters 'yc' in encyclopedic, Microsoft Word detects enclopedic as an error but does not give any suggestions. FrameMaker returns: inculpated, uncoupled, and encapsulated.

Solution: When comparing S with W we partition them as S = xuz and W = xvz, where x is the initial segment, z is the tail segment, and u and v are the error segments. First, the initial segment is selected. This segment can be empty if the initial characters of S and W do not match. In the unlikely case that S = W, this segment will contain the whole word. Second, the tail segment is selected, and can be empty if the last characters of S and W are different. Finally, the error segments are the remaining characters of the two words:

    S:  initial segment | error segment in S | tail segment
    W:  initial segment | error segment in W | tail segment

Using the modified algorithm, to compare the string enabcyclopedic to the word encyclopedic, the matching initial segment is en and the matching tail segment is cyclopedic. The error segment for the misspelled word is ab and it is empty for encyclopedic. Therefore, the system concludes that there are two extra characters ab in enabcyclopedic.
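The segment partitioning can be sketched as follows (a minimal illustration only, not the authors' code; the function name and return convention are ours):

    def partition(s, w):
        """Split s and w into (initial, error_s, error_w, tail) segments.

        The initial segment is the longest common prefix; the tail is the
        longest common suffix of what remains; the error segments are
        whatever is left in the middle. Case is ignored, as in Section 3.
        """
        i = 0
        while i < min(len(s), len(w)) and s[i].lower() == w[i].lower():
            i += 1
        j = 0
        while (j < min(len(s), len(w)) - i
               and s[len(s) - 1 - j].lower() == w[len(w) - 1 - j].lower()):
            j += 1
        return s[:i], s[i:len(s) - j], w[i:len(w) - j], s[len(s) - j:]

    # partition("enabcyclopedic", "encyclopedic")
    # -> ("en", "ab", "", "cyclopedic"): two extra characters "ab" in S.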
4. Selection of Words from the Lexicon

To get the best result, the sure way is to compare the erroneous word S with all words in the lexicon. As the size of the lexicon grows, this method becomes impractical, since many words in a large lexicon are irrelevant to S. We have dealt with this problem in three ways.

4.1 Adaptive Disagreement Threshold. In order to reduce the time spent on comparing S with irrelevant words from the lexicon, we put a limit on the number of mismatches depending on the size of S. The disagreement threshold is used to terminate the comparison of an irrelevant word with S, in effect acting as a filter. If the number is too high (a loose filter), we get many irrelevant words. If the number is too low (a tight filter), a lot of good candidates are discarded. For this reason, we use an adaptive method that dynamically lowers the tolerance for errors as better replacement candidates are found. The initial disagreement limit is set depending on the size of S: 100 for one-character strings, 51 x (length of S) for two or more character strings. As the two words are compared, the program keeps track of the error weight. As soon as the error weight exceeds this limit, the comparison is terminated and the word from the lexicon is rejected as a replacement word. Any word with error weight less than the disagreement limit is a candidate and is loaded in the replacement list. After the replacement list is fully loaded, the disagreement limit is lowered to the maximum value of disagreement amongst the candidates found so far.

4.2 Use of the Initial Character. Many studies show that few errors occur in the first letter of a word. We have exploited this characteristic by starting the search in the lexicon with words having the same initial letter as the misspelled word. The lexicon is divided into 52 segments (26 lower case, 26 upper case), each containing all the words beginning with a particular character. Within each segment the words are sorted in ascending order of their character length. This effectively partitions the lexicon into subsegments (314 in our lexicon) that each contain words with the same first letter and the same character length. [Figure omitted: a lexicon segment partitioned into subsegments of words of length 1, length 2, ...]

The order of the search in the lexicon is dependent on the first letter of the misspelled word, chr. The segments are dynamically linked as follows:
1. The segment with the initial character chr.
2. The segment with the initial character as reverse case of chr.
3. The segments with a neighboring character of chr as the initial character on a standard keyboard.
4. The segments with an initial character that has a sound similar to chr.
5. The segment with the initial character as the second character of the misspelled word.
6. The rest of the segments.

4.3 Use of the Word Length. When comparing the misspelled string S with length len to the word W of the lexicon with length len+j, in the best case scenario, we have at least j missing characters in S for positive values of j, and j extra characters in S for negative values of j. With the initial error weight of 51*len, the program starts with the maximum error limit of limit = len/2. We only allow comparison of words from the lexicon with character length between len-limit and len+limit. Combining the search order with respect to the initial character and the word length limit, the correction is done in multiple passes. In each alphabetical segment of the lexicon, S is compared with the words in the subsegments containing the words with length len +/- i, where 0 <= i <= limit. For each value of i there are at least i extra characters in S compared to a word of length len-i. Similarly, there are at least i missing characters in S compared to a word of length len+i. Therefore, for each i, in the subsegments containing the words with length len +/- i we find all the words with error distance of i or higher. At any point when the replacement list is loaded with words with the maximum error distance of i, the program terminates.
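A sketch of this multi-pass search with the adaptive limit (an illustration under stated assumptions, not the authors' implementation; weighted_edit_distance stands for the position-sensitive comparison of Section 3 and is assumed to return None as soon as the weight exceeds the limit):

    def find_candidates(s, lexicon_by_len, max_candidates=10):
        """Search subsegments of increasing length difference i,
        tightening the disagreement limit as candidates are found.

        lexicon_by_len: dict mapping word length to the words of that
        length (within the segments ordered as in Section 4.2).
        """
        limit = 100 if len(s) == 1 else 51 * len(s)
        candidates = []                          # (error_weight, word)
        for i in range(len(s) // 2 + 1):
            for length in {len(s) - i, len(s) + i}:
                for w in lexicon_by_len.get(length, ()):
                    weight = weighted_edit_distance(s, w, limit)
                    if weight is not None:       # passed the filter
                        candidates.append((weight, w))
            candidates.sort()
            candidates = candidates[:max_candidates]
            if len(candidates) == max_candidates:
                limit = min(limit, candidates[-1][0])  # tighten filter
                if candidates[-1][0] <= 90 * i:
                    return candidates  # no later pass can do better
        return candidates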
5. Abbreviation Handling

Abbreviations are considered only in the segments with the same initial character as the first letter of the misspelled word and its reverse character case. In addition to the regular comparison of the misspelled string S with the words with character length between len-limit and len+limit, for each word W of the lexicon with length len+m where m > limit, we compare its first len characters to S. If there is any mismatch, W is rejected. Otherwise, S is considered an abbreviation of W.

6. Word Boundary Errors

Word boundaries are defined by space characters between two words. The addition or absence of the space character is the only error that we allow in word boundary errors. Word boundary errors are considered prior to regular spelling corrections in the following steps:
1. S is split into two words with character lengths n and m, where n+m = len and 1 <= n < len. If both of these two words are valid words, the process terminates and returns the two split words. For example, 'upto' will be split into 'u pto' for n=1 and 'up to' for n=2. At this point, since both words 'up' and 'to' are valid words, the process terminates.
2. Concatenate S with the next input word S2. If the result is a valid word, return the result as the replacement for S and S2. For example, the string 'specifi' in 'specifi cation' is detected as an error and is combined with 'cation' to produce the word 'specification.' Otherwise,
3. Concatenate S with the previous input word S1. If the result is a valid word, return the result as the replacement for S and S1. For example, in the input 'specific ation' the word 'specific' is a valid word and we realize we have a misspelled word when we get to 'ation.' In this case, 'ation' is combined with the previous word 'specific' and the valid word 'specification' is returned.
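The three word-boundary steps can be sketched as follows (a simplified illustration; the function name is ours and the lexicon is assumed to support membership tests):

    def fix_word_boundary(s, prev_word, next_word, lexicon):
        """Return a replacement string for s, or None if no boundary
        error is found (regular correction then takes over)."""
        # Step 1: try all splits of s into two valid words.
        for n in range(1, len(s)):
            if s[:n] in lexicon and s[n:] in lexicon:
                return s[:n] + " " + s[n:]
        # Step 2: join s with the following word ('specifi cation').
        if next_word and s + next_word in lexicon:
            return s + next_word
        # Step 3: join s with the preceding word ('specific ation').
        if prev_word and prev_word + s in lexicon:
            return prev_word + s
        return None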
7. Using the Context

It is difficult to arrive at a perfect match for a misspelled word most of the time. Kukich [92] points out that most researchers report accuracy levels above 90% when the first three candidates are considered instead of the first guess. Obviously, the syntax of the language is useful for choosing the best candidate among a few possible matching words when there are different parts of speech among the candidates. Further help can be obtained by applying semantic rules, like the tense of the verb with respect to the rest of the sentence, or information about case arguments. This approach is built on the idea that the parser is capable of handling a word with multiple parts of speech and multiple senses within a part of speech [Elmi and Evens 93]. The steps for spelling correction and the choice of the best candidates are organized as follows:

1. Detection: The lexical analyzer detects that the next input word w is misspelled.
2. Correction: The spelling corrector creates a list of replacement words: ((w1 e1) ... (wn en)), where wi is a replacement word and ei is the associated error weight. The list is sorted in ascending order of ei. The error weights are dropped, and the replacement list (wi wj ...) is returned.
3. Reduction: The phrase recognizer checks whether any word in the replacement list can be combined with the previous/next input word(s) to form a phrase. If a phrase can be constructed, the word that is used in the phrase is considered the only replacement candidate and the rest of the words in the replacement list are ignored.
4. Part of speech assignment: If wi has n parts of speech p1, p2, ..., pn, the lexical analyzer replaces wi in the list with: (p1 wi) (p2 wi) ... (pn wi). Then it factors out the common part of speech, p, in (p wi) (p wj) as (p wi wj). The replacement list ((p1 wi wj ...) (p2 wk wm ...) ...) is passed to the parser.
5. Syntax analysis: The parser examines each sublist (p wi wj ...) of the replacement list for the part of speech p and discards the sublists that violate the syntactic rules. In each parse tree a word can have a single part of speech, so no two sublists of the replacement list are in the same parse tree.
6. Semantic analysis: If wi has n senses (s1, s2, ..., sn) with the part of speech p, and wj has m senses (t1, t2, ..., tm) with the part of speech p, the sublist (p wi wj ...) is replaced with (p s1, s2, ..., sn, t1, t2, ..., tm, ...). The semantic analyzer works with one parse tree at a time and examines all senses of the words and rejects any entry that violates the semantic rules.

8. Empirical Results from Circsim-Tutor

We used the text of eight sessions by human tutors and performed the spelling correction. The text contains 14,703 words. The program detected 684 misspelled words and corrected all of them but two word boundary errors. There were 336 word boundary errors: 263 were split words that were joined (e.g., 'nerv' and 'ous' for nervous) and 73 were joined words that were split (e.g., ofone for 'of' and 'one'). Also, 60 misspelled words were part of a phrase. Using phrases, the system corrected 'end dia volum' to: 'end diastolic volume.' The two word boundary failures resulted from the restriction of not having any error except the addition or the absence of a space character. The system attempts to correct them individually:
... quite a sop[h isticated one ...
... is a deter miniic statement ...

9. Performance with a Large Lexicon

To discover whether this approach would scale up successfully we added 102,759 words from the Collins English Dictionary to our lexicon. The new lexicon contains 875 subsegments following the technique described in section 4.2. Consider the misspelled string ater [Kukich, 92]. The program started the search in the subsegments with character lengths of 3, 4, and 5 and returned: Ayer Aten Auer after alter aster ate aver tater water. Note that character case is ignored. Overall, the program compared 3,039 words from the lexicon to 'ater', eliminating the comparison of 99,720 (102,759 - 3,039) irrelevant words. Only the segments with the initial characters 'aAqwszQWSZt' were searched. Note that characters 'qwsz' are adjacent keys to 'a.' With the early termination of irrelevant words, 1,810 of these words were rejected with the comparison of the second character. Also, 992 of the words were rejected with the comparison of the third character. This took 90 milliseconds on a PC using Allegro Common Lisp.

We looked for all words in the lexicon that have error distance of one from ater. The program used 12,780 words of lengths 3, 4, and 5 characters to find the following 16 replacement words: Ayer Aten Auer after alter aster ate aver cater eater eter later mater pater tater water. Out of these 12,780 words, 11,132 words were rejected with the comparison of the second character and 1,534 with the comparison of the third character.
Finally, let's look at an example with the error in the first position. The program corrected the misspelled string 'rogram' into: grogram program engram roam isogram ogham pogrom. It used 32,128 words from the lexicon. Out of these 32,128 words, 3,555 words were rejected with the comparison of the second character, 21,281 words were rejected with the comparison of the third character, 5,778 words were rejected at the fourth character, and 1,284 at the fifth character.

10. Summary

Our spelling correction algorithm extends the three way match algorithm and deals with word boundary problems and abbreviations. It can handle a very large lexicon and uses context by combining parsing and spelling correction. The first goal of our future research is to detect errors that occur when words garble into other words in the lexicon, as form into from. We think that our approach of combining the parser and the spelling correction system should help us here.

11. References

Elmi, M. 1994. A Natural Language Parser with Interleaved Spelling Correction, Supporting Lexical Functional Grammar and Ill-formed Input. Ph.D. Dissertation, Computer Science Dept., Illinois Institute of Technology, Chicago, IL.
Elmi, M., Evens, M. 1993. An Efficient Natural Language Parsing Method. Proc. 5th Midwest Artificial Intelligence and Cognitive Science Conference, April, Chesterton, IN, 6-10.
Golding, A., Schabes, Y. 1996. Combining Trigram-based and Feature-based Methods for Context-Sensitive Spelling Correction. Proc. 34th ACL, 24-27 June, 71-78.
Kukich, K. 1992. Techniques for Automatically Correcting Words in Text. ACM Computing Surveys, Vol. 24, No. 4, 377-439.
Lee, Y., Evens, M. 1992. Ill-Formed Natural Input Handling System for an Intelligent Tutoring System. The Second Pacific Rim Int. Conf. on AI, Seoul, Sept 15-18, 354-360.
Peterson, J. 1986. A Note on Undetected Typing Errors. Commun. ACM, Vol. 29, No. 7, 633-637.
1998
59
Automatic Acquisition of Hierarchical Transduction Models for Machine Translation

Hiyan Alshawi, Srinivas Bangalore, Shona Douglas
AT&T Labs Research
180 Park Avenue, P.O. Box 971
Florham Park, NJ 07932 USA

Abstract

We describe a method for the fully automatic learning of hierarchical finite state translation models. The input to the method is transcribed speech utterances and their corresponding human translations, and the output is a set of head transducers, i.e. statistical lexical head-outward transducers. A word-alignment function and a head-ranking function are first obtained, and then counts are generated for hypothesized state transitions of head transducers whose lexical translations and word order changes are consistent with the alignment. The method has been applied to create an English-Spanish translation model for a speech translation application, with word accuracy of over 75% as measured by a string-distance comparison to three reference translations.

1 Introduction

The fully automatic construction of translation models offers benefits in terms of development effort and potentially in robustness over methods requiring hand-coding of linguistic information. However, there are disadvantages to the automatic approaches proposed so far. The various methods described by Brown et al. (1990; 1993) do not take into account the natural structuring of strings into phrases. Example-based translation, exemplified by the work of Sumita and Iida (1995), requires very large amounts of training material. The number of states in a simple finite state model such as those used by Vilar et al. (1996) becomes extremely large when faced with languages with large word order differences. The work reported in Wu (1997), which uses an inside-outside type of training algorithm to learn statistical context-free transduction, has a similar motivation to the current work, but the models we describe here, being fully lexical, are more suitable for direct statistical modelling.

In this paper, we show that both the network topology and parameters of a head transducer translation model (Alshawi, 1996b) can be learned fully automatically from a bilingual corpus. It has already been shown (Alshawi et al., 1997) that a head transducer model with hand-coded structure can be trained to give better accuracy than a comparable transfer-based system, with smaller model size, computational requirements, and development effort. We have applied the learning method to create an English-Spanish translation model for a limited domain, with word accuracy of over 75% measured by a string distance comparison (as used in speech recognition) to three reference translations. The resulting translation model has been used as a component of an English-Spanish speech translation system.

We first present the steps of the transduction training method in Section 2. In Section 3 we describe how we obtain an alignment function from source word subsequences to target word subsequences for each transcribed utterance and its translation. The construction of states and transitions is specified in Section 4; the method for selecting phrase head words is described in Section 5. The string comparison evaluation metric we use is described in Section 6, and the results of testing the method in a limited domain of English-Spanish translation are reported in Section 7.
2 Overview

2.1 Lexical head transducers

In our training method, we follow the simple lexical head transduction model described by Alshawi (1996b), which can be regarded as a type of statistical dependency grammar transduction. This type of transduction model consists of a collection of head transducers; the purpose of a particular transducer is to translate a specific source word w into a target word v, and further to translate the pair of sequences of dependent words to the left and right of w to sequences of dependents to the left and right of v. When applied recursively, a set of such transducers effects a hierarchical transduction of the source string into the target string.

A distinguishing property of head transducers, as compared to 'standard' finite state transducers, is that they perform a transduction outwards from a 'head' word in the input string rather than by traversing the input string from left to right. A head transducer for translating source word w to target word v consists of a set of states q0(w:v), q1(w:v), q2(w:v), ... and transitions of the form:

    (qi(w:v), qj(w:v), wd, vd, alpha, beta)

where the transition is from state qi(w:v) to state qj(w:v), reading the next source dependent wd at position alpha relative to w and writing a target dependent vd at position beta relative to v. Positions left of a head (in the source or target) are indicated with negative integers, while those right of the head are indicated with positive integers. The head transducers we use also include the following probability parameters for start, transition, and stop events:

    P(start, q(w:v) | w)
    P(qj(w:v), wd, vd, alpha, beta | qi(w:v))
    P(stop | q(w:v))

In the present work, when a model is applied to translate a source sentence, the chosen derivation of the target string is the derivation that maximizes the product of the above transducer event probabilities. The transduction search algorithm we use to apply the translation model is a bottom-up dynamic programming algorithm similar to the analysis algorithm for relational head acceptors described by Alshawi (1996a).

2.2 Training method

The training method is organized into two main stages, an alignment stage followed by a transducer construction stage as shown in Figure 1.

Figure 1: Head transducer training method. [Diagram not reproduced: bitexts feed pairing extraction and a model builder yielding the alignment model; alignment search produces alignments which, together with head selection, drive transducer construction and a model builder yielding the translation model used by the transduction search.]

Figure 2: Partitioning the source and target around a head w with respect to f. [Diagram not reproduced: the source splits into a head w with left and right substrings, whose images under f are non-overlapping target subsequences.]

The single input to the training process is a bitext corpus, constructed by taking each utterance in a corpus of transcribed speech and having it manually translated. We use the term bitext in what follows to refer to a pair consisting of the transcription of a single utterance and its translation. The steps in the training procedure are as follows:

1. For each bitext, compute an alignment function f from source words to target words, using the method described in Section 3.
2. Partition the source into a head word w and substrings to the left and right of w (as shown in Figure 2). The extents of the partitions projected onto the target by f must not overlap. Any selection of the head satisfying this constraint is valid, but the selection method used influences accuracy (Section 5).
3. Continue partitioning the left and right substrings recursively around sub-heads wl and wr.
4. Trace hypothesized head-transducer transitions that would output the translations of the left and right dependents of w (i.e. wl and wr) at the appropriate positions in the target string, indicated by f. This step is described in more detail below in Section 4.
5. Apply step 4 recursively to partitions headed by wl and wr, and then their dependents, until all left and right partitions have at most one word.
6. Aggregate hypothesized transitions to form the counts of a maximum likelihood head transduction model.

The recursive partitioning of the source and target strings gives the hierarchical decomposition for head transduction. In step 2, the constraint on target partitions ensures that the transduction hypothesized in training does not contain crossing dependency structures in the target.
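The recursion in steps 2-5 can be sketched as follows (an illustrative outline only, not the authors' implementation; select_head and emit_transitions stand in for the head-ranking and transition-tracing procedures of Sections 5 and 4):

    def train_on_bitext(source, f, counts):
        """Recursively partition `source` (a list of word positions)
        around heads and record hypothesized events in `counts`.

        f maps source positions to target positions (the alignment);
        select_head must pick a head whose left/right projections
        under f do not overlap (step 2).
        """
        if len(source) <= 1:
            return
        w = select_head(source, f)             # step 2 (Section 5)
        left = source[:source.index(w)]        # dependents left of w
        right = source[source.index(w) + 1:]   # dependents right of w
        wl = select_head(left, f) if left else None
        wr = select_head(right, f) if right else None
        emit_transitions(w, wl, wr, f, counts) # step 4 (Section 4)
        train_on_bitext(left, f, counts)       # step 5: recurse
        train_on_bitext(right, f, counts)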
This step is described in more detail below in Section 4. 5. Apply step 4 recursively to partitions headed by wl and wr, and then their dependents, until all left and right partitions have at most one word. 6. Aggregate hypothesized transitions to form the counts of a maximum likelihood head trans- duction model. The recursive partioning of the source and tar- get strings gives the hierarchical decomposition for head transduction. In step 2, the constraint 42 bitexts bitexts bitexts source text Pairing Extraction event trace Model Builder alignment model Alignment Search alignments - - Head Selection ranked heads v Transducer I Construction event trace Model Builder translation model I Transduction V l Search translated text Figure 1: Head transducer training method on target partitions ensures that the transduc- tion hypothesized in training does not contain crossing dependency structures in the target. 3 Alignment The first stage in the training process is ob- taining, for each bitext, an alignment function f : W ~ V mapping word subsequences W in the source to word subsequences V in the tar- get. In this process an alignment model is con- structed which specifies a cost for each pairing (W, V) of source and target subsequences, and all alignment search is carried out to minimize the sum of the costs of a set of pairings which completely maps the bitext source to its target. 3.1 Alignment model The cost of a pairing is composed of a weighted combination of cost functions. We currently use two. The first cost function is the ¢ correlation measure (cf the use of ¢2 in Gale and Church (1991)) computed as follows: = (bc- ad) x/(a + b)(c + d)(a + c)(b + d) where a = nv -- n~,i~v b = nw, y c = N - nv - nw + nw, v d = nw - nw, v N is the total number of bitexts, nv the number of bitexts in which V appears in the target, nw the number of bitexts in which W appears in the source, and nw, y the number of bitexts in which W appears in the source and V appears in the target. We tried using the log probabilities of tar- get subsequences given source subsequences (cf Brown et al. (1990)) as a cost function instead of ¢ but ¢ resulted in better performance of our translation models. The second cost function used is a distance measure which penalizes pairings in which the source subsequence and target subsequence are in very different positions in their respective sentences. Different weightings of distance to correlation costs can be used to bias the model towards more or less parallel alignments for dif- ferent language pairs. 43 3.2 Alignment search The agenda-based alignment search makes use of dynamic programming to record the best cost seen for all partial alignments covering the same source and target subsequence; partial align- ments coming off the agenda that have a higher cost for the same coverage are discarded and take 11o further part in the search. An effort limit on the number of agenda items processed is used to ensure reasonable speed in the search re- gardless of sentence length. An iterative broad- ening strategy is used, so that at breadth i only the i lowest cost pairings for each source subse- quence are allowed in the search, with the result that most optimal alignments are found well be- fore the effort limit is reached. In the experiment reported in Section 7, source and target subsequences of lengths 0, 1 and 2 were allowed in pairings. 
4 Transducer construction Building a head transducer involves creating ap- propriate head transducer states and tracing hy- pothesized head-transducer transitions between them that are consistent with the occurrence of the pairings (W, f(W)) in each aligned bi- text. When a source sequence W in an align- ment pairing consists of more than one word, the least frequent of these words in the train- ing corpus is taken to be the primary word of the subsequence. It is convenient to extend the domain of an alignment function f to include primary words w by setting f(w) = f(W). The main transitions that are traced in our construction are those that map heads, wl and wr, of the the right and left dependent phrases of w (see Figure 2) to their translations as indi- cated in the alignment. The positions of these dependents in the target string are computed by comparing the positions of f(wt) and f(wr) to the position of l: = f(w). The actual states and transitions in the construction are specified below. Additional transitions are included for cases of compounding, i.e. those for which the source subsequence in an alignment function pairing consists of more than one word. Specifically, the source subsequence W may be a compound consisting of a primary word w together with a secondary word w'. There are no additional transitions for cases in which the target subse- quence V = f(w) of an alignment function pair- ing has more than one word. For the purposes of the head-transduction model constructed, such compound target subsequences are effectively treated as single words (containing space char- acters). That is, we are constructing a tran- ducer for (w : V). We use the notation Q(w : V) for states of the constructed head transducer. Here Q is an additional symbol e.g. "initial" for identifying a specific state of this transducer. A state such as initial(w : V) mentioned in the construction is first looked up in a table of states created so far in the training procedure; and created if necessary. A bar above a substring denotes the number of words preceding the substring in the source or target string. We give the construction for the case illus- trated in Figure 2, i.e. one left dependent wt, one right dependent wr, and a single secondary word w' to the left of w. Figure 3 shows the result as part of a finite state transition dia- gram. The other transition arrows shown in the diagram will arise from other bitext alignments containing (w : V) pairings. Other cases cov- ered by our algorithm (e.g. a single left depen- dent but no right dependent) are simple vari- ants. Wt :(. w, f(wt) v) -1:0 ~) left~,(w : V) -1:31 ) raid~ I(w V) +t:32 )) :i.az(w : v) Figure 3: States and transitions constructed for the partition shown in Figure 2 1. Mark initial(w : V) as an initial state for the transducer. 2. Include a transition consuming the secondary 44 word w t without any target output: (initial(w: V), leftw,(W : V), w', e, -1, 0), where e is the empty string. 3. Include a transition for mapping the source dependent wl to the target dependent f(wt): (le ftw,(w : V), midw~(w : V), wt, f(wl), -1,/31) where 13l = f(wt) - V. 4. Include a transition for mapping the source dependent wr to the target dependent f(w~): (midw,(w : V), final(w : V), w~, f(wr), +l,/3r) where/3,. = f(w,.) - Y. .5. Mark final(w : 1 I) as a final state for the transducer. The inclusion of transitions, and the marking of states as initial or final, are treated as event observation counts for a statistical head trans- duction model. 
More specifically, they are used as counts for maximum likelihood estimation of the transducer start, transition, and stop prob- abilities specified in Section 2. 5 Head selection We have been using the following monolingual metrics which can be applied to either the source or target language to predict the likeli- hood of a word being the head word of a string. Distance: The distance between a dependent and its head. In general, the likelihood of a head-dependent relation decreases as distance increases (Collins, 1996). Word frequency: The frequency of occurrence of a word in the training corpus. IVord 'complezity': For languages with pho- netic orthography such as English, 'complexity' of a word can be measured in terms of number of characters in that word. Optionality: This metric is intended to iden- tify optional modifiers which are less likely to be heads. For each word we find trigrams with the word of interest as the middle word and compare the distribution of these trigrams with the distribution of the bigrams formed from the outer pairs of words. If these two distributions are strongly correlated then the word is highly optional. Each of the above metrics provides a score for the likelihood of a word being a head word. A weighted sum of these scores is used to produce a ranked list of head words given a string for use in step 2 of the training algorithm in Section 2. If the metrics are applied to the target language instead of the source, the ranking of a source word is taken from the ranking of the target word it is aligned with. In Section 7, we show the effectiveness of ap- propriate head selection in terms of the trans- lation performance and size of the head trans- ducer model in the context of an English- Spanish translation system. 6 Evaluation method There is no agreed-upon measure of machine translation quality. For our current purposes we require a measure that is objective, reliable, and that can be calculated automatically. We use here the word accuracy measure of the string distance between a reference string and a result string, a measure standardly used in the automatic speech recognition (ASR) com- munity. While for ASR the reference is a human transcription of the original speech and the re- sult the output of the speech recognition process run on the original speech, we use the measure to compare two different translations of a given source, typically a human translation and a ma- chine translation. The string distance metric is computed by first finding a transformation of one string into another that minimizes the total weight of sub- stitutions, insertions and deletions. (We use the same weights for these operations as in the NIST ASR evaluation software (NIS, 1997).) If we write S for the resulting number of substi- tions, I for insertions, D for deletions, and R for number of words in the reference translation string, we can express the metric as follows: word accuracy = (1 D+S+I)_ R This measure has the merit of being com- pletely automatic and non-subjective. How- ever, taking any single translation as reference is unrealistically unfavourable, since there is a range of acceptable translations. To increase the reliability of the measure, therefore, we give each system translation the best score it receives against any of a number of independent human translations of the same source. 
Table 1: Word accuracy (percent) against the single held-out human translation

    max source length    5     10    15    20    >20
    wfw                  45.8  46.5  45.2  44.5  44.0
    sys                  79.4  78.3  77.3  75.2  74.1

7 English-Spanish experiment

The training and test data for the experiments reported here were taken from a set of transcribed utterances from the air travel information system (ATIS) corpus together with a translation of each utterance to Spanish. An utterance is typically a single sentence but is sometimes more than one sentence spoken in sequence. There were 14,418 training utterances, a total of 140,788 source words, corresponding to 167,865 target words. This training set was used as input to alignment model construction; alignment search was carried out only on sentences up to length 15, a total of 11,542 bitexts. Transduction training (including head ranking) was carried out on the 11,327 alignments obtained.

The test set used in the evaluations reported here consisted of 336 held-out English sentences. We obtained three separate human translations of this test set: tr1 was translated by the same translation bureau as the training data; tr2 was translated by a different translation bureau; cr1 was a correction of the output of the trained system by a professional translator. The models evaluated are:

sys: the automatically trained head transduction model;
wfw: a baseline word-for-word model in which each English word is translated by the Spanish word most highly correlated with it in the corpus.

Table 1 shows the word accuracy percentages (see Section 6) for the trained system sys and the word-for-word baseline wfw against tr1 (the original held-out translations) at various source sentence lengths. The trained system has word accuracy of 74.1% on sentences of all lengths; on sentences up to length 15 (the length on which the transduction model was trained) the score was 77.3%.

Table 2: Word accuracy (percent) against the closest of three human translations

    max source length    5     10    15    20    >20
    wfw                  46.2  47.5  46.6  45.8  45.3
    sys                  80.1  81.6  81.0  79.3  78.5

Table 2 shows the word accuracy percentages for the trained system sys and the word-for-word baseline wfw against any of the three reference translations tr1, cr1, and tr2. That is, for each output string the human translation closest to it is taken as the reference translation. With this more accurate measure, the system's word accuracy is 78.5% on sentences of all lengths.

Table 3: Translation performance with different head selection methods

    Head selector              Word accuracy    Number of parameters
    Baseline (random heads)    64.7%            108K
    In Source                  71.4%            67K
    In Target (sys)            74.1%            66K

Table 3 compares the performance of the translation system when head words are selected (a) at random (baseline), (b) with head selection in the source language, and (c) with head selection in the target language, i.e., selecting source heads that are aligned with the highest ranking target head words. The reference for word accuracy here is the single reference translation tr1. Note that the 'In Target' head selection method is the one used in training translation model sys. The use of head selection metrics improves on random head selection in terms of translation accuracy and number of parameters. An interesting twist, however, is that applying the metrics to target strings performs better than applying the metrics to the source words directly.

8 Concluding remarks

We have described a method for learning a head transduction model automatically from translation examples. Despite the simplicity of the current version of this method, the experiment
Despite the simplicity of the current version of this method, the experiment 46 we reported in this paper demonstrates that the method leads to reasonable performance for English-Spanish translation in a limited do- main. We plan to increase the accuracy of the model using the kind of statistical modeling techniques that have contributed to improve- ments in automatic learning of speech recogni- tion models in recent years. We have started to experiment with learning models for more challenging language pairs such as English to Japanese that exhibit more variation in word order and complex lexical transformations. References H. Alshawi, A.L. Buchbaum, and F. Xia. 1997. A Comparison of Head Trandsucers and Transfer for a Limited Domain Transla- tion Application. In 35 th Annual Meeting of the Association for Computational Linguis- tics. Madrid, Spain, August. H. Alshawi. 1996a. Head automata and bilin- gual tiling: Translation with minimal repre- sentations. In 34th Annual Meeting of the Association for Computational Linguistics, pages 167-176, Santa Cruz, California. H. Alshawi. 1996b. Head automata for speech translation. In International Conference on .Spoken Language Processing, Philadelphia, Pennsylvania. P.J. Brown, J. Cocke, S. Della Pietra, V. Della Pietra, J. Lafferty, R. Mercer, and P. Rossin. 1990. A Statistical Approach to Ma- chine Translation. Computational Linguis- tics, 16(2):79-85. P.J. Brown, S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 16(2):263-312. Michael John Collins. 1996. A new statistical parser based on bigram lexical dependencies. In 34th Meeting of the Association for Com- putational Linguistics, pages 184-191, Santa Cruz. W.A. Gale and K.W. Church. 1991. Identify- iug word correspondences in parallel texts. In Proceedings of the Fourth DARPA Speech and Natural Language Processing Workshop, pages 152-157, Pacific Grove, California. National Institute of Standards and Technology, http://www.itl.nist.gov/div894, 1997. Spo- 47 ken Natural Language Processing Group Web page. Eiichiro Sumita and Hitoshi Iida. 1995. Hetero- geneous computing for example-based trans- lation of spoken language. In 6 th Interna- tional Conference on Theoretical and Method- ological Issues in Machine Translation, pages 273-286, Leuven, Belgium. J.M. Vilar, V. M. Jim~nez, J.C. Amengual, A. Castellanos, D. Llorens, and E. Vidal. 1996. Text and speech translation by means of subsequential transducers. Natural Lan- guage Engineering, 2(4) :351-354. Dekai Wu. 1997. Stochastic inversion trans- duction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-404.
1998
6
Ambiguity Preserving Machine Translation using Packed Representations*

Martin C. Emele and Michael Dorna
IMS, Institut für Maschinelle Sprachverarbeitung
Universität Stuttgart
Azenbergstraße 12
D-70174 Stuttgart
{emele,dorna}@ims.uni-stuttgart.de

Abstract

In this paper we present an ambiguity preserving translation approach which transfers ambiguous LFG f-structure representations. It is based on packed f-structure representations which are the result of potentially ambiguous utterances. If the ambiguities between source and target language can be preserved, no unpacking during transfer is necessary and the generator may produce utterances which maximally cover the underlying ambiguities. We convert the packed f-structure descriptions into a flat set of Prolog terms which consist of predicates, their predicate argument structure and additional attribute-value information. Ambiguity is expressed via local disjunctions. The flat representations facilitate the application of a Shake-and-Bake like transfer approach extended to deal with packed ambiguities.

1 Introduction

It is a central problem for any practical NLP system and specifically for any machine translation (MT) system to deal with ambiguity of natural language utterances. This is especially true for systems with large coverage grammars, where the number of potentially ambiguous descriptions grows dramatically as the number of acceptable syntactic constructions and the number of lexical readings increases. In general, it is not possible to resolve all potentially ambiguous descriptions without incorporating world knowledge of unlimited size. This fundamental problem has been discussed in the literature as the AI completeness problem (cf. Kay et al. (1994)). Nevertheless, it has been observed that many ambiguous utterances in the source language (SL) text can be translated by equivalently ambiguous phrases in the target language (TL) text. We call such an ambiguity a preservable ambiguity and the corresponding architecture for translation an ambiguity preserving MT approach.

*We would like to thank our colleagues at Xerox PARC and Xerox RCE for fruitful discussions and the anonymous reviewers for valuable feedback. This work was funded by the German Federal Ministry of Education, Science, Research and Technology (BMBF) in the framework of the Verbmobil project under grant 01 IV 701 N3.

In order to achieve this goal of ambiguity preserving translations there exist a number of different solutions we can apply. A naive solution would enumerate all possible ambiguous descriptions, translate them and generate the corresponding target utterances, which would then be intersected to find a common string which covers all meanings. This strategy is obviously not feasible because the number of potential readings might grow exponentially with the length of the sentence.

Another solution to overcome this problem is not to resolve ambiguities at all by using underspecified representations. This strategy has been successfully applied for a number of semantic ambiguities like quantifier and operator scope ambiguities. Therefore it is not surprising that the usage of underspecified semantic representations has gained much popularity in recent years. Work in the literature includes the QLF representations (Alshawi, 1992), the work on Underspecified Discourse Representation Structures (UDRS) (Reyle, 1993; Bos et al., 1996), and the collection of papers in van Deemter and Peters (1996). For an application of using underspecified semantic representations within MT see Alshawi et al. (1991), Copestake et al. (1995) and Dorna and Emele (1996).

Another source of ambiguities which might be preservable between related languages includes syntactic ambiguities like the well-known PP attachment ambiguities.
For an application of using underspecified semantic representations within MT see Alshawi et al. (1991), Copestake et al. (1995) and Dorna and Emele (1996). Another source of ambiguities which might be preservable between related languages include syntactic ambiguities like the well-known PP at- 365 tachment ambiguities. There has been growing interest in developing underspecified or so called packed respresentations to deal with such syn- tactic ambiguities (cf. Rich et al. (1987), Seo and Simmons (1989), Bear and Hobbs (1988), Maxwell III and Kaplan (1993), Pinkal (1995), Egg and Lebeth (1995), Schiehlen (1996) and DSrre (1997)). The key idea of all these representations is to factor common information as much as pos- sible in a parse forest and to represent the at- tachment ambiguities as local disjunctions with- out conversion to disjunctive normal form. Such representations avoid the exponential explosion which would result if all possible readings are extracted from the parse forest. To achieve our overall goal of ambiguity pre- serving MT it requires not only a parser which is able to produce such packed representations but also a generator which is able to take such a packed representation as input and generate all possible paraphrases without explicitly enumer- ating all readings. The work in Kay (1996) and the extension to ambiguous input in Shemtov (1996) and Shemtov (1997) describes a chart- based generation process which takes packed representations as input and generates all para- phrases without expanding first into disjunctive normal form. What needs to be done to realize our envis- aged goal is a transfer system which is able to work on these packed translations without unpacking them or only as much as necessary if ambiguities can only partly be preserved in the target language. The rest of this paper is concerned with the extension of a Shake-and- Bake like transfer approach (Whitelock, 1992; Beaven, 1992) or the kind of semantic-based transfer approach as described for example in Dorna and Emele (1996) to cope with local am- biguities. To explain and illustrate the treatment of local ambiguities we show how an underspeci- fled representation of PP attachment ambigu- ities can be utilized in a machine translation architecture for providing ambiguity preserving translations. It is illustrated on the basis of LFG f-structure level representations (Kaplan and Bresnan, 1982). However, it could equally well be done on the level of underspecified se- mantic representations as shown in (Dorna et al., 1998). The main reason for choosing the f- structure level representation is due to the fact that we could use the Xerox Linguistic Envi- ronment (XLE) system (Maxwell III and Ka- plan, 1996) for the analysis and generation of English and German utterances. The key ar- gument for using this linguistic workbench is the ability to produce packed representations for ambiguous utterances using techniques de- scribed in Maxwell III and Kaplan (1993) and the availability of a generator which generates utterances from f-structure descriptions. The rest of the paper is structured as follows: first, we show how the hierarchical f-structure representations can be converted into a flat set of Prolog predicates such that the Shake-and- Bake like transfer approach can be applied. Sec- ond, we show how PP attachment ambiguities are represented using a packed representation. 
Then we show how this particular transfer approach can be adapted to deal with this kind of ambiguous representation.

2 Example

To illustrate the approach we take a simple example which contains a PP attachment ambiguity which can be preserved between German and English, and probably between many other related languages as well.

(1) wir treffen die Kollegen in Berlin
    we meet the colleagues in Berlin

For example, the sentence in (1) can either mean (a) that we will have a meeting in Berlin where we will meet our colleagues, or (b) that we will meet our colleagues who live in Berlin. Without previous knowledge about the discourse and the specific people involved, it will not be possible to resolve these two meanings. Nevertheless, both the German and the English sentence express exactly the same ambiguity.

There might exist other paraphrases using exactly the same semantic predicates, e.g. the utterances in (2), but they will not be chosen by the generator because they do not cover both readings at the same time. Instead, sentence (2a) would be chosen to express the attachment of the prepositional phrase to the verb phrase, whereas sentence (2b) would be chosen to express the attachment to the noun phrase 'the colleagues'.

(2) a. In Berlin treffen wir die Kollegen
       In Berlin meet we the colleagues
       (In Berlin we will meet the colleagues.)
    b. wir treffen die Kollegen aus Berlin
       we meet the colleagues from Berlin
       (We will meet the colleagues from Berlin.)

In addition, those two maximally discriminating sentences could also be used as an interface for an interactive translation system, e.g. the negotiator approach (Kay, 1997), where the human translator would be asked to distinguish between the two possible readings.

The f-structures in (3) and (4) correspond to the disambiguated attachments as paraphrased in (2a) and (2b) respectively.

(3) [1][ PRED 'treffen<[2],[3]>'
         SUBJ [2][ PRED 'pro'
                   NUM  pl ]
         OBJ  [3][ PRED 'Kollege'
                   NUM  pl
                   SPEC def ]
         ADJN { [4][ PRED 'in<[5]>'
                     OBJ  [5][ PRED 'Berlin' ] ] } ]

(4) [1][ PRED 'treffen<[2],[3]>'
         SUBJ [2][ PRED 'pro'
                   NUM  pl ]
         OBJ  [3][ PRED 'Kollege'
                   NUM  pl
                   SPEC def
                   ADJN { [4][ PRED 'in<[5]>'
                               OBJ  [5][ PRED 'Berlin' ] ] } ] ]

3 From F-structures to Term Sets

F-structures encode information in a hierarchical manner by recursively embedding substructures. They provide by nature only outside-in references, whereas in transfer inside-out access is frequently necessary. Hence, information access for transformation processes like transfer is not as straightforward as it could be when using flat set representations (Beaven, 1992; Whitelock, 1992). Set representations can be seen as a pool of constraints where co-references between the constraints, i.e. the set elements, are used to encode the same embedding that f-structures provide. Therefore, the structural embedding which is, on the one hand, part of f-structures themselves is represented, on the other hand, in the interpretation of constraint sets. Furthermore, sets come with very simple test and manipulation operations such as tests for membership and set union.

In the following we define a correspondence between f-structures and sets of terms. We restrict the f-structures to transfer-relevant information such as PREDs, grammatical functions, etc. Feature structure constraints are encoded as relational constraints using Prolog syntax (cf. Johnson (1991)). As examples of such sets of terms see (5) and (6), which correspond to f-structures (3) and (4), respectively.
(5) treffen(1),
    subj(1,2), pro(2), num(2,pl),
    obj(1,3), kollege(3), num(3,pl),
    spec(3,def),
    adjn(1,4), in(4),
    obj(4,5), Berlin(5)

(6) treffen(1),
    subj(1,2), pro(2), num(2,pl),
    obj(1,3), kollege(3), num(3,pl),
    spec(3,def),
    adjn(3,4), in(4),
    obj(4,5), Berlin(5)

The 2-place relation trans given below translates between f-structures and (sets of) terms. Boxed indices [j] are references to (sub-)f-structures, which are mapped onto the nodes i used in terms. F stands for features, Π(...) for predicates, v for atomic values, and φ for complex f-structures. Co-occurring parts of f-structures are translated only once.

1. (atomic values)
   trans< [i][F v], F(i,v) >

2. (predicate values)
   trans< [i][PRED Π(...)], Π(i) >

3. (complex f-structure values)
   trans< [i][F [j] φ], F(i,j) ∪ T >
   with trans< [j] φ, T >

4. (set values)
   trans< [i][ADJN { [i1] φ1, ..., [in] φn }], adjn(i,i1), ..., adjn(i,in) ∪ T1 ∪ ... ∪ Tn >
   with trans< [ij] φj, Tj >, 1 ≤ j ≤ n

trans is bidirectional, i.e. we are able to translate between f-structures and terms, use terms as transfer input, process terms in the transfer, and convert the transfer output back to f-structures, which are the appropriate generator representations.

4 F-structure Transfer

Transfer works on source language (SL) and target language (TL) sets of terms representing predicates, roles, etc. like the ones shown in (5) and (6). The mapping is encoded in transfer rules as in (7). For a rule to be applied, the set on the SL side must be a matching subset of the SL input set. If this is the case, we remove the covering set from the input and add the set on the other side of the rule to the TL output. Transfer is complete if the SL set is empty.

(7) a. treffen(E) <-> meet(E).
    b. kollege(X) <-> colleague(X).
    c. Berlin(X) <-> Berlin(X).
    d. in(X) <-> in(X).
    e. pro(X) <-> pro(X).
    f. subj(X,Y) <-> subj(X,Y).
    g. obj(X,Y) <-> obj(X,Y).
    h. adjn(X,Y) <-> adjn(X,Y).

The transfer operator <-> is bidirectional. Upper case letters in argument positions are logical variables which will be bound to nodes at runtime. Because of the variable sharings on both sides of a rule we work on the same nodes of a graph. Hence, the overall mechanism can be formalized as a graph rewriting process.

(8) a. meet(1),
       subj(1,2), pro(2), num(2,pl),
       obj(1,3), colleague(3), num(3,pl),
       spec(3,def),
       adjn(1,4), in(4),
       obj(4,5), Berlin(5)

    b. [1][ PRED 'meet<[2],[3]>'
            SUBJ [2][ PRED 'pro'
                      NUM  pl ]
            OBJ  [3][ PRED 'colleague'
                      NUM  pl
                      SPEC def ]
            ADJN { [4][ PRED 'in<[5]>'
                        OBJ  [5][ PRED 'Berlin' ] ] } ]

Applying the rule set in (7) to (5), we yield the result in (8a). Using the correspondence between f-structures and term representations it is possible to translate back to the TL f-structure in (8b). This f-structure will be passed on to the generator, which will produce the utterance in (2a) as one of the possible paraphrases.

The transfer rules in (7c-h), which are defined as the identity transformation between SL and TL, are actually redundant. They can be replaced via a general metarule which passes on all singleton sets which are not covered by any explicit transfer rule. The same metarule also transfers morpho-syntactic information like number and definiteness.

5 Packed Representations

The following example in (9) provides a packed f-structure representation for the German sentence in (1). The ambiguous PP attachment of the 'in' PP is represented via a local disjunction¹ (X=1 ∨ X=3) which binds the external variable X of the adjunct relation to either node 1 or node 3, representing the VP or NP attachment, respectively.
(9) a. treffen(1),
       subj(1,2), pro(2), num(2,pl),
       obj(1,3), kollege(3), num(3,pl),
       spec(3,def),
       adjn(X,4), in(4),
       obj(4,5), Berlin(5),
       (X=1 ∨ X=3)

    b. [1][ PRED 'treffen<[2],[3]>'
            SUBJ [2][ PRED 'pro'
                      NUM  pl ]
            OBJ  [3][ PRED 'Kollege'
                      NUM  pl
                      SPEC def ] ]
       X: [ ADJN { [4][ PRED 'in<[5]>'
                        OBJ  [5][ PRED 'Berlin' ] ] } ]
       (X=[1] ∨ X=[3])

Applying the very same transfer rules in (7) to the input in (9) produces the result in (10), which fully preserves the ambiguity between source and target language.

(10) meet(1),
     subj(1,2), pro(2), num(2,pl),
     obj(1,3), colleague(3), num(3,pl),
     spec(3,def),
     adjn(X,4), in(4),
     obj(4,5), Berlin(5),
     (X=1 ∨ X=3)

If the generator takes the corresponding f-structure for this packed description as input, it will generate (1), repeated in (11), and not any of the paraphrases in (2), because they would not cover both ambiguities at the same time.

(11) We will meet the colleagues in Berlin.

The local disjunction is not affected by the application of the transfer rule for mapping the adjunct relation to the target language, because there is no interaction between the variable X and any other predicate.

6 Local Disambiguation

If it is not possible to fully preserve the attachment ambiguities between source and target language, we need to partially disambiguate the relevant ambiguity. For example, this would be the case if we translated (1) to Japanese. Depending on whether we attach to the NP 'the colleagues' or to the VP, we have to choose between two different postpositions, 'de' (location) vs. 'no' (adnominal modification). The two sentences in (12) show the Japanese translations together with their English glosses.

(12) a. watashi tachi -ga berurin -de dooryoo -to aimasu
        we            NOM Berlin  LOC colleagues COM will meet
        (In Berlin we will meet the colleagues.)
     b. watashi tachi -ga berurin -no dooryoo -to aimasu
        we            NOM Berlin  MOD colleagues COM will meet
        (We will meet the colleagues from Berlin.)

The choice of the postposition could be triggered via selectional restrictions in the condition part of the transfer rules. The rules in (13) show two components on their lefthand sides: the part to the right of # is a test on a copy of the original input. The test matches an adjunct relation where the variable Y is bound to the internal argument. Y is coindexed with the node of the SL preposition 'in'. The variable X is bound to the external argument node where the adjunct is attached. The second element of the test checks the selectional restriction² of this attachment.

(13) a. in(Y) # adjn(X,Y),treffen(X) -> de(Y).
     b. in(Y) # adjn(X,Y),kollege(X) -> no(Y).

The Japanese distinction is parallel to the case where the German preposition 'in' would be translated either with the English preposition 'in' or the preposition 'from', depending on which of the two meanings is taken. Hence, for ease of exposition, we will apply the two equivalent transfer rules in (14) for the translation of the 'in' instead of the equivalent Japanese ones.

(14) a. in(Y) # adjn(X,Y),treffen(X) -> in(Y).
     b. in(Y) # adjn(X,Y),kollege(X) -> from(Y).

¹ The notation of using a local disjunction is for illustration purposes only. The actual implementation uses contexted constraints as developed and implemented in the XLE system (cf. Maxwell III and Kaplan (1991)).
² Instead of using explicit predicates for testing selectional restrictions, the real system uses a sort system. The test on explicit predicates is replaced with a more general sortal subsumption test, e.g. sort(X) < event vs. sort(X) < object.
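Before turning to how rules like (14a) interact with the local disjunction, the matching-and-test regime itself can be made concrete. The following Python fragment is a minimal sketch of our own, not the authors' Prolog implementation: it handles only ground (fully disambiguated) term sets, without the contexted-constraint bookkeeping needed for packed input, and encodes terms as (functor, args) tuples with uppercase strings acting as logical variables.

    def match(patterns, facts, binding):
        # Enumerate bindings extending `binding` that map every pattern to a fact.
        if not patterns:
            yield binding
            return
        functor, args = patterns[0]
        for ffunctor, fargs in facts:
            if ffunctor != functor or len(fargs) != len(args):
                continue
            new, ok = dict(binding), True
            for a, v in zip(args, fargs):
                if isinstance(a, str) and a[:1].isupper():   # logical variable
                    if new.setdefault(a, v) != v:
                        ok = False
                        break
                elif a != v:                                  # constant mismatch
                    ok = False
                    break
            if ok:
                yield from match(patterns[1:], facts, new)

    def instantiate(pattern, binding):
        functor, args = pattern
        return (functor, tuple(binding.get(a, a) for a in args))

    def transfer(sl_terms, rules):
        # Each rule is (lhs, test, rhs); the test (the part right of '#') is
        # evaluated on a copy of the original input, matched lhs terms are
        # consumed, and rhs terms are emitted into the TL set.
        remaining, original, tl = set(sl_terms), frozenset(sl_terms), set()
        for lhs, test, rhs in rules:
            applied = True
            while applied:
                applied = False
                for binding in match(lhs, list(remaining), {}):
                    if next(match(test, original, dict(binding)), None) is not None:
                        remaining -= {instantiate(p, binding) for p in lhs}
                        tl |= {instantiate(p, binding) for p in rhs}
                        applied = True
                        break
        return tl | remaining    # metarule: uncovered singleton terms pass through

    # Fragment of (5) and rules (7a), (7b) and (14a):
    sl = {('treffen', (1,)), ('subj', (1, 2)), ('pro', (2,)),
          ('obj', (1, 3)), ('kollege', (3,)),
          ('adjn', (1, 4)), ('in', (4,)), ('obj', (4, 5)), ('Berlin', (5,))}
    rules = [
        ([('treffen', ('E',))], [], [('meet', ('E',))]),
        ([('kollege', ('X',))], [], [('colleague', ('X',))]),
        ([('in', ('Y',))], [('adjn', ('X', 'Y')), ('treffen', ('X',))],
         [('in', ('Y',))]),
    ]
    print(sorted(transfer(sl, rules)))

Note that the test is matched against the frozen original input, so earlier rule applications (here, consuming treffen(1)) do not invalidate later selectional tests, exactly as required by the rules in (13) and (14).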
Since the external argument of the adjunct relation takes part in the local disjunction (X=1 ∨ X=3), the application of transfer rule (14a) triggers a local resolution. This is done by applying the distributive law such that the selectional restriction can be tested. For the first disjunct this yields true, whereas it fails for the second disjunct. Rule (14b) is treated in the same way, where only the test on the second disjunct can be satisfied. Both results are joined together and are associated with the very same disjunction: (X=1, in(4) ∨ X=3, from(4)).

(15) a. meet(1),
        subj(1,2), pro(2), num(2,pl),
        obj(1,3), colleague(3), num(3,pl),
        spec(3,def),
        adjn(X,4),
        obj(4,5), Berlin(5),
        (X=1, in(4) ∨ X=3, from(4))

     b. [1][ PRED 'meet<[2],[3]>'
             SUBJ [2][ PRED 'pro'
                       NUM  pl ]
             OBJ  [3][ PRED 'colleague'
                       NUM  pl
                       SPEC def ] ]
        X: [ ADJN { [4][ PRED P
                         OBJ  [5][ PRED 'Berlin' ] ] } ]
        (X=[1], P='in<[5]>' ∨ X=[3], P='from<[5]>')

As a final result we get the packed representation in (15), where the two prepositions are distributed into the local disjunction without converting to disjunctive normal form.

The transferred packed representation corresponds to the two possible utterances in (16). It would be left as a task for the (human) negotiator to find out which of the two sentences would be more appropriate in a given context situation. Due to the local nature of the disjunctions, they can be handed over to an additional resolution component in order to disambiguate them, or, if the discourse and world knowledge is not sufficient for disambiguation, to leave them as choices for the human translator.

(16) a. we will meet the colleagues in Berlin
     b. we will meet the colleagues from Berlin

The main advantage of such an approach is that the transfer rules are independent of whether they are applied to packed representations or not. Unpacking is done only locally and only as much as necessary. Only the internal processing needs to be adapted in order to keep track of which of the local disjuncts are processed. This is done with a simple book-keeping mechanism which keeps track, for any individual term, of the local disjunct it belongs to. Technically, it is done by using the contexted constraints as described in Maxwell III and Kaplan (1991). Hence the whole mechanism can be kept fully transparent for the transfer rule writer, and all of the complexity can be dealt with internally in the transfer rule compiler, which compiles the external transfer rule format into an executable Prolog program that propagates the necessary variable sharings.

In order to avoid duplicated work while trying to apply all possible transfer rule combinations, the transfer system uses an internal chart to store all successful rule applications. Each predicate in the input set gets assigned a unique bit in a bit vector such that it can easily be checked that no predicate is covered more than once while trying to combine different edges in the chart. With this scheme it is also possible to identify the final edges, because they are the ones where all bits are set. The overall processing scheme using an agenda and the data structures are very similar to the chart representation as proposed for chart-based generation from ambiguous input (cf. Kay (1996) and Shemtov (1996)). The main difference stems from the lack of explicit context-free grammar rules.
Instead, in the proposed setup, the lefthand sides of transfer rules are interpreted as immediate dominance rules as they are used for describing free word order languages, supplemented with a single binary context-free rule which recursively tries to combine all possible subsets of terms for which no explicit transfer rule exists.

7 Summary

In this paper we have demonstrated that a Shake-and-Bake inspired MT approach can be applied to flat f-structure representations. It has also been shown how such a transfer system can be combined with the treatment of packed ambiguities for the representation of (syntactic) ambiguities to achieve a truly ambiguity preserving translation architecture. Since the particular treatment of syntactic ambiguities is orthogonal to the possibility of using underspecified semantic representations, the same extension could also be applied to a semantic-based transfer approach on flat representations as advocated for example in Copestake et al. (1995) and Dorna and Emele (1996). The advantage of doing transfer on the level of underspecified semantic representations is the gain of parallelism between source and target language due to the abstraction and underspecification of language-specific idiosyncrasies which are already dealt with in the linking between syntactic and semantic information. Popular examples are cases of head-switching, category switching and diathesis etc., which disappear on the level of semantic representations (e.g. Dorna et al. (1998)). The discussion of such examples can be found at length in the literature and will therefore not be repeated here.

The proposed transfer architecture is currently being implemented as an extension to an experimental transfer MT system which is fully integrated and interfaced with the XLE system for doing parsing and generation. The application domain comprises the translation of instruction manuals.

References

Hiyan Alshawi, David M. Carter, Björn Gambäck, and Manny Rayner. 1991. Translation by Quasi Logical Form Transfer. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL'91), pages 161-168, Berkeley, CA.
In Proceedings of the 17th International Conference on Compu- tational Linguistics (Coling-ACL '98), Montreal, Canada, August. Jochen DSrre. 1997. Efficient construction of un- derspecified semantics under massive ambiguity. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL- EACL'97), Madrid, Spain. M. Egg and K. Lebeth. 1995. Semantic under- specifcation and modifier attachment ambigui- ties. In J. Kilbury and R. Wiese, editors, Integra- tive Ansatze in der Computerlinguistik. Beitrage zur 5. Fachtagung der Sektion Computerlinguis- tik der Deutschen Gesellschaft flit Sprachwis- senschaft (DGfS), pages 19-24, Dfisseldorf, Ger- many. Mark Johnson. 1991. Features and Formulae. Com- putational Linguistics, 17(2):131-151. Ronald M. Kaplan and Joan Bresnan. 1982. Lexical-Functional Grammar: A formal system for grammatical representation. In Joan Bresnan, editor, The Mental Representation of Grammat- ical Relations, pages 173-281. MIT Press, Cam- bridge, Mass. M. Kay, M. Gawron, and P. Norwig. 1994. Verb- mobil: a Translation System for Face-to-Face Di- alogs. Number 33 in CSLI Lecture Notes. Univer- sity of Chicago Press. Martin Kay. 1996. Chart generation. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL'g6), pages 200- 204, Santa Cruz, CA. Martin Kay. 1997. The Proper Place of Men and Machines in Language Translation. Machine Translation, 12:3-23. John T. Maxwell III and Ronald M. Kaplan. 1991. A method for disjunctive constraint satisfaction. In Masaru Tomita, editor, Current Issues in Pars- ing Techonlogy, pages 18-27. Kluwer Academic Publishers, Dordrecht, Holland. John T. Maxwell III and Ronald M. Kaplan. 1993. The interface between phrasal and functional con- straints. Computational Linguistics, 19(4):571- 590. John T. Maxwell III and Ronald M. Kaplan. 1996. An efficient parser for LFG. In Proceedings of the 1st LFG Conference. Manfred Pinkal. 1995. Radical Underspecification. In Proceedings of the lOth Amsterdam Collo- quium, pages 587-606, Amsterdam, Holland, De- cember. ILLC/Department of Philosophy, Univer- sity of Amsterdam. Uwe Reyle. 1993. Dealing with Ambiguities by Underspecification: Construction, Representation and Deduction. Jounal of Semantics, 10(2):123- 179. E. Rich, J. Barnett, K. Wittenburg, and D. Wrob- lewski. 1987. Ambiguity procrastination. In Pro- ceedings of the 6th National Conference of the American Association for Artificial Intelligence (AAAI'87), pages 571-576, Seattle, WA. Michael Schiehlen. 1996. Semantic Construction from Parse Forests. In Proceedings of the 16th International Conference on Computational Lin- guistics (Coling'96). Jungyun Seo and Robert F. Simmons. 1989. Syntac- tic graphs: A representation for the union of all ambiguous parse trees. Computational Linguis- tics, 15(1):19-32, March. Hadar Shemtov. 1996. Generation of Paraphrases from Ambiguous Logical Forms. In Proceedings of the 16th International Conference on Computa- tional Linguistics (Coling'g6), Copenhagen, Den- mark. Hadar Shemtov. 1997. Ambiguity Management in Natural Language Generation. Ph.D. thesis, Stan- ford University, June. Kees van Deemter and Stanley Peters, editors. 1996. Semantic ambiguity and underspecification. Num- ber 55 in CSLI Lecture Notes. CSLI Publications, Stanford University, CA. Pete Whitelock. 1992. Shake-and-Bake Translation. In Proceedings of the l~th International Confer- ence on Computational Linguistics (Coling'92), pages 784-791, Nantes, France. 371
A structure-sharing parser for lexicalized grammars Roger Evans Information Technology Research Institute University of Brighton Brighton, BN2 4G J, UK Roger. Evans @it ri. brighton, ac. uk David Weir Cognitive and Computing Sciences University of Sussex Brighton, BN1 9QH, UK [email protected] Abstract In wide-coverage lexicalized grammars many of the elementary structures have substructures in common. This means that in conventional pars- ing algorithms some of the computation associ- ated with different structures is duplicated. In this paper we describe a precompilation tech- nique for such grammars which allows some of this computation to be shared. In our approach the elementary structures of the grammar are transformed into finite state automata which can be merged and minimised using standard al- gorithms, and then parsed using an automaton- based parser. We present algorithms for con- structing automata from elementary structures, merging and minimising them, and string recog- nition and parse recovery with the resulting grammar. 1 Introduction It is well-known that fully lexicalised grammar formalisms such as LTAG (Joshi and Schabes, 1991) are difficult to parse with efficiently. Each word in the parser's input string introduces an elementary tree into the parse table for each of its possible readings, and there is often a substantial overlap in structure between these trees. A conventional parsing algorithm (Vijay- Shanker and Joshi, 1985) views the trees as in- dependent, and so is likely to duplicate the pro- cessing of this common structure. Parsing could be made more efficient (empirically if not for- mally), if the shared structure could be identi- fied and processed only once. Recent work by Evans and Weir (1997) and Chen and Vijay-Shanker (1997) addresses this problem from two different perspectives. Evans and Weir (1997) outline a technique for com- piling LTAG grammars into automata which are then merged to introduce some sharing of struc- ture. Chen and Vijay-Shanker (1997) use un- derspecified tree descriptions to represent sets of trees during parsing. The present paper takes the former approach, but extends our previous work by: • showing how merged automata can be min- imised, so that they share as much struc- ture as possible; • showing that by precompiling additional information, parsing can be broken down into recognition followed by parse recovery; • providing a formal treatment of the algo- rithms for transforming and minimising the grammar, recognition and parse recovery. In the following sections we outline the basic approach, and describe informally our improve- ments to the previous account. We then give a formal account of the optimisation process and a possible parsing algorithm that makes use of it 1 . 2 Automaton-based parsing Conventional LTAG parsers (Vijay-Shanker and Joshi, 1985; Schabes and Joshi, 1988; Vijay- Shanker and Weir, 1993) maintain a parse ta- ble, a set of items corresponding to complete and partial constituents. Parsing proceeds by first seeding the table with items anchored on the input string, and then repeatedly scanning the table for parser actions. Parser actions introduce new items into the table licensed by one or more items already in the table. The main types of parser actions are: 1. extending a constituent by incorporating a complete subconstituent (on the left or 1However, due to lack of space, no proofs and only minimal informal descriptions are given in this paper. 372 right); 2. 
extending a constituent by adjoining a sur- rounding complete auxiliary constituent; 3. predicting the span of the foot node of an auxiliary constituent (to the left or right). Parsing is complete when all possible parser ac- tions have been executed. In a completed parse table it is possible to trace the sequence of items corresponding to the recognition of an elementary tree from its lexi- cal anchor upwards. Each item in the sequence corresponds to a node in the tree (with the se- quence as a whole corresponding to a complete traversal of the tree), and each step corresponds to the parser action that licensed the next item, given the current one. From this perspective, parser actions can be restated relative to the items in such a sequence as: 1. substitute a complete subconstituent (on the left or right); 2. adjoin a surrounding complete auxiliary constituent; 3. predict the span of the tree's foot node (to the left or right). The recognition of the tree can thus be viewed as the computation of a finite state automaton, whose states correspond to a traversal of the tree and whose input symbols are these relao tivised parser actions. This perspective suggests a re-casting of the conventional LTAG parser in terms of such au- tomata 2. For this automaton-based parser, the grammar structures are not trees, but automata corresponding to tree traversals whose inputs are strings of relativised parser actions. Items in the parse table reference automaton states instead of tree addresses, and if the automa- ton state is final, the item represents a complete constituent. Parser actions arise as before, but are executed by relativising them with respect to the incomplete item participating in the ac- tion, and passing this relativised parser action as the next input symbol for the automaton ref- erenced by that item. The resulting state of that automaton is then used as the referent of the newly licensed item. On a first pass, this re-casting is exactly that: it does nothing new or different from the original 2Evans and Weir (1997) provides a longer informal introduction to this approach. parser on the original grammar. However there are a number of subtle differences3: • the automata are more abstract than the trees: the only grammatical information they contain are the input symbols and the root node labels, indicating the category of the constituent the automaton recognises; • automata for several trees can be merged together and optimised using standard well-studied techniques, resulting in a sin- gle automaton that recognises many trees at once, sharing as many of the common parser actions as possible. It is this final point which is the focus of this paper. By representing trees as automata, we can merge trees together and apply standard optimisation techniques to share their common structure. The parser will remain unchanged, but will operate more efficiently where struc- ture has been shared. Additionally, because the automata are more abstract than the trees, capturing precisely the parser's view of the trees, sharing may occur between trees which are structurally quite different, but which hap- pen to have common parser actions associated with them. 3 Merging and minimising automata Combining the automata for several trees can be achieved using a variety of standard algo- rithms (Huffman, 1954; Moore, 1956). How- ever any transformations must respect one im- portant feature: once the parser reaches a fi- nal state it needs to know what tree it has just recognised 4. 
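Anticipating the transition-annotation scheme proposed below, the way several trees' action strings can share structure once they are automata can be illustrated with a small sketch. The Python fragment that follows is our own, not the paper's notation: each elementary tree is encoded as a plain list of relativised actions with invented names such as ('subst_right', 'NP'), and the trees are merged into a prefix-sharing deterministic automaton whose transitions record the set of trees passing through them. Full minimisation, discussed below, would additionally share common suffixes.

    def merge_trees(action_strings):
        # Merge action strings into a trie-shaped DFA over relativised parser
        # actions; record on each transition the set of trees that use it.
        delta, annot, finals = {}, {}, {}
        next_state = 1                          # state 0 is the common start state
        for tree, actions in action_strings.items():
            state = 0
            for a in actions:
                if (state, a) not in delta:     # create a fresh state on first use
                    delta[(state, a)] = next_state
                    next_state += 1
                annot.setdefault((state, a), set()).add(tree)
                state = delta[(state, a)]
            finals.setdefault(state, set()).add(tree)
        return delta, annot, finals

    # Two hypothetical verb trees sharing their first parser action:
    trees = {'t_eat':  [('subst_right', 'NP'), ('subst_left', 'NP')],
             't_give': [('subst_right', 'NP'), ('subst_right', 'PP'),
                        ('subst_left', 'NP')]}
    delta, annot, finals = merge_trees(trees)
    assert annot[(0, ('subst_right', 'NP'))] == {'t_eat', 't_give'}

Here the parser's first action out of the start state is shared by both trees, and the annotation on that transition is exactly the set of trees still compatible with the analysis so far.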
When automata for trees with dif- ferent root categories are merged, the resulting automaton needs to somehow indicate to the parser what trees are associated with its final states. In Evans and Weir (1997), we combined au- tomata by introducing a new initial state with e-transitions to each of the original initial states, 3A further difference is that the traversal encoded in the automaton captures part of the parser's control strategy. However for simplicity we assume here a fixed parser control strategy (bottom-up, anchor-out) and do not pursue this point further - Evans and Weir (1997) offers some discussion. 4For recognition alone it only needs to know the root category of the tree, but to recover the parse it needs to identify the tree itself. 373 and then determinising the resulting automa- ton to induce some sharing of structure. To recover trees, final automaton states were an- notated with the number of the tree the final state is associated with, which the parser can then readily access. However, the drawback of this approach is that differently annotated final states can never be merged, which restricts the scope for structure sharing (minimisation, for example, is not pos- sible since all the final states are distinct). To overcome this, we propose an alternative ap- proach as follows: • each automaton transition is annotated with the set of trees which pass through it: when transitions are merged in au- tomaton optimisation, their annotations are unioned; • the parser maintains for each item in the table the set of trees that are valid for the item: initially this is all the valid trees for the automaton, but gets intersected with the annotation of any transition followed; also if two paths through the automaton meet (i.e., an item is about to be added for a second time), their annotations get unioned. This approach supports arbitrary merging of states, including merging all the final states into one. The parser maintains a dynamic record of which trees are valid for states (in particular fi- nal states) in the parse table. This means that we can minimise our automata as well as deter- minising them, and so share more structure (for example, common processing at the end of the recognition process as well as the beginning). 4 Recognition and parse recovery We noted above that a parsing algorithm needs to be able to access the tree that an automaton has recognised. The algo- rithm we describe below actually needs rather more information than this, because it uses a two-phase recognition/parse-recovery approach. The recognition phase only needs to know, for each complete item, what the root label of the tree recognised is. This can be recovered from the 'valid tree' annotation of the complete item itself (there may be more than one valid tree, corresponding to a phrase which has more than one parse which happen to have been merged to- gether). Parse recovery, however, involves run- ning the recogniser 'backwards' over the com- pleted parse table, identifying for each item, the items and actions which licensed it. A complication arises because the automata, es- pecially the merged automata, do not directly correspond to tree structure. The recogniser re- turns the tree recognised, and a search of the parse table reveals the parser action which com- pleted its recognition, but that information in itself may not be enough to locate exactly where in the tree the action took place. 
However, the additional information required is static, and so can be pre-compiled as the automata them- selves are built up. For each action transition (the action, plus the start and finish states) we record the tree address that the transition reaches (we call this the action-site, or just a-site for short). During parse recovery, when the parse table indicates an action that licensed an item, we look up the relevant transition to discover where in the tree (or trees, if we are traversing several simultaneously) the present item must be, so that we can correctly construct a derivation tree. 5 Technical details 5.1 Constructing the automata We identify each node in an elementary tree 7 with an elementary address 7/i. The root of 7 has the address 7/e where e is the empty string. Given a node 7/i, its n children are ad- dressed from left to right with the addresses 7/il,..."//in, respectively. For convenience, let anchor (7) and foot (7) denote the elemen- tary address of the node that is the anchor and footnode (if it has one) of 7, respectively; and label (7/i) and parent (7/i) denote the label of 7/i and the address of the parent of 7/i, respec- tively. In this paper we make the following assumup- tions about elementary trees. Each tree has a single anchor node and therefore a single spine 5. In the algorithms below we assume that nodes not on the spine have no children. In practice, not all elementary LTAG trees meet these con- ditions, and we discuss how the approach de- scribed here might be extended to the more gen- 5The path from the root to the anchor node. 374 eral case in Section 6. Let "y/i be an elementary address of a node on the spine of 7 with n children "y/il,... ,7/ik,... ,7~in for n > 1, where k is such that 7/ik dominates anchor (7). 7/ik+l ifj=l&n>k "l/ij -1 if2_<j<_k next(-y/ij)= "l/ij+l ifk<j<n 7/i otherwise next defines a function that traverses a spine, starting at the anchor. Traversal of an elemen- tary tree during recognition yields a sequence of parser actions, which we annotate as follows: the two actions A and ~ indicate a substitu- tion of a tree rooted with A to the left or right, respectively; A and +A indicate the presence of the foot node, a node labelled A, to the left or right, respectively; Finally A indicates an adjunct±on of a tree with root and foot labelled A. These actions constitute the input language of the automaton that traverses the tree. This automaton is defined as follows (note that we use e-transitions between nodes to ease the con- struction - we assume these are removed using a standard algorithm). Let 9' be an elementary tree with terminal and nonterminal alphabets VT and VN, respectively. Each state of the following automaton specifies the elementary address 7/i being visited. When the node is first visited we use the state _L[-y/i]; when ready to move on we use the state T[7/i]. Define as follows the finite state automaton M = (Q, E, ]_[anchor (7)],6, F). Q is the set of states, E is the input alphabet, q0 is the ini- tial state, (~ is the transition relation, and F is the set of final states. 
Q = { T['l/i], ±['l/i] I'l/i is an address in "l }; = { A, IA }; F = { T[')'/e] }; and 6 includes the following transitions: (±[foot ('l)], _A., T[foot ('l)]) if foot (7) is to the right of anchor ('l) (±[foot ('/)], +A_, T[foot ('l)]), if foot ('l) is to the left of anchor ('l) { (T['l/i], e, ±[next ('l/i)]) I "l/i is an address in 'l ice} { (m['y/i], A, T['l/i]) I "y/i substitution node, label ('l/i) = A, "l/i to right of anchor (7) } { (±[7/i], ~, T[7/i]) I 7/i substitution node, label ('l/i) = A, "l/i to left of anchor (7) } { (±['l/i], 4, T['l/i]) I "l/i adjunct±on node label ('I/i) = A } { (±['l/i], e, T['l/i]) [ 7/i adjunct±on node } { (T[7/i], ~__+, T['l/i]) [ 7/i adjunct±on node, label ('l/i) = A } In order to recover derivation trees, we also define the partial function a-site(q,a,q') for (q, a, q') E ~ which provides information about the site within the elementary tree of actions occurring in the automaton. a-site(q, a, q') = { "y/i if a ¢ e & q' -- T['l/i] undefined otherwise 5.2 Combining Automata Suppose we have a set of trees F -- {71,... ,% }. Let M~I,... ,M~, be the e-free automata that are built from members of the set F using the above construction, where for 1 < k < n, Mk = (Qk, P,k, qk,~k, Fk). Construction of a single automaton for F is a two step process. First we build an automa- ton that accepts all elementary computations for trees in F; then we apply the standard au- tomaton determinization and minimization al- gorithms to produce an equivalent, compact au- tomaton. The first step is achieved simply by introducing a new initial state with e-transitions to each of the qk: Let M = (Q, ~, qo, 6, F) where Q = { qo } u Ul<k<. Qi; ~2 = U,<k<, P~k F = Ul<k<_,, Fk (~ = Ul<k<n(q0, e, qk) U Ul<k<n 6k. We determinize and then minimize M using the standard set-of-states constructions to pro- duce Mr -- (Q', P,, Q0, (V, F'). Whenever two states are merged in either the determinizing or minimizing algorithms the resulting state is named by the union of the states from which it is formed. For each transition (Q1, a, Q2) E (V we define the function a-sites(Q1, a, Q2) to be a set of el- ementary nodes as follows: a-sites(Q1, a, Q2) = Uq, eq,,q=eq= a-site(ql, a, q2) Given a transition in Mr, this function returns all the nodes in all merged trees which that tran- 375 sition reaches. Finally, we define: cross(Q1, a, Q2) = { 7 ['y/i E a-sites(Q1, a, Q2) } This gives that subset of those trees whose el- ementary computations take the Mr through state Q1 to Q2. These are the transition an- notations referred to above, used to constrain the parser's set of valid trees. 5.3 The Recognition Phase This section illustrates a simple bottom-up parsing algorithm that makes use of minimized automata produced from sets of trees that an- chor the same input symbol. The input to the parser takes the form of a se- quence of minimized automata, one for each of the symbols in the input. Let the input string be w = at...ar~ and the associated automata be M1,...Mn where Mk = (Qk, Ek, qk,(~k, Fk) for 1 _< k < n. Let treesof(Mk) = Fk where Fk is a set of the names of those elementary trees that were used to construct the automata Mk. During the recognition phase of the algorithm, a set I of items are created. An item has the form (T, q, [l, r,l', r']) where T is a set of elementary tree names, q is a automata state and l, r, l', r' • { 0,... , n, - } such that either l<_l'<_r ~<_rorl<randl ~=r'=-. 
Thein- dices l, l', #, r are positions between input sym- bols (position 0 is before the first input symbols and position n is after the final input symbol) and we use wp,p, to denote that substring of the input w between positions p and p~. I can be viewed as a four dimensional array, each entry of which contains a set of pairs comprising of a set of nonterminals and an automata state. Roughly speaking, an item (T, q, [l, r, l', r]) is in- cluded in I when for every 't • T, anchored by some ak (where I < k < r and ifl I ~ - then k < l ~ or r t < k); q is a state in Qk, such that some elementary subcomputation reaching q from the initial state, qk, of Mk is an ini- tial substring of the elementary computation for 't that reaches the elementary address "t/i, the subtree rooted at "t/i spans Wl,r, and if't/i dom- inates a foot node then that foot node spans Wl, r, , otherwise l ~ = r ~ = -. The input is accepted if an item (T, qs,[O,n,-,-]) is added to I where T contains some initial tree rooted in the start symbol S and qf • Fk for some k. When adding items to I we use the procedure add(T, q, [/, r, l', r']) which is defined such that if there is already an entry (T ~, q, [/, r, l ~, rq/ • I for some T ~ then replace this with the entry (T U T', q, [/, r, l', #])6; otherwise add the new entry {T, q, [l, r, l', r']) to I. I is initialized as follows. For each k • { 1,... ,n } call add(T, qk,[k- 1, k,-,-]) where T = treesof(Mk) and qk is the initial state of the automata Mk. We now present the rules with which the com- plete set I is built. These rules correspond closely to the familiar steps in existing bottom- up LTAG parser, in particular, the way that we use the four indices is exactly the same as in other approaches (Vijay-Shanker and Joshi, 1985). As a result a standard control strategy can be used to control the order in which these rules are applied to existing entries of I. 1. If (T,q,[l,r,l',r']),(T',qI,[r,r",-,-]) e I, ql E Fk for some k, (q, A, q,) E ~k' for some k r, label ('//e) = A from some 't' E T' & T" = T n cross(q,A, qt) then call add(T", q', If, r", l', r']). 2. If (T, q, [l, r, l r, rq), (T', ql, [l", l, -, -]) • I, ql • Fk for some k, (q,A,q~) • ~k' for some k t, label ('t~/e) = A from some 't~ • T ~ & T" = T N cross(q,A,q~) then call add(T", q', [l", r, l', r']). 3. If (T,q,[l,r,-,-]) • I, (q,_A.,q,) • ~k for some k & T' = T n cross(q,_A.,q') then for each r' such that r < r' < n call m add(T', q', [l, r', r, r']}. 4. If (T, q, [l, r, -, -]) • I, (q,÷A,q') • ~k for some k & T ~ = Tncross(q,.A,q~) then for each I r such that 0 < l ~ < l call add(T', q', [l', r, l', l]). 5. If (T,q,[l,r,l',r']),(T',q/,[l",r",l,r]) • I, ql • Fk for some k, (q,A,q') • (fk, for some k ~, label ('t~/e) = A from some 't~ • T' & T" = T r'l cross(q, A,q,) then call add(T", q', [l", r", l', r']). 6This replacement is treated as a new entry in the table. If the old entry has already licenced other entries, this may result in some duplicate processing. This could be eliminated by a more sophisticated treatment of tree sets. 376 The running time of this algorithm is O(n 6) since the last rule must be embedded within six loops each of which varies with n. Note that although the third and fourth rules both take O(n) steps, they need only be embedded within the l and r loops. 5.4 Recovering Parse Trees Once the set of items I has been completed, the final task of the parser is to a recover derivation tree 7. This involves retracing the steps of the recognition process in reverse. 
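The bookkeeping shared by all five rules, intersecting an item's tree set with a transition's annotation and unioning tree sets when an item is re-derived (footnote 6), can be sketched as follows. The encoding of items as dictionary keys and the helper names are our own assumptions, and the span updates that distinguish rules 1-5 are deliberately left out.

    def add(I, agenda, key, trees):
        # add(T, q, [l, r, l', r']): union tree sets; a grown set counts as a
        # new entry and must be reprocessed (footnote 6).
        merged = I.get(key, frozenset()) | frozenset(trees)
        if merged != I.get(key):
            I[key] = merged
            agenda.append(key)

    def advance(I, agenda, key, action, new_span, delta, annot):
        # Move an existing item over one relativised parser action: step the
        # automaton and intersect the item's valid trees with the transition's
        # annotation; only items with a nonempty tree set survive.
        q, _span = key
        q2 = delta.get((q, action))
        if q2 is None:
            return
        trees = I[key] & annot.get((q, action), frozenset())
        if trees:
            add(I, agenda, (q2, new_span), trees)

In each of rules 1-5 above, new_span is computed from the spans of the participating items, while advance supplies the automaton step and the tree-set intersection that the cross sets make possible.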
At each point, we look for a rule that would have caused the inclusion of item in I. Each of these rules in- volves some transition (q, a, ql) • 5k for some k where a is one of the parser actions, and from this transition we consult the set of elementary addresses in a-sites(q, a, q~) to establish how to build the derivation tree. We eventually reach items added during the initialization phase and the process ends. Given the way our parser has been designed, some search will be needed to find the items we need. As usual, the need for such search can be reduced through the inclu- sion of pointers in items, though this is at the cost of increasing parsing time. There are var- ious points in the following description where nondeterminism exists. By exploring all possi- ble paths, it would be straightforward to pro- duce an AND/OR derivation tree that encodes all derivation trees for the input string. We use the procedure der((T, q, If, r, l', r']), r) which completes the partial derivation tree r by backing up through the moves of the automata in which q is a state. A derivation tree for the input is returned by the call der((T, ql, [0, n, -, -]), ~-) where (T, qs,[O,n,-,-]) • I such that T contains some initial tree 7 rooted with the start non- terminal S and ql is the final state of some au- tomata Mk, 1 <_ k <_ n. r is a derivation tree containing just one node labelled with name % In general, on a call to der((T, q, [l, r, l ~, rq), T) we examine I to find a rule that has caused this item to be included in I. There are six rules to consider, corresponding to the five recogniser rules, plus lexical introduction, as follows: 1. If (T', q', [l, r", l', r']), (T', ql, [r", r, -, -]) • 7Derivation trees axe labelled with tree names and edges axe labelled with tree addresses. I, qs E Fk for some k, (q', A, q) E ~k' for some k ~, "), is the label of the root of r, ")' E T', label (7'/e) = A from some "y' E T" & "y/i e a-sites(q', A, q), then let r' be the derivation tree containing a single node labelled "/', and let r '~ be the result of at- taching der((T", ql, Jr", r, -, -]), r') under the root of r with an edge labelled the tree address i. We then complete the derivation tree by calling der((T', q', [l, r I', l', r']), T'). 2. If(T',q',[r",r,l',r']),(T",ql,[l,r",-,-]) • I, qs • Fk for some k, (q~, A, q) • 5k, for some k' ~, is the label of the root of T, ~/ • T ~, label ("/~/e) = A from some "/~ • T" & ~/i • a-sites(q I, A, q), then let T' be the derivation tree containing a single node labelled -y~, and let T ~ be the result of at- taching der((T", ql, [l, r', -, -]), r I) under the root of T with an edge labelled the tree address i. We then complete the derivation tree by calling der((T', q', [r '~, r, l ~, rq), r'~). 3. If r = r ~, (T~,q~,[l,l~,-,-]) • I and (q~,_A,,q) • 5k for some k, "y is the label of the root of 7-, ~/ • T' and foot ('),) • a-sites(q t, A÷, q) then make the call der((T', q', [l, l',-,-]), r). 4. If / = l', (T', q', [r', r, -, -]) E I and (q,,+A,ql) • 5k for some k, "), is the label of the root of ~-, -), E T ~ and foot (~/) • a-sites(q', +A, q) then make the call der((T', ql, Jr', r, -, -]), r). 5. 
If (T′, q′, [l′′, r′′, l′, r′]), (T′′, qf, [l, r, l′′, r′′]) ∈ I, qf ∈ Fk for some k, (q′, A, q) ∈ δk′ for some k′, γ is the label of the root of τ, γ ∈ T′, label(γ′/ε) = A for some γ′ ∈ T′′ and γ/i ∈ a-sites(q′, A, q), then let τ′ be the derivation tree containing a single node labelled γ′, and let τ′′ be the result of attaching der((T′′, qf, [l, r, l′′, r′′]), τ′) under the root of τ with an edge labelled the tree address i. We then complete the derivation tree by calling der((T′, q′, [l′′, r′′, l′, r′]), τ′′).

6. If l + 1 = r, r′ = l′ = −, q is the initial state of Mr, γ is the label of the root of τ, and γ ∈ T, then return the final derivation tree τ.

6 Discussion

The approach described here offers empirical rather than formal improvements in performance. In the worst case, none of the trees in the grammar share any structure, so no optimisation is possible. However, in the typical case, there is scope for substantial structure sharing among closely related trees. Carroll et al. (1998) report preliminary results using this technique on a wide-coverage DTG (a variant of LTAG) grammar. Table 1 gives statistics for three common verbs: the total number of trees, the size of the merged automaton (before any optimisation has occurred) and the size of the minimised automaton. The final column gives the average number of trees that share each state in the automaton. These figures show substantial optimisation is possible, both in the space requirements of the grammar and in the sharing of processing state between trees during parsing.

word    no. of trees   automaton   no. of states   no. of transitions   trees per state
come    133            merged      898             1130                 1
                       minimised   50              130                  11.86
break   177            merged      1240            1587                 1
                       minimised   68              182                  12.13
give    337            merged      2494            3177                 1
                       minimised   83              233                  20.25

Table 1: DTG compaction results (from Carroll et al. (1998)).

As mentioned earlier, the algorithms we have presented assume that elementary trees have one anchor and one spine. Some trees, however, have secondary anchors (for example, a subcategorised preposition). One possible way of including such cases would be to construct automata from secondary anchors up the secondary spine to the main spine. The automata for both the primary and secondary anchors associated with a lexical item could then be merged, minimised and used for parsing as above.

Using automata for parsing has a long history dating back to transition networks (Woods, 1970). More recent uses include Alshawi (1996) and Eisner (1997). These approaches differ from the present paper in their use of automata as part of the grammar formalism itself. Here, automata are used purely as a stepping-stone to parser optimisation: we make no linguistic claims about them. Indeed, one view of this work is that it frees the linguistic descriptions from overt computational considerations. This work has perhaps more in common with the technology of LR parsing as a parser optimisation technique, and it would be interesting to compare our approach with a direct application of LR ideas to LTAGs.

References

H. Alshawi. 1996. Head automata and bilingual tilings: Translation with minimal representations. In ACL96, pages 167-176.

J. Carroll, N. Nicolov, O. Shaumyan, M. Smets, and D. Weir. 1998. Grammar compaction and computation sharing in automaton-based parsing. In Proceedings of the First Workshop on Tabulation in Parsing and Deduction, pages 16-25.

J. Chen and K. Vijay-Shanker. 1997. Towards a reduced-commitment D-theory style TAG parser. In IWPT97, pages 18-29.
J. Eisner. 1997. Bilexical grammars and a cubic-time probabilistic parser. In IWPT97, pages 54-65.

R. Evans and D. Weir. 1997. Automaton-based parsing for lexicalized grammars. In IWPT97, pages 66-76.

D. A. Huffman. 1954. The synthesis of sequential switching circuits. J. Franklin Institute.

A. K. Joshi and Y. Schabes. 1991. Tree-adjoining grammars and lexicalized grammars. In Maurice Nivat and Andreas Podelski, editors, Definability and Recognizability of Sets of Trees. Elsevier.

E. F. Moore. 1956. Automata Studies, chapter Gedanken experiments on sequential machines, pages 129-153. Princeton University Press, N.J.

Y. Schabes and A. K. Joshi. 1988. An Earley-type parsing algorithm for tree adjoining grammars. In ACL88.

K. Vijay-Shanker and A. K. Joshi. 1985. Some computational properties of tree adjoining grammars. In ACL85, pages 82-93.

K. Vijay-Shanker and D. Weir. 1993. Parsing some constrained grammar formalisms. Computational Linguistics, 19(4):591-636.

W. A. Woods. 1970. Transition network grammars for natural language analysis. Commun. ACM, 13:591-606.
Utilizando finicamente el m6todo estoc~tstico la tasa de error es de alrededor del 14°/o, aunque la precisi6n del etiquetador puede incrementarse en un 2% ufilizando las palabras desconocidas para enriquecer el 16xico. En cambio, combinando ambos m6todos la tasa de error del proceso completo es del 3.5%. Teniendo en cuenta que el corpus de aprendizaje es bastante pequefio, que el modelo HMM es de primer orden y que la Gramfitica de Restricci6n del euskara esth afin en fase de desarrollo, creemos el m6todo combinado obtiene buenos resultados y puede ser adecuado para otras lenguas aglufinantes. 379
1998
62
Combining Stochastic and Rule-Based Methods for Disambiguation in Agglutinative Languages

Ezeiza N., Alegria I., Arriola J.M., Urizar R.
Informatika Fakultatea
649 P.K. Donostia E-20080
[email protected]
http://ixa.si.ehu.es

Aduriz I.
UZEI
Aldapeta, 20. Donostia E-20009
[email protected]

Abstract

In this paper we present the results of the combination of stochastic and rule-based disambiguation methods applied to the Basque language.¹ The methods we have used in disambiguation are the Constraint Grammar formalism and an HMM-based tagger developed within the MULTEXT project. As Basque is an agglutinative language, a morphological analyser is needed to attach all possible readings to each word. Then, CG rules are applied using all the morphological features, and this process decreases the morphological ambiguity of texts. Finally, we use the MULTEXT project tools to select just one from the possible remaining tags. Using only the stochastic method the error rate is about 14%, but the accuracy may be increased by about 2% by enriching the lexicon with the unknown words. When both methods are combined, the error rate of the whole process is 3.5%. Considering that the training corpus is quite small, that the HMM model is a first-order one, and that the Constraint Grammar of Basque is still in progress, we think that this combined method can achieve good results, and it would be appropriate for other agglutinative languages.

¹ This research has been supported by the Education Department of the Government of the Basque Country and the Interministerial Commission for Science and Technology.

Introduction

Based on the results of the combination of stochastic and rule-based disambiguation methods applied to Basque, we will show that the results of the combination are significantly better than the ones obtained applying the methods separately.

As Basque is an agglutinative and highly inflected language, a morphological analyser is needed to attach all possible interpretations to each word. This process, which may not be necessary in other languages such as English, makes the tagging task more complex. We use MORFEUS, a robust morphological analyser for Basque developed at the University of the Basque Country (Alegria et al., 1996). We present it briefly in section 1, in the overview of the whole system, the lemmatiser/tagger for Basque, EUSLEM.

We have added to MORFEUS a lemma disambiguation process, described in section 2, which discards some of the analyses of the word based on statistical measures.

Another important issue concerning a tagger is the tagset itself. We discuss the design of the tagset in section 3.

In section 4, we present the results of the application of rule-based and stochastic disambiguation methods to Basque. These results are deeply improved by combining both methods, as explained in section 5. Finally, we discuss some possible improvements of the system and future research.

1 Overview of the system

The disambiguation system is integrated in EUSLEM, a lemmatiser/tagger for Basque (Aduriz et al., 1996). EUSLEM has three main modules (a schematic pipeline is sketched after this list):

• MORFEUS, the morphological analyser based on the two-level formalism. It is a robust and wide-coverage analyser for Basque.
• the module that treats multiword lexical units. It has not been used in the experiments in order to simplify the process.
• the disambiguation module, which will be described in sections 4 and 5.
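Schematically, the division of labour among these modules can be written as a three-stage pipeline. The function names and signatures below are illustrative stand-ins of our own, not EUSLEM's actual interfaces, and the multiword module is skipped as in the experiments.

    def euslem_tag(tokens, analyse, cg_prune, pick_tag):
        # Skeleton of the lemmatiser/tagger: MORFEUS proposes every reading,
        # the Constraint Grammar rules prune some of them, and the stochastic
        # tagger chooses one of the survivors per token.
        readings = [analyse(tok) for tok in tokens]   # all (lemma, tag) readings
        readings = cg_prune(tokens, readings)         # CG: discard illegitimate ones
        return pick_tag(tokens, readings)             # HMM: a single tag per token

    # e.g. with trivial stand-ins for the three components:
    tags = euslem_tag(['etxe'], lambda t: [(t, 'NOUN')],
                      lambda ts, rs: rs,
                      lambda ts, rs: [r[0][1] for r in rs])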
MORFEUS plays an important role in the lemmatiser/tagger, because it assigns every token all its morphological features. The most important functions are:

• incremental analysis, which is divided in three phases, using the two-level formalism in all of them: 1) the standard analyser processes words according to the standard lexicon and standard rules of the language; 2) the analyser of linguistic variants analyses dialectal variants and competence errors;² and 3) the analyser of unknown words or guesser processes the remaining words.
• lemma disambiguation, presented below.

² This module is very useful since Basque is still in a normalisation process.

2 Lemma disambiguation

The lemma disambiguation has been added to the previously developed analyser for two main reasons:

• the average number of interpretations in unknown words is significantly higher than in standard words.
• there could be more than one lemma per tag. Since the disambiguation module won't deal with this kind of ambiguity, it has to be solved to lemmatise the text.

We use different methods for the disambiguation of linguistic variants and unknown words. In the case of linguistic variants we try to select the lemma that is "nearest" to the standard one according to the number of non-standard morphemes and rules. We choose the interpretation that has fewer non-standard uses.

            before   after
variants    2.58     2.52
unknown     13.1     6.21

Table 1 - Number of readings.

In the case of unknown words, the procedure uses the following criteria:

• for each category and subcategory pair, leave at least one interpretation.
• assign a weight to each lemma according to the final trigram and the category and subcategory pair.
• select the lemma according to its length and weight, i.e. the best combination of high weight and short lemma.

These procedures have been tested with a small corpus and the resulting error rate is 0.2%. This is insignificant considering that the average number of interpretations of unknown words decreases by 7, as shown in Table 1.

3 Designing the tagset

The choice of a tagset is a critical aspect when designing a tagger. Before defining the tagset, we had to take some aspects into account: there was not any exhaustive tagset for automatic use, and the output of the morphological analyser is too rich and does not offer a directly applicable tagset.

While designing the general tagset, we tried to meet the following requirements:

• it had to take into account all the problems concerning ellipsis, derivation and composition (Aduriz et al., 1995).
• in addition, it had to be general, far from ad hoc tagsets.
• it had to be coherent with the information provided by the morphological analyser.

Bearing all these considerations in mind, the tagset has been structured in four levels:

• in the first level, general categories are included (noun, verb, etc.). There are 20 tags.
• in the second level each category tag is further refined by subcategory tags. There are 48 tags.
• the third level includes other interesting information, such as declension case, verb tense, etc. There are 318 tags in the training corpus, but using a larger corpus we found 185 new tags.
• the output of the morphological analysis constitutes the last level of tagging. There are 2,943 different interpretations in this training corpus, but we have found more than 9,000 in a larger corpus.

          ambiguity rate   tags/token
first     35.11%           1.48
second    40.68%           1.57
third     62.24%           2.20
fourth    64.42%           3.48

Table 2 - Ambiguity of each level.

The morphological ambiguity will differ depending on the level of tagging used in each case, as shown in Table 2.

4 Morphological Disambiguation

There are two kinds of methods for morphological disambiguation: on one hand, statistical methods need little effort and obtain very good results (Church, 1988; Cutting et al., 1992), at least when applied to English, but when we try to apply them to Basque we encounter additional problems; on the other hand, some rule-based systems (Brill, 1992; Voutilainen et al., 1992) are at least as good as statistical systems and are better adapted to free-order and agglutinative languages. So, we have selected one of each group: the Constraint Grammar formalism (Karlsson et al., 1995) and the HMM-based TATOO tagger (Armstrong et al., 1995), which has been designed to be applied to the output of a morphological analyser and in which the tagset can be switched easily without changing the input text.
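Returning briefly to the unknown-word criteria listed in section 2: they amount to keeping, per (category, subcategory) pair, the reading whose lemma combines a high final-trigram weight with a short length. The sketch below is our reading of those criteria; the trigram_weight function and the exact tie-breaking are stand-ins for details the paper does not spell out.

    def select_lemmas(readings, trigram_weight):
        # readings: iterable of (lemma, cat, subcat, tag) for one unknown word.
        # Keep at least one reading per (cat, subcat); prefer a high weight of
        # the lemma-final trigram, then the shorter lemma.
        best = {}
        for lemma, cat, subcat, tag in readings:
            key = (cat, subcat)
            score = (trigram_weight(lemma[-3:], cat, subcat), -len(lemma))
            if key not in best or score > best[key][0]:
                best[key] = (score, (lemma, cat, subcat, tag))
        return [reading for _, reading in best.values()]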
The morphological ambiguity will differ depending on the level of tagging used in each case, as shown in Table 2.

4 Morphological Disambiguation

There are two kinds of methods for morphological disambiguation: on one hand, statistical methods need little effort and obtain very good results (Church, 1988; Cutting et al., 1992), at least when applied to English, but when we try to apply them to Basque we encounter additional problems; on the other hand, some rule-based systems (Brill, 1992; Voutilainen et al., 1992) are at least as good as statistical systems and are better adapted to free-order and agglutinative languages. So, we have selected one of each group: the Constraint Grammar formalism (Karlsson et al., 1995) and the HMM-based TATOO tagger (Armstrong et al., 1995), which has been designed to be applied to the output of a morphological analyser and whose tagset can be switched easily without changing the input text.

Figure 1 - Initial ambiguity³ (second and third level tagsets, after M, M*, M+CG and M*+CG).

We have used the second and third level tagsets for the experiments and a small corpus -28,300 words- divided into a training corpus of 27,000 words and a text of 1,300 words for testing.

Figure 2 - Number of tags per token (second and third level tagsets, after M, M*, M+CG and M*+CG).

The initial ambiguity of the training corpus is relatively high, as shown in fig. 1, and the average number of tags per token is also higher than in other languages -see fig. 2. The number of ambiguity classes is also high -290 and 1,138 respectively- and some of the classes in the test corpus are not in the training corpus, especially for the 3rd level tagset. This means that the training corpus does not cover all the phenomena of the language, so we would need a larger corpus to ensure that it is general and representative of the language.

³ These measures are taken after the process denoted in each column: M → morphological analysis; M* → morphological analysis with enriched lexicon; CG → Constraint Grammar.

We tried both supervised and unsupervised⁴ training using the 2nd level tagset, and only supervised training using the third level tagset. The results are shown in fig. 3 (S). Accuracy is below 90% and 75% respectively. Using unknown words to enrich the lexicon, the results are improved -see fig. 3 (S*)-, but are still far from the accuracy of other systems.

⁴ Even if we used the same corpus for both training methods to compare the results, the latter performed better using a larger corpus.

We have also written some biases -11, to be exact- to correct the most evident errors at the 2nd level. We did not write more biases for the following reasons:
• They can use just the previous tag to change the probabilities, and in some cases we need a wider context to the left and/or to the right.
• They cannot use the lemma or the word.
• From the beginning of this research, our intention was to combine this method with Constraint Grammar.

Using these biases, the error rate decreases by 5% in supervised training and by 7% in unsupervised training -fig. 3 (S+B). We also used biases⁵ with the enriched lexicon, and the accuracy increases by less than 2% in both experiments -fig. 3 (S+B*). This is not a great improvement when trying to decrease an error rate greater than 10%, but the enrichment of the lexicon may be a good way to improve the system.

⁵ These biases were written taking into account the errors made in the first experiment.
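Biases of this kind can be thought of as multiplicative corrections to the HMM transition probabilities, conditioned only on the previous tag. The sketch below illustrates that idea; it is not TATOO's actual mechanism, and the tags and numbers are invented.

```python
# Invented tags and probabilities, purely for illustration.
transition = {("ADJ", "NOUN"): 0.40, ("ADJ", "VERB"): 0.35, ("ADJ", "ADJ"): 0.25}

# A bias can only look at the previous tag, not at the lemma or the word.
biases = {("ADJ", "VERB"): 0.5}   # e.g. make VERB after ADJ half as likely

def biased_transition(prev_tag, tag):
    return transition.get((prev_tag, tag), 0.0) * biases.get((prev_tag, tag), 1.0)

# Renormalise over the possible next tags so we still have a distribution.
def biased_distribution(prev_tag, candidates):
    scores = {t: biased_transition(prev_tag, t) for t in candidates}
    total = sum(scores.values()) or 1.0
    return {t: s / total for t, s in scores.items()}

print(biased_distribution("ADJ", ["NOUN", "VERB", "ADJ"]))
```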
The logical conclusions of these experiments are:
• the statistical approach might not be a good approach for agglutinative and free-order languages -as pointed out by Oflazer and Kuruöz (1994).
• writing good disambiguation rules may really improve the accuracy of the disambiguation task.

As we mentioned above, it is difficult to define accurate rules using stochastic models, so we use the Constraint Grammar for Basque⁶ (Aduriz et al., 1997) for this purpose. The morphological disambiguator uses around 800 constraint rules that discard illegitimate analyses on the basis of local or global context conditions. The application of the CG formalism⁷ is quite satisfactory, obtaining a recall of 99.8%, but there are still 2.16 readings per token. The ambiguity rate after applying the CG of Basque drops from 41% to 12% using the 2nd level tagset and from 64% to 22% using the 3rd level tagset -fig. 2- and the error rate in terms of the tagsets is approximately 1%.

⁶ The rules were designed having syntactic analysis as the main goal.
⁷ These results were obtained using the CG-2 parser, which allows grouping the rules in different ordered subgrammars depending on their accuracy. This morphological disambiguator uses only the first two subgrammars.

Figure 3 - Accuracy of the experiments⁸.

⁸ S → stochastic; * → with enriched lexicon; B → with biases; CG → Constraint Grammar.

5 Combining methods

There have been some approaches to the combination of statistical and linguistic methods applied to POS disambiguation (Leech et al., 1994; Tapanainen and Voutilainen, 1994; Oflazer and Tür, 1997) in order to improve the accuracy of the systems.

Oflazer and Tür (1997) use simple statistical information and constraint rules. They include a constraint application paradigm to make the disambiguation independent of the rule sequence. The approach of Tapanainen and Voutilainen (1994) disambiguates the text using XT and ENGCG independently; then the ambiguities remaining in ENGCG are solved using the results of XT.

We propose a similar combination, applying both disambiguation methods one after the other, but training the stochastic tagger on the output of the CG disambiguator. Since in the output of the CG of Basque the average number of possible tags is still high -1.13-1.14 for the 2nd level tagset and 1.29-1.3 for the 3rd level tagset- and the stochastic tagger produces a relatively high error rate -around 15% at the 2nd level and almost 30% at the 3rd level-, we first apply the constraint rules and then train the stochastic tagger on the output of the rule-based disambiguator.

Fig. 1 (CG) shows the ambiguity left by the Basque CG in terms of the tagsets. Although the ambiguity rate is significantly lower than in the previous experiments, the remaining ambiguities are hard to solve even using all the linguistic information available. We have also experimented with the enriched lexicon, and the results are very encouraging, as shown in fig. 3 (CG+S*). Considering that the number of ambiguity classes is still high -around 240 at the 2nd level and more than 1,000 at the 3rd level-, we think that the results are very good.

For the 2nd level tagging, the error rate after combining both methods is less than 3.5%; half of it comes from MORFEUS and the Basque CG, and the rest is made by the stochastic disambiguation. This is due to the fact that the types of ambiguity remaining after CG is applied are generally hard to solve. Examining the errors, we find that half of them are made in unknown words, trying to distinguish between proper names of persons and places.
We use two different tags because this is interesting for some applications and the tagset was defined based on morphological features. This kind of ambiguity is very hard to solve, and in some applications this distinction is not important, so in that case the accuracy of the tagger would be 98%.

The accuracy with the third level tagset is around 91% using the combined method, which is not too bad bearing in mind the number of tags -310-, the precision of the input -1.29 tags/token- and the fact that the training corpus does not cover all the phenomena of the language⁹. We want to point out that the experiments with the 3rd level tagset show even more clearly that the combined method performs much better than the stochastic one. Moreover, we think that CG disambiguation is convenient even at this level because of the initial ambiguity -63%.

⁹ In a corpus of around 900,000 words we found 185 new tags and more than 1,700 new classes.

Conclusion

We have presented the results of applying different disambiguation methods to an agglutinative and highly inflected language with a relatively free order in sentences. On the one hand, this latter characteristic of Basque makes it difficult to learn appropriate probabilities, particularly with first-order stochastic models. We solve this problem in part with the CG for Basque, which uses a larger context and can tackle the free word-order problem. However, it is very hard work to write a full grammar and disambiguate texts completely using the CG formalism, so we have complemented this method with a stochastic disambiguation process, and the results are quite encouraging.

Comparing the results of Tapanainen and Voutilainen (1994) with ours, we see that they achieve 98.5% recall combining 1.02-1.04 readings from ENGCG and 96% accuracy in XT, while we begin with 1.13-1.14 readings, the quality of our stochastic tagger is less than 90%, and our result is better than 96%. Unlike Tapanainen and Voutilainen (1994), we think that, by training the statistical disambiguation on the output of the CG, it works considerably better¹⁰, at least using such a small training corpus. In the future we will compile a larger corpus and try to decrease the number of readings left by CG.

¹⁰ With their method accuracy is 2% lower.

On the other hand, we think that the information given by the second level tag is not sufficient to decide which of the choices is the correct one, but the training corpus is quite small. However, translating the results of the 3rd level to the 2nd one, we obtain around 97% accuracy. So, we think that improving the 3rd level tagging would improve the 2nd level tagging too. We also want to experiment with unsupervised learning for the 3rd level tagging with a large training corpus.

Along with this, future research will focus on the following processes:
• morphosyntactic treatment for the elaboration of morphological information (nominalisation, ellipsis, etc.).
• treatment of multiword lexical units (MWLU). We are planning to integrate this module to process unambiguous MWLU, to decrease the ambiguity rate and to make the input of the disambiguation more precise.

Acknowledgement

We are indebted to the research team of the General Linguistics Department of the University of Helsinki for giving us permission to use the CG parser. We also want to thank Gilbert Robert for tuning TATOO.

References

Aduriz I., Aldezabal I., Alegria I., Artola X., Ezeiza N., Urizar R. (1996) EUSLEM: A lemmatiser/tagger for Basque. EURALEX.
Aduriz I., Alegria I., Arriola J.M., Artola X., Diaz de Ilarraza A., Ezeiza N., Gojenola K., Maritxalar M. (1995) Different issues in the design of a lemmatizer/tagger for Basque. "From text to tag" SIGDAT, EACL Workshop.
Aduriz I., Arriola J.M., Artola X., Diaz de Ilarraza A., Gojenola K., Maritxalar M. (1997) Morphosyntactic Disambiguation for Basque based on the Constraint Grammar Formalism. RANLP, Bulgaria.
Alegria I., Sarasola K., Urkia M. (1996) Automatic morphological analysis of Basque. Literary and Linguistic Computing, Vol. 11, N. 4.
Armstrong S., Russel G., Petitpierre D., Robert G. (1995) An open architecture for Multilingual Text Processing. EACL'95, vol. 1, 101-106.
Brill E. (1992) A simple rule-based part of speech tagger. ANLP, 152-155.
Church K. W. (1988) A stochastic parts program and phrase parser for unrestricted text. ANLP, 136-143.
Cutting D., Kupiec J., Pedersen J., Sibun P. (1992) A practical part-of-speech tagger. ANLP, 133-140.
Karlsson F., Voutilainen A., Heikkilä J., Anttila A. (1995) Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter.
Leech G., Garside R., Bryan M. (1994) CLAWS4: The tagging of the British National Corpus. COLING, 622-628.
Oflazer K., Kuruöz İ. (1994) Tagging and Morphological Disambiguation of Turkish Text. ANLP, 144-149.
Oflazer K., Tür G. (1997) Morphological Disambiguation by Voting Constraints. ACL-EACL, 222-229.
Tapanainen P., Voutilainen A. (1994) Tagging Accurately - Don't guess if you know. ANLP, 47-52.
1998
63
Anaphor resolution in unrestricted texts with partial parsing

A. Ferrández; M. Palomar
Dept. of Languages and Information Systems
Alicante University - Apt. 99
03080 - Alicante - Spain
[email protected] [email protected]

L. Moreno
Dept. of Information Systems and Computation
Valencia University of Technology
[email protected]

Abstract

In this paper we deal with several kinds of anaphora in unrestricted texts: pronominal references, surface-count anaphora and one-anaphora. In order to resolve these anaphors, we work on the output of a part-of-speech tagger, to which we automatically apply a partial parsing based on the formalism Slot Unification Grammar, which has been implemented in Prolog. We use only the following kinds of information: lexical (the lemma of each word), morphological (person, number, gender) and syntactic. Finally, we show the experimental results, and the restrictions and preferences that we have used for anaphor resolution with partial parsing.

Introduction

Nowadays there are two different approaches to anaphor resolution: integrated and alternative. The former is based on the integration of different kinds of knowledge (e.g. syntactic or semantic information), whereas the latter is based on statistics, neural networks or the principles of reasoning with uncertainty: e.g. Connoly (1994) and Mitkov (1997). Our system can be included in the first approach. In these integrated approaches, semantic and domain knowledge information is very expensive in terms of computational processing. As a consequence, current anaphor resolution implementations rely mainly on constraints and preference heuristics which employ information originating from morphosyntactic or shallow semantic analysis, e.g. Baldwin (1997). These approaches, however, perform remarkably well.

Lappin and Leass (1994) describe an algorithm for pronominal anaphor resolution with a high rate of correct analyses: 85%. It operates primarily on syntactic information only. Kennedy and Boguraev (1996) propose an algorithm for anaphor resolution which is a modified and extended version of the one developed by Lappin and Leass (1994). In contrast to that work, their algorithm does not require in-depth, full syntactic parsing of the text. The modifications enable the resolution process to work from the output of a POS tagger, enriched only with annotations of the grammatical function of lexical items in the input text stream. The advantage of this algorithm is that anaphor resolution can be realized within NLP frameworks which do not -or cannot- employ robust and reliable parsing components. Quantitative evaluation shows the anaphor resolution algorithm described there to run at a rate of 75% accuracy.

Our framework allows an approach similar to that of Kennedy and Boguraev (1996), but we automatically get syntactic information from partial parsing. Moreover, our proposal is also applied to other kinds of anaphors, such as surface-count anaphora and one-anaphora. There are some other approaches that work on the output of a POS tagger, e.g. that of Mitkov and Stys (1997), which proposes another knowledge-poor approach to resolving pronouns in technical manuals in both English and Polish. This approach is a modification of the one reported in Mitkov (1997).
Here, the knowledge is limited to a small noun phrase grammar, a list of terms and a set of antecedent indicators (definiteness, givenness, term preference, lexical reiteration, ...). We will work in a similar way to this approach, since we use some of its antecedent indicators, but we automatically apply a partial parsing that allows us to deal with other kinds of anaphors as well as pronouns.

This paper has been supported by the CICYT, number TIC97-0671-C02-01/02.

In this work we apply a partial parsing to the output of a POS tagger in order to solve the anaphora problem. We work on the corpus used within CRATER². This corpus contains the International Telecommunications Union CCITT handbook, also known as The Blue Book, in English, French and Spanish versions. This corpus is the most important collection of telecommunication texts and contains 5M words, automatically tagged by the Spanish version of the Xerox tagger. We will use the system Slot Unification Grammar (SUG) in order to get a partial parsing of the output of this tagger. SUG is a logical formalism based on unification, which is an extension of Definite Clause Grammars (DCG). It is called Slot Unification Grammar due to the slot structures generated by the parser. SUG has been developed with the aim of extending DCG in order to facilitate the resolution of several Natural Language Processing (NLP) problems in a modular way. This system was first proposed in Ferrández (1997a), and it has previously been applied to anaphor resolution in Ferrández (1997b). We have used SUG instead of other well-known formalisms such as Head Driven Phrase Structure Grammar (HPSG), Lexical Functional Grammar (LFG) or Slot Grammars (SG) because SUG allows a modular and computational treatment of NLP problems, and it facilitates integration with a POS tagger.

² http://138.87.135.33/~mdavies/roanoke.htm

In the following section we briefly describe the SUG formalism in order to facilitate the understanding of this paper. In section 2 we propose a SUG grammar to accomplish the partial parsing of unrestricted text and the interface to work with the output of the POS tagger. In section 3 we explain the algorithm used for anaphor resolution and its constraints and preferences. And, finally, in section 4 we offer some figures from the evaluation of the system.

1 Slot Unification Grammar

In this section we briefly describe the SUG formalism. We only show those capabilities of SUG needed to understand this paper; for further details on SUG it is necessary to consult Ferrández (1997a).

SUG can be defined as the quadruple (NT, T, P, H), where NT and T are finite sets of nonterminal and terminal symbols respectively, with NT ∩ T = ∅. P is a finite set of pairs α ++> β, where α ∈ NT and β ∈ (T ∪ NT)* ∪ {procedure calls}; these pairs are called production rules. Finally, H is a set of production rules which have only the first member of the production rule, i.e. α, and whose name α is either coordinated, juxtaposition, fusion, basicWord or isWord.

SUG production rules add to those of DCG the possibility that each subconstituent of β may be omitted in the sentence if it is written between the optional operator: << constituent >>. It is a well-known fact that we can get optional constituents in DCG by making use of a nonterminal symbol (e.g. optA, with optA → A and optA → []). However, this technique obliges us to add new nonterminal symbols, whereas SUG allows us to get the same effect without adding any new one.
We can get an example from Figure 1, which shows the reduction of grammatical rules in SUG.

DCG grammar:
np -> subst.              np -> det, subst.
np -> det, adj, subst.    np -> det, subst, adj.
np -> det, subst, pp.

SUG grammar:
np ++> <<det>>, <<adj>>, subst, <<adj>>, <<pp>>.

DCG grammar with optional constituents:
np -> optDet, optAdj, subst, optAdj, optPP.
optDet -> det.    optDet -> [].
optAdj -> adj.    optAdj -> [].

Figure 1. Comparison between DCG and SUG with reference to optional constituents.

Furthermore, this optional operator has the possibility of remembering whether the optional constituent has been parsed in the sentence or not. This information is very useful in the resolution of NLP problems such as ellipsis or extraposition. It is achieved by adding a label to the optional constituent, e.g. << SSNP : np >>. This label will be an uninstantiated Prolog variable if constituent np is missing, so the Prolog predicate var(SSNP) would succeed.

We have developed a translator which turns SUG rules into Prolog clauses. This translator has been run under SICStus Prolog 2.1 and Arity Prolog 5.1, and it translates each SUG production rule into Prolog. The translator provides what we call the slot structure (henceforth SS). This SS stores the syntactic, morphological and semantic information of every constituent of the grammar. Each SS consists of a structure whose functor is the name of the constituent (np, vp, ...). Its first argument corresponds to another structure with functor conc, which includes all the arguments of the constituent (Number, Gender, SemanticType). The second one corresponds to the λp of the final logical formula of the constituent. And the remaining arguments correspond to the SS of its subconstituents. In this SS, the parser leaves as uninstantiated Prolog variables ("_") the slots corresponding to the optional constituents that do not appear in the sentence; in this way, we know what has been parsed and what has not. From now on, we will show each SS with λp and conc only when necessary, for simplicity.

Figure 2. The parsing process: a sentence is parsed (with a single access to the dictionary) into a slot structure; the process of resolution of NLP problems (anaphora, ellipsis, PP-attachment, ...) then produces a final slot structure without these NLP problems.

Now we would like to clarify the process by which we obtain the final logical formula. First of all we parse the sentence, and then we get its SS. After that, we can try to solve NLP problems such as extraposition, ellipsis, PP-attachment and anaphora. The solution consists of a new SS which is used to obtain the final logical formula. This process is summed up in Figure 2. We would like to emphasize that this style of resolution allows us to produce modular NLP systems in which the grammatical rules, the logical formulas and the module of resolution of NLP problems are quite independent from each other.

Our SUG parser accesses the dictionary only once during the whole process of parsing, in order to avoid repeated accesses to the same word. It stores the information of each word in a list before starting the parse, and it works with this structure instead of the list of words of a DCG parser in Prolog; e.g. DCG list: [this, book, is, mine]; SUG list: [word(this, [adj(sing, dem), pron(sing, dem)]), word(book, [noun(...)]), ...]. Each element of the SUG list is a structure with name word and with two arguments.
The first one corresponds to the word of the sentence itself, as a Prolog atom. The second one corresponds to a list structure which contains the lexical entries of the word. That is to say, every time the parser has to access a lexical entry of a word, it looks it up in this list; it never accesses the dictionary again.

2 Partial parsing with SUG

Abney (1997) considers it necessary to carry out a partial parsing of unrestricted text instead of a complete parsing, both because of errors and because of the unavoidable incompleteness of lexicon and grammar. It is also difficult to do a global search efficiently on unrestricted text, due to the length of sentences and the ambiguity of grammars. Partial parsing is considered a response to these difficulties. Partial parsing techniques aim to recover syntactic information efficiently and reliably from unrestricted text, by sacrificing completeness and depth of analysis.

In this section we show the application of SUG to partial parsing. We take the output of a POS tagger as input, and afterwards apply a partial parsing with SUG. We work on the previously mentioned corpus, The Blue Book, which has been automatically tagged by the Spanish version of the Xerox tagger.
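As a rough illustration of the kind of mapping such an interface performs (the full pipeline is shown in Figure 3 below), here is a minimal sketch; the tag inventory and the mapping table are invented stand-ins for the actual Xerox-to-SUG correspondence.

```python
# Hypothetical mapping from a few Xerox-style tags to SUG-style labels.
XEROX_TO_SUG = {
    "NCFP": ("noun", ("common", "fem", "pl")),
    "NCMS": ("noun", ("common", "masc", "sing")),
    "DAFP": ("art",  ("fem", "pl", "det")),
}

def to_sug_word(surface, lemma, tag):
    """Turn one tagged token (surface, lemma, TAG) into word(lemma, [label(...)])."""
    label, features = XEROX_TO_SUG[tag]
    return ("word", lemma, [(label, features)])

# (connections, connection, NCFP) -> word(connection, [noun(common, fem, pl)])
print(to_sug_word("connections", "connection", "NCFP"))
```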
The grammar in Figure 4 will only parse coordinated prepositional phrases (pp), coordinated noun phrases (np), pronouns (p), conjunctions (conj) and verbs (verb) in whatever order that they appear in the text and it will allow us to work in a similar way that the algorithm mentioned in Kennedy and Boguraev (1996). But in our approach we will automatically get the syntactic information from this grammar. The SS returned by the parser will consist of a sequence of these constituents: pp, np, p, conj, verb and free words. The attachments (e.g. of the pp) will be postponed to the module of resolution of NLP problems, which could work jointly with the algorithm for anaphor resolution (in a similar way to the approach proposed in Azzam (1995)). The free words will consist of constituents that are not covered by the grammar (e.g. adverbs) or words that are not important for the anaphor resolution. The output of the whole system will consist of a sequence of the logical formulas of each constituent. Here sentence will be the initial symbol of the grammar and the partial parsing will be applied with the rules shown in Figure 4. If we want a complete parsing, we just have to substitute these rules for the following: sentence ++> np, vp, and obviously we will have to add the grammatical rule for a verbal phrase (vp). 3 The algorithm In this section we are going to propose an algorithm which can deal with discourse anaphora in unrestricted texts with partial parsing. It is based on the process of parsing described in Figure 3. So this process will take the output of a POS tagging as input, and it will be applied after the partial parsing of a sentence (using the grammar described in Figure 4) and before obtaining its logical formula. This algorithm is shown in Figure 5 and it will deal with pronominal references, surface-count anaphora and one-anaphora. This algorithm will take a slot structure (SS) that consists of a sequence of the following constituents: np, pp, p, conj and verbs and it will return a new one without anaphors. Every possible antecedent (noun phrases) will be stored in a list of 388 antecedents, that will be used to solve the anaphors. Another structure will be stored in this list for each antecedent: paral (Sent, Clause, PosVerb, NumConst, NumCoord). This structure will be used to deduce the parallelism with partial parsing between an anaphor and its antecedent. Its first argument, Sent, is the sentence in which the antecedent appears. The second one is the clause in which it appears. Consider that the beginning of a new clause has been found when we parse a free conjuction (we do not refer to the conjunctions that join the coordinated noun and prepositional phrases). The third one is the position of the antecedent with reference to the verb of the clause: before (bv) or after (av). The fourth one is the number of constituent in the sentence and the fifth one is the number of coordinated constituent if it is included in a coordinated np or pp. For example in: He said that Peter and John bought a book, we have the following: paralm (S, 1, bv, 1, 1), paraljoh, (S, 2, bv, 4, 2) and paralbook (S,2,av,6,1). Parse a sentence. We obtain its slot structure (SS1). 
For each anaphor in SS1:
  Select the antecedents of the previous X sentences, depending on the kind of anaphor, into L0.
  Apply constraints (depending on the kind of anaphor) to L0, with result L1:
    If |L1| = 1: this one is the antecedent of the anaphor.
    If |L1| > 1: apply preferences (depending on the kind of anaphor) to L1, with result L2;
                 the first one of L2 is the selected antecedent.
Update SS1 with the antecedent of each anaphor, with result SS2.

Figure 5. Algorithm for anaphor resolution.

At the same time as we search for antecedents, we also search for anaphors, and whenever we find an anaphor this algorithm is applied. The kinds of anaphors we search for are the following: pronouns (he, she, ...), pronominal noun phrases formed by determiner + pronoun (the second, the former, ...), and noun phrases with the structure determiner + adjective + "one" (the red one; these anaphors in Spanish³ are noun phrases in which the noun has been omitted: el rojo). We identify such anaphors from their SS (its functor and its number and type of arguments). For example, the one-anaphor in Spanish has the following SUG rule: np ++> <<determiner>>, adjective, <<pp>>, and the following SS: np(determiner(...), adjective(...), pp(...)).

³ We work with Spanish unrestricted texts, but whenever possible all the examples are translated into English in order to facilitate understanding.

The number of previous sentences considered in the resolution of an anaphor is determined by the kind of anaphor itself. For pronominal references, the antecedents considered are those in the same sentence, or in the previous sentence if it is in the same paragraph; one-anaphora, in contrast, have more lexical information, so we consider the antecedents in the same paragraph. We are able to know the number of the sentence because this information is stored jointly with the SS of every antecedent: a different Prolog variable is assigned to each sentence, and all the antecedents in this sentence have this variable in their paral structure.

The algorithm applies a set of constraints to the list of possible antecedents in order to discount candidates. If there is only one candidate left, it is the antecedent of the anaphor. Otherwise, if more than one candidate is left, a set of preferences is applied that sorts the list of remaining antecedents, and the selected antecedent is the first one. It is important to remark that these constraints and preferences can be different for each kind of anaphor. Next, the constraints and preferences are briefly explained.

Morphosyntactic agreement (person, gender and number) is checked by unification of the structure conc described in section 1. It is a strong constraint on reference, but it is not absolute: At the zoo, a monkey scampered between two elephants. One snorted at it⁴; or: John and Bill_i went into the shop. They_i bought a book. To solve the second example, we store a new antecedent with plural number which includes all the coordinated noun phrases (in this case John and Bill). We detect the coordination of noun phrases from the SS returned by the SUG fact coordinated.

⁴ In this paper we will not deal with problems caused by quantification.
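The following is a minimal sketch of the loop in Figure 5 together with the agreement constraint just described; the candidate representation and helper functions are invented for illustration and are not the actual Prolog implementation.

```python
def agrees(anaphor, candidate):
    """Morphosyntactic agreement on person, gender and number (None = unspecified)."""
    for feature in ("person", "gender", "number"):
        a, c = anaphor.get(feature), candidate.get(feature)
        if a is not None and c is not None and a != c:
            return False
    return True

def resolve(anaphor, antecedents, constraints, preferences):
    """Figure 5: apply constraints, then preferences, then pick the first survivor."""
    l1 = [c for c in antecedents
          if all(constraint(anaphor, c) for constraint in constraints)]
    if len(l1) == 1:
        return l1[0]
    for prefer in preferences:          # each preference reorders the list
        l1 = prefer(anaphor, l1)
    return l1[0] if l1 else None

# Invented toy candidates for "They" after "John and Bill went into the shop."
they = {"person": 3, "gender": None, "number": "pl"}
candidates = [{"head": "John+Bill", "person": 3, "number": "pl"},
              {"head": "shop", "person": 3, "number": "sing"}]
print(resolve(they, candidates, [agrees], []))   # -> the plural coordination
```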
In one-anaphora, we have considered number agreement as a preference instead of a constraint, in order to solve sentences like this: Wendy didn't give either boy a green shirt_i, but she gave Sue two red ones_j, where the anaphor and its antecedent do not agree in number (so they do not co-refer to the same entity of the discourse).

The c-command constraints are applied to the syntactic information stored in the SS of each constituent and its structure paral. Take, for example, the following constraint: "A pronominal NP must be interpreted as non-coreferential with any NP that c-commands it", e.g. Zelda_i bores her_j. It is enforced using the information stored in their structures: paral_i(Sent1, Clause1, ...) and paral_j(Sent1, Clause1, ...), which means that they are in the same sentence and clause. However, in John_j was late for work, because he_i slept in, John and he can be coreferential because they are in different clauses separated by the conjunction because: paral_John(Sent1, Clause1, ...), paral_he(Sent1, Clause2, ...). But in John_i and he_j bought a book, the pronoun will not corefer with John: although there is a conjunction between them, they are in the same coordinated noun phrase, which is known from paral_i(S1, C1, bv, 1, 1) and paral_j(S1, C1, bv, 1, 2). In sentences like (John_i's portrait of him_j)_NP is interesting and This is (the man_i who he_j saw)_NP, the coreference is not permitted because the pronoun and the antecedent are in the same constituent NP (they are in the same slot structure: np(det(the), noun(man), relSent(...))). On the other hand, in John bought a book for Peter_i and for a friend of him_i, the pronoun can corefer with Peter although they belong to the same coordinated constituent, because the pronoun is an adjunct of the second coordinated constituent. From the reflexivity constraints, in Mary_j loves herself_j we can conclude that the antecedent of herself is Mary, because they are in the same clause.

In relation to preferences, they are different for each kind of anaphor: non-reflexive pronouns prefer an antecedent in the same sentence and clause and, if more than one antecedent is still left, those in the same position with reference to the verb: syntactic parallelism. Moreover, we have added some other preferences; e.g. a non-reflexive pronoun is not allowed to have an antecedent that appears in the same clause, due to the reflexivity constraints: Jack_i saw Sam_j at the party. Sam_j gave him_i a drink. If, after applying these preferences, more than one antecedent is left, we choose the antecedent most recently mentioned.

In order to solve surface-count anaphora, we use the SS returned by the SUG fact coordinated. This fact allows the coordination of constituents with the same or different form: Peter, your daughter and she, and it allows us to access whatever coordinated constituent we wish, in the order we wish. That is to say, its SS is np(simpleNP(Peter), conj(','), np(simpleNP(det(your), noun(daughter)), conj(and), np(simpleNP(pron(she)), _, _))), and their paral structures, through their fifth argument, tell us the number of the coordinated constituent: paral_Peter(S, C, V, P, 1), paral_daughter(S, C, V, P, 2), .... In this way, the anaphor the second one chooses an antecedent with a paral structure whose fifth argument has the value 2.

To solve one-anaphora we apply the following preference: we choose the antecedents with a similar structure.
For example, in Wendy didn't give either boy a green tie-dyed T-shirt_i, but she gave Sue a blue one_j, the antecedent a green tie-dyed T-shirt would be chosen instead of Wendy or Sue because they have similar SS (a determiner, a common noun and an adjective): np(noun(Wendy)), np_i(X, det(a), adj([green, tie-dyed]⁵), noun(T-shirt)) and np_j(Y, det(a), adj([blue]), pron(one)). This SS allows decomposition of the description (i.e. green can be broken off), and the solution of the anaphora will be np(Y, det(a), adj([blue]), noun(T-shirt)). It is important to remark that the solution has a different variable⁶ (Y) from its antecedent (X). It means that the anaphor and its antecedent do not co-refer, so the anaphor refers to a new entity in the discourse. However, in John bought a red dark apple_i and a green pear. He ate the red one_i, the anaphor will co-refer with a red dark apple. We distinguish both cases because in the second one the anaphor and its antecedent share the same modifiers⁷ (red) and they agree in number.

⁵ This list of adjectives is provided by the SUG fact juxtaposition.
⁶ This variable corresponds to the λp of the final logical formula of the constituent (see section 1).
⁷ It is obvious that we will probably need more semantic information in order to solve these anaphors, but in this paper we are not going to consider this information, since the tagger does not provide it.

4 Evaluation of the system

We have run our system on part of the previously mentioned corpus (9,600 words), and we have obtained the following figures. Our system has detected 100% of the anaphors described in this paper, and the partial parsing described in Figure 4 has parsed 81% of the words with a very simple grammar⁸. The average length of the sentences with anaphors is 48 words. For pronominal references we have 83% accuracy in detecting the position of the antecedent. For one-anaphora and surface-count anaphora we have not obtained significant figures, since there were not as many anaphors as we wished (only 5 anaphors, with 80% accuracy). The reason why some of the references have failed is mainly the lack of semantic information, together with the problem of attachments between different parsed constituents⁹.

⁸ We could easily improve this percentage by adding more constituents to the grammar (e.g. adverbs or punctuation marks).
⁹ Semantic information is also necessary to solve this problem.

Conclusions

In this paper we have proposed a computational approach to the resolution of pronominal references, surface-count anaphora and one-anaphora. This approach works on the output of a POS tagger, to which we automatically apply a partial parsing based on the formalism Slot Unification Grammar. We have used only lexical, morphological and syntactic information. We have slightly¹⁰ improved the accuracy (83%) on pronominal references with respect to the work of Kennedy and Boguraev (1996) (75%), but we have also improved on that approach in that we automatically apply a partial parsing and we deal with other kinds of anaphors. As a future aim, we will include semantic information in our algorithm in order to check the improvement that we get with it. This information will be stored in a dictionary which can be automatically consulted (since this semantic information is not provided by the tagger).

¹⁰ It is difficult to compare both measures because we have worked on different texts (Spanish texts).

References

Abney S. (1997) Part-of-Speech Tagging and Partial Parsing.
In Steve Young and Gerrit Bloothooft (eds), Corpus-based methods in language and speech processing. Kluwer Academic Publishers.
Azzam S. (1995) An Algorithm to Co-Ordinate Anaphor Resolution and PPS Disambiguation Process. EACL.
Baldwin B. (1997) CogNIAC: high precision coreference with limited knowledge and linguistic resources. ACL/EACL workshop on Operational factors in practical, robust anaphor resolution.
Connoly D., Burger J. and Day D. (1994) A Machine learning approach to anaphoric reference. International Conference on New Methods in Language Processing, UMIST.
Ferrández A., Palomar M. and Moreno L. (1997a) Slot Unification Grammar. Joint Conference on Declarative Programming, APPIA-GULP-PRODE.
Ferrández A., Palomar M. and Moreno L. (1997b) Slot Unification Grammar and anaphor resolution. Recent Advances in Natural Language Processing.
Kennedy C. and Boguraev B. (1996) Anaphora for Everyone: Pronominal Anaphor Resolution without a Parser. COLING.
Lappin S. and Leass H. (1994) An algorithm for pronominal anaphor resolution. Computational Linguistics, 20(4).
Mitkov R. (1997) Pronoun resolution: the practical alternative. In S. Botley, T. McEnery (eds), Discourse Anaphora and Anaphor Resolution, Univ. College London Press.
Mitkov R. (1995) An uncertainty reasoning approach to anaphor resolution. Natural Language Pacific Rim Symposium, Seoul, Korea.
Mitkov R. and Stys M. (1997) Robust reference resolution with limited knowledge: high precision genre-specific approach for English and Polish. Recent Advances in Natural Language Processing.
1998
64
Thematic segmentation of texts: two methods for two kinds of texts

Olivier FERRET
LIMSI-CNRS, Bât. 508 - BP 133, F-91403 Orsay Cedex, France
[email protected]

Brigitte GRAU
LIMSI-CNRS, Bât. 508 - BP 133, F-91403 Orsay Cedex, France
[email protected]

Nicolas MASSON
LIMSI-CNRS, Bât. 508 - BP 133, F-91403 Orsay Cedex, France
[email protected]

Abstract

To segment texts into thematic units, we present here how a basic principle relying on word distribution can be applied to different kinds of texts. We start from an existing method well adapted to scientific texts, and we propose an adaptation to other kinds of texts based on semantic links between words. These relations are found in a lexical network, automatically built from a large corpus. We compare their results and give criteria for choosing the more suitable method according to text characteristics.

1. Introduction

Text segmentation according to a topical criterion is a useful process in many applications, such as text summarization or information extraction. Approaches that address this problem can be classified into knowledge-based approaches and word-based approaches. Knowledge-based systems such as Grosz and Sidner's (1986) require an extensive manual knowledge engineering effort to create the knowledge base (semantic network and/or frames), and this is only possible in very limited and well-known domains. To overcome this limitation, and to process large amounts of text, word-based approaches have been developed. Hearst (1997) and Masson (1995) make use of the word distribution in a text to find a thematic segmentation. These works are well adapted to technical or scientific texts characterized by a specific vocabulary. To process narrative or expository texts such as newspaper articles, Kozima's (1993) and Morris and Hirst's (1991) approaches are based on lexical cohesion computed from a lexical network. These methods depend on the presence of the text vocabulary in their network. So, to avoid any restriction on the domains of such texts, we present here a mixed method that augments Masson's (1995) system, based on word distribution, with knowledge represented by a lexical co-occurrence network automatically built from a corpus. Through experiments with these two latter systems, we show that adding lexical knowledge is not sufficient on its own to yield an all-purpose method, able to process either technical texts or narratives. We will then propose some solutions for choosing the more suitable method.

2. Overview

In this paper, we propose to apply one and the same basic idea to find topic boundaries in texts, whatever their kind, scientific/technical articles or newspaper articles. The main idea is to consider the smallest textual units, here the paragraphs, and try to link them to adjacent similar units to create larger thematic units. Each unit is characterized by a set of descriptors, i.e. single and compound content words, defining a vector. Descriptor values are the numbers of occurrences of the words in the unit, modified by the word distribution in the text. Successive units are then compared through their descriptors to decide whether they refer to the same topic or not. This kind of approach is well adapted to scientific articles, which are often characterized by the reiteration of domain technical terms, since there is often no synonym for such specific terms. But we will show that it is less efficient on narratives. Although the same basic principle about word distribution applies, topics are not so easily detectable. In fact, narrative or expository texts often refer to the same entity with a large set of different words. Indeed, authors avoid repetitions and redundancies by using hyperonyms, synonyms and referentially equivalent expressions. To deal with this specificity, we have developed another method that augments the first method by making use of information coming from a lexical co-occurrence network.
In fact, narrative or expository texts often refer to a same entity with a large set of different words. Indeed, authors avoid repetitions and redundancies by using hyperonyms, synonyms and referentially equivalent expressions. To deal with this specificity, we have developed another method that augments the first method by making use of information coming from a lexical co-occurrence network. 392 This network allows a mutual reinforcement of descriptors that are different but strongly related when occurring in the same unit. Moreover, it is also possible to create new descriptors for units in order to link units sharing semantically close words. In the two methods, topic boundaries are detected by a standard distance measure between each pair of adjacent vectors. Thus, the segmentation process produces a text representation with thematic blocks including paragraphs about the same topic. The two methods have been tested on different kinds of texts. We will discuss these results and give criteria to choose the more suitable method according to text characteristics. 3. Pre-processing of the texts As we are interested in the thematic dimension of the texts, they have to be represented by their significant features from that point of view. So, we only hold for each text the lemmatized form of its nouns, verbs and adjectives. This has been done by combining existing tools. MtSeg from the Multext project presented in V6ronis and Khouri (1995) is used for segmenting the raw texts. As compound nouns are less polysemous than single ones, we have added to MtSeg the ability to identify 2300 compound nouns. We have retained the most frequent compound nouns in 11 years of the French Le Monde newspaper. They have been collected with the INTEX tool of Silberztein (1994). The part of speech tagger TreeTagger of Schmid (1994) is applied to disambiguate the lexical category of the words and to provide their lemmatized form. The selection of the meaningful words, which do not include proper nouns and abbreviations, ends the pre-processing. This one is applied to the texts both for building the collocation network and for their thematic segmentation. 4. Building the collocation network Our segmentation mechanism relies on semantic relations between words. In order to evaluate it, we have built a network of lexical collocations from a large corpus. Our corpus, whose size is around 39 million words, is made up of 24 months of the Le Monde newspaper taken from 1990 to 1994. The collocations have been calculated according to the method described in Church and Hanks (1990) by moving a window on the texts. The corpus was pre-processed as described above, which induces a 63% cut. The window in which the collocations have been collected is 20 words wide and takes into account the boundaries of the texts. Moreover, the collocations here are indifferent to order. These three choices are motivated by our task point of view. We are interested in finding if two words belong to the same thematic domain. As a topic can be developed in a large textual unit, it requires a quite large window to detect these thematic relations. But the process must avoid jumping across the texts boundaries as two adjacent texts from the corpus are rarely related to a same domain. Lastly, the collocation wl-w2 is equivalent to the collocation w2-wl as we only try to characterize a thematic relation between wl and w2. 
After filtering the non-significant collocations (collocations with less than 6 occurrences, which represent 2/3 of the whole), we obtain a network with approximately 31000 words and 14 million relations. The cohesion between two words is measured as in Church and Hanks (1990) by an estimation of the mutual information based on their collocation frequency. This value is normalized by the maximal mutual information with regard to the corpus, which is given by: /max = log2 N2(Sw - 1) with N: corpus size and Sw: window size 5. Thematic segmentation without lexical network The first method, based on a numerical analysis of the vocabulary distribution in the text, is derived from the method described in Masson (1995). A basic discourse unit, here a paragraph, is represented as a term vector Gi =(gil,gi2,...,git) where gi is the number of occurrences of a given descriptor in Gi. The descriptors are the words extracted by the pre-processing of the current text. Term vectors are weighted. The weighting policy is tf.idf which is an indicator of the importance of a term according to its distribution in a text. It is defined by: wij = ~). log where tfij is the number of occurrences of a descriptor Tj in a paragraph i; dfi is the number of paragraphs in which Tj occurs and 393 N the total number of paragraphs in the text. Terms that are scattered over the whole document are considered to be less important than those which are concentrated in particular paragraphs. Terms that are not reiterated are considered as non significant to characterize the text topics. Thus, descriptors whose occurrence counts are below a threshold are removed. According to the length of the processed texts, the threshold is here three occurrences. The topic boundaries are then detected by a standard distance measure between all pairs of adjacent paragraphs: first paragraph is compared to second paragraph, second one to third one and so on. The distance measure is the Dice coefficient, defined for two vectors X= (x 1, x2 ..... xt) and Y= (Yl, Y2 ..... Yt) by: C(X,Y)= t 2 w(xi)w(yi) i=l t t w(xi)2÷ w(yi) 2 i=l i=l where w(xi) is the number of occurrences of a descriptor xi weighted by tf.idf factor Low coherence values show a thematic shift in the text, whereas high coherence values show local thematic consistency. 6. Thematic segmentation with lexical network Texts such as newspaper articles often refer to a same notion with a large set of different words linked by semantic or pragmatic relations. Thus, there is often no reiteration of terms representative of the text topics and the first method described before becomes less efficient. In this case, we modify the vector representation by adding information coming from the lexical network. Modifications act on the vectorial representation of paragraphs by adding descriptors and modifying descriptor values. They aim at bringing together paragraphs which refer to the same topic and whose words are not reiterated. The main idea is that, if two words A and B are linked in the network, then " when A is present in a text, B is also a little bit evoked, and vice versa " That is to say that when two descriptors of a text A and B are linked with a weight w in the lexical network, their weights are reinforced into the paragraphs to which they simultaneously belong. Moreover, the missing descriptor is added in the paragraph if absent. 
6. Thematic segmentation with lexical network

Texts such as newspaper articles often refer to the same notion with a large set of different words linked by semantic or pragmatic relations. Thus, there is often no reiteration of the terms representative of the text topics, and the first method described above becomes less efficient. In this case, we modify the vector representation by adding information coming from the lexical network. The modifications act on the vector representation of paragraphs by adding descriptors and modifying descriptor values. They aim at bringing together paragraphs which refer to the same topic and whose words are not reiterated. The main idea is that, if two words A and B are linked in the network, then "when A is present in a text, B is also a little bit evoked, and vice versa". That is to say, when two descriptors A and B of a text are linked with a weight w in the lexical network, their weights are reinforced in the paragraphs to which they simultaneously belong. Moreover, the missing descriptor is added to a paragraph if it is absent.

In case of reinforcement, if the descriptor A is actually present k times and B actually present n times in a paragraph, then we add w·n to the number of occurrences of A and w·k to the number of occurrences of B. In case of descriptor addition, the descriptor weight is set to the number of occurrences of the linked descriptor multiplied by w. All the pairs of text descriptors are processed using the original numbers of their occurrences to compute the modified vector values. These vector modifications favor the emergence of significant descriptors. If a set of words belonging to neighboring paragraphs are linked to each other, then they are mutually reinforced and tend to bring these paragraphs nearer. If there is no mutual reinforcement, the vector modifications are not significant. These modifications are computed before applying a tf.idf-like factor to the vector terms.

Descriptor addition may add many descriptors to all the text paragraphs, because of the numerous links, even weak ones, between words in the network. Thus, the effect of tf.idf is smoothed by the standard deviation of the current descriptor's distribution. The resulting factor is:

log((N / dfj) · (1 - σj))

with σj the standard deviation of the distribution of Tj over the paragraphs where Tj occurs.
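The reinforcement step can be sketched as follows; the network is reduced to a plain weight dictionary, which is an invented simplification of the collocation network described in section 4.

```python
def reinforce(paragraph_counts, network):
    """paragraph_counts: {descriptor: occurrences}; network: {(a, b): weight}.

    Uses the *original* counts for every pair, as in the method: if A occurs
    k times and B occurs n times, A gains w*n and B gains w*k; a descriptor
    linked to a present one is added with weight w * occurrences.
    """
    original = dict(paragraph_counts)
    modified = dict(paragraph_counts)
    for (a, b), w in network.items():
        k, n = original.get(a, 0), original.get(b, 0)
        if k and n:                       # mutual reinforcement
            modified[a] = modified.get(a, 0) + w * n
            modified[b] = modified.get(b, 0) + w * k
        elif k:                           # addition of the missing descriptor b
            modified[b] = modified.get(b, 0) + w * k
        elif n:                           # addition of the missing descriptor a
            modified[a] = modified.get(a, 0) + w * n
    return modified

counts = {"government": 2, "minister": 1}
net = {("government", "minister"): 0.4, ("government", "cabinet"): 0.3}
print(reinforce(counts, net))
# {'government': 2.4, 'minister': 1.8, 'cabinet': 0.6}
```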
6 $ 10 Figure 3 - Test on a scientific paper 2 in a specialized domain On the contrary, by applying the second method to the same text, poor results are sometimes observed (see Figure 3, the coherence graph in dash line). This is due to the absence of highly specific descriptors, used for Dice coefficient computation, in the lexical network. It means that descriptors reinforced or added are not really specific of the text domain and are nothing but noise in this case. The two methods have been tested on 16 texts including 5 scientific articles and 11 expository or narrative texts. They have been chosen according to their vocabulary specificity, their size (between 1 to 3 pages) and their paragraphs size. Globally, the second method gives better results than the first one: it modulates some cohesion values. But the second method cannot always be applied because problems arise on some scientific papers due to the lack of important specialized descriptors in the network. As the network is built from the recurrence of collocations between words, such words, even belonging to the training corpus, would be too scarce to be retained. So, specialized vocabulary will always be missing in the network. This observation has lead us to define the following process to choose the more suitable method: Apply method 1; If x% of the descriptors whose value is not null after the application of tf.idf are not found in the network, then continue with method 1 otherwise apply method 2. According to our actual studies, x has been settled to 25. 395 8. Related works Without taking into account the collocation network, the methods described above rely on the same principles as Hearst (1997) and Nomoto and Nitta (1994). Although Hearst considers that paragraph breaks are sometimes invoked only for lightening the physical appearance of texts, we have chosen paragraphs as basic units because they are more natural thematic units than somewhat arbitrary sets of words. We assume that paragraph breaks that indicate topic changes are always present in texts. Those which are set for visual reasons are added between them and the segmentation algorithm is able to join them again. Of course, the size of actual paragraphs are sometimes irregular. So their comparison result is less reliable. But the collocation network in the second method tends to solve this problem by homogenizing the paragraph representation. As in Kozima (1993), the second method exploits lexical cohesion to segment texts, but in a different way. Kozima's approach relies on computing the lexical cohesiveness of a window of words by spreading activation into a lexical network built from a dictionary. We think that this complex method is specially suitable for segmenting small parts of text but not large texts. First, it is too expensive and second, it is too precise to clearly show the major thematic shifts. In fact, Kozima's method and ours do not take place at the same granularity level and so, are complementary. 9. Conclusion From a first method that considers paragraphs as basic units and computes a similarity measure between adjacent paragraphs for building larger thematic units, we have developed a second method on the same principles, making use of a lexical collocation network to augment the vectorial representation of the paragraphs. 
8. Related works

Without taking the collocation network into account, the methods described above rely on the same principles as Hearst (1997) and Nomoto and Nitta (1994). Although Hearst considers that paragraph breaks are sometimes invoked only to lighten the physical appearance of texts, we have chosen paragraphs as basic units because they are more natural thematic units than somewhat arbitrary sets of words. We assume that paragraph breaks that indicate topic changes are always present in texts. Those that are inserted for visual reasons fall between them, and the segmentation algorithm is able to join the resulting paragraphs again. Of course, the sizes of actual paragraphs are sometimes irregular, so the result of comparing them is less reliable. But the collocation network in the second method tends to solve this problem by homogenizing the paragraph representation.

As in Kozima (1993), the second method exploits lexical cohesion to segment texts, but in a different way. Kozima's approach relies on computing the lexical cohesiveness of a window of words by spreading activation in a lexical network built from a dictionary. We think that this complex method is especially suitable for segmenting small parts of text but not large texts. First, it is too expensive, and second, it is too precise to clearly show the major thematic shifts. In fact, Kozima's method and ours do not operate at the same granularity level and so are complementary.

9. Conclusion

From a first method that considers paragraphs as basic units and computes a similarity measure between adjacent paragraphs for building larger thematic units, we have developed a second method on the same principles, making use of a lexical collocation network to augment the vectorial representation of the paragraphs. We have shown that this second method, while well adapted to processing texts such as newspaper articles, gives poorer results on scientific texts, because the characteristic terms do not emerge as well as in the first method, due to the addition of related words. So, in order to build a text segmentation system independent of the kind of text processed, we have proposed a shallow analysis of the text's characteristics to select the suitable method.

10. References

Kenneth W. Church and Patrick Hanks. (1990) Word Association Norms, Mutual Information, and Lexicography. Computational Linguistics, 16/1, pp. 22-29.
Barbara J. Grosz and Candace L. Sidner. (1986) Attention, Intentions and the Structure of Discourse. Computational Linguistics, 12, pp. 175-204.
Marti A. Hearst. (1997) TextTiling: Segmenting Text into Multi-paragraph Subtopic Passages. Computational Linguistics, 23/1, pp. 33-64.
Hideki Kozima. (1993) Text Segmentation Based on Similarity between Words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (Student Session), Columbus, Ohio, USA.
Nicolas Masson. (1995) An Automatic Method for Document Structuring. In Proceedings of the 18th Annual International ACM-SIGIR Conference on Research and Development in Information Retrieval, Seattle, Washington, USA.
Jane Morris and Graeme Hirst. (1991) Lexical Cohesion Computed by Thesaural Relations as an Indicator of the Structure of Text. Computational Linguistics, 17/1, pp. 21-48.
Tadashi Nomoto and Yoshihiko Nitta. (1994) A Grammatico-Statistical Approach to Discourse Partitioning. In Proceedings of the 15th International Conference on Computational Linguistics (COLING), Kyoto, Japan.
Helmut Schmid. (1994) Probabilistic Part-of-Speech Tagging Using Decision Trees. In Proceedings of the International Conference on New Methods in Language Processing, Manchester, UK.
Max D. Silberztein. (1994) INTEX: A Corpus Processing System. In Proceedings of the 15th International Conference on Computational Linguistics (COLING), Kyoto, Japan.
Jean Véronis and Liliane Khouri. (1995) Étiquetage grammatical multilingue: le projet MULTEXT. TAL, 36/1-2, pp. 233-248.
A LAYERED APPROACH TO NLP-BASED INFORMATION RETRIEVAL

Sharon Flank
SRA International
4300 Fair Lakes Court
Fairfax, VA 22033, USA
[email protected]

Abstract

A layered approach to information retrieval permits the inclusion of multiple search engines as well as multiple databases, with a natural language layer to convert English queries for use by the various search engines. The NLP layer incorporates morphological analysis, noun phrase syntax, and semantic expansion based on WordNet.

1 Introduction

This paper describes a layered approach to information retrieval, and the natural language component that is a major element in that approach. The layered approach, packaged as Intermezzo™, was deployed in a pre-product form at a government site. The NLP component has been installed, with a proprietary IR engine, PhotoFile (Flank, Martin, Balogh and Rothey, 1995), (Flank, Garfield, and Norkin, 1995), at several commercial sites, including Picture Network International (PNI), Simon and Schuster, and John Deere.

Intermezzo employs an abstraction layer to permit simultaneous querying of multiple databases. A user enters a query into a client, and the query is then passed to the server. The abstraction layer, part of the server, converts the query to the appropriate format for each of the databases (e.g. Fulcrum™, RetrievalWare™, Topic™, WAIS). In Boolean mode, queries are translated, using an SGML-based intermediate query language, into the appropriate form; in NLP mode the queries undergo morphological analysis, NP syntax, and semantic expansion before being converted for use by the databases. The following example illustrates how a user's query is translated.

    Unexpanded query:
        natural disasters in New England
    Search-engine specific:
        natural AND disaster(s) AND New AND England
    Semantic expansion:
        ((natural and disaster(s)) or hurricane(s) or earthquake(s) or tornado(es)) in ("New England" or Maine or Vermont or "New Hampshire" or "Rhode Island" or Connecticut or Massachusetts)

The NLP component has been deployed with as many as 500,000 images, at Picture Network International (PNI). The original commercial use of PNI was as a dialup system, launched with approximately 100,000 images. PNI now operates on the World Wide Web (www.publishersdepot.com). Adjustment of the NLP component continued actively up through about 250,000 images, including additions to the semantic net and tuning of the parameters for weighting. Retrieval speed for the NLP component averages under a second. Semantic expansion is performed in advance on the caption database, not at runtime; runtime expansion makes operation too slow. The remainder of this paper describes how the NLP mode works, and what was required to create it.

2 The NLP Techniques

The natural language processing techniques used in this system are well known, including in information retrieval applications (Strzalkowski, 1993), (Strzalkowski, Perez Carballo and Marinescu, 1995), (Evans and Zhai, 1996). The importance of this work lies in the scale and robustness of the techniques as combined into a system for querying large databases.

The NLP component is also layered, in effect. It uses a conventional search algorithm (several were tested, and the architecture supports plug-and-play here). User queries undergo several types of NLP processing, detailed below, and each element in the processing contributes new query components (e.g. synonyms) and/or weights. The resulting query, as in the example above, natural disasters in New England, contains expanded terms and weighting information that can be passed to any search engine. Thus the Intermezzo multisearch layer can be seen as a natural extension of the layered design of the NLP search system.
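A minimal sketch of this kind of expansion step, using the natural disasters example above (Python; the expansion table is an illustrative stand-in for the semantic net and gazetteer lookups described below):

    # Illustrative only: entries taken from the example above.
    EXPANSIONS = {
        "natural disasters": ["hurricane(s)", "earthquake(s)", "tornado(es)"],
        "New England": ["Maine", "Vermont", "New Hampshire",
                        "Rhode Island", "Connecticut", "Massachusetts"],
    }

    def expand_term(term):
        # A term becomes a disjunction of itself and its related words.
        related = EXPANSIONS.get(term, [])
        alternatives = ['"%s"' % term] + ['"%s"' % r for r in related]
        return "(" + " or ".join(alternatives) + ")"

    def to_boolean(terms):
        # Conjoin the expanded terms into a search-engine-ready query.
        return " and ".join(expand_term(t) for t in terms)

    print(to_boolean(["natural disasters", "New England"]))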
When texts (or captioned images) are loaded into the database, each word is looked up, words that may be related in the semantic net are found based on stored links, and the looked-up word, along with any related words, are all displayed as the "expansion" of that word. Then a check is made to determine whether the current word or phrase corresponds to a proper name, a location, or something else. If it corresponds to a name, a name expansion process is invoked that displays the name and related names such as nicknames and other variants, based on a linked name file. If the current word or phrase corresponds to a location, a location expansion process is invoked that, accessing a gazetteer, displays the location and related locations, such as Arlington, Virginia and Arlington, Massachusetts for Arlington, based on linked location information in the gazetteer and supporting files. If the current word or phrase is neither a name nor a location, it is expanded using the semantic net links and the weights associated with those links. Strongly related concepts are given high weights, while more remotely related concepts receive lower weights, making them less exact matches. Thus, for a query on car, texts or captions containing car and automobile are listed highest, followed by those with sedan, coupe, and convertible, and then by more remotely related concepts such as transmission, hood, and trunk.

Once the appropriate expansion is complete, the current word or phrase is stored in an index database, available for use in searching as described below. Processing then returns to the next word or phrase in the text.

Once a user query is received, it is tokenized so that it is divided into individual tokens, which may be single words or multiwords. For this process, a variation of conventional pattern matching is used. If a single word is recognized as matching a word that is part of a stored multiword, a decision on whether to treat the single word as part of a multiword is made based on the contents of the stored pattern and the input pattern. Stored patterns include not just literal words, but also syntactic categories (e.g. adjective, non-verb), semantic categories (e.g. nationality, government entity), or exact matches. If the input matches the stored pattern information, then it is interpreted as a multiword rather than independent words.

A part-of-speech tagger then makes use of linguistic and statistical information to tag the parts of speech of incoming query portions. Only words that match by part of speech are considered to match, and if two or more parts of speech are possible for a particular word, it is tagged with both. After tagging, word affixes (i.e. suffixes) are stripped from query words to obtain a word root, using conventional inflectional morphology. If a word in a query is not known, affixes are stripped from the word one by one until a known word is found. Derivational morphology is not currently implemented.

Processing then checks to determine whether the resulting word is a function word (closed-class) or content word (open-class). Function words are ignored. (In a few cases, the loss of prepositions presents a problem. In practice, the problem is largely restricted to pictures showing unexpected relationships, e.g. a package under a table. Treating prepositions just like content words leads to odd partial matches, e.g. things under tables ranking before other pictures of packages and tables. The solution will involve an intermediate treatment of prepositions.)
For content words, the related concepts for each sense of the word are retrieved from the semantic net. If the root word is unknown, the word is treated as a keyword, requiring an exact match. Multiwords are matched as a whole unit, and names and locations are identified and looked up in the separate name and location files. Next, noun phrases and other syntactic units are identified. An intermediate query is then formulated to match against the index database. Texts or captions that match queries are then returned, ranked, and displayed to the user, with those that match best being displayed at the top of the list.

In the current system, the searching is implemented by first building a B-tree of ID lists, one for each concept in the text database. The ID lists have an entry for each object whose text contains a reference to a given concept. An entry consists of an object ID and a weight. The object ID provides a unique identifier and is a positive integer assigned when the object is indexed. The weight reflects the relevance of the concept to the object's text, and is a positive integer. To add an object to an existing index, the object ID and a weight are inserted into the ID list of every concept that is in any way relevant to the text. For searching, the ID lists of every concept in the query are retrieved and combined as specified by the query. Since ID lists contain IDs with weights in sorted order, determining the existence and relevance of a match is simultaneous and fast, using only a small number of processor instructions per concept-object pair.
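A minimal sketch of such an index (Python; a dictionary of lists stands in for the B-tree, and summing weights stands in for whatever combination a particular query specifies):

    from collections import defaultdict

    # concept -> ID list of (object_id, weight) entries.
    index = defaultdict(list)

    def add_object(object_id, concept_weights):
        # Insert the object into the ID list of every relevant concept.
        for concept, weight in concept_weights.items():
            index[concept].append((object_id, weight))

    def search(query_concepts):
        # Retrieve and combine the ID lists of every concept in the query.
        scores = defaultdict(int)
        for concept in query_concepts:
            for object_id, weight in index[concept]:
                scores[object_id] += weight
        return sorted(scores.items(), key=lambda item: -item[1])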
The following sections treat the NLP issues in more detail.

2.1 Semantic Expansion, Part-of-Speech Tagging, and WordNet

Semantic expansion, based on WordNet 1.4 (Miller et al., 1994), makes it possible to retrieve words by synonyms, hypernyms, and other relations, not simply by exact matches. The expansion must be constrained, or precision will suffer drastically. The first constraint is part of speech: retrieve only those expansions that apply to the correct part of speech in context. A Church-style tagger (Church, 1988) marks parts of speech. Sense tagging is a further refinement: the algorithm first distinguishes between, e.g., crane as a noun versus crane as a verb. Once noun has been selected, further ambiguity still remains, since a crane can be either a bird or a piece of construction equipment. This additional disambiguation can be ignored, or it can be performed manually (impractical for large volumes of text and impractical for queries, at least for most users). It can also be performed automatically, based on a sense-tagged corpus.

The semantic net used in this application incorporates information from a variety of sources besides WordNet; to some extent it was hand-tailored. Senses were ordered according to their frequency of occurrence in the first 150,000 texts used for retrieval, in this case photo captions consisting of one to three sentences each. WordNet 1.5 and subsequent releases have the senses ordered by frequency, so this step would not be necessary now.

The top level of the semantic net splits into events and entities, as is standard for knowledge bases supporting natural language applications. There are approximately 100,000 entries, with several links for each entry. The semantic net supplies information about synonymy and hierarchical relations, as well as more sophisticated links, like part-of. The closest synonyms, like dangerous and perilous, are ranked most highly, while subordinate types, like skating and rollerblading, are next. More distant links, like the relation between shake hands and handshake, links between adjectives and nouns, e.g. dangerous and danger, and part-of links, e.g. brake and brake shoe, contribute lesser amounts to the rank and therefore yield a lower overall ranking. Each returned image has an associated weight, with 100 being a perfect match. Exact matches (disregarding inflectional morphology) rank 100. The system may be configured so that it does not return matches ranked below a certain threshold, say 50.

Table 1 presents the weights currently in use for the various relations in WordNet. The depth figure indicates how many levels a particular relation is followed. Some relations, like hypernyms and pertainyms, are clearly relevant for retrieval, while others, such as antonyms, are irrelevant. If the depth is zero, as with antonyms, the relation is not followed at all: it is not useful to include antonyms in the semantic expansion of a term. If the depth is non-zero, as with hypernyms, its relative weight is given in the weight figure. Hypernyms make sense for retrieval (animals retrieves hippos) but hyponyms do not (hippos should not retrieve animals). The weight indicates the degree to which each succeeding level is discounted. Thus a ladybug is rated 90% on a query for beetle, but only 81% (90% x 90%) on a query for insect, 73% (90% x 81%) on a query for arthropod, 66% (90% x 73%) on a query for invertebrate, 59% (90% x 66%) on a query for animal, and not at all (more than four levels) on a query for organism.

Table 1: Expansion depth for WordNet relations

    Relation       Part of Speech   Depth   Weight
    ANTONYM        noun             0
    ANTONYM        verb             0
    ANTONYM        adj              0
    ANTONYM        adv              0
    HYPERNYM       noun             4       90
    HYPERNYM       verb             4       90
    HYPONYM        noun             0
    HYPONYM        verb             0
    MEM MERONYM    noun             3       90
    SUB MERONYM    noun             0
    PART MERONYM   noun             3       90
    MEM HOLONYM    noun             0
    SUB HOLONYM    noun             0
    PART HOLONYM   noun             0
    ENTAILMENT     verb             2       90
    CAUSE          verb             2       90
    ALSO SEE       verb             1       90
    ALSO SEE       adj              1       90
    ALSO SEE       adv              1       90
    ALSO SEE       noun             1       90
    SIMILAR TO     adj              2       90
    PERTAINYM      adj              2       95
    PERTAINYM      noun             2       95
    ATTRIBUTE      noun             0
    ATTRIBUTE      adj              1       80

A query for organisms returns images that match the request more closely, for example:

• An amorphous amoeba speckled with greenish-yellow blobs.

It might appear that ladybugs should be retrieved in queries for organism, but in fact such high-level queries generate thousands of hits even with only four-level expansion. In practical terms, then, the number of levels must be limited. Excalibur's WordNet-based retrieval product, RetrievalWare, does not limit expansion levels, instead allowing the expert user to eliminate particular senses of words at query time, in recognition of the need to limit term expansion in one aspect of the system if not in another. The depth and weight figures were tuned by trial and error on a corpus of several hundred thousand paragraph-length picture captions. For longer texts, the depth, particularly for hypernyms, should be less.
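The cumulative discounting in Table 1 amounts to multiplying the level weight once per level followed. A sketch (Python; the hypernym table is a hypothetical fragment, not WordNet data):

    # A hypothetical fragment of a hypernym chain, child -> parent.
    HYPERNYM = {"ladybug": "beetle", "beetle": "insect",
                "insect": "arthropod", "arthropod": "invertebrate"}

    def expansion_weight(word, query, weight=0.90, depth=4):
        # Follow hypernym links up to `depth` levels, discounting the
        # match by `weight` per level, as for noun hypernyms in Table 1.
        score, current = 1.0, word
        for _ in range(depth + 1):
            if current == query:
                return score
            current = HYPERNYM.get(current)
            if current is None:
                return 0.0
            score *= weight
        return 0.0

    print(expansion_weight("ladybug", "beetle"))  # 0.9
    print(expansion_weight("ladybug", "insect"))  # 0.81 (90% of 90%)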
The weights file does not affect which images are selected as relevant, but it does affect their relevance ranking, and thus the ordering that the user sees. In practical terms this means that for a query on animal, exact matches on animal appear first, and hippos appear before ladybugs. Of course, if the threshold is set at 50 and the weights alter a ranking from 51 to 49, the user will no longer see that image in the list at all. Technically, however, the image has not been removed from the relevance list, but rather simply downgraded.

WordNet was designed as a multifunction natural language resource, not as an IR expansion net. Inevitably, certain changes were required to tailor it for NLP-based IR. First, there were a few links high in the hierarchy that caused bizarre behavior, like animals being retrieved for queries including man or men. Other problems were some "unusual" correlations, such as:

• grimace linked to smile
• juicy linked to sexy

Second, certain slang entries were inappropriate for a commercial system and had to be removed in order to avoid giving offense. Single-sense words (e.g. crap) were not particularly problematic, since users who employed them in a query presumably did so on purpose. Polysemous terms such as nuts, skirt, and frog, however, were eliminated, since they could inadvertently cause offense.

Third, there were low-level edits of single words. Before the senses were reordered by frequency, some senses were disabled in response to user feedback. These senses caused retrieval behavior that users found inexplicable. For example, the battle sense of engagement, the fervor sense of fire, and the Indian language sense of Massachusetts all were removed, because they retrieved images that users could not link to the query. Although users were forgiving when they could understand why a bad match had occurred, they were far less patient with what they viewed as random behavior. In this case, the rarity of the senses made it difficult for users to trace the logic at work in the sense expansion.

Finally, since language evolves so quickly, new terms had to be added, e.g. rollerblade. This task was the most common and the one requiring the least expertise. Neologisms and missing terms numbered in the dozens for 500,000 sentences, a testament to WordNet's coverage.

2.2 Gazetteer Integration

Locations are processed using a gazetteer and several related files. The gazetteer (supplied by the U.S. Government for the Message Understanding Conferences [MUC]) is extremely large and comprehensive. In some ways, it is almost too large to be useful. Algorithms had to be added, for example, to select which of many choices made the most sense. Moscow is a town in Idaho, but the more relevant city is certainly the one in Russia. The gazetteer contains information on administrative units as well as rough data on city size, which we used to develop a sense-preference algorithm. The largest administrative unit (country, then province, then city) is always given a higher weight, so that New York is first interpreted as a state and then as a city. Within the city size rankings, the larger cities are weighted higher. Of course explicit designations are understood more precisely, i.e. New York State and New York City are unambiguous references only to the state and only to the city, respectively. And Moscow, Idaho clearly does not refer to any Moscow outside of Idaho. Furthermore, since this was a U.S. product, U.S. states were weighted higher than other locations, e.g. Georgia was first understood as a state, then as a country.
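A sketch of this kind of sense preference (Python; the data and ranking scheme are illustrative, not the actual gazetteer format):

    # Illustrative candidate readings for an ambiguous place name.
    SENSES = {
        "Georgia": [
            {"reading": "country of Georgia", "unit": "country",
             "us_state": False},
            {"reading": "U.S. state of Georgia", "unit": "province",
             "us_state": True},
        ],
    }

    UNIT_RANK = {"country": 0, "province": 1, "city": 2}

    def preferred_sense(place):
        # Prefer U.S. states outright; otherwise prefer the largest
        # administrative unit (country, then province, then city).
        return min(SENSES[place],
                   key=lambda s: (not s["us_state"], UNIT_RANK[s["unit"]]))

    print(preferred_sense("Georgia")["reading"])  # the U.S. state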
At the most basic level, the gazetteer is a hierarchy. It permits subunits to be retrieved, e.g. Los Angeles and San Francisco for a query California. An alias table converted the various state abbreviations and other variant forms, e.g. Washington D.C.; Washington, DC; Washington, District of Columbia; Washington DC; Washington, D.C.; DC; and D.C. Some superunits were added, e.g. Eastern Europe, New England, and equivalences based on changing political situations, e.g. Moldavia, Moldova. To handle queries like northern Montana, initial steps were taken to include latitude and longitude information. The algorithm, never implemented, was to take the northernmost 50% of the unit. So if Montana covers X to Y north latitude, northern Montana would be between (X+Y)/2 and Y.

Additional locations are matched on the fly by patterns and then treated as units for purposes of retrieval. For example, Glacier National Park or Mount Hood should be treated as phrases. To accomplish this, a pattern matcher, based on finite state automata, operates on simple patterns such as:

    (LOCATION --
      (& (* {word "[A-Z][a-z]*"})
         {word "[Nn]ational"}
         {OR {word "[Pp]ark"}
             {word "[Ff]orest"}}))

2.3 Syntactic and Other Patterns

The pattern matcher also performs noun phrase (NP) identification, using the following pattern for core NPs:

    (& {tag deter}
       [MODIFIER (& (? (& {tag adj} {tag conj}))
                    (* -- {tag noun} {tag adj} {tag number} {tag listmark}))]
       [HEAD_NOUN {tag noun}])

Identification of core NPs (i.e. modifier-head groupings, without any trailing prepositional phrases or other modifiers) makes it possible to distinguish stock cars from car stocks, and, for a query on little girl in a red shirt, to retrieve girls in red shirts in preference to a girl in a blue shirt and red hat. Examples of images returned for the little girl in a red shirt query, rated at 92%, include:

• Two smiling young girls wearing matching jean overalls, red shirts. The older girl wearing a blue baseball cap sideways has blond pigtails with yellow ribbons. The younger girl wears a yellow baseball cap sideways.
• An African American little girl wearing a red shirt, jeans, colorful hairband, ties her shoelaces while sitting on a patterned rug on the floor.
• A young girl in a bright red shirt reads a book while sitting in a chair with her legs folded. The hedges of a garden surround the girl while a woods thick with green leaves lies nearby.
• A young Hispanic girl in a red shirt smiles to reveal braces on her teeth.

The following image appears with a lower rating, 90%, because the red shirt is later in the sentence. The noun phrase ratings do not play a role here, since red does modify shirt in this case; the ratings apply only to core noun phrases, not prepositional modifiers.

• A young girl in a blue shirt presents a gift to her father. The father wears a red shirt.

Images with girls in non-red shirts appear with even lower ratings if no red shirt is mentioned at all. This image was ranked at 88%.

• A laughing little girl wearing a straw hat with a red flower, a purple shirt, blue jean overalls.

Of course, in a fully NLP-based IR system, neither of these examples would match at all. But full NLP is too slow for this application, and partial matches do seem to be useful to its users, i.e. they do seem to lead to licensing of photos.
Using the output of the part-of-speech tagger, the patterns yield weights that prefer syntactically similar matches over scrambled or partial matches. The weights file for NPs contains three multipliers that can be set:

    scale noun 200: sets the relative weight of the head noun itself to 200%.
    scale modifier 50: sets the relative importance of each modifier to half of what it would be otherwise.
    scale phrase 200: sets the relative weight of the entire noun phrase, compared to the old ranking values. This effect multiplies the noun and modifier effects, i.e. it is cumulative.

2.4 Name Recognition

Patterns are also the basis for the name recognition module, supporting recognition of the names of persons and organizations. Elements marked as names are then marked with a preference that they be retrieved as a unit, and the names are expanded to match related forms. Thus Bob Dole does not match Bob Packwood worked with Dole Pineapple at 100%, but it does match Senator Robert Dole. The name recognition patterns employ a large file of name variants, set up as a simple alias table: the nicknames and variants of each name appear on a single line in the file. The name variants were derived manually from standard sources, including baby-naming books.

3 Interactions

In developing the system, interactions between subsystems posed particular challenges. In general, the problems arose from conflicts in data files. In keeping with the layered approach and with good software engineering in general, the system is maximally modular and data-driven. Several of the modules utilize the same types of information, and inconsistencies caused conflicts in several areas. The part-of-speech tagger, morphological analyzer, tokenizer, gazetteer, semantic net, stop-word list, and Boolean logic all had to be made to cooperate. This section describes several problems in interaction and how they were addressed. In most cases, the solution was tighter data integration, i.e. having the conflicting subsystems access a single shared data file. Other cases were addressed by loosening restrictions, providing a backup in case of inexact data coordination.

The morphological analyzer sometimes stemmed differently from WordNet, complicating synonym lookup. The problem was solved by using WordNet's morphology instead. In both cases, morphological variants are created in advance and stored, so that stemming is a lookup rather than a run-time process. Switching to WordNet's morphology was therefore quite simple. However, some issues remain. For example, pies lists the three senses of pi first, before the far more likely pie.

The database on which the part-of-speech tagger trained was a collection of Wall Street Journal articles. This presented a problem, since the domain was specialized. In any event, since the training data set was not WordNet, they did not always agree. This was sorted out by performing searches independent of part of speech if no match was found for the initial part-of-speech choice. That is, if the tagger marked short as a verb only (as in to short a stock), and WordNet did not find a verb sense, the search was broadened to allow any part of speech in WordNet.

Apostrophes in possessives are tokenized as separate words, turning Alzheimer's into Alzheimer 's and Nicole's into Nicole 's. In the former case, the full form is in WordNet and therefore should be taken as a unit; in the latter case, it should not. The fix here was to look up both, preferring the full form.
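A sketch of the both-forms lookup (Python; in_lexicon is a stand-in for the actual WordNet check):

    def normalize_possessive(token, in_lexicon):
        # After tokenization, "Alzheimer 's" yields the bare token;
        # prefer the full possessive form when the lexicon has it.
        full = token + "'s"
        return full if in_lexicon(full) else token

    lexicon = {"Alzheimer's"}
    print(normalize_possessive("Alzheimer", lexicon.__contains__))  # Alzheimer's
    print(normalize_possessive("Nicole", lexicon.__contains__))     # Nicole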
For pluralia tantum words (shorts, fatigues, doubles, AIDS, twenties), stripping the affix -s and then looking up the root word gives incorrect results. Instead, when the word is plural, the pluralia tantum, if there is one, is preferred; when it is singular, that meaning is ruled out.

WordNet contains some location information, but it is not nearly as complete as a gazetteer. Some locations, such as major cities, appear in both the gazetteer and in WordNet, and, particularly when there are multiple "senses" (New York state and city, Springfield), must be reconciled. We used the gazetteer for all location expansions, and recast it so that it was in effect a branch of the WordNet semantic net, i.e. hierarchically organized and attached at the appropriate WordNet node. This recasting enabled us to take advantage of WordNet's generic terms, so that city lights, for example, would match lights on a Philadelphia street. It also preserved the various gazetteer enhancements, such as the sense-preference algorithm, superunits, and equivalences.

Boolean operators appear covertly as English words. Many IR systems ignore them, but that yields counterintuitive results. Instead of treating operators as stop words and discarding them, we instead perform special handling on the standard set of Boolean operators, as well as an expandable set of synonyms. For example, given insects except ants, many IR systems simply discard except, turning the query, incorrectly, into insects and ants, retrieving exactly the items the user does not want. To avoid this problem, we convert the terms in Table 2 into Boolean operators.

Table 2: Conversions from English to Boolean

    English    Boolean
    and        and
    or         or
    with       and
    not        not
    but        and
    without    not
    except     not
    nor        not
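A sketch of the Table 2 conversion (Python; here "not" in the Boolean column is read as negating the term that follows, which is how the insects except ants example behaves):

    # English connectives -> Boolean operators, following Table 2.
    CONNECTIVES = {"and": "AND", "or": "OR", "with": "AND",
                   "not": "AND NOT", "but": "AND", "without": "AND NOT",
                   "except": "AND NOT", "nor": "AND NOT"}

    def convert(words):
        return " ".join(CONNECTIVES.get(w.lower(), w) for w in words)

    print(convert(["insects", "except", "ants"]))  # insects AND NOT ants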
4 Evaluation

Evaluation has two primary goals in commercial work. First, is the software robust enough and accurate enough to satisfy paying customers? Second, is a proposed change or new feature an improvement or a step backward?

Customers are more concerned with precision, because they do not like to see matches they cannot explain. Precision above about 80% eliminated the majority of customer complaints about accuracy. Oddly enough, they are quite willing to make excuses for bad system behavior, explaining away implausible matches, once they have been convinced of the system's basic accuracy. The customers rarely test recall, since it is rare either for them to know which pictures are available or to enter successive related queries and compare the match sets. Complaints about recall in the initial stages of system development came from suppliers, who wanted to ensure their own pictures could be retrieved reliably.

To test recall as well as precision in a controlled environment, in the early phase of development, a test set of 1200 images was created and manually matched, by a photo researcher, against queries submitted by other photo researchers. The process was time-consuming and frustratingly imprecise: it was difficult to score, since matches can be partial, and it was hard to determine how much credit to assign for, say, a 70% match that seemed more like a 90% match to the human researcher. Precision tests on the live (500,000-image) PNI system were much easier to evaluate, since the system was more likely to have the images requested. For example, while a database containing no little girls in red shirts will offer up girls with any kind of shirt and anything red, a comprehensive database will bury those imperfect matches beneath the more highly ranked, more accurate matches. Ultimately, precision was tested on 50 queries on the full system; any bad match, or partial match if ranked above a more complete match, was counted as a miss, and only the top 20 images were rated. Recall was tested on a 50-image subset created by limiting such non-NLP criteria as image orientation and photographer. Precision was 89.6% and recall was 92%.

In addition, precision was tested by comparing query results for each new feature added (e.g. "Does noun phrase syntax do us any good? What rankings work best?"). It was also tested by series of related queries, to test, for example, whether penguins swimming retrieved the same images as swimming penguins. Recall was tested by more related queries and for each new feature, and, more formally, in comparison to keyword searches and to Excalibur's RetrievalWare. Major testing occurred when the database contained 30,000 images, and again at 150,000. At 150,000, one major result was that WordNet senses were rearranged so that they were in frequency order based on the senses hand-tagged by captioners for the initial 150,000 images.

In one of our retrieval tests, the combination of noun phrase syntax and name recognition improved recall by 18% at a fixed precision point. While we have not yet attempted to test the two capabilities separately, it does appear that name recognition played a larger role in the improvement than did noun phrase syntax. This is in accord with previous literature on the contributions of noun phrase syntax (Lewis, 1992), (Lewis and Croft, 1990).

4.1 Does Manual Sense-Tagging Improve Precision?

Preliminary experiments were performed on two subcorpora, one with WordNet senses manually tagged, and the other completely untagged.
Eleven queries scored better in the sense-tagged corpus, while only two scored better in the untagged corpus. The remainder scored the same in both cor- pora. In terms of precision, the sense-tagged corpus scored 99% while the untagged corpus scored 89% (both figures are artificially inflated, but in parallel, since only crossing matches are scored as bad). 5 Future Directions Future work will concentrate on speed and space op- timizations, and determining how subcomponents of this NLP capability can be incorporated into ex- isting IR packages. This fine-grained NLP-based IR can also answer questions such as who, when, and where, so that the items retrieved can be more specifically targeted to user needs. The next step for caption-based systems will be to incorporate au- tomatic disambiguation, so that captioners will not need to select a WordNet sense for each ambigu- ous word. In this auto-disambiguation investiga- tion, it will be interesting to determine whether a specialized corpus, e.g. of photo captions, performs sense-tagging significantly better than a general- purpose corpus, such as the Brown corpus (Francis and Ku~era, 1979). References Church, K. W. 1988. Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Proceedings of the Second Conference on Applied Natural Language Processing, Austin, TX, 1988. Evans, D. and C. Zhai 1996. Noun-Phrase Analy- sis in Unrestricted Text for Information Retrieval. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (A CL), Santa Cruz, CA, 24-27 June 1996, pp.17-24. Flank, S., P. Martin, A. Balogh and J. Rothey 1995. PhotoFile: A Digital Library for Image Retrieval. In Proceedings of the International Conference o17 Multimedia Computing and Systems (IEEE), Washington, DC, 15-18 May 1995, pp. 292-295. Flank, S., D. Garfield, and D. Norkin 1995. Dig- ital Image Libraries: An Innovative Method for Storage, Retrieval, and Selling of Color Images. In Proceedings of the First International Sympo- sium on Voice, Video, and Data Communications of the Society of Photo-Optical Instrumentation Engineers (SPIE}, Philadelphia, PA, 23-26 Octo- ber 1995. Francis, W. N. and H. Ku~era 1979. Manual of btformation to Accompany a Standard Corpus of Present-Day Edited American English, for use with Digital Computers (Corrected and Revised Edition), Department of Linguistics, Brown Uni- versity, Providence, RI. Lewis, D. D. 1992. An Evaluation of Phrasal and Clustered Representations on a Text Categoriza- tion Task. In Proceedings of ACM SIGIR, 1992, pp. 37-50. Lewis, D. D. and W. B. Croft 1990. Term Cluster- ing of Syntactic Phrases. In Proceedings of ACM SIGIR, 1990, pp. 385-404. Miller, G., M. Chodorow, S. Landes, C. Leacock and R. Thomas 1994. Using a semantic concor- dance for sense identification. In ARPA Workshop of Human Language Technology, Plainsboro, N J, March 1994, pp. 240-243. Strzalkowski, T. 1993. Natural Language Process- ing in Large-Scale Text Retrieval Tasks. In First Text Retrieval Conference (TREC-1), National Institute of Standards and Technology, March 1993, pp. 173-187. Strzalkowski, T., J. Perez Carballo and M. Mari- nescu 1995. Natural Language Information Re- trieval: TREC-3 Report. In Third Text Retrieval Conference (TREC-3), National Institute of Stan- dards and Technology, March 1995. Voorhees, E. 1994. Query Expansion Using Lexical- Semantic Relations. In Proceedings of ACM SI- GIR 1994, pp. 61-69. 403
Toward General-Purpose Learning for Information Extraction

Dayne Freitag
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
[email protected]

Abstract

Two trends are evident in the recent evolution of the field of information extraction: a preference for simple, often corpus-driven techniques over linguistically sophisticated ones; and a broadening of the central problem definition to include many non-traditional text domains. This development calls for information extraction systems which are as retargetable and general as possible. Here, we describe SRV, a learning architecture for information extraction which is designed for maximum generality and flexibility. SRV can exploit domain-specific information, including linguistic syntax and lexical information, in the form of features provided to the system explicitly as input for training. This process is illustrated using a domain created from Reuters corporate acquisitions articles. Features are derived from two general-purpose NLP systems, Sleator and Temperley's link grammar parser and Wordnet. Experiments compare the learner's performance with and without such linguistic information. Surprisingly, in many cases, the system performs as well without this information as with it.

1 Introduction

The field of information extraction (IE) is concerned with using natural language processing (NLP) to extract essential details from text documents automatically. While the problems of retrieval, routing, and filtering have received considerable attention through the years, IE is only now coming into its own as an information management sub-discipline.

Progress in the field of IE has been away from general NLP systems, which must be tuned to work in a particular domain, toward faster systems that perform less linguistic processing of documents and can be more readily targeted at novel domains (e.g., (Appelt et al., 1993)). A natural part of this development has been the introduction of machine learning techniques to facilitate the domain engineering effort (Riloff, 1996; Soderland and Lehnert, 1994).

Several researchers have reported IE systems which use machine learning at their core (Soderland, 1996; Califf and Mooney, 1997). Rather than spend human effort tuning a system for an IE domain, it becomes possible to conceive of training it on a document sample. Aside from the obvious savings in human development effort, this has significant implications for information extraction as a discipline:

Retargetability: Moving to a novel domain should no longer be a question of code modification; at most some feature engineering should be required.

Generality: It should be possible to handle a much wider range of domains than previously. In addition to domains characterized by grammatical prose, we should be able to perform information extraction in domains involving less traditional structure, such as netnews articles and Web pages.

In this paper we describe a learning algorithm similar in spirit to FOIL (Quinlan, 1990), which takes as input a set of tagged documents and a set of features that control generalization, and produces rules that describe how to extract information from novel documents. For this system, introducing linguistic or any other information particular to a domain is an exercise in feature definition, separate from the central algorithm, which is constant.
We describe a set of experiments, involving a document collection of newswire articles, in which this learner is compared with simpler learning algorithms.

2 SRV

In order to be suitable for the widest possible variety of textual domains, including collections made up of informal E-mail messages, World Wide Web pages, or netnews posts, a learner must avoid any assumptions about the structure of documents that might be invalidated by new domains. It is not safe to assume, for example, that text will be grammatical, or that all tokens encountered will have entries in a lexicon available to the system. Fundamentally, a document is simply a sequence of terms. Beyond this, it becomes difficult to make assumptions that are not violated by some common and important domain of interest.

At the same time, however, when structural assumptions are justified, they may be critical to the success of the system. It should be possible, therefore, to make structural information available to the learner as input for training.

The machine learning method with which we experiment here, SRV, was designed with these considerations in mind. In experiments reported elsewhere, we have applied SRV to collections of electronic seminar announcements and World Wide Web pages (Freitag, 1998). Readers interested in a more thorough description of SRV are referred to (Freitag, 1998). Here, we list its most salient characteristics:

• Lack of structural assumptions. SRV assumes nothing about the structure of a field instance (we use the terms field and field instance for the rather generic IE concepts of slot and slot filler; for a newswire article about a corporate acquisition, for example, a field instance might be the text fragment listing the amount paid as part of the deal) or the text in which it is embedded, only that an instance is an unbroken fragment of text. During learning and prediction, SRV inspects every fragment of appropriate size.

• Token-oriented features. Learning is guided by a feature set which is separate from the core algorithm. Features describe aspects of individual tokens, such as capitalized, numeric, noun. Rules can posit feature values for individual tokens, or for all tokens in a fragment, and can constrain the ordering and positioning of tokens.

• Relational features. SRV also includes a notion of relational features, such as next-token, which map a given token to another token in its environment. SRV uses such features to explore the context of fragments under investigation.

• Top-down greedy rule search. SRV constructs rules from general to specific, as in FOIL (Quinlan, 1990). Top-down search is more sensitive to patterns in the data, and less dependent on heuristics, than the bottom-up search used by similar systems (Soderland, 1996; Califf and Mooney, 1997).

• Rule validation. Training is followed by validation, in which individual rules are tested on a reserved portion of the training documents. Statistics collected in this way are used to associate a confidence with each prediction; these confidences are used to manipulate the accuracy-coverage trade-off.

3 Case Study

SRV's default feature set, designed for informal domains where parsing is difficult, includes no features more sophisticated than those immediately computable from a cursory inspection of tokens. The experiments described here were an exercise in the design of features to capture syntactic and lexical information.
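To make the rule representation concrete, here is a sketch of what a conjunction of token-feature tests amounts to (Python; the encoding is ours, not SRV's actual implementation):

    # A rule is a conjunction of per-token feature tests over a candidate
    # fragment; SRV considers every fragment of appropriate size.
    def capitalized(tok): return tok[:1].isupper()
    def numeric(tok):     return tok.isdigit()

    def matches(fragment, tests):
        # tests: (position, feature) pairs, e.g. (0, capitalized).
        return all(pos < len(fragment) and feature(fragment[pos])
                   for pos, feature in tests)

    rule = [(0, capitalized), (1, numeric)]
    print(matches(["Reuters", "1987"], rule))  # True
    print(matches(["the", "deal"], rule))      # False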
3.1 Domain

As part of these experiments we defined an information extraction problem using a publicly available corpus. 600 articles were sampled from the "acquisition" set in the Reuters corpus (Lewis, 1992) and tagged to identify instances of nine fields. Fields include those for the official names of the parties to an acquisition (acquired, purchaser, seller), as well as their short names (acqabr, purchabr, sellerabr), the location of the purchased company or resource (acqloc), the price paid (dlramt), and any short phrases summarizing the progress of negotiations (status). The fields vary widely in length and frequency of occurrence, both of which have a significant impact on the difficulty they present for learners.

3.2 Feature Set Design

We augmented SRV's default feature set with features derived using two publicly available NLP tools, the link grammar parser and Wordnet.

[Figure 1: An example of link grammar feature derivation. A link grammar parse of "First Wisconsin Corp said it plans ..." is shown, together with the features derived for each token, e.g. token: Corp, lg_tag: nil, and relational features such as left_C and left_S pointing to the tokens on the left side of the corresponding links.]

The link grammar parser takes a sentence as input and returns a complete parse in which terms are connected in typed binary relations ("links") which represent syntactic relationships (Sleator and Temperley, 1993). We mapped these links to relational features: a token on the right side of a link of type X has a corresponding relational feature called left_X that maps to the token on the left side of the link. In addition, several non-relational features, such as part of speech, are derived from parser output. Figure 1 shows part of a link grammar parse and its translation into features.

Our object in using Wordnet (Miller, 1995) is to enable SRV to recognize that the phrases "A bought B" and "X acquired Y" are instantiations of the same underlying pattern. Although "bought" and "acquired" do not belong to the same "synset" in Wordnet, they are nevertheless closely related in Wordnet by means of the "hypernym" (or "is-a") relation. To exploit such semantic relationships we created a single token feature, called wn_word. In contrast with the features already outlined, which are mostly boolean, this feature is set-valued. For nouns and verbs, its value is a set of identifiers representing all synsets in the hypernym path to the root of the hypernym tree in which a word occurs. For adjectives and adverbs, these synset identifiers were drawn from the cluster of closely related synsets. In the case of multiple Wordnet senses, we used the most common sense of a word, according to Wordnet, to construct this set.

3.3 Competing Learners

We compare the performance of SRV with that of two simple learning approaches, which make predictions based on raw term statistics. Rote (see (Freitag, 1998)) memorizes field instances seen during training and only makes predictions when the same fragments are encountered in novel documents. Bayes is a statistical approach based on the "Naive Bayes" algorithm (Mitchell, 1997). Our implementation is described in (Freitag, 1997). Note that although these learners are "simple," they are not necessarily ineffective. We have experimented with them in several domains and have been surprised by their level of performance in some cases.

4 Results

The results presented here represent average performances over several separate experiments. In each experiment, the 600 documents in the collection were randomly partitioned into two sets of 300 documents each. One of the two subsets was then used to train each of the learners, the other to measure the performance of the learned extractors.

We compared four learners: each of the two simple learners, Bayes and Rote, and SRV with two different feature sets, its default feature set, which contains no "sophisticated" features, and the default set augmented with the features derived from the link grammar parser and Wordnet. We will refer to the latter as SRV+ling.

Results are reported in terms of two metrics closely related to precision and recall, as seen in information retrieval: accuracy, the percentage of documents for which a learner predicted correctly (extracted the field in question) over all documents for which the learner predicted; and coverage, the percentage of documents having the field in question for which a learner made some prediction.
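In code, the two metrics amount to the following (a Python sketch; a prediction of None stands for "no prediction"):

    def accuracy_and_coverage(examples):
        # examples: (prediction, truth) pairs, one per document that
        # contains the field; None means the learner abstained.
        if not examples:
            return 0.0, 0.0
        predicted = [(p, t) for p, t in examples if p is not None]
        correct = sum(1 for p, t in predicted if p == t)
        accuracy = 100.0 * correct / len(predicted) if predicted else 0.0
        coverage = 100.0 * len(predicted) / len(examples)
        return accuracy, coverage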
In each experiment, the 600 documents in the collection were randomly partitioned into two sets of 300 documents each. One of the two subsets was then used to train each of the learn- ers, the other to measure the performance of the learned extractors. \¥e compared four learners: each of the two simple learners, Bayes and Rote, and SRV with two different feature sets, its default feature set, which contains no "sophisticated" features, and the default set augmented with the features de- rived from the link grammar parser and Word- net. \¥e will refer to the latter as 5RV+ling. Results are reported in terms of two metrics closely related to precision and recall, as seen in information retrievah Accuracy, the percentage of documents for which a learner predicted cor- rectly (extracted the field in question) over all documents for which the learner predicted; and coverage, the percentage of documents having the field in question for which a learner made some prediction. 4.1 Performance Table 1 shows the results of a ten-fold exper- iment comparing all four learners on all nine fields. Note that accuracy and coverage must be considered together when comparing learn- ers. For example, Rote often achieves reasonable accuracy at very low coverage. Table 2 shows the results of a three-fold ex- periment, comparing all learners at fixed cover- 406 Acc lCov Alg acquired Rote 59.6 18.5 Bayes 19.8 100 SRV 38.4 96.6 SRVIng 38.0 95.6 acqabr Rote 16.1 42.5 Bayes 23.2 100 SRV 31.8 99.8 SRVlng 35.5 99.2 acqloc Rote 6.4 63.1 Bayes 7.0 100 SRV 12.7 83.7 SRVlng 15.4 80.2 Ace IV or purchaser 43.2 23.2 36.9 100 42.9 97.9 42.4 96.3 purchabr 3.6 41.9 39.6 100 41.4 99.6 43.2 99.3 status 42.0 94.5 33.3 100 39.1 89.8 41.5 87.9 Acc l Cov seller 38.5 15.2 15.6 100 16.3 86.4 16.4 82.7 sellerabr 2.7 27.3 16.0 100 14.3 95.1 14.7 91.8 dlramt 63.2 48.5 24.1 100 50.5 91.0 52.1 89.4 Table 1: Accuracy and coverage for all four learners on the acquisitions fields. age levels, 20% and 80%, on four fields which we considered representative of tile wide range of behavior we observed. In addition, in order to assess the contribution of each kind of linguis- tic information (syntactic and lexical) to 5RV's performance, we ran experiments in which its basic feature set was augmented with only one type or the other. 4.2 Discussion Perhaps surprisingly, but consistent with results we have obtained in other domains, there is no one algorithm which outperforms the others on all fields. Rather than the absolute difficulty of a field, we speak of the suitability of a learner's inductive bias for a field (Mitchell, 1997). Bayes is clearly better than SRV on the seller and sellerabr fields at all points on the accuracy- coverage curve. We suspect this may be due, in part, to the relative infrequency of these fields in the data. The one field for which the linguistic features offer benefit at all points along the accuracy- coverage curve is acqabr. 2 We surmise that two factors contribute to this success: a high fre- quency of occurrence for this field (2.42 times 2The acqabr differences in Table 2 (a 3-split exper- iment) are not significant at the 95% confidence level. However, the full 10-split averages, with 95% error mar- gins, are: at 20% coverage, 61.5+4.4 for SRV and 68.5=1=4.2 for SRV-I-[ing; at 80% coverage, 37.1/=2.0 for SRV and 42.4+2.1 for SRV+ling. Field 80%[20% Rote p.r0h .... .. -- ' 50.3 acqabr .... 24.4 dlramt .... 69.5 status 46.7 65.3 SRV+ling purch .... 
48.5 56.3 acqabr 44.3 75.4 dlramt 57.1 61.9 status 43.3 72.6 80%12o% Bayes 40.6 55.9 29.3 50.6 45.9 71.4 39.4 62.1 srv+lg 46.3 63.5 40.4 71.4 55.4 67.3 38.8 74.8 80%120% SRV 45.3 55.7 40.0 63.4 57.1 66.7 43.8 72.5 srv- -wfl 46.7 58.1 41.9 72.5 52.6 67.4 42.2 74.1 Table 2: Accuracy from a three-split experiment at fixed coverage levels. A fragment is a acqabr, if: it contains exactly one token; the token (T) is capitalized; T is followed by a lower-case token; T is preceded by a lower-case token; T has a right AN-link to a token (U) with wn_word value "possession"; U is preceded by a token with wn_word value "stock"; and the token two tokens before T is not a two-character token. to purchase 4.5 m l n ~ common shares at acquire another 2.4 mln~-a6~treasury shares Figure 2: A learned rule for acqabr using linguis- tic features, along with two fragments of match- ing text. The AN-link connects a noun modifier to the noun it modifies (to "shares" in both ex- amples). per document on average), and consistent oc- currence in a linguistically rich context. Figure 2 shows a 5RV+ling rule that is able to exploit both types of linguistic informa- tion. The Wordnet synsets for "possession" and "stock" come from the same branch in a hy- pernym tree--"possession" is a generalization of "stock"3--and both match the collocations "common shares" and "treasury shares." That the paths [right_AN] and [right_AN prev_tok] both connect to the same synset indicates the presence of a two-word Wordnet collocation. It is natural to ask why SRV+ling does not 3SRV, with its general-to-specific search bias, often employs Wordnet this way--first more general synsets, followed by specializations of the same concept. 407 outperform SRV more consistently. After all, the features available to SRV+ling are a superset of those available to SRV. As we see it, there are two basic explanations: • Noise. Heuristic choices made in handling syntactically intractable sentences and in disambiguating Wordnet word senses in- troduced noise into the linguistic features. The combination of noisy features and a very flexible learner may have led to over- fitting that offset any advantages the lin- guistic features provided. • Cheap features equally effective. The simple features may have provided most of the necessary information. For exam- ple, generalizing "acquired" and "bought" is only useful in the absence of enough data to form rules for each verb separately. 4.3 Conclusion More than similar systems, SRV satisfies the cri- teria of generality and retargetability. The sep- aration of domain-specific information from the central algorithm, in the form of an extensible feature set, allows quick porting to novel do- mains. Here, we have sketched this porting process. Surprisingly, although there is preliminary evi- dence that general-purpose linguistic informa- tion can provide benefit in some cases, most of the extraction performance can be achieved with only the simplest of information. Obviously, the learners described here are not intended to solve the information extraction problem outright, but to serve as a source of in- formation for a post-processing component that will reconcile all of the predictions for a docu- ment, hopefully filling whole templates more ac- curately than is possible with any single learner. How this might be accomplished is one theme of our future work in this area. Acknowledgments Part of this research was conducted as part of a summer internship at Just Research. 
And it was supported in part by the DARPA HPKB program under contract F30602-97-1-0215.

References

Douglas E. Appelt, Jerry R. Hobbs, John Bear, David Israel, and Mabry Tyson. 1993. FASTUS: a finite-state processor for information extraction from real-world text. In Proceedings of IJCAI-93, pages 1172-1178.
M. E. Califf and R. J. Mooney. 1997. Relational learning of pattern-match rules for information extraction. In Working Papers of ACL-97 Workshop on Natural Language Learning.
D. Freitag. 1997. Using grammatical inference to improve precision in information extraction. In Notes of the ICML-97 Workshop on Automata Induction, Grammatical Inference, and Language Acquisition. http://www.cs.cmu.edu/f)dupont/m197p/m197_GI_wkshp.tar
Dayne Freitag. 1998. Information extraction from HTML: Application of a general machine learning approach. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98).
D. Lewis. 1992. Representation and Learning in Information Retrieval. Ph.D. thesis, University of Massachusetts. CS Tech. Report 91-93.
G. A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, pages 39-41, November.
Tom M. Mitchell. 1997. Machine Learning. The McGraw-Hill Companies, Inc.
J. R. Quinlan. 1990. Learning logical definitions from relations. Machine Learning, 5(3):239-266.
E. Riloff. 1996. Automatically generating extraction patterns from untagged text. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), pages 1044-1049.
Daniel Sleator and Davy Temperley. 1993. Parsing English with a link grammar. In Third International Workshop on Parsing Technologies.
Stephen Soderland and Wendy Lehnert. 1994. Wrap-Up: a trainable discourse module for information extraction. Journal of Artificial Intelligence Research, 2:131-158.
S. Soderland. 1996. Learning Text Analysis Rules for Domain-specific Natural Language Processing. Ph.D. thesis, University of Massachusetts. CS Tech. Report 96-087.
Japanese Morphological Analyzer using Word Co-occurrence - JTAG -

Takeshi FUCHI
NTT Information and Communication Systems Laboratories
Hikari-no-oka 1-1, Yokosuka 239-0847, Japan
[email protected]

Shinichiro TAKAGI
NTT Information and Communication Systems Laboratories
Hikari-no-oka 1-1, Yokosuka 239-0847, Japan
[email protected]

Abstract

We developed a Japanese morphological analyzer that uses the co-occurrence of words to select the correct sequence of words in an unsegmented Japanese sentence. The co-occurrence information can be obtained from cases where the system incorrectly analyzes sentences. As the amount of information increases, the accuracy of the system increases with a small risk of degradation. Experimental results show that the proposed system assigns the correct phonological representations to unsegmented Japanese sentences more precisely than other popular systems do.

Introduction

In natural language processing for Japanese text, morphological analysis is very important. Currently, there are two main methods for automatic part-of-speech tagging, namely, corpus-based and rule-based methods. The corpus-based method is popular for European languages. Samuelsson and Voutilainen (1997), however, show that a rule-based tagger achieves significantly higher accuracy than statistical taggers for English text. On the other hand, most Japanese taggers (in this paper, a tagger is identical to a morphological analyzer) are rule-based. In previous Japanese taggers, it was difficult to increase the accuracy of the analysis. Takeuchi and Matsumoto (1995) combined a rule-based and a corpus-based method, resulting in a marginal increase in the accuracy of their taggers. However, this increase is still insufficient. The source of the trouble is the difficulty of adjusting the grammar and parameters.

Our tagger is also rule-based. By using the co-occurrence of words, it reduces this difficulty and generates a continuous increase in its accuracy. The proposed system analyzes unsegmented Japanese sentences and segments them into words. Each word has a part-of-speech and a phonological representation. Our tagger has the co-occurrence information of words in its dictionary. The information can be adjusted concretely by hand in each case of incorrect analysis. Concrete adjustment is different from detailed adjustment. It must be easy to understand for people who make adjustments to the system. The effect of one adjustment is concrete but small. Therefore, much manual work is needed. However, the work is simple and easy.

Section 1 shows the drawbacks of previous systems. Section 2 describes the outline of the proposed system. In Section 3, the accuracy of the system is compared with that of others.

1 Previous Japanese Morphological Analyzers

Most Japanese morphological analyzers use linguistic grammar, generate possible sequences of words from an input string, and select a sequence. The following are methods for selecting the sequence:

• Choose the sequence that has a longer word on the right-hand side. (right longest match principle)
• Choose the sequence that has a longer word on the left-hand side. (left longest match principle)
• Choose the sequence that has the least number of phrases. (least number of phrases principle)
• Choose the sequence that has the least connective-cost of words. (least connective-cost principle)
• Use pattern matching of words and/or parts-of-speech to specify the priority of sequences.
• Choose the sequence that contains modifiers and modifiees.
• Choose the sequence that contains words used frequently.
In practice, combinations of the above methods are used. Using these methods, many Japanese morphological analyzers have been created. However, their accuracy cannot increase continuously, in spite of careful manual and statistical adjustments. The cause of incorrect analyses is not only unregistered words; in fact, many sentences are analyzed incorrectly even though the dictionaries contain a sufficient vocabulary for them. In this case, the system generates a correct sequence but does not select it. Parameters such as the priorities of words and the connective costs between parts-of-speech can be adjusted so that the correct sequence is selected. However, this adjustment often causes incorrect side effects, and the system then analyzes incorrectly other sentences that had already been analyzed correctly. This phenomenon is called 'degradation'. In addition to parameter adjustment, parts-of-speech may need to be expanded. Both operations are almost impossible to complete for people who are not very familiar with the system. If the system uses a complex algorithm to select a sequence of words, even the system developer can hardly grasp its behaviour. These operations become more than a few experts can handle, because the vocabularies in the systems are big. Even to add an unregistered word to a dictionary, operators must have good knowledge of parts-of-speech, the priorities of words, and the word classification for modifiers and modifiees. In this situation, it is difficult to increase the number of operators. This is the situation with previous analyzers. Unfortunately, current statistical taggers cannot avoid it either: their tuning is very subtle, and it is hard to predict the effect of parameter tuning. To avoid this situation, our tagger uses the co-occurrence of words, whose effect is easy to understand.

2 Overview of our system
We developed the Japanese morphological analyzer JTAG, paying attention to a simple algorithm, straightforward adjustment, and flexible grammar. The features of JTAG are the following:
• An attribute value is an atom. In our system, each word has several attribute values. An attribute value is limited so as not to have structure. Giving an attribute value to words is equivalent to naming the words as a group.
• New attribute values can be introduced easily. An attribute value is a simple character string. When a new attribute value is required, the user writes a new string in the attribute field of a record in a dictionary.
• The number of attribute values is unlimited.
• A part-of-speech is a kind of attribute value.
• Grammar is a set of connection rules. Grammar is implemented with connection rules between attribute values. List 1 is an example (actual rules use Japanese characters). One connection rule is written on one line. The fields are separated by commas. The attribute values of the word on the left are written in the first field, and those of the word on the right in the second field. In the last field, the cost of the rule is written. (The cost figures were determined intuitively. The grammar is used mainly to generate possible sequences of words, so the determination of the cost figures was not very subtle; the precise selection of the correct sequence is done by the co-occurrence of words.) Attribute values are separated by colons. A minus sign '-' means negation.
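To make this format concrete, here is a minimal Python sketch of a reader for such rules; it is our illustration, not the authors' code, and the example rule strings are taken from List 1 below.

    # A minimal sketch of a parser for JTAG-style connection rules.
    # Not the authors' implementation; actual rules use Japanese
    # attribute names.
    def parse_rule(line):
        """Parse 'left-attrs, right-attrs, cost' into a structured rule."""
        left, right, cost = [field.strip() for field in line.split(",")]
        def parse_attrs(field):
            # Attribute values are separated by colons; a leading '-' negates.
            positive, negative = set(), set()
            for value in field.split(":"):
                (negative if value.startswith("-") else positive).add(value.lstrip("-"))
            return positive, negative
        return parse_attrs(left), parse_attrs(right), int(cost)

    def rule_applies(rule, left_word_attrs, right_word_attrs):
        """A rule applies when each word carries all required attribute
        values and none of the negated ones."""
        (lpos, lneg), (rpos, rneg), _ = rule
        return (lpos <= left_word_attrs and not (lneg & left_word_attrs)
                and rpos <= right_word_attrs and not (rneg & right_word_attrs))

    rules = [parse_rule(r) for r in [
        "Noun, Case:ConVerb, 50",
        "Noun:Name, Postfix:Noun, 100",
        "Noun:-Name, Postfix:Noun, 90",
    ]]
    # The third rule applies to a Noun without the 'Name' attribute:
    print(rule_applies(rules[2], {"Noun"}, {"Postfix", "Noun"}))  # True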
Noun, Case:ConVerb, 50
Noun:Name, Postfix:Noun, 100
Noun:-Name, Postfix:Noun, 90
Copula:de, VerbStem:Lde, 50
List 1: Connection rules.

For example, the first rule shows that a word with 'Noun' can be followed by a word with 'Case' and 'ConVerb'. The cost of the rule is 50. The second rule shows that a word with 'Noun' and 'Name' can be followed by a word with 'Postfix' and 'Noun'. The cost is 100. The third rule shows that a word that has 'Noun' and does not have 'Name' can be followed by a word with 'Postfix' and 'Noun'. The cost is 90. Only the word 'で' has the combination of 'Copula' and 'de', so the fourth rule is specific to it.
• The co-occurrence of words. In our system, the sequence of words that includes the maximum number of co-occurrences of words is selected. Table I shows examples of records in a dictionary. '額' means 'amount', 'frame', 'forehead', or the human name 'Gaku'. In the co-occurrence field, words are presented directly. If there are no co-occurrence words in a sentence that includes '額', 'amount' is selected because its cost is the smallest. If '絵' (picture) is in the sentence, 'frame' is selected.
• Selection algorithm. JTAG selects the correct sequence of words using the connective cost, the number of co-occurrences, the priority of words, and the length of words. The precise description of the algorithm is given in the Appendix. This algorithm is too simple to analyze Japanese sentences perfectly; however, it is sufficient in practice.

3 Evaluation
In this section, Japanese morphological analyzers are evaluated on the following:
• Segmentation
• Part-of-speech tagging
• Phonological representation
JTAG is compared with JUMAN (Version 3.4, http://www-nagao.kuee.kyoto-u.ac.jp/index-e.html) and CHASEN (Version 1.5.1, http://cactus.aist-nara.ac.jp/lab/nlt/chasen.html). A single "correct analysis" is meaningless because these taggers use different parts-of-speech, grammars, and segmentation policies. We checked the outputs of each and selected the incorrect analyses that the grammar maker of each system would not expect.

3.1 Comparison
To make the outputs of the systems comparable, we reduce them to 21 parts-of-speech and 14 verb-inflection types. In addition, we assume that the part-of-speech of unrecognized words is Noun. The segmentation policies are not unified; therefore, the number of words in the sentences differs from system to system. Table II shows the system accuracy. We used 500 sentences (19,519 characters) from the EDR corpus (Japan Electronic Dictionary Research Institute, http://www.iijnet.or.jp/edr/). The sentences do not include Arabic numerals, because JUMAN and CHASEN do not assign a phonological representation to them.

Table II: Accuracy per word (precision | recall)
                                      JTAG            JUMAN           CHASEN
Vocabulary                            350K            710K            115K
Standard words                        11809           9830            9901
Output words                          11855           9864            9948
Segmentation                          98.9% | 99.3%   98.9% | 99.3%   98.5% | 98.9%
Segmentation & part-of-speech         98.8% | 99.2%   98.3% | 98.7%   97.6% | 98.1%
Segmentation & phoneme                98.8% | 99.2%   98.2% | 98.6%   97.5% | 97.9%
Segmentation & phoneme & POS          98.7% | 99.1%   98.0% | 98.3%   97.1% | 97.6%
For segmentation, the accuracy of JTAG is the same as that of JUMAN. Table II shows that JTAG assigns the correct phonological representations to unsegmented Japanese sentences more precisely than the other systems do.

Table III: Correct phonological representation per sentence (average 38 characters per sentence; Sun Ultra-1, 170 MHz)
                     JTAG     JUMAN     CHASEN
Conversion ratio     88.5%    71.7%     72.3%
Processing time      86 sec   576 sec   335 sec

Table III shows the ratio of sentences that are converted to the correct phonological representation when segmentation errors are ignored. 80,000 sentences (3,038,713 characters, no Arabic numerals) from the EDR corpus were used. (In the EDR corpus, 2.3% of the sentences have errors and 1.5% have phonological representation inconsistencies; here, these sentences were not revised.) The average number of characters in one sentence is 38. JTAG converts 88.5% of the sentences correctly, a ratio much higher than that of the other systems. Table III also shows the processing time of each system: JTAG analyzes Japanese text more than four times faster than the other taggers. The simplicity of the JTAG selection algorithm contributes to this speed.

3.2 Adjustment Process
To show the adjustability of JTAG, we tuned it for a specific set of 10,000 sentences (311,330 characters without Arabic numerals; average 31 characters per sentence; in this set, we fixed all errors in the sentences and the inconsistencies in their phonological representation). The average number of words in a sentence is 21. Graph 1 shows the transition of the number of sentences converted correctly to their phonological representation. We finished the adjustment when the system could no longer be tuned within the framework of JTAG. The final accuracy (99.8% per sentence) shows the maximum ability of JTAG.

[Graph 1: Transition of the number of sentences correctly converted to phonological representation. Y-axis: number of sentences (9000 to 10000); x-axis: duration of adjustment in hours (0 to 200), with phases I to IV marked.]

The features of the phases of the adjustment are described below.
Phase I. In this phase, the grammar of JTAG was changed. New attribute values were introduced and the costs of connection rules were changed. These adjustments caused large occurrences of degradation in our tagger.
Phase II. The grammar was almost fixed. One of the authors added unregistered words to the dictionaries, changed the costs of registered words, and supplied co-occurrence information. The changes in the costs of words caused a small degree of degradation.
Phase III. In this phase, all unrecognized words were registered together. The unrecognized words were extracted automatically and checked manually. The time taken for this phase is the duration of the checking.
Phase IV. Mainly, co-occurrence information was supplied. This phase caused some degradation, but the instances were very small.
Graph 1 shows that JTAG converts 91.9% of open sentences to the correct phonological representation, and 99.8% of closed sentences. Without the co-occurrence information, the ratio is 97.5%; the co-occurrence information therefore corrects 2.3% of the sentences. Without the newly registered words, the ratio is 95.6%, so unrecognized words caused an error in 4.2% of the sentences.

Table IV: Causes of errors
                       Sentences   Errors
Unrecognized words     4.2%        52%
Co-occurrence          2.3%        28%
Others                 1.6%        20%
Total                  8.1%        100%
Table IV shows the percentages of the causes.

Conclusion
We developed a Japanese morphological analyzer that analyzes unsegmented Japanese sentences more precisely than other popular analyzers. Our system uses the co-occurrence of words to select the correct sequence of words. The efficiency of the co-occurrence information was shown through experimental results. The precision of our current tagger is 98.7% and the recall is 99.1%. The accuracy of the tagger can be expected to increase further, because the risk of degradation is small when using co-occurrence information.

References
Yoshimura K., Hitaka T. and Yoshida S. (1983) Morphological Analysis of Non-marked-off Japanese Sentences by the Least BUNSETSU's Number Method. Trans. IPSJ, Vol.24, No.1, pp.40-46. (in Japanese)
Miyazaki M. and Ooyama Y. (1986) Linguistic Method for a Japanese Text to Speech System. Trans. IPSJ, Vol.27, No.11, pp.1053-1059. (in Japanese)
Hisamitsu T. and Nitta Y. (1990) Morphological Analysis by Minimum Connective-Cost Method. SIGNLC 90-8, IEICE, pp.17-24. (in Japanese)
Brill E. (1992) A simple rule-based part of speech tagger. Procs. of 3rd Conference on Applied Natural Language Processing, ACL.
Maruyama M. and Ogino S. (1994) Japanese Morphological Analysis Based on Regular Grammar. Trans. IPSJ, Vol.35, No.7, pp.1293-1299. (in Japanese)
Nagata M. (1994) A Stochastic Japanese Morphological Analyzer Using a Forward-DP Backward-A* N-Best Search Algorithm. Procs. of COLING, pp.201-207.
Fuchi T. and Yonezawa M. (1995) A Morpheme Grammar for Japanese Morphological Analyzers. Journal of Natural Language Processing, The Association for Natural Language Processing, Vol.2, No.4, pp.37-65.
Pierre C. and Tapanainen P. (1995) Tagging French: comparing a statistical and a constraint-based method. Procs. of 7th Conference of the European Chapter of the ACL, ACL, pp.149-156.
Takeuchi K. and Matsumoto Y. (1995) HMM Parameter Learning for Japanese Morphological Analyzer. Procs. of 10th Pacific Asia Conference on Language, Information and Computation, pp.163-172.
Voutilainen A. (1995) A syntax-based part of speech analyser. Procs. of 7th Conference of the European Chapter of the Association for Computational Linguistics, ACL, pp.157-164.
Matsuoka K., Takeishi E. and Asano H. (1996) Natural Language Processing in a Japanese Text-To-Speech System for Written-style Texts. Procs. of 3rd IEEE Workshop on Interactive Voice Technology for Telecommunications Applications, IEEE, pp.33-36.
Samuelsson C. and Voutilainen A. (1997) Comparing a Linguistic and a Stochastic Tagger. Procs. of 35th Annual Meeting of the Association for Computational Linguistics, ACL.
Appendix

/* Selection algorithm: keep the sequences with the lowest total
   connective cost (within a pruning margin PRUNE_RANGE), then among
   those the ones with the most word co-occurrences, then the lowest
   total word cost, and finally the most two-character words. */
ELEMENT selection(SET sequences) {
    ELEMENT selected;
    int best_total_connective_cost = MAX_INT;
    int best_number_of_cooc = -1;
    int best_total_word_cost = MAX_INT;
    int best_number_of_2character_word = -1;

    /* 1. Lowest total connective cost, with pruning margin. */
    foreach s (sequences) {
        s.total_connective_cost = sum_of_connective_cost(s);
        if (best_total_connective_cost > s.total_connective_cost) {
            best_total_connective_cost = s.total_connective_cost;
            selected = s;
        }
    }
    foreach s (sequences) {
        if (s.total_connective_cost - best_total_connective_cost > PRUNE_RANGE) {
            sequences.delete(s);
        }
    }

    /* 2. Largest number of word co-occurrences. */
    foreach s (sequences) {
        s.number_of_cooc = count_cooccurrence_of_words(s);
        if (best_number_of_cooc < s.number_of_cooc) {
            best_number_of_cooc = s.number_of_cooc;
            selected = s;
        }
    }
    foreach s (sequences) {
        if (s.number_of_cooc < best_number_of_cooc) {
            sequences.delete(s);
        }
    }

    /* 3. Lowest total word cost. */
    foreach s (sequences) {
        s.total_word_cost = sum_of_word_cost(s);
        if (best_total_word_cost > s.total_word_cost) {
            best_total_word_cost = s.total_word_cost;
            selected = s;
        }
    }
    foreach s (sequences) {
        if (s.total_word_cost > best_total_word_cost) {
            sequences.delete(s);
        }
    }

    /* 4. Largest number of two-character words. */
    foreach s (sequences) {
        s.number_of_2character_word = count_2character_word(s);
        if (best_number_of_2character_word < s.number_of_2character_word) {
            best_number_of_2character_word = s.number_of_2character_word;
            selected = s;
        }
    }
    return selected;
}
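For readers who prefer runnable code, here is a minimal Python re-sketch of the same cascade; the field names and the PRUNE_RANGE value are our assumptions, not JTAG's.

    PRUNE_RANGE = 30  # illustrative margin; the paper does not give a value

    def select(sequences):
        """Cascaded filtering as in the appendix: lowest connective cost
        (within a pruning margin), then most co-occurrences, lowest word
        cost, and most two-character words."""
        best = min(s["connective_cost"] for s in sequences)
        sequences = [s for s in sequences
                     if s["connective_cost"] - best <= PRUNE_RANGE]
        for key, prefer_max in (("cooc", True),
                                ("word_cost", False),
                                ("two_char_words", True)):
            pick = max if prefer_max else min
            best = pick(s[key] for s in sequences)
            sequences = [s for s in sequences if s[key] == best]
        return sequences[0]

    # Example: two candidate analyses of one sentence.
    candidates = [
        {"connective_cost": 140, "cooc": 1, "word_cost": 30, "two_char_words": 4},
        {"connective_cost": 150, "cooc": 2, "word_cost": 25, "two_char_words": 5},
    ]
    print(select(candidates))  # the second wins on co-occurrences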
An IR Approach for Translating New Words from Nonparallel, Comparable Texts
Pascale Fung and Lo Yuen Yee
HKUST Human Language Technology Center, Department of Electrical and Electronic Engineering, University of Science and Technology, Clear Water Bay, Hong Kong
{pascale, eeyy}@ee.ust.hk

1 Introduction
In recent years, there has been phenomenal growth in the amount of online text material available from the greatest information repository known as the World Wide Web. Various traditional information retrieval (IR) techniques combined with natural language processing (NLP) techniques have been re-targeted to enable efficient access to the Web: search engines, indexing, relevance feedback, query term and keyword weighting, document analysis, document classification, etc. Most of these techniques aim at efficient online search for information already on the Web.

Meanwhile, the corpus linguistics community regards the Web as a vast source of potential corpus resources. It is now possible to download a large amount of text with automatic tools when one needs to compute, for example, a list of synonyms, or to download domain-specific monolingual texts by submitting a keyword to a search engine and then use these texts to extract domain-specific terms. It remains to be seen how we can also make use of multilingual texts as NLP resources.

In the years since the appearance of the first papers on using statistical models for bilingual lexicon compilation and machine translation (Brown et al., 1993; Brown et al., 1991; Gale and Church, 1993; Church, 1993; Simard et al., 1992), a large amount of human effort and time has been invested in collecting parallel corpora of translated texts. Our goal is to alleviate this effort and enlarge the scope of corpus resources by looking into monolingual, comparable texts. Texts of this type are known as nonparallel corpora. Such nonparallel, monolingual texts should be much more prevalent than parallel texts. However, previous attempts at using nonparallel corpora for terminology translation were constrained by the inadequate availability of same-domain, comparable texts in electronic form. The nonparallel texts obtained from the LDC or university libraries were often restricted, and were usually out-of-date as soon as they became available. For new word translation, the timeliness of corpus resources is a prerequisite, and so is the continuous and automatic availability of nonparallel, comparable texts in electronic form. Data collection effort should not inhibit the actual translation effort. Fortunately, nowadays the World Wide Web provides us with a daily increase of fresh, up-to-date multilingual material, together with archived versions, all easily downloadable by software tools running in the background. It is possible to specify the URL of the online site of a newspaper, and the start and end dates, and automatically download all the daily newspaper materials between those dates.

In this paper, we describe a new method which combines IR and NLP techniques to extract new word translations from automatically downloaded English-Chinese nonparallel newspaper texts.

2 Encountering new words
To improve the performance of a machine translation system, it is often necessary to update its bilingual lexicon, either by human lexicographers or by statistical methods using large corpora. Up until recently, statistical bilingual lexicon compilation relied largely on parallel corpora. This is an undesirable constraint at times.
In using a broad-coverage English-Chinese MT system to translate some text recently, we discovered that it was unable to translate 流感/liougan, which occurs very frequently in the text. Other words which the system cannot find in its 20,000-entry lexicon include proper names such as those of the Taiwanese president Lee Teng-Hui and the Hong Kong Chief Executive Tung Chee-Hwa. To our disappointment, we could not locate any parallel texts which include such words, since they only started to appear frequently in recent months.

A quick search on the Web turned up archives of multiple local newspapers in English and Chinese. Our challenge is to find the translation of 流感/liougan and other words from this online nonparallel, comparable corpus of newspaper materials. We choose to use issues of the English newspaper Hong Kong Standard and the Chinese newspaper Mingpao, from Dec. 12, 1997 to Dec. 31, 1997, as our corpus. The English text contains about 3 Mb of text, whereas the Chinese text contains 8.8 Mb of two-byte-character text, so both texts are comparable in size. Since they are both local mainstream newspapers, it is reasonable to assume that their contents are comparable as well.

3 流感/liougan is associated with flu but not with Africa
Unlike in parallel texts, the position of a word in a text does not give us information about its translation in the other language. (Rapp, 1995; Fung and McKeown, 1997) suggest that a content word is closely associated with some words in its context. As a tutorial example, we postulate that the words which appear in the context of 流感/liougan should be similar to the words appearing in the context of its English translation, flu. We can form a vector space model of a word in terms of its context word indices, similar to the vector space model of a text in terms of its constituent word indices (Salton and Buckley, 1988; Salton and Yang, 1973; Croft, 1984; Turtle and Croft, 1992; Bookstein, 1983; Korfhage, 1995; Jones, 1979). The value of the i-th dimension of a word vector W is f if the i-th word in the lexicon appears f times in the same sentences as W.

The left columns in Tables 1 and 2 show the content words which appear most frequently in the context of flu and Africa respectively; the right columns show those which occur most frequently in the context of 流感. We can see that the context of 流感 is more similar to that of flu than to that of Africa.

Table 1: 流感 and flu have similar contexts
English      Freq.    Chinese (gloss)   Freq.
bird         170      virus             147
virus        26       citizen           90
spread       17       Hong Kong         84
people       17       infection         69
government   13       confirmed         62
avian        11       show              62
scare        10       discover          56
deadly       10       yesterday         54
new          10       patient           53
suspected    9        suspected         50
chickens     9        doctor            49
spreading    8        infected          47
prevent      8        hospital          44
crisis       8        no                42
health       8        government        41
symptoms     7        event             40

Table 2: 流感 and Africa have different contexts
English      Freq.    Chinese (gloss)   Freq.
South        109      virus             147
African      32       citizen           90
China        20       Hong Kong         84
ties         15       infection         69
diplomatic   14       confirmed         62
Taiwan       12       show              62
relations    9        discover          56
Test         9        yesterday         54
Mandela      8        patient           53
Taipei       7        suspected         50
Africans     7        doctor            49
January      7        infected          47
visit        6        hospital          44
tense        6        no                42
survived     6        government        41
Beijing      6        event             40
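A minimal sketch of this context-vector construction, assuming pre-tokenized sentences; the function names, and the seed-lexicon bridging that anticipates section 4 below, are our illustration, not the authors' code.

    from collections import Counter

    def context_vector(word, sentences):
        """Dimension i of W holds the number of times lexicon word i
        appears in the same sentences as `word` (the TF of section 5)."""
        vector = Counter()
        for tokens in sentences:          # each sentence as a token list
            if word in tokens:
                vector.update(t for t in tokens if t != word)
        return vector

    def common_dimensions(w_english, w_chinese, seed_lexicon):
        """Count overlapping dimensions bridged by a bilingual seed
        lexicon (an English-to-Chinese dict), as in section 4."""
        return sum(1 for en, zh in seed_lexicon.items()
                   if en in w_english and zh in w_chinese)

    # english_sentences / chinese_sentences stand for tokenized issues
    # of the Hong Kong Standard and Mingpao:
    # w_flu = context_vector("flu", english_sentences)
    # w_liougan = context_vector("liougan", chinese_sentences)
    # common_dimensions(w_flu, w_liougan, seed_lexicon)  # 233 in the paper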
4 Bilingual lexicon as seed words
So the first clue to the similarity between a word and its translation is the number of common words in their contexts. In a bilingual corpus, the "common word" is actually a bilingual word pair. We use the lexicon of the MT system to "bridge" all bilingual word pairs in the corpora. These word pairs are used as seed words. We found that the contexts of flu and 流感/liougan share 233 "common" context words, whereas the contexts of Africa and 流感/liougan share only 121 common words, even though the context of flu has 491 unique words and the context of Africa has 328 words. In the vector space model, W[flu] and W[liougan] have 233 overlapping dimensions, whereas there are 121 overlapping dimensions between W[flu] and W[Africa].

5 Using TF/IDF of contextual seed words
The flu example illustrates that the actual ranking of the context word frequencies provides a second clue to the similarity between a bilingual word pair. For example, virus ranks very high for both flu and 流感/liougan and is a strong "bridge" between this bilingual word pair. This leads us to use the term frequency (TF) measure. The TF of a context word is defined as the frequency of the word in the context of W (e.g., the TF of virus is 26 for flu and 147 for 流感). However, the TF of a word is not independent of its general usage frequency. In an extreme case, the function word the appears most frequently in English texts and would have the highest TF in the context of any W. In our HKStandard/Mingpao corpus, Hong Kong is the most frequent content word, appearing everywhere. So in the flu example, we would like to reduce the significance of Hong Kong's TF while keeping that of virus. A common way to account for this difference is to use the inverse document frequency (IDF). Among the variants of IDF, we choose the following representation from (Jones, 1979):

    IDF_i = \log\left(\frac{\max n}{n_i}\right) + 1

where max n is the maximum frequency of any word in the corpus and n_i is the total number of occurrences of word i in the corpus. The IDF of virus is 1.81 and that of Hong Kong is 1.23 in the English text; in Chinese, the IDF of the word for virus is 1.92 and that of Hong Kong is 0.83. So in both cases, virus is a stronger "bridge" for 流感/liougan than Hong Kong. Hence, for every context seed word i, we assign the word weighting factor (Salton and Buckley, 1988)

    w_i = TF_{iW} \times IDF_i

where TF_{iW} is the TF of word i in the context of word W. The updated vector space model of word W has w_i in its i-th dimension. The ranking of the 20 words in the contexts of 流感/liougan is rearranged by this weighting factor as shown in Table 3.
Table 3: virus is a stronger bridge than Hong Kong
English      Weight    Chinese (gloss)   Weight
bird         259.97    virus             282.70
spread       51.41     infection         187.50
virus        47.07     citizens          163.49
avian        43.41     confirmed         161.89
scare        36.65     infected          158.43
deadly       35.15     patient           132.14
spreading    30.49     suspected         123.08
suspected    28.83     doctor            108.54
symptoms     28.43     hospital          102.73
prevent      26.93     discover          98.09
people       23.09     event             83.75
crisis       22.72     Hong Kong         69.68
health       21.97     yesterday         66.84
new          17.80     possible          60.20
government   16.04     no                59.76
chickens     15.12     government        59.41

6 Ranking translation candidates
Next, a ranking algorithm is needed to match the unknown word vectors to their counterparts in the other language. A ranking algorithm selects the best target language candidate for a source language word according to direct comparison of some similarity measures (Frakes and Baeza-Yates, 1992). We modify the similarity measure proposed by (Salton and Buckley, 1988) into the following S0:

    S0(W_c, W_e) = \frac{\sum_{i=1}^{t} w_{ic} \, w_{ie}}{\sqrt{\sum_{i=1}^{t} w_{ic}^2 \times \sum_{i=1}^{t} w_{ie}^2}}  \quad \text{where } w_{ic} = TF_{ic}, \; w_{ie} = TF_{ie}

Variants of similarity measures such as the above have been used extensively in the IR community (Frakes and Baeza-Yates, 1992). They are mostly based on the cosine measure of two vectors. For different tasks, the weighting factor might vary. For example, if we add the IDF into the weighting factor, we get the following measure S1:

    S1(W_c, W_e) = \frac{\sum_{i=1}^{t} w_{ic} \, w_{ie}}{\sqrt{\sum_{i=1}^{t} w_{ic}^2 \times \sum_{i=1}^{t} w_{ie}^2}}  \quad \text{where } w_{ic} = TF_{ic} \times IDF_i, \; w_{ie} = TF_{ie} \times IDF_i

In addition, the Dice and Jaccard coefficients are also suitable similarity measures for document comparison (Frakes and Baeza-Yates, 1992). We also implement the Dice coefficient as similarity measure S2:

    S2(W_c, W_e) = \frac{2 \sum_{i=1}^{t} w_{ic} \, w_{ie}}{\sum_{i=1}^{t} w_{ic}^2 + \sum_{i=1}^{t} w_{ie}^2}  \quad \text{where } w_{ic} = TF_{ic} \times IDF_i, \; w_{ie} = TF_{ie} \times IDF_i

S1 is often used in comparing a short query with a document text, whereas S2 is used in comparing two document texts. Reasoning that our objective falls somewhere in between (we are comparing segments of a document), we also multiply the above two measures into a third similarity measure, S3 = S1 x S2.
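A minimal sketch of these measures over the bridged, TF.IDF-weighted vectors; a rough illustration assuming the dict-based vectors built above, not the authors' code.

    from math import log, sqrt

    def idf(word, freq, maxn):
        # IDF = log(maxn / n_i) + 1, the Jones (1979) variant of section 5.
        return log(maxn / freq[word]) + 1

    def weight(vector, freq, maxn):
        """w_i = TF_i x IDF_i, the weighting used from S1 onward."""
        return {w: tf * idf(w, freq, maxn) for w, tf in vector.items()}

    def s1(w_c, w_e, seed_pairs):
        """Cosine-style S1; seed_pairs maps each Chinese seed word to its
        English counterpart, bridging the two vector spaces."""
        dot = sum(w_c[zh] * w_e[seed_pairs[zh]]
                  for zh in w_c if seed_pairs.get(zh) in w_e)
        norm = sqrt(sum(v * v for v in w_c.values()) *
                    sum(v * v for v in w_e.values()))
        return dot / norm if norm else 0.0

    def s2(w_c, w_e, seed_pairs):
        """Dice-style S2 with the same weighted vectors."""
        dot = sum(w_c[zh] * w_e[seed_pairs[zh]]
                  for zh in w_c if seed_pairs.get(zh) in w_e)
        denom = (sum(v * v for v in w_c.values()) +
                 sum(v * v for v in w_e.values()))
        return 2 * dot / denom if denom else 0.0

    # S3 = S1 x S2; English candidates can then be ranked by, e.g.,
    # sorted(candidates, reverse=True,
    #        key=lambda e: s1(w_c, vec[e], pairs) * s2(w_c, vec[e], pairs))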
7 Confidence on seed word pairs
In using bilingual seed words such as 病毒/virus as "bridges" for terminology translation, the quality of the bilingual seed lexicon naturally affects the system output. In the case of European language pairs such as French-English, we can envision using words sharing common cognates as these "bridges". Most importantly, we can assume that the word boundaries are similar in French and English. However, the situation is messier with English and Chinese. First, segmentation of the Chinese text into words already introduces some ambiguity in the seed word identities. Second, English-Chinese translation is complicated by the fact that the two languages share very few stemming properties, parts-of-speech, or word-order patterns. This causes every English word to have many Chinese translations and vice versa. In a source-target language translation scenario, the translated text can be "rearranged" and cleaned up by a monolingual language model in the target language. However, the lexicon is not very reliable for establishing "bridges" between nonparallel English-Chinese texts. To compensate for this ambiguity in the seed lexicon, we introduce a confidence weighting for each bilingual word pair used as seed words: if a word i_e is the k-th candidate for word i_c, then w_{ie} is discounted to w_{ie}/k_i. The similarity scores then become S4 and S5, and S6 = S4 x S5:

    S4(W_c, W_e) = \frac{\sum_{i=1}^{t} (w_{ic} \, w_{ie}) / k_i}{\sqrt{\sum_{i=1}^{t} w_{ic}^2 \times \sum_{i=1}^{t} w_{ie}^2}}  \quad \text{where } w_{ic} = TF_{ic} \times IDF_i, \; w_{ie} = TF_{ie} \times IDF_i

    S5(W_c, W_e) = \frac{2 \sum_{i=1}^{t} (w_{ic} \, w_{ie}) / k_i}{\sum_{i=1}^{t} w_{ic}^2 + \sum_{i=1}^{t} w_{ie}^2}  \quad \text{where } w_{ic} = TF_{ic} \times IDF_i, \; w_{ie} = TF_{ie} \times IDF_i

We also experiment with other combinations of the similarity scores, such as S7 = S0 x S5. All similarity measures S3 to S7 are used in the experiment for finding a translation for 流感.

8 Results
In order to apply the above algorithm to find the translation for 流感/liougan from the HKStandard/Mingpao corpus, we first use a script to select the 118 English content words which are not in the lexicon as possible candidates. Using similarity measures S3 to S7, the highest ranking candidates for 流感 are shown in Table 6; S6 and S7 appear to be the best similarity measures. We then test the algorithm with S7 on more Chinese words which are not found in the lexicon but which occur frequently enough in the Mingpao texts. A statistical new-word extraction tool can be used to find these words. The unknown Chinese words and their English counterparts, as well as the occurrence frequencies of these words in HKStandard/Mingpao, are shown in Table 4. A frequency number with a * indicates that the word does not occur frequently enough to be found; a Chinese word with a * indicates a word with segmentation and translation ambiguities. For example, Lam could be a family name or part of another word meaning forest; when used as a family name, it can be transliterated as Lam in Cantonese or Lin in Mandarin.

Table 4: Unknown words which occur often (Chinese words shown by their romanized glosses)
Chinese        Freq. (zh)   English       Freq. (en)
Causeway       59           Causeway      37*
Chau*          1965         Chau          49
Chee-hwa       481          Chee-hwa      77
Chek*          115          Chek          28
Diana          164          Diana         100
Fong*          3164         Fong          32
HONG           2274         HONG          60
Huang*         1128         Huang         30
Ip*            477          Ip            32
Lam*           1404         Lam           175
Lau*           687          Lau           111
Lei            324          Lei           30
Leung          967          Leung         145
Lunar          312          Lunar         36
Minister       164          Minister      197
Personal       949          Personal      8*
Pornography    56           Pornography   13*
Poultry        493          Poultry       57
President      1027         President     239
Qian*          946          Qian          62
Qichen         154          Qichen        28*
SAR            824          SAR           142
Tam*           325          Tam           154
Tang           281          Tang          80
Teng-hui       307          Teng-hui      37
Tuen           350          Tuen          76
Tung           1052         Tung          274
Versace*       79           Versace       74
Yeltsin        107          Yeltsin       100
Zhuhai         112          Zhuhai        76
flu            1171         flu           491

Disregarding all entries with a * in the table, we apply the algorithm to the rest of the Chinese unknown words and the 118 English unknown words from HKStandard. The output is ranked by the similarity scores. The highest ranking translated pairs are shown in Table 5. The only Chinese unknown words which are not correctly translated in that list are Lunar and Yeltsin (Lunar is not an unknown word in English, and Yeltsin finds its translation in the 4th candidate). Tung/Chee-Hwa is a pair of collocates which is actually the full name of the Chief Executive. Poultry in Chinese is closely related to flu because the Chinese name for bird flu is poultry flu. In fact, almost all unambiguous Chinese new words find their translations in the first 100 entries of the ranked list. Six of the Chinese words have the correct translation as their first candidate.

9 Related work
Using the vector space model and similarity measures for ranking is a common approach in IR for query/text and text/text comparisons (Salton and Buckley, 1988; Salton and Yang, 1973; Croft, 1984; Turtle and Croft, 1992; Bookstein, 1983; Korfhage, 1995; Jones, 1979).
Table 5: Some Chinese unknown word translation output (Chinese words shown by their romanized glosses)
Score       English         Chinese (gloss)
0.008421    Teng-hui        Teng-hui
0.007895    SAR             Teng-hui
0.007669    flu             flu
0.007588    Lei             Lei
0.007283    poultry         Poultry
0.006812    SAR             Chee-hwa
0.006430    hijack          Teng-hui
0.006218    poultry         SAR
0.005921    Tung            Chee-hwa
0.005527    Diaoyu          Teng-hui
0.005335    PrimeMinister   Teng-hui
0.005335    President       Teng-hui
0.005221    China           Lam
0.004731    Lien            Teng-hui
0.004470    poultry         Chee-hwa
0.004275    China           Teng-hui
0.003878    flu             Lei
0.003859    PrimeMinister   Chee-hwa
0.003859    President       Chee-hwa
0.003784    poultry         Leung
0.003686    Kalkanov        Zhuhai
0.003550    poultry         Lei
0.003519    SAR             Yeltsin
0.003481    Zhuhai          Chee-hwa
0.003407    PrimeMinister   Lam
0.003407    President       Lam
0.003338    flu             Poultry
0.003324    apologise       Teng-hui
0.003250    DPP             Teng-hui
0.003206    Tang            Tang
0.003202    Tung            Leung
0.003040    Leung           Leung
0.003033    China           SAR
0.002888    Zhuhai          Lunar
0.002886    Tung            Tung

This approach has also been used by (Dagan and Itai, 1994; Gale et al., 1992; Schütze, 1992; Gale et al., 1993; Yarowsky, 1995; Gale and Church, 1994) for sense disambiguation between multiple usages of the same word. Some of the early statistical terminology translation methods are (Brown et al., 1993; Wu and Xia, 1994; Dagan and Church, 1994; Gale and Church, 1991; Kupiec, 1993; Smadja et al., 1996; Kay and Röscheisen, 1993; Fung and Church, 1994; Fung, 1995b). These algorithms all require parallel, translated texts as input. Attempts at exploiting nonparallel corpora for terminology translation are very few (Rapp, 1995; Fung, 1995a; Fung and McKeown, 1997). Among these, (Rapp, 1995) proposes that the association between a word and its close collocate is preserved in any language, and (Fung and McKeown, 1997) suggests that the associations between a word and many seed words are also preserved in another language. In this paper,
This algorithm can also be applied in an iterative fashion where high-ranking bilin- gual word pairs can be added to the seed word list, which in turn can yield more new bilingual word pairs. References A. Bookstein. 1983. Explanation and generalization of vector models in information retrieval. In Proceedings of the 6th Annual International Conference on Research and Devel- opment in Information Retrieval, pages 118-132. P. Brown, J. Lai, and R. Mercer. 1991. Aligning sentences in parallel corpora. In Proceedings of the P9th Annual Con- ference of the Association for Computational Linguistics. Table 6: English words most similar to ~,~/li- ougan SO 0.181114 Lei ~ 0.088879 flu b'-~,~ 0.085886 Tang ~,l~ 0.081411 Ap ~'~ $4 0.120879 flu ~,~ 0.097577 Lei ~,~ 0.068657 Beijing ~r~ 0.065833 poultry ~,r~, $5 0.086287 flu ~r-~, 0.040090 China ]~:~ 0.028157 poultry ~7"~ 0.024500 Beijing ~,~, $6 0.010430 flu ~ 0.001854 poultry ~,-~1-~, 0.001840 China ~,~, 0.001682 Beijing ~:~ $7 0.007669 flu ~r'~, 0.001956 poultry ~l-n~, 0.001669 China ~1~ 0.001391 Beijing ~1~ P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. The mathematics of machine transla- tion: Parameter estimation. Computational Linguistics, 19(2):263-311. Kenneth Church. 1993. Char.align: A program for aligning parallel texts at the character level. In Proceedings of the 31st Annual Conference of the Association for Computa- tional Linguistics, pages 1-8, Columbus, Ohio, June. W. Bruce Croft. 1984. A comparison of the cosine correla- tion and the modified probabilistic model. In Information Technology, volume 3, pages 113-114. Ido Dagan and Kenneth W. Church. 1994. Termight: Iden- tifying and translating technical terminology. In Proceed- ings of the 4th Conference on Applied Natural Language Processing, pages 34-40, Stuttgart, Germany, October. Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. In Compu- tational Linguistics, pages 564-596. William B. Frakes and Ricardo Baeza-Yates, editors. 1992. Information Retrieval: Data structures ~ Algorithms. Prentice-Hall. Pascale Fung and Kenneth Church. 1994. Kvec: A new ap- proach for aligning parallel texts. In Proceedings of COL- ING 9J, pages 1096-1102, Kyoto, Japan, August. Pascale Fung and Kathleen McKeown. 1997. Finding termi- nology translations from non-parallel corpora. In The 5th Annual Workshop on Very Large Corpora, pages 192-202, Hong Kong, Aug. Pascale Fung and Dekai Wu. 1994. Statistical augmentation of a Chinese machine-readable dictionary. In Proceedings of the Second Annual Workshop on Very Large Corpora, pages 69-85, Kyoto, Japan, June. 419 Pascale Fung. 1995a. Compiling bilingual lexicon entries from a non-parallel English-Chinese corpus. In Proceedings of the Third Annual Workshop on Very Large Corpora, pages 173-183, Boston, Massachusettes, June. Pascale Fung. 1995b. A pattern matching method for find- ing noun and proper noun translations from noisy parallel corpora. In Proceedings of the 33rd Annual Conference of the Association for Computational Linguistics, pages 236- 233, Boston, Massachusettes, June. William Gale and Kenneth Church. 1991. Identifying word correspondences in parallel text. In Proceedings of the Fourth Darpa Workshop on Speech and Natural Language, Asilomar. William A. Gale and Kenneth W. Church. 1993. A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(1):75-102. William A. Gale and Kenneth W. Church. 1994. 
Discrim- ination decisions in 100,000 dimensional spaces. Current Issues in Computational Linguisitcs: In honour of Don Walker, pages 429-550. W. Gale, K. Church, and D. Yarowsky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proceedings of the 30th Con- ference of the Association for Computational Linguistics. Association for Computational Linguistics. W. Gale, K. Church, and D. Yarowsky. 1993. A method for disambiguating word senses in a large corpus. In Comput- ers and Humanities, volume 26, pages 415-439. K. Sparck Jones. 1979. Experiments in relevance weighting of search terms. In Information Processing and Manage- ment, pages 133-144. Martin Kay and Martin R6scheisen. 1993. Text-Translation alignment. Computational Linguistics, 19(1):121-142. Robert Korfhage. 1995. Some thoughts on similarity mea- sures. In The SIGIR Forum, volume 29, page 8. Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Conference of the Association for Computa- tional Linguistics, pages 17-22, Columbus, Ohio, June. Reinhard Rapp. 1995. Identifying word translations in non- parallel texts. In Proceedings of the 35th Conference of the Association of Computational Linguistics, student ses- sion, pages 321-322, Boston, Mass. G. Salton and C. Buckley. 1988. Term-weighting approaches in automatic text retrieval. In Information Processing and Management, pages 513-523. G. Salton and C. Yang. 1973. On the specification of term values in automatic indexing, volume 29. Hinrich Shiitze. 1992. Dimensions of meaning. In Proceedings of Supercomputing '92. M. Simard, G Foster, and P. Isabelle. 1992. Using cognates to align sentences in bilingual corpora. In Proceedings of the Forth International Conference on Theoretical and Methodological Issues in Machine Translation, Montreal, Canada. Frank Smadja, Kathleen McKeown, and Vasileios Hatzsivas- siloglou. 1996. Translating collocations for bilingual lexi- cons: A statistical approach. Computational Linguistics, 21(4):1-38. Howard R. Turtle and W. Bruce Croft. 1992. A compari- son of text retrieval methods. In The Computer Journal, volume 35, pages 279-290. Dekai Wu and Xuanyin Xia. 1994. Learning an English- Chinese lexicon from a parallel corpus. In Proceedings of the First Conference of the Association for Machine Translation in the Americas, pages 206-213, Columbia, Maryland, October. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Conference o.f the Association for Computational Linguis- tics, pages 189-196. Association for Computational Lin- guistics. 420
Tense and Connective Constraints on the Expression of Causality
Pascal Amsili, TALANA, Université Paris 7, 2, pl. Jussieu, case 7003, F-75251 Paris Cedex 05, France, [email protected]
Corinne Rossari, Université de Genève, Faculté des Lettres, dpt de Linguistique, 2, rue de Candolle, CH-1211 Genève 4, Switzerland, [email protected]

Abstract
Starting from descriptions of French connectives (in particular "donc", therefore) on the one hand, and aspectual properties of the French tenses passé simple and imparfait on the other hand, we study in this paper how the two interact with respect to the expression of causality. It turns out that their interaction is not free: some combinations are not acceptable, and we propose an explanation for them. These results apply straightforwardly to natural language generation: given as input two events related by a cause relation, we can choose among various ways of presentation (the parameters being (i) the order, (ii) the connective, (iii) the tense) so that we are sure to express a cause relation, without generating either an incorrect discourse or an ambiguous one.

1 Introduction
The work reported in this paper aims at determining which constraints hold on the interaction between the expression of causality (with or without a connective) and the aspectual properties of the eventualities and of the tenses used to express them. (We use the term eventuality to refer to either events, states or processes, as is traditional since (Bach, 1981).) As a matter of fact, it turns out that, at least in French, the choice of one of the two tenses passé simple (PS) or imparfait (IMP) is not neutral with respect to the expression of causality, in particular as realised by means of the connective "donc" (therefore).

It has been observed that, even when concerned only with temporal localisation, it is not enough to characterize tenses if one does not take into account the effects of discourse relations between eventualities (1a-b) (Molendijk, 1996); it has also been observed that the use of the connective "donc" is itself subject to various acceptability constraints (1c-d) (Jayez, 1998).

(1) a. Paul attrapa une contravention. Il roulait avec plaisir
    Paul got fined. He was driving with pleasure
    b. Paul attrapa une contravention. Il roulait trop vite
    Paul got fined. He was driving too fast
    c. La branche cassa. Paul tombait donc dans le vide
    The branch broke. Paul was therefore falling down
    d. Sa première demande fut refusée. Il en rédigeait donc une autre
    His first application was refused. He was therefore writing another one

(We translate IMP systematically into the past progressive, even when the gloss does not have the same aspectuo-temporal properties as the French original; similarly, "therefore" is only roughly equivalent to "donc", and the contrast between PS and IMP is only roughly parallel to that between the simple past and the past progressive: e.g., the translation into French of a simple past can be either PS or IMP.)

Our objective in this paper is twofold: we want to study systematically the interaction between the various parameters we have mentioned, in order to provide a general explanation for the acceptabilities that have been observed, and we also want these explanations to be formulated in terms of "conditions of use", so that our results are exploitable for text generation. As a matter of fact, the choice of an appropriate form to express a cause relation between events has proved a non-trivial problem (Danlos, 1987; Danlos, 1998). Two parameters have been identified as playing an important role: first, the order of presentation (cause before consequence, or the contrary), and second,
the presence (or absence) of a connective. (Danlos (1988) also shows the influence of many other parameters, such as active vs. passive voice, the presence of a relative clause, etc.) The examples we deal with in this paper suggest that tenses, at least in French, and in particular the choice between PS and IMP, must also be taken into account.

The assumptions we make for this work are the following. We assume the view on discourse adopted within the SDRT framework (Asher, 1993): in a coherent discourse, sentences are linked by discourse relations, which help finding anaphor antecedents, computing temporal localisations, etc. Here, we are concerned only with two discourse relations, both involving causality. We call the first one result, as in (Lascarides and Asher, 1993); it holds between two sentences when the main eventuality of the first one is the cause of the main eventuality of the second one. We assume here a very open notion of causality that we do not want to refine: for instance, we assume that causality holds between a branch breaking and John's falling (direct), but also between Jean's repairing his car and his driving it (indirect). We call the other one explanation; it holds between two sentences when the cause is presented after its consequence, thus playing an explanation role for the first sentence. This configuration in interaction with "donc" has been studied in (Rossari and Jayez, 1997), where it is called "causal abduction".

We adopt as a basis for the description of IMP the proposal made in the DRT framework (Kamp and Rohrer, 1983; Kamp and Reyle, 1993), amended with proposals made in the French literature, in particular concerning the anaphoric properties of this tense (Tasmowski-De Ryck, 1985; Vet and Molendijk, 1985; Molendijk, 1994). Finally, we adopt the description of the connective "donc" which is elaborated, in terms of conditions of use and semantic effects, in (Jayez and Rossari, 1998).

We start by considering discourses where a cause is presented after its consequence (i.e., where an explanation discourse relation should hold). We observe that a PS-IMP sequence is sufficient to achieve the explanation effect, but that this sequence is constrained by the type of causality at stake. We also notice that connectives do not seem to interfere with tenses in this case (§ 2). We then examine discourses where the cause is presented before the consequence. In the absence of a connective, we observe that none of the acceptable forms automatically conveys causality (§ 3.1). With the connective "donc", causality is imposed by the connective, but it in turn brings new constraints (§ 3.2). For each set of examples, we provide a general explanation and draw conclusions for text generation.

2 Consequence-Cause Configuration
2.1 Data
Even if a causality reading (the second sentence introducing the cause of the first one) is pragmatically possible in all these examples, we observe that a PS-PS sequence imposes in French a temporal-sequence interpretation: in all the examples (3), the main eventuality of the second sentence is interpreted as temporally located after that of the first sentence, and this is strictly incompatible with a causality reading, where the cause must precede its effect.
(Notice that here the PS in French behaves differently from the simple past in English: the translation of the ambiguous example (2a) (Lascarides and Asher, 1993) is not ambiguous in French, where no causal interpretation is available (2b).

(2) a. John fell. Max pushed him.
    b. Jean tomba. Max le poussa.)

(3) a. Jean tomba. La branche cassa
    Jean fell. The branch broke
    b. Jean attrapa une contravention. Il roula trop vite
    Jean got fined. He drove too fast
    c. Marie cria. Jean lui cassa la figure
    Marie cried. Jean hit her
    d. Jean prit sa voiture. Il la répara
    Jean took his car. He repaired it
    e. Jean se salit. Il répara sa voiture
    Jean dirtied himself. He repaired his car

Now, if one chooses, with the same order of presentation, the tense combination PS-IMP, the causality effect is easily achieved. This is the case for the examples (4).

(4) a. Jean attrapa une contravention. Il roulait trop vite
    Jean got a fine. He was driving too fast
    b. Marie cria. Jean lui cassait la figure
    Marie cried. Jean was hitting her
Then IMP is entirely compatible, thus have no particular effect. Achievements, accomplishments These are characterised by the existence of a natu- ral term. The imperfective point of view brought by IMP imposes a change of point of view on the term of the eventuality. As for accomplishments, we can assume that they can be decomposed into several stages, according to (Moens and Steedman, 1988): first a preparatory phase, second a culmination (or achievement) (we are not concerned here with the result state). We can then say that IMP refers only to the preparatory phase, so that the term of the eventuality loses all relevance. This ex- plains the so-called imperfective paradox: it is possible to use IMP even though the eventuality never reaches its term: (6) a. I1 traversait la rue quand la voiture l'a 6cras6 He was crossing the street when the car hit him b. * I1 traversa la rue quand la voiture l'a 6cras6 He crossed the street when the car hit him As for achievements, we can assume that they are reduced to a culmination. Then IMP can only be interpreted by stretching this culmination, transforming a fundamen- taly punctual event into a process or activ- ity. Then there is no more natural term for such a stretched event. 2.2.3 Causality and aspect So, when we have a non accomplished causality, i.e., when it is possible to state the cause rela- 50 tion as soon as the eventuality has started, then IMP does not impose further constraint, and the sequence PS-IMP is always correct, and conveys the appropriate causality effect. This is the case for the examples (4, 7), where an explanation discourse relation is infered. (7) Jean se salit. I1 rfiparait sa voiture Jean got dirty. He was repairing his car On the contrary, if we have an accomplished causality, i.e. if the cause event has to be com- pleted to be a cause for the other event, then IMP is never possible, for even with terminative even- tualities (the branch breaking, fixing the car), it has the effect of blocking the terminativity, and a causal interpretation is no longer possible (5). The contrast (8) can thus be easily explained: in (8a), we have a lexically punctual event, made durative by the IMP. But going through a red light has to be completed to risk a fine; in (8b), we have an activity, and it is sufficient to have started it to risk a fine. (8) a. , Jean attrapa une contravention. I1 brfllait un feu rouge Jean got a fine. He was going through a red light b. Jean attrapa une contravention. I1 brfilait les feux rouges Jean got a fine. He was going through the red lights 2.3 Application The consequences of the observations and the hypotheses made earlier, when it comes to text generation, are the following: If one wants to present two eventualities re- lated by a cause relation, so that the conse- quence is presented before the cause, leading to an explanation interpretation of the discourse, one should obey the following principles: 1. A PS-PS combination is not appropriate. 2. A PS-IMP combination conveys causality, provided that we have a non accomplished causality. Otherwise, the PS-IMP combina- tion is not valid. We should note again that these constraints are not lexical, in the sense that they do not rely on aspectual classes, but rather on world knowledge. 3 Cause-Consequence Configuration Let us now turn to the other mode of presenta- tion, namely the one where cause is presented before its consequence. 
We first consider cases without connectives, and see that good accept- abilities go along with higher ambiguity: cor- rect example do not always convey causality (§ 3.1). Then we consider the use of the con- nective "donc", and observe that it changes the acceptabilities (§ 3.2). 3.1 Without connective 3.1.1 Data The first observation is that it is possible to use a PS-PS sequence. In the absence of other dis- course clues, such a sequence is interpreted in French as a temporal sequence relation. Such a temporal interpretation is compatible with, but of course does not necessary imply, a cause re- lation. (9) a. b. C. d. La branche cassa. I1 tomba dans le vide The branch broke. He fell down Paul vit sa demande rejet~e. IIen r~digea une autre Paul's application was rejected. He wrote an other one I1 rut nomm~ PDG. I1 contr61a tout le personnel He was appointed chairman. He had control over the whole staff I1 appuya sur la d~tente. Le coup partit. He pressed the trigger. The gun went off Changing the PS-PS sequence into a PS-IMP changes only marginally the acceptabilities, and the same observation as before holds: these dis- courses do not necessarily imply causality. (10) a. La branche cassa. I1 tombait duns le vide The branch broke. He was falling down b. Paul vit sa demande rejet~e. I1 en r~digeait une autre Paul's application was rejected. He was writing an other one c. I1 fut nomm~ PDG. I1 contr61ait tout le personnel He was appointed chairman. He was having control over the whole staff d. ? I1 appuya sur la d~tente. Le coup partait. He pressed the trigger. The gun was going off 51 For instance, (10b-c) can also be interpreted as background discourses, where the IMP of the second sentence is seen as introducing a back- ground situation holding before and after the event introduced in the first sentence. This in- terpretation, often given as the default one for IMP-PS sequences (Kamp and Rohrer, 1983), is nevertheless only available when world knowl- edge does not exclude it (10a). In any case, such an interpretation is incompatible with a causal interpretation. 3.1.2 Discussion So it turns out that PS-IMP sequences can have in general two interpretations: one where the two events follow each other, and this interpre- tation is thus compatible with a causality inter- pretation, and another one where the eventual- ity described by the IMP sentence overlaps with the event given before. This can be explained if one assumes the op- eration of IMP as described in (Molendijk, 1994), in a DRT framework, itself inspired by (Reichen- bach, 1947). One of the features of IMP is to state the simultaneousness of the eventuality described with some reference point (henceforth Rpt), lo- cated in the past of the speech time. This oper- ation can be called anaphoric, since IMP needs some other point given by the context. This is clearly what happens with the background ef- fect. But it has also been shown, in particular by Tasmowski-De Ryck (1985), that there are some uses of IMP (called imparfait de rupture-- "breaking IMP") which are not strictly anaphoric, in the sense that the Rpt cannot be identified with any previously introduced event. Rather, it seems that such uses of IMP strongly entail the existence of an implicit Rpt, distinct from the events already introduced. It is also observed that this ability of IMP to bring with it a Rpt is constrained. In particular, there must be a way to connect this Rpt to the other eventual- ities of the discourse. 
Molendijk (1996) shows that this connection can be a causal relation. It has also been observed that an implicit Rpt is always temporally located after the last event introduced. So this is compatible with a causality interpretation.

3.1.3 Application

From a text generation point of view, the observations we have just made cannot be easily exploited: obviously, in a Cause-Consequence configuration, all the tense combinations we have seen are not informative enough, and cannot be used, if one wants to guarantee that the concept of causality is conveyed by the discourse. It is thus necessary to be more explicit, for instance by adding a connective. This is what we are concerned with in the next section.

So, if we leave aside the PS-PS sequence, what we have seen so far in § 2 is that the tense combination is sufficient to convey a causality relation in Consequence-Cause configurations, and then the connectives do not impose further constraints and do not change what is conveyed. The situation in this section (§ 3) is in a way symmetrical: in a Cause-Consequence configuration, the tense configuration is not sufficient, so that adding a connective is necessary. But, as we see in the next section, there are further constraints on the connectives.

3.2 With the connective "donc"

3.2.1 Data

One can observe that "donc" is perfectly compatible with PS-PS sequences like the ones in (9). What is more surprising is that adding "donc" to the PS-IMP sequence examples we have seen (10) clearly changes the acceptabilities:

(11) a. ?? La branche cassa. Il tombait donc dans le vide
        The branch broke. He was therefore falling down
     b. Paul vit sa demande rejetée. Il en rédigeait donc une autre
        Paul's application was rejected. He was therefore writing another one
     c. Il fut nommé PDG. Il contrôlait donc tout le personnel
        He was appointed chairman. He was therefore having control over the whole staff
     d. ?? Il appuya sur la détente. Le coup partait donc.
        He pressed the trigger. The gun was therefore going off

The clearest contrast concerns cases where the second sentence contains an activity verb. In such cases, the introduction of "donc" systematically leads to bad sentences. On the contrary, it seems that "donc" is always compatible with state and accomplishment verbs.

As for achievements, it seems that the introduction of "donc" also yields bad sentences, but it is worth noting that the simple sequence PS-IMP without connective is already slightly problematic, as we have seen in (10d). We come back to this point later.

3.2.2 Discussion

We are not yet able to provide a completely elaborated explanation for these observations. What we propose here is a list of possible answers, suggested by more fine-grained considerations of the data.

Note however that from the previous observation we can draw the principle that we can generate sentences in a Cause-Consequence configuration, with a PS-IMP sequence and the connective "donc", but the aspectual class of the verb has to be taken into account: this leads to acceptable sentences only with accomplishments and states.

It is clear that aspectual classes play a role, which is not surprising, and this is the reason why all our example lists include one verb from each aspectual class. The most problematic contrast concerns the difference between activities and accomplishments.
The connective "donc" seems to work very well with accomplishments and very badly with activities, even though accomplishments can be seen as composed of an activity followed by a culmination. One possible explanation could rely on the observation that the result relation brought by "donc" holds not at the propositional level, not even at the aspectual level (i.e., the point of view on events), but rather at an attitudinal level (Rossari and Jayez, 1997). Besides, one can observe that what distinguishes activities and accomplishments is not the nature itself of the eventuality, but rather the fact that one expects/considers its culmination in one case and not in the other. So this can be seen as a difference of (propositional) attitude over the eventualities. We are presently working on the elaboration of a proposal based on this viewpoint. It is also worth observing that the temporal interval that lies between a cause and its consequence might play a role, as suggested by (Jayez, 1998), especially for this contrast between activities and accomplishments.

As for achievements, we have already noted that their compatibility with IMP is not entirely established, for reasons coming from the punctual nature of achievements. It is also worth noting that there is an affinity between achievements and the "imparfait de rupture" (Tasmowski-De Ryck, 1985). Of course, as suggested by its name, such a use of IMP introduces a sort of break in the discourse, which is compatible with causality, but might not be compatible with the way "donc" operates, requiring a strong connection between two utterances.

4 Conclusion

Summary. We summarize our observations in Table 1. We consider in this table all the possible configurations one has when the three following parameters vary:

1. Order of presentation: e1 before e2 or the other way around (assuming e1 is the cause of e2).
2. Presence of a connective "donc" or "car".(7)
3. Use of PS or IMP.

Table 1: Ways of expressing "CAUSE(e1, e2)"

  When      | D.R. | How
  ----------|------|-----------------------------
  Always    | res  | e1-PS. Donc e2-PS
            | exp  | e2-PS. Car e1-PS
            | suc  | e1-PS. e2-PS
            | ntr  | e1-PS. e2-IMP
  Sometimes | res  | e1-PS. Donc e2-IMP     (C1)
            | exp  | e2-PS. (Car/∅) e1-IMP  (C2)
  Never     |      | e1-IMP. (Donc/∅) e2-PS
            |      | e2-PS. e1-PS
            |      | e2-IMP. (Car/∅) e1-PS

  Constraints: C1: e2 is a state or an accomplishment; C2: non-accomplished causality

Among the combinations, some are always possible (which does not mean they always convey causality), some are never possible, that is, either uninterpretable or incompatible with causality, and some are sometimes possible, depending on various constraints as shown in this paper. Notice that we mention in this table some configurations we have not considered so far, namely configurations with an IMP-PS sequence.

(7) As we have already said, we are only concerned in this paper with "donc" and mention "car" only for the sake of completeness.

We mention them here only for the sake of completeness, since they can never be used to express causality.

The second column of the table gives the discourse relation associated with each configuration. In some cases, it is a cause relation, either in one direction (result, res) or in the other (explanation, exp). The other cases are compatible with a cause relation, without conveying it, which is noted in the table as "suc" (for temporal succession) or "ntr" (neutral, for cases ambiguous between background and temporal succession).
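To make the summary concrete, Table 1 can be read as a lookup table for a generator. The following sketch is our own illustration in Python, not part of the paper; the entries are transcribed from our reading of Table 1 above, and all names are hypothetical.

  # Hypothetical encoding of Table 1. e1 is the cause, e2 the consequence;
  # "none" stands for the absence of a connective.
  TABLE_1 = {
      # (first clause, connective, second clause): (status, discourse relation)
      (("e1", "PS"),  "donc", ("e2", "PS")):  ("always", "result"),
      (("e2", "PS"),  "car",  ("e1", "PS")):  ("always", "explanation"),
      (("e1", "PS"),  "none", ("e2", "PS")):  ("always", "temporal succession"),
      (("e1", "PS"),  "none", ("e2", "IMP")): ("always", "neutral"),
      (("e1", "PS"),  "donc", ("e2", "IMP")): ("C1", "result"),
      (("e2", "PS"),  "car",  ("e1", "IMP")): ("C2", "explanation"),
      (("e2", "PS"),  "none", ("e1", "IMP")): ("C2", "explanation"),
      (("e1", "IMP"), "donc", ("e2", "PS")):  ("never", None),
      (("e1", "IMP"), "none", ("e2", "PS")):  ("never", None),
      (("e2", "PS"),  "none", ("e1", "PS")):  ("never", None),
      (("e2", "IMP"), "car",  ("e1", "PS")):  ("never", None),
      (("e2", "IMP"), "none", ("e1", "PS")):  ("never", None),
  }

  def acceptable(config, e2_class, accomplished_causality):
      """Check a configuration against Table 1 and its constraints C1/C2."""
      status, relation = TABLE_1[config]
      if status == "C1":   # C1: e2 must be a state or an accomplishment
          return (e2_class in ("state", "accomplishment"), relation)
      if status == "C2":   # C2: the causality must be non-accomplished
          return (not accomplished_causality, relation)
      return (status == "always", relation)

A generator could thus reject, say, the configuration (e1-IMP, donc, e2-PS) outright, and accept (e1-PS, donc, e2-IMP) only after checking the aspectual class of e2.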
Conclusion. This paper shows that the interaction of constraints coming from tenses and connectives is rather delicate to characterize, even in the limited domain of the expression of causality. It also shows, however, that it is possible to draw from the linguistic characterisation of these enough principles to be able to generate discourses conveying causality with good guarantees on the achieved effect, and control over the influence of tenses, often neglected in this respect. We are presently studying the treatment of other connectives, and the extension to other tenses.

Acknowledgments

We wish to thank Laurent Roussarie, as well as the anonymous reviewers, for their helpful comments on earlier versions of this paper.

References

Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. Kluwer Academic Publishers.

Emmon Bach. 1981. On time, tense and aspect: An essay on English metaphysics. In Peter Cole, editor, Radical Pragmatics, pages 62-81. Academic Press, New York.

Laurence Danlos. 1987. The Linguistic Basis of Text Generation. Cambridge University Press.

Laurence Danlos. 1988. Connecteurs et relations causales. Langue Française, 77:92-127.

Laurence Danlos. 1998. Causal relations in discourse: Event structure and event coreference. In Pierrette Bouillon and Federica Busa, editors, Studies within the Generative Lexicon Framework. CUP Press. To appear.

Jacques Jayez and Corinne Rossari. 1998. La portée sémantique d'un connecteur pragmatique. Cahiers de l'Institut de Linguistique de Louvain. To appear.

Jacques Jayez. 1998. Les approches formelles de l'enchaînement des temps. L'exemple de la SDRT. Manuscript.

Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic. Kluwer Academic Publishers.

Hans Kamp and Christian Rohrer. 1983. Tense in texts. In R. Bäuerle, C. Schwarze, and A. von Stechow, editors, Meaning, Use and Interpretation of Language, pages 250-269. De Gruyter, Berlin.

Alex Lascarides and Nicholas Asher. 1993. Temporal interpretation, discourse relations and commonsense entailment. Linguistics and Philosophy, 16(5):437-493.

Marc Moens and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14(2):15-28.

Arie Molendijk. 1994. Tense use and temporal orientation: the 'passé simple' and 'imparfait' of French. In C. Vet and C. Vetters, editors, Tense and Aspect in Sentence and Discourse, pages 21-47. De Gruyter.

Arie Molendijk. 1996. Anaphore et imparfait : la référence globale à des situations présupposées ou impliquées. Cahiers Chronos, 1:109-123.

Hans Reichenbach. 1947. Elements of Symbolic Logic. Macmillan, New York.

Corinne Rossari and Jacques Jayez. 1997. Connecteurs de conséquence et portée sémantique. Cahiers de Linguistique Française, 19:233-265.

Liliane Tasmowski-De Ryck. 1985. L'imparfait avec et sans rupture. Langue Française, 67:59-77.

Zeno Vendler. 1967. Linguistics and Philosophy. Cornell University Press, Ithaca, N.Y.

Co Vet and Arie Molendijk. 1985. The discourse functions of past tenses of French. In V. Lo Cascio and C. Vet, editors, Temporal Structure in Sentence and Discourse, pages 133-159. Foris.

Co Vet. 1980. Temps, aspect et adverbes de temps en français contemporain. Droz, Genève.
1998
7
Splitting Long or Ill-formed Input for Robust Spoken-language Translation Osamu FURUSE t, Setsuo YAMADA, Kazuhide YAMAMOTO ATR Interpreting Telecommunications Research Laboratories 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan furuse~cslab, kecl. ntt. co. jp, {syamada, yamamoto}@itl, atr. co. jp Abstract This paper proposes an input-splitting method for translating spoken-language which includes many long or ill-formed expressions. The pro- posed method splits input into well-balanced translation units based on a semantic distance calculation. The splitting is performed dur- ing left-to-right parsing, and does not degrade translation efficiency. The complete translation result is formed by concatenating the partial translation results of each split unit. The pro- posed method can be incorporated into frame- works like TDMT, which utilize left-to-right parsing and a score for a substructure. Experi- mental results show that the proposed method gives TDMT the following advantages: (1) elim- ination of null outputs, (2) splitting of utter- ances into sentences, and (3) robust translation of erroneous speech recognition results. 1 Introduction A spoken-language translation system requires the ability to treat long or ill-formed input. An utterance as input of a spoken-language trans- lation system, is not always one well-formed sentence. Also, when treating an utterance in speech translation, the speech recognition result which is the input of the translation component, might be corrupted even though the input utter- ance is well-formed. Such a misrecognized result can cause a parsing failure, and consequently, no translation output would be produced. Further- more, we cannot expect that a speech recogni- tion result includes punctuation marks such as a comma or a period between words, which are useful information for parsing. 1 As a solution for treating long input, long- sentence splitting techniques, such as that of tCurrent affiliation is NTT Communication Science Laboratories. 1 Punctuation marks are not used in translation input in this paper. Kim (1994), have been proposed. These tech- niques, however, use many splitting rules writ- ten manually and do not treat ill-formed in- put. Wakita (1997) proposed a robust transla- tion method which locally extracts only reliable parts, i.e., those within the semantic distance threshold and over some word length. This technique, however, does not split input into units globally, or sometimes does not output any translation result. This paper proposes an input-splitting method for robust spoken-language translation. The proposed method splits input into well- balanced translation units based on a seman- tic distance calculation. The complete trans- lation result is formed by concatenating the partial translation results of each split unit. The proposed method can be incorporated into frameworks that utilize left-to-right parsing and a score for a substructure, In fact, it has been added to Transfer-Driven Machine Trans- lation (TDMT), which was proposed for efficient and robust spoken-language translation (Fu- ruse, 1994; Furuse, 1996). The splitting is per- formed during TDMT's left-to-right chart pars- ing strategy, and does not degrade translation efficiency. The proposed method gives TDMT the following advantages: (1) elimination of null outputs, (2) splitting of utterances into sen- tences, and (3) robust translation of erroneous speech recognition results. 
In the subsequent sections, we will first outline the translation strategy of TDMT. Then, we will explain the framework of our splitting method in Japanese-to-English (JE) and English-to-Japanese (EJ) translation. Next, by comparing the TDMT system's performance between two sets of translations with and without using the proposed method, we will demonstrate the usefulness of our method.

2 Translation strategy of TDMT

2.1 Transfer knowledge

TDMT produces a translation result by mimicking the example judged most semantically similar to the input string, based on the idea of Example-Based MT. Since it is difficult to store enough example sentences to translate every input, TDMT performs the translation by combining the examples of the partial expressions, which are represented by transfer knowledge patterns. Transfer knowledge in TDMT is compiled from translation examples. The following EJ transfer knowledge expression indicates that the English pattern "X at Y" corresponds to several possible Japanese expressions:

  X at Y => Y' de X' ((present, conference)..),
            Y' ni X' ((stay, hotel)..),
            Y' wo X' ((look, it)..)

The first possible translation pattern is "Y' de X'", with example set ((present, conference)..). We will see that this pattern is likely to be selected to the extent that the input variable bindings are semantically similar to the sample bindings, where X = "present" and Y = "conference". X' is the transfer result of X.

The source expression of the transfer knowledge is expressed by a constituent boundary pattern, which is defined as a sequence that consists of variables and symbols representing constituent boundaries (Furuse, 1994). A variable corresponds to some linguistic constituent. A constituent boundary is expressed by either a functional word or a part-of-speech bigram marker. In the case that there is no functional surface word that divides the expression into two constituents, a part-of-speech bigram is employed as a boundary marker, which is expressed by hyphenating the part-of-speech of a left constituent's last word and that of a right constituent's first word.

For instance, the expression "go to Kyoto" is divided into two constituents, "go" and "Kyoto". The preposition "to" can be identified as a constituent boundary. Therefore, in parsing "go to Kyoto", we use the pattern "X to Y".

The expression "I go" can be divided into two constituents, "I" and "go", which are a pronoun and a verb, respectively. Since there is no functional surface word between the two constituents, pronoun-verb can be inserted as a boundary marker into "I go", giving "I pronoun-verb go", which will now match the general transfer knowledge pattern "X pronoun-verb Y".

2.2 Left-to-right parsing

In TDMT, possible source language structures are derived by applying the constituent boundary patterns of transfer knowledge source parts to an input string in a left-to-right fashion (Furuse, 1996), based on a chart parsing method. An input string is parsed by combining active and passive arcs, shifting the processed string left-to-right. In order to limit the combinations of patterns during pattern application, each pattern is assigned its linguistic level, and for each linguistic level, we specify the linguistic sublevels permitted to be used in the assigned variables.
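As an illustration of how a target pattern might be selected from such transfer knowledge, consider the following sketch. It is our own Python rendering, not the system's code; semantic_distance is a stand-in for the thesaurus-based measure of Sumita (1992) described in section 2.3.

  # A toy rendering of transfer-knowledge selection (illustrative only).
  TRANSFER = {
      "X at Y": [
          ("Y' de X'", [("present", "conference")]),
          ("Y' ni X'", [("stay", "hotel")]),
          ("Y' wo X'", [("look", "it")]),
      ],
  }

  def semantic_distance(pair, example):
      """Stand-in for the thesaurus-based distance of Sumita (1992)."""
      return 0.0 if pair == example else 1.0

  def select_target_pattern(source_pattern, x, y):
      # Pick the target pattern whose example set is closest to the input
      # variable bindings.
      candidates = TRANSFER[source_pattern]
      return min(
          candidates,
          key=lambda c: min(semantic_distance((x, y), ex) for ex in c[1]),
      )[0]

  # select_target_pattern("X at Y", "present", "conference") -> "Y' de X'"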
[Figure 1: Substructures for "I go to Kyoto": the passive and active arcs (a)-(f) built from "I", "go", and "Kyoto" with the patterns "X pronoun-verb Y" and "X to Y"]

Figure 1 shows the substructures for each passive arc and each active arc in "I go to Kyoto". A processed string is indicated by "#". A passive arc is created from a content word, shown in (a), or from a combination of patterns for which all of the variables are instantiated, like (c), (e), and (f). An active arc, which corresponds to an incomplete substructure, is created from a combination of patterns some of which have uninstantiated variables as right-hand neighbors to the processed string, like (b) and (d).

If the processed string creates a passive arc for a substring and the passive arc satisfies the leftmost part of an uninstantiated variable in the pattern of active arcs for the left-neighboring substring, the variable is instantiated with the passive arc. Suppose that the processed string is "Kyoto" in "I go to Kyoto". The passive arc (e) is created, and it instantiates Y of the active arc (b). Thus, by combining (b) and (e), the structure of "I go to Kyoto" is composed like (f). If a passive arc is generated in such an operation, the creation of a new arc by variable instantiation is repeated. If a new arc can no longer be created, the processed string is shifted to the right-neighboring string. If the whole input string can be covered with a passive arc, the parsing will succeed.

2.3 Disambiguation

The left-to-right parsing determines the best structure and best transferred result locally by performing structural disambiguation using semantic distance calculations, in parallel with the derivation of possible structures (Furuse, 1996). The best structure is determined when a relative passive arc is created. Only the best substructure is retained and combined with other arcs. The best structure is selected by computing the total sum of all the possible combinations of the partial semantic distance values. The structure with the smallest total distance is chosen as the best structure. The semantic distance is calculated according to the relationship of the positions of the words' semantic attributes in the thesaurus (Sumita, 1992).

3 Splitting strategy

If the parsing of long or ill-formed input is only undertaken by the application of stored patterns, it often fails and generates no results. Our strategy for parsing such input is to split the input into units each of which can be parsed and translated; it is explained as items (A)-(F) in this section.

3.1 Concatenation of neighboring substructures

The splitting is performed during left-to-right parsing as follows:

(A) Neighboring passive arcs can create a larger passive arc by concatenating them.

(B) A passive arc which concatenates neighboring passive arcs can be further concatenated with the right-neighboring passive arc.

These items enable two neighboring substructures to compose a structure even if there is no stored pattern which combines them. Figure 2 shows structure composition from neighboring substructures based on these items. α, β, and γ are structures of neighboring substrings. The triangles express substructures composed only from stored patterns. The boxes express substructures produced by concatenating neighboring substructures. δ is composed from its neighboring substructures, i.e., α and β. In addition, ε is composed from its neighboring substructures, i.e., δ and γ.
Figure 2: Structure from split substructures

Items (A) and (B) enable a colloquial utterance such as (1) to compose a structure by splitting, as shown in Figure 3.

(1) "Certainly sir for how many people please"

Figure 3: Structure for (1)

3.2 Splitting input into well-formed parts and ill-formed parts

Item (C) splits input into well-formed parts and ill-formed parts, and enables parsing in cases where the input is ill-formed or the translation rules are insufficient. The well-formed parts can have patterns applied to them, or they can consist of one content word. The ill-formed parts, which consist of one functional word or one part-of-speech bigram marker, are split from the well-formed parts.

(C) In addition to content words, boundary markers, namely, any functional words and inserted part-of-speech bigram markers, also create a passive arc and compose a substructure.

(2) "They also have tennis courts too plus a disco"

(3) "Four please two children two adults"

Suppose that the substrings of utterance (2), "they also have tennis courts too" and "a disco", can create a passive arc, and that the system has not yet learned a pattern to which the preposition "plus" is relevant, such as "X plus Y" or "plus X". Also, suppose that the substrings of utterance (3), "four please" and "two children two adults", can create a passive arc, that the part-of-speech bigram marker "adverb-numeral" is inserted between these substrings, and that the system does not know the pattern "X adverb-numeral Y" to combine a sentence for X and a noun phrase for Y. By item (C), utterances (2) and (3) can be parsed in these situations as shown in Figure 4.

Figure 4: Structures for (2) and (3)

3.3 Structure preference

Although the splitting strategy improves the robustness of the parsing, heavy dependence on the splitting strategy should be avoided. Since a global structure has more syntactic and semantic relations than a set of fragmental expressions, in general, the translation of a global expression tends to be better than the translation of a set of fragmental expressions. Accordingly, the splitting strategy should be used as a backup function.

Figure 5 shows three possible structures for "go to Kyoto". (a) is a structure relevant to the pattern "X to Y" at the verb phrase level. In (b), the input string is split into two substrings, "go" and "to Kyoto". In (c), the input string is split into three substrings, "go", "to", and "Kyoto". The digit described at the vertex of a triangle is the sum of distance values for that structure. Among these three, (a), which does not use splitting, is the best structure. Item (D) is regulated to give low priority to structures that include split substructures.

(D) When a structure is composed by splitting, a large distance value is assigned.

In the TDMT system, the distance value in each variable varies from 0 to 1. We experimentally assigned the distance value of 5.00 to one application of splitting, and 0.00 to a structure including only one word or one part-of-speech bigram marker.(2)

[Figure 5: Structures for "go to Kyoto": (a) the unsplit structure, (b) the two-way split, (c) the three-way split, with their distance values]

Suppose that the substructures in Figure 5 are assigned the following distance values. The total distance value of (a) is 0.33. The splitting is applied to (b) and (c), once and twice, respectively. Therefore, the total distance value of (b) is 0.00+0.33+5.00x1=5.33, and that of (c) is 0.00+0.00+0.00+5.00x2=10.00. (a) is selected as the best structure because it gives the smallest total distance value.
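The preference computation thus reduces to a simple formula: total distance = sum of the substructure distances + 5.00 x number of splits. A minimal sketch in Python (our own illustration; the structure representation is ours):

  # Sketch of the structure preference of item (D): each application of
  # splitting adds a penalty of 5.00 to the summed substructure distances.
  SPLIT_PENALTY = 5.00

  def total_distance(substructure_distances, num_splits):
      return sum(substructure_distances) + SPLIT_PENALTY * num_splits

  candidates = {
      "(a)": ([0.33], 0),              # unsplit "go to Kyoto"
      "(b)": ([0.00, 0.33], 1),        # "go" + "to Kyoto"
      "(c)": ([0.00, 0.00, 0.00], 2),  # "go" + "to" + "Kyoto"
  }
  best = min(candidates, key=lambda k: total_distance(*candidates[k]))
  # total distances: (a) 0.33, (b) 5.33, (c) 10.00 -> best is "(a)"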
3.4 Translation output

The results gained from a structure corresponding to a passive arc can be transferred, and a partial translation result can then be generated. The translation result of a split structure is formed as follows:

(E) The complete translation result is formed by concatenating the partial translation results of each split unit.

A punctuation mark such as "," can be inserted between partial translation results to make the complete translation result clear, although we cannot expect punctuation in an input utterance. The EJ translation result of utterance (1) is as follows:

  certainly sir | for how many people please
  hai , nan-nin desu ka

Strings such as functional words and part-of-speech bigram markers have no target expression, and are transferred as follows:

(2) These values are tentatively assigned through comparing the splitting performance for some values, and are effective only for the present TDMT system.
The splitting also improves the parsing success rate and the understandability of the output in every translation. The output rates of the JK and KJ transla- tions were small without splitting because the amount of sample sentences is less than that for the JE and EJ translations. However, the split- ting compensated for the shortage of sample sentences and raised the output rate to 100 %. Since Japanese and Korean are linguistically close, the splitting method increases the under- standable results for JK and KJ translations more than for JE and EJ translations. 4.2 Utterance splitting into sentences In order to gain a good translation result for an utterance including more than one sentence, the utterance should be split into proper sen- tences. The distance calculation mechanism aims to split an utterance into sentences cor- rectly. (4) "Yes that will be fine at five o'clock we will re- move the bea~' For instance, splitting is necessary to trans- late utterance (4), which includes more than one sentence. The candidates for (4)'s structure are shown in Figure 6. The total distance value of (a) is 0.00+1.11+5.00×1=6.11, that of (b) is 0.00+0.00+1.11+5.00×2=11.11, and that of (c) is 0.83+0.00+0.42+5.00×2=11.25. As (a) has the smallest total distance, it is chosen as the best structure, and this agrees with our intuition. 425 (a) (b) (c) Figure 6: Structures for (4) We have checked the accuracy of utterance splitting by using 277 Japanese utterances and 368 English utterances, all of which included more than one sentence. Table 2 shows the suc- cess rates for splitting the utterances into sen- tences. Although TDMT can also use the pat- tern "X boundary Y" in which X and Y are at the sentence level to split the utterances, the proposed splitting method increases the success rates for splitting the utterances in both lan- guages. Table 2: Success rates for splitting utterances w/o splitting w/ splitting Japanese 75.8 83.8 English 59.2 69.3 4.3 Translation after speech recognition Speech recognition sometimes produces inaccu- rate results from an actual utterance, and erro- neous parts often provide ill-formed translation inputs. However, our splitting method can also produce some translation results from such mis- recognized inputs and improve the understand- ability of the resulting speech-translation. Table 3 shows an example of a JE translation of a recognition result including a substitution error. The underlined words are misrecognized parts. "youi(preparation)" in the utterance is re- placed with "yom'(postposition)". Table 4 shows an example of a JE translation of a recognition result including an insertion er- ror. "wo" has been inserted into the utterance after speech recognition. The translation of the speech recognition result, is the same as that of the utterance except for the addition of ".."; ".." is the translation result for "wo", which is a postposition mainly signing an object. Table 5 shows an example of the EJ trans- lation of a recognition result including a dele- tion error. "'s" in the utterance is deleted after speech recognition. In the translation of this result, ".." appears instead of "wa", which is a postposition signing topic. ".." is the trans- lation for marker "pronoun-adverb", which has been inserted between "that" and "a//". The recognition result is split into three parts "yes that", "pronoun-adverb", and "all correct". 
Although the translations in Tables 3, 4, and 5 might be slightly degraded by the splitting, the meaning of each utterance can be communicated with these translations.

We have examined the effect of splitting on JE speech translation using 47 erroneous recognition results of Japanese utterances. These utterances have been used as example utterances by the TDMT system. Therefore, for utterances correctly recognized, the translations of the recognition results should succeed. The erroneous recognition results were collected from an experimental base using the method of Shimizu (1996).

Table 3: Substitution error in JE translation
                      translation input                      TDMT system's translation result
  utterance           Chousyoku no go youi wa deki masu ga   We can prepare breakfast.
  recognition result  Chousyoku no go yori wa deki masu ga   Breakfast .. .. we can do.

Table 4: Insertion error in JE translation
                      translation input                      TDMT system's translation result
  utterance           Soreto yoyaku ga hitsuyou desu ka      And is a reservation necessary?
  recognition result  Soreto wo yoyaku ga hitsuyou desu ka   And .. is a reservation necessary?

Table 5: Deletion error in EJ translation
                      translation input                      TDMT system's translation result
  utterance           Yes that's all correct                 Hai sore wa mattaku tadashii desu.
  recognition result  Yes that all correct                   Hai sore .. mattaku tadashii desu.

Table 6 shows the numbers of sentences at each level based on the extent to which the meaning of an utterance can be understood from the translation result. Without the splitting, only 19.1% of the erroneous recognition results are wholly or partially understandable. The splitting method increases this rate to 57.4%. Failures in spite of the splitting are mainly caused by the misrecognition of key parts such as predicates.

Table 6: Translation after erroneous recognition
                                           w/o splitting   w/ splitting
  wholly understandable                    6 (12.8%)       15 (31.9%)
  partially understandable                 3 (6.3%)        12 (25.5%)
  misunderstood, or never understandable   6 (12.8%)       20 (42.6%)
  null output                              32 (68.1%)      0 (0.0%)

4.4 Translation time

Since our splitting method is performed under left-to-right parsing, translation efficiency is not a serious problem. We have compared EJ translation times in the TDMT system for two cases: one without the splitting method, and the other with it. Table 7 shows the translation time of English sentences with an average input length of 7.1 words, and of English utterances consisting of more than one sentence with an average input length of 11.4 words. The translation times of the TDMT system, written in LISP, were measured using a Sparc 10 workstation.

Table 7: Translation time of EJ input
             w/o splitting   w/ splitting
  sentence   0.35 sec        0.36 sec
  utterance  0.60 sec        0.61 sec

The time difference between the two situations is small. This shows that the translation efficiency of TDMT is maintained even if the splitting method is introduced to TDMT.

5 Concluding remarks

We have proposed an input-splitting method for translating spoken language which includes many long or ill-formed expressions. Experimental results have shown that the proposed method improves TDMT's performance without degrading the translation efficiency. The proposed method is applicable not only to TDMT but also to other frameworks that utilize left-to-right parsing and a score for a substructure. One important future research goal is the achievement of a simultaneous interpretation mechanism for application to a practical spoken-language translation system.
The left-to-right mechanism should be main- tained for that purpose. Our splitting method meets this requirement, and can be applied to multi-lingual translation because of its universal framework. References O. Furuse and H. Iida. 1994. Constituent Boundary Parsing for Example-Based Ma- chine Translation. In Proc. of Coling '94, pages 105-111. O. Furuse and H. Iida. 1996. Incremental Translation Utilizing Constituent Boundary Patterns. In Proc. of Coling '96, pages 412- 417. Y.B. Kim and T. Ehara. 1994. An Auto- matic Sentence Breaking and Subject Supple- ment Method for J/E Machine Translation (in Japanese). In Transactions of Informa- tion Processing Society of Japan, Vol. 35, No. 6, pages 1018-1028. T. Shimizu, H. Yamamoto, H. Masataki, S. Matsunaga, and Y. Sagisaka. 1996. Spon- taneous Dialogue Speech Recognition us- ing Cross-word Context Constrained Word Graphs. In Proc. of ICASSP '96, pages 145- 148. E. Sumita and H. Iida. 1992. Example-Based Transfer of Japanese Adnominai Particles into English. IEICE Transactions on Infor- mation and Systems, E75-D, No. 4, pages 585-594. Y. Wakita, J. Kawai, and H. Iida. 1997. Cor- rect parts extraction from speech recognition results using semantic distance calculation, and its application to speech translation. In Proc. of ACL//EACL Workshop on Spoken Language Translation, pages 24-31. 427
1998
70
Automatic extraction of subcorpora based on subcategorization frames from a part-of-speech tagged corpus Susanne GAHL UC Berkeley, Department of Linguistics ICSI 1947 Center St, Suite 600 Berkeley, CA 94704-1105 [email protected] Abstract This paper presents a method for extracting subcorpora documenting different subcate- gorization frames for verbs, nouns, and adjectives in the 100 mio. word British National Corpus. The extraction tool consists of a set of batch files for use with the Corpus Query Processor (CQP), which is part of the IMS corpus workbench (cf. Christ 1994a,b). A macroprocessor has been developed that allows the user to specify in a simple input file which subcorpora are to be created for a given lemma. The resulting subcorpora can be used (1) to provide evidence for the subcategorization properties of a given lemma, and to facilitate the selection of corpus lines for lexicographic research, and (2) to determine the frequencies of different syntactic contexts of each lemma. Introduction A number of resources are available for obtaining subcategorization information, i.e. information on the types of syntactic complements associated with valence-bearing predicators (which include verbs, nouns, and adjectives). This information, also referred to as valence information is available both in machine-readable form, as in the COMLEX database (Macleod et al. 1995), and in human- readable dictionaries (e.g. Hornby 1989, Procter 1978, Sinclair 1987). Increasingly, tools are also becoming available for acquiring subcategorization information from corpora, i.e. for inferring the subcategorization frames of a given lemma (e.g. Manning 1993). None of these resources provide immediate access to corpus evidence, nor do they provide information on the relative frequency of the patterns that are listed for a given lemma. There is a need for a tool that can (1) find evidence for subcategorization patterns and (2) determine their frequencies in large corpora: 1. Statistical approaches to NLP rely on information not just on the range of combinatory possibilities of words, but also the relative frequencies of the expected patterns. 2. Dictionaries that list subcategorization frames often list expected patterns, rather than actual ones. Lexicographers and lexicologist need access to the evidence for this information. 3. Frequency information has come to be the focus of much psycholinguistic research on sentence processing (see for example MacDonald 1997). While information on word frequency is readily available (e.g. Francis and Kucera (1982)), there is as yet no easy way of obtaining information from large corpora on the relative frequency of complemen- tation patterns. None of these points argue against the use- fulness of the available resources, but they show that there is a gap in the available in- formation. To address this need, we have developed a tool for extracting evidence for subcategorization patterns from the 100 mio. word British National Corpus (BNC). The tool is used as pan of the lexicon-building process in the FrameNet project, an NSF-funded project aimed at creating a lexical database based on the principles of Frame Semantics (Fillmore 1982). 428 1 Infrastructure 1.1 Tools We are using the 100 mio. 
word British National Corpus, with the following corpus query tools:

• CQP (Corpus Query Processor, Christ (1994)), a general corpus query processor for complex queries with any number and combination of annotated information types, including part-of-speech tags, morphosyntactic tags, lemmas and sentence boundaries.

• A macroprocessor for use with CQP that allows the user to specify which subcorpora are to be created for a given lemma.

The corpus queries are written in the CQP corpus query language, which uses regular expressions over part-of-speech tags, lemmas, morphosyntactic tags, and sentence boundaries. For details, see Christ (1994a). The queries essentially simulate a chunk parser, using a regular grammar.

1.2 Coverage

A list of the verb frames that are currently searchable is given in figure 1 below, along with an example of each pattern. The categories we are using are roughly based on those used in the COMLEX syntactic dictionary (Macleod et al. 1995).

  intransitive      'worms wiggle'
  np                'kiss me'
  np_np             'brought her flowers'
  np_pp             'replaced it with a new one'
  np_Pvping         'prevented him from leaving'
  np_pwh            'asked her about what it all meant'
  np_vpto           'advised her to go'
  np_vping          'kept them laughing'
  np_sfin           'told them (that) he was back'
  np_wh             'asked him where the money was'
  np_ap             'considered him foolish'
  np_sbrst          'had him clean up'
  ap                'turned blue'
  pp                'look at the picture'
  pp-pp             'turned from a frog into a prince'
  Pvping            'responded by nodding her head'
  Pwh               'wonder about how it happened'
  intrans. part.    'touch down', 'turn over'
  np_particle       'put the dishes away', 'put away the dishes'
  particle_pp       'run off with it'
  particle_wh       'figured out how to get there'
  vping             'needs fixing'
  sfin              'claimed (that) it was over'
  sbrst             'demanded (that) he leave'
  vpto              'agreed to do it over'
  directquote       'no, said he', '"no", he said', 'he said: "no"'
  adverb complement 'behave badly'

  figure 1: searchable types for verbs

In our queries for nouns and adjectives as targets, we are able to extract prepositional, clausal, infinitival, and gerundial complements. In addition, the tool accommodates searches for compounds and for possessor phrases (my neighbor's addiction to cake, my milk allergy). Even though these categories are not tied to the syntactic subcategorization frames of the target lemmas, they often instantiate semantic arguments, or, more specifically, Frame elements (Fillmore 1982, Baker et al. forthcoming).

1.3 Method

1.3.1 Overview

We start by creating a subcorpus containing all concordance lines for a given lemma. We call this subcorpus a lemma-subcorpus. The extraction of smaller subcorpora from the lemma subcorpus then proceeds in two stages. During the first stage, syntactic patterns involving 'displaced' arguments (i.e. 'left isolation' or 'movement' phenomena) are extracted, such as passives, tough movement and constructions involving WH-extraction. The result of this procedure is a set of subcorpora that are homogeneous with respect to major constituent order. Following this, the remainder of the lemma-subcorpus is partitioned into subcorpora based on the subcategorization properties of the lemma in question.

1.3.2 Search strategies: positive and negative queries

For the extraction of certain subcategorization patterns, it is not necessary to simulate a parse of all of the constituents. Where an explicit context cue exists, a partial parse suffices. For example, the query given in figure 2 below is used to find [_NP VPing] patterns (e.g. kept them laughing).
Note that the query does not positively identify a noun phrase in the position following the verb:

  encoding:     [$search_by]   [pos != "V.*|CJC|CJS|CJT|PRF|PRP|PUN"]{1,5}   [pos = "VVG|VBG|VDG|VHG"]   within s;
  description:  target lemma | one to five tokens that are not a verb, conjunction, preposition, or punctuation mark | gerund | within a sentence
  example:      kept | them | coming

  figure 2: A query for [NP VPing]

1.3.3 Searches driven by subcategorization frames

Applying queries like the one for [NP VPing] "blindly", i.e. in the absence of any information on the target lemma, would produce many false hits, since the query also matches gerunds that are not subcategorized. However, the information that the target verb subcategorizes for a gerund dramatically reduces the number of such errors.

The same mechanism is used for addressing the problems associated with prepositional phrase attachment. The general principle is that prepositional phrases in certain contexts are considered to be embedded in a preceding noun phrase, unless the user specifies that a given preposition is subcategorized for by the target lemma. For example, the of-phrase in a sequence Verb - NP - of - NP is interpreted as part of the first NP (as in met the president of the company), unless we are dealing with a verb that has a [_NP PPof] subcategorization frame, e.g. cured the president of his asthma.

1.3.4 Cascading queries

The result of each query is subtracted from the lemma subcorpus and the remainder submitted to the next set of queries. As a result, earlier queries pre-empt later queries. For example, concordance lines matching the queries for passives, e.g. he was cured of his asthma, are filtered out early on in the process, so as to avoid getting matched by the queries dealing with (active intransitive) verb + prepositional phrase complements, such as he boasted of his achievements.

Another example of this type of preemption concerns the interaction of the query for ditransitive frames (brought her flowers) with later queries for NP complements. A proper name immediately followed by another proper name (e.g. Henry James) is interpreted as a single noun phrase except when the target lemma subcategorizes for a ditransitive frame.(1) An analogous strategy is used for identifying noun compounds. For ditransitives, strings that represent two consecutive noun phrases are queried for first. Note that this method crucially relies on the fact that the subcategorization properties of the target lemma are given as the input to the query process.

2 Examples

2.1 NPs

An example of a complex query expression of the kind we are using is given in figure 3. The expression matches noun phrases like "the three kittens", "poor Mr. Smith", "all three", "blue flowers", "an unusually large hat", etc.

  ([pos = "AT0|CRD|DPS|DT0|ORD|CJT-DT0|CRD-PNI"]* [pos = "AV0|AJ0-AV0"]*
   [pos = "AJ0|AJC|AJS|AJ0-AV0|AJ0-NN1|AJ0-VVG"]*
   [pos = "NN0|NN1|NN2|AJ0-NN1|NN1-NP0|NN1-VVB|NN1-VVG|NN2-VVZ"]) |
  ([pos = "AT0|CRD|DPS|DT0|ORD|CJT-DT0|CRD-PNI"]+ [pos = "AV0|AJ0-AV0"]*
   [pos = "AJ0|AJC|AJS|AJ0-AV0|AJ0-NN1|AJ0-VVG"]+) |
  ([pos = "AT0|CRD|DPS|DT0|ORD|CJT-DT0|CRD-PNI"]* [pos = "AV0|AJ0-AV0"]*
   [pos = "AJ0|AJC|AJS|AJ0-AV0|AJ0-NN1|AJ0-VVG"]* [pos = "NP0|NN1-NP0"]+) |
  ([pos = "AJ0|AJC|AJS"]* [pos = "PNI|PNP|PNX|CRD-PNI"])

  figure 3: A regular expression matching NPs

2.2 Coordinated passives

As an example of a query matching a 'movement' structure, consider the query for coordinated passives, given in figure 4 below.
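Before looking at that query, the cascading regime of section 1.3.4 can be pictured with a small sketch. This is illustrative Python, not part of the tool; the matches function abstracts the actual CQP calls.

  def cascade(lemma_subcorpus, queries, matches):
      """Apply queries in order; earlier queries pre-empt later ones.

      `queries` is an ordered list of (frame_name, query) pairs and
      `matches(query, lines)` returns the set of concordance lines matched.
      """
      remainder = set(lemma_subcorpus)
      subcorpora = {}
      for frame, query in queries:
          hits = matches(query, remainder)
          subcorpora[frame] = hits
          remainder -= hits        # the remainder feeds the next query
      subcorpora["unmatched"] = remainder
      return subcorpora

Ordering the passive and ditransitive queries before the plain NP and PP queries then yields exactly the preemption behaviour described above.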
The leftmost column gives the query expression itself, while the other columns show concordance lines found by this query.(1)

[figure 4: A query for coordinated passives of the verb cure: a form of "be", a past participle (e.g. prevented, managed, treated), coordinating material such as "but not", "or largely", "and often", and the target participle cured, shown with matching concordance lines]

(1) Inevitably, this strategy fails in some cases, such as "I'm reading Henry James now" (vs. "I read Henry stories.").

3 The macroprocessor

A macroprocessor has been developed(2) that allows the user to specify in a simple input file which subcorpora are to be created for a given lemma. The macroprocessor reads the input file and writes the number of matches for each subcategorization pattern into an output file. A sample input file for the lemma insist is given in figure 5 below.

  lemma: insist
  CQP Search Definition
  search_by: lemma
  POS: verb
  np: (y/n) n
  np_np: (y/n) n
  np_ap: (y/n) n
  np_pp: (_list_ prepositions) none
  np_ping: (_list_ prepositions) none
  np_pwh: (_list_ prepositions) none
  np_vpto: (y/n) n
  np_vping: (y/n) n
  np_sfin: (y/n) n
  np_wh: (y/n) n
  np_sbrst: (y/n) n
  save_text: no
  save_binary: yes
  pp: (_list_ prepositions) on
  ping: (_list_ prepositions) on
  pwh: (_list_ prepositions) on
  particle: (y/n) n
  np_particle: (y/n) n
  particle_pp: (y/n) n
  particle_wh: (y/n) n
  ap: (y/n) n
  directquote: (y/n) y
  sfin: (y/n) y
  sbrst: (y/n) y

  figure 5: Input form for the macroprocessor

(2) Our macroprocessor was developed by Collin Baker (ICSI-Berkeley) and Douglas Roland (U of Colorado, Boulder).

4 Output format

The subcorpora can be saved as binary files for further processing in CQP or XKWIC, an interactive corpus query tool (Christ 1994), and as text files. The text files are sorted, usually by the head of the first complement following the target lemma.

5 Limitations of the approach

Our tool relies on subcategorization information as its input. Hence it is not capable of automatically learning subcategorization frames, e.g. ones that are missing in dictionaries
[_NP], as in heal [children with asthma], rather than [_NP PPwith], as in healing [arthritis] [with a crystal ball]), Finally, the search results can only be as accu- rate as the part-of-speech tags and other an- notations in the corpus. 7 Future directions Future versions of the tool will include searches for predicative (vs. attributive) uses for adjectives and nouns. For verbs, the searches will be expanded to cover the entire set of complementation patterns described in the COMLEX syntactic dictionary. Conclusion We have presented an overview of a set of tools for extracting corpus lines illustrating subcate- gorization patterns of nouns, verbs, and adjec- tives, and for determining the frequency of these patterns. The tools are currently used as part of the FrameNet project. An overview of the whole project can be found at: http://www.icsi.berkeley.edu/~framenet. Acknowledgements This work grew out of an extremely enjoyable collaborative effort with Dr. Ulrich Heid of IMS Stuttgart and Dan Jurafsky of the University of Boulder, Colorado. I would like to thank Doug Roland and especially the untiring Collin Baker for their work on the macroprocessor. I would also like to thank the members of the FrameNet project for their comments and suggestions. I thank Judith Eckle-Kohler of IMS-Stuttgart, JB Lowe of ICSI-Berkeley and Dan Jurafsky for com- ments on an earlier draft of this paper. References Baker, C. F., Fillmore, C. J. and Lowe, J. B (forthcoming). The Berkeley FrameNet project. Proceedings of the 1998 ACL-COLING conference. Christ, O. (1994a) The IMS Corpus Workbench Technical Manual. Institut ffir maschinelle Sprachverarbeitung, Universit~t Stuttgart. Christ, O. (1994b) The XKwic User Manual. Institut fur maschinelle Sprachverarbeitung, Universit~t Stuttgart. Fillmore, C. J. (1982) Frame Semantics. In "Linguistics in the morning calm", Hanshin Publishing Co., Seoul, South Korea, 11 !-137. Francis, W. N. and Kucera, H. (1982) Frexluency Analysis of English Usage: Lexicon and Grammar, Houghton Mifflin, Boston, MA. Hornby, A. S. (1989) Oxford Advanced Learner's Dictionary of Current English. 4th edition. Oxford University Press, Oxford, England. MacDonald, M. C. (ed.) (1997) Lexical Representa- tions and Sentence Processing. Erlbaum Taylor & Francis. Macleod, C. and Grishman, R. (1995) COMLEX Syntax Reference Manual. Linguistic Data Consortium, U. of Pennsylvania. Manning, Christopher D. (1993). Automatic Acquisi- tion of a large subcategorization dictionary from corpora. Proceedings of the 31st ACL, pp. 235- 242. Procter, P. (ed.). (1989) Longman Dictionary of Contemporary English. Longman, Burnt Mill, Harlow, Essex, England. Sinclair, J. M. (1987) Collins Cobuild English Language Dictionary. Collins, London, England. 3 Our system is able to identify passive structures, tough-movement, and a number of common left isolation constructions, i.e. constructions involving 'traces' or movement sites. 432
1998
71
Semantic-Head Based Resolution of Scopal Ambiguities* BjSrn Gamb/ick Information and Computational Linguistics Language Engineering University of Helsinki SICS, Box 1263 P.O. Box 4 S-164 29 Kista, Sweden SF-00014 Helsinki, Finland gamback@sics, se Johan Bos Computational Linguistics University of the Saarland Postfach 15 11 50 D-66041 Saarbriicken, Germany bos©coli, uni- sb. de Abstract We introduce an algorithm for scope resolution in underspecified semantic representations. Scope pref- erences are suggested on the basis of semantic argu- ment structure. The major novelty of this approach is that, while maintaining an (scopally) underspec- ified semantic representation, we at the same time suggest a resolution possibility. The algorithm has been implemented and tested in a large-scale system and fared quite well: 28% of the utterances were ambiguous, 80% of these were correctly interpreted, leaving errors in only 5.7% of the utterance set. 1 Introduction Scopal ambiguities are problematic for language processing systems; resolving them might lead to combinatorial explosion. In applications like transfer-based machine translation, resolution can be avoided if transfer takes place at a rep- resentational level encoding scopal ambiguities. The key idea is to have a common representa- tion for all the possible interpretations of an am- biguous expression, as in Alshawi et al. (1991). Scopal ambiguities in the source language can then carry over to the target language. Recent research has termed this underspecification (see e.g., KSnig and Reyle (1997), Pinkal (1996)). A problem with underspecification is, how- ever, that structural restrictions are not en- coded. Clear scope configurations (preferences) in the source language are easily lost: (1) das paflt auch nicht that fits also not 'that does not fit either' (2) ich kanni sie nicht verstehen ~i I can you not understand 'I cannot understand you' * This work was funded by BMBF (German Federal Ministry of Education, Science, Research, and Technol- ogy) grant 01 IV 101 R. Thanks to Christian Lieske, Scott McGlashan, Yoshiki Mori, Manfred Pinkal, CJ Rupp, and Karsten Worm for many useful discussions. 433 In (1) the focus particle 'auch' outscopes the negation 'nicht'. The preferred reading in (2) is the one where 'nicht' has scope over the modal 'kann'. In both cases, the syntactic configu- rational information for German supports the preferred scoping: the operator with the widest scope is c-commanding the operator with nar- row scope. Preserving the suggested scope res- olution restrictions from the source language would be necessary for a correct interpretation. However, the configurational restrictions do not easily carry over to English; there is no verb movement in the English sentence of (2), so 'not' does not c-command 'can' in this case. In this paper we focus on the underspecifi- cation of scope introduced by quantifying noun phrases, adverbs, and particles. The representa- tions we will use resembles Underspecified Dis- course Representation Structures (Reyle, 1993) and Hole Semantics (Bos, 1996). Our Underspecified Semantic Representation, USR, is introduced in Section 2. Section 3 shows how USRs are built up in a compositional se- mantics. Section 4 is the main part of the paper. It introduces an algorithm in which structural constraints are used to resolve underspecified scope in USR structures. Section 5 describes an implementation of the algorithm and evaluates how well it fares on real dialogue examples. 
2 Underspecified Semantics: USR

The representation we will use, USR, is a tertiary term containing the following pieces of semantic information: a top label, a set of labeled conditions, and a set of constraints. The conditions represent ordinary predicates, quantifiers, pronouns, operators, etc., all being uniquely labeled, making it easier to refer to a particular condition. Scope (appearing in quantifiers and operators) is represented in an underspecified way by variables ("holes") ranging over labels. Labels are written as ln, holes as hn, and variables over individuals as in. The labelling allows us to state meta-level constraints on the relations between conditions. A constraint l ≤ h is a relation between a label and a hole: l is either equal to or subordinated to h (the labeled condition is within the scope denoted by the hole).

  ⟨ l1,                          (top)
    { l1: decl(h1),
      l2: pron(i1),
      l3: passen(i2,i1),
      l4: auch(h2),
      l5: nicht(h3),
      l6: group(l2,l3) },        (conditions)
    { l4 ≤ h1, l5 ≤ h1, l6 ≤ h1,
      l6 ≤ h2, l6 ≤ h3 } ⟩       (constraints)

Figure 1: The USR for 'das paßt auch nicht'.

Fig. 1 shows the USR for (1). The top label l1 introduces the entire structure and points to the declarative sentence mood operator, outscoping all other elements. The pronoun 'das' is pron, marking unresolved anaphora. 'auch' and 'nicht' are handled as operators. The verb condition (passen) and its pronoun subject are in the same scope unit, represented by a grouping.

The first three constraints state that neither the verb nor the two particles outscope the mood operator. The last two put the verb information in the scope of the particles. (NB: no restrictions are placed on the particles' relative scope.) Fig. 2 shows the subordination relations.

[Figure 2: Scopal relations in the USR: l1:decl(h1) dominates l4:auch(h2) and l5:nicht(h3), both of which dominate l6:[l3:passen l2:pron]]

A USR is interpreted with respect to a "plugging", a mapping from holes to labels (Bos, 1996). The number of readings the USR encodes equals the number of possible pluggings. Here, two pluggings do not violate the ≤ constraints:

(3) { h1 = l4, h2 = l5, h3 = l6 }
(4) { h1 = l5, h2 = l6, h3 = l4 }

The plugging in (3) resembles the reading where 'auch' outscopes 'nicht': the label for 'nicht', l5, is taken to "plug" the hole for 'auch', h2, while 'auch' (l4) is plugging the top hole of the sentence, h1. In contrast, the plugging in (4) gives the reading where the negation has wide scope.

With a plugging, a USR can be translated to a Discourse Representation Structure, DRS (Kamp and Reyle, 1993): a pron condition introduces a discourse marker which should be linked to an antecedent, group is a merge between DRSs, passen a one-place predicate, etc.

3 Construction of USRs

In addition to underspecification, we let two other principles guide the semantic construction: lexicalization (keep as much as possible of the semantics lexicalized) and compositionality (a phrase's interpretation is a function of its subphrases' interpretations). The grammar rules allow for addition of already manifest information (e.g., from the lexicon) and three ways of passing non-manifest information (e.g., about complements sought): trivial composition, functor-argument and modifier-argument application.

Trivial composition occurs in grammar rules which are semantically unary branching, i.e., the semantics of at most one of the daughter (right-hand side) nodes needs to influence the interpretation of the mother (left-hand side) node.
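Before turning to the binary application rules, note that the pluggings in (3) and (4) can be recovered mechanically. The following sketch is our own illustration in Python, not the paper's implementation; the ad hoc encoding of the USR is ours.

  from itertools import permutations

  # The USR of Fig. 1, in a simplified encoding.
  holes = ["h1", "h2", "h3"]         # argument slots of decl, auch, nicht
  labels = ["l4", "l5", "l6"]        # auch, nicht, and the verb grouping
  holes_of = {"l4": ["h2"], "l5": ["h3"], "l6": []}
  constraints = [("l4", "h1"), ("l5", "h1"), ("l6", "h1"),
                 ("l6", "h2"), ("l6", "h3")]

  def below(hole, plugging, seen=frozenset()):
      """Labels at or below the label plugging `hole` (cycle-safe)."""
      label = plugging[hole]
      if label in seen:
          return set()
      result = {label}
      for h in holes_of[label]:
          result |= below(h, plugging, seen | {label})
      return result

  admissible = [dict(zip(holes, p)) for p in permutations(labels)
                if all(l in below(h, dict(zip(holes, p)))
                       for l, h in constraints)]
  # Of the six candidate pluggings, exactly two survive, matching (3)
  # and (4): {'h1':'l4','h2':'l5','h3':'l6'} and {'h1':'l5','h2':'l6','h3':'l4'}.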
3 Construction of USRs

In addition to underspecification, we let two other principles guide the semantic construction: lexicalization (keep as much as possible of the semantics lexicalized) and compositionality (a phrase's interpretation is a function of its subphrases' interpretations). The grammar rules allow for addition of already manifest information (e.g., from the lexicon) and three ways of passing non-manifest information (e.g., about complements sought): trivial composition, functor-argument and modifier-argument application.

Trivial composition occurs in grammar rules which are semantically unary branching, i.e., the semantics of at most one of the daughter (right-hand side) nodes needs to influence the interpretation of the mother (left-hand side) node.

The application type rules appear on semantically binary branching rules: In functor-argument application the bulk of the semantic information is passed between the mother node and the functor (semantic head). In modifier-argument application the argument is the semantic head, so most information is passed up from that. (Most notably, the label identifying the entire structure will be the one of the head daughter. We will refer to it as the main label.)

The difference between the two application types pertains to the (semantic) subcategorization schemes: In functor-argument application (5), the functor subcategorizes for the argument, the argument may optionally subcategorize for the functor, and the mother's subcategorization list is the functor's, minus the argument:

(5)  Mother [main-label L, subcat S]  ⇒
     Functor (head) [main-label L, subcat ⟨[1]⟩ ⊕ S]
     Argument (nonhead) [main-label [1], subcat ⟨(Functor)⟩]

In modifier-argument application (6), Modifier subcategorizes for Argument (only), while Argument does not subcategorize for Modifier. Its subcat list is passed unchanged to Mother.

(6)  Mother [main-label Label, subcat S]  ⇒
     Modifier (nonhead) [subcat ⟨[1]⟩]
     Argument (head) [main-label [1] = Label, subcat S]

4 A Resolution Algorithm

Previous approaches to scopal resolution have mainly treated the scopal constraints separately from the rest of the semantic structure and argued that contextual information must be taken into account for correct resolution. However, the SRI Core Language Engine used a straightforward approach (Moran and Pereira, 1992). Variables for the unresolved scope were asserted at the lexical level together with some constraints on the resolution. Constraints could also be added in grammar rules, albeit in a somewhat ad hoc manner. Most of the scopal resolution constraints were, though, provided by a separate knowledge base specifying the inter-relation of different scope-bearing operators. The constraints were applied in a process subsequent to the semantic construction.

4.1 Lexical entries

In contrast, we want to be able to capture the constraints already given by the function-argument structure of an utterance and provide a possible resolution of the scopal ambiguities. This resolution should be built up during the construction of (the rest of) the semantic representation. Thus we introduce a set of features (called holeinfo) on each grammatical category. On terminals, the features in this set will normally have the values shown in (7), indicating that the category does not contain a hole (isa-hole has the value no), i.e., it is a nonscope-bearing element. sb-label, the semantic-head based resolution label, is the label of the element of the substructure below it having widest scope. In the lexicon, it is the entry's own main label.

(7)  holeinfo [sb-label Label, isa-hole no, hole no]

Scope-bearing categories (quantifiers, particles, etc.) introduce holes and get the feature setting of (8). The feature hole points to the hole introduced. (Finite verbs are also treated this way: they are assumed to introduce a hole for the scope of the sentence mood operator.)

(8)  holeinfo [sb-label Label, isa-hole yes, hole Hole]

4.2 Grammar rules

When the holeinfo information is built up in the analysis tree, the sb-labels are passed up as the main labels (i.e., from the semantic head daughter to the mother node), unless the nonhead daughter of a binary branching node contains a hole. In that case, the hole is plugged with the sb-label of the head daughter and the sb-label of the mother node is that of the nonhead daughter.
The effect is that a scope-bearing nonhead daughter is given scope over the head daughter. On the top-most level of the grammar, the hole of the sentence mood operator is plugged with the sb-label of the full structure.

Concretely, grammar rules of both application types pass holeinfo as follows. If the nonhead daughter does not contain a hole, holeinfo is unchanged from head daughter to mother node:

(9)  Mother [holeinfo [1]]  ⇒
     Head [holeinfo [1]]
     Nonhead [holeinfo [isa-hole no]]

However, if the nonhead daughter does contain a hole, it is plugged with the sb-label of the head daughter and the mother node gets its sb-label from the nonhead daughter. The rest of the holeinfo still comes from the head daughter:

(10) Mother [holeinfo [sb-label SbLabel, isa-hole IsaHole, hole Hole']]  ⇒
     Head [holeinfo [sb-label HeadLabel, isa-hole IsaHole, hole Hole']]
     Nonhead [holeinfo [sb-label SbLabel, isa-hole yes, hole Hole]]

The hole to be plugged is here identified by the hole feature of the nonhead daughter. To show the preferred scopal resolution, a relation 'Hole =sb HeadLabel', a semantic-head based plugging, is introduced into the USR.

4.3 Resolution Example

We will illustrate the rules with an example. The utterance (1) 'das paßt auch nicht' has the semantic argument structure shown in Fig. 3, where Node[L, H] stands for the node Node having an sb-label L and hole feature value H.

Figure 3: Semantic argument structure (a binary tree: auch[l4, h2] applied to nicht[l5, h3], which is applied to S[l6, h1], itself formed from das[l2, no] and passen[l6, h1]).

The verb passen is first applied to the subject 'das'. The sb-label of 'passen' is its main label (the grouping label l6). Its hole feature points to h1, the mood operator's scope unit. The pronoun contains no hole (is nonscope-bearing), so we have the first case above, rule (9), in which the mother node's holeinfo is identical to that of the head daughter, as indicated in the figure.

Next, the modifier 'nicht' is applied to the verbal structure, giving the case with the nonhead daughter containing a hole, rule (10). For this hole we add a 'h3 =sb l6' to the USR: The label plugging the hole is the sb-label of the head daughter. The sb-label of the resulting structure is l5, the sb-label of the modifier. The process is repeated for 'auch' so that its hole, h2, is plugged with l5, the label of its argument. We have reached the end of the analysis and h1, the remaining hole of the entire structure, is plugged by the structure's sb-label, which is now l4. In total, three semantic-head based plugging constraints are added to the USR in Fig. 1:

(11) h1 =sb l4, h2 =sb l5, h3 =sb l6

This gives a scope preference corresponding to the plugging (3), the reading with auch outscoping nicht, resulting in the correct interpretation.

4.4 Coordination

Sentence coordinations, discourse relation adverbs, and the like add a special case. These categories force the scopal elements of their sentential complements to be resolved locally, or in other words, introduce a new hole which should be above the top holes of both complements. They get the lexical setting

(12) holeinfo [isa-hole island, hole Hole]

So, isa-hole indicates which type of hole a structure contains. The values are no, yes, and island. island is used to override the argument structure to produce a plugging where the top holes of the sentential complements get plugged with their own sb-labels. This complicates the implementation of rules (9) and (10) a bit; they must also account for the fact that a daughter node may carry an island type hole.
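The bottom-up bookkeeping of rules (9) and (10) is compact enough to fit in a few lines. The sketch below is our own reconstruction, not Verbmobil code: it ignores the island case just described and uses ad-hoc node objects rather than feature structures. Run on the tree of Fig. 3 it yields exactly the pluggings in (11).

```python
class Node:
    """Carrier for the holeinfo features of Section 4.1."""
    def __init__(self, sb_label, isa_hole="no", hole=None):
        self.sb_label, self.isa_hole, self.hole = sb_label, isa_hole, hole

def combine(head, nonhead, pluggings):
    """Rules (9)/(10): pass holeinfo up, plugging a nonhead hole."""
    if nonhead.isa_hole == "no":                          # rule (9)
        return Node(head.sb_label, head.isa_hole, head.hole)
    pluggings[nonhead.hole] = head.sb_label               # 'Hole =sb HeadLabel'
    return Node(nonhead.sb_label, head.isa_hole, head.hole)   # rule (10)

# 'das passt auch nicht' (Fig. 3); passen introduces the mood hole h1.
das    = Node("l2")
passen = Node("l6", isa_hole="yes", hole="h1")
nicht  = Node("l5", isa_hole="yes", hole="h3")
auch   = Node("l4", isa_hole="yes", hole="h2")

pluggings = {}
s = combine(passen, das, pluggings)     # rule (9): holeinfo unchanged
s = combine(s, nicht, pluggings)        # rule (10): adds h3 =sb l6
s = combine(s, auch, pluggings)         # rule (10): adds h2 =sb l5
pluggings[s.hole] = s.sb_label          # top level:  adds h1 =sb l4
print(pluggings)    # {'h3': 'l6', 'h2': 'l5', 'h1': 'l4'}, i.e. (11)
```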
5 Implementation and Evaluation

The resolution algorithm described in Section 4 has been implemented in Verbmobil, a system which translates spoken German and Japanese into English (Bub et al., 1997). The underspecified semantic representation technique we have used in this paper reflects the core semantic part of the Verbmobil Interface Term, VIT (Bos et al., 1998). The aim of VIT is to describe a consistent interface structure between the different language analysis modules within Verbmobil. Thus, in contrast to our USR, VIT is a representation that encodes all the linguistic information of an utterance; in addition to the USR semantic structure of Section 2, the Verbmobil Interface Term contains prosodic, syntactic, and discourse related information.

In order to evaluate the algorithm, the results of the pluggings obtained for four dialogues in the Verbmobil test set were checked (Table 1). We only consider utterances for which the VITs contain more than two holes: The number of scope-bearing operators is the number of holes minus one. Thus, a VIT with one hole only trivially contains the top hole of the utterance (i.e., the hole for the sentence mood predicate, introduced by the main verb). A VIT with two holes contains the top hole and the hole for one scope-taking element. However, the mood predicate will always have scope over the remaining proposition, so resolution is still trivial.

Table 1: Results of evaluation

Dial. Id. | # Utt. | # utt. ≤ 2 holes | Correct, 3 holes | Correct, 4 holes | Correct, ≥ 5 holes | % correct
B1        | 48     | 34               | 9/11             | 1/2              | 1/1                | 79
B2        | 41     | 26               | 5/8              | 2/3              | 4/4                | 73
B7        | 48     | 36               | 7/8              | 0/1              | 3/3                | 83
RHQ1      | 91     | 68               | 10/11            | 5/6              | 4/6                | 83
Total     | 228    | 164              | 31/38            | 8/12             | 12/14              | 80

The dialogues evaluated are identified as three of the "Blaubeuren" dialogues (B1, B2, and B7) and one of the "Reithinger-Herweg-Quantz" dialogues (RHQ1). These four together form the standard test set for the German language modules of the Verbmobil system.

For VITs with three or more holes, we have true ambiguities. Column 3 gives the number of utterances with no ambiguity (≤ 2 holes); the columns following look at the ambiguous sentences. Most commonly the utterances contained one true ambiguity (3 holes, as in Fig. 2). Utterances with more than two ambiguities (≥ 5 holes) are rare and have been grouped together.

Even though the algorithm is fairly straightforward, resolution based on semantic argument structure fares quite well. Only 64 (28%) of the 228 utterances are truly ambiguous (i.e., contain more than two holes). The default scoping introduced by the algorithm is the preferred one for 80% of the ambiguous utterances, leaving errors in just 13 (5.7%) of the utterances overall. Looking closer at these cases, the reasons for the failures divide as follows: the relative scope of two particles did not conform to the c-command structure assigned by syntax (one case); an indefinite noun phrase should have received wide scope (3), or narrow scope (1); an adverb should have had wide scope (3); a combination of (a modal) verb movement and negated question (1); technical construction problems in VIT (4).

The resolution algorithm has been implemented in Verbmobil in both the German semantic processing (Bos et al., 1996) and the (substantially smaller) Japanese one (Gambäck et al., 1996). Evaluating the performance of the resolution algorithm on the standard test suite for the Japanese parts of Verbmobil (the "RDSI" reference dialogue), we found that only 7 of the 36 sentences in the dialogue contained more than two holes.
All but one of the ambiguities were correctly resolved by the algorithm. Even though the number of sentences tested is certainly too small to draw any real conclusions from, the correctness rate still indicates that the algorithm is applicable also to Japanese.

6 Conclusions

We have presented an algorithm for scope resolution in underspecified semantic representations. Scope preferences are suggested on the basis of semantic argument structure, letting the nonhead daughter node outscope the head daughter in case both daughter nodes are scope-bearing. The algorithm was evaluated on four "real-life" dialogues and fared quite well: about 80% of the utterances containing scopal ambiguities were correctly interpreted by the suggested resolution, leaving scopal resolution errors in only 5.7% of the overall utterances.

The algorithm is computationally cheap and quite straightforward, yet its predictions are relatively accurate. Our results indicate that for a practical system, more sophisticated approaches to scopal resolution (i.e., based on the relations between different scope-bearing elements and/or contextual information) will not add much to the overall system performance.

References

Alshawi H., D.M. Carter, B. Gambäck, and M. Rayner. 1991. Translation by quasi logical form transfer. Proc. 29th ACL, pp. 161-168, University of California, Berkeley.
Bos J. 1996. Predicate logic unplugged. Proc. 10th Amsterdam Colloquium, pp. 133-142, University of Amsterdam, Holland.
Bos J., B. Gambäck, C. Lieske, Y. Mori, M. Pinkal, and K. Worm. 1996. Compositional semantics in Verbmobil. Proc. 16th COLING, vol. 1, pp. 131-136, København, Denmark.
Bos J., B. Buschbeck-Wolf, M. Dorna, and C.J. Rupp. 1998. Managing information at linguistic interfaces. Proc. 17th COLING and 36th ACL, Montreal, Canada.
Bub T., W. Wahlster, and A. Waibel. 1997. Verbmobil: The combination of deep and shallow processing for spontaneous speech translation. Proc. Int. Conf. on Acoustics, Speech and Signal Processing, pp. 71-74, München, Germany.
Gambäck B., C. Lieske, and Y. Mori. 1996. Underspecified Japanese semantics in a machine translation system. Proc. 11th Pacific Asia Conf. on Language, Information and Computation, pp. 53-62, Seoul, Korea.
Kamp H. and U. Reyle. 1993. From Discourse to Logic. Kluwer, Dordrecht, Holland.
König E. and U. Reyle. 1997. A general reasoning scheme for underspecified representations. In H.J. Ohlbach and U. Reyle, eds, Logic and its Applications. Festschrift for Dov Gabbay. Part I. Kluwer, Dordrecht, Holland.
Moran D.B. and F.C.N. Pereira. 1992. Quantifier scoping. In Alshawi H., ed., The Core Language Engine. The MIT Press, Cambridge, Massachusetts, pp. 149-172.
Pinkal M. 1996. Radical underspecification. Proc. 10th Amsterdam Colloquium, pp. 587-606, University of Amsterdam, Holland.
Reyle U. 1993. Dealing with ambiguities by underspecification: Construction, representation and deduction. Journal of Semantics, 10:123-179.
Vers l'utilisation des méthodes formelles pour le développement de linguiciels (Towards the use of formal methods for the development of lingware)

Bilel Gargouri, Mohamed Jmaiel, Abdelmajid Ben Hamadou
Laboratoire LARIS, FSEG-SFAX, B.P. 1088, 3018 SFAX, TUNISIA
E-mail: {[email protected]}

Abstract

Formal methods have not been applied enough in the development process of lingware, although their advantages have been proved in many other domains. In this framework, we have investigated some applications dealing with different processing levels (lexical analysis, morphology, syntax, semantics and pragmatics). These investigations have mainly led to the following observations. First of all, we have noticed a lack of use of methodologies that cover the whole life cycle of software development. Formal specification has not been used in the first development phases. In addition, we have noticed the lack of formal validation and consequently the insufficient guarantee on the results of the developed software. Moreover, there has been no appeal to rigorous integration methods to solve the problem of the dichotomy between data and processing. Furthermore, the use of the formal aspect in Natural Language Processing (NLP) has generally been limited to describing the natural language knowledge (i.e., grammars) and specifying the treatments using algorithmic languages. Few are those who have used a high-level specification language. This paper focuses on the contributions of formal methods to developing natural language software, starting from an experiment carried out on a real application, which consists in specifying and validating the system CORTEXA (Correction ORthographique des TEXtes Arabes) using the VDM formal method. First of all, we review the advantages of formal methods in the general software development process. Then, we present the experiment and the obtained results. After that, we place the advantages of formal methods in the context of NLP. Finally, we give some methodological criteria that allow the choice of an appropriate formal method.

Résumé (in English)

Formal methods have not been used sufficiently in the development process of lingware, even though they have proved themselves in other domains. The present article tries to highlight the advantages of formal methods in the context of natural languages, starting from the results of an experiment carried out on a real application. First, we recall the overall advantages of formal methods in the software development process. Then, we place these advantages in the context of natural languages. Finally, we give methodological criteria for the choice of an appropriate formal method.

1 Introduction

The automation of natural languages has benefited up to now from many years of research and is still the object of numerous works, notably in the domain of language engineering for the development of specific applications. The study of the development approaches for applications related to Natural Language Processing (NLP), at all its levels (i.e., lexical, morphological, syntactic, semantic and pragmatic) (Fuchs, 1993; Sabah, 1989), has allowed us to observe a near-complete absence of development methodologies that integrate all the phases of the software life cycle. In particular, at the level of the first stages, we have observed the near-total absence of a formal specification phase.
On the other hand, we have observed a difficulty, indeed an absence, of formal validation of the approaches used in development, and consequently of guarantees on the performance of the results obtained. Likewise, we have noticed that no rigorous integration methods have been called upon to solve the problem of the data-processing dichotomy. The use of formal tools has been limited, in most cases, to the description of the language (i.e., the grammars) and to the specification of the processing, generally reduced to the use of an algorithmic language. Few are those who have used a high-level formal specification language (Zajac, 1986; Jensen et al., 1993).

After a presentation of the advantages offered by formal methods in the software development process in general, this article tries to highlight the advantages specific to the NLP domain, starting from an experiment carried out within our team using the VDM method (Dawes, 1991; Jones, 1986). At the end, it gives criteria allowing the choice of an appropriate formal method.

2 A reminder of the main advantages of formal methods

The integration of formal methods into the development process of certain critical applications, such as real-time systems and distributed systems, has proved itself in recent years (Barroca and McDermid, 1992; Dick and Woods, 1997; Ledru, 1993). This is what has motivated their use in the development of software dealing with complex problems at the industrial level (Hui et al., 1997).

A formal method is considered as a software development approach based on mathematical notations and formal validation proofs (Habrias, 1995). This approach uses a refinement process that starts from an abstract specification of the requirements and leads to a refined, executable specification (or one directly codable in a programming language). The main advantages of formal methods can be summarized in the following points:

Precision and non-ambiguity: the use of a language based on formal, precise notations makes it possible to avoid any ambiguity and any redundancy in the specification.

Detection of design errors as early as possible: applying validation proofs to the specification all along its refinement process guarantees that design errors are detected as early as possible in the development of the application. In the absence of such validation, design errors would only be detected after the implementation phase, which would generate an additional cost.

Satisfaction of the design (possibly of the implementation) with respect to the requirements: this is guaranteed thanks to the refinement process, which starts from a specification of the requirements and applies coherent transformation rules to arrive at the final design.

Control of the data-processing coherence: this is directly taken care of thanks to the validation proofs.

Reuse: the refinement of formal specifications and their successive decompositions make it possible to bring out abstraction levels that are useful for solving the problem and for promoting the reuse (of specifications).
3 Presentation and results of the experiment

3.1 Choice and approach used

To measure the impact of the use of formal methods in the NLP context, we carried out the complete, validated specification of the CORTEXA system (Correction ORthographique des TEXtes Arabes) (BenHamadou, 1993), developed in our laboratory.

Besides the availability of documentation concerning design and implementation, the choice of the CORTEXA system is also motivated by the diversity of the approaches used for the representation of knowledge and processing. Indeed, it consists of:

• an error detection module based on an affixal analysis that uses a finite-state system (augmented transition networks: ATN). The affixal analysis decomposes a word into its primary components: prefix, infix, suffix and root, by referring to a set of lexicons and data structures,

• a module for the correction of orthographic errors that uses a rule-based system, and

• another module for the correction of typographic errors that is based on a mixed system.

The choice of VDM for the specification of CORTEXA is motivated, on the one hand, by the fact that this method is based on predicates, which give a high expressive power, and on the other hand, by its simple and rich notation. Also, VDM has proved itself in the development of several information systems. Unlike environments for the specification of linguistic data such as D-PATR (Karttunen, 1986), EAGLES (Erbach et al., 1996), etc., VDM makes it possible to specify both processing and data (in our context, linguistic data) and offers a methodology for developing applications based on validated refinements and transformations.

Starting from the informal description of the requirements, we developed the abstract specification of the CORTEXA system (also called the implicit specification), which includes, among other things, the formal specification of its functions, its actions and its correction rules. This specification was then validated using formal proofs. Finally, we derived the design specification (also called the explicit or direct specification) from the abstract specification by means of rules belonging to the VDM method. This design specification is easily transformed into code to carry out the implementation phase.

3.2 Results obtained

The use of the formal method VDM for the complete, validated specification of the CORTEXA system led, among other things, to the following observations:

Insufficiency of rules: the use of formal proofs allowed us to expose, with respect to the initial specification, certain situations that were not taken into account. In particular, the proofs ensuring that for each type of error there must exist at least one correction rule allowed us to observe that the set of correction rules initially proposed does not cover the whole typology of errors.

Example 1: proof relating to the deletion error

(∀w′ ∈ CH, ∀w ∈ Lex) . (Del(w, w′) ∧ w′ ∉ Lex) ⇒ (∃R ∈ Reg) . w ∈ R(w′)

where
Lex: the reference lexicon
CH: the set of character strings
Reg: the set of correction rules
R(w): the application of rule R to the string w.
A rule is represented in VDM by a function. Del(): a predicate that checks for a character-deletion error.

Precision and concision of the specification: comparing the informal specification of the CORTEXA system, presented in the documentation, with the formal specification we developed, we notice that the latter is more precise and more concise. Example 2, given below, which presents the formal specification of the function generating the possible affixal decompositions of a word w, illustrates this observation.

Example 2:

Isdecomp(w, p, i, s, root : CH) r : B
pre  True
post ∃a, b ∈ CH . (w = p·a·i·b·s ∧ root = a·b) ∧ (Sprefixe(w, p) ∧ Ssuffixe(w, s) ∧ Sinfixe(w, i))

where
B: the boolean type
Sinfixe() (respectively Sprefixe() and Ssuffixe()): a predicate that checks the property of being an infix (respectively a prefix and a suffix) of a string.

Ease of code development: the design specification obtained is sufficiently explicit for the data and algorithmic for the processing. It is therefore easily codable in a programming language. Example 3 illustrates the use of an algorithmic notation in the specification of functions. It presents the function Sradical, which checks the property of being a radical (formed by the root and the infix).

Example 3:

Sradical : CH × CH → B
Sradical(s1, s2) ≜ if s1 = [] then False
                   else if Sprefixe(s1, s2) then True
                   else Sradical(tl(s1), s2)

where tl(): a VDM function that returns the input sequence deprived of its head.

Uniqueness of the notation: formal methods make it possible to use the same notation to describe both data and processing. Indeed, with the VDM-SL language associated with VDM, we were able to specify all the functions and reference data of CORTEXA. Examples 4 and 5 illustrate this uniqueness for the representation of composite data and of functions.

Example 4: the record for the decomposition of a word into a prefix, an infix, a suffix and a root.

Decomp :: p : CH
          i : CH
          s : CH
          r : CH

Example 5: specification of the action that generates the correction proposals for suffixes by character deletion.

A3s(p : CH, c : CHAR) SCand : set of CH
pre  True
post ∃a, b, p1 ∈ CH . p = a·c·b ∧ p1 = a·b ∧ p1 ∈ Suff ⇒ {p1} ⊆ SCand

where
CHAR: the set of characters
SCand: the suffixes that are candidates for the correction
Suff: the set of suffixes.

Data-processing coherence: the uniqueness of the notation made it possible to apply formal proofs to both data and processing, and consequently to control their coherence. Example 1 illustrates this control in the case of a rule-based system.

Validation of each component of the system: for each component or module of the CORTEXA system, we applied the appropriate validation proofs, which allowed us to validate all the partial results of the system. The theorem of Example 6, given below, proves that, after applying the rule correcting a substitution error, the correction proposals obtained belong to the lexicon.

Example 6:

∀w′ ∈ CH, ∀w ∈ Lex . Sub(w, w′) ⇒ ∃R ∈ Reg . R(w′) ⊆ Lex

where Sub: a predicate that checks for a character-substitution error.
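For readers less used to VDM-SL, here is a small Python transcription of Examples 2 and 3. It is our own illustration, not part of CORTEXA: the prefix predicate is simplified to a plain string test, and the Arabic word below is a toy example.

```python
def s_prefixe(s1, s2):
    """Sprefixe: s2 is a prefix of s1."""
    return s1.startswith(s2)

def s_radical(s1, s2):
    """Sradical (Example 3): some suffix of s1 starts with s2."""
    if not s1:
        return False
    if s_prefixe(s1, s2):
        return True
    return s_radical(s1[1:], s2)        # tl(s1): the sequence minus its head

def is_decomp(w, p, i, s, root):
    """Post-condition of Isdecomp (Example 2): w = p.a.i.b.s with root = a.b."""
    return any(w == p + root[:k] + i + root[k:] + s
               for k in range(len(root) + 1))

print(s_radical("ktb", "tb"))                       # True
print(is_decomp("maktub", "ma", "u", "", "ktb"))    # True (a = 'kt', b = 'b')
```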
4 Benefits of formal methods in language engineering

This experiment, although rather limited in time (it lasted about one year) and in scope (it considered a single system rather than several), allowed us to properly appreciate the benefit of resorting to formal methods in the development of NLP applications. It also allowed us to identify some advantages specific to the NLP domain, which consolidate those we have already cited for software development in general. These specific advantages can be summarized and argued for in the points that follow.

First, at the level of requirements specification, NLP applications are generally very ambitious at the outset. Yet the limits of linguistic models and of knowledge representation tools are well known today. The use of formal tools in the first development stages (i.e., analysis) makes it possible to expose very quickly the limits of the system to be developed, in particular regarding linguistic coverage, and consequently to enter the design stage with a validated version of the system to be implemented, and to plan from the outset for possibilities of extension and reuse.

Moreover, the complexity of the processing related to natural language, the diversity of linguistic data, and the strong interactions between data and processing make the design task very difficult and can generate coherence problems. The use of formal methods at the design level first makes it possible to manage the data-processing dichotomy, either by integration (i.e., using the object-oriented approach) or by coherence control (i.e., through validation proofs), and then to bring out, through successive groupings and refinements, useful reusable abstractions such as modules or subsystems that can be made available in a library (Darricau et al., 1997). These abstractions correspond, for example, to standard NLP modules dealing with the phonetic, morphological, syntactic, etc. levels. Note in this respect that the reuse of (design) specifications can be done directly or through adaptations. Formal methods offer environments that facilitate these adaptations (editors, ...) and that allow the validation of the new specifications.

Finally, the use of a uniform notation makes it possible to integrate into the same application a variety of knowledge about the language specified with different formalisms (i.e., unification grammars, HPSG, formal grammars, etc.). This allows better coherence in the final specification to be produced.

5 Criteria for choosing a formal method for NLP

The use of the VDM method for the complete, validated specification of the CORTEXA system was done as a trial. Nevertheless, the choice of a formal method for the development of an NLP application remains crucial. This choice must take into account the specificities of the natural language domain, both in terms of the specification language and of the methodology applied.
In what follows, we give some criteria that we consider relevant for the choice of a formal method in the NLP context:

• The expressive power of the method: the possibility of integrating into the same specification varied linguistic knowledge described with different formalisms. The specification language must be able to unify the representation of the different expressions. The expressive power also concerns the joint specification of linguistic data and of the processing applied to them.

• Simplicity of the notation and of the development methodology.

• Maximal coverage of the life cycle of the software to be developed.

• Existence of CASE tools (Ateliers de Génie Logiciel, AGLs) that support the method.

• Ability to support the architecture of the intended system (i.e., sequential, distributed, parallel, etc.).

6 Conclusion

The use of formal methods in the context of natural languages makes it possible not only to consolidate the overall advantages of these methods in the general framework of software development, but also to bring new benefits specific to the domain. This use concerns both the development process of applications and their maintenance. However, the choice of an appropriate method remains tied to the availability of associated software tools that facilitate its use, and to the construction of a library of reusable specifications.

Currently, our work concentrates on finalizing an approach we have developed to generalize the use of formal methods (VDM or others) in the development process of lingware. This approach integrates the main existing formalisms for the description of linguistic knowledge (i.e., unification grammars, formal grammars, HPSG, etc.).

References

L. M. Barroca and J. A. McDermid. 1992. Formal methods: use and relevance for the development of safety-critical systems. The Computer Journal, 35(6).
A. BenHamadou. 1993. Vérification et correction automatiques par analyse affixale des textes écrits en langage naturel : le cas de l'arabe non voyellé. Ph.D. thesis, Faculté des Sciences de Tunis. Thèse Es-Sciences en Informatique.
M. Darricau, H. Hadj Mabrouk, and J.G. Ganascia. 1997. Une approche pour la réutilisation des spécifications de logiciels. Génie Logiciel, (45):21-27, September.
J. Dawes. 1991. The VDM-SL reference guide. Pitman Publishing.
J. Dick and E. Woods. 1997. Lessons learned from rigorous system software development. Information and Software Technology, 39:551-560.
G. Erbach, J. Dorre, S. Manandhar, and H. Uszkoreit. 1996. A report on the draft EAGLES encoding standard for HPSG. In Actes de TALN-96, Marseille, France, May.
C. Fuchs. 1993. Linguistique et Traitements Automatiques des Langues. Hachette.
H. Habrias. 1995. Les spécifications formelles pour les systèmes d'informations : quoi ? pourquoi ? comment ? Ingénierie des systèmes d'information, 3(2):205-253.
J. Hui, L. Dong, and X. Xiren. 1997. Using formal specification language in industrial software development. In Proceedings of the IEEE International Conference on Intelligent Processing Systems, pages 1847-1851, Beijing, China, October.
K. Jensen, G.E. Heidorn, and S.D. Richardson. 1993. NLP: The PLNLP Approach. Kluwer academic publishers.
C. B. Jones. 1986. Systematic software development using VDM. Prentice Hall.
L. Karttunen. 1986.
D-PATR: A development environment for unification-based grammars. In Proceedings of the 11th International Conference on Computational Linguistics, pages 74-80, Bonn, Germany.
Y. Ledru. 1993. Developing reactive systems in a VDM framework. Science of Computer Programming, 20:51-71.
G. Sabah. 1989. L'intelligence artificielle et le langage. Hermès.
R. Zajac. 1986. SCSL: a linguistic specification language for MT. In Proceedings of COLING'86, pages 25-92, Bonn, Germany, August.
Flow Network Models for Word Alignment and Terminology Extraction from Bilingual Corpora

Éric Gaussier
Xerox Research Centre Europe
6, Chemin de Maupertuis
38240 Meylan, France
[email protected]

Abstract

This paper presents a new model for word alignments between parallel sentences, which allows one to accurately estimate different parameters, in a computationally efficient way. An application of this model to bilingual terminology extraction, where terms are identified in one language and guessed, through the alignment process, in the other one, is also described. An experiment conducted on a small English-French parallel corpus gave results with high precision, demonstrating the validity of the model.

1 Introduction

Early works, (Gale and Church, 1993; Brown et al., 1993), and to a certain extent (Kay and Röscheisen, 1993), presented methods to extract bilingual lexicons of words from a parallel corpus, relying on the distribution of the words in the set of parallel sentences (or other units). (Brown et al., 1993) then extended their method and established a sound probabilistic model series, relying on different parameters describing how words within parallel sentences are aligned to each other. On the other hand, (Dagan et al., 1993) proposed an algorithm, borrowed from the field of dynamic programming and based on the output of their previous work, to find the best alignment, subject to certain constraints, between words in parallel sentences. A similar algorithm was used by (Vogel et al., 1996). Investigating alignments at the sentence level allows one to clean and refine the lexicons otherwise extracted from a parallel corpus as a whole, pruning what (Melamed, 1996) calls "indirect associations".

Now, what differentiates the models and algorithms proposed are the sets of parameters and constraints they rely on, their ability to find an appropriate solution under the constraints defined, and their ability to nicely integrate new parameters. We want to present here a model of the possible alignments in the form of flow networks. This representation allows us to define different kinds of alignments and to find the most probable alignment, or an approximation of it, under certain constraints. Our procedure presents the advantage of an accurate modelling of the possible alignments, and can be used on small corpora. We will introduce this model in the next section. Section 3 describes a particular use of this model to find term translations, and presents the results we obtained for this task on a small corpus. Finally, the main features of our work and the research directions we envisage are summarized in the conclusion.

2 Alignments and flow networks

Let us first consider the following aligned sentences, with the actual alignment between words:¹

Assuming that we have probabilities of associating English and French words, one way to find the preceding alignment is to search for the most probable alignment under the constraints that any given English (resp. French) word is associated to one and only one French (resp. English) word. We can view a connection between an English and a French word as a flow going from an English to a French word. The preceding constraints state that the outgoing flow of an English word and the ingoing one of a French word must equal 1.

¹All the examples consider English and French as the source and target languages, even though the method we propose is independent of the language pair under consideration.
We also have connections entering the English words, from a source, and leaving the French ones, to a sink, to control the flow quantity we want to go through the words.

2.1 Flow networks

We meet here the notion of flow networks, which we can formalise in the following way (we assume that the reader has basic notions of graph theory).

Definition 1: let G = (V, E) be a directed connected graph with m edges. A flow in G is a vector

φ = (φ1, φ2, ..., φm)ᵀ ∈ ℝᵐ

(where ᵀ denotes the transpose of a matrix) such that, for each vertex i ∈ V:

Σ_{u ∈ ω+(i)} φu = Σ_{u ∈ ω−(i)} φu    (1)

where ω+(i) denotes the set of edges entering vertex i, whereas ω−(i) is the set of edges leaving vertex i.

We can, furthermore, associate to each edge u of G = (V, E) two numbers, bu and cu with bu ≤ cu, which will be called the lower capacity bound and the upper capacity bound of the edge.

Definition 2: let G = (V, E) be a directed connected graph with lower and upper capacity bounds. We will say that a flow φ in G is a feasible flow in G if it satisfies the following capacity constraints:

∀u ∈ E, bu ≤ φu ≤ cu    (2)

Finally, let us associate to each edge u of a directed connected graph G = (V, E) with capacity intervals [bu; cu] a cost γu, representing the cost (or inversely the probability) of using this edge in a flow. We can define the total cost, γ × φ, associated to a flow φ in G as follows:

γ × φ = Σ_{u ∈ E} γu φu    (3)

Definition 3: let G = (V, E) be a connected graph with capacity intervals [bu; cu], u ∈ E, and costs γu, u ∈ E. We will call minimal cost flow the feasible flow in G for which γ × φ is minimal.

Several algorithms have been proposed to compute the minimal cost flow when it exists. We will not detail them here but refer the interested reader to (Ford and Fulkerson, 1962; Klein, 1967).
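As a concrete illustration of Definition 3, the small sketch below builds a simplified alignment network and solves it with an off-the-shelf min-cost-flow routine. It is our own example, not the paper's implementation: we use the networkx library, drop the empty words, assume sentences of equal length, and replace the [1;1] lower bounds by node demands; the probabilities are invented.

```python
import math
import networkx as nx

english = ["I", "cannot", "understand", "you"]
french = ["je", "ne_peux_pas", "comprendre", "vous"]
# Invented association probabilities; unseen pairs get a small floor.
p = {("I", "je"): 0.9, ("cannot", "ne_peux_pas"): 0.8,
     ("understand", "comprendre"): 0.7, ("you", "vous"): 0.9}

G = nx.DiGraph()
G.add_node("source", demand=-len(english))   # pushes one unit per word
G.add_node("sink", demand=len(french))
for e in english:
    G.add_edge("source", e, capacity=1, weight=0)
    for f in french:
        cost = -math.log(p.get((e, f), 0.01))        # gamma = -ln p(e, f)
        G.add_edge(e, f, capacity=1, weight=int(1000 * cost))
for f in french:
    G.add_edge(f, "sink", capacity=1, weight=0)

flow = nx.min_cost_flow(G)
print([(e, f) for e in english for f in french if flow[e][f] == 1])
# [('I', 'je'), ('cannot', 'ne_peux_pas'),
#  ('understand', 'comprendre'), ('you', 'vous')]
```

Since each unit-capacity edge out of the source must be saturated to meet the demand, every English word emits exactly one unit, and symmetrically every French word absorbs exactly one, so the minimal cost flow is a minimum-cost perfect matching.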
2.2 Alignment models

Flows and networks define a general framework in which it is possible to model alignments between words, and to find, under certain constraints, the best alignment. We present now an instance of such a model, where the only parameters involved are association probabilities between English and French words, and in which we impose that any English, respectively French, word has to be aligned with one and only one French, resp. English, word, possibly empty. We can, of course, consider different constraints. The constraints we define, though they would yield a complex computation for the EM algorithm, do not privilege any direction in an underlying translation process.

This model defines for each pair of aligned sentences a graph G(V, E) as follows:

• V comprises a source, a sink, all the English and French words, an empty English word, and an empty French word,
• E comprises edges from the source to all the English words (including the empty one), edges from all the French words (including the empty one) to the sink, an edge from the sink to the source, and edges from all English words (including the empty one) to all the French words (including the empty one),²
• from the source to all English words (excluding the empty one), the capacity interval is [1;1],
• from the source to the empty English word, the capacity interval is [0; max(le, lf)], where lf is the number of French words, and le the number of English ones,
• from the English words (including the empty one) to the French words (including the empty one), the capacity interval is [0;1],
• from the French words (excluding the empty one) to the sink, the capacity interval is [1;1],
• from the empty French word to the sink, the capacity interval is [0; max(le, lf)],
• from the sink to the source, the capacity interval is [0; max(le, lf)].

²The empty words account for the fact that words may not be aligned with other ones, i.e. they are not explicitly translated, for example.

Once such a graph has been defined, we have to assign cost values to its edges, to reflect the different association probabilities. We will now see how to define the costs so as to relate the minimal cost flow to a best alignment. Let a be an alignment, under the above constraints, between the English sentence es and the French sentence fs. Such an alignment a can be seen as a particular relation from the set of English words with their positions, including empty words, to the set of French words with their positions, including empty words (in our framework, it is formally equivalent to consider a single empty word with a larger upper capacity bound or several ones with smaller upper capacity bounds; for the sake of simplicity in the formulas, we consider here that we add as many empty words as necessary in the sentences to end up with two sentences containing le + lf words). An alignment thus connects each English word, located in position i, ei, to a French word, in position j, fj. We consider that the probability of such a connection depends on two distinct and independent probabilities, the one of linking two positions, p(ap(i) = ai), and the one of linking two words, p(aw(ei) = fai). We can then write:

P(a, es, fs) = Π_{i=1}^{le+lf} p(ap(i) = ai | (a,e,f)₁^{i−1}) · p(aw(ei) = fai | (a,e,f)₁^{i−1})    (4)

where P(a, es, fs) is the probability of observing the alignment a together with the English and French sentences, es and fs, and (a,e,f)₁^{i−1} is a shorthand for (a1, .., ai−1, e1, .., ei−1, fa1, .., fai−1).

Since we simply rely in this model on association probabilities, which we assume to be independent, the only dependencies lying in the possibilities of associating words across languages, we can simplify the above formula and write:

P(a, es, fs) = Π_{i=1}^{le+lf} p(ei, fai | a₁^{i−1})    (5)

where a₁^{i−1} is a shorthand for (a1, .., ai−1), and p(ei, fai) is a shorthand for p(aw(ei) = fai) that we will use throughout the article. Due to the constraints defined, we have: p(ei, fai | a₁^{i−1}) = 0 if ai ∈ a₁^{i−1}, and p(ei, fai) otherwise.

Equation (5) shows that if we define the cost associated to each edge from an English word ei (excluding the empty word) to a French word fj (excluding the empty word) to be γu = −ln p(ei, fj), the cost of an edge involving an empty word to be ε, an arbitrary small positive value, and the cost of all the other edges (i.e. the edges involving the source and the sink) to be 1, for example, then the minimal cost flow defines the alignment a for which P(a, es, fs) is maximum, under the above constraints and approximations.
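This correspondence can be spelled out (our own restatement, not in the original): for a feasible flow φ encoding an alignment a under the padding to le + lf words described above, the costs of the edges involving the source, the sink and the empty words sum to a quantity C that does not depend on a, so

```latex
\gamma \times \varphi \;=\; \sum_{u \in E} \gamma_u \varphi_u
 \;=\; C \;-\; \sum_{i=1}^{l_e + l_f} \ln p(e_i, f_{a_i})
 \;=\; C \;-\; \ln P(a, e_s, f_s),
```

and minimizing the total cost over feasible flows is the same as maximizing P(a, es, fs).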
We can use the following general algorithm, based on maximum likelihood under the maximum approximation, to estimate the parameters of our model:

1. set some initial value for the different parameters of the model,

2. for each sentence pair in the corpus, compute the best alignment (or an approximation of this alignment) between words, with respect to the model, and update the counts of the different parameters with respect to this alignment (the maximum likelihood estimators for model-free distributions are based on relative frequencies, conditioned by the set of best alignments in our case),

3. go back to step 2 till an end condition is reached.

This algorithm converges after a few iterations. Here, we have to be careful with step 1. In particular, if we consider at the beginning of the process all the possible alignments to be equiprobable, then all the feasible flows are minimal cost flows. To avoid this situation, we have to start with initial probabilities which make use of the fact that some associations, occurring more often in the corpus, should have a larger probability. Probabilities based on relative frequencies, or derived from the measure defined in (Dunning, 1993), for example, allow us to take this fact into account.

We can envisage more complex models, including distortion parameters, multiword notions, or information on part-of-speech, information derived from bilingual dictionaries or from thesauri. The integration of new parameters is in general straightforward. For multiword notions, we have to replace the capacity values of edges connected to the source and the sink with capacity intervals, which raises several issues that we will not address in this paper. We rather want to present now an application of the flow network model to multilingual terminology extraction.

3 Multilingual terminology extraction

Several works describe methods to extract terms, or candidate terms, in English and/or French (Justeson and Katz, 1995; Daille, 1994; Nkwenti-Azeh, 1992). Some more specific works describe methods to align noun phrases within parallel corpora (Kupiec, 1993). The underlying assumption behind these works is that the monolingually extracted units correspond to each other cross-lingually. Unfortunately, this is not always the case, and the above methodology suffers from the weaknesses pointed out by (Wu, 1997) concerning parse-parse-match procedures.

It is not however possible to fully reject the notion of grammar for term extraction, in so far as terms are highly characterized by their internal syntactic structure. We can also admit that lexical affinities between the diverse constituents of a unit can provide a good clue for termhood, but lexical affinities, otherwise called collocations, affect different linguistic units that need anyway to be distinguished (Smadja, 1992).

Moreover, a study presented in (Gaussier, 1995) shows that terminology extraction in English and in French is not symmetric. In many cases, it is possible to obtain a better approximation for English terms than for French terms. This is partly due to the fact that English relies on a composition of Germanic type, as defined in (Chuquet and Paillard, 1989) for example, to produce compounds, and of Romance type to produce free NPs, whereas French relies on the Romance type for both, with the classic PP attachment problems.

These remarks lead us to advocate a mixed model, where candidate terms are identified in English and where their French correspondent is searched for.
But since terms constitute rigid units, lying somewhere between single word notions and complete noun phrases, we should not consider all possible French units, but only the ones made of consecutive words.

3.1 Model

It is possible to use flow network models to capture relations between English and French terms. But since we want to discover French units, we have to add extra vertices and nodes to our previous model, in order to account for all possible combinations of consecutive French words. We do that by adding several layers of vertices, the lowest layer being associated with the French words themselves, and each vertex in any upper layer being linked to two consecutive vertices of the layer below. The uppermost layer contains only one vertex and can be seen as representing the whole French sentence. We will call a fertility graph the graph thus obtained. Figure 1 gives an example of part of a fertility graph (we have shown the flow values on each edge for clarity reasons; the brackets delimit a multiword candidate term; we have not drawn the whole fertility graph encompassing the French sentence, but only the part encompassing the unit largeur de bande utilisée, where the possible combinations of consecutive words are represented by A, B, and C). Note that we restrict ourselves to lexical words (nouns, verbs, adjectives and adverbs), not trying to align grammatical words. Furthermore, we rely on lemmas rather than inflected forms, thus enabling us to conflate in one form all the variants of a verb (we have kept inflected forms in our figures for readability reasons).

Figure 1: Pseudo-alignment within a fertility graph (English side: "bandwidth used in [FSS telecommunications]"; French side: "largeur de bande utilisée ... SFS"; only the caption and unit labels survive in this copy).

The minimal cost flow in the graphs thus defined may not be directly usable. This is due to two problems:

1. first, we can have ambiguous associations: in Figure 1, for example, the association between bandwidth and largeur de bande can be obtained through the edge linking these two units (type 1), or through two edges, one from bandwidth to largeur de bande, and one from bandwidth to either largeur or bande (type 2), or even through the two edges from bandwidth to largeur and bande (type 3),

2. secondly, there may be conflicts between connections: in Figure 1 both largeur de bande and télécommunications are linked to bandwidth even though they are not contiguous.

To solve ambiguous associations, we simply replace each association of type 2 or 3 by the equivalent type 1 association.³ For conflicts, we use the following heuristics: first select the conflicting edge with the lowest cost and assume that the association thus defined actually occurred, then rerun the minimal cost flow algorithm with this selected edge fixed once and for all, and redo these two steps until there are no more conflicting edges, replacing type 2 or 3 associations as above each time it is necessary. Finally, the alignment obtained in this way will be called a solved alignment.⁴

³We can formally define an equivalence relation, in terms of the associations obtained, but this is beyond the scope of this paper.

⁴Once the solved alignment is computed, it is possible to determine the word associations between aligned units, through the application of the process described in the previous section with multiword notions.
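Both the fertility layers and the conflict-resolution loop are easy to sketch. The code below is our own paraphrase, not the paper's implementation: fertility_spans enumerates the vertices added above the French words, and solved_alignment implements the heuristics just described, where min_cost_flow, find_conflicts, cost and canonicalize are hypothetical helpers standing in for the machinery of Section 2.

```python
def fertility_spans(words):
    """All contiguous spans of lexical words, layer by layer (Sec. 3.1)."""
    layers = [[(i, i + 1) for i in range(len(words))]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([(prev[k][0], prev[k + 1][1])
                       for k in range(len(prev) - 1)])
    return layers

print(fertility_spans(["largeur", "bande", "utiliser"]))
# [[(0, 1), (1, 2), (2, 3)], [(0, 2), (1, 3)], [(0, 3)]]

def solved_alignment(graph):
    """Fix the cheapest conflicting edge and re-solve until conflict-free."""
    fixed = set()
    while True:
        flow = min_cost_flow(graph, fixed=fixed)      # hypothetical helper
        conflicts = find_conflicts(flow)              # non-contiguous rivals
        if not conflicts:
            return canonicalize(flow)                 # fold types 2/3 into 1
        fixed.add(min(conflicts, key=cost))
```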
3.2 Experiment

In order to test the previous model, we selected a small bilingual corpus consisting of 1000 aligned sentences, from a corpus on satellite telecommunications. We then ran the following algorithm, based on the previous model:

1. tag and lemmatise the English and French texts, and mark all the English candidate terms using morpho-syntactic rules encoded in regular expressions,

2. build a first set of association probabilities, using the likelihood ratio test defined in (Gaussier, 1995),

3. for each pair of aligned sentences, construct the fertility graph allowing a candidate term of length n to be aligned with units of length (n−2) to (n+2); define the costs of edges linking English vertices to French ones as the opposite of the logarithm of the normalised sum of probabilities of all possible word associations defined by the edge (for the edge between multiple (e1) access (e2) and the French unit accès (f1) multiple (f2) it is ¼ Σ_{i,j} p(ei, fj)); give all the other edges an arbitrary cost value; compute the solved alignment; and increment the count of the associations obtained by the overall value of the solved alignment,

4. select the first 100 unit associations according to their count, and consider them as valid. Go back to step 2, excluding from the search space the associations selected, till all associations have been extracted.

3.3 Results

To evaluate the results of the above procedure, we manually checked each set of associations obtained after each iteration of the process, going from the first 100 to the first 500 associations. We considered an association as being correct if the French expression is a proper translation of the English expression. The following table gives the precision of the associations obtained.

Table 1: General results

N. Assoc. | Prec.
100       | 98
200       | 97
300       | 96
400       | 95
500       | 90

The associations we are faced with represent different linguistic units. Some consist of single content words, whereas others represent multiword expressions. One of the particularities of our process is precisely to automatically identify multiword expressions in one language, knowing units in the other one. With respect to this task, we extracted the first two hundred multiword expressions from the associations above, and then checked whether they were valid or not. We obtained the following results:

Table 2: Multiword notion results

N. Assoc. | Prec.
100       | 97
200       | 94

As a comparison, (Kupiec, 1993) obtained a precision of 90% for the first hundred associations between English and French noun phrases, using the EM algorithm. Our experiments with a similar method showed a precision around 92% for the first hundred associations on a set of aligned sentences comprising the one used for the above experiment.

An evaluation on single words showed a precision of 98% for the first hundred and 97% for the first two hundred. But these figures should be seen as lower bounds of the actual values we can get, in so far as we have not tried to extract single word associations from multiword ones. Here is an example of associations obtained.
telecommunication satellite → satellite de télécommunication
communication satellite → satellite de télécommunication
new satellite system → nouveau système de satellite; système de satellite nouveau; système de satellite entièrement nouveau
operating fss telecommunication link → exploiter la liaison de télécommunication du SFS
implement → mise en oeuvre
wavelength → longueur d'onde
offer → offrir, proposer
operation → exploitation, opération

The empty words (prepositions, determiners) were extracted from the sentences. In all the cases above, the use of prepositions and determiners was consistent all over the corpus. There are cases where two French units differ on a preposition. In such a case, we consider that we have two possible different translations for the English term.

4 Conclusion

We presented a new model for word alignment based on flow networks. This model allows us to integrate different types of constraints in the search for the best word alignment within aligned sentences. We showed how this model can be applied to terminology extraction, where candidate terms are extracted in one language and discovered, through the alignment process, in the other one. Our procedure presents three main differences over other approaches: we do not force term translations to fit within specific patterns, we consider the whole sentences, thus enabling us to remove some ambiguities, and we rely on the association probabilities of the units as a whole, but also on the association probabilities of the elements within these units.

The main application of the work we have described concerns the extraction of bilingual lexicons. Such extracted lexicons can be used in different contexts: as a source to help lexicographers build bilingual dictionaries in technical domains, or as a resource for machine-aided human translation systems. In this last case, we can envisage several ways to extend the notion of translation unit in translation memory systems, such as the one proposed in (Langé et al., 1997).

5 Acknowledgements

Most of this work was done at the IBM-France Scientific Centre during my PhD research, under the direction of Jean-Marc Langé, to whom I express my gratitude. Many thanks also to Jean-Pierre Chanod, Andreas Eisele, David Hull, and Christian Jacquemin for useful comments on earlier versions.

References

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2).
H. Chuquet and M. Paillard. 1989. Approche linguistique des problèmes de traduction anglais-français. Ophrys.
Ido Dagan, Kenneth W. Church, and William A. Gale. 1993. Robust bilingual word alignment for machine aided translation. In Proceedings of the Workshop on Very Large Corpora.
Béatrice Daille. 1994. Approche mixte pour l'extraction de terminologie : statistique lexicale et filtres linguistiques. Ph.D. thesis, Univ. Paris 7.
T. Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1).
L.R. Ford and D.R. Fulkerson. 1962. Flows in networks. Princeton University Press.
William Gale and Kenneth Church. 1993. A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(1).
Éric Gaussier. 1995. Modèles statistiques et patrons morphosyntaxiques pour l'extraction de lexiques bilingues de termes. Ph.D. thesis, Univ. Paris 7.
John S. Justeson and Slava M. Katz. 1995.
Technical terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1(1).

Martin Kay and M. Röscheisen. 1993. Text-translation alignment. Computational Linguistics, 19(1).

M. Klein. 1967. A primal method for minimal cost flows, with applications to the assignment and transportation problems. Management Science.

Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics.

Jean-Marc Langé, Éric Gaussier, and Béatrice Daille. 1997. Bricks and skeletons: some ideas for the near future of MAHT. Machine Translation, 12(1).

Dan I. Melamed. 1996. Automatic construction of clean broad-coverage translation lexicons. In Proceedings of the Second Conference of the Association for Machine Translation in the Americas (AMTA).

Basile Nkwenti-Azeh. 1992. Positional and combinational characteristics of satellite communications terms. Technical report, CCL-UMIST, Manchester.

Frank Smadja. 1992. How to compile a bilingual collocational lexicon automatically. In Proceedings of the AAAI-92 Workshop on Statistically-Based NLP Techniques.

Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of the Sixteenth International Conference on Computational Linguistics.

Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3).
Growing Semantic Grammars

Marsal Gavaldà and Alex Waibel
Interactive Systems Laboratories
Carnegie Mellon University
Pittsburgh, PA 15213, U.S.A.
marsal@cs.cmu.edu

Abstract

A critical path in the development of natural language understanding (NLU) modules lies in the difficulty of defining a mapping from words to semantics: it usually takes on the order of years of highly-skilled labor to develop a semantic mapping, e.g., in the form of a semantic grammar, that is comprehensive enough for a given domain. Yet, due to the very nature of human language, such mappings invariably fail to achieve full coverage on unseen data. Acknowledging the impossibility of stating a priori all the surface forms by which a concept can be expressed, we present GSG: an empathic computer system for the rapid deployment of NLU front-ends and their dynamic customization by non-expert end-users. Given a new domain for which an NLU front-end is to be developed, two stages are involved. In the authoring stage, GSG aids the developer in the construction of a simple domain model and a kernel analysis grammar. Then, in the run-time stage, GSG provides the end-user with an interactive environment in which the kernel grammar is dynamically extended. Three learning methods are employed in the acquisition of semantic mappings from unseen data: (i) parser predictions, (ii) hidden understanding model, and (iii) end-user paraphrases. A baseline version of GSG has been implemented and preliminary experiments show promising results.

1 Introduction

The mapping between words and semantics, be it in the form of a semantic grammar,¹ or of a set of rules that transform syntax trees onto, say, a frame-slot structure, is one of the major bottlenecks in the development of natural language understanding (NLU) systems. A parser will work for any domain, but the semantic mapping is domain-dependent. Even after the domain model has been established, the daunting task of trying to come up with all the possible surface forms by which each concept can be expressed still lies ahead. Writing such mappings takes on the order of years, can only be performed by qualified humans (usually computational linguists), and yet the final result is often fragile and non-adaptive.

¹ Semantic grammars are grammars whose non-terminals correspond to semantic concepts (e.g., [greeting] or [suggest_time]) rather than to syntactic constituents (such as Verb or NounPhrase). They have the advantage that the semantics of a sentence can be directly read off its parse tree, and the disadvantage that a new grammar must be developed for each domain.

Following a radically different philosophy, we propose rapid (on the order of days) deployment of NLU modules for new domains with on-need-basis learning: let the semantic grammar grow automatically when and where it is needed.

2 Grammar development

If we analyze the traditional method of developing a semantic grammar for a new domain, we find that the following stages are involved.

1. Data collection. Naturally-occurring data from the domain at hand are collected.

2. Design of the domain model. A hierarchical structuring of the relevant concepts in the domain is built in the form of an ontology or domain model.

3. Development of a kernel grammar. A grammar that covers a small subset of the collected data is constructed.

4. Expansion of grammar coverage. The lengthy, arduous task of developing the grammar to extend its coverage over the collected data and beyond.

5. Deployment.
Release of the final grammar for the application at hand.

The GSG system described in this paper aids all but the first of these stages: for the second stage, we have built a simple editor to design and analyze the Domain Model; for the third, a semi-automated way of constructing the Kernel Grammar; for the fourth, an interactive environment in which new semantic mappings are dynamically acquired. As for the fifth (deployment), it advances one place: after the short initial authoring phase (stages 2 and 3 above) the final application can already be launched, since the semantic grammar will be extended, at run-time, by the non-expert end-user.

3 System architecture

As depicted in Fig. 1, GSG is composed of the following modules: the Domain Model Editor and the Kernel Grammar Editor, for the authoring stage, and the SOUP parser and the IDIGA environment, for the run-time stage.

Figure 1: System architecture of GSG. [diagram not reproduced]

3.1 Authoring stage

In the authoring stage, a developer² creates the Domain Model (DM) with the aid of the DM Editor. In our present formalism, the DM is simply a directed acyclic graph in which the vertices correspond to concept-labels and the edges indicate concept-subconcept relations (see Fig. 2 for an example).

Figure 2: Fragment of a domain model for a scheduling task, with top-level concepts such as [greeting], [farewell], [suggestion], [rejection] and [acceptance], and nested concepts such as {suggest_time}, [time], [interval], {start_point}, {end_point}, {point}, {day_of_week} and {time_of_day}. A dashed edge indicates an optional subconcept (default is required); a dashed angle indicates inclusive subconcepts (default is exclusive).

Once the DM is defined, the Kernel Grammar Editor drives the development of the Kernel Grammar by querying the developer to instantiate into grammar rules the rule templates derived from the DM. For instance, in the DM in Fig. 2, given that concept {suggest_time} requires subconcept [time], the rule template {suggest_time} ← ... [time] ... is generated, which the developer can instantiate into, say, rule (2) in Fig. 3.

(1) [suggestion] ← {suggest_time}
(2) {suggest_time} ← how about [time]
(3) [time] ← [point]
(4) [point] ← *on {day_of_week} *{time_of_day}
(5) {day_of_week} ← Tuesday
(6) {time_of_day} ← afternoon

Figure 3: Fragment of a grammar for a scheduling task. A '*' indicates optionality.

The Kernel Grammar Editor follows a concrete-to-abstract ordering of the concepts, obtained via a topological sort of the DM, to query the developer, after which the Kernel Grammar is complete³ and the NLU front-end is ready to be deployed. It is assumed that: (i) after the authoring stage the DM is fixed, and (ii) the communicative goal of the end-user is expressible in the domain.

² Understood here as a qualified person (e.g., knowledge engineer or software developer) who is familiar with the domain at hand and has access to some sample sentences that the NLU front-end is supposed to understand.

³ We say that grammar G is complete with respect to domain model DM if and only if for each arc from concept i to concept j in DM there is at least one grammar rule headed by concept i that contains concept j. This ensures that any idea expressible in DM has a surface form, or, seen from another angle, that any in-domain utterance has a paraphrase that is covered by G.
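To make the template-generation step of Section 3.1 concrete, here is a minimal sketch of deriving rule templates from a DM in concrete-to-abstract order. It is an illustration under our own assumptions (a toy DM encoded as a Python dictionary, and the standard-library graphlib module), not the actual GSG implementation.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# A fragment of the scheduling DM of Fig. 2:
# each concept maps to the subconcepts it requires.
dm = {
    "[suggestion]":   ["{suggest_time}"],
    "{suggest_time}": ["[time]"],
    "[time]":         ["[point]"],
    "[point]":        ["{day_of_week}", "{time_of_day}"],
    "{day_of_week}":  [],
    "{time_of_day}":  [],
}

# graphlib treats the listed subconcepts as predecessors, so the static
# order yields the most concrete concepts first, as in Section 3.1.
for concept in TopologicalSorter(dm).static_order():
    for sub in dm[concept]:
        # One rule template per concept-subconcept edge; the developer
        # fills in the '...' with surface words, as in rule (2) of Fig. 3.
        print(f"{concept} <-- ... {sub} ...")
```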
3.2 Run-time stage

Instead of attempting "universal coverage," we accept the fact that one can never know all the surface forms by which the concepts in the domain can be expressed. What GSG provides in the run-time stage are mechanisms that allow a non-expert end-user to "teach" the meaning of new expressions.

The tight coupling between the SOUP parser⁴ and the IDIGA⁵ environment allows for a rapid and multi-faceted analysis of the input string. If the parse, or rather the paraphrase automatically generated by GSG,⁶ is deemed incorrect by the end-user, a learning episode ensues.

⁴ A very fast, stochastic top-down chart parser developed by the first author, incorporating heuristics to, in this order, maximize coverage, minimize tree complexity and maximize tree probability.

⁵ Acronym for interactive, distributed, incremental grammar acquisition.

⁶ In order for all the interactions with the end-user to be performed in natural language only, a generation grammar is needed to transform semantic representations into surface forms. To that effect, GSG is able to cleverly use the analysis grammar in "reverse."

By bringing contextual constraints to bear, GSG can make predictions as to what a sequence of unparsed words might mean, thereby exhibiting an "empathic" behavior toward the end-user. To this aim, three different learning methods are employed: parser predictions, hidden understanding model, and end-user paraphrases.

3.2.1 Learning

Similar to Lehman (1989), learning in GSG takes place by the dynamic creation of grammar rules that capture the meaning of unseen expressions, and by the subsequent update of the stochastic models. Acquiring a new mapping from an unparsed sequence of words onto its desired semantic representation involves the following steps.

1. Hypothesis formation and filtering. Given the context of the sentence at hand, GSG constructs hypotheses in the form of parse trees that cover the unparsed sequence, discards those hypotheses that are not approved by the DM,⁷ and ranks the remaining by likelihood.

2. Interaction with the end-user. The ranked hypotheses are presented to the end-user in the form of questions about, or rephrases of, the original utterance.

3. Dynamic rule creation. If the end-user is satisfied with one of the options, a new grammar rule is dynamically created and becomes part of the end-user's grammar until further notice. Each new rule is annotated with the learning episode that gave rise to it, including end-user ID, time stamp, and a counter that will keep track of how many times the new rule fires in successful parses.⁸

⁷ I.e., parse trees containing concept-subconcept relations that are inconsistent with the stipulations of the DM.

⁸ The degree of generalization or level of abstraction that a new rule should exhibit is an open question, but currently a Principle of Maximal Abstraction is followed: (a) Parse the lexical items of the new rule's right-hand side with all concepts granted top-level status, i.e., able to stand at the root of a parse tree. (b) If a word is not covered by any tree, take it as is into the final right-hand side.
Else, take the root of the parse tree with largest span; if tied, prefer the root that ranks higher in the DM. For example, with the DM in Fig. 2 and the grammar in Fig. 3, What about Tuesday? is abstracted to the maximally general what about [time] (as opposed to what about [day_of_week] or what about [point]).

3.2.2 Parser predictions

As suggested by Kiyono and Tsujii (1993), one can make use of parse failures to acquire new knowledge, both about the nature of the unparsed words and about the inadequacy of the existing grammar rules. GSG uses incomplete parses to predict what can come next (i.e., after the partially-parsed sequence in left-to-right parsing, or before the partially-parsed sequence in right-to-left parsing). This allows two kinds of grammar acquisition:

1. Discovery of expression equivalence. E.g., with the grammar in Fig. 3 and input sentence What about Tuesday afternoon?, GSG is able to ask the end-user whether the utterance means the same as How about Tuesday afternoon? (see Figs. 4, 5 and 6). That is because, in the process of parsing What about Tuesday afternoon? right-to-left, the parser has been able to match rule (2) in Fig. 3 up to about, and thus it hypothesizes the equivalence of what and how, since that would allow the parse to complete.⁹

2. Discovery of an ISA relation. Similarly, from input sentence How about noon?, GSG is able to predict, in left-to-right parsing, that noon is a [time].

⁹ For real-world grammars of, say, over 1000 rules, it is necessary to bound the number of partial parses by enforcing a maximum beam size at the left-hand-side level, i.e., placing a limit on the number of subparses under each nonterminal to curb the exponential explosion.

Figure 4: Example of a learning episode using parser predictions. Initially only the temporal expression is understood... [screenshot not reproduced]

Figure 5: ...but a correct prediction is made... [screenshot not reproduced]

Figure 6: ...and a new rule is acquired. [screenshot not reproduced]

3.2.3 Hidden understanding model

As another way of bringing contextual information to bear in the process of predicting the meaning of unparsed words, the following stochastic models, inspired by Miller et al. (1994) and Seneff (1992) and collectively referred to as the hidden understanding model (HUM), are employed.

• Speech-act n-gram. Top-level concepts can be seen as speech acts of the domain. For instance, in the DM in Fig. 2, top-level concepts such as [greeting], [farewell] or [suggestion] correspond to discourse speech acts, and in normally-occurring conversation they follow a distribution that is clearly non-uniform.¹⁰

¹⁰ Needless to say, speech-act transition distributions are empirically estimated, but, intuitively, the sequence <[greeting], [suggestion]> is more likely than the sequence <[greeting], [farewell]>.

• Concept-subconcept HMM. A discrete hidden Markov model in which the states correspond to the concepts in the DM (i.e., equivalent to grammar non-terminals) and the observations to the embedded concepts appearing as immediate daughters of the state in a parse tree. For example, the parse tree in Fig. 4 contains the following set of <state, observation> pairs: {<[time], [point]>, <[point], [day_of_week]>, <[point], [time_of_day]>}.

• Concept-word HMM. A discrete hidden Markov model in which the states correspond to the concepts in the DM and the observations to the embedded lexical items (i.e., grammar terminals) appearing as immediate daughters of the state in a parse tree. For example, the parse tree in Fig.
4 contains the pairs: {<[day_of_week], tuesday>, <[time_of_day], afternoon>}.

The HUM thus attempts to capture the recurring patterns of the language used in the domain in an asynchronous mode, i.e., independent of word order (as opposed to parser predictions, which heavily depend on word order). Its aim is, again, to provide predictive power at run-time: upon encountering an unparsable expression, the HUM hypothesizes possible intended meanings in the form of a ranked list of the most likely parse trees, given the current state in the discourse, the subparses for the expression and the lexical items present in the expression.

Its parameters can best be estimated through training over a given corpus of correct parses, but in order not to compromise our established goal of rapid deployment, we employ the following techniques.

1. In the absence of a training corpus, the HUM parameters are seeded from the Kernel Grammar itself.

2. Training is maintained at run-time through dynamic updates of all model parameters after each utterance and learning episode.

3.2.4 End-user paraphrases

If the end-user is not satisfied with the hypotheses presented by the parser predictions or the HUM, a third learning method is triggered: learning from a paraphrase of the original utterance, also given by the end-user. Assuming the paraphrase is understood,¹¹ GSG updates the grammar in such a fashion that the semantics of the first sentence are equivalent to those of the paraphrase.¹²

¹¹ Precisely, the requirement that the grammar be complete (see note 3) ensures the existence of a suitable paraphrase for any utterance expressible in the domain. In practice, however, it may take too many attempts to find an appropriate paraphrase. Currently, if the first paraphrase is not understood, no further requests are made.

¹² Presently, the root of the paraphrase's parse tree directly becomes the left-hand side of the new rule.

4 Preliminary results

We have conducted a series of preliminary experiments in different languages (English, German and Chinese) and domains (scheduling, travel reservations). We present here the results of an experiment involving the comparison of expert vs. non-expert grammar development on a spontaneous travel-reservation task in English. The grammar had been developed over the course of three months by a full-time expert grammar writer, and the experiment consisted in having this expert develop on an unseen set of 72 sentences using the traditional environment, and asking two non-expert users¹³ to "teach" GSG the meaning of the same 72 sentences through interactions with the system. Table 1 compares the correct parses before and after development.

                      Perfect   Ok      Bad
Expert     before     55.41     17.58   27.01
Expert     after      75.68     10.81   13.51
           Δ         +20.27    -6.77   -13.50
End-user1  before     58.11     18.92   22.97
End-user1  after      64.86     22.97   12.17
           Δ          +6.75    +4.05   -10.80
End-user2  before     41.89     16.22   41.89
End-user2  after      48.64     28.38   22.98
           Δ          +6.75    +12.16  -18.91

Table 1: Comparison of parse grades (in %). Expert using traditional method vs. non-experts using GSG.

It took the expert 15 minutes to add 8 rules and reduce bad coverage from 27.01% to 13.51%.
As for the non-experts, end-user1, starting with a similar grammar, reduced bad parses from 22.97% to 12.17% through a 30-minute session¹⁴ with GSG that gave rise to 8 new rules; end-user2, starting with the smallest possible complete grammar, reduced bad parses from 41.89% to 22.98% through a 35-minute session¹⁴ that triggered the creation of 17 new rules. 60% of the learning episodes were successful, with an average number of questions of 2.91. The unsuccessful learning episodes had an average number of questions of 6.19, and their failure is mostly due to unsuccessful paraphrases.

As for the nature of the acquired rules, they differ in that the expert makes use of optional and repeatable tokens, an expressive power not currently available to GSG. On the other hand, this lack of generality can be compensated by the Principle of Maximal Abstraction (see note 8). As an example, to cover the new construction And your last name?, the expert chose to create the rule:

[request_name] ← *and your last name

whereas both end-user1 and end-user2 induced the automatic acquisition of the rule:

[request_name] ← CONJ POSS [last] name.¹⁵

¹³ Undergraduate students not majoring in computer science or linguistics.

¹⁴ Including a 5-minute introduction.

5 Discussion

Although preliminary and limited in scope, these results are encouraging and suggest that grammar development by non-experts through GSG is indeed possible and cost-effective. It can take the non-expert twice as long as the expert to go through a set of sentences, but the main point is that it is possible at all for a user with no background in computer science or linguistics to teach GSG the meaning of new expressions without being aware of the underlying machinery.

Potential applications of GSG are many, most notably the very fast development of NLU components for a variety of tasks including speech recognition and NL interfaces. Also, the IDIGA environment enhances the usability of any system or application that incorporates it, for the end-users are able to easily "teach the computer" their individual language patterns and preferences.

Current and future work includes further development of the learning methods and their integration, design of a rule-merging mechanism, comparison of individual vs. collective grammars, distributed grammar development over the World Wide Web, and integration of GSG's run-time stage into the JANUS speech recognition system (Lavie et al. 1997).

Acknowledgements

The work reported in this paper was funded in part by a grant from ATR Interpreting Telecommunications Research Laboratories of Japan.

References

Kiyono, Masaki and Jun-ichi Tsujii. 1993. "Linguistic knowledge acquisition from parsing failures." In Proceedings of the 6th Conference of the European Chapter of the ACL.

Lavie, Alon, Alex Waibel, Lori Levin, Michael Finke, Donna Gates, Marsal Gavaldà, Torsten Zeppenfeld, and Puming Zhan. 1997. "JANUS III: speech-to-speech translation in multiple languages." In Proceedings of ICASSP-97.

Lehman, Jill Fain. 1989. Adaptive parsing: Self-extending natural language interfaces. Ph.D. dissertation, School of Computer Science, Carnegie Mellon University.

Miller, Scott, Robert Bobrow, Robert Ingria, and Richard Schwartz. 1994. "Hidden understanding models of natural language." In Proceedings of ACL-94.

Seneff, Stephanie. 1992. "TINA: a natural language system for spoken language applications." In Computational Linguistics, vol. 18, no. 1, pp. 61-83.
¹⁵ Uppercased nonterminals (such as CONJ and POSS) are more syntactic in nature and do not depend on the DM.

Resum (Summary)

One of the critical paths in the development of natural language understanding modules lies in the difficulty of defining the function that assigns, to a sequence of words, the desired semantic representation. The traditional methods for defining this correspondence require the effort of computational linguists, who spend months or even years building, for example, a semantic grammar (a formalism in which the non-terminal symbols of the grammar correspond directly to the concepts of the application domain); and yet, precisely because of the very nature of human language, the resulting grammar is never able to cover all the words and expressions that occur naturally in the domain in question.

Acknowledging, therefore, the impossibility of establishing a priori all the surface forms by which a concept can be expressed, we present in this work GSG: an empathic computer system for the rapid deployment of natural language understanding modules and their dynamic adaptation to the particularities and preferences of inexpert end-users. The process of building a natural language understanding module for a new domain can be divided into two parts. First, during the authoring stage, GSG assists the expert developer in structuring the concepts of the domain (ontology) and in establishing a minimal grammar. Next, during the run-time stage, GSG provides the inexpert end-user with an interactive environment in which the grammar is dynamically extended.

Three machine learning methods are used in the acquisition of grammar rules from new sentences and constructions: (i) parser predictions (GSG uses incomplete analyses to conjecture which words may appear after the incomplete parse tree, in left-to-right parsing, or before the incomplete parse tree, in right-to-left parsing), (ii) Markov chains (stochastic methods that model, independently of word order, the distribution of the concepts and their transitions, used to compute the most probable overall concept given a context and certain partial parse trees), and (iii) paraphrases (used to assign their semantic representation to the original sentence).

We have implemented a first version of GSG and the results obtained, although preliminary, are quite encouraging, since they show that an inexpert user can "teach" GSG the meaning of new expressions and bring about an extension of the grammar comparable to that of an expert.

We are currently working on the improvement of the automatic learning methods and their integration, on the design of a mechanism for automatically merging grammar rules, on the comparison of individual grammars with collective grammars, on distributed grammar development across the World Wide Web, and on the integration of GSG's run-time stage into the JANUS speech recognition and machine translation system.
One Tokenization per Source

Jin Guo
Kent Ridge Digital Labs
21 Heng Mui Keng Terrace, Singapore 119613

Abstract

We report in this paper the observation of one tokenization per source. That is, the same critical fragment in different sentences from the same source almost always realizes one and the same of its many possible tokenizations. This observation is demonstrated to be very helpful in sentence tokenization practice, and is argued to have far-reaching implications in natural language processing.

1 Introduction

This paper sets out to establish the hypothesis of one tokenization per source: if an ambiguous fragment appears two or more times in different sentences from the same source, it is extremely likely that they will all share the same tokenization.

Sentence tokenization is the task of mapping sentences from character strings into streams of tokens. This is a long-standing problem in Chinese Language Processing, since in Chinese there is an apparent lack of such explicit word delimiters as the white-spaces of English. Researchers have gradually turned to modeling the task as a general lexicalization or bracketing problem in Computational Linguistics, with the hope that the research might also benefit the study of similar problems in multiple languages. For instance, in Machine Translation it is widely agreed that many multiple-word expressions, such as idioms, compounds and some collocations, while not explicitly delimited in sentences, are ideally treated as single lexicalized units.

The primary obstacle in sentence tokenization lies in the existence of uncertainties both in the notion of words/tokens and in the recognition of words/tokens in context. The same fragment in different contexts may have to be tokenized differently. For instance, the character string todayissunday would normally be tokenized as "today is sunday" but can also reasonably be "today is sun day". In terms of possibility, it has been argued that any lexically possible tokenization can be grammatically and meaningfully realized in at least some special context, as every token can be assigned any meaning without any orthographic means. Consequently, mainstream research in the literature has focused on the modeling and utilization of local and sentential contexts, either linguistically in a rule-based framework or statistically in a searching and optimization setup (Gan, Palmer and Lua 1996; Sproat, Shih, Gale and Chang 1996; Wu 1997; Guo 1997).

Hence, it was really a surprise when we first observed the regularity of one tokenization per source. Nevertheless, the regularity turns out to be very helpful in sentence tokenization practice and to have far-reaching implications in natural language processing. Retrospectively, we now understand that it is by no means an isolated special phenomenon but another display of the postulated general law of one realization per expression.

In the rest of the paper, we first present a concrete corpus verification (Section 2), clarify the hypothesis' meaning and scope (Section 3), display its striking utility value in tokenization (Section 4), disclose its implication for the notion of words/tokens (Section 5), and associate the hypothesis with the general law of one realization per expression through an examination of related works in the literature (Section 6).

2 Corpus Investigation

This section reports a concrete corpus investigation aimed at validating the hypothesis.
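In essence, the verification reported in this section groups the occurrences of each critical fragment and checks whether its hand tokenization ever varies within the source. A minimal sketch of that check follows; it is our own illustration, not the paper's code, and the occurrence data are invented.

```python
from collections import defaultdict

def tokenization_variation(occurrences):
    """occurrences: (critical_fragment, hand_tokenization) pairs from one
    source.  Returns the fragments that violate one-tokenization-per-source,
    i.e. realize more than one tokenization."""
    seen = defaultdict(set)
    for fragment, tokenization in occurrences:
        seen[fragment].add(tokenization)
    return {frag: toks for frag, toks in seen.items() if len(toks) > 1}

data = [("fundsand", "funds/and"),
        ("fundsand", "funds/and"),
        ("fundsand", "fund/sand")]   # one dissenting occurrence
print(tokenization_variation(data))  # {'fundsand': {'funds/and', 'fund/sand'}}
```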
2.1 Data

The two resources used in this study are the Chinese PH corpus (Guo 1993) and the Beihang dictionary (Liu and Liang 1989). The Chinese PH corpus is a collection of about 4 million morphemes of news articles from the single source of China's Xinhua News Agency in 1990 and 1991. The Beihang dictionary is a collection of about 50,000 word-like tokens, each of which occurs at least 5 times in a balanced collection of more than 20 million Chinese characters.

What is unique in the PH corpus is that all and only the unambiguous token boundaries with respect to the Beihang dictionary have been marked. For instance, if the English character string fundsandmoney were in the PH corpus, it would be in the form fundsand/money, since the position in between character d and m is an unambiguous token boundary with respect to a normal English dictionary, but fundsand could be either funds/and or fund/sand.

There are two types of fragments in between adjacent unambiguous token boundaries: those which are dictionary entries as a whole, and those which are not.

2.2 Dictionary-Entry Fragments

We manually tokenized in context each of the dictionary-entry fragments in the first 6,000 lines of the PH corpus. There are 6,700 different fragments which cumulatively occur 46,635 times. Among them, 14 fragments (Table 1, Column 1) realize different tokenizations in their 87 occurrences. 16 tokenization errors would be introduced if taking majority tokenizations only (Table 2). Also listed in Table 1 are the numbers of fragments tokenized as single tokens (Column 2) or as a stream of multiple tokens (Column 3). For instance, the first fragment must be tokenized as a single token 17 times but only once as a token pair.

Table 1: Dictionary-entry fragments realizing different tokenizations in the PH corpus. [The Chinese-character table is not legible in this rendering.]

Table 2: Statistics for dictionary-entry fragments.

Fragment      (1) All   (2) Multiple   (3)=(2)/(1) Percentage
Occurrences    46635     87             0.19
Forms           6700     14             0.21
Errors         46635     16             0.03

In short, 0.21% of all the different dictionary-entry fragments, taking 0.19% of all the occurrences, have realized different tokenizations, and 0.03% tokenization errors would be introduced if forced to take one tokenization per fragment.

2.3 Non-Dictionary-Entry Fragments

Similarly, we identified in the PH corpus all fragments that are not entries in the Beihang dictionary, and manually tokenized each of them in context. There are 14,984 different fragments which cumulatively occur 49,308 times. Among them, only 35 fragments (Table 3) realize different tokenizations in their 137 occurrences. 39 tokenization errors would be introduced if taking majority tokenizations only (Table 4).

Table 3: Non-dictionary-entry fragments realizing different tokenizations in the PH corpus. [The Chinese-character table is not legible in this rendering.]

Table 4: Statistics for non-dictionary-entry fragments.

Fragment      (1) All   (2) Multiple   (3)=(2)/(1) Percentage
Forms          14984     35             0.23
Occurrences    49308    137             0.28
Errors         49308     39             0.08

In short, 0.23% of all the non-dictionary-entry fragments, taking 0.28% of all occurrences, have realized different tokenizations, and 0.08% tokenization errors would be introduced if forced to take one tokenization per fragment.
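The unambiguous token boundaries that delimit critical fragments (the fundsand/money marking of Section 2.1) can be characterized as the cut points shared by every dictionary-consistent segmentation. The brute-force sketch below illustrates this over a toy English lexicon; it is our own reconstruction for illustration, not the procedure used to build the PH corpus.

```python
def segmentations(s, lexicon):
    """All ways of covering s exactly with dictionary tokens."""
    if not s:
        return [[]]
    return [[s[:i]] + rest
            for i in range(1, len(s) + 1) if s[:i] in lexicon
            for rest in segmentations(s[i:], lexicon)]

def unambiguous_boundaries(s, lexicon):
    """Positions where every lexicon-consistent segmentation breaks; the
    stretches between adjacent such positions are the critical fragments."""
    common = None
    for seg in segmentations(s, lexicon):
        cuts, pos = set(), 0
        for token in seg:
            pos += len(token)
            cuts.add(pos)
        common = cuts if common is None else common & cuts
    return sorted(common or [])

lex = {"funds", "and", "fund", "sand", "money"}
print(unambiguous_boundaries("fundsandmoney", lex))  # [8, 13] -> fundsand/money
```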
2.4 Tokenization Criteria

Some readers might question the reliability of the preceding results, because it is well known in the literature that both inter- and intra-judge tokenization consistency can hardly be better than 95%, and can easily drop below 70%, if the
To answer the question, I0 colleagues were asked to tokenize, without seeing the context, the most frequent 123 non-dictionary-entry critical fragments extracted from the PH corpus. Several of these fragments 2 have thus been marked "context dependent", since they have "obvious" different readings in different contexts. Shown in Figure 1 are three examples. 219[c< ~J~ 7]~ >< 5~; ~7~ >1 180[c< ~ ~ >< • ~ >] 106[< A.~ ~ >c< X ~" >] Figure 1: Critical fragments with "obvious" multiple readings. Preceding numbers are their occurrence counts in the PH corpus. i For instance, the Chinese fragment dp dx (secondary primary school) is taken as "[secondary (and) primary] school" by one school of thought, but "[secondary (school)] (and) [primary school]" by another. But both will never agree that the fragment must be analyzed differently in different context. 2 While all fragments are lexically ambiguous in tokenization, many of them have received consistent unique tokenizations, as these fragments are, to the human judges, self-sufficient for comfortable ambiguity resolution. 459 We looked all these questionable fragments up in a larger corpus of about 60 million morphemes of news articles collected from the same source as that of the PH corpus in a longer time span from 1989 to 1993. It turns out that all the fragments each always takes one and the same tokenization with no exception. While we have not been able to specify the notion of source used in the hypothesis to the same clarity as that of critical fragment and critical tokenization in (Guo 1997), the above empirical test has made us feel comfortable to believe that the scope of the source can be sufficiently large to cover any single domain of practical interest. 4 Application in Tokenization The hypothesis of one tokenization per source can be applied in many ways in sentence tokenization. For tokenization ambiguity resolution, let us examine the following strategy: Tokenization by memorization: If the correct tokenization of a critical fragment is known in one context, remember the tokenization. If the same critical fragment is seen again, retrieve its stored tokenization. Otherwise, if a critical fragment encountered has no stored tokenization, randomly select one of its critical tokenizations. This is a pure and straightforward implementation of the hypothesis of one tokenization per source, as it does not explore any constraints other than the tokenization dictionary. While sounds trivial, this strategy performs surprisingly well. While the strategy is universally applicable to any tokenization ambiguity resolution, here we will only examine its performance in the resolution of critical ambiguities (Guo 1997), for ease of direct comparison with works in the literature. As above, we have manually tokenized 3 all non- dictionary-entry critical fragments in the PH corpus; i.e., we have known the correct tokenizations for all of these fragments. Therefore, if any of these fragments presents somewhere else, its tokenization can be readily retrieved from what we have manually done. If the hypothesis holds perfect, we could not make any error. 3 This is not a prohibitive job but can be done well within one man-month, if the hypothesis is adopted. 460 The only weakness of this strategy is its apparent inadequacy in dealing with the sparse data problem. That is, for unseen critical fragments, only the simplest tokenization by random selection is taken. 
Fortunately, we have seen on the PH corpus that, on average, each non-dictionary-entry critical fragment has just two (100,398 over 49,308 or 2.04 to be exact) critical tokenizations to be chosen from. Hence, a tokenization accuracy of about 50% can be expected for unknown non- dictionary-entry critical fragments. The question then becomes that: what is the chance of encountering a non-dictionary-entry critical fragment that has not been seen before in the PH corpus and thus has no known correct tokenization? A satisfactory answer to this question can be readily derived from the Good- Turing Theorem 4 (Good 1953; Church and Gale with Kruskal 1991, page 49). Table 5: Occurrence distribution of non-dictionary- entry critical fragments in the PH corpus. r 1 2 3 4 5 Nr 9587 2181 939 523 339 r 6 7 8 9 _>9 Nr 230 188 128 94 775 Table 4 and Table 5 show that, among the 14,984 different non-dictionary-entry critical fragments and their 49,308 occurrences in the PH corpus, 9,587 different fragments each occurs exactly once. By the Good-Turing Theorem, the chance of encountering an arbitrary non-dictionary-entry critical fragment that is not in the PH corpus is about 9,587 over 49,308 or slightly less than 20%. In summary, if applied to non-dictionary-entry critical fragment tokenization, the simple strategy of tokenization by memorization delivers virtually 100% tokenization accuracy for slightly over 80% of the fragments, and about 50% accuracy for the rest 20% fragments, and hence has an overall tokenization accuracy of better than 90% (= 80% x 100% + 20% x 50%). 4 The theorem states that, when two independent marginally binomial samples B e and B 2 are drawn, the expected frequency r" in the sample B~ of types occurring r times in B t is r'=(r+I)E(N,.~)/E(N,), where E(N,) is the expectation of the number of types whose frequency in a sample is r. What we are looking for here is the quantity of r'E(N,) for r=O, or E(N~), which can be closely approximated by the number of non-dictionary-entry fragments that occurred exactly once in the PH corpus. This strategy rivals all proposals with directly comparable performance reports in the literature, including 5 the representative one by Sun and T'sou (1995), which has the tokenization accuracy of 85.9%. Notice that what Sun and T'sou proposed is not a trivial solution. They developed an advanced four-step decision procedure that combines both mutual information and t-score indicators in a sophisticated way for sensible decision making. Since the memorization strategy complements with most other existing tokenization strategies, certain types of hybrid solutions are viable. For instance, if the strategy of tokenization by memorization is applied to known critical fragments and the Sun and T'sou algorithm is applied to unknown critical fragments, the overall accuracy of critical ambiguity resolution can be better than 97% (= 80% + 20% x 85.9%). The above analyses, together with some other more or less comparable results in the literature, are summarized in Table 6 below. It is interesting to note that, the best accuracy registered in China's national 863-Project evaluation in 1995 was only 78%. In conclusion, the hypothesis of one tokenization per source is unquestionably helpful in sentence tokenization. Table 6: Tokenization performance comparisons. Approach Memorization Sun et al. (1996) Wong et al. (1994) Zheng and Liu (1997) 863-Project 1995 Evaluation (Zheng and Liu, 1997) Memorization + Sun et al. 
Accuracy, (%) 90 85.9 71.2 81 78 97 s The task there is the resolution of overlapping ambiguities, which, while not exactly the same, is comparable with the resolution of critical ambiguities. The tokenization dictionary they used has about 50,000 entries, comparable to the Beihang dictionary we used in this study. The corpus they used has about 20 million words, larger than the PH corpus. More importantly, in terms of content, it is believed that both the dictionary and corpus are comparable to what we used in this study. Therefore, the two should more or less be comparable. 461 5 The Notion of Tokens Upon accepting the validness of the hypothesis of one tokenization per source, and after experiencing its striking utility value in sentence tokenization, now it becomes compelling for a new paradigm. Parallel to what Dalton did for separating physical mixtures from chemical compounds (Kuhn 1970, page 130-135), we are now suggesting to regard the hypothesis as a law- of-language and to take it as the proposition of what a word/token must be. The Notion of Tokens: A stretch of characters is a legitimate token to be put in tokenization dictionary if and only if it does not introduce any violation to the law of one tokenization per source. Opponents should reject this notion instantly as it obviously makes the law of one tokenization per source a tautology, which was once one of our own objections. We recommend these readers to reexamine some of Kuhn's (1970) arguments. Apparently, the issue at hand is not merely over a matter of definition of words/tokens. The merit of the notion, we believe, lies in its far-reaching implications in natural language processing in general and in sentence tokenization in particular. For instance, it makes the separation between words and non-words operational in Chinese, yet maintains the cohesiveness of words/tokens as a relatively independent layer of linguistic entities for rigorous scrutiny. In contrast, while the paradigm of "mutual affinity" represented by measurements such as mutual information and t- score has repetitively exhibited inappropriateness in the very large number of intermediate cases, the paradigm of "linguistic words" represented by terms like syntactic-words, phonolo~cal-words and semantic-words is in essence rejecting the notion of Chinese words/tokens at all, as compounding, phrase-forming and even sentence formation in Chinese are governed by more or less the same set of regularities, and as the whole is always larger than the simple sum of its parts. We shall leave further discussions to another place. 6 Discussion Like most discoveries in the literature, when we first captured the regularity several years ago, we simply could not believe it. Then, after careful experimental validation on large representative corpora, we accepted it but still could not imagine any of its utility value. Finally, after working out ways that unquestionably demonstrated its usefulness, we realized that, in the literature, so many supportive evidences have already been presented. Further, while never consciously in an explicit form, the hypothesis has actually already been widely employed. 
For example, Zheng and Liu (1997) recently studied a newswire corpus of about 1.8 million Chinese characters and reported that, among all the 4,646 different chain-length-l two-character- overlapping-typd s ambiguous fragments which cumulatively occur 14,581 times in the corpus, only 8 fragments each has different tokenizations in different context, and there is no such fragment in all the 3,409 different chain-length-2 two- character-overlapping-type 7 ambiguous fragments. Unfortunately, due to the lack of a proper representation framework comparable to the critical tokenization theory employed here, their observation is neither complete nor explanatory. It is not complete, since the two ambiguous types apparently do not cover all possible ambiguities. It is not explanatory, since both types of ambiguous fragments are not guaranteed to be critical fragments, and thus may involve other types of ambiguities. Consequently, Zheng and Liu (1997) themselves merely took the apparent regularity as a special case, and focused on the development of local- context-oriented disambiguation rules. Moreover, while they constructed for tokenization disambiguation an annotated "phrase base" of all ambiguous fragments in the large corpus, they still concluded that good results can not come solely from corpus but have to rely on the utilization of syntactic, semantic, pragmatic and other information. The actual implementation of the weighted finite- state transducer by Sproat et al. (1996) can be taken as an evidence that the hypothesis of one tokenization per source has already in practical use. While the primary strength of such a transducer is its effectiveness in representing and 6 Roughly a three-character fragment abc where a, b, c, ab, and bc are all tokens in the tokenization dictionary. 7 Roughly a four-character fragment abcd, where a, b, c, d, ab, bc, and cd are all tokens in the tokenization dictionary. utilizing local and sentential constraints, what Sproat et al. (1996) implemented was simply a token unigram scoring function. Under this setting, no critical fragment can realize different tokenizations in different local sentential context, since no local constraints other than the identity of a token together with its associated token score can be utilized. That is, the requirement of one tokenization per source has actually been implicitly obeyed. We admit here that, while we have been aware of the fact for long time, only after the dissemination of the closely related hypotheses of one sense per discourse (Gale, Church and Yarowsky 1992)" and one sense per collocation (Yarowsky 1993), we are able to articulate the hypothesis of one tokenization per source. The point here is that, one tokenization per source is unlikely an isolated phenomenon. Rather, there must exist a general law that covers all the related linguistic phenomena. Let us speculate that, for a proper linguistic expression in a proper scope, there always exists the regularity of one realization per expression. That is, only one of the multiple values on one aspect of a linguistic expression can be realized in the specified scope. In this way, one tokenization per source becomes a particular articulation of one realization per expression. The two essential terms here are the proper linguistic expression and the proper scope of the claim. A quick example is helpful here: part-of- speech tagging for the English sentence "Can you can the can?" 
If the linguistic expressions are taken as ordinary English words, they are nevertheless highly ambiguous, e.g., the English word can realizes three different part-of-speeches in the sentence. However, if "the can", "can the" and the like are taken as the underling linguistic expressions, they are apparently unambiguous: "the can/NN", "can/VB the" and the rest "can/MD". This fact can largely be predicted by the hypothesis of one sense per collocation, and can partially explain the great success of Brill's transformation-based part-of-speech tagging (Brill 1993). As to the hypothesis of one tokenization per source, it is now clear that, the theory of critical tokenization has provided the suitable means for capturing the proper linguistic expression. 462 7 Conclusion The hypothesis of one tokenization per source confirms surprisingly well (99.92% ~ 99.97%) with corpus evidences, and works extremely well (90% - 97%) in critical ambiguity resolution. It is formulated on the critical tokenization theory and inspired by the parallel hypotheses of one sense per discourse and one sense per collocation, as is postulated as a particular articulation of the general law of one realization per expression. We also argue for the further generalization of regarding it as a new paradigm for studying the twin-issue of token and tokenization. Acknowledgements Part of this paper, especially the Introduction and Discussion sections, was once presented at the November 1997 session of the monthly Symposium on Linguistics and Language Information Research organized by COLIPS (Chinese and Oriental Languages Information Processing Society) in Singapore. Fruitful discussions, especially with Xu Jie, Ji Donghong, Su Jian, Ni Yibin, and Lua Kim Teng, are gratefully acknowledged, as are the tokenization efforts by dozen of my colleagues and friends. However, the opinions expressed reflect solely those of the author. References Black, Ezra, Roger Garside, and Geoffery Leech (1993). Statistically-Driven Computer Grammars of English: The IBM/Lancaster Approach, Amsterdam: Rodopi Publishers. Brill, Eric (1993). A Corpus-Based Approach to Language Learning, Ph.D Dissertation, Department of Computer and Information Science, University of Pennsylvania. Church, Kenneth. W. and William A. Gale (1991). A Comparison of the Enhanced Good-Turing and Deleted Estimation Methods for Estimating Probabilities of English Bigrams, Computer Speech and Language, Vol. 5, No. 1, pages 19-54. Gale, William A., Kenneth W. Church and David Yarowsky (1992b). One Sense Per Discourse, In: Proceedings of the 4 '~ DARPA Workshop on Speech and Natural Language, pages 233-237. Gan, Kok-Wee; Palmer, Martha; and Lua, Kim-Teng (1996). A Statistically Emergent Approach for Language Processing: Application to Modeling Context Effects in Ambiguous Chinese Word Boundary Perception. Computational Linguistics Vol. 22, No. 4, pages 531-553. Good, I. J. (1953). The Population Frequencies of Species and the Estimation of Population Parameters. Biometrika, Volume 40, pages 237-264. Guo, Jin (1993). PH - A Free Chinese Corpus, Communications of COUPS, Vol. 3, No. 1, pages 45-48. Guo, Jin (1997). Critical Tokenization and its Properties, Computational Linguistics, Vol. 23, No. 4, pages 569-596. Kuhn, Thomas (1970). The Structure of Scientific Revolutions. Second Edition, Enlarged. The University of Chicago Press. Chicago. Liu, Yuan and Nanyuan Liang (1989). Contemporary Chinese Common Word Frequency Dictionary (Phonetically Ordered Version). Yuhang Press, Beijing. 
Sproat, Richard, Chilin Shih, Villiam Gale, and Nancy Chang (1996). A Stochastic Finite-State Word- Segmentation Algorithm for Chinese, Computational Linguistics, Vol. 22, No. 3, pages 377--404. Sun, Maosong and Benjemin Tsou (1995). Ambiguity Resolution in Chinese Word Segmentation, Proceedings of the 10th Pacific Asia Conference on Language, Information and Computation (PACLIC- 95), pages 121-126, Hong Kong. Wong, K-F.; Pan, H-H.; Low, B-T.; Cheng, C-H.; Lum, V. and Lam, S-S. (1995). A Tool for Compute-Assisted Open Response Analysis, Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages, pages 191-198, Hawaii. Wu, Dekai (1997). Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora, Computational Linguistics, Vol. 23, No. 3, pages 377-403. Yarowsky, David (1993). One Sense Per Collocation, In: Proceedings of ARPA Human Language Technology Workshop, Princeton, pages 266-271. Zheng, Jiaheng and Kaiying Liu (1997). The Research of Ambiguity Word-Segmentation Technique for the Chinese Text, In Chert, Liwai and Qi Yuan (editors). Language Engineering, Tsinghua University Press. Page 201-206. 463
Efficient Linear Logic Meaning Assembly

Vineet Gupta
Caelum Research Corporation
NASA Ames Research Center
Moffett Field CA 94035
vgupta@ptolemy.arc.nasa.gov

John Lamping
Xerox PARC
3333 Coyote Hill Road
Palo Alto CA 94304 USA
lamping@parc.xerox.com

1 Introduction

The "glue" approach to semantic composition in Lexical-Functional Grammar uses linear logic to assemble meanings from syntactic analyses (Dalrymple et al., 1993). It has been computationally feasible in practice (Dalrymple et al., 1997b). Yet deduction in linear logic is known to be intractable; even the propositional tensor fragment is NP-complete (Kanovich, 1992). In this paper, we investigate what has made the glue approach computationally feasible and show how to exploit that to efficiently deduce underspecified representations.

In the next section, we identify a restricted pattern of use of linear logic in the glue analyses we are aware of, including those in (Crouch and Genabith, 1997; Dalrymple et al., 1996; Dalrymple et al., 1995), and we show why that fragment is computationally feasible. In other words, while the glue approach could be used to express computationally intractable analyses, actual analyses have adhered to a pattern of use of linear logic that is tractable.

The rest of the paper shows how this pattern of use can be exploited to efficiently capture all possible deductions. We present a conservative extension of linear logic that allows a reformulation of the semantic contributions to better exploit this pattern, almost turning them into Horn clauses. We present a deduction algorithm for this formulation that yields a compact description of the possible deductions. And finally, we show how that description of deductions can be turned into a compact underspecified description of the possible meanings.

Throughout the paper we will use the illustrative sentence "every gray cat left". It has functional structure

(1)  f: [ PRED 'LEAVE'
          SUBJ g: [ PRED 'CAT'
                    SPEC 'EVERY'
                    MODS { [ PRED 'GRAY' ] } ] ]

and semantic contributions

leave : ∀x. gσ ⇝ x ⊸ fσ ⇝ leave(x)
cat :   ∀x. (gσ VAR) ⇝ x ⊸ (gσ RESTR) ⇝ cat(x)
gray :  ∀P. [∀x. (gσ VAR) ⇝ x ⊸ (gσ RESTR) ⇝ P(x)]
          ⊸ [∀x. (gσ VAR) ⇝ x ⊸ (gσ RESTR) ⇝ gray(P)(x)]
every : ∀H, R, S. [∀x. (gσ VAR) ⇝ x ⊸ (gσ RESTR) ⇝ R(x)]
          ⊗ [∀x. gσ ⇝ x ⊸ H ⇝ S(x)]
          ⊸ H ⇝ every(R, S)

For our purposes, it is more convenient to follow (Dalrymple et al., 1997a) and separate the two parts of the semantic contributions: use a lambda term to capture the meaning formulas, and a type to capture the connections to the f-structure. In this form, the contributions are

leave : λx.leave(x) : gσ ⊸ fσ
cat :   λx.cat(x) : (gσ VAR) ⊸ (gσ RESTR)
gray :  λP.λx.gray(P)(x) : ((gσ VAR) ⊸ (gσ RESTR)) ⊸ (gσ VAR) ⊸ (gσ RESTR)
every : λR.λS.every(R, S) : ∀H. (((gσ VAR) ⊸ (gσ RESTR)) ⊗ (gσ ⊸ H)) ⊸ H

With this separation, the possible derivations are determined solely by the "types", the connections to the f-structure. The meaning is assembled by applying the lambda terms in accordance with a proof of a type for the sentence. We give the formal system behind this approach, C, in Figure 1; this is a different presentation of the system given in (Dalrymple et al., 1997a), adding the two standard rules for tensor, using pairing for meanings. For the types, the system merely consists of the linear logic rules for the glue fragment. We give the proof for our example in Figure 2, where we have written the types only, and have omitted the trivial proofs at the top of the tree. The meaning every(gray(cat), left) may be assembled by putting the meanings back in according to the rules of C and η-reduction.

  (axiom)   M : A ⊢ M′ : A, where M and M′ are α,η-equivalent
  (exch)    Γ, P, Q, Δ ⊢ R  ⟹  Γ, Q, P, Δ ⊢ R
  (∀ left)  Γ, M : A[B/X] ⊢ R  ⟹  Γ, M : ∀X.A ⊢ R
  (∀ right) Γ ⊢ M : A[Y/X]  ⟹  Γ ⊢ M : ∀X.A   (Y new)
  (⊸ left)  Γ ⊢ N : A  and  Δ, M[N/x] : B ⊢ R  ⟹  Γ, Δ, λx.M : A ⊸ B ⊢ R
  (⊸ right) Γ, y : A ⊢ M[y/x] : B  ⟹  Γ ⊢ λx.M : A ⊸ B   (y new)
  (⊗ left)  Γ, M : A, N : B ⊢ R  ⟹  Γ, (M, N) : A ⊗ B ⊢ R
  (⊗ right) Γ ⊢ M : A  and  Δ ⊢ N : B  ⟹  Γ, Δ ⊢ (M, N) : A ⊗ B

Figure 1: The system C. M, N are meanings, and x, y are meaning variables. A, B are types, and X, Y are type variables. P, Q, R are formulas of the kind M : A. Γ, Δ are multisets of formulas.

  (a) cat ⊢ (gσ VAR) ⊸ (gσ RESTR)
  (b) (gσ VAR) ⊸ (gσ RESTR) ⊢ (gσ VAR) ⊸ (gσ RESTR)
  (c) from (a) and (b), by ⊸-left on gray's type:
      gray, cat ⊢ (gσ VAR) ⊸ (gσ RESTR)
  (d) leave ⊢ gσ ⊸ fσ
  (e) from (c) and (d), by ⊗-right:
      gray, cat, leave ⊢ ((gσ VAR) ⊸ (gσ RESTR)) ⊗ (gσ ⊸ fσ)
  (f) fσ ⊢ fσ
  (g) from (e) and (f), by ⊸-left on every's type:
      every, gray, cat, leave ⊢ fσ

Figure 2: Proof of "Every gray cat left", omitting the lambda terms.

2 Skeleton references and modifier references

The terms that describe atomic types, terms like gσ and (gσ VAR), are semantic structure references, the type atoms that connect the semantic assembly to the syntax. There is a pattern to how they occur in glue analyses, which reflects their function in the semantics.

Consider a particular type atom in the example, such as gσ. It occurs once positively in the contribution of "every" and once negatively in the contribution of "leave". A slightly more complicated example: the type (gσ RESTR) occurs once positively in the contribution of "cat", once negatively in the contribution of "every", and once each positively and negatively in the contribution of "gray".

The pattern is that every type atom occurs once positively in one contribution, once negatively in one contribution, and once each positively and negatively in zero or more other contributions. (To make this generalization hold, we add a negative occurrence or "consumer" of fσ, the final meaning of the sentence.) This pattern holds in all the glue analyses we know of, with one exception that we will treat shortly. We call the independent occurrences the skeleton occurrences, and the occurrences that occur paired in a contribution modifier occurrences.

The pattern reflects the functions of the lexical entries in LFG. For the type that corresponds to a particular f-structure, the idea is that the entry corresponding to the head makes a positive skeleton contribution, the entry that subcategorizes for the f-structure makes a negative skeleton contribution, and modifiers on the f-structure make both positive and negative modifier contributions.

Here are the contributions for the example sentence again, with the occurrences classified. Each occurrence is marked positive or negative, and the skeleton occurrences are underlined (underlining is rendered here by enclosing the atom in underscores).

leave : _gσ⁻_ ⊸ _fσ⁺_
cat :   _(gσ VAR)⁻_ ⊸ _(gσ RESTR)⁺_
gray :  ((gσ VAR)⁺ ⊸ (gσ RESTR)⁻) ⊸ (gσ VAR)⁻ ⊸ (gσ RESTR)⁺
every : ∀H. ((_(gσ VAR)⁺_ ⊸ _(gσ RESTR)⁻_) ⊗ (_gσ⁺_ ⊸ H⁻)) ⊸ H⁺

This pattern explains the empirical tractability of glue inference. In the general case of multiplicative linear logic, there can be complex combinatorics in matching up positive and negative occurrences of literals, which leads to NP-completeness (Kanovich, 1992). In the glue fragment, on the other hand, the only combinatorial question is the relative ordering of modifiers. In the common case, each of those orderings is legal and gives rise to a different meaning. So the combinatorics of inference tends to be proportional to the degree of semantic ambiguity. The complexity per possible reading is thus roughly linear in the size of the utterance.

But this simple combinatoric structure suggests a better way to exploit the pattern. Rather than have inference explore all the combinatorics of different modifier orders, we can get a single underspecified representation that captures all possible orders, without having to explore them.

The idea is to do a preliminary deduction involving just the skeleton, ignoring the modifier occurrences. This will be completely deterministic and linear in the total length of the formulas. Once we have this skeletal deduction, we know that the sentence is well-formed and has a meaning, since modifier occurrences essentially occur as instances of the identity axiom and do not contribute to the type of the sentence. Then the system can determine the meaning terms, and describe how the modifiers can be attached to get the final meaning term. That is the goal of the rest of the paper.

3 Conversion toward Horn clauses

The first hurdle is that the distinction between skeleton and modifier applies to atomic types, not to entire contributions. The contribution of "every", for example, has skeleton contributions for gσ, (gσ VAR), and (gσ RESTR), but modifier contributions for H. Furthermore, the nested implication structure allows no nice way to disentangle the two kinds of occurrences. When a deduction interacts with the skeletal gσ in the hypothetical, it also brings in the modifier H.

If the problematic hypothetical could be converted to Horn clauses, then we could get a better separation of the two types of occurrences. We can approximate this by going to an indexed linear logic, a conservative extension of the system of Figure 1, similar to Hepple's system (Hepple, 1996).

To handle nested implications, we introduce the type constructor A{B}, which indicates an A whose derivation made use of B. This is similar to Hepple's use of indices, except that we indicate dependence on types, rather than on indices. This is sufficient in our application, since each such type has a unique positive skeletal occurrence.

We can eliminate problematic nested implications by translating them into this construct, in accordance with the following rule. For a nested hypothetical at top level that has a mix of skeleton and modifier types,

  M : (A ⊸ B) ⊸ C

replace it with

  x : A,  M : (B{A} ⊸ C)

where x is a new variable, and reduce complex dependency formulas as follows:

1. Replace A{B ⊸ C} with A{C{B}}.
2. Replace (A ⊸ B){C} with A ⊸ B{C}.

The semantics of the new type constructor is captured by the additional proof rule:

  Γ, x : A ⊢ M : B  ⟹  Γ, x : A ⊢ λx.M : B{A}

The translation is sound with respect to this rule:

Theorem 1. If Γ is a set of sentences in the unextended system of Figure 1, A is a sentence in that system, and Γ′ results from Γ by applying the above conversion rules, then Γ ⊢ A in the system of Figure 1 iff Γ′ ⊢ A in the extended system.

The analysis of pronouns presents a different problem, which we discuss in section 5.
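The two dependency-reduction rules can be read as a small rewriting system on types. The sketch below applies them to a normal form over a toy type representation; it is our own illustration, not the authors' implementation.

```python
# Types: atoms are strings; ("-o", A, B) encodes A -o B; ("dep", A, B)
# encodes the dependency type A{B} of the indexed linear logic.
def simplify(t):
    """Apply the two reduction rules of Section 3 exhaustively:
         A{B -o C}    =>  A{C{B}}     (rule 1)
         (A -o B){C}  =>  A -o B{C}   (rule 2)
    """
    if isinstance(t, str):
        return t
    op, a, b = t
    a, b = simplify(a), simplify(b)
    if op == "dep" and isinstance(b, tuple) and b[0] == "-o":
        return simplify(("dep", a, ("dep", b[2], b[1])))      # rule 1
    if op == "dep" and isinstance(a, tuple) and a[0] == "-o":
        return simplify(("-o", a[1], ("dep", a[2], b)))       # rule 2
    return (op, a, b)

# (A -o B){C -o D}  reduces to  A -o B{D{C}}:
print(simplify(("dep", ("-o", "A", "B"), ("-o", "C", "D"))))
```

Note that when both rules apply, as in this example, they commute: applying rule 2 first gives A ⊸ B{C ⊸ D}, whose inner dependency then reduces by rule 1 to the same A ⊸ B{D{C}}.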
The meaning every(gray(cat), leave) may be assembled by putting the meanings back in according to the rules of C and η-reduction.

  M : A ⊢C M′ : A   where M →βη M′

  Γ, P, Q, Δ ⊢C R
  ----------------
  Γ, Q, P, Δ ⊢C R

  Γ, M : A[B/X] ⊢C R
  -------------------
  Γ, M : ∀X.A ⊢C R

  Γ ⊢C M : A[Y/X]
  ----------------  (Y new)
  Γ ⊢C M : ∀X.A

  Γ ⊢C N : A    Δ, M[N/x] : B ⊢C R
  ---------------------------------
  Γ, Δ, λx.M : A ⊸ B ⊢C R

  Γ, y : A ⊢C M[y/x] : B
  -----------------------  (y new)
  Γ ⊢C λx.M : A ⊸ B

  Γ, M : A, N : B ⊢ R
  ----------------------
  Γ, (M, N) : A ⊗ B ⊢ R

  Γ ⊢ M : A    Δ ⊢ N : B
  -----------------------
  Γ, Δ ⊢ (M, N) : A ⊗ B

Figure 1: The system C. M, N are meanings, and x, y are meaning variables. A, B are types, and X, Y are type variables. P, Q, R are formulas of the kind M : A. Γ, Δ are multisets of formulas.

2 Skeleton references and modifier references

The terms that describe atomic types, terms like gσ and (gσ VAR), are semantic structure references, the type atoms that connect the semantic assembly to the syntax. There is a pattern to how they occur in glue analyses, which reflects their function in the semantics.

Consider a particular type atom in the example, such as gσ. It occurs once positively in the contribution of "every" and once negatively in the contribution of "leave". A slightly more complicated example, the type (gσ RESTR), occurs once positively in the contribution of "cat", once negatively in the contribution of "every", and once each positively and negatively in the contribution of "gray".

The pattern is that every type atom occurs once positively in one contribution, once negatively in one contribution, and once each positively and negatively in zero or more other contributions. (To make this generalization hold, we add a negative occurrence or "consumer" of fσ, the final meaning of the sentence.) This pattern holds in all the glue analyses we know of, with one exception that we will treat shortly. We call the independent occurrences the skeleton occurrences, and the occurrences that occur paired in a contribution modifier occurrences.

The pattern reflects the functions of the lexical entries in LFG. For the type that corresponds to a particular f-structure, the idea is that the entry corresponding to the head makes a positive skeleton contribution, the entry that subcategorizes for the f-structure makes a negative skeleton contribution, and modifiers on the f-structure make both positive and negative modifier contributions.

Here are the contributions for the example sentence again, with the occurrences classified. Each occurrence is marked positive or negative, and the skeleton occurrences (underlined in the original) are marked here as _..._.

  leave : _gσ−_ ⊸ _fσ+_
  cat   : _(gσ VAR)−_ ⊸ _(gσ RESTR)+_
  gray  : ((gσ VAR)+ ⊸ (gσ RESTR)−) ⊸ (gσ VAR)− ⊸ (gσ RESTR)+
  every : ∀H. ((_(gσ VAR)+_ ⊸ _(gσ RESTR)−_) ⊗ (_gσ+_ ⊸ H−)) ⊸ H+

This pattern explains the empirical tractability of glue inference. In the general case of multiplicative linear logic, there can be complex combinatorics in matching up positive and negative occurrences of literals, which leads to NP-completeness (Kanovich, 1992). But in the glue fragment, on the other hand, the only combinatorial question is the relative ordering of modifiers. In the common case, each of those orderings is legal and gives rise to a different meaning. So the combinatorics of inference tends to be proportional to the degree of semantic ambiguity. The complexity per possible reading is thus roughly linear in the size of the utterance.

But this simple combinatoric structure suggests a better way to exploit the pattern.
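The positive/negative classification can be computed mechanically from the types alone. The following sketch is ours (the tuple encoding and names are assumptions, not the paper's); it collects atom occurrences with their polarities, flipping polarity on the left-hand side of a linear implication:

    def polarities(t, pol=+1, out=None):
        """Collect (atom, polarity) pairs from a type; the left-hand
        side of a linear implication flips polarity."""
        if out is None:
            out = []
        tag = t[0]
        if tag == "atom":
            out.append((t[1], pol))
        elif tag == "impl":
            polarities(t[1], -pol, out)
            polarities(t[2], pol, out)
        elif tag == "tensor":
            polarities(t[1], pol, out)
            polarities(t[2], pol, out)
        return out

    GVAR, GRESTR = ("atom", "g VAR"), ("atom", "g RESTR")
    gray = ("impl", ("impl", GVAR, GRESTR), ("impl", GVAR, GRESTR))
    print(polarities(gray))
    # [('g VAR', 1), ('g RESTR', -1), ('g VAR', -1), ('g RESTR', 1)]

Counting these occurrences across contributions then recovers the classification above: within a single contribution, an atom occurring once positively and once negatively (like g VAR and g RESTR in gray) is a candidate modifier pair, while unpaired occurrences are skeleton. Definition 2 below refines this test for cases such as intensional verbs.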
Rather than have inference explore all the combinatorics of different modifier orders, we can get a single underspecified representation that captures all possible orders, without having to explore them.

  cat ⊢ (gσ VAR) ⊸ (gσ RESTR)    (gσ VAR) ⊸ (gσ RESTR) ⊢ (gσ VAR) ⊸ (gσ RESTR)
  ------------------------------------------------------------------------------
  cat, ((gσ VAR) ⊸ (gσ RESTR)) ⊸ (gσ VAR) ⊸ (gσ RESTR) ⊢ (gσ VAR) ⊸ (gσ RESTR)
  [i.e., gray, cat ⊢ (gσ VAR) ⊸ (gσ RESTR)]    leave ⊢ gσ ⊸ fσ
  --------------------------------------------------------------
  gray, cat, leave ⊢ ((gσ VAR) ⊸ (gσ RESTR)) ⊗ (gσ ⊸ fσ)    fσ ⊢ fσ
  -------------------------------------------------------------------
  gray, cat, leave, (((gσ VAR) ⊸ (gσ RESTR)) ⊗ (gσ ⊸ fσ)) ⊸ fσ ⊢ fσ
  [i.e., every, gray, cat, leave ⊢ fσ]

Figure 2: Proof of "every gray cat left", omitting the lambda terms

The idea is to do a preliminary deduction involving just the skeleton, ignoring the modifier occurrences. This will be completely deterministic and linear in the total length of the formulas. Once we have this skeletal deduction, we know that the sentence is well-formed and has a meaning, since modifier occurrences essentially occur as instances of the identity axiom and do not contribute to the type of the sentence. Then the system can determine the meaning terms, and describe how the modifiers can be attached to get the final meaning term. That is the goal of the rest of the paper.

3 Conversion toward Horn clauses

The first hurdle is that the distinction between skeleton and modifier applies to atomic types, not to entire contributions. The contribution of "every", for example, has skeleton contributions for gσ, (gσ VAR), and (gσ RESTR), but modifier contributions for H. Furthermore, the nested implication structure allows no nice way to disentangle the two kinds of occurrences. When a deduction interacts with the skeletal gσ in the hypothetical, it also brings in the modifier H.

If the problematic hypothetical could be converted to Horn clauses, then we could get a better separation of the two types of occurrences. We can approximate this by going to an indexed linear logic, a conservative extension of the system of Figure 1, similar to Hepple's system (Hepple, 1996).

To handle nested implications, we introduce the type constructor A{B}, which indicates an A whose derivation made use of B. This is similar to Hepple's use of indices, except that we indicate dependence on types, rather than on indices. This is sufficient in our application, since each such type has a unique positive skeletal occurrence.

We can eliminate problematic nested implications by translating them into this construct, in accordance with the following rule: for a nested hypothetical at top level that has a mix of skeleton and modifier types,

  M : (A ⊸ B) ⊸ C

replace it with

  x : A,  M : (B{A} ⊸ C)

where x is a new variable, and reduce complex dependency formulas as follows:

1. Replace A{B ⊸ C} with A{C{B}}.
2. Replace (A ⊸ B){C} with A ⊸ B{C}.

The semantics of the new type constructor is captured by the additional proof rule:

  Γ, x : A ⊢ M : B
  ------------------------
  Γ, x : A ⊢ λx.M : B{A}

The translation is sound with respect to this rule:

Theorem 1. If Γ is a set of sentences in the unextended system of Figure 1, A is a sentence in that system, and Γ′ results from Γ by applying the above conversion rules, then Γ ⊢ A in the system of Figure 1 iff Γ′ ⊢ A in the extended system.

The analysis of pronouns presents a different problem, which we discuss in section 5.
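The nested-hypothetical translation and the two dependency reductions are simple term rewrites. Here is a sketch of ours (tuple-encoded types as before, with ("dep", A, B) standing for A{B}); the demo type has the shape ((h ⊸ S) ⊸ S) ⊸ g ⊸ f, which is the shape that arises for intensional verbs in the next section:

    def split_hypothetical(t):
        """Top-level (A -o B) -o C  =>  new premise A, plus
        B{A} -o C (before dependency reduction)."""
        _, ab, c = t
        _, a, b = ab
        return a, ("impl", ("dep", b, a), c)

    def reduce_dep(t):
        """Rule 1: A{B -o C} => A{C{B}}.
        Rule 2: (A -o B){C} => A -o B{C}."""
        if t[0] == "dep":
            a, b = reduce_dep(t[1]), reduce_dep(t[2])
            if b[0] == "impl":                     # rule 1
                return reduce_dep(("dep", a, ("dep", b[2], b[1])))
            if a[0] == "impl":                     # rule 2
                return ("impl", a[1], reduce_dep(("dep", a[2], b)))
            return ("dep", a, b)
        if t[0] in ("impl", "tensor"):
            return (t[0], reduce_dep(t[1]), reduce_dep(t[2]))
        return t

    h, s = ("atom", "h"), ("atom", "S")
    g, f = ("atom", "g"), ("atom", "f")
    iv = ("impl", ("impl", ("impl", h, s), s), ("impl", g, f))
    premise, residue = split_hypothetical(iv)
    print(premise)              # ('impl', ('atom', 'h'), ('atom', 'S')), i.e. h -o S
    print(reduce_dep(residue))  # S{S{h}} -o (g -o f), as nested tuples

The output matches the hand conversion given in the next section: the premise h ⊸ S plus the residue S{S{h}} ⊸ g ⊸ f.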
For all other glue analyses we know of, these conversions are sufficient to separate items that mix interaction and modification into statements of the form S, M, or S ⊸ M, where S is pure skeleton and M is pure modifier. Furthermore, M will be of the form A ⊸ A, where A may be a formula, not just an atom. In other words, the type of the modifier will be an identity axiom. The modifier will consume some meaning and produce a modified meaning of the same type.

In our example, the contribution of "every" can be transformed by two applications of the nested hypothetical rule to

  every : λR.λS.every(R, S) : ∀H. (gσ RESTR){(gσ VAR)} ⊸ H{gσ} ⊸ H
  x : (gσ VAR)
  y : gσ

Here, the last two sentences are pure skeleton, producing (gσ VAR) and gσ, respectively. The first is of the form S ⊸ M, consuming (gσ RESTR) to produce a pure modifier.

While the rule for nested hypotheticals could be generalized to eliminate all nested implications, as Hepple does, that is not our goal, because that does not remove the combinatorial combination of different modifier orders. We use the rule only to segregate skeleton atoms from modifier atoms. Since we want modifiers to end up looking like the identity axiom, we leave them in the A ⊸ A form, even if A contains further implications. For example, we would not apply the nested hypothetical rule to simplify the entry for gray any further, since it is already in the form A ⊸ A.

Handling intensional verbs requires a more precise definition of skeleton and modifier. The type part of an intensional verb contribution looks like (∀F.(hσ ⊸ F) ⊸ F) ⊸ gσ ⊸ fσ (Dalrymple et al., 1996). First, we have to deal with the small technical problem that the ∀F gets in the way of the nested hypothetical translation rule. This is easily resolved by introducing a Skolem constant, S, turning the type into ((hσ ⊸ S) ⊸ S) ⊸ gσ ⊸ fσ. Now the nested hypothetical rule can be applied to yield (hσ ⊸ S) and S{S{hσ}} ⊸ gσ ⊸ fσ.

But now we have the interesting question of whether the occurrences of the Skolem constant S are skeleton or modifier. If we observe how S resources get produced and consumed in a deduction involving the intensional verb, we find that (hσ ⊸ S) produces an S, which may be modified by quantifiers, and then gets consumed by S{S{hσ}} ⊸ gσ ⊸ fσ. So unlike a modifier, which takes an existing resource from the environment and puts it back, the intensional verb places the initial resource into the environment, allows modifiers to act on it, and then takes it out. In other words, the intensional verb is acting like a combination of a skeleton producer and a skeleton consumer.

So just because an atom occurs twice in a contribution doesn't make the contribution a modifier. It is a modifier if its atoms must interact with the outside, rather than with each other. Roughly, paired modifier atoms function as f ⊸ f, rather than as f ⊗ f⊥, as do the S atoms of intensional verbs. Stated precisely:

Definition 2. Assume two occurrences of the same type atom occur in a single contribution. Convert the formula to a normal form consisting of just ⊗, ⅋, and ⊥ on atoms by converting subformulas A ⊸ B to the equivalent A⊥ ⅋ B, and then using De Morgan's laws to push all ⊥'s down to atoms. Now, if the occurrences of the same type atom occur with opposite polarity and the connective between the two subexpressions in which they occur is ⅋, then the occurrences are modifiers. All other occurrences are skeleton.
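Definition 2 can be operationalized directly. The sketch below (ours, on the tuple encoding used earlier) rewrites a type into the ⊗/⅋ normal form with polarities pushed to the atoms; one then checks whether a pair of opposite-polarity occurrences meets at a ⅋ (modifier) or at a ⊗ (skeleton, as with the Skolem constant S):

    def nnf(t, neg=False):
        """Normal form over (x) and par, polarities at the atoms:
        A -o B becomes A^perp par B, and De Morgan dualizes the
        connectives under negation."""
        if t[0] == "atom":
            return (t[1], "-" if neg else "+")
        if t[0] == "impl":
            if neg:      # (A^perp par B)^perp = A (x) B^perp
                return ("tensor", nnf(t[1]), nnf(t[2], True))
            return ("par", nnf(t[1], True), nnf(t[2]))
        op = {"tensor": "par", "par": "tensor"}[t[0]] if neg else t[0]
        return (op, nnf(t[1], neg), nnf(t[2], neg))

    GVAR, GRESTR = ("atom", "g VAR"), ("atom", "g RESTR")
    gray = ("impl", ("impl", GVAR, GRESTR), ("impl", GVAR, GRESTR))
    print(nnf(gray))
    # ('par', ('tensor', ('g VAR', '+'), ('g RESTR', '-')),
    #         ('par', ('g VAR', '-'), ('g RESTR', '+')))

In the output, the two g VAR occurrences have opposite polarity and meet at the outermost ⅋, so by Definition 2 they are modifier occurrences; the two S occurrences of an intensional verb instead meet at a ⊗ and count as skeleton.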
For the glue analyses we are aware of, this definition identifies exactly one positive and one negative skeleton occurrence of each type among all the contributions for a sentence.

4 Efficient deduction of underspecified representation

In the converted form, the skeleton deductions can be done independently of the modifier deductions. Furthermore, the skeleton deductions are completely trivial; they require just a linear-time algorithm: since each type occurs once positively and once negatively, the algorithm just resolves the matching positive and negative skeleton occurrences. The result is several deductions starting from the contributions that collectively use all of the contributions. One of the deductions produces a meaning for fσ, for the whole f-structure. The others produce pure modifiers -- these are of the form A ⊸ A.
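The linear-time skeleton step can be pictured as follows. This is our own sketch: it tracks only the skeleton atoms, omitting meaning terms and dependencies like {gσ}, and omitting the pure-modifier contribution gray, which forms its own group. The "goal" entry is the final consumer of f mentioned in section 2.

    from collections import defaultdict

    def skeleton_groups(contribs):
        """contribs maps a contribution name to (produced, consumed)
        lists of skeleton atoms. Each atom is produced exactly once
        and consumed exactly once, so one pass pairs them up;
        union-find then yields the separate skeleton deductions."""
        producer, consumer = {}, {}
        for name, (pos, neg) in contribs.items():
            for a in pos:
                producer[a] = name
            for a in neg:
                consumer[a] = name
        parent = {n: n for n in contribs}
        def find(n):
            while parent[n] != n:
                parent[n] = parent[parent[n]]
                n = parent[n]
            return n
        for a, p in producer.items():
            if a in consumer:
                parent[find(p)] = find(consumer[a])
        groups = defaultdict(list)
        for n in contribs:
            groups[find(n)].append(n)
        return sorted(groups.values(), key=len, reverse=True)

    contribs = {
        "leave":  (["f"], ["g"]),
        "cat":    (["g RESTR"], ["g VAR"]),
        "every1": ([], ["g RESTR"]),   # residue S -o M; yields a pure modifier
        "every2": (["g VAR"], []),
        "every3": (["g"], []),
        "goal":   ([], ["f"]),         # the final consumer of f
    }
    print(skeleton_groups(contribs))
    # [['leave', 'every3', 'goal'], ['cat', 'every1', 'every2']]  (up to order)

The two connected components correspond to the two non-trivial skeleton deductions shown in Figure 3 below; gray, being pure modifier, stands alone.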
Returning to our original example, the skele- ton deductions yield the following three trees: !g RESTR) / ~/-/Iga~ tga VAR) ---o • ,~Z. ] ga RESTR) leave (go RESTR)I gray cat lga VAR) --o I g~ (go VAR) ~ I tgo' RESTR) y leave(y) aS.every(;~x.cat(x),S) aP.ax.gray(P)(x) Notice that higher order arguments are reflected as structured types, like (g~ VAR) ----o (g~ RESTR). These trees are a compact description of the possible meanings, in this case the one possible meaning. We believe it will be possible to translate this rep- resentation into a UDRS representation(Reyle, 1993), or other similar representations for ambiguous sentences. We can also use the trees directly as an un- derspecified representation. To read out a par- ticular meaning, we just interpolate modifiers into the arcs they modify. Dependencies on a 468 The functional structure of "Someone likes every cat". PRED SUBJ /: OBJ The lexical entries after 'LIKE' h:[ pRro 'soMroNE'] PRED 'eAT' ] g: SPEC ~EVERY' conversion to indexed form: like : cat : someonel : someone2 : everyl : every2 : everya : Ax.Ay.tike(x, y): (ho ® go) -o/o Ax.cat(x): (go VAR) -o (ga RESTR) z:hv AS.some(person, S) : VH. H{ho) --o H AR.AS.every(R, S) : vg. (go RESTR){(go VA1Q) --o H{go) --o H x : (go VAR) Y:go From these we can prove: someone1, everya, like ~- like(z, y) : fo someone2 F- AS.some(person, S) : VH. H{ho} --o H every2, cat, every1 b AS.every(cat, S) : VH. H{go} -o H Figure 4: Skeleton deductions for "Someone likes every cat" modifier's type indicate that a lambda abstrac- tion is also needed. So, when "every cat" mod- ifies the sentence meaning, its antecedent, in- stantiated to fo{go) indicates that it lambda abstracts over the variable annotated with go and replaces the term annotated fo. So the re- sult is: Ifo , every RESTR.) A Ax. Y. (go RESTR)] ]fo cat leave (go VAR)!: /o Similarly "gray" can modify this by splicing it into the line labeled (go VAR) --o (go RESTR) to yield (after y-reduction, and removing labels on the arcs). Ifo /ver gray leave I cat This gets us the expected meaning every(gray(cat), leave). In some cases, the link called for by a higher order modifier is not directly present in the tree, and we need to do A-abstraction to support it. Consider the sentence "John read Hamlet quickly". We get the following two trees from the skeleton deductions: re!fd g/ \ho John Hamlet read(John, Hamlet) I go --o fo quickly Igo-o fo AP.Ax.quickly( P )( x ) There is no link labeled ga --o fa to be modi- fied. The left tree however may be converted by A-abstraction to the following tree, which has a required link. The @ symbol represents A ap- plication of the right subtree to the left. I/o Ax. John I/o read gj \ho x Hamlet Now quickly can be interpolated into the link labeled go --o fo to get the desired meaning quickly(read(Hamlet), John), after r/- reduction. The cases where A-abstraction is re- quired can be detected by scanning the modi- fiers and noting whether the links to be mod- ified are present in the skeleton trees. If not, A-abstraction can introduce them into the un- 469 derspecified representation. Furthermore, the introduction is unavoidable, as the link will be present in any final meaning. 5 Anaphora As mentioned earlier, anaphoric pronouns present a different challenge to separating skele- ton and modifier. Their analysis yields types like f~ --o (f~ ® g~) where g~ is skeleton and f~ is modifier. We sketch how to separate them. 
We introduce another type constructor (B)A, informally indicating that A has not been fully used, but is also used to get B. This lets us break apart an implication whose right hand side is a product in accordance with the following rule: For an implication that occurs at top level, and has a product on the right hand side that mixes skeleton and modifier types: Ax.(M, N) : A ---o (B ® C) replace it with Ax.M : (C)A -o B, N : C The semantics of this constructor is captured by the two rules: M1 : AI~...,M,~ : An ~- M : A M1 : (B)A1,...,Mn: (B)A,~ t- M: (B)A F, M1 : (B)A, M2 :B~-N :C F t, M~:A, M~:B~-N':C where the primed terms are obtained by replacing free x's with what was applied to the Ax. in the deduction of (B)A With these rules, we get the analogue of The- orem 1 for the conversion rule. In doing the skeleton deduction we don't worry about the (B)A constructor, but we introduce constraints on modifier positioning that require that a hy- pothetical dependency can't be satisfied by a deduction that uses only part of the resource it requires. 6 Acknowledgements We would like to thank Mary Dalrymple, John Fry, Stephan Kauffmann, and Hadar Shemtov for discussions of these ideas and for comments on this paper. References Richard Crouch and Josef van Genabith. 1997. How to glue a donkey to an f-structure, or porting a dynamic meaning representation into LFG's linear logic based glue-language semantics. Paper to be presented at the Sec- ond International Workshop on Computa- tional Semantics, Tilburg, The Netherlands, January 1997. Mary Dalrymple, John Lamping, and Vijay Saraswat. 1993. LFG semantics via con- straints. In Proceedings of the Sixth Meeting of the European ACL, pages 97-105, Univer- sity of Utrecht. European Chapter of the As- sociation for Computational Linguistics. Mary Dalrymple, John Lamping, Fernando C. N. Pereira, and Vijay Saraswat. 1995. Lin- ear logic for meaning assembly. In Proceed- ings of CLNLP, Edinburgh. Mary Dalrymple, John Lamping, Fernando C. N. Pereira, and Vijay Saraswat. 1996. In- tensional verbs without type-raising or lexical ambiguity. In Jerry Seligman and Dag West- erst£hl, editors, Logic, Language and Com- putation, pages 167-182. CSLI Publications, Stanford University. Mary Dalrymple, Vineet Gupta, John Lamp- ing, and Vijay Saraswat. 1997a. Relating resource-based semantics to categorial seman- tics. In Proceedings of the Fifth Meeting on Mathematics of Language (MOL5), Schloss Dagstuhl, Saarbriicken, Germany. Mary Dalrymple, John Lamping, Fernando C. N. Pereira, and Vijay Saraswat. 1997b. Quantifiers, anaphora, and intensionality. Journal of Logic, Language, and Information, 6(3):219-273. Mark Hepple. 1996. A compilation-chart method for linear categorical deduction. In Proceedings of COLING-96, Copenhagen. Max I. Kanovich. 1992. Horn programming in linear logic is NP-complete. In Seventh An- nual IEEE Symposium on Logic in Computer Science, pages 200-210, Los Alamitos, Cali- fornia. IEEE Computer Society Press. Uwe Reyle. 1993. Dealing with ambiguities by underspecification: Construction, representa- tion, and deduction. Journal of Semantics, 10:123-179. 470