What's the Trouble: Automatically Identifying Problematic Dialogues in DARPA Communicator Dialogue Systems

Helen Wright Hastie, Rashmi Prasad, Marilyn Walker
AT&T Labs - Research
180 Park Ave, Florham Park, N.J. 07932, U.S.A.
hhastie,rjprasad,[email protected]

Abstract

Spoken dialogue systems promise efficient and natural access to information services from any phone. Recently, spoken dialogue systems for widely used applications such as email, travel information, and customer care have moved from research labs into commercial use. These applications can receive millions of calls a month. This huge amount of spoken dialogue data has led to a need for fully automatic methods for selecting a subset of caller dialogues that are most likely to be useful for further system improvement, to be stored, transcribed and further analyzed. This paper reports results on automatically training a Problematic Dialogue Identifier to classify problematic human-computer dialogues using a corpus of 1242 DARPA Communicator dialogues in the travel planning domain. We show that using fully automatic features we can identify classes of problematic dialogues with accuracies from 67% to 89%.

1 Introduction

Spoken dialogue systems promise efficient and natural access to a large variety of information services from any phone. Deployed systems and research prototypes exist for applications such as personal email and calendars, travel and restaurant information, personal banking, and customer care. Within the last few years, several spoken dialogue systems for widely used applications have moved from research labs into commercial use (Baggia et al., 1998; Gorin et al., 1997). These applications can receive millions of calls a month. There is a strong requirement for automatic methods to identify and extract dialogues that provide training data for further system development.

As a spoken dialogue system is developed, it is first tested as a prototype, then fielded in a limited setting, possibly running with human supervision (Gorin et al., 1997), and finally deployed. At each stage from research prototype to deployed commercial application, the system is constantly undergoing further development. When a system is prototyped in house or first tested in the field, human subjects are often paid to use the system and give detailed feedback on task completion and user satisfaction (Baggia et al., 1998; Walker et al., 2001). Even when a system is deployed, it often keeps evolving, either because customers want to do different things with it, or because new tasks arise out of developments in the underlying application. However, real customers of a deployed system may not be willing to give detailed feedback.

Thus, the widespread use of these systems has created a data management and analysis problem. System designers need to constantly track system performance, identify problems, and fix them. System modules such as automatic speech recognition (ASR), natural language understanding (NLU) and dialogue management may rely on training data collected at each phase. ASR performance assessment relies on full transcription of the utterances. Dialogue manager assessment relies on a human interface expert reading a full transcription of the dialogue or listening to a recording of it, possibly while examining the logfiles to understand the interaction between all the components.
However, because of the high volume of calls, spoken dialogue service providers typically can only afford to store, transcribe, and analyze a small fraction of the dialogues. Therefore, there is a great need for methods for both automatically evaluating system performance, and for extracting subsets of dialogues that provide good training data for system improvement. This is a difficult problem because by the time a system is deployed, typically over 90% of the dialogue interactions result in completed tasks and satisfied users. Dialogues such as these do not provide very useful training data for further system development because there is little to be learned when the dialogue goes well.

Previous research on spoken dialogue evaluation proposed the application of automatic classifiers for identifying and predicting problematic dialogues (Litman et al., 1999; Walker et al., 2002) for the purpose of automatically adapting the dialogue manager. Here we apply similar methods to the dialogue corpus data-mining problem described above. We report results on automatically training a Problematic Dialogue Identifier (PDI) to classify problematic human-computer dialogues using the October 2001 DARPA Communicator corpus. Section 2 describes our approach and the dialogue corpus. Section 3 describes how we use the DATE dialogue act tagging scheme to define input features for the PDI. Section 4 presents a method and results for automatically predicting task completion. Section 5 presents results for predicting problematic dialogues based on the user's satisfaction. We show that we identify task failure dialogues with 85% accuracy (baseline 59%) and dialogues with low user satisfaction with up to 89% accuracy. We discuss the application of the PDI to data mining in Section 6. Finally, we summarize the paper and discuss future work.

2 Corpus, Methods and Data

Our experiments apply CLASSIFICATION and REGRESSION trees (CART) (Breiman et al., 1984) to train a Problematic Dialogue Identifier (PDI) from a corpus of human-computer dialogues. CLASSIFICATION trees are used for categorical response variables and REGRESSION trees are used for continuous response variables. CART trees are binary decision trees. A CLASSIFICATION tree specifies what queries to perform on the features to maximize CLASSIFICATION ACCURACY, while REGRESSION trees derive a set of queries to maximize the CORRELATION of the predicted value and the original value. Like other machine learners, CART takes as input the allowed values for the response variables; the names and ranges of values of a fixed set of input features; and training data specifying the response variable value and the input feature values for each example in a training set. Below, we specify how the PDI was trained, first describing the corpus, then the response variables, and finally the input features derived from the corpus.
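As a rough illustration of this training setup (not the authors' CART implementation), the sketch below fits one classification tree for a categorical response and one regression tree for a continuous response; scikit-learn's CART-style trees stand in for the original learner, and every feature name and value here is invented for the example.

```python
# Illustrative sketch only: scikit-learn's CART-style trees stand in for the
# CART implementation used in the paper; all feature names/values are invented.
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# One row per dialogue: a few automatic logfile features plus DATE act counts
# (placeholder columns: TurnsOnTask, TimeOnTask in seconds,
#  acknowledgement:flight_booking count, GroundCheck flag).
X_train = [
    [34, 182.5, 3, 1],
    [78, 402.0, 0, 0],
    [51, 260.3, 1, 0],
]
task_completion = [2, 0, 1]             # categorical response variable
user_satisfaction = [21.0, 8.0, 14.0]   # continuous response variable (5-25)

# CLASSIFICATION tree for the categorical response (Task Completion) ...
clf = DecisionTreeClassifier(max_depth=3).fit(X_train, task_completion)
# ... REGRESSION tree for the continuous response (User Satisfaction).
reg = DecisionTreeRegressor(max_depth=3).fit(X_train, user_satisfaction)

print(clf.predict([[40, 200.0, 1, 0]]), reg.predict([[40, 200.0, 1, 0]]))
```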
Corpus: We train and test the PDI on the DARPA Communicator October-2001 corpus of 1242 dialogues. This corpus represents interactions with real users, with eight different Communicator travel planning systems, over a period of six months from April to October of 2001. The dialogue tasks range from simple domestic round trips to multileg international trips requiring both car and hotel arrangements. The corpus includes logfiles with logged events for each system and user turn; hand transcriptions and automatic speech recognizer (ASR) transcription for each user utterance; information derived from a user profile such as user dialect region; and a User Satisfaction survey and hand-labelled Task Completion metric for each dialogue. We randomly divide the corpus into 80% training (894 dialogues) and 20% testing (248 dialogues).

Defining the Response Variables: In principle, either low User Satisfaction or failure to complete the task could be used to define problematic dialogues. Therefore, both of these are candidate response variables to be examined. The User Satisfaction measure derived from the user survey ranges between 5 and 25. Task Completion is a ternary measure where no Task Completion is indicated by 0, completion of only the airline itinerary is indicated by 1, and completion of both the airline itinerary and ground arrangements, such as car and hotel bookings, is indicated by 2. We also defined a binary version of Task Completion, where Binary Task Completion=0 when no task or subtask was complete (equivalent to Task Completion=0), and Binary Task Completion=1 where all or some of the task was complete (equivalent to Task Completion=1 or Task Completion=2).

Figure 1 shows the frequency of dialogues for varying User Satisfaction for cases where Task Completion is 0 (solid line) and Task Completion is greater than 0 (dotted lines). Note that Task Completion is 1 or 2 for a number of dialogues for which User Satisfaction is low. Figure 2 illustrates such a dialogue (system turns are labelled S, user turns as U, and ASR hypotheses as REC). Here, low User Satisfaction may be due to the fact that the user had to repeat herself many times before the system understood the departure city. An automatic surrogate for ASR accuracy (such as ASR confidence) would not be adequate for identifying this problematic dialogue, because here either the dialogue manager or the SLU component is at fault.

Figure 1: User Satisfaction for dialogues where Task Completion is 0, 1 or 2

Another dialogue subset of interest in Figure 1 is one for which Task Completion is 0 but User Satisfaction is high. A common cause for non-Task Completion in these dialogues is database access problems. The fact that the interaction went well until this point is not captured by the Task Completion metric. Thus we decided to use both User Satisfaction and Task Completion as response variables in separate experiments in order to allow the PDI to capture the relationship between these two variables.

Input Features: Both User Satisfaction and Task Completion predictors are trained using two types of automatically extractable features: (1) logfile features; and (2) system dialogue act types. The logfile features include efficiency metrics, such as the number of system and user turns spent on the task and the total time of the dialogue in milliseconds, and situational metrics, such as phone type, which affects ASR (we assume that phone type is automatically derivable by automatic number identification (ANI)). The system dialogue act types are extracted from the logfiles using the DATE (Dialogue Act Tagging for Evaluation) automatic tagging scheme (Walker et al., 2001).

S13: Sorry, I must have misunderstood. I just heard you say that the departure city was Minto, Arkansas. Was the departure city Minneapolis-St-Paul, Minto, Arkansas or Milan, Italy?
U14: THE DEPARTURE CITY IS MINNEAPOLIS SAINT PAUL
REC: the departure city in minneapolis saint paul
S15: Here's a summary of the second flight segment: I've got you leaving from Minto, Arkansas on Friday, October twelfth in the morning and arriving in newark flying on continental. Is this correct?
U16: NO I WANT TO LEAVE FROM MINNEAPOLIS
REC: no i want to leave from minneapolis
(Repeat dialogue segment 3 times)
S25: Sorry, I must have misunderstood. I just heard you say that the departure city was Minto, Arkansas. Was the departure city Minto, Arkansas or Minneapolis-St-Paul?
U26: THE DEPARTURE CITY IS MINNEAPOLIS SAINT PAUL
REC: the departure city minneapolis by all
S27: Recapping the second flight segment: I have you leaving out of minneapolis-st-paul on Friday, October twelfth in the morning and arriving in newark flying on continental. Is this correct?
U28: YES
REC: yes
Figure 2: Sample low user satisfaction dialogue

The purpose of these features is to extract numerical correlates of system dialogue behaviors. This dialogue act labelling procedure is detailed in Section 3.

Figure 3 summarizes the types of features used to train the User Satisfaction predictor. In addition to the efficiency metrics and the DATE labels, Task Success can itself be used as a predictor. This can either be the hand-labelled feature or an approximation as predicted by the Task Completion Predictor, described in Section 4. Figure 4 shows the system design for automatically predicting User Satisfaction with the three types of input features.

Efficiency Measures
- Hand-labelled: WERR, SERR
- Automatic: TimeOnTask, TurnsOnTask, NumOverlaps, MeanUsrTurnDur, MeanWrdsPerUsrTurn, MeanSysTurnDur, MeanWrdsPerSysTurn, DeadAlive, Phone-type, SessionNumber
Qualitative Measures
- Automatic: DATE Unigrams, e.g. present_info:flight, acknowledgement:flight_booking etc.
- Automatic: DATE Bigrams, e.g. present_info:flight+acknowledgement:flight_booking etc.
Task Success Features
- Hand-labelled: HL Task Completion
- Automatic: Auto Task Completion
Figure 3: Features used to train the User Satisfaction Prediction tree

Figure 4: Schema for User Satisfaction prediction (the output of the SLS is passed to the DATE tagger, which applies the DATE rules, and to the Task Completion Predictor; the resulting DATE features, Auto Task Completion and automatic logfile features feed the CART User Satisfaction predictor)

3 Extracting DATE Features

The dialogue act labelling of the corpus follows the DATE tagging scheme (Walker et al., 2001). In DATE, utterance classification is done along three cross-cutting orthogonal dimensions. The CONVERSATIONAL-DOMAIN dimension specifies the domain of discourse that an utterance is about. The SPEECH ACT dimension captures distinctions between communicative goals such as requesting information (REQUEST-INFO) or presenting information (PRESENT-INFO). The TASK-SUBTASK dimension specifies which travel reservation subtask the utterance contributes to. The SPEECH ACT and CONVERSATIONAL-DOMAIN dimensions are general across domains, while the TASK-SUBTASK dimension is domain- and sometimes system-specific.

Within the conversational domain dimension, DATE distinguishes three domains (see Figure 5). The ABOUT-TASK domain is necessary for evaluating a dialogue system's ability to collaborate with a speaker on achieving the task goal. The ABOUT-COMMUNICATION domain reflects the system goal of managing the verbal channel of communication and providing evidence of what has been understood. All implicit and explicit confirmations are about communication.
The ABOUT-SITUATION-FRAME domain pertains to the goal of managing the user's expectations about how to interact with the system.

Conversational Domain      Example
ABOUT-TASK                 And what time didja wanna leave?
ABOUT-COMMUNICATION        Leaving from Miami.
ABOUT-SITUATION-FRAME      You may say repeat, help me out, start over, or, that's wrong
Figure 5: Example utterances distinguished within the Conversational Domain Dimension

DATE distinguishes 11 speech acts. Examples of each speech act are shown in Figure 6.

Speech-Act          Example
REQUEST-INFO        And, what city are you flying to?
PRESENT-INFO        The airfare for this trip is 390 dollars.
OFFER               Would you like me to hold this option?
ACKNOWLEDGMENT      I will book this leg.
BACKCHANNEL         Okay.
STATUS-REPORT       Accessing the database; this might take a few seconds.
EXPLICIT-CONFIRM    You will depart on September 1st. Is that correct?
IMPLICIT-CONFIRM    Leaving from Dallas.
INSTRUCTION         Try saying a short sentence.
APOLOGY             Sorry, I didn't understand that.
OPENING-CLOSING     Hello. Welcome to the C M U Communicator.
Figure 6: Example speech act utterances

The TASK-SUBTASK dimension distinguishes among 28 subtasks, some of which can also be grouped at a level below the top level task. The TOP-LEVEL-TRIP task describes the task which contains as its subtasks the ORIGIN, DESTINATION, DATE, TIME, AIRLINE, TRIP-TYPE, RETRIEVAL and ITINERARY tasks. The GROUND task includes both the HOTEL and CAR-RENTAL subtasks. The HOTEL task includes both the HOTEL-NAME and HOTEL-LOCATION subtasks. (ABOUT-SITUATION-FRAME utterances are not specific to any particular task and can be used for any subtask, for example, system statements that it misunderstood. Such utterances are given a "meta" dialogue act status in the task dimension.)

For the DATE labelling of the corpus, we implemented an extended version of the pattern matcher that was used for tagging the Communicator June 2000 corpus (Walker et al., 2001). This method identified and labelled an utterance or utterance sequence automatically by reference to a database of utterance patterns that were hand-labelled with the DATE tags. Before applying the pattern matcher, a named-entity labeler was applied to the system utterances, matching named entities relevant in the travel domain, such as city, airport, car, hotel and airline names. The named-entity labeler was also applied to the utterance patterns in the pattern database to allow for generality in the expression of communicative goals specified within DATE. For this named-entity labelling task, we collected vocabulary lists from the sites, which maintained such lists for developing their systems (the named entities were preclassified into their respective semantic classes by the sites).

The extension of the pattern matcher for the 2001 corpus labelling was done because we found that systems had augmented their inventory of named entities and utterance patterns from 2000 to 2001, and these were not accounted for by the 2000 tagger database. For the extension, we collected a fresh set of vocabulary lists from the sites and augmented the pattern database with an additional 800 labelled utterance patterns. We also implemented a contextual rule-based postprocessor that takes any remaining unlabelled utterances and attempts to label them by looking at their surrounding DATE labels. More details about the extended tagger can be found in (Prasad and Walker, 2002). On the 2001 corpus, we were able to label 98.4% of the data. A hand evaluation of 10 randomly selected dialogues from each system shows that we achieved a classification accuracy of 96% at the utterance level.

For User Satisfaction Prediction, we found that the distribution of DATE acts was better captured by using the frequency normalized over the total number of dialogue acts. In addition to these unigram proportions, the bigram frequencies of the DATE dialogue acts were also calculated. In the following two sections, we discuss which DATE labels are discriminatory for predicting Task Completion and User Satisfaction.
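The tagger itself is not reproduced in the paper; the sketch below only illustrates the general idea under stated assumptions: system utterances are normalized by a named-entity labeler, looked up against a database of hand-labelled utterance patterns, and the resulting tag sequence is then turned into DATE unigram proportions and bigram counts. All vocabulary lists, patterns and utterances here are invented.

```python
# Minimal sketch, not the authors' tagger: named-entity normalization, pattern
# lookup, then DATE unigram proportions and bigram counts for one dialogue.
from collections import Counter
import re

# Hypothetical vocabulary list and hand-labelled utterance patterns.
NE_CLASSES = {"minneapolis": "CITY", "newark": "CITY", "continental": "AIRLINE"}
PATTERN_DB = {
    "what city are you flying to": "request_info:dest_city",
    "leaving from CITY": "implicit_confirm:orig_city",
    "i will book this leg": "acknowledgement:flight_booking",
}

def normalize(utterance: str) -> str:
    """Lower-case and replace named entities with their semantic class."""
    words = [NE_CLASSES.get(w, w) for w in re.findall(r"[a-z']+", utterance.lower())]
    return " ".join(words)

def tag(utterance: str) -> str:
    """Look the normalized utterance up in the pattern database."""
    return PATTERN_DB.get(normalize(utterance), "unlabelled")

def date_features(system_utterances):
    tags = [tag(u) for u in system_utterances]
    unigrams = Counter(tags)
    total = sum(unigrams.values())
    unigram_proportions = {t: c / total for t, c in unigrams.items()}
    bigram_counts = Counter(zip(tags, tags[1:]))
    return unigram_proportions, bigram_counts

props, bigrams = date_features(
    ["What city are you flying to?", "Leaving from Newark.", "I will book this leg."]
)
print(props, bigrams)
```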
4 The Task Completion Predictor

In order to automatically predict Task Completion, we train a CLASSIFICATION tree to categorize dialogues into Task Completion=0, Task Completion=1 or Task Completion=2. Recall that a CLASSIFICATION tree attempts to maximize CLASSIFICATION ACCURACY; results for Task Completion are thus given in terms of the percentage of dialogues correctly classified. The majority class baseline is 59.3% (dialogues where Task Completion=1).

The tree was trained on a number of different input features. The most discriminatory ones, however, were derived from the DATE tagger. We use the primitive DATE tags in conjunction with a feature called GroundCheck (GC), a boolean feature indicating the existence of DATE tags related to making ground arrangements, specifically request_info:hotel_name, request_info:hotel_location, offer:hotel and offer:rental. Table 1 gives the results for Task Completion prediction accuracy using the various types of features.

       Baseline   Auto Logfile   ALF + GC   ALF + GC + DATE
TC     59%        59%            79%        85%
BTC    86%        86%            86%        92%
Table 1: Task Completion (TC) and Binary Task Completion (BTC) prediction results, using automatic logfile features (ALF), GroundCheck (GC) and DATE unigram frequencies

The first row is for predicting ternary Task Completion, and the second for predicting binary Task Completion. Using automatic logfile features (ALF) is not effective for the prediction of either type of Task Completion. However, the use of GroundCheck results in an accuracy of 79% for ternary Task Completion, which is significantly above the baseline (df = 247, t = -6.264, p < .0001). Adding in the other DATE features yields an accuracy of 85%. For Binary Task Completion it is only the use of all the DATE features that yields an improvement over the baseline, to 92%, which is significant (df = 247, t = 5.83, p < .0001).

A diagram of the trained decision tree for ternary Task Completion is given in Figure 7. At any junction in the tree, if the query is true then one takes the path down the right-hand side of the tree, otherwise one takes the left-hand side. The leaf nodes contain the predicted value. The GroundCheck feature is at the top of the tree and divides the data into Task Completion < 2 and Task Completion = 2. If GroundCheck = 1, then the tree estimates that Task Completion is 2, which is the best fit for the data given the input features. If GroundCheck = 0 and there is an acknowledgment of a booking, then probably a flight has been booked; therefore, Task Completion is predicted to be 1. Interestingly, if there is no acknowledgment of a booking then Task Completion = 0, unless the system got to the stage of asking the user for an airline preference and request_info:top_level_trip < 2. More than one of these DATE types indicates that there was a problem in the dialogue and that the information gathering phase started over from the beginning.
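The ternary tree just described is small enough to write out directly. The sketch below hand-codes that decision logic as a plain function over per-dialogue DATE counts; it follows the splits as described in the prose (GroundCheck, acknowledgement:flight_booking, request_info:airline, request_info:top_level_trip) and is an illustration of the learned tree's behaviour, not of the learner itself.

```python
# Hand-coded rendering of the ternary Task Completion tree described in the text
# (an illustration of the learned tree's logic, not the CART learner itself).
def predict_task_completion(date_counts: dict) -> int:
    """date_counts maps DATE act labels to their frequency in one dialogue."""
    ground_check = any(date_counts.get(tag, 0) > 0 for tag in (
        "request_info:hotel_name", "request_info:hotel_location",
        "offer:hotel", "offer:rental"))
    if ground_check:                                   # ground arrangements discussed
        return 2
    if date_counts.get("acknowledgement:flight_booking", 0) >= 1:
        return 1                                       # a flight leg was booked
    # No booking acknowledged: predicted complete only if an airline preference
    # was requested and the information-gathering phase did not start over.
    if (date_counts.get("request_info:airline", 0) >= 1
            and date_counts.get("request_info:top_level_trip", 0) < 2):
        return 1
    return 0

print(predict_task_completion({"acknowledgement:flight_booking": 2}))  # -> 1
```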
The binary Task Completion decision tree simply checks whether an acknowledgement:flight_booking has occurred. If it has, then Binary Task Completion=1; otherwise it looks for the DATE act about_situation_frame:instruction:meta_situation_info, which captures the fact that the system has told the user what the system can and cannot do, or has informed the user about the current state of the task. This must help with Task Completion, as the tree tells us that if one or more of these acts are observed then Task Completion=1, otherwise Task Completion=0.

Figure 7: Classification Tree for predicting Task Completion (TC); the tree splits on GroundCheck = 0, acknow.:flight_booking < 1, request_info:airline < 1 and request_info:top_level_trip < 2, with leaves TC=0, TC=1 and TC=2

5 The User Satisfaction Predictor

Feature used   Logfile features   LF + unigram   LF + bigram
HL TC          0.587              0.584          0.592
Auto TC        0.438              0.434          0.472
HL BTC         0.608              0.607          0.614
Auto BTC       0.477              0.47           0.484
Table 2: Correlation results using logfile features (LF), adding unigram proportions and bigram counts, for trees tested on either hand-labelled (HL) or automatically derived Task Completion (TC) and Binary Task Completion (BTC)

Quantitative Results: Recall that REGRESSION trees attempt to maximize the CORRELATION of the predicted value and the original value. Thus, the results of the User Satisfaction predictor are given in terms of the correlation between the predicted User Satisfaction and the actual User Satisfaction as calculated from the user survey. Here, we also provide R² for comparison with previous studies. Table 2 gives the correlation results for User Satisfaction for different feature sets. The User Satisfaction predictor is trained using the hand-labelled Task Completion feature for a topline result and using the automatically obtained Task Completion (Auto TC) for the fully automatic results. We also give results using Binary Task Completion (BTC) as a substitute for Task Completion. The first column gives results using features extracted from the logfile; the second column indicates results using the DATE unigram proportions and the third column indicates results when both the DATE unigram and bigram features are available.

The first row of Table 2 indicates that performance across the three feature sets is indistinguishable when hand-labelled Task Completion (HL TC) is used as the Task Completion input feature. A comparison of Row 1 and Row 2 shows that the PDI performs significantly worse using only automatic features (z = 3.18). Row 2 also indicates that the DATE bigrams help performance, although the difference between R = .438 and R = .472 is not significant. The third and fourth rows of Table 2 indicate that for predicting User Satisfaction, Binary Task Completion is as good as or better than ternary Task Completion. The highest correlation of 0.614 (R² = .38) uses hand-labelled Binary Task Completion together with the logfile features and the DATE unigram proportions and bigram counts. Again, we see that the Automatic Binary Task Completion (Auto BTC) performs significantly worse than the hand-labelled version (z = -3.18). Row 4 includes the best totally automatic system: using Automatic Binary Task Completion and DATE unigrams and bigrams yields a correlation of 0.484 (R² = .23).
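Since these results are reported as the correlation between predicted and actual User Satisfaction, with R² alongside, a minimal evaluation helper might look like the sketch below; the arrays are invented and numpy's corrcoef stands in for whatever statistics package was actually used.

```python
# Minimal sketch of the evaluation metric behind Table 2: Pearson correlation
# between predicted and actual User Satisfaction, with R^2 reported alongside.
import numpy as np

def correlation_and_r2(actual, predicted):
    r = np.corrcoef(actual, predicted)[0, 1]
    return r, r ** 2

# Invented survey scores (5-25) and tree predictions for a handful of dialogues.
actual = np.array([22.0, 9.0, 15.0, 18.0, 7.0])
predicted = np.array([20.5, 11.0, 14.2, 17.8, 9.5])
r, r2 = correlation_and_r2(actual, predicted)
print(f"r = {r:.3f}, R^2 = {r2:.3f}")
```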
Regression Tree Interpretation: It is interesting to examine the trees to see which features are used for predicting User Satisfaction. A metric called Feature Usage Frequency indicates which features are the most discriminatory in the CART tree. Specifically, Feature Usage Frequency counts how often a feature is queried for each data point, normalized so that the Feature Usage Frequency values for all the features sum to one. The higher a feature is in the tree, the more times it is queried. To calculate the Feature Usage Frequency, we grouped the features into three types: Task Completion, logfile features and DATE frequencies. Feature Usage Frequency for the logfile features is 37%. Task Completion occurs only twice in the tree; however, it makes up 31% because it occurs at the top of the tree. The Feature Usage Frequency for DATE category frequency is 32%. We will discuss each of these three groups of features in turn.

The most used logfile feature is TurnsOnTask, which is the number of turns which are task-oriented; for example, initial instructions on how to use the system are not taken as a TurnOnTask. Shorter dialogues tend to have a higher User Satisfaction. This is reflected in the User Satisfaction scores in the tree. However, dialogues which are long (TurnsOnTask > 79) can be satisfactory (User Satisfaction = 15.2) as long as the task that is completed is long, i.e., if ground arrangements are made in that dialogue (Task Completion=2). If ground arrangements are not made, the User Satisfaction is lower (11.6). Phone type is another important feature queried in the tree, so that dialogues conducted over corded phones have higher satisfaction. This is likely to be due to better recognition performance from corded phones.

As mentioned previously, Task Completion is at the top of the tree and is therefore the most queried feature. This captures the relationship between Task Completion and User Satisfaction as illustrated in Figure 1.

Finally, it is interesting to examine which DATE tags the tree uses. If there have been more than three acknowledgments of bookings, then several legs of a journey have been successfully booked, therefore User Satisfaction is high. In particular, User Satisfaction is high if the system has asked if the user would like a price for their itinerary, which is one of the final dialogue acts a system performs before the task is completed. The DATE act about_comm:apology:meta_slu_reject is a measure of the system's level of misunderstanding. Therefore, the more of these dialogue act types there are, the lower User Satisfaction is. This part of the tree uses length in a similar way to that described earlier, whereby long dialogues are only allocated lower User Satisfaction if they do not involve ground arrangements. Users do not seem to mind longer dialogues as long as the system gives a number of implicit confirmations. The dialogue act request_info:top_level_trip usually occurs at the start of the dialogue and requests the initial travel plan. If there is more than one of this dialogue act, it indicates that a STARTOVER occurred due to system failure, and this leads to lower User Satisfaction. A rule containing the bigram request_info:depart_day_month_date+USER states that if there is more than one occurrence of this request then User Satisfaction will be lower. USER is the single category used for user turns; no automatic method of predicting user speech acts is available yet for this data. A repetition of this DATE bigram indicates that a misunderstanding occurred the first time it was requested, or that the task is multi-leg, in which case User Satisfaction is generally lower.
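Feature Usage Frequency as defined above can be computed by tracing each data point's root-to-leaf path and counting the features queried along the way. The sketch below does this for a small invented tree and two invented dialogues; the real trees and percentages are the ones reported above.

```python
# Minimal sketch of the Feature Usage Frequency metric: count how often each
# feature is queried on the root-to-leaf path of every data point, then
# normalize so the values sum to one. Tree and data are invented toy examples.
from collections import Counter

# Internal node: (feature, threshold, left_subtree, right_subtree); leaves are numbers.
TOY_TREE = ("TaskCompletion", 1,
            ("TurnsOnTask", 79, 11.6, 15.2),                 # TaskCompletion <= 1
            ("acknowledgement:flight_booking", 3, 14.0, 19.0))

def queried_features(tree, x, counts):
    if not isinstance(tree, tuple):              # reached a leaf
        return
    feature, threshold, left, right = tree
    counts[feature] += 1
    branch = left if x[feature] <= threshold else right
    queried_features(branch, x, counts)

def feature_usage_frequency(tree, dataset):
    counts = Counter()
    for x in dataset:
        queried_features(tree, x, counts)
    total = sum(counts.values())
    return {f: c / total for f, c in counts.items()}

data = [{"TaskCompletion": 0, "TurnsOnTask": 50, "acknowledgement:flight_booking": 0},
        {"TaskCompletion": 2, "TurnsOnTask": 90, "acknowledgement:flight_booking": 4}]
print(feature_usage_frequency(TOY_TREE, data))
```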
The tree that uses Binary Task Completion is identical to the tree described above, apart from one binary decision which differentiates dialogues where Task Completion=1 and Task Completion=2. Instead of making this distinction, it just uses dialogue length to indicate the complexity of the task. In the original tree, long dialogues are not penalized if they have achieved a complex task (i.e. if Task Completion=2). The Binary Task Completion tree has no way of making this distinction and therefore just penalizes very long dialogues (where TurnsOnTask > 110). The Feature Usage Frequency for the Task Completion features is reduced from 31% to 21%, and the Feature Usage Frequency for the logfile features increases to 47%. We have shown that this more general tree produces slightly better results.

6 Results for Identifying Problematic Dialogues for Data Mining

So far, we have described a PDI that predicts User Satisfaction as a continuous variable. For data mining, system developers will want to extract dialogues with predicted User Satisfaction below a particular threshold. This threshold could vary during different stages of system development. As the system is fine-tuned there will be fewer and fewer dialogues with low User Satisfaction; therefore, in order to find the interesting dialogues for system development, one would have to raise the User Satisfaction threshold.

In order to illustrate the potential value of our PDI, consider an example threshold of 12, which divides the data into 73.4% good dialogues (User Satisfaction >= 12); this is our baseline result. Table 3 gives the recall and precision for the PDIs described above which use hand-labelled Task Completion and Auto Task Completion. In the data, 26.6% of the dialogues are problematic (User Satisfaction is under 12), whereas the PDI using hand-labelled Task Completion predicts that 21.8% are problematic. Of the problematic dialogues, 54.5% are classified correctly (Recall). Of the dialogues that it classes as problematic, 66.7% are problematic (Precision). The results for the automatic system show an improvement in Recall: it identifies more problematic dialogues correctly (66.7%) but the precision is lower.

Task Completion   Dialogue      Recall   Prec.
Hand-labelled     Good          90%      84.5%
Hand-labelled     Problematic   54.5%    66.7%
Automatic         Good          88.5%    81.3%
Automatic         Problematic   66.7%    58.0%
Table 3: Precision and Recall for good and problematic dialogues (where a good dialogue has User Satisfaction >= 12) for the PDI using hand-labelled Task Completion and Auto Task Completion

What do these numbers mean in terms of our original goal of reducing the number of dialogues that need to be transcribed to find good cases to use for system improvement? If one had a budget to transcribe 20% of a dataset containing 100 dialogues, then by randomly extracting 20 dialogues, one would transcribe 5 problematic dialogues and 15 good dialogues. Using the fully automatic PDI, one would obtain 12 problematic dialogues and 8 good dialogues. To look at it another way, to extract 15 problematic dialogues out of 100, 55% of the data would need transcribing. To obtain 15 problematic dialogues using the fully automatic PDI, only 26% of the data would need transcribing. This is a massive improvement over randomly choosing dialogues.
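The transcription-budget arithmetic above follows directly from the Table 3 figures for the fully automatic PDI; the short sketch below simply reproduces that calculation (26.6% problematic dialogues, precision 58%, recall 66.7%) for an arbitrary budget.

```python
# Reproduces the back-of-the-envelope data-mining calculation from the text,
# using the Table 3 figures for the fully automatic PDI.
prevalence = 0.266   # fraction of problematic dialogues in the data
precision = 0.58     # automatic PDI, problematic class
recall = 0.667

# Fraction of dialogues the PDI flags as problematic (enough to fill a 20% budget).
flagged_fraction = recall * prevalence / precision   # ~0.31

def problematic_found(n_transcribed, use_pdi):
    """Expected number of problematic dialogues among the transcribed ones."""
    rate = precision if use_pdi else prevalence
    return n_transcribed * rate

def transcription_needed(target_problematic, use_pdi):
    """Expected number of dialogues to transcribe to reach the target."""
    rate = precision if use_pdi else prevalence
    return target_problematic / rate

print(problematic_found(20, use_pdi=False))     # ~5 problematic in a random 20
print(problematic_found(20, use_pdi=True))      # ~12 problematic with the PDI
print(transcription_needed(15, use_pdi=False))  # ~56 dialogues, roughly the 55% quoted
print(transcription_needed(15, use_pdi=True))   # ~26 dialogues
```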
7 Discussion and Future Developments

This paper presented a Problematic Dialogue Identifier which system developers can use for evaluation and to extract problematic dialogues from a large dataset for system development. We describe PDIs for predicting both Task Completion and User Satisfaction in the DARPA Communicator October 2001 corpus.

There has been little previous work on recognizing problematic dialogues. However, a number of studies have been done on predicting specific errors in a dialogue, using a variety of automatic and hand-labelled features, such as ASR confidence and semantic labels (Aberdeen et al., 2001; Hirschberg et al., 2000; Levow, 1998; Litman et al., 1999). Previous work on predicting problematic dialogues before the end of the dialogue (Walker et al., 2002) achieved accuracies of 87% using hand-labelled features (baseline 67%). Our automatic Task Completion PDI achieves an accuracy of 85%. Previous work also predicted User Satisfaction by applying multivariate linear regression with and without DATE features and showed that the DATE features improved the model fit (R²) (Walker et al., 2001). Our best model has an R² of .38. One potential explanation for this difference is that the DATE features are most useful in combination with non-automatic features, such as Word Accuracy, which the previous study used. The User Satisfaction PDI using fully automatic features achieves a correlation of 0.484. In future work, we hope to improve our results by trying different machine learning methods; including the user's dialogue act types as input features; and testing these methods in new domains.

8 Acknowledgments

The work reported in this paper was partially funded by DARPA contract MDA972-99-3-0003.

References

J. Aberdeen, C. Doran, and L. Damianos. 2001. Finding errors automatically in semantically tagged dialogues. In Human Language Technology Conference.
P. Baggia, G. Castagneri, and M. Danieli. 1998. Field trials of the Italian ARISE train timetable system. In Interactive Voice Technology for Telecommunications Applications (IVTTA), pages 97-102.
L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone. 1984. Classification and Regression Trees. Wadsworth and Brooks, Monterey, California.
A. L. Gorin, G. Riccardi, and J. H. Wright. 1997. How may I help you? Speech Communication, 23:113-127.
J. B. Hirschberg, D. J. Litman, and M. Swerts. 2000. Generalizing prosodic prediction of speech recognition errors. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP-2000).
G. Levow. 1998. Characterizing and recognizing spoken corrections in human-computer dialogue. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, pages 736-742.
D. J. Litman, M. A. Walker, and M. J. Kearns. 1999. Automatic detection of poor speech recognition at the dialogue level. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 309-316.
R. Prasad and M. Walker. 2002. Training a dialogue act tagger for human-human and human-computer travel dialogues. In Proceedings of the 3rd SIGdial Workshop on Discourse and Dialogue, Philadelphia, PA.
M. Walker, R. Passonneau, and J. Boland. 2001. Quantitative and qualitative evaluation of DARPA Communicator spoken dialogue systems. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL/EACL-2001).
M. Walker, I. Langkilde-Geary, H. Wright Hastie, J. Wright, and A. Gorin. 2002. Automatically training a problematic dialogue predictor for a spoken dialogue system. JAIR.
     ! "$#%'&$(&)* ,+-.%/1023 &) 4 65'&$7!98: &);=<>#%?5' @BA'C(DFEHGJIE6K'ACLMDNA'OQPJRTSVUAW SYXZA[L3\]ACTIA_^`A'OaA'bA'cHPdRfe1g'hiDFPdjAP%\[ROQIk?AC[R lemg'nZoe1nZprq7sZtvu'owxpzy{q7sZy|uHsZyxeQwx}~smg €em~~em‚Yƒ6„p†…em‚ ‡ tvsZ~h's‰ˆ1emg[ƒŠt‹emy|}Œo'‚YƒŠ‚xemgheƒŠt‹}Œem}dŽ1~emg'nZoe1nZpYzsZt‹uowxpzyY‘АYsZt ’ b“S†”‰OaAXm” • –?—™˜7š6›œš'Ÿž š?ž$˜¡¢1£$˜%›‰¢`—™¢Z¤J¥Ÿš?£$–¦›‰¢?›‰§¨¤ © ˜$—™˜ ª‰«›¬˜$£$›œ£¡x¤­ª‰«®¤­£$–x¤J›œž)£{¯±°˜¡£$—¨ªa¢‹²3¢¤ ˜$³TŸž)—®¢´‹˜ © ˜¡£¡µV¶¸·mŸ¹‰Ÿž)›‰§“˜$ºx¢?›œž)—™ªa˜»›œž) x¼›‰µ½—™¢?¥¾À¿ÂÁ†Ã£$–š'Ÿž$«~ª‰ž)µ½›‰¢6ºxª‰«›‰º)– µ½ªm¥?°6§¨7—®¢V›½˜¡Ÿž)—™›‰§'Ä9›‰˜¡§™—™¢±˜ © ˜¡£¡µVÅÆ¿ÈÇaà £$–?%—™µ€š6›‰ºx£ ª‰«]«~Ÿ¥Ä6›‰º)Ém˜7›‰¢?¥v£$–%—™¢?˜$Ÿž¡¤ £$—™ªa¢‹ª‰«À›Ê§¨ª‰´a—™º»š?ž$ªz¹‰ŸžÅH›‰¢?¥Ë¿ÈÌaÃ7£$–?—®µ¤ š9›‰ºx£{ª‰«M¹œ›œž)—¨ªa°?˜{§¨x¼—™ºŸ›‰§ž$˜$ªa°ž)ºx˜Ÿ¶v• – µr›‰—™¢ºxªa¢?ºŸ§™°?˜$—™ªa¢»—®˜[£$–?›œ££$–Àª†¹‰Ÿž)›‰§®§š'Ÿž¡¤ «~ª‰ž)µ½›‰¢6ºx3¥Ÿš'¢?¥?˜ªa¢½£$–7¥Ÿš?£$–½ª‰« ¢6›œ£Â¤ °?ž)›‰§?§™›‰¢´a°6›œ´‰ š?ž$ªZºx˜$˜$—®¢´>ž$˜¡ªa°?ž)ºx˜T›‰¢6¥ £$–? £¡ª1ªa§™˜3°?˜¡¥Ê«~ª‰ž ›‰¢?˜¡³“Ÿž Í9¢?¥?—™¢´¶ Î Ï CƔ‰OaEIRXm”aPJEHC ²7—™µ½—™¢´±›œ£ž$Ÿ£$°žÐ¢?—™¢´3Ä?ž)—¨Ÿ«6›‰¢6˜¡³TŸžÐ˜—™¢»ž$˜¡š'ªa¢?˜¡“£¡ª ¢?›œ£$°žÐ›‰§§™›‰¢´a°?›œ´‰%Ñm°˜¡£$—™ªa¢?˜ŸÅZª‰š'¢Z¤J¥ªaµ½›‰—™¢v¯ °˜Â¤ £$—¨ªa¢²3¢?˜$³TŸž)—®¢´`¿­¯±² Ã ˜ © ˜¡£¡µ½˜%ž$Ÿš?ž$˜¡¢1£{›‰¢›‰¥Z¤ ¹œ›‰¢?ºx¥v›œš?š9§™—™ºŸ›œ£$—¨ªa¢Êª‰«¢?›œ£$°žÐ›‰§ §™›‰¢´a°?›œ´‰%š?ž$ªZºx˜$˜Â¤ —™¢´¶•–Ê´a§¨ª‰Ä6›‰§Àµ€Ÿ£¡ž)—®ºŸ˜€°?˜¡¥Ë—™¢£$–‹¯±²£¡ž)›‰º$É Ÿ¹œ›‰§™°?›œ£$—¨ªa¢?˜¦ª‰«Ê£$–Ò•Ɲx¼Z£`Ó Ô£¡ž)—¨Ÿ¹œ›‰§rՓªa¢«~Ÿž$¢?ºx ¿Ö•Ó3ԓÕÃr¿~ת1ª‰ž)–?Ÿ˜ŸÅÁ؉؉Øaû›‰§™§¨ªz³!«~ª‰ž{£$–.ª†¹‰ŸžÐ›‰§™§ ›‰˜$˜¡˜)˜$µ€¢1£>ª‰«À£$–¬¯±²2˜ © ˜$£¡µiš'Ÿž$«~ª‰ž)µ½›‰¢?ºx‰¶½²7˜ š6›œž$£.ª‰«{›ž$§¨¢1£$§¨˜$˜ÊÑm°˜¡£Ê£¡ªË—™µ€š?ž)ª†¹‰B¯±²i˜ © ˜Â¤ £¡µ½˜ŸÅ7—¨£V—™˜.¢ºx˜$˜$›œž © £¡ªÒµ€›‰˜$°ž)Ù¢ª‰£Êªa¢?§ © £$– ´a§¨ª‰Ä6›‰§6š'Ÿž$«Œª‰žÐµ½›‰¢?ºx‰ÅaÄ6°?£›‰§™˜¡ª>£$–? š'Ÿž$«~ª‰ž)µ½›‰¢?ºxª‰« ›‰º)–—™¢?¥?—™¹m—™¥6°?›‰§ µ€ªZ¥?°?§¨»›‰¢?¥¦ª‰£$–Ÿž%›œžÐº)–?—¨£¡ºx£$°?ž)›‰§ «~›œ£$°ž$˜Ÿ¶»²Ú¥Ÿ£$›‰—®§¨¥ÙšŸž)«Œª‰ž)µr›‰¢?ºx»›‰¢6›‰§ © ˜)—™˜±—™¢?¥?—Û¤ ºŸ›œ£¡˜¢ª‰£Æªa¢?§ © £$–?›œ£]£$–T˜ © ˜¡£¡µÜ«~›‰—®§™˜ £¡ª7š?ž$ªz¹m—™¥?›‰¢ ›‰¢?˜¡³“Ÿž Ä6°£±³ – © £$–?½˜ © ˜$£¡µÝ«~›‰—™§™¥¶{•–?»š'Ÿž$«~ª‰ž¡¤ µ½›‰¢?ºxM›‰¢?›‰§ © ˜$—®˜H—™˜]°?˜¡Ÿ«Ö°?§1£¡ª7˜ © ˜¡£¡µ(¥˜)—¨´a¢Ÿž)˜H³ –ª ³À›‰¢Q£Æ£¡ª —™¥¢1£$—¨« © Ÿž$ž$ª‰ž]˜¡ªa°ž)ºx˜Æ›‰¢?¥ ³“›œÉ µ€ªm¥6°?§¨˜Ÿ¶ •–?Ò¯ ²=§™—¨£¡ŸžÐ›œ£$°ž$«Œž$ªaµÞ£$–?Ò§™›‰˜¡£¦«ŒŸ³ © ›œžÐ˜ ž$Ÿš'ª‰ž$£$˜vªa¢ß´a§¨ª‰Ä9›‰§>š'Ÿž$«~ª‰ž)µ½›‰¢?ºxBª‰«€¹œ›œž)—¨ªa°6˜v˜ © ˜Â¤ £¡µ½˜à¿Ö² Ä6¢? © Ÿ£*›‰§È¶¨Åáǜâ‰â‰âZã ä3ª†¹ © Ÿ£*›‰§È¶¨Å ǜâ‰âÁ†ÃжÀ屝¢Ÿž)›‰§Ÿ¹œ›‰§™°?›œ£$—¨ªa¢Êµ€Ÿ£¡žÐ—™ºŸ˜M›œž$>¥?—™˜)ºŸ°?˜$˜¡¥ —™¢¿~ת1ª‰ž)–?Ÿ˜7›‰¢6¥v•—™ºx‰Åǜâ‰â‰âQÃ3›‰¢?¥¿Ö擞$º)ÉvŸ£±›‰§È¶¨Å ǜâ‰â‰âQÃÄ9°£ŸÅ³ —™£$–竌Ÿ³Fx¼ºxŸš?£$—¨ªa¢6˜v¿~èd£¡£ © ºÐ–Ÿž)—™›‰–靟£ ›‰§È¶¨Å‰Çœâ‰âÁ†ÃÐÅa§™—¨£¡£$§¨—™˜H˜$›‰—™¥ ›œÄ'ªa°£H—™¢¤J¥Ÿš?£$–±Ÿž$ž$ª‰žÆ›‰¢?›‰§Û¤ © ˜$—™˜M—™¢v¯±²ê˜ © ˜¡£¡µ½˜¶ ·Z—®¢?ºxµ€ªa˜$£¸¯±²ë˜ © ˜¡£¡µr˜Ùºxªa¢6˜$—™˜¡£‹ª‰«rµ€ªZ¥?°?§¨˜ £$–?›œ£v›œž)`º)–?›‰—®¢¥ß˜¡ŸžÐ—™›‰§™§ © ¿Ö²3Ä6¢ © Ÿ£v›‰§­¶¨Å%ǜâ‰â‰âZã ì žÐ›œ´‰Ÿž>Ÿ£%›‰§È¶¨Å[ǜâ‰â‰âQÃÐÅ]£$–?€ª†¹‰Ÿž)›‰§™§šŸž)«Œª‰ž)µr›‰¢?ºx½—™˜ ºxªa¢1£¡ž$ªa§™§¨¥€Ä © £$–—¨ž³“›œÉ‰˜¡£T§®—™¢É¶H袀£$–?—™˜ºŸ›‰˜¡3£$– Ÿž$ž$ª‰ží›‰¢6›‰§ © ˜)—™˜Ê—™˜V˜¡£¡ž)›‰—¨´a–1£¡«~ª‰ž$³“›œžÐ¥¶Nî °ží˜ © ˜¡£¡µ ›œž)ºÐ–?—¨£¡ºx£$°ž$Ê°?˜¡˜»˜¡Ÿ¹‰Ÿž)›‰§“«~Ÿ¥Ä6›‰º)Ém˜³ –6—™º)–ºxªaµ¤ š6§™—®ºŸ›œ£¡˜˜$—¨´a¢?—™Í6ºŸ›‰¢Q£$§ © £$–>Ÿž$ž)ª‰ž ›‰¢?›‰§ © ˜$—™˜Ÿ¶ •–6—™˜Àš6›œš'Ÿžš?ž$˜$¢Q£$˜3›‰¢v—™¢Z¤J¥?Ÿš?£$–Vš'Ÿž$«~ª‰ž)µ½›‰¢?ºx ›‰¢?›‰§ © ˜$—™˜ ª‰«Z› ˜¡£$›œ£¡x¤­ª‰«®¤­£$–?x¤J›œž$£¯±²ç˜ © ˜$£¡µV¶·ZŸ¹‰Ÿž)›‰§ 
ºxªa¢Í?´a°?ž)›œ£$—¨ªa¢?˜V›œž$¸x¼›‰µ½—™¢?¥ã>Í?ž)˜$£ŸÅ3£$–¸š'Ÿž$«~ª‰ž¡¤ µ½›‰¢?ºx{ª‰«›‰º)–¦µ€ªZ¥?°?§¨>—®¢í›½Ä6›‰˜$§™—™¢%ºÐ–?›‰—™¢¥‹›œž¡¤ º)–6—¨£¡ºx£$°ž$‰Åœ£$–¢Åz£$–—™µ½š6›‰ºx£ ª‰«Z«ŒŸ¥?Ä6›‰º$ÉZ˜H›‰¢?¥%£$– —™¢?˜$Ÿž$£$—¨ªa¢Vª‰«¢Ÿ³›‰¥¹Y›‰¢6ºx¥‹µ€ªZ¥?°?§™˜ŸÅ?›‰¢?¥íÍ9¢?›‰§™§ © Å £$–±—™µ€š6›‰ºx£Tª‰« ¹œ›œž)—¨ªa°?˜“§¨x¼—™ºŸ›‰§6ž)˜¡ªa°ž)ºx˜Ÿ¶Tî °žÀ¯±² ˜ © ˜¡£¡µï³À›‰˜Àž)›‰¢ɉ¥Ê–?—¨´a–.—™¢.£$– §™›‰˜¡£“£$–ž$Ÿ •Ó3Ô“Õ ¯±²,£¡ž)›‰º$ÉNŸ¹œ›‰§™°?›œ£$—™ªa¢?˜ð¿Öºx«d¶Ù¿~ת1ª‰ž)–?Ÿ˜ŸÅÁ؉؉ØaáÃж •–Ÿž)Ÿ«Œª‰ž$ £$– ž$˜)°?§¨£$˜›œž) ž$Ÿš?ž)˜¡¢Q£$›œ£$—™¹‰ £¡ª»ª‰£$–Ÿž ¯±² ˜$Ÿž)—™›‰§]›œž)º)–?—™£¡ºx£$°ž$˜7³ –ªa˜¡—™¢1£¡Ÿž)¢?›‰§]µ€ªZ¥?°?§¨˜ š'Ÿž$«~ª‰ž)µ(Ñm°?—¨¹œ›‰§¨¢Q££$›‰˜¡Ém˜±¿Ö²3Ä6¢ © Ÿ£T›‰§­¶¨Åmǜâ‰â‰âQꉞ µ€š6§™ª © ˜$—™µ½—™§®›œž]§¨x¼—™ºŸ›‰§Zž$˜$ªa°ž)ºx˜›‰¢?¥½£¡ª1ªa§™˜3¿Öä ªz¹ © Ÿ£ ›‰§È¶¨Åǜâ‰âÁœã ì žÐ›œ´‰Ÿž Ÿ£ ›‰§È¶¨Å6ǜâ‰â‰âQÃж ñ ò AóEHCEHôçõ •–7š'Ÿž$«~ª‰ž)µ½›‰¢?ºx3ª‰«H›½¯±²ö˜ © ˜¡£¡µN—™˜T£$—¨´a–Q£$§ © ºxªa°Z¤ š6§¨¥½³ —¨£$–€£$–7ºxªaµ€š6§¨x¼—¨£ © ª‰«Ñm°˜¡£$—¨ªa¢6˜›‰˜¡É‰¥.›‰¢?¥ £$–í¥?—™÷rºŸ°?§¨£ © ª‰« ›‰¢?˜¡³“Ÿž½x¼Z£¡ž)›‰ºx£$—¨ªa¢¶éø?ª‰žrx¼›‰µ¤ š6§¨‰ÅZ—™¢Ê•Ó3ԓÕéµ½›‰¢ © ˜ © ˜¡£¡µ½˜“³TŸž)±Ñm°?—¨£¡±˜$°?ºŸºx˜)˜Â¤ «Ö°?§›œ£š?ž$ªz¹m—®¥?—™¢´±ºxª‰ž$ž$ºx£›‰¢6˜¡³TŸžÐ˜£¡ª%˜$—®µ€š6§¨ŸžŜ«Ö›‰ºx£Â¤ ˜¡ŸŸÉZ—™¢´±Ñ1°?˜¡£$—¨ªa¢?˜ŸÅœÄ6°?£Æ«Ö›‰—™§¨¥{£¡ª ›‰¢?˜¡³“ŸžÑ1°˜$£$—¨ªa¢?˜ £$–?›œ£Êž$Ñm°?—¨ž$¥éž$›‰˜¡ªa¢?—®¢´ª‰žV›‰¥¹œ›‰¢?ºx¥ö§™—®¢´a°?—™˜¡£$—®º Computational Linguistics (ACL), Philadelphia, July 2002, pp. 33-40. Proceedings of the 40th Annual Meeting of the Association for ›‰¢?›‰§ © ˜$—™˜¬¿~ת1ª‰ž)–?Ÿ˜ŸÅÁ؉؉ØaÃж‹øž)ªaµ £$–.ºxªaµ%Ä9—™¢¥ ˜¡Ÿ£ª‰«[ÁxùQúœâ Ÿ¹œ›‰§™°?›œ£$—¨ªa¢€Ñm°˜¡£$—¨ªa¢6˜ŸÅ1ûYâQüª‰«6£$–?Mš6›œž¡¤ £$—™ºŸ—¨š9›œ£$—™¢´Ë˜ © ˜¡£¡µ½˜í›‰¢?˜$³TŸž$¥Ü˜$°?ºŸºx˜$˜$«~°?§®§ © Ñm°˜Â¤ £$—¨ªa¢?˜±§™—™É‰½¯€ÁŸâÁÌm¾çýzþÊÿ  ~ÿÐÅHÄ6°£±¢ªa¢ ºxªa°?§™¥{Í6¢?¥»›±ºxª‰ž$ž$ºx£›‰¢?˜¡³“Ÿž£¡ª ºxªaµ€š6§¨x¼»Ñ1°˜$£$—¨ªa¢?˜ ˜$°?ºÐ–v›‰˜±¯Á‰Áúm¾ÙýzþÊÿ  ~ÿ    !"# %$" &' (*)+, -( %%)./021 ' 34%)5, - %*)6ж ·Z—®¢?ºxMš'Ÿž$«Œª‰žÐµ½›‰¢?ºx —™˜›87ºx£¡¥¬Ä © £$–? ºxªaµ€š6§¨x¼m¤ —¨£ © ª‰«¬Ñm°˜¡£$—™ªa¢Üš6ž$ªmºx˜)˜$—™¢´Å»³TçÍ?žÐ˜¡£‹š6ž$ª†¹Z—™¥Ë› Ä?ž$ªa›‰¥í£$›Y¼mªa¢ªaµ © ª‰«¯±²Ü˜ © ˜¡£¡µr˜Ÿ¶ 9;:*< =?>8@ACB>@D •–£$›Y¼Zªa¢ªaµ © —™˜±Ä6›‰˜¡¥¦ªa¢B˜$Ÿ¹‰Ÿž)›‰§ºxž)—¨£¡ŸžÐ—™›.£$–?›œ£ š6§™› © ›‰¢v—™µ€š'ª‰ž$£$›‰¢1£“ž)ªa§¨>—™¢ÊÄ6°?—®§™¥?—™¢´€¯±²˜ © ˜$£¡µ½˜Ÿ¾ ¿ÂÁ†Ã]§™—™¢?´a°?—™˜¡£$—™º›‰¢?¥Ém¢?ª†³ §¨¥?´‰“ž)˜¡ªa°ž)ºx˜ŸÅ¿ÈÇaÃ]¢?›œ£$°Z¤ ž)›‰§[§™›‰¢´a°6›œ´‰€š?ž$ªZºx˜$˜$—™¢´Ê—™¢1¹‰ªa§¨¹‰¥Å¿ÈÌaÃ7¥ªZºŸ°?µ€¢1£ š?ž$ªZºx˜$˜$—®¢´Å¿~ù1Þ)›‰˜¡ªa¢?—™¢´Vµ€Ÿ£$–?ªm¥?˜Å¿*aÃ3³3–Ÿ£$–Ÿž ª‰žT¢?ª‰£›‰¢?˜¡³“ŸžT—™˜[x¼Zš6§™—™ºŸ—™£$§ © ˜¡£$›œ£¡¥.—™¢€›>¥ªZºŸ°?µ€¢1£ŸÅ ¿ÈúaÓ³ –?Ÿ£$–Ÿžª‰ž3¢?ª‰£›‰¢?˜¡³“Ÿž«~°6˜$—¨ªa¢V—™˜¢?ºx˜$˜$›œž © ¶ 9;:E9 =/FGD.H#H#BH2IKJ LNMBHAO@GIPH QSR 8"T(UWV & ,XO Y Z/" [64! R 5)3\ [0)# " G^]/\(_ %`. Rba `. , %*)6 •–˜${˜ © ˜¡£¡µ½˜7x¼Z£¡ž)›‰ºx£ ›‰¢?˜¡³“Ÿž)˜±›‰˜3£¡x¼Z£±˜$¢?—¨š?š'Ÿ£$˜ «~ž$ªaµ ªa¢`ª‰ž‹µ€ª‰ž$¥ªmºŸ°6µ€¢Q£$˜¶ î7«~£¡¢ö£$–›‰¢Z¤ ˜¡³“Ÿž —™˜7«Œªa°?¢6¥v¹‰Ÿž$Ä9›œ£$—™µÝ—™¢¦›r£¡x¼Z£ ª‰ž ›‰˜ ›.˜)—™µ€š6§¨ µ€ª‰ž$š9–ªa§¨ª‰´a—™ºŸ›‰§]¹œ›œž)—™›œ£$—¨ªa¢¶{• © š6—™ºŸ›‰§™§ © £$–€›‰¢?˜¡³“Ÿž)˜ ›œž$¦x¼Z£¡ž)›‰ºx£¡¥ð°?˜$—®¢´µ½š6—¨ž)—™ºŸ›‰§3µ€Ÿ£$–ªZ¥?˜Êž$§ © —™¢´ ªa¢Vɉ © ³Tª‰žÐ¥vµ½›‰¢?—™š6°?§™›œ£$—¨ªa¢6˜Ÿ¶ QSR 8"/c^UdV &  X8 Y Z/04! 
R G^]5,Z [ R /8)._ G^]+Z2"xÿ.E,Z/ •–Êº)–6›œž)›‰ºx£¡Ÿž)—™˜¡£$—®º¬ª‰« £$–?—™˜»ºŸ§™›‰˜$˜—™˜{£$–?›œ£€›‰¢?˜¡³“Ÿž)˜ ›œž$M«~ªa°?¢?¥€—™¢€˜)¢?—¨š?š'Ÿ£$˜]ª‰«9£¡x¼Z£ŸÅQÄ9°£°?¢?§™—™É‰“—™¢rÕM§™›‰˜$˜ ÁœÅT—®¢«ŒŸž)¢?ºx.—®˜»¢?ºx˜$˜$›œž © £¡ªÙž)§™›œ£¡V£$–VÑ1°?˜¡£$—¨ªa¢ ³ —¨£$–v£$–›‰¢?˜¡³“Ÿž¶fevª‰ž$§™›œÄ'ª‰ž)›œ£¡›‰¢?˜¡³“Ÿž ¥Ÿ£¡º|¤ £$—¨ªa¢Bµ½Ÿ£$–ªm¥6˜ ˜$°?º)–`›‰˜>ªa¢1£¡ªa§¨ª‰´a—¨˜>ª‰ž{ºxªm¥?—™Í6ºŸ›œ£$—¨ªa¢ ª‰«š?ž)›œ´aµ½›œ£$—™º“ÉZ¢ª†³3§¨¥´‰›œž$M¢ºx˜$˜)›œž © ¶T·mµ½›‰¢1£$—™º ›‰§¨£¡Ÿž)¢6›œ£$—¨ªa¢?˜ŸÅ ³“ª‰ž)§™¥¦ÉZ¢ªz³ §¨¥´‰½›Y¼—¨ªaµ½˜ ›‰¢?¥`˜$—™µ¤ š6§¨ ž)›‰˜¡ªa¢?—™¢´€µ€Ÿ£$–?ªm¥?˜ ›œž$%¢ºx˜$˜)›œž © ¶T²7¢Vx¼›‰µ¤ š6§¨V—™˜r¯€ÁØgm¾ ýGh?)$i*kj)# Y,l*";!³3–Ÿž$ -]–?›‰˜Æ£¡ª3Ä'T§®—™¢ɉ¥>³3—¨£$–d G6mOG^]n[) );?$o;x¶ p ª‰ž)¥.q3Ÿ£›‰¢?¥.—¨£$˜“x¼m£¡¢6˜$—¨ªa¢?˜À›œž$±˜¡ªaµ€Ÿ£$—™µ½˜M°?˜¡¥ ›‰˜ ˜$ªa°ž)ºx˜Mª‰«[³“ª‰ž)§™¥ÊÉZ¢ª†³ §™¥´‰‰¶ QSR 8"r4U/V &  X8, YZ?f" [^! R )3\ 6 $b\,` *) \,3)Zs   0 t()C`Z K G è¢ң$–?—™˜¬ºŸ§™›‰˜$˜¬£$–vš9›œž$£$—™›‰§ ›‰¢6˜¡³TŸžÊ—™¢«~ª‰ž)µ½›œ£$—™ªa¢Ò—™˜ ˜$ºŸ›œ£¡£¡Ÿž$¥½£$–ž$ªa°´a–ªa°?£[˜$Ÿ¹‰Ÿž)›‰§?¥ªZºŸ°?µ€¢1£$˜[›‰¢?¥€›‰¢Z¤ ˜¡³“Ÿž½«Ö°?˜$—¨ªa¢Ë—™˜r¢ºx˜$˜$›œž © ¶:•–íºxªaµ€š9§¨x¼Z—™£ © –Ÿž$ ž)›‰¢´‰˜7«~ž$ªaµ ›‰˜$˜¡µ{Ä6§™—™¢´¬˜$—™µ€š6§™>§™—™˜¡£$˜ £¡ª.«Ö›œž±µ€ª‰ž$ ºxªaµ€š6§™x¼ÚÑ1°?˜¡£$—¨ªa¢?˜ç§™—¨É‰_˜$ºxž)—™š?£Ñm°˜¡£$—¨ªa¢6˜ŸÅB¿~‰¶ ´¶ ýGh?)$u()wv+"Z! R xk!%#X R 6¡ÃÐŪ‰ž£¡µ€š9§™›œ£¡x¤ §™—¨É‰¬Ñ1°˜$£$—¨ªa¢?˜Ê¿Zý†þÊÿ( fZ0#]^Z K ,`6  " *)6 )# ` x yvz ' Gl ~ÿ6{[8, tX4^$Ãж QSR 8"|KU{v K Y (C %}dV &  X8, YZ? •–˜$Ê˜ © ˜¡£¡µr˜€›œž$V›œÄ6§¨Ê£¡ª¸›‰¢?˜¡³“Ÿž½Ñ1°˜$£$—¨ªa¢?˜—™¢ £$–Òºxªa¢1£¡x¼m£ª‰«¬š6ž$Ÿ¹m—™ªa°?˜¸—™¢1£¡Ÿž)›‰ºx£$—¨ªa¢?˜¸³3—¨£$–ê£$– °?˜¡Ÿž†¶B²3˜ž$Ÿš'ª‰ž$£¡¥ç—™¢_¿Öä7›œž)›œÄ6›œ´a—™°çŸ£½›‰§È¶¨Å“Çœâ‰âÁ†ÃÐÅ š?ž$ªZºx˜$˜$—®¢´Ë›Ò§™—™˜¡£Êª‰«Ñ1°?˜¡£$—¨ªa¢?˜Êš'ªa˜¡¥ð—®¢ö›Ëºxªa¢Z¤ £¡x¼Z£±—®¢Q¹‰ªa§¨¹‰˜±ºxªaµ€š9§¨x¼íž$Ÿ«~Ÿž$¢?ºx»ž$˜¡ªa§™°£$—™ªa¢¶ ~7¢Z¤ §™—¨É‰%£ © š6—™ºŸ›‰§ ž)Ÿ«ŒŸž$¢6ºx%ž$˜¡ªa§®°£$—¨ªa¢v›‰§™´‰ª‰ž)—¨£$–?µ½˜£$–?›œ£ ›‰˜$˜¡ªZºŸ—™›œ£¡›‰¢?›œš9–ª‰ž$À³ —¨£$–›7ž$Ÿ«~Ÿž$¢Q£ŸÅQ£$–Àž$Ÿ«ŒŸž)¢?ºx —™µ€š'ªa˜¡¥.Ä © ºxªa¢1£¡x¼m£7Ñ1°?˜¡£$—¨ªa¢?˜“ž$Ñm°?—¨ž$˜“£$–>›‰˜)˜¡ªœ¤ ºŸ—™›œ£$—¨ªa¢Òª‰«±›‰¢é›‰¢?›œš6–ª‰ž)›¦«~ž$ªaµë£$–?vºŸ°ž$ž$¢1£½Ñm°˜Â¤ £$—¨ªa¢³ —¨£$–—¨£$–Ÿžªa¢Êª‰«3£$–Êš?ž$Ÿ¹Z—¨ªa°?˜»Ñ1°˜$£$—¨ªa¢?˜ŸÅ ›‰¢?˜¡³“Ÿž)˜Mª‰ž £$–—¨ž ›‰¢?›œš9–ª‰ž)›Z¶ QSR 8"{(UV &  X8 Y Z/n  [4! R )3\n0 R )](%  R _  ).] •–±º)–6›œž)›‰ºx£¡Ÿž)—™˜¡£$—®º3ª‰«H£$–˜¡ ˜ © ˜¡£¡µ½˜À—™˜T£$–—¨žÀ›œÄ6—™§Û¤ —¨£ © £¡ªr›‰¢6˜¡³TŸž7˜¡š'ºŸ°?§™›œ£$—¨¹‰ Ñm°˜¡£$—¨ªa¢6˜˜$—™µ½—®§™›œžÀ£¡ª¾ ýGv" ~ÿf1b€]4)G^]d -)3?GK Y , G?  ~ÿ G "( Z"# %^]NÐã3ýGv"n ~ÿ‚Sjd)` b)3\ƒ" ,  *)„|ã7ý%v"ƒ ~ÿ6G R G?`, %,X+G… %)`K! 
R (ж ·Z—®¢?ºx µ€ªa˜¡£“š?ž$ª‰Ä9›œÄ6§ © £$–7›‰¢6˜¡³TŸž“£¡ª˜$°?º)–.Ñm°˜Â¤ £$—¨ªa¢?˜M—®˜“¢?ª‰£Mx¼mš6§®—™ºŸ—¨£$§ © ˜¡£$›œ£¡¥v—™¢Ê¥ªmºŸ°6µ€¢Q£$˜Å?˜$—™µ¤ š6§ © Ä'ºŸ›‰°?˜¡ Ÿ¹‰¢1£$˜ µ½› © ¢?ª‰£–?›¹‰%–6›œš?š'¢¥ © Ÿ£ŸÅ ¯±² ˜ © ˜$£¡µ½˜«~ž$ªaµê£$–?—™˜[ºŸ§™›‰˜$˜¥ºxªaµ€š'ªa˜¡À£$–Ñm°˜Â¤ £$—¨ªa¢.—™¢1£¡ª{Ñm°Ÿž)—¨˜£$–?›œ£x¼Z£¡ž)›‰ºx£Àš6—¨ºx˜Tª‰«Ÿ¹m—®¥¢?ºx‰Å ›œ«~£¡Ÿž]³3–?—™º)–%›‰¢?˜¡³“Ÿž]—™˜H«Œª‰žÐµ{°?§®›œ£¡¥{°?˜)—™¢´ž$›‰˜$ªa¢?—™¢´ Ä © ›‰¢6›‰§¨ª‰´ © ¶•–ž$˜$ªa°ž)ºx˜H—®¢?ºŸ§™°?¥›‰¥Z¤J–?ªmºÉZ¢ªz³ §Û¤ ¥´‰Ä6›‰˜¡˜Ù´‰¢Ÿž)›œ£¡¥«~ž$ªaµÞµ½—™¢?—®¢´Ë£¡x¼m£¦¥ªZºŸ°Z¤ µ€¢1£$˜íºŸ§™°?˜$£¡Ÿž$¥öÄ © £$–`Ñm°˜¡£$—™ªa¢_£¡ª‰š6—™ºœ¶N²3˜)˜¡ªœ¤ ºŸ—™›œ£¡¥ö³3—¨£$–:£$–˜¡¸ÉZ¢ª†³ §™¥´‰¸˜¡ªa°ž)ºx˜í›œž)BºŸ›‰˜¡x¤ Ä6›‰˜¡¥ž$›‰˜¡ªa¢6—™¢´v£¡ºÐ–?¢?—™Ñm°˜»›‰˜»³T§™§“›‰˜µ€Ÿ£$–ªZ¥?˜ «~ª‰ž‹£¡µ€š'ª‰ž)›‰§{ž$›‰˜¡ªa¢6—™¢´Å%˜¡š9›œ£$—™›‰§%ž)›‰˜¡ªa¢?—™¢´:›‰¢?¥ Ÿ¹Z—™¥¢1£$—™›‰§'ž$›‰˜¡ªa¢?—®¢´¶ •]›œÄ6§¨½Áœ¾†7—®˜¡£¡ž)—¨Ä6°?£$—¨ªa¢rª‰«•Ó3ԓÕ:Ñm°˜¡£$—¨ªa¢6˜ • © š' q3°6µ%Ä'Ÿž%¿­ü½Ã ÕÀ§™›‰˜$˜ Á¬¿~«~›‰ºx£$°?›‰§~à Øgv¿Èúaû1¶‡aürà ÕÀ§™›‰˜$˜Çv¿Ö˜$—™µ€š6§™x¤­ž$›‰˜¡ªa¢?—™¢?´1à ùaâ(gv¿ÈÇaû1¶ŠØaürà ÕÀ§™›‰˜$˜Ìv¿~«~°?˜)—¨ªa¢.¤§™—™˜¡£Ðà Çí¿ÂÁœ¶ û‰ürà ÕÀ§™›‰˜$˜Àùٿ֗™¢Q£¡ŸžÐ›‰ºx£$—¨¹‰ ¤Tºxªa¢1£¡x¼m£Ðà ùQÇí¿ÈÇm¶ŠØaürà ÕÀ§™›‰˜$˜{v¿Ö˜¡š'ºŸ°?§™›œ£$—¨¹‰zà âv¿ÖâZ¶ âQü½Ã •]›œÄ6§¨vÁ¬—™§™§™°6˜¡£¡ž)›œ£¡˜ £$–¬¥?—™˜¡£¡ž)—™Ä6°£$—¨ªa¢Ùª‰«•Ó3Ô“Õ Volkswagen AND bug 60 passages 2 passages "in 1966" USD 520 $1 $1 USD 520 "rent a Volkswagen bug for $1 a day" of expected answer type selection Keyword Keyword expansion post−filtering Passage of candidate Identification Answer formulation ranking Answer Answer Question Country Volkswagen bug rent built − build Examples Examples System answers and passages of documents Modules Niagra − Niagara Volkswagen Volkswangen − Money Person invented − inventor rented − rent pre−processing (split/bind/spell) Keyword of question representation Derivation Construction Retrieval How much rent 1966 bug Volkswagen M1 M2 M3 M4 M5 M6 M10 M9 M8 M7 ø]—™´a°ž$½Áœ¾²3ž)º)–6—¨£¡ºx£$°ž$ ª‰«[Ä6›‰˜¡§®—™¢ ˜¡Ÿž)—™›‰§˜ © ˜¡£¡µ ¿Ö¢ª€«~Ÿ¥Ä6›‰º$ÉZ˜Ðà Ñm°˜¡£$—¨ªa¢?˜¦—™¢1£¡ª_£$–ËÑ1°?˜¡£$—¨ªa¢(ºŸ§™›‰˜$˜$˜Ÿ¶=è¢ꛉ¥?¥?—Û¤ £$—¨ªa¢V£¡ªÙÁ̉؉̽µ½›‰—®¢Z¤­£$›‰˜¡ÉVÑ1°?˜¡£$—¨ªa¢?˜Mºxªa§™§™ºx£¡¥í«~ž$ªaµ •Ó3ԓÕT¤ˆgmÅ7•Ó ÔÀÕT¤dØ›‰¢?¥_•Ó ÔÀÕT¤dǜâ‰âÁœÅ±£$–Ÿž$¸›œž$ Ǭ§™—™˜¡£ Ñm°˜¡£$—™ªa¢?˜{¿~‰¶ ´¶¨Å€ýG‰?Z€cŠ‹")`K %,-,/ ~ÿ [0)#`. ?")"CU$Ã%›‰¢6¥€ùQÇ>ºxªa¢1£¡x¼m£ÀÑm°˜¡£$—¨ªa¢?˜3¿~‰¶ ´¶¨Å ýGh?)$ R )^]d$Œ8 ~ÿl. X(#]NÐã»ýGh?)$Ž$o*46¡Ãж  \]k?OQPJAG3SzõSz”œk9ô A'OQXQjTPȔ‰k6Xm”aROak •–?—®˜»˜$ºx£$—¨ªa¢Ò—™¢1£¡ž$ªZ¥?°?ºx˜»£$–í˜¡Ÿž)—®›‰§™—‘Ÿ¥ç›œž)º)–6—¨£¡º|¤ £$°ž$Ùª‰«>ªa°žv¯±² ˜ © ˜$£¡µ*—®¢é³ –6—™º)– £$–Ÿž$¦›œž$¸¢ª «~Ÿ¥Ä6›‰º$ÉZ˜Ÿ¶:•–íºxªaµ€š6§™Ÿ£¡v›œž)º)–6—¨£¡ºx£$°ž$í³3—¨£$–Ë›‰§™§ £$–.«ŒŸ¥?Ä6›‰º$ÉZ˜—™˜>š?ž)˜¡¢Q£¡¥ç—™¢›‹§®›œ£¡Ÿž»˜¡ºx£$—™ªa¢ª‰« £$–€š6›œš'Ÿž¶{²3˜>˜)–ª†³ ¢¦—™¢¦ø[—¨´a°ž)ÊÁœÅH£$–½›œž)º)–6—¨£¡º|¤ £$°ž$Êºxªa¢?˜$—®˜¡£$˜»ª‰«{ÁŸâ¦µ€ªZ¥?°?§¨˜%šŸž)«Œª‰ž)µr—™¢´‹˜$Ÿ¹‰Ÿž)›‰§ ¢?›œ£$°žÐ›‰§§™›‰¢´a°?›œ´‰>š6ž$ªmºx˜)˜$—™¢´£$›‰˜¡ÉZ˜Ÿ¶ •–?7Í6ž)˜¡£ÀÍ?¹‰>µ€ªZ¥?°?§¨˜Àºxª‰ž$ž)˜¡š'ªa¢?¥.£¡ªrÑ1°?˜¡£$—¨ªa¢ š?ž$ªZºx˜$˜$—®¢´Åa£$– ¢?x¼m££d³Tª»µ€ªZ¥?°?§¨˜[š'Ÿž$«~ª‰ž)µ(¥ªZºŸ°Z¤ µ€¢1£“›‰¢6¥¬š9›‰˜$˜$›œ´‰7š?ž$ªZºx˜$˜$—™¢´ÅZ›‰¢?¥¬£$–±§™›‰˜¡£“£$–ž$Ÿ µ€ªZ¥?°?§¨˜Àš'Ÿž$«~ª‰ž)µN›‰¢?˜$³TŸžš6ž$ªmºx˜)˜$—™¢´¶ eÁ • –¬—™¢6¥?—¨¹Z—™¥?°?›‰§[Ñm°˜¡£$—¨ªa¢`³Tª‰ž)¥6˜{›œž$Ê˜¡š'§™§Û¤ º)–?º$ɉ¥¶ p ª‰ž)¥?˜Æ§™—¨É‰2.) R mC,$Œ^] ±›‰¢?¥‰’*#](3T›œž$ x¼Zš6›‰¢?¥¥ —™¢Q£¡ª£$–—¨ž.˜¡š'§™§™—™¢´¦¹œ›œž)—™›‰¢1£$˜Ž.) 
R mO $Œ_ ]^›‰¢?¥“‰’*#]43œ¶öè «±¢ºx˜$˜$›œž © ÅÑm°˜¡£$—™ªa¢?˜½˜$°?ºÐ– ›‰˜`¯’ggm¾ýG”{) -,X• ^](;–" —$ —Z2(4“! X $[ÿ ?")Z [KX(!›œž$¬ž)Ÿš6–ž)›‰˜¡¥ç—™¢1£¡ª¦›Ù¢?ª‰ž)µ½›‰§Û¤ —‘Ÿ¥Ê«~ª‰ž)µï³ –Ÿž) £$– ³ –Z¤­³“ª‰ž)¥¸¿˜$[ÿ ®Ã›œš?š'›œž)˜M›œ£ £$– Ä'Ÿ´a—™¢?¢6—™¢´Åm‰¶ ´¶vý†þÊÿ t )Z [.X5$?) -,X ]G/" Z2(4!X(ж eÙÇ •–Þ—™¢š6°?£*Ñm°˜¡£$—¨ªa¢ —™˜*š6›œžÐ˜¡¥ ›‰¢?¥ £¡ž)›‰¢?˜$«Œª‰ž)µ½¥ —™¢Q£¡ª ›‰¢ —®¢Q£¡Ÿž)¢6›‰§ ž)Ÿš?ž$˜¡¢1£$›Y¤ £$—¨ªa¢Ù¿Öä3›œž)›œÄ9›œ´a—™°rŸ£À›‰§È¶¨Åǜâ‰â‰âQÃTºŸ›œš?£$°ž)—™¢?´%Ñ1°?˜¡£$—¨ªa¢ ºxªa¢?ºxŸš?£$˜ë›‰¢6¥ Ä6—®¢?›œž © ¥Ÿš'¢?¥¢?ºŸ—¨˜ Ä'Ÿ£J³“Ÿ¢ £$–éºxªa¢6ºxŸš?£$˜Ÿ¶ ·m£¡ª‰š!³“ª‰ž)¥?˜ ¿~‰¶ ´¶¨Å¬š?ž)Ÿšªa˜)—¨£$—¨ªa¢?˜ ª‰žF¥Ÿ£¡Ÿž)µr—™¢Ÿž)˜Ðà ›œž$ —™¥¢1£$—¨Í?¥*›‰¢?¥*ž$µ€ªz¹‰¥ «~ž$ªaµ£$–¸ž$Ÿš6ž$˜¡¢1£$›œ£$—¨ªa¢¶øª‰žÙ—™§™§™°6˜¡£¡ž)›œ£$—¨ªa¢Å3£$– ž$Ÿš?ž)˜¡¢Q£$›œ£$—™ªa¢v«~ª‰ž ¯ âÁÌm¾BýGh?)$™Z€`6Ÿÿw")` R ‹X()`  K šK) R mO $Œ#] “!`(]€\ )+G›T(œ4(;ºŸ›œš?£$°?ž$˜ £$– Ä6—™¢?›œž © ¥Ÿš'¢?¥¢6º © Ä'Ÿ£J³“Ÿ¢Ú£$– ºxªa¢?ºxŸš6£$˜  K ›‰¢6¥žT(œ4(œ¶ eÙÌ •–¬µr›œš?š6—™¢´‹ª‰« ºxŸž$£$›‰—™¢çÑm°˜¡£$—¨ªa¢¥Ÿš'¢Z¤ ¥¢?ºŸ—™˜%ªa¢› p ª‰žÐ¥.q Ÿ£Â¤­Ä9›‰˜¡¥ç›‰¢?˜¡³“Ÿž{£ © š'Ê–?—¨Ÿž¡¤ ›œž)ºÐ– © ¥6—™˜$›‰µ{Ä6—¨´a°?›œ£¡˜v£$–˜¡µr›‰¢Q£$—™ººŸ›œ£¡Ÿ´‰ª‰ž © ª‰« £$–:x¼mš'ºx£¡¥ï›‰¢?˜¡³“Ÿž)˜_¿ ì ›.Ÿ ˜$ºŸ›(›‰¢?¥2ä3›œž)›œÄ9›œ´a—™°Šǜâ‰âÁ†Ãж øª‰žVx¼›‰µ€š6§¨‰Å £$–Ù¥?Ÿš¢6¥¢?º © Ä'Ÿ£J³“Ÿ¢ h?)$ Z€`.xÿ{›‰¢?¥… K «~ª‰ž3¯±âÁÌ—™˜Tx¼mš9§¨ªa—¨£¡¥.£¡ª½¥x¤ ž)—¨¹‰»£$–x¼Zšºx£¡¥¸›‰¢6˜¡³TŸž±£ © š' ' ); Xœ¶>•–›‰¢Z¤ ˜¡³“Ÿž€£ © š'.—™˜»š6›‰˜)˜¡¥£¡ª¦˜)°Ä6˜¡Ñm°¢1£»µ€ªZ¥?°?§™˜%«~ª‰ž £$–>—™¥?¢Q£$—¨Í9ºŸ›œ£$—¨ªa¢Êª‰«Æš'ªa˜$˜)—¨Ä6§¨±›‰¢?˜¡³“Ÿž)˜%¿Ö›‰§™§Hµ€ªa¢x¤ £$›œž © ¹œ›‰§™°˜ÐÃж evù 擛‰˜$¥¸µ½›‰—™¢?§ © ªa¢¦š6›œž)£3ª‰«˜$šŸºÐ–¸—™¢«~ª‰ž)µ½›Y¤ £$—¨ªa¢ÅT›¸˜$°Ä6˜¡Ÿ£ª‰«3£$–íÑ1°?˜¡£$—¨ªa¢Ëºxªa¢?ºxŸš?£$˜r›œž$V˜¡x¤ §¨ºx£¡¥¦›‰˜7ɉ © ³“ª‰ž)¥?˜7«~ª‰ž7›‰ºŸºx˜)˜$—™¢´.£$–°?¢?¥Ÿž)§ © —™¢?´ ¥ªZºŸ°?µ€¢1£Mºxªa§™§¨ºx£$—¨ªa¢¶²ßš6›‰˜$˜$›œ´‰ ž)Ÿ£¡ž)—¨Ÿ¹œ›‰§9¢?´a—™¢ ›‰ºŸºxŸš?£$˜>擪mªa§¨›‰¢¦Ñ1°?Ÿž)—¨˜7Ä6°?—™§™£«Œž)ªaµ £$–?»˜¡§™ºx£¡¥ ɉ © ³Tª‰ž)¥6˜ŸÅ‰¶ ´¶¡K) R mO $Œ#] B²q †¢!`]a¶B•–.ž$x¤ £¡ž)—¨Ÿ¹œ›‰§¢´a—®¢½ž$Ÿ£$°žÐ¢?˜±š6›‰˜$˜$›œ´‰˜{£$–?›œ£»ºxªa¢Q£$›‰—™¢›‰§™§ ɉ © ³Tª‰ž)¥6˜˜¡š'ºŸ—¨Í?¥—™¢£$–? 擪1ªa§¨›‰¢rÑ1°Ÿž © ¶[• –Ÿž$x¤ «~ª‰ž$>ɉ © ³Tª‰ž)¥‹˜¡§¨ºx£$—¨ªa¢‹—™˜M›½˜¡¢6˜$—¨£$—¨¹‰ £$›‰˜¡É¶“èd«]£$– ³ž$ªa¢?´ÒÑ1°˜$£$—¨ªa¢_³Tª‰žÐ¥ ¿~‰¶ ´¶£Z€`6ŸÿœÃʗ™˜V—®¢?ºŸ§™°?¥¥ —™¢ç£$–‹æTªmªa§¨›‰¢ Ñ1°Ÿž © ¿˜Z€`.xÿ¦² q† .) R mC,$Œ#]^ ²q † ! 
`(]zÃÐÅV£$–? ž$Ÿ£¡žÐ—¨Ÿ¹Y›‰§Ê—™˜ç—™¢6˜$°?ºŸºx˜$˜¡«Ö°?§¬˜$—™¢?ºx £$–{š6›‰˜$˜$›œ´‰˜7ºxªa¢Q£$›‰—®¢?—™¢´½£$–?{ºxª‰ž$ž$ºx£±›‰¢?˜¡³“Ÿž)˜7›œž$ µ½—™˜)˜¡¥¶ e… 擝Ÿ«~ª‰ž$T£$–?Tºxªa¢?˜¡£¡žÐ°?ºx£$—¨ªa¢{ª‰«?æTªmªa§¨›‰¢»Ñ1°?Ÿž)—¨˜ «~ª‰ží›‰ºx£$°?›‰§±ž$Ÿ£¡ž)—¨Ÿ¹œ›‰§ÈÅ3£$–¸˜¡§¨ºx£¡¥_ɉ © ³Tª‰ž)¥6˜í›œž$ x¼Zš6›‰¢?¥¥Ê³ —™£$–.µ½ª‰ž$š6–ªa§¨ª‰´a—®ºŸ›‰§ÈÅm§™x¼Z—™ºŸ›‰§ª‰ž ˜$µ½›‰¢Z¤ £$—™º7›‰§¨£¡Ÿž)¢?›œ£$—™ªa¢?˜Ÿ¶•–7›‰§¨£¡Ÿž)¢?›œ£$—™ªa¢?˜Àºxª‰ž$ž$˜¡š'ªa¢?¥¬£¡ª ª‰£$–ŸžÀ«~ª‰ž)µ½˜“—™¢½³ –6—™º)–¬£$–±Ñ1°˜$£$—¨ªa¢¬ºxªa¢6ºxŸš?£$˜Àµ½› © ªZºŸºŸ°žM—™¢r£$–?7›‰¢?˜$³TŸž)˜¶øª‰žMx¼Z›‰µ½š6§¨‰Å;K Y±—®˜x¼m¤ š6›‰¢?¥?¥V—™¢Q£¡ªxK Ö¶ eÙú •–?Ëž$Ÿ£¡ž)—¨Ÿ¹œ›‰§€¢´a—™¢?Ëž$Ÿ£$°ž)¢?˜B£$–é¥ªZºŸ°Z¤ µ€¢1£$˜rºxªa¢1£$›‰—™¢?—™¢?´B›‰§™§Mɉ © ³Tª‰ž)¥6˜r˜¡š'ºŸ—¨Í?¥Ë—®¢Ë£$– 擪1ªa§™›‰¢Ñm°Ÿž)—¨˜¶í•–¬¥?ªmºŸ°?µ½¢Q£$˜»›œž$r£$–?¢`«Ö°ž¡¤ £$–ŸžVž$˜$£¡ž)—™ºx£¡¥:£¡ªÒ˜$µ½›‰§™§™Ÿž¬£¡x¼Z£Vš6›‰˜$˜$›œ´‰˜Ê³3–Ÿž$ ›‰§™§9ɉ © ³“ª‰ž)¥?˜M›œž)±§™ªmºŸ›œ£¡¥V—™¢¬£$–±š?ž$ª†¼Z—®µ½—¨£ © ª‰«Æªa¢ ›‰¢ª‰£$–Ÿž†¶rÔ›‰ºÐ–Bž$Ÿ£¡ž)—™Ÿ¹‰¥Bš6›‰˜$˜)›œ´‰r—™¢?ºŸ§™°6¥˜±›‰¥?¥?—Û¤ £$—¨ªa¢?›‰§£¡x¼m£Ê¿~x¼m£¡ž)›‹§™—™¢˜|Ã3ĝŸ«~ª‰ž$½£$–?½›œž)§™—¨˜$£%›‰¢?¥ ›œ«~£¡ŸžT£$–3§™›œ£¡˜¡£ɉ © ³“ª‰ž)¥¬µr›œ£$º)–¶øª‰žÀ—™§™§®°?˜¡£¡ž)›œ£$—¨ªa¢ Å ºxªa¢?˜$—®¥Ÿž±¯±â‰â(m¾¸ýzþÊÿ ƒ? ~ÿ/Z2)3\/ ~ÿ€Z2._ #](^]/G"# -))3\ & [0,*") Q )Z [0`6 Y %›‰¢?¥»£$–À›‰˜Â¤ ˜¡ªZºŸ—™›œ£¡¥.æTªmªa§¨›‰¢ÊÑm°Ÿž © & [ *") '²q † Q )Z [0`6 Y  ²q †¤G# -)ж“•– ž$§¨Ÿ¹œ›‰¢1£M£¡x¼m£«~ž)›œ´aµ€¢1£M«~ž$ªaµ £$–Ò¥ªZºŸ°?µ€¢1£¸ºxªa§™§¨ºx£$—¨ªa¢!—™˜NýG¥k ¦ YWh?),,§ Z20#](]wG# -))3\ & [0,*") Q )Z¦[` Y"^ж ~7¢Z¤ §¨˜$˜±›‰¥?¥6—¨£$—¨ªa¢?›‰§Æ£¡x¼m£>—®˜ —™¢?ºŸ§™°?¥?¥v—™¢Ù£$–š6›‰˜$˜)›œ´‰˜ŸÅ £$–±›‰ºx£$°?›‰§›‰¢?˜¡³“Ÿž¦ ¦# Y {h),“³“ªa°?§™¥¬Ä'3µ½—®˜$˜¡¥ Ä'ºŸ›‰°?˜¡½—¨£±ªZºŸºŸ°ž)˜ Ä'Ÿ«~ª‰ž$€›‰§™§µ½›œ£$ºÐ–¥¸É‰ © ³Tª‰ž)¥6˜ŸÅ ¢?›‰µ€§ © G# -)ÐÅ & [0,*") ƛ‰¢?¥ Q )Z [0`6 Y |¶ e¦û • – ž$Ÿ£¡žÐ—¨Ÿ¹‰¥.š6›‰˜$˜$›œ´‰˜À›œž$7«~°?ž$£$–Ÿžž)ŸÍ6¢¥ «~ª‰žV¢?–?›‰¢?ºx¥:š?ž$ºŸ—®˜$—¨ªa¢¶ ì ›‰˜$˜$›œ´‰˜í£$–?›œ£í¥ªË¢ª‰£ ˜$›œ£$—™˜$« © £$–»˜¡µ½›‰¢1£$—™º%ºxªa¢6˜¡£¡ž)›‰—™¢1£$˜±˜¡š'ºŸ—¨Í?¥í—™¢v£$– Ñm°˜¡£$—¨ªa¢ ›œž$v¥?—™˜)ºŸ›œž)¥¥¶_øª‰ž.x¼Z›‰µ€š9§¨‰ÅM˜¡ªaµ€vª‰« £$–íš9›‰˜$˜$›œ´‰˜½ž)Ÿ£¡ž)—¨Ÿ¹‰¥Ë«~ª‰žÊ¯±âÁÌB¥ª`¢ª‰£½˜$›œ£$—®˜¡« © £$–M¥?›œ£¡Mºxªa¢?˜¡£¡žÐ›‰—™¢Q£nT(œ44Y¶“î±°£]ª‰«?£$–úœâ7š6›‰˜$˜)›œ´‰˜ ž$Ÿ£$°žÐ¢¥Ä © £$–Mž$Ÿ£¡žÐ—¨Ÿ¹Y›‰§?¢´a—™¢À«~ª‰ž“¯±âÁÌmÅmÇ š6›‰˜Â¤ ˜$›œ´‰˜3›œž$>ž$Ÿ£$›‰—™¢?¥í›œ«Œ£¡Ÿž3š6›‰˜$˜$›œ´‰%šªa˜$£Â¤­Í6§¨£¡Ÿž)—™¢?´¶ e…g • – ˜¡›œžÐº)–2«Œª‰žË›‰¢?˜$³TŸž)˜³ —¨£$–6—™¢(£$–:ž$x¤ £¡ž)—¨Ÿ¹‰¥»š6›‰˜)˜$›œ´‰˜—™˜Æž$˜¡£¡ž)—™ºx£¡¥»£¡ª±£$–ªa˜¡ÀºŸ›‰¢?¥?—®¥?›œ£¡˜ ºxª‰ž$ž$˜$šªa¢6¥?—™¢´í£¡ªÙ£$–.x¼mš'ºx£¡¥ç›‰¢?˜¡³“Ÿž»£ © š‰¶‹èd« £$–Mx¼Zš'ºx£¡¥½›‰¢?˜¡³“Ÿž£ © š'—™˜› ¢?›‰µ€¥¢1£$—¨£ © ˜$°?ºÐ– ›‰˜d¨–©bªN«¬Å£$–vºŸ›‰¢6¥?—™¥?›œ£¡˜Ù¿˜­4TœÅ‹‚bj¥®cŠÃr›œž$ —™¥¢1£$—¨Í?¥Ê³3—¨£$–V›½¢?›‰µ€¥í¢Q£$—¨£ © ž$ºxª‰´a¢?—‘ŸŸž¶ÀՓªa¢Z¤ ¹‰Ÿž)˜¡§ © ÅZ—™« £$– ›‰¢?˜¡³“ŸžÀ£ © š —™˜“›2¯€«.°±ª²±G³±G©bª[ŝ‰¶ ´¶ ¯ ØœâaÌm¾ýzþÊÿ /w`6 %E,ZlÐÅ3£$–ÙºŸ›‰¢6¥?—™¥?›œ£¡˜.›œž$ ª‰Ä?£$›‰—™¢?¥BÄ © µr›œ£$º)–?—™¢?´v›v˜¡Ÿ£{ª‰«›‰¢?˜¡³“Ÿž%š6›œ£¡£¡ŸžÐ¢?˜ ªa¢V£$–%š6›‰˜$˜$›œ´‰˜Ÿ¶ eÙØ Ô›‰ºÐ–!ºŸ›‰¢?¥?—®¥?›œ£¡Ò›‰¢?˜¡³“Ÿž¸ž$ºx—¨¹‰˜›_ž$§¨x¤ ¹œ›‰¢?ºxV˜$ºxª‰ž$í›‰ºŸºxª‰ž)¥?—®¢´¦£¡ª¦§™x¼Z—™ºŸ›‰§À›‰¢?¥š?ž)ª¼—™µ½—¨£ © «~›œ£$°ž$˜>˜$°6º)–¸›‰˜>¥?—®˜¡£$›‰¢?ºxÄ'Ÿ£J³“Ÿ¢¸É‰ © ³“ª‰ž)¥?˜Å ª‰ž £$–ÊªZºŸºŸ°ž$ž$¢?ºxVª‰«3£$–VºŸ›‰¢?¥?—™¥6›œ£¡Ê›‰¢6˜¡³TŸž€³3—¨£$–?—™¢ ›‰¢í›œš?š'ªa˜$—™£$—¨ªa¢¶[•–>ºŸ›‰¢?¥?—®¥?›œ£¡˜M›œž$>˜¡ª‰ž$£¡¥v—™¢V¥x¤ ºxž$›‰˜$—®¢´€ª‰ž)¥Ÿžª‰«[£$–—™ž˜$ºxª‰ž$˜Ÿ¶ eÁŸâ •–?˜ © ˜¡£¡µß˜¡§™ºx£$˜H£$–TºŸ›‰¢?¥?—™¥?›œ£¡T›‰¢?˜¡³“Ÿž)˜ ³ 
—¨£$–í£$–%–?—¨´a–?˜¡£Mž$§¨Ÿ¹œ›‰¢?ºx{˜)ºxª‰ž$˜Ÿ¶À•–>Í6¢6›‰§ ›‰¢Z¤ ˜¡³“Ÿž)˜]›œž$—¨£$–?ŸžH«~ž)›œ´aµ€¢1£$˜Hª‰«£¡x¼Z£Æx¼Z£¡ž)›‰ºx£¡¥»«~ž$ªaµ £$–Àš6›‰˜$˜)›œ´‰˜›œž$ªa°?¢?¥»£$–ÀÄ'˜¡£ºŸ›‰¢?¥?—®¥?›œ£¡M›‰¢?˜¡³“Ÿž)˜ŸÅ ª‰ž£$– © ›œž${—™¢Q£¡ŸžÐ¢?›‰§™§ © ´‰¢?Ÿž)›œ£¡¥¶ ´ µ OQOaE OvACA'G­õSYPdS·¶ÂE O‹”ajkçbASYk6GJPdCk Szk6OQP­AG7S†õSz”‰k6ô ¸N:*< ¹ B>OJˆIK>ºžD0P»(B…B^¼N½„B^>@%ºžBPACH •–‹˜ © ˜$£¡µ ³À›‰˜¬£¡˜¡£¡¥:ªa¢ÜÁxùQúœâÑ1°?˜¡£$—¨ªa¢?˜.ºxªa§Û¤ §¨ºx£¡¥é«Œž$ªaµë• Ó Ô“Õ“¤ˆgmÅÀظ›‰¢6¥Ë•Ó ÔÀÕT¤dǜâ‰âÁœ¶ ²7¢Z¤ ˜¡³“Ÿž)˜³“Ÿž$x¼Z£¡ž)›‰ºx£¡¥¬«Œž$ªaµ!›%Ì%å7Ä © £¡ £¡x¼m£“ºxªa§™§¨º|¤ £$—¨ªa¢.ºxªa¢Q£$›‰—®¢?—™¢´»›œÄªa°?£3Á7µ½—™§™§™—™ªa¢€¥ªmºŸ°6µ€¢Q£$˜T«~ž$ªaµ ˜¡ªa°žÐºx˜Ù˜)°?º)–›‰˜l¾ ªa˜¦²3¢?´‰§¨˜¦•—™µ€˜Ù›‰¢?¥ p ›‰§™§ ·m£¡ž$ŸŸ£¿‰ªa°ž)¢6›‰§È¶Ô›‰º)–‹›‰¢?˜¡³“Ÿž –?›‰˜¦œârÄ © £¡˜Ÿ¶ •–?7›‰ºŸºŸ°žÐ›‰º © ³À›‰˜Mµ€›‰˜$°ž$¥.Ä © £$–’e흛‰¢VÓ x¤ ºŸ—¨š?ž)ªmºŸ›‰§MÓ3›œ£¡¿*e‹Ó Ó7ýµ€Ÿ£¡ž)—™ºV°6˜¡¥ËÄ © q3è¡·•F—™¢ £$–:•Ó ÔÀÕ ¯±²fŸ¹œ›‰§™°?›œ£$—¨ªa¢?˜ ¿~ת1ª‰ž)–?Ÿ˜ŸÅ¸Á؉؉ØaÃж •– ž)ºŸ—¨š?ž$ªZºŸ›‰§ž)›‰¢ÉZ—™¢´Ä6›‰˜$—™ºŸ›‰§®§ © ›‰˜$˜)—¨´a¢?˜M›€¢m°?µ¤ Ä'ŸžVÑ1°6›‰§±£¡ªðÁCÀYÓ ³ –?Ÿž$¸Ó —™˜Ê£$–¸žÐ›‰¢É骉«»£$– ºxª‰ž$ž$ºx£›‰¢6˜¡³TŸž†¶î±¢6§ © £$–ÀÍ?ž)˜¡£y7›‰¢?˜¡³“Ÿž)˜[›œž$Mºxªa¢Z¤ ˜$—™¥?Ÿž$¥Å£$–1°?˜7Ó —™˜7§¨˜$˜3ª‰ž±Ñ1°6›‰§Æ£¡ª·m¶ p –¢Ù£$– ˜ © ˜¡£¡µ¥ª1˜]¢ª‰£Æž$Ÿ£$°ž)¢{› ºxª‰ž$ž$ºx£[›‰¢?˜¡³“Ÿž[—™¢>£¡ª‰šmÅ £$–Àš?ž$ºŸ—®˜$—¨ªa¢»˜$ºxª‰ž$M«~ª‰ž]£$–6›œ£Ñ1°˜$£$—¨ªa¢—™˜SŸŸž$ª¶•– ª†¹‰ŸžÐ›‰§™§˜ © ˜¡£¡µïš?ž$ºŸ—™˜)—¨ªa¢r—™˜“£$– µ€›‰¢Êª‰«H£$– —™¢?¥?—Û¤ ¹Z—™¥?°?›‰§M˜$ºxª‰ž)˜Ÿ¶Ü· © ˜¡£¡µ=›‰¢?˜$³TŸž)˜¬³“Ÿž$‹µ€›‰˜)°ž$¥ ›œ´a›‰—™¢?˜$£ ºxª‰ž$ž$ºx£ ›‰¢6˜¡³TŸžÐ˜Mš?ž$ªz¹m—™¥?¥ÊÄ © q3è¡·Z•>¶ ¸N:E9 ÁÂI²ÃyMSFBlB>O>8IK>8H •–ö—™¢6˜¡š'ºx£$—¨ªa¢!ª‰«¦—™¢1£¡Ÿž)¢?›‰§.£¡ž)›‰ºx˜ŸÅٛœ£Ë¹Y›œžÐ—¨ªa°?˜ º)–?º$Émšªa—®¢Q£$˜Æ—™¢?˜¡Ÿž$£¡¥%›œ«Œ£¡ŸžÆ›‰º)–»µ€ªm¥6°?§¨[«Œž)ªaµßø[—¨´œ¤ °ž$€ÁœÅž$Ÿ¹‰›‰§®˜M£$– ˜ © ˜¡£¡µNŸž$ž$ª‰žÐ˜À«Œª‰žM›‰ºÐ–흟¹œ›‰§™°?›Y¤ £$—¨ªa¢íÑm°˜¡£$—™ªa¢¶•–±´‰ªa›‰§H—®¢Ê£$–?—™˜Àx¼ZšŸžÐ—™µ€¢1£—™˜À£¡ª —™¥¢1£$—¨« © £$–?r›œž)§™—™˜¡£{µ€ªZ¥?°?§™½—™¢B£$–.º)–?›‰—®¢ ¿~«~ž$ªaµ §¨Ÿ«~£½£¡ªBž)—™´a–Q£Ðû£$–6›œ£rš?ž$Ÿ¹‰¢1£$˜½£$–?‹˜ © ˜¡£¡µ £¡ª`Í6¢?¥ £$–>žÐ—¨´a–Q£›‰¢6˜¡³TŸž†Å?—ȶ ‰¶ºŸ›‰°?˜¡˜£$–>Ÿž$ž$ª‰ž†¶ ²7˜ ˜$–?ª†³ ¢‹—™¢í•]›œÄ6§¨{ÇmÅÑ1°˜$£$—¨ªa¢Vš?ž$x¤­š6ž$ªmºx˜)˜$—™¢´ —™˜3ž$˜¡š'ªa¢?˜$—™Ä6§¨±«Œª‰ž>û1¶™Á†ü ª‰«£$–»Ÿž$ž$ª‰ž)˜7¥?—™˜$£¡ž)—¨Ä6°£¡¥ ›‰µ€ªa¢´>µ½ªm¥?°6§¨teÁ>¿ÂÁœ¶ŠØaü½Ã[›‰¢?¥eÙǀ¿*m¶ŠÌaü½ÃжŒevªa˜¡£ Ÿž$ž$ª‰žÐ˜]—®¢{µ€ªZ¥?°?§™e‹Ç7›œž$M¥?°?£¡ª —™¢?ºxª‰ž$ž)ºx£Æš6›œžÐ˜$—™¢´ ¿~ù¶‡aü½Ãж.•M³TªVª‰«À£$–€£¡¢µ€ªZ¥?°?§¨˜€¿*eÙÌV›‰¢?¥že…aà ›‰ºŸºxªa°?¢1£€«Œª‰ž½µ½ª‰ž$Ê£$–?›‰¢Ë–?›‰§¨«ª‰«3£$–ÊŸž$ž$ª‰ž)˜¶B•– «Ö›‰—™§™°ž$ª‰«?—¨£$–Ÿž[µ€ªZ¥?°?§¨µr›œÉ‰˜]—™£]–?›œžÐ¥¬¿~ª‰ž—™µ€š'ªa˜Â¤ •]›œÄ6§¨%Çm¾Œ†±—™˜¡£¡žÐ—¨Ä6°£$—¨ªa¢.ª‰«[Ÿž$ž$ª‰žÐ˜ÀšŸž3˜ © ˜$£¡µFµ€ªZ¥?°?§¨ evªZ¥?°?§¨ evªm¥?°6§¨ ¥ŸÍ6¢?—¨£$—™ªa¢ Ôž$ž$ª‰ž)˜>¿­ü½Ã ¿*eÁ†Ã Ä  © ³Tª‰žÐ¥Vš?ž$x¤­š?ž)ªmºx˜$˜)—™¢´í¿Ö˜¡š6§™—™£ ÀzÄ6—™¢?¥0ÀY˜$š§®§?º)–º)Éà Áœ¶ŠØ ¿*eÙÇaà Փªa¢?˜$£¡ž)°?ºx£$—¨ªa¢Êª‰«—™¢1£¡Ÿž)¢?›‰§Ñm°˜¡£$—¨ªa¢Êž)Ÿš?ž$˜¡¢1£$›œ£$—¨ªa¢ m¶ŠÇ ¿*eÙÌaà †7Ÿž)—¨¹œ›œ£$—¨ªa¢Vª‰«]x¼mš'ºx£¡¥‹›‰¢?˜¡³“Ÿž£ © š' ̉úm¶ ù ¿*evù1à Ġ © ³Tª‰žÐ¥í˜¡§¨ºx£$—¨ªa¢¿Ö—™¢?ºxª‰ž$ž$ºx£$§ © ›‰¥?¥?¥Vª‰žx¼ºŸ§™°?¥¥9à gm¶ŠØ ¿*e…aà Ġ © ³Tª‰žÐ¥Vx¼mš9›‰¢?˜$—¨ªa¢V¥˜)—¨ž)›œÄ6§¨±Ä6°?£Àµr—™˜$˜$—™¢?´ Çm¶ û ¿*eÙúaà ²7ºx£$°?›‰§ž$Ÿ£¡ž)—™Ÿ¹Y›‰§¿Ö§®—™µ½—¨£“ªa¢íš6›‰˜)˜$›œ´‰%¢m°?µ%Ä'ŸžMª‰ž ˜)—‘Ÿzà Áœ¶Šú ¿*e¦û‰Ã ì ›‰˜$˜$›œ´‰{šªa˜$£Â¤­Í6§¨£¡Ÿž)—™¢?´V¿Ö—®¢?ºxª‰ž$ž$ºx£$§ © ¥?—™˜$ºŸ›œžÐ¥¥9à Áœ¶Šú ¿*e…gaà èÂ¥¢Q£$—™Í6ºŸ›œ£$—¨ªa¢Êª‰«ºŸ›‰¢?¥?—™¥6›œ£¡>›‰¢?˜¡³“Ÿž)˜ gm¶ â ¿*eÙØaà ²7¢?˜¡³“Ÿžž)›‰¢ÉZ—™¢´ úm¶ŠÌ ¿*eÁŸâQà 
²7¢?˜¡³“Ÿž«Œª‰žÐµ{°?§®›œ£$—¨ªa¢ ù¶ ù ˜$—¨Ä9§¨zÃ7«~ª‰ž˜)°Ä6˜¡Ñm°¢1£%µ€ªZ¥?°?§¨˜%£¡ªÙš'Ÿž$«~ª‰ž)µ £$–—¨ž £$›‰˜¡É¶ p –¢Ÿ¹‰Ÿž»£$–½¥Ÿž)—¨¹œ›œ£$—¨ªa¢¦ª‰«À£$–½x¼Zš'ºx£¡¥ ›‰¢?˜¡³“Ÿž“£ © š¿Öµ€ªZ¥?°?§¨ eÙÌaë~›‰—™§®˜ŸÅQ£$–7˜¡Ÿ£ª‰«ÆºŸ›‰¢?¥?—Û¤ ¥?›œ£¡7›‰¢?˜¡³“Ÿž)˜T—®¥¢Q£$—™Í?¥€—™¢½£$–? ž$Ÿ£¡ž)—™Ÿ¹‰¥½š6›‰˜$˜)›œ´‰˜ —™˜M—™£$–Ÿžµ€š?£ © —™¢íÇgm¶ŠÇaüNª‰«[£$–>ºŸ›‰˜$˜»¿~³ –?¢V£$– ›‰¢?˜¡³“Ÿž£ © š'%—™˜ °?¢ÉZ¢ª†³ ¢Ãª‰ž7ºxªa¢Q£$›‰—®¢?˜£$– ³ ž$ªa¢´ ¢1£$—¨£$—¨˜«~ª‰žgm¶ŠÇaüF¿~³3–¢£$– ›‰¢?˜$³TŸž£ © š'—™˜—®¢?ºxª‰ž¡¤ ž$ºx£ÐÃжè «H£$–? É‰ © ³“ª‰ž)¥?˜M°?˜$¥Ê«Œª‰žMš6›‰˜)˜$›œ´‰ ž$Ÿ£¡ž)—¨Ÿ¹œ›‰§ ›œž$¢ª‰£7x¼mš6›‰¢6¥¥í³ —™£$–v£$–»˜¡µ½›‰¢1£$—™ºŸ›‰§™§ © ž)§™›œ£¡¥ «~ª‰ž)µ½˜ªZºŸºŸ°ž$ž)—®¢´—™¢±£$–›‰¢6˜¡³TŸžÐ˜T¿Öµ€ªZ¥?°?§¨Œe…aÃÐŜ£$– ž$§¨Ÿ¹œ›‰¢1£š6›‰˜$˜$›œ´‰˜3›œž$%µ½—®˜$˜¡¥¶ •–?v˜¡§¨ºx£$—¨ªa¢éª‰« É‰ © ³“ª‰ž)¥?˜¬«~ž$ªaµ £$–Ù—™¢1£¡Ÿž)¢?›‰§ Ñm°˜¡£$—¨ªa¢(ž)Ÿš?ž$˜¡¢1£$›œ£$—¨ªa¢¿Öµ€ªZ¥?°?§™–evù1øºxªa°?š6§¨¥ ³ —¨£$–¦£$–?€É‰ © ³“ª‰ž)¥Bx¼Zš6›‰¢?˜)—¨ªa¢Ë¿Öµ€ªm¥6°?§¨2ewañ´‰¢Z¤ Ÿž)›œ£¡VÌYù¶Šúaü ª‰«“£$–?rŸž$ž$ª‰žÐ˜Ÿ¶VæTª‰£$–£$–˜¡.µ€ªZ¥?°?§¨˜ ›87'ºx£>£$–ªa°£¡š6°£±ª‰«Tš6›‰˜$˜$›œ´‰€ž$Ÿ£¡žÐ—¨Ÿ¹Y›‰§­Å'˜)—™¢?ºx£$– ˜¡Ÿ£]ª‰«?ž$Ÿ£¡ž)—™Ÿ¹‰¥{š9›‰˜$˜$›œ´‰˜[¥Ÿš'¢?¥?˜Hªa¢{£$–?“擪mªa§¨›‰¢ Ñm°Ÿž)—¨˜ Ä6°?—™§¨£3›‰¢?¥v˜$°?Ä6µ½—¨£¡£¡¥v£¡ª.£$–?%ž$Ÿ£¡ž)—™Ÿ¹Y›‰§Æ¢Z¤ ´a—™¢ Ä © £$–»¯±²˜ © ˜¡£¡µV¶ evªm¥6°?§¨˜€e‹úٛ‰¢?¥—eÙû‹›œž$Êž$˜$šªa¢6˜$—¨Ä6§¨€«~ª‰ž£$– ž$Ÿ£¡ž)—™Ÿ¹Y›‰§1ª‰«š6›‰˜$˜$›œ´‰˜]³ –?Ÿž$T›‰¢?˜$³TŸž)˜]µ½› © ›‰ºx£$°?›‰§™§ © ªZºŸºŸ°ž¶Æ• –—¨ž ºxªaµ{Ä6—™¢¥±Ÿž$ž)ª‰ž)˜ —™˜HÌm¶ŠÇaüʶÆè¢>µ€ªZ¥?°?§¨ eÙú£$–Ÿž$%›œž)>š6›œž)›‰µ€Ÿ£¡ŸžÐ˜£¡ªrºxªa¢1£¡ž$ªa§ £$–?%¢1°6µ%Ä'Ÿž ª‰«ž$Ÿ£¡ž)—™Ÿ¹‰¥v¥ªZºŸ°?µ€¢1£$˜ ›‰¢?¥vš6›‰˜$˜$›œ´‰˜Å›‰˜ ³T§™§Æ›‰˜ £$–%˜)—‘Ÿ ª‰«Æ›‰ºÐ–vš9›‰˜$˜$›œ´‰‰¶ ²7¢?˜¡³“Ÿžçš?ž$ªZºx˜$˜$—™¢´—™˜¥ªa¢_—™¢Úµ½ªm¥?°6§¨˜Åe…g £$–ž$ªa°?´a–leÁŸâZ¶ p –?¢¦£$–x¼mš'ºx£¡¥B›‰¢?˜$³TŸž £ © š' —™˜íºxª‰ž)ž$ºx£$§ © ¥Ÿ£¡ºx£¡¥Å%£$–—™¥?¢Q£$—¨Í9ºŸ›œ£$—¨ªa¢_ª‰«€£$– ºŸ›‰¢?¥?—®¥?›œ£¡»›‰¢6˜¡³TŸžÐ˜¿Öµ½ªm¥?°6§¨/e…gaà š?ž$ªZ¥?°?ºx˜fgm¶ âQü Ÿž$ž$ª‰žÐ˜Ÿ¶áÌm¶™Á†ü Ÿž$ž)ª‰ž)˜¦›œž$¥6°`£¡ªö¢?›‰µ€¥Ü¢1£$—¨£ © ž$ºxª‰´a¢?—™£$—¨ªa¢ð¿Ö—™¢?ºxªaµ€š9§¨Ÿ£¡V¥?—™ºx£$—¨ªa¢?›œžÐ—¨˜Ðà›‰¢6¥çù¶ŠØaü ›œž$¸¥?°Ù£¡ªË˜¡š6°žÐ—¨ªa°?˜r›‰¢6˜¡³TŸžVš9›œ£¡£¡Ÿž)¢öµ½›œ£$º)–6—™¢´¶ evªm¥?°6§¨˜2e‹ØB›‰¢?¥›eÁŸâB«Ö›‰—™§À£¡ªBž)›‰¢É£$–vºxª‰ž)ž$ºx£ ›‰¢?˜¡³“Ÿž.³ —¨£$–?—™¢Ò£$–‹£¡ª‰š™`ž$Ÿ£$°ž)¢¥ —™¢ßÁŸâZ¶ û‰ü ª‰« £$–¦ºŸ›‰˜¡˜Ÿ¶ÆeíªZ¥?°?§™…eÙØB«Ö›‰—™§™˜¬—¨« £$–¦ºxª‰ž$ž$ºx£V›‰¢Z¤ ˜¡³“Ÿž{ºŸ›‰¢?¥6—™¥?›œ£¡r—™˜{¢ª‰£>ž)›‰¢?ɉ¥B³ —¨£$–6—™¢¦£$–r£¡ª‰š—mÅ ³ –Ÿž)›‰˜eÁŸâr«Ö›‰—™§™˜3—¨«[£$–»ž$Ÿ£$°ž)¢¥‹›‰¢?˜¡³“Ÿž7˜$£¡ž)—™¢´ —™˜{—™¢?ºxªaµ€š6§™Ÿ£¡‰ÅÆ¢?›‰µ€§ © —™£%¥ªm˜%¢ª‰£{Í?£>³ —¨£$–6—™¢kœâ Ä © £¡˜Ÿ¶ ¸N:ÈÇ É+BH#I0Mb>8»(BlB^>8>OI0>OH •–‹˜¡ºxªa¢?¥é˜$Ÿ£rª‰«±x¼mš'Ÿž)—®µ€¢Q£$˜rºxªa¢?˜$—™˜¡£$˜rª‰« ¥?—™˜Â¤ ›œÄ6§™—®¢´3£$–? µ½›‰—™¢€¢?›œ£$°?ž)›‰§§™›‰¢´a°?›œ´‰ ž$˜¡ªa°ž)ºx˜°?˜¡¥ —™¢Ù£$–r¯±²Ú˜ © ˜$£¡µVÅH¢?›‰µ€§ © £$–½›‰ºŸºx˜$˜ £¡ª p ª‰žÐ¥Z¤ q3Ÿ£›‰¢?¥!£$–é¢?›‰µ€¥!¢Q£$—™£ © ž$ºxª‰´a¢?—ȐŸŸžÅr£¡ª›‰˜Â¤ ˜¡˜$˜[£$–?—¨ž]—®µ€š6›‰ºx£]ªa¢£$–Àª†¹‰Ÿž)›‰§™§›‰¢6˜¡³TŸž›‰ºŸºŸ°žÐ›‰º © ¶ q3ª‰£¡.£$–6›œ£%£$–.š6›œž)˜¡Ÿž—™˜{›‰¢—™¢1£¡Ÿ´‰ž)›‰§Tš6›œž$£{ª‰«Mªa°ž Ñm°˜¡£$—¨ªa¢{š?ž$ªZºx˜$˜$—™¢?´3µ€ªZ¥§m›‰¢?¥%£$–?Ÿž$Ÿ«Œª‰ž)“—¨£[—™˜]—™µ¤ š?ž)›‰ºx£$—®ºŸ›‰§£¡ªr¥?—™˜)›œÄ6§¨ —¨£Ÿ¶ †7¢ª‰£¡ö³ —™£$–Ê!£$–:Ä6›‰˜¡§®—™¢ ˜ © ˜¡£¡µ š'Ÿž$«~ª‰ž¡¤ µ½›‰¢?ºx±³ –?¢.›‰§®§6ž$˜$ªa°ž)ºx˜M›œž$±¢?›œÄ6§™¥¶•–7š?ž$x¤ ºŸ—™˜$—™ªa¢`˜$ºxª‰ž$¸¿*e‹Ó Ó7Ã%¥ž$ª‰š9˜>£¡ª—Š(U(œ^!—¨« p ª‰ž)¥.q3Ÿ£ —™˜{¥?—™˜$›œÄ6§™¥¶.• –¬¥ŸžÐ—¨¹Y›œ£$—™ªa¢¸ª‰«£$–.›‰¢?˜¡³“Ÿž{£ © š' ¿Öµ€ªZ¥?°?§¨xeÙÌaÃr›‰¢?¥Òɉ © 
³Tª‰žÐ¥éx¼Zš6›‰¢?˜$—™ªa¢ß¿Öµ€ªZ¥?°?§¨ e…aÃ.«Œž)ªaµø]—™´a°ž$ËÁ¸›œž$¸£$–¸£d³Tªéµ½ªm¥?°6§¨˜.£$–?›œ£ ›œž$µ€ªa˜¡£‹—®¢6Ë6°¢?ºx¥_Ä © p ª‰ž)¥.q3Ÿ£Ÿ¶ øª‰žÙx¼›‰µ¤ š6§¨‰Å'£$– p ª‰ž)¥Kq Ÿ£>¢ªa°6¢Ù–6—¨Ÿž)›œž)ºÐ–?—¨˜±˜¡š'ºŸ—¨« © £$–?›œ£ £$–Vºxªa¢?ºxŸš?£/[ R ) À—™˜›¦˜¡š'ºŸ—™›‰§™—‘›œ£$—¨ªa¢ª‰«/}C* -)|Å ³ –?—®º)–B—™¢B£$°ž)¢B—™˜{›íÉZ—™¢?¥¦ª‰«t[6")?¶í•–?r›‰¢?˜¡³“Ÿž £ © š'«Œª‰ž%¯±âaÌaû1¾çý†þÊÿ n$Œ82 ~ÿ0Zd)3\/ ~ÿ6W‚Sj ÿR *") [; Y €[0 R ) ’Ðÿ) €()$o™)}+‰) ~ÿžÌ€)"; —™˜· ¦")?¶!• –Ù˜ © ˜¡£¡µºŸ›‰¢6¢ª‰£V¥Ÿž)—¨¹‰Ù£$–¸›‰¢Z¤ ˜¡³“Ÿž£ © š'Mºxª‰ž$ž$ºx£$§ © °?¢?§¨˜)˜]—™£–?›‰˜›‰ºŸºx˜$˜£¡ª p ª‰žÐ¥Z¤ q3Ÿ£“–?—™Ÿž)›œž)º)–6—¨˜Ä'ºŸ›‰°?˜$ £$–±›‰µ%Ä9—¨´a°ªa°?˜“Ñ1°?˜¡£$—¨ªa¢ ˜¡£¡µ þVÿ ?›‰§¨ªa¢?À¥?ª1˜¢?ª‰£š?ž$ª†¹Z—™¥M›‰¢ © ºŸ§®°M›‰˜£¡ª ³ –?›œ£]£$–“x¼mš'ºx£¡¥€›‰¢?˜$³TŸž]£ © š'À—®˜Ÿ¶Æ²ÒºŸ§™ªa˜¡Ÿž›‰¢?›‰§Û¤ D N NP NP NP Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Í‹Í‹Í Î‹Î Î‹Î Î‹Î Î‹Î Î‹Î Î‹Î Î‹Î Î‹Î Î‹Î Î‹Î Î‹Î Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ï‹Ï Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ð‹Ð Ñ‹Ñ‹Ñ Ñ‹Ñ‹Ñ Ñ‹Ñ‹Ñ Ñ‹Ñ‹Ñ Ñ‹Ñ‹Ñ Ò‹Ò‹Ò Ò‹Ò‹Ò Ò‹Ò‹Ò Ò‹Ò‹Ò Ò‹Ò‹Ò Ó‹Ó Ó‹Ó Ó‹Ó Ó‹Ó Ó‹Ó Ó‹Ó Ó‹Ó Ô‹Ô Ô‹Ô Ô‹Ô Ô‹Ô Ô‹Ô Ô‹Ô Ô‹Ô Õ‹Õ Õ‹Õ Õ‹Õ Õ‹Õ Õ‹Õ Õ‹Õ Õ‹Õ Õ‹Õ Ö‹Ö Ö‹Ö Ö‹Ö Ö‹Ö Ö‹Ö Ö‹Ö Ö‹Ö Ö‹Ö ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× ×‹×‹× Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ø‹Ø‹Ø Ù ÙÚ Ú Û ÛÜ Ü 50 20 200 0.421 0.419 0.414 0.36 0.37 0.38 0.39 0.40 0.41 0.42 0.369 0.374 0.376 0.400 0.392 Precision (MRR) 0.400 = 50 = 200 = 500 ø[—¨´a°ž$%Çm¾è µ€š9›‰ºx£Mª‰«[µr›Y¼Z—™µ»°?µN¢m°?µ{ĝŸžMª‰«[¥ªZºŸ°Z¤ µ€¢1£$˜ ›‰¢?¥Êš9›‰˜$˜$›œ´‰˜3š?ž$ªZºx˜$˜¡¥ © ˜$—™˜½˜$–?ª†³ ˜¬£$–?›œ£r£$–?vš'Ÿž$«~ª‰ž)µ½›‰¢?ºxv¥?ž$ª‰šé—™˜¬µ€ª‰ž$ ˜$—¨´a¢6—¨Í6ºŸ›‰¢1£]«~ª‰ž£$–íþVÿ ?Ñm°˜¡£$—¨ªa¢6˜Ÿ¶ p –¢ p ª‰žÐ¥Z¤ q3Ÿ£]—™˜Æ¥?—™˜$›œÄ9§¨¥Å£$–ƒevÓ3Ó«Œª‰ž]£$–¬þVÿ mÑ1°˜$£$—¨ªa¢?˜ ¥ž$ª‰š9˜À£¡ª…ŠUr.ÝO! ›‰˜ºxªaµ€š6›œž$¥v£¡ªwŠUG(œ4! 
«Œª‰žM£$–?>¢Z¤ £$—¨ž$ ˜¡Ÿ£Ÿ¶T•–?—™˜Tž$˜$°?§™£À—®¢?¥?—™ºŸ›œ£¡˜“£$–?›œ£M£$– ›†¹Y›‰—™§®›œÄ6—™§Û¤ —¨£ © ª‰«9§™x¼Z—™ºxªœ¤J˜$µ½›‰¢Q£$—®º“—®¢«Œª‰žÐµ½›œ£$—¨ªa¢{ĝºxªaµ½˜µ€ª‰ž$ —™µ€š'ª‰ž$£$›‰¢1£M«~ª‰ž ¥?—¨÷rºŸ°6§¨£“Ñm°˜¡£$—™ªa¢?˜Ÿ¶ æ © ¥?—™˜)›œÄ6§™—™¢´±£$–±¢?›‰µ€¥r¢Q£$—™£ © ž$ºxª‰´a¢6—‘ŸŸžÅZ£$– ›‰¢?˜¡³“Ÿž.š?ž$ªZºx˜$˜$—™¢´§™›‰º$ÉZ˜r£$–?‹˜¡µ½›‰¢1£$—™ºÙ—™¢«~ª‰ž)µ½›Y¤ £$—¨ªa¢2¢ºx˜$˜$›œž © £¡ªê—™¥¢1£$—¨« © ºŸ›‰¢?¥6—™¥?›œ£¡_›‰¢?˜¡³“Ÿž)˜Ÿ¶ ¾ ªmªa˜¡ ›œš6š?ž$ª¼—™µ½›œ£$—™ªa¢?˜[«Œª‰ž£$–ºŸ›‰¢?¥?—™¥6›œ£¡›‰¢?˜¡³“Ÿž)˜ ›œž$Mºxªaµ€š6°?£¡¥{Ä9›‰˜¡¥˜¡£¡ž)—™ºx£$§ © ªa¢{ɉ © ³“ª‰ž)¥?˜µ½›œ£$ºÐ–Z¤ —™¢´¶è¢V£$–?—®˜MºŸ›‰˜¡>£$–>š6ž$ºŸ—™˜$—¨ªa¢Ê¥ž)ª‰š6˜M£¡ª…ŠUrc!Ÿ¶ Þ Ï ô“ßAXm”vEN¶{S†õSz”‰k6ô ßTAOQA'ô:k”‰k?O1S •–Ñm°?›‰¢1£$—¨£$›œ£$—¨¹‰3šŸž)«Œª‰ž)µr›‰¢?ºxMª‰«9£$–?7¯±²_˜ © ˜¡£¡µ —™˜{§™›œž$´‰§ © ¥Ÿš'¢?¥¢1£ ªa¢£$–¬›‰µ€ªa°6¢Q£{ª‰«M£¡x¼m£»ž$x¤ £¡ž)—¨Ÿ¹‰¥‹«~ž$ªaµ£$–?»¥ªZºŸ°?µ€¢1£3ºxªa§®§¨ºx£$—¨ªa¢ã£$–µ€ª‰ž$ £¡x¼Z£“—™˜ž$Ÿ£¡ž)—¨Ÿ¹‰¥.£$–3Ä'Ÿ£¡£¡ŸžT£$–±º)–6›‰¢?ºx3ª‰«Í9¢?¥?—™¢´ £$–“›‰¢?˜¡³“Ÿž¶]ä3ª†³“Ÿ¹‰ŸžÅaš6ž)›‰ºx£$—™ºŸ›‰§¯±²Ë˜ © ˜¡£¡µ½˜]ºŸ›‰¢Z¤ ¢ª‰£%›87'ª‰ž)¥`£¡ªv›œš?š6§ © £$—™µ€½ºxªa¢6˜$°?µ½—™¢?´·q ¾ ì £¡ºÐ–Z¤ ¢?—™Ñm°˜¿~˜¡š'ºŸ—™›‰§®§ © š6›œžÐ˜$—™¢´1Ã6£¡ª¹‰Ÿž © §™›œž)´‰›‰µ€ªa°?¢1£$˜ ª‰«Z£¡x¼m£Ÿ¶Ó Ÿ£¡ž)—¨Ÿ¹œ›‰§‰š6›œž)›‰µ€Ÿ£¡ŸžÐ˜H›œž$°6˜¡¥ £¡ª š6ž$ª†¹Z—™¥ £¡ž)›‰¥x¤­ª7˜ÙÄ'Ÿ£J³“Ÿ¢ß£$–›‰µ€ªa°?¢1£vª‰«€£¡x¼Z£vš6›‰˜)˜¡¥ —™¢1£¡ª3£$–?“›‰¢?˜$³TŸž[š?ž$ªZºx˜$˜$—®¢´3µ€ªZ¥?°?§™›‰¢?¥»£$–À›‰ºŸºŸ°Z¤ ž)›‰º © ª‰«£$–»x¼m£¡žÐ›‰ºx£¡¥B›‰¢?˜¡³“Ÿž)˜Ÿ¶>•–?€¯±²Ú˜ © ˜¡£¡µ –?›‰˜M£$–?>«Œªa§®§¨ª†³ —®¢´5# % - } R [Z Y"|¾ àWá€â ¤7£$–¬µ½›Y¼—™µ»°?µ ¢m°?µ{Ä'Ÿž%ª‰«¥ªZºŸ°?µ€¢1£$˜ ž)Ÿ£¡ž)—¨Ÿ¹‰¥f«~ž$ªaµ ›ë˜$°Ä¤Jºxªa§®§¨ºx£$—¨ªa¢ ¿Ö¥Ÿ«Ö›‰°?§¨£ ¹œ›‰§™°%ǜâ‰â½«Œª‰ž›‰ºÐ–vª‰«MÁǀ˜)°Ä¤Jºxªa§™§¨ºx£$—™ªa¢?˜ÐÃÐãã ä-å;æOç0è8é ê*ê-é"ë,ç0ì-çˆí*ì-î‡ç˜ï,é"ð ç3ñ8ë îòñ8çKó{é ñé"ë,ç3ê4í-æ8çKå;ôbõö÷Oøù˜ú û ónç˜ñ#í;ù3ø,ðòðòç3ù˜í-îòø,ñ é"êétê*ç˜íø ü²ýþê*ç3èéìYé"í-ç„ê û8ÿ úEù3ø,ðòðòç3ù˜í-îòø,ñOê 265 SP Precision (MRR) 3 10 20 40 6 110 59 43 32 Exec time (sec) 0.400 0.411 0.421 0.401 0.387 (nr. 
extra lines) precision time ø[—¨´a°ž$¦Ìm¾`赀š6›‰ºx£.ª‰«>š6›‰˜$˜)›œ´‰¦˜)—‘Ÿ‹ªa¢:š?ž)ºŸ—™˜$—¨ªa¢ ›‰¢?¥Vx¼ZºŸ°£$—¨ªa¢í£$—™µ€ àWᤣ$–“µr›Y¼Z—™µ»°?µÜ¢1°?µ{Ä'ŸžHª‰«š6›‰˜)˜$›œ´‰˜]š?ž$ªœ¤ ºx˜)˜¡¥ß£¡ªö—™¥?¢Q£$—¨« © ºŸ›‰¢?¥?—™¥?›œ£¡›‰¢?˜¡³“Ÿž)˜ç¿Ö¥x¤ «Ö›‰°?§¨£À¹œ›‰§™°œâ‰âQÃж à¤F£$–?=˜$—‘Ÿ ›‰§™§™ª†³À›‰¢?ºx «~ª‰ži›‰º)– ž$x¤ £¡žÐ—¨Ÿ¹‰¥š6›‰˜$˜$›œ´‰B¿Ö¥Ÿ«Ö›‰°?§¨£»¹Y›‰§®°ÙÁŸâ¦§™—®¢˜>Ä'x¤ «~ª‰ž$£$–?›œž)§™—¨˜$£ ›‰¢?¥%›œ«~£¡ŸžH£$–T§™›œ£¡˜¡£ ɉ © ³Tª‰žÐ¥ µr›œ£$º)–9ÃÐã p –¢ á€â ›‰¢?¥ ᛜž$r˜¡Ÿ£%£¡ª‹˜$µ½›‰§®§¨Ÿž ¹Y›‰§®°˜ŸÅ £$–Êx¼ZºŸ°£$—¨ªa¢ç£$—™µ€V—™˜§¨ªz³TŸžrÄ6°£»ž$§¨Ÿ¹œ›‰¢Q£€¥ªZºŸ°Z¤ µ€¢1£$˜¬›‰¢6¥Òš6›‰˜$˜)›œ´‰˜¬µ½› © Ä'vµ½—™˜$˜$¥¶:ø[—¨´a°?ž$‹Ç —™§™§®°?˜¡£¡ž)›œ£¡˜[£$–—®µ€š6›‰ºx£ª‰«'£$–Mš6›œž)›‰µ½Ÿ£¡Ÿž)˜ á€â ›‰¢?¥ á ªa¢>£$–Tš?ž$ºŸ—™˜$—™ªa¢>ºxªaµ€š6°£¡¥%ª†¹‰Ÿž]£$–¢1£$—¨ž$T˜¡Ÿ£ ª‰«±ÁxùQúœâí£¡˜¡£%Ñm°˜¡£$—™ªa¢?˜Ÿ¶€•–€–?—¨´a–?Ÿž £$–½¢1°6µ%Ä'Ÿž ª‰«H¥ªZºŸ°?µ€¢1£$˜“ž$Ÿ£¡ž)—¨Ÿ¹‰¥ Åm£$–?7–?—™´a–ŸžT£$–?3š?ž)ºŸ—™˜$—¨ªa¢ ˜$ºxª‰ž$‰¶çè £½—™˜›œš?š6›œž$¢1£€£$–?›œ£ á–6›‰˜½›¦ž$§™›œ£$—¨¹‰§ © ˜$µ½›‰§®§¨Ÿž3—™µ½š6›‰ºx£ ªa¢‹£$–»š?ž$ºŸ—™˜$—™ªa¢í£$–?›‰¢ á/â ¶7•–?—™˜ —™˜¥6°M£¡ª%£$– «~›‰ºx£T£$–?›œ££$– ž$Ÿ£¡ž)—¨Ÿ¹‰¥rš6›‰˜$˜$›œ´‰˜“›œž$ ž$x¤­ª‰ž)¥?Ÿž$¥>Ä6›‰˜$¥%ªa¢{› ˜¡Ÿ£]ª‰«?§¨x¼—™ºŸ›‰§‰«~›œ£$°ž$˜ŸÅ‰˜$°?ºÐ– £$–?›œ£3£$–{—™¥?¢Q£$—¨Í9ºŸ›œ£$—¨ªa¢Vª‰«£$–»ºŸ›‰¢?¥?—™¥?›œ£¡{›‰¢?˜¡³“Ÿž)˜ —™˜[š'Ÿž$«~ª‰ž)µ€¥»ªa¢€£$–M£¡ª‰š áž$x¤­ª‰ž)¥?Ÿž$¥€š6›‰˜$˜)›œ´‰˜Ÿ¶ ø[—¨´a°ž)±Ì»˜$–ªz³ ˜À£$–7š'ªa˜$˜$—¨Ä6§™ £¡ž)›‰¥x¤­ª7ÙÄ'Ÿ£J³“Ÿ¢ ª†¹‰ŸžÐ›‰§™§š6ž$ºŸ—™˜$—¨ªa¢¸›‰¢6¥Bx¼ZºŸ°£$—¨ªa¢£$—™µ€r›‰˜»›í«Ö°?¢?º|¤ £$—¨ªa¢çª‰«±£$–Vš6›‰˜)˜$›œ´‰v˜$—‘Ÿ ¶Ëè¢Q£¡Ÿž)˜¡£$—™¢´a§ © ÅT£$– –?—¨´a–?˜¡£Hš?ž)ºŸ—™˜$—¨ªa¢>˜)ºxª‰ž$TªZºŸºŸ°ž)˜]«~ª‰žÆ£$–?T¥Ÿ«Ö›‰°?§¨£]˜¡Ÿ£Â¤ £$—™¢´Å€ÁŸâZ¶ p –?¢  —®˜M˜$µ½›‰§™§¨Ÿž†Å1£$–>›‰¢6˜¡³TŸžÐ˜›œž$ µ½—™˜)˜¡¥¦Ä'ºŸ›‰°?˜¡€£$– © ¥ªí¢ª‰£ Í6£%—™¢¦£$–€ž$Ÿ£¡žÐ—¨Ÿ¹‰¥ š6›‰˜$˜)›œ´‰˜Ÿ¶ p –¢  —™˜»§™›œž$´‰ŸžÅ£$–Ê›‰ºx£$°?›‰§™§ © ž$§Û¤ Ÿ¹œ›‰¢Q£€£¡x¼Z£«Œž)›œ´aµ½¢Q£$˜€›œž$V˜$°?Ä6µ€Ÿž$´‰¥ç—™¢›¦§™›œž$´‰ ›‰µ€ªa°?¢1£ª‰«]£¡x¼m£Ÿ¶MՓªa¢?˜¡Ñm°¢1£$§ © £$–?%›‰¢?˜¡³“Ÿžž)›‰¢É1¤ —™¢´½µ½ªm¥?°6§¨½¿*e‹Ø½«Œž$ªaµø]—™´a°ž$½Á†ÃM˜$ª‰ž$£$˜M£$–ž$ªa°?´a–v› ¹‰Ÿž © §®›œž$´‰€¢1°?µ{Ä'Ÿž ª‰«ºŸ›‰¢6¥?—™¥?›œ£¡›‰¢?˜¡³“Ÿž)˜ŸÅ›‰¢?¥Ù—¨£ ¥ªm˜¢ª‰£“›‰§¨³À› © ˜ž)›‰¢?É»£$–?3ºxª‰ž$ž$ºx£“›‰¢?˜¡³“Ÿž)˜³3—¨£$–?—™¢ £$–>£¡ª‰š…ž$Ÿ£$°ž)¢?¥¶  Ï ô“ßAXm”vEN¶ k?k6IbAX S •–.ž$˜$°6§¨£$˜%š6ž$˜¡¢1£¡¥—®¢`š?ž)Ÿ¹m—¨ªa°6˜{˜¡ºx£$—™ªa¢?˜ºxª‰ž¡¤ ž$˜¡š'ªa¢?¥:£¡ªç£$–B˜¡ŸžÐ—™›‰§™—‘Ÿ¥ Ä6›‰˜¡§™—®¢Ù›œžÐº)–?—¨£¡ºx£$°?ž$ «~ž$ªaµ ø[—¨´a°?ž$Áœ¶ê•–?›œ£Ê›œž)ºÐ–?—¨£¡ºx£$°ž$¦—™˜.—™¢ «~›‰ºx£V› ˜$—™µ½š6§™—¨Í?¥r¹‰Ÿž)˜$—¨ªa¢íª‰«Æªa°ž˜ © ˜¡£¡µF³ –?—™ºÐ–V°?˜¡˜M˜¡Ÿ¹1¤ Ÿž)›‰§9«ŒŸ¥?Ä6›‰º$ÉZ˜À£¡ª{Ä'ªmªa˜¡££$–?3ªz¹‰Ÿž)›‰§™§š'Ÿž$«Œª‰žÐµ½›‰¢?ºx‰¶ ²7˜˜$–ª†³3¢—®¢ø[—¨´a°ž)¬ùÅ£$–?Ê›œžÐº)–?—¨£¡ºx£$°?ž$Ê³ —¨£$– «~Ÿ¥Ä6›‰º$ÉZ˜ x¼Z£¡¢?¥?˜ £$–?½˜¡Ÿž)—™›‰§™—ȐŸ¥Ù›œžÐº)–?—¨£¡ºx£$°?ž$½—™¢ ˜¡Ÿ¹‰Ÿž)›‰§Æ³À› © ˜Ÿ¶¦Ä  © ³Tª‰žÐ¥íx¼Zš6›‰¢?˜$—¨ªa¢¿Öµ€ªm¥6°?§¨?e…aà —™˜%¢?–?›‰¢?ºx¥£¡ª¸—™¢?ºŸ§™°?¥?½§¨x¼Z—®ºxªœ¤J˜¡µ½›‰¢1£$—™º¬›‰§™£¡Ÿž)¢?›Y¤ £$—¨ªa¢?˜r«Œž)ªaµ p ª‰ž)¥.q3Ÿ£Ÿ¶ð² ¢Ÿ³µ€ªZ¥?°?§™V«Œª‰ž.§¨ª‰´a—™º š?ž$ªz¹m—®¢´›‰¢6¥ $°?˜¡£$—™Í6ºŸ›œ£$—¨ªa¢ ª‰«>£$–¸›‰¢?˜¡³“Ÿž)˜Ê—™˜Ê—™¢Z¤ ˜¡Ÿž$£¡¥¬Ä'Ÿ«Œª‰ž) ›‰¢?˜¡³“ŸžTžÐ›‰¢Ém—®¢´¶Æè¢r›‰¥6¥?—¨£$—¨ªa¢ÅQ£$–ž$Ÿ §¨ªmª‰š6˜íÄ'ºxªaµ€ç›‰¢—™¢Q£¡Ÿ´‰žÐ›‰§%š6›œž)£vª‰«€£$–ç˜ © ˜¡£¡µV¾ £$–Êš6›‰˜$˜)›œ´‰Vž$Ÿ£¡ž)—¨Ÿ¹œ›‰§M§¨ªmª‰šö¿Ö§™ª1ª‰šöÁ†ÃÐãM£$–V§¨x¼—™ºxªœ¤ ˜¡µ½›‰¢1£$—™º¦§¨ªmª‰š ¿Ö§¨ª1ª‰šðÇaÃÐã%›‰¢6¥ £$–?Ù§¨ª‰´a—®º‹š6ž$ª†¹Z—™¢´ §¨ªmª‰š`¿Ö§¨ªmª‰š‹ÌaÃж ²7˜Êš6›œž$£.ª‰«{§¨ªmª‰š(ÁœÅ £$–`¯ ÀY² ˜ © ˜¡£¡µ›‰¥ $°?˜¡£$˜ 
擪1ªa§™›‰¢`Ñm°Ÿž)—¨˜±Ä'Ÿ«~ª‰ž$€š6›‰˜$˜$—®¢´Ê£$–µi£¡ªí£$–?½ž$x¤ £¡ž)—¨Ÿ¹œ›‰§¢´a—®¢‰¶½è «T£$–rªa°£¡š6°£>«~ž$ªaµ £$–?½ž$Ÿ£¡ž)—¨Ÿ¹œ›‰§ ¢´a—™¢?.—™˜{£¡ªmª¦˜)µ½›‰§™§ÈÅ›‹É‰ © ³“ª‰ž)¥Ë—™˜»¥ž$ª‰š?š'¥›‰¢?¥ ž$Ÿ£¡ž)—™Ÿ¹Y›‰§7ž$˜$°6µ€¥¶!è «%£$–?Ùªa°?£¡š6°£V—™˜Ê£¡ªmªÒ§™›œž$´‰‰Å ›»É‰ © ³“ª‰ž)¥V—™˜“›‰¥?¥¥Ê›‰¢?¥Ê›»¢Ÿ³ö—¨£¡Ÿž)›œ£$—¨ªa¢Ê˜¡£$›œž)£¡¥Å °?¢1£$—™§m£$–Mªa°£¡š6°?£˜$—‘ŸM—™˜¢—¨£$–?Ÿž£¡ª1ª%§®›œž$´‰‰ÅQ¢ª‰ž£¡ªmª ˜$µ½›‰§®§È¶ p –¢§¨x¼Z—®ºxªœ¤J˜¡µ½›‰¢1£$—™ºÊºxªa¢?¢ºx£$—¨ªa¢?˜{«~ž$ªaµ £$–ÙÑ1°˜$£$—¨ªa¢Ë£¡ª`£$–?vž$Ÿ£¡ž)—™Ÿ¹‰¥Òš6›‰˜$˜)›œ´‰˜¬›œž$Ù¢ª‰£ š'ªa˜$˜$—¨Ä9§¨‰ÅQ§¨ªmª‰šÊÇ»—™˜T£¡ž)—¨´‰´‰Ÿž)¥¶T¯±°˜¡£$—™ªa¢rɉ © ³Tª‰žÐ¥?˜ ›œž$¸ž$Ÿš9§™›‰ºx¥_³ —¨£$– p ª‰ž)¥Kq Ÿ£Â¤­Ä6›‰˜$¥ß›‰§¨£¡Ÿž)¢6›œ£$—¨ªa¢?˜ ›‰¢?¥»ž$Ÿ£¡ž)—™Ÿ¹Y›‰§—™˜]ž$˜)°?µ€¥¶S¾ ªmª‰šrÌ ž)§™—¨˜]ªa¢€›±§¨ª‰´a—™º š?ž$ªz¹‰ŸžM£$–?›œ£À¹‰Ÿž)—¨Í?˜“£$– °?¢6—¨Í6ºŸ›œ£$—¨ªa¢?˜TÄ'Ÿ£J³“Ÿ¢Ê£$– Ñm°˜¡£$—¨ªa¢½›‰¢6¥r§¨ª‰´a—™ºÀ«~ª‰ž)µ½˜¶ p –¢r£$– °?¢?—™Í6ºŸ›œ£$—¨ªa¢?˜ «Ö›‰—™§ÈÅ£$–Vɉ © ³“ª‰ž)¥?˜½›œž$Vx¼Zš6›‰¢?¥?¥³3—¨£$–˘¡µ½›‰¢1£$—Û¤ ºŸ›‰§™§ © ž$§™›œ£¡¥í›‰§™£¡Ÿž)¢?›œ£$—¨ªa¢?˜ ›‰¢?¥Êž$Ÿ£¡ž)—¨Ÿ¹œ›‰§'ž$˜$°6µ€˜Ÿ¶ •]›œÄ6§¨%Ìm¾赀š6›‰ºx£ ª‰«Æ«~Ÿ¥Ä6›‰º)Ém˜ ªa¢íš?ž$ºŸ—®˜$—¨ªa¢ øŸ¥Ä6›‰º)É ì ž$ºŸ—™˜)—¨ªa¢ è¢?ºxž$µ€¢1£$›‰§ ›‰¥?¥¥ ¿*evÓ3Ó7à ¢?–?›‰¢?ºxµ€¢1£ ¢ªa¢ âZ¶ ùQÇZÁ âQü ì ›‰˜$˜$›œ´‰%ž)Ÿ£¡ž)—¨Ÿ¹œ›‰§ âZ¶ ùQúg  ã Á‰Á†ü ¿Ö§¨ª1ª‰šÁ†Ã ¾ x¼—™ºxªœ¤J˜¡µr›‰¢Q£$—™º âZ¶‡YùQÇ   ã Á#aü ¿Ö§¨ª1ª‰šÙÇaÃ ì ž$ªz¹m—™¢?´v¿Ö§¨ª1ª‰šÙÌaà âZ¶‡aûœÇ  ’aü •]›œÄ6§¨{Ìr—™§™§™°6˜¡£¡ž)›œ£¡˜À£$–»—™µ€š6›‰ºx£ª‰«[£$–{ž$Ÿ£¡ž)—¨Ÿ¹œ›‰§ §¨ªmª‰š6˜»ªa¢ç£$–í›‰¢?˜¡³“Ÿž½›‰ºŸºŸ°ž)›‰º © ¶ç•–ÊÉZ¢ª†³3§¨¥´‰ Ä?ž$ªa°?´a–Q£7—™¢Q£¡ª¬£$–Ñ1°˜$£$—¨ªa¢v›‰¢?˜$³TŸž)—®¢´½š?ž$ªZºx˜$˜3Ä © §¨x¼—™ºxªœ¤J˜¡µ½›‰¢1£$—™º€›‰§¨£¡ŸžÐ¢?›œ£$—¨ªa¢?˜±–?›‰˜±£$–€–?—™´a–˜¡£ —™¢Z¤ ¥?—¨¹Z—™¥?°6›‰§7ºxªa¢1£¡ž)—¨Ä9°£$—¨ªa¢Å7«Œªa§®§¨ª†³“¥ðÄ © £$–`µ€ºÐ–?›Y¤ ¢?—™˜)µïª‰«]›‰¥?¥6—™¢´^ÀY¥ž$ª‰š6š6—™¢´{ɉ © ³“ª‰ž)¥?˜Ÿ¶ •–?—™¢?˜¡Ÿž$£$—™ªa¢±ª‰«m£$–?T§¨ª‰´a—™º[š6ž$ª†¹Z—™¢´µ€ªZ¥?°?§™›‰¥?¥?˜ ›7¢Ÿ³ºxªaµ½š6§¨x¼—¨£ © §®› © Ÿž[£¡ª ›‰¢?˜¡³“ŸžÆš?ž)ªmºx˜$˜)—™¢´Ŝ¢Z¤ ›œÄ6§™—®¢´½µ€ª‰ž${£¡ž)›‰¥x¤­ª7˜ Ä'Ÿ£J³“Ÿ¢Ùš?ž$ªZºx˜$˜$—™¢?´½ºxªaµ¤ š6§¨x¼—¨£ © ›‰¢?¥Ê›‰¢?˜¡³“Ÿž ›‰ºŸºŸ°žÐ›‰º © ¶T•]›œÄ6§¨±ù½˜$–ªz³ ˜À£$– ª†¹‰ŸžÐ›‰§™§ š?ž)ºŸ—™˜$—¨ªa¢V«~ª‰ž «~ªa°ž ¥6— 7Ÿž)¢Q£ ˜$Ÿ£¡£$—™¢´a˜Ÿ¶“•– Í?ž)˜$£ ˜¡Ÿ£¡£$—™¢´ÅC 4 %3(C %%)?Åaºxª‰ž$ž$˜¡š'ªa¢?¥6˜'£¡ª£$– ˜$—™µ½š6§¨˜¡£M¯±²ð˜ © ˜¡£¡µï£$–?›œ£M¥ªm˜À¢ª‰£M°?˜¡±›‰¢ © q ¾ ì £¡º)–6¢?—™Ñm°˜Êª‰žíž$˜¡ªa°?ž)ºx˜Ÿ¶ï•–¸›‰¢?˜¡³“Ÿž)˜V›œž$¸x¼m¤ £¡ž)›‰ºx£¡¥`«Œž$ªaµi£$–r˜¡£$›œž$£%ª‰«T›‰ºÐ–`š6›‰˜)˜$›œ´‰‰Å]›‰¢6¥¸ž$x¤ £$°ž)¢?¥Ù—®¢v£$–€ª‰žÐ¥Ÿž —™¢Ù³ –?—™ºÐ–‹£$–?»š9›‰˜$˜$›œ´‰˜±³“Ÿž$ ž$Ÿ£¡ž)—™Ÿ¹‰¥¶ö•–íš?ž)ºŸ—™˜$—¨ªa¢Ë—™˜rªa¢?§ © âZ¶ âaÇgm¶ p –¢ £$–xq¾ ì £¡º)–6¢?—™Ñm°˜r›œž$‹¢?›œÄ6§¨¥ ÅT³ —™£$–Ë£$–íx¼m¤ ºxŸš?£$—¨ªa¢¬ª‰«£$–7¥Ÿž)—¨¹œ›œ£$—¨ªa¢€ª‰«£$–3x¼mš'ºx£¡¥.›‰¢?˜¡³“Ÿž £ © š'‰Åœ£$–Tš6ž$ºŸ—™˜$—¨ªa¢{—™µ€š?ž)ª†¹‰˜Æ«Œž$ªaµâZ¶ âaÇg3£¡ª âZ¶™Á#œâZ¶ •–±›‰¢?˜¡³“ŸžM›‰ºŸºŸ°ž)›‰º © —™˜“˜¡£$—™§™§9§™—™µ½—¨£¡¥rÄ'ºŸ›‰°?˜¡±£$– ºŸ›‰¢?¥?—®¥?›œ£¡3›‰¢?˜$³TŸž)˜ÀºŸ›‰¢?¢?ª‰£TÄ' š6ž$ª‰š'Ÿž)§ © —™¥¢1£$—¨Í?¥ ³ —¨£$–?ªa°£%ÉZ¢ª†³3—™¢´v£$–?—¨ž»˜$µ½›‰¢Q£$—®º¬ºŸ›œ£¡Ÿ´‰ª‰ž © ¿~š'Ÿž¡¤ ˜¡ªa¢?˜œºŸ—¨£$—™˜€›‰¢?¥Ò˜¡ªB«~ª‰ž$£$–9ÃжËè «7£$–v¥Ÿž)—™¹Y›œ£$—¨ªa¢çª‰« £$–Ùx¼mš'ºx£¡¥_›‰¢?˜¡³“ŸžÊ£ © š'Ù—™˜.›‰§™˜¡ª¢?›œÄ6§™¥ÅM£$– š?ž$ºŸ—®˜$—¨ªa¢Ù˜$ºxª‰ž$€º)–?›‰¢?´‰˜ £¡ªíâZ¶ ùQúgm¶»ø[—™¢6›‰§™§ © ų –¢ ›‰§™§m«~Ÿ¥Ä6›‰º$ÉZ˜›œž$À¢?›œÄ6§™¥{£$–?À–6—¨´a–˜¡£]ª†¹‰Ÿž)›‰§®§Zš?ž$x¤ ºŸ—™˜$—™ªa¢¸ª‰«MâZ¶‡aûœÇ헙˜>›‰ºÐ–?—¨Ÿ¹‰¥¶íՓªaµ€š9›œž)›œ£$—¨¹‰§ © Å]£$– ›‰¢?˜¡³“Ÿžš6ž$ªmºx˜)˜$—™¢´%µ½ªm¥?°6§¨˜[ª‰«ª‰£$–Ÿž¯±²_˜ © 
˜$£¡µ½˜ °?˜$°6›‰§™§ © ˜¡š9›‰¢Vª†¹‰Ÿž7§¨Ÿ¹‰§™˜ ǀ›‰¢6¥vÌ«~ž$ªaµF•]›œÄ6§¨ ù¶ •]›œÄ6§¨ ù¾ ì Ÿž$«~ª‰ž)µ½›‰¢?ºx%ª‰«]›‰¢?˜$³TŸž3š?ž$ªZºx˜$˜$—™¢´ ²7¢?˜¡³“Ÿžš?ž$ªZºx˜$˜$—™¢?´ eíªZ¥?°?§¨˜ ì ž$ºŸ—™˜$—™ªa¢ ºxªaµ€š6§™x¼Z—¨£ © §™Ÿ¹‰§ °?˜¡¥ ¿*evÓ3Ó7à ¿ÂÁ†Ãƒ†±—¨ž$ºx£ x¼m£¡ž)›‰ºx£$—™ªa¢ e`Á|¤ˆeÙúmÅ âZ¶ âaÇg e`ÁŸâ ¿ÈÇaþHx¼Z—™ºŸ›‰§Hµ½›œ£$º)–?—®¢´ e`Á|¤ˆe¦û1Å âZ¶™Á#œâ e‹Øz¤ˆeÁŸâ ¿ÈÌa÷mµr›‰¢Q£$—™º%µ½›œ£$º)–?—®¢´ e`Á|¤ˆeÁŸâ âZ¶ ùQúg ¿~ù1ÃÀøŸ¥?Ä6›‰º$ÉZ˜ ¢6›œÄ6§¨¥ ›‰§™§ âZ¶‡aûœÇ •–?Í6¢?›‰§‰š?ž)ºŸ—™˜$—¨ªa¢ ˜$ºxª‰ž)˜H«~ª‰žÆ•Ó3ԓÕT¤ˆgmŜ•Ó3ԓÕT¤ Ø»›‰¢?¥V•Ó3ԓÕT¤dǜâ‰âÁ ›œž$ ž)˜¡š'ºx£$—¨¹‰§ © âZ¶‡mÅâZ¶‡gœâZÅ ›‰¢?¥íâZ¶‡aûYâZ¶À•–Ÿž)Ÿ«Œª‰ž$%£$–>š?ž$ºŸ—®˜$—¨ªa¢Ê¥?—™¥V¢ª‰£ ¹Y›œž © µ»°?º)–B—®¢Ù˜$š6—¨£¡ª‰«£$–½–6—¨´a–Ÿž ¥Ÿ´‰ž$Ÿ€ª‰«À¥?—¨÷rºŸ°6§¨£ © ¶ •–?—®˜3—™˜7¥?°{£¡ªV£$–—™¢?ºxž$›‰˜¡¥¦°?˜${ª‰«¢6›œ£$°ž)›‰§]§™›‰¢Z¤ Question + M3 + M4 M1 + M2 M5 alternations + lexico−sem M6 M7 + M8 M9 + M10 Answer Loop 1 Loop 2 proving Logic Loop 3 ø[—¨´a°ž) ù¾² žÐº)–?—¨£¡ºx£$°?ž$>³ —¨£$–Ê«~Ÿ¥Ä6›‰º)Ém˜ ´a°?›œ´‰{š?ž$ªZºx˜$˜$—™¢´€—™¢Êªa°?ž ˜ © ˜¡£¡µV¶   EHCXGdRSœP­EÆCS •–Òµr›‰—™¢(ºxªa¢?ºŸ§™°?˜)—¨ªa¢(—™˜¸£$–?›œ£`£$–Òª†¹‰ŸžÐ›‰§™§½š'Ÿž¡¤ «~ª‰ž)µ½›‰¢?ºx‹ª‰«»¯±²Ý˜ © ˜¡£¡µ½˜.—™˜¬¥6—¨ž$ºx£$§ © ž$§™›œ£¡¥ £¡ª £$–:¥Ÿš?£$–Úª‰«í¢?›œ£$°ž)›‰§¬§™›‰¢´a°?›œ´‰:š?ž$ªZºx˜$˜$—™¢?´ðž$x¤ ˜¡ªa°žÐºx˜r›‰¢?¥é£$–í£¡ªmªa§™˜r°?˜$¥Ò«~ª‰ž¬›‰¢?˜¡³“ŸžrÍ9¢?¥?—™¢´¶ ²7˜ ˜$–ª†³3¢Ù—®¢‹•]›œÄ6§¨»ùÅ£$–š'Ÿž$«Œª‰žÐµ½›‰¢?ºx»ª‰«T—™¢«~ª‰ž¡¤ µ½›œ£$—¨ªa¢:ž$Ÿ£¡žÐ—¨Ÿ¹Y›‰§3£¡º)–6¢?—™Ñm°˜¬—®˜¬˜$—™´a¢?—¨Í6ºŸ›‰¢1£$§ © ¢Z¤ –?›‰¢?ºx¥>³ –¢>§¨x¼—™ºxªœ¤J˜¡µr›‰¢Q£$—™º—™¢«~ª‰ž)µ½›œ£$—¨ªa¢ —™˜ «~°?§®§ © x¼Zš6§¨ªa—¨£¡¥ £$–?ž$ªa°´a–ªa°££$–?›‰¢?˜¡³“ŸžHÍ6¢?¥6—™¢´“š?ž$ªZºx˜$˜Ÿ¶ •]›œÄ6§¨Çr—™§®§™°?˜¡£¡ž)›œ£¡˜ £$–?›œ£±£$–»šŸž)«Œª‰ž)µr›‰¢?ºx%Ä'ª‰£Â¤ £$§¨¢º)Ém˜ ª‰«Æªa°?ž7¯±²˜ © ˜$£¡µ›œž$>¥?°? £¡ª½£J³“ªrµ€ªZ¥Z¤ °?§¨˜Å¢?›‰µ½§ © £$–¥Ÿž)—¨¹œ›œ£$—¨ªa¢íª‰«£$–{x¼Zš'ºx£¡¥¸›‰¢Z¤ ˜¡³“ŸžT£ © š'7›‰¢?¥r£$–? ɉ © ³Tª‰ž)¥.x¼Zš6›‰¢?˜$—™ªa¢¶]• – Ä'ª‰£Â¤ £$§¨¢º)Ém˜%›œž$r¢ª‰£>˜¡š'ºŸ—¨Í9º{£¡ªVªa°?ž»¯±²2˜ © ˜$£¡µ Ä6°£ ž$Ë?ºx£€£$–?í§™—™µ½—¨£$›œ£$—™ªa¢?˜%ª‰«±ºŸ°ž$ž)¢Q£r¯±² £¡º)–?¢?ªa§¨ªœ¤ ´a—¨˜Ÿ¶r¯±°?˜¡£$—¨ªa¢¦›‰¢?˜¡³“Ÿž)—™¢´V˜ © ˜¡£¡µ½˜>šŸž)«Œª‰ž)µ Ä'Ÿ£Â¤ £¡Ÿžr³3–¢ç£$–íž$§™Ÿ¹Y›‰¢1£€š6›‰˜$˜$›œ´‰˜¬›‰¢?¥Ë£$–‹ºŸ›‰¢?¥?—Û¤ ¥?›œ£¡Ë›‰¢?˜¡³“Ÿž)˜¸›œž$ËºŸ§¨›œžÐ§ © ¥ŸÍ6¢¥ê—™¢Ü£$–ËÑm°˜Â¤ £$—¨ªa¢?˜¶[• – µ½›‰—™¢¬š?ž$ª‰Ä6§¨µ2—™˜“£$– §™›‰º)Érª‰« š'ª†³“Ÿž$«Ö°?§ ˜$ºÐ–µ€˜Ê›‰¢?¥:›‰§¨´‰ª‰ž)—¨£$–?µr˜½«~ª‰ž.µ€ªZ¥§™—®¢´Bºxªaµ€š6§¨x¼ Ñm°˜¡£$—¨ªa¢?˜[—™¢{ª‰ž)¥Ÿž[£¡ª>¥ŸžÐ—¨¹‰“›‰˜µ»°?ºÐ–€—™¢«~ª‰ž)µ½›œ£$—¨ªa¢ ›‰˜Ùš'ªa˜$˜$—¨Ä6§™‰Å±›‰¢6¥ß«~ª‰ž¦šŸž)«Œª‰ž)µr—™¢´Ò› ³T§™§¨¤­´a°?—™¥¥ ˜¡›œž)ºÐ–í£$–ž)ªa°´a–V£$–ªa°?˜)›‰¢?¥?˜Àª‰«[£¡x¼m£3¥ªmºŸ°6µ€¢Q£$˜¶ •–?½§¨x¼Z—®ºxªœ¤J˜¡µ½›‰¢1£$—™º½—™¢«~ª‰ž)µ½›œ£$—™ªa¢¸—™µ€š'ª‰ž$£¡¥¦—™¢ £$–¬¯±²2˜ © ˜¡£¡µi£$–?ž$ªa°´a–¦£$–½ž)Ÿ£¡ž)—¨Ÿ¹œ›‰§[«ŒŸ¥Ä9›‰º$ÉZ˜ Ä?ž)—®¢´a˜3ºxªa¢6˜$—™˜¡£¡¢1£ —™µ€š?ž$ªz¹‰µ€¢1£$˜±ª†¹‰Ÿž%˜¡ŸžÐ—™›‰§]š?ž$ªœ¤ ºx˜$˜$—®¢´¶ ì Ÿž¡¤Jºxªaµ½šªa¢?¢Q£½Ÿž)ž$ª‰ž)˜€›œž$í˜¡š6ž$›‰¥ç°?¢?—Û¤ «~ª‰ž)µ½§ © ª†¹‰Ÿž[£$–Í6ž)˜¡£«~ªa°žÆºŸ§™›‰˜)˜¡˜ ª‰«?Ñm°˜¡£$—¨ªa¢%ºxªaµ¤ š6§¨x¼—¨£$—¨˜ÅÀ—®¢?¥?—™ºŸ›œ£$—™¢?´`–ª†³Ýªa°žÊ˜ © ˜¡£¡µf—™µ€š?ž)ª†¹‰¥ ª†¹‰Ÿž7£$– © ›œž)˜¶ ¦k.¶Âk?Oak6CTXZk6S   ÿ ñ8ç! "$#  öø,ðòðòîòñ8ê%"Sé ñ8÷ &' îòñ8ë æé"ð  þ)()(( * ñ8ê,+ç˜ì ç!-#í*ìYé ù˜í-îòø ñ /. 
ñ102,3547686:9);=<>)?@3,ACBED6@FGBEDH'IIKJ ;L6:9@M/N)BPO2:NJ Q N)<>)ON%>6R0'2:354S67?8?S;E<>UTV3G<5A5672:67<K4S6@W=HXM Q 0$Y[Z\]\]\S^_"8èé"ë,ç3ê þ)`)a_bc)(Cý)"  çéí*í-ð‡ç)"ed€é ê*æ8îòñOë í-ø,ñ  õ gf ì-ç3ù7hK"gi gf û ì-ë ç˜ì5"kj gl ç˜ì*ì-øe"gj nm î ì-ê*ùYæ8ó{é"ño"np nm ø û ê*ç" #  j4î‡ë æí5".é ñ8÷ .7 #fé ñ8î  þ)(()( Um ø_+kí-ø’ç3ï é ð û é"í-çq ø û ì rû ç3ê%í-îòø,ñ€é ñ8ê,+ç˜ì-îòñ8ë’ê, Cê%í-ç3ó™ç3ï,ç˜ì: ?÷8é% ss é"ñ÷ê%í-î‡ðòð;ë ç˜í ì-çé ðn+;ø"ì8hf÷Oø ñ8ç . ñ02,3547686:9);=<>)?t3,ACBEDe6&Z]<K9TV3)<%A5672867<u476 3)< Q N)<>)OeN5>6Xv$67?73)O2:4S67?wNG<K9/x'yGNJ OeN)BP;[3)<W Q vx/TzY[Z\]\\7^5"  í-æ8ç3ñOê%"u{bì-ç˜ç3ù3ç   em éìYé ÿ é ë,î û " # | ée} ê*ùé"é ñ8÷   #fé îòø"ìYé ñ8ø  þG(()(  õk-Cèç˜ì*ú îòónç3ñí-ê'+Nîòí-æ ø,èç3ñCú÷Oø,ó{é"îòñ í-ç!-#í û é"ð r#û ç3ê%í-îòø,ñé ñ8ê,+ç˜ì-îòñ8ë  . ñ~0'2:354S686:9);=<>)?3,ABED6€)‚BED„ƒS<eB…6727<uN)BP;[3)<uNJ†TV3)<%A5672867<u476 3)<‡TV3)ˆRIuOBLN)BP;P3)<KNJ Q ;=<>)O;E?SBP;P4!?qWST‰ Q ƒ:M‹ŠYLZ\]\]\7^5"  é é"ì*ú ÿ ì û ù7h ç˜ño"e{Sç˜ì-ó{é"ñ   Œm éìYé ÿ é ë,î û "‹p  #ø,ð‡÷Cø"ï é ñn"‹# /| é} ê*ù3é"†# C û ìY÷Oçé"ñ û " ô  #î‡æ8é ðòù3çé"‹ô  { Ž ìP û "‘  ô û ê%" l j’ é"ù]’ é"í û } ê û |“ #ø ì_’ éì-ç3ê*ù û "é ñ8÷€ô “f û ñOç3ê*ù û  þ)((#ý  ñOê,+;ç˜ì-îòñOëù˜ø,óƒú è8ðòç!- "Nðòîòê%í’é ñ8÷dù3ø ñ#í-ç7-Cí rû ç˜ê%í-î‡ø ñ8ê@+Nî í-æ‹ð‡ù˜ù” ê rû ç˜ê%í-î‡ø ñOú é ñOê,+;ç˜ì-îòñ8ëxê*ç˜ì-ï çˆì •. ñ0'2:354S68689G;=<>)?–3,ABED6„€\GBED˜—V6:™)B vxBP2S;L67yGNJ‡TV3)<5A5672:67<K4S6˜WS—ovxwTzYLZ\\]€7^5"‘{yé"î í-æ8ç˜ì-ê ÿOû ì-ëe" #fé"ì: Cð‡é ñ8÷ ešR.: å  õ ›m øï "j  {Sçˆì ÿ ç˜ì5"œ @m ç˜ì-ów%é)h ø ÿ "€ö  t j^îòño"/é ñ8÷ p  ôbéïCîòùYæé"ñ÷CìYé"ñ  þ)()(Cý  åø5+²éìY÷ê*ç3ó{é"ñí-î‡ù˜ê%ú ÿ é ê*ç÷’é"ñOú ê,+;ç˜ìŒèOî‡ñOèø,îòñí-îòñ8ë ‹. ñ02,3547686:9);=<>)?ž3,A‹BED6tŸwOˆ†N)< Q N)<Y >)OeN5>6†— 6:4SD<K3Js38>) ›TV3)<5A%672867<K4S6†W=Ÿ Q —kYLZ\]\]€7^5"  é"ñžpSî‡ç˜ë,øe" ö;é ðòî ü ø"ì-ñ8î‡é  &¡. í*í… OùYæ8çˆì-î é"æo"‡# ¢l ìYé ñe£"d ¤ æ û "é ñ÷ C ôbé"í-ñ8é"ú èéì8hCæOî  þG((#ý ¦¥ û ç3ê%í-îòø,ñ+é ñOê,+;ç˜ì-îòñ8ë û ê*îòñ8ë?ó{éG-Cîòó û óƒú ç3ñí*ì-ø,è ¦ù3ø,ónèø ñ8ç3ñí-ê . ñU0'2:354S686:9);=<>)?‹3§ACBED6CZ]<K9†¨6867B[Y ;=<>q3,AXBED6RM/3)2SBED@HXˆ@672S;[48N)<TgDeN7IB…672C3,ARBED6ŒHw?8?7354!;[N)BP;P3)< A%3)2žTV3)ˆRIOBLNGBP;[3)<KN)J Q ;=<>)O;=?8BP;[4!?žW=MŒHXH&T Q Y[Z\]\]€S^_" | î í*í-ê%ú ÿ8û ì-ë,æo" | ç3ñOñ8ê, Cðòï,é"ñ8î‡é  #  | é} ê*ùé{é ñ÷  um é"ìYé ÿ é ë î û  þ)()(Cý  å;æOç„î‡ñCü ø"ì-ó{é"í-îòï çoì-ø,ðòç ø üŒd€ø ìY÷ š ç˜íƒî‡ñø,èç˜ñOú÷Oø ó{é îòñ rû ç3ê%í-îòø,ñ+é ñ8ê,+ç˜ì-îòñ8ë „. ñ 02,354S6S6:9);E<>)?R3,AXBED6ŒZ<K9/¨6S67BP;E<>†3,ABED6$M/3)27BED‹HXˆ@672S;[48N)< TgDeN7IuB…6723,ABED6Hw?8?7354!;[N)BP;[3G<XA%3)2RTV3GˆRIuOBLN)BP;[3G<KNJ Q ;=<>)O;=?SY BP;[4!?¡W=MXHŒH&T Q Y,\€7^5©¡ª¡3)28«G?,De3SI„3)<*ª¡3)2,9_M&67BwN)<K9‰'BEDe672 Q 6:™);[48NJnv$67?73)O2:4S67?!¬wHI]IKJ ;[48N)BP;P3)<?8©“x™)B…67<e?S;P3)<?tN)<u91TkO?SY BL3)ˆ‹;s­_NGBP;[3)<?8" | î í*í-ê ÿ8û ì-ë,æn" | ç3ñOñ8ê, Cðòï,é"ñ8î‡é"i û ñ8ç  i | ìYé ë,çˆì5"õ f ì-ø5+Nño" & öø#÷Oç˜ño"²é"ñ÷‡p  ôSé ÷Oç3ï  þ)()((  ¥ û ç3ê%í-îòø ñ é ñOê,+;ç˜ì-îòñOë ÿ –èOì-ç÷Cîòù˜í-îòï çWé ñOñ8ø"íYé"í-îòø,ñ ®. ñ 02,354S6S6:9);E<>)?13,A¡BED6¢Z¯2,9ƒS<B…672S<KN)BP;[3G<KNJ&TV3)<5A5672:67<K4S63G< vX67?!6:N)2,4SD°N)<K9±t67y)6!Js37Iˆ@67<Bq;E<‘ƒS<%A%3)2SˆqN)BP;P3)<„vX67BP2S;L67yGNJ W[²ƒGŠƒ:vRY[Z\]\]\7^5"8èé ë ç3ê„ý5³)´_bý%`Cý"  í-æOç3ñ8ê%"{bì-ç3ç3ù˜ç  õ  #  Køø ì-æ8ç˜ç3êxé ñ÷˜p  #  å;î‡ù˜ç  þ)()((  f û îòð‡÷Oîòñ8ë—é rû ç3ê%í-îòø,ñCúé ñ8ê,+ç˜ì-îòñ8ë’í-ç3ê%ítù3ø,ðòðòç3ù˜í-îòø,ñ . 
ñ‡02,354S6S6:9);E<>)?¡3,A BED6Z¯2:9µƒS<B…672S<KNGBP;[3)<KN)J›TV3)<%A5672867<u476‡3)<¶v$67?%6:N)2,4SD NG<K9 ±t67y)6!Js37Iˆ@67<B$;=<UƒS<5A%3G27ˆ†N)BP;[3)<vX67BP2S;L67yGNJW[²ƒGŠVƒ:vwY[Z\]\\7^5" èé"ë,ç3êbþ)()(_b#þ)(]·"  í-æ8ç3ñOê%"u{bì-ç˜ç3ù3ç  õ  #  Køø ì-æOç3ç3ê  ý5`)``  å;æOçå;ôbõö0ú…³ ¥ û ç˜ê%í-î‡ø ñ  ñOê,+;ç˜ì*ú îòñ8ë í*ìYé"ù7h’ì-ç3èø"ì*í ‹. ñ0'2:354S686:9);=<>)?q3,A‹BED6q‚BEDµ—V6:™)BXvx$Y BP27;[67yGNJ'TV3)<5A%672867<K4S6›WS—ovx/TzYL‚%^5"(èé"ë,ç3êw··%b³ þ"g{yé î í-æ8ç˜ì-ê%ú ÿ8û ì-ëe"u#fé"ì: Cð‡é ñ÷ šŒ.: å 
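As an illustration of the scoring just described, the following minimal Python sketch (not part of the original system; the function name and the use of exact string membership in place of NIST's answer-key matching are assumptions made here) computes MRR over the top 5 returned answers per question.

def mrr_at_5(ranked_answers_per_question, correct_answers_per_question):
    # ranked_answers_per_question: list of lists of answer strings, best first
    # correct_answers_per_question: list of sets of acceptable answer strings
    total = 0.0
    for ranked, correct in zip(ranked_answers_per_question, correct_answers_per_question):
        score = 0.0
        for rank, answer in enumerate(ranked[:5], start=1):  # only the first 5 answers count
            if answer in correct:  # rank of the first correct answer determines the score
                score = 1.0 / rank
                break
        total += score  # questions with no correct answer in the top 5 contribute 0
    return total / len(ranked_answers_per_question)

# Hypothetical usage: the correct answer is ranked 2nd for the first question
# and missing for the second, so MRR = (1/2 + 0) / 2 = 0.25.
print(mrr_at_5([["a", "b", "c"], ["x", "y"]], [{"b"}, {"z"}]))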
2002
5
Evaluating Translational Correspondence using Annotation Projection

Rebecca Hwa, Philip Resnik, Amy Weinberg, and Okan Kolak
Institute for Advanced Computer Studies, Department of Linguistics, and Department of Computer Science, University of Maryland, College Park, MD 20742
Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, July 2002, pp. 392-399.

Abstract

Recent statistical machine translation models have begun to take advantage of higher level linguistic structures such as syntactic dependencies. Underlying these models is an assumption about the directness of translational correspondence between sentences in the two languages; however, the extent to which this assumption is valid and useful is not well understood. In this paper, we present an empirical study that quantifies the degree to which syntactic dependencies are preserved when parses are projected directly from English to Chinese. Our results show that although the direct correspondence assumption is often too restrictive, a small set of principled, elementary linguistic transformations can boost the quality of the projected Chinese parses by 76% relative to the unimproved baseline.

1 Introduction

Advances in statistical parsing and language modeling have shown the importance of modeling grammatical dependencies, i.e. relationships between syntactic heads and their modifiers, between words. Informed by the insights of this work, recent statistical machine translation (MT) models have become linguistically richer in their representation of monolingual relationships than their predecessors. Using richer monolingual representations in statistical MT raises the challenge of how to characterize the cross-language relationship between two sets of monolingual syntactic relations. In this paper, we investigate a characterization that often appears implicitly as a part of newer statistical models, which we term the Direct Correspondence Assumption (DCA). Intuitively, the assumption is that for two sentences in parallel translation, the syntactic relationships in one language directly map to the syntactic relationships in the other. Since it has not been described explicitly, the validity and utility of the DCA are not well understood, although, without identifying the DCA as such, other translation researchers have nonetheless found themselves working around its limitations. In Section 2 we show how the DCA appears implicitly in several models, providing an explicit formal statement, and we discuss its potential inadequacies. In Section 3, we provide a way to assess empirically the extent to which the DCA holds true; our results suggest that although the DCA is too restrictive in many cases, a general set of principled, elementary linguistic transformations can often resolve the problem. In Section 4, we consider the implications of our experimental results and discuss future work.

2 The Direct Correspondence Assumption

The direct correspondence assumption underlies statistical models that attempt to capture a relationship between syntactic structures in two languages, be they constituent models or dependency models. We formalize it as follows. Direct Correspondence Assumption (DCA): given a pair of sentences E and F that are (literal) translations of each other with syntactic structures Tree_E and Tree_F, if nodes x_E and y_E of Tree_E are aligned with nodes x_F and y_F of Tree_F, respectively, and if syntactic relationship R(x_E, y_E) holds in Tree_E, then R(x_F, y_F) holds in Tree_F. Here, R(x, y) may specify a head-modifier relationship between words in a dependency tree, or a sisterhood relationship between non-terminals in a constituency tree. The DCA amounts to an assumption that the cross-language alignment resembles a homomorphism relating the syntactic graph of E to the syntactic graph of F.

The DCA seems to be a reasonable principle, especially when expressed in terms of syntactic dependencies that abstract away word order: the thematic (who-did-what-to-whom) relationships are likely to hold true across translations even for typologically different languages. Moreover, the DCA makes possible more elegant formalisms and more efficient algorithms, and it may allow us to use the syntactic analysis for one language to infer annotations for the corresponding sentence in another language, helping to reduce the labor and expense of creating treebanks in new languages. Unfortunately, the DCA is flawed, even for literal translations. In an English-Basque example discussed here, the indirect object of the verb is expressed in English using a prepositional phrase (headed by the word for) that attaches to the verb, but it is expressed with dative case marking on the corresponding noun in Basque; aligning both the preposition and the noun to the single Basque word forms a many-to-one mapping that violates the DCA, and a one-to-many mapping can be similarly problematic. The inadequacy of the DCA should come as no surprise: the syntax literature, together with a rich computational literature on translation divergences, is concerned with characterizing in a systematic way the apparent diversity of mechanisms used by languages to express meanings syntactically. For example, current theories claim that languages employ stable head-complement orders across construction types. In English, the head of a phrase is uniformly to the left of modifying prepositional phrases, sentential complements, etc. In Chinese, verbal and prepositional phrases respect the English ordering, but heads in the nominal system uniformly appear to the right. Systematic application of this sort of linguistic knowledge turns out to be the key in getting beyond the DCA's limitations.

3 Evaluating the DCA using Annotation Projection

Thus far, we have argued that the DCA is a useful and widely assumed principle; at the same time we have illustrated that it is incapable of accounting for some well known and fundamental linguistic facts. Yet this is not an unfamiliar situation: for years, stochastic modeling of language has depended on the linguistically implausible assumptions underlying n-gram models, hidden Markov models, context-free grammars, and the like, with remarkable success. Having made the DCA explicit, we would suggest that the right questions are: how accurate is it, and how useful is it when it holds? In the remainder of the paper, we focus on answering the first question empirically by considering the syntactic relationships and alignments between translated sentence pairs in two distant languages (English and Chinese). In our experimental framework, a system is given the "ideal" syntactic analyses for the English sentences and English-Chinese word alignments, and it uses a Direct Projection Algorithm (described below) to project the English syntactic annotations directly across to the Chinese sentences in accordance with the DCA. The resulting Chinese dependency analyses are then compared with an independently derived gold standard, enabling us to determine recall and precision figures for syntactic dependencies and to perform a qualitative error analysis. This error analysis led us to revise our projection approach, and the resulting linguistically informed projection improved significantly the ability to obtain accurate Chinese parses. If the DCA holds true more often than not, one might speculate that the projected syntactic structures could be useful as a treebank (albeit a noisy one) for training Chinese parsers, and could help more generally in overcoming the syntactic annotation bottleneck for languages other than English.

3.1 The Direct Projection Algorithm

The DCA translates fairly directly into an algorithm for projecting English dependency analyses across to Chinese using word alignments as the bridge. Given a sentence pair, the English syntactic relations are projected as follows. One-to-one: if an English head and its modifier are each aligned with a unique Chinese word, and the head-modifier relationship holds in the English tree, conclude the same relationship between the corresponding Chinese words. Unaligned (English): if an English word is not aligned with any Chinese word, create a new empty Chinese word that inherits the English word's dependency relationships. One-to-many: if an English word is aligned with several Chinese words, create a new empty Chinese word that is the parent of those words and align the English word to it instead. Many-to-one: if several English words are all uniquely aligned to the same Chinese word, delete all of those alignments except the one involving the English head, collapsing the internal relationships onto that Chinese word. The many-to-many case is decomposed into a two-step process: first perform one-to-many, then perform many-to-one. Unaligned Chinese words are left out of the projected syntactic tree. The asymmetry in the treatment of the two languages arises from the asymmetric nature of the projection.

3.2 Experimental Setup

The corpus for this experiment was constructed by obtaining manual English translations for 124 Chinese newswire sentences contained in the Penn Chinese Treebank. The Chinese data in our set ranged from simple sentences to some complicated constructions such as complex relative clauses, multiple run-on clauses, embeddings, and nominal constructions. Parses for the English sentences were constructed by a process of automatic analysis followed by hand correction: output trees from a broad-coverage lexicalized English parser were automatically converted into dependencies to be corrected. The gold-standard dependency analyses for the Chinese sentences were constructed manually by two fluent speakers of Chinese, working independently and using the Chinese Treebank's (manually constructed) constituency parses for reference; inter-annotator agreement on unlabeled syntactic dependencies is 92.4%. Manual English-Chinese word alignments were constructed by two annotators who are native speakers of Chinese. The direct projection of English dependencies to Chinese yielded poor results as measured by precision and recall over unlabeled syntactic dependencies. Inspection of the results revealed that our manually aligned parallel corpus contained many instances of multiply aligned or unaligned tokens, owing either to freeness of translation (a violation of the assumption that translations are literal) or to differences in how the two languages express the same meaning. For example, quantifying a Chinese noun with a determiner also requires a measure word, and Chinese includes separate words to indicate aspectual categories such as continued action, in contrast to verbal suffixes in English such as the -ing in walking. Because Chinese classifiers, aspectual particles, and other functional words do not appear in the English sentence, there is no way for a projected English analysis to correctly account for them; as a result, the projected Chinese dependency trees usually fail to contain an appropriate grammatical relation for these items, and because they are frequent, the failure to properly account for them significantly hurts performance.

3.3 Revised Projection

Our error analysis led to the conclusion that the correspondence of syntactic relationships would be improved by a better handling of the one-to-many mappings and the unaligned cases. We investigated two ways of addressing this issue. First, we adopted a simple strategy informed by the tendency of languages to have a consistent direction for "headedness". Chinese and English share the property that they are head-initial for most phrase types. Thus, if an English word aligns to multiple Chinese words, the leftmost word is treated as the head and the others are analyzed as its dependents; if a Chinese empty node was introduced to align with an untranslated English word, it is deleted and its leftmost child is promoted to replace it. Looking at language in this non-construction-dependent way allows us to make simple changes that have wide-ranging effects, and it is illustrative of how our approach tries to rein in cases where the DCA breaks down by using linguistically informed constraints that are as general as possible. Second, we used more detailed linguistic knowledge of Chinese to develop a small set of rules, expressed in a tree-based pattern-action formalism, that perform local modifications of a projected analysis on the Chinese side. To avoid the slippery slope of unending language-specific rule tweaking, we strictly constrained the possible rules: rules were permitted to refer only to closed class items, to parts of speech projected from the English analysis, or to easily enumerated lexical categories. For example, one such rule deals with noun modification: if a set of Chinese words is aligned to an English noun, replace the empty node introduced by the Direct Projection Algorithm by promoting the last word to its place, with the preceding words as its dependents. Another deals with aspectual markers for verbs: if a sequence of Chinese words aligned with English verbs is followed by an aspect marker, make the marker a modifier of the last verb. The most involved transformation places a linguistic constraint on the common Chinese functional word de, which may be translated as that (the head of a relative clause), as the preposition of, or as a possessive marker, and which is almost always either unaligned or multiply aligned to an English word: the dependency linking the lowest suitable ancestors of its left and right neighbors is removed and replaced by two dependencies that make de the intermediary between them. The latter two changes both take advantage of the fact that Chinese violates the head-initial rule in its nominal system, where noun phrases are uniformly head-final. More generally, the majority of rule patterns are variations on the same solution to the same problem; viewing the problem from a higher level of linguistic abstraction made it possible to find all the relevant cases in a short time and express the solution compactly in a small number of rules.

3.4 A New Experiment

Because our error analysis and subsequent algorithm refinements made use of our original Chinese-English data set, we created a new test set based on 88 new Chinese sentences from the Penn Chinese Treebank, already manually translated into English as part of the NIST MT evaluation preview. As described above, parses on the English side were created semi-automatically and word alignments were acquired manually; however, in order to reduce our reliance on linguistically sophisticated human annotators for Chinese syntax, we adopted an alternative strategy for obtaining the gold standard, automatically converting the Treebank's constituency parses of the Chinese sentences into syntactic dependency representations.

Table 2: Performance on Chinese analyses (%)
  Method          Precision    Recall    F-measure
  Direct              34.5      42.5       38.1
  Head-initial        59.4      59.4       59.4
  Rules               68.0      66.6       67.3

The recall and precision figures for the new experiment are summarized in Table 2. The first row compares the output of the Direct Projection Algorithm with the gold standard; as seen previously, the quality of these trees is not very good. The second row shows that after applying the single transformation based on the head-initial assumption, precision and recall both improve significantly: using the F-measure to combine precision and recall into a single figure of merit, the increase from 38.1% to 59.4% represents a 55.9% relative improvement. The third row shows that by applying the small set of tree modification rules after direct projection (one of which is default assignment of the head-initial analysis to multi-word phrases when no other rule applies), we obtain an even larger improvement, the 67.3% F-measure representing a 76.6% relative gain over baseline performance.

4 Conclusions and Future Work

To what extent is the DCA a valid assumption? Our experiments confirm the linguistic intuition, indicating that one cannot safely assume a direct mapping between the syntactic dependencies of one language and the syntactic dependencies of another. How useful is the DCA? The experimental results show that even the simplistic DCA can be useful when operating in conjunction with small quantities of systematic linguistic knowledge: syntactic analyses projected from English to Chinese can, in principle, yield Chinese analyses that are nearly 70% accurate (in terms of unlabeled dependencies) after application of a set of linguistically principled rules. In the near future we will address the remaining errors, which also seem to be amenable to a uniform linguistic treatment: in large part they involve differences in category expression (nominal expressions translated as verbs or vice versa), and we believe that we can use context to effect the correct category transformations. We will also explore correction of errors via statistical learning techniques. The implication of this work for statistical translation modeling is that a little bit of knowledge can be a good thing. The approach described here strikes a balance somewhere between the endless construction-by-construction tuning of rule-based approaches, on the one hand, and, on the other, the development of insufficiently constrained stochastic models. We have systematically diagnosed a common assumption that has been dealt with previously on a case by case basis, but not named: most existing models rectify glaring problems caused by the failure of the DCA using a range of pre- or post-processing techniques. We have identified the source for a host of these problems and have suggested diagnostics for future cases where we might expect these problems to arise. More important, we have shown that linguistically informed strategies can be developed efficiently to improve output that is otherwise compromised by situations where the DCA does not hold. In addition to resolving the remaining problematic cases for our projection framework, we are exploring ways to automatically create large quantities of syntactically annotated data. Our first goal is to minimize the degradation in the quality of the projected trees when the input analyses and word alignments are automatically generated by a statistical parser and word alignment model; to improve the quality of the input analyses, we are adapting active learning and co-training techniques to exploit the most reliable data, and we are also developing an alternative alignment model that makes more use of the syntactic structure. Our second goal is to detect and reduce the noise in the projected trees so that they might replace expensive human-annotated corpora as training examples for statistical parsers; we are investigating the use of filtering strategies to localize the potentially problematic parts of the projected syntactic trees.
¶*¾KµYÃK¶±’»Y­O»ÏR¶*¾Kµ­¯ºØÏ€®W¾‚Í;µ¼¯¼¯±’­¯ÌYº4Ͼ*»YÍ °¯±’²’±Ñ­7ÌYä)µ²ÔÃç»Y¾ Ö ¼'»Y¾µ7ÄÙ ­ëùñyó ÷ø ëQ‚ï9:z;=<Úï`@[ó7ó¯î¯ðôb>?Kï ñyó ÷ ëQˆï K@[øKø!ëùWñ€ðÚïòñëólë%êâëìmí)î#ï*ðï ñ€ëó'ðô}õñyó ÷î ñ’øKïòñùKø e!\V\A>f+{#ë%&øvëçí)³¯Ò»#䯲’»Y䯺®Y³¯ý¯¾Kµ­)Ãç®YÄ ¬Q­7»×»#¼@½¯µ¾*ÿYµ¾ÄRÛYÇYÇ7ÅYÄ4¬Q¼7¼¯²’Þ×±’­¯Ì.Ãç»Ö ¶*¾µ±’­¯±’­¯Ì^ÍήW¶*·7»vÀvº ¶*»dº¶µ¶*±’º*¶±ÉÃ]µ²0¼'µ¾*º±’­¯Ì)Ä`Ù ­Bëù874ëQAEo@A@‚êõ³ þYä7­¯®YÄ ½ ¶ä)µ¾¶ ½v·¯±’®W°)®]¾!Ä ÅÆYÆÜ)Ä Ó\®Wº*¶¾*±yÃK¶*±’­¯Ì ¶·¯® á4®!µÿYÖ ÌY®]­¯®W¾µ¶*±  ® ÃWµ¼'µYÃK±’¶ Þ »Ï ºÞ×­'Ã*·7¾*»Y­7»Y䯺 ¶*¾®W®KÖ µYÀ]ËK»Y±’­¯±’­¯ÌdÌ#¾µÍÎÍ;µ¾*º!ÄÎêâëìmí)î#ï*ðï ñ€ëó'ðô ]ó ï]ôyô ñ’÷(]ó)ù]³ Å!Çvû€Ü ü8| 7Å:  7³ H »  ®WÍd°)®]¾!Ä ¿[Ä'þ)Ä µ­@Ó\± Ëçº*°'®W¾*Ì#®W­0ÄdÅ!Æ YÆ7Ä] Wó8!ë(Wìdðï ñ€ëóPnoçïyWñ98Jðô Ä ¸}䯶*¶®W¾*á4»Y¾¶*·0Ä ãm®WÿYµ±XOä0Ä Å!ÆYÆ ¯Ä ½ ¶»¯Ã*·'µº*¶±yɱ’­  ®]¾*º*±’»#­I¶*¾µ­¯ºÀvä)ÃÖ ¶*±’»#­.ÌY¾KµÍÎÍ;µ¾º!³áR±’¶·;µ¼7¼¯²’±ÉÃ]µ¶*±’»#­d¶*»dº*®]ÌYÍ.®]­×¶Kµ¶±Ñ»#­0³ °¯¾KµYÃÿY®W¶±’­¯Ì)³0µ­)Àgµ²Ñ±’Ì#­¯Í.®W­ ¶4»Ïâ¼'µ¾µ²’²Ñ®]²ÃK»#¾*¼'»Y¾µ7Ä[Ù ­ Kë!ù7 ëï9:B}w~vïW ]ó ïòô7 [ ëñyóvïgêâëó887‚ëóc@$Kïòñ €âùWñðô Wóvï]ôyôñÉ÷(]ó)ùW³7¼)µÌ#®WºÅw #Û  Åw 7³¬Qä¯Ì'Ä ý7®W±dQ±yµµ­'ÀOÕgµ¾¶*·)µßµ²’Í.®]¾!ÄdÛYÇ#ǯÅYÄd¿4»#­  ®W¾*¶±’­¯Ì@À ®KÖ ¼'®W­)À ®W­'ÃKÞ^º¶*¾ä)ÃK¶*ä7¾*®Wº4¶*»[¼¯·¯¾Kµº®âº¶*¾*ä'ÃK¶*䯾®Wº!Ä}Ù ­l6ëù87 ëQŽï $‚mõ0+Jêâëó8w]ó)ùW³'Õgµ¾K÷0Ä ý7®W±bQ±yµ¯³0Õ3µ¾*¶·)µ.ßµ²ÑÍήW¾!³ H ±Éµ­ áâ®W­PQ䯮Y³`Õgµ¾Þ@в’²’®W­ T[ÃK䯾»áRº*ÿ ± ³ þY»Y·7­;´m»  µ¾±ÑÿB³Yý7ä Ö*ãm»Y­¯Ì¿4·¯±’»Yä`³Y½ ·¯±’ÊW·¯® à ä'µ­¯Ì'³QÒ»Y­ Þ ´Q¾»¯Ã*·`³µ­)ÀJÕT±Ñ¶K÷¹Õgµ¾ÃKä7º!ąÛYÇ#ÇYÇ7Ä ãm®  ®]²Ñ»#¼¯±’­¯ÌRÌYä7±ÉÀv®W²’±Ñ­7®Wºµ­)Àm®W­¯º*ä7¾*±’­¯ÌkÃç»Y­¯º±Ñº¶*®W­'ÃKÞ[π»#¾ ÷¯±’­¯®Wº®R¶*®Kȯ¶_µ­¯­7»Y¶µ¶*±’»Y­`ÄBÙ ­l6ëùñ=ó ÷!øRëQ4ï9:6,ù4" ëó*3õöðó ÷Úî¯ðÚ÷(Snoçø]ëîIùKø;ðóP^Jðô îvðÚïòñëó‹êâëó88" ]ó)ùW³7þY䯭¯®#Ä ´m®W­!ËK± Á µÍµYÀ¯µdµ­)Àg´m®  ±’­‰´Q­7±ÑÌ#· ¶!Ä^Û#ÇYÇ7ÅYÄ[¬xº*Þ ­ ¶µÚÈ×Ö °)µº*®!À‚º¶µ¶±’º*¶*±yÃWµ² ¶*¾µ­¯º*²yµ¶*±’»Y­Íλ¯À ®W² ÄBÙ ­!ëù87BëQ4ï9: êâëów8Wó'ùë‚ï ]@[øKø!ëùWñ€ðÚïòñëóK!ë(êâëìmí)î#ïØðÚïòñëó)ðô õñ=ó ÷Úî ñ’øKï ñ€ùçøç³7¼)µÌ#®WºV YÛ  YÛYÆ7Ä ã[µ  ±ÉÀ Á µ¾»áRº*ÿ މµ­'ÀWp[¾KµYÃK® H Ì×µ±òÄ6ÛYÇYÇ7ÅYÄ6Ù ­)À ä)Ãç±Ñ­7Ì Íd䯲’¶*±’²’±’­¯ÌYä'µ²Y¼'»Yº4¶µÌYÌ#®W¾*ºRµ­)Àˆ­¯¼.°7¾µYÃÿ#®W¶*®W¾º  ±ÉµŽ¾*»Ö °¯ä¯º¶d¼¯¾*»ËK®!ÃK¶±Ñ»#­Gµ#ÃK¾*»#º*ºdµ²’±’ÌY­¯®!ÀOÃç»Y¾*¼'»Y¾Kµ¯ÄÙ ­ƒ6ëù87 ëQ)Eo@V@‚êõ/"21=Z Z%}Y³7¼)µÌ#®WºQÛYÇ#ǯÛYÇ 7Ä
2002
50
Translating Named Entities Using Monolingual and Bilingual Resources Yaser Al-Onaizan and Kevin Knight Information Sciences Institute University of Southern California 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 yaser,knight  @isi.edu Abstract Named entity phrases are some of the most difficult phrases to translate because new phrases can appear from nowhere, and because many are domain specific, not to be found in bilingual dictionaries. We present a novel algorithm for translating named entity phrases using easily obtainable monolingual and bilingual resources. We report on the application and evaluation of this algorithm in translating Arabic named entities to English. We also compare our results with the results obtained from human translations and a commercial system for the same task. 1 Introduction Named entity phrases are being introduced in news stories on a daily basis in the form of personal names, organizations, locations, temporal phrases, and monetary expressions. While the identification of named entities in text has received significant attention (e.g., Mikheev et al. (1999) and Bikel et al. (1999)), translation of named entities has not. This translation problem is especially challenging because new phrases can appear from nowhere, and because many named-entities are domain specific, not to be found in bilingual dictionaries. A system that specializes in translating named entities such as the one we describe here would be an important tool for many NLP applications. Statistical machine translation systems can use such a system as a component to handle phrase translation in order to improve overall translation quality. CrossLingual Information Retrieval (CLIR) systems could identify relevant documents based on translations of named entity phrases provided by such a system. Question Answering (QA) systems could benefit substantially from such a tool since the answer to many factoid questions involve named entities (e.g., answers to who questions usually involve Persons/Organizations, where questions involve Locations, and when questions involve Temporal Expressions). In this paper, we describe a system for ArabicEnglish named entity translation, though the technique is applicable to any language pair and does not require especially difficult-to-obtain resources. The rest of this paper is organized as follows. In Section 2, we give an overview of our approach. In Section 3, we describe how translation candidates are generated. In Section 4, we show how monolingual clues are used to help re-rank the translation candidates list. In Section 5, we describe how the candidates list can be extended using contextual information. We conclude this paper with the evaluation results of our translation algorithm on a test set. We also compare our system with human translators and a commercial system. 2 Our Approach The frequency of named-entity phrases in news text reflects the significance of the events they are associated with. When translating named entities in news stories of international importance, the same event Computational Linguistics (ACL), Philadelphia, July 2002, pp. 400-408. Proceedings of the 40th Annual Meeting of the Association for will most likely be reported in many languages including the target language. 
Instead of having to come up with translations for the named entities often with many unknown words in one document, sometimes it is easier for a human to find a document in the target language that is similar to, but not necessarily a translation of, the original document and then extract the translations. Let’s illustrate this idea with the following example: 2.1 Example We would like to translate the named entities that appear in the following Arabic excerpt:            !  #"!$ %  & '(  *),+   . /102. 34  5  $ !6    #" $ %  798 ;:  < 0  4  = . + 3  >@?   8 4   =BADC   E 4  < :   *  6F G H I$ J '   6  4   =  EBK * 4  < +   L 0 $ >  7    : 0   NM OP Q A   4SR  :    A  <  0 6 UT  V  W  0 M +  > 7 +   X     X  0     4 ZY  [ \ $ ]        E   The Arabic newspaper article from which we extracted this excerpt is about negotiations between the US and North Korean authorities regarding the search for the remains of US soldiers who died during the Korean war. We presented the Arabic document to a bilingual speaker and asked them to translate the locations “   L 0 $ >  7 tˇswzyn   . h˘ z¯an”, “  > 7 +  ¯awns¯an”, and “ T  V  W  0 M kwˇg¯anˇg.” The translations they provided were Chozin Reserve, Onsan, and Kojanj. It is obvious that the human attempted to sound out names and despite coming close, they failed to get them correctly as we will see later. When translating unknown or unfamiliar names, one effective approach is to search for an English document that discusses the same subject and then extract the translations. For this example, we start by creating the following Web query that we use with the search engine: Search Query 1: soldiers remains, search, North Korea, and US. This query returned many hits. The top document returned by the search engine1 we used contained the following paragraph: The targeted area is near Unsan, which saw several battles between the U.S. 1http://www.google.com/ Army’s 8th Cavalry regiment and Chinese troops who launched a surprise offensive in late 1950. This allowed us to create a more precise query by adding Unsan to the search terms: Search Query 2: soldiers remains, search, North Korea, US, and Unsan. This search query returned only 3 documents. The first one is the above document. The third is the top level page for the second document. The second document contained the following excerpt: Operations in 2001 will include areas of investigation near Kaechon, approximately 18 miles south of Unsan and Kujang. Kaechon includes an area nicknamed the ”Gauntlet,” where the U.S. Army’s 2nd Infantry Division conducted its famous fighting withdrawal along a narrow road through six miles of Chinese ambush positions during November and December 1950. More than 950 missing in action soldiers are believed to be located in these three areas. The Chosin Reservoir campaign left approximately 750 Marines and soldiers missing in action from both the east and west sides of the reservoir in northeastern North Korea. This human translation method gives us the correct translation for the names we are interested in. 2.2 Two-Step Approach Inspired by this, our goal is to tackle the named entity translation problem using the same approach described above, but fully automatically and using the least amount of hard-to-obtain bilingual resources. As shown in Figure 1, the translation process in our system is carried out in two main steps. 
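In code, the two-step organization amounts to composing a candidate generator with a re-scorer. The sketch below is illustrative only: generate_candidates stands in for the bilingual modules of Section 3, rescoring_factor for the monolingual clues of Section 4, and the assumption that re-scoring multiplies the original model score by a monolingual factor anticipates the re-ranking equation given there.

from typing import Callable, List, Tuple

Candidate = Tuple[str, float]   # (English candidate, model score)

def translate_named_entity(
    phrase: str,
    generate_candidates: Callable[[str], List[Candidate]],
    rescoring_factor: Callable[[str], float],
) -> List[Candidate]:
    # Step 1: produce a ranked candidate list from bilingual and monolingual resources.
    candidates = generate_candidates(phrase)
    # Step 2: re-rank with monolingual clues (e.g. normalized Web counts).
    rescored = [(c, s * rescoring_factor(c)) for c, s in candidates]
    rescored.sort(key=lambda pair: pair[1], reverse=True)
    return rescored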
Given a named entity in the source language, our translation algorithm first generates a ranked list of translation candidates using bilingual and monolingual resources, which we describe in the Section 3. Then, the list of candidates is re-scored using different monolingual clues (Section 4). NAMED ENTITIES DICTI- ONARY ARABIC DOC. ENGLISH NEWS CORPUS TRANSL- ITERATOR PERSON LOC & ORG RE MATCHER WWW CANDIDATES RE-RANKER RE-RANKED TRANS. CANDIDATES CANDIDATE GENERATOR TRANSLATION CANDIDATES Figure 1: A sketch of our named entity translation system. 3 Producing Translation Candidates Named entity phrases can be identified fairly accurately (e.g., Bikel et al. (1999) report an FMEASURE of 94.9%). In addition to identifying phrase boundaries, named-entity identifiers also provide the category and sub-category of a phrase (e.g., ENTITY NAME, and PERSON). Different types of named entities are translated differently and hence our candidate generator has a specialized module for each type. Numerical and temporal expressions typically use a limited set of vocabulary words (e.g., names of months, days of the week, etc.) and can be translated fairly easily using simple translation patterns. Therefore, we will not address them in this paper. Instead we will focus on person names, locations, and organizations. But before we present further details, we will discuss how words can be transliterated (i.e., “sounded-out”), which is a crucial component of our named entity translation algorithm. 3.1 Transliteration Transliteration is the process of replacing words in the source language with their approximate phonetic or spelling equivalents in the target language. Transliteration between languages that use similar alphabets and sound systems is very simple. However, transliterating names from Arabic into English is a non-trivial task, mainly due to the differences in their sound and writing systems. Vowels in Arabic come in two varieties: long vowels and short vowels. Short vowels are rarely written in Arabic in newspaper text, which makes pronunciation and meaning highly ambiguous. Also, there is no oneto-one correspondence between Arabic sounds and English sounds. For example, English P and B are both mapped into Arabic “   b”; Arabic “ h. ” and “  h-” into English H; and so on. Stalls and Knight (1998) present an Arabic-toEnglish back-transliteration system based on the source-channel framework. The transliteration process is based on a generative model of how an English name is transliterated into Arabic. It consists of several steps, each is defined as a probabilistic model represented as a finite state machine. First, an English word is generated according to its unigram probabilities  . Then, the English word is pronounced with probability    , which is collected directly from an English pronunciation dictionary. Finally, the English phoneme sequence is converted into Arabic writing with probability   . According to this model, the transliteration probability is given by the following equation:        (1) The transliterations proposed by this model are generally accurate. However, one serious limitation of this method is that only English words with known pronunciations can be produced. Also, human translators often transliterate words based on how they are spelled in the source language. For example, Graham is transliterated into Arabic as “      ˙gr¯ah¯am” and not as “     ˙gr¯am”. 
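Spelled out, the generative story just described corresponds to a phonetic-model score of roughly the form

P_p(w_e, w_a) = P(w_e) · Σ_e P(e | w_e) · P(w_a | e),

where w_e is the English word, e ranges over its possible phoneme sequences, and w_a is the Arabic writing; English candidates for a given Arabic input are ranked by this joint score. This is one plausible rendering of the composition described above, and the exact form of Equation (1) may differ in detail.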
To address these limitations, we extend this approach by using a new spelling-based model in addition to the phonetic-based model. The spelling-based model we propose (described in detail in (Al-Onaizan and Knight, 2002)) directly maps English letter sequences into Arabic letter sequences with probability    , which are trained on a small English/Arabicname list without the need for English pronunciations. Since no pronunciations are needed, this list is easily obtainablefor many language pairs. We also extend the model   to include a letter trigram model in addition to the word unigram model. This makes it possible to generate words that are not already defined in the word unigram model. The transliteration score according to this model is given by:        (2) The phonetic-based and spelling-based models are combined into a single transliteration model. The transliteration score for an English word  given an Arabic word  is a linear combination of the phonetic-based and the spelling-based transliteration scores as follows:             (3) 3.2 Producing Candidates for Person Names Person names are almost always transliterated. The translation candidates for typical person names are generated using the transliteration module described above. Finite-state devices produce a lattice containing all possible transliterations for a given name. The candidate list is created by extracting the n-best transliterations for a given name. The score of each candidate in the list is the transliteration probability as given by Equation 3. For example, the name “ 0   ?(   klyntwn     byl” is transliterated into: Bell Clinton, Bill Clinton, Bill Klington, etc. 3.3 Producing Candidates for Location and Organization Names Words in organization and location names, on the other hand, are either translated (e.g., “   . h˘ z¯an” as Reservoir) or transliterated (e.g., “   L 0 $ >  7 tˇswzyn” as Chosin), and it is not clear when a word must be translated and when it must be transliterated. So to generate translation candidates for a given phrase , words in the phrase are first translated using a bilingual dictionary and they are also transliterated. Our candidate generator combines the dictionary entries and n-best transliterations for each word in the given phrase into a regular expression that accepts all possible permutations of word translation/transliteration combinations. In addition to the word transliterations and translations, English zero-fertility words (i.e., words that might not have Arabic equivalents in the named entity phrase such as of and the) are considered. This regular expression is then matched against a large English news corpus. All matches are then scored according to their individual word translation/transliteration scores. The score for a given candidate is given by a modified IBM Model 1 probability (Brown et al., 1993) as follows:           (4)          ! #"%$ & ! '  (5) where ( is the length of , ) is the length of ,  is a scaling factor based on the number of matches of found, and  ! is the index of the English word aligned with ! according to alignment  . The probability $  ' !  is a linear combination of the transliteration and translation score, where the translation score is a uniform probability over all dictionary entries for ! . The scored matches form the list of translation candidates. For example, the candidate list for “   L   G H  al-h˘ n¯azyr *     W h˘ lyˇg” includes Bay of Pigs and Gulf of Pigs. 
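A simplified sketch of the candidate generator for multi-word phrases follows. It enumerates word-for-word combinations directly and scores a candidate as the product of its per-word scores, standing in for the regular-expression matching against a news corpus and the modified IBM Model 1 scoring described above; zero-fertility words and reordering are ignored, and all names in the sketch are illustrative rather than the system's actual interface.

from itertools import product
from typing import Dict, List, Tuple

def phrase_candidates(
    source_words: List[str],
    dictionary: Dict[str, List[Tuple[str, float]]],        # word -> [(translation, score)]
    transliterations: Dict[str, List[Tuple[str, float]]],  # word -> n-best [(transliteration, score)]
) -> List[Tuple[str, float]]:
    # Each source word may be translated (dictionary) or transliterated (n-best list).
    per_word_options = []
    for w in source_words:
        options = dictionary.get(w, []) + transliterations.get(w, [])
        if not options:
            options = [(w, 1.0)]            # unknown word: pass it through unchanged
        per_word_options.append(options)

    candidates = []
    for combo in product(*per_word_options):  # every translation/transliteration mix
        phrase = " ".join(word for word, _ in combo)
        score = 1.0
        for _, s in combo:
            score *= s
        candidates.append((phrase, score))
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return candidates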
4 Re-Scoring Candidates Once a ranked list of translation candidates is generated for a given phrase, several monolingual English resources are used to help re-rank the list. The candidates are re-ranked according to the following equation: +-, . / 0 +#1 32 &/ 547698&/  (6) where 698&/  is the re-scoring factor used. Straight Web Counts: (Grefenstette, 1999) used phrase Web frequency to disambiguate possible English translations for German and Spanish compound nouns. We use normalized Web counts of named entity phrases as the first re-scoring factor used to rescore translation candidates. For the “ 0   ?(   klyntwn     byl” example, the top two translation candidates are Bell Clinton with transliteration score :;4<=?> A@ and Bill Clinton with score B :DC54E= > "F . The Web frequency counts of these two names are: G B and HIGJ=KLHIGG respectively. This gives us revised scores of :4  = > " and B : B H 4 = > "  , respectively, which leads to the correct translation being ranked highest. It is important to consider counts for the full name rather than the individual words in the name to get accurate counts. To illustrate this point consider the person name “  *M kyl 02.  ˇgwn.” The transliteration module proposes Jon and John as possible transliterations for the first name, and Keele and Kyl among others for the last name. The normalized counts for the individual words are: (John, 0.9269), (Jon, 0.0688), (Keele, 0.0032), and (Kyl, 0.0011). To use these normalized counts to score and rank the first name/last name combinations in a way similar to a unigram language model, we would get the following name/score pairs: (John Keele, 0.003), (John Kyl, 0.001), (Jon Keele, 0.0002), and (Jon Kyl, CK: 4 = > ). However, the normalized phrase counts for the possible full names are: (Jon Kyl, 0.8976), (John Kyl, 0.0936), (John Keele, 0.0087), and (Jon Keele, 0.0001), which is more desirable as Jon Kyl is an often-mentioned US Senator. Co-reference: When a named entity is first mentioned in a news article, typically the full form of the phrase (e.g., the full name of a person) is used. Later references to the name often use a shortened version of the name (e.g, the last name of the person). Shortened versions are more ambiguous by nature than the full version of a phrase and hence more difficult to translate. Also, longer phrases tend to have more accurate Web counts than shorter ones as we have shown above. For example, the phrase “    0 6  alnw¯ab   G   mˇgls” is translated as the House of Representatives. The word “  !'( [  al-mˇgls”2 might be used for later references to this phrase. In that case, we are confronted with the task of translating “  !'( [  al-mˇgls” which is ambiguous and could refer to a number of things including: the Council when referring to “  al mn   G   mˇgls” (the Security Council); the House when referring to ‘    0  6  al-nw¯ab   G   mˇgls” (the House of Representatives); and as the Assembly when referring to “     al mt   G   mˇgls” (National Assembly). 2“   al-mˇgls” is the same word as “    mˇgls” but with the definite article  a- attached. If we are able to determine that in fact it was referring to the House of Representatives, then, we can translate it accurately as the House. This can be done by comparing the shortened phrase with the rest of the named entity phrases of the same type. If the shortened phrase is found to be a sub-phrase of only one other phrase, then, we conclude that the shortened phrase is another reference to the same named entity. 
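The straight-Web-count re-scoring can be sketched as below, assuming the re-scoring factor is the candidate's Web count normalized over the candidate list and that the revised score is the product of the original model score and that factor; the counts themselves would come from a search engine and are passed in here as a plain dictionary.

from typing import Dict, List, Tuple

def rescore_with_web_counts(
    candidates: List[Tuple[str, float]],   # (candidate phrase, model score)
    web_counts: Dict[str, int],            # candidate phrase -> raw Web hit count
) -> List[Tuple[str, float]]:
    # Normalize counts over the candidate list, then scale each model score.
    total = sum(web_counts.get(cand, 0) for cand, _ in candidates)
    rescored = []
    for cand, score in candidates:
        factor = web_counts.get(cand, 0) / total if total else 0.0
        rescored.append((cand, score * factor))
    rescored.sort(key=lambda pair: pair[1], reverse=True)
    return rescored

As in the examples above, counting the full phrase (Bill Clinton, Jon Kyl) rather than its individual words is what allows the frequent, correct form to rise to the top.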
In that case we use the counts of the longer phrase to re-rank the candidates of the shorter one. Contextual Web Counts: In some cases straight Web counting does not help the re-scoring. For example, the top two translation candidates for “ + L  m¯arwn Q 6F  + dwn¯ald” are Donald Martin and Donald Marron. Their straight Web counts are 2992 and 2509, respectively. These counts do not change the ranking of the candidates list. We next seek a more accurate counting method by counting phrases only if they appear within a certain context. Using search engines, this can be done using the boolean operator AND. For the previous example, we use Wall Street as the contextual information In this case we get the counts 15 and 113 for Donald Martin and Donald Marron, respectively. This is enough to get the correct translation as the top candidate. The challenge is to find the contextual information that provide the most accurate counts. We have experimented with several techniques to identify the contextual information automatically. Some of these techniques use document-wide contextual information such as the title of the document or select key terms mentioned in the document. One way to identify those key terms is to use the tf.idf measure. Others use contextual information that is local to the named entity in question such as the words that precede and/or succeed the named entity or other named entities mentioned closely to the one in question. 5 Extending the Candidates List The re-scoring methods described above assume that the correct translation is in the candidates list. When it is not in the list, the re-scoring will fail. To address this situation, we need to extrapolate from the candidate list. We do this by searching for the correct translation rather than generating it. We do that by using sub-phrases from the candidates list or by searching for documents in the target language similar to the one being translated. For example, for a person name, instead of searching for the full name, we search for the first name and the last name separately. Then, we use the IdentiFinder named entity identifier (Bikel et al., 1999) to identify all named entities in the top retrieved documents for each sub-phrase. All named entities of the type of the named entity in question (e.g., PERSON) found in the retrieved documents and that contain the sub-phrase used in the search are scored using our transliteration module and added to the list of translation candidates, and the re-scoring is repeated. To illustrate this method, consider the name “  ! n¯an 4  < 0 M kwfy.” Our translation module proposes: Coffee Annan, Coffee Engen, Coffee Anton, Coffee Anyone, and Covey Annan but not the correct translation KofiAnnan. We would like to find the most common person names that have either one of Coffee or Covey as a first name; or Annan, Engen, Anton, or Anyone as a last name. One way to do this is to search using wild cards. Since we are not aware of any search engine that allows wild-card Web search, we can perform a wild-card search instead over our news corpus. The problem is that our news corpus is dated material, and it might not contain the information we are interested in. In this case, our news corpus, for example, might predate the appointment of KofiAnnan as the Secretary General of the UN. Alternatively, using a search engine, we retrieve the top matching documents for each of the names Coffee, Covey, Annan, Engen, Anton, and Anyone. 
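The list-extension step can be pictured as follows. search_documents, extract_person_names, and transliteration_score are placeholders for the search engine, the IdentiFinder tagger, and the transliteration module respectively; none of these names come from the system itself.

from typing import Callable, List, Tuple

def extend_candidates(
    sub_phrases: List[str],                               # e.g. first and last names
    candidates: List[Tuple[str, float]],
    search_documents: Callable[[str], List[str]],         # query -> retrieved documents
    extract_person_names: Callable[[str], List[str]],     # document -> PERSON phrases
    transliteration_score: Callable[[str], float],        # candidate -> score against the source name
) -> List[Tuple[str, float]]:
    # Add person names found near known sub-phrases to the candidate list.
    known = {cand for cand, _ in candidates}
    extended = list(candidates)
    for sub in sub_phrases:
        for doc in search_documents(sub):
            for name in extract_person_names(doc):
                # keep only names that actually contain the sub-phrase searched for
                if sub in name.split() and name not in known:
                    extended.append((name, transliteration_score(name)))
                    known.add(name)
    return extended

The extended list is then re-scored once more, which is how Kofi Annan ends up ranked first in the example above.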
All person names found in the retrieved documents that contain any of the first or last names we used in the search are added to the list of translation candidates. We hope that the correct translation is among the names found in the retrieved documents. The rescoring procedure is applied once more on the expanded candidates list. In this example, we add Kofi Annan to the candidate list, and it is subsequently ranked at the top. To address cases where neither the correct translation nor any of its sub-phrases can be found in the list of translation candidates, we attempt to search for, instead of generating, translation candidates. This can be done by searching for a document in the target language that is similar to the one being translated from the source language. This is especially useful when translating named entities in news stories of international importance where the same event will most likely be reported in many languages including the target language. We currently do this by repeating the extrapolation procedure described above but this time using contextual information such as the title of the original document to find similar documents in the target language. Ideally, one would use a Cross-Lingual IR system to find relevant documents more successfully. 6 Evaluation and Discussion 6.1 Test Set This section presents our evaluation results on the named entity translation task. We compare the translation results obtained from human translations, a commercial MT system, and our named entity translation system. The evaluation corpus consists of two different test sets, a development test set and a blind test set. The first set consists of 21 Arabic newspaper articles taken from the political affairs section of the daily newspaper Al-Riyadh. Named entity phrases in these articles were hand-tagged according to the MUC (Chinchor, 1997) guidelines. They were then translated to English by a bilingual speaker (a native speaker of Arabic) given the text they appear in. The Arabic phrases were then paired with their English translations. The blind test set consists of 20 Arabic newspaper articles that were selected from the political section of the Arabic daily Al-Hayat. The articles have already been translated into English by professional translators.3 Named entity phrases in these articles were hand-tagged, extracted, and paired with their English translations to create the blind test set. Table 1 shows the distribution of the named entity phrases into the three categories PERSON, ORGANIZATION , and LOCATION in the two data sets. The English translations in the two data sets were reviewed thoroughly to correct any wrong translations made by the original translators. For example, to find the correct translation of a politician’s name, official government web pages were used to find the 3The Arabic articles along with their English translations were part of the FBIS 2001 Multilingual corpus. Test Set PERSON ORG LOC Development 33.57 25.62 40.81 Blind 28.38 21.96 49.66 Table 1: The distribution of named entities in the test sets into the categories PERSON, ORGANIZATION , and LOCATION. The numbers shown are the ratio of each category to the total. correct spelling. In cases where the translation could not be verified, the original translation provided by the human translator was considered the “correct“ translation. The Arabic phrases and their correct translations constitute the gold-standard translation for the two test sets. 
According to our evaluation criteria, only translations that match the gold-standard are considered as correct. In some cases, this criterion is too rigid, as it will consider perfectly acceptable translations as incorrect. However, since we use it mainly to compare our results with those obtained from the human translations and the commercial system, this criterion is sufficient. The actual accuracy figures might be slightly higher than what we report here. 6.2 Evaluation Results In order to evaluate human performance at this task, we compared the translations by the original human translators with the correct translations on the goldstandard. The errors made by the original human translators turned out to be numerous, ranging from simple spelling errors (e.g., Custa Rica vs. Costa Rica) to more serious errors such as transliteration errors (e.g., John Keele vs. Jon Kyl) and other translation errors (e.g., Union Reserve Council vs. Federal Reserve Board). The Arabic documents were also translated using a commercial Arabic-to-English translation system.4 The translation of the named entity phrases are then manually extracted from the translated text. When compared with the gold-standard, nearly half of the phrases in the development test set and more than a third of the blind test were translated incorrectly by the commercial system. The errors can be classified into several categories including: poor 4We used Sakhr’s Web-based translation system available at http://tarjim.ajeeb.com/. transliterations (e.g., Koln Baol vs. Colin Powell), translating a name instead of sounding it out (e.g., O’Neill’s urine vs. Paul O’Neill), wrong translation (e.g., Joint Corners Organization vs. Joint Chiefs of Staff) or wrong word order (e.g.,the Church of the Orthodox Roman). Table 2 shows a detailed comparison of the translation accuracy between our system, the commercial system, and the human translators. The translations obtained by our system show significant improvement over the commercial system. In fact, in some cases it outperforms the human translator. When we consider the top-20 translations,our system’s overall accuracy (84%) is higher than the human’s (75.3%) on the blind test set. This means that there is a lot of room for improvement once we consider more effective re-scoring methods. Also, the top-20 list in itself is often useful in providing phrasal translation candidates for general purpose statistical machine translation systems or other NLP systems. The strength of our translation system is in translating person names, which indicates the strength of our transliteration module. This might also be attributed to the low named entity coverage of our bilingual dictionary. In some cases, some words that need to be translated (as opposed to transliterated) are not found in our bilingual dictionary which may lead to incorrect location or organization translations but does not affect person names. The reason word translations are sometimes not found in the dictionary is not necessarily because of the spotty coverage of the dictionary but because of the way we access definitions in the dictionary. Only shallow morphological analysis (e.g., removing prefixes and suffixes) is done before accessing the dictionary, whereas a full morphological analysis is necessary, especially for morphologically rich languages such as Arabic. 
Another reason for doing poorly on organizations is that acronyms and abbreviations in the Arabic text (e.g., “  + w¯as,” the Saudi Press Agency) are currently not handled by our system. The blind test set was selected from the FBIS 2001 Multilingual Corpus. The FBIS data is collected by the Foreign Broadcast Information Service for the benefit of the US government. We suspect that the human translators who translated the documents into English are somewhat familiar with the genre of the articles and hence the named entities System Accuracy (%) PERSON ORG LOC Overall Human Sakhr Top-1 Results Top-20 Results 60.00 71.70 86.10 73.70 29.47 51.72 72.73 52.80 77.20 43.30 69.00 65.20 84.80 55.00 70.50 71.33 (a) Results on the Development Test Set System Accuracy (%) PERSON ORG LOC Overall Human Sakhr Top-1 Results Top-20 Results 67.89 42.20 94.68 75.30 47.71 36.05 80.80 61.30 64.24 51.00 86.68 72.57 78.84 70.80 92.86 84.00 (b) Results on the Blind Test Set Table 2: A comparison of translation accuracy for the human translator, commercial system, and our system on the development and blind test sets. Only a match with the translation in the gold-standard is considered a correct translation. The human translator results are obtained by comparing the translations provided by the original human translator with the translations in the gold-standard. The Sakhr results are for the Web version of Sakhr’s commercial system. The Top-1 results of our system considers whether the correct answer is the top candidate or not, while the Top-20 results considers whether the correct answer is among the top-20 candidates. Overall is a weighted average of the three named entity categories. Module Accuracy (%) PERSON ORG LOC Overall Candidate Generator Straight Web Counts Contextual Web Counts Co-reference 59.85 31.67 54.00 49.96 75.76 37.97 63.37 61.02 75.76 39.17 67.50 63.01 77.20 43.30 69.00 65.20 (a) Results on the Development test set Module Accuracy (%) PERSON ORG LOC Overall Candidate Generator Straight Web Counts Contextual Web Counts Co-reference 54.33 51.55 85.75 69.44 61.00 46.60 86.68 70.66 62.50 45.34 85.75 70.40 64.24 51.00 86.68 72.57 (b) Results on the Blind Test Set Table 3: This table shows the accuracy after each translation module. The modules are applied incrementally. Straight Web Counts re-score candidates based on their Web counts. Contextual Web Counts uses Web counts within a given context (we used here title of the document as the contextual information). In Co-reference, if the phrase to be translated is part of a longer phrase then we use the the ranking of the candidates for the longer phrase to re-rank the candidates of the short one, otherwise we leave the list as is. that appear in the text. On the other hand, the development test set was randomly selected by us from our pool of Arabic articles and then submitted to the human translator. Therefore, the human translations in the blind set are generally more accurate than the human translations in the development test. Another reason might be the fact that the human translator who translated the development test is not a professional translator. The only exception to this trend is organizations. After reviewing the translations, we discovered that many of the organization translations provided by the human translator in the blind test set that were judged incorrect were acronyms or abbreviations for the full name of the organization (e.g., the INC instead of the Iraqi National Congress). 
6.3 Effects of Re-Scoring As we described earlier in this paper, our translation system first generates a list of translation candidates, then re-scores them using several re-scoring methods. The list of translation candidates we used for these experiments are of size 20. The re-scoring methods are applied incrementally where the reranked list of one module is the input to the next module. Table 3 shows the translation accuracy after each of the methods we evaluated. The most effective re-scoring method was the simplest, the straight Web counts. This is because re-scoring methods are applied incrementally and straight Web counts was the first to be applied, and so it helps to resolve the “easy” cases, whereas the other methods are left with the more “difficult” cases. It would be interesting to see how rearranging the order in which the modules are applied might affect the overall accuracy of the system. The re-scoring methods we used so far are in general most effective when applied to person name translation because corpus phrase counts are already being used by the candidate generator for producing candidates for locations and organizations, but not for persons. Also, the re-scoring methods we used were initially developed and applied to person names. More effective re-scoring methods are clearly needed especially for organization names. One method is to count phrases only if they are tagged by a named entity identifier with the same tag we are interested in. This way we can eliminate counting wrong translations such as enthusiasm when translating “  W h. m¯as” (Hamas). 7 Conclusion and Future Work We have presented a named entity translation algorithm that performs at near human translation accuracy when translating Arabic named entities to English. The algorithm uses very limited amount of hard-to-obtain bilingual resources and should be easily adaptable to other languages. We would like to apply to other languages such as Chinese and Japanese and to investigate whether the current algorithm would perform as well or whether new algorithms might be needed. Currently, our translation algorithm does not use any dictionary of named entities and they are translated on the fly. Translating a common name incorrectly has a significant effect on the translation accuracy. We would like to experiment with adding a small named entity translation dictionary for common names and see if this might improve the overall translation accuracy. Acknowledgments This work was supported by DARPA-ITO grant N66001-00-1-9814. References Yaser Al-Onaizan and Kevin Knight. 2002. Machine Transliteration of Names in Arabic Text. In Proceedings of the ACL Workshop on Computational Approaches to Semitic Languages. Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel. 1999. An algorithm that learns what’s in a name. Machine Learning, 34(1/3). P. F. Brown, S. A. Della-Pietra, V. J. Della-Pietra, and R. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2). Nancy Chinchor. 1997. MUC-7 Named Entity Task Definition. In Proceedings of the 7th Message Understanding Conference. http://www.muc.saic.com/. Gregory Grefenstette. 1999. The WWW as a Resource for Example-Based MT Tasks. In ASLIB’99 Translating and the Computer 21. Andrei Mikheev, Marc Moens, and Calire Grover. 1999. Named Entity Recognition without Gazetteers. In Proceedings of the EACL. Bonnie G. Stalls and Kevin Knight. 1998. Translating Names and Technical Terms in Arabic Text. 
In Proceedings of the COLING/ACL Workshop on Computational Approaches to Semitic Languages.
2002
51
Using Similarity Scoring To Improve the Bilingual Dictionary for Word Alignment Katharina Probst Language Technologies Institute Carnegie Mellon University Pittsburgh, PA, USA, 15213 [email protected] Ralf Brown Language Technologies Institute Carnegie Mellon University Pittsburgh, PA, USA, 15213 [email protected] Abstract We describe an approach to improve the bilingual cooccurrence dictionary that is used for word alignment, and evaluate the improved dictionary using a version of the Competitive Linking algorithm. We demonstrate a problem faced by the Competitive Linking algorithm and present an approach to ameliorate it. In particular, we rebuild the bilingual dictionary by clustering similar words in a language and assigning them a higher cooccurrence score with a given word in the other language than each single word would have otherwise. Experimental results show a significant improvement in precision and recall for word alignment when the improved dicitonary is used. 1 Introduction and Related Work Word alignment is a well-studied problem in Natural Language Computing. This is hardly surprising given its significance in many applications: wordaligned data is crucial for example-based machine translation, statistical machine translation, but also other applications such as cross-lingual information retrieval. Since it is a hard and time-consuming task to hand-align bilingual data, the automation of this task receives a fair amount of attention. In this paper, we present an approach to improve the bilingual dictionary that is used by word alignment algorithms. Our method is based on similarity scores between words, which in effect results in the clustering of morphological variants. One line of related work is research in clustering based on word similarities. This problem is an area of active research in the Information Retrieval community. For instance, Xu and Croft (1998) present an algorithm that first clusters what are assumedly variants of the same word, then further refines the clusters using a cooccurrence related measure. Word variants are found via a stemmer or by clustering all words that begin with the same three letters. Another technique uses similarity scores based on Ngrams (e.g. (Kosinov, 2001)). The similarity of two words is measured using the number of N-grams that their occurrences have in common. As in our approach, similar words are then clustered into equivalence classes. Other related work falls in the category of word alignment, where much research has been done. A number of algorithms have been proposed and evaluated for the task. As Melamed (2000) points out, most of these algorithms are based on word cooccurrences in sentence-aligned bilingual data. A source language word  and a target language word   are said to cooccur if  occurs in a source language sentence and   occurs in the corresponding target language sentence. Cooccurrence scores then are then counts for all word pairs  and  , where  is in the source language vocabulary and   is in the target language vocabulary. Often, the scores also take into account the marginal probabilites of each word and sometimes also the conditional probabilities of one word given the other. Aside from the classic statistical approach of Computational Linguistics (ACL), Philadelphia, July 2002, pp. 409-416. Proceedings of the 40th Annual Meeting of the Association for (Brown et al., 1990; Brown et al., 1993), a number of other algorithms have been developed. Ahrenberg et al. 
(1998) use morphological information on both the source and the target languages. This information serves to build equivalence classes of words based on suffices. A different approach was proposed by Gaussier (1998). This approach models word alignments as flow networks. Determining the word alignments then amounts to solving the network, for which there are known algorithms. Brown (1998) describes an algorithm that starts with ‘anchors’, words that are unambiguous translations of each other. From these anchors, alignments are expanded in both directions, so that entire segments can be aligned. The algorithm that this work was based on is the Competitive Linking algorithm. We used it to test our improved dictionary. Competitive Linking was described by Melamed (1997; 1998; 2000). It computes all possible word alignments in parallel data, and ranks them by their cooccurrence or by a similar score. Then links between words (i.e. alignments) are chosen from the top of the list until no more links can be assigned. There is a limit on the number of links a word can have. In its basic form the Competitive Linking algorithm (Melamed, 1997) allows for only up to one link per word. However, this one-toone/zero-to-one assumption is relaxed by redefining the notion of a word. 2 Competitive Linking in our work We implemented the basic Competitive Linking algorithm as described above. For each pair of parallel sentences, we construct a ranked list of possible links: each word in the source language is paired with each word in the target language. Then for each word pair the score is looked up in the dictionary, and the pairs are ranked from highest to lowest score. If a word pair does not appear in the dictionary, it is not ranked. The algorithm then recursively links the word pair with the highest cooccurrence, then the next one, etc. In our implementation, linking is performed on a sentence basis, i.e. the list of possible links is constructed only for one sentence pair at a time. Our version allows for more than one link per word, i.e. we do not assume one-to-one or zero-toone alignments between words. Furthermore, our implementation contains a threshold that specifies how high the cooccurrence score must be for the two words in order for this pair to be considered for a link. 3 The baseline dictionary In our experiments, we used a baseline dictionary, rebuilt the dictionary with our approach, and compared the performance of the alignment algorithm between the baseline and the rebuilt dictionary. The dictionary that was used as a baseline and as a basis for rebuilding is derived from bilingual sentencealigned text using a count-and-filter algorithm: Count: for each source word type, count the number of times each target word type cooccurs in the same sentence pair, as well as the total number of occurrences of each source and target type. Filter: after counting all cooccurrences, retain only those word pairs whose cooccurrence probability is above a defined threshold. To be retained, a word pair  ,  must satisfy          "!$#&% !$'( where ) *+ is the number of times the two words cooccurred. By making the threshold vary with frequency, one can control the tendency for infrequent words to be included in the dictionary as a result of chance collocations. 
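The count-and-filter construction, together with the sentence-level Competitive Linking pass that consumes the resulting dictionary, might look roughly like the sketch below. The retained-pair criterion shown (cooccurrence count divided by the larger of the two word frequencies must reach the threshold) is one plausible reading of the filter described above rather than its exact formula, and the per-word link limit and minimum dictionary score mirror the settings discussed in Section 2.

from collections import Counter
from typing import Dict, List, Tuple

def build_dictionary(
    sentence_pairs: List[Tuple[List[str], List[str]]],  # (source tokens, target tokens)
    threshold: float,
) -> Dict[Tuple[str, str], int]:
    # Count: cooccurrences of each source/target type per sentence pair,
    # plus total occurrences of every source and target word.
    cooc = Counter()
    src_freq = Counter()
    tgt_freq = Counter()
    for src, tgt in sentence_pairs:
        src_freq.update(src)
        tgt_freq.update(tgt)
        for s in set(src):
            for t in set(tgt):
                cooc[(s, t)] += 1
    # Filter: keep pairs whose cooccurrence probability clears the threshold.
    return {
        (s, t): c
        for (s, t), c in cooc.items()
        if c / max(src_freq[s], tgt_freq[t]) >= threshold
    }

def competitive_link(
    src: List[str],
    tgt: List[str],
    dictionary: Dict[Tuple[str, str], int],
    max_links: int = 1,
    min_score: int = 1,
) -> List[Tuple[str, str]]:
    # Rank all word pairs of one sentence pair by dictionary score, then link greedily.
    ranked = sorted(
        ((dictionary.get((s, t), 0), s, t) for s in src for t in tgt),
        reverse=True,
    )
    links = []
    used = Counter()
    for score, s, t in ranked:
        if score < min_score:
            break                      # remaining pairs score even lower
        if used[("src", s)] < max_links and used[("tgt", t)] < max_links:
            links.append((s, t))
            used[("src", s)] += 1
            used[("tgt", t)] += 1
    return links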
The 50% cooccurrence probability of a pair of words with frequency 2 and a single cooccurrence is probably due to chance, while a 10% cooccurrence probability of words with frequency 5000 is most likely the result of the two words being translations of each other. In our experiments, we varied the threshold from 0.005 to 0.01 and 0.02. It should be noted that there are many possible algorithms that could be used to derive the baseline dictionary, e.g. ,.- , pointwise mutual information, etc. An overview of such approaches can be found in (Kilgarriff, 1996). In our work, we preferred to use the above-described method, because it this method is utilized in the example-based MT system being developed in our group (Brown, 1997). It has proven useful in this context. 4 The problem of derivational and inflectional morphology As the scores in the dictionary are based on surface form words, statistical alignment algorithms such as Competitive Linking face the problem of inflected and derived terms. For instance, the English word liberty can be translated into French as a noun (libert´e), or else as an adjective (libre), the same adjective in the plural (libres), etc. This happens quite frequently, as sentences are often restructured in translation. In such a case, libert´e, libre, libres, and all the other translations of liberty in a sense share their cooccurrence scores with liberty. This can cause problems especially because there are words that are overall frequent in one language (here, French), and that receive a high cooccurrence count regardless of the word in the other language (here, English). If the cooccurrence score between liberty and an unrelated but frequent word is higher than libres, then the algorithm will prefer a link between liberty and le over a link between liberty and libres, even if the latter is correct. As for a concrete example from the training data used in this study, consider the English word oil. This word is quite frequent in the training data and thus cooccurs at high counts with many target language words 1. In this case, the target language is French. The cooccurrence dictionary contains the following entries for oil among other entries: oil - et 543 oil - dans 118 oil - p´etrole 259 oil - p´etroli`ere 61 oil - p´etroli`eres 61 It can be seen that words such as et and dans receive higher coccurrence scores with oil than some correct translations of oil, such as p´etroli`ere, and p´etroli`eres, and, in the case of et, also p´etrole. This will cause the Competitive Linking algorithm to favor a link e.g. between oil and et over a link between oil and p´etrole. In particular, word variations can be due to inflectional morphology (e.g. adjective endings) and derivational morphology (e.g. a noun being trans1We used Hansards data, see the evaluation section for details. lated as an adjective due to sentence restructuring). Both inflectional and derivational morphology will result in words that are similar, but not identical, so that cooccurrence counts will score them separately. Below we describe an approach that addresses these two problems. In principle, we cluster similar words and assign them a new dictionary score that is higher than the scores of the individual words. In this way, the dictionary is rebuilt. This will influence the ranked list that is produced by the algorithm and thus the final alignments. 5 Rebuilding the dictionary based on similarity scores Rebuilding the dictionary is based largely on similarities between words. 
We have implemented an algorithm that assigns a similarity score to a pair of words     . The score is higher for a pair of similar words, while it favors neither shorter nor longer words. The algorithm finds the number of matching characters between the words, while allowing for insertions, deletions, and substitutions. The concept is thus very closely related to the Edit distance, with the difference that our algorithm counts the matching characters rather than the non-matching ones. The length of the matching substring (which is not necessarily continguous) is denoted by MatchStringLength). At each step, a character from   is compared to a character from   . If the characters are identical, the count for the MatchStringLength is incremented. Then the algorithm checks for reduplication of the character in one or both of the words. Reduplication also results in an incremented MatchStringLength. If the characters do not match, the algorithm skips one or more characters in either word. Then the longest common substring is put in relation to the length of the two words. This is done so as to not favor longer words that would result in a higher MatchStringLength than shorter words. The similarity score of   and   is then computed using the following formula:  '  *'  '    '  "!  '  #%$ This similarity scoring provides the basis for our newly built dictionary. The algorithm proceeds as follows: For any given source language word  , there are target language words '& )( such that the cooccurrence score *'+,+,*  ,    is greater than 0. Note that in most cases is much smaller than the size of the target language vocabulary, but also much greater than . For the words '& )( , the algorithm computes the similarity score for each word pair      , where        . Note that this computation is potentially very complex. The number of word pairs grows exponentially as grows. This problem is addressed by excluding word pairs whose cooccurrence scores are low, as will be discussed in more detail later. In the following, we use a greedy bottom-up clustering algorithm (Manning and Sch¨utze, 1999) to cluster those words that have high similarity scores. The clustering algorithm is initialized to clusters, where each cluster contains exactly one of the words & )( . In the first step, the algorithm clusters the pair of words with the maximum similarity score. The new cluster also stores a similarity score        , which in this case is the similarity score of the two clustered words. In the following steps, the algorithm again merges those two clusters that have the highest similarity score        . The clustering can occur in one of three ways: 1. Merge two clusters that each contain one word. Then the similarity score     of the merged cluster will be the similarity score of the word pair. 2. Merge a cluster *  that contains a single word   and a cluster *  that contains  words '&  and has      "!  % !$#  . Then the similarity score of the merged cluster is the average similarity score of the  -word cluster, averaged with the similarity scores between the single word and all  words in the cluster. This means that the algorithm computes the similarity score between the single word   in cluster *  and each of the  words in cluster *  , and averages them with    *   : &%(' #*)  # +-, ' /. ' # 0 $%!1,/, '  0  +3254) , # 0&0 6 !1, '  0 3. Merge two clusters that each contain more than a single word. 
In this case, the algorithm proceeds as in the second case, but averages the added similarity score over all word pairs. Suppose there exists a cluster c1 with m words v1, ..., vm and similarity score sim(c1), and a cluster c2 with k words w1, ..., wk and similarity score sim(c2). Then sim(cnew) is computed as follows:

sim(cnew) = ( sim(c1) + sim(c2) + Σ_{i=1..m} Σ_{j=1..k} sim(vi, wj) ) / (m * k + 2)

Clustering proceeds until a threshold, minsim, is exhausted. If none of the possible merges would result in a new cluster whose average similarity score sim(cnew) would be at least minsim, clustering stops. Then the dictionary entries are modified as follows: suppose that words t1, ..., tn are clustered, where all of t1, ..., tn cooccur with the source language word s, and denote the cooccurrence score of the word pair s and ti by cooc(s, ti). Then in the rebuilt dictionary the entry

(s, ti, cooc(s, ti))

will be replaced with

(s, ti, Σ_{j=1..n} cooc(s, tj))   if ti is one of the clustered words t1, ..., tn.

Not all words are considered for clustering. First, we compiled a stop list of target language words that are never clustered, regardless of their similarity and cooccurrence scores with other words. The words on the stop list are the 20 most frequent words in the target language training data. Section 4 argues why this exclusion makes sense: one of the goals of clustering is to enable variations of a word to receive a higher dictionary score than words that are very common overall. Furthermore, we have decided to exclude words from clustering that account for only few of the cooccurrences of s. In particular, a separate threshold, coocsratio, controls how high the cooccurrence score with s has to be in relation to all other scores between s and a target language word. coocsratio is applied as follows: a word ti qualifies for clustering if

cooc(s, ti) / Σ_{j=1..n} cooc(s, tj) >= coocsratio

As before, t1, ..., tn are all the target language words that cooccur with the source language word s. Similarly to the most frequent words, dictionary scores for word pairs that are too rare for clustering remain unchanged. This exclusion makes sense because words that cooccur infrequently are likely not translations of each other, so it is undesirable to boost their score by clustering. Furthermore, this threshold helps keep the complexity of the operation under control. The fewer words qualify for clustering, the fewer similarity scores for pairs of words have to be computed.

6 Evaluation

We trained three basic dictionaries using part of the Hansard data, around five megabytes of data (around 20k sentence pairs and 850k words). The basic dictionaries were built using the algorithm described in section 3, with three different thresholds: 0.005, 0.01, and 0.02. In the following, we will refer to these dictionaries as Dict0.005, Dict0.01, and Dict0.02. 50 sentences were held back for testing. These sentences were hand-aligned by a fluent speaker of French. No one-to-one assumption was enforced. A word could thus align to zero or more words, where no upper limit was enforced (although there is a natural upper limit). The Competitive Linking algorithm was then run with multiple parameter settings. In one setting, we varied the maximum number of links allowed per word, maxlinks. For example, if the maximum number is 2, then a word can align to 0, 1, or 2 words in the parallel sentence. In other settings, we enforced a minimum score in the bilingual dictionary for a link to be accepted, minscore.
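To make the role of maxlinks and minscore concrete, the following is a minimal sketch of a greedy, competitive-linking-style alignment step over a single sentence pair. It is only an illustration of the general idea under simplifying assumptions (the dictionary is a plain mapping from word pairs to scores, and ties are broken arbitrarily); it is not the implementation evaluated in this paper.

```python
def competitive_link(src_words, tgt_words, dictionary, maxlinks=1, minscore=1):
    """Greedy, competitive-linking-style alignment of one sentence pair.

    dictionary maps (source_word, target_word) -> score. Candidate links are
    considered in order of decreasing score; a link is accepted only if its
    score reaches minscore and neither word already has maxlinks links."""
    candidates = []
    for i, s in enumerate(src_words):
        for j, t in enumerate(tgt_words):
            score = dictionary.get((s, t), 0)
            if score >= minscore:
                candidates.append((score, i, j))
    candidates.sort(reverse=True)

    links, src_used, tgt_used = [], {}, {}
    for score, i, j in candidates:
        if src_used.get(i, 0) < maxlinks and tgt_used.get(j, 0) < maxlinks:
            links.append((i, j))
            src_used[i] = src_used.get(i, 0) + 1
            tgt_used[j] = tgt_used.get(j, 0) + 1
    return links

if __name__ == "__main__":
    dictionary = {("the", "le"): 900, ("oil", "pétrole"): 434, ("oil", "et"): 262}
    print(competitive_link(["the", "oil"], ["le", "pétrole"], dictionary,
                           maxlinks=1, minscore=50))  # [(0, 0), (1, 1)]
```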
This means that two words cannot be aligned if their score is below minscore. In the rebuilt dictionaries, minscore is applied in the same way. The dictionary was also rebuilt using a number of different parameter settings. The two parameters that can be varied when rebuilding the dictionary are the similarity threshold minsim and the cooccurrence threshold coocsratio. minsim enforces that all words within one cluster must have an average similarity score of at least minsim. The second threshold, coocsratio, enforces that only certain words are considered for clustering: those words that are considered for clustering should account for more than coocsratio of the cooccurrences of the source language word with any target language word. If a word falls below the threshold coocsratio, its entry in the dictionary remains unchanged, and it is not clustered with any other word. Below we summarize the values each parameter was set to.

maxlinks  Used in the Competitive Linking algorithm: maximum number of words any word can be aligned with. Set to: 1, 2, 3.

minscore  Used in the Competitive Linking algorithm: minimum score of a word pair in the dictionary to be considered as a possible link. Set to: 1, 2, 4, 6, 8, 10, 20, 30, 40, 50.

minsim  Used in rebuilding the dictionary: minimum average similarity score of the words in a cluster. Set to: 0.6, 0.7, 0.8.

coocsratio  Used in rebuilding the dictionary: minimum percentage of all cooccurrences of a source language word with any target language word that are accounted for by one target language word. Set to: 0.003.

Thus varying the parameters, we have constructed various dictionaries by rebuilding the three baseline dictionaries. Here, we report results on three dictionaries where minsim was set to 0.7 and coocsratio was set to 0.003. For these parameter settings, we observed robust results, although other parameter settings also yielded positive results. Precision and recall were measured using the hand-aligned 50 sentences. Precision was defined as the percentage of links that were correctly proposed by our algorithm out of all links that were proposed. Recall is defined as the percentage of links that were found by our algorithm out of all links that should have been found. In both cases, the hand-aligned data was used as a gold standard. The F-measure combines precision and recall:

F-measure = 2 * Precision * Recall / (Precision + Recall)

The following figures and tables illustrate that the Competitive Linking algorithm performs favorably when a rebuilt dictionary is used. Table 1 lists the improvement in precision and recall for each of the dictionaries. The table shows the values when minscore is set to 50 and up to 1 link was allowed per word. Furthermore, the p-values of a 1-tailed t-test are listed, indicating that these performance boosts are mostly highly statistically significant for these parameter settings, where some of the best results were observed.

                Dict0.005   Dict0.01   Dict0.02
P Improvement   0.060       0.067      0.057
P p-value       0.0003      0.0042     0.0126
R Improvement   0.094       0.11       0.087
R p-value       0.0026      0.0008     0.0037

Table 1: Percent improvement and p-value for recall and precision, comparing baseline and rebuilt dictionaries at minscore 50 and maxlinks 1.

The following figures (Figures 1-9) serve to illustrate the impact of the algorithm in greater detail. All figures plot the precision, recall, and F-measure performance against different minscore settings, comparing rebuilt dictionaries to their baselines.
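As a concrete reference for how these three measures are computed against the hand-aligned data, the sketch below scores a set of proposed links against gold-standard links. Representing links as (source position, target position) pairs is an assumption made here purely for illustration.

```python
def evaluate_alignment(proposed_links, gold_links):
    """Precision, recall, and F-measure of proposed word-alignment links
    against a hand-aligned gold standard. Links are (src_pos, tgt_pos) pairs."""
    proposed, gold = set(proposed_links), set(gold_links)
    correct = len(proposed & gold)
    precision = correct / len(proposed) if proposed else 0.0
    recall = correct / len(gold) if gold else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall > 0 else 0.0)
    return precision, recall, f_measure

if __name__ == "__main__":
    proposed = [(0, 0), (1, 2), (2, 1)]
    gold = [(0, 0), (1, 2), (2, 3), (3, 4)]
    print(evaluate_alignment(proposed, gold))  # approx. (0.67, 0.50, 0.57)
```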
For each dictionary, three plots are given, one for each maxlinks setting, i.e. the maximum number of links allowed per word. The curve names indicate the type of the curve (Precision, Recall, or F-measure), the maximum number of links allowed per word (1, 2, or 3), the dictionary used (Dict0.005, Dict0.01, or Dict0.02), and whether the run used the baseline dictionary or the rebuilt dictionary (Baseline or Cog7.3). It can be seen that our algorithm leads to stable improvement across parameter settings. In few cases, it drops below the baseline when minscore is low. Overall, however, our algorithm is robust: it improves alignment regardless of how many links are allowed per word and which baseline dictionary is used, and it boosts both precision and recall, and thus also the F-measure. To return briefly to the example cited in Section 4, we can now show how the dictionary rebuild has affected these entries. In the rebuilt dictionary, they now look as follows:

oil - et 262
oil - dans 118
oil - pétrole 434
oil - pétrolière 434
oil - pétrolières 434

The fact that pétrole, pétrolière, and pétrolières now receive higher scores than et and dans is what causes the alignment performance to increase.

Figure 1: Performance of dictionaries Dict0.005 for up to one link per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).
Figure 2: Performance of dictionaries Dict0.005 for up to two links per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).
Figure 3: Performance of dictionaries Dict0.005 for up to three links per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).
Figure 4: Performance of dictionaries Dict0.01 for up to one link per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).

7 Conclusions and Future Work

We have demonstrated how rebuilding a dictionary can improve the performance (both precision and recall) of a word alignment algorithm. The algorithm proved robust across baseline dictionaries and various different parameter settings. Although a small test set was used, the improvements are statistically significant for various parameter settings. We have shown that computing similarity scores of pairs of words can be used to cluster morphological variants of words in an inflected language such as French. It will be interesting to see how the similarity and clustering method will work in conjunction with other word alignment algorithms, as the dictionary rebuilding algorithm is independent of the actual word alignment method used. Furthermore, we plan to explore ways to improve the similarity scoring algorithm.
For instance, we can assign lower match scores when the characters are not identical but members of the same equivalence class. The equivalence classes will depend on the target language at hand. For instance, in German, a and ä will be assigned to the same equivalence class, because some inflections cause a to become ä. An improved similarity scoring algorithm may in turn result in improved word alignments. In general, we hope to move automated dictionary extraction away from pure surface form statistics and toward dictionaries that are more linguistically motivated.

Figure 5: Performance of dictionaries Dict0.01 for up to two links per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).
Figure 6: Performance of dictionaries Dict0.01 for up to three links per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).
Figure 7: Performance of dictionaries Dict0.02 for up to one link per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).
Figure 8: Performance of dictionaries Dict0.02 for up to two links per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).
Figure 9: Performance of dictionaries Dict0.02 for up to three links per word (precision, recall, and F-measure vs. minscore, baseline vs. rebuilt).

References

Lars Ahrenberg, M. Andersson, and M. Merkel. 1998. A simple hybrid aligner for generating lexical correspondences in parallel texts. In Proceedings of COLING-ACL'98.
Peter Brown, J. Cocke, V.D. Pietra, S.D. Pietra, F. Jelinek, J. Lafferty, R. Mercer, and P. Roossin. 1990. A statistical approach to Machine Translation. Computational Linguistics, 16(2):79–85.
Peter Brown, S.D. Pietra, V.D. Pietra, and R. Mercer. 1993. The mathematics of statistical Machine Translation: Parameter estimation. Computational Linguistics.
Ralf Brown. 1997. Automated dictionary extraction for 'knowledge-free' example-based translation. In Proceedings of TMI 1997, pages 111–118.
Ralf Brown. 1998. Automatically-extracted thesauri for cross-language IR: When better is worse. In Proceedings of COMPUTERM'98.
Eric Gaussier. 1998. Flow network models for word alignment and terminology extraction from bilingual corpora. In Proceedings of COLING-ACL'98.
Adam Kilgarriff. 1996. Which words are particularly characteristic of a text? A survey of statistical approaches. In Proceedings of the AISB Workshop on Language Engineering for Document Analysis and Recognition.
Serhiy Kosinov. 2001. Evaluation of N-grams conflation approach in text-based Information Retrieval. In Proceedings of the International Workshop on Information Retrieval IR'01.
Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing, chapter 14. MIT Press.
Dan I. Melamed. 1997. A word-to-word model of translation equivalence. In Proceedings of ACL'97.
Dan I. Melamed. 1998. Empirical methods for MT lexicon development. In Proceedings of AMTA'98.
Dan I. Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221–249.
Jinxi Xu and W. Bruce Croft. 1998. Corpus-based stemming using co-occurrence of word variants. ACM Transactions on Information Systems, 16(1):61–81.
2002
52
Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews Peter D. Turney Institute for Information Technology National Research Council of Canada Ottawa, Ontario, Canada, K1A 0R6 [email protected] Abstract This paper presents a simple unsupervised learning algorithm for classifying reviews as recommended (thumbs up) or not recommended (thumbs down). The classification of a review is predicted by the average semantic orientation of the phrases in the review that contain adjectives or adverbs. A phrase has a positive semantic orientation when it has good associations (e.g., “subtle nuances”) and a negative semantic orientation when it has bad associations (e.g., “very cavalier”). In this paper, the semantic orientation of a phrase is calculated as the mutual information between the given phrase and the word “excellent” minus the mutual information between the given phrase and the word “poor”. A review is classified as recommended if the average semantic orientation of its phrases is positive. The algorithm achieves an average accuracy of 74% when evaluated on 410 reviews from Epinions, sampled from four different domains (reviews of automobiles, banks, movies, and travel destinations). The accuracy ranges from 84% for automobile reviews to 66% for movie reviews. 1 Introduction If you are considering a vacation in Akumal, Mexico, you might go to a search engine and enter the query “Akumal travel review”. However, in this case, Google1 reports about 5,000 matches. It would be useful to know what fraction of these matches recommend Akumal as a travel destination. With an algorithm for automatically classifying a review as “thumbs up” or “thumbs down”, it would be possible for a search engine to report such summary statistics. This is the motivation for the research described here. Other potential applications include recognizing “flames” (abusive newsgroup messages) (Spertus, 1997) and developing new kinds of search tools (Hearst, 1992). In this paper, I present a simple unsupervised learning algorithm for classifying a review as recommended or not recommended. The algorithm takes a written review as input and produces a classification as output. The first step is to use a part-of-speech tagger to identify phrases in the input text that contain adjectives or adverbs (Brill, 1994). The second step is to estimate the semantic orientation of each extracted phrase (Hatzivassiloglou & McKeown, 1997). A phrase has a positive semantic orientation when it has good associations (e.g., “romantic ambience”) and a negative semantic orientation when it has bad associations (e.g., “horrific events”). The third step is to assign the given review to a class, recommended or not recommended, based on the average semantic orientation of the phrases extracted from the review. If the average is positive, the prediction is that the review recommends the item it discusses. Otherwise, the prediction is that the item is not recommended. The PMI-IR algorithm is employed to estimate the semantic orientation of a phrase (Turney, 2001). PMI-IR uses Pointwise Mutual Information (PMI) and Information Retrieval (IR) to measure the similarity of pairs of words or phrases. The se 1 http://www.google.com Computational Linguistics (ACL), Philadelphia, July 2002, pp. 417-424. 
Proceedings of the 40th Annual Meeting of the Association for mantic orientation of a given phrase is calculated by comparing its similarity to a positive reference word (“excellent”) with its similarity to a negative reference word (“poor”). More specifically, a phrase is assigned a numerical rating by taking the mutual information between the given phrase and the word “excellent” and subtracting the mutual information between the given phrase and the word “poor”. In addition to determining the direction of the phrase’s semantic orientation (positive or negative, based on the sign of the rating), this numerical rating also indicates the strength of the semantic orientation (based on the magnitude of the number). The algorithm is presented in Section 2. Hatzivassiloglou and McKeown (1997) have also developed an algorithm for predicting semantic orientation. Their algorithm performs well, but it is designed for isolated adjectives, rather than phrases containing adjectives or adverbs. This is discussed in more detail in Section 3, along with other related work. The classification algorithm is evaluated on 410 reviews from Epinions2, randomly sampled from four different domains: reviews of automobiles, banks, movies, and travel destinations. Reviews at Epinions are not written by professional writers; any person with a Web browser can become a member of Epinions and contribute a review. Each of these 410 reviews was written by a different author. Of these reviews, 170 are not recommended and the remaining 240 are recommended (these classifications are given by the authors). Always guessing the majority class would yield an accuracy of 59%. The algorithm achieves an average accuracy of 74%, ranging from 84% for automobile reviews to 66% for movie reviews. The experimental results are given in Section 4. The interpretation of the experimental results, the limitations of this work, and future work are discussed in Section 5. Potential applications are outlined in Section 6. Finally, conclusions are presented in Section 7. 2 Classifying Reviews The first step of the algorithm is to extract phrases containing adjectives or adverbs. Past work has demonstrated that adjectives are good indicators of subjective, evaluative sentences (Hatzivassiloglou 2 http://www.epinions.com & Wiebe, 2000; Wiebe, 2000; Wiebe et al., 2001). However, although an isolated adjective may indicate subjectivity, there may be insufficient context to determine semantic orientation. For example, the adjective “unpredictable” may have a negative orientation in an automotive review, in a phrase such as “unpredictable steering”, but it could have a positive orientation in a movie review, in a phrase such as “unpredictable plot”. Therefore the algorithm extracts two consecutive words, where one member of the pair is an adjective or an adverb and the second provides context. First a part-of-speech tagger is applied to the review (Brill, 1994).3 Two consecutive words are extracted from the review if their tags conform to any of the patterns in Table 1. The JJ tags indicate adjectives, the NN tags are nouns, the RB tags are adverbs, and the VB tags are verbs.4 The second pattern, for example, means that two consecutive words are extracted if the first word is an adverb and the second word is an adjective, but the third word (which is not extracted) cannot be a noun. NNP and NNPS (singular and plural proper nouns) are avoided, so that the names of the items in the review cannot influence the classification. Table 1. 
Patterns of tags for extracting two-word phrases from reviews. First Word Second Word Third Word (Not Extracted) 1. JJ NN or NNS anything 2. RB, RBR, or RBS JJ not NN nor NNS 3. JJ JJ not NN nor NNS 4. NN or NNS JJ not NN nor NNS 5. RB, RBR, or RBS VB, VBD, VBN, or VBG anything The second step is to estimate the semantic orientation of the extracted phrases, using the PMI-IR algorithm. This algorithm uses mutual information as a measure of the strength of semantic association between two words (Church & Hanks, 1989). PMI-IR has been empirically evaluated using 80 synonym test questions from the Test of English as a Foreign Language (TOEFL), obtaining a score of 74% (Turney, 2001). For comparison, Latent Semantic Analysis (LSA), another statistical measure of word association, attains a score of 64% on the 3 http://www.cs.jhu.edu/~brill/RBT1_14.tar.Z 4 See Santorini (1995) for a complete description of the tags. same 80 TOEFL questions (Landauer & Dumais, 1997). The Pointwise Mutual Information (PMI) between two words, word1 and word2, is defined as follows (Church & Hanks, 1989): p(word1 & word2) PMI(word1, word2) = log2 p(word1) p(word2) (1) Here, p(word1 & word2) is the probability that word1 and word2 co-occur. If the words are statistically independent, then the probability that they co-occur is given by the product p(word1) p(word2). The ratio between p(word1 & word2) and p(word1) p(word2) is thus a measure of the degree of statistical dependence between the words. The log of this ratio is the amount of information that we acquire about the presence of one of the words when we observe the other. The Semantic Orientation (SO) of a phrase, phrase, is calculated here as follows: SO(phrase) = PMI(phrase, “excellent”) - PMI(phrase, “poor”) (2) The reference words “excellent” and “poor” were chosen because, in the five star review rating system, it is common to define one star as “poor” and five stars as “excellent”. SO is positive when phrase is more strongly associated with “excellent” and negative when phrase is more strongly associated with “poor”. PMI-IR estimates PMI by issuing queries to a search engine (hence the IR in PMI-IR) and noting the number of hits (matching documents). The following experiments use the AltaVista Advanced Search engine5, which indexes approximately 350 million web pages (counting only those pages that are in English). I chose AltaVista because it has a NEAR operator. The AltaVista NEAR operator constrains the search to documents that contain the words within ten words of one another, in either order. Previous work has shown that NEAR performs better than AND when measuring the strength of semantic association between words (Turney, 2001). Let hits(query) be the number of hits returned, given the query query. The following estimate of SO can be derived from equations (1) and (2) with 5 http://www.altavista.com/sites/search/adv some minor algebraic manipulation, if cooccurrence is interpreted as NEAR: SO(phrase) = hits(phrase NEAR “excellent”) hits(“poor”) log2 hits(phrase NEAR “poor”) hits(“excellent”) (3) Equation (3) is a log-odds ratio (Agresti, 1996). To avoid division by zero, I added 0.01 to the hits. I also skipped phrase when both hits(phrase NEAR “excellent”) and hits(phrase NEAR “poor”) were (simultaneously) less than four. These numbers (0.01 and 4) were arbitrarily chosen. 
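A minimal sketch of this estimate is given below. The hits function is assumed to be a wrapper around a search engine that supports a NEAR operator (in the experiments reported here, AltaVista); the counts in the example are invented placeholders for illustration. The 0.01 smoothing term (added here to the NEAR counts) and the rule of skipping phrases whose two NEAR counts are both below four follow the description above.

```python
import math

def semantic_orientation(phrase, hits):
    """Estimate SO(phrase) as in Equation (3): the log-odds of the phrase
    occurring NEAR "excellent" versus NEAR "poor".
    `hits` is assumed to map a query string to a hit count."""
    near_excellent = hits(f'{phrase} NEAR "excellent"')
    near_poor = hits(f'{phrase} NEAR "poor"')
    if near_excellent < 4 and near_poor < 4:
        return None  # skip phrases with too little evidence
    # 0.01 is added to avoid division by zero, as described in the text
    numerator = (near_excellent + 0.01) * hits('"poor"')
    denominator = (near_poor + 0.01) * hits('"excellent"')
    return math.log2(numerator / denominator)

if __name__ == "__main__":
    fake_counts = {'"excellent"': 1_000_000, '"poor"': 800_000,
                   'direct deposit NEAR "excellent"': 320,
                   'direct deposit NEAR "poor"': 100}
    so = semantic_orientation("direct deposit", lambda q: fake_counts.get(q, 0))
    print(round(so, 3))  # positive: the phrase co-occurs more with "excellent"
```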
To eliminate any possible influence from the testing data, I added “AND (NOT host:epinions)” to every query, which tells AltaVista not to include the Epinions Web site in its searches. The third step is to calculate the average semantic orientation of the phrases in the given review and classify the review as recommended if the average is positive and otherwise not recommended. Table 2 shows an example for a recommended review and Table 3 shows an example for a not recommended review. Both are reviews of the Bank of America. Both are in the collection of 410 reviews from Epinions that are used in the experiments in Section 4. Table 2. An example of the processing of a review that the author has classified as recommended.6 Extracted Phrase Part-of-Speech Tags Semantic Orientation online experience JJ NN 2.253 low fees JJ NNS 0.333 local branch JJ NN 0.421 small part JJ NN 0.053 online service JJ NN 2.780 printable version JJ NN -0.705 direct deposit JJ NN 1.288 well other RB JJ 0.237 inconveniently located RB VBN -1.541 other bank JJ NN -0.850 true service JJ NN -0.732 Average Semantic Orientation 0.322 6 The semantic orientation in the following tables is calculated using the natural logarithm (base e), rather than base 2. The natural log is more common in the literature on log-odds ratio. Since all logs are equivalent up to a constant factor, it makes no difference for the algorithm. Table 3. An example of the processing of a review that the author has classified as not recommended. Extracted Phrase Part-of-Speech Tags Semantic Orientation little difference JJ NN -1.615 clever tricks JJ NNS -0.040 programs such NNS JJ 0.117 possible moment JJ NN -0.668 unethical practices JJ NNS -8.484 low funds JJ NNS -6.843 old man JJ NN -2.566 other problems JJ NNS -2.748 probably wondering RB VBG -1.830 virtual monopoly JJ NN -2.050 other bank JJ NN -0.850 extra day JJ NN -0.286 direct deposits JJ NNS 5.771 online web JJ NN 1.936 cool thing JJ NN 0.395 very handy RB JJ 1.349 lesser evil RBR JJ -2.288 Average Semantic Orientation -1.218 3 Related Work This work is most closely related to Hatzivassiloglou and McKeown’s (1997) work on predicting the semantic orientation of adjectives. They note that there are linguistic constraints on the semantic orientations of adjectives in conjunctions. As an example, they present the following three sentences (Hatzivassiloglou & McKeown, 1997): 1. The tax proposal was simple and wellreceived by the public. 2. The tax proposal was simplistic but wellreceived by the public. 3. (*) The tax proposal was simplistic and well-received by the public. The third sentence is incorrect, because we use “and” with adjectives that have the same semantic orientation (“simple” and “well-received” are both positive), but we use “but” with adjectives that have different semantic orientations (“simplistic” is negative). Hatzivassiloglou and McKeown (1997) use a four-step supervised learning algorithm to infer the semantic orientation of adjectives from constraints on conjunctions: 1. All conjunctions of adjectives are extracted from the given corpus. 2. A supervised learning algorithm combines multiple sources of evidence to label pairs of adjectives as having the same semantic orientation or different semantic orientations. The result is a graph where the nodes are adjectives and links indicate sameness or difference of semantic orientation. 3. 
A clustering algorithm processes the graph structure to produce two subsets of adjectives, such that links across the two subsets are mainly different-orientation links, and links inside a subset are mainly same-orientation links. 4. Since it is known that positive adjectives tend to be used more frequently than negative adjectives, the cluster with the higher average frequency is classified as having positive semantic orientation. This algorithm classifies adjectives with accuracies ranging from 78% to 92%, depending on the amount of training data that is available. The algorithm can go beyond a binary positive-negative distinction, because the clustering algorithm (step 3 above) can produce a “goodness-of-fit” measure that indicates how well an adjective fits in its assigned cluster. Although they do not consider the task of classifying reviews, it seems their algorithm could be plugged into the classification algorithm presented in Section 2, where it would replace PMI-IR and equation (3) in the second step. However, PMI-IR is conceptually simpler, easier to implement, and it can handle phrases and adverbs, in addition to isolated adjectives. As far as I know, the only prior published work on the task of classifying reviews as thumbs up or down is Tong’s (2001) system for generating sentiment timelines. This system tracks online discussions about movies and displays a plot of the number of positive sentiment and negative sentiment messages over time. Messages are classified by looking for specific phrases that indicate the sentiment of the author towards the movie (e.g., “great acting”, “wonderful visuals”, “terrible score”, “uneven editing”). Each phrase must be manually added to a special lexicon and manually tagged as indicating positive or negative sentiment. The lexicon is specific to the domain (e.g., movies) and must be built anew for each new domain. The company Mindfuleye7 offers a technology called Lexant™ that appears similar to Tong’s (2001) system. Other related work is concerned with determining subjectivity (Hatzivassiloglou & Wiebe, 2000; Wiebe, 2000; Wiebe et al., 2001). The task is to distinguish sentences that present opinions and evaluations from sentences that objectively present factual information (Wiebe, 2000). Wiebe et al. (2001) list a variety of potential applications for automated subjectivity tagging, such as recognizing “flames” (Spertus, 1997), classifying email, recognizing speaker role in radio broadcasts, and mining reviews. In several of these applications, the first step is to recognize that the text is subjective and then the natural second step is to determine the semantic orientation of the subjective text. For example, a flame detector cannot merely detect that a newsgroup message is subjective, it must further detect that the message has a negative semantic orientation; otherwise a message of praise could be classified as a flame. Hearst (1992) observes that most search engines focus on finding documents on a given topic, but do not allow the user to specify the directionality of the documents (e.g., is the author in favor of, neutral, or opposed to the event or item discussed in the document?). The directionality of a document is determined by its deep argumentative structure, rather than a shallow analysis of its adjectives. Sentences are interpreted metaphorically in terms of agents exerting force, resisting force, and overcoming resistance. It seems likely that there could be some benefit to combining shallow and deep analysis of the text. 
4 Experiments Table 4 describes the 410 reviews from Epinions that were used in the experiments. 170 (41%) of the reviews are not recommended and the remaining 240 (59%) are recommended. Always guessing the majority class would yield an accuracy of 59%. The third column shows the average number of phrases that were extracted from the reviews. Table 5 shows the experimental results. Except for the travel reviews, there is surprisingly little variation in the accuracy within a domain. In addi 7 http://www.mindfuleye.com/ tion to recommended and not recommended, Epinions reviews are classified using the five star rating system. The third column shows the correlation between the average semantic orientation and the number of stars assigned by the author of the review. The results show a strong positive correlation between the average semantic orientation and the author’s rating out of five stars. Table 4. A summary of the corpus of reviews. Domain of Review Number of Reviews Average Phrases per Review Automobiles 75 20.87 Honda Accord 37 18.78 Volkswagen Jetta 38 22.89 Banks 120 18.52 Bank of America 60 22.02 Washington Mutual 60 15.02 Movies 120 29.13 The Matrix 60 19.08 Pearl Harbor 60 39.17 Travel Destinations 95 35.54 Cancun 59 30.02 Puerto Vallarta 36 44.58 All 410 26.00 Table 5. The accuracy of the classification and the correlation of the semantic orientation with the star rating. Domain of Review Accuracy Correlation Automobiles 84.00 % 0.4618 Honda Accord 83.78 % 0.2721 Volkswagen Jetta 84.21 % 0.6299 Banks 80.00 % 0.6167 Bank of America 78.33 % 0.6423 Washington Mutual 81.67 % 0.5896 Movies 65.83 % 0.3608 The Matrix 66.67 % 0.3811 Pearl Harbor 65.00 % 0.2907 Travel Destinations 70.53 % 0.4155 Cancun 64.41 % 0.4194 Puerto Vallarta 80.56 % 0.1447 All 74.39 % 0.5174 5 Discussion of Results A natural question, given the preceding results, is what makes movie reviews hard to classify? Table 6 shows that classification by the average SO tends to err on the side of guessing that a review is not recommended, when it is actually recommended. This suggests the hypothesis that a good movie will often contain unpleasant scenes (e.g., violence, death, mayhem), and a recommended movie review may thus have its average semantic orientation reduced if it contains descriptions of these unpleasant scenes. However, if we add a constant value to the average SO of the movie reviews, to compensate for this bias, the accuracy does not improve. This suggests that, just as positive reviews mention unpleasant things, so negative reviews often mention pleasant scenes. Table 6. The confusion matrix for movie classifications. Author’s Classification Average Semantic Orientation Thumbs Up Thumbs Down Sum Positive 28.33 % 12.50 % 40.83 % Negative 21.67 % 37.50 % 59.17 % Sum 50.00 % 50.00 % 100.00 % Table 7 shows some examples that lend support to this hypothesis. For example, the phrase “more evil” does have negative connotations, thus an SO of -4.384 is appropriate, but an evil character does not make a bad movie. The difficulty with movie reviews is that there are two aspects to a movie, the events and actors in the movie (the elements of the movie), and the style and art of the movie (the movie as a gestalt; a unified whole). This is likely also the explanation for the lower accuracy of the Cancun reviews: good beaches do not necessarily add up to a good vacation. On the other hand, good automotive parts usually do add up to a good automobile and good banking services add up to a good bank. 
It is not clear how to address this issue. Future work might look at whether it is possible to tag sentences as discussing elements or wholes. Another area for future work is to empirically compare PMI-IR and the algorithm of Hatzivassiloglou and McKeown (1997). Although their algorithm does not readily extend to two-word phrases, I have not yet demonstrated that two-word phrases are necessary for accurate classification of reviews. On the other hand, it would be interesting to evaluate PMI-IR on the collection of 1,336 hand-labeled adjectives that were used in the experiments of Hatzivassiloglou and McKeown (1997). A related question for future work is the relationship of accuracy of the estimation of semantic orientation at the level of individual phrases to accuracy of review classification. Since the review classification is based on an average, it might be quite resistant to noise in the SO estimate for individual phrases. But it is possible that a better SO estimator could produce significantly better classifications. Table 7. Sample phrases from misclassified reviews. Movie: The Matrix Author’s Rating: recommended (5 stars) Average SO: -0.219 (not recommended) Sample Phrase: more evil [RBR JJ] SO of Sample Phrase: -4.384 Context of Sample Phrase: The slow, methodical way he spoke. I loved it! It made him seem more arrogant and even more evil. Movie: Pearl Harbor Author’s Rating: recommended (5 stars) Average SO: -0.378 (not recommended) Sample Phrase: sick feeling [JJ NN] SO of Sample Phrase: -8.308 Context of Sample Phrase: During this period I had a sick feeling, knowing what was coming, knowing what was part of our history. Movie: The Matrix Author’s Rating: not recommended (2 stars) Average SO: 0.177 (recommended) Sample Phrase: very talented [RB JJ] SO of Sample Phrase: 1.992 Context of Sample Phrase: Well as usual Keanu Reeves is nothing special, but surprisingly, the very talented Laurence Fishbourne is not so good either, I was surprised. Movie: Pearl Harbor Author’s Rating: not recommended (3 stars) Average SO: 0.015 (recommended) Sample Phrase: blue skies [JJ NNS] SO of Sample Phrase: 1.263 Context of Sample Phrase: Anyone who saw the trailer in the theater over the course of the last year will never forget the images of Japanese war planes swooping out of the blue skies, flying past the children playing baseball, or the truly remarkable shot of a bomb falling from an enemy plane into the deck of the USS Arizona. Equation (3) is a very simple estimator of semantic orientation. It might benefit from more sophisticated statistical analysis (Agresti, 1996). One possibility is to apply a statistical significance test to each estimated SO. There is a large statistical literature on the log-odds ratio, which might lead to improved results on this task. This paper has focused on unsupervised classification, but average semantic orientation could be supplemented by other features, in a supervised classification system. The other features could be based on the presence or absence of specific words, as is common in most text classification work. This could yield higher accuracies, but the intent here was to study this one feature in isolation, to simplify the analysis, before combining it with other features. Table 5 shows a high correlation between the average semantic orientation and the star rating of a review. I plan to experiment with ordinal classification of reviews in the five star rating system, using the algorithm of Frank and Hall (2001). 
For ordinal classification, the average semantic orientation would be supplemented with other features in a supervised classification system. A limitation of PMI-IR is the time required to send queries to AltaVista. Inspection of Equation (3) shows that it takes four queries to calculate the semantic orientation of a phrase. However, I cached all query results, and since there is no need to recalculate hits(“poor”) and hits(“excellent”) for every phrase, each phrase requires an average of slightly less than two queries. As a courtesy to AltaVista, I used a five second delay between queries.8 The 410 reviews yielded 10,658 phrases, so the total time required to process the corpus was roughly 106,580 seconds, or about 30 hours. This might appear to be a significant limitation, but extrapolation of current trends in computer memory capacity suggests that, in about ten years, the average desktop computer will be able to easily store and search AltaVista’s 350 million Web pages. This will reduce the processing time to less than one second per review. 6 Applications There are a variety of potential applications for automated review rating. As mentioned in the in 8 This line of research depends on the good will of the major search engines. For a discussion of the ethics of Web robots, see http://www.robotstxt.org/wc/robots.html. For query robots, the proposed extended standard for robot exclusion would be useful. See http://www.conman.org/people/spc/robots2.html. troduction, one application is to provide summary statistics for search engines. Given the query “Akumal travel review”, a search engine could report, “There are 5,000 hits, of which 80% are thumbs up and 20% are thumbs down.” The search results could be sorted by average semantic orientation, so that the user could easily sample the most extreme reviews. Similarly, a search engine could allow the user to specify the topic and the rating of the desired reviews (Hearst, 1992). Preliminary experiments indicate that semantic orientation is also useful for summarization of reviews. A positive review could be summarized by picking out the sentence with the highest positive semantic orientation and a negative review could be summarized by extracting the sentence with the lowest negative semantic orientation. Epinions asks its reviewers to provide a short description of pros and cons for the reviewed item. A pro/con summarizer could be evaluated by measuring the overlap between the reviewer’s pros and cons and the phrases in the review that have the most extreme semantic orientation. Another potential application is filtering “flames” for newsgroups (Spertus, 1997). There could be a threshold, such that a newsgroup message is held for verification by the human moderator when the semantic orientation of a phrase drops below the threshold. A related use might be a tool for helping academic referees when reviewing journal and conference papers. Ideally, referees are unbiased and objective, but sometimes their criticism can be unintentionally harsh. It might be possible to highlight passages in a draft referee’s report, where the choice of words should be modified towards a more neutral tone. Tong’s (2001) system for detecting and tracking opinions in on-line discussions could benefit from the use of a learning algorithm, instead of (or in addition to) a hand-built lexicon. 
With automated review rating (opinion rating), advertisers could track advertising campaigns, politicians could track public opinion, reporters could track public response to current events, stock traders could track financial opinions, and trend analyzers could track entertainment and technology trends. 7 Conclusions This paper introduces a simple unsupervised learning algorithm for rating a review as thumbs up or down. The algorithm has three steps: (1) extract phrases containing adjectives or adverbs, (2) estimate the semantic orientation of each phrase, and (3) classify the review based on the average semantic orientation of the phrases. The core of the algorithm is the second step, which uses PMI-IR to calculate semantic orientation (Turney, 2001). In experiments with 410 reviews from Epinions, the algorithm attains an average accuracy of 74%. It appears that movie reviews are difficult to classify, because the whole is not necessarily the sum of the parts; thus the accuracy on movie reviews is about 66%. On the other hand, for banks and automobiles, it seems that the whole is the sum of the parts, and the accuracy is 80% to 84%. Travel reviews are an intermediate case. Previous work on determining the semantic orientation of adjectives has used a complex algorithm that does not readily extend beyond isolated adjectives to adverbs or longer phrases (Hatzivassiloglou and McKeown, 1997). The simplicity of PMI-IR may encourage further work with semantic orientation. The limitations of this work include the time required for queries and, for some applications, the level of accuracy that was achieved. The former difficulty will be eliminated by progress in hardware. The latter difficulty might be addressed by using semantic orientation combined with other features in a supervised classification algorithm. Acknowledgements Thanks to Joel Martin and Michael Littman for helpful comments. References Agresti, A. 1996. An introduction to categorical data analysis. New York: Wiley. Brill, E. 1994. Some advances in transformation-based part of speech tagging. Proceedings of the Twelfth National Conference on Artificial Intelligence (pp. 722-727). Menlo Park, CA: AAAI Press. Church, K.W., & Hanks, P. 1989. Word association norms, mutual information and lexicography. Proceedings of the 27th Annual Conference of the ACL (pp. 76-83). New Brunswick, NJ: ACL. Frank, E., & Hall, M. 2001. A simple approach to ordinal classification. Proceedings of the Twelfth European Conference on Machine Learning (pp. 145156). Berlin: Springer-Verlag. Hatzivassiloglou, V., & McKeown, K.R. 1997. Predicting the semantic orientation of adjectives. Proceedings of the 35th Annual Meeting of the ACL and the 8th Conference of the European Chapter of the ACL (pp. 174-181). New Brunswick, NJ: ACL. Hatzivassiloglou, V., & Wiebe, J.M. 2000. Effects of adjective orientation and gradability on sentence subjectivity. Proceedings of 18th International Conference on Computational Linguistics. New Brunswick, NJ: ACL. Hearst, M.A. 1992. Direction-based text interpretation as an information access refinement. In P. Jacobs (Ed.), Text-Based Intelligent Systems: Current Research and Practice in Information Extraction and Retrieval. Mahwah, NJ: Lawrence Erlbaum Associates. Landauer, T.K., & Dumais, S.T. 1997. A solution to Plato’s problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104, 211-240. Santorini, B. 1995. 
Part-of-Speech Tagging Guidelines for the Penn Treebank Project (3rd revision, 2nd printing). Technical Report, Department of Computer and Information Science, University of Pennsylvania. Spertus, E. 1997. Smokey: Automatic recognition of hostile messages. Proceedings of the Conference on Innovative Applications of Artificial Intelligence (pp. 1058-1065). Menlo Park, CA: AAAI Press. Tong, R.M. 2001. An operational system for detecting and tracking opinions in on-line discussions. Working Notes of the ACM SIGIR 2001 Workshop on Operational Text Classification (pp. 1-6). New York, NY: ACM. Turney, P.D. 2001. Mining the Web for synonyms: PMI-IR versus LSA on TOEFL. Proceedings of the Twelfth European Conference on Machine Learning (pp. 491-502). Berlin: Springer-Verlag. Wiebe, J.M. 2000. Learning subjective adjectives from corpora. Proceedings of the 17th National Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press. Wiebe, J.M., Bruce, R., Bell, M., Martin, M., & Wilson, T. 2001. A corpus study of evaluative and speculative language. Proceedings of the Second ACL SIG on Dialogue Workshop on Discourse and Dialogue. Aalborg, Denmark.
2002
53
Is It the Right Answer? Exploiting Web Redundancy for Answer Validation Bernardo Magnini, Matteo Negri, Roberto Prevete and Hristo Tanev ITC-Irst, Centro per la Ricerca Scientifica e Tecnologica [magnini,negri,prevete,tanev]@itc.it Abstract Answer Validation is an emerging topic in Question Answering, where open domain systems are often required to rank huge amounts of candidate answers. We present a novel approach to answer validation based on the intuition that the amount of implicit knowledge which connects an answer to a question can be quantitatively estimated by exploiting the redundancy of Web information. Experiments carried out on the TREC-2001 judged-answer collection show that the approach achieves a high level of performance (i.e. 81% success rate). The simplicity and the efficiency of this approach make it suitable to be used as a module in Question Answering systems. 1 Introduction Open domain question-answering (QA) systems search for answers to a natural language question either on the Web or in a local document collection. Different techniques, varying from surface patterns (Subbotin and Subbotin, 2001) to deep semantic analysis (Zajac, 2001), are used to extract the text fragments containing candidate answers. Several systems apply answer validation techniques with the goal of filtering out improper candidates by checking how adequate a candidate answer is with respect to a given question. These approaches rely on discovering semantic relations between the question and the answer. As an example, (Harabagiu and Maiorano, 1999) describes answer validation as an abductive inference process, where an answer is valid with respect to a question if an explanation for it, based on background knowledge, can be found. Although theoretically well motivated, the use of semantic techniques on open domain tasks is quite expensive both in terms of the involved linguistic resources and in terms of computational complexity, thus motivating a research on alternative solutions to the problem. This paper presents a novel approach to answer validation based on the intuition that the amount of implicit knowledge which connects an answer to a question can be quantitatively estimated by exploiting the redundancy of Web information. The hypothesis is that the number of documents that can be retrieved from the Web in which the question and the answer co-occur can be considered a significant clue of the validity of the answer. Documents are searched in the Web by means of validation patterns, which are derived from a linguistic processing of the question and the answer. In order to test this idea a system for automatic answer validation has been implemented and a number of experiments have been carried out on questions and answers provided by the TREC-2001 participants. The advantages of this approach are its simplicity on the one hand and its efficiency on the other. Automatic techniques for answer validation are of great interest for the development of open domain QA systems. The availability of a completely automatic evaluation procedure makes it feasible QA systems based on generate and test approaches. In this way, until a given answer is automatically Computational Linguistics (ACL), Philadelphia, July 2002, pp. 425-432. Proceedings of the 40th Annual Meeting of the Association for proved to be correct for a question, the system will carry out different refinements of its searching criteria checking the relevance of new candidate answers. 
In addition, given that most of the QA systems rely on complex architectures and the evaluation of their performances requires a huge amount of work, the automatic assessment of the relevance of an answer with respect to a given question will speed up both algorithm refinement and testing. The paper is organized as follows. Section 2 presents the main features of the approach. Section 3 describes how validation patterns are extracted from a question-answer pair by means of specific question answering techniques. Section 4 explains the basic algorithm for estimating the answer validity score. Section 5 gives the results of a number of experiments and discusses them. Finally, Section 6 puts our approach in the context of related works. 2 Overall Methodology Given a question and a candidate answer  the answer validation task is defined as the capability to assess the relevance of  with respect to . We assume open domain questions and that both answers and questions are texts composed of few tokens (usually less than 100). This is compatible with the TREC2001 data, that will be used as examples throughout this paper. We also assume the availability of the Web, considered to be the largest open domain text corpus containing information about almost all the different areas of the human knowledge. The intuition underlying our approach to answer validation is that, given a question-answer pair ([ ,  ]), it is possible to formulate a set of validation statements whose truthfulness is equivalent to the degree of relevance of  with respect to . For instance, given the question “What is the capital of the USA?”, the problem of validating the answer “Washington” is equivalent to estimating the truthfulness of the validation statement “The capital of the USA is Washington”. Therefore, the answer validation task could be reformulated as a problem of statement reliability. There are two issues to be addressed in order to make this intuition effective. First, the idea of a validation statement is still insufficient to catch the richness of implicit knowledge that may connect an answer to a question: we will attack this problem defining the more flexible idea of a validation pattern. Second, we have to design an effective and efficient way to check the reliability of a validation pattern: our solution relies on a procedure based on a statistical count of Web searches. Answers may occur in text passages with low similarity with respect to the question. Passages telling facts may use different syntactic constructions, sometimes are spread in more than one sentence, may reflect opinions and personal attitudes, and often use ellipsis and anaphora. For instance, if the validation statement is “The capital of USA is Washington”, we have Web documents containing passages like those reported in Table 1, which can not be found with a simple search of the statement, but that nevertheless contain a significant amount of knowledge about the relations between the question and the answer. We will refer to these text fragments as validation fragments. 1. Capital Region USA: Fly-Drive Holidays in and Around Washington D.C. 2. the Insider’s Guide to the Capital Area Music Scene (Washington D.C., USA). 3. The Capital Tangueros (Washington, DC Area, USA) 4. I live in the Nation’s Capital, Washington Metropolitan Area (USA). 5. in 1790 Capital (also USA’s capital): Washington D.C. 
Area: 179 square km Table 1: Web search for validation fragments A common feature in the above examples is the co-occurrence of a certain subset of words (i.e. “capital”,“USA” and “Washington”). We will make use of validation patterns that cover a larger portion of text fragments, including those lexically similar to the question and the answer (e.g. fragments 4 and 5 in Table 1) and also those that are not similar (e.g. fragment 2 in Table 1). In the case of our example a set of validation statements can be generalized by the validation pattern: [capital  text  USA  text  Washington] where  text  is a place holder for any portion of text with a fixed maximal length. To check the correctness of  with respect to we propose a procedure that measures the number of occurrences on the Web of a validation pattern derived from  and . A useful feature of such patterns is that when we search for them on the Web they usually produce many hits, thus making statistical approaches applicable. In contrast, searching for strict validation statements generally results in a small number of documents (if any) and makes statistical methods irrelevant. A number of techniques used for finding collocations and co-occurrences of words, such as mutual information, may well be used to search co-occurrence tendency between the question and the candidate answer in the Web. If we verify that such tendency is statistically significant we may consider the validation pattern as consistent and therefore we may assume a high level of correlation between the question and the candidate answer. Starting from the above considerations and given a question-answer pair    , we propose an answer validation procedure based on the following steps: 1. Compute the set of representative keywords and  both from and from  ; this step is carried out using linguistic techniques, such as answer type identification (from the question) and named entities recognition (from the answer); 2. From the extracted keywords compute the validation pattern for the pair [   ]; 3. Submit the patterns to the Web and estimate an answer validity score considering the number of retrieved documents. 3 Extracting Validation Patterns In our approach a validation pattern consists of two components: a question sub-pattern (Qsp) and an answer sub-pattern (Asp). Building the Qsp. A Qsp is derived from the input question cutting off non-content words with a stopwords filter. The remaining words are expanded with both synonyms and morphological forms in order to maximize the recall of retrieved documents. Synonyms are automatically extracted from the most frequent sense of the word in WordNet (Fellbaum, 1998), which considerably reduces the risk of adding disturbing elements. As for morphology, verbs are expanded with all their tense forms (i.e. present, present continuous, past tense and past participle). Synonyms and morphological forms are added to the Qsp and composed in an OR clause. The following example illustrates how the Qsp is constructed. Given the TREC-2001 question “When did Elvis Presley die?”, the stop-words filter removes “When” and “did” from the input. Then synonyms of the first sense of “die” (i.e. “decease”, “perish”, etc.) are extracted from WordNet. Finally, morphological forms for all the corresponding verb tenses are added to the Qsp. The resultant Qsp will be the following: [Elvis  text  Presley  text  (die OR died OR dying OR perish OR ...)] Building the Asp. An Asp is constructed in two steps. 
First, the answer type of the question is identified considering both morpho-syntactic (a part of speech tagger is used to process the question) and semantic features (by means of semantic predicates defined on the WordNet taxonomy; see (Magnini et al., 2001) for details). Possible answer types are: DATE, MEASURE, PERSON, LOCATION, ORGANIZATION, DEFINITION and GENERIC. DEFINITION is the answer type peculiar to questions like “What is an atom?” which represent a considerable part (around 25%) of the TREC-2001 corpus. The answer type GENERIC is used for non definition questions asking for entities that can not be classified as named entities (e.g. the questions: “Material called linen is made from what plant?” or “What mineral helps prevent osteoporosis?”) In the second step, a rule-based named entities recognition module identifies in the answer string all the named entities matching the answer type category. If the category corresponds to a named entity, an Asp for each selected named entity is created. If the answer type category is either DEFINITION or GENERIC, the entire answer string except the stop-words is considered. In addition, in order to maximize the recall of retrieved documents, the Asp is expanded with verb tenses. The following example shows how the Asp is created. Given the TREC question “When did Elvis Presley die?” and the candidate answer “though died in 1977 of course some fans maintain”, since the answer type category is DATE the named entities recognition module will select [1977] as an answer sub-pattern. 4 Estimating Answer Validity The answer validation algorithm queries the Web with the patterns created from the question and answer and after that estimates the consistency of the patterns. 4.1 Querying the Web We use a Web-mining algorithm that considers the number of pages retrieved by the search engine. In contrast, qualitative approaches to Web mining (e.g. (Brill et al., 2001)) analyze the document content, as a result considering only a relatively small number of pages. For information retrieval we used the AltaVista search engine. Its advanced syntax allows the use of operators that implement the idea of validation patterns introduced in Section 2. Queries are composed using NEAR, OR and AND boolean operators. The NEAR operator searches pages where two words appear in a distance of no more than 10 tokens: it is used to put together the question and the answer sub-patterns in a single validation pattern. The OR operator introduces variations in the word order and verb forms. Finally, the AND operator is used as an alternative to NEAR, allowing more distance among pattern elements. If the question sub-pattern  does not return any document or returns less than a certain threshold (experimentally set to 7) the question pattern is relaxed by cutting one word; in this way a new query is formulated and submitted to the search engine. This is repeated until no more words can be cut or the returned number of documents becomes higher than the threshold. Pattern relaxation is performed using word-ignoring rules in a specified order. Such rules, for instance, ignore the focus of the question, because it is unlikely that it occurs in a validation fragment; ignore adverbs and adjectives, because are less significant; ignore nouns belonging to the WordNet classes “abstraction”, “psychological feature” or “group”, because usually they specify finer details and human attitudes. Names, numbers and measures are preferred over all the lower-case words and are cut last. 
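To make the pattern construction and relaxation steps concrete, the following Python sketch shows how a question sub-pattern could be built, composed into an AltaVista-style query, and progressively relaxed until enough documents are returned. It is an illustration under simplifying assumptions, not the authors' implementation: expand_word() is a stub for the WordNet synonym and verb-form expansion, hit_count() stands in for the search engine call, the stop-word list is a toy one, and keywords are relaxed simply left to right instead of by the full set of word-ignoring rules described above.

STOPWORDS = {"when", "did", "what", "which", "is", "are", "the", "a", "an",
             "of", "in", "as", "to"}
MIN_HITS = 7   # threshold below which the question pattern is relaxed


def expand_word(word):
    # Stand-in for the WordNet synonym / verb-form expansion; it returns the
    # word itself so that the sketch stays self-contained.
    return [word]


def build_qsp(question):
    # Question sub-pattern: one group of OR-alternatives per content keyword.
    keywords = [w.strip('?.,"') for w in question.split()]
    keywords = [w for w in keywords if w and w.lower() not in STOPWORDS]
    return [expand_word(w) for w in keywords]


def compose_query(groups):
    # Alternatives within a group are joined with OR, groups with NEAR.
    return " NEAR ".join("(" + " OR ".join(g) + ")" for g in groups)


def validation_pattern(qsp, asp):
    # The answer sub-pattern is attached as one further NEAR element.
    return compose_query(qsp + [asp])


def hit_count(query):
    # Stand-in for the number of pages returned by the search engine.
    raise NotImplementedError("plug in a Web search API here")


def relax_and_search(qsp):
    # Drop keywords until the query returns at least MIN_HITS pages. Keywords
    # are dropped left to right here; the system described above instead
    # applies an ordered set of word-ignoring rules.
    pattern = list(qsp)
    while pattern:
        hits = hit_count(compose_query(pattern))
        if hits >= MIN_HITS:
            return pattern, hits
        pattern = pattern[1:]
    return [], 0


# build_qsp('Which river in US is known as Big Muddy?') yields groups for
# "river", "US", "known", "Big" and "Muddy"; compose_query() turns them into
# "(river) NEAR (US) NEAR (known) NEAR (Big) NEAR (Muddy)".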
4.2 Estimating pattern consistency The Web-mining module submits three searches to the search engine: the sub-patterns [Qsp] and [Asp] and the validation pattern [QAp], this last built as the composition [Qsp NEAR Asp]. The search engine returns respectively: hits(Qsp), hits(Asp) and hits(Qsp NEAR Asp). The probability P(p) of a pattern p in the Web is calculated by: P(p) = hits(p) / MaxPages, where hits(p) is the number of pages in the Web where p appears and MaxPages is the maximum number of pages that can be returned by the search engine. We set this constant experimentally. However in two of the formulas we use (i.e. Pointwise Mutual Information and Corrected Conditional Probability) MaxPages may be ignored. The joint probability P(Qsp,Asp) is calculated by means of the validation pattern probability: P(Qsp,Asp) = P(Qsp NEAR Asp). We have tested three alternative measures to estimate the degree of relevance of Web searches: Pointwise Mutual Information, Maximal Likelihood Ratio and Corrected Conditional Probability, a variant of Conditional Probability which considers the asymmetry of the question-answer relation. Each measure provides an answer validity score: high values are interpreted as strong evidence that the validation pattern is consistent. This is a clue to the fact that the Web pages where this pattern appears contain validation fragments, which imply answer accuracy. Pointwise Mutual Information (PMI) (Manning and Schütze, 1999) has been widely used to find cooccurrence in large corpora: PMI(Qsp,Asp) = P(Qsp,Asp) / (P(Qsp) * P(Asp)). PMI(Qsp,Asp) is used as a clue to the internal coherence of the question-answer validation pattern QAp. Substituting the probabilities in the PMI formula with the previously introduced Web statistics, we obtain: PMI(Qsp,Asp) = (hits(Qsp NEAR Asp) * MaxPages) / (hits(Qsp) * hits(Asp)). Maximal Likelihood Ratio (MLHR) is also used for word co-occurrence mining (Dunning, 1993). We decided to check MLHR for answer validation because it is supposed to outperform PMI in case of sparse data, a situation that may happen in case of questions with complex patterns that return small number of hits. MLHR(Qsp,Asp) = -2 log(lambda), where lambda = (L(p,k1,n1) * L(p,k2,n2)) / (L(p1,k1,n1) * L(p2,k2,n2)), L(p,k,n) = p^k * (1-p)^(n-k), p1 = k1/n1, p2 = k2/n2, p = (k1+k2)/(n1+n2), k1 = hits(Qsp,Asp), k2 = hits(Qsp,¬Asp), n1 = hits(Asp), n2 = hits(¬Asp). Here hits(Qsp,¬Asp) is the number of appearances of Qsp when Asp is not present and it is calculated as hits(Qsp) - hits(Qsp NEAR Asp). Similarly, hits(¬Asp) is the number of Web pages where Asp does not appear and it is calculated as MaxPages - hits(Asp). Corrected Conditional Probability (CCP): in contrast with PMI and MLHR, CCP is not symmetric (e.g. generally CCP(Qsp,Asp) ≠ CCP(Asp,Qsp)). This is based on the fact that we search for the occurrence of the answer pattern Asp only in the cases when Qsp is present. The statistical evidence for this can be measured through P(Asp|Qsp), however this value is corrected with P(Asp)^(2/3) in the denominator, to avoid the cases when high-frequency words and patterns are taken as relevant answers: CCP(Qsp,Asp) = P(Asp|Qsp) / P(Asp)^(2/3). For CCP we obtain: CCP(Qsp,Asp) = (hits(Qsp NEAR Asp) * MaxPages^(2/3)) / (hits(Qsp) * hits(Asp)^(2/3)). 4.3 An example Consider an example taken from the question answer corpus of the main task of TREC-2001: “Which river in US is known as Big Muddy?”. The question keywords are: “river”, “US”, “known”, “Big”, “Muddy”. The search of the pattern [river NEAR US NEAR (known OR know OR...)
NEAR Big NEAR Muddy] returns 0 pages, so the algorithm relaxes the pattern by cutting the initial noun “river”, according to the heuristic for discarding a noun if it is the first keyword of the question. The second pattern [US NEAR (known OR know OR...) NEAR Big NEAR Muddy] also returns 0 pages, so we apply the heuristic for ignoring verbs like “know”, “call” and abstract nouns like “name”. The third pattern [US NEAR Big NEAR Muddy] returns 28 pages, which is over the experimentally set threshold of seven pages. One of the 50 byte candidate answers from the TREC-2001 answer collection is “recover Mississippi River”. Taking into account the answer type LOCATION, the algorithm considers only the named entity: “Mississippi River”. To calculate answer validity score (in this example PMI) for [Mississippi River], the procedure constructs the validation pattern: [US NEAR Big NEAR Muddy NEAR Mississippi River] with the answer sub-pattern [Mississippi River]. These two patterns are passed to the search engine, and the returned numbers of pages are substituted in the mutual information expression at the places of !   C1234l   and     respectively; the previously obtained number (i.e. 28) is substituted at the place of     ! . In this way an answer validity score of 55.5 is calculated. It turns out that this value is the maximal validity score for all the answers of this question. Other correct answers from the TREC-2001 collection contain as name entity “Mississippi”. Their answer validity score is 11.8, which is greater than 1.2 and also greater than m-noBk7 & 'qpXr s rutv w<xSy*z*+ ${WHWHn|W, . This score (i.e. 11.8) classifies them as relevant answers. On the other hand, all the wrong answers has validity score below 1 and as a result all of them are classified as irrelevant answer candidates. 5 Experiments and Discussion A number of experiments have been carried out in order to check the validity of the proposed answer validation technique. As a data set, the 492 questions of the TREC-2001 database have been used. For each question, at most three correct answers and three wrong answers have been randomly selected from the TREC-2001 participants’ submissions, resulting in a corpus of 2726 question-answer pairs (some question have less than three positive answers in the corpus). As said before, AltaVista was used as search engine. A baseline for the answer validation experiment was defined by considering how often an answer occurs in the top 10 documents among those (1000 for each question) provided by NIST to TREC-2001 participants. An answer was judged correct for a question if it appears at least one time in the first 10 documents retrieved for that question, otherwise it was judged not correct. Baseline results are reported in Table 2. We carried out several experiments in order to check a number of working hypotheses. Three independent factors were considered: Estimation method. We have implemented three measures (reported in Section 4.2) to estimate an answer validity score: PMI, MLHR and CCP. Threshold. We wanted to estimate the role of two different kinds of thresholds for the assessment of answer validation. In the case of an absolute threshold, if the answer validity score for a candidate answer is below the threshold, the answer is considered wrong, otherwise it is accepted as relevant. In a second type of experiment, for every question and its corresponding answers the program chooses the answer with the highest validity score and calculates a relative threshold on that basis (i.e. 
z*+. ,y*rt}$ K 7 & ' s rqtv ,xSy*z*+ ). However the relative threshold should be larger than a certain minimum value. Question type. We wanted to check performance variation based on different types of TREC-2001 questions. In particular, we have separated definition and generic questions from true named entities questions. Tables 2 and 3 report the results of the automatic answer validation experiments obtained respectively on all the TREC-2001 questions and on the subset of definition and generic questions. For each estimation method we report precision, recall and success rate. Success rate best represents the performance of the system, being the percent of [   ] pairs where the result given by the system is the same as the TREC judges’ opinion. Precision is the percent of    pairs estimated by the algorithm as relevant, for which the opinion of TREC judges was the same. Recall shows the percent of the relevant answers which the system also evaluates as relevant. P (%) R (%) SR (%) Baseline 50.86 4.49 52.99 CCP - rel. 77.85 82.60 81.25 CCP - abs. 74.12 81.31 78.42 PMI - rel. 77.40 78.27 79.56 PMI - abs. 70.95 87.17 77.79 MLHR - rel. 81.23 72.40 79.60 MLHR - abs. 72.80 80.80 77.40 Table 2: Results on all 492 TREC-2001 questions P (%) R (%) SR (%) CCP - rel. 85.12 84.27 86.38 CCP - abs. 83.07 78.81 83.35 PMI - rel. 83.78 82.12 84.90 PMI - abs. 79.56 84.44 83.35 MLHR - rel. 90.65 72.75 84.44 MLHR - abs. 87.20 67.20 82.10 Table 3: Results on 249 named entity questions The best results on the 492 questions corpus (CCP measure with relative threshold) show a success rate of 81.25%, i.e. in 81.25% of the pairs the system evaluation corresponds to the human evaluation, and confirms the initial working hypotheses. This is 28% above the baseline success rate. Precision and recall are respectively 20-30% and 68-87% above the baseline values. These results demonstrate that the intuition behind the approach is motivated and that the algorithm provides a workable solution for answer validation. The experiments show that the average difference between the success rates obtained for the named entity questions (Table 3) and the full TREC-2001 question set (Table 2) is 5.1%. This means that our approach performs better when the answer entities are well specified. Another conclusion is that the relative threshold demonstrates superiority over the absolute threshold in both test sets (average 2.3%). However if the percent of the right answers in the answer set is lower, then the efficiency of this approach may decrease. The best results in both question sets are obtained by applying CCP. Such non-symmetric formulas might turn out to be more applicable in general. As conditional corrected (CCP) is not a classical co-occurrence measure like PMI and MLHR, we may consider its high performance as proof for the difference between our task and classic cooccurrence mining. Another indication for this is the fact that MLHR and PMI performances are comparable, however in the case of classic co-occurrence search, MLHR should show much better success rate. It seems that we have to develop other measures specific for the question-answer co-occurrence mining. 6 Related Work Although there is some recent work addressing the evaluation of QA systems, it seems that the idea of using a fully automatic approach to answer validation has still not been explored. For instance, the approach presented in (Breck et al., 2000) is semiautomatic. 
The proposed methodology for answer validation relies on computing the overlapping between the system response to a question and the stemmed content words of an answer key. All the answer keys corresponding to the 198 TREC-8 questions have been manually constructed by human annotators using the TREC corpus and external resources like the Web. The idea of using the Web as a corpus is an emerging topic of interest among the computational linguists community. The TREC-2001 QA track demonstrated that Web redundancy can be exploited at different levels in the process of finding answers to natural language questions. Several studies (e.g. (Clarke et al., 2001) (Brill et al., 2001)) suggest that the application of Web search can improve the precision of a QA system by 25-30%. A common feature of these approaches is the use of the Web to introduce data redundancy for a more reliable answer extraction from local text collections. (Radev et al., 2001) suggests a probabilistic algorithm that learns the best query paraphrase of a question searching the Web. Other approaches suggest training a questionanswering system on the Web (Mann, 2001). The Web-mining algorithm presented in this paper is similar to the PMI-IR (Pointwise Mutual Information - Information Retrieval) described in (Turney, 2001). Turney uses PMI and Web retrieval to decide which word in a list of candidates is the best synonym with respect to a target word. However, the answer validity task poses different peculiarities. We search how the occurrence of the question words influence the appearance of answer words. Therefore, we introduce additional linguistic techniques for pattern and query formulation, such as keyword extraction, answer type extraction, named entities recognition and pattern relaxation. 7 Conclusion and Future Work We have presented a novel approach to answer validation based on the intuition that the amount of implicit knowledge which connects an answer to a question can be quantitatively estimated by exploiting the redundancy of Web information. Results obtained on the TREC-2001 QA corpus correlate well with the human assessment of answers’ correctness and confirm that a Web-based algorithm provides a workable solution for answer validation. Several activities are planned in the near future. First, the approach we presented is currently based on fixed validation patterns that combine single words extracted both from the question and from the answer. These word-level patterns provide a broad coverage (i.e. many documents are typically retrieved) in spite of a low precision (i.e also weak correlations among the keyword are captured). To increase the precision we want to experiment other types of patterns, which combine words into larger units (e.g. phrases or whole sentences). We believe that the answer validation process can be improved both considering pattern variations (from word-level to phrase and sentence-level), and the trade-off between the precision of the search pattern and the number of retrieved documents. Preliminary experiments confirm the validity of this hypothesis. Then, a generate and test module based on the validation algorithm presented in this paper will be integrated in the architecture of our QA system under development. In order to exploit the efficiency and the reliability of the algorithm, such system will be designed trying to maximize the recall of retrieved candidate answers. 
Instead of performing a deep linguistic analysis of these passages, the system will delegate to the evaluation component the selection of the right answer. References E.J. Breck, J.D. Burger, L. Ferro, L. Hirschman, D. House, M. Light, and I. Mani. 2000. How to Evaluate Your Question Answering System Every Day and Still Get Real Work Done. In Proceedings of LREC2000, pages 1495–1500, Athens, Greece, 31 May - 2 June. E. Brill, J. Lin, M. Banko, S. Dumais, and A. Ng. 2001. Data-Intensive Question Answering. In TREC10 Notebook Papers, Gaithesburg, MD. C. Clarke, G. Cormack, T. Lynam, C. Li, and G. McLearn. 2001. Web Reinforced Question Answering (MultiText Experiments for TREC 2001). In TREC-10 Notebook Papers, Gaithesburg, MD. T. Dunning. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics, 19(1):61–74. C. Fellbaum. 1998. WordNet, An Electronic Lexical Database. The MIT Press. S. Harabagiu and S. Maiorano. 1999. Finding Answers in Large Collections of Texts: Paragraph Indexing + Abductive Inference. In Proceedings of the AAAI Fall Symposium on Question Answering Systems, pages 63–71, November. B. Magnini, M. Negri, R. Prevete, and H. Tanev. 2001. Multilingual Question/Answering: the DIOGENE System. In TREC-10 Notebook Papers, Gaithesburg, MD. G. S. Mann. 2001. A Statistical Method for Short Answer Extraction. In Proceedings of the ACL2001 Workshop on Open-Domain Question Answering, Toulouse, France, July. C.D. Manning and H. Sch¨utze. 1999. Foundations of Statistical Natural Language Processing. The MIT PRESS, Cambridge,Massachusets. H. R. Radev, H. Qi, Z. Zheng, S. Blair-Goldensohn, Z. Zhang, W. Fan, and J. Prager. 2001. Mining the Web for Answers to Natural Language Questions. In Proceedings of 2001 ACM CIKM, Atlanta, Georgia, USA, November. M. Subbotin and S. Subbotin. 2001. Patterns of Potential Answer Expressions as Clues to the Right Answers. In TREC-10 Notebook Papers, Gaithesburg, MD. P.D. Turney. 2001. Mining the Web for Synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of ECML2001, pages 491–502, Freiburg, Germany. R. Zajac. 2001. Towards Ontological Question Answering. In Proceedings of the ACL-2001 Workshop on Open-Domain Question Answering, Toulouse, France, July.
2002
54
Shallow parsing on the basis of words only: A case study Antal van den Bosch and Sabine Buchholz ILK / Computational Linguistics and AI Tilburg University Tilburg, The Netherlands Antal.vdnBosch,S.Buchholz  @kub.nl Abstract We describe a case study in which a memory-based learning algorithm is trained to simultaneously chunk sentences and assign grammatical function tags to these chunks. We compare the algorithm’s performance on this parsing task with varying training set sizes (yielding learning curves) and different input representations. In particular we compare input consisting of words only, a variant that includes word form information for lowfrequency words, gold-standard POS only, and combinations of these. The wordbased shallow parser displays an apparently log-linear increase in performance, and surpasses the flatter POS-based curve at about 50,000 sentences of training data. The low-frequency variant performs even better, and the combinations is best. Comparative experiments with a real POS tagger produce lower results. We argue that we might not need an explicit intermediate POS-tagging step for parsing when a sufficient amount of training material is available and word form information is used for low-frequency words. 1 Introduction It is common in parsing to assign part-of-speech (POS) tags to words as a first analysis step providing information for further steps. In many early parsers, the POS sequences formed the only input to the parser, i.e. the actual words were not used except in POS tagging. Later, with feature-based grammars, information on POS had a more central place in the lexical entry of a word than the identity of the word itself, e.g. MAJOR and other HEAD features in (Pollard and Sag, 1987). In the early days of statistical parsers, POS were explicitly and often exclusively used as symbols to base probabilities on; these probabilities are generally more reliable than lexical probabilities, due to the inherent sparseness of words. In modern lexicalized parsers, POS tagging is often interleaved with parsing proper instead of being a separate preprocessing module (Collins, 1996; Ratnaparkhi, 1997). Charniak (2000) notes that having his generative parser generate the POS of a constituent’s head before the head itself increases performance by 2 points. He suggests that this is due to the usefulness of POS for estimating back-off probabilities. Abney’s (1991) chunking parser consists of two modules: a chunker and an attacher. The chunker divides the sentence into labeled, non-overlapping sequences (chunks) of words, with each chunk containing a head and (nearly) all of its premodifiers, exluding arguments and postmodifiers. His chunker works on the basis of POS information alone, whereas the second module, the attacher, also uses lexical information. Chunks as a separate level have also been used in Collins (1996) and Ratnaparkhi (1997). This brief overview shows that the main reason for the use of POS tags in parsing is that they provide Computational Linguistics (ACL), Philadelphia, July 2002, pp. 433-440. Proceedings of the 40th Annual Meeting of the Association for useful generalizations and (thereby) counteract the sparse data problem. However, there are two objections to this reasoning. First, as naturally occurring text does not come POS-tagged, we first need a module to assign POS. This tagger can base its decisions only on the information present in the sentence, i.e. on the words themselves. 
The question then arises whether we could use this information directly, and thus save the explicit tagging step. The second objection is that sparseness of data is tightly coupled to the amount of training material used. As training material is more abundant now than it was even a few years ago, and today’s computers can handle these amounts, we might ask whether there is now enough data to overcome the sparseness problem for certain tasks. To answer these two questions, we designed the following experiments. The task to be learned is a shallow parsing task (described below). In one experiment, it has to be performed on the basis of the “gold-standard”, assumed-perfect POS taken directly from the training data, the Penn Treebank (Marcus et al., 1993), so as to abstract from a particular POS tagger and to provide an upper bound. In another experiment, parsing is done on the basis of the words alone. In a third, a special encoding of low-frequency words is used. Finally, words and POS are combined. In all experiments, we increase the amount of training data stepwise and record parse performance for each step. This yields four learning curves. The word-based shallow parser displays an apparently log-linear increase in performance, and surpasses the flatter POS-based curve at about 50,000 sentences of training data. The lowfrequency variant performs even better, and the combinations is best. Comparative experiments with a real POS tagger produce lower results. The paper is structured as follows. In Section 2 we describe the parsing task, its input representation, how this data was extracted from the Penn Treebank, and how we set up the learning curve experiments using a memory-based learner. Section 3 provides the experimental learning curve results and analyses them. Section 4 contains a comparison of the effects with gold-standard and automatically assigned POS. We review related research in Section 5, and formulate our conclusions in Section 6. 2 Task representation, data preparation, and experimental setup We chose a shallow parsing task as our benchmark task. If, to support an application such as information extraction, summarization, or question answering, we are only interested in parts of the parse tree, then a shallow parser forms a viable alternative to a full parser. Li and Roth (2001) show that for the chunking task it is specialized in, their shallow parser is more accurate and more robust than a general-purpose, i.e. full, parser. Our shallow parsing task is a combination of chunking (finding and labelling non-overlapping syntactically functional sequences) and what we will call function tagging. Our chunks and functions are based on the annotations in the third release of the Penn Treebank (Marcus et al., 1993). Below is an example of a tree and the corresponding chunk (subscripts on brackets) and function (superscripts on headwords) annotation: ((S (ADVP-TMP Once) (NP-SBJ-1 he) (VP was (VP held (NP *-1) (PP-TMP for (NP three months)) (PP without (S-NOM (NP-SBJ *-1) (VP being (VP charged) ))))) .)) [  Once      ] [  he    ] [   was held   ] [  for    ] [  three months  ] [  without  ] [   being charged    ] . Nodes in the tree are labeled with a syntactic category and up to four function tags that specify grammatical relations (e.g. SBJ for subject), subtypes of adverbials (e.g. TMP for temporal), discrepancies between syntactic form and syntactic function (e.g. NOM for non-nominal constituents functioning nominally) and notions like topicalization. 
Our chunks are based on the syntactic part of the constituent label. The conversion program is the same as used for the CoNLL-2000 shared task (Tjong Kim Sang and Buchholz, 2000). Head words of chunks are assigned a function code that is based on the full constituent label of the parent and of ancestors with a different category, as in the case of VP/S-NOM in the example. 2.1 Task representation and evaluation method To formulate the task as a machine-learnable classification task, we use a representation that encodes the joint task of chunking and function-tagging a sentence in per-word classification instances. As illustrated in Table 2.1, an instance (which corresponds to a row in the table) consists of the values for all features (the columns) and the functionchunk code for the focus word. The features describe the focus word and its local context. For the chunk part of the code, we adopt the “Inside”, “Outside”, and “Between” (IOB) encoding originating from (Ramshaw and Marcus, 1995). For the function part of the code, the value is either the function for the head of a chunk, or the dummy value NOFUNC for all non-heads. For creating the POS-based task, all words are replaced by the goldstandard POS tags associated with them in the Penn Treebank. For the combined task, both types of features are used simultaneously. When the learner is presented with new instances from heldout material, its task is thus to assign the combined function-chunk codes to either words or POS in context. From the sequence of predicted function-chunk codes, the complete chunking and function assignment can be reconstructed. However, predictions can be inconsistent, blocking a straightforward reconstruction of the complete shallow parse. We employed the following four rules to resolve such problems: (1) When an O chunk code is followed by a B chunk code, or when an I chunk code is followed by a B chunk code with a different chunk type, the B is converted to an I. (2) When more than one word in a chunk is given a function code, the function code of the rightmost word is taken as the chunk’s function code. (3) If all words of the chunk receive NOFUNC tags, a prior function code is assigned to the rightmost word of the chunk. This prior, estimated on the training set, represents the most frequent function code for that type of chunk. To measure the success of our learner, we compute the precision, recall and their harmonic mean, the F-score1 with  =1 (Van Rijsbergen, 1979). In the combined function-chunking evaluation, a chunk is only counted as correct when its boundaries, its type and its function are identified correctly. 2.2 Data preparation Our total data set consists of all 74,024 sentences in the Wall Street Journal, Brown and ATIS Corpus subparts of the Penn Treebank III. We randomized the order of the sentences in this dataset, and then split it into ten 90%/10% partitionings with disjoint 10% portions, in order to run 10fold cross-validation experiments (Weiss and Kulikowski, 1991). To provide differently-sized training sets for learning curve experiments, each training set (of 66,627 sentences) was also clipped at the following sizes: 100 sentences, 500, 1000, 2000, 5000, 10,000, 20,000 and 50,000. All data was converted to instances as illustrated in Table 2.1. For the total data set, this yields 1,637,268 instances, one for each word or punctuation mark. 62,472 word types occur in the total data set, and 874 different functionchunk codes. 
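The per-word instance encoding of Table 2.1 can be sketched as follows. This is an illustrative reconstruction rather than the exact feature set used in the experiments: the window of three words of left and right context and the padding symbol are assumptions, and the combined function-chunk codes are passed in as plain strings.

PAD = "_"      # padding symbol for positions outside the sentence (assumption)
WINDOW = 3     # words of left/right context per instance (assumption)


def encode_instances(words, codes, window=WINDOW):
    # Turn one sentence into per-word classification instances: the feature
    # vector is the focus word plus its local context, the class is the
    # combined function-chunk code (IOB tag plus function for chunk heads,
    # NOFUNC otherwise).
    assert len(words) == len(codes)
    padded = [PAD] * window + list(words) + [PAD] * window
    instances = []
    for i, code in enumerate(codes):
        left = padded[i:i + window]
        focus = padded[i + window]
        right = padded[i + window + 1:i + 2 * window + 1]
        instances.append((left + [focus] + right, code))
    return instances


if __name__ == "__main__":
    # Shortened version of the example sentence used above.
    words = ["Once", "he", "was", "held", "."]
    codes = ["I-ADVP ADVP-TMP", "I-NP NP-SBJ", "I-VP NOFUNC",
             "I-VP VP/S", "O NOFUNC"]
    for features, cls in encode_instances(words, codes):
        print(features, "->", cls)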
2.3 Classifier: Memory-based learning Arguably, the choice of algorithm is not crucial in learning curve experiments. First, we aim at measuring relative differences arising from the selection of types of input. Second, there are indications that increasing the training set of language processing tasks produces much larger performance gains than varying among algorithms at fixed training set sizes; moreover, these differences also tend to get smaller with larger data sets (Banko and Brill, 2001). Memory-based learning (Stanfill and Waltz, 1986; Aha et al., 1991; Daelemans et al., 1999b) is a supervised inductive learning algorithm for learning classification tasks. Memory-based learning treats a set of labeled (pre-classified) training instances as points in a multi-dimensional feature space, and stores them as such in an instance base in memory (rather than performing some abstraction over them). Classification in memory-based learning is performed by the  -NN algorithm (Cover and Hart, 1967) that searches for the  ‘nearest neighbors’ according to the distance function between two in1F "!$# &%(' )+*-, precision , recall % , precision ' recall Left context Focus Right context Function-chunk code Once he was held I-ADVP ADVP-TMP Once he was held for I-NP NP-SBJ Once he was held for three I-VP NOFUNC Once he was held for three months I-VP VP/S he was held for three months without I-PP PP-TMP was held for three months without being I-NP NOFUNC held for three months without being charged I-NP NP for three months without being charged . I-PP PP three months without being charged . I-VP NOFUNC months without being charged . I-VP VP/S-NOM without being charged . O NOFUNC Table 1: Encoding into instances, with words as input, of the example sentence “Once he was held for three months without being charged .” stances . and / , 0213.546/87:9<;>=?A@CBED ?GF 13H ? 4JI ? 7 , where K is the number of features, D ? is a weight for feature L , and F estimates the difference between the two instances’ values at the L th feature. The classes of the  nearest neighbors then determine the class of the new case. In our experiments, we used a variant of the IB1 memory-based learner and classifier as implemented in TiMBL (Daelemans et al., 2001). On top of the  NN kernel of IB1 we used the following metrics that fine-tune the distance function and the class voting automatically: (1) The weight (importance) of a feature L , D ? , is estimated in our experiments by computing its gain ratio MON ? (Quinlan, 1993). This is the algorithm’s default choice. (2) Differences between feature values (i.e. words or POS tags) are estimated by the real-valued outcome of the modified value difference metric (Stanfill and Waltz, 1986; Cost and Salzberg, 1993). (3)  was set to seven. This and the previous parameter setting turned out best for a chunking task using the same algorithm as reported by Veenstra and van den Bosch (2000). (4) Class voting among the  nearest neighbours is done by weighting each neighbour’s vote by the inverse of its distance to the test example (Dudani, 1976). In Zavrel (1997), this distance was shown to improve over standard  -NN on a PP-attachment task. (5) For efficiency, search for the  -nearest neighbours is approximated by employing TRIBL (Daelemans et al., 1997), a hybrid between pure  -NN search and decision-tree traversal. The switch point of TRIBL was set to 1 for the words only and POS only experiments, i.e. 
a decision-tree split was made on the most important feature, the focus word, respectively focus POS. For the experiments with both words and POS, the switch point was set to 2 and the algorithm was forced to split on the focus word and focus POS. The metrics under 1) to 4) then apply to the remaining features. 3 Learning Curve Experiments We report the learning curve results in three paragraphs. In the first, we compare the performance of a plain words input representation with that of a gold-standard POS one. In the second we introduce a variant of the word-based task that deals with low-frequency words. The last paragraph describes results with input consisting of words and POS tags. Words only versus POS tags only As illustrated in Figure 1, the learning curves of both the word-based and the POS-based representation are upward with more training data. The word-based curve starts much lower but flattens less; in the tested range it has an approximately log-linear growth. Given the measured results, the word-based curve surpasses the POS-based curve at a training set size between 20,000 and 50,000 sentences. This proves two points: First, experiments with a fixed training set size might present a misleading snapshot. Second, the amount of training material available today is already enough to make words more valuable input than (gold-standard!) POS. Low-frequency word encoding variant If TRIBL encounters an unknown word in the test material, it stops already at the decision tree stage and returns the default class without even using the information provided by the context. This is clearly disadvantageous and specific to this choice of algorithm. (Figure 1: Learning curves of the main experiments on POS tags, words, attenuated words, and the combination of words and POS. The y-axis represents F (β=1) on combined chunking and function assignment. The x-axis represents the number of training sentences; its scale is logarithmic.) A more general shortcoming is that the word form of an unknown word often contains useful information that is not available in the present setup. To overcome these two problems, we applied what Eisner (1997) calls “attenuation” to all words occurring ten times or less in training material. If such a word ends in a digit, it is converted to the string “MORPH-NUM”; if the word is six characters or longer it becomes “MORPH-XX” where XX are the final two letters, else it becomes “MORPH-SHORT”. If the first letter is capitalised, the attenuated form is “MORPH-CAP”. This produces sequences such as A number of MORPH-ts were MORPH-ly MORPH-ed by traders . (A number of developments were negatively interpreted by traders ). We applied this attenuation method to all training sets. All words in test material that did not occur as words in the attenuated training material were also attenuated following the same procedure. The curve resulting from the attenuated word-based experiment is also displayed in Figure 1. The curve illustrates that the attenuated representation performs better than the pure word-based one at all reasonable training set sizes. However the effect clearly diminishes with more training data, so we cannot exclude that the two curves will meet with yet more training data.
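A minimal sketch of the attenuation rule described above is given below, assuming that the frequency threshold of ten is computed over the training material beforehand and that the capitalisation test takes precedence over the other tests (the ordering of the rules is not fully specified here).

FREQ_THRESHOLD = 10   # words occurring this often or less are attenuated


def attenuate(word):
    # Map a single low-frequency word to its attenuated form. The precedence
    # of the capitalisation test over the other tests is an assumption.
    if word[:1].isupper():
        return "MORPH-CAP"
    if word[-1:].isdigit():
        return "MORPH-NUM"
    if len(word) >= 6:
        return "MORPH-" + word[-2:]     # e.g. developments -> MORPH-ts
    return "MORPH-SHORT"


def attenuate_tokens(tokens, train_counts):
    # Replace every token seen FREQ_THRESHOLD times or less in the training
    # material; unseen tokens (count 0) are attenuated as well.
    return [attenuate(t) if train_counts.get(t, 0) <= FREQ_THRESHOLD else t
            for t in tokens]


if __name__ == "__main__":
    counts = {"A": 500, "number": 200, "of": 10000, "were": 800,
              "by": 5000, "traders": 40, ".": 99999}
    sentence = "A number of developments were negatively interpreted by traders ."
    print(" ".join(attenuate_tokens(sentence.split(), counts)))
    # -> A number of MORPH-ts were MORPH-ly MORPH-ed by traders .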
Combining words with POS tags Although the word-based curve, and especially its attenuated variant, end higher than the POS-based curve, POS might still be useful in addition to words. We therefore also tested a representation with both types of features. As shown in Figure 1, the “attenuated word + gold-standard POS” curve starts close to the goldstandard POS curve, attains break-even with this curve at about 500 sentences, and ends close to but higher than all other curves, including the “attenuated word” curve. 4 Although the performance increase through the addition of POS becomes smaller with more training data, it is still highly significant with maximal training set size. As the tags are the gold-standard tags taken directly from the Penn Treebank, this result provides an upper bound for the contribution of POS tags to the shallow parsing task under investigation. Automatic POS tagging is a well-studied Input features Precision R Recall R F-score R gold-standard POS 73.8 0.2 73.9 0.2 73.9 0.2 MBT POS 72.2 0.2 72.4 0.2 72.3 0.2 words 75.4 0.1 75.4 0.1 75.4 0.1 words S gold-standard POS 76.5 0.2 77.1 0.2 76.8 0.2 words S MBT POS 75.8 0.2 76.1 0.1 75.9 0.1 attenuated words 77.3 0.1 77.2 0.2 77.3 0.2 attenuated words S gold-standard POS 78.9 0.2 79.1 0.2 79.0 0.2 attenuated words S MBT POS 77.6 0.2 77.7 0.2 77.6 0.2 Table 2: Average precision, recall, and F-scores on the chunking-function-tagging task, with standard deviation, using the input features words, attenuated words, gold-standard POS, and MBT POS, and combinations, on the maximal training set size. task (Church, 1988; Brill, 1993; Ratnaparkhi, 1996; Daelemans et al., 1996), and reported errors in the range of 2–6% are common. To investigate the effect of using automatically assigned tags, we trained MBT, a memory-based tagger (Daelemans et al., 1996), on the training portions of our 10-fold crossvalidation experiment for the maximal data and let it predict tags for the test material. The memory-based tagger attained an accuracy of 96.7% ( R 0.1; 97.0% on known words, and 80.9% on unknown words). We then used these MBT POS instead of the goldstandard ones. The results of these experiments, along with the equivalent results using gold-standard POS, are displayed in Table 2. As they show, the scores with automatically assigned tags are always lower than with the gold-standard ones. When taken individually, the difference in F-scores of the gold-standard versus the MBT POS tags is 1.6 points. Combined with words, the MBT POS contribute 0.5 points (compared against words taken individually); combined with attenuated words, they contribute 0.3 points. This is much less than the improvement by the goldstandard tags (1.7 points) but still significant. However, as the learning curve experiments showed, this is only a snapshot and the improvement may well diminish with more training data. A breakdown of accuracy results shows that the highest improvement in accuracy is achieved for focus words in the MORPH-SHORT encoding. In these cases, the POS tagger has access to more information about the low-frequency word (e.g. its suffix) than the attenuated form provides. This suggests that this encoding is not optimal. 5 Related Research Ramshaw and Marcus (1995), Mu˜noz et al. (1999), Argamon et al. (1998), Daelemans et al. (1999a) find NP chunks, using Wall Street Journal training material of about 9000 sentences. F-scores range between 91.4 and 92.8. 
The first two articles mention that words and (automatically assigned) POS together perform better than POS alone. Chunking is one part of the task studied here, so we also computed performance on chunks alone, ignoring function codes. Indeed the learning curve of words combined with gold-standard POS crosses the POS-based curve before 10,000 sentences on the chunking subtask. Tjong Kim Sang and Buchholz (2000) give an overview of the CoNLL shared task of chunking. The types and definitions of chunks are identical to the ones used here. Training material again consists of the 9000 Wall Street Journal sentences with automatically assigned POS tags. The best F-score (93.5) is higher than the 91.5 F-score attained on chunking in our study using attenuated words only, but using the maximally-sized training sets. With gold-standard POS and attenuated words we attain an F-score of 94.2; with MBT POS tags and attenuated words, 92.8. In the CoNLL competition, all three best systems used combinations of classifiers instead of one single classifier. In addition, the effect of our mix of sentences from different corpora on top of WSJ is not clear. Ferro et al. (1999) describe a system for finding grammatical relations in automatically tagged and manually chunked text. They report an Fscore of 69.8 for a training size of 3299 words of elementary school reading comprehension tests. Buchholz et al. (1999) achieve 71.2 F-score for grammatical relation assignment on automatically tagged and chunked text after training on about 40,000 Wall Street Journal sentences. In contrast to these studies, we do not chunk before finding grammatical relations; rather, chunking is performed simultaneously with headword function tagging. Measuring F-scores on the correct assignment of functions to headwords in our study, we attain 78.2 F-score using words, 80.1 using attenuated words, 80.9 using attenuated words combined with gold-standard POS, and 79.7 using attenuated words combined with MBT POS (which is slightly worse than with attenuated words only). Our function tagging task is easier than finding grammatical relations as we tag a headword of a chunk as e.g. a subject in isolation whereas grammatical relation assignment also includes deciding which verb this chunk is the subject of. A¨ıt-Mokhtar and Chanod (1997) describe a sequence of finite-state transducers in which function tagging is a separate step, after POS tagging and chunking. The last transducer then uses the function tags to extract subject/verb and object/verb relations (from French text). 6 Conclusion POS are normally considered useful information in shallow and full parsing. Our learning curve experiments show that: T The relative merit of words versus POS as input for the combined chunking and functiontagging task depends on the amount of training data available. T The absolute performance of words depends on the treatment of rare words. The additional use of word form information (attenuation) improves performance. T The addition of POS also improves performance. In this and the previous case, the effect becomes smaller with more training data. Experiments with the maximal training set size show that: T Addition of POS maximally yields an improvement of 1.7 points on this data. T With realistic POS the improvement is much smaller. Preliminary analysis shows that the improvement by realistic POS seems to be caused mainly by a superior use of word form information by the POS tagger. 
We therefore plan to experiment with a POS tagger and an attenuated words variant that use exactly the same word form information. In addition we also want to pursue using the combined chunker and grammatical function tagger described here as a first step towards grammatical relation assignment. References S. Abney. 1991. Parsing by chunks. In Principle-Based Parsing, pages 257–278. Kluwer Academic Publishers, Dordrecht. D. W. Aha, D. Kibler, and M. Albert. 1991. Instancebased learning algorithms. Machine Learning, 6:37– 66. S. A¨ıt-Mokhtar and J.-P. Chanod. 1997. Subject and object dependency extraction using finite-state transducers. In Proceedings of ACL’97 Workshop on Information Extraction and the Building of Lexical Semantic Resources for NLP Applications, Madrid. S. Argamon, I. Dagan, and Y. Krymolowski. 1998. A memory-based approach to learning shallow natural language patterns. In Proc. of 36th annual meeting of the ACL, pages 67–73, Montreal. M. Banko and E. Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of the 39th Annual Meeting and 10th Conference of the European Chapter of the Association for Computational Linguistics, Toulouse, France. E. Brill. 1993. A Corpus-Based Approach to Language Learning. Ph.D. thesis, University of Pennsylvania, Department of Computer and Information Science. S. Buchholz, J. Veenstra, and W. Daelemans. 1999. Cascaded grammatical relation assignment. In Pascale Fung and Joe Zhou, editors, Proceedings of EMNLP/VLC-99, pages 239–246. ACL. E. Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of NAACL’00, pages 132–139. K. W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proc. of Second Applied NLP (ACL). M.J. Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics. S. Cost and S. Salzberg. 1993. A weighted nearest neighbour algorithm for learning with symbolic features. Machine Learning, 10:57–78. T. M. Cover and P. E. Hart. 1967. Nearest neighbor pattern classification. Institute of Electrical and Electronics Engineers Transactions on Information Theory, 13:21–27. W. Daelemans, J. Zavrel, P. Berck, and S. Gillis. 1996. MBT: A memory-based part of speech tagger generator. In E. Ejerhed and I. Dagan, editors, Proc. of Fourth Workshop on Very Large Corpora, pages 14–27. ACL SIGDAT. W. Daelemans, A. Van den Bosch, and J. Zavrel. 1997. A feature-relevance heuristic for indexing and compressing large case bases. In M. Van Someren and G. Widmer, editors, Poster Papers of the Ninth European Conference on Machine Learing, pages 29–38, Prague, Czech Republic. University of Economics. W. Daelemans, S. Buchholz, and J. Veenstra. 1999a. Memory-based shallow parsing. In Proceedings of CoNLL, Bergen, Norway. W. Daelemans, A. Van den Bosch, and J. Zavrel. 1999b. Forgetting exceptions is harmful in language learning. Machine Learning, Special issue on Natural Language Learning, 34:11–41. W. Daelemans, J. Zavrel, K. Van der Sloot, and A. Van den Bosch. 2001. TiMBL: Tilburg memory based learner, version 4.0, reference guide. ILK Technical Report 01-04, Tilburg University. available from http://ilk.kub.nl. S.A. Dudani. 1976. The distance-weighted U -nearest neighbor rule. In IEEE Transactions on Systems, Man, and Cybernetics, volume SMC-6, pages 325–327. J. Eisner. 1997. Three new probabilistic models for dependency parsing: An exploration. 
In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96). L. Ferro, M. Vilain, and A. Yeh. 1999. Learning transformation rules to find grammatical relations. In Proceedings of the Third Computational Natural Language Learning workshop (CoNLL), pages 43–52. X. Li and D. Roth. 2001. Exploring evidence for shallow parsing. In Proceedings of the Fifth Computational Natural Language Learning workshop (CoNLL). M. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1993. Building a large annotated corpus of english: The Penn Treebank. Computational Linguistics, 19(2):313–330. M. Mu˜noz, V. Punyakanok, D. Roth, and D. Zimak. 1999. A learning approach to shallow parsing. In Pascale Fung and Joe Zhou, editors, Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 168–178. C. Pollard and I. Sag. 1987. Information-Based Syntax and Semantics, Volume 1: Fundamentals, volume 13 of CSLI Lecture Notes. Center for the Study of Language and Information, Stanford. J.R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA. L.A. Ramshaw and M.P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the 3rd ACL/SIGDAT Workshop on Very Large Corpora, Cambridge, Massachusetts, USA, pages 82–94. A. Ratnaparkhi. 1996. A maximum entropy part-ofspeech tagger. In Proc. of the Conference on Empirical Methods in Natural Language Processing, May 17-18, 1996, University of Pennsylvania. A. Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, EMNLP-2, Providence, Rhode Island, pages 1–10. C. Stanfill and D. Waltz. 1986. Toward memorybased reasoning. Communications of the ACM, 29(12):1213–1228, December. E. Tjong Kim Sang and S. Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proceedings of CoNLL-2000 and LLL-2000, pages 127– 132, Lisbon, Portugal. C.J. Van Rijsbergen. 1979. Information Retrieval. Buttersworth, London. J. Veenstra and Antal van den Bosch. 2000. Singleclassifier memory-based phrase chunking. In Proceedings of CoNLL-2000 and LLL-2000, pages 157–159, Lisbon, Portugal. S. Weiss and C. Kulikowski. 1991. Computer systems that learn. San Mateo, CA: Morgan Kaufmann. J. Zavrel. 1997. An empirical re-examination of weighted voting for k-NN. In Proceedings of the 7th Belgian-Dutch Conference on Machine Learning, pages xx–xx.
2002
55
An Integrated Architecture for Shallow and Deep Processing Berthold Crysmann, Anette Frank, Bernd Kiefer, Stefan M¨uller, G¨unter Neumann, Jakub Piskorski, Ulrich Sch¨afer, Melanie Siegel, Hans Uszkoreit, Feiyu Xu, Markus Becker and Hans-Ulrich Krieger DFKI GmbH Stuhlsatzenhausweg 3 Saarbr¨ucken, Germany [email protected] Abstract We present an architecture for the integration of shallow and deep NLP components which is aimed at flexible combination of different language technologies for a range of practical current and future applications. In particular, we describe the integration of a high-level HPSG parsing system with different high-performance shallow components, ranging from named entity recognition to chunk parsing and shallow clause recognition. The NLP components enrich a representation of natural language text with layers of new XML meta-information using a single shared data structure, called the text chart. We describe details of the integration methods, and show how information extraction and language checking applications for realworld German text benefit from a deep grammatical analysis. 1 Introduction Over the last ten years or so, the trend in applicationoriented natural language processing (e.g., in the area of term, information, and answer extraction) has been to argue that for many purposes, shallow natural language processing (SNLP) of texts can provide sufficient information for highly accurate and useful tasks to be carried out. Since the emergence of shallow techniques and the proof of their utility, the focus has been to exploit these technologies to the maximum, often ignoring certain complex issues, e.g. those which are typically well handled by deep NLP systems. Up to now, deep natural language processing (DNLP) has not played a significant role in the area of industrial NLP applications, since this technology often suffers from insufficient robustness and throughput, when confronted with large quantities of unrestricted text. Current information extractions (IE) systems therefore do not attempt an exhaustive DNLP analysis of all aspects of a text, but rather try to analyse or “understand” only those text passages that contain relevant information, thereby warranting speed and robustness wrt. unrestricted NL text. What exactly counts as relevant is explicitly defined by means of highly detailed domain-specific lexical entries and/or rules, which perform the required mappings from NL utterances to corresponding domain knowledge. However, this “fine-tuning” wrt. a particular application appears to be the major obstacle when adapting a given shallow IE system to another domain or when dealing with the extraction of complex “scenario-based” relational structures. In fact, (Appelt and Israel, 1997) have shown that the current IE technology seems to have an upper performance level of less than 60% in such cases. It seems reasonable to assume that if a more accurate analysis of structural linguistic relationships could be provided (e.g., grammatical functions, referential relationships), this barrier might be overcome. Actually, the growing market needs in the wide area of intelligent information management systems seem to request such a break-through. In this paper we will argue that the quality of cur Computational Linguistics (ACL), Philadelphia, July 2002, pp. 441-448. 
Proceedings of the 40th Annual Meeting of the Association for rent SNLP-based applications can be improved by integrating DNLP on demand in a focussed manner, and we will present a system that combines the finegrained anaysis provided by HPSG parsing with a high-performance SNLP system into a generic and flexible NLP architecture. 1.1 Integration Scenarios Owing to the fact that deep and shallow technologies are complementary in nature, integration is a nontrivial task: while SNLP shows its strength in the areas of efficiency and robustness, these aspects are problematic for DNLP systems. On the other hand, DNLP can deliver highly precise and fine-grained linguistic analyses. The challenge for integration is to combine these two paradigms according to their virtues. Probably the most straightforward way to integrate the two is an architecture in which shallow and deep components run in parallel, using the results of DNLP, whenever available. While this kind of approach is certainly feasible for a real-time application such as Verbmobil, it is not ideal for processing large quantities of text: due to the difference in processing speed, shallow and deep NLP soon run out of sync. To compensate, one can imagine two possible remedies: either to optimize for precision, or for speed. The drawback of the former strategy is that the overall speed will equal the speed of the slowest component, whereas in case of the latter, DNLP will almost always time out, such that overall precision will hardly be distinguishable from a shallowonly system. What is thus called for is an integrated, flexible architecture where components can play at their strengths. Partial analyses from SNLP can be used to identify relevant candidates for the focussed use of DNLP, based on task or domain-specific criteria. Furthermore, such an integrated approach opens up the possibility to address the issue of robustness by using shallow analyses (e.g., term recognition) to increase the coverage of the deep parser, thereby avoiding a duplication of efforts. Likewise, integration at the phrasal level can be used to guide the deep parser towards the most likely syntactic analysis, leading, as it is hoped, to a considerable speedup. shallow NLP components NLP deep components internal repr. layer multi chart annot. XML external repr. generic OOP component interface WHAM application specification input and result Figure 1: The WHITEBOARD architecture. 2 Architecture The WHITEBOARD architecture defines a platform that integrates the different NLP components by enriching an input document through XML annotations. XML is used as a uniform way of representing and keeping all results of the various processing components and to support a transparent software infrastructure for LT-based applications. It is known that interesting linguistic information —especially when considering DNLP— cannot efficiently be represented within the basic XML markup framework (“typed parentheses structure”), e.g., linguistic phenomena like coreferences, ambiguous readings, and discontinuous constituents. The WHITEBOARD architecture employs a distributed multi-level representation of different annotations. Instead of translating all complex structures into one XML document, they are stored in different annotation layers (possibly non-XML, e.g. feature structures). Hyperlinks and “span” information together support efficient access between layers. Linguistic information of common interest (e.g. 
constituent structure extracted from HPSG feature structures) is available in XML format with hyperlinks to full feature structure representations externally stored in corresponding data files. Fig. 1 gives an overview of the architecture of the WHITEBOARD Annotation Machine (WHAM). Applications feed the WHAM with input texts and a specification describing the components and configuration options requested. The core WHAM engine has an XML markup storage (external “offline” representation), and an internal “online” multi-level annotation chart (index-sequential access). Following the trichotomy of NLP data representation models in (Cunningham et al., 1997), the XML markup contains additive information, while the multi-level chart contains positional and abstraction-based information, e.g., feature structures representing NLP entities in a uniform, linguistically motivated form. Applications and the integrated components access the WHAM results through an object-oriented programming (OOP) interface which is designed as general as possible in order to abstract from component-specific details (but preserving shallow and deep paradigms). The interfaces of the actually integrated components form subclasses of the generic interface. New components can be integrated by implementing this interface and specifying DTDs and/or transformation rules for the chart. The OOP interface consists of iterators that walk through the different annotation levels (e.g., token spans, sentences), reference and seek operators that allow to switch to corresponding annotations on a different level (e.g., give all tokens of the current sentence, or move to next named entity starting from a given token position), and accessor methods that return the linguistic information contained in the chart. Similarily, general methods support navigating the type system and feature structures of the DNLP components. The resulting output of the WHAM can be accessed via the OOP interface or as XML markup. The WHAM interface operations are not only used to implement NLP component-based applications, but also for the integration of deep and shallow processing components itself. 2.1 Components 2.1.1 Shallow NL component Shallow analysis is performed by SPPC, a rulebased system which consists of a cascade of weighted finite–state components responsible for performing subsequent steps of the linguistic analysis, including: fine-grained tokenization, lexicomorphological analysis, part-of-speech filtering, named entity (NE) recognition, sentence boundary detection, chunk and subclause recognition, see (Piskorski and Neumann, 2000; Neumann and Piskorski, 2002) for details. SPPC is capable of processing vast amounts of textual data robustly and efficiently (ca. 30,000 words per second in standard PC environment). We will briefly describe the SPPC components which are currently integrated with the deep components. Each token identified by a tokenizer as a potential word form is morphologically analyzed. For each token, its lexical information (list of valid readings including stem, part-of-speech and inflection information) is computed using a fullform lexicon of about 700,000 entries that has been compiled out from a stem lexicon of about 120,000 lemmas. After morphological processing, POS disambiguation rules are applied which compute a preferred reading for each token, while the deep components can back off to all readings. NE recognition is based on simple pattern matching techniques. 
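To make the pattern-matching idea concrete, the following sketch tags a few entity types with hand-written patterns over raw text. It is purely illustrative: the patterns, the tiny gazetteer and the sample input are invented, and it does not reproduce SPPC, which operates as a cascade of weighted finite-state components over tokenized and morphologically analyzed input.

import re

# Toy illustration of pattern-based NE recognition. The real SPPC system
# works over analyzed tokens, not raw-text regular expressions.
GAZETTEER_ORG_SUFFIX = r"(GmbH|AG|Inc\.)"   # invented, tiny
PATTERNS = [
    ("organization", re.compile(r"\b[A-Z][\w&-]+(?:\s+[A-Z][\w&-]+)*\s+%s" % GAZETTEER_ORG_SUFFIX)),
    ("person",       re.compile(r"\b(?:Herr|Frau|Dr\.)\s+[A-Z][a-zäöüß]+(?:\s+[A-Z][a-zäöüß]+)?")),
    ("quantity",     re.compile(r"\b\d+(?:[.,]\d+)?\s+(?:Euro|Prozent|Millionen)\b")),
]

def recognize_entities(text):
    """Return (start, end, type, surface) tuples for every pattern match."""
    spans = []
    for ne_type, pattern in PATTERNS:
        for m in pattern.finditer(text):
            spans.append((m.start(), m.end(), ne_type, m.group(0)))
    return sorted(spans)

if __name__ == "__main__":
    sample = "Dr. Helmut Kohl traf Vertreter der Mustermann AG und sprach über 3,5 Millionen Euro."
    for start, end, ne_type, surface in recognize_entities(sample):
        print(f"{ne_type:12s} {surface!r} [{start}:{end}]")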
Proper names (organizations, persons, locations), temporal expressions and quantities can be recognized with an average precision of almost 96% and recall of 85%. Furthermore, a NE–specific reference resolution is performed through the use of a dynamic lexicon which stores abbreviated variants of previously recognized named entities. Finally, the system splits the text into sentences by applying only few, but highly accurate contextual rules for filtering implausible punctuation signs. These rules benefit directly from NE recognition which already performs restricted punctuation disambiguation. 2.1.2 Deep NL component The HPSG Grammar is based on a large–scale grammar for German (M¨uller, 1999), which was further developed in the VERBMOBIL project for translation of spoken language (M¨uller and Kasper, 2000). After VERBMOBIL the grammar was adapted to the requirements of the LKB/PET system (Copestake, 1999), and to written text, i.e., extended with constructions like free relative clauses that were irrelevant in the VERBMOBIL scenario. The grammar consists of a rich hierarchy of 5,069 lexical and phrasal types. The core grammar contains 23 rule schemata, 7 special verb movement rules, and 17 domain specific rules. All rule schemata are unary or binary branching. The lexicon contains 38,549 stem entries, from which more than 70% were semi-automatically acquired from the annotated NEGRA corpus (Brants et al., 1999). The grammar parses full sentences, but also other kinds of maximal projections. In cases where no full analysis of the input can be provided, analyses of fragments are handed over to subsequent modules. Such fragments consist of maximal projections or single words. The HPSG analysis system currently integrated in the WHITEBOARD system is PET (Callmeier, 2000). Initially, PET was built to experiment with different techniques and strategies to process unification-based grammars. The resulting system provides efficient implementations of the best known techniques for unification and parsing. As an experimental system, the original design lacked open interfaces for flexible integration with external components. For instance, in the beginning of the WHITEBOARD project the system only accepted fullform lexica and string input. In collaboration with Ulrich Callmeier the system was extended. Instead of single word input, input items can now be complex, overlapping and ambiguous, i.e. essentially word graphs. We added dynamic creation of atomic type symbols, e.g., to be able to add arbitrary symbols to feature structures. With these enhancements, it is possible to build flexible interfaces to external components like morphology, tokenization, named entity recognition, etc. 3 Integration Morphology and POS The coupling between the morphology delivered by SPPC and the input needed for the German HPSG was easily established. The morphological classes of German are mapped onto HPSG types which expand to small feature structures representing the morphological information in a compact way. A mapping to the output of SPPC was automatically created by identifying the corresponding output classes. Currently, POS tagging is used in two ways. First, lexicon entries that are marked as preferred by the shallow component are assigned higher priority than the rest. Thus, the probability of finding the correct reading early should increase without excluding any reading. 
Second, if for an input item no entry is found in the HPSG lexicon, we automatically create a default entry, based on the part–of–speech of the preferred reading. This increases robustness, while avoiding increase in ambiguity. Named Entity Recognition Writing HPSG grammars for the whole range of NE expressions etc. is a tedious and not very promising task. They typically vary across text sorts and domains, and would require modularized subgrammars that can be easily exchanged without interfering with the general core. This can only be realized by using a type interface where a class of named entities is encoded by a general HPSG type which expands to a feature structure used in parsing. We exploit such a type interface for coupling shallow and deep processing. The classes of named entities delivered by shallow processing are mapped to HPSG types. However, some finetuning is required whenever deep and shallow processing differ in the amount of input material they assign to a named entity. An alternative strategy is used for complex syntactic phrases containing NEs, e.g., PPs describing time spans etc. It is based on ideas from Explanation–based Learning (EBL, see (Tadepalli and Natarajan, 1996)) for natural language analysis, where analysis trees are retrieved on the basis of the surface string. In our case, the part-of-speech sequence of NEs recognised by shallow analysis is used to retrieve pre-built feature structures. These structures are produced by extracting NEs from a corpus and processing them directly by the deep component. If a correct analysis is delivered, the lexical parts of the analysis, which are specific for the input item, are deleted. We obtain a sceletal analysis which is underspecified with respect to the concrete input items. The part-of-speech sequence of the original input forms the access key for this structure. In the application phase, the underspecified feature structure is retrieved and the empty slots for the input items are filled on the basis of the concrete input. The advantage of this approach lies in the more elaborate semantics of the resulting feature structures for DNLP, while avoiding the necessity of adding each and every single name to the HPSG lexicon. Instead, good coverage and high precision can be achieved using prototypical entries. Lexical Semantics When first applying the original VERBMOBIL HPSG grammar to business news articles, the result was that 78.49% of the missing lexical items were nouns (ignoring NEs). In the integrated system, unknown nouns and NEs can be recognized by SPPC, which determines morphosyntactic information. It is essential for the deep system to associate nouns with their semantic sorts both for semantics construction, and for providing semantically based selectional restrictions to help constraining the search space during deep parsing. GermaNet (Hamp and Feldweg, 1997) is a large lexical database, where words are associated with POS information and semantic sorts, which are organized in a fine-grained hierarchy. The HPSG lexicon, on the other hand, is comparatively small and has a more coarse-grained semantic classification. To provide the missing sort information when recovering unknown noun entries via SPPC, a mapping from the GermaNet semantic classification to the HPSG semantic classification (Siegel et al., 2001) is applied which has been automatically acquired. 
The training material for this learning process consists of those words that are annotated both with semantic sorts in the HPSG lexicon and with synsets of GermaNet. The learning algorithm computes a mapping relevance measure for associating semantic concepts in GermaNet with semantic sorts in the HPSG lexicon. For evaluation, we examined a corpus of 4664 nouns extracted from business news that were not contained in the HPSG lexicon. 2312 of these were known in GermaNet, where they are assigned 2811 senses. With the learned mapping, the GermaNet senses were automatically mapped to HPSG semantic sorts. The evaluation of the mapping accuracy yields promising results: in 76.52% of the cases the computed sort with the highest relevance probability was correct. In the remaining 20.70% of the cases, the correct sort was among the first three sorts.

3.1 Integration on Phrasal Level In the previous paragraphs we described strategies for integration of shallow and deep processing where the focus is on improving DNLP in the domain of lexical and sub-phrasal coverage. We can conceive of more advanced strategies for the integration of shallow and deep analysis at the level of phrasal syntax by guiding the deep syntactic parser towards a partial pre-partitioning of complex sentences provided by shallow analysis systems. This strategy can reduce the search space, and enhance parsing efficiency of DNLP. Stochastic Topological Parsing The traditional syntactic model of topological fields divides basic clauses into distinct fields: so-called pre-, middle- and post-fields, delimited by verbal or sentential markers. This topological model of German clause structure is underspecified or partial as to non-sentential constituent boundaries, but provides a linguistically well-motivated and theory-neutral macrostructure for complex sentences. Due to its linguistic underpinning the topological model provides a pre-partitioning of complex sentences that is (i) highly compatible with deep syntactic structures and (ii) maximally effective to increase parsing efficiency. At the same time (iii) partiality regarding the constituency of non-sentential material ensures the important aspects of robustness, coverage, and processing efficiency. In (Becker and Frank, 2002) we present a corpus-driven stochastic topological parser for German, based on a topological restructuring of the NEGRA corpus (Brants et al., 1999). For topological treebank conversion we build on methods and results in (Frank, 2001). The stochastic topological parser follows the probabilistic model of non-lexicalised PCFGs (Charniak, 1996). Due to abstraction from constituency decisions at the sub-sentential level, and the essentially POS-driven nature of topological structure, this rather simple probabilistic model yields surprisingly high figures of accuracy and coverage (see Fig. 2 and (Becker and Frank, 2002) for more detail), while context-free parsing guarantees efficient processing.

Figure 2: Stochastic topological parsing: results
  length   coverage   complete match   LP     LR     0CB    2CB
  <= 40    100        80.4             93.4   92.9   92.1   98.9
  all      99.8       78.6             92.4   92.2   90.7   98.5
  Training: 16,000 NEGRA sentences; Testing: 1,058 NEGRA sentences
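To illustrate the kind of model the topological parser relies on, the following minimal sketch estimates a non-lexicalised PCFG from a toy treebank by relative-frequency counting and scores a tree under it. The tree encoding and the example labels are invented for illustration; this is not the implementation of (Becker and Frank, 2002).

import math
from collections import defaultdict

# Trees are (label, children) pairs; preterminals are (POS, word) pairs.
# Being non-lexicalised, word-level productions are ignored in the counts.
def rules(tree):
    label, children = tree
    if isinstance(children, str):          # preterminal: (POS, word)
        return
    yield (label, tuple(child[0] for child in children))
    for child in children:
        yield from rules(child)

def estimate_pcfg(treebank):
    """Relative-frequency (maximum-likelihood) estimates of P(rhs | lhs)."""
    rule_counts, lhs_counts = defaultdict(int), defaultdict(int)
    for tree in treebank:
        for lhs, rhs in rules(tree):
            rule_counts[(lhs, rhs)] += 1
            lhs_counts[lhs] += 1
    return {r: c / lhs_counts[r[0]] for r, c in rule_counts.items()}

def log_prob(tree, pcfg):
    """Log probability of a tree under the PCFG (None if a rule is unseen)."""
    total = 0.0
    for rule in rules(tree):
        if rule not in pcfg:
            return None
        total += math.log(pcfg[rule])
    return total

# Invented toy tree using topological field labels.
toy_tree = ("CL-V2", [("VF", [("NN", "Peter")]),
                      ("LK", [("VVFIN", "isst")]),
                      ("MF", [("ADV", "gerne"), ("NN", "Würstchen")])])
pcfg = estimate_pcfg([toy_tree])
print(log_prob(toy_tree, pcfg))            # 0.0: every rule has probability 1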
The next step is to elaborate a (partial) mapping of shallow topological and deep syntactic structures that is maximally effective for preference-guided deep syntactic analysis, and thus, efficiency improvements in deep syntactic processing. Such a mapping is illustrated for a verb-second clause in Fig. 3, where matching constituents of topological and deep-syntactic phrase structure are indicated by circled nodes.

Figure 3: Matching topological and deep syntactic structures
  Example: "Peter ißt gerne Würstchen mit Kartoffelsalat" ('Peter eats happily sausages with potato salad'), POS sequence NN VVFIN ADV NN PREP NN.
  Topological structure: [CL-V2 [VF-TOPIC Peter] [LK-FIN ißt] [MF gerne Würstchen mit Kartoffelsalat] [RK-t -]]
  Deep syntactic structure: [CP [XP Peter] [C' [V ißt] [VP gerne [Würstchen [mit [Kartoffelsalat]]] [V-t -]]]]
  Mapping: CL-V2 -> CP, VF-TOPIC -> XP, LK-FIN -> V, [LK-FIN MF RK-t] -> C', [MF RK-t] -> VP, RK-t -> V-t

With this mapping defined for all sentence types, we can proceed to the technical aspects of integration into the WHITEBOARD architecture and XML text chart, as well as preference-driven HPSG analysis in the PET system.

4 Experiments An evaluation has been started using the NEGRA corpus, which contains about 20,000 newspaper sentences. The main objectives are to evaluate the syntactic coverage of the German HPSG on newspaper text and the benefits of integrating deep and shallow analysis. The sentences of the corpus were used in their original form without stripping, e.g., parenthesized insertions. We extended the HPSG lexicon semi-automatically from about 10,000 to 35,000 stems, which roughly corresponds to 350,000 full forms. Then, we checked the lexical coverage of the deep system on the whole corpus, which resulted in 28.6% of the sentences being fully lexically analyzed. The corresponding experiment with the integrated system yielded an improved lexical coverage of 71.4%, due to the techniques described in section 3. This increase is not achieved by manual extension, but only through synergy between the deep and shallow components. To test the syntactic coverage, we processed the subset of the corpus that was fully covered lexically (5878 sentences) with deep analysis only. The results are shown in Figure 4 in the second column. In order to evaluate the integrated system we processed 20,568 sentences from the corpus without further extension of the HPSG lexicon (see Figure 4, third column).

Figure 4: Evaluation of German HPSG
                            Deep      Integrated
  # sentences                    20,568
  avg. sentence length           16.83
  avg. lexical ambiguity    2.38      1.98
  avg. # analyses           16.19     18.53
  analysed sentences        2,569     4,546
  lexical coverage          28.6%     71.4%
  overall coverage          12.5%     22.1%

About 10% of the sentences that were successfully parsed by deep analysis only could not be parsed by the integrated system, and the number of analyses per sentence dropped from 16.2 to 8.6, which indicates a problem in the morphology interface of the integrated system. We expect better overall results once this problem is removed.

5 Applications Since typed feature structures (TFS) in Whiteboard serve as both a representation and an interchange format, we developed a Java package (JTFS) that implements the data structures, together with the necessary operations. These include a lazy-copying unifier, a subsumption and equivalence test, deep copying, iterators, etc. JTFS supports a dynamic construction of typed feature structures, which is important for information extraction.
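As an illustration of the kind of operation JTFS provides, the following sketch unifies two feature structures represented as plain nested dictionaries. It is a deliberate simplification, with no type hierarchy, coreferences, or lazy copying, and the attribute names in the example are invented; it is not the JTFS implementation.

def unify(fs1, fs2):
    """Unify two feature structures represented as nested dicts.

    Atomic values unify only if equal; dicts unify attribute-wise.
    Returns the unified structure, or None on failure.
    """
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for attr, value in fs2.items():
            if attr in result:
                sub = unify(result[attr], value)
                if sub is None:
                    return None
                result[attr] = sub
            else:
                result[attr] = value
        return result
    return fs1 if fs1 == fs2 else None

# Merging a partially filled template from shallow analysis with one from
# deep analysis by treating both as constraints:
shallow = {"person_in": "Dietmar Hopp"}
deep = {"division": "Entwicklungsabteilung", "mrs": {"pred": "uebernehmen"}}
print(unify(shallow, deep))
print(unify({"pred": "uebernehmen"}, {"pred": "verlassen"}))   # None: clash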
5.1 Information Extraction Information extraction in Whiteboard benefits both from the integration of the shallow and deep analysis results and from their processing methods. We chose management succession as our application domain. Two sets of template filling rules are defined: pattern-based and unification-based rules. The pattern-based rules work directly on the output delivered by the shallow analysis, for example:

(1) Nachfolger von <PERSON>_1  =>  [person_out: <PERSON>_1]

This rule matches expressions like Nachfolger von Helmut Kohl ('successor of'), which contain the two string tokens Nachfolger and von followed by a person name, and fills the slot person_out with the recognized person name Helmut Kohl. The pattern-based grammar yields good results by recognition of local relationships as in (1). The unification-based rules are applied to the deep analysis results. Given the fine-grained syntactic and semantic analysis of the HPSG grammar and its robustness (through SNLP integration), we decided to use the semantic representation (MRS, see (Copestake et al., 2001)) as additional input for IE. The reason is that MRSs express precise relationships between the chunks, in particular in constructions involving (combinations of) free word order, long distance dependencies, control and raising, or passive, which are very difficult, if not impossible, to recognize for a pattern-based grammar. E.g., the short sentence (2) illustrates a combination of free word order, control, and passive. The subject of the passive verb wurde gebeten is located in the middle field and is at the same time the subject of the infinitive verb zu übernehmen. A deep (HPSG) analysis can recognize the dependencies quite easily, whereas a pattern-based grammar cannot determine, e.g., for which verb Peter Miscke or Dietmar Hopp is the subject.

(2) Peter Miscke zufolge wurde Dietmar Hopp gebeten, die Entwicklungsabteilung zu übernehmen.
    Peter Miscke following was Dietmar Hopp asked, the development sector to take over.
    "According to Peter Miscke, Dietmar Hopp was asked to take over the development sector."

We employ typed feature structures (TFS) as our modelling language for the definition of scenario template types and template element types. Therefore, the template filling results from shallow and deep analysis can be uniformly encoded in TFS. As a side effect, we can easily adapt JTFS unification for the template merging task, by interpreting the partially filled templates from deep and shallow analysis as constraints. E.g., to extract the relevant information from the above sentence, a unification-based rule of the following form can be applied:

  [ PERSON_IN  [1]
    DIVISION   [2]
    MRS  [ PRED   "übernehmen"
           AGENT  [1]
           THEME  [2] ] ]

5.2 Language checking Another area where DNLP can support existing shallow-only tools is grammar and controlled language checking. Due to the scarce distribution of true errors (Becker et al., to appear), there is a high a priori probability for false alarms. As the number of false alarms decides on user acceptance, precision is of utmost importance and cannot easily be traded for recall. Current controlled language checking systems for German, such as MULTILINT (http://www.iai.uni-sb.de/en/multien.html) or FLAG (http://flag.dfki.de), build exclusively on SNLP: while checking of local errors (e.g.
NP-internal agreement, prepositional case) can be performed quite reliably by such a system, error types involving non-local dependencies, or access to grammatical functions are much harder to detect. The use of DNLP in this area is confronted with several systematic problems: first, formal grammars are not always available, e.g., in the case of controlled languages; second, erroneous sentences lie outside the language defined by the competence grammar, and third, due to the sparse distribution of errors, a DNLP system will spend most of the time parsing perfectly wellformed sentences. Using an integrated approach, a shallow checker can be used to cheaply identify initial error candidates, while false alarms can be eliminated based on the richer annotations provided by the deep parser. 6 Discussion In this paper we reported on an implemented system called WHITEBOARD which integrates different shallow components with a HPSG–based deep system. The integration is realized through the metaphor of textual annotation. To best of our knowledge, this is the first implemented system which integrates high-performance shallow processing with an advanced deep HPSG–based analysis system. There exists only very little other work that considers integration of shallow and deep NLP using an XML–based architecture, most notably (Grover and Lascarides, 2001). However, their integration efforts are largly limited to the level of POS tag information. Acknowledgements This work was supported by a research grant from the German Federal Ministry of Education, Science, Research and Technology (BMBF) to the DFKI project WHITEBOARD, FKZ: 01 IW 002. Special thanks to Ulrich Callmeier for his technical support concerning the integration of PET. References D. Appelt and D. Israel. 1997. Building information extraction systems. Tutorial during the 5th ANLP, Washington. M. Becker and A. Frank. 2002. A Stochastic Topological Parser of German. In Proceedings of COLING 2002, Teipei, Taiwan. M. Becker, A. Bredenkamp, B. Crysmann, and J. Klein. to appear. Annotation of error types for german newsgroup corpus. In Anne Abeill´e, editor, Treebanks: Building and Using Syntactically Annotated Corpora. Kluwer, Dordrecht. T. Brants, W. Skut, and H. Uszkoreit. 1999. Syntactic Annotation of a German newspaper corpus. In Proceedings of the ATALA Treebank Workshop, pages 69– 76, Paris, France. U. Callmeier. 2000. PET — A platform for experimentation with efficient HPSG processing techniques. Natural Language Engineering, 6 (1) (Special Issue on Efficient Processing with HPSG):99 – 108. E. Charniak. 1996. Tree-bank Grammars. In AAAI-96. Proceedings of the 13th AAAI, pages 1031–1036. MIT Press. A. Copestake, A. Lascarides, and D. Flickinger. 2001. An algebra for semantic construction in constraintbased grammars. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics (ACL 2001), Toulouse, France. A. Copestake. 1999. The (new) LKB system. ftp://www-csli.stanford.edu/ > aac/newdoc.pdf. H. Cunningham, K. Humphreys, R. Gaizauskas, and Y. Wilks. 1997. Software Infrastructure for Natural Language Processing. In Proceedings of the Fifth ANLP, March. A. Frank. 2001. Treebank Conversion. Converting the NEGRA Corpus to an LTAG Grammar. In Proceedings of the EUROLAN Workshop on Multi-layer Corpus-based Analysis, pages 29–43, Iasi, Romania. C. Grover and A. Lascarides. 2001. XML-based data preparation for robust deep parsing. In Proceedings of the 39th ACL, pages 252–259, Toulouse, France. B. Hamp and H. Feldweg. 
1997. Germanet - a lexicalsemantic net for german. In Proceedings of ACL workshop Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, Madrid. S. M¨uller and W. Kasper. 2000. HPSG analysis of German. In W. Wahlster, editor, Verbmobil: Foundations of Speech-to-Speech Translation, Artificial Intelligence, pages 238–253. Springer-Verlag, Berlin Heidelberg New York. S. M¨uller. 1999. Deutsche Syntax deklarativ. HeadDriven Phrase Structure Grammar f¨ur das Deutsche. Max Niemeyer Verlag, T¨ubingen. G. Neumann and J. Piskorski. 2002. A shallow text processing core engine. Computational Intelligence, to appear. J. Piskorski and G. Neumann. 2000. An intelligent text extraction and navigation system. In Proceedings of the RIAO-2000. Paris, April. M. Siegel, F. Xu, and G. Neumann. 2001. Customizing germanet for the use in deep linguistic processing. In Proceedings of the NAACL 2001 Workshop WordNet and Other Lexical Resources: Applications, Extensions and Customizations, Pittsburgh,USA, July. P. Tadepalli and B. Natarajan. 1996. A formal framework for speedup learning from problems and solutions. Journal of AI Research, 4:445 – 475.
A Noisy-Channel Model for Document Compression Hal Daum´e III and Daniel Marcu Information Sciences Institute University of Southern California 4676 Admiralty Way, Suite 1001 Marina del Rey, CA 90292 hdaume,marcu  @isi.edu Abstract We present a document compression system that uses a hierarchical noisy-channel model of text production. Our compression system first automatically derives the syntactic structure of each sentence and the overall discourse structure of the text given as input. The system then uses a statistical hierarchical model of text production in order to drop non-important syntactic and discourse constituents so as to generate coherent, grammatical document compressions of arbitrary length. The system outperforms both a baseline and a sentence-based compression system that operates by simplifying sequentially all sentences in a text. Our results support the claim that discourse knowledge plays an important role in document summarization. 1 Introduction Single document summarization systems proposed to date fall within one of the following three classes: Extractive summarizers simply select and present to the user the most important sentences in a text — see (Mani and Maybury, 1999; Marcu, 2000; Mani, 2001) for comprehensive overviews of the methods and algorithms used to accomplish this. Headline generators are noisy-channel probabilistic systems that are trained on large corpora of  Headline, Text  pairs (Banko et al., 2000; Berger and Mittal, 2000). These systems produce short sequences of words that are indicative of the content of the text given as input. Sentence simplification systems (Chandrasekar et al., 1996; Mahesh, 1997; Carroll et al., 1998; Grefenstette, 1998; Jing, 2000; Knight and Marcu, 2000) are capable of compressing long sentences by deleting unimportant words and phrases. Extraction-based summarizers often produce outputs that contain non-important sentence fragments. For example, the hypothetical extractive summary of Text (1), which is shown in Table 1, can be compacted further by deleting the clause “which is already almost enough to win”. Headline-based summaries, such as that shown in Table 1, are usually indicative of a text’s content but not informative, grammatical, or coherent. By repeatedly applying a sentence-simplification algorithm one sentence at a time, one can compress a text; yet, the outputs generated in this way are likely to be incoherent and to contain unimportant information. When summarizing text, some sentences should be dropped altogether. Ideally, we would like to build systems that have the strengths of all these three classes of approaches. The “Document Compression” entry in Table 1 shows a grammatical, coherent summary of Text (1), which was generated by a hypothetical document compression system that preserves the most important information in a text while deleting sentences, phrases, and words that are subsidiary to the main message of the text. Obviously, generating coherent, grammatical summaries such as that produced by the hypothetical document compression system in Table 1 is not trivial because of many conflicting Computational Linguistics (ACL), Philadelphia, July 2002, pp. 449-456. Proceedings of the 40th Annual Meeting of the Association for Type of Hypothetical output Output Output is Output is Summarizer contains only coherent grammatical important info Extractive John Doe has already secured the vote of most  summarizer democrats in his constituency, which is already almost enough to win. 
But without the support of the governer, he is still on shaky ground. Headline mayor vote constituency governer  generator Sentence The mayor is now looking for re-election. John Doe  simplifier has already secured the vote of most democrats in his constituency. He is still on shaky ground. Document John Doe has secured the vote of most democrats.    compressor But he is still on shaky ground. Table 1: Hypothetical outputs generated by various types of summarizers. goals1. The deletion of certain sentences may result in incoherence and information loss. The deletion of certain words and phrases may also lead to ungrammaticality and information loss. The mayor is now looking for re-election. John Doe has already secured the vote of most democrats in his constituency, which is already almost enough to win. But without the support of the governer, he is still on shaky grounds. (1) In this paper, we present a document compression system that uses hierarchical models of discourse and syntax in order to simultaneously manage all these conflicting goals. Our compression system first automatically derives the syntactic structure of each sentence and the overall discourse structure of the text given as input. The system then uses a statistical hierarchical model of text production in order to drop non-important syntactic and discourse units so as to generate coherent, grammatical document compressions of arbitrary length. The system outperforms both a baseline and a sentence-based compression system that operates by simplifying sequentially all sentences in a text. 2 Document Compression The document compression task is conceptually simple. Given a document    , our goal is to produce a new document  by “dropping” words  from  . In order to achieve this goal, we 1A number of other systems use the outputs of extractive summarizers and repair them to improve coherence (DUC, 2001; DUC, 2002). Unfortunately, none of these seems flexible enough to produce in one shot good summaries that are simultaneously coherent and grammatical. extent the noisy-channel model proposed by Knight & Marcu (2000). Their system compressed sentences by dropping syntactic constituents, but could be applied to entire documents only on a sentenceby-sentence basis. As discussed in Section 1, this is not adequate because the resulting summary may contain many compressed sentences that are irrelevant. In order to extend Knight & Marcu’s approach beyond the sentence level, we need to “glue” sentences together in a tree structure similar to that used at the sentence level. Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) provides us this “glue.” The tree in Figure 1 depicts the RST structure of Text (1). In RST, discourse structures are nonbinary trees whose leaves correspond to elementary discourse units (EDUs), and whose internal nodes correspond to contiguous text spans. Each internal node in an RST tree is characterized by a rhetorical relation. For example, the first sentence in Text (1) provides BACKGROUND information for interpreting the information in sentences 2 and 3, which are in a CONTRAST relation (see Figure 1). Each relation holds between two adjacent non-overlapping text spans called NUCLEUS and SATELLITE. (There are a few exceptions to this rule: some relations, such as LIST and CONTRAST, are multinuclear.) The distinction between nuclei and satellites comes from the empirical observation that the nucleus expresses what is more essential to the writer’s purpose than the satellite. 
Our system is able to analyze both the discourse structure of a document and the syntactic structure of each of its sentences or EDUs. It then compresses the document by dropping either syntactic or discourse constituents. 3 A Noisy-Channel Model For a given document  , we want to find the summary text  that maximizes  !#" . Using Bayes rule, we flip this so we end up maximizing $%&"'&" . Thus, we are left with modelling two probability distributions: $%&" , the probability of a document  given a summary  , and (" , the probability of a summary. We assume that we are given the discourse structure of each document and the syntactic structures of each of its EDUs. The intuitive way of thinking about this application of Bayes rule, reffered to as the noisy-channel model, is that we start with a summary  and add “noise” to it, yielding a longer document  . The noise added in our model consists of words, phrases and discourse units. For instance, given the document “John Doe has secured the vote of most democrats.” we could add words to it (namely the word “already”) to generate “John Doe has already secured the vote of most democrats.” We could also choose to add an entire syntactic constituent, for instance a prepositional phrase, to generate “John Doe has secured the vote of most democrats in his constituency.” These are both examples of sentence expansion as used previously by Knight & Marcu (2000). Our system, however, also has the ability to expand on a core message by adding discourse constituents. For instance, it could decide to add another discourse constituent to the original summary “John Doe has secured the vote of most democrats” by CONTRASTing the information in the summary with the uncertainty regarding the support of the governor, thus yielding the text: “John Doe has secured the vote of most democrats. But without the support of the governor, he is still on shaky ground.” As in any noisy-channel application, there are three parts that we have to account for if we are to build a complete document compression system: the channel model, the source model and the decoder. We describe each of these below. The source model assigns to a string the probability (" , the probability that the summary  is good English. Ideally, the source model should disfavor ungrammatical sentences and documents containing incoherently juxtaposed sentences. The channel model assigns to any document/summary pair a probability )*%(" . This models the extent to which  is a good expansion of  . For instance, if  is “The mayor is now looking for re-election.”, + is “The mayor is now looking for re-election. He has to secure the vote of the democrats.” and  is “The major is now looking for re-election. Sharks have sharp teeth.”, we expect , -%(" to be higher than .%&" because , expands on  by elaboration, while  shifts to a different topic, yielding an incoherent text. The decoder searches through all possible summaries of a document  for the summary  that maximizes the posterior probability )*%(" (" . Each of these parts is described below. 3.1 Source model The job of the source model is to assign a score (" to a compression independent of the original document. That is, the source model should measure how good English a summary is (independent of whether it is a good compression or not). 
Currently, we use a bigram measure of quality (trigram scores were also tested but failed to make a difference), combined with non-lexicalized context-free syntactic probabilities and context-free discourse probabilities, giving ("/ 102436587'9&" :<;>=@?A(" : <BC;>=@?A )&" . It would be better to use a lexicalized context free grammar, but that was not possible given the decoder used. 3.2 Channel model The channel model is allowed to add syntactic constituents (through a stochastic operation called constituent-expand) or discourse units (through another stochastic operation called EDU-expand). Both of these operations are performed on a combined discourse/syntax tree called the DS-tree. The DS-tree for Text (1) is shown in Figure 1 for reference. Suppose we start with the summary D “The mayor is looking for re-election.” A constituentE F F G S NPB DT NN VP VBZ ADVP RB VP−A VBG PP NPB NN PUNC. IN The mayor now looking for is re−election . H I G J K I L M N O F P Q R TOP John Doe has already secured the vote of most democrats in his constituency, which is already almost enough to win. But without the support of the governer, he is still on shaky ground. S P L J H T I Q H I G J U V I W P I G X F Q H I G J L F Q R X G X F Q S P L J H T I Q S P L J Y F Q G O I Z G S P L J Y F Q G O I Z G S P L J H T I Q * * Figure 1: The discourse (full)/syntax (partial) tree for Text (1). expand operation could insert a syntactic constituent, such as “this year” anywhere in the syntactic tree of  . A constituent-expand operation could also add single words: for instance the word “now” could be added between “is” and “looking,” yielding [ “The mayor is now looking for re-election.” The probability of inserting this word is based on the syntactic structure of the node into which it’s inserted. Knight and Marcu (2000) describe in detail a noisy-channel model that explains how short sentences can be expanded into longer ones by inserting and expanding syntactic constituents (and words). Since our constituent-expand stochastic operation simply reimplements Knight and Marcu’s model, we do not focus on them here. We refer the reader to (Knight and Marcu, 2000) for the details. In addition to adding syntactic constituents, our system is also able to add discourse units. Consider the summary \ “John Doe has already secured the vote of most democrats in his consituency.” Through a sequence of discourse expansions, we can expand upon this summary to reach the original text. A complete discourse expansion process that would occur starting from this initial summary to generate the original document is shown in Figure 2. In this figure, we can follow the sequence of steps required to generate our original text, beginning with our summary  . First, through an operation D-Project (“D” for “D”iscourse), we increase the depth of the tree, adding an intermediate NUC=SPAN node. This projection adds a factor of  Nuc=Span ] Nuc=Span  Nuc=Span " to the probability of this sequence of operations (as is shown under the arrow). We are now able to perform the second operation, D-Expand, with which we expand on the core message contained in  by adding a satellite which evaluates the information presented in  . This expansion adds the probability of performing the expansion (called the discourse expansion probabilities, <BC^ . An example discourse expansion probability, written  Nuc=Span ] Nuc=Span Sat=Eval  Nuc=Span ] Nuc=Span " , reflects the probability of adding an evaluation satellite onto a nuclear span). 
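In outline, the source and channel scores described above combine as follows. The sketch treats the component models as black boxes with invented names, so it is only an outline of the scoring scheme under those assumptions, not the actual system.

import math

def summary_score(summary, document, models):
    """log P(S) + log P(D|S) for one candidate summary.

    `models` is assumed to expose the component scores discussed in the
    text: a bigram language model, non-lexicalized syntax and discourse
    PCFGs (source model), and expansion probabilities (channel model).
    All of these are hypothetical callables, not real APIs.
    """
    source = (math.log(models.bigram_prob(summary))
              + math.log(models.syntax_pcfg_prob(summary))
              + math.log(models.discourse_pcfg_prob(summary)))
    channel = sum(math.log(p) for p in models.expansion_probs(summary, document))
    return source + channel

def best_summary(candidates, document, models):
    """Pick the candidate maximizing the posterior P(D|S)P(S)."""
    return max(candidates, key=lambda s: summary_score(s, document, models))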
The rest of Figure 2 shows some of the remaining steps to produce the original document, each step labeled with the appropriate probability factors. Then, the probability of the entire expansion is the product of all those listed probabilities combined with the appropriate probabilities from the syntax side of things. In order to produce the final score $%&" for a document/summary pair, we multiply together each of the expansion probabilities in the path leading from  to  . For estimating the parameters for the discourse models, we used an RST corpus of 385 Wall Street Journal articles from the Penn Treebank, which we obtained from LDC. The documents in the corpus range in size from 31 to 2124 words, with an average of 458 words per document. Each document is paired with a discourse structure that was manu_ ` ` a John Doe has already secured the vote of most democrats in his constituency, b c d e f g h i which is already almost enough to win. f h a e j k h l c h a m ` i b c d e f g h i n o p q r s t u v n o w x y z { | n o p q r s t u v _ ` ` a John Doe has already secured the vote of most democrats in his constituency, b c d e f g h i } ~ €  ‚ƒ„ _ ` ` a John Doe has already secured the vote of most democrats in his constituency, b c d e f g h i b c d e f g h i John Doe has already secured the vote of most democrats in his constituency, b c d e f g h i which is already almost enough to win. f h a e j k h l c h a m ` i But without the support of the governer, f h a e d ` i … m a m ` i he is still on shaky ground. b c d e f g h i b c d e f g h i b c d e † ` i a ‡ h ˆ a b c d e † ` i a ‡ h ˆ a _ ` ` a n o w x y z { | John Doe has already secured the vote of most democrats in his constituency, b c d e f g h i which is already almost enough to win. f h a e j k h l c h a m ` i But without the support of the governer, f h a e d ` i … m a m ` i he is still on shaky ground. b c d e f g h i The mayor is now looking for re−election. f h a e ‰ h d Š ‹ ‡ ` c i … b c d e f g h i b c d e † ` i a ‡ h ˆ a b c d e † ` i a ‡ h ˆ a _ ` ` a _ ` ` a John Doe has already secured the vote of most democrats in his constituency, b c d e f g h i which is already almost enough to win. f h a e j k h l c h a m ` i b c d e † ` i a ‡ h ˆ a b c d e f g h i John Doe has already secured the vote of most democrats in his constituency, b c d e f g h i which is already almost enough to win. f h a e j k h l c h a m ` i b c d e f g h i b c d e † ` i a ‡ h ˆ a _ ` ` a he is still on shaky ground. b c d e † ` i a ‡ h ˆ a P(Nuc=Span −> Nuc=Span Sat=evaluation Nuc=Span −> Nuc=Span) P(Nuc=Span −> Nuc=Span | P(Nuc=Span −> Nuc=Contrast Nuc=Contrast | P(Root −> Sat=Background Nuc=Span | Root −> Nuc=Span) Nuc=Span) P(Nuc=Span −> Nuc=Contrast | Nuc=Span) Nuc=Span −> Nuc=Contrast) P(Nuc=Contrast −> Sat=condiation Nuc=Span | Nuc=Contrast −> Nuc=Span) n o w x y z { | n o p q r s t u v P(Nuc=Contrast −> Nuc=Span | Nuc=Contrast)* Figure 2: A sequence of discourse expansions for Text (1) (with probability factors). ally built in the style of RST. (See (Carlson et al., 2001) for details concerning the corpus and the annotation process.) From this corpus, we were able to estimate parameters for a discourse PCFG using standard maximum likelihood methods. Furthermore, 150 document from the same corpus are paired with extractive summaries on the EDU level. Human annotators were asked which EDUs were most important; suppose in the example DStree (Figure 1) the annotators marked the second and fifth EDUs (the starred ones). 
These stars are propagated up, so that any discourse unit that has a descendent considered important is also considered important. From these annotations, we could deduce that, to compress a NUC=CONTRAST that has two children, NUC=SPAN and SAT=EVALUATION, we can drop the evaluation satellite. Similarly, we can compress a NUC=CONTRAST that has two children, SAT=CONDITION and NUC=SPAN by dropping the first discourse constituent. Finally, we can compress the ROOT deriving into SAT=BACKGROUND NUC=SPAN by dropping the SAT=BACKGROUND constituent. We keep counts of each of these examples and, once collected, we normalize them to get the discourse expansion probabilities. 3.3 Decoder The goal of the decoder is to combine )&" with $%&" to get  %," . There are a vast number of potential compressions of a large DS-tree, but we can efficiently pack them into a shared-forest structure, as described in detail by Knight & Marcu (2000). Each entry in the shared-forest structure has three associated probabilities, one from the source syntax PCFG, one from the source discourse PCFG and one from the expansion-template probabilities described in Section 3.2. Once we have generated a forest representing all possible compressions of the original document, we want to extract the best (or the Œ -best) trees, taking into account both the expansion probabilities of the channel model and the bigram and syntax and discourse PCFG probabilities of the source model. Thankfully, such a generic extractor has already been built (Langkilde, 2000). For our purposes, the extractor selects the trees with the best combination of LM and expansion scores after performing an exhaustive search over all possible summaries. It returns a list of such trees, one for each possible length. 4 System The system developed works in a pipelined fashion as shown in Figure 3. The first step along the pipeline is to generate the discourse structure. To do this, we use the decision-based discourse parser described by Marcu (2000)2. Once we have the discourse structure, we send each EDU off to a syn2The discourse parser achieves an f-score of Ž ‘ for EDU identification, ’ “ “ for identifying hierarchical spans, ” ” for nuclearity identification and ‘  • for relation tagging. Parser Discourse Syntax Parser Forest Generator Decoder Chooser Length Output Summary Input Document Figure 3: The pipeline of system components. tactic parser (Collins, 1997). The syntax trees of the EDUs are then merged with the discourse tree in the forest generator to create a DS-tree similar to that shown in Figure 1. From this DS-tree we generate a forest that subsumes all possible compressions. This forest is then passed on to the forest ranking system which is used as decoder (Langkilde, 2000). The decoder gives us a list of possible compressions, for each possible length. Example compressions of Text (1) are shown in Figure 4 together with their respective log-probabilities. In order to choose the “best” compression at any possible length, we cannot rely only on the log-probabilities, lest the system always choose the shortest possible compression. In order to compensate for this, we normalize by length. However, in practice, simply dividing the log-probability by the length of the compression is insufficient for longer documents. Experimentally, we found a reasonable metric was to, for a compression of length Œ , divide each log-probability by Œ '–  . 
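A sketch of this selection step is given below. The normalization exponent is left as a free parameter (alpha), and the candidate log-probabilities in the example are invented for illustration.

def choose_compression(candidates, alpha=1.0):
    """Pick one compression from the decoder's per-length best list.

    `candidates` maps compression length (in words) to (log_prob, text).
    Dividing each log-probability by length**alpha counteracts the bias
    toward the shortest compression; the exponent actually used in the
    experiments is not reproduced here.
    """
    best_text, best_score = None, float("-inf")
    for length, (log_prob, text) in candidates.items():
        score = log_prob / (length ** alpha)
        if score > best_score:
            best_score, best_text = score, text
    return best_text

# Log-probabilities below are invented; the texts come from Figure 4.
candidates = {7: (-55.0, "Mayor is now looking which is enough."),
              12: (-90.0, "The mayor is now looking which is already almost enough to win.")}
print(choose_compression(candidates, alpha=1.3))   # favors the longer candidate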
This was the job of the length chooser from Figure 3, and enabled us to choose a single compression for each document, which was used for evaluation. (In Figure 4, the compression chosen by the length selector is italicized and was the shortest one3.) 5 Results For testing, we began with two sets of data. The first set is drawn from the Wall Street Journal (WSJ) portion of the Penn Treebank and consists of —˜ documents, each containing between ™š— and ›œ words. The second set is drawn from a collection of stu3This tends to be the case for very short documents, as the compressions never get sufficiently long for the length normalization to have an effect. dent compositions and consists of ž documents, each containing between ˜.™ and Ÿ — words. We call this set the MITRE corpus (Hirschman et al., 1999). We would liked to have run evaluations on longer documents. Unfortunately, the forests generated even for relatively small documents are huge. Because there are an exponential number of summaries that can be generated for any given text4, the decoder runs out of memory for longer documents; therefore, we selected shorter subtexts from the original documents. We used both the WSJ and Mitre data for evaluation because we wanted to see whether the performance of our system varies with text genre. The Mitre data consists mostly of short sentences (average document length from Mitre is ˜ sentences), quite in constrast to the typically long sentences in the Wall Street Journal articles (average document length from WSJ is ¡ 4¢ž sentences). For purpose of comparison, the Mitre data was compressed using five systems: Random: Drops random words (each word has a 50% chance of being dropped (baseline). Hand: Hand compressions done by a human. Concat: Each sentence is compressed individually; the results are concatenated together, using Knight & Marcu’s (2000) system here for comparison. EDU: The system described in this paper. Sent: Because syntactic parsers tend not to work well parsing just clauses, this system merges together leaves in the discourse tree which are in the same sentence, and then proceeds as described in this paper. The Wall Street Journal data was evaluated on the above five systems as well as two additions. Since the correct discourse trees were known for these data, we thought it wise to test the systems using these human-built discourse trees, instead of the automatically derived ones. The additionall two systems were: PD-EDU: Same as EDU except using the perfect discourse trees, available from the RST corpus (Carlson et al., 2001). 4In theory, a text of £ words has ‘6¤ possible compressions. len log prob best compression Ž ¥C¦¦Ž ”6“§6“ Mayor is now looking which is enough. ¦¨ ¥C¦¨ª©«4¦“¦“ The mayor is now looking which is already almost enough to win. ¦¨§ ¥C¦¨•ª©« ’'”¬©'“ The mayor is now looking but without support, he is still on shaky ground. ¦¨Ž ¥C¦¨§6“ •6¦“ Mayor is now looking but without the support of governer, he is still on shaky ground. ‘‘ ¥C¦8©'§4¦””6“ The mayor is now looking for re-election but without the support of the governer, he is still on shaky ground. ‘ Ž ¥­‘ 6” ”6•”6“ The mayor is now looking which is already almost enough to win. But without the support of the governer, he is still on shaky ground. Figure 4: Possible compressions for Text (1). PD-Sent: The same as Sent except using the perfect discourse trees. Six human evaluators rated the systems according to three metrics. 
The first two, presented together to the evaluators, were grammaticality and coherence; the third, presented separately, was summary quality. Grammaticality was a judgment of how good the English of the compressions were; coherence included how well the compression flowed (for instance, anaphors lacking an antecedent would lower coherence). Summary quality, on the other hand, was a judgment of how well the compression retained the meaning of the original document. Each measure was rated on a scale from — (worst) to ž (best). We can draw several conclusions from the evaluation results shown in Table 2 along with average compression rate (Cmp, the length of the compressed document divided by the original length).5 First, it is clear that genre influences the results. Because the Mitre data contained mostly short sentences, the syntax and discourse parsers made fewer errors, which allowed for better compressions to be generated. For the Mitre corpus, compressions obtained starting from discourse trees built above the sentence level were better than compressions obtained starting from discourse trees built above the EDU level. For the WSJ corpus, compression obtained starting from discourse trees built above the sentence level were more grammatical, but less coherent than compressions obtained starting from discourse trees built above the EDU level. Choosing the manner in which the discourse and syntactic representations of texts are mixed should be influenced by the genre of the texts one is interested to compress. 5We did not run the system on the MITRE data with perfect discourse trees because we did not have hand-built discourse trees for this corpus. WSJ Mitre Cmp Grm Coh Qual Cmp Grm Coh Qual Random 0.51 1.60 1.58 2.13 0.47 1.43 1.77 1.80 Concat 0.44 3.30 2.98 2.70 0.42 2.87 2.50 2.08 EDU 0.49 3.36 3.33 3.03 0.47 3.40 3.30 2.60 Sent 0.47 3.45 3.16 2.88 0.44 4.27 3.63 3.36 PD-EDU 0.47 3.61 3.23 2.95 PD-Sent 0.48 3.96 3.65 2.84 Hand 0.59 4.65 4.48 4.53 0.46 4.97 4.80 4.52 Table 2: Evaluation Results The compressions obtained starting from perfectly derived discourse trees indicate that perfect discourse structures help greatly in improving coherence and grammaticality of generated summaries. It was surprising to see that the summary quality was affected negatively by the use of perfect discourse structures (although not statistically significant). We believe this happened because the text fragments we summarized were extracted from longer documents. It is likely that had the discourse structures been built specifically for these short text snippets, they would have been different. Moreover, there was no component designed to handle cohesion; thus it is to be expected that many compressions would contain dangling references. Overall, all our systems outperformed both the Random baseline and the Concat systems, which empirically show that discourse has an important role in document summarization. We performed ® tests on the results and found that on the Wall Street Journal data, the differences in score between the Concat and Sent systems for grammaticality and coherence were statistically significant at the 95% level, but the difference in score for summary quality was not. For the Mitre data, the differences in score between the Concat and Sent systems for grammaticality and summary quality were statistically significant at the 95% level, but the difference in score for coherence was not. 
The score differences for grammaticality, coherence, and summary quality between our systems and the baselines were statistically significant at the 95% level. The results in Table 2, which can be also assessed by inspecting the compressions in Figure 4 show that, in spite of our success, we are still far away from human performance levels. An error that our system makes often is that of dropping complements that cannot be dropped, such as the phrase “for re-election”, which is the complement of “is looking”. We are currently experimenting with lexicalized models of syntax that would prevent our compression system from dropping required verb arguments. We also consider methods for scaling up the decoder to handling documents of more realistic length. Acknoledgements This work was partially supported by DARPA-ITO grant N66001-00-1-9814, NSF grant IIS-0097846, and a USC Dean Fellowship to Hal Daume III. Thanks to Kevin Knight for discussions related to the project. References Michele Banko, Vibhu Mittal, and Michael Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL– 2000), pages 318–325, Hong Kong, October 1–8. Adam Berger and Vibhu Mittal. 2000. Query-relevant summarization using FAQs. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL–2000), pages 294–301, Hong Kong, October 1–8. Lynn Carlson, Daniel Marcu, and Mary Ellen Okurowski. 2001. Building a discourse-tagged corpus in the framework of rhetorical structure theory. In Proceedings of the 2nd SIGDIAL Workshop on Discourse and Dialogue, Eurospeech 2001, Aalborg, Denmark, September. John Carroll, Guidon Minnen, Yvonne Canning, Siobhan Devlin, and John Tait. 1998. Practical simplification of english newspaper text to assist aphasic readers. In Proceedings of the AAAI-98 Workshop on Integrating Artificial Intelligence and Assistive Technology. R. Chandrasekar, Christy Doran, and Srinivas Bangalore. 1996. Motivations and methods for text simplification. In Proceedings of the Sixteenth International Conference on Computational Linguistics (COLING ’96), Copenhagen, Denmark. Michael Collins. 1997. Three generative, lexicalized models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL–97), pages 16–23, Madrid, Spain, July 7-12. Proceedings of the First Document Understanding Conference (DUC-2001), New Orleans, LA, September. Proceedings of the Second Document Understanding Conference (DUC-2002), Philadelphia, PA, July. Gregory Grefenstette. 1998. Producing intelligent telegraphic text reduction to provide an audio scanning service for the blind. In Working Notes of the AAAI Spring Symposium on Intelligent Text Summarization, pages 111–118, Stanford University, CA, March 2325. L. Hirschman, M. Light, E. Breck, and J. Burger. 1999. Deep read: A reading comprehension system. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. H. Jing. 2000. Sentence reduction for automatic text summarization. In Proceedings of the First Annual Meeting of the North American Chapter of the Association for Computational Linguistics NAACL-2000, pages 310–315, Seattle, WA. Kevin Knight and Daniel Marcu. 2000. Statistics-based summarization — step one: Sentence compression. In The 17th National Conference on Artificial Intelligence (AAAI–2000), pages 703–710, Austin, TX, July 30th – August 3rd. 
Irene Langkilde. 2000. Forest-based statistical sentence generation. In Proceedings of the 1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics, Seattle, Washington, April 30–May 3. Kavi Mahesh. 1997. Hypertext summary extraction for fast document browsing. In Proceedings of the AAAI Spring Symposium on Natural Language Processing for the World Wide Web, pages 95–103. Inderjeet Mani and Mark Maybury, editors. 1999. Advances in Automatic Text Summarization. The MIT Press. Inderjeet Mani. 2001. Automatic summarization. William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. Daniel Marcu. 2000. The Theory and Practice of Discourse Parsing and Summarization. The MIT Press, Cambridge, Massachusetts.
From Single to Multi-document Summarization: A Prototype System and its Evaluation Chin-Yew Lin and Eduard Hovy University of Southern California / Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292 {cyl,hovy}@isi.edu Abstract NeATS is a multi-document summarization system that attempts to extract relevant or interesting portions from a set of documents about some topic and present them in coherent order. NeATS is among the best performers in the large scale summarization evaluation DUC 2001. 1 Introduction In recent years, text summarization has been enjoying a period of revival. Two workshops on Automatic Summarization were held in 2000 and 2001. However, the area is still being fleshed out: most past efforts have focused only on single-document summarization (Mani 2000), and no standard test sets and large scale evaluations have been reported or made available to the English-speaking research community except the TIPSTER SUMMAC Text Summarization evaluation (Mani et al. 1998). To address these issues, the Document Understanding Conference (DUC) sponsored by the National Institute of Standards and Technology (NIST) started in 2001 in the United States. The Text Summarization Challenge (TSC) task under the NTCIR (NII-NACSIS Test Collection for IR Systems) project started in 2000 in Japan. DUC and TSC both aim to compile standard training and test collections that can be shared among researchers and to provide common and large scale evaluations in single and multiple document summarization for their participants. In this paper we describe a multi-document summarization system, NeATS. It attempts to extract relevant or interesting portions from a set of documents about some topic and present them in coherent order. We outline the NeATS system and describe how it performs content selection, filtering, and presentation in Section 2. Section 3 gives a brief overview of the evaluation procedure used in DUC-2001 (DUC 2001). Section 4 discusses evaluation metrics, and Section 5 the results. We conclude with future directions. 2 NeATS NeATS is an extraction-based multi-document summarization system. It leverages techniques proven effective in single-document summarization such as: term frequency (Luhn 1969), sentence position (Lin and Hovy 1997), stigma words (Edmundson 1969), and a simplified version of MMR (Goldstein et al. 1999) to select and filter content. To improve topic coverage and readability, it uses term clustering, a ‘buddy system’ of paired sentences, and explicit time annotation. Most of the techniques adopted by NeATS are not new. However, applying them in the proper places to summarize multiple documents and evaluating the results on large scale common tasks are new. Given an input of a collection of sets of newspaper articles, NeATS generates summaries in three stages: content selection, filtering, and presentation. We describe each stage in the following sections. 2.1 Content Selection The goal of content selection is to identify important concepts mentioned in a document collection. For example, AA flight 11, AA flight 77, UA flight 173, UA flight 93, New York, World Trade Center, Twin Towers, Osama bin Laden, and al-Qaida are key concepts for a document collection about the September 11 terrorist attacks in the US.
In a key step for locating important sentences, NeATS computes the likelihood ratio λ (Dunning, 1993) to identify key concepts in unigrams, bigrams, and trigrams, using the on-topic document collection as the relevant set and the off-topic document collection as the irrelevant set. (Closed-class words, e.g., of, in, and, are, were ignored in constructing unigrams, bigrams, and trigrams.) Figure 1 shows the top 5 concepts with their relevancy scores (-2λ) for the topic “Slovenia Secession from Yugoslavia” in the DUC-2001 test collection. This is similar to the idea of topic signature introduced in (Lin and Hovy 2000).
Figure 1. Top 5 unigram, bigram, and trigram concepts for topic "Slovenia Secession from Yugoslavia".
Rank | Unigram (-2λ) | Bigram (-2λ) | Trigram (-2λ)
1 | Slovenia 319.48 | federal army 21.27 | Slovenia central bank 5.80
2 | Yugoslavia 159.55 | Slovenia Croatia 19.33 | minister foreign affairs 5.80
3 | Slovene 87.27 | Milan Kucan 17.40 | unallocated federal debt 5.80
4 | Croatia 79.48 | European Community 13.53 | Drnovsek prime minister 3.86
5 | Slovenian 67.82 | foreign exchange 13.53 | European Community countries 3.86
With the individual key concepts available, we proceed to cluster these concepts in order to identify major subtopics within the main topic. Clusters are formed through strict lexical connection. For example, Milan and Kucan are grouped as “Milan Kucan” since “Milan Kucan” is a key bigram concept, while Croatia, Yugoslavia, Slovenia, and republic are joined due to the following connections:
• Slovenia Croatia
• Croatia Slovenia
• Yugoslavia Slovenia
• republic Slovenia
• Croatia republic
Each sentence in the document set is then ranked, using the key concept structures. An example is shown in Figure 2.
Figure 2. Sample key concept structure. [The original figure shows an S-expression grouping the key concepts “Milan Kucan”, “Kucan”, and “Milan” under a single topic-signature node, each with its lexical score; it is not reproduced here.]
The ranking algorithm rewards most specific concepts first; for example, a sentence containing “Milan Kucan” has a higher score than a sentence containing only either Milan or Kucan. A sentence containing both Milan and Kucan but not in consecutive order gets a lower score too. This ranking algorithm performs relatively well, but it also results in many ties. Therefore, it is necessary to apply some filtering mechanism to maintain a reasonably sized sentence pool for final presentation.
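To make the concept-selection step concrete, the following is a minimal Python sketch of likelihood-ratio scoring. It is not the authors' implementation: the exact form of λ that NeATS uses is not spelled out above, so the standard two-way log-likelihood ratio of Dunning (1993) is assumed, and the counts in the example are invented for illustration.

```python
import math

def _ll(k, n, p):
    # Log-likelihood of observing k occurrences in n tokens under rate p.
    p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def relevancy_score(k_rel, n_rel, k_irr, n_irr):
    """-2*log(lambda) for a term occurring k_rel times in n_rel on-topic
    tokens and k_irr times in n_irr off-topic tokens."""
    p_rel = k_rel / n_rel
    p_irr = k_irr / n_irr
    p_all = (k_rel + k_irr) / (n_rel + n_irr)
    return 2.0 * (_ll(k_rel, n_rel, p_rel) + _ll(k_irr, n_irr, p_irr)
                  - _ll(k_rel, n_rel, p_all) - _ll(k_irr, n_irr, p_all))

# Invented counts: a term that is frequent on topic and rare off topic
# receives a high relevancy score, as in Figure 1.
print(relevancy_score(k_rel=320, n_rel=10_000, k_irr=40, n_irr=200_000))
```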
2.2 Content Filtering NeATS uses three different filters: sentence position, stigma words, and maximum marginal relevancy. 2.2.1 Sentence Position Sentence position has been used as a good filter for important content since the late 60s (Edmundson 1969). It was also used as a baseline in a preliminary multi-document summarization study by Marcu and Gerber (2001) with relatively good results. We apply a simple sentence filter that only retains the lead 10 sentences. 2.2.2 Stigma Words Some sentences start with
• conjunctions (e.g., but, although, however),
• the verb say and its derivatives,
• quotation marks,
• pronouns such as he, she, and they,
and usually cause discontinuity in summaries. Since we do not use discourse-level selection criteria à la (Marcu 1999), we simply reduce the scores of these sentences to avoid including them in short summaries. 2.2.3 Maximum Marginal Relevancy The content selection and filtering methods described in the previous section only concern individual sentences. They do not consider the redundancy issue when two top-ranked sentences refer to similar things. To address the problem, we use a simplified version of CMU’s MMR (Goldstein et al. 1999) algorithm. A sentence is added to the summary if and only if its content has less than X percent overlap with the summary. The overlap ratio is computed using simple stemmed word overlap, and the threshold X is set empirically. 2.3 Content Presentation NeATS so far only considers features pertaining to individual sentences. As we mentioned in Section 2.2.2, we can demote some sentences containing stigma words to improve the cohesion and coherence of summaries. However, we still face two problems: definite noun phrases and events spread along an extended timeline. We describe these problems and our solutions in the following sections. 2.3.1 A Buddy System of Paired Sentences The problem of definite noun phrases can be illustrated in Figure 3. These sentences are from documents of the DUC-2001 topic US Drought of 1988. According to pure sentence scores, sentence 3 of document AP891210-0079 has a higher score (34.60) than sentence 1 (32.20) and should be included in the shorter summary (size=“50”). However, if we select sentence 3 without also including sentence 1, the definite noun phrase “The record $3.9 billion drought relief program of 1988” seems to come without any context. To remedy this problem, we introduce a buddy system to improve cohesion and coherence. Each sentence is paired with a suitable introductory sentence unless it is already an introductory sentence. In DUC-2001 we simply used the first sentence of its document. This assumes lead sentences provide introduction and context information about what is coming next. 2.3.2 Time Annotation and Sequence One main problem in multi-document summarization is that documents in a collection might span an extended time period. For example, the DUC-2001 topic “Slovenia Secession from Yugoslavia” contains 11 documents dated from 1988 to 1994, from 5 different sources (Associated Press, Foreign Broadcast Information Service, Financial Times, San Jose Mercury News, and Wall Street Journal). Although a source document for single-document summarization might contain information collected across an extended time frame and from multiple sources, the author at least would synchronize them and present them in a coherent order. In multi-document summarization, a date expression such as Monday occurring in two different documents might mean the same date or different dates. For example, sentences in the 100 word summary shown in Figure 4 come from 3 main time periods, 1990, 1991, and 1994. If no absolute time references are given, the summary might mislead the reader to think that all the events mentioned in the four summary sentences occurred in a single week. Therefore, time disambiguation and normalization are very important in multi-document summarization. As a first attempt, we use publication dates as reference points and compute actual dates for the following date expressions:
• weekdays (Sunday, Monday, etc.);
• (past | next | coming) + weekdays;
• today, yesterday, last night.
We then order the summary sentences in chronological order.
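The date-normalization step can be sketched as follows. The resolution rule, that a bare or "past" weekday refers to the most recent such day on or before the publication date and "next"/"coming" to the first such day after it, is an assumption for illustration; the paper does not spell out the exact rule.

```python
from datetime import date, timedelta

WEEKDAYS = {"monday": 0, "tuesday": 1, "wednesday": 2, "thursday": 3,
            "friday": 4, "saturday": 5, "sunday": 6}

def resolve_date(expr, pub_date):
    """Map a date expression to an absolute date using the article's
    publication date as the reference point; None if unhandled."""
    tokens = expr.lower().split()
    if tokens == ["today"]:
        return pub_date
    if tokens in (["yesterday"], ["last", "night"]):
        return pub_date - timedelta(days=1)
    target = WEEKDAYS.get(tokens[-1])
    if target is None:
        return None
    if tokens[0] in ("next", "coming"):
        ahead = (target - pub_date.weekday()) % 7
        return pub_date + timedelta(days=ahead or 7)
    # bare weekday or "past <weekday>": most recent such day on or before
    back = (pub_date.weekday() - target) % 7
    return pub_date - timedelta(days=back)

pub = date(1990, 6, 25)                      # publication date of AP900625-0160 (Figure 4)
print(resolve_date("Monday", pub))           # 1990-06-25
print(resolve_date("next Wednesday", pub))   # 1990-06-27
```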
Figure 4 shows an example 100-word summary with time annotations. Each sentence is marked with its publication date, and a reference date (MM/DD/YY) is inserted after every date expression.
Figure 3. 50 and 100 word summaries for topic "US Drought of 1988".
<multi size="50" docset="d50i">
AP891210-0079 1 (32.20) (12/10/89) America's 1988 drought captured attention everywhere, but especially in Washington where politicians pushed through the largest disaster relief measure in U.S. history.
AP891213-0004 1 (34.60) (12/13/89) The drought of 1988 hit …
</multi>
<multi size="100" docset="d50i">
AP891210-0079 1 (32.20) (12/10/89) America's 1988 drought captured attention everywhere, but especially in Washington where politicians pushed through the largest disaster relief measure in U.S. history.
AP891210-0079 3 (41.18) (12/10/89) The record $3.9 billion drought relief program of 1988, hailed as salvation for small farmers devastated by a brutal dry spell, became much more _ an unexpected, election-year windfall for thousands of farmers who collected millions of dollars for nature's normal quirks.
AP891213-0004 1 (34.60) (12/13/89) The drought of 1988 hit …
</multi>
3 DUC 2001 Before we present our results, we describe the corpus and evaluation procedures of the Document Understanding Conference 2001 (DUC 2001). DUC is a new evaluation series supported by NIST under TIDES, to further progress in summarization and enable researchers to participate in large-scale experiments. There were three tasks in 2001: (1) Fully automatic summarization of a single document. (2) Fully automatic summarization of multiple documents: given a set of documents on a single subject, participants were required to create 4 generic summaries of the entire set with approximately 50, 100, 200, and 400 words. 30 document sets of approximately 10 documents each were provided with their 50, 100, 200, and 400 word human-written summaries for training (training set), and another 30 unseen sets were used for testing (test set). (3) Exploratory summarization: participants were encouraged to investigate alternative approaches in summarization and report their results. NeATS participated only in the fully automatic multi-document summarization task. A total of 12 systems participated in that task. The training data were distributed in early March of 2001 and the test data were distributed in mid-June of 2001. Results were submitted to NIST for evaluation by July 1st. 3.1 Evaluation Procedures NIST assessors who created the ‘ideal’ written summaries did pairwise comparisons of their summaries to the system-generated summaries, other assessors’ summaries, and baseline summaries. In addition, two baseline summaries were created automatically as reference points. The first baseline, the lead baseline, took the first 50, 100, 200, and 400 words in the last document in the collection. The second baseline, the coverage baseline, took the first sentence in the first document, the first sentence in the second document, and so on until it had a summary of 50, 100, 200, or 400 words.
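The two baselines reduce to a few lines of code. This is not NIST's implementation: the sentence segmentation, the truncation at exactly the target word count, and the assumption that the coverage baseline wraps around to second sentences if the first pass does not fill the budget are all illustrative choices.

```python
def lead_baseline(docs, n_words):
    """First n_words of the last document in the collection."""
    return " ".join(docs[-1].split()[:n_words])

def coverage_baseline(doc_sentences, n_words):
    """First sentence of doc 1, first sentence of doc 2, ... up to n_words."""
    picked, total, rank = [], 0, 0
    while total < n_words and rank < max(len(d) for d in doc_sentences):
        for doc in doc_sentences:
            if rank < len(doc) and total < n_words:
                picked.append(doc[rank])
                total += len(doc[rank].split())
        rank += 1
    return " ".join(" ".join(picked).split()[:n_words])
```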
3.2 Summary Evaluation Environment NIST used the Summary Evaluation Environment (SEE) 2.0 developed by one of the authors (Lin 2001) to support its human evaluation process. Using SEE, the assessors evaluated the quality of the system’s text (the peer text) as compared to an ideal (the model text). The two texts were broken into lists of units and displayed in separate windows. In DUC-2001 the sentence was used as the smallest unit of evaluation. SEE 2.0 provides interfaces for assessors to judge the quality of summaries in grammaticality, cohesion, and coherence at five different levels: all, most, some, hardly any, or none. (Grammaticality: does a summary follow English grammatical rules independent of its content? Cohesion: do sentences in a summary fit in with their surrounding sentences? Coherence: is the content of a summary expressed and organized in an effective way?) It also allows assessors to step through each model unit, mark all system units sharing content with the current model unit, and specify that the marked system units express all, most, some, or hardly any of the content of the current model unit.
Figure 4. 100 word summary with explicit time annotation.
<multi size="100" docset="d45h">
AP900625-0160 1 (26.60) (06/25/90) The republic of Slovenia plans to begin work on a constitution that will give it full sovereignty within a new Yugoslav confederation, the state Tanjug news agency reported Monday (06/25/90).
WSJ910628-0109 3 (9.48) (06/28/91) On Wednesday (06/26/91), the Slovene soldiers manning this border post raised a new flag to mark Slovenia's independence from Yugoslavia.
WSJ910628-0109 5 (53.77) (06/28/91) Less than two days after Slovenia and Croatia, two of Yugoslavia's six republics, unilaterally seceded from the nation, the federal government in Belgrade mobilized troops to regain control.
FBIS3-30788 2 (49.14) (02/09/94) In the view of Yugoslav diplomats, the normalization of relations between Slovenia and the Federal Republic of Yugoslavia will certainly be a strenuous and long-term project.
</multi>
4 Evaluation Metrics One goal of DUC-2001 was to debug the evaluation procedures and identify stable metrics that could serve as common reference points. NIST did not define any official performance metric in DUC-2001. It released the raw evaluation results to DUC-2001 participants and encouraged them to propose metrics that would help progress the field. 4.1.1 Recall, Coverage, Retention and Weighted Retention Recall at different compression ratios has been used in summarization research (Mani 2001) to measure how well an automatic system retains the important content of the original documents. Assume we have a system summary Ss and a model summary Sm. The number of sentences occurring in both Ss and Sm is Na, the number of sentences in Ss is Ns, and the number of sentences in Sm is Nm. Recall is defined as Na/Nm. The Compression Ratio is defined as the length of a summary (by words or sentences) divided by the length of its original document. DUC-2001 set the compression lengths to 50, 100, 200, and 400 words for the multi-document summarization task. However, applying recall in DUC-2001 without modification is not appropriate because:
1. Multiple system units contribute to multiple model units.
2. Ss and Sm do not exactly overlap.
3. Overlap judgment is not binary.
For example, in an evaluation session an assessor judged system units S1.1 and S10.4 as sharing some content with model unit M2.2. Unit S1.1 says “Thousands of people are feared dead” and unit M2.2 says “3,000 and perhaps … 5,000 people have been killed”. Are “thousands” equivalent to “3,000 to 5,000” or not? Unit S10.4 indicates it was an “earthquake of magnitude 6.9” and unit M2.2 says it was “an earthquake measuring 6.9 on the Richter scale”. Both of them report a “6.9” earthquake.
But the second part of system unit S10.4, “in an area so isolated…”, seems to share some content with model unit M4.4, “the quake was centered in a remote mountainous area”. Are these two equivalent? This example highlights the difficulty of judging the content coverage of system summaries against model summaries and the inadequacy of using recall as defined. As we mentioned earlier, NIST assessors not only marked the sharing relations among system units (SU) and model units (MU), they also indicated the degree of match, i.e., all, most, some, hardly any, or none. This enables us to compute weighted recall. Different versions of weighted recall were proposed by DUC-2001 participants. McKeown et al. (2001) treated the completeness of coverage as a threshold: 4 for all, 3 for most and above, 2 for some and above, and 1 for hardly any and above. They then proceeded to compare system performances at different threshold levels. They defined recall at threshold t, Recall_t, as follows: Recall_t = (number of MUs marked at or above threshold t) / (total number of MUs in the model summary). We used the completeness of coverage as a coverage score, C, instead of a threshold: 1 for all, 3/4 for most, 1/2 for some, 1/4 for hardly any, and 0 for none. To avoid confusion with the recall used in information retrieval, we call our metric weighted retention, Retention_w, and define it as follows: Retention_w = (sum of C over the marked MUs) / (total number of MUs in the model summary). If we ignore C and set it always to 1, we obtain an unweighted retention, Retention_1. We used Retention_1 in our evaluation to illustrate that relative system performance changes when different evaluation metrics are chosen. Therefore, it is important to have common and agreed upon metrics to facilitate large scale evaluation efforts. 4.1.2 Precision and Pseudo Precision Precision is also a common measure. Borrowed from information retrieval research, precision is used to measure how effectively a system generates good summary sentences. It is defined as Na/Ns. Precision in a fixed-length summary output is equal to recall, since Ns = Nm. However, due to the three reasons stated at the beginning of the previous section, no straightforward computation of the traditional precision is available in DUC-2001. If we count the number of model units that are marked as good summary units and are selected by systems, and use the number of model units in various summary lengths as the sample space, we obtain a precision metric equal to Retention_1. Alternatively, we can count how many unique system units share content with model units and use the total number of system units as the sample space. We define this as pseudo precision, Precision_p, as follows: Precision_p = (number of SUs marked) / (total number of SUs in the system summary). Most of the participants in DUC-2001 reported their pseudo precision figures.
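The retention and precision metrics defined above reduce to a few lines of code. The sketch below assumes a simplified input encoding, one coverage label per model unit plus a count of marked system units, rather than the actual DUC data format.

```python
COVERAGE = {"all": 1.0, "most": 0.75, "some": 0.5, "hardly any": 0.25, "none": 0.0}
THRESHOLD = {"all": 4, "most": 3, "some": 2, "hardly any": 1, "none": 0}

def retention_w(mu_judgments):
    """Weighted retention: coverage-weighted fraction of model units covered."""
    return sum(COVERAGE[j] for j in mu_judgments) / len(mu_judgments)

def retention_1(mu_judgments):
    """Unweighted retention: C is forced to 1 for every marked model unit."""
    return sum(1 for j in mu_judgments if j != "none") / len(mu_judgments)

def recall_at(mu_judgments, t):
    """Recall_t of McKeown et al. (2001): model units marked at or above t."""
    return sum(1 for j in mu_judgments if THRESHOLD[j] >= t) / len(mu_judgments)

def pseudo_precision(n_marked_sus, n_sus):
    """Marked system units over all system units in the system summary."""
    return n_marked_sus / n_sus

judgments = ["all", "some", "none", "most", "hardly any"]  # one label per model unit
print(retention_w(judgments), retention_1(judgments), recall_at(judgments, 2))
```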
5 Results and Discussion We present the performance of NeATS in DUC-2001 in content and quality measures. 5.1 Content With respect to content, we computed Retention_1, Retention_w, and Precision_p using the formulas defined in the previous section. The scores are shown in Table 1 (overall average and per size).
Table 1. Pseudo precision, unweighted retention, and weighted retention for all summary lengths: overall average, 400, 200, 100, and 50 words.
SYS | Pp All | R1 All | Rw All | Pp 400 | R1 400 | Rw 400 | Pp 200 | R1 200 | Rw 200 | Pp 100 | R1 100 | Rw 100 | Pp 50 | R1 50 | Rw 50
HM | 58.71% | 53.00% | 28.81% | 59.33% | 52.95% | 33.23% | 59.91% | 57.23% | 33.82% | 58.73% | 54.67% | 27.54% | 56.87% | 47.16% | 21.62%
T | 48.96% | 35.53% (3) | 18.48% (1) | 56.51% (3) | 38.50% (3) | 25.12% (1) | 53.85% (3) | 35.62% | 21.37% (1) | 43.53% | 32.82% (3) | 14.28% (3) | 41.95% | 35.17% (2) | 13.89% (2)
N* | 58.72% (1) | 37.52% (2) | 17.92% (2) | 61.01% (1) | 41.21% (1) | 23.90% (2) | 63.34% (1) | 38.21% (3) | 21.30% (2) | 58.79% (1) | 36.34% (2) | 16.44% (2) | 51.72% (1) | 34.31% (3) | 10.98% (3)
Y | 41.51% | 41.58% (1) | 17.78% (3) | 49.78% | 38.72% (2) | 20.04% | 43.63% | 39.90% (1) | 16.86% | 34.75% | 43.27% (1) | 18.39% (1) | 37.88% | 44.43% (1) | 15.55% (1)
P | 49.56% | 33.94% | 15.78% | 57.21% (2) | 37.76% | 22.18% (3) | 51.45% | 37.49% | 19.40% | 46.47% | 31.64% | 13.92% | 43.10% | 28.85% | 9.09%
L | 51.47% (3) | 33.67% | 15.49% | 52.62% | 36.34% | 21.80% | 53.51% | 36.87% | 18.34% | 48.62% (3) | 29.00% | 12.54% | 51.15% (2) | 32.47% | 9.90%
B2 | 47.27% | 30.98% | 14.56% | 60.99% | 33.51% | 18.35% | 49.89% | 33.27% | 17.72% | 47.18% | 29.48% | 14.96% | 31.03% | 27.64% | 8.02%
S | 52.53% (2) | 30.52% | 12.89% | 55.55% | 36.83% | 20.35% | 58.12% (2) | 38.70% (2) | 19.93% (3) | 49.70% (2) | 26.81% | 10.72% | 46.43% (3) | 19.23% | 4.04%
M | 43.39% | 27.27% | 11.32% | 54.78% | 33.81% | 19.86% | 45.59% | 27.80% | 13.27% | 41.89% | 23.40% | 9.13% | 31.30% | 24.07% | 5.05%
R | 41.86% | 27.63% | 11.19% | 48.63% | 24.80% | 12.15% | 43.96% | 31.28% | 15.17% | 38.35% | 27.61% | 11.46% | 36.49% | 26.84% | 6.17%
O | 43.76% | 25.87% | 11.19% | 50.73% | 27.53% | 15.76% | 42.94% | 26.80% | 13.07% | 40.55% | 25.13% | 9.36% | 40.80% | 24.02% | 7.03%
Z | 37.98% | 23.21% | 8.99% | 47.51% | 31.17% | 17.38% | 46.76% | 25.65% | 12.83% | 28.91% | 17.29% | 5.45% | 28.74% | 18.74% | 3.23%
B1 | 32.92% | 18.86% | 7.45% | 33.48% | 17.58% | 9.98% | 43.13% | 18.60% | 8.65% | 30.23% | 17.42% | 6.05% | 24.83% | 21.84% | 4.20%
W | 30.08% | 20.38% | 6.78% | 38.14% | 25.89% | 12.10% | 26.86% | 21.01% | 7.93% | 28.31% | 19.15% | 5.36% | 27.01% | 15.46% | 3.21%
U | 23.88% | 21.38% | 6.57% | 31.49% | 29.76% | 13.17% | 24.20% | 22.64% | 8.49% | 19.13% | 17.54% | 3.77% | 20.69% | 15.57% | 3.04%
Analyzing all systems’ results according to these, we made the following observations. (1) NeATS (system N) is consistently ranked among the top 3 in average and per size Retention_1 and Retention_w. (2) NeATS’s performance for averaged pseudo precision equals the human’s at about 58% (Pp all). (3) The performance in weighted retention is really low. Even humans score only 29% (Rw all). (NIST assessors wrote two separate summaries per topic; one was used to judge all system summaries and the two baselines, while the other was used to determine the (potential) upper bound.) This indicates low inter-human agreement (which we take to reflect the ill-defined nature of the ‘generic summary’ task). However, the unweighted retention of humans is 53%. This suggests assessors did write something similar in their summaries but not exactly the same, once again illustrating the difficulty of summarization evaluation. (4) Despite the low inter-human agreement, humans score better than any system. They outscore the nearest system by about 11% in averaged unweighted retention (R1 all: 53% vs. 42%) and weighted retention (Rw all: 29% vs. 18%). There is obviously still considerable room for systems to improve. (5) System performances are separated into two major groups by baseline 2 (B2: coverage baseline) in averaged weighted retention. This confirms that lead sentences are good summary sentence candidates and that one does need to cover all documents in a topic to achieve reasonable performance in multi-document summarization. NeATS’s strategies of filtering sentences by position and adding lead sentences to set context thus proved effective. (6) Different metrics result in different performance rankings. This is demonstrated by the top 3 systems T, N, and Y. If we use the averaged unweighted retention (R1 all), Y is the best, followed by N, and then T; if we choose averaged weighted retention (Rw all), T is the best, followed by N, and then Y. The reversal of T and Y due to different metrics demonstrates the importance of common, agreed upon metrics. We believe that metrics have to take the coverage score (C, Section 4.1.1) into consideration to be reasonable, since most of the content sharing among system units and model units is partial. The recall at threshold t, Recall_t (Section 4.1.1), proposed by (McKeown et al. 2001), is a good example. In their evaluation, NeATS ranked second at t=1, 3, 4 and first at t=2. (7) According to Table 1, NeATS performed better on longer summaries (400 and 200 words) based on weighted retention than it did on shorter ones. This is the result of the sentence extraction-based nature of NeATS. We expect that systems that use syntax-based algorithms to compress their output will thereby gain more space to include additional important material. For example, System Y was the best in shorter summaries. Its 100- and 50-word summaries contain only important headlines. The results confirm this is a very effective strategy in composing short summaries. However, the quality of the summaries suffered because of the unconventional syntactic structure of news headlines (Table 2). 5.2 Quality Table 2 shows the macro-averaged scores for the humans, two baselines, and 12 systems. We assign a score of 4 to all, 3 to most, 2 to some, 1 to hardly any, and 0 to none. The value assignment is for convenience of computing averages, since it is more appropriate to treat these measures as stepped values instead of continuous ones. With this in mind, we have the following observations. (1) Most systems scored well in grammaticality. This is not a surprise since most of the participants extracted sentences as summaries. But no system or human scored perfectly in grammaticality. This might be due to the artifact of cutting sentences at the 50, 100, 200, and 400 word boundaries. Only system Y scored lower than 3, which reflects its headline inclusion strategy. (2) When it came to the measure of cohesion, the results are confusing. If even the human-made summaries score only 2.74 out of 4, it is unclear what this category means, or how the assessors arrived at these scores. However, the humans and baseline 1 (lead baseline) did score in the upper range of 2 to 3, and all others had scores lower than 2.5. Some of the systems (including B2) fell into the range of 1 to 2, meaning some or hardly any cohesion. The lead baseline (B1), taking the first 50, 100, 200, or 400 words from the last document of a topic, did well. By contrast, the coverage baseline (B2) did poorly. This indicates the difficulty of fitting sentences from different documents together. Even selecting continuous sentences from the same document (B1) seems not to work well. We need to define this metric more clearly and improve the capabilities of systems in this respect. (3) Coherence scores roughly track cohesion scores. Most systems did better in coherence than in cohesion. The human is the only one scoring above 3. Again the room for improvement is abundant. (4) NeATS did not fare badly in quality measures. It was in the same categories as other top performers: grammaticality between most and all, cohesion between some and most, and coherence between some and most. This indicates the strategies employed by NeATS (stigma word filtering, adding lead sentences, and time annotation) worked to some extent but left room for improvement.
Table 2. Averaged grammaticality, cohesion, and coherence over all summary sizes.
SYS | Grammar | Cohesion | Coherence
Human | 3.74 | 2.74 | 3.19
B1 | 3.18 | 2.63 | 2.8
B2 | 3.26 | 1.71 | 1.65
L | 3.72 | 1.83 | 1.9
M | 3.54 | 2.18 | 2.4
N* | 3.65 | 2 | 2.22
O | 3.78 | 2.15 | 2.33
P | 3.67 | 1.93 | 2.17
R | 3.6 | 2.16 | 2.45
S | 3.67 | 1.93 | 2.04
T | 3.51 | 2.34 | 2.61
U | 3.28 | 1.31 | 1.11
W | 3.13 | 1.48 | 1.28
Y | 2.45 | 1.73 | 1.77
Z | 3.28 | 1.8 | 1.94
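The macro-averaged quality scores in Table 2 follow directly from the categorical judgments once they are mapped to numbers; a minimal sketch with an invented list of ratings is shown below.

```python
SCORE = {"all": 4, "most": 3, "some": 2, "hardly any": 1, "none": 0}

def macro_average(ratings):
    """Average the categorical ratings of one system on one scale
    (grammaticality, cohesion, or coherence) across its summaries."""
    return sum(SCORE[r] for r in ratings) / len(ratings)

print(macro_average(["all", "most", "all", "most"]))  # 3.5 (invented ratings)
```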
6 Conclusions We described a multi-document summarization system, NeATS, and its evaluation in DUC-2001. We were encouraged by the content and readability of the results. As a prototype system, NeATS deliberately used simple methods guided by a few principles:
• Extracting important concepts based on reliable statistics.
• Filtering sentences by their positions and stigma words.
• Reducing redundancy using MMR.
• Presenting summary sentences in their chronological order with time annotations.
These simple principles worked effectively. However, the simplicity of the system also lends itself to further improvements. We would like to apply some compression techniques or use linguistic units smaller than sentences to improve our retention score. The fact that NeATS performed as well as the human in pseudo precision but did less well in retention indicates its summaries might include good but duplicated information. Working with sub-sentence units should help. To improve NeATS's capability in content selection, we have started to parse sentences containing key unigram, bigram, and trigram concepts to identify their relations within their concept clusters. To enhance cohesion and coherence, we are looking into incorporating discourse processing techniques (Marcu 1999) or Radev and McKeown's (1998) summary operators. We are analyzing the DUC evaluation scores in the hope of suggesting improved and more stable metrics. References DUC. 2001. The Document Understanding Workshop 2001. http://www-nlpir.nist.gov/projects/duc/2001.html. Dunning, T. 1993. Accurate Methods for the Statistics of Surprise and Coincidence. Computational Linguistics 19, 61–74. Edmundson, H.P. 1969. New Methods in Automatic Abstracting. Journal of the Association for Computing Machinery, 16(2). Goldstein, J., M. Kantrowitz, V. Mittal, and J. Carbonell. 1999. Summarizing Text Documents: Sentence Selection and Evaluation Metrics. Proceedings of the 22nd International ACM Conference on Research and Development in Information Retrieval (SIGIR-99), Berkeley, CA, 121–128. Lin, C.-Y. and E.H. Hovy. 2000. The Automated Acquisition of Topic Signatures for Text Summarization. Proceedings of the COLING Conference, Saarbrücken, Germany. Lin, C.-Y. 2001. Summary Evaluation Environment. http://www.isi.edu/~cyl/SEE. Luhn, H. P. 1969. The Automatic Creation of Literature Abstracts. IBM Journal of Research and Development, 2(2). Mani, I., D. House, G. Klein, L. Hirschman, L. Obrst, T. Firmin, M. Chrzanowski, and B. Sundheim. 1998. The TIPSTER SUMMAC Text Summarization Evaluation: Final Report. MITRE Corp. Tech. Report. Mani, I. 2001. Automatic Summarization. John Benjamins Pub Co. Marcu, D. 1999. Discourse trees are good indicators of importance in text. In I. Mani and M. Maybury (eds), Advances in Automatic Text Summarization, 123–136. MIT Press. Marcu, D. and L. Gerber. 2001. An Inquiry into the Nature of Multidocument Abstracts, Extracts, and their Evaluation. Proceedings of the NAACL-2001 Workshop on Automatic Summarization, Pittsburgh, PA. McKeown, K., R. Barzilay, D. Evans, V. Hatzivassiloglou, M.-Y. Kan, B. Schiffman, and S. Teufel. 2001. Columbia Multi-Document Summarization: Approach and Evaluation. DUC-01 Workshop on Text Summarization, New Orleans, LA.
Radev, D.R. and K.R. McKeown. 1998. Generating Natural Language Summaries from Multiple On-line Sources. Computational Linguistics, 24(3):469–500.
Supervised Ranking in Open-Domain Text Summarization Tadashi Nomoto National Institute of Japanese Literature 1-16-10 Yutaka Shinagawa Tokyo 142-8585, Japan [email protected] Yuji Matsumoto Nara Institute of Science and Technology 8916-5 Takayama Ikoma Nara 630-0101, Japan [email protected] Abstract The paper proposes and empirically motivates an integration of supervised learning with unsupervised learning to deal with human biases in summarization. In particular, we explore the use of probabilistic decision tree within the clustering framework to account for the variation as well as regularity in human created summaries. The corpus of human created extracts is created from a newspaper corpus and used as a test set. We build probabilistic decision trees of different flavors and integrate each of them with the clustering framework. Experiments with the corpus demonstrate that the mixture of the two paradigms generally gives a significant boost in performance compared to cases where either of the two is considered alone. 1 Introduction Nomoto and Matsumoto (2001b) have recently made an interesting observation that an unsupervised method based on clustering sometimes better approximates human created extracts than a supervised approach. That appears somewhat contradictory given that a supervised approach should be able to exploit human supplied information about which sentence to include in an extract and which not to, whereas an unsupervised approach blindly chooses sentences according to some selection scheme. An interesting question is, why this should be the case. The reason may have to do with the variation in human judgments on sentence selection for a summary. In a study to be described later, we asked students to select 10% of a text which they find most important for making a summary. If they agree perfectly on their judgments, then we will have only 10% of a text selected as most important. However, what we found was that about half of a text were marked as important, indicating that judgments can vary widely among humans. Curiously, however, Nomoto and Matsumoto (2001a) also found that a supervised system fares much better when tested on data exhibiting high agreement among humans than an unsupervised system. Their finding suggests that there are indeed some regularities (or biases) to be found. So we might conclude that there are two aspects to human judgments in summarization; they can vary but may exhibit some biases which could be usefully exploited. The issue is then how we might model them in some coherent framework. The goal of the paper is to explore a possible integration of supervised and unsupervised paradigms as a way of responding to the issue. Taking a decision tree and clustering as representing the respective paradigm, we will show how coupling them provides a summarizer that better approximates human judgments than either of the two considered alone. To our knowledge, none of the prior work on summarization (e.g., Kupiec et al. (1995)) explicitly addressed the issue of the variability inherent in human judgments in summarization tasks.
Figure 1: Probabilistic Decision Tree. [The figure shows a binary tree with non-terminal nodes X1 and X2, arcs labeled 0 and 1, and leaf nodes Y1, Y2, and Y3 with class distributions (θ_y^1, θ_n^1), (θ_y^2, θ_n^2), and (θ_y^3, θ_n^3).]
2 Supervised Ranking with Probabilistic Decision Tree One technical problem associated with the use of a decision tree as a summarizer is that it is not able to rank sentences, which it must be able to do to allow for the generation of a variable-length summary. In response to the problem, we explore the use of a probabilistic decision tree as a ranking model. First, let us review some general features of the probabilistic decision tree (ProbDT, henceforth) (Yamanishi, 1997; Rissanen, 1997). ProbDT works like a usual decision tree except that rather than assigning each instance to a single class, it distributes each instance among classes. For each instance xi, the strength of its membership in each of the classes is determined by P(ck | xi) for each class ck. Consider the binary decision tree in Fig. 1. Let X1 and X2 represent non-terminal nodes, and Y1, Y2, and Y3 leaf nodes. ‘1’ and ‘0’ on arcs denote values of some attribute at X1 and X2. θ_y^i and θ_n^i represent the probability that a given instance assigned to node i is labeled as yes and no, respectively. Abusing the terms slightly, let us assume that X1 and X2 represent splitting attributes as well at the respective nodes. Then the probability that a given instance with X1 = 1 and X2 = 0 is labeled as yes (no) is θ_y^2 (θ_n^2). Note that Σ_c θ_c^j = 1 for a given node j. Now to rank sentences with ProbDT simply involves finding the probability that each sentence is assigned to a particular class designating sentences worthy of inclusion in a summary (call it the ‘Select’ class) and ranking them accordingly. (Hereafter and throughout the rest of the paper, we say that a sentence is wis if it is worthy of inclusion in a summary: thus a wis sentence is a sentence worthy of inclusion in a summary.) The probability that a sentence u is labeled as wis is expressed as in Table 1, where ⃗u is a vector representation of u, consisting of a set of values for features of u; α is a smoothing function, e.g., Laplace’s law; t(⃗u) is some leaf node assigned to ⃗u; and DT represents some decision tree used to classify ⃗u. 3 Diversity Based Summarization As an unsupervised summarizer, we use diversity based summarization (DBS) (Nomoto and Matsumoto, 2001c). It takes a cluster-and-rank approach to generating summaries. The idea is to form a summary by collecting sentences representative of diverse topics discussed in the text. A nice feature of their approach is that by creating a summary covering potential topics, which could be marginal to the main thread of the text, they are in fact able to accommodate the variability in sentence selection: some people may pick up subjects (sentences) as important which others consider irrelevant or only marginal for summarization. DBS accommodates this situation by picking them all, however marginal they might be. More specifically, DBS is a tripartite process consisting of the following:
1. Find-Diversity: find clusters of lexically similar sentences in text. (In particular, we represent a sentence here as a vector of tfidf weights of the index terms it contains.)
2. Reduce-Redundancy: for each cluster found, choose a sentence that best represents that cluster.
3. Generate-Summary: collect the representative sentences, put them in some order, and return them to the user.
Find-Diversity is based on the K-means clustering algorithm, which they extended with the Minimum Description Length Principle (MDL) (Li, 1998; Yamanishi, 1997; Rissanen, 1997) as a way of optimizing K-means. Reduce-Redundancy is a tfidf-based ranking model, which assigns weights to sentences in the cluster and returns the sentence that ranks highest. The weight of a sentence is given as the sum of the tfidf scores of the terms in the sentence.
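A minimal sketch of the cluster-and-rank scheme just described: Reduce-Redundancy returns, for each cluster, the sentence whose terms have the largest summed tfidf weight. The precomputed tfidf table, the (position, tokens) representation, and the final ordering by text position are illustrative assumptions, not the authors' code.

```python
def reduce_redundancy(cluster, tfidf):
    """cluster: list of (position, tokens); tfidf: term -> weight.
    Return the cluster member with the highest summed tfidf weight."""
    return max(cluster, key=lambda s: sum(tfidf.get(tok, 0.0) for tok in s[1]))

def generate_summary(clusters, tfidf):
    """Collect one representative per cluster and order them by position."""
    reps = [reduce_redundancy(c, tfidf) for c in clusters]
    return [tokens for _, tokens in sorted(reps, key=lambda s: s[0])]

clusters = [
    [(0, ["economy", "shrank"]), (3, ["markets", "fell"])],
    [(1, ["parliament", "voted"]), (2, ["bill", "passed"])],
]
tfidf = {"economy": 2.1, "markets": 1.0, "parliament": 1.8, "bill": 0.9,
         "shrank": 0.5, "fell": 0.4, "voted": 0.6, "passed": 0.3}
print(generate_summary(clusters, tfidf))
```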
Find-Diversity is based on the K-means clustering algorithm, which they extended with Minimum Description Length Principle (MDL) (Li, 1998; Yamanishi, 1997; Rissanen, 1997) as a way of optimizing K-means. Reduce-Redundancy is a tfidf based ranking model, which assigns weights to sentences in the cluster and returns a sentence that ranks highest. The weight of a sentence is given as the sum of tfidf scores of terms in the sentence. Table 1: Probabilistic Classification with DT. ⃗u is a vector representation of sentence u. α is a smoothing function. t(⃗u) is some leaf node assigned to ⃗u by DT. P(Select | ⃗u, DT) = α µthe number of “Select” sentences at t(⃗u) the total number of sentences at t(⃗u) ¶ 4 Combining ProbDT and DBS Combining ProbDT and DBS is done quite straightforwardly by replacing Reduce-Redundacy with ProbDT. Thus instead of picking up a sentence with the highest tfdif based weight, DBS/ProbDT attempts to find a sentences with the highest score for P(Select | ⃗u, DT). 4.1 Features The following lists a set of features used for encoding a sentence in ProbDT. Most of them are either length- or location-related features.1 <LocSen> The location of a sentence X defined by: #S(X) −1 #S(Last Sentence) ‘#S(X)’ denotes an ordinal number indicating the position of X in a text, i.e. #S(kth sentence) = k. ‘Last Sentence’ refers to the last sentence in a text. LocSen takes values between 0 and N−1 N . N is the number of sentences in the text. <LocPar> The location of a paragraph in which a sentence X occurs given by: #Par(X) −1 #Last Paragraph ‘#Par(X)’ denotes an ordinal number indicating the position of a paragraph containing X. ‘#Last Paragraph’ is the position of the last paragraph in a text, represented by the ordinal number. <LocWithinPar> The location of a sentence X within a paragraph in which it appears. #S(X) −#S(Par Init Sen) Length(Par(X)) 1Note that one may want to add tfidf to a set of features for a decision tree or, for that matter, to use features other than tfidf for representing sentences in clustering. The idea is worthy of consideration, but not pursued here. Table 2: Linguistic cues code category 1 non-past 2 past /-ta/ 3 copula /-da/ 4 noun 5 symbols, e.g., parentheses 6 sentence-ending particles, e.g., /-ka/ 0 none of the above ‘Par Init Sen’ refers to the initial sentence of a paragraph in which X occurs, ‘Length(Par(X))’ denotes the number of sentences that occur in that paragraph. LocWithinPar takes continuous values ranging from 0 to l−1 l , where l is the length of a paragraph: a paragraph initial sentence would have 0 and a paragraph final sentence l−1 l . <LenText> The text length in Japanese character i.e. kana, kanji. <LenSen> The sentence length in kana/kanji. Some work in Japanese linguistics found that a particular grammatical class a sentence final element belongs to could serve as a cue to identifying summary sentences. These include categories like PAST/NON-PAST, INTERROGATIVE, and NOUN and QUESTION-MARKER. Along with Ichikawa (1990), we identified a set of sentence-ending cues and marked a sentence as to whether it contains a cue from the set.2 Included in the set are inflectional classes PAST/NON-PAST (for the verb and verbal adjective), COPULA, and NOUN, parentheses, and QUESTION-MARKER -ka. We use the following attribute to encode a sentence-ending form. 
Some work in Japanese linguistics found that the particular grammatical class a sentence-final element belongs to could serve as a cue to identifying summary sentences. These include categories like PAST/NON-PAST, INTERROGATIVE, and NOUN and QUESTION-MARKER. Along with Ichikawa (1990), we identified a set of sentence-ending cues and marked a sentence as to whether it contains a cue from the set. (Word tokens are extracted using CHASEN, a Japanese morphological analyzer which is reported to achieve an accuracy rate of over 98% (Matsumoto et al., 1999).) Included in the set are the inflectional classes PAST/NON-PAST (for the verb and verbal adjective), COPULA, and NOUN, parentheses, and QUESTION-MARKER -ka. We use the following attribute to encode a sentence-ending form.
<EndCue> The feature encodes one of the sentence-ending forms described above. It is a discrete-valued feature. The value ranges from 0 to 6. (See Table 2 for details.)
Table 2: Linguistic cues
code | category
1 | non-past
2 | past /-ta/
3 | copula /-da/
4 | noun
5 | symbols, e.g., parentheses
6 | sentence-ending particles, e.g., /-ka/
0 | none of the above
Finally, one of two class labels, ‘Select’ and ‘Don’t Select’, is assigned to a sentence, depending on whether it is wis or not. The ‘Select’ label is for wis sentences, and the ‘Don’t Select’ label for non-wis sentences. 5 Decision Tree Algorithms To examine the generality of our approach, we consider, in addition to C4.5 (Quinlan, 1993), the following decision tree algorithms. C4.5 is used with default options, e.g., CF=25%. 5.1 MDL-DT MDL-DT stands for a decision tree with MDL-based pruning. It strives to optimize the decision tree by pruning the tree in such a way as to produce the shortest (minimum) description length for the tree. The description length refers to the number of bits required for encoding information about the decision tree. MDL ranks, along with the Akaike Information Criterion (AIC) and the Bayes Information Criterion (BIC), as a standard criterion in machine learning and statistics for choosing among possible (statistical) models. As shown empirically in Nomoto and Matsumoto (2000) for the discourse domain, pruning DT with MDL significantly reduces the size of the tree, while not compromising performance. 5.2 SSDT SSDT, or Subspace Splitting Decision Tree, represents another form of decision tree algorithm (Wang and Yu, 2001). The goal of SSDT is to discover patterns in highly biased data, where a target class, i.e., the class one would like to discover something about, accounts for a tiny fraction of the whole data. Note that the issue of biased data distribution is particularly relevant for summarization, as the set of sentences to be identified as wis usually accounts for a very small portion of the data. SSDT begins by searching the entire data space for a cluster of positive cases and grows the cluster by adding points that fall within some distance of the center of the cluster. If the splitting based on the cluster offers a better Gini index than simply using
Texts were of about the same size in terms of character counts and the number of paragraphs, and were selected randomly from articles that appeared in a Japanese financial daily (NihonKeizai-Shimbun-Sha, 1995). There were, on average, 19.98 sentences per text. 3For a set S of data with k classes, its Gini index is given as: Gini(S) = 1 −Pk i p2 i , where pi denotes the probability of observing class i in S. Table 3: Test Data. N denotes the total number of sentences in the test data. K ≥n means that a wis (positive) sentence gets at least n votes. K N positive negative ≥1 1424 707 717 ≥2 1424 392 1032 ≥3 1424 236 1188 ≥4 1424 150 1274 ≥5 1424 72 1352 The kappa agreement among subjects was 0.25. The result is in a way consistent with Salton et al. (1999), who report a low inter-subject agreement on paragraph extracts from encyclopedias and also with Gong and Liu (2001) on a sentence selection task in the cable news domain. While there are some work (Marcu, 1999; Jing et al., 1998) which do report high agreement rates, their success may be attributed to particularities of texts used, as suggested by Jing et al. (1998). Thus, the question of whether it is possible to establish an ideal summary based on agreement is far from settled, if ever. In the face of this, it would be interesting and perhaps more fruitful to explore another view on summary, that the variability of a summary is the norm rather than the exception. In the experiments that follow, we decided not to rely on a particular level of inter-coder agreement to determine whether or not a given sentence is wis. Instead, we used agreement threshold to distinguish between wis and non-wis sentences: for a given threshold K, a sentence is considered wis (or positive) if it has at least K votes in favor of its inclusion in a summary, and non-wis (negative) if not. Thus if a sentence is labeled as positive at K ≥1, it means that there are one or more judges taking that sentence as wis. We examined K from 1 to 5. (On average, seven people are assigned to one article. However, one would rarely see all of them unanimously agree on their judgments.) Table 3 shows how many positive/negative instances one would get at a given agreement threshold. At K ≥1, out of 1424 instances, i.e., sentences, 707 of them are marked positive and 717 are marked negative, so positive and negative instances are evenly spread across the data. On the other hand, at K ≥5, there are only 72 positive instances. This means that there is less than one occurrence of wis case per article. In the experiments below, each probabilistic rendering of the DTs, namely, C4.5, MDL-DT, and SSDT is trained on the corpus, and tested with and without the diversity extension (Find-Diversity). When used without the diversity component, each ProbDT works on a test article in its entirety, producing the ranked list of sentences. A summary with compression rate γ is obtained by selecting top γ percent of the list. When coupled with FindDiversity, on the other hand, each ProbDT is set to work on each cluster discovered by the diversity component, producing multiple lists of sentences, each corresponding to one of the clusters identified. A summary is formed by collecting top ranking sentences from each list. Evaluation was done by 10-fold cross validation. 
For the purpose of comparison, we also ran the diversity based model as given in Nomoto and Matsumoto (2001c) and a tfidf based ranking model (Zechner, 1996) (call it Z model), which simply ranks sentences according to the tfidf score and selects those which rank highest. Recall that the diversity based model (DBS) (Nomoto and Matsumoto, 2001c) consists in Find-Diversity and the ranking model by Zechner (1996), which they call Reduce-Redundancy. 7 Results and Discussion Tables 4-8 show performance of each ProbDT and its combination with the diversity (clustering) component. It also shows performance of Z model and DBS. In the tables, the slashed ‘V’ after the name of a classifier indicates that the relevant classifier is diversity-enabled, meaning that it is coupled with the diversity extension. Notice that each decision tree here is a ProbDT and should not be confused with its non-probabilistic counterpart. Also worth noting is that DBS is in fact Z/V, that is, diversityenabled Z model. Returning to the tables, we find that for most of the times, the diversity component has clear effects on ProbDTs, significantly improving their performance. All the figures are in F-measure, i.e., F = 2∗P∗R P+R . In fact this happens regardless of a particular choice of ranking model, as performance of Z is also boosted with the diversity component. Not surprisingly, effects of supervised learning are also evident: diversity-enabled ProbDTs generally outperform DBS (Z/V) by a large margin. What is surprising, moreover, is that diversity-enabled ProbDTs are superior in performance to their non-diversity counterparts (with a notable exception for SSDT at K ≥1), which suggests that selecting marginal sentences is an important part of generating a summary. Another observation about the results is that as one goes along with a larger K, differences in performance among the systems become ever smaller: at K ≥5, Z performs comparably to C4.5, MDL, and SSDT either with or without the diversity component. The decline of performance of the DTs may be caused by either the absence of recurring patterns in data with a higher K or simply the paucity of positive instances. At the moment, we do not know which is the case here. It is curious to note, moreover, that MDL-DT is not performing as well as C4.5 and SSDT at K ≥1, K ≥2, and K ≥3. The reason may well have to do with the general properties of MDL-DT. Recall that MDL-DT is designed to produce as small a decision tree as possible. Therefore, the resulting tree would have a very small number of nodes covering the entire data space. Consider, for instance, a hypothetical data space in Figure 3. Assume that MDL-DT bisects the space into region A and B, producing a two-node decision tree. The problem with the tree is, of course, that point x and y in region B will be assigned to the same probability under the probabilistic tree model, despite the fact that point x is very close to region A and point y is far out. This problem could happen with C4.5, but in MDL-DT, which covers a large space with a few nodes, points in a region could be far apart, making the problem more acute. Thus the poor performance of MDL-DT may be attributable to its extensive use of pruning. 8 Conclusion As a way of exploiting human biases towards an increased performance of the summarizer, we have explored approaches to embedding supervised learning within a general unsupervised framework. 
Figure 3: Hypothetical Data Space. [The figure shows a space bisected into regions A and B, with points x and y both falling in region B.]
In the paper, we focused on the use of a decision tree as a plug-in learner. We have shown empirically that the idea works for a number of decision trees, including C4.5, MDL-DT, and SSDT. Coupled with the learning component, the unsupervised summarizer based on clustering significantly improved its performance on the corpus of human-created summaries. More importantly, we found that supervised learners perform better when coupled with the clustering than when working alone. We argued that this has to do with the high variation in human-created summaries: the clustering component forces a decision tree to pay more attention to sentences marginally relevant to the main thread of the text. While ProbDTs appear to work well with ranking, it is also possible to take a different approach: for instance, we may use some distance metric instead of probability to distinguish among sentences. It would be interesting to invoke a notion like the prototype modeler (Kalton et al., 2001) and see how it might fare when used as a ranking model. Moreover, it may be worthwhile to explore some non-clustering approaches to representing the diversity of contents of a text, such as Gong and Liu (2001)'s summarizer 1 (GLS1, for short), where a sentence is selected on the basis of its similarity to the text it belongs to, but which excludes terms that appear in previously selected sentences. While our preliminary study indicates that GLS1 produces performance comparable and even superior to DBS on some tasks in the document retrieval domain, we have no results available at the moment on the efficacy of combining GLS1 and ProbDT on sentence extraction tasks. Finally, we note that the test corpus used for evaluation is somewhat artificial in the sense that we elicit judgments from people on the summary-worthiness of a particular sentence in the text. Perhaps we should look at naturally occurring abstracts or extracts as a potential source of training/evaluation data for summarization research. Besides being natural, they usually come in large numbers, which may alleviate some concern about the lack of sufficient resources for training learning algorithms in summarization.
Table 4: Performance at varying compression rates for K ≥1. MDL-DT denotes a summarizer based on C4.5 with the MDL extension. DBS (=Z/V) denotes the diversity based summarizer. Z represents the Z-model summarizer. Performance figures are in F-measure. ‘V’ indicates that the relevant classifier is diversity-enabled. Note that DBS = Z/V.
cmp.rate | C4.5 | C4.5/V | MDL-DT | MDL-DT/V | SSDT | SSDT/V | DBS | Z
0.2 | 0.371 | 0.459 | 0.353 | 0.418 | 0.437 | 0.454 | 0.429 | 0.231
0.3 | 0.478 | 0.507 | 0.453 | 0.491 | 0.527 | 0.517 | 0.491 | 0.340
0.4 | 0.549 | 0.554 | 0.535 | 0.545 | 0.605 | 0.553 | 0.529 | 0.435
0.5 | 0.614 | 0.600 | 0.585 | 0.593 | 0.639 | 0.606 | 0.582 | 0.510
Table 5: K ≥2
cmp.rate | C4.5 | C4.5/V | MDL-DT | MDL-DT/V | SSDT | SSDT/V | DBS | Z
0.2 | 0.381 | 0.441 | 0.343 | 0.391 | 0.395 | 0.412 | 0.386 | 0.216
0.3 | 0.420 | 0.441 | 0.366 | 0.418 | 0.404 | 0.431 | 0.421 | 0.290
0.4 | 0.434 | 0.444 | 0.398 | 0.430 | 0.415 | 0.444 | 0.444 | 0.344
0.5 | 0.427 | 0.447 | 0.409 | 0.437 | 0.423 | 0.439 | 0.443 | 0.381
Table 6: K ≥3
cmp.rate | C4.5 | C4.5/V | MDL-DT | MDL-DT/V | SSDT | SSDT/V | DBS | Z
0.2 | 0.320 | 0.354 | 0.297 | 0.345 | 0.328 | 0.330 | 0.314 | 0.314
0.3 | 0.300 | 0.371 | 0.278 | 0.350 | 0.321 | 0.338 | 0.342 | 0.349
0.4 | 0.297 | 0.357 | 0.298 | 0.348 | 0.325 | 0.340 | 0.339 | 0.337
0.5 | 0.297 | 0.337 | 0.301 | 0.329 | 0.307 | 0.327 | 0.322 | 0.322
Table 7: K ≥4
cmp.rate | C4.5 | C4.5/V | MDL-DT | MDL-DT/V | SSDT | SSDT/V | DBS | Z
0.2 | 0.272 | 0.283 | 0.285 | 0.301 | 0.254 | 0.261 | 0.245 | 0.245
0.3 | 0.229 | 0.280 | 0.234 | 0.284 | 0.249 | 0.267 | 0.269 | 0.269
0.4 | 0.238 | 0.270 | 0.243 | 0.267 | 0.236 | 0.248 | 0.247 | 0.247
0.5 | 0.235 | 0.240 | 0.245 | 0.246 | 0.227 | 0.233 | 0.232 | 0.232
Table 8: K ≥5
cmp.rate | C4.5 | C4.5/V | MDL-DT | MDL-DT/V | SSDT | SSDT/V | DBS | Z
0.2 | 0.242 | 0.226 | 0.252 | 0.240 | 0.188 | 0.189 | 0.191 | 0.191
0.3 | 0.194 | 0.220 | 0.197 | 0.231 | 0.171 | 0.206 | 0.194 | 0.194
0.4 | 0.184 | 0.189 | 0.189 | 0.208 | 0.175 | 0.173 | 0.173 | 0.173
0.5 | 0.174 | 0.175 | 0.176 | 0.191 | 0.145 | 0.178 | 0.167 | 0.167
References Yihong Gong and Xin Liu. 2001. Generic text summarization using relevance measure and latent semantic analysis. In Proceedings of the 24th Annual International ACM/SIGIR Conference on Research and Development, New Orleans. ACM Press. Takashi Ichikawa. 1990. Bunshōron-gaisetsu. Kyōiku-Shuppan, Tokyo. Hongyan Jing, Regina Barzilay, Kathleen McKeown, and Michael Elhadad. 1998. Summarization evaluation methods: Experiments and analysis. In AAAI Symposium on Intelligent Summarization, Stanford University, CA, March. Annaka Kalton, Pat Langley, Kiri Wagstaff, and Jungsoon Yoo. 2001. Generalized clustering, supervised learning, and data assignment. In Proceedings of the Seventh International Conference on Knowledge Discovery and Data Mining (KDD2001), San Francisco, August. ACM. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the Fourteenth Annual International ACM/SIGIR Conference on Research and Development in Information Retrieval, pages 68–73, Seattle. Hang Li. 1998. A Probabilistic Approach to Lexical Semantic Knowledge Acquisition and Structural Disambiguation. Ph.D. thesis, University of Tokyo, Tokyo. Daniel Marcu. 1999. Discourse trees are good indicators of importance in text. In Inderjeet Mani and Mark T. Maybury, editors, Advances in Automatic Text Summarization, pages 123–136. The MIT Press. Yuji Matsumoto, Akira Kitauchi, Tatsuo Yamashita, and Yoshitaka Hirano. 1999. Japanese morphological analysis system ChaSen version 2.0 manual. Technical report, NAIST, Ikoma, April. NAIST-IS-TR99008. Nihon-Keizai-Shimbun-Sha. 1995. Nihon keizai shimbun 95 nen CD-ROM ban. CD-ROM. Tokyo, Nihon Keizai Shimbun, Inc.
Tadashi Nomoto and Yuji Matsumoto. 2000. Comparing the minimum description length principle and boosting in the automatic analysis of discourse. In Proceedings of the Seventeenth International Conference on Machine Learning, pages 687–694, Stanford University, June-July. Morgan Kaufmann. Tadashi Nomoto and Yuji Matsumoto. 2001a. The diversity based approach to open-domain text summarization. Unpublished Manuscript. Tadashi Nomoto and Yuji Matsumoto. 2001b. An experimental comparison of supervised and unsupervised approaches to text summarization. In Proceedings of 2001 IEEE International Conference on Data Mining, pages 630–632, San Jose. IEEE Computer Society. Tadashi Nomoto and Yuji Matsumoto. 2001c. A new approach to unsupervised text summarization. In Proceedings of the 24th International ACM/SIGIR Conference on Research and Development in Informational Retrieval, New Orleans, September. ACM. J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann. Jorma Rissanen. 1997. Stochastic complexity in learning. Journal of Computer and System Sciences, 55:89– 95. Gerald Salton, Amit Singhal, Mandara Mitra, and Chris Buckley. 1999. Automatic text structuring and summarization. In Inderjeet Mani and Mark T. Maybury, editors, Advances in Automatic Text Summarization, pages 342–355. The MIT Press. Reprint. Haixun Wang and Philip Yu. 2001. SSDT: A scalable subspace-splitting classifier for biased data. In Proceedings of 2001 IEEE International Conference on Data Mining, pages 542–549, San Jose, December. IEEE Computer Society. Kenji Yamanishi. 1997. Data compression and learning. Journal of Japanese Society for Artificial Intelligence, 12(2):204–215. in Japanese. Klaus Zechner. 1996. Fast generation of abstracts from general domain text corpora by extracting relevant sentences. In Proceedings of the 16th International Conference on Computational Linguistics, pages 986–989, Copenhagen.
2002
59
Learning Surface Text Patterns for a Question Answering System
Deepak Ravichandran and Eduard Hovy
Information Sciences Institute, University of Southern California, 4676 Admiralty Way, Marina del Rey, CA 90292-6695 USA
{ravichan,hovy}@isi.edu

Abstract
In this paper we explore the power of surface text patterns for open-domain question answering systems. In order to obtain an optimal set of patterns, we have developed a method for learning such patterns automatically. A tagged corpus is built from the Internet in a bootstrapping process by providing a few hand-crafted examples of each question type to AltaVista. Patterns are then automatically extracted from the returned documents and standardized. We calculate the precision of each pattern, and the average precision for each question type. These patterns are then applied to find answers to new questions. Using the TREC-10 question set, we report results for two cases: answers determined from the TREC-10 corpus and from the web.

1 Introduction
Most of the recent open-domain question-answering systems use external knowledge and tools for answer pinpointing. These may include named entity taggers, WordNet, parsers, hand-tagged corpora, and ontology lists (Srihari and Li, 00; Harabagiu et al., 01; Hovy et al., 01; Prager et al., 01). However, at the recent TREC-10 QA evaluation (Voorhees, 01), the winning system used just one resource: a fairly extensive list of surface patterns (Soubbotin and Soubbotin, 01). The apparent power of such patterns surprised many. We therefore decided to investigate their potential by acquiring patterns automatically and to measure their accuracy.
It has been noted in several QA systems that certain types of answer are expressed using characteristic phrases (Lee et al., 01; Wang et al., 01). For example, for BIRTHDATEs (with questions like "When was X born?"), typical answers are "Mozart was born in 1756." and "Gandhi (1869–1948)…". These examples suggest that phrases like "<NAME> was born in <BIRTHDATE>" and "<NAME> (<BIRTHDATE>–", when formulated as regular expressions, can be used to locate the correct answer.
In this paper we present an approach for automatically learning such regular expressions (along with determining their precision) from the web, for given types of questions. Our method uses the machine learning technique of bootstrapping to build a large tagged corpus starting with only a few examples of QA pairs. Similar techniques have been investigated extensively in the field of information extraction (Riloff, 96). These techniques are greatly aided by the fact that there is no need to hand-tag a corpus, while the abundance of data on the web makes it easier to determine reliable statistical estimates. Our system assumes each sentence to be a simple sequence of words and searches for repeated word orderings as evidence for useful answer phrases. We use suffix trees for extracting substrings of optimal length. We borrow the idea of suffix trees from computational biology (Gusfield, 97), where it is primarily used for detecting DNA sequences. Suffix trees can be processed in time linear in the size of the corpus and, more importantly, they do not restrict the length of substrings. We then test the patterns learned by our system on new unseen questions from the TREC-10 set and evaluate their results to determine the precision of the patterns.
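Since the bootstrapping step hinges on finding word orderings that recur across the downloaded sentences, a rough sketch may help. The brute-force substring counter below stands in for the linear-time suffix tree mentioned above; it is not the authors' implementation, and the example sentences (echoing the Mozart example of Section 2), the count threshold, and the length cap are invented for illustration.

```python
from collections import Counter

def repeated_substrings(sentences, min_count=2, max_len=60):
    """Count substrings (up to max_len characters) shared across sentences.

    A suffix tree yields the same counts in linear time; this brute-force
    version only illustrates the idea of collecting repeated word orderings.
    """
    counts = Counter()
    for s in sentences:
        seen = set()
        for i in range(len(s)):
            for j in range(i + 1, min(i + max_len, len(s)) + 1):
                seen.add(s[i:j])
        counts.update(seen)          # count each substring once per sentence
    return {sub: c for sub, c in counts.items() if c >= min_count}

sents = [
    "The great composer Mozart (1756-1791) achieved fame at a young age",
    "Mozart (1756-1791) was a genius",
    "The whole world would always be indebted to the great music of Mozart (1756-1791)",
]
common = repeated_substrings(sents, min_count=3)
print(max((s for s in common if "Mozart" in s and "1756" in s), key=len))
# -> "Mozart (1756-1791)", the longest substring shared by all three sentences
```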
2 Learning of Patterns We describe the pattern-learning algorithm with an example. A table of patterns is constructed for each individual question type by the following procedure (Algorithm 1). 1. Select an example for a given question type. Thus for BIRTHYEAR questions we select “Mozart 1756” (we refer to “Mozart” as the question term and “1756” as the answer term). 2. Submit the question and the answer term as queries to a search engine. Thus, we give the query +“Mozart” +“1756” to AltaVista (http://www.altavista.com). 3. Download the top 1000 web documents provided by the search engine. 4. Apply a sentence breaker to the documents. 5. Retain only those sentences that contain both the question and the answer term. Tokenize the input text, smooth variations in white space characters, and remove html and other extraneous tags, to allow simple regular expression matching tools such as egrep to be used. 6. Pass each retained sentence through a suffix tree constructor. This finds all substrings, of all lengths, along with their counts. For example consider the sentences “The great composer Mozart (1756–1791) achieved fame at a young age” “Mozart (1756–1791) was a genius”, and “The whole world would always be indebted to the great music of Mozart (1756–1791)”. The longest matching substring for all 3 sentences is “Mozart (1756–1791)”, which the suffix tree would extract as one of the outputs along with the score of 3. 7. Pass each phrase in the suffix tree through a filter to retain only those phrases that contain both the question and the answer term. For the example, we extract only those phrases from the suffix tree that contain the words “Mozart” and “1756”. 8. Replace the word for the question term by the tag “<NAME>” and the word for the answer term by the term “<ANSWER>”. This procedure is repeated for different examples of the same question type. For BIRTHDATE we also use “Gandhi 1869”, “Newton 1642”, etc. For BIRTHDATE, the above steps produce the following output: a. born in <ANSWER> , <NAME> b. <NAME> was born on <ANSWER> , c. <NAME> ( <ANSWER> - d. <NAME> ( <ANSWER - ) ... These are some of the most common substrings of the extracted sentences that contain both <NAME> and <ANSWER>. Since the suffix tree records all substrings, partly overlapping strings such as c and d are separately saved, which allows us to obtain separate counts of their occurrence frequencies. As will be seen later, this allows us to differentiate patterns such as d (which records a still living person, and is quite precise) from its more general substring c (which is less precise). Algorithm 2: Calculating the precision of each pattern. 1. Query the search engine by using only the question term (in the example, only “Mozart”). 2. Download the top 1000 web documents provided by the search engine. 3. As before, segment these documents into individual sentences. 4. Retain only those sentences that contain the question term. 5. For each pattern obtained from Algorithm 1, check the presence of each pattern in the sentence obtained from above for two instances: i) Presence of the pattern with <ANSWER> tag matched by any word. ii) Presence of the pattern in the sentence with <ANSWER> tag matched by the correct answer term. 
In our example, for the pattern “<NAME> was born in <ANSWER>” we check the presence of the following strings in the answer sentence i) Mozart was born in <ANY_WORD> ii) Mozart was born in 1756 Calculate the precision of each pattern by the formula P = Ca / Co where Ca = total number of patterns with the answer term present Co = total number of patterns present with answer term replaced by any word 6. Retain only the patterns matching a sufficient number of examples (we choose the number of examples > 5). We obtain a table of regular expression patterns for a given question type, along with the precision of each pattern. This precision is the probability of each pattern containing the answer and follows directly from the principle of maximum likelihood estimation. For BIRTHDATE the following table is obtained: 1.0 <NAME>( <ANSWER> - ) 0.85 <NAME> was born on <ANSWER>, 0.6 <NAME> was born in <ANSWER> 0.59 <NAME> was born <ANSWER> 0.53 <ANSWER> <NAME> was born 0.50 – <NAME> ( <ANSWER> 0.36 <NAME> ( <ANSWER> - For a given question type a good range of patterns was obtained by giving the system as few as 10 examples. The rather long list of patterns obtained would have been very difficult for any human to come up with manually. The question term could appear in the documents obtained from the web in various ways. Thus “Mozart” could be written as “Wolfgang Amadeus Mozart”, “Mozart, Wolfgang Amadeus”, “Amadeus Mozart” or “Mozart”. To learn from such variations, in step 1 of Algorithm 1 we specify the various ways in which the question term could be specified in the text. The presence of any of these names would cause it to be tagged as the original question term “Mozart”. The same arrangement is also done for the answer term so that presence of any variant of the answer term would cause it to be treated exactly like the original answer term. While easy to do for BIRTHDATE, this step can be problematic for question types such as DEFINITION, which may contain various acceptable answers. In general the input example terms have to be carefully selected so that the questions they represent do not have a long list of possible answers, as this would affect the confidence of the precision scores for each pattern. All the answers need to be enlisted to ensure a high confidence in the precision score of each pattern, in the present framework. The precision of the patterns obtained from one QA-pair example in algorithm 1 is calculated from the documents obtained in algorithm 2 for other examples of the same question type. In other words, the precision scores are calculated by cross-checking the patterns across various examples of the same type. This step proves to be very significant as it helps to eliminate dubious patterns, which may appear because the contents of two or more websites may be the same, or the same web document reappears in the search engine output for algorithms 1 and 2. Algorithm 1 does not explicitly specify any particular question type. Judicious choice of the QA example pair therefore allows it to be used for many question types without change. 3 Finding Answers Using the patterns to answer a new question we employ the following algorithm: 1. Determine the question type of the new question. We use our existing QA system (Hovy et al., 2002b; 2001) to do so. 2. The question term in the question is identified, also using our existing system. 3. 
Create a query from the question term and perform IR (by using a given answer document corpus such as the TREC-10 collection or web search otherwise). 4. Segment the documents obtained into sentences and smooth out white space variations and html and other tags, as before. 5. Replace the question term in each sentence by the question tag (“<NAME>”, in the case of BIRTHYEAR). 6. Using the pattern table developed for that particular question type, search for the presence of each pattern. Select words matching the tag “<ANSWER>” as the answer. 7. Sort these answers by their pattern’s precision scores. Discard duplicates (by elementary string comparisons). Return the top 5 answers. 4 Experiments From our Webclopedia QA Typology (Hovy et al., 2002a) we selected 6 different question types: BIRTHDATE, LOCATION, INVENTOR, DISCOVERER, DEFINITION, WHY-FAMOUS. The pattern table for each of these question types was constructed using Algorithm 1. Some of the patterns obtained along with their precision are as follows BIRTHYEAR 1.0 <NAME> ( <ANSWER> - ) 0.85 <NAME> was born on <ANSWER> , 0.6 <NAME> was born in <ANSWER> 0.59 <NAME> was born <ANSWER> 0.53 <ANSWER> <NAME> was born 0.5 - <NAME> ( <ANSWER> 0.36 <NAME> ( <ANSWER> - 0.32 <NAME> ( <ANSWER> ) , 0.28 born in <ANSWER> , <NAME> 0.2 of <NAME> ( <ANSWER> INVENTOR 1.0 <ANSWER> invents <NAME> 1.0 the <NAME> was invented by <ANSWER> 1.0 <ANSWER> invented the <NAME> in 1.0 <ANSWER> ' s invention of the <NAME> 1.0 <ANSWER> invents the <NAME> . 1.0 <ANSWER> ' s <NAME> was 1.0 <NAME> , invented by <ANSWER> 1.0 <ANSWER> ' s <NAME> and 1.0 that <ANSWER> ' s <NAME> 1.0 <NAME> was invented by <ANSWER> , DISCOVERER 1.0 when <ANSWER> discovered <NAME> 1.0 <ANSWER> ' s discovery of <NAME> 1.0 <ANSWER> , the discoverer of <NAME> 1.0 <ANSWER> discovers <NAME> . 1.0 <ANSWER> discover <NAME> 1.0 <ANSWER> discovered <NAME> , the 1.0 discovery of <NAME> by <ANSWER>. 0.95 <NAME> was discovered by <ANSWER> 0.91 of <ANSWER> ' s <NAME> 0.9 <NAME> was discovered by <ANSWER> in DEFINITION 1.0 <NAME> and related <ANSWER>s 1.0 <ANSWER> ( <NAME> , 1.0 <ANSWER> , <NAME> . 1.0 , a <NAME> <ANSWER> , 1.0 ( <NAME> <ANSWER> ) , 1.0 form of <ANSWER> , <NAME> 1.0 for <NAME> , <ANSWER> and 1.0 cell <ANSWER> , <NAME> 1.0 and <ANSWER> > <ANSWER> > <NAME> 0.94 as <NAME> , <ANSWER> and WHY-FAMOUS 1.0 <ANSWER> <NAME> called 1.0 laureate <ANSWER> <NAME> 1.0 by the <ANSWER> , <NAME> , 1.0 <NAME> - the <ANSWER> of 1.0 <NAME> was the <ANSWER> of 0.84 by the <ANSWER> <NAME> , 0.8 the famous <ANSWER> <NAME> , 0.73 the famous <ANSWER> <NAME> 0.72 <ANSWER> > <NAME> 0.71 <NAME> is the <ANSWER> of LOCATION 1.0 <ANSWER> ' s <NAME> . 1.0 regional : <ANSWER> : <NAME> 1.0 to <ANSWER> ' s <NAME> , 1.0 <ANSWER> ' s <NAME> in 1.0 in <ANSWER> ' s <NAME> , 1.0 of <ANSWER> ' s <NAME> , 1.0 at the <NAME> in <ANSWER> 0.96 the <NAME> in <ANSWER> , 0.92 from <ANSWER> ' s <NAME> 0.92 near <NAME> in <ANSWER> For each question type, we extracted the corresponding questions from the TREC-10 set. These questions were run through the testing phase of the algorithm. Two sets of experiments were performed. In the first case, the TREC corpus was used as the input source and IR was performed by the IR component of our QA system (Lin, 2002). In the second case, the web was the input source and the IR was performed by the AltaVista search engine. 
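To make the use of the pattern table concrete, here is a minimal, hypothetical sketch of the precision scoring of Algorithm 2 (P = Ca / Co) and of the answer-extraction loop of Section 3. It is not the authors' implementation: the <ANSWER> slot is approximated by a single whitespace-delimited token rather than a 50-byte window, precision counts at most one match per sentence, and the example question term, sentences, and precision values are invented for illustration.

```python
import re

def to_regex(pattern, question_term, answer_regex=r"(\S+)"):
    """Turn a surface pattern such as '<NAME> was born in <ANSWER>'
    into a compiled regular expression for a concrete question term."""
    regex = ""
    for piece in re.split(r"(<NAME>|<ANSWER>)", pattern):
        if piece == "<NAME>":
            regex += re.escape(question_term)
        elif piece == "<ANSWER>":
            regex += answer_regex
        else:
            regex += re.escape(piece)
    return re.compile(regex)

def pattern_precision(pattern, sentences, question_term, answer_terms):
    """P = Ca / Co (Algorithm 2): Co counts matches with <ANSWER> filled by
    any word, Ca counts matches filled by a known correct answer term."""
    loose = to_regex(pattern, question_term, r"\S+")
    strict = to_regex(pattern, question_term,
                      "(?:" + "|".join(map(re.escape, answer_terms)) + ")")
    co = sum(1 for s in sentences if loose.search(s))
    ca = sum(1 for s in sentences if strict.search(s))
    return ca / co if co else 0.0

def find_answers(question_term, pattern_table, sentences, top_n=5):
    """Apply a pattern table (pattern -> precision) to candidate sentences
    (Section 3, steps 5-7): collect <ANSWER> matches, rank by the precision
    of the matching pattern, discard duplicates, return the top answers."""
    candidates = []
    for pattern, precision in pattern_table.items():
        regex = to_regex(pattern, question_term)      # captures <ANSWER>
        for sent in sentences:
            for m in regex.finditer(sent):
                candidates.append((precision, m.group(1)))
    candidates.sort(key=lambda pair: -pair[0])
    answers, seen = [], set()
    for _, ans in candidates:
        if ans not in seen:
            seen.add(ans)
            answers.append(ans)
        if len(answers) == top_n:
            break
    return answers

patterns = {"<NAME> ( <ANSWER> - )": 1.0, "<NAME> was born in <ANSWER>": 0.6}
sents = ["Jane Doe ( 1970 - ) is a fictional example .",
         "Jane Doe was born in Springfield ."]
print(find_answers("Jane Doe", patterns, sents))      # ['1970', 'Springfield']
print(pattern_precision("<NAME> was born in <ANSWER>",
                        sents, "Jane Doe", ["1970"])) # 0.0 on this tiny sample
```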
Results of the experiments, measured by the Mean Reciprocal Rank (MRR) score (Voorhees, 01), are:

TREC Corpus
Question type   Number of questions   MRR on TREC docs
BIRTHYEAR       8                     0.48
INVENTOR        6                     0.17
DISCOVERER      4                     0.13
DEFINITION      102                   0.34
WHY-FAMOUS      3                     0.33
LOCATION        16                    0.75

Web
Question type   Number of questions   MRR on the Web
BIRTHYEAR       8                     0.69
INVENTOR        6                     0.58
DISCOVERER      4                     0.88
DEFINITION      102                   0.39
WHY-FAMOUS      3                     0.00
LOCATION        16                    0.86

The results indicate that the system performs better on the Web data than on the TREC corpus. The abundance of data on the web makes it easier for the system to locate answers with high precision scores (the system finds many examples of correct answers among the top 20 when using the Web as the input source). A similar result for QA was obtained by Brill et al. (2001). The TREC corpus does not have enough candidate answers with high precision score and has to settle for answers extracted from sentences matched by low precision patterns. The WHY-FAMOUS question type is an exception and may be due to the fact that the system was tested on a small number of questions.

5 Shortcomings and Extensions
No external knowledge has been added to these patterns. We frequently observe the need for matching part of speech and/or semantic types, however. For example, the question: "Where are the Rocky Mountains located?" is answered by "Denver's new airport, topped with white fiberglass cones in imitation of the Rocky Mountains in the background, continues to lie empty", because the system picked the answer "the background" using the pattern "the <NAME> in <ANSWER>,". Using a named entity tagger and/or an ontology would enable the system to use the knowledge that "background" is not a location.
DEFINITION questions pose a related problem. Frequently the system's patterns match a term that is too general, though correct technically. For "what is nepotism?" the pattern "<ANSWER>, <NAME>" matches "…in the form of widespread bureaucratic abuses: graft, nepotism…"; for "what is sonar?" the pattern "<NAME> and related <ANSWER>s" matches "…while its sonar and related underseas systems are built…".
The patterns cannot handle long-distance dependencies. For example, for "Where is London?" the system cannot locate the answer in "London, which has one of the most busiest airports in the world, lies on the banks of the river Thames" due to the explosive danger of unrestricted wildcard matching, as would be required in the pattern "<QUESTION>, (<any_word>)*, lies on <ANSWER>". This is one of the reasons why the system performs very well on certain types of questions from the web but performs poorly with documents obtained from the TREC corpus. The abundance and variation of data on the Internet allows the system to find an instance of its patterns without losing answers to long-term dependencies. The TREC corpus, on the other hand, typically contains fewer candidate answers for a given question and many of the answers present may match only long-term dependency patterns.
More information needs to be added to the text patterns regarding the length of the answer phrase to be expected. The system searches in the range of 50 bytes of the answer phrase to capture the pattern. It fails to perform under certain conditions as exemplified by the question "When was Lyndon B. Johnson born?". The system selects the sentence "Tower gained national attention in 1960 when he lost to democratic Sen. Lyndon B. Johnson, who ran for both reelection and the vice presidency" using the pattern "<NAME> <ANSWER> –".
The system lacks the information that the <ANSWER> tag should be replaced exactly by one word. Simple extensions could be made to the system so that instead of searching in the range of 50 bytes for the answer phrase it could search for the answer in the range of 1–2 chunks (basic phrases in English such as simple NP, VP, PP, etc.). A more serious limitation is that the present framework can handle only one anchor point (the question term) in the candidate answer sentence. It cannot work for types of question that require multiple words from the question to be in the answer sentence, possibly apart from each other. For example, in “Which county does the city of Long Beach lie?”, the answer “Long Beach is situated in Los Angeles County” requires the pattern. “<QUESTION_TERM_1> situated in <ANSWER> <QUESTION_TERM_2>”, where <QUESTION_TERM_1> and <QUESTION_TERM_2> represent the terms “Long Beach” and “county” respectively. The performance of the system depends significantly on there being only one anchor word, which allows a single word match between the question and the candidate answer sentence. The presence of multiple anchor words would help to eliminate many of the candidate answers by simply using the condition that all the anchor words from the question must be present in the candidate answer sentence. The system does not classify or make any distinction between upper and lower case letters. For example, “What is micron?” is answered by “In Boise, Idaho, a spokesman for Micron, a maker of semiconductors, said Simms are ‘ a very high volume product for us …’ ”. The answer returned by the system would have been perfect if the word “micron” had been capitalized in the question. Canonicalization of words is also an issue. While giving examples in the bootstrapping procedure, say, for BIRTHDATE questions, the answer term could be written in many ways (for example, Gandhi’s birth date can be written as “1869”, “Oct. 2, 1869”, “2nd October 1869”, “October 2 1869”, and so on). Instead of enlisting all the possibilities a date tagger could be used to cluster all the variations and tag them with the same term. The same idea could also be extended for smoothing out the variations in the question term for names of persons (Gandhi could be written as “Mahatma Gandhi”, “Mohandas Karamchand Gandhi”, etc.). 6 Conclusion The web results easily outperform the TREC results. This suggests that there is a need to integrate the outputs of the Web and the TREC corpus. Since the output from the Web contains many correct answers among the top ones, a simple word count could help in eliminating many unlikely answers. This would work well for question types like BIRTHDATE or LOCATION but is not clear for question types like DEFINITION. The simplicity of this method makes it perfect for multilingual QA. Many tools required by sophisticated QA systems (named entity taggers, parsers, ontologies, etc.) are language specific and require significant effort to adapt to a new language. Since the answer patterns used in this method are learned using only a small number of manual training terms, one can rapidly learn patterns for new languages, assuming the web search engine is appropriately switched. Acknowledgements This work was supported by the Advanced Research and Development Activity (ARDA)'s Advanced Question Answering for Intelligence (AQUAINT) Program under contract number MDA908-02-C-0007. References Brill, E., J. Lin, M. Banko, S. Dumais, and A. Ng. 2001. Data-Intensive Question Answering. 
Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 183–189. Gusfield, D. 1997. Algorithms on Strings, Trees and Sequences: Computer Science and Computational Biology. Chapter 6: Linear Time construction of Suffix trees, 94–121. Harabagiu, S., D. Moldovan, M. Pasca, R. Mihalcea, M. Surdeanu, R. Buneascu, R. Gîrju, V. Rus and P. Morarescu. 2001. FALCON: Boosting Knowledge for Answer Engines. Proceedings of the 9th Text Retrieval Conference (TREC-9), NIST, 479–488. Hovy, E.H., U. Hermjakob, and C.-Y. Lin. 2001. The Use of External Knowledge in Factoid QA. Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 166– 174. Hovy, E.H., U. Hermjakob, and D. Ravichandran. 2002a. A Question/Answer Typology with Surface Text Patterns. Proceedings of the Human Language Technology (HLT) conference. San Diego, CA. Hovy, E.H., U. Hermjakob, C.-Y. Lin, and D. Ravichandran. 2002b. Using Knowledge to Facilitate Pinpointing of Factoid Answers. Proceedings of the COLING-2002 conference. Taipei, Taiwan. Lee, G.G., J. Seo, S. Lee, H. Jung, B-H. Cho, C. Lee, B-K. Kwak, J, Cha, D. Kim, J-H. An, H. Kim, and K. Kim. 2001. SiteQ: Engineering High Performance QA System Using LexicoSemantic Pattern Matching and Shallow NLP. Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 437–446. Lin, C-Y. 2002. The Effectiveness of Dictionary and Web-Based Answer Reranking. Proceedings of the COLING-2002 conference. Taipei, Taiwan. Prager, J. and J. Chu-Carroll. 2001. Use of WordNet Hypernyms for Answering What-Is Questions. Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 309– 316. Riloff, E. 1996. Automatically Generating Extraction Patterns from Untagged Text. Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI96), 1044–1049. Soubbotin, M.M. and S.M. Soubbotin. 2001. Patterns of Potential Answer Expressions as Clues to the Right Answer. Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 175–182. Srihari, R. and W. Li. 2000. A Question Answering System Supported by Information Extraction. Proceedings of the 1st Meeting of the North American Chapter of the Association for Computational Linguistics (ANLPNAACL-00), Seattle, WA, 166–172. Voorhees, E. 2001. Overview of the Question Answering Track. Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 157– 165. Wang, B., H. Xu, Z. Yang, Y. Liu, X. Cheng, D. Bu, and S. Bai. 2001. TREC-10 Experiments at CAS-ICT: Filtering, Web, and QA. Proceedings of the TREC-10 Conference. NIST, Gaithersburg, MD, 229–241.
2002
6
Named Entity Recognition using an HMM-based Chunk Tagger GuoDong Zhou Jian Su Laboratories for Information Technology 21 Heng Mui Keng Terrace Singapore 119613 [email protected] [email protected] Abstract This paper proposes a Hidden Markov Model (HMM) and an HMM-based chunk tagger, from which a named entity (NE) recognition (NER) system is built to recognize and classify names, times and numerical quantities. Through the HMM, our system is able to apply and integrate four types of internal and external evidences: 1) simple deterministic internal feature of the words, such as capitalization and digitalization; 2) internal semantic feature of important triggers; 3) internal gazetteer feature; 4) external macro context feature. In this way, the NER problem can be resolved effectively. Evaluation of our system on MUC-6 and MUC-7 English NE tasks achieves F-measures of 96.6% and 94.1% respectively. It shows that the performance is significantly better than reported by any other machine-learning system. Moreover, the performance is even consistently better than those based on handcrafted rules. 1 Introduction Named Entity (NE) Recognition (NER) is to classify every word in a document into some predefined categories and "none-of-the-above". In the taxonomy of computational linguistics tasks, it falls under the domain of "information extraction", which extracts specific kinds of information from documents as opposed to the more general task of "document management" which seeks to extract all of the information found in a document. Since entity names form the main content of a document, NER is a very important step toward more intelligent information extraction and management. The atomic elements of information extraction -- indeed, of language as a whole -- could be considered as the "who", "where" and "how much" in a sentence. NER performs what is known as surface parsing, delimiting sequences of tokens that answer these important questions. NER can also be used as the first step in a chain of processors: a next level of processing could relate two or more NEs, or perhaps even give semantics to that relationship using a verb. In this way, further processing could discover the "what" and "how" of a sentence or body of text. While NER is relatively simple and it is fairly easy to build a system with reasonable performance, there are still a large number of ambiguous cases that make it difficult to attain human performance. There has been a considerable amount of work on NER problem, which aims to address many of these ambiguity, robustness and portability issues. During last decade, NER has drawn more and more attention from the NE tasks [Chinchor95a] [Chinchor98a] in MUCs [MUC6] [MUC7], where person names, location names, organization names, dates, times, percentages and money amounts are to be delimited in text using SGML mark-ups. Previous approaches have typically used manually constructed finite state patterns, which attempt to match against a sequence of words in much the same way as a general regular expression matcher. Typical systems are Univ. of Sheffield's LaSIE-II [Humphreys+98], ISOQuest's NetOwl [Aone+98] [Krupha+98] and Univ. of Edinburgh's LTG [Mikheev+98] [Mikheev+99] for English NER. These systems are mainly rule-based. However, rule-based approaches lack the ability of coping with the problems of robustness and portability. Each new source of text requires significant tweaking of rules to maintain optimal performance and the maintenance costs could be quite steep. 
The current trend in NER is to use the machine-learning approach, which is more attractive in that it is trainable and adaptable, and the maintenance of a machine-learning system is much cheaper than that of a rule-based one. The representative machine-learning approaches used in NER are HMM (BBN's IdentiFinder in [Miller+98] [Bikel+99] and KRDL's system [Yu+98] for Chinese NER), Maximum Entropy (New York Univ.'s MENE in [Borthwick+98] [Borthwick99]) and Decision Tree (New York Univ.'s system in [Sekine98] and SRA's system in [Bennett+96]). Besides, a variant of Eric Brill's transformation-based rules [Brill95] has been applied to the problem [Aberdeen+95]. Among these approaches, the evaluation performance of HMM is higher than those of others. The main reason may be due to its better ability of capturing the locality of phenomena which indicates names in text. Moreover, HMM seems to be used more and more in NE recognition because of the efficiency of the Viterbi algorithm [Viterbi67] used in decoding the NE-class state sequence. However, the performance of a machine-learning system is always poorer than that of a rule-based one by about 2% [Chinchor95b] [Chinchor98b]. This may be because current machine-learning approaches capture important evidence behind the NER problem much less effectively than human experts who handcraft the rules, although machine-learning approaches always provide important statistical information that is not available to human experts.
As defined in [McDonald96], there are two kinds of evidence that can be used in NER to solve the ambiguity, robustness and portability problems described above. The first is the internal evidence found within the word and/or word string itself, while the second is the external evidence gathered from its context. In order to effectively apply and integrate internal and external evidence, we present a NER system using an HMM. The approach behind our NER system is based on the HMM-based chunk tagger in text chunking, which was ranked the best individual system [Zhou+00a] [Zhou+00b] in CoNLL'2000 [Tjong+00]. Here, a NE is regarded as a chunk, named "NE-Chunk". To date, our system has been successfully trained and applied in English NER. To our knowledge, our system outperforms any published machine-learning system. Moreover, our system even outperforms any published rule-based system.
The layout of this paper is as follows. Section 2 gives a description of the HMM and its application in NER: the HMM-based chunk tagger. Section 3 explains the word feature used to capture both the internal and external evidence. Section 4 describes the back-off schemes used to tackle the sparseness problem. Section 5 gives the experimental results of our system. Section 6 contains our remarks and possible extensions of the proposed work.

2 HMM-based Chunk Tagger
2.1 HMM Modeling
Given a token sequence G_1^n = g_1 g_2 \cdots g_n, the goal of NER is to find a stochastic optimal tag sequence T_1^n = t_1 t_2 \cdots t_n that maximizes

  \log P(T_1^n \mid G_1^n) = \log P(T_1^n) + \log \frac{P(T_1^n, G_1^n)}{P(T_1^n) \cdot P(G_1^n)}    (2-1)

The second item in (2-1) is the mutual information between T_1^n and G_1^n. In order to simplify the computation of this item, we assume mutual information independence:

  MI(T_1^n, G_1^n) = \sum_{i=1}^{n} MI(t_i, G_1^n)    (2-2)

or

  \log \frac{P(T_1^n, G_1^n)}{P(T_1^n) \cdot P(G_1^n)} = \sum_{i=1}^{n} \log \frac{P(t_i, G_1^n)}{P(t_i) \cdot P(G_1^n)}    (2-3)

Applying it to equation (2-1), we have:

  \log P(T_1^n \mid G_1^n) = \log P(T_1^n) - \sum_{i=1}^{n} \log P(t_i) + \sum_{i=1}^{n} \log P(t_i \mid G_1^n)    (2-4)

The basic premise of this model is to consider the raw text, encountered when decoding, as though it had passed through a noisy channel, where it had been originally marked with NE tags. The job of our generative model is to directly generate the original NE tags from the output words of the noisy channel. It is obvious that our generative model is the reverse of the generative model of the traditional HMM (see footnote 1), as used in BBN's IdentiFinder, which models the original process that generates the NE-class annotated words from the original NE tags. Another difference is that our model assumes mutual information independence (2-2) while the traditional HMM assumes conditional probability independence (I-1). Assumption (2-2) is much looser than assumption (I-1) because assumption (I-1) has the same effect as the sum of assumptions (2-2) and (I-3) (see footnote 2). In this way, our model can apply more context information to determine the tag of the current token.
From equation (2-4), we can see that: 1) The first item can be computed by applying chain rules. In ngram modeling, each tag is assumed to be probabilistically dependent on the N-1 previous tags. 2) The second item is the summation of log probabilities of all the individual tags. 3) The third item corresponds to the "lexical" component of the tagger.
We will not discuss the first and second items further in this paper. This paper will focus on the third item, \sum_{i=1}^{n} \log P(t_i \mid G_1^n), which is the main difference between our tagger and other traditional HMM-based taggers, as used in BBN's IdentiFinder. Ideally, it can be estimated by using the forward-backward algorithm [Rabiner89] recursively for the 1st-order [Rabiner89] or 2nd-order HMMs [Watson+92]. However, an alternative back-off modeling approach is applied instead in this paper (more details in section 4).

Footnote 1: In the traditional HMM, to maximise \log P(T_1^n \mid G_1^n), first we apply Bayes' rule:
  P(T_1^n \mid G_1^n) = \frac{P(T_1^n, G_1^n)}{P(G_1^n)}
and have:
  \arg\max_T \log P(T_1^n \mid G_1^n) = \arg\max_T \left( \log P(G_1^n \mid T_1^n) + \log P(T_1^n) \right)
Then we assume conditional probability independence:
  P(G_1^n \mid T_1^n) = \prod_{i=1}^{n} P(g_i \mid t_i)    (I-1)
and have:
  \arg\max_T \log P(T_1^n \mid G_1^n) = \arg\max_T \left( \sum_{i=1}^{n} \log P(g_i \mid t_i) + \log P(T_1^n) \right)    (I-2)

Footnote 2: We can obtain equation (I-2) from (2-4) by assuming
  \log P(t_i \mid G_1^n) = \log P(g_i \mid t_i)    (I-3)

2.2 HMM-based Chunk Tagger
Obviously, there exist some constraints between 1 − it and it on the boundary and entity categories, as shown in Table 1, where "valid" / "invalid" means the tag sequence i i t t 1 − is valid / invalid while "valid on" means i i t t 1 − is valid with an additional condition i i EC EC = −1 . Such constraints have been used in Viterbi decoding algorithm to ensure valid NE chunking. 0 1 2 3 0 Valid Valid Invalid Invalid 1 Invalid Invalid Valid on Valid on 2 Invalid Invalid Valid Valid 3 Valid Valid Invalid Invalid Table 1: Constraints between 1 − it and it (Column: 1 − i BC in 1 − it ; Row: i BC in it ) 3 Determining Word Feature As stated above, token is denoted as ordered pairs of word-feature and word itself: > =< i i i w f g , . Here, the word-feature is a simple deterministic computation performed on the word and/or word string with appropriate consideration of context as looked up in the lexicon or added to the context. In our model, each word-feature consists of several sub-features, which can be classified into internal sub-features and external sub-features. The internal sub-features are found within the word and/or word string itself to capture internal evidence while external sub-features are derived within the context to capture external evidence. 3.1 Internal Sub-Features Our model captures three types of internal sub-features: 1) 1 f : simple deterministic internal feature of the words, such as capitalization and digitalization; 2) 2 f : internal semantic feature of important triggers; 3) 3 f : internal gazetteer feature. 1) 1 f is the basic sub-feature exploited in this model, as shown in Table 2 with the descending order of priority. For example, in the case of non-disjoint feature classes such as ContainsDigitAndAlpha and ContainsDigitAndDash, the former will take precedence. The first eleven features arise from the need to distinguish and annotate monetary amounts, percentages, times and dates. The rest of the features distinguish types of capitalization and all other words such as punctuation marks. In particular, the FirstWord feature arises from the fact that if a word is capitalized and is the first word of the sentence, we have no good information as to why it is capitalized (but note that AllCaps and CapPeriod are computed before FirstWord, and take precedence.) This sub-feature is language dependent. Fortunately, the feature computation is an extremely small part of the implementation. This kind of internal sub-feature has been widely used in machine-learning systems, such as BBN's IdendiFinder and New York Univ.'s MENE. The rationale behind this sub-feature is clear: a) capitalization gives good evidence of NEs in Roman languages; b) Numeric symbols can automatically be grouped into categories. 2) 2 f is the semantic classification of important triggers, as seen in Table 3, and is unique to our system. It is based on the intuitions that important triggers are useful for NER and can be classified according to their semantics. This sub-feature applies to both single word and multiple words. This set of triggers is collected semi-automatically from the NEs and their local context of the training data. 3) Sub-feature 3 f , as shown in Table 4, is the internal gazetteer feature, gathered from the look-up gazetteers: lists of names of persons, organizations, locations and other kinds of named entities. 
This sub-feature can be determined by finding a match in the gazetteer of the corresponding NE type where n (in Table 4) represents the word number in the matched word string. In stead of collecting gazetteer lists from training data, we collect a list of 20 public holidays in several countries, a list of 5,000 locations from websites such as GeoHive3, a list of 10,000 organization names from websites such as Yahoo4 and a list of 10,000 famous people from websites such as Scope Systems5. Gazetters have been widely used in NER systems to improve performance. 3.2 External Sub-Features For external evidence, only one external macro context feature 4 f , as shown in Table 5, is captured in our model. 4 f is about whether and how the encountered NE candidate is occurred in the list of NEs already recognized from the document, as shown in Table 5 (n is the word number in the matched NE from the recognized NE list and m is the matched word number between the word string and the matched NE with the corresponding NE type.). This sub-feature is unique to our system. The intuition behind this is the phenomena of name alias. During decoding, the NEs already recognized from the document are stored in a list. When the system encounters a NE candidate, a name alias algorithm is invoked to dynamically determine its relationship with the NEs in the recognized list. Initially, we also consider part-of-speech (POS) sub-feature. However, the experimental result is disappointing that incorporation of POS even decreases the performance by 2%. This may be because capitalization information of a word is submerged in the muddy of several POS tags and the performance of POS tagging is not satisfactory, especially for unknown capitalized words (since many of NEs include unknown capitalized words.). Therefore, POS is discarded. 3 http://www.geohive.com/ 4 http://www.yahoo.com/ 5 http://www.scopesys.com/ Sub-Feature 1 f Example Explanation/Intuition OneDigitNum 9 Digital Number TwoDigitNum 90 Two-Digit year FourDigitNum 1990 Four-Digit year YearDecade 1990s Year Decade ContainsDigitAndAlpha A8956-67 Product Code ContainsDigitAndDash 09-99 Date ContainsDigitAndOneSlash 3/4 Fraction or Date ContainsDigitAndTwoSlashs 19/9/1999 DATE ContainsDigitAndComma 19,000 Money ContainsDigitAndPeriod 1.00 Money, Percentage OtherContainsDigit 123124 Other Number AllCaps IBM Organization CapPeriod M. Person Name Initial CapOtherPeriod St. Abbreviation CapPeriods N.Y. Abbreviation FirstWord First word of sentence No useful capitalization information InitialCap Microsoft Capitalized Word LowerCase Will Un-capitalized Word Other $ All other words Table 2: Sub-Feature 1 f : the Simple Deterministic Internal Feature of the Words NE Type (No of Triggers) Sub-Feature 2 f Example Explanation/Intuition PERCENT (5) SuffixPERCENT % Percentage Suffix PrefixMONEY $ Money Prefix MONEY (298) SuffixMONEY Dollars Money Suffix SuffixDATE Day Date Suffix WeekDATE Monday Week Date MonthDATE July Month Date SeasonDATE Summer Season Date PeriodDATE1 Month Period Date PeriodDATE2 Quarter Quarter/Half of Year EndDATE Weekend Date End DATE (52) ModifierDATE Fiscal Modifier of Date SuffixTIME a.m. Time Suffix TIME (15) PeriodTime Morning Time Period PrefixPERSON1 Mr. Person Title PrefixPERSON2 President Person Designation PERSON (179) FirstNamePERSON Micheal Person First Name LOC (36) SuffixLOC River Location Suffix ORG (177) SuffixORG Ltd Organization Suffix Others (148) Cardinal, Ordinal, etc. 
Six,, Sixth Cardinal and Ordinal Numbers Table 3: Sub-Feature 2 f : the Semantic Classification of Important Triggers NE Type (Size of Gazetteer) Sub-Feature 3 f Example DATE (20) DATEnGn Christmas Day: DATE2G2 PERSON (10,000) PERSONnGn Bill Gates: PERSON2G2 LOC (5,000) LOCnGn Beijing: LOC1G1 ORG (10,000) ORGnGn United Nation: ORG2G2 Table 4: Sub-Feature 3 f : the Internal Gazetteer Feature (G means Global gazetteer) NE Type Sub-Feature Example PERSON PERSONnLm Gates: PERSON2L1 ("Bill Gates" already recognized as a person name) LOC LOCnLm N.J.: LOC2L2 ("New Jersey" already recognized as a location name) ORG ORGnLm UN: ORG2L2 ("United Nation" already recognized as a org name) Table 5: Sub-feature 4 f : the External Macro Context Feature (L means Local document) 4 Back-off Modeling Given the model in section 2 and word feature in section 3, the main problem is how to compute ∑ = n i n i G t P 1 1 ) / ( . Ideally, we would have sufficient training data for every event whose conditional probability we wish to calculate. Unfortunately, there is rarely enough training data to compute accurate probabilities when decoding on new data, especially considering the complex word feature described above. In order to resolve the sparseness problem, two levels of back-off modeling are applied to approximate ) / ( 1 n i G t P : 1) First level back-off scheme is based on different contexts of word features and words themselves, and n G1 in ) / ( 1 n i G t P is approximated in the descending order of i i i i w f f f 1 2 − − , 2 1 + + i i i i f f w f , i i i w f f 1 − , 1 + i i i f w f , i i i f w f 1 1 − − , 1 1 + + i i i w f f , i i i f f f 1 2 − − , 2 1 + + i i i f f f , i iw f , i i i f f f 1 2 − − , 1 + i i f f and if . 2) The second level back-off scheme is based on different combinations of the four sub-features described in section 3, and kf is approximated in the descending order of 4 3 2 1 k k k k f f f f , 3 1 k k f f , 4 1 k k f f , 2 1 k k f f and 1 kf . 5 Experimental Results In this section, we will report the experimental results of our system for English NER on MUC-6 and MUC-7 NE shared tasks, as shown in Table 6, and then for the impact of training data size on performance using MUC-7 training data. For each experiment, we have the MUC dry-run data as the held-out development data and the MUC formal test data as the held-out test data. For both MUC-6 and MUC-7 NE tasks, Table 7 shows the performance of our system using MUC evaluation while Figure 1 gives the comparisons of our system with others. Here, the precision (P) measures the number of correct NEs in the answer file over the total number of NEs in the answer file and the recall (R) measures the number of correct NEs in the answer file over the total number of NEs in the key file while F-measure is the weighted harmonic mean of precision and recall: P R RP F + + = 2 2 )1 ( β β with 2 β =1. It shows that the performance is significantly better than reported by any other machine-learning system. Moreover, the performance is consistently better than those based on handcrafted rules. 
Statistics (KB) Training Data Dry Run Data Formal Test Data MUC-6 1330 121 124 MUC-7 708 156 561 Table 6: Statistics of Data from MUC-6 and MUC-7 NE Tasks F P R MUC-6 96.6 96.3 96.9 MUC-7 94.1 93.7 94.5 Table 7: Performance of our System on MUC-6 and MUC-7 NE Tasks Composition F P R 1 f f = 77.6 81.0 74.1 2 1 f f f = 87.4 88.6 86.1 3 2 1 f f f f = 89.3 90.5 88.2 4 2 1 f f f f = 92.9 92.6 93.1 4 3 2 1 f f f f f = 94.1 93.7 94.5 Table 8: Impact of Different Sub-Features With any learning technique, one important question is how much training data is required to achieve acceptable performance. More generally how does the performance vary as the training data size changes? The result is shown in Figure 2 for MUC-7 NE task. It shows that 200KB of training data would have given the performance of 90% while reducing to 100KB would have had a significant decrease in the performance. It also shows that our system still has some room for performance improvement. This may be because of the complex word feature and the corresponding sparseness problem existing in our system. Figure 1: Comparison of our system with others on MUC-6 and MUC-7 NE tasks 80 85 90 95 100 80 85 90 95 100 Recall Precision Our MUC-6 System Our MUC-7 System Other MUC-6 Systems Other MUC-7 Syetems Figure 2: Impact of Various Training Data on Performance 80 85 90 95 100 100 200 300 400 500 600 700 800 Training Data Size(KB) F-measure MUC-7 Another important question is about the effect of different sub-features. Table 8 answers the question on MUC-7 NE task: 1) Applying only 1 f gives our system the performance of 77.6%. 2) 2 f is very useful for NER and increases the performance further by 10% to 87.4%. 3) 4 f is impressive too with another 5.5% performance improvement. 4) However, 3 f contributes only further 1.2% to the performance. This may be because information included in 3 f has already been captured by 2 f and 4 f . Actually, the experiments show that the contribution of 3 f comes from where there is no explicit indicator information in/around the NE and there is no reference to other NEs in the macro context of the document. The NEs contributed by 3 f are always well-known ones, e.g. Microsoft, IBM and Bach (a composer), which are introduced in texts without much helpful context. 6 Conclusion This paper proposes a HMM in that a new generative model, based on the mutual information independence assumption (2-3) instead of the conditional probability independence assumption (I-1) after Bayes' rule, is applied. Moreover, it shows that the HMM-based chunk tagger can effectively apply and integrate four different kinds of sub-features, ranging from internal word information to semantic information to NE gazetteers to macro context of the document, to capture internal and external evidences for NER problem. It also shows that our NER system can reach "near human performance". To our knowledge, our NER system outperforms any published machine-learning system and any published rule-based system. While the experimental results have been impressive, there is still much that can be done potentially to improve the performance. In the near feature, we would like to incorporate the following into our system: • List of domain and application dependent person, organization and location names. • More effective name alias algorithm. • More effective strategy to the back-off modeling and smoothing. References [Aberdeen+95] J. Aberdeen, D. Day, L. Hirschman, P. Robinson and M. Vilain. MITRE: Description of the Alembic System Used for MUC-6. 
MUC-6. Pages141-155. Columbia, Maryland. 1995. [Aone+98] C. Aone, L. Halverson, T. Hampton, M. Ramos-Santacruz. SRA: Description of the IE2 System Used for MUC-7. MUC-7. Fairfax, Virginia. 1998. [Bennett+96] S.W. Bennett, C. Aone and C. Lovell. Learning to Tag Multilingual Texts Through Observation. EMNLP'1996. Pages109-116. Providence, Rhode Island. 1996. [Bikel+99] Daniel M. Bikel, Richard Schwartz and Ralph M. Weischedel. An Algorithm that Learns What's in a Name. Machine Learning (Special Issue on NLP). 1999. [Borthwick+98] A. Borthwick, J. Sterling, E. Agichtein, R. Grishman. NYU: Description of the MENE Named Entity System as Used in MUC-7. MUC-7. Fairfax, Virginia. 1998. [Borthwick99] Andrew Borthwick. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. Thesis. New York University. September, 1999. [Brill95] Eric Brill. Transform-based Error-Driven Learning and Natural Language Processing: A Case Study in Part-of-speech Tagging. Computational Linguistics 21(4). Pages543-565. 1995. [Chinchor95a] Nancy Chinchor. MUC-6 Named Entity Task Definition (Version 2.1). MUC-6. Columbia, Maryland. 1995. [Chinchor95b] Nancy Chinchor. Statistical Significance of MUC-6 Results. MUC-6. Columbia, Maryland. 1995. [Chinchor98a] Nancy Chinchor. MUC-7 Named Entity Task Definition (Version 3.5). MUC-7. Fairfax, Virginia. 1998. [Chinchor98b] Nancy Chinchor. Statistical Significance of MUC-7 Results. MUC-7. Fairfax, Virginia. 1998. [Humphreys+98] K. Humphreys, R. Gaizauskas, S. Azzam, C. Huyck, B. Mitchell, H. Cunningham, Y. Wilks. Univ. of Sheffield: Description of the LaSIE-II System as Used for MUC-7. MUC-7. Fairfax, Virginia. 1998. [Krupka+98] G. R. Krupka, K. Hausman. IsoQuest Inc.: Description of the NetOwlTM Extractor System as Used for MUC-7. MUC-7. Fairfax, Virginia. 1998. [McDonald96] D. McDonald. Internal and External Evidence in the Identification and Semantic Categorization of Proper Names. In B. Boguraev and J. Pustejovsky editors: Corpus Processing for Lexical Acquisition. Pages21-39. MIT Press. Cambridge, MA. 1996. [Miller+98] S. Miller, M. Crystal, H. Fox, L. Ramshaw, R. Schwartz, R. Stone, R. Weischedel, and the Annotation Group. BBN: Description of the SIFT System as Used for MUC-7. MUC-7. Fairfax, Virginia. 1998. [Mikheev+98] A. Mikheev, C. Grover, M. Moens. Description of the LTG System Used for MUC-7. MUC-7. Fairfax, Virginia. 1998. [Mikheev+99] A. Mikheev, M. Moens, and C. Grover. Named entity recognition without gazeteers. EACL'1999. Pages1-8. Bergen, Norway. 1999. [MUC6] Morgan Kaufmann Publishers, Inc. Proceedings of the Sixth Message Understanding Conference (MUC-6). Columbia, Maryland. 1995. [MUC7] Morgan Kaufmann Publishers, Inc. Proceedings of the Seventh Message Understanding Conference (MUC-7). Fairfax, Virginia. 1998. [Rabiner89] L. Rabiner. A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition”. IEEE 77(2). Pages257-285. 1989. [Sekine98] Satoshi Sekine. Description of the Japanese NE System Used for MET-2. MUC-7. Fairfax, Virginia. 1998. [Tjong+00] Erik F. Tjong Kim Sang and Sabine Buchholz. Introduction to the CoNLL-2000 Shared Task: Chunking. CoNLL'2000. Pages127-132. Lisbon, Portugal. 11-14 Sept 2000. [Viterbi67] A. J. Viterbi. Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm. IEEE Transactions on Information Theory. IT(13). Pages260-269, April 1967. [Watson+92] B. Watson and Tsoi A Chunk. Second Order Hidden Markov Models for Speech Recognition”. 
Proceeding of 4th Australian International Conference on Speech Science and Technology. Pages146-151. 1992. [Yu+98] Yu Shihong, Bai Shuanhu and Wu Paul. Description of the Kent Ridge Digital Labs System Used for MUC-7. MUC-7. Fairfax, Virginia. 1998. [Zhou+00] Zhou GuoDong, Su Jian and Tey TongGuan. Hybrid Text Chunking. CoNLL'2000. Pages163-166. Lisbon, Portugal, 11-14 Sept 2000. [Zhou+00b] Zhou GuoDong and Su Jian, Error-driven HMM-based Chunk Tagger with Context-dependent Lexicon. EMNLP/ VLC'2000. Hong Kong, 7-8 Oct 2000.
2002
60
Teaching a Weaker Classifier: Named Entity Recognition on Upper Case Text Hai Leong Chieu DSO National Laboratories 20 Science Park Drive Singapore 118230 [email protected] Hwee Tou Ng Department of Computer Science School of Computing National University of Singapore 3 Science Drive 2 Singapore 117543 [email protected] Abstract This paper describes how a machinelearning named entity recognizer (NER) on upper case text can be improved by using a mixed case NER and some unlabeled text. The mixed case NER can be used to tag some unlabeled mixed case text, which are then used as additional training material for the upper case NER. We show that this approach reduces the performance gap between the mixed case NER and the upper case NER substantially, by 39% for MUC-6 and 22% for MUC-7 named entity test data. Our method is thus useful in improving the accuracy of NERs on upper case text, such as transcribed text from automatic speech recognizers where case information is missing. 1 Introduction In this paper, we propose using a mixed case named entity recognizer (NER) that is trained on labeled text, to further train an upper case NER. In the Sixth and Seventh Message Understanding Conferences (MUC-6, 1995; MUC-7, 1998), the named entity task consists of labeling named entities with the classes PERSON, ORGANIZATION, LOCATION, DATE, TIME, MONEY, and PERCENT. We conducted experiments on upper case named entity recognition, and showed how unlabeled mixed case text can be used to improve the results of an upper case NER on the official MUC-6 and MUC-7 Mixed Case: Consuela Washington, a longtime House staffer and an expert in securities laws, is a leading candidate to be chairwoman of the Securities and Exchange Commission in the Clinton administration. Upper Case: CONSUELA WASHINGTON, A LONGTIME HOUSE STAFFER AND AN EXPERT IN SECURITIES LAWS, IS A LEADING CANDIDATE TO BE CHAIRWOMAN OF THE SECURITIES AND EXCHANGE COMMISSION IN THE CLINTON ADMINISTRATION. Figure 1: Examples of mixed and upper case text test data. Besides upper case text, this approach can also be applied on transcribed text from automatic speech recognizers in Speech Normalized Orthographic Representation (SNOR) format, or from optical character recognition (OCR) output. For the English language, a word starting with a capital letter often designates a named entity. Upper case NERs do not have case information to help them to distinguish named entities from non-named entities. When data is sparse, many named entities in the test data would be unknown words. This makes upper case named entity recognition more difficult than mixed case. Even a human would experience greater difficulty in annotating upper case text than mixed case text (Figure 1). We propose using a mixed case NER to “teach” an upper case NER, by making use of unlabeled mixed case text. With the abundance of mixed case un Computational Linguistics (ACL), Philadelphia, July 2002, pp. 481-488. Proceedings of the 40th Annual Meeting of the Association for labeled texts available in so many corpora and on the Internet, it will be easy to apply our approach to improve the performance of NER on upper case text. Our approach does not satisfy the usual assumptions of co-training (Blum and Mitchell, 1998). Intuitively, however, one would expect some information to be gained from mixed case unlabeled text, where case information is helpful in pointing out new words that could be named entities. 
We show empirically that such an approach can indeed improve the performance of an upper case NER. In Section 5, we show that for MUC-6, this way of using unlabeled text can bring a relative reduction in errors of 38.68% between the upper case and mixed case NERs. For MUC-7 the relative reduction in errors is 22.49%. 2 Related Work Considerable amount of work has been done in recent years on NERs, partly due to the Message Understanding Conferences (MUC-6, 1995; MUC-7, 1998). Machine learning methods such as BBN’s IdentiFinder (Bikel, Schwartz, and Weischedel, 1999) and Borthwick’s MENE (Borthwick, 1999) have shown that machine learning NERs can achieve comparable performance with systems using hand-coded rules. Bikel, Schwartz, and Weischedel (1999) have also shown how mixed case text can be automatically converted to upper case SNOR or OCR format to train NERs to work on such formats. There is also some work on unsupervised learning for mixed case named entity recognition (Collins and Singer, 1999; Cucerzan and Yarowsky, 1999). Collins and Singer (1999) investigated named entity classification using Adaboost, CoBoost, and the EM algorithm. However, features were extracted using a parser, and performance was evaluated differently (the classes were person, organization, location, and noise). Cucerzan and Yarowsky (1999) built a cross language NER, and the performance on English was low compared to supervised single-language NER such as IdentiFinder. We suspect that it will be hard for purely unsupervised methods to perform as well as supervised ones. Seeger (2001) gave a comprehensive summary of recent work in learning with labeled and unlabeled data. There is much recent research on co-training, such as (Blum and Mitchell, 1998; Collins and Singer, 1999; Pierce and Cardie, 2001). Most cotraining methods involve using two classifiers built on different sets of features. Instead of using distinct sets of features, Goldman and Zhou (2000) used different classification algorithms to do co-training. Blum and Mitchell (1998) showed that in order for PAC-like guarantees to hold for co-training, features should be divided into two disjoint sets satisfying: (1) each set is sufficient for a classifier to learn a concept correctly; and (2) the two sets are conditionally independent of each other. Each set of features can be used to build a classifier, resulting in two independent classifiers, A and B. Classifications by A on unlabeled data can then be used to further train classifier B, and vice versa. Intuitively, the independence assumption is there so that the classifications of A would be informative to B. When the independence assumption is violated, the decisions of A may not be informative to B. In this case, the positive effect of having more data may be offset by the negative effect of introducing noise into the data (classifier A might not be always correct). Nigam and Ghani (2000) investigated the difference in performance with and without a feature split, and showed that co-training with a feature split gives better performance. However, the comparison they made is between co-training and self-training. In self-training, only one classifier is used to tag unlabeled data, after which the more confidently tagged data is reused to train the same classifier. Many natural language processing problems do not show the natural feature split displayed by the web page classification task studied in previous cotraining work. Our work does not really fall under the paradigm of co-training. 
Instead of co-operation between two classifiers, we used a stronger classifier to teach a weaker one. In addition, it exhibits the following differences: (1) the features are not at all independent (upper case features can be seen as a subset of the mixed case features); and (2) The additional features available to the mixed case system will never be available to the upper case system. Co-training often involves combining the two different sets of features to obtain a final system that outperforms either system alone. In our context, however, the upper case system will never have access to some of the case-based features available to the mixed case system. Due to the above reason, it is unreasonable to expect the performance of the upper case NER to match that of the mixed case NER. However, we still manage to achieve a considerable reduction of errors between the two NERs when they are tested on the official MUC-6 and MUC-7 test data. 3 System Description We use the maximum entropy framework to build two classifiers: an upper case NER and a mixed case NER. The upper case NER does not have access to case information of the training and test data, and hence cannot make use of all the features used by the mixed case NER. We will first describe how the mixed case NER is built. More details of this mixed case NER and its performance are given in (Chieu and Ng, 2002). Our approach is similar to the MENE system of (Borthwick, 1999). Each word is assigned a name class based on its features. Each name class is subdivided into 4 classes, i.e., N begin, N continue, N end, and N unique. Hence, there is a total of 29 classes (7 name classes  4 sub-classes  1 not-a-name class). 3.1 Maximum Entropy The maximum entropy framework estimates probabilities based on the principle of making as few assumptions as possible, other than the constraints imposed. Such constraints are derived from training data, expressing some relationship between features and outcome. The probability distribution that satisfies the above property is the one with the highest entropy. It is unique, agrees with the maximumlikelihood distribution, and has the exponential form (Della Pietra, Della Pietra, and Lafferty, 1997):            "!$# %'&  ( where  refers to the outcome, the history (or context), and    is a normalization function. In addition, each feature function )   ( $ is a binary function. For example, in predicting if a word belongs to a word class,  is either true or false, and refers to the surrounding context: )   ( *  ,+  if  = true, previous word = the otherwise The parameters   are estimated by a procedure called Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972). This is an iterative method that improves the estimation of the parameters at each iteration. 3.2 Features for Mixed Case NER The features we used can be divided into 2 classes: local and global. Local features are features that are based on neighboring tokens, as well as the token itself. Global features are extracted from other occurrences of the same token in the whole document. Features in the maximum entropy framework are binary. Feature selection is implemented using a feature cutoff: features seen less than a small count during training will not be used. We group the features used into feature groups. Each group can be made up of many binary features. For each token . , zero, one, or more of the features in each group are set to 1. The local feature groups are: Non-Contextual Feature: This feature is set to 1 for all tokens. 
This feature imposes constraints that are based on the probability of each name class during training. Zone: MUC data contains SGML tags, and a document is divided into zones (e.g., headlines and text zones). The zone to which a token belongs is used as a feature. For example, in MUC-6, there are four zones (TXT, HL, DATELINE, DD). Hence, for each token, one of the four features zone-TXT, zone-HL, zone-DATELINE, or zone-DD is set to 1, and the other 3 are set to 0. Case and Zone: If the token . starts with a capital letter (initCaps), then an additional feature (initCaps, zone) is set to 1. If it is made up of all capital letters, then (allCaps, zone) is set to 1. If it contains both upper and lower case letters, then (mixedCaps, zone) is set to 1. A token that is allCaps will also be initCaps. This group consists of (3  total number of possible zones) features. Case and Zone of .0/  and .21  : Similarly, if .0/  (or .31  ) is initCaps, a feature (initCaps, Token satisfies Example Feature Starts with a capital Mr. InitCapletter, ends with a period Period Contains only one A OneCap capital letter All capital letters and CORP. AllCapsperiod Period Contains a digit AB3, Contain747 Digit Made up of 2 digits 99 TwoD Made up of 4 digits 1999 FourD Made up of digits 01/01 Digitand slash slash Contains a dollar sign US$20 Dollar Contains a percent sign 20% Percent Contains digit and period $US3.20 DigitPeriod Table 1: Features based on the token string zone) 457698 (or (initCaps, zone) :7;<5= ) is set to 1, etc. Token Information: This group consists of 10 features based on the string . , as listed in Table 1. For example, if a token starts with a capital letter and ends with a period (such as Mr.), then the feature InitCapPeriod is set to 1, etc. First Word: This feature group contains only one feature firstword. If the token is the first word of a sentence, then this feature is set to 1. Otherwise, it is set to 0. Lexicon Feature: The string of the token . is used as a feature. This group contains a large number of features (one for each token string present in the training data). At most one feature in this group will be set to 1. If . is seen infrequently during training (less than a small count), then . will not selected as a feature and all features in this group are set to 0. Lexicon Feature of Previous and Next Token: The string of the previous token . 1  and the next token .>/  is used with the initCaps information of . . If . has initCaps, then a feature (initCaps, .?/  ) 4<5768 is set to 1. If . is not initCaps, then (notinitCaps, .>/  ) 4568 is set to 1. Same for .01  . In the case where the next token ./  is a hyphen, then .?/A@ is also used as a feature: (initCaps, .B/A@ ) 457698 is set to 1. This is because in many cases, the use of hyphens can be considered to be optional (e.g., “third-quarter” or “third quarter”). Out-of-Vocabulary: We derived a lexicon list from WordNet 1.6, and words that are not found in this list have a feature out-of-vocabulary set to 1. Dictionaries: Due to the limited amount of training material, name dictionaries have been found to be useful in the named entity task. The sources of our dictionaries are listed in Table 2. A token . is tested against the words in each of the four lists of location names, corporate names, person first names, and person last names. If . is found in a list, the corresponding feature for that list will be set to 1. 
For example, if Barry is found in the list of person first names, then the feature PersonFirstName will be set to 1. Similarly, the tokens .C/  and .D1  are tested against each list, and if found, a corresponding feature will be set to 1. For example, if .B/  is found in the list of person first names, the feature PersonFirstName 4<57698 is set to 1. Month Names, Days of the Week, and Numbers: If . is one of January, February, . . ., December, then the feature MonthName is set to 1. If . is one of Monday, Tuesday, . . ., Sunday, then the feature DayOfTheWeek is set to 1. If . is a number string (such as one, two, etc), then the feature NumberString is set to 1. Suffixes and Prefixes: This group contains only two features: Corporate-Suffix and Person-Prefix. Two lists, Corporate-Suffix-List (for corporate suffixes) and Person-Prefix-List (for person prefixes), are collected from the training data. For a token . that is in a consecutive sequence of initCaps tokens  .21 E (GFGFGFH( . (GFGFGFH( .?/ I , if any of the tokens from .?/  to .0/ I is in Corporate-Suffix-List, then a feature Corporate-Suffix is set to 1. If any of the tokens from .?1 E?1  to .31  is in Person-Prefix-List, then another feature Person-Prefix is set to 1. Note that we check for .>1 E?1  , the word preceding the consecutive sequence of initCaps tokens, since person prefixes like Mr., Dr. etc are not part of person names, whereas corporate suffixes like Corp., Inc. etc are part of corporate names. The global feature groups are: InitCaps of Other Occurrences: There are 2 features in this group, checking for whether the first occurrence of the same word in an unambiguous posiDescription Source Location Names http://www.timeanddate.com http://www.cityguide.travel-guides.com http://www.worldtravelguide.net Corporate Names http://www.fmlx.com Person First Names http://www.census.gov/genealogy/names Person Last Names Table 2: Sources of Dictionaries tion (non first-words in the TXT or TEXT zones) in the same document is initCaps or not-initCaps. For a word whose initCaps might be due to its position rather than its meaning (in headlines, first word of a sentence, etc), the case information of other occurrences might be more accurate than its own. Corporate Suffixes and Person Prefixes of Other Occurrences: With the same CorporateSuffix-List and Person-Prefix-List used in local features, for a token . seen elsewhere in the same document with one of these suffixes (or prefixes), another feature Other-CS (or Other-PP) is set to 1. Acronyms: Words made up of all capitalized letters in the text zone will be stored as acronyms (e.g., IBM). The system will then look for sequences of initial capitalized words that match the acronyms found in the whole document. Such sequences are given additional features of A begin, A continue, or A end, and the acronym is given a feature A unique. For example, if “FCC” and “Federal Communications Commission” are both found in a document, then “Federal” has A begin set to 1, “Communications” has A continue set to 1, “Commission” has A end set to 1, and “FCC” has A unique set to 1. Sequence of Initial Caps: In the sentence “Even News Broadcasting Corp., noted for its accurate reporting, made the erroneous announcement.”, a NER may mistake “Even News Broadcasting Corp.” as an organization name. However, it is unlikely that other occurrences of “News Broadcasting Corp.” in the same document also co-occur with “Even”. This group of features attempts to capture such information. 
For every sequence of initial capitalized words, its longest substring that occurs in the same document is identified. For this example, since the sequence “Even News Broadcasting Corp.” only appears once in the document, its longest substring that occurs in the same document is “News Broadcasting Corp.”. In this case, “News” has an additional feature of I begin set to 1, “Broadcasting” has an additional feature of I continue set to 1, and “Corp.” has an additional feature of I end set to 1.
Unique Occurrences and Zone: This group of features indicates whether the word w is unique in the whole document. w needs to be in initCaps to be considered for this feature. If w is unique, then a feature (Unique, Zone) is set to 1, where Zone is the document zone where w appears.

3.3 Features for Upper Case NER
All features used for the mixed case NER are used by the upper case NER, except those that require case information. Among local features, Case and Zone, InitCapPeriod, and OneCap are not used by the upper case NER. Among global features, only Other-CS and Other-PP are used for the upper case NER, since the other global features require case information. For Corporate-Suffix and Person-Prefix, as the sequence of initCaps is not available in upper case text, only the next word (previous word) is tested for Corporate-Suffix (Person-Prefix).

3.4 Testing
During testing, it is possible that the classifier produces a sequence of inadmissible classes (e.g., person begin followed by location unique). To eliminate such sequences, we define a transition probability between word classes P(c_i | c_j) to be equal to 1 if the sequence is admissible, and 0 otherwise. The probability of the classes c_1, ..., c_n assigned to the words in a sentence s in a document D is defined as follows:

P(c_1, ..., c_n | s, D) = Π_{i=1..n} P(c_i | s, D) × P(c_i | c_{i-1}),

where P(c_i | s, D) is determined by the maximum entropy classifier. A dynamic programming algorithm is then used to select the sequence of word classes with the highest probability.

Figure 2: The whole process of re-training the upper case NER (in the figure, a marker indicates where text is converted to upper case before processing).

4 Teaching Process
The teaching process is illustrated in Figure 2. This process can be divided into the following steps:
Training NERs. First, a mixed case NER (MNER) is trained from some initial corpus C, manually tagged with named entities. This corpus is also converted to upper case in order to train another upper case NER (UNER). UNER is required by our method of example selection.
Baseline Test on Unlabeled Data. Apply the trained MNER on some unlabeled mixed case texts to produce mixed case texts that are machine-tagged with named entities (text-mner-tagged). Convert the original unlabeled mixed case texts to upper case, and similarly apply the trained UNER on these texts to obtain upper case texts machine-tagged with named entities (text-uner-tagged).
Example Selection. Compare text-mner-tagged and text-uner-tagged and select tokens in which the classification by MNER differs from that of UNER. The class assigned by MNER is considered to be correct, and will be used as new training data. These tokens are collected into a set C'.
Retraining for Final Upper Case NER. Both C and C' are used to retrain an upper case NER. However, tokens from C are given a weight of 2 (i.e., each token is used twice in the training data), and tokens from C' a weight of 1, since C is more reliable than C' (human-tagged versus machine-tagged).
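The four steps can be made concrete with a short Python sketch. The train_ner function and the tag method are hypothetical stand-ins for the maximum entropy training and decoding described in Section 3, and, to keep the sketch short, the selected tokens are collected without their surrounding sentence context (in the procedure above they keep their context); this is an illustration of the steps, not the authors' implementation.

# Sketch of the teaching process (Section 4), assuming a hypothetical
# train_ner(examples) -> model interface with model.tag(tokens) -> tags.
def teach_upper_case_ner(labeled_corpus, unlabeled_sentences, train_ner):
    # labeled_corpus: list of (tokens, tags) pairs in mixed case (corpus C)
    # unlabeled_sentences: list of mixed case token lists, untagged

    # Training NERs: train MNER on C and UNER on an upper-cased copy of C.
    mner = train_ner(labeled_corpus)
    upper_corpus = [([t.upper() for t in toks], tags)
                    for toks, tags in labeled_corpus]
    uner = train_ner(upper_corpus)

    # Baseline test on unlabeled data: tag the unlabeled text with both NERs.
    selected = []                                   # becomes C'
    for toks in unlabeled_sentences:
        mner_tags = mner.tag(toks)
        uner_tags = uner.tag([t.upper() for t in toks])
        # Example selection: keep tokens where the two NERs disagree,
        # trusting the mixed case NER's decision.
        for tok, m_tag, u_tag in zip(toks, mner_tags, uner_tags):
            if m_tag != u_tag:
                selected.append((tok.upper(), m_tag))

    # Retraining: C gets weight 2 (simply duplicated), C' gets weight 1.
    # (Selected tokens are collapsed to single-token examples here only to
    # keep the sketch short.)
    retrain_data = upper_corpus * 2 + [([tok], [tag]) for tok, tag in selected]
    return train_ner(retrain_data)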
5 Experimental Results For manually labeled data (corpus C), we used only the official training data provided by the MUC-6 and MUC-7 conferences, i.e., using MUC-6 training data and testing on MUC-6 test data, and using MUC-7 training data and testing on MUC-7 test data.1 The task definitions for MUC-6 and MUC7 are not exactly identical, so we could not combine the training data. The original MUC-6 training data has a total of approximately 160,000 tokens and 1MUC data can be obtained from the Linguistic Data Consortium: http://www.ldc.upenn.edu Figure 3: Improvements in F-measure on MUC-6 plotted against amount of selected unlabeled data used MUC-7 a total of approximately 180,000 tokens. The unlabeled text is drawn from the TREC (Text REtrieval Conference) corpus, 1992 Wall Street Journal section. We have used a total of 4,893 articles with a total of approximately 2,161,000 tokens. After example selection, this reduces the number of tokens to approximately 46,000 for MUC-6 and 67,000 for MUC-7. Figure 3 and Figure 4 show the results for MUC-6 and MUC-7 obtained, plotted against the number of unlabeled instances used. As expected, it increases the recall in each domain, as more names or their contexts are learned from unlabeled data. However, as more unlabeled data is used, precision drops due to the noise introduced in the machine tagged data. For MUC-6, F-measure performance peaked at the point where 30,000 tokens of machine labeled data are added to the original manually tagged 160,000 tokens. For MUC-7, performance peaked at 20,000 tokens of machine labeled data, added to the original manually tagged 180,000 tokens. The improvements achieved are summarized in Table 3. It is clear from the table that this method of using unlabeled data brings considerable improvement for both MUC-6 and MUC-7 named entity task. The result of the teaching process for MUC-6 is a lot better than that of MUC-7. We think that this is Figure 4: Improvements in F-measure on MUC-7 plotted against amount of selected unlabeled data used Systems MUC-6 MUC-7 Baseline Upper Case NER 87.97% 79.86% Best Taught Upper Case NER 90.02% 81.52% Mixed case NER 93.27% 87.24% Reduction in relative error 38.68% 22.49% Table 3: F-measure on MUC-6 and MUC-7 test data due to the following reasons: Better Mixed Case NER for MUC-6 than MUC-7. The mixed case NER trained on the MUC6 officially released training data achieved an Fmeasure of 93.27% on the official MUC-6 test data, while that of MUC-7 (also trained on only the official MUC-7 training data) achieved an F-measure of only 87.24%. As the mixed case NER is used as the teacher, a bad teacher does not help as much. Domain Shift in MUC-7. Another possible cause is that there is a domain shift in MUC-7 for the formal test (training articles are aviation disasters articles and test articles are missile/rocket launch articles). The domain of the MUC-7 test data is also very specific, and hence it might exhibit different properties from the training and the unlabeled data. The Source of Unlabeled Data. The unlabeled data used is from the same source as MUC-6, but different for MUC-7 (MUC-6 articles and the unlabeled articles are all Wall Street Journal articles, whereas MUC-7 articles are New York Times articles). 6 Conclusion In this paper, we have shown that the performance of NERs on upper case text can be improved by using a mixed case NER with unlabeled text. Named entity recognition on mixed case text is easier than on upper case text, where case information is unavailable. 
By using the teaching process, we can reduce the performance gap between mixed and upper case NER by as much as 39% for MUC-6 and 22% for MUC-7. This approach can be used to improve the performance of NERs on speech recognition output, or even for other tasks such as part-of-speech tagging, where case information is helpful. With the abundance of unlabeled text available, such an approach requires no additional annotation effort, and hence is easily applicable. This way of teaching a weaker classifier can also be used in other domains, where the task is to infer V W X , and an abundance of unlabeled data P ZY V ( \[ is available. If one possesses a second classifier  V (  W X such that  provides additional “useful” information that can be utilized by this second classifier, then one can use this second classifier to automatically tag the unlabeled data P , and select from P examples that can be used to supplement the training data for training V]W^X . References Daniel M. Bikel, Richard Schwartz, and Ralph M. Weischedel. 1999. An Algorithm that Learns What’s in a Name. Machine Learning, 34(1/2/3):211231. Avrim Blum and Tom Mitchell. 1998. Combining Labeled and Unlabeled Data with Co-Training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, 92-100. Andrew Borthwick. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. dissertation. Computer Science Department. New York University. Hai Leong Chieu and Hwee Tou Ng. 2002. Named Entity Recognition: A Maximum Entropy Approach Using Global Information. To appear in Proceedings of the Nineteenth International Conference on Computational Linguistics. Michael Collins and Yoram Singer. 1999. Unsupervised Models for Named Entity Classification. In Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 100-110. Silviu Cucerzan and David Yarowsky. 1999. Language Independent Named Entity Recognition Combining Morphological and Contextual Evidence. In Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, 90-99. J. N. Darroch and D. Ratcliff. 1972. Generalized Iterative Scaling for Log-Linear Models. The Annals of Mathematical Statistics, 43(5):1470-1480. Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing Features of Random Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4):380-393. Sally Goldman and Yan Zhou. 2000. Enhancing Supervised Learning with Unlabeled Data. In Proceedings of the Seventeenth International Conference on Machine Learning, 327-334. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6). MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference (MUC-7). Kamal Nigam and Rayid Ghani. 2000. Analyzing the Effectiveness and Applicability of Co-training. In Proceedings of the Ninth International Conference on Information and Knowledge Management, 86-93. David Pierce and Claire Cardie. 2001. Limitations of Co-Training for Natural Language Learning from Large Datasets. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, 1-9. Matthias Seeger. 2001. Learning with Labeled and Unlabeled Data. Technical Report, University of Edinburgh.
Ranking Algorithms for Named–Entity Extraction: Boosting and the Voted Perceptron Michael Collins AT&T Labs-Research, Florham Park, New Jersey. [email protected] Abstract This paper describes algorithms which rerank the top N hypotheses from a maximum-entropy tagger, the application being the recovery of named-entity boundaries in a corpus of web data. The first approach uses a boosting algorithm for ranking problems. The second approach uses the voted perceptron algorithm. Both algorithms give comparable, significant improvements over the maximum-entropy baseline. The voted perceptron algorithm can be considerably more efficient to train, at some cost in computation on test examples. 1 Introduction Recent work in statistical approaches to parsing and tagging has begun to consider methods which incorporate global features of candidate structures. Examples of such techniques are Markov Random Fields (Abney 1997; Della Pietra et al. 1997; Johnson et al. 1999), and boosting algorithms (Freund et al. 1998; Collins 2000; Walker et al. 2001). One appeal of these methods is their flexibility in incorporating features into a model: essentially any features which might be useful in discriminating good from bad structures can be included. A second appeal of these methods is that their training criterion is often discriminative, attempting to explicitly push the score or probability of the correct structure for each training sentence above the score of competing structures. This discriminative property is shared by the methods of (Johnson et al. 1999; Collins 2000), and also the Conditional Random Field methods of (Lafferty et al. 2001). In a previous paper (Collins 2000), a boosting algorithm was used to rerank the output from an existing statistical parser, giving significant improvements in parsing accuracy on Wall Street Journal data. Similar boosting algorithms have been applied to natural language generation, with good results, in (Walker et al. 2001). In this paper we apply reranking methods to named-entity extraction. A state-ofthe-art (maximum-entropy) tagger is used to generate 20 possible segmentations for each input sentence, along with their probabilities. We describe a number of additional global features of these candidate segmentations. These additional features are used as evidence in reranking the hypotheses from the max-ent tagger. We describe two learning algorithms: the boosting method of (Collins 2000), and a variant of the voted perceptron algorithm, which was initially described in (Freund & Schapire 1999). We applied the methods to a corpus of over one million words of tagged web data. The methods give significant improvements over the maximum-entropy tagger (a 17.7% relative reduction in error-rate for the voted perceptron, and a 15.6% relative improvement for the boosting method). One contribution of this paper is to show that existing reranking methods are useful for a new domain, named-entity tagging, and to suggest global features which give improvements on this task. We should stress that another contribution is to show that a new algorithm, the voted perceptron, gives very credible results on a natural language task. It is an extremely simple algorithm to implement, and is very fast to train (the testing phase is slower, but by no means sluggish). It should be a viable alternative to methods such as the boosting or Markov Random Field algorithms described in previous work. 
2 Background 2.1 The data Over a period of a year or so we have had over one million words of named-entity data annotated. The Computational Linguistics (ACL), Philadelphia, July 2002, pp. 489-496. Proceedings of the 40th Annual Meeting of the Association for data is drawn from web pages, the aim being to support a question-answering system over web data. A number of categories are annotated: the usual people, organization and location categories, as well as less frequent categories such as brand-names, scientific terms, event titles (such as concerts) and so on. From this data we created a training set of 53,609 sentences (1,047,491 words), and a test set of 14,717 sentences (291,898 words). The task we consider is to recover named-entity boundaries. We leave the recovery of the categories of entities to a separate stage of processing.1 We evaluate different methods on the task through precision and recall. If a method proposes entities on the test set, and  of these are correct (i.e., an entity is marked by the annotator with exactly the same span as that proposed) then the precision of a method is    . Similarly, if is the total number of entities in the human annotated version of the test set, then the recall is     . 2.2 The baseline tagger The problem can be framed as a tagging task – to tag each word as being either the start of an entity, a continuation of an entity, or not to be part of an entity at all (we will use the tags S, C and N respectively for these three cases). As a baseline model we used a maximum entropy tagger, very similar to the ones described in (Ratnaparkhi 1996; Borthwick et. al 1998; McCallum et al. 2000). Max-ent taggers have been shown to be highly competitive on a number of tagging tasks, such as part-of-speech tagging (Ratnaparkhi 1996), named-entity recognition (Borthwick et. al 1998), and information extraction tasks (McCallum et al. 2000). Thus the maximumentropy tagger we used represents a serious baseline for the task. We used the following features (several of the features were inspired by the approach of (Bikel et. al 1999), an HMM model which gives excellent results on named entity extraction):  The word being tagged, the previous word, and the next word.  The previous tag, and the previous two tags (bigram and trigram features). 1In initial experiments, we found that forcing the tagger to recover categories as well as the segmentation, by exploding the number of tags, reduced performance on the segmentation task, presumably due to sparse data problems.  A compound feature of three fields: (a) Is the word at the start of a sentence?; (b) does the word occur in a list of words which occur more frequently as lower case rather than upper case words in a large corpus of text? (c) the type of the first letter  of the word, where    is defined as ‘A’ if  is a capitalized letter, ‘a’ if  is a lower-case letter, ‘0’ if  is a digit, and  otherwise. For example, if the word Animal is seen at the start of a sentence, and it occurs in the list of frequent lower-cased words, then it would be mapped to the feature 1-1-A.  The word with each character mapped to its   . For example, G.M. would be mapped to A.A., and Animal would be mapped to Aaaaaa.  The word with each character mapped to its type, but repeated consecutive character types are not repeated in the mapped string. For example, Animal would be mapped to Aa, G.M. would again be mapped to A.A.. The tagger was applied and trained in the same way as described in (Ratnaparkhi 1996). 
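As a concrete illustration of the last two feature templates, the character-type mappings can be sketched as below. The handling of characters that are neither letters nor digits is not fully spelled out above; returning the character unchanged is assumed here because it matches the G.M. to A.A. example.

# Sketch of the character-type mappings used by the baseline tagger features.
def char_type(c):
    # 'A' for an upper case letter, 'a' for a lower case letter,
    # '0' for a digit; other characters are kept as they are (assumption).
    if c.isupper():
        return 'A'
    if c.islower():
        return 'a'
    if c.isdigit():
        return '0'
    return c

def type_string(word):
    # Map every character: "G.M." -> "A.A.", "Animal" -> "Aaaaaa"
    return ''.join(char_type(c) for c in word)

def compressed_type_string(word):
    # Same mapping, but consecutive repeats of a type are not repeated:
    # "Animal" -> "Aa", "G.M." -> "A.A."
    out = []
    for c in word:
        t = char_type(c)
        if not out or out[-1] != t:
            out.append(t)
    return ''.join(out)

assert type_string("Animal") == "Aaaaaa" and type_string("G.M.") == "A.A."
assert compressed_type_string("Animal") == "Aa"
assert compressed_type_string("G.M.") == "A.A."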
The feature templates described above are used to create a set of  binary features   !  , where  is the tag, and is the “history”, or context. An example is "$#%#  ! '& () * )+  if t = S and the word being tagged = “Mr.”  otherwise The parameters of the model are ,- for .'& 0///  , defining a conditional distribution over the tags given a history as 1  2 '& 43657 5985;:=<;> ?@ 3 <BA  3 5C7 5D85;:=< A > ?@ The parameters are trained using Generalized Iterative Scaling. Following (Ratnaparkhi 1996), we only include features which occur 5 times or more in training data. In decoding, we use a beam search to recover 20 candidate tag sequences for each sentence (the sentence is decoded from left to right, with the top 20 most probable hypotheses being stored at each point). 2.3 Applying the baseline tagger As a baseline we trained a model on the full 53,609 sentences of training data, and decoded the 14,717 sentences of test data. This gave 20 candidates per test sentence, along with their probabilities. The baseline method is to take the most probable candidate for each test data sentence, and then to calculate precision and recall figures. Our aim is to come up with strategies for reranking the test data candidates, in such a way that precision and recall is improved. In developing a reranking strategy, the 53,609 sentences of training data were split into a 41,992 sentence training portion, and a 11,617 sentence development set. The training portion was split into 5 sections, and in each case the maximum-entropy tagger was trained on 4/5 of the data, then used to decode the remaining 1/5. The top 20 hypotheses under a beam search, together with their log probabilities, were recovered for each training sentence. In a similar way, a model trained on the 41,992 sentence set was used to produce 20 hypotheses for each sentence in the development set. 3 Global features 3.1 The global-feature generator The module we describe in this section generates global features for each candidate tagged sequence. As input it takes a sentence, along with a proposed segmentation (i.e., an assignment of a tag for each word in the sentence). As output, it produces a set of feature strings. We will use the following tagged sentence as a running example in this section: Whether/N you/N ’/N re/N an/N aging/N flower/N child/N or/N a/N clueless/N Gen/S Xer/C ,/N “/N The/S Day/C They/C Shot/C John/C Lennon/C ,/N ”/N playing/N at/N the/N Dougherty/S Arts/C Center/C ,/N entertains/N the/N imagination/N ./N An example feature type is simply to list the full strings of entities that appear in the tagged input. In this example, this would give the three features WE=Gen Xer WE=The Day They Shot John Lennon WE=Dougherty Arts Center Here WE stands for “whole entity”. Throughout this section, we will write the features in this format. The start of the feature string indicates the feature type (in this case WE), followed by =. Following the type, there are generally 1 or more words or other symbols, which we will separate with the symbol . A seperate module in our implementation takes the strings produced by the global-feature generator, and hashes them to integers. For example, suppose the three strings WE=Gen Xer, WE=The Day They Shot John Lennon, WE=Dougherty Arts Center were hashed to 100, 250, and 500 respectively. Conceptually, the candidate  is represented by a large number of features FE   for GH& 0///  where  is the number of distinct feature strings in training data. 
In this example, only I"$#%#   , KJ%L%#   and ML%#%#  I take the value  , all other features being zero. 3.2 Feature templates We now introduce some notation with which to describe the full set of global features. First, we assume the following primitives of an input candidate:   for .N& 0///O is the . ’th tag in the tagged sequence. QP  for .0& 0///!O is the . ’th word. SR  for .0& 0///!O is  if P  begins with a lowercase letter,  otherwise.   for .T& 0///!O is a transformation of P  , where the transformation is applied in the same way as the final feature type in the maximum entropy tagger. Each character in the word is mapped to its   , but repeated consecutive character types are not repeated in the mapped string. For example, Animal would be mapped to Aa in this feature, G.M. would again be mapped to A.A..  U for .S& 0///!O is the same as  , but has an additional flag appended. The flag indicates whether or not the word appears in a dictionary of words which appeared more often lower-cased than capitalized in a large corpus of text. In our example, Animal appears in the lexicon, but G.M. does not, so the two values for U would be Aa1 and A.A.0 respectively. In addition, V P V! and U are all defined to be NULL if .XW  or .XY O . Most of the features we describe are anchored on entity boundaries in the candidate segmentation. We will use “feature templates” to describe the features that we used. As an example, suppose that an entity Description Feature Template The whole entity string WE= Z-[ Z0\ [^]`_ba cdcdc Z-e The f 5 features within the entity FF= f [ f \ [;]`_ba cdc%c f e The g 5 features within the entity GF= g [ g \ [^]h_ba c%cc g e The last word in the entity LW= Z-e Indicates whether the last word is lower-cased LWLC= i e Bigram boundary features of the words before/after the start of the entity BO00= Z \ [$jk_ba Z [ BO01= Z \ [Vjk_ba g [ BO10=g \ [$jk_ba Z [ BO11= g\ [Vj_ba gl[ Bigram boundary features of the words before/after the end of the entity BE00= Z e Z0\ e^]h_ba BE01= Z e g\ e^]h_ba BE10= g e Z0\ e^]h_ba BE11= gCe g\ e^]h_ba Trigram boundary features of the words before/after the start of the entity (16 features total, only 4 shown) TO000= Z0\ [Vjm^a Z0\ [Vjk_ba Z [ cdc%c TO111=g\ [Vjm^a g4\ [Vjk_ba g [ TO2000= Z0\ [Vjk_ba Z-[ Z0\ [^]h_ba`ccdc TO2111= g\ [Vjk_ba gl[ g4\ [^]`_ba Trigram boundary features of the words before/after the end of the entity (16 features total, only 4 shown) TE000= Z0\ e$jk_ba Z-e Z0\ e^]h_ba cdc%c TE111= g\ eVj_ba gCe g\ e^]h_ba TE2000= Z \ eVjm^a Z \ e$jk_ba Z e ccdc TE2111= g \ eVjm^a g \ eVj_ba g e Prefix features PF= fn[ PF2= gC[ PF= f![ fC\ [^]h_ba PF2= gl[ g4\ [^]`_ba c%cdc PF= f [ f \ [^]h_ba cdcdc f e PF2= g [ g \ [^]h_ba cc%c g e Suffix features SF= fne SF2= gCe SF= f!e fC\ eVj_ba SF2= gCe g\ eVj_ba c%cdc SF= f e f \ eVjk_ba c%cdc f [ SF2= g e g \ eVjk_ba cdcdc g [ Figure 1: The full set of entity-anchored feature templates. One of these features is generated for each entity seen in a candidate. We take the entity to span words G ///  inclusive in the candidate. is seen from words G to  inclusive in a segmentation. Then the WE feature described in the previous section can be generated by the template WE= P E P Eop" /// Prq Applying this template to the three entities in the running example generates the three feature strings described in the previous section. As another example, consider the template FF= E EVop" ///  q . 
This will generate a feature string for each of the entities in a candidate, this time using the values E ///  q rather than P E /// P q . For the full set of feature templates that are anchored around entities, see figure 1. A second set of feature templates is anchored around quotation marks. In our corpus, entities (typically with long names) are often seen surrounded by quotes. For example, “The Day They Shot John Lennon”, the name of a band, appears in the running example. Define G to be the index of any double quotation marks in the candidate,  to be the index of the next (matching) double quotation marks if they appear in the candidate. Additionally, define s to be the index of the last word beginning with a lower case letter, upper case letter, or digit within the quotation marks. The first set of feature templates tracks the values of  for the words within quotes:2 Q= E %E : EVop" @  : Eop" @ /// q  q Q2= : E%tI" @  : EntI" @ uE %E /// : q op" @  : q op" @ 2We only included these features if vxwzy|{n}z~€ , to prevent an explosion in the length of feature strings. The next set of feature templates are sensitive to whether the entire sequence between quotes is tagged as a named entity. Define  s to be  if %EVop"X& S, and V =C for .‚&ƒG…„ s /// s (i.e.,  s &  if the sequence of words within the quotes is tagged as a single entity). Also define † to be the number of upper cased words within the quotes, ‡ to be the number of lower case words, and  to be  if † ˆ ‡ ,  otherwise. Then two other templates are: QF=  s † ‡ : EVop" @ q J QF2=  s  : EVop" @ q J In the “The Day They Shot John Lennon” example we would have  s &  provided that the entire sequence within quotes was tagged as an entity. Additionally, †‰&‹Š , ‡Œ&  , and &  . The values for : EVop" @ and q J would be ސ  and ސ  (these features are derived from The and Lennon, which respectively do and don’t appear in the capitalization lexicon). This would give QF=  Š  ސ  ސ  and QF2=   ސ  Ž‘  . At this point, we have fully described the representation used as input to the reranking algorithms. The maximum-entropy tagger gives 20 proposed segmentations for each input sentence. Each candidate  is represented by the log probability ‡  I from the tagger, as well as the values of the global features KE   for G’& 0///  . In the next section we describe algorithms which blend these two sources of information, the aim being to improve upon a strategy which just takes the candidate from the tagger with the highest score for ‡   . 4 Ranking Algorithms 4.1 Notation This section introduces notation for the reranking task. The framework is derived by the transformation from ranking problems to a margin-based classification problem in (Freund et al. 1998). It is also related to the Markov Random Field methods for parsing suggested in (Johnson et al. 1999), and the boosting methods for parsing in (Collins 2000). We consider the following set-up:  Training data is a set of example input/output pairs. In tagging we would have training examples “ G  % ” where each G  is a sentence and each   is the correct sequence of tags for that sentence.  We assume some way of enumerating a set of candidates for a particular sentence. We use K–• to denote the — ’th candidate for the . ’th sentence in training data, and ˜  G  ™& “  b" % BJ /// ” to denote the set of candidates for G . In this paper, the top š outputs from a maximum entropy tagger are used as the set of candidates.  
Without loss of generality we take x_{i,1} to be the candidate for s_i which has the most correct tags, i.e., is closest to being correct. (In the event that multiple candidates get the same, highest score, the candidate with the highest value of log-likelihood L under the baseline model is taken as x_{i,1}.)

Q(x_{i,j}) is the probability that the base model assigns to x_{i,j}. We define L(x_{i,j}) = log Q(x_{i,j}).

We assume a set of m additional features, h_s(x) for s = 1...m. The features could be arbitrary functions of the candidates; our hope is to include features which help in discriminating good candidates from bad ones.

Finally, the parameters of the model are a vector of m + 1 parameters, ᾱ = (α_0, α_1, ..., α_m). The ranking function is defined as

F(x, ᾱ) = α_0 L(x) + Σ_{s=1..m} α_s h_s(x)

This function assigns a real-valued number to a candidate x. It will be taken to be a measure of the plausibility of a candidate, higher scores meaning higher plausibility. As such, it assigns a ranking to different candidate structures for the same sentence, and in particular the output on a training or test example s is argmax_{x in X(s)} F(x, ᾱ). In this paper we take the features h_s to be fixed, the learning problem being to choose a good setting for the parameters ᾱ. In some parts of this paper we will use vector notation. Define Φ(x) to be the vector (L(x), h_1(x), ..., h_m(x)). Then the ranking score can also be written as F(x, ᾱ) = ᾱ · Φ(x), where ᾱ · Φ(x) is the dot product between the two vectors.

4.2 The boosting algorithm
The first algorithm we consider is the boosting algorithm for ranking described in (Collins 2000). The algorithm is a modification of the method in (Freund et al. 1998). The method can be considered to be a greedy algorithm for finding the parameters ᾱ that minimize the loss function

Loss(ᾱ) = Σ_i Σ_{j≥2} exp(-(F(x_{i,1}, ᾱ) - F(x_{i,j}, ᾱ)))

where as before, F(x, ᾱ) = ᾱ · Φ(x). The theoretical motivation for this algorithm goes back to the PAC model of learning. Intuitively, it is useful to note that this loss function is an upper bound on the number of “ranking errors”, a ranking error being a case where an incorrect candidate gets a higher value for F than a correct candidate. This follows because for all x, exp(-x) ≥ [[x]], where we define [[x]] to be 1 for x ≤ 0, and 0 otherwise. Hence

Loss(ᾱ) ≥ Σ_i Σ_{j≥2} [[M_{i,j}]]

where M_{i,j} = F(x_{i,1}, ᾱ) - F(x_{i,j}, ᾱ). Note that the number of ranking errors is Σ_i Σ_{j≥2} [[M_{i,j}]].

As an initial step, α_0 is set to be

α_0 = argmin_β Σ_i Σ_{j≥2} exp(-β (L(x_{i,1}) - L(x_{i,j})))

and all other parameters α_s for s = 1...m are set to be zero. The algorithm then proceeds for T iterations (T is usually chosen by cross validation on a development set). At each iteration, a single feature is chosen, and its weight is updated. Suppose the current parameter values are ᾱ, and a single feature k is chosen, its weight being updated through an increment δ, i.e., α_k = α_k + δ. Then the new loss, after this parameter update, will be

Loss(k, δ) = Σ_i Σ_{j≥2} exp(-M_{i,j} - δ (h_k(x_{i,1}) - h_k(x_{i,j})))

where M_{i,j} = F(x_{i,1}, ᾱ) - F(x_{i,j}, ᾱ). The boosting algorithm chooses the feature/update pair (k*, δ*) which is optimal in terms of minimizing the loss function, i.e.,

(k*, δ*) = argmin_{k, δ} Loss(k, δ)    (1)

and then makes the update α_{k*} = α_{k*} + δ*. Figure 2 shows an algorithm which implements this greedy procedure.
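The following is a naive Python sketch of a single round of this greedy procedure, intended only as an illustration: it recomputes the W+ and W- sums from scratch instead of maintaining the index arrays that make Figure 2 efficient, and the data layout (per-pair margins and feature-difference arrays) is an assumption of the sketch. Feature differences are taken to lie in {-1, 0, 1}, as they do for binary features.

import math

# Naive sketch of one boosting-for-ranking round.
# margins[i] lists M_{i,j} for the incorrect candidates of sentence i.
# diffs[k][i] lists h_k(x_{i,1}) - h_k(x_{i,j}) for the same pairs.
# alphas holds the current feature weights (alpha_0 is kept fixed elsewhere).

def loss(margins):
    # Loss = sum of exp(-M_{i,j}) over all sentence/candidate pairs.
    return sum(math.exp(-m) for row in margins for m in row)

def feature_stats(margins, diff_k):
    # W+ and W-: sums of exp(-M_{i,j}) over pairs where the feature
    # difference is +1 or -1 respectively.
    w_plus = w_minus = 0.0
    for row, drow in zip(margins, diff_k):
        for m, d in zip(row, drow):
            if d == 1:
                w_plus += math.exp(-m)
            elif d == -1:
                w_minus += math.exp(-m)
    return w_plus, w_minus

def boosting_round(margins, diffs, alphas, eps=1e-4):
    z = loss(margins)
    stats = [feature_stats(margins, diff_k) for diff_k in diffs]
    # Choose the feature with the largest |sqrt(W+) - sqrt(W-)|.
    k_star = max(range(len(diffs)),
                 key=lambda k: abs(math.sqrt(stats[k][0]) - math.sqrt(stats[k][1])))
    w_plus, w_minus = stats[k_star]
    delta_star = 0.5 * math.log((w_plus + eps * z) / (w_minus + eps * z))
    alphas[k_star] += delta_star
    # Updating alpha_{k*} shifts each margin by delta* times the feature
    # difference on that pair.
    for row, drow in zip(margins, diffs[k_star]):
        for j, d in enumerate(drow):
            row[j] += delta_star * d
    return k_star, delta_star

The pseudo-code discussed next obtains the same update while touching only the pairs and features affected by the chosen feature.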
See (Collins 2000) for a full description of the method, including justification that the algorithm does in fact implement the update in Eq. 1 at each iteration. (Strictly speaking, this is only the case if the smoothing parameter ε is 0.) The algorithm relies on the following arrays:

A+_k = {(i, j) : h_k(x_{i,1}) - h_k(x_{i,j}) = 1}
A-_k = {(i, j) : h_k(x_{i,1}) - h_k(x_{i,j}) = -1}
B+_{i,j} = {k : h_k(x_{i,1}) - h_k(x_{i,j}) = 1}
B-_{i,j} = {k : h_k(x_{i,1}) - h_k(x_{i,j}) = -1}

Thus A+_k is an index from features to correct/incorrect candidate pairs where the k'th feature takes value 1 on the correct candidate, and value 0 on the incorrect candidate. The array A-_k is a similar index from features to examples. The arrays B+_{i,j} and B-_{i,j} are reverse indices from training examples to features.

Input: examples x_{i,j} with initial scores L(x_{i,j}); the arrays A+_k, A-_k, B+_{i,j} and B-_{i,j} as described in section 4.2; parameters: the number of rounds of boosting T, and a smoothing parameter ε.
Initialize:
  Set α_0 = argmin_β Σ_i Σ_{j≥2} exp(-β (L(x_{i,1}) - L(x_{i,j})))
  Set ᾱ = (α_0, 0, 0, ..., 0)
  For all i, j, set M_{i,j} = α_0 (L(x_{i,1}) - L(x_{i,j}))
  Set Z = Σ_i Σ_{j≥2} exp(-M_{i,j})
  For k = 1...m, calculate
    W+_k = Σ_{(i,j) in A+_k} exp(-M_{i,j})
    W-_k = Σ_{(i,j) in A-_k} exp(-M_{i,j})
    BestLoss(k) = | sqrt(W+_k) - sqrt(W-_k) |
Repeat for t = 1 to T:
  Choose k* = argmax_k BestLoss(k)
  Set δ* = (1/2) log((W+_{k*} + εZ) / (W-_{k*} + εZ))
  Update one parameter, α_{k*} = α_{k*} + δ*
  For (i, j) in A+_{k*}:
    Δ = exp(-M_{i,j} - δ*) - exp(-M_{i,j}); M_{i,j} = M_{i,j} + δ*
    For k in B+_{i,j}: W+_k = W+_k + Δ
    For k in B-_{i,j}: W-_k = W-_k + Δ
    Z = Z + Δ
  For (i, j) in A-_{k*}:
    Δ = exp(-M_{i,j} + δ*) - exp(-M_{i,j}); M_{i,j} = M_{i,j} - δ*
    For k in B+_{i,j}: W+_k = W+_k + Δ
    For k in B-_{i,j}: W-_k = W-_k + Δ
    Z = Z + Δ
  For all features k whose values of W+_k and/or W-_k have changed, recalculate BestLoss(k) = | sqrt(W+_k) - sqrt(W-_k) |
Output: final parameter setting ᾱ
Figure 2: The boosting algorithm.

4.3 The voted perceptron
Figure 3 shows the training phase of the perceptron algorithm, originally introduced in (Rosenblatt 1958). The algorithm maintains a parameter vector ᾱ, which is initially set to be all zeros. The algorithm then makes a pass over the training set, at each training example storing a parameter vector ᾱ^i for i = 1...n. The parameter vector is only modified when a mistake is made on an example. In this case the update is very simple, involving adding the difference of the offending examples' representations (ᾱ^i = ᾱ^{i-1} + Φ(x_{i,1}) - Φ(x_{i,j}) in the figure). See (Cristianini and Shawe-Taylor 2000) chapter 2 for discussion of the perceptron algorithm, and theory justifying this method for setting the parameters. In the most basic form of the perceptron, the parameter values ᾱ^n are taken as the final parameter settings, and the output on a new test example with candidates x_j for j = 1...n_x is simply the highest scoring candidate under these parameter values, i.e., x_k where k = argmax_j ᾱ^n · Φ(x_j).

Define: F(x, ᾱ) = ᾱ · Φ(x).
Input: examples x_{i,j} with feature vectors Φ(x_{i,j}).
Initialization: set parameters ᾱ^0 = 0.
For i = 1...n:
  j = argmax_{j=1...n_i} F(x_{i,j}, ᾱ^{i-1})
  If j = 1 then ᾱ^i = ᾱ^{i-1}, else ᾱ^i = ᾱ^{i-1} + Φ(x_{i,1}) - Φ(x_{i,j})
Output: parameter vectors ᾱ^i for i = 1...n.
Figure 3: The perceptron training algorithm for ranking problems.
Define: F(x, ᾱ) = ᾱ · Φ(x).
Input: a set of candidates x_j for j = 1...n_x, and a sequence of parameter vectors ᾱ^i for i = 1...n.
Initialization: set V[j] = 0 for j = 1...n_x (V[j] stores the number of votes for x_j).
For i = 1...n:
  j = argmax_{j=1...n_x} F(x_j, ᾱ^i)
  V[j] = V[j] + 1
Output: x_j where j = argmax_k V[k].
Figure 4: Applying the voted perceptron to a test example.

(Freund & Schapire 1999) describe a refinement of the perceptron, the voted perceptron. The training phase is identical to that in figure 3. Note, however, that all parameter vectors ᾱ^i for i = 1...n are stored. Thus the training phase can be thought of as a way of constructing n different parameter settings. Each of these parameter settings will have its own highest ranking candidate, x_k where k = argmax_j F(x_j, ᾱ^i). The idea behind the voted perceptron is to take each of the n parameter settings to “vote” for a candidate, and the candidate which gets the most votes is returned as the most likely candidate. See figure 4 for the algorithm. (Note that, for reasons of explication, the decoding algorithm we present is less efficient than necessary. For example, when ᾱ^i = ᾱ^{i-1} it is preferable to use some book-keeping to avoid recalculating F(x_j, ᾱ^i) and argmax_j F(x_j, ᾱ^i).)

5 Experiments
We applied the voted perceptron and boosting algorithms to the data described in section 2.3. Only features occurring on 5 or more distinct training sentences were included in the model. This resulted in 93,777 distinct features. The two methods were trained on the training portion (41,992 sentences) of the training set. We used the development set to pick the best values for tunable parameters in each algorithm. For boosting, the main parameter to pick is the number of rounds, T. We ran the algorithm for a total of 300,000 rounds, and found that the optimal value for F-measure on the development set occurred after 83,233 rounds. For the voted perceptron, the representation Φ(x) was taken to be a vector (β L(x), h_1(x), ..., h_m(x)), where β is a parameter that influences the relative contribution of the log-likelihood term versus the other features. The value of β giving the best results on the development set was chosen.

Figure 5 shows the results for the three methods on the test set.

                   P             R             F
Max-Ent            84.4          86.3          85.3
Boosting           87.3 (18.6)   87.9 (11.6)   87.6 (15.6)
Voted Perceptron   87.3 (18.6)   88.6 (16.8)   87.9 (17.7)

Figure 5: Results for the three tagging methods. P = precision, R = recall, F = F-measure. Figures in parentheses are relative improvements in error rate over the maximum-entropy model. All figures are percentages.

Both of the reranking algorithms show significant improvements over the baseline: a 15.6% relative reduction in error for boosting, and a 17.7% relative error reduction for the voted perceptron. In our experiments we found the voted perceptron algorithm to be considerably more efficient in training, at some cost in computation on test examples. Another attractive property of the voted perceptron is that it can be used with kernels, for example the kernels over parse trees described in (Collins and Duffy 2001; Collins and Duffy 2002). (Collins and Duffy 2002) describe the voted perceptron applied to the named-entity data in this paper, but using kernel-based features rather than the explicit features described in this paper.
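For concreteness, the training pass of Figure 3 and the voting step of Figure 4 can be sketched in a few lines of Python. Candidates are represented here directly as their feature vectors Φ(x); this is an illustration, not the authors' implementation.

# Sketch of perceptron training (Figure 3) and voted decoding (Figure 4).
# training_candidates[i] is the list of feature vectors for sentence i,
# ordered so that index 0 is the best (most correct) candidate x_{i,1}.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def train_perceptron(training_candidates):
    dim = len(training_candidates[0][0])
    alpha = [0.0] * dim
    stored = []                     # one parameter vector per training example
    for cands in training_candidates:
        j = max(range(len(cands)), key=lambda j: dot(alpha, cands[j]))
        if j != 0:                  # a mistake: move towards the best candidate
            alpha = [a + best - wrong
                     for a, best, wrong in zip(alpha, cands[0], cands[j])]
        stored.append(list(alpha))
    return stored

def voted_decode(cands, stored):
    votes = [0] * len(cands)
    for alpha in stored:            # every stored parameter vector votes once
        j = max(range(len(cands)), key=lambda j: dot(alpha, cands[j]))
        votes[j] += 1
    return max(range(len(cands)), key=lambda j: votes[j])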
See (Collins 2002) for additional work using perceptron algorithms to train tagging models, and a more thorough description of the theory underlying the perceptron algorithm applied to ranking problems. 6 Discussion A question regarding the approaches in this paper is whether the features we have described could be incorporated in a maximum-entropy tagger, giving similar improvements in accuracy. This section discusses why this is unlikely to be the case. The problem described here is closely related to the label bias problem described in (Lafferty et al. 2001). One straightforward way to incorporate global features into the maximum-entropy model would be to introduce new features   -%% which indicated whether the tagging decision  in the history creates a particular global feature. For example, we could introduce a feature "%#  l! F'& () * )+  if t = N and this decision creates an LWLC=1 feature  otherwise As an example, this would take the value  if its was tagged as N in the following context, She/N praised/N the/N University/S for/C its/? efforts to ccdc because tagging its as N in this context would create an entity whose last word was not capitalized, i.e., University for. Similar features could be created for all of the global features introduced in this paper. This example also illustrates why this approach is unlikely to improve the performance of the maximum-entropy tagger. The parameter ,‚"%# associated with this new feature can only affect the score for a proposed sequence by modifying é 2  at the point at which  "%#  l! Fõ&  . In the example, this means that the LWLC=1 feature can only lower the score for the segmentation by lowering the probability of tagging its as N. But its has almost probably  of not appearing as part of an entity, so é šÜ2 F should be almost  whether  "%# is  or  in this context! The decision which effectively created the entity University for was the decision to tag for as C, and this has already been made. The independence assumptions in maximum-entropy taggers of this form often lead points of local ambiguity (in this example the tag for the word for) to create globally implausible structures with unreasonably high scores. See (Collins 1999) section 8.4.2 for a discussion of this problem in the context of parsing. Acknowledgements Many thanks to Jack Minisi for annotating the named-entity data used in the experiments. Thanks also to Nigel Duffy, Rob Schapire and Yoram Singer for several useful discussions. References Abney, S. 1997. Stochastic Attribute-Value Grammars. Computational Linguistics, 23(4):597-618. Bikel, D., Schwartz, R., and Weischedel, R. (1999). An Algorithm that Learns What’s in a Name. In Machine Learning: Special Issue on Natural Language Learning, 34(1-3). Borthwick, A., Sterling, J., Agichtein, E., and Grishman, R. (1998). Exploiting Diverse Knowledge Sources via Maximum Entropy in Named Entity Recognition. Proc. of the Sixth Workshop on Very Large Corpora. Collins, M. (1999). Head-Driven Statistical Models for Natural Language Parsing. PhD Thesis, University of Pennsylvania. Collins, M. (2000). Discriminative Reranking for Natural Language Parsing. Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000). Collins, M., and Duffy, N. (2001). Convolution Kernels for Natural Language. In Proceedings of NIPS 14. Collins, M., and Duffy, N. (2002). New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete Structures, and the Voted Perceptron. In Proceedings of ACL 2002. Collins, M. (2002). 
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with the Perceptron Algorithm. In Proceedings of EMNLP 2002. Cristianini, N., and Shawe-Tayor, J. (2000). An introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press. Della Pietra, S., Della Pietra, V., and Lafferty, J. (1997). Inducing Features of Random Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4), pp. 380-393. Freund, Y. & Schapire, R. (1999). Large Margin Classification using the Perceptron Algorithm. In Machine Learning, 37(3):277–296. Freund, Y., Iyer, R.,Schapire, R.E., & Singer, Y. (1998). An efficient boosting algorithm for combining preferences. In Machine Learning: Proceedings of the Fifteenth International Conference. Johnson, M., Geman, S., Canon, S., Chi, Z. and Riezler, S. (1999). Estimators for Stochastic “Unification-based” Grammars. Proceedings of the ACL 1999. Lafferty, J., McCallum, A., and Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML 2001. McCallum, A., Freitag, D., and Pereira, F. (2000) Maximum entropy markov models for information extraction and segmentation. In Proceedings of ICML 2000. Ratnaparkhi, A. (1996). A maximum entropy part-of-speech tagger. In Proceedings of the empirical methods in natural language processing conference. Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review, 65, 386–408. (Reprinted in Neurocomputing (MIT Press, 1998).) Walker, M., Rambow, O., and Rogati, M. (2001). SPoT: a trainable sentence planner. In Proceedings of the 2nd Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL 2001).
Revision Learning and its Application to Part-of-Speech Tagging Tetsuji Nakagawa∗and Taku Kudo and Yuji Matsumoto [email protected],{taku-ku,matsu}@is.aist-nara.ac.jp Graduate School of Information Science Nara Institute of Science and Technology 8916−5 Takayama, Ikoma, Nara 630−0101, Japan Abstract This paper presents a revision learning method that achieves high performance with small computational cost by combining a model with high generalization capacity and a model with small computational cost. This method uses a high capacity model to revise the output of a small cost model. We apply this method to English partof-speech tagging and Japanese morphological analysis, and show that the method performs well. 1 Introduction Recently, corpus-based approaches have been widely studied in many natural language processing tasks, such as part-of-speech (POS) tagging, syntactic analysis, text categorization and word sense disambiguation. In corpus-based natural language processing, one important issue is to decide which learning model to use. Various learning models have been studied such as Hidden Markov models (HMMs) (Rabiner and Juang, 1993), decision trees (Breiman et al., 1984) and maximum entropy models (Berger et al., 1996). Recently, Support Vector Machines (SVMs) (Vapnik, 1998; Cortes and Vapnik, 1995) are getting to be used, which are supervised machine learning algorithm for binary classification. SVMs have good generalization performance and can handle a large number of features, and are applied to some tasks ∗Presently with Oki Electric Industry successfully (Joachims, 1998; Kudoh and Matsumoto, 2000). However, their computational cost is large and is a weakness of SVMs. In general, a trade-offbetween capacity and computational cost of learning models exists. For example, SVMs have relatively high generalization capacity, but have high computational cost. On the other hand, HMMs have lower computational cost, but have lower capacity and difficulty in handling data with a large number of features. Learning models with higher capacity may not be of practical use because of their prohibitive computational cost. This problem becomes more serious when a large amount of data is used. To solve this problem, we propose a revision learning method which combines a model with high generalization capacity and a model with small computational cost to achieve high performance with small computational cost. This method is based on the idea that processing the entire target task using a model with higher capacity is wasteful and costly, that is, if a large portion of the task can be processed easily using a model with small computational cost, it should be processed by such a model, and only difficult portion should be processed by the model with higher capacity. Revision learning can handle a general multiclass classification problem, which includes POS tagging, text categorization and many other tasks in natural language processing. We apply this method to English POS tagging and Japanese morphological analysis. This paper is organized as follows: Section 2 describes the general multi-class classification Computational Linguistics (ACL), Philadelphia, July 2002, pp. 497-504. Proceedings of the 40th Annual Meeting of the Association for problem and the one-versus-rest method which is known as one of the solutions for the problem. Section 3 introduces revision learning, and discusses how to combine learning models. 
Section 4 describes one way to conduct Japanese morphological analysis with revision learning. Section 5 shows experimental results of English POS tagging and Japanese morphological analysis with revision learning. Section 6 discusses related works, and Section 7 gives conclusion. 2 Multi-Class Classification Problems and the One-versus-Rest Method Let us consider the problem to decide the class of an example x among multiple classes. Such a problem is called multi-class classification problem. Many tasks in natural language processing such as POS tagging are regarded as a multiclass classification problem. When we only have binary (positive or negative) classification algorithm at hand, we have to reformulate a multiclass classification problem into a binary classification problem. We assume a binary classifier f(x) that returns positive or negative real value for the class of x, where the absolute value |f(x)| reflects the confidence of the classification. The one-versus-rest method is known as one of such methods (Allwein et al., 2000). For one training example of a multi-class problem, this method creates a positive training example for the true class and negative training examples for the other classes. As a result, positive and negative examples for each class are generated. Suppose we have five candidate classes A, B, C, D and E , and the true class of x is B. Figure 1 (left) shows the created training examples. Note that there are only two labels (positive and negative) in contrast with the original problem. Then a binary classifier for each class is trained using the examples, and five classifiers are created for this problem. Given a test example x′, all the classifiers classify the example whether it belongs to a specific class or not. Its class is decided by the classifier that gives the largest value of f(x′). The algorithm is shown in Figure 2 in a pseudo-code. x A : B : C : D : E : Training Data O X X X X A E B C D A : B : Training Data O X 1 2 3 Rank A E B C D 4 5 x x x x x x x x -X -O -X -X -X -X -O O X Label : Positive : Negative Class Class Figure 1: One-versus-Rest Method (left) and Revision Learning (right) # Training Procedure of One-versus-Rest # This procedure is given training examples # {(xi, yi)}, and creates classifiers. # C = {c0, . . . , ck−1}: the set of classes, # xi: the ith training example, # yi ∈C: the class of xi, # k: the number of classes, # l: the number of training examples, # fc(·): the binary classifier for the class c # (see the text). procedure TrainOV R({(x0, y0), . . . , (xl−1, yl−1)}) begin # Create the training data with binary label for i := 0 to l −1 begin for j := 0 to k −1 begin if cj ̸= yi then Add xi to the training data for the class cj as a negative example. else Add xi to the training data for the class cj as a positive example. end end # Train the binary classifiers for j := 0 to k −1 Train the classifier fcj(·) using the training data. end # Test Function of One-versus-Rest # This function is given a test example and # returns the predicted class of it. # C = {c0, . . . , ck−1}: the set of classes, # x: the test example, # k: the number of classes, # fc(·): binary classifier trained with the # algorithm above. 
function TestOV R(x) begin for j := 0 to k −1 confidencej := fcj(x) return cargmaxj confidencej end Figure 2: Algorithm of One-versus-Rest However, this method has the problem of being computationally costly in training, because the negative examples are created for all the classes other than the true class, and the total number of the training examples becomes large (which is equal to the number of original training examples multiplied by the number of classes). The computational cost in testing is also large, because all the classifiers have to work on each test example. 3 Revision Learning As discussed in the previous section, the oneversus-rest method has the problem of computational cost. This problem become more serious when costly binary classifiers are used or when a large amount of data is used. To cope with this problem, let us consider the task of POS tagging. Most portions of POS tagging is not so difficult and a simple POS-based HMMs learning 1 achieves more than 95% accuracy simply using the POS context (Brants, 2000). This means that the low capacity model is enough to do most portions of the task, and we need not use a high accuracy but costly algorithm in every portion of the task. This is the base motivation of the revision model we are proposing here. Revision learning uses a binary classifier with higher capacity to revise the errors made by the stochastic model with lower capacity as follows: During the training phase, a ranking is assigned to each class by the stochastic model for a training example, that is, the candidate classes are sorted in descending order of its conditional probability given the example. Then, the classes are checked in their ranking order to create binary classifiers as follows. If the class is incorrect (i.e. it is not equal to the true class for the example), the example is added to the training data for that class as a negative example, and the next ranked class is checked. If the class is correct, the example is added to the training data for that class as a positive exam1HMMs can be applied to either of unsupervised or supervised learning. In this paper, we use the latter case, i.e., visible Markov Models, where POS-tagged data is used for training. ple, and the remaining ranked classes are not taken into consideration (Figure 1, right). Using these training data, binary classifiers are created. Note that each classifier is a pure binary classifier regardless with the number of classes in the original problem. The binary classifier is trained just for answering whether the output from the stochastic model is correct or not. During the test phase, first the ranking of the candidate classes for a given example is assigned by the stochastic model as in the training. Then the binary classifier classifies the example according to the ranking. If the classifier answers the example as incorrect, the next highest ranked class becomes the next candidate for checking. But if the example is classified as correct, the class of the classifier is returned as the answer for the example. The algorithm is shown in Figure 3. The amount of training data generated in the revision learning can be much smaller than that in one-versus-rest. Since, in revision learning, negative examples are created only when the stochastic model fails to assign the highest probability to the correct POS tag, whereas negative examples are created for all but one class in the one-versus-rest method. 
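To make the training and test procedures of revision learning concrete, the following Python fragment is a hedged sketch of the algorithm described above. It is an illustration, not the authors' implementation: the `rank` function (returning the candidate classes sorted by the stochastic model's conditional probability) and the `make_classifier` factory (returning a binary classifier such as an SVM with `fit` and `decision_function` methods) are assumed interfaces.

```python
# Hedged sketch of revision learning, following the description above; this
# is illustrative code, not the authors' implementation. `rank(x)` is assumed
# to return candidate classes sorted by P(c|x) under the low-cost stochastic
# model, and `make_classifier()` to return a binary classifier (e.g. an SVM)
# exposing fit() and decision_function().

def train_rl(examples, labels, rank, make_classifier):
    data = {}                                   # class -> ([examples], [binary labels])
    for x, y in zip(examples, labels):
        for c in rank(x):                       # descending conditional probability
            xs, ys = data.setdefault(c, ([], []))
            xs.append(x)
            if c == y:
                ys.append(+1)                   # correct class: positive example,
                break                           # lower-ranked classes are ignored
            ys.append(-1)                       # incorrect class: negative example
    classifiers = {}
    for c, (xs, ys) in data.items():
        clf = make_classifier()
        clf.fit(xs, ys)                         # "is the stochastic model's guess correct?"
        classifiers[c] = clf
    return classifiers

def test_rl(x, rank, classifiers):
    for c in rank(x):                           # check candidates in ranking order
        clf = classifiers.get(c)
        if clf is not None and clf.decision_function([x])[0] > 0:
            return c                            # accepted as correct: revise no further
    return None                                 # undecidable
```

Compared with one-versus-rest, negative examples are generated only when the stochastic model fails to rank the correct class first, which is where the savings in training data come from.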
Moreover, testing time of the revision learning is shorter, because only one classifier is called as far as it answers as correct, but all the classifiers are called in the oneversus-rest method. 4 Morphological Analysis with Revision Learning We introduced revision learning for multi-class classification in the previous section. However, Japanese morphological analysis cannot be regarded as a simple multi-class classification problem, because words in a sentence are not separated by spaces in Japanese and the morphological analyzer has to segment the sentence into words as well as to decide the POS tag of the words. So in this section, we describe how to apply revision learning to Japanese morphological analysis. For a given sentence, a lattice consisting of all possible morphemes can be built using a mor# Training Procedure of Revision Learning # This procedure is given training examples # {(xi, yi)}, and creates classifiers. # C = {c0, . . . , ck−1}: the set of classes, # xi: the ith training example, # yi ∈C: the class of xi, # k: the number of classes, # l: the number of training examples, # ni: the ordered indexes of C # (see the following code), # fc(·): the binary classifier for the class c # (see the text). procedure TrainRL({(x0, y0), . . . , (xl−1, yl−1)}) begin # Create the training data with binary label for i := 0 to l −1 begin Call the stochastic model to obtain the ordered indexes {n0, . . . , nk−1} such that P(cn0|xi) ≥· · · ≥P(cnk−1|xi). for j := 0 to k −1 begin if cnj ̸= yi then Add xi to the training data for the class cnj as a negative example. else begin Add xi to the training data for the class cnj as a positive example. break end end end # Train the binary classifiers for j := 0 to k −1 Train the classifier fcj(·) using the training data. end # Test Function of Revision Learning # This function is given a test example and # returns the predicted class of it. # C = {c0, . . . , ck−1}: the set of classes, # x: the test example, # k: the number of classes, # ni: the ordered indexes of C # (see the following code), # fc(·): binary classifier trained with the # algorithm above. function TestRL(x) begin Call the stochastic model to obtain the ordered indexes {n0, . . . , nk−1} such that P(cn0|x) ≥· · · ≥P(cnk−1|x). for j := 0 to k −1 if fcnj (x) > 0 then return cnj return undecidable end Figure 3: Algorithm of Revision Learning pheme dictionary as in Figure 4. Morphological analysis is conducted by choosing the most likely path on it. We adopt HMMs as the stochastic model and SVMs as the binary classifier. For any sub-paths from the beginning of the sentence (BOS) in the lattice, its generative probability can be calculated using HMMs (Nagata, 1999). We first pick up the end node of the sentence as the current state node, and repeat the following revision learning process backward until the beginning of the sentence. Rankings are calculated by HMMs to all the nodes connected to the current state node, and the best of these nodes is identified based on the SVMs classifiers. The selected node then becomes the current state node in the next round. This can be seen as SVMs deciding whether two adjoining nodes in the lattice are connected or not. In Japanese morphological analysis, for any given morpheme µ, we use the following features for the SVMs: 1. the POS tags, the lexical forms and the inflection forms of the two morphemes preceding µ; 2. the POS tags and the lexical forms of the two morphemes following µ; 3. the lexical form and the inflection form of µ. 
The preceding morphemes are unknown because the processing is conducted from the end of the sentence, but HMMs can predict the most likely preceding morphemes, and we use them as the features for the SVMs. English POS tagging is regarded as a special case of morphological analysis where the segmentation is done in advance, and can be conducted in the same way. In English POS tagging, given a word w, we use the following features for the SVMs: 1. the POS tags and the lexical forms of the two words preceding w, which are given by HMMs; 2. the POS tags and the lexical forms of the two words following w; 3. the lexical form of w and the prefixes and suffixes of up to four characters, the exisBOS EOS kinou (yesterday) [noun] ki (tree) [noun] nou (brain) [noun] ki (come) [verb] no [particle] u [auxiliary] gakkou (school) [noun] sentence: ni (to) [particle] ni (resemble) [verb] it (went) [verb] ta [auxiliary] kinou gakkou it ki ki noun verb noun verb noun ... ... Dictionary: Lattice: "kinougakkouniitta (I went to school yesterday)" Figure 4: Example of Lattice for Japanese Morphological Analysis tence of numerals, capital letters and hyphens in w. 5 Experiments This section gives experimental results of English POS tagging and Japanese morphological analysis with revision learning. 5.1 Experiments of English Part-of-Speech Tagging Experiments of English POS tagging with revision learning (RL) are performed on the Penn Treebank WSJ corpus. The corpus is randomly separated into training data of 41,342 sentences and test data of 11,771 sentences. The dictionary for HMMs is constructed from all the words in the training data. T3 of ICOPOST release 0.9.0 (Schr¨oder, 2001) is used as the stochastic model for ranking stage. This is equivalent to POS-based second order HMMs. SVMs with second order polynomial kernel are used as the binary classifier. The results are compared with TnT (Brants, 2000) based on second order HMMs, and with POS tagger using SVMs with one-versus-rest (1v-r) (Nakagawa et al., 2001). The accuracies of those systems for known words, unknown words and all the words are shown in Table 1. The accuracies for both known words and unknown words are improved through revision learning. However, revision learning could not surpass the one-versus-rest. The main difference in the accuracies stems from those for unknown words. The reason for that seems to be that the dictionary of HMMs for POS tagging is obtained from the training data, as a result, virtually no unknown words exist in the training data, and the HMMs never make mistakes for unknown words during the training. So no example of unknown words is available in the training data for the SVM reviser. This is problematic: Though the HMMs handles unknown words with an exceptional method, SVMs cannot learn about errors made by the unknown word processing in the HMMs. To cope with this problem, we force the HMMs to make mistakes by eliminating low frequent words from the dictionary. We eliminated the words appearing only once in the training data so as to make SVMs to learn about unknown words. The results are shown in Table 1 (row “cutoff-1”). Such procedure improves the accuracies for unknown words. One advantage of revision learning is its small computational cost. We compare the computation time with the HMMs and the one-versusrest. We also use SVMs with linear kernel function that has lower capacity but lower computational cost compared to the second order polynomial kernel SVMs. 
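As a concrete illustration of the feature templates listed above for the SVM reviser in English POS tagging, the following hedged Python sketch builds the features for a word w. The feature-string naming scheme and the handling of sentence boundaries are my own assumptions, not taken from the paper.

```python
import re

# Hedged sketch of the reviser features for English POS tagging described
# above: POS tags and lexical forms of the two preceding words (predicted by
# the HMM), the two following words, plus the word itself with its prefixes,
# suffixes and orthographic flags. Names and boundary handling are assumptions.

def reviser_features(words, tags, i):
    w = words[i]
    feats = []
    for k in (1, 2):                                      # two preceding words
        if i - k >= 0:
            feats += ["w-%d=%s" % (k, words[i - k]), "t-%d=%s" % (k, tags[i - k])]
    for k in (1, 2):                                      # two following words
        if i + k < len(words):
            feats += ["w+%d=%s" % (k, words[i + k]), "t+%d=%s" % (k, tags[i + k])]
    feats.append("w=" + w)
    for n in range(1, min(4, len(w)) + 1):                # prefixes/suffixes up to 4 chars
        feats += ["pre%d=%s" % (n, w[:n]), "suf%d=%s" % (n, w[-n:])]
    feats.append("num=%d" % bool(re.search(r"[0-9]", w)))   # contains a numeral
    feats.append("cap=%d" % bool(re.search(r"[A-Z]", w)))   # contains a capital letter
    feats.append("hyp=%d" % ("-" in w))                     # contains a hyphen
    return feats
```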
The experiments are performed on an Alpha 21164A 500MHz processor. Table 2 shows the total number of training examples, training time, testing time and accuracy for each of the five systems. The training time and the testing time of revision learning are considerably smaller than those of the oneversus-rest. Using linear kernel, the accuracy decreases a little, but the computational cost is much lower than the second order polynomial kernel. Accuracy (Known Words / Unknown Words) Number of Errors T3 Original 96.59% (96.90% / 82.74%) 9720 with RL 96.93% (97.23% / 83.55%) 8734 with RL (cutoff-1) 96.98% (97.25% / 85.11%) 8588 TnT 96.62% (96.90% / 84.19%) 9626 SVMs 1-v-r 97.11% (97.34% / 86.80%) 8245 Table 1: Result of English POS Tagging Total Number of Training Time Testing Time Accuracy Examples for SVMs (hour) (second) T3 Original — 0.004 89 96.59% with RL (polynomial kernel, cutoff-1) 1027840 16 2089 96.98% with RL (linear kernel, cutoff-1) 1027840 2 129 96.94% TnT — 0.002 4 96.62% SVMs 1-v-r 999984×50 625 55239 97.11% Table 2: Computational Cost of English POS Tagging 5.2 Experiments of Japanese Morphological Analysis We use the RWCP corpus and some additional spoken language data for the experiments of Japanese morphological analysis. The corpus is randomly separated into training data of 33,831 sentences and test data of 3,758 sentences. As the dictionary for HMMs, we use IPADIC version 2.4.4 with 366,878 morphemes (Matsumoto and Asahara, 2001) which is originally constructed for the Japanese morphological analyzer ChaSen (Matsumoto et al., 2001). A POS bigram model and ChaSen version 2.2.8 based on variable length HMMs are used as the stochastic models for the ranking stage, and SVMs with the second order polynomial kernel are used as the binary classifier. We use the following values to evaluate Japanese morphological analysis: recall = ⟨# of correct morphemes in system’s output⟩ ⟨# of morphemes in test data⟩ , precision = ⟨# of correct morphemes in system’s output⟩ ⟨# of morphemes in system’s output⟩ , F-measure = 2 × recall × precision recall + precision . The results of the original systems and those with revision learning are shown in Table 3, which provides the recalls, precisions and Fmeasures for two cases, namely segmentation (i.e. segmentation of the sentences into morphemes) and tagging (i.e. segmentation and POS tagging). The one-versus-rest method is not used because it is not applicable to morphological analysis of non-segmented languages directly. When revision learning is used, all the measures are improved for both POS bigram and ChaSen. Improvement is particularly clear for the tagging task. The numbers of correct morphemes for each POS category tag in the output of ChaSen with and without revision learning are shown in Table 4. Many particles are correctly revised by revision learning. The reason is that the POS tags for particles are often affected by the following words in Japanese, and SVMs can revise such particles because it uses the lexical forms of the following words as the features. This is the advantage of our method compared to simple HMMs, because HMMs have difficulty in handling a lot of features such as the lexical forms of words. 6 Related Works Our proposal is to revise the outputs of a stochastic model using binary classifiers. 
Brill studied transformation-based error-driven learning (TBL) (Brill, 1995), which conducts POS tagging by applying the transformation rules to the POS tags of a given sentence, and has a resemblance to revision learning in that the second model revises the output of the first model. Word Segmentation Tagging Training Testing Time Time Recall Precision F-measure Recall Precision F-measure (hour) (second) POS Original 98.06% 98.77% 98.42% 95.61% 96.30% 95.96% 0.02 8 bigram with RL 99.06% 99.27% 99.16% 98.13% 98.33% 98.23% 11 184 ChaSen Original 99.06% 99.20% 99.13% 97.67% 97.81% 97.74% 0.05 15 with RL 99.22% 99.34% 99.28% 98.26% 98.37% 98.32% 6 573 Table 3: Result of Morphological Analysis Part-of-Speech # in Test Data Original with RL Difference Noun 41512 40355 40556 +201 Prefix 817 781 784 +3 Verb 8205 8076 8115 +39 Adjective 678 632 655 +23 Adverb 779 735 750 +15 Adnominal 378 373 373 0 Conjunction 258 243 243 0 Particle 20298 19686 19942 +256 Auxiliary 4419 4333 4336 +3 Interjection 94 90 91 +1 Symbol 15665 15647 15651 +4 Others 1 1 1 0 Filler 43 36 36 0 Table 4: The Number of Correctly Tagged Morphemes for Each POS Category Tag However, our method differs from TBL in two ways. First, our revision learner simply answers whether a given pattern is correct or not, and any types of binary classifiers are applicable. Second, in our model, the second learner is applied to the output of the first learner only once. In contrast, rewriting rules are applied repeatedly in the TBL. Recently, combinations of multiple learners have been studied to achieve high performance (Alpaydm, 1998). Such methodologies to combine multiple learners can be distinguished into two approaches: one is the multi-expert method and the other is the multi-stage method. In the former, each learner is trained and answers independently, and the final decision is made based on those answers. In the latter, the multiple learners are ordered in series, and each learner is trained and answers only if the previous learner rejects the examples. Revision learning belongs to the latter approach. In POS tagging, some studies using the multi-expert method were conducted (van Halteren et al., 2001; M`arquez et al., 1999), and Brill and Wu (1998) combined maximum entropy models, TBL, unigram and trigram, and achieved higher accuracy than any of the four learners (97.2% for WSJ corpus). Regarding the multi-stage methods, cascading (Alpaydin and Kaynak, 1998) is well known, and Even-Zohar and Roth (2001) proposed the sequential learning model and applied it to POS tagging. Their methods differ from revision learning in that each learner behaves in the same way and more than one learner is used in their methods, but in revision learning the stochastic model assigns rankings to candidates and the binary classifier selects the output. Furthermore, mistakes made by a former learner are fatal in their methods, but is not so in revision learning because the binary classifier works to revise them. The advantage of the multi-expert method is that each learner can help each other even if it has some weakness, and generalization errors can be decreased. On the other hand, the computational cost becomes large because each learner is trained using every training data and answers for every test data. In contrast, multi-stage methods can decrease the computational cost, and seem to be effective when a large amount of data is used or when a learner with high computational cost such as SVMs is used. 
7 Conclusion In this paper, we proposed the revision learning method which combines a stochastic model and a binary classifier to achieve higher performance with lower computational cost. We applied it to English POS tagging and Japanese morphological analysis, and showed improvement of accuracy with small computational cost. Compared to the conventional one-versus-rest method, revision learning has much lower computational cost with almost comparable accuracy. Furthermore, it can be applied not only to a simple multi-class classification task but also to a wider variety of problems such as Japanese morphological analysis. Acknowledgments We would like to thank Ingo Schr¨oder for making ICOPOST publicly available. References Erin L. Allwein, Robert E. Schapire, and Yoram Singer. 2000. Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers. In Proceedings of 17th International Conference on Machine Learning, pages 9–16. Ethem Alpaydin and Cenk Kaynak. 1998. Cascading Classifiers. Kybernetika, 34(4):369–374. Ethem Alpaydm. 1998. Techniques for Combining Multiple Learners. In Proceedings of Engineering of Intelligent Systems ’98 Conference. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39–71. Thorsten Brants. 2000. TnT — A Statistical Part-of-Speech Tagger. In Proceedings of ANLPNAACL 2000, pages 224–231. Leo Breiman, Jerome H. Friedman, Richard A. Olshen, and Charles J. Stone. 1984. Classification and Regression Trees. Wadsworth and Brooks. Eric Brill and Jun Wu. 1998. Classifier Combination for Improved Lexical Disambiguation. In Proceedings of the Thirty-Sixth Annual Meeting of the Association for Computational Linguistics and Seventeenth International Conference on Computational Linguistics, pages 191–195. Eric Brill. 1995. Transformation-Based ErrorDriven Learning and Natural Language Processing: A Case Study in Part-of-Speech Tagging. Computational Linguistics, 21(4):543–565. Corinna Cortes and Vladimir Vapnik. 1995. Support Vector Networks. Machine Learning, 20:273–297. Yair Even-Zohar and Dan Roth. 2001. A Sequential Model for Multi-Class Classification. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 10–19. Thorsten Joachims. 1998. Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proceedings of the 10th European Conference on Machine Learning, pages 137–142. Taku Kudoh and Yuji Matsumoto. 2000. Use of Support Vector Learning for Chunk Identification. In Proceedings of the Fourth Conference on Computational Natural Language Learning, pages 142– 144. Llui´ıs M`arquez, Horacio Rodr´ıguez, Josep Carmona, and Josep Montolio. 1999. Improving POS Tagging Using Machine-Learning Techniques. In Proceedings of 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 53–62. Yuji Matsumoto and Masayuki Asahara. 2001. IPADIC User’s Manual version 2.2.4. Nara Institute of Science and Technology. (in Japanese). Yuji Matsumoto, Akira Kitauchi, Tatsuo Yamashita, Yoshitaka Hirano, Hiroshi Matsuda, Kazuma Takaoka, and Masayuki Asahara. 2001. Morphological Analysis System ChaSen version 2.2.8 Manual. Nara Institute of Science and Technology. Masaaki Nagata. 1999. Japanese Language Processing Based on Stochastic Models. Kyoto University, Doctoral Thesis. (in Japanese). Tetsuji Nakagawa, Taku Kudoh, and Yuji Matsumoto. 2001. 
Unknown Word Guessing and Part-of-Speech Tagging Using Support Vector Machines. In Proceedings of 6th Natural Language Processing Pacific Rim Symposium, pages 325– 331. Lawrence R. Rabiner and Biing-Hwang Juang. 1993. Fundamentals of Speech Recognition. PTR Prentice-Hall. Ingo Schr¨oder. 2001. ICOPOST — Ingo’s Collection Of POS Taggers. http://nats-www.informatik.uni-hamburg.de /~ingo/icopost/. Hans van Halteren, Jakub Zavrel, and Walter Daelemans. 2001. Improving Accuracy in Wordclass Tagging through Combination of Machine Learning Systems. Computational Linguistics, 27(2):199–230. Vladimir Vapnik. 1998. Statistical Learning Theory. Springer.
An Empirical Study of Active Learning with Support Vector Machines for Japanese Word Segmentation Manabu Sassano Fujitsu Laboratories Ltd. 4-1-1, Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan [email protected]

Abstract We explore how well active learning with Support Vector Machines works for a non-trivial task in natural language processing. We use Japanese word segmentation as a test case. In particular, we discuss how the size of a pool affects the learning curve. It is found that in the early stage of training with a larger pool, more labeled examples are required to achieve a given level of accuracy than with a smaller pool. In addition, we propose a novel technique to use a large number of unlabeled examples effectively by adding them gradually to a pool. The experimental results show that our technique requires fewer labeled examples than the technique in previous research. To achieve 97.0 % accuracy, the proposed technique needs 59.3 % of the labeled examples required by the previous technique and only 17.4 % of those required with random sampling.

1 Introduction Corpus-based supervised learning is now a standard approach to achieving high performance in natural language processing. However, the weakness of the supervised learning approach is that it needs an annotated corpus of reasonably large size. Even if we have a good supervised learning method, we cannot get high performance without an annotated corpus. The problem is that corpus annotation is labour intensive and very expensive. In order to overcome this, some unsupervised learning methods and minimally supervised methods, e.g., (Yarowsky, 1995; Yarowsky and Wicentowski, 2000), have been proposed. However, such methods usually depend on tasks or domains, and their performance often does not match that of a supervised learning method. Another promising approach is active learning, in which a classifier selects examples to be labeled and then requests a teacher to label them. It is very different from passive learning, in which a classifier gets labeled examples at random. Active learning is a general framework and does not depend on tasks or domains. It is expected that active learning will considerably reduce manual annotation cost while maintaining performance. However, few papers in the field of computational linguistics have focused on this approach (Dagan and Engelson, 1995; Thompson et al., 1999; Ngai and Yarowsky, 2000; Hwa, 2000; Banko and Brill, 2001). Although there are many active learning methods for various classifiers, such as a probabilistic classifier (McCallum and Nigam, 1998), we focus on active learning with Support Vector Machines (SVMs) because of their performance. The Support Vector Machine, introduced by Vapnik (1995), is a powerful new statistical learning method. Excellent performance has been reported in hand-written character recognition, face detection, image classification, and so forth. SVMs have recently been applied to several natural language tasks, including text classification (Joachims, 1998; Dumais et al., 1998), chunking (Kudo and Matsumoto, 2000b; Kudo and Matsumoto, 2001), and dependency analysis (Kudo and Matsumoto, 2000a). SVMs have been greatly successful in such tasks. Additionally, SVMs, as well as boosting, have a good theoretical background.
The objective of our research is to develop an effective way to build a corpus and to create high-performance NL systems with minimal cost. As a first step, we focus on investigating how active learning with SVMs, which have demonstrated excellent performance, works for complex tasks in natural language processing. For text classification, this approach has been found to be effective (Tong and Koller, 2000; Schohn and Cohn, 2000). Those studies used fewer than 10,000 binary features and fewer than 10,000 examples. However, it is not clear that the approach is readily applicable to tasks which have more than 100,000 features and more than 100,000 examples. We use Japanese word segmentation as a test case. The task is suitable for our purpose because we have to handle combinations of more than 1,000 characters, and a very large corpus (EDR, 1995) exists.

2 Support Vector Machines In this section we give some theoretical definitions of SVMs. Assume that we are given the training data

$(x_1, y_1), \ldots, (x_l, y_l), \quad x_i \in \mathbb{R}^n, \; y_i \in \{+1, -1\}.$

The decision function $g$ in the SVM framework is defined as:

$g(x) = \operatorname{sgn}(f(x))$  (1)

$f(x) = \sum_{i=1}^{l} y_i \alpha_i K(x_i, x) + b$  (2)

where $K$ is a kernel function, $b \in \mathbb{R}$ is a threshold, and the $\alpha_i$ are weights. In addition, the $\alpha_i$ satisfy the following constraints:

$0 \le \alpha_i \le C \;\; \forall i, \quad \text{and} \quad \sum_{i=1}^{l} \alpha_i y_i = 0,$

where $C$ is a misclassification cost. The $x_i$ with non-zero $\alpha_i$ are called Support Vectors. For linear SVMs, the kernel function $K$ is defined as $K(x_i, x) = x_i \cdot x$. In this case, Equation 2 can be written as:

$f(x) = w \cdot x + b$  (3)

where $w = \sum_{i=1}^{l} y_i \alpha_i x_i$. To train an SVM is to find the $\alpha_i$ and $b$ by solving the following optimization problem:

maximize $\sum_{i=1}^{l} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{l} \alpha_i \alpha_j y_i y_j K(x_i, x_j)$
subject to $0 \le \alpha_i \le C \;\; \forall i \quad \text{and} \quad \sum_{i=1}^{l} \alpha_i y_i = 0.$

3 Active Learning for Support Vector Machines 3.1 General Framework of Active Learning We use pool-based active learning (Lewis and Gale, 1994). SVMs are used here instead of the probabilistic classifiers used by Lewis and Gale. Figure 1 shows an algorithm of pool-based active learning.1 There can be various forms of the algorithm depending on what kind of example is considered informative.

Figure 1: Algorithm of pool-based active learning
1. Build an initial classifier.
2. While a teacher can label examples:
(a) Apply the current classifier to each unlabeled example.
(b) Find the m examples which are most informative for the classifier.
(c) Have the teacher label the subsample of m examples.
(d) Train a new classifier on all labeled examples.

1 The figure described here is based on the algorithm by Lewis and Gale (1994) for their sequential sampling algorithm.

3.2 Previous Algorithm Two groups have proposed an algorithm for SVM active learning (Tong and Koller, 2000; Schohn and Cohn, 2000).2 Figure 2 shows the selection algorithm proposed by them. It corresponds to steps (a) and (b) in Figure 1.

2 Tong and Koller (2000) propose three selection algorithms. The method described here is the simplest and is computationally efficient.

Figure 2: Selection Algorithm
1. Compute f(x_i) (Equation 2) over all x_i in a pool.
2. Sort the x_i by |f(x_i)| in increasing order.
3. Select the top m examples.

Figure 3: Outline of Two Pool Algorithm
1. Build an initial classifier.
2. While a teacher can label examples:
(a) Select m examples using the algorithm in Figure 2.
(b) Have the teacher label the subsample of m examples.
(c) Train a new classifier on all labeled examples.
(d) Add new unlabeled examples to the primary pool if a specified condition is true.
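As an illustration of the selection step in Figure 2, the sketch below picks the m pool examples with the smallest |f(x)|, i.e. those closest to the current decision boundary. The `decision_function` method is an assumed SVM interface, not code from the paper.

```python
# Hedged sketch of the selection algorithm in Figure 2: the m unlabeled
# examples with the smallest |f(x)| (closest to the SVM hyperplane) are
# treated as the most informative. `svm.decision_function` is an assumed
# interface returning f(x) for each pool example.

def select_informative(svm, pool, m):
    scores = svm.decision_function(pool)                # f(x_i) for every x_i in the pool
    order = sorted(range(len(pool)), key=lambda i: abs(scores[i]))
    return order[:m]                                    # indices of the m selected examples
```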
3.3 Two Pool Algorithm We observed in our experiments that when using the algorithm of the previous section, in the early stage of training a classifier with a larger pool requires more examples than one with a smaller pool does (described in Section 5). In order to overcome this weakness, we propose two new algorithms, which we generically call the "Two Pool Algorithm". It has two pools, i.e., a primary pool and a secondary one, and gradually moves unlabeled examples from the secondary pool to the primary pool instead of using a large pool from the start of training. The primary pool is used directly for the selection of examples which a teacher is requested to label, whereas the secondary is not. The basic idea is simple. Since we cannot get good performance when using a large pool at the beginning of training, we gradually enlarge the pool of unlabeled examples. The outline of the Two Pool Algorithm is shown in Figure 3. We describe below two variations, which differ in the condition at (d) in Figure 3. Our first variation, called Two Pool Algorithm A, adds new unlabeled examples to the primary pool when the rate of increase of support vectors in the current classifier goes down, because the gain in accuracy is very small once this rate drops. This phenomenon is observed in our experiments (Section 5). This observation has also been reported in previous studies (Schohn and Cohn, 2000). In the Two Pool Algorithm we add new unlabeled examples so that the total number of examples, including both labeled examples in the training set and unlabeled examples in the primary pool, is doubled. For example, suppose that the size of an initial primary pool is 1,000 examples. Before training starts, there are no labeled examples and 1,000 unlabeled examples. We add 1,000 new unlabeled examples to the primary pool when the rate of increase of support vectors drops after t examples have been labeled. Then there are t labeled examples and (2,000 - t) unlabeled examples in the primary pool. The next time we add new unlabeled examples, the number of newly added examples is 2,000, and the total number of labeled examples in the training set and unlabeled examples in the primary pool becomes 4,000. Our second variation, called Two Pool Algorithm B, adds new unlabeled examples to the primary pool when the number of support vectors of the current classifier exceeds a threshold d. The d is defined as:

$d = \frac{N \delta}{100}, \quad 0 < \delta \le 100$  (4)

where δ is a parameter for deciding when unlabeled examples are added to the primary pool and N is the number of examples, including both labeled examples in the training set and unlabeled ones in the primary pool. The δ must be less than the percentage of support vectors of a training set.3 When deciding how many unlabeled examples should be added to the primary pool, we use the strategy described in the paragraph above.

3 Since typically the percentage of support vectors is small (e.g., less than 30 %), we choose around 10 % for δ. We need further studies to find the best value of δ before or during training.
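A minimal sketch of the growth condition used at step (d) by Two Pool Algorithm B follows, under the assumption that the pools are held as Python lists. The doubling strategy for how many examples to move follows the paragraph above; all names are illustrative.

```python
# Hedged sketch of Two Pool Algorithm B's condition at step (d) of Figure 3:
# when the number of support vectors exceeds d = N * delta / 100 (Equation 4),
# move enough examples from the secondary pool to the primary pool to double
# N, where N counts labeled training examples plus unlabeled examples in the
# primary pool. Data structures and names are illustrative assumptions.

def grow_primary_pool(n_support_vectors, n_labeled, primary_pool, secondary_pool, delta=10.0):
    n = n_labeled + len(primary_pool)
    d = n * delta / 100.0                           # Equation 4
    if n_support_vectors > d and secondary_pool:
        n_to_move = min(n, len(secondary_pool))     # add enough examples to double N
        primary_pool.extend(secondary_pool[:n_to_move])
        del secondary_pool[:n_to_move]
    return primary_pool, secondary_pool
```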
4 Japanese Word Segmentation 4.1 Word Segmentation as a Classification Task Many tasks in natural language processing can be formulated as classification tasks (van den Bosch et al., 1996). Japanese word segmentation can be viewed in the same way, too (Shinnou, 2000). Let a Japanese character sequence be s = c_1 c_2 ... c_m, and let a boundary b_i exist between c_i and c_{i+1}. The b_i is either +1 (word boundary) or -1 (non-boundary). The word segmentation task can then be defined as determining the class of each b_i. We use an SVM to determine it.

4.2 Features We assume that each character c_i has two attributes. The first attribute is a character type (t_i). It can be hiragana,4 katakana, kanji (Chinese characters), numbers, English letters, kanji-numbers (numbers written in Chinese), or symbols. A character type gives some hints for segmenting a Japanese sentence into words. For example, kanji is mainly used to represent nouns or stems of verbs and adjectives. It is never used for particles, which are always written in hiragana. Therefore, it is more probable that a boundary exists between a kanji character and a hiragana character. Of course, there are quite a few exceptions to this heuristic. For example, some proper nouns are written in mixed hiragana, kanji and katakana.

4 Hiragana and katakana are phonetic characters which represent Japanese syllables. Katakana is primarily used to write foreign words.

The second attribute is a character code (k_i). The range of a character code is from 1 to 6,879. JIS X 0208, which is one of the Japanese character set standards, enumerates 6,879 characters. We use four characters to decide a word boundary: the set of attributes of c_{i-1}, c_i, c_{i+1} and c_{i+2} is used to predict the label of b_i. The set consists of twenty attributes: ten for the character type (t_{i-1} t_i t_{i+1} t_{i+2}, t_{i-1} t_i t_{i+1}, t_{i-1} t_i, t_{i-1}, t_i t_{i+1} t_{i+2}, t_i t_{i+1}, t_i, t_{i+1} t_{i+2}, t_{i+1}, t_{i+2}), and another ten for the character code (k_{i-1} k_i k_{i+1} k_{i+2}, k_{i-1} k_i k_{i+1}, k_{i-1} k_i, k_{i-1}, k_i k_{i+1} k_{i+2}, k_i k_{i+1}, k_i, k_{i+1} k_{i+2}, k_{i+1}, and k_{i+2}).
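The twenty attributes above can be sketched as follows. The character-type classifier is a simplified stand-in for the character types used in the paper, and the feature-string encoding and boundary handling are assumptions.

```python
# Hedged sketch of the boundary features described in Section 4.2: ten
# character-type n-grams and ten character n-grams over the window
# c_{i-1} c_i c_{i+1} c_{i+2} around the boundary b_i between c_i and c_{i+1}.
# char_type() is a simplified stand-in for the character types in the paper.

def char_type(c):
    if c.isdigit():
        return "num"
    if "\u3041" <= c <= "\u3096":          # hiragana
        return "hira"
    if "\u30a1" <= c <= "\u30fa":          # katakana
        return "kata"
    if "\u4e00" <= c <= "\u9fff":          # kanji
        return "kanji"
    if c.isascii() and c.isalpha():
        return "alpha"
    return "symbol"

# (start, end) offsets of the ten substrings relative to c_i, end exclusive
SPANS = [(-1, 3), (-1, 2), (-1, 1), (-1, 0), (0, 3), (0, 2), (0, 1), (1, 3), (1, 2), (2, 3)]

def boundary_features(s, i):
    """Features for the boundary b_i between s[i] and s[i+1] (0-based i)."""
    pad = "#"                              # sentence-edge padding (an assumption)
    chars = {off: (s[i + off] if 0 <= i + off < len(s) else pad) for off in (-1, 0, 1, 2)}
    feats = []
    for a, b in SPANS:
        window = "".join(chars[o] for o in range(a, b))
        feats.append("k:%d:%d=%s" % (a, b, window))                                  # character n-gram
        feats.append("t:%d:%d=%s" % (a, b, "".join(char_type(c) for c in window)))   # type n-gram
    return feats
```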
5 Experimental Results and Discussion We used the EDR Japanese Corpus (EDR, 1995) for the experiments. The corpus is assembled from various sources such as newspapers, magazines, and textbooks. It contains 208,000 sentences. We randomly selected 20,000 sentences for training and 10,000 sentences for testing. Then, we created examples using the feature encoding method in Section 4. Throughout these experiments we used our original SVM tools, the algorithm of which is based on SMO (Sequential Minimal Optimization) by Platt (1999). We used linear SVMs and set the misclassification cost C to 0.2. First, we varied the number of labeled examples, which were randomly selected. This is an experiment on passive learning. Table 2 shows the accuracy at different sizes of labeled examples.

Table 2: Accuracy at Different Labeled Data Sizes with Random Sampling
# of Sentences | # of Examples | # of Binary Features | Accuracy (%)
21 | 813 | 5896 | 89.07
41 | 1525 | 10224 | 90.30
81 | 3189 | 18672 | 91.65
162 | 6167 | 32258 | 92.93
313 | 12218 | 56202 | 93.89
625 | 24488 | 98561 | 94.73
1250 | 48701 | 168478 | 95.46
2500 | 97349 | 288697 | 96.10
5000 | 194785 | 493942 | 96.66
10000 | 387345 | 827023 | 97.10
20000 | 776586 | 1376244 | 97.40

Second, we varied the number of examples in a pool and ran the active learning algorithm in Section 3.2. We use the same examples for a pool as those used in the passive learning experiments. We selected 1,000 examples at each iteration of the active learning. Figure 4 shows the learning curve of this experiment and Figure 5 is a close-up of Figure 4. We see from Figure 4 that active learning works quite well and significantly reduces the number of labeled examples required. Let us see how many labeled examples are required to achieve 96.0 % accuracy. In active learning with the pool whose size is 2,500 sentences (97,349 examples), only 28,813 labeled examples are needed, whereas in passive learning, about 97,000 examples are required. That means over 70 % reduction is realized by active learning. In the case of 97 % accuracy, approximately the same percentage of reduction is realized when using the pool whose size is 20,000 sentences (776,586 examples). Now let us see how the accuracy curve varies depending on the size of a pool. Surprisingly, the performance of a larger pool is worse than that of a smaller pool in the early stage of training.5 One reason for this could be that the support vectors among the examples selected at each iteration from a larger pool form larger clusters than those selected from a smaller pool. In other words, in the case of a larger pool, more of the examples selected at each iteration would be similar to each other. We computed the variances6 of each set of 1,000 selected examples at learning iterations 2 to 11 (Table 1). The variances of the examples selected using the 20,000 sentence size pool are always lower than those using the 1,250 sentence size pool. The result is not inconsistent with our hypothesis.

5 Tong and Koller (2000) obtained similar results in a text classification task with two small pools: 500 and 1000. However, they concluded that a larger pool is better than a smaller one because the final accuracy of the former is higher than that of the latter.
6 The variance $\sigma^2$ of a set of selected examples $x_i$ is defined as $\sigma^2 = \frac{1}{n} \sum_{i=1}^{n} \lVert x_i - m \rVert^2$, where $m = \frac{1}{n} \sum_{i=1}^{n} x_i$ and $n$ is the number of selected examples.

Table 1: Variances of Selected Examples
Iteration | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11
1,250 Sent. Size Pool | 16.87 | 17.25 | 17.85 | 17.63 | 17.24 | 17.37 | 17.34 | 17.73 | 17.94 | 17.57
20,000 Sent. Size Pool | 16.66 | 17.03 | 16.92 | 16.75 | 16.80 | 16.72 | 16.91 | 16.93 | 16.87 | 16.97

Before we discuss the results of the Two Pool Algorithm, we show in Figure 6 how the support vectors of a classifier increase and how the accuracy changes when using the 2,500 sentence size pool. It is clear that after the accuracy improvement almost stops, the increase in the number of support vectors slows down. We also observed the same phenomenon with different sizes of pools. We utilize this phenomenon in Algorithm A. Next, we ran Two Pool Algorithm A.7 The result is shown in Figure 7. The accuracy curve of Algorithm A is better than that of the previously proposed method for numbers of labeled examples up to roughly 20,000. After that, however, the performance of Algorithm A does not clearly exceed that of the previous method.

7 In order to stabilize the algorithm, we use the following strategy at (d) in Figure 3: add new unlabeled examples to the primary pool when the current increment of support vectors is less than half of the average increment.

The result of Algorithm B is shown in Figure 8. We tried three values for δ: 5 %, 10 %, and 20 %. The performance with a δ of 10 %, which is best, is plotted in Figure 8. As noted above, the improvement by Algorithm A is limited, whereas it is remarkable that the accuracy curve of Algorithm B is always the same as or better than those of the previous algorithm with different sizes of pools (detailed performance figures are shown in Table 3). To achieve 97.0 % accuracy Algorithm B requires only 59,813 labeled examples, while passive learning requires about 343,000 labeled examples8 and the previous method with the 200,000 sentence size pool requires 100,813.
That means 82.6 % and 40.7 % reductions compared to passive learning and to the previous method with the 200,000 sentence size pool, respectively.

8 We computed this by simple interpolation.

Table 3: Accuracy of Different Active Learning Algorithms
# of Ex. | Algo. A | Algo. B | 1250 Sent. Size Pool | 5,000 Sent. Size Pool | 20,000 Sent. Size Pool
813 | 89.07 | 89.07 | 89.07 | 89.07 | 89.07
1813 | 91.70 | 91.70 | 91.48 | 90.89 | 90.61
3813 | 93.82 | 93.82 | 93.60 | 93.11 | 92.42
6813 | 94.62 | 94.93 | 94.90 | 94.23 | 93.53
12813 | 95.24 | 95.87 | 95.29 | 95.42 | 94.82
24813 | 95.98 | 96.43 | 95.46 | 96.20 | 95.80
48813 | 96.51 | 96.88 | 96.51 | 96.62 |

Figure 4: Accuracy Curve with Different Pool Sizes (accuracy vs. number of labeled examples for passive random sampling and active learning with 1250, 2500, 5000 and 20,000 sentence size pools)
Figure 5: Accuracy Curve with Different Pool Sizes (close-up)
Figure 6: Change of Accuracy and Number of Support Vectors of Active Learning with 2500 Sentence Size Pool
Figure 7: Accuracy Curve of Algorithm A
Figure 8: Accuracy Curve of Algorithm B

6 Conclusion To our knowledge, this is the first paper that reports empirical results of active learning with SVMs for a task in natural language processing more complex than text classification. The experimental results show that SVM active learning works well for Japanese word segmentation, which is one such complex task, and that the naive use of a large pool with the previous method of SVM active learning is less effective. In addition, we have proposed a novel technique to improve the learning curve when using a large number of unlabeled examples and have evaluated it on Japanese word segmentation. Our technique outperforms the method in previous research and can significantly reduce the number of labeled examples required to achieve a given level of accuracy.

References
Michele Banko and Eric Brill. 2001. Scaling to very very large corpora for natural language disambiguation. In Proceedings of ACL-2001, pages 26-33.
Ido Dagan and Sean P. Engelson. 1995. Committee-based sampling for training probabilistic classifiers. In Proceedings of the Twelfth International Conference on Machine Learning, pages 150-157.
Susan Dumais, John Platt, David Heckerman, and Mehran Sahami. 1998. Inductive learning algorithms and representations for text categorization. In Proceedings of the ACM CIKM International Conference on Information and Knowledge Management, pages 148-155.
EDR (Japan Electoric Dictionary Research Institute), 1995. EDR Electoric Dictionary Technical Guide. Rebecca Hwa. 2000. Sample selection for statitical grammar induction. In Proceedings of EMNLP/VLC 2000, pages 45–52. Thorsten Joachims. 1998. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the European Conference on Machine Learning. Taku Kudo and Yuji Matsumoto. 2000a. Japanese dependency structure analysis based on support vector machines. In Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 18–25. Taku Kudo and Yuji Matsumoto. 2000b. Use of support vector learning for chunk identification. In Proceedings of the 4th Conference on CoNLL-2000 and LLL2000, pages 142–144. Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL 2001, pages 192–199. David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proceedings of the Seventeenth Annual International ACM-SIGIR Conference on Research and Development in Information Rettrieval, pages 3–12. Andrew Kachites McCallum and Kamal Nigam. 1998. Employing EM and pool-based active learning for text classification. In Proceedings of the Fifteenth International Conference on Machine Learning, pages 359– 367. Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking. In Proceedings of ACL-2000, pages 117–216. John C. Platt. 1999. Fast training of support vector machines using sequential minimal optimization. In Bernhard Sch¨olkopf, Christopher J.C. Burges, and Alexander J. Smola, editors, Advances in Kernel Methods: Support Vector Learning, pages 185–208. MIT Press. Greg Schohn and David Cohn. 2000. Less is more: Active learning with support vector machines. In Proceedings of the Seventeenth International Conference on Machine Learning. Hiroyuki Shinnou. 2000. Deterministic Japanese word segmentation by decision list method. In Proceedings of the Sixth Pacific Rim International Conference on Artificial Intelligence, page 822. Cynthia A. Thompson, Mary Leaine Califf, and Raymond J. Mooney. 1999. Active learning for natural language parsing and information extraction. In Proceedings of the Sixteenth International Conference on Machine Learning, pages 406–414. Simon Tong and Daphne Koller. 2000. Support vector machine active learning with applications to text classification. In Proceedings of the Seventeenth International Conference on Machine Learning. Antal van den Bosch, Walter Daelemans, and Ton Weijters. 1996. Morphological analysis as classification: an inductive-learning approach. In Proceedings of the Second International Conference on New Methods in Natural Language Processing, pages 79–89. Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag. David Yarowsky and Richard Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. In Proceedings of ACL-2000, pages 207–216. David Yarowsky. 1995. Unsupervised word sence disambiguation rivaling supvervised methods. In Proceedings of ACL-1995, pages 189–196.
Memory-Based Learning of Morphology with Stochastic Transducers Alexander Clark ISSCO / TIM, University of Geneva UNI-MAIL, Boulevard du Pont-d’Arve, CH-1211 Genève 4, Switzerland [email protected]

Abstract This paper discusses the supervised learning of morphology using stochastic transducers, trained using the Expectation-Maximization (EM) algorithm. Two approaches are presented: first, using the transducers directly to model the process, and secondly, using them to define a similarity measure, related to the Fisher kernel method (Jaakkola and Haussler, 1998), and then applying a Memory-Based Learning (MBL) technique. These are evaluated and compared on data sets from English, German, Slovene and Arabic.

1 Introduction Finite-state methods are in large part adequate to model morphological processes in many languages. A standard methodology is that of two-level morphology (Koskenniemi, 1983), which is capable of handling the complexity of Finnish, though it needs substantial extensions to handle non-concatenative languages such as Arabic (Kiraz, 1994). These models are primarily concerned with the mapping from deep lexical strings to surface strings, and within this framework learning is in general difficult (Itai, 1994). In this paper I present algorithms for learning the finite-state transduction between pairs of uninflected and inflected words, i.e., supervised learning of morphology. The techniques presented here are, however, applicable to learning other types of string transductions. Memory-based techniques, based on principles of non-parametric density estimation, are a powerful form of machine learning well suited to natural language tasks. A particular strength is their ability to model both general rules and specific exceptions in a single framework (van den Bosch and Daelemans, 1999). However, they have generally only been used in supervised learning settings where a class label or tag has been associated with each feature vector. Given these manual or semi-automatic class labels, a set of features and a pre-defined distance function, new instances are classified according to the class label of the closest instance. However, these approaches are not a complete solution to the problem of learning morphology, since they do not directly produce the transduction. The problem must first be converted into an appropriate feature-based representation and classified in some way. The techniques presented here operate directly on sequences of atomic symbols, using a much less articulated representation and much less input information.

2 Stochastic Transducers It is possible to apply the EM algorithm to learn the parameters of stochastic transducers (Ristad, 1997; Casacuberta, 1995; Clark, 2001a). (Clark, 2001a) showed how this approach could be used to learn morphology by starting with a randomly initialized model and using the EM algorithm to find a local maximum of the joint probabilities over the pairs of inflected and uninflected words. In addition, rather than using the EM algorithm to optimize the joint probability, it would be possible to use a gradient descent algorithm to maximize the conditional probability.
The models used here are Stochastic NonDeterministic Finite-State Transducers (FST), or Pair Hidden Markov Models (Durbin et al., 1998), a name that emphasizes the similarity of the training algorithm to the well-known Forward-Backward training algorithm for Hidden Markov Models. Instead of outputting symbols in a single stream, however, as in normal Hidden Markov Models they output them on two separate streams, the left and right streams. In general we could have different left and right alphabets; here we assume they are the same. At each transition the FST may output the same symbol on both streams, a symbol on the left stream only, or a symbol on the right stream only. I call these  ,  and  outputs respectively. For each state  the sum of all these output parameters over the alphabet must be one.        Since we are concerned with finite strings rather than indefinite streams of symbols, we have in addition to the normal initial state   , an explicit end state   , such that the FST terminates when it enters this state. The FST then defines a joint probability distribution on pairs of strings from the alphabet. Though we are more interested in stochastic transductions, which are best represented by the conditional probability of one string given the other, it is more convenient to operate with models of the joint probability, and then to derive the conditional probability as needed later on. It is possible to modify the normal dynamicprogramming training algorithm for HMMs, the Baum-Welch algorithm (Baum and Petrie, 1966) to work with FSTs as well. This algorithm will maximize the joint probability of the training data. We define the forward and backward probabilities as follows. Given two strings    ! ! ! #" and $   ! ! ! $% we define the forward probabilities &('*) +  as the probability that it will start from   and output    ! ! !, .- on the left stream, and $   ! ! ! $/ on the right stream and be in state  , and the backward probabilities 0 ' *) +  as the probability that starting from state  it will output 1-32   ! ! !, 4" , on the right and $,/52   ! ! ! $% on the left and then terminate, ie end in state   . We can calculate these using the following recurrence relations: & ' *) + 6 '87 & ' 7 *) +:9 <;    =<  $/    '87 & ' 7 *) 9  + <;     =   4  '7 & '87 *) 9  +:9 <;    =3 , 4 $/   0 ' *) + > '7 0 ' 7 *) + ?<;  =    $ /2 , =3  '7 0 '7 *) @ + <;   =    .-32 ,  =  ' 7 0 '87 *) ? + ?<;  =    4-32   $ /2 , =3 where, in these models,  4 $/, is zero unless  - is equal to $ / . Instead of the normal twodimensional trellis discussed in standard works on HMMs, which has one dimension corresponding to the current state and one corresponding to the position, we have a three-dimensional trellis, with a dimension for the position in each string. With these modifications, we can use all of the standard HMM algorithms. In particular, we can use this as the basis of a parameter estimation algorithm using the expectation-maximization theorem. We use the forward and backward probabilities to calculate the expected number of times each transition will be taken; at each iteration we set the new values of the parameters to be the appropriately normalized sums of these expectations. Given a FST, and a string  , we often need to find the string $ that maximizes ;    $A . 
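A minimal sketch of the kind of dynamic programme the forward probabilities above require follows, written for a simplified pair-HMM parametrization (a transition probability followed by a state-conditional emission on the left stream, the right stream, or both). It illustrates the recurrence structure rather than reproducing the author's exact model, and all data structures and names are assumptions.

```python
# Hedged sketch of a forward pass for a pair HMM / stochastic transducer over
# a left string x and a right string y, under a simplified parametrization:
# trans[s][s2] is a transition probability, and emit_both / emit_left /
# emit_right are state-conditional emission distributions. This illustrates
# the three-way recurrence described above, not the author's exact model.

def forward(x, y, states, start, trans, emit_both, emit_left, emit_right):
    n, m = len(x), len(y)
    # alpha[i][j][s] = P(emit x[:i] on the left, y[:j] on the right, end in state s)
    alpha = [[{s: 0.0 for s in states} for _ in range(m + 1)] for _ in range(n + 1)]
    alpha[0][0][start] = 1.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i == 0 and j == 0:
                continue                                   # base case already set
            for s2 in states:
                total = 0.0
                for s in states:
                    if i > 0 and j > 0:                    # emit x[i-1] and y[j-1] together
                        total += (alpha[i - 1][j - 1][s] * trans[s][s2]
                                  * emit_both[s2].get((x[i - 1], y[j - 1]), 0.0))
                    if i > 0:                              # emit x[i-1] on the left only
                        total += (alpha[i - 1][j][s] * trans[s][s2]
                                  * emit_left[s2].get(x[i - 1], 0.0))
                    if j > 0:                              # emit y[j-1] on the right only
                        total += (alpha[i][j - 1][s] * trans[s][s2]
                                  * emit_right[s2].get(y[j - 1], 0.0))
                alpha[i][j][s2] = total
    return alpha    # P(x, y) follows from a final transition into the end state
```

The backward probabilities are computed symmetrically, and these sums are polynomial to evaluate; finding the single most likely output string, discussed next, is a different and harder problem.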
This is equivalent to the task of finding the most likely string generated by a HMM, which is NP-hard (Casacuberta and de la Higuera, 2000), but it is possible to sample from the conditional distribution ;  $  B , which allows an efficient stochastic computation. If we consider only what is output on the left stream, the FST is equivalent to a HMM with null transitions corresponding to the , transitions of the FST. We can remove these using standard techniques and then use this to calculate the left backward probabilities for a particular string  : 0' *)  defined as the probability that starting from state  the FST generates .-32   ! ! ! #" on the left and terminates. Then if one samples from the FST, but weights each transition by the appropriate left backward probability, it will be equivalent to sampling from the conditional distribution of   $  B . We can then find the string $ that is most likely given  , by generating randomly from ;  $  B . After we have generated a number of strings, we can sum ;  $  1 for all the observed strings; if the difference between this sum and 1 is less than the maximum value of ;  $  1 we know we have found the most likely $ . In practice, the distributions we are interested in often have a $ with ;  $  1 ! ; in this case we immediately know that we have found the maximum. We then model the morphological process as a transduction from the lemma form to the inflected form, and assume that the model outputs for each input, the output with highest conditional or joint probability with respect to the model. There are a number of reasons why this simple approach will not work: first, for many languages the inflected form is lexically not phonologically specified and thus the model will not be able to identify the correct form; secondly, modelling all of the irregular exceptions in a single transduction is computationally intractable at the moment. One way to improve the efficiency is to use a mixture of models as discussed in (Clark, 2001a), each corresponding to a morphological paradigm. The productivity of each paradigm can be directly modelled, and the class of each lexical item can again be memorized. There are a number of criticisms that can be made of this approach. Many of the models produced merely memorize a pair of strings – this is extremely inefficient. Though the model correctly models the productivity of some morphological classes, it models this directly. A more satisfactory approach would be to have this arise naturally as an emergent property of other aspects of the model. These models may not be able to account for some psycho-linguistic evidence that appears to require some form of proximity or similarity. In the next section I shall present a technique that addresses these problems. 3 Fisher Kernels and Information Geometry The method used is a simple application of the information geometry approach introduced by (Jaakkola and Haussler, 1998) in the field of bio-informatics. The central idea is to use a generative model to extract finite-dimensional features from a symbol sequence. Given a generative model for a string, one can use the sufficient statistics of those generative models as features. The vector of sufficient statistics can be thought of as a finite-dimensional representation of the sequence in terms of the model. This transformation from an unbounded sequence of atomic symbols to a finite-dimensional real vector is very powerful and allows the use of Support Vector Machine techniques for classification. 
(Jaakkola and Haussler, 1998) recommend that instead of using the sufficient statistics, the Fisher scores are used, together with an inner product derived from the Fisher information matrix of the model. The Fisher scores are defined for a data point x and a particular model with parameters θ as

U_x = ∇_θ log P(x | θ)   (1)

The partial derivative of the log likelihood is easy to calculate as a byproduct of the E-step of the EM algorithm, and has the value for HMMs (Jaakkola et al., 2000) of

∂ log P(x | θ) / ∂θ_i = E[c_i | x] / θ_i − E[c_s | x]   (2)

where c_i is the indicator variable for the parameter θ_i and c_s is the indicator variable for the state s from which the transition for θ_i leaves; the last term reflects the constraint that the sum of the parameters must be one. The kernel function is defined as

K(x, y) = U_x^T I^{−1} U_y   (3)

where I is the Fisher information matrix. This kernel function thus defines a distance between elements,

d(x, y) = ( K(x, x) − 2 K(x, y) + K(y, y) )^{1/2}   (4)

This distance in the feature space then defines a pseudo-distance in the example space. The name information geometry, which is sometimes used to describe this approach, derives from a geometrical interpretation of this kernel. For a parametric model with n free parameters, the set of all these models will form a smooth n-dimensional manifold in the space of all distributions. The curvature of this manifold can be described by a Riemannian tensor – this tensor is just the expected Fisher information for that model. It is a tensor because it transforms properly when the parametrization is changed. In spite of this compelling geometric explanation, there are difficulties with using this approach directly. First, the Fisher information matrix cannot be calculated directly, and secondly, in natural language applications, unlike in bio-informatic applications, we have the perennial problem of data sparsity, which means that unlikely events occur frequently. This means that the scaling in the Fisher scores gives extremely high weights to these rare events, which can skew the results. Accordingly this work uses the unscaled sufficient statistics. This is demonstrated below.

4 Details

Given a transducer that models the transduction from uninflected to inflected words, we can extract the sufficient statistics from the model in two ways. We can consider the statistics of the joint model P(x, y | θ) or the statistics of the conditional model P(y | x, θ). Here we have used the conditional model, since we are interested primarily in the change of the stem, and not the parts of the stem that remain unchanged. It is thus possible to use either the features of the joint model or of the conditional model, and it is also possible to either scale the features or not, by dividing by the parameter value as in Equation 2. The second term in Equation 2, corresponding to the normalization, can be neglected. We thus have four possible features that are compared on one of the data sets in Table 4. Based on the performance here we have chosen the unscaled conditional sufficient statistics for the rest of the experiments presented here, which are calculated thus:

u_i(x, y) = E[c_i | x, y] − E[c_i | x]   (5)

y        P(y | x)   d       Closest
6pl3Id   0.313      1.46    pl3, pl3d
6pl3d    0.223      0.678   s6pl3, s6pl3d
6pld     0.0907     1.36    s6pl3, s6pl3d
6pl3It   0.0884     1.67    p6f, p6ft
6pl3t    0.0632     1.33    p6f, p6ft

Table 1: Example of the MBL technique for the past tense of apply (6pl3). This example shows that the most likely transduction is the suffix Id, which is incorrect, but the MBL approach gives the correct result in line 2.
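As a sketch of how Equations (4) and (5) are used together (illustrative only; the expected-count dictionaries are assumed to come from the forward-backward computation, and the identity matrix stands in for the Fisher information, since the text argues for using the unscaled statistics):

```python
def unscaled_conditional_features(joint_counts, marginal_counts, param_ids):
    """Equation (5): E[c_i | x, y] - E[c_i | x] for every parameter i.

    joint_counts[p]    ~ expected uses of parameter p when the transducer is
                         constrained to read x on the left and write y on the right.
    marginal_counts[p] ~ expected uses of p when it is only constrained to
                         read x (summing over all possible outputs).
    """
    return [joint_counts.get(p, 0.0) - marginal_counts.get(p, 0.0)
            for p in param_ids]


def feature_distance(u, v):
    """Equation (4) with the Fisher information replaced by the identity,
    i.e. plain Euclidean distance in the feature space."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```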
Given an input string x we want to find the string y such that the pair (x, y) is very close to some element of the training data. We can do this in a number of different ways. Clearly if x is already in the training set, then the distance will be minimized by choosing y to be one of the outputs that is stored for input x; the distance in this case will be zero. Otherwise we sample repeatedly (here we have taken 100 samples) from the conditional distribution of each of the submodels. This in practice seems to give good results, though there are more principled criteria that could be applied. We give a concrete example using the LING English past tense data set described below. Given an unseen verb in its base form, for example apply, in phonetic transcription 6pl3, we generate 100 samples from the conditional distribution. The five most likely of these are shown in Table 1, together with the conditional probability, the distance to the closest example and the closest example. We are using a k-nearest-neighbour rule with k = 1, since there are irregular words that have completely idiosyncratic inflected forms. It would be possible to use a larger value of k, which might help with robustness, particularly if the token frequency was also used, since irregular words tend to be more common. In summary, the algorithm proceeds as follows: we train a small stochastic transducer on the pairs of strings using the EM algorithm; we derive from this model a distance function between two pairs of strings that is sensitive to the properties of this transduction; we store all of the observed pairs of strings; given a new word, we sample repeatedly from the conditional distribution to get a set of possible outputs; and we select the output such that the input/output pair is closest to one of the observed pairs.

5 Experiments

5.1 Data Sets

The data sets used in the experiments are summarized in Table 2. A few additional comments follow. LING: these are in UNIBET phonetic transcription. EPT: in SAMPA transcription; the training data consists of all of the verbs with a non-zero lemma spoken frequency in the 1.3 million word COBUILD corpus, and the test data consists of all the remaining verbs. This is intended to more accurately reflect the situation of an infant learner. GP: a data set of pairs of German nouns in singular and plural form prepared from the CELEX lexical database. NAKISA: a data set prepared for (Plunkett and Nakisa, 1997). It consists of pairs of singular and plural nouns, in Modern Standard Arabic, randomly selected from the standard Wehr dictionary in a fully vocalized ASCII transcription. It has a mixture of broken and sound plurals, and has been simplified in the sense that rare forms of the broken plural have been removed.

5.2 Evaluation

Table 4 shows a comparison of the four possible feature sets on the LING data. We used 10-fold cross validation on all of these data sets apart from the EPT data set and the SLOVENE data set; in these cases we averaged over 10 runs with different random seeds. We compared the performance of the models evaluated using them directly to model the transduction with the conditional likelihood (CL), and using the MBL approach with the unscaled conditional features.

             Unscaled     Scaled
Joint        75.3 (3.5)   78.2 (3.6)
Conditional  85.8 (2.4)   23.8 (3.6)

Table 4: Comparison of different metrics on the LING data set with 10-fold cross validation, 1 10-state model trained with 10 iterations. Mean in % with standard deviation in brackets.
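The whole decision procedure summarized above can be sketched in a few lines. This is an illustration under assumed helper functions (sample_output for drawing from the conditional distribution of the transducer, and pair_features for the feature vector of a pair, e.g. the unscaled conditional statistics); neither name comes from the paper.

```python
def mbl_predict(model, x, training_pairs, pair_features, sample_output,
                n_samples=100):
    """Memory-based prediction of the inflected form of x (rough sketch).

    training_pairs: list of (lemma, inflected) pairs seen in training.
    pair_features(model, a, b): feature vector for a candidate pair.
    sample_output(model, x): one sample from P(y | x) under the transducer.
    """
    stored = [y for lemma, y in training_pairs if lemma == x]
    if stored:                       # seen lemma: a stored pair has distance 0
        return stored[0]

    def dist(u, v):
        return sum((p - q) ** 2 for p, q in zip(u, v)) ** 0.5

    train_feats = [pair_features(model, a, b) for a, b in training_pairs]
    best_y, best_d = None, float("inf")
    for y in {sample_output(model, x) for _ in range(n_samples)}:
        f = pair_features(model, x, y)
        d = min(dist(f, g) for g in train_feats)    # 1-nearest-neighbour
        if d < best_d:
            best_y, best_d = y, d
    return best_y
```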
Based on these results, we used the unscaled conditional features; subsequent experiments confirmed that these performed best. The results are summarized in Table 3. Run-times for these experiments were from about 1 hour to 1 week on a current workstation.

Label     Language  Source                       Description     Total Size  Train  Test
LING      English   (Ling, 1994)                 Past tense      1394        1251   140
EPT       English   CELEX                        Past tense      5324        1957   3367
GP        German    CELEX                        noun plural     16970       15282  1706
NAKISA    Arabic    (Plunkett and Nakisa, 1997)  plural          859         773    86
MCCARTHY  Arabic    (McCarthy and Prince, 1990)  broken plural   3261        2633   293
SLOVENE   Slovene   (Manandhar et al., 1998)     genitive nouns  921         608    313

Table 2: Summary of the data sets.

Data Set  CV  Models  States  Iterations  CL           MBLSS
LING      10  1       10      10          61.3 (4.0)   85.8 (2.4)
LING      10  2       10      10          72.1 (2.0)   79.3 (3.3)
EPT       No  1       10      10          59.5 (9.4)   93.1 (2.1)
NAKISA    10  1       10      10          0.6 (0.8)    15.4 (3.8)
NAKISA    10  5       10      10          9.2 (2.9)    31.0 (6.1)
NAKISA    10  5       10      50          11.3 (3.3)   35.0 (5.3)
GP1       10  1       10      10          42.5 (0.8)   70.6 (0.8)
MCCARTHY  10  5       10      10          1.6 (0.6)    16.7 (1.8)
SLOVENE   No  1       10      10          63.6 (28.6)  98.9 (0.8)

Table 3: Results. CV is the degree of cross-validation, Models determines how many components there are in the mixture, CL gives the percentage correct using the conditional likelihood evaluation and MBLSS using the memory-based learning with sufficient statistics, with the standard deviation in brackets.

There are a few results to which these can be directly compared. On the LING data set, (Mooney and Califf, 1995) report figures of approximately 90% using a logic program that learns decision lists for suffixes. For the Arabic data sets, (Plunkett and Nakisa, 1997) do not present results on modelling the transduction on words not in the training set; however they report scores of 63.8% (0.64%) using a neural network classifier. The data is classified according to the type of the plural, and is mapped onto a syllabic skeleton, with each phoneme represented as a bundle of phonological features. For the data set SLOVENE, (Manandhar et al., 1998) report scores of 97.4% for FOIDL and 96.2% for CLOG; this uses a logic programming methodology that specifically codes for suffixation and prefixation alone. On the very large and complex German data set, we score 70.6%; note however that there is substantial disagreement between native speakers about the correct plural of nonce words (Köpcke, 1988). We observe that the MBL approach significantly outperforms the conditional likelihood method over a wide range of experiments; the performance on the training data is a further difference, the MBL approach scoring close to 100%, whereas the CL approach scores only a little better than it does on the test data. It is certainly possible to make the conditional likelihood method work rather better than it does in this paper by paying careful attention to convergence criteria of the models, to avoid overfitting, and by smoothing the models carefully. In addition some sort of model size selection must be used. A major advantage of the MBL approach is that it works well without requiring extensive tuning of the parameters. In terms of the absolute quality of the results, this depends to a great extent on how phonologically predictable the process is. When it is completely predictable, as in SLOVENE, the performance approaches 100%; similarly, a large majority of the less frequent words in English are completely regular, and accordingly the performance on EPT is very good. However, in other cases, where the morphology is very irregular, the performance will be poor.
In particular, with the Arabic data sets, the NAKISA data set is very small compared to the complexity of the process being learned, and the MCCARTHY data set is rather noisy, with a large number of erroneous transcriptions. With the German data set, though it is quite irregular, and the data set is not frequency-weighted, so the frequent irregular words are not more likely to be in the training data, there is a lot of data, so the algorithm performs quite well.

5.3 Cognitive Modelling

In addition to these formal evaluations we examined the extent to which this approach can account for some psycho-linguistic data, in particular the data collected by (Prasada and Pinker, 1993) on the mild productivity of irregular forms in the English past tense. Space does not permit more than a rather crude summary. They prepared six data sets of 10 pairs of nonce words together with regular and irregular plurals of them: a sequence of three data sets that were similar to, but progressively further away from, sets of irregular verbs (prototypical-, intermediate- and distant-pseudoirregular – PPI, IPI and DPI), and another sequence of three that were similar to sets of regular verbs (prototypical-, intermediate- and distant-pseudoregular – PPR, IPR and DPR). Thus the first data sets contained words like spling, which would have a vowel-change form of splung and a regular suffixed form of splinged, and the second data sets contained words like smeeb, with regular smeebed and irregular smeb. They asked subjects for their opinions on the acceptability of the stems, and of the regular (suffixed) and irregular (vowel change) forms. A surprising result was that subtracting the rating of the past tense form from the rating of the stem form (in order to control for the varying acceptability of the stem) gave different results for the two data sets. With the pseudo-irregular forms the irregular form got less acceptable as the stems became less like the most similar irregular stems, but with the pseudo-regulars the regular form got more acceptable. This was taken as evidence for the presence of two qualitatively distinct modules in human morphological processing. In an attempt to see whether the models presented here could account for these effects, we transcribed the data into UNIBET transcription and tested it with the models prepared for the LING data set. We calculated the average negative log probability for each of the six data sets in three ways: first we calculated the probability of the stem alone to model the acceptability of the stem; secondly we calculated the conditional probability of the regular (suffixed) form; and thirdly we calculated the conditional probability of the irregular (vowel change) form of the word. Then we calculated the difference between the figures for the appropriate past tense form and the stem form. This is unjustifiable in terms of probabilities but seems the most natural way of modelling the effects reported in (Prasada and Pinker, 1993). These results are presented in Table 5. Interestingly we observed the same effect: a decrease in "acceptability" for irregulars, as they became more distant, and the opposite effect for regulars. In our case though it is clear why this happens – the probability of the stem decreases rapidly, and this overwhelms the mild decrease in the conditional probability.

6 Discussion

The productivity of the regular forms is an emergent property of the system.
This is an advantage over previous work using the EM algorithm with SFSTs, which directly specified the productivity as a parameter.

Data set  Stem          Suffix        Vowel Change  Past Tense − Stem
PPI       14.8 (0.08)   1.34 (0.04)   8.70 (0.30)   -6.1
IPI       13.9 (0.12)   1.50 (0.13)   10.4 (0.31)   -3.5
DPI       14.2 (0.34)   1.40 (0.07)   17.9 (2.12)   3.7
PPR       13.4 (0.34)   0.58 (0.08)   16.5 (2.18)   -12.8
IPR       19.0 (0.22)   1.02 (0.13)   19.5 (2.22)   -18.0
DPR       21.3 (0.14)   1.14 (0.17)   19.3 (0.94)   -20.2

Table 5: Average negative log-likelihood in nats for the six data sets in (Prasada and Pinker, 1993). Larger figures mean less likely. Standard deviations in brackets.

6.1 Related work

Using the EM algorithm to learn stochastic transducers has been known for a while in the bio-computing field as a generalization of edit distance (Allison et al., 1992). The Fisher kernel method has not been used in NLP to our knowledge before, though we have noted two recent papers that have some points of similarity. First, (Kazama et al., 2001) derive a Maximum Entropy tagger by training a HMM and using the most likely state sequence of the HMM as features for the Maximum Entropy tagging model. Secondly, (van den Bosch, 2000) presents an approach that is again similar since it uses rules, induced using a symbolic learning approach, as features in a nearest-neighbour approach.

7 Conclusion

We have presented some algorithms for the supervised learning of morphology using the EM algorithm applied to non-deterministic finite-state transducers. We have shown that a novel memory-based learning technique inspired by the Fisher kernel method produces high performance in a wide range of languages without the need for fine-tuning of parameters or language-specific representations, and that it can account for some psycho-linguistic data. These techniques can also be applied to the unsupervised learning of morphology, as described in (Clark, 2001b).

Acknowledgements

I am grateful to Prof. McCarthy, Ramin Nakisa and Tomaz Erjavec for providing me with the data sets used. Part of this work was done as part of the TMR network Learning Computational Grammars. Thanks also to Bill Keller, Gerald Gazdar, Chris Manning, and the anonymous reviewers for helpful comments.

References

L. Allison, C. S. Wallace, and C. N. Yee. 1992. Finite-state models in the alignment of macro-molecules. Journal of Molecular Evolution, 35:77–89.
L. E. Baum and T. Petrie. 1966. Statistical inference for probabilistic functions of finite state Markov chains. Annals of Mathematical Statistics, 37:1559–1663.
Francisco Casacuberta and Colin de la Higuera. 2000. Computational complexity of problems on probabilistic grammars and transducers. In Arlindo L. Oliveira, editor, Grammatical Inference: Algorithms and Applications, pages 15–24. Springer Verlag.
F. Casacuberta. 1995. Probabilistic estimation of stochastic regular syntax-directed translation schemes. In Proceedings of the VIth Spanish Symposium on Pattern Recognition and Image Analysis, pages 201–207.
Alexander Clark. 2001a. Learning morphology with Pair Hidden Markov Models. In Proc. of the Student Workshop at the 39th Annual Meeting of the Association for Computational Linguistics, pages 55–60, Toulouse, France, July.
Alexander Clark. 2001b. Partially supervised learning of morphology with stochastic transducers. In Proc. of Natural Language Processing Pacific Rim Symposium, NLPRS 2001, pages 341–348, Tokyo, Japan, November.
R. Durbin, S. Eddy, A. Krogh, and G. Mitchison. 1998.
Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press.
Alon Itai. 1994. Learning morphology – practice makes good. In R. C. Carrasco and J. Oncina, editors, Grammatical Inference and Applications: ICGI-94, pages 5–15. Springer-Verlag.
T. S. Jaakkola and D. Haussler. 1998. Exploiting generative models in discriminative classifiers. In Proc. of Tenth Conference on Advances in Neural Information Processing Systems.
T. S. Jaakkola, M. Diekhans, and D. Haussler. 2000. A discriminative framework for detecting remote protein homologies. Journal of Computational Biology, 7(1,2):95–114.
Jun'ichi Kazama, Yusuke Miyao, and Jun'ichi Tsujii. 2001. A maximum entropy tagger with unsupervised hidden Markov models. In Proc. of Natural Language Processing Pacific Rim Symposium (NLPRS 2001), pages 333–340, Tokyo, Japan.
George Kiraz. 1994. Multi-tape two-level morphology. In COLING-94, pages 180–186.
Klaus-Michael Köpcke. 1988. Schemas in German plural formation. Lingua, 74:303–335.
Kimmo Koskenniemi. 1983. A Two-level Morphological Processor. Ph.D. thesis, University of Helsinki.
Charles X. Ling. 1994. Learning the past tense of English verbs: The symbolic pattern associator vs. connectionist models. Journal of Artificial Intelligence Research, 1:209–229.
S. Manandhar, S. Dzeroski, and T. Erjavec. 1998. Learning multi-lingual morphology with CLOG. In C. D. Page, editor, Proc. of the 8th International Workshop on Inductive Logic Programming (ILP-98). Springer Verlag.
J. McCarthy and A. Prince. 1990. Foot and word in prosodic morphology: The Arabic broken plural. Natural Language and Linguistic Theory, 8:209–284.
Raymond J. Mooney and Mary Elaine Califf. 1995. Induction of first-order decision lists: Results on learning the past tense of English verbs. Journal of Artificial Intelligence Research, 3:1–24.
Kim Plunkett and Ramin Charles Nakisa. 1997. A connectionist model of the Arabic plural system. Language and Cognitive Processes, 12(5/6):807–836.
Sandeep Prasada and Steven Pinker. 1993. Generalisation of regular and irregular morphological patterns. Language and Cognitive Processes, 8(1):1–56.
Eric Sven Ristad. 1997. Finite growth models. Technical Report CS-TR-533-96, Department of Computer Science, Princeton University. Revised in 1997.
Antal van den Bosch and Walter Daelemans. 1999. Memory-based morphological analysis. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 285–292.
Antal van den Bosch. 2000. Using induced rules as complex features in memory-based language learning. In Proceedings of CoNLL 2000, pages 73–78.
2002
65
OT Syntax: Decidability of Generation-based Optimization
Jonas Kuhn
Department of Linguistics
Stanford University
[email protected]

Abstract

In Optimality-Theoretic Syntax, optimization with unrestricted expressive power on the side of the OT constraints is undecidable. This paper provides a proof for the decidability of optimization based on constraints expressed with reference to local subtrees (which is in the spirit of OT theory). The proof builds on Kaplan and Wedekind's (2000) construction showing that LFG generation produces context-free languages.

1 Introduction

Optimality-Theoretic (OT) grammar systems are an interesting alternative to classical formal grammars, as they construe the task of learning from data in a meaning-based way: a form is defined as grammatical if it is optimal (most harmonic) within a set of generation alternatives for an underlying logical form. The harmony of a candidate analysis depends on a language-specific ranking (≫) of violable constraints; thus the learning task amounts to adjusting the ranking over a given set of constraints.

(1) Candidate A is more harmonic than candidate B iff it incurs fewer violations of the highest-ranking constraint Ci in which A and B differ.

The comparison-based setup of OT learning is closely related to discriminative learning approaches in probabilistic parsing (Johnson et al., 1999; Riezler et al., 2000; Riezler et al., 2002);1 however, the comparison of generation alternatives – rather than parsing alternatives – adds the possibility of systematically learning the basic language-specific grammatical principles (which in probabilistic parsing are typically fixed a priori, using either a treebank-derived or a manually written grammar for the given language). The "base grammar" assumed as given can be highly unrestricted in the OT setup. Using a linguistically motivated set of constraints, learning proceeds with a bias for unmarked linguistic structures (cf. e.g., (Bresnan et al., 2001)). For computational OT syntax, an interleaving of candidate generation and constraint checking has been proposed (Kuhn, 2000). But the decidability of the optimization task in OT syntax, i.e., the identification of the optimal candidate(s) in a potentially infinite candidate set, has not been proven yet.2

This work was supported by a postdoctoral fellowship of the German Academic Exchange Service (DAAD).
1 This is for instance pointed out by (Johnson, 1998).

2 Undecidability for unrestricted OT

Assume that the candidate set is characterized by a context-free grammar (cfg) G, plus one additional candidate 'yes'. There are two constraints (C1 ≫ C2): C1 is violated if the candidate is neither 'yes' nor a structure generated by a cfg G'; C2 is violated only by 'yes'. Now, 'yes' is in the language defined by this system iff there are no structures in G that are also in G'. But the emptiness problem for the intersection of two context-free languages is known to be undecidable, so the optimization task for unrestricted OT is undecidable too.3 However, it is not in the spirit of OT to have extremely powerful individual constraints; the explanatory power should rather arise from interaction of simple constraints.
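Definition (1) amounts to comparing candidates' violation counts lexicographically, from the highest-ranked constraint down. The following toy sketch (our illustration; representing each candidate simply as a tuple of violation counts is an assumption of the example, not part of the paper's formal setup) spells this out and replays the two-constraint argument above.

```python
def more_harmonic(viol_a, viol_b):
    """True iff candidate A beats candidate B under definition (1).

    viol_a, viol_b: violation counts per constraint, ordered from the
    highest-ranked constraint to the lowest-ranked one.
    """
    for a, b in zip(viol_a, viol_b):
        if a != b:            # highest-ranked constraint on which they differ
            return a < b      # fewer violations there wins
    return False              # identical profiles: neither is more harmonic


# Section 2, with C1 >> C2: 'yes' violates only C2, so it is optimal exactly
# when no candidate from G also belongs to G' (such a candidate has the
# profile (0, 0) and would beat it).
assert more_harmonic((0, 1), (1, 0))   # 'yes' beats a C1-violating structure
assert more_harmonic((0, 0), (0, 1))   # a structure in both G and G' beats 'yes'
```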
3 OT-LFG Following (Bresnan, 2000; Kuhn, 2000; Kuhn, 2001), we define a restricted OT system based on Lexical-Functional Grammar (LFG) representations: c(ategory) structure/f(unctional) structure 2Most computational OT work so far focuses on candidates and constraints expressible as regular languages/rational relations, based on (Frank and Satta, 1998) (e.g., (Eisner, 1997; Karttunen, 1998; Gerdemann and van Noord, 2000)). 3Cf. also (Johnson, 1998) for the sketch of an undecidability argument and (Kuhn, 2001, 4.2, 6.3) for further constructions. Computational Linguistics (ACL), Philadelphia, July 2002, pp. 48-55. Proceedings of the 40th Annual Meeting of the Association for pairs  like (4),(5)  . Each c-structure tree node is mapped to a node in the f-structure graph by the function . The mapping is specified by fannotations in the grammar rules (below category symbols, cf. (2)) and lexicon entries (3).4 (2) ROOT FP      VP   FP  NP FP  TOPIC      COMP* OBJ       (NP) F   SUBJ      F  F   FP       VP    VP (NP) V  ( SUBJ)=  =  V  V   NP  OBJ      FP  COMP   (3) Mary NP ( PRED)=‘Mary’ ( NUM)=SG that F had F ( TNS)=PAST seen V ( PRED)=‘see  ( SUBJ) ( OBJ) ’ ( ASP)=PERF thought V ( PRED)=‘think  ( SUBJ) ( COMP) ’ ( TNS)=PAST laughed V ( PRED)=‘laugh  ( SUBJ) ’ ( TNS)=PAST (4) c-structure ROOT VP NP V  John V FP thought F  F FP that NP F  Mary F VP had V  V NP seen Titanic (5) f-structure !" " " " " " " " "# PRED ‘think $ ( % SUBJ) ( % COMP) & ’ TNS PAST SUBJ ' PRED ‘John’ NUM SG ( COMP !" " " " # PRED ‘see $ ( % SUBJ) ( % OBJ) & ’ TNS PAST ASP PERF SUBJ ' PRED ‘Mary’ NUM SG ( OBJ ' PRED ‘Titanic’ NUM SG ( )+* * * * , )+* * * * * * * * * , 4  abbreviates .  , i.e., the present category’s image; abbreviates 0/1.  , i.e., the f-structure corresponding to the present node’s mother category. The correct f-structure for a sentence is the minimal model satisfying all properly instantiated fannotations. In OT-LFG, the universe of possible candidates is defined by an LFG 32547682:9<; (encoding inviolable principles, like an X-bar scheme). A particular candidate set is the set Gen =?> @8A> BDCFE  254HG – i.e., the c-/fstructure pairs in I2+476F259<; , which have the input  2+4 as their f-structure. Constraints are expressed as local configurations in the c-/f-structure pairs. They have one of the following implicational forms:5 (6) J KML J  K  where JONPJO are descriptions of nonterminals of Q > @FAR>SBDC ; K N K  are standard LFG f-annotations of constraining equations with as the only f-structure metavariable. (7) J TVU K W L JO T  U  K  WH where JONXJORN U N U  are descriptions of nonterminals of Q > @8A> BDC ; JONPJ  refer to the mother in a local subtree configuration, U N U  refer to the same daughter category; T N T  N W N WH are regular expressions over nonterminals; K N K  are standard f-annotations as in (6). Any of the descriptions can be maximally unspecific; (6) can for example be instantiated by the OPSPEC constraint ( Y OP)=+ Z (DF Y ) (an operator must be the value of a discourse function, (Bresnan, 2000)) with the category information unspecified. An OT-LFG system [ is thus characterized by a base grammar and a set of constraints, with a language-specific ranking relation ]\ : ^ RQ > @8A> BDC N< _NX`bac P . The evaluation function Eval d+egf h aHi picks the most harmonic from a set of candidates, based on the constraints and ranking. 
The language (set of analyses)6 generated by an OT system is defined as j  ^  lk nmpo7NXq_or Q > @FAR>SBDC sPt q > @vu nm o Nwq o Eval xzy7{ |}~  Gen €H ‚ƒ: „:…  q > @ PX† 4 LFG generation Our decidability proof for generation-based optimization builds on the result of (Kaplan and Wedekind, 2000) (K&W00) that LFG generation produces context-free languages. 5Note that with GPSG-style category-level feature percolation it is possible to refer to (finitely many) nonlocal configurations at the local tree level. 6The string language is obtained by taking the terminal string of the c-structure part of the analyses. (8) Given an arbitrary LFG grammar Q and a cycle-free fstructure q , a cfg Q  can be constructed that generates exactly the strings to which Q assigns the f-structure q . I will refer to the resulting cfg  as  E   G . K&W00 present a constructive proof, folding all fstructural contributions of lexical entries and LFG rules into the c-structural rewrite rules (which is possible since we know in advance the range of fstructural objects that can instantiate the f-structure meta-variables in the rules). I illustrate the specialization steps with grammar (2) and lexicon (3) and for generation from f-structure (5). Initially, the generalized format of right-hand sides in LFG rules is converted to the standard context-free notation (resolving regular expressions by explicit disjunction or recursive rules). Fstructure (5) contains five substructures: the root fstructure, plus the embedded f-structures under the paths SUBJ, COMP, COMP SUBJ, and COMP OBJ. Any relevant metavariable ( Y ,  ) in the grammar must end up instantiated to one of these. So for each path from the root f-structure, a distinct variable is introduced:  , subscripted with the (abbreviated and possibly empty) feature path:              . Rule augmentation step 1 adds to each category name a concrete f-structure to which the category corresponds. So for FP, we get FP:  , FP:   , FP:  , FP:    , and FP:   . The rules are multiplied out to cover all combinations of augmented categories obeying the original f-annotations.7 Step 2 adds a set of instantiated f-annotation schemes to each symbol, based on the instantiation of metavariables from step 1. One instance of the lexicon entry Mary look as follows: (9) NP:  :    PRED)=‘Mary’   NUM)=SG  Mary The rules are again multiplied out to cover all combinations for which the set of f-constraints on the mother is the union of all daughters’ fconstraints, plus the appropriately instantiated rulespecific annotations. So, for the VP rule based on the categories NP:   :   PRED)=‘Mary’    NUM)=SG  and V  :  :    PRED)=‘laugh’   TNS)=PAST      , we get the rule 7VP:   NP:   V  :   is allowed, while VP:  NP:  V  :  is excluded, since the =  annotation of V  in the VP rule (2) enforces that  VP   V : . VP:   :             SUBJ      PRED)=‘Mary’    NUM)=SG   PRED)=‘laugh’   TNS)=PAST               NP:  :    PRED)=‘Mary’   NUM)=SG  V  :  :    PRED)=‘laugh’    TNS)=PAST    With this bottom-up construction it is ensured that each new category ROOT:  :  . . . (corresponding to the original root symbol) contains a complete possible collection of instantiated f-constraints. To exclude analyses whose f-structure is not  (for which we are generating strings) a new start symbol is introduced “above” the original root symbol. Only for the sets of f-constraints that have  as their minimal model, rules of the form ROOT ! ROOT:  :  . . . are introduced (this also excludes inconsistent fconstraint sets). 
With the cfg  E   G , standard techniques for cfg’s can be applied, e.g., if there are infinitely many possible analyses for a given f-structure, the smallest one(s) can be produced, based on the pumping lemma for context-free languages. Grammar (2) does indeed produce infinitely many analyses for the input f-structure (5). It overgenerates in several respects: The functional projection FP can be stacked due to recursions like the following (with the augmented FP reoccuring in the F rules): FP: "$# :          % "&# PRED)=‘see $ . . . & ’ % "$# TNS)=PAST % " # SUBJ ')(*" #,+ % " #-+ PRED)=‘Mary’ % "$# OBJ ')(*"$#,. % " #-. PRED)=‘Titanic’ " # (*" #        0/ F 1 : "$# :          % "$# PRED)=‘see $ . . . & ’ % "&# TNS)=PAST % " # SUBJ ' (*" #-+ % " #,+ PRED)=‘Mary’ % "&# OBJ '(*"&#-. % " #,. PRED)=‘Titanic’ " # (*" #         F 1 : "&# :        % "&# PRED)=‘see $ . . . & ’ % "$# TNS)=PAST % " # SUBJ ' (*" #,+ % " #-+ PRED)=‘Mary’ % "$# OBJ ')(*"$#,. % " #-. PRED)=‘Titanic’ " # (*" #         / F: "&# : 2 FP: "$# :        % "&# PRED)=‘see $ . . . & ’ % "$# TNS)=PAST % " # SUBJ '(*" #,+ % " #-+ PRED)=‘Mary’ % "$# OBJ ')(*"$#,. % " #-. PRED)=‘Titanic’ " # (*" #         F:   : 3 is one of the augmented categories we get for that in (3), so  ((2),(5)) generates an arbitrary number of thats on top of any FP. A similar repetition effect will arise for the auxiliary had.8 Other choices in generation arise from the freedom of generating the subject in the specifier of VP or FP and from the possibility of (unbounded) topicalization of the object (the first disjunction of the FP rule in (2) 8The F 4 entries do not contribute any PRED value, which would exclude doubling due to the instantiated symbol character of PRED values (cf. K&W00, fn. 2). contains a functional-uncertainty equation): (10) a. John thought that Titanic, Mary had seen. b. Titanic, John thought that Mary had seen. 5 LFG generation in OT-LFG While grammar (2) would be considered defective as a classical LFG grammar, it constitutes a reasonable example of a candidate generation grammar (  2547682:9<; ) in OT. Here, it is the OT constraints that enforce language-specific restrictions, so b2547682:9<; has to ensure that all candidates are generated in the first place. For instance, expletive elements as do in Who do you know will arise by passing a recursion in the cfg constructed during generation. A candidate containing such a vacuous cycle can still become the winner of the OT competition if the Faithfulness constraint punishing expletives is outranked by some constraint favoring an aspect of the recursive structure. So the harmony is increased by going through the recursion a certain number of times. It is for this very reason, that Who do you know is predicted to be grammatical in English. So, in OT-LFG it is not sufficient to apply just the  construction; I use an additional step: prior to application of  , the LFG grammar  2547682:9<; is converted to a different form e E  2+476F259<; G (depending on the constraint set  ), which is still an LFG grammar but has category symbols which reflect local constraint violations. When the   construction is applied to e E  2547682:9<; G , all “pumping” structures generated by the cfg  E e E  2+476F259<; G  2+4 G can indeed be ignored since all OT-relevant candidates are already contained in the finite set of nonrecursive structures. So, finally the ranking of the constraints is taken into consideration in order to determine the harmony of the candidates in this finite subset. 
6 The conversion  e 2+476F259<; Preprocessing Like K&W00, I assume an initial conversion of the c-structure part of rules into standard context-free form, i.e., the right-hand side is a category string rather than a regular expression. This ensures that for a given local subtree, each constraint (of form (6) or (7)) can be applied only a finite number of times: if is the arity of the longest right-hand side of a rule, the maximal number of local violations is (since some constraints of type (7) can be instantiated to all daughters). Grammar conversion With the number of local violations bounded, we can encode all candidate distinctions with respect to constraint violations at the local-subtree level with finite means: The set of categories in the newly constructed LFG grammar e E  2+476F259 ; G is the finite set (11)   €  ‚Dƒ5 „:… : the set of categories in  y  Q > @8A> BDC  k J :   N  N s J a nonterminal symbol of Q > @8A> BDC , the size of the constraint set , !#"   "%$ , $ the arity of the longest rhs in rules of Q > @8A> BDC † The rules in e E  2+476F259 ; G are constructed in such a way that for each rule X 4 X  . . . X & m  m & in 32+476F259<; and each sequence ('  ) *'  ),+-+-+ '/. )  , 021 '43 ) 1 , all rules of the form X 4 :   4 N  4   4 P X  :    *5   . . . X & :   &   & , m   m  & !#"   o "%$ are included such that ' 3 ) (the number of violations of constraint  3 incurred local to the rule) and the f-annotations   . . .  6 are specified as follows: (12) for   of form (6) 7 J K L J  K 8 : a.   4 ! ; m  o mpo ( 9 ";:<">= ) if X 4 does not match the condition J ; b.   4 ! ; m   m @?BA K ; m  o m o ( C ";:D"%= ) if X 4 matches J ; c.   4 ! ; m   m  ? K ? K  ; m  o mpo ( C ";:D">= ) if X 4 matches both J and J  ; d.   4 9 ; m   m @? K ; m  o m o ( C ";:D">= ) if X 4 matches J but not J  ; e.   4 9 ; m   m  ? K ?EA K  ; m  o mpo ( C "F:"<= ) if X 4 matches both J and JO ; (13) for   of form (7) ! # J TU K W L J  T  U  K  W  ) , : a.   4 ! ; m  o mpo ( 9 ";:<">= ) if X 4 does not match the condition J ; b.   4 & G oIH J o ; m  o LK  mpo N K N K   ( 9 "M:D">= ), where i. J o ! ; K  mpo N K N K   mpo if X o does not match U , or X  . . . X o  do not match T , or X o   . . . X & do not match W ; ii. J o ! ; K  mpo N K N K   mpo ? K ? K  if X 4 matches both J and J  ; X o matches both U and U  ; X  . . . X o  match T and T  ; X o   . . . X & match W and W  ; iii. J o ! ; K  m o N K N K   m o ? A K if X 4 matches both J and J  ; X o matches both U and U  ; X  . . . X o  match T and T  ; X o   . . . X & match W and W  ; iv. J o 9 ; K  m o N K N K   m o ? K if X 4 matches J , X o matches U , X  . . . X o  match T , X o   . . . X & match W , but (at least) one of them does not match the respective description in the consequent ( JON U  N T  N W  ); v. J o 9 ; K  m o N K N K   m o ? K ?BA K  if X 4 matches both J and JO ; X o matches both U and U  ; X  . . . X o  match T and T  ; X o   . . . X & match W and W  . Note that the constraint profile of the daughter categories does not play any role in the determination of constraint violations local to the subtree under consideration (only the sequences ' 3 ) are restricted by the conditions (12) and (13)). 
So for each new rule type, all combinations of constraint profiles on the daughters are constructed (creating a large but finite number of rules).9 This ensures that no sentence that can be parsed (or generated) by b2+476F259<; is excluded from e E  2+476F259 ; G (as stated by fact (14)):10 (14) Coverage preseveration All strings generated by an LFG grammar Q are also generated by  y  Q . The original  analysis can be recovered from an e E  G analysis by applying a projection function Cat to all c-structure categories: Cat  J :   N    P J for every category in   €  ‚ƒ: „:… (11) 9For one rule/constraint combination several new rules can result; e.g., if the right-hand side of a rule (X 4 ) matches both the antecedent ( J ) and the consequent ( J  ) category description of a constraint of form (6), three clauses apply: (12b), (12c), and (12d). So, we get two new rules with the count of 0 local violations of the constraint and two rules with count 1, with a difference in the f-annotations. 10Providing all possible combinations of augmented category symbols on the right-hand rule sides in  y  Q ensures that the newly constructed rules can be reached from the root symbol in a derivation. It is also guaranteed that whenever a rule  in Q contributes to an analysis, at least one of the rules constructed from  will contribute to the corresponding analysis in  y  Q . This is ensured since the subclauses in (12) and (13) cover the full space of logical possibilities. We can overload the function name Cat with a function applying to the set of analyses produced by an LFG grammar  by defining Cat  Q k nm?N8q s nm  Nwq Q , m is derived from m  by applying Cat to all category symbols † . Coverage preservation of the e construction holds also for the projected c-category skeleton (cf. the argumentation in fn. 10): (15) C-structure level coverage preservation For an LFG grammar Q : Cat   y  QP Q Each category in e E  G encodes the number of local violations for all constraints. Since all constraints are locally evaluable by assumption, all constraints violated by a candidate analysis have to be incurred local to some subtree. Hence the total number of constraint violations incurred by a candidate can be computed by simply summing over all category-encoded local violation profiles: (16) Total number of constraint violations Let Nodes  m  be the multiset of categories occurring in the c-structure tree m , then the total number of violations of constraint   incurred by an analysis nm NXq  y  Q > @8A> BDC  is   m  G  x  ~ B     Define Total y  m      m 8N    m 8N    m P 7 Applying  on  eI 2+476F259 ;  Since e E 32+476F259<; G is a standard LFG grammar, we can apply the  construction to it to get a cfg for a given f-structure  2+4 . The category symbols then have the form X: ('   +-+-+ *' .  :  : , with  and arising from the  construction. We can overload the projection function Cat again such that Cat E"! :  : # : $ G&% ! for all augmented category symbol of the new format; likewise Cat E  G for  a cfg. 
Since the e construction (strongly) preserves the language generated, coverage preservation holds also after the application of  to e E  2+476F259 ; G and  2547682:9<; , respectively: (17) Cat ('*)  y  Q > @FAR>SBDC 8NXq > @ P Cat ('*) Q > @FAR>SBDC Nwq > @ P But since the symbols in e E  2+476F259<; G reflect local constraint violations, Cat E  E e E  2+476F259<; G  254 G<G has the property that all instances of recursion in the resulting cfg create candidates that are at most as harmonic as their non-recursive counterparts. Assuming a projection function CatCount E"! :  : # : $ G% ! :  , we can state more formally: (18) If m  and m  are CatCount projections of trees produced by the cfg '*)  y  Q > @FAR>SBDC 8NXq > @  , using exactly the same rules, and m  contains a superset of the nodes that m  contains, then    "    , for all    N    9    from    *5   *   Total y  m   , and          Total y  m < . This fact follows from definition of Total (16): the violation counts in the additional nodes in   will add to the total of constraint violations (and if none of the additional nodes contains any local constraint violation at all, the total will be the same as in   ). Intuitively, the effect of the augmentation of the category format is that certain recursions in the pure  construction (which one may think of as a loop) are unfolded, leading to a longer loop. The new loop is sufficiently large to make all relevant distinctions. This result can be directly exploited in processing: if all non-recursive analyses are generated (of which there are only finitely many) it is guaranteed that a subset of the optimal candidates is among them. If the grammar does not contain any violation-free recursion, we even know that we have generated all optimal candidates. (19) A recursion with the derivation path  L  L  is called violation-free iff all categories dominated by the upper occurrence of  , but not dominated by the lower occurrence of  have the form J u   N    with   ! N 9   Note that if there is an applicable violation-free recursion, the set of optimal candidates is infinite; so if the constraint set is set up properly in a linguistic analysis, one would assume that violation-free recursion should not arise. (Kuhn, 2000) excludes the application of such recursions by a similar condition as offline parsability (which excludes vacuous recursions over a string in parsing), but with the  construction, this condition is not necessary for decidability of the generation-based optimization task. The cfg produced by   can be transformed further to only generate the optimal candidates according to the constraint ranking  \ of the OT system [ %  2547682:9<;     \ < , eliminating all but the violation-free recursions in the grammar: (20) Creating a cfg that produces all optimal candidates a. Define     ‚ ]k m '*)  y  Q > @8A> BDC 8Nwq > @  s m contains no recursion † .     ‚ is finite and can be easily computed, by keeping track of the rules already used in an analysis. b. Redefine Eval xzy7{ |}g~ to apply on a set of context-free analyses with augmented category symbols with counts of local constraint violations: Eval xzy7{ |}~    ]k m  s m is maximally harmonic in  , under ranking ` ac† Using the function Total defined in (16), this function is straightforward to compute for finite sets, i.e., in particular Eval xzy7{ | } ~      ‚  . c. Augment the category format further by one index component.11 Introduce index  ! 
for all categories in '*)  y  Q > @FAR>SBDC 8Nwq > @  of the form X:   N  :  :  , where   ! for 9   . Introduce a new unique index   9 for each node of the form X:   N r :  :  , where   ! for some    9 " "  occurring in the analyses Eval xzy7{ |}g~      ‚  (i.e., different occurrences of the same category are distinguished). d. Construct the cfg Q    ‚  J    ‚ N m    ‚ N S    ‚ N     ‚ , where J    ‚ N m    ‚ are the indexed symbols of step c.; S    ‚ is a new start symbol; the rules     ‚ are (i) those rules from '*)  y  Q > @FAR>SBDC 8Nwq > @  which were used in the analyses in Eval xzy7{ | } ~      ‚  – with the original symbols replaced by the indexed symbols –, (ii) the rules in '*)  y  Q > @8A> BDC 8Nwq > @  , in which the mother category and all daughter categories are of the form X:   N*  :  :  ,   ! for 9   (with the new index ! added), and (iii) one rule S    ‚ S o :  for each of the indexed versions S o :  of the start symbols of '*)  y  Q > @FAR>SBDC 8Nwq > @  . With the index introduced in step (20c), the original recursion in the cfg is eliminated in all but the violation-free cases. The grammar Cat E   > @ G produces (the c-structure of) the set of optimal candidates for the input  2+4 :12 (21) Cat  Q    ‚  k m s nm?N8q > @ Eval xzy7{ |}g~  Gen €  ‚Dƒ: „:…  q > @ PX† , i.e., the set of c-structures for the optimal candidates for input f-structure q > @ according to the OT system ^ RQ > @FAR>SBDC N  _Nw` a P . 11The projection function Cat is again overloaded to also remove the index on the categories. 12Like K&W00, I make the assumption that the input fstructure in generation is fully specified (i.e., all the candidates have the form nm?Nwq > @ ), but the result can be extended to allow for the addition of a finite amount of f-structure information in generation. Then, the specified routine is computed separately for each possible f-structural extension and the results are compared in the end. 8 Proof To prove fact (21) we will show that the c-structure of an arbitrary candidate analysis generated from  254 with  2547682:9<; is contained in Cat E    > @ G iff all other candidates are equally or less harmonic. Take an arbitrary candidate c-structure  generated from  2+4 with  2547682:9<; such that  Cat E    > @ G . We have to show that all other candidates  generated from  254 are equally or less harmonic than  . Assume there were a  that is more harmonic than  . Then there must be some constraint  3  , such that  violates  3 fewer times than  does, and  3 is ranked higher than any other constraint in which  and  differ. Constraints have to be incurred within some local subtree; so  must contain a local violation configuration that  does not contain, and by the construction (12)/(13) the e -augmented analysis of  – call it e E  G – must make use of some violation-marked rule not used in e E  G . Now there are three possibilities: (i) Both e E  G and e E  G are free of recursion. Then the fact that e E  G avoids the highest-ranking constraint violation excludes  from Cat E   > @ G (by construction step (20b)). This gives us a contradiction with our assumption. (ii) e E  G contains a recursion and e E  G is free of recursion. If the recursion in e E  G is violationfree, then there is an equally harmonic recursionfree candidate  . But this  is also less harmonic than e E  G , such that it would have been excluded from Cat E    > @ G too. This again means that e E  G would also be excluded (for lack of the relevant rules in the non-recursive part). 
On the other hand, if it were the recursion in e E  G that incurred the additional violation (as compared to e E  G ), then there would be a more harmonic recursion-free candidate  . However, this  would exclude the presence of e E  G in    > @ by construction step (20c,d) (only violation-free recursion is possible). So we get another contradiction to the assumption that  Cat E    > @ G . (iii) e E  G contains a recursion. If this recursion is violation-free, we can pick the equally harmonic candidate avoiding the recursion to be our e E  G , and we are back to case (i) and (ii). Likewise, if the recursion in e E  G does incur some violation, not using the recursion leads to an even more harmonic candidate, for which again cases (i) and (ii) will apply. All possible cases lead to a contradiction with the assumptions, so no candidate is more harmonic than our  Cat E    > @ G . We still have to prove that if the c-structure  of a candidate analysis generated from  2+4 with 32+476F259<; is equally or more harmonic than all other candidates, then it is contained in Cat E    > @ G . We can construct an augmented version  of  , such that Cat E  G %  and then show that there is a homomorphism mapping  to some analysis     > @ with Cat E  G %  . We can use the constraint marking construction e and the   construction to construct the tree  with augmented category symbols of the analysis  . The result of K&W00 plus (17) guarantee that Cat E  G %  . Now, there has to be a homomorphism from the categories in  to the categories of some analysis in    > @ .    > @ is also based on  E  2547682:9<;  2+4 G (with an additional index  on each category and some categories and rules of  E  2+476F259<;  254 G having no counterpart in    > @ ). Since we know that  is equally or more harmonic than any other candidate generated from  2+4 , we know that the augmented tree  either contains no recursion or only violation-free recursion. If it does contain such violation-free recursions we map all categories  on the recursion paths to the indexed form  : 0 , and furthermore consider the variant of  avoiding the recursion(s). For our (non-recursive) tree, there is guaranteed to be a counterpart in the finite set of non-recursive trees in   > @ with all categories pairwise identical apart from the index  in    > @ . We pick this tree and map each of the categories in  to the  -indexed counterpart. The existence of this homomorphism guarantees that an analysis     > @ exists with Cat E  G % Cat E  G %  . QED 9 Conclusion We showed that for OT-LFG systems in which all constraints can be expressed relative to a local subtree in c-structure, the generation task from (noncyclic13) f-structures is solvable. The infinity of 13The non-cyclicity condition is inherited from K&W00; in linguistically motivated applications of the LFG formalism, cruthe conceptually underlying candidate set does not preclude a computational approach. It is obvious that the construction proposed here has the purpose of bringing out the principled computability, rather than suggesting a particular algorithm for implementation. However on this basis, an implementation can be easily devised. The locality condition on constraint-checking seems unproblematic for linguistically relevant constraints, since a GPSG-style slash mechanism permits reference to (finitely many) nonlocal configurations from any given category (cf. fn. 
5).14 Decidability of generation-based optimization (from a given input f-structure) alone does not imply that the recognition and parsing tasks for an OT grammar system defined as in sec. 3 are decidable: for these tasks, a string is given and it has to be shown that the string is optimal for some underlying input f-structure (cf. (Johnson, 1998)). However, a similar construction as the one presented here can be devised for parsing-based optimization (even for an LFG-style grammar that does not obey the offline parsability condition). So, if the language generated by an OT system is defined based on (strong) bidirectional optimality (Kuhn, 2001, ch. 5), decidability of both the general parsing and generation problem follows.15 For the unidirectionally defined OT language (as in sec. 3), decidability of parsing can be guaranteed under the assumption of a contextual recoverability condition in parsing (Kuhn, in preparation).

…cial use of cyclicity in underlying semantic feature graphs has never been made.
14 A hypothetical constraint that is excluded would be a parallelism constraint comparing two subtree structures of arbitrary depth. Such a constraint seems unnatural in a model of grammaticality. Parallelism of conjuncts does play a role in models of human parsing preferences; however, here it seems reasonable to assume an upper bound on the depth of parallel structures to be compared (due to memory restrictions).
15 Parsing: for a given string, parsing-based optimization is used to determine the optimal underlying f-structure; then generation-based optimization is used to check whether the original string comes out optimal in this direction too. Generation is symmetrical, starting with an f-structure.

References

Joan Bresnan, Shipra Dingare, and Christopher Manning. 2001. Soft constraints mirror hard constraints: Voice and person in English and Lummi. In Proceedings of the LFG 2001 Conference. CSLI Publications.
Joan Bresnan. 2000. Optimal syntax. In Joost Dekkers, Frank van der Leeuw, and Jeroen van de Weijer, editors, Optimality Theory: Phonology, Syntax, and Acquisition. Oxford University Press.
Jason Eisner. 1997. Efficient generation in primitive optimality theory. In Proceedings of the ACL 1997, Madrid.
Robert Frank and Giorgio Satta. 1998. Optimality theory and the generative complexity of constraint violation. Computational Linguistics, 24(2):307–316.
Dale Gerdemann and Gertjan van Noord. 2000. Approximation and exactness in finite state Optimality Theory. In SIGPHON 2000, Finite State Phonology: 5th Workshop of the ACL Special Interest Group in Computational Phonology, Luxembourg.
Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic "unification-based" grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL'99), College Park, MD, pages 535–541.
Mark Johnson. 1998. Optimality-theoretic Lexical Functional Grammar. In Proceedings of the 11th Annual CUNY Conference on Human Sentence Processing, Rutgers University.
Ronald M. Kaplan and Jürgen Wedekind. 2000. LFG generation produces context-free languages. In Proceedings of COLING-2000, pages 297–302, Saarbrücken.
Lauri Karttunen. 1998. The proper treatment of optimality in computational phonology. In Proceedings of the International Workshop on Finite-State Methods in Natural Language Processing, FSMNLP'98, pages 1–12.
Jonas Kuhn. 2000. Processing Optimality-theoretic syntax by interleaved chart parsing and generation.
In Proceedings of ACL 2000, pages 360–367, Hongkong.
Jonas Kuhn. 2001. Formal and Computational Aspects of Optimality-theoretic Syntax. Ph.D. thesis, Institut für maschinelle Sprachverarbeitung, Universität Stuttgart.
Jonas Kuhn. In preparation. Decidability of generation and parsing for OT syntax. Ms., Stanford University.
Stefan Riezler, Detlef Prescher, Jonas Kuhn, and Mark Johnson. 2000. Lexicalized stochastic modeling of constraint-based grammars using log-linear measures and EM training. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics (ACL'00), Hong Kong, pages 480–487.
Stefan Riezler, Dick Crouch, Ron Kaplan, Tracy King, John Maxwell, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. This conference.
2002
7
Comprehension and Compilation in Optimality Theory∗
Jason Eisner
Department of Computer Science
Johns Hopkins University
Baltimore, MD, USA 21218-2691
[email protected]

∗ Thanks to Kie Zuraw for asking about comprehension; to Ron Kaplan for demanding an algebraic construction before he believed directional OT was finite-state; and to others whose questions convinced me that this paper deserved to be written.

Abstract

This paper ties up some loose ends in finite-state Optimality Theory. First, it discusses how to perform comprehension under Optimality Theory grammars consisting of finite-state constraints. Comprehension has not been much studied in OT; we show that unlike production, it does not always yield a regular set, making finite-state methods inapplicable. However, after giving a suitably flexible presentation of OT, we show carefully how to treat comprehension under recent variants of OT in which grammars can be compiled into finite-state transducers. We then unify these variants, showing that compilation is possible if all components of the grammar are regular relations, including the harmony ordering on scored candidates. A side benefit of our construction is a far simpler implementation of directional OT (Eisner, 2000).

1 Introduction

To produce language is to convert utterances from their underlying ("deep") form to a surface form. Optimality Theory or OT (Prince and Smolensky, 1993) proposes to describe phonological production as an optimization process. For an underlying x, a speaker purportedly chooses the surface form z so as to maximize the harmony of the pair (x, z). Broadly speaking, (x, z) is harmonic if z is "easy" to pronounce and "similar" to x. But the precise harmony measure depends on the language; according to OT, it can be specified by a grammar of ranked desiderata known as constraints. According to OT, then, production maps each underlying form to its best possible surface pronunciation. It is akin to the function that maps each child x to his or her most flattering outfit z. Different children look best in different clothes, and for an oddly shaped child x, even the best conceivable outfit z may be an awkward compromise between style and fit—that is, between ease of pronunciation and similarity to x. Language comprehension is production in reverse. In OT, it maps each outfit z to the set of children x for whom that outfit is optimal, i.e., is at least as flattering as any other outfit z′:

PRODUCE(x) = {z : (∄z′) (x, z′) > (x, z)}
COMPREHEND(z) = {x : z ∈ PRODUCE(x)} = {x : (∄z′) (x, z′) > (x, z)}

In general z and z′ may range over infinitely many possible pronunciations. While the formulas above are almost identical, comprehension is in a sense more complex because it varies both the underlying and surface forms. While PRODUCE(x) considers all pairs (x, z′), COMPREHEND(z) must for each x consider all pairs (x, z′). Of course, this nested definition does not preclude computational shortcuts. This paper has three modest goals:

1. To show that OT comprehension does in fact present a computational problem that production does not. Even when the OT grammar is required to be finite-state, so that production can be performed with finite-state techniques, comprehension cannot in general be performed with finite-state techniques.

2. To consider recent constructions that cut through this problem (Frank and Satta, 1998; Karttunen, 1998; Eisner, 2000; Gerdemann and van Noord, 2000).
By altering or approximating the OT formalism—that is, by hook or by crook—these constructions manage to compile OT grammars into finite-state transducers. Transducers may readily be inverted to do comprehension as easily as production. We carefully lay out how to use them for comprehension in realistic circumstances (in the presence of correspondence theory, lexical constraints, hearer uncertainty, and phonetic postprocessing). 3. To give a unified treatment in the extended finitestate calculus of the constructions referenced above. This clarifies their meaning and makes them easy to implement. For example, we obtain a transparent algebraic version of Eisner’s (2000) unbearably technical automaton construction for his proposed formalism of “directional OT.” Computational Linguistics (ACL), Philadelphia, July 2002, pp. 56-63. Proceedings of the 40th Annual Meeting of the Association for The treatment shows that all the constructions emerge directly from a generalized presentation of OT, in which the crucial fact is that the harmony ordering on scored candidates is a regular relation. 2 Previous Work on Comprehension Work focusing on OT comprehension—or even mentioning it—has been surprisingly sparse. While the recent constructions mentioned in §1 can easily be applied to the comprehension problem, as we will explain, they were motivated primarily by a desire to pare back OT’s generative power to that of previous rewrite-rule formalisms (Johnson, 1972). Fosler (1996) noted the existence of the OT comprehension task and speculated that it might succumb to heuristic search. Smolensky (1996) proposed to solve it by optimizing the underlying form, COMPREHEND(z) ?= {x : (∄x′) (x′, z) > (x, z)} Hale and Reiss (1998) pointed out in response that any comprehension-by-optimization strategy would have to arrange for multiple optima: after all, phonological comprehension is a one-to-many mapping (since phonological production is many-to-one).1 The correctness of Smolensky’s proposal (i.e., whether it really computes COMPREHEND) depends on the particular harmony measure. It can be made to work, multiple optima and all, if the harmony measure is constructed with both production and comprehension in mind. Indeed, for any phonology, it is trivial to design a harmony measure that both production and comprehension optimize. (Just define the harmony of (x, z) to be 1 or 0 according to whether the mapping x 7→z is in the language!) But we are really only interested in harmony measures that are defined by OT-style grammars (rankings of “simple” constraints). In this case Smolensky’s proposal can be unworkable. In particular, §4 will show that a finite-state production grammar in classical OT need not be invertible by any finite-state comprehension grammar. 1Hale & Reiss’s criticism may be specific to phonology and syntax. For some phenomena in semantics, pragmatics, and even morphology, Blutner (1999) argues for a one-to-one form-meaning mapping in which marked forms express marked meanings. He deliberately uses bidirectional optimization to rule out many-to-one cases: roughly speaking, an (x, z) pair is grammatical for him only if z is optimal given x and vice-versa. 3 A General Presentation of OT This section (graphically summarized in Fig. 1) lays out a generalized version of OT’s theory of production, introducing some notational and representational conventions that may be useful to others and will be important below. 
In particular, all objects are represented as strings, or as functions that map strings to strings. This will enable us to use finitestate techniques later. The underlying form x and surface form z are represented as strings. We often refer to these strings as input and output. Following Eisner (1997), each candidate (x, z) is also represented as a string y. The notation (x, z) that we have been using so far for candidates is actually misleading, since in fact the candidates y that are compared encode more than just x and z. They also encode a particular alignment or correspondence between x and z. For example, if x = abdip and z = a[di][bu], then a typical candidate would be encoded y = aab0[ddii][pb0u] which specifies that a corresponds to a, b was deleted (has no surface correspondent), voiceless p surfaces as voiced b, etc. The harmony of y might depend on this alignment as well as on x and z (just as an outfit might fit worse when worn backwards). Because we are distinguishing underlying and surface material by using disjoint alphabets Σ = {a, b, . . .} and ∆= {[, ], a, b, . . .},2 it is easy to extract the underlying and surface forms (x and z) from y. Although the above example assumes that x and z are simple strings of phonemes and brackets, nothing herein depends on that assumption. Autosegmental representations too can be encoded as strings (Eisner, 1997). In general, an OT grammar consists of 4 components: a constraint ranking, a harmony ordering, and generating and pronouncing functions. The constraint ranking is the language-specific part of the grammar; the other components are often supposed to be universal across languages. The generating function GEN maps any x ∈Σ∗ to the (nonempty) set of candidates y whose underlying form is x. In other words, GEN just inserts 2An alternative would be to distinguish them by odd and even positions in the string. x |{z} underlying form x∈Σ∗ GEN −→Y0(x) C1 −→Y1(x) C2 −→Y2(x) · · · Cn −→Yn(x) | {z } sets of candidates y∈(Σ∪∆)∗ PRON −→Z(x) | {z } set of surface forms z∈∆∗ where Yi−1(x) Ci −→Yi(x) really means Yi−1(x) | {z } y∈(Σ∪∆)∗ Ci −→¯Yi(x) prune −→optimal subset of ¯Yi(x) | {z } ¯y∈(Σ∪∆∪{⋆})∗ delete ⋆ −→Yi(x) | {z } y∈(Σ∪∆)∗ Figure 1: This paper’s view of OT production. In the second line, Ci inserts ⋆’s into candidates; then the candidates with suboptimal starrings are pruned away, and finally the ⋆’s are removed from the survivors. arbitrary substrings from ∆∗amongst the characters of x, subject to any restrictions on what constitutes a legitimate candidate y.3 (Legitimacy might for instance demand that y’s surface material z have matched, non-nested left and right brackets, or even that z be similar to x in terms of edit distance.) A constraint ranking is simply a sequence C1, C2, . . . Cn of constraints. Let us take each Ci to be a function that scores candidates y by annotating them with violation marks ⋆. For example, a NODELETE constraint would map y = aab0c0[ddii][pb0u] to ¯y =NODELETE(y) = aab⋆0c⋆0[ddii][pb0u], inserting a ⋆after each underlying phoneme that does not correspond to any surface phoneme. This unconventional formulation is needed for new approaches that care about the exact location of the ⋆’s. In traditional OT only the number of ⋆’s is important, although the locations are sometimes shown for readability. Finally, OT requires a harmony ordering ≻ on scored candidates ¯y ∈(Σ ∪∆∪{⋆})∗. In traditional OT, ¯y is most harmonic when it contains the fewest ⋆’s. 
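To make the scoring convention concrete, the following sketch (in Python) implements a NODELETE-style scorer over a simplified candidate encoding, a tuple of underlying-surface pairs rather than the interleaved string format above; the pair encoding, names, and example are illustrative assumptions, not the exact representation used here.

    # A minimal sketch, assuming a simplified candidate encoding: each candidate
    # is a tuple of (underlying, surface) pairs, with '' marking deletion or
    # epenthesis and brackets treated as surface-only symbols.
    def no_delete(candidate):
        """Annotate the candidate with a '*' after each deleted underlying symbol."""
        scored = []
        for u, s in candidate:
            scored.append((u, s))
            if u and not s:            # underlying material with no surface correspondent
                scored.append('*')     # the mark is kept in place, as required above
        return tuple(scored)

    def star_count(scored):
        """Traditional harmony: a candidate is better when it carries fewer '*' marks."""
        return sum(1 for item in scored if item == '*')

    # x = abdip realized as a[di][bu]: b is deleted, p surfaces as b, u is epenthetic.
    y = (('a', 'a'), ('b', ''), ('', '['), ('d', 'd'), ('i', 'i'), ('', ']'),
         ('', '['), ('p', 'b'), ('', 'u'), ('', ']'))
    print(star_count(no_delete(y)))    # 1, and the mark sits right after the deleted b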
For example, among candidates scored by NODELETE, the most harmonic ones are the ones with the fewest deletions; many candidates may tie for this honor. §6 considers other harmony orderings, a possibility recognized by Prince and Smolensky (1993) (≻corresponds to their H-EVAL). In general ≻may be a partial order: two competing candidates may be equally harmonic or incomparable (in which case both can survive), and candidates with different underlying forms never compete at all. Production under such a grammar is a matter of successive filtering by the constraints C1, . . . Cn. Given an underlying form x, let Y0(x) = GEN(x) (1) 3It is never really necessary for GEN to enforce such restrictions, since they can equally well be enforced by the top-ranked constraint C1 (see below). Yi(x) = {y ∈Yi−1(x) : (2) (∄y′ ∈Yi−1(x)) Ci(y′) ≻Ci(y)} The set of optimal candidates is now Yn(x). Extracting z from each y ∈Yn(x) gives the set Z(x) or PRODUCE(x) of acceptable surface forms: Z(x) = {PRON(y) : y ∈Yn(x)} ⊆∆∗ (3) PRON denotes the simple pronunciation function that extracts z from y. It is the counterpart to GEN: just as GEN fleshes out x ∈Σ∗into y by inserting symbols of ∆, PRON slims y down to z ∈∆∗by removing symbols of Σ. Notice that Yn ⊆Yn−1 ⊆. . . ⊆Y0. The only candidates y ∈Yi−1 that survive filtering by Ci are the ones that Ci considers most harmonic. The above notation is general enough to handle some of the important variations of OT, such as Paradigm Uniformity and Sympathy Theory. In particular, one can define GEN so that each candidate y encodes not just an alignment between x and z, but an alignment among x, z, and some other strings that are neither underlying nor surface. These other strings may represent the surface forms for other members of the same morphological paradigm, or intermediate throwaway candidates to which z is sympathetic. Production still optimizes y, which means that it simultaneously optimizes z and the other strings. 4 Comprehension in Finite-State OT This section assumes OT’s traditional harmony ordering, in which the candidates that survive filtering by Ci are the ones into which Ci inserts fewest ⋆’s. Much computational work on OT has been conducted within a finite-state framework (Ellison, 1994), in keeping with a tradition of finite-state phonology (Johnson, 1972; Kaplan and Kay, 1994).4 4The tradition already included (inviolable) phonological Finite-state OT is a restriction of the formalism discussed above. It specifically assumes that GEN, C1, . . . Cn, and PRON are all regular relations, meaning that they can be described by finite-state transducers. GEN is a nondeterministic transducer that maps each x to multiple candidates y. The other transducers map each y to a single ¯y or z. These finite-state assumptions were proposed (in a different and slightly weaker form) by Ellison (1994). Their empirical adequacy has been defended by Eisner (1997). In addition to having the right kind of power linguistically, regular relations are closed under various relevant operations and allow (efficient) parallel processing of regular sets of strings. Ellison (1994) exploited such properties to give a production algorithm for finite-state OT. Given x and a finite-state OT grammar, he used finite-state operations to construct the set Yn(x) of optimal candidates, represented as a finite-state automaton. Ellison’s construction demonstrates that Yn is always a regular set. Since PRON is regular, it follows that PRODUCE(x) = Z(x) is also a regular set. 
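Over a finite candidate set, equations (1)-(3) amount to the successive filtering sketched below; as a toy approximation, GEN is assumed to return a finite set and each constraint an integer violation count, whereas finite-state OT manipulates regular sets and starred candidates instead.

    def produce(x, gen, constraints, pron):
        """Toy production by successive filtering, per equations (1)-(3)."""
        candidates = gen(x)                            # Y0(x) = GEN(x)
        for c in constraints:                          # filter by C1, ..., Cn in rank order
            best = min(c(y) for y in candidates)
            candidates = {y for y in candidates if c(y) == best}   # Yi(x)
        return {pron(y) for y in candidates}           # Z(x) = PRODUCE(x)

    # e.g. with a GEN that optionally appends '!', a length-minimizing constraint,
    # and an identity PRON:
    print(produce('ab', lambda x: {x, x + '!'}, [len], lambda y: y))   # {'ab'}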
We now show that COMPREHEND(z), in constrast, need not be a regular set. Let Σ = {a, b}, ∆= {[, ], a, b, . . .} and suppose that GEN allows candidates like the ones in §3, in which parts of the string may be bracketed between [ and ]. The crucial grammar consists of two finite-state constraints. C2 penalizes a’s that fall between brackets (by inserting ⋆next to each one) and also penalizes b’s that fall outside of brackets. It is dominated by C1, which penalizes brackets that do not fall at either edge of the string. Note that this grammar is completely permissive as to the number and location of surface characters other than brackets. If x contains more a’s than b’s, then PRODUCE(x) is the set ˆ∆∗of all unbracketed surface forms, where ˆ∆is ∆minus the bracket symbols. If x contains fewer a’s than b’s, then PRODUCE(x) = [ ˆ∆∗]. And if a’s and b’s appear equally often in x, then PRODUCE(x) is the union of the two sets. Thus, while the x-to-z mapping is not a regular relation under this grammar, at least PRODUCE(x) is a regular set for each x—just as finite-state OT constraints, notably Koskenniemi’s (1983) two-level model, which like OT used finite-state constraints on candidates y that encoded an alignment between underlying x and surface z. guarantees. But for any unbracketed z ∈ˆ∆∗, such as z = abc, COMPREHEND(z) is not regular: it is the set of underlying strings with # of a’s ≥# of b’s. This result seems to eliminate any hope of handling OT comprehension in a finite-state framework. It is interesting to note that both OT and current speech recognition systems construct finitestate models of production and define comprehension as the inverse of production. Speech recognizers do correctly implement comprehension via finite-state optimization (Pereira and Riley, 1997). But this is impossible in OT because OT has a more complicated production model. (In speech recognizers, the most probable phonetic or phonological surface form is not presumed to have suppressed its competitors.) One might try to salvage the situation by barring constraints like C1 or C2 from the theory as linguistically implausible. Unfortunately this is unlikely to succeed. Primitive OT (Eisner, 1997) already restricts OT to something like a bare minimum of constraints, allowing just two simple constraint families that are widely used by practitioners of OT. Yet even these primitive constraints retain enough power to simulate any finite-state constraint. In any case, C1 and C2 themselves are fairly similar to “domain” constraints used to describe tone systems (Cole and Kisseberth, 1994). While C2 is somewhat odd in that it penalizes two distinct configurations at once, one would obtain the same effect by combining three separately plausible constraints: C2 requires a’s between brackets (i.e., in a tone domain) to receive surface high tones, C3 requires b’s outside brackets to receive surface high tones, and C4 penalizes all surface high tones.5 Another obvious if unsatisfying hack would impose heuristic limits on the length of x, for example by allowing the comprehension system to return the approximation COMPREHEND(z) ∩{x : |x| ≤ 2 · |z|}. This set is finite and hence regular, so per5Since the surface tones indicate the total number of a’s and b’s in the underlying form, COMPREHEND(z) is actually a finite set in this version, hence regular. 
But the non-regularity argument does go through if the tonal information in z is not available to the comprehension system (as when reading text without diacritics); we cover this case in §5. (One can assume that some lower-ranked constraints require a special suffix before ], so that the bracket information need not be directly available to the comprehension system either.) haps it can be produced by some finite-state method, although the automaton to describe the set might be large in some cases. Recent efforts to force OT into a fully finite-state mold are more promising. As we will see, they identify the problem as the harmony ordering ≻, rather than the space of constraints or the potential infinitude of the answer set. 5 Regular-Relation Comprehension Since COMPREHEND(z) need not be a regular set in traditional OT, a corollary is that COMPREHEND and its inverse PRODUCE are not regular relations. That much was previously shown by Markus Hiller and Paul Smolensky (Frank and Satta, 1998), using similar examples. However, at least some OT grammars ought to describe regular relations. It has long been hypothesized that all human phonologies are regular relations, at least if one omits reduplication, and this is necessarily true of phonologies that were successfully described with pre-OT formalisms (Johnson, 1972; Koskenniemi, 1983). Regular relations are important for us because they are computationally tractable. Any regular relation can be implemented as a finite-state transducer T, which can be inverted and used for comprehension as well as production. PRODUCE(x) = T(x) = range(x ◦T), and COMPREHEND(z) = T −1(z) = domain(T ◦z). We are therefore interested in compiling OT grammars into finite-state transducers—by hook or by crook. §6 discusses how; but first let us see how such compilation is useful in realistic situations. Any practical comprehension strategy must recognize that the hearer does not really perceive the entire surface form. After all, the surface form contains phonetically invisible material (e.g., syllable and foot boundaries) and makes phonetically imperceptible distinctions (e.g., two copies of a tone versus one doubly linked copy). How to comprehend in this case? The solution is to modify PRON to “go all the way”—to delete not only underlying material but also phonetically invisible material. Indeed, PRON can also be made to perform any purely phonetic processing. Each output z of PRODUCE is now not a phonological surface form but a string of phonemes or spectrogram segments. So long as PRON is a regular relation (perhaps a nondeterministic or probabilistic one that takes phonetic variation into account), we will still be able to construct T and use it for production and comprehension as above.6 How about the lexicon? When the phonology can be represented as a transducer, COMPREHEND(z) is a regular set. It contains all inputs x that could have produced output z. In practice, many of these inputs are not in the lexicon, nor are they possible novel words. One should restrict to inputs that appear in the lexicon (also a regular set) by intersecting COMPREHEND(z) with the lexicon. For novel words this intersection will be empty; but one can find the possible underlying forms of the novel word, for learning’s sake, by intersecting COMPREHEND(z) with a larger (infinite) regular set representing all forms satisfying the language’s lexical constraints. There is an alternative treatment of the lexicon. 
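Before turning to that alternative, the identities PRODUCE(x) = range(x ◦ T) and COMPREHEND(z) = domain(T ◦ z), and the intersection with the lexicon, can be checked on finite approximations in which a relation is simply a set of string pairs; the toy transducer and lexicon below are assumptions for illustration only.

    # Relations approximated as finite sets of (input, output) pairs; with true
    # regular relations the same identities hold via transducer composition.
    def compose(r, s):
        return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

    def rng(r): return {z for (_, z) in r}
    def dom(r): return {x for (x, _) in r}
    def ident(lang): return {(w, w) for w in lang}      # a language as an identity relation

    def produce(T, x): return rng(compose(ident({x}), T))       # range(x o T)
    def comprehend(T, z): return dom(compose(T, ident({z})))    # domain(T o z)

    T = {('pad', 'pat'), ('pat', 'pat'), ('bad', 'bat')}         # toy final devoicing
    lexicon = {'pad', 'pat'}
    print(comprehend(T, 'pat') & lexicon)                        # {'pad', 'pat'}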
GEN can be extended “backwards” to incorporate morphology just as PRON was extended “forwards” to incorporate phonetics. On this view, the input x is a sequence of abstract morphemes, and GEN performs morphological preprocessing to turn x into possible candidates y. GEN looks up each abstract morpheme’s phonological string ∈Σ∗from the lexicon,7 then combines these phonological strings by concatenation or template merger, then nondeterministically inserts surface material from ∆∗. Such a GEN can plausibly be built up (by composition) as a regular relation from abstract morpheme sequences to phonological candidates. This regularity, as for PRON, is all that is required. Representing a phonology as a transducer T has additional virtues. T can be applied efficiently to any input string x, whereas Ellison (1994) or Eisner (1997) requires a fresh automaton construction for each x. A nice trick is to build T without 6Pereira and Riley (1997) build a speech recognizer by composing a probabilistic finite-state language model, a finite-state pronouncing dictionary, and a probabilistic finite-state acoustic model. These three components correspond precisely to the input to GEN, the traditional OT grammar, and PRON, so we are simply suggesting the same thing in different terminology. 7Nondeterministically in the case of phonologically conditioned allomorphs: INDEFINITE APPLE 7→{Λæpl, ænæpl} ⊆ Σ∗. This yields competing candidates that differ even in their underlying phonological material. PRON and apply it to all conceivable x’s in parallel, yielding the complete set of all optimal candidates Yn(Σ∗) = S x∈Σ∗Yn(x). If Y and Y ′ denote the sets of optimal candidates under two grammars, then (Y ∩¬Y ′) ∪(Y ′ ∩¬Y ) yields the candidates that are optimal under only one grammar. Applying GEN−1 or PRON to this set finds the regular set of underlying or surface forms that the two grammars would treat differently; one can then look for empirical cases in this set, in order to distinguish between the two grammars. 6 Theorem on Compiling OT Why are OT phonologies not always regular relations? The trouble is that inputs may be arbitrarily long, and so may accrue arbitrarily large numbers of violations. Traditional OT (§4) is supposed to distinguish all such numbers. Consider syllabification in English, which prefers to syllabify the long input bi bambam . . . bam | {z } k copies as [bi][bam][bam] . . . [bam] (with k codas) rather than [bib][am][bam] . . . [bam] (with k + 1 codas). NOCODA must therefore distinguish annotated candidates ¯y with k ⋆’s (which are optimal) from those with k + 1 ⋆’s (which are not). It requires a (≥k + 2)-state automaton to make this distinction by looking only at the ⋆’s in ¯y. And if k can be arbitrarily large, then no finite-state automaton will handle all cases. Thus, constraints like NOCODA do not allow an upper bound on k for all x ∈Σ∗. Of course, the minimal number of violations k of a constraint is fixed given the underlying form x, which is useful in production.8 But comprehension is less fortunate: we cannot bound k given only the surface form z. In the grammar of §4, COMPREHEND(abc) included underlying forms whose optimal candidates had arbitrarily large numbers of violations k. Now, in most cases, the effect of an OT grammar can be achieved without actually counting anything. (This is to be expected since rewrite-rule 8Ellison (1994) was able to construct PRODUCE(x) from x. 
One can even build a transducer for PRODUCE that is correct on all inputs that can achieve ≤K violations and returns ∅on other inputs (signalling that the transducer needs to be recompiled with increased K). Simply use the construction of (Frank and Satta, 1998; Karttunen, 1998), composed with a hard constraint that the answer must have ≤K violations. grammars were previously written for the same phonologies, and they did not use counting!) This is possible despite the above arguments because for some grammars, the distinction between optimal and suboptimal ¯y can be made by looking at the non-⋆symbols in ¯y rather than trying to count the ⋆’s. In our NOCODA example, a surface substring such as . . . ib⋆][a. . . might signal that ¯y is suboptimal because it contains an “unnecessary” coda. Of course, the validity of this conclusion depends on the grammar and specifically the constraints C1, . . . Ci−1 ranked above NOCODA, since whether that coda is really unnecessary depends on whether ¯Yi−1 also contains the competing candidate . . . i][ba . . . with fewer codas. But as we have seen, some OT grammars do have effects that overstep the finite-state boundary (§4). Recent efforts to treat OT with transducers have therefore tried to remove counting from the formalism. We now unify such efforts by showing that they all modify the harmony ordering ≻. §4 described finite-state OT grammars as ones where GEN, PRON, and the constraints are regular relations. We claim that if the harmony ordering ≻ is also a regular relation on strings of (Σ∪∆∪{⋆})∗, then the entire grammar (PRODUCE) is also regular. We require harmony orderings to be compatible with GEN: an ordering must treat ¯y′, ¯y as incomparable (neither is ≻the other) if they were produced from different underlying forms.9 To make the notation readable let us denote the ≻ relation by the letter H. Thus, a transducer for H accepts the pair (¯y′, ¯y) if ¯y′ ≻¯y. The construction is inductive. Y0 = GEN is regular by assumption. If Yi−1 is regular, then so is Yi since (as we will show) Yi = ( ¯Yi ◦¬range( ¯Yi ◦H)) ◦D (4) where ¯Yi def = Yi−1 ◦Ci and maps x to the set of starred candidates that Ci will prune; ¬ denotes the complement of a regular language; and D is a transducer that removes all ⋆’s. Therefore PRODUCE = Yn ◦PRON is regular as claimed. 9For example, the harmony ordering of traditional OT is {(¯y′, ¯y) : ¯y′ has the same underlying form as, but contains fewer ⋆’s than, ¯y}. If we were allowed to drop the sameunderlying-form condition then the ordering would become regular, and then our claim would falsely imply that all traditional finite-state OT grammars were regular relations. It remains to derive (4). Equation (2) implies Ci(Yi(x)) = {¯y ∈¯Yi(x) : (∄¯y′ ∈¯Yi(x)) ¯y′ ≻¯y} (5) = ¯Yi(x) −{¯y : (∃¯y′ ∈¯Yi(x)) ¯y′ ≻¯y} (6) = ¯Yi(x) −H( ¯Yi(x)) (7) One can read H( ¯Yi(x)) as “starred candidates that are worse than other starred candidates,” i.e., suboptimal. The set difference (7) leaves only the optimal candidates. We now see (x, ¯y) ∈Yi ◦Ci ⇔¯y ∈Ci(Yi(x)) (8) ⇔ ¯y ∈¯Yi(x), ¯y ̸∈H( ¯Yi(x)) [by (7)] (9) ⇔ ¯y ∈¯Yi(x), (∄z)¯y ∈H( ¯Yi(z)) [see below](10) ⇔ (x, ¯y) ∈¯Yi, ¯y ̸∈range( ¯Yi ◦H) (11) ⇔ (x, ¯y) ∈¯Yi ◦¬range( ¯Yi ◦H) (12) therefore Yi ◦Ci = ¯Yi ◦¬range( ¯Yi ◦H) (13) and composing both sides with D yields (4). To justify (9) ⇔(10) we must show when ¯y ∈¯Yi(x) that ¯y ∈H( ¯Yi(x)) ⇔(∃z)¯y ∈H( ¯Yi(z)). For the ⇒ direction, just take z = x. 
For ⇐, ¯y ∈H( ¯Yi(z)) means that (∃¯y′ ∈¯Yi(z))¯y′ ≻¯y; but then x = z (giving ¯y ∈H( ¯Yi(x))), since if not, our compatibility requirement on H would have made ¯y′ ∈¯Yi(z) incomparable with ¯y ∈¯Yi(x). Extending the pretty notation of (Karttunen, 1998), we may use (4) to define a left-associative generalized optimality operator ooH : Y ooH C def = (Y ◦C◦¬range(Y ◦C◦H))◦D (14) Then for any regular OT grammar, PRODUCE = GEN ooH C1 ooH C2 · · · ooH Cn ◦PRON and can be inverted to get COMPREHEND. More generally, different constraints can usefully be applied with different H’s (Eisner, 2000). The algebraic construction above is inspired by a version that Gerdemann and van Noord (2000) give for a particular variant of OT. Their regular expressions can be used to implement it, simply replacing their add_violation by our H. Typically, H ignores surface characters when comparing starred candidates. So H can be written as elim(∆)◦G◦elim(∆)−1 where elim(∆) is a transducer that removes all characters of ∆. To satisfy the compatibility requirement on H, G should be a subset of the relation (Σ| ⋆|(ϵ : ⋆)|(⋆: ϵ))∗.10 10This transducer regexp says to map any symbol in Σ ∪{⋆} to itself, or insert or delete ⋆—and then repeat. We now summarize the main proposals from the literature (see §1), propose operator names, and cast them in the general framework. • Y o C: Inviolable constraint (Koskenniemi, 1983; Bird, 1995), implemented by composition. • Y o+ C: Counting constraint (Prince and Smolensky, 1993): more violations is more disharmonic. No finite-state implementation possible. • Y oo C: Binary approximation (Karttunen, 1998; Frank and Satta, 1998). All candidates with any violations are equally disharmonic. Implemented by G = (Σ∗(ϵ : ⋆)Σ∗)+, which relates underlying forms without violations to the same forms with violations. • Y oo3 C: 3-bounded approximation (Karttunen, 1998; Frank and Satta, 1998). Like o+ , but all candidates with ≥3 violations are equally disharmonic. G is most easily described with a transducer that keeps count of the input and output ⋆’s so far, on a scale of 0, 1, 2, ≥3. Final states are those whose output count exceeds their input count on this scale. • Y o⊂C: Matching or subset approximation (Gerdemann and van Noord, 2000). A candidate is more disharmonic than another if it has stars in all the same locations and some more besides.11 Here G = ((Σ|⋆)∗(ϵ : ⋆)(Σ|⋆)∗)+. • Y o> C: Left-to-right directional evaluation (Eisner, 2000). A candidate is more disharmonic than another if in the leftmost position where they differ (ignoring surface characters), it has a ⋆. This revises OT’s “do only when necessary” mantra to “do only when necessary and then as late as possible” (even if delaying ⋆’s means suffering more of them later). Here G = (Σ|⋆)∗((ϵ : ⋆)|((Σ : ⋆)(Σ|⋆)∗)). Unlike the other proposals, here two forms can both be optimal only if they have exactly the same pattern of violations with respect to their underlying material. • Y <o C: Right-to-left directional evaluation. “Do only when necessary and then as early as possible.” Here G is the reverse of the G used in o> . The novelty of the matching and directional proposals is their attention to where the violations fall. Eisner’s directional proposal (o>, <o) is the only 11Many candidates are incomparable under this ordering, so Gerdemann and van Noord also showed how to weaken the notation of “same location” in order to approximate o+ better. 
(a) x =bantodibo [ban][to][di][bo] [ban][ton][di][bo] [ban][to][dim][bon] [ban][ton][dim][bon] (b) NOCODA ban⋆todibo ban⋆to⋆dibo ban⋆todi⋆bo⋆ ban⋆to⋆di⋆bo⋆ (c) C1 NOCODA *! * ** ***! ***!* (d) C1 σ1 σ2 σ3 σ4 *! * * *! * * * * *! * * Figure 2: Counting vs. directionality. [Adapted from (Eisner, 2000).] C1 is some high-ranked constraint that kills the most faithful candidate; NOCODA dislikes syllable codas. (a) Surface material of the candidates. (b) Scored candidates for G to compare. Surface characters but not ⋆’s have been removed by elim(∆). (c) In traditional evaluation o+ , G counts the ⋆’s. (d) Directional evaluation o> gets a different result, as if NOCODA were split into 4 constraints evaluating the syllables separately. More accurately, it is as if NOCODA were split into one constraint per underlying letter, counting the number of ⋆’s right after that letter. one defended on linguistic as well as computational grounds. He argues that violation counting (o+) is a bug in OT rather than a feature worth approximating, since it predicts unattested phenomena such as “majority assimilation” (Bakovi´c, 1999; Lombardi, 1999). Conversely, he argues that comparing violations directionally is not a hack but a desirable feature, since it naturally predicts “iterative phenomena” whose description in traditional OT (via Generalized Alignment) is awkward from both a linguistic and a computational point of view. Fig. 2 contrasts the traditional and directional harmony orderings. Eisner (2000) proved that o> was a regular operator for directional H, by making use of a rather different insight, but that machine-level construction was highly technical. The new algebraic construction is simple and can be implemented with a few regular expressions, as for any other H. 7 Conclusion See the itemized points in §1 for a detailed summary. In general, this paper has laid out a clear, general framework for finite-state OT systems, and used it to obtain positive and negative results about the understudied problem of comprehension. Perhaps these results will have some bearing on the development of realistic learning algorithms. The paper has also established sufficient conditions for a finite-state OT grammar to compile into a finite-state transducer. It should be easy to imagine new variants of OT that meet these conditions. References Eric Bakovi´c. 1999. Assimilation to the unmarked. Rutgers Optimality Archive ROA-340., August. Steven Bird. 1995. Computational Phonology: A Constraint-Based Approach. Cambridge. Reinhard Blutner. 1999. Some aspects of optimality in natural language interpretation. In Papers on Optimality Theoretic Semantics. Utrecht. J. Cole and C. Kisseberth. 1994. An optimal domains theory of harmony. Studies in the Linguistic Sciences, 24(2). Jason Eisner. 1997. Efficient generation in primitive Optimality Theory. In Proc. of ACL/EACL. Jason Eisner. 2000. Directional constraint evaluation in Optimality Theory. In Proc. of COLING. T. Mark Ellison. 1994. Phonological derivation in Optimality Theory. In Proc. of COLING J. Eric Fosler. 1996. On reversing the generation process in Optimality Theory. Proc. of ACL Student Session. R. Frank and G. Satta. 1998. Optimality Theory and the generative complexity of constraint violability. Computational Linguistics, 24(2):307–315. D. Gerdemann and G. van Noord. 2000. Approximation and exactness in finite-state Optimality Theory. In Proc. of ACL SIGPHON Workshop. Mark Hale and Charles Reiss. 1998. 
Formal and empirical arguments concerning phonological acquisition. Linguistic Inquiry, 29:656–683. C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. Mouton. R. Kaplan and M. Kay. 1994. Regular models of phonological rule systems. Comp. Ling., 20(3). L. Karttunen. 1998. The proper treatment of optimality in computational phonology. In Proc. of FSMNLP. Kimmo Koskenniemi. 1983. Two-level morphology: A general computational model for word-form recognition and production. Publication 11, Dept. of General Linguistics, University of Helsinki. Linda Lombardi. 1999. Positional faithfulness and voicing assimilation in Optimality Theory. Natural Language and Linguistic Theory, 17:267–302. Fernando C. N. Pereira and Michael Riley. 1997. Speech recognition by composition of weighted finite automata. In E. Roche and Y. Schabes, eds., Finite-State Language Processing. MIT Press. A. Prince and P. Smolensky. 1993. Optimality Theory: Constraint interaction in generative grammar. Ms., Rutgers and U. of Colorado (Boulder). Paul Smolensky. 1996. On the comprehension/production dilemma in child language. Linguistic Inquiry, 27:720–731.
2002
8
Generalized Encoding of Description Spaces and its Application to Typed Feature Structures Gerald Penn Department of Computer Science University of Toronto 10 King's College Rd. Toronto M5S 3G4, Canada Abstract This paper presents a new formalization of a unification- or join-preserving encoding of partially ordered sets that more essentially captures what it means for an encoding to preserve joins, generalizing the standard definition in AI research. It then shows that every statically typable ontology in the logic of typed feature structures can be encoded in a data structure of fixed size without the need for resizing or additional union-find operations. This is important for any grammar implementation or development system based on typed feature structures, as it significantly reduces the overhead of memory management and reference-pointer-chasing during unification. 1 Motivation The logic of typed feature structures (Carpenter, 1992) has been widely used as a means of formalizing and developing natural language grammars that support computationally efficient parsing, generation and SLD resolution, notably grammars within the Head-driven Phrase Structure Grammar (HPSG) framework, as evidenced by the recent successful development of the LinGO reference grammar for English (LinGO, 1999). These grammars are formulated over a finite vocabulary of features and partially ordered types, in respect of constraints called appropriateness conditions. Appropriateness specifies, for each type, all and only the features that take values in feature structures of that type, along with adj noun CASE:case nom acc plus minus subst case bool head MOD:bool PRD:bool Figure 1: A sample type system with appropriateness conditions. the types of values (value restrictions) those feature values must have. In Figure 1,1 for example, all head-typed TFSs must have bool-typed values for the features MOD and PRD, and no values for any other feature. Relative to data structures like arrays or logical terms, typed feature structures (TFSs) can be regarded as an expressive refinement in two different ways. First, they are typed, and the type system allows for subtyping chains of unbounded depth. Figure 1 has a chain of length  from to noun. Pointers to arrays and logical terms can only monotonically “refine” their (syntactic) type from unbound (for logical terms, variables) to bound. Second, although all the TFSs of a given type have the same features because of appropriateness, a TFS may acquire more features when it promotes to a subtype. If a head-typed TFS promotes to noun in the type system above, for example, it acquires one extra casevalued feature, CASE. When a subtype has two or 1In this paper, Carpenter's (1992) convention of using  as the most general type, and depicting subtypes above their supertypes is used. Computational Linguistics (ACL), Philadelphia, July 2002, pp. 64-71. Proceedings of the 40th Annual Meeting of the Association for more incomparable supertypes, a TFS can also multiply inherit features from other supertypes when it promotes. The overwhelmingly most prevalent operation when working with TFS-based grammars is unification, which corresponds mathematically to finding a least upper bound or join. The most common instance of unification is the special case in which a TFS is unified with the most general TFS that satisfies a description stated in the grammar. 
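To fix ideas, the sketch below encodes one reading of Figure 1 ('bot' standing in for the unnamed most general type, with head, case and bool above it, subst above head, adj and noun above subst, nom/acc above case, and plus/minus above bool) and computes type unification and inherited appropriateness; since this fragment is tree-shaped, the join of two consistent types is simply the more specific one, which would not hold of an arbitrary bounded complete partial order.

    # A minimal sketch, assuming the reading of Figure 1 described above.
    SUBTYPE_OF = {                  # immediate supertype of each (non-bottom) type
        'head': 'bot', 'case': 'bot', 'bool': 'bot', 'subst': 'head',
        'adj': 'subst', 'noun': 'subst', 'nom': 'case', 'acc': 'case',
        'plus': 'bool', 'minus': 'bool',
    }
    INTRODUCES = {'head': {'MOD': 'bool', 'PRD': 'bool'}, 'noun': {'CASE': 'case'}}

    def ancestors(t):
        chain = [t]
        while t in SUBTYPE_OF:
            t = SUBTYPE_OF[t]
            chain.append(t)
        return chain                # t itself, then its supertypes up to 'bot'

    def join(s, t):
        """Type unification: the least upper bound, or None if the types are inconsistent."""
        if s in ancestors(t): return t      # in this tree-shaped fragment the join of
        if t in ancestors(s): return s      # consistent types is the more specific one
        return None                         # e.g. join('nom', 'acc'), join('noun', 'adj')

    def approp(t):
        """Features appropriate to t, inherited by upward closure."""
        feats = {}
        for u in reversed(ancestors(t)):    # from 'bot' down to t
            feats.update(INTRODUCES.get(u, {}))
        return feats

    print(join('head', 'noun'))   # 'noun'
    print(approp('noun'))         # {'MOD': 'bool', 'PRD': 'bool', 'CASE': 'case'}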
This special case can be decomposed at compile-time into more atomic operations that (1) promote a type to a subtype, (2) bind a variable, or (3) traverse a feature path, according to the structure of the description. TFSs actually possess most of the properties of fixed-arity terms when it comes to unification, due to appropriateness. Nevertheless, unbounded subtyping chains and acquiring new features conspire to force most internal representations of TFSs to perform extra work when promoting a type to a subtype to earn the expressive power they confer. Upon being repeatedly promoted to new subtypes, they must be repeatedly resized or repeatedly referenced with a pointer to newly allocated representations, both of which compromise locality of reference in memory and/or involve pointer-chasing. These costs are significant. Because appropriateness involves value restrictions, simply padding a representation with some extra space for future features at the outset must guarantee a proper means of filling that extra space with the right value when it is used. Internal representations that lazily fill in structure must also be wary of the common practice in description languages of binding a variable to a feature value with a scope larger than a single TFS — for example, in sharing structure between a daughter category and a mother category in a phrase structure rule. In this case, the representation of a feature's value must also be interpretable independent of its context, because two separate TFSs may refer to that variable. These problems are artifacts of not using a representation which possesses what in knowledge representation is known as a join-preserving encoding of a grammar's TFSs — in other words, a representation with an operation that naturally behaves like TFS-unification. The next section presents the standard definition of join-preserving encodings and provides a generalization that more essentially captures what it means for an encoding to preserve joins. Section 3 formalizes some of the defining characteristics of TFSs as they are used in computational linguistics. Section 4 shows that these characteristics quite fortuitously agree with what is required to guarantee the existence of a joinpreserving encoding of TFSs that needs no resizing or extra referencing during type promotion. Section 5 then shows that a generalized encoding exists in which variable-binding scope can be larger than a single TFS — a property no classical encoding has. Earlier work on graph unification has focussed on labelled graphs with no appropriateness, so the central concern was simply to minimize structure copying. While this is clearly germane to TFSs, appropriateness creates a tradeoff among copying, the potential for more compact representations, and other memory management issues such as locality of reference that can only be optimized empirically and relative to a given grammar and corpus (a recent example of which can be found in Callmeier (2001)). While the present work is a more theoretical consideration of how unification in one domain can simulate unification in another, the data structure described here is very much motivated by the encoding of TFSs as Prolog terms allocated on a contiguous WAM-style heap. In that context, the emphasis on fixed arity is really an attempt to avoid copying, and lazily filling in structure is an attempt to make encodings compact, but only to the extent that join preservation is not disturbed. 
While this compromise solution must eventually be tested on larger and more diverse grammars, it has been shown to reduce the total parsing time of a large corpus on the ALE HPSG benchmark grammar of English (Penn, 1993) by a factor of about 4 (Penn, 1999). 2 Join-Preserving Encodings We may begin with a familiar definition from discrete mathematics: Definition 1 Given two partial orders  and    , a function   is an orderembedding iff, for every  ,  iff  !#"$% & !'" . An order-embedding preserves the behavior of the order relation (for TFS type systems, subtyping; f Figure 2: An example order-embedding that cannot translate least upper bounds. for TFSs themselves, subsumption) in the encoding codomain. As shown in Figure 2, however, order embeddings do not always preserve operations such as least upper bounds. The reason is that the image of  may not be closed under those operations in the codomain. In fact, the codomain could provide joins where none were supposed to exist, or, as in Figure 2, no joins where one was supposed to exist. Mellish (1991; 1992) was the first to formulate join-preserving encodings correctly, by explicitly requiring this preservation. Let us write   for the join of and  in partial order  . Definition 2 A partial order % is bounded complete (BCPO) iff every set of elements with a common upper bound has a least upper bound. Bounded completeness ensures that unification or joins are well-defined among consistent types. Definition 3 Given two BCPOs,  and ,      is a classical join-preserving encoding of  into iff:  injectivity  is an injection,  zero preservation   " 2 iff  #"    " , and  join homomorphism    " "   " , where they exist. Join-preserving encodings are automatically orderembeddings because  iff & . There is actually a more general definition: Definition 4 Given two BCPOs,  and ,        " is a (generalized) join-preserving encoding of  into iff:  totality for all   ,  #"  ,  disjointness  "   "  iff  , 2We use the notation "!$# %'&)( to mean "!$# %& is undefined, and "!$# %'&)* to mean "!$# %'& is defined. + , . / 0 1 2  3 4 f Figure 3: A non-classical join-preserving encoding between BCPOs for which no classical joinpreserving encoding exists.  zero preservation for all 5   #" and 5     " , 6 78 iff 5 6 5 8 , and  join homomorphism for all 5   #" and 5     " , 5 6 5      " , where they exist. When  maps elements of  to singleton sets in , then  reduces to a classical join-preserving encoding. It is not necessary, however, to require that only one element of represent an element of  , provided that it does not matter which representative we choose at any given time. Figure 3 shows a generalized join-preserving encoding between two partial orders for which no classical encoding exists. There is no classical encoding of 4 into because no three elements can be found in that pairwise unify to a common join. A generalized encoding exists because we can choose three potential representatives for , : one ( - ) for unifying the representatives of / and 0 , one ( . ) for unifying the representatives of 0 and 1 , and one ( + ) for unifying the representatives of / and 1 . Notice that the set of representatives for , must be closed under unification. 
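A brute-force rendering of Definition 3 for finite BCPOs makes the three conditions explicit; each poset is given as its element set and a less-or-equal predicate, joins are found by search, and the function and argument names are ours. Definition 4 would instead let the encoding return, for each element, a set of representatives closed under joins.

    def lub(elems, leq, a, b):
        """Least upper bound of a and b in a finite poset, or None if they have no
        upper bound.  (In a BCPO, upper-bounded pairs always have a least one.)"""
        ubs = [u for u in elems if leq(a, u) and leq(b, u)]
        least = [u for u in ubs if all(leq(u, v) for v in ubs)]
        return least[0] if least else None

    def is_classical_encoding(P, leq_P, Q, leq_Q, f):
        if len({f(x) for x in P}) != len(P):              # injectivity
            return False
        for x in P:
            for y in P:
                jp = lub(P, leq_P, x, y)
                jq = lub(Q, leq_Q, f(x), f(y))
                if (jp is None) != (jq is None):          # zero preservation
                    return False
                if jp is not None and jq != f(jp):        # join homomorphism
                    return False
        return True

    # e.g. the identity map on any finite BCPO passes this check, while the
    # embedding of Figure 2 fails it because a join is lost in the codomain.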
Although space does not permit here, this generalization has been used to prove that well-typing, an alternative interpretation of appropriateness, is equivalent in its expressive power to the interpretation used here (called total well-typing; Carpenter, 1992); that multi-dimensional inheritance (Erbach, 1994) adds no expressive power to any TFS type system; that TFS type systems can encode systemic networks in polynomial space using extensional types (Carpenter, 1992); and that certain uses of parametric typing with TFSs also add no expressive power to the type system (Penn, 2000). 3 TFS Type Systems There are only a few common-sense restrictions we need to place on our type systems: Definition 5 A TFS type system consists of a finite BCPO of types, % , a finite set of features Feat, and a partial function,        such that, for every F   :  (Feature Introduction) there is a type    F "   such that:   F   # F "" , and for all    , if   F  " , then    F "   , and  (Upward Closure / Right Monotonicity) if   F  ! " and ! " , then   F  " and   F  ! " #  F  " . The function Approp maps a feature and type to the value restriction on that feature when it is appropriate to that type. If it is not appropriate, then Approp is undefined at that pair. Feature introduction ensures that every feature has a least type to which it is appropriate. This makes description compilation more efficient. Upward closure ensures that subtypes inherit their supertypes' features, and with consistent value restrictions. The combination of these two properties allows us to annotate a BCPO of types with features and value restrictions only where the feature is introduced or the value restriction is refined, as in Figure 1. A very useful property for type systems to have is static typability. This means that if two TFSs that are well-formed according to appropriateness are unifiable, then their unification is automatically well-formed as well — no additional work is necessary. Theorem 1 (Carpenter, 1992) An appropriateness specification is statically typable iff, for all types !  such that !$% , and all F &  :   F  !'"  () ) ) ) * ) ) ) )+ ,  F  ! "  if   F  ! " and   F  "   F " ,  F  ! " if only ,  F  ! " ,  F  " if only ,  F  " unrestricted otherwise - (head representation)  (MOD representation)  (PRD representation) /. Figure 4: A fixed array representation of the TFS in Figure 5. 0 1 head MOD plus PRD plus 23 Figure 5: A TFS of type head from the type system in Figure 1. Not all type systems are statically typable, but a type system can be transformed into an equivalent statically typable type system plus a set of universal constraints, the proof of which is omitted here. In linguistic applications, we normally have a set of universal constraints anyway for encoding principles of grammar, so it is easy and computationally inexpensive to conduct this transformation. 4 Static Encodability As mentioned in Section 1, what we want is an encoding of TFSs with a notion of unification that naturally corresponds to TFS-unification. As discussed in Section 3, static typability is something we can reasonably guarantee in our type systems, and is therefore something we expect to be reflected in our encodings — no extra work should be done apart from combining the types and recursing on feature values. If we can ensure this, then we have avoided the extra work that comes with resizing or unnecessary referencing and pointer-chasing. 
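The two conditions of Definition 5 can likewise be checked mechanically; in the sketch below, subsumes(s, t) is assumed to mean that t is at least as specific as s, approp must be the fully inherited table listing every appropriate (feature, type) pair, and the function name is ours.

    def check_type_system(types, subsumes, approp):
        """Check feature introduction and upward closure for a finite type system.
        approp maps (feature, type) pairs to value restrictions; the absence of a
        key means the feature is inappropriate to that type."""
        ok = True
        features = {f for (f, _) in approp}
        for f in features:
            intro = [t for t in types
                     if (f, t) in approp
                     and all((f, s) not in approp or subsumes(t, s) for s in types)]
            ok = ok and bool(intro)                          # feature introduction
            for t in types:
                for s in types:
                    if (f, t) in approp and subsumes(t, s):  # s is a subtype of t
                        closed = ((f, s) in approp
                                  and subsumes(approp[(f, t)], approp[(f, s)]))
                        ok = ok and closed                   # upward closure
        return ok

A further loop over pairs of consistent types, together with a least-upper-bound operation on types and value restrictions, would similarly check the static typability condition of Theorem 1.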
As mentioned above, what would be best from the standpoint of memory management is simply a fixed array of memory cells, padded with extra space to accommodate features that might later be added. We will call these frames. Figure 4 depicts a frame for the head-typed TFS in Figure 5. In a frame, the representation of the type can either be (1) a bit vector encoding the type,3 or (2) a reference pointer 3Instead of a bit vector, we could also use an index into a table if least upper bounds are computed by table look-up. to another frame. If backtracking is supported in search, changes to the type representation must be trailed. For each appropriate feature, there is also a pointer to a frame for that feature's value. There are also additional pointers for future features (for head, CASE) that are grounded to some distinguished value indicating that they are unused — usually a circular reference to the referring array position. Cyclic TFSs, if they are supported, would be represented with cyclic (but not 1-cyclic) chains of pointers. Frames can be implemented either directly as arrays, or as Prolog terms. In Prolog, the type representation could either be a term-encoding of the type, which is guaranteed to exist for any finite BCPO (Mellish, 1991; Mellish, 1992), or in extended Prologs, another trailable representation such as a mutable term (Aggoun and Beldiceanu, 1990) or an attributed value (Holzbaur, 1992). Padding the representation with extra space means using a Prolog term with extra arity. A distinguished value for unused arguments must then be a unique unbound variable.4 4.1 Restricting the Size of Frames At first blush, the prospect of adding as many extra slots to a frame as there could be extra features in a TFS sounds hopelessly unscalable to large grammars. While recent experience with LinGO (1999) suggests a trend towards modest increases in numbers of features compared to massive increases in numbers of types as grammars grow large, this is nevertheless an important issue to address. There are two discrete methods that can be used in combination to reduce the required number of extra slots: Definition 6 Given a finite BCPO,  , the set of modules of % is the finest partition of   ,       , such that (1) each  is upward-closed (with respect to subtyping), and (2) if two types have a least upper bound, then they belong to the same module. Trivially, if a feature is introduced at a type in one module, then it is not appropriate to any type in any other module. As a result, a frame for a TFS only needs to allow for the features appropriate to the 4Prolog terms require one additional unbound variable per TFS (sub)term in order to preserve the intensionality of the logic — unlike Prolog terms, structurally identical TFS substructures are not identical unless explicitly structure-shared. a b c d F: e G: f H: Figure 6: A type system with three features and a three-colorable feature graph. module of its type. Even this number can normally be reduced: Definition 7 The feature graph,   " , of module  is an undirected graph, whose vertices correspond to the features introduced in  , and in which there is an edge,  " , iff  and  are appropriate to a common maximally specific type in  . Proposition 1 The least number of feature slots required for a frame of any type in  is the least  for which   " is  -colorable. There are type systems, of course, for which modularization and graph-coloring will not help. 
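The module computation and the colouring bound can be sketched as follows; the hierarchy is assumed to be given by immediate-supertype links with every non-top type appearing as a key, the most general type is excluded (it would otherwise collapse everything into a single module), the data-structure names are ours, and the greedy colouring only gives an upper bound on the optimum of Proposition 1.

    from itertools import combinations

    def modules(parents, top='bot'):
        """Connected components of the immediate-subtyping links, with 'top' removed.
        parents maps each non-top type to its set of immediate supertypes."""
        comp = {t: {t} for t in parents if t != top}
        for t, sups in parents.items():
            for s in sups:
                if t != top and s != top and comp[t] is not comp[s]:
                    merged = comp[t] | comp[s]
                    for u in merged:
                        comp[u] = merged
        return {frozenset(c) for c in comp.values()}

    def slots_bound(module, maximal_types, approp_feats, introduced_in):
        """Greedily colour the module's feature graph (Definition 7); an upper bound."""
        feats = [f for f, t in introduced_in.items() if t in module]
        edges = {frozenset((f, g)) for f, g in combinations(feats, 2)
                 if any(f in approp_feats[t] and g in approp_feats[t]
                        for t in maximal_types if t in module)}
        colour = {}
        for f in feats:
            used = {colour[g] for g in colour if frozenset((f, g)) in edges}
            colour[f] = min(c for c in range(len(feats) + 1) if c not in used)
        return len(set(colour.values()))

    # e.g. with the Figure 1 fragment, parents = {'head': {'bot'}, 'subst': {'head'},
    # 'adj': {'subst'}, 'noun': {'subst'}, 'case': {'bot'}, 'nom': {'case'},
    # 'acc': {'case'}, 'bool': {'bot'}, 'plus': {'bool'}, 'minus': {'bool'}}
    # yields three modules: {head, subst, adj, noun}, {case, nom, acc}, {bool, plus, minus}.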
Figure 6, for example, has one module, three features, and a three-clique for a feature graph. There are statistical refinements that one could additionally make, such as determining the empirical probability that a particular feature will be acquired and electing to pay the cost of resizing or referencing for improbable features in exchange for smaller frames. 4.2 Correctness of Frames With the exception of extra slots for unused feature values, frames are clearly isomorphic in their structure to the TFSs they represent. The implementation of unification that we prefer to avoid resizing and referencing is to (1) find the least upper bound of the types of the frames being unified, (2) update one frame's type to the least upper bound, and point the other's type representation to it, and (3) recurse on respective pairs of feature values. The frame does not need to be resized, only the types need to be referenced, and in the special case of promoting the type of a single TFS to a subtype, the type only needs to be trailed. If cyclic TFSs are not supported, then acyclicity must also be enforced with an occurscheck. The correctness of frames as a join-preserving encoding of TFSs thus depends on being able to make sense of the values in these unused positions. The c F:a a b Figure 7: A type system that introduces a feature at a join-reducible type. 0 1 head MOD plus PRD bool 2 3 Figure 8: A TFS of type head in which one feature value is a most general satisfier of its feature's value restriction. problem is that features may be introduced at joinreducible types, as in Figure 7. There is only one module, so the frames for a and b must have a slot available for the feature F. When an a-typed TFS unifies with a b-typed TFS, the result will be of type c, so leaving the slot marked unused after recursion would be incorrect — we would need to look in a table to see what value to assign it. An alternative would be to place that value in the frames for a and b from the beginning. But since the value itself must be of type a in the case of Figure 7, this strategy would not yield a finite representation. The answer to this conundrum is to use a distinguished circular reference in a slot iff the slot is either unused or the value it contains is (1) the most general satisfier of the value restriction of the feature it represents and (2) not structure-shared with any other feature in the TFS.5 During unification, if one TFS is a circular reference, and the other is not, the circular reference is referenced to the other. If both values are circular references, then one is referenced to the other, which remains circular. The feature structure in Figure 8, for example, has the frame representation shown in Figure 9. The PRD value is a TFS of type bool, and this value is not shared with any other structure in the TFS. If the values of MOD and PRD are both bool-typed, then if 5The sole exception is a TFS of type  , which by definition belongs to no module and has no features. Its representation is a distinguished circular reference, unless two or more feature values share a single  -typed TFS value, in which case one is a circular reference and the rest point to it. The circular one can be chosen canonically to ensure that the encoding is still classical. - (head representation)  (MOD representation) /. /. Figure 9: The frame for Figure 8. 
they are shared (Figure 10), they do not use circu0 1 head MOD  bool PRD  2 3 Figure 10: A TFS of type head in which both feature values are most general satisfiers of the value restrictions, but they are shared. lar references (Figure 11), and if they are not shared (Figure 12), both of them use a different circular reference (Figure 13). With this convention for circular references, frames are a classical join-preserving encoding of the TFSs of any statically typable type system. Although space does not permit a complete proof here, the intuition is that (1) most general satisfiers of value restrictions necessarily subsume every other value that a totally well-typed TFS could take at that feature, and (2) when features are introduced, their initial values are not structure-shared with any other substructure. Static typability ensures that value restrictions unify to yield value restrictions, except in the final case of Theorem 1. The following lemma deals with this case: Lemma 1 If Approp is statically typable, ! % , and for some F    ,   F  ! " and   F " , then either   F  !  " or - (head representation)  (MOD/PRD representation) /. Figure 11: The frame for Figure 10. 0 1 head MOD bool PRD bool 2 3 Figure 12: A TFS of type head in which both feature values are most general satisfiers of the value restrictions, and they are not shared. - (head representation) /. /. /. Figure 13: The frame for Figure 12.   F  !$ " /  F    F "" . Proof: Suppose   F  !   " . Then    F "  ! $ .   F  ! " and   F " , so    F "  ! and    F "7   . So there are three cases to consider: Intro F "   : then the result trivially holds.  Intro F " but    Intro F " (or by symmetry, the opposite): then we have the situation in Figure 14. It must be that    F "  !   , so by static typability, the lemma holds. 6  Intro F " and    Intro F " : !  !   and    F "  !   , so ! and    F " are consistent. By bounded completeness, !     F " and !7    F "  !  . By upward closure,   F   # F "  ! " and by static typability,   F   # F " ! "    F    F "" . Furthermore,    F "  ! " 7 !   ; thus by static typability the lemma holds.  This lemma is very significant in its own right — it says that we know more than Carpenter's Theorem 1. An introduced feature's value restriction can always be predicted in a statically typable type system. The lemma implicitly relies on feature intro! $   # F " !  Figure 14: The second case in the proof of Lemma 1. !$ F: 1 ! F: /  F: 0 , 1 / 0 Figure 15: A statically typable “type system” that multiply introduces F at join-reducible elements with different value restrictions. duction, but in fact, the result holds if we allow for multiple introducing types, provided that all of them agree on what the value restriction for the feature should be. Would-be type systems that multiply introduce a feature at join-reducible elements (thus requiring some kind of distinguished-value encoding), disagree on the value restriction, and still remain statically typable are rather difficult to come by, but they do exist, and for them, a frame encoding will not work. Figure 15 shows one such example. In this signature, the unification:  s F d    t F b does not exist, but the unification of their frame encodings must succeed because the  -typed TFS's F value must be encoded as a circular reference. To the best of the author's knowledge, there is no fixedsize encoding for Figure 15. 5 Generalized Term Encoding In practice, this classical encoding is not good for much. 
Description languages typically need to bind variables to various substructures of a TFS,  , and then pass those variables outside the substructures of  where they can be used to instantiate the value of another feature structure's feature, or as arguments to some function call or procedural goal. If a value in a single frame is a circular reference, we can properly understand what that reference encodes with the above convention by looking at its context, i.e., the type. Outside the scope of that frame, we have no way of knowing which feature's value restriction it is supposed to encode.  .    "     Introduced feature has variable encoding     .  "  .  "   variable binding   5  5  Figure 16: A pictorial overview of the generalized encoding. A generalized term encoding provides an elegant solution to this problem. When a variable is bound to a substructure that is a circular reference, it can be filled in with a frame for the most general satisfier that it represents and then passed out of context. Having more than one representative for the original TFS is consistent, because the set of representatives is closed under this filling operation. A schematic overview of the generalized encoding is in Figure 16. Every set of frames that encode a particular TFS has a least element, in which circular references are always opted for as introduced feature values. This is the same element as the classical encoding. It also has a greatest element, in which every unused slot still has a circular reference, but all unshared most general satisfiers are filled in with frames. Whenever we bind a variable to a substructure of a TFS, filling pushes the TFS's encoding up within the same set to some other encoding. As a result, at any given point in time during a computation, we do not exactly know which encoding we are using to represent a given TFS. Furthermore, when two TFSs are unified successfully, we do not know exactly what the result will be, but we do know that it falls inside the correct set of representatives because there is at least one frame with circular references for the values of every newly introduced feature. 6 Conclusion Simple frames with extra slots and a convention for filling in feature values provide a join-preserving encoding of any statically typable type system, with no resizing and no referencing beyond that of type representations. A frame thus remains stationary in memory once it is allocated. A generalized encoding, moreover, is robust to side-effects such as extra-logical variable-sharing. Frames have many potential implementations, including Prolog terms, WAM-style heap frames, or fixed-sized records. References A. Aggoun and N. Beldiceanu. 1990. Time stamp techniques for the trailed data in constraint logic programming systems. In S. Bourgault and M. Dincbas, editors, Programmation en Logique, Actes du 8eme Seminaire, pages 487–509. U. Callmeier. 2001. Efficient parsing with large-scale unification grammars. Master's thesis, Universitaet des Saarlandes. B. Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge. G. Erbach. 1994. Multi-dimensional inheritance. In Proceedings of KONVENS 94. Springer. C. Holzbaur. 1992. Metastructures vs. attributed variables in the context of extensible unification. In M. Bruynooghe and M. Wirsing, editors, Programming Language Implementation and Logic Programming, pages 260–268. Springer Verlag. LinGO. 1999. The LinGO grammar and lexicon. Available on-line at http://lingo.stanford.edu. C. Mellish. 1991. 
Graph-encodable description spaces. Technical report, University of Edinburgh Department of Artificial Intelligence. DYANA Deliverable R3.2B. C. Mellish. 1992. Term-encodable description spaces. In D.R. Brough, editor, Logic Programming: New Frontiers, pages 189–207. Kluwer. G. Penn. 1993. The ALE HPSG benchmark grammar. Available on-line at http://www.cs.toronto.edu/~gpenn/ale.html. G. Penn. 1999. An optimized Prolog encoding of typed feature structures. In Proceedings of the 16th International Conference on Logic Programming (ICLP-99), pages 124–138. G. Penn. 2000. The Algebraic Structure of Attributed Type Signatures. Ph.D. thesis, Carnegie Mellon University.
2002
9
Offline Strategies for Online Question Answering: Answering Questions Before They Are Asked Michael Fleischman, Eduard Hovy, Abdessamad Echihabi USC Information Sciences Institute 4676 Admiralty Way Marina del Rey, CA 90292-6695 {fleisch, hovy, echihabi} @ISI.edu Abstract Recent work in Question Answering has focused on web-based systems that extract answers using simple lexicosyntactic patterns. We present an alternative strategy in which patterns are used to extract highly precise relational information offline, creating a data repository that is used to efficiently answer questions. We evaluate our strategy on a challenging subset of questions, i.e. “Who is …” questions, against a state of the art web-based Question Answering system. Results indicate that the extracted relations answer 25% more questions correctly and do so three orders of magnitude faster than the state of the art system. 1 Introduction Many of the recent advances in Question Answering have followed from the insight that systems can benefit by exploiting the redundancy of information in large corpora. Brill et al. (2001) describe using the vast amount of data available on the World Wide Web to achieve impressive performance with relatively simple techniques. While the Web is a powerful resource, its usefulness in Question Answering is not without limits. The Web, while nearly infinite in content, is not a complete repository of useful information. Most newspaper texts, for example, do not remain accessible on the Web for more than a few weeks. Further, while Information Retrieval techniques are relatively successful at managing the vast quantity of text available on the Web, the exactness required of Question Answering systems makes them too slow and impractical for ordinary users. In order to combat these inadequacies, we propose a strategy in which information is extracted automatically from electronic texts offline, and stored for quick and easy access. We borrow techniques from Text Mining in order to extract semantic relations (e.g., concept-instance relations) between lexical items. We enhance these techniques by increasing the yield and precision of the relations that we extract. Our strategy is to collect a large sample of newspaper text (15GB) and use multiple part of speech patterns to extract the semantic relations. We then filter out the noise from these extracted relations using a machine-learned classifier. This process generates a high precision repository of information that can be accessed quickly and easily. We test the feasibility of this strategy on one semantic relation and a challenging subset of questions, i.e., “Who is …” questions, in which either a concept is presented and an instance is requested (e.g., “Who is the mayor of Boston?”), or an instance is presented and a concept is requested (e.g., “Who is Jennifer Capriati?”). By choosing this subset of questions we are able to focus only on answers given by concept-instance relationships. While this paper examines only this type of relation, the techniques we propose are easily extensible to other question types. Evaluations are conducted using a set of “Who is …” questions collected over the period of a few months from the commercial question-based search engine www.askJeeves.com. We extract approximately 2,000,000 concept-instance relations from newspaper text using syntactic patterns and machine-learned filters (e.g., “president Bill Clinton” and “Bill Clinton, president of the USA,”). 
We then compare answers based on these relations to answers given by TextMap (Hermjakob et al., 2002), a state of the art web-based question answering system. Finally, we discuss the results of this evaluation and the implications and limitations of our strategy. 3.1 2 3 3.2 Related Work A great deal of work has examined the problem of extracting semantic relations from unstructured text. Hearst (1992) examined extracting hyponym data by taking advantage of lexical patterns in text. Using patterns involving the phrase “such as”, she reports finding only 46 relations in 20M of New York Times text. Berland and Charniak (1999) extract “part-of” relations between lexical items in text, achieving only 55% accuracy with their method. Finally, Mann (2002) describes a method for extracting instances from text that takes advantage of part of speech patterns involving proper nouns. Mann reports extracting 200,000 concept-instance pairs from 1GB of Associated Press text, only 60% of which were found to be legitimate descriptions. These studies indicate two distinct problems associated with using patterns to extract semantic information from text. First, the patterns yield only a small amount of the information that may be present in a text (the Recall problem). Second, only a small fraction of the information that the patterns yield is reliable (the Precision problem). Relation Extraction Our approach follows closely from Mann (2002). However, we extend this work by directly addressing the two problems stated above. In order to address the Recall problem, we extend the list of patterns used for extraction to take advantage of appositions. Further, following Banko and Brill (2001), we increase our yield by increasing the amount of data used by an order of magnitude over previously published work. Finally, in order to address the Precision problem, we use machine learning techniques to filter the output of the part of speech patterns, thus purifying the extracted instances. Data Collection and Preprocessing Approximately 15GB of newspaper text was collected from: the TREC 9 corpus (~3.5GB), the TREC 2002 corpus (~3.5GB), Yahoo! News (.5GB), the AP newswire (~2GB), the Los Angeles Times (~.5GB), the New York Times (~2GB), Reuters (~.8GB), the Wall Street Journal (~1.2GB), and various online news websites (~.7GB). The text was cleaned of HTML (when necessary), word and sentence segmented, and part of speech tagged using Brill’s tagger (Brill, 1994). Extraction Patterns Part of speech patterns were generated to take advantage of two syntactic constructions that often indicate concept-instance relationships: common noun/proper noun constructions (CN/PN) and appositions (APOS). Mann (2002) notes that concept-instance relationships are often expressed by a syntactic pattern in which a proper noun follows immediately after a common noun. Such patterns (e.g. “president George Bush”) are very productive and occur 40 times more often than patterns employed by Hearst (1992). Table 1 shows the regular expression used to extract such patterns along with examples of extracted patterns. ${NNP}*${VBG}*${JJ}*${NN}+${NNP}+ trainer/NN Victor/NNP Valle/NNP ABC/NN spokesman/NN Tom/NNP Mackin/NNP official/NN Radio/NNP Vilnius/NNP German/NNP expert/NN Rriedhart/NNP Dumez/NN Investment/NNP Table 1. The regular expression used to extract CN/PN patterns (common noun followed by proper noun). Examples of extracted text are presented below. Text in bold indicates that the example is judged illegitimate. 
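As a rough illustration of how a CN/PN pattern of this kind can be run over Brill-style word/TAG output, here is a small Python sketch. The regular expression below is a simplified analogue of the one in Table 1: it only covers the NN+ NNP+ core, ignores the optional VBG and JJ material, and makes no attempt at the exact tokenisation used in the paper. The sample sentence is invented.

import re

# Simplified analogue of the Table 1 pattern: one or more common nouns
# followed by one or more proper nouns, over word/TAG tokens.
CN_PN = re.compile(r"((?:\S+/NN\s+)+)((?:\S+/NNP\s*)+)")

def extract_cn_pn(tagged_sentence):
    """Return (concept, instance) pairs from a POS-tagged sentence."""
    pairs = []
    for m in CN_PN.finditer(tagged_sentence):
        concept = " ".join(tok.split("/")[0] for tok in m.group(1).split())
        instance = " ".join(tok.split("/")[0] for tok in m.group(2).split())
        pairs.append((concept, instance))
    return pairs

# Invented example in the word/TAG format produced by Brill's tagger.
sent = "ABC/NN spokesman/NN Tom/NNP Mackin/NNP said/VBD hello/UH ./."
print(extract_cn_pn(sent))  # [('ABC spokesman', 'Tom Mackin')]

The apposition (APOS) pattern introduced next can be handled analogously with a second expression over the comma-delimited construction.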
${NNP}+\s*,\/,\s*${DT}*${JJ}*${NN}+(?:of\/IN)* \s*${NNP}*${NN}*${IN}*${DT}*${NNP}* ${NN}*${IN}*${NN}*${NNP}*,\/, Stevens/NNP ,/, president/NN of/IN the/DT firm/NN ,/, Elliott/NNP Hirst/NNP ,/, md/NN of/IN Oldham/NNP Signs/NNP ,/, George/NNP McPeck/NNP,/, an/DT engineer/NN from/IN Peru/NN,/, Marc/NNP Jonson/NNP,/, police/NN chief/NN of/IN Chamblee/NN ,/, David/NNP Werner/NNP ,/, a/DT real/JJ estate/NN investor/NN ,/, Table 2. The regular expression used to extract APOS patterns (syntactic appositions). Examples of extracted text are presented below. Text in bold indicates that the example is judged illegitimate. In addition to the CN/PN pattern of Mann (2002), we extracted syntactic appositions (APOS). This pattern detects phrases such as “Bill Gates, chairman of Microsoft,”. Table 2 shows the regular expression used to extract appositions and examples of extracted patterns. These regular expressions are not meant to be exhaustive of all possible varieties of patterns construed as CN/PN or APOS. They are “quick and dirty” implementations meant to extract a large proportion of the patterns in a text, acknowledging that some bad examples may leak through. 3.3 Filtering The concept-instance pairs extracted using the above patterns are very noisy. In samples of approximately 5000 pairs, 79% of the APOS extracted relations were legitimate, and only 45% of the CN/PN extracted relations were legitimate. This noise is primarily due to overgeneralization of the patterns (e.g., “Berlin Wall, the end of the Cold War,”) and to errors in the part of speech tagger (e.g., “Winnebago/CN Industries/PN”). Further, some extracted relations were considered either incomplete (e.g., “political commentator Mr. Bruce”) or too general (e.g., “meeting site Bourbon Street”) to be useful. For the purposes of learning a filter, these patterns were treated as illegitimate. In order to filter out these noisy conceptinstance pairs, 5000 outputs from each pattern were hand tagged as either legitimate or illegitimate, and used to train a binary classifier. The annotated examples were split into a training set (4000 examples), a validation set (500 examples); and a held out test set (500 examples). The WEKA machine learning package (Witten and Frank, 1999) was used to test the performance of various learning and meta-learning algorithms, including Naïve Bayes, Decision Tree, Decision List, Support Vector Machines, Boosting, and Bagging. Table 4 shows the list of features used to describe each concept-instance pair for training the CN/PN filter. Features are split between those that deal with the entire pattern, only the concept, only the instance, and the pattern’s overall orthography. The most powerful of these features examines an Ontology in order to exploit semantic information about the concept’s head. This semantic information is found by examining the superconcept relations of the concept head in the 110,000 node Omega Ontology (Hovy et al., in prep.). 
Feature Type Pattern Features Binary ${JJ}+${NN}+${NNP}+ Binary ${NNP}+${JJ}+${NN}+${NNP}+ Binary ${NNP}+${NN}+${NNP}+ Binary ${NNP}+${VBG}+${JJ}+${NN}+${NNP}+ Binary ${NNP}+${VBG}+${NN}+${NNP}+ Binary ${NN}+${NNP}+ Binary ${VBG}+${JJ}+${NN}+${NNP}+ Binary ${VBG}+${NN}+${NNP}+ Concept Features Binary Concept head ends in "er" Binary Concept head ends in "or" Binary Concept head ends in "ess" Binary Concept head ends in "ist" Binary Concept head ends in "man" Binary Concept head ends in "person" Binary Concept head ends in "ant" Binary Concept head ends in "ial" Binary Concept head ends in "ate" Binary Concept head ends in "ary" Binary Concept head ends in "iot" Binary Concept head ends in "ing" Binary Concept head is-a occupation Binary Concept head is-a person Binary Concept head is-a organization Binary Concept head is-a company Binary Concept includes digits Binary Concept has non-word Binary Concept head in general list Integer Frequency of concept head in CN/PN Integer Frequency of concept head in APOS Instance Features Integer Number of lexical items in instance Binary Instance contains honorific Binary Instance contains common name Binary Instance ends in honorific Binary Instance ends in common name Binary Instance ends in determiner Case Features Integer Instance: # of lexical items all Caps Integer Instance: # of lexical items start w/ Caps Binary Instance: All lexical items start w/ Caps Binary Instance: All lexical items all Caps Integer Concept: # of lexical items all Caps Integer Concept: # of lexical items start w/ Caps Binary Concept: All lexical items start w/ Caps Binary Concept: All lexical items all Caps Integer Total # of lexical items all Caps Integer Total # of lexical items start w/ Caps Table 4. Features used to train CN/PN pattern filter. Pattern features address aspects of the entire pattern, Concept features look only at the concept, Instance features examine elements of the instance, and Case features deal only with the orthography of the lexical items. Figure 1. Performance of machine learning algorithms on a validation set of 500 examples extracted using the CN/PN pattern. Algorithms are compared to a baseline in which only concepts that inherit from “Human” or “Occupation” in Omega pass through the filter. 4 4.1 Extraction Results Machine Learning Results Figure 1 shows the performance of different machine learning algorithms, trained on 4000 extracted CN/PN concept-instance pairs, and tested on a validation set of 500. Naïve Bayes, Support Vector Machine, Decision List and Decision Tree algorithms were all evaluated and the Decision Tree algorithm (which scored highest of all the algorithms) was further tested with Boosting and Bagging meta-learning techniques. The algorithms are compared to a baseline filter that accepts concept-instance pairs if and only if the concept head is a descendent of either the concept “Human” or the concept “Occupation” in Omega. It is clear from the figure that the Decision Tree algorithm plus Bagging gives the highest precision and overall F-score. All subsequent experiments are run using this technique.1 Since high precision is the most important criterion for the filter, we also examine the performance of the classifier as it is applied with a threshold. Thus, a probability cutoff is set such that only positive classifications that exceed this cutoff are actually classified as legitimate. Figure 2 shows a plot of the precision/recall tradeoff as this threshold is changed. 
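The paper trains its filters in WEKA; purely to illustrate the same recipe (binary and integer features, a bagged decision tree, and a probability cutoff on the positive class), here is a hedged scikit-learn sketch. The feature matrix and labels are random placeholders standing in for the Table 4 features and hand-tagged judgments, so none of the numbers correspond to the paper's results.

import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Placeholder features: rows = candidate concept-instance pairs, columns =
# binary/integer indicators of the kind listed in Table 4.
X = rng.integers(0, 2, size=(4000, 20))
y = rng.integers(0, 2, size=4000)          # 1 = legitimate, 0 = illegitimate

# Bagged decision trees, the configuration that scored best in Figure 1.
clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                        random_state=0).fit(X, y)

# High-precision filtering: keep a pair only if the positive-class
# probability exceeds a threshold (the paper settles on 0.9).
THRESHOLD = 0.9
probs = clf.predict_proba(X)[:, 1]         # classes_ is [0, 1] after fit
keep = probs >= THRESHOLD
print(f"kept {keep.sum()} of {len(keep)} candidate pairs")

Passing the tree positionally keeps the sketch compatible with both older (base_estimator) and newer (estimator) scikit-learn signatures.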
As the threshold is raised, precision increases while recall decreases. Based on this graph we choose to set the threshold at 0.9. Learning Algorithm Performance 0.5 0.6 0.7 0.8 0.9 1 Baseline Naïve Bayes SVM Decision List Decision Tree DT + Boosting DT + Bagging Recall Precision F-Score 4.2 1 Precision and Recall here refer only to the output of the extraction patterns. Thus, 100% recall indicates that all legitimate concept-instance pairs that were extracted using the patterns, were classified as legitimate by the filter. It does not indicate that all concept-instance information in the text was extracted. Precision is to be understood similarly. Applying the Decision Tree algorithm with Bagging, using the pre-determined threshold, to the held out test set of 500 examples extracted with the CN/PN pattern yields a precision of .95 and a recall of .718. Under these same conditions, but applied to a held out test set of 500 examples extracted with the APOS pattern, the filter has a precision of .95 and a recall of .92. Precision vs. Recall as a Function of Threshold 0.955 96 0.965 97 0.975 98 0.985 99 0.995 0.4 0.5 0.6 0.7 0.8 0.9 Recall Precision 0. 0. 0. 0. Figure 2. Plot of precision and recall on a 500 example validation set as a threshold cutoff for positive classification is changed. As the threshold is increased, precision increases while recall decreases. At the 0.9 threshold value, precision/recall on the validation set is 0.98/0.7, on a held out test set it is 0.95/0.72. Final Extraction Results The CN/PN and APOS filters were used to extract concept-instance pairs from unstructured text. The approximately 15GB of newspaper text (described above) was passed through the regular expression patterns and filtered through their appropriate learned classifier. The output of this process is approximately 2,000,000 concept-instance pairs. Approximately 930,000 of these are unique pairs, comprised of nearly 500,000 unique instances 2, paired with over 450,000 unique concepts3 (e.g., 2 Uniqueness of instances is judged here solely on the basis of surface orthography. Thus, “Bill Clinton” and “William Clinton” are considered two distinct instances. The effects of collapsing such cases will be considered in future work. 3 As with instances, concept uniqueness is judged solely on the basis of orthography. Thus, “Steven Spielberg” and “J. Edgar Hoover” are both considered instances of the single concept Threshold=0.90 Threshold=0.80 “sultry screen actress”), which can be categorized based on nearly 100,000 unique complex concept heads (e.g., “screen actress”) and about 14,000 unique simple concept heads (e.g., “actress”). Table 3 shows examples of this output. A sample of 100 concept-instance pairs was randomly selected from the 2,000,000 extracted pairs and hand annotated. 93% of these were judged legitimate concept-instance pairs. Concept head Concept Instance Producer Executive producer Av Westin Newspaper Military newspaper Red Star Expert Menopause expert Morris Notwlovitz Flutist Flutist James Galway Table 3. Example of concept-instance repository. Table shows extracted relations indexed by concept head, complete concept, and instance. 5 Question Answering Evaluation A large number of questions were collected over the period of a few months from www.askJeeves.com. 100 questions of the form “Who is x” were randomly selected from this set. 
The questions queried concept-instance relations through both instance centered queries (e.g., “Who is Jennifer Capriati?”) and concept centered queries (e.g., “Who is the mayor of Boston?”). Answers to these questions were then automatically generated both by look-up in the 2,000,000 extracted concept-instance pairs and by TextMap, a state of the art web-based Question Answering system which ranked among the top 10 systems in the TREC 11 Question Answering track (Hermjakob et al., 2002). Although both systems supply multiple possible answers for a question, evaluations were conducted on only one answer.4 For TextMap, this answer is just the output with highest confidence, i.e., the system’s first answer. For the extracted instances, the answer was that concept-instance pair that appeared most frequently in the list of extracted examples. If all pairs appear with equal frequency, a selection is made at random. Answers for both systems are then classified by hand into three categories based upon their “director.” See Fleischman and Hovy (2002) for techniques useful in disambiguating such instances. 4 Integration of multiple answers is an open research question and is not addressed in this work. information content. 5 Answers that unequivocally identify an instance’s celebrity (e.g., “Jennifer Capriati is a tennis star”) are marked correct. Answers that provide some, but insufficient, evidence to identify the instance’s celebrity (e.g., “Jennifer Capriati is a defending champion”) are marked partially correct. Answers that provide no information to identify the instance’s celebrity (e.g., “Jennifer Capriati is a daughter”) are marked incorrect.6 Table 5 shows example answers and judgments for both systems. State of the Art Extraction Answer Mark Answer Mark Who is Nadia Comaneci? U.S. citizen P Romanian Gymnast C Who is Lilian Thuram? News page I French defender P Who is the mayor of Wash., D.C.? Anthony Williams C no answer found I Table 5. Example answers and judgments of a state of the art system and look-up method using extracted concept-instance pairs on questions collected online. Ratings were judged as either correct (C), partially correct (P), or incorrect (I). 6 Question Answering Results Results of this comparison are presented in Figure 3. The simple look-up of extracted conceptinstance pairs generated 8% more partially correct answers and 25% more entirely correct answers than TextMap. Also, 21% of the questions that TextMap answered incorrectly, were answered partially correctly using the extracted pairs; and 36% of the questions that TextMap answered incorrectly, were answered entirely correctly using the extracted pairs. This suggests that over half of the questions that TextMap got wrong could have benefited from information in the concept-instance pairs. Finally, while the look-up of extracted pairs took approximately ten seconds for all 100 questions, TextMap took approximately 9 hours. 5 Evaluation of such “definition questions” is an active research challenge and the subject of a recent TREC pilot study. While the criteria presented here are not ideal, they are consistent, and sufficient for a system comparison. 6 While TextMap is guaranteed to return some answer for every question posed, there is no guarantee that an answer will be found amongst the extracted concept-instance pairs. When such a case arises, the look-up method’s answer is counted as incorrect. This difference represents a time speed up of three orders of magnitude. 
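A minimal sketch of the look-up side, assuming the repository is simply a list of (concept, instance) pairs like those in Table 3: index the pairs in both directions and answer a "Who is X" question with the most frequent counterpart, breaking ties at random, as described above. The tiny in-memory repository here is invented for illustration.

import random
from collections import Counter, defaultdict

# Invented miniature repository of (concept, instance) pairs.
PAIRS = [
    ("flutist", "James Galway"),
    ("flutist", "James Galway"),
    ("executive producer", "Av Westin"),
    ("military newspaper", "Red Star"),
]

by_instance = defaultdict(Counter)   # instance -> concepts seen with it
by_concept = defaultdict(Counter)    # concept  -> instances seen with it
for concept, instance in PAIRS:
    by_instance[instance][concept] += 1
    by_concept[concept][instance] += 1

def who_is(x):
    """Answer 'Who is x?' for either an instance or a concept query."""
    counts = by_instance.get(x) or by_concept.get(x)
    if not counts:
        return None                           # no answer found
    best = max(counts.values())
    # Most frequent counterpart; equal frequencies are resolved at random.
    return random.choice([k for k, v in counts.items() if v == best])

print(who_is("James Galway"))   # flutist
print(who_is("flutist"))        # James Galway

Because the index is an in-memory dictionary, answering a batch of questions this way is essentially instantaneous, which is where the large speed advantage over a full per-question QA pipeline comes from.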
There are a number of reasons why the state of the art system performed poorly compared to the simple extraction method. First, as mentioned above, the lack of newspaper text on the web means that TextMap did not have access to the same information-rich resources that the extraction method exploited. Further, the simplicity of the extraction method makes it more resilient to the noise (such as parser error) that is introduced by the many modules employed by TextMap. And finally, because it is designed to answer any type of question, not just “Who is…“ questions, TextMap is not as precise as the extraction technique. This is due to both its lack of tailor made patterns for specific question types, as well as, its inability to filter those patterns with high precision. 7 Figure 3. Evaluation results for the state of the art system and look-up method using extracted conceptinstance pairs on 100 “Who is …” questions collected online. Results are grouped by category: partially correct, entirely correct, and entirely incorrect. Discussion and Future Work The information repository approach to Question Answering offers possibilities of increased speed and accuracy for current systems. By collecting information offline, on text not readily available to search engines, and storing it to be accessible quickly and easily, Question Answering systems will be able to operate more efficiently and more effectively. In order to achieve real-time, accurate Question Answering, repositories of data much larger than that described here must be generated. We imagine huge data warehouses where each repository contains relations, such as birthplace-of, location-of, creator-of, etc. These repositories would be automatically filled by a system that continuously watches various online news sources, scouring them for useful information. Such a system would have a large library of extraction patterns for many different types of relations. These patterns could be manually generated, such as the ones described here, or learned from text, as described in Ravichandran and Hovy (2002). Each pattern would have a machine-learned filter in order to insure high precision output relations. These relations would then be stored in repositories that could be quickly and easily searched to answer user queries. 7 In this way, we envision a system similar to (Lin et al., 2002). However, instead of relying on costly structured databases and pain stakingly generated wrappers, repositories are automatically filled with information from many different patterns. Access to these repositories does not require wrapper generation, because all information is stored in easily accessible natural language text. The key here is the use of learned filters which insure that the information in the repository is clean and reliable. Performance on a Question Answering Task 10 15 20 25 30 35 40 45 50 Partial Correct Incorrect % Correct State of the Art System Extraction System Such a system is not meant to be complete by itself, however. Many aspects of Question Answering remain to be addressed. For example, question classification is necessary in order to determine which repositories (i.e., which relations) are associated with which questions. Further, many question types require post processing. Even for “Who is …” questions multiple answers need to be integrated before final output is presented. 
An interesting corollary to using this offline strategy is that each extracted instance has with it a frequency distribution of associated concepts (e.g., for “Bill Clinton”: 105 “US president”; 52 “candidate”; 4 “nominee”). This distribution can be used in conjunction with time/stamp information to formulate mini biographies as answers to “Who is …” questions. We believe that generating and maintaining information repositories will advance many aspects of Natural Language Processing. Their uses in 7 An important addition to this system would be the inclusion of time/date stamp and data source information. For, while “George Bush” is “president” today, he will not be forever. data driven Question Answering are clear. In addition, concept-instance pairs could be useful in disambiguating references in text, which is a challenge in Machine Translation and Text Summarization. In order to facilitate further research, we have made the extracted pairs described here publicly available at www.isi.edu/~fleisch/instances.txt.gz. In order to maximize the utility of these pairs, we are integrating them into an Ontology, where they can be more efficiently stored, cross-correlated, and shared. Acknowledgments The authors would like to thank Miruna Ticrea for her valuable help with training the classifier. We would also like to thank Andrew Philpot for his work on integrating instances into the Omega Ontology, and Daniel Marcu whose comments and ideas were invaluable. References Michelle Banko, Eric Brill. 2001. Scaling to Very Very Large Corpora for Natural Language Disambiguation. Proceedings of the Association for Computational Linguistics, Toulouse, France. Matthew Berland and Eugene Charniak. 1999. Finding Parts in Very Large Corpora. Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. College Park, Maryland. Eric Brill. 1994. Some advances in rule based part of speech tagging. Proc. of AAAI. Seattle, Washington. Eric Brill, Jimmy Lin, Michele Banko, Susan Dumais, and Andrew Ng. 2001. Data-Intensive Question Answering. Proceedings of the 2001 Text REtrieval Conference (TREC 2001), Gaithersburg, MD. Michael Fleischman and Eduard Hovy. 2002. Fine Grained Classification of Named Entities. 19th International Conference on Computational Linguistics (COLING). Taipei, Taiwan. Ulf Hermjakob, Abdessamad Echihabi, and Daniel Marcu. 2002. Natural Language Based Reformulation Resource and Web Exploitation for Question Answering. In Proceedings of the TREC2002 Conference, NIST. Gaithersburg, MD. Marti Hearst. 1992. Automatic Acquisition of Hyponyms from Large Text Corpora. Proceedings of the Fourteenth International Conference on Computational Linguistics, Nantes, France. Jimmy Lin, Aaron Fernandes, Boris Katz, Gregory Marton, and Stefanie Tellex. 2002. Extracting Answers from the Web Using Data Annotation and Data Mining Techniques. Proceedings of the 2002 Text REtrieval Conference (TREC 2002) Gaithersburg, MD. Gideon S. Mann. 2002. Fine-Grained Proper Noun Ontologies for Question Answering. SemaNet'02: Building and Using Semantic Networks, Taipei, Taiwan. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a Question Answering system. Proceedings of the 40th ACL conference. Philadelphia, PA. I. Witten and E. Frank. 1999. Data Mining: Practical Machine Learning Tools and Techniques with JAVA implementations. Morgan Kaufmann, San Francisco, CA.
2003
1
Reliable Measures for Aligning Japanese-English News Articles and Sentences Masao Utiyama and Hitoshi Isahara Communications Research Laboratory 3-5 Hikari-dai, Seika-cho, Souraku-gun, Kyoto 619-0289 Japan [email protected] and [email protected] Abstract We have aligned Japanese and English news articles and sentences to make a large parallel corpus. We first used a method based on cross-language information retrieval (CLIR) to align the Japanese and English articles and then used a method based on dynamic programming (DP) matching to align the Japanese and English sentences in these articles. However, the results included many incorrect alignments. To remove these, we propose two measures (scores) that evaluate the validity of alignments. The measure for article alignment uses similarities in sentences aligned by DP matching and that for sentence alignment uses similarities in articles aligned by CLIR. They enhance each other to improve the accuracy of alignment. Using these measures, we have successfully constructed a largescale article and sentence alignment corpus available to the public. 1 Introduction A large-scale Japanese-English parallel corpus is an invaluable resource in the study of natural language processing (NLP) such as machine translation and cross-language information retrieval (CLIR). It is also valuable for language education. However, no such corpus has been available to the public. We recently have obtained a noisy parallel corpus of Japanese and English newspapers consisting of issues published over more than a decade and have tried to align their articles and sentences. We first aligned the articles using a method based on CLIR (Collier et al., 1998; Matsumoto and Tanaka, 2002) and then aligned the sentences in these articles by using a method based on dynamic programming (DP) matching (Gale and Church, 1993; Utsuro et al., 1994). However, the results included many incorrect alignments due to noise in the corpus. To remove these, we propose two measures (scores) that evaluate the validity of article and sentence alignments. Using these, we can selectively extract valid alignments. In this paper, we first discuss the basic statistics on the Japanese and English newspapers. We next explain methods and measures used for alignment. We then evaluate the effectiveness of the proposed measures. Finally, we show that our aligned corpus has attracted people both inside and outside the NLP community. 2 Newspapers Aligned The Japanese and English newspapers used as source data were the Yomiuri Shimbun and the Daily Yomiuri. They cover the period from September 1989 to December 2001. The number of Japanese articles per year ranges from 100,000 to 350,000, while English articles ranges from 4,000 to 13,000. The total number of Japanese articles is about 2,000,000 and the total number of English articles is about 110,000. The number of English articles represents less than 6 percent that of Japanese articles. Therefore, we decided to search for the Japanese articles corresponding to each of the English articles. The English articles as of mid-July 1996 have tags indicating whether they are translated from Japanese articles or not, though they don’t have explicit links to the original Japanese articles. Consequently, we only used the translated English articles for the article alignment. The number of English articles used was 35,318, which is 68 percent of all of the articles. On the other hand, the English articles before mid-July 1996 do not have such tags. 
So we used all the articles for the period. The number of them was 59,086. We call the set of articles before mid-July 1996 “1989-1996” and call the set of articles after mid-July 1996 “1996-2001.” If an English article is a translation of a Japanese article, then the publication date of the Japanese article will be near that of the English article. So we searched for the original Japanese articles within 2 days before and after the publication of each English article, i.e., the corresponding article of an English article was searched for from the Japanese articles of 5 days’ issues. The average number of English articles per day was 24 and that of Japanese articles per 5 days was 1,532 for 1989-1996. For 1996-2001, the average number of English articles was 18 and that of Japanese articles was 2,885. As there are many candidates for alignment with English articles, we need a reliable measure to estimate the validity of article alignments to search for appropriate Japanese articles from these ambiguous matches. Correct article alignment does not guarantee the existence of one-to-one correspondence between English and Japanese sentences in article alignment because literal translations are exceptional. Original Japanese articles may be restructured to conform to the style of English newspapers, additional descriptions may be added to fill cultural gaps, and detailed descriptions may be omitted. A typical example of a restructured English and Japanese article pair is: Part of an English article: ⟨e1⟩Two bullet holes were found at the home of Kengo Tanaka, 65, president of Bungei Shunju, in Akabane, Tokyo, by his wife Kimiko, 64, at around 9 a.m. Monday. ⟨/e1⟩⟨e2⟩Police suspect right-wing activists, who have mounted criticism against articles about the Imperial family appearing in the Shukan Bunshun, the publisher’s weekly magazine, were responsible for the shooting. ⟨/e2⟩⟨e3⟩Police received an anonymous phone call shortly after 1 a.m. Monday by a caller who reported hearing gunfire near Tanaka’s residence. ⟨/e3⟩⟨e4⟩Police found nothing after investigating the report, but later found a bullet in the Tanakas’ bedroom, where they were sleeping at the time of the shooting. ⟨/e4⟩ Part of a literal translation of a Japanese article: ⟨j1⟩At about 8:55 a.m. on the 29th, Kimiko Tanaka, 64, the wife of Bungei Shunju’s president Kengo Tanaka, 65, found bullet holes on the eastern wall of their two-story house at 4 Akabane Nishi, Kitaku, Tokyo.⟨/j1⟩⟨j2⟩As a result of an investigation, the officers of the Akabane police station found two holes on the exterior wall of the bedroom and a bullet in the bedroom.⟨/j2⟩⟨j3⟩After receiving an anonymous phone call shortly after 1 a.m. saying that two or three gunshots were heard near Tanaka’s residence, police officers hurried to the scene for investigation, but no bullet holes were found.⟨/j3⟩⟨j4⟩When gunshots were heard, Mr. and Mrs. Tanaka were sleeping in the bedroom.⟨/j4⟩⟨j5⟩Since Shukan Bunshun, a weekly magazine published by Bungei Shunju, recently ran an article criticizing the Imperial family, Akabane police suspect rightwing activists who have mounted criticism against the recent article to be responsible for the shooting and have been investigating the incident.⟨/j5⟩ where there is a three-to-four correspondence between {e1, e3, e4} and {j1, j2, j3, j4}, together with a one-to-one correspondence between e2 and j5. 
Such sentence matches are of particular interest to researchers studying human translations and/or stylistic differences between English and Japanese newspapers. However, their usefulness as resources for NLP such as machine translation is limited for the time being. It is therefore important to extract sentence alignments that are as literal as possible. To achieve this, a reliable measure of the validity of sentence alignments is necessary. 3 Basic Alignment Methods We adopt a standard strategy to align articles and sentences. First, we use a method based on CLIR to align Japanese and English articles (Collier et al., 1998; Matsumoto and Tanaka, 2002) and then a method based on DP matching to align Japanese and English sentences (Gale and Church, 1993; Utsuro et al., 1994) in these articles. As each of these methods uses existing NLP techniques, we describe them briefly focusing on basic similarity measures, which we will compare with our proposed measures in Section 5. 3.1 Article alignment Translation of words We first convert each of the Japanese articles into a set of English words. We use ChaSen1 to segment each of the Japanese articles into words. We next extract content words, which are then translated into English words by looking them up in the EDR Japanese-English bilingual dictionary,2 EDICT, and ENAMDICT,3 which have about 230,000, 100,000, 1http://chasen.aist-nara.ac.jp/ 2http://www.iijnet.or.jp/edr/ 3http://www.csse.monash.edu.au/˜jwb/edict.html and 180,000 entries, respectively. We select two English words for each of the Japanese words using simple heuristic rules based on the frequencies of English words. Article retrieval We use each of the English articles as a query and search for the Japanese article that is most similar to the query article. The similarity between an English article and a (word-based English translation of) Japanese article is measured by BM25 (Robertson and Walker, 1994). BM25 and its variants have been proven to be quite efficient in information retrieval. Readers are referred to papers by the Text REtrieval Conference (TREC)4, for example. The definition of BM25 is: BM25(J, E) = X T∈E w(1) (k1 + 1)tf K + tf (k3 + 1)qtf k3 + qtf where J is the set of translated English words of a Japanese article and E is the set of words of an English article. The words are stemmed and stop words are removed. T is a word contained in E. w(1) is the weight of T, w(1) = log (N−n+0.5) (n+0.5) . N is the number of Japanese articles to be searched. n is the number of articles containing T. K is k1((1 −b) + b dl avdl ). k1, b and k3 are parameters set to 1, 1, and 1000, respectively. dl is the document length of J and avdl is the average document length in words. tf is the frequency of occurrence of T in J. qtf is the frequency of T in E. To summarize, we first translate each of the Japanese articles into a set of English words. We then use each of the English articles as a query and search for the most similar Japanese article in terms of BM25 and assume that it corresponds to the English article. 3.2 Sentence alignment The sentences5 in the aligned Japanese and English articles are aligned by a method based on DP matching (Gale and Church, 1993; Utsuro et al., 1994). 4http://trec.nist.gov/ 5We split the Japanese articles into sentences by using simple heuristics and split the English articles into sentences by using MXTERMINATOR (Reynar and Ratnaparkhi, 1997). We allow 1-to-n or n-to-1 (1 ≤n ≤6) alignments when aligning the sentences. 
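For concreteness, here is a small Python rendering of the BM25 scoring function defined in Section 3.1 above, with k1 = 1, b = 1, and k3 = 1000 as in the paper. The translated Japanese article and the English query article are represented simply as lists of already stemmed, stop-word-filtered words, and the corpus statistics (N, document frequencies, average document length) are passed in as plain arguments; this is a sketch of the formula, not of the full retrieval system.

import math
from collections import Counter

def bm25(j_words, e_words, doc_freq, n_docs, avdl, k1=1.0, b=1.0, k3=1000.0):
    """Score a translated Japanese article J against an English query article E.

    j_words, e_words: lists of (stemmed, stopped) words for J and E.
    doc_freq: word -> number of Japanese articles containing it.
    n_docs: total number of Japanese articles searched (N).
    avdl: average document length in words.
    """
    tf_j = Counter(j_words)
    qtf_e = Counter(e_words)
    dl = len(j_words)
    K = k1 * ((1.0 - b) + b * dl / avdl)
    score = 0.0
    for t, qtf in qtf_e.items():
        tf = tf_j.get(t, 0)
        if tf == 0:
            continue                                  # term contributes nothing
        n = doc_freq.get(t, 0)
        w = math.log((n_docs - n + 0.5) / (n + 0.5))  # w^(1)
        score += w * ((k1 + 1) * tf / (K + tf)) * ((k3 + 1) * qtf / (k3 + qtf))
    return score

With b = 1 the length normalisation reduces to K = k1 * dl / avdl, and each English article is matched to the candidate Japanese article with the highest such score.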
Readers are referred to Utsuro et al. (1994) for a concise description of the algorithm. Here, we only discuss the similarities between Japanese and English sentences for alignment. Let Ji and Ei be the words of Japanese and English sentences for i-th alignment. The similarity6 between Ji and Ei is: SIM(Ji, Ei) = co(Ji × Ei) + 1 l(Ji) + l(Ei) −2co(Ji × Ei) + 2 where l(X) = P x∈X f(x) f(x) is the frequency of x in the sentences. co(Ji × Ei) = P (j,e)∈Ji×Ei min(f(j), f(e)) Ji × Ei = {(j, e)|j ∈Ji, e ∈Ei} and Ji × Ei is a one-to-one correspondence between Japanese and English words. Ji and Ei are obtained as follows. We use ChaSen to morphologically analyze the Japanese sentences and extract content words, which consists of Ji. We use Brill’s tagger (Brill, 1992) to POS-tag the English sentences, extract content words, and use WordNet’s library7 to obtain lemmas of the words, which consists of Ei. We use simple heuristics to obtain Ji × Ei, i.e., a one-to-one correspondence between the words in Ji and Ei, by looking up JapaneseEnglish and English-Japanese dictionaries made up by combining entries in the EDR Japanese-English bilingual dictionary and the EDR English-Japanese bilingual dictionary. Each of the constructed dictionaries has over 300,000 entries. We evaluated the implemented program against a corpus consisting of manually aligned Japanese and English sentences. The source texts were Japanese white papers (JEIDA, 2000). The style of translation was generally literal reflecting the nature of government documents. We used 12 pairs of texts for evaluation. The average number of Japanese sentences per text was 413 and that of English sentences was 495. The recall, R, and precision, P, of the program against this corpus were R = 0.982 and P = 0.986, respectively, where 6SIM(Ji, Ei) is different from the similarity function used in Utsuro et al. (1994). We use SIM because it performed well in a preliminary experiment. 7http://www.cogsci.princeton.edu/˜wn/ R = number of correctly aligned sentence pairs total number of sentence pairs aligned in corpus P = number of correctly aligned sentence pairs total number of sentence pairs proposed by program The number of pairs in a one-to-n alignment is n. For example, if sentences {J1} and {E1, E2, E3} are aligned, then three pairs ⟨J1, E1⟩, ⟨J1, E2⟩, and ⟨J1, E3⟩are obtained. This recall and precision are quite good considering the relatively large differences in the language structures between Japanese and English. 4 Reliable Measures We use BM25 and SIM to evaluate the similarity in articles and sentences, respectively. These measures, however, cannot be used to reliably discriminate between correct and incorrect alignments as will be discussed in Section 5. This motivated us to devise more reliable measures based on basic similarities. BM25 measures the similarity between two bags of words. It is not sensitive to differences in the order of sentences between two articles. To remedy this, we define a measure that uses the similarities in sentence alignments in the article alignment. We define AVSIM(J, E) as the similarity between Japanese article, J, and English article, E: AVSIM(J, E) = Pm k=1 SIM(Jk, Ek) m where (J1, E1), (J2, E2), . . . (Jm, Em) are the sentence alignments obtained by the method described in Section 3.2. The sentence alignments in a correctly aligned article alignment should have more similarity than the ones in an incorrectly aligned article alignment. Consequently, article alignments with high AVSIM are likely to be correct. 
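The two similarity measures just defined can be written down directly. The sketch below assumes the one-to-one word correspondence Ji x Ei has already been produced by the dictionary heuristics described above and is passed in as a list of (j, e) word pairs; SIM is then (co + 1) / (l(Ji) + l(Ei) - 2co + 2), and AVSIM is its average over the sentence alignments of an article pair.

from collections import Counter

def sim(j_words, e_words, pairs):
    """SIM(Ji, Ei): j_words/e_words are the content words of the i-th Japanese
    and English sentences; pairs is the one-to-one correspondence Ji x Ei."""
    f_j, f_e = Counter(j_words), Counter(e_words)
    co = sum(min(f_j[j], f_e[e]) for j, e in pairs)
    l_j, l_e = sum(f_j.values()), sum(f_e.values())
    return (co + 1.0) / (l_j + l_e - 2.0 * co + 2.0)

def avsim(sentence_alignments):
    """AVSIM(J, E): average SIM over the m sentence alignments of an article
    pair; sentence_alignments is a list of (j_words, e_words, pairs) triples."""
    sims = [sim(j, e, p) for j, e, p in sentence_alignments]
    return sum(sims) / len(sims)

Averaging the sentence-level similarities is what makes AVSIM sensitive to how well the aligned sentences actually correspond, rather than to a bag-of-words overlap between whole articles.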
Our sentence alignment program aligns sentences accurately if the English sentences are literal translations of the Japanese as discussed in Section 3.2. However, the relation between English news sentences and Japanese news sentences are not literal translations. Thus, the results for sentence alignments include many incorrect alignments. To discriminate between correct and incorrect alignments, we take advantage of the similarity in article alignments containing sentence alignments so that the sentence alignments in a similar article alignment will have a high value. We define SntScore(Ji, Ei) = AVSIM(J, E) × SIM(Ji, Ei) SntScore(Ji, Ei) is the similarity in the i-th alignment, (Ji, Ei), in article alignment J and E. When we compare the validity of two sentence alignments in the same article alignment, the rank order of sentence alignments obtained by applying SntScore is the same as that of SIM because they share a common AVSIM. However, when we compare the validity of two sentence alignments in different article alignments, SntScore prefers the sentence alignment with the more similar (high AVSIM) article alignment even if their SIM has the same value, while SIM cannot discriminate between the validity of two sentence alignments if their SIM has the same value. Therefore, SntScore is more appropriate than SIM if we want to compare sentence alignments in different article alignments, because, in general, a sentence alignment in a reliable article alignment is more reliable than one in an unreliable article alignment. The next section compares the effectiveness of AVSIM to that of BM25, and that of SntScore to that of SIM. 5 Evaluation of Alignment Here, we discuss the results of evaluating article and sentence alignments. 5.1 Evaluation of article alignment We first estimate the precision of article alignments by using randomly sampled alignments. Next, we sort them in descending order of BM25 and AVSIM to see whether these measures can be used to provide correct alignments with a high ranking. Finally, we show that the absolute values of AVSIM correspond well with human judgment. Randomly sampled article alignments Each English article was aligned with a Japanese article with the highest BM25. We sampled 100 article alignments from each of 1996-2001 and 19891996. We then classified the samples into four categories: “A”, “B”, “C”, and “D”. “A” means that there was more than 50% to 60% overlap in the content of articles. “B” means more than 20% to 30% and less than 50% to 60% overlap. “D” means that there was no overlap at all. “C” means that alignment was not included in “A”,“B” or “D”. We regard alignments that were judged to be A or B to be suitable for NLP because of their relatively large overlap. 1996-2001 1989-1996 type lower ratio upper lower ratio upper A 0.49 0.59 0.69 0.20 0.29 0.38 B 0.06 0.12 0.18 0.08 0.15 0.22 C 0.03 0.08 0.13 0.03 0.08 0.13 D 0.13 0.21 0.29 0.38 0.48 0.58 Table 1: Ratio of article alignments The results of evaluations are in Table 1.8 Here, “ratio” means the ratio of the number of articles judged to correspond to the respective category against the total number of articles. For example, 0.59 in line “A” of 1996-2001 means that 59 out of 100 samples were evaluated as A. “Lower” and “upper” mean the lower and upper bounds of the 95% confidence interval for ratio. The table shows that the precision (= sum of the ratios of A and B) for 1996-2001 was higher than that for 1989-1996. They were 0.71 for 1996-2001 and 0.44 for 1989-1996. 
This is because the English articles from 1996-2001 were translations of Japanese articles, while those from 1989-1996 were not necessarily translations as explained in Section 2. Although the precision for 1996-2001 was higher than that for 1989-1996, it is still too low to use them as NLP resources. In other words, the article alignments included many incorrect alignments. We want to extract alignments which will be evaluated as A or B from these noisy alignments. To do this, we have to sort all alignments according to some measures that determine their validity and extract highly ranked ones. To achieve this, AVSIM is more reliable than BM25 as is explained below. 8The evaluations were done by the authors. We double checked the sample articles from 1996-2001. Our second checks are presented in Table 1. The ratio of categories in the first check were A=0.62, B=0.09, C=0.09, and D=0.20. Comparing these figures with those in Table 1, we concluded that first and second evaluations were consistent. Sorted alignments: AVSIM vs. BM25 We sorted the same alignments in Table 1 in decreasing order of AVSIM and BM25. Alignments judged to be A or B were regarded as correct. The number, N, of correct alignments and precision, P, up to each rank are shown in Table 2. 1996-2001 1989-1996 AVSIM BM25 AVSIM BM25 rank N P N P N P N P 5 5 1.00 5 1.00 5 1.00 2 0.40 10 10 1.00 8 0.80 10 1.00 4 0.40 20 20 1.00 16 0.80 19 0.95 9 0.45 30 30 1.00 25 0.83 28 0.93 16 0.53 40 40 1.00 34 0.85 34 0.85 24 0.60 50 50 1.00 39 0.78 37 0.74 28 0.56 60 60 1.00 47 0.78 42 0.70 30 0.50 70 66 0.94 55 0.79 42 0.60 35 0.50 80 70 0.88 62 0.78 43 0.54 38 0.47 90 71 0.79 68 0.76 43 0.48 40 0.44 100 71 0.71 71 0.71 44 0.44 44 0.44 Table 2: Rank vs. precision From the table, we can conclude that AVSIM ranks correct alignments higher than BM25. Its greater accuracy indicates that it is important to take similarities in sentence alignments into account when estimating the validity of article alignments. AVSIM and human judgment Table 2 shows that AVSIM is reliable in ranking correct and incorrect alignments. This section reveals that not only rank order but also absolute values of AVSIM are reliable for discriminating between correct and incorrect alignments. That is, they correspond well with human evaluations. This means that a threshold value is set for each of 19962001 and 1989-1996 so that valid alignments can be extracted by selecting alignments whose AVSIM is larger than the threshold. We used the same data in Table 1 to calculate statistics on AVSIM. They are shown in Tables 3 and 4 for 1996-2001 and 1989-1996, respectively. type N lower av. upper th. sig. A 59 0.176 0.193 0.209 0.168 ** B 12 0.122 0.151 0.179 0.111 ** C 8 0.077 0.094 0.110 0.085 * D 21 0.065 0.075 0.086 Table 3: Statistics on AVSIM (1996-2001) In these tables, “N” means the number of alignments against the corresponding human judgment. type N lower av. upper th. sig. A 29 0.153 0.175 0.197 0.157 * B 15 0.113 0.141 0.169 0.131 C 8 0.092 0.123 0.154 0.097 ** D 48 0.076 0.082 0.088 Table 4: Statistics on AVSIM (1989-1996) “Av.” means the average value of AVSIM. “Lower” and “upper” mean the lower and upper bounds of the 95% confidence interval for the average. “Th.” means the threshold for AVSIM that can be used to discriminate between the alignments estimated to be the corresponding evaluations. For example, in Table 3, evaluations A and B are separated by 0.168. These thresholds were identified through linear discriminant analysis. 
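Once the per-period cutoffs from Tables 3 and 4 are fixed, mapping any article alignment's AVSIM value to an estimated grade is a simple lookup, which is how the counts in Table 5 below are produced. A minimal sketch follows; the exact boundary handling (whether a value equal to a threshold counts as the higher grade) is an assumption, since the paper does not state it.

# AVSIM thresholds separating the estimated grades (from Tables 3 and 4).
THRESHOLDS = {
    "1996-2001": [("A", 0.168), ("B", 0.111), ("C", 0.085)],
    "1989-1996": [("A", 0.157), ("B", 0.131), ("C", 0.097)],
}

def estimated_grade(avsim_value, period):
    """Map an article alignment's AVSIM to an estimated grade A/B/C/D."""
    for grade, cutoff in THRESHOLDS[period]:
        if avsim_value >= cutoff:
            return grade
    return "D"

print(estimated_grade(0.20, "1996-2001"))   # A
print(estimated_grade(0.10, "1989-1996"))   # C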
The asterisks “**” and “*” in the “sig.” column mean that the difference in averages for AVSIM is statistically significant at 1% and 5% based on a one-sided Welch test. In these tables, except for the differences in the averages for B and C in Table 4, all differences in averages are statistically significant. This indicates that AVSIM can discriminate between differences in judgment. In other words, the AVSIM values correspond well with human judgment. We then tried to determine why B and C in Table 4 were not separated by inspecting the article alignments and found that alignments evaluated as C in Table 4 had relatively large overlaps compared with alignments judged as C in Table 3. It was more difficult to distinguish B or C in Table 4 than in Table 3. We next classified all article alignments in 19962001 and 1989-1996 based on the thresholds in Tables 3 and 4. The numbers of alignments are in Table 5. It shows that the number of alignments estimated to be A or B was 46738 (= 31495 + 15243). We regard about 47,000 article alignments to be sufficiently large to be useful as a resource for NLP such as bilingual lexicon acquisition and for language education. 1996-2001 1989-1996 total A 15491 16004 31495 B 9244 5999 15243 C 4944 10258 15202 D 5639 26825 32464 total 35318 59086 94404 Table 5: Number of articles per evaluation In summary, AVSIM is more reliable than BM25 and corresponds well with human judgment. By using thresholds, we can extract about 47,000 article alignments which are estimated to be A or B evaluations. 5.2 Evaluation of sentence alignment Sentence alignments in article alignments have many errors even if they have been obtained from correct article alignments due to free translation as discussed in Section 2. To extract only correct alignments, we sorted whole sentence alignments in whole article alignments in decreasing order of SntScore and selected only the higher ranked sentence alignments so that the selected alignments would be sufficiently precise to be useful as NLP resources. The number of whole sentence alignments was about 1,300,000. The most important category for sentence alignment is one-to-one. Thus, we want to discard as many errors in this category as possible. In the first step, we classified whole oneto-one alignments into two classes: the first consisted of alignments whose Japanese and English sentences ended with periods, question marks, exclamation marks, or other readily identifiable characteristics. We call this class “one-to-one”. The second class consisted of the one-to-one alignments not belonging to the first class. The alignments in this class, together with the whole one-to-n alignments, are called “one-to-many”. One-to-one had about 640,000 alignments and one-to-many had about 660,000 alignments. We first evaluated the precision of one-to-one alignments by sorting them in decreasing order of SntScore. We randomly extracted 100 samples from each of 10 blocks ranked at the top-300,000 alignments. (A block had 30,000 alignments.) We classified these 1000 samples into two classes: The first was “match” (A), the second was “not match” (D). We judged a sample as “A” if the Japanese and English sentences of the sample shared a common event (approximately a clause). “D” consisted of the samples not belonging to “A”. The results of evaluation are in Table 6.9 9Evaluations were done by the authors. We double checked all samples. In the 100 samples, there were a maximum of two or three where the first and second evaluations were different. 
range # of A’s # of D’s 1 100 0 30001 99 1 60001 99 1 90001 97 3 120001 96 4 150001 92 8 180001 82 18 210001 74 26 240001 47 53 270001 30 70 Table 6: One-to-one: Rank vs. judgment This table shows that the number of A’s decreases rapidly as the rank increases. This means that SntScore ranks appropriate one-to-one alignments highly. The table indicates that the top-150,000 oneto-one alignments are sufficiently reliable.10 The ratio of A’s in these alignments was 0.982. We then evaluated precision for one-to-many alignments by sorting them in decreasing order of SntScore. We classified one-to-many into three categories: “1-90000”, “90001-180000”, and “180001270000”, each of which was covered by the range of SntScore of one-to-one that was presented in Table 6. We randomly sampled 100 one-to-many alignments from these categories and judged them to be A or D (see Table 7). Table 7 indicates that the 38,090 alignments in the range from “1-90000” are sufficiently reliable. range # of one-to-many # of A’s # of D’s 1 38090 98 2 90001 59228 87 13 180001 71711 61 39 Table 7: One-to-many: Rank vs. judgment Tables 6 and 7 show that we can extract valid alignments by sorting alignments according to SntScore and by selecting only higher ranked sentence alignments. Overall, evaluations between the first and second check were consistent. 10The notion of “appropriate (correct) sentence alignment” depends on applications. Machine translation, for example, may require more precise (literal) alignment. To get literal alignments beyond a sharing of a common event, we will select a set of alignments from the top of the sorted alignments that satisfies the required literalness. This is because, in general, higher ranked alignments are more literal translations, because those alignments tend to have many one-to-one corresponding words and to be contained in highly similar article alignments. Comparison with SIM We compared SntScore with SIM and found that SntScore is more reliable than SIM in discriminating between correct and incorrect alignments. We first sorted the one-to-one alignments in decreasing order of SIM and randomly sampled 100 alignments from the top-150,000 alignments. We classified the samples into A or D. The number of A’s was 93, and that of D’s was 7. The precision was 0.93. However, in Table 6, the number of A’s was 491 and D’s was 9, for the 500 samples extracted from the top-150,000 alignments. The precision was 0.982. Thus, the precision of SntScore was higher than that of SIM and this difference is statistically significant at 1% based on a one-sided proportional test. We then sorted the one-to-many alignments by SIM and sampled 100 alignments from the top 38,090 and judged them. There were 89 A’s and 11 D’s. The precision was 0.89. However, in Table 7, there were 98 A’s and 2 D’s for samples from the top 38,090 alignments. The precision was 0.98. This difference is also significant at 1% based on a one-sided proportional test. Thus, SntScore is more reliable than SIM. This high precision in SntScore indicates that it is important to take the similarities of article alignments into account when estimating the validity of sentence alignments. 6 Related Work Much work has been done on article alignment. Collier et al. (1998) compared the use of machine translation (MT) with the use of bilingual dictionary term lookup (DTL) for news article alignment in Japanese and English. They revealed that DTL is superior to MT at high-recall levels. 
That is, if we want to obtain many article alignments, then DTL is more appropriate than MT. In a preliminary experiment, we also compared MT and DTL for the data in Table 1 and found that DTL was superior to MT.11 These 11We translated the English articles into Japanese with an MT system. We then used the translated English articles as queries and searched the database consisting of Japanese articles. The direction of translation was opposite to the one described in Section 3.1. Therefore this comparison is not as objective as it could be. However, it gives us some idea into a comparison of MT and DTL. experimental results indicate that DTL is more appropriate than MT in article alignment. Matsumoto and Tanaka (2002) attempted to align Japanese and English news articles in the Nikkei Industrial Daily. Their method achieved a 97% precision in aligning articles, which is quite high. They also applied their method to NHK broadcast news. However, they obtained a lower precision of 69.8% for the NHK corpus. Thus, the precision of their method depends on the corpora. Therefore, it is not clear whether their method would have achieved a high accuracy in the Yomiuri corpus treated in this paper. There are two significant differences between our work and previous works. (1) We have proposed AVSIM, which uses similarities in sentences aligned by DP matching, as a reliable measure for article alignment. Previous works, on the other hand, have used measures based on bag-of-words. (2) A more important difference is that we have actually obtained not only article alignments but also sentence alignments on a large scale. In addition to that, we are distributing the alignment data for research and educational purposes. This is the first attempt at a Japanese-English bilingual corpus. 7 Availability As of late-October 2002, we have been distributing the alignment data discussed in this paper for research and educational purposes.12 All the information on the article and sentence alignments are numerically encoded so that users who have the Yomiuri data can recover the results of alignments. The data also contains the top-150,000 one-to-one sentence alignments and the top-30,000 one-to-many sentence alignments as raw sentences. The Yomiuri Shimbun generously allowed us to distribute them for research and educational purposes. We have sent over 30 data sets to organizations on their request. About half of these were NLPrelated. The other half were linguistics-related. A few requests were from high-school and junior-highschool teachers of English. A psycho-linguist was also included. It is obvious that people from both inside and outside the NLP community are interested 12http://www.crl.go.jp/jt/a132/members/mutiyama/jea/index.html in this Japanese-English alignment data. 8 Conclusion We have proposed two measures for extracting valid article and sentence alignments. The measure for article alignment uses similarities in sentences aligned by DP matching and that for sentence alignment uses similarities in articles aligned by CLIR. They enhance each other and allow valid article and sentence alignments to be reliably extracted from an extremely noisy Japanese-English parallel corpus. We are distributing the alignment data discussed in this paper so that it can be used for research and educational purposes. It has attracted the attention of people both inside and outside the NLP community. We have applied our measures to a Japanese and English bilingual corpus and these are language independent. 
It is therefore reasonable to expect that they can be applied to any language pair and still retain good performance, particularly since their effectiveness has been demonstrated in such a disparate language pair as Japanese and English. References Eric Brill. 1992. A simple rule-based part of speech tagger. In ANLP-92, pages 152–155. Nigel Collier, Hideki Hirakawa, and Akira Kumano. 1998. Machine translation vs. dictionary term translation – a comparison for English-Japanese news article alignment. In COLING-ACL’98, pages 263–267. William A. Gale and Kenneth W. Church. 1993. A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(1):75–102. Japan Electronic Industry Development Association JEIDA. 2000. Sizen Gengo Syori-ni Kan-suru Tyousa Houkoku-syo (Report on natural language processing systems). Kenji Matsumoto and Hideki Tanaka. 2002. Automatic alignment of Japanese and English newspaper articles using an MT system and a bilingual company name dictionary. In LREC-2002, pages 480–484. Jeffrey C. Reynar and Adwait Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In ANLP-97. S. E. Robertson and S. Walker. 1994. Some simple effective approximations to the 2-Poisson model for probabilistic weighted retrieval. In SIGIR’94, pages 232–241. Takehito Utsuro, Hiroshi Ikeda, Masaya Yamane, Yuji Matsumoto, and Makoto Nagao. 1994. Bilingual text matching using bilingual dictionary and statistics. In COLING’94, pages 1076–1082.
2003
10
Loosely Tree-Based Alignment for Machine Translation Daniel Gildea University of Pennsylvania [email protected] Abstract We augment a model of translation based on re-ordering nodes in syntactic trees in order to allow alignments not conforming to the original tree structure, while keeping computational complexity polynomial in the sentence length. This is done by adding a new subtree cloning operation to either tree-to-string or tree-to-tree alignment algorithms. 1 Introduction Systems for automatic translation between languages have been divided into transfer-based approaches, which rely on interpreting the source string into an abstract semantic representation from which text is generated in the target language, and statistical approaches, pioneered by Brown et al. (1990), which estimate parameters for a model of word-to-word correspondences and word re-orderings directly from large corpora of parallel bilingual text. Only recently have hybrid approaches begun to emerge, which apply probabilistic models to a structured representation of the source text. Wu (1997) showed that restricting word-level alignments between sentence pairs to observe syntactic bracketing constraints significantly reduces the complexity of the alignment problem and allows a polynomial-time solution. Alshawi et al. (2000) also induce parallel tree structures from unbracketed parallel text, modeling the generation of each node’s children with a finite-state transducer. Yamada and Knight (2001) present an algorithm for estimating probabilistic parameters for a similar model which represents translation as a sequence of re-ordering operations over children of nodes in a syntactic tree, using automatic parser output for the initial tree structures. The use of explicit syntactic information for the target language in this model has led to excellent translation results (Yamada and Knight, 2002), and raises the prospect of training a statistical system using syntactic information for both sides of the parallel corpus. Tree-to-tree alignment techniques such as probabilistic tree substitution grammars (Hajiˇc et al., 2002) can be trained on parse trees from parallel treebanks. However, real bitexts generally do not exhibit parse-tree isomorphism, whether because of systematic differences between how languages express a concept syntactically (Dorr, 1994), or simply because of relatively free translations in the training material. In this paper, we introduce “loosely” tree-based alignment techniques to address this problem. We present analogous extensions for both tree-to-string and tree-to-tree models that allow alignments not obeying the constraints of the original syntactic tree (or tree pair), although such alignments are dispreferred because they incur a cost in probability. This is achieved by introducing a clone operation, which copies an entire subtree of the source language syntactic structure, moving it anywhere in the target language sentence. Careful parameterization of the probability model allows it to be estimated at no additional cost in computational complexity. We expect our relatively unconstrained clone operation to allow for various types of structural divergence by providing a sort of hybrid between tree-based and unstructured, IBM-style models. 
We first present the tree-to-string model, followed by the tree-to-tree model, before moving on to alignment results for a parallel syntactically annotated Korean-English corpus, measured in terms of alignment perplexities on held-out test data, and agreement with human-annotated word-level alignments. 2 The Tree-to-String Model We begin by summarizing the model of Yamada and Knight (2001), which can be thought of as representing translation as an Alexander Calder mobile. If we follow the process of an English sentence’s transformation into French, the English sentence is first given a syntactic tree representation by a statistical parser (Collins, 1999). As the first step in the translation process, the children of each node in the tree can be re-ordered. For any node with m children, m! re-orderings are possible, each of which is assigned a probability Porder conditioned on the syntactic categories of the parent node and its children. As the second step, French words can be inserted at each node of the parse tree. Insertions are modeled in two steps, the first predicting whether an insertion to the left, an insertion to the right, or no insertion takes place with probability Pins, conditioned on the syntactic category of the node and that of its parent. The second step is the choice of the inserted word Pt(f|NULL), which is predicted without any conditioning information. The final step, a French translation of each original English word, at the leaves of the tree, is chosen according to a distribution Pt(f|e). The French word is predicted conditioned only on the English word, and each English word can generate at most one French word, or can generate a NULL symbol, representing deletion. Given the original tree, the re-ordering, insertion, and translation probabilities at each node are independent of the choices at any other node. These independence relations are analogous to those of a stochastic context-free grammar, and allow for efficient parameter estimation by an inside-outside Expectation Maximization (EM) algorithm. The computation of inside probabilities β, outlined below, considers possible reordering of nodes in the original tree in a bottom-up manner: for all nodes εi in input tree T do for all k, l such that 1 < k < l < N do for all orderings ρ of the children ε1...εm of εi do for all partitions of span k, l into k1, l1...km, lm do β(εi, k, l)+= Porder(ρ|εi) Qm j=1 β(εj, kj, lj) end for end for end for end for This algorithm has computational complexity O(|T|Nm+2), where m is the maximum number of children of any node in the input tree T, and N the length of the input string. By storing partially completed arcs in the chart and interleaving the inner two loops, complexity of O(|T|n3m!2m) can be achieved. Thus, while the algorithm is exponential in m, the fan-out of the grammar, it is polynomial in the size of the input string. Assuming |T| = O(n), the algorithm is O(n4). The model’s efficiency, however, comes at a cost. Not only are many independence assumptions made, but many alignments between source and target sentences simply cannot be represented. As a minimal example, take the tree: A B X Y Z Of the six possible re-orderings of the three terminals, the two which would involve crossing the bracketing of the original tree (XZY and YZX) are not allowed. While this constraint gives us a way of using syntactic information in translation, it may in many cases be too rigid. 
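The chart computation sketched above can also be written as a plain recursion over the tree. The following is a minimal sketch covering only the re-ordering and lexical-translation steps (insertions are omitted), with a made-up Porder and Pt rather than estimated parameters; it uses the example tree A -> (B, Z), B -> (X, Y) from the preceding paragraph, so the disallowed ordering XZY comes out with probability zero.

    from itertools import permutations

    def splits(k, l, m):
        """Enumerate ways to cut the span [k, l) into m contiguous (possibly empty) pieces."""
        if m == 1:
            yield [(k, l)]
            return
        for mid in range(k, l + 1):
            for rest in splits(mid, l, m - 1):
                yield [(k, mid)] + rest

    def beta(node, k, l, f, p_order, p_t):
        """Inside probability that `node` yields the target words f[k:l]."""
        if isinstance(node, str):                  # leaf: an English word
            if l == k:                             # deleted (translated to NULL)
                return p_t(None, node)
            if l == k + 1:                         # translated into a single target word
                return p_t(f[k], node)
            return 0.0
        label, children = node                     # internal node: (label, [children])
        total = 0.0
        for order in permutations(range(len(children))):
            reordered = [children[i] for i in order]
            for segs in splits(k, l, len(children)):
                prob = p_order(order, label)
                for child, (ck, cl) in zip(reordered, segs):
                    prob *= beta(child, ck, cl, f, p_order, p_t)
                    if prob == 0.0:
                        break
                total += prob
        return total

    # Toy usage with the example tree A -> (B, Z), B -> (X, Y) discussed above.
    tree = ("A", [("B", ["X", "Y"]), "Z"])
    p_order = lambda order, label: 0.5             # both nodes are binary: two orderings each
    def p_t(f_word, e_word):
        if f_word is None:
            return 0.1                             # illustrative deletion probability
        return 0.9 if f_word == e_word.lower() else 0.0

    print(beta(tree, 0, 3, ["x", "y", "z"], p_order, p_t))   # derivable ordering: > 0
    print(beta(tree, 0, 3, ["x", "z", "y"], p_order, p_t))   # XZY: 0.0 without a clone operation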
In part to deal with this problem, Yamada and Knight (2001) flatten the trees in a pre-processing step by collapsing nodes with the same lexical head-word. This allows, for example, an English subject-verb-object (SVO) structure, which is analyzed as having a VP node spanning the verb and object, to be re-ordered as VSO in a language such as Arabic. Larger syntactic divergences between the two trees may require further relaxation of this constraint, and in practice we expect such divergences to be frequent. For example, a nominal modifier in one language may show up as an adverbial in the other, or, due to choices such as which information is represented by a main verb, the syntactic correspondence between the two S VP NP NNC NNC Kyeo-ul PAD e PAU Neun VP NP NNC NNC Su-Kap PCA eul VP NP NNU Myeoch NNX NNX Khyeol-Re XSF Ssik VP NP NNC Ci-Keup LV VV VV Pat EFN Ci S VP VP VP VP NP NNC Ci-Keup NULL LV VV VV Pat NULL EFN Ci NULL NP NNU Myeoch how NNX XSF Ssik many NNX Khyeol-Re pairs NP NNC NNC Su-Kap gloves PCA eul NULL NP VP LV VV VV Pat each EFN Ci you NP NNC Ci-Keup issued NNC PAD e in PAU Neun NULL NNC Kyeo-ul winter Figure 1: Original Korean parse tree, above, and transformed tree after reordering of children, subtree cloning (indicated by the arrow), and word translation. After the insertion operation (not shown), the tree’s English yield is: How many pairs of gloves is each of you issued in winter? sentences may break down completely. 2.1 Tree-to-String Clone Operation In order to provide some flexibility, we modify the model in order to allow for a copy of a (translated) subtree from the English sentences to occur, with some cost, at any point in the resulting French sentence. For example, in the case of the input tree A B X Y Z a clone operation making a copy of node 3 as a new child of B would produce the tree: A B X Z Y Z This operation, combined with the deletion of the original node Z, produces the alignment (XZY) that was disallowed by the original tree reordering model. Figure 1 shows an example from our Korean-English corpus where the clone operation allows the model to handle a case of wh-movement in the English sentence that could not be realized by any reordering of subtrees of the Korean parse. The probability of adding a clone of original node εi as a child of node εj is calculated in two steps: first, the choice of whether to insert a clone under εj, with probability Pins(clone|εj), and the choice of which original node to copy, with probability Pclone(εi|clone = 1) = Pmakeclone(εi) P k Pmakeclone(εk) where Pmakeclone is the probability of an original node producing a copy. In our implementation, for simplicity, Pins(clone) is a single number, estimated by the EM algorithm but not conditioned on the parent node εj, and Pmakeclone is a constant, meaning that the node to be copied is chosen from all the nodes in the original tree with uniform probability. It is important to note that Pmakeclone is not dependent on whether a clone of the node in question has already been made, and thus a node may be “reused” any number of times. This independence assumption is crucial to the computational tractability of the algorithm, as the model can be estimated using the dynamic programming method above, keeping counts for the expected number of times each node has been cloned, at no increase in computational complexity. 
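A minimal sketch of the clone-probability computation just defined; the numbers are illustrative. As described above, Pins(clone) is a single EM-estimated value in the implementation and Pmakeclone is a constant, so the second factor reduces to a uniform choice over the nodes of the original tree.

    def clone_probability(i, original_nodes, p_ins_clone, p_makeclone):
        """Pins(clone) * Pclone(node i), with Pclone(i) = Pmakeclone(i) / sum_k Pmakeclone(k)."""
        total = sum(p_makeclone(n) for n in original_nodes)
        return p_ins_clone * p_makeclone(original_nodes[i]) / total

    # With a constant Pmakeclone, every original node is equally likely to be copied:
    nodes = ["A", "B", "X", "Y", "Z"]
    p = clone_probability(nodes.index("Z"), nodes, p_ins_clone=0.05,
                          p_makeclone=lambda n: 1.0)
    print(p)   # 0.05 * (1 / 5) = 0.01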
Without such an assumption, the parameter estimation becomes a problem of parsing with crossing dependencies, which is exponential in the length of the input string (Barton, 1985). 3 The Tree-to-Tree Model The tree-to-tree alignment model has tree transformation operations similar to those of the tree-tostring model described above. However, the transformed tree must not only match the surface string of the target language, but also the tree structure assigned to the string by the treebank annotators. In order to provide enough flexibility to make this possible, additional tree transformation operations allow a single node in the source tree to produce two nodes in the target tree, or two nodes in the source tree to be grouped together and produce a single node in the target tree. The model can be thought of as a synchronous tree substitution grammar, with probabilities parameterized to generate the target tree conditioned on the structure of the source tree. The probability P(Tb|Ta) of transforming the source tree Ta into target tree Tb is modeled in a sequence of steps proceeding from the root of the target tree down. At each level of the tree: 1. At most one of the current node’s children is grouped with the current node in a single elementary tree, with probability Pelem(ta|εa ⇒ children(εa)), conditioned on the current node εa and its children (ie the CFG production expanding εa). 2. An alignment of the children of the current elementary tree is chosen, with probability Palign(α|εa ⇒children(ta)). This alignment operation is similar to the re-order operation in the tree-to-string model, with the extension that 1) the alignment α can include insertions and deletions of individual children, as nodes in either the source or target may not correspond to anything on the other side, and 2) in the case where two nodes have been grouped into ta, their children are re-ordered together in one step. In the final step of the process, as in the tree-tostring model, lexical items at the leaves of the tree are translated into the target language according to a distribution Pt(f|e). Allowing non-1-to-1 correspondences between nodes in the two trees is necessary to handle the fact that the depth of corresponding words in the two trees often differs. A further consequence of allowing elementary trees of size one or two is that some reorderings not allowed when reordering the children of each individual node separately are now possible. For example, with our simple tree A B X Y Z if nodes A and B are considered as one elementary tree, with probability Pelem(ta|A ⇒BZ), their collective children will be reordered with probability Palign({(1, 1)(2, 3)(3, 2)}|A ⇒XYZ) A X Z Y giving the desired word ordering XZY. However, computational complexity as well as data sparsity prevent us from considering arbitrarily large elementary trees, and the number of nodes considered at once still limits the possible alignments. For example, with our maximum of two nodes, no transformation of the tree A B W X C Y Z is capable of generating the alignment WYXZ. In order to generate the complete target tree, one more step is necessary to choose the structure on the target side, specifically whether the elementary tree has one or two nodes, what labels the nodes have, and, if there are two nodes, whether each child attaches to the first or the second. Because we are ultimately interested in predicting the correct target string, regardless of its structure, we do not assign probabilities to these steps. 
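A small sketch of the elementary-tree grouping step from the example above: when a node is grouped with one of its children, their collective children are re-ordered together, which licenses orderings (such as XZY for the tree A -> (B, Z), B -> (X, Y)) that per-node re-ordering alone cannot produce. The tuple encoding of trees is an assumption made for illustration.

    from itertools import permutations

    def collective_children(node, grouped_child_index=None):
        """Children of the elementary tree formed by `node`, optionally grouped with
        one of its children: the grouped child is replaced by its own children."""
        label, children = node
        if grouped_child_index is None:
            return list(children)
        out = []
        for pos, child in enumerate(children):
            if pos == grouped_child_index:
                _, grandchildren = child
                out.extend(grandchildren)
            else:
                out.append(child)
        return out

    tree = ("A", [("B", ["X", "Y"]), "Z"])
    kids = collective_children(tree, grouped_child_index=0)    # group A with its child B
    print(kids)                                                # ['X', 'Y', 'Z']
    print(["".join(p) for p in permutations(kids)])            # includes 'XZY'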
The nonterminals on the target side are ignored entirely, and while the alignment algorithm considers possible pairs of nodes as elementary trees on the target side during training, the generative probability model should be thought of as only generating single nodes on the target side. Thus, the alignment algorithm is constrained by the bracketing on the target side, but does not generate the entire target tree structure. While the probability model for tree transformation operates from the top of the tree down, probability estimation for aligning two trees takes place by iterating through pairs of nodes from each tree in bottom-up order, as sketched below: for all nodes εa in source tree Ta in bottom-up order do for all elementary trees ta rooted in εa do for all nodes εb in target tree Tb in bottom-up order do for all elementary trees tb rooted in εb do for all alignments α of the children of ta and tb do β(εa, εb) += Pelem(ta|εa)Palign(α|εi) Q (i,j)∈α β(εi, εj) end for end for end for end for end for The outer two loops, iterating over nodes in each tree, require O(|T|2). Because we restrict our elementary trees to include at most one child of the root node on either side, choosing elementary trees for a node pair is O(m2), where m refers to the maximum number of children of a node. Computing the alignment between the 2m children of the elementary tree on either side requires choosing which subset of source nodes to delete, O(22m), which subset of target nodes to insert (or clone), O(22m), and how to reorder the remaining nodes from source to target tree, O((2m)!). Thus overall complexity of the algorithm is O(|T|2m242m(2m)!), quadratic in the size of the input sentences, but exponential in the fan-out of the grammar. 3.1 Tree-to-Tree Clone Operation Allowing m-to-n matching of up to two nodes on either side of the parallel treebank allows for limited non-isomorphism between the trees, as in Hajiˇc et al. (2002). However, even given this flexibility, requiring alignments to match two input trees rather than one often makes tree-to-tree alignment more constrained than tree-to-string alignment. For example, even alignments with no change in word order may not be possible if the structures of the two trees are radically mismatched. This leads us to think it may be helpful to allow departures from Tree-to-String Tree-to-Tree elementary tree grouping Pelem(ta|εa ⇒children(εa)) re-order Porder(ρ|ε ⇒children(ε)) Palign(α|εa ⇒children(ta)) insertion Pins(left, right, none|ε) α can include “insertion” symbol lexical translation Pt(f|e) Pt(f|e) with cloning Pins(clone|ε) α can include “clone” symbol Pmakeclone(ε) Pmakeclone(ε) Table 1: Model parameterization the constraints of the parallel bracketing, if it can be done in without dramatically increasing computational complexity. For this reason, we introduce a clone operation, which allows a copy of a node from the source tree to be made anywhere in the target tree. After the clone operation takes place, the transformation of source into target tree takes place using the tree decomposition and subtree alignment operations as before. The basic algorithm of the previous section remains unchanged, with the exception that the alignments α between children of two elementary trees can now include cloned, as well as inserted, nodes on the target side. 
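To make the two complexity bounds concrete, the following tabulates their fan-out-dependent factors for small m: the per-node factor m!·2^m of the tree-to-string algorithm from Section 2 and the per-node-pair factor m^2·4^(2m)·(2m)! derived above for the tree-to-tree algorithm. The sentence-length terms are left out.

    from math import factorial

    def tree_to_string_factor(m):
        return factorial(m) * 2 ** m                       # m! * 2^m

    def tree_to_tree_factor(m):
        return m ** 2 * 4 ** (2 * m) * factorial(2 * m)    # m^2 * 4^(2m) * (2m)!

    for m in range(2, 7):
        print(m, tree_to_string_factor(m), tree_to_tree_factor(m))

The tree-to-tree factor is far larger per node pair, but it is paid only quadratically in the size of the input rather than quartically in the sentence length, which is the trade-off the experiments below return to.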
Given that α specifies a new cloned node as a child of εj, the choice of which node to clone is made as in the tree-to-string model: Pclone(εi|clone ∈α) = Pmakeclone(εi) P k Pmakeclone(εk) Because a node from the source tree is cloned with equal probability regardless of whether it has already been “used” or not, the probability of a clone operation can be computed under the same dynamic programming assumptions as the basic tree-to-tree model. As with the tree-to-string cloning operation, this independence assumption is essential to keep the complexity polynomial in the size of the input sentences. For reference, the parameterization of all four models is summarized in Table 1. 4 Data For our experiments, we used a parallel KoreanEnglish corpus from the military domain (Han et al., 2001). Syntactic trees have been annotated by hand for both the Korean and English sentences; in this paper we will be using only the Korean trees, modeling their transformation into the English text. The corpus contains 5083 sentences, of which we used 4982 as training data, holding out 101 sentences for evaluation. The average Korean sentence length was 13 words. Korean is an agglutinative language, and words often contain sequences of meaning-bearing suffixes. For the purposes of our model, we represented the syntax trees using a fairly aggressive tokenization, breaking multimorphemic words into separate leaves of the tree. This gave an average of 21 tokens for the Korean sentences. The average English sentence length was 16. The maximum number of children of a node in the Korean trees was 23 (this corresponds to a comma-separated list of items). 77% of the Korean trees had no more than four children at any node, 92% had no more than five children, and 96% no more than six children. The vocabulary size (number of unique types) was 4700 words in English, and 3279 in Korean — before splitting multi-morphemic words, the Korean vocabulary size was 10059. For reasons of computation speed, trees with more than 5 children were excluded from the experiments described below. 5 Experiments We evaluate our translation models both in terms agreement with human-annotated word-level alignments between the sentence pairs. For scoring the viterbi alignments of each system against goldstandard annotated alignments, we use the alignment error rate (AER) of Och and Ney (2000), which measures agreement at the level of pairs of words:1 AER = 1 −2|A ∩G| |A| + |G| 1While Och and Ney (2000) differentiate between sure and possible hand-annotated alignments, our gold standard alignments come in only one variety. Alignment Error Rate IBM Model 1 .37 IBM Model 2 .35 IBM Model 3 .43 Tree-to-String .42 Tree-to-String, Clone .36 Tree-to-String, Clone Pins = .5 .32 Tree-to-Tree .49 Tree-to-Tree, Clone .36 Table 2: Alignment error rate on Korean-English corpus where A is the set of word pairs aligned by the automatic system, and G the set aligned in the gold standard. We provide a comparison of the tree-based models with the sequence of successively more complex models of Brown et al. (1993). Results are shown in Table 2. The error rates shown in Table 2 represent the minimum over training iterations; training was stopped for each model when error began to increase. IBM Models 1, 2, and 3 refer to Brown et al. (1993). “Tree-to-String” is the model of Yamada and Knight (2001), and “Tree-to-String, Clone” allows the node cloning operation of Section 2.1. 
“Tree-to-Tree” indicates the model of Section 3, while “Tree-to-Tree, Clone” adds the node cloning operation of Section 3.1. Model 2 is initialized from the parameters of Model 1, and Model 3 is initialized from Model 2. The lexical translation probabilities Pt(f|e) for each of our tree-based models are initialized from Model 1, and the node re-ordering probabilities are initialized uniformly. Figure 1 shows the viterbi alignment produced by the “Tree-to-String, Clone” system on one sentence from our test set. We found better agreement with the human alignments when fixing Pins(left) in the Tree-to-String model to a constant rather than letting it be determined through the EM training. While the model learned by EM tends to overestimate the total number of aligned word pairs, fixing a higher probability for insertions results in fewer total aligned pairs and therefore a better trade-off between precision and recall. As seen for other tasks (Carroll and Charniak, 1992; Merialdo, 1994), the likelihood criterion used in EM training may not be optimal when evaluating a system against human labeling. The approach of optimizing a small number of metaparameters has been applied to machine translation by Och and Ney (2002). It is likely that the IBM models could similarly be optimized to minimize alignment error – an open question is whether the optimization with respect to alignment error will correspond to optimization for translation accuracy. Within the strict EM framework, we found roughly equivalent performance between the IBM models and the two tree-based models when making use of the cloning operation. For both the tree-tostring and tree-to-tree models, the cloning operation improved results, indicating that adding the flexibility to handle structural divergence is important when using syntax-based models. The improvement was particularly significant for the tree-to-tree model, because using syntactic trees on both sides of the translation pair, while desirable as an additional source of information, severely constrains possible alignments unless the cloning operation is allowed. The tree-to-tree model has better theoretical complexity than the tree-to-string model, being quadratic rather than quartic in sentence length, and we found this to be a significant advantage in practice. This improvement in speed allows longer sentences and more data to be used in training syntax-based models. We found that when training on sentences of up 60 words, the tree-to-tree alignment was 20 times faster than tree-to-string alignment. For reasons of speed, Yamada and Knight (2002) limited training to sentences of length 30, and were able to use only one fifth of the available Chinese-English parallel corpus. 6 Conclusion Our loosely tree-based alignment techniques allow statistical models of machine translation to make use of syntactic information while retaining the flexibility to handle cases of non-isomorphic source and target trees. This is achieved with a clone operation parameterized in such a way that alignment probabilities can be computed with no increase in asymptotic computational complexity. We present versions of this technique both for tree-to-string models, making use of parse trees for one of the two languages, and tree-to-tree models, which make use of parallel parse trees. Results in terms of alignment error rate indicate that the clone operation results in better alignments in both cases. 
On our Korean-English corpus, we found roughly equivalent performance for the unstructured IBM models, and the both the tree-to-string and tree-totree models when using cloning. To our knowledge these are the first results in the literature for tree-to-tree statistical alignment. While we did not see a benefit in alignment error from using syntactic trees in both languages, there is a significant practical benefit in computational efficiency. We remain hopeful that two trees can provide more information than one, and feel that extensions to the “loosely” tree-based approach are likely to demonstrate this using larger corpora. Another important question we plan to pursue is the degree to which these results will be borne out with larger corpora, and how the models may be refined as more training data is available. As one example, our tree representation is unlexicalized, but we expect conditioning the model on more lexical information to improve results, whether this is done by percolating lexical heads through the existing trees or by switching to a strict dependency representation. References Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Learning dependency translation models as collections of finite state head transducers. Computational Linguistics, 26(1):45–60. G. Edward Barton, Jr. 1985. On the complexity of ID/LP parsing. Computational Linguistics, 11(4):205–218. Peter F. Brown, John Cocke, Stephen A. Della Pietra, Vincent J. Della Pietra, Frederick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85, June. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Glenn Carroll and Eugene Charniak. 1992. Two experiments on learning probabilistic dependency grammars from corpora. In Workshop Notes for StatisticallyBased NLP Techniques, pages 1–13. AAAI. Michael John Collins. 1999. Head-driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia. Bonnie J. Dorr. 1994. Machine translation divergences: A formal description and proposed solution. Computational Linguistics, 20(4):597–633. Jan Hajiˇc, Martin ˇCmejrek, Bonnie Dorr, Yuan Ding, Jason Eisner, Daniel Gildea, Terry Koo, Kristen Parton, Gerald Penn, Dragomir Radev, and Owen Rambow. 2002. Natural language generation in the context of machine translation. Technical report, Center for Language and Speech Processing, Johns Hopkins University, Baltimore. Summer Workshop Final Report. Chung-hye Han, Na-Rae Han, and Eon-Suk Ko. 2001. Bracketing guidelines for Penn Korean treebank. Technical Report IRCS-01-010, IRCS, University of Pennsylvania. Bernard Merialdo. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155–172. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of ACL00, pages 440–447, Hong Kong, October. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL-02, Philadelphia, PA. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):3–403. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proceedings of ACL01, Toulouse, France. 
Kenji Yamada and Kevin Knight. 2002. A decoder for syntax-based statistical MT. In Proceedings of ACL02, Philadelphia, PA.
2003
11
A Probability Model to Improve Word Alignment Colin Cherry and Dekang Lin Department of Computing Science University of Alberta Edmonton, Alberta, Canada, T6G 2E8 {colinc,lindek}@cs.ualberta.ca Abstract Word alignment plays a crucial role in statistical machine translation. Word-aligned corpora have been found to be an excellent source of translation-related knowledge. We present a statistical model for computing the probability of an alignment given a sentence pair. This model allows easy integration of context-specific features. Our experiments show that this model can be an effective tool for improving an existing word alignment. 1 Introduction Word alignments were first introduced as an intermediate result of statistical machine translation systems (Brown et al., 1993). Since their introduction, many researchers have become interested in word alignments as a knowledge source. For example, alignments can be used to learn translation lexicons (Melamed, 1996), transfer rules (Carbonell et al., 2002; Menezes and Richardson, 2001), and classifiers to find safe sentence segmentation points (Berger et al., 1996). In addition to the IBM models, researchers have proposed a number of alternative alignment methods. These methods often involve using a statistic such as φ2 (Gale and Church, 1991) or the log likelihood ratio (Dunning, 1993) to create a score to measure the strength of correlation between source and target words. Such measures can then be used to guide a constrained search to produce word alignments (Melamed, 2000). It has been shown that once a baseline alignment has been created, one can improve results by using a refined scoring metric that is based on the alignment. For example Melamed uses competitive linking along with an explicit noise model in (Melamed, 2000) to produce a new scoring metric, which in turn creates better alignments. In this paper, we present a simple, flexible, statistical model that is designed to capture the information present in a baseline alignment. This model allows us to compute the probability of an alignment for a given sentence pair. It also allows for the easy incorporation of context-specific knowledge into alignment probabilities. A critical reader may pose the question, “Why invent a new statistical model for this purpose, when existing, proven models are available to train on a given word alignment?” We will demonstrate experimentally that, for the purposes of refinement, our model achieves better results than a comparable existing alternative. We will first present this model in its most general form. Next, we describe an alignment algorithm that integrates this model with linguistic constraints in order to produce high quality word alignments. We will follow with our experimental results and discussion. We will close with a look at how our work relates to other similar systems and a discussion of possible future directions. 2 Probability Model In this section we describe our probability model. To do so, we will first introduce some necessary notation. Let E be an English sentence e1, e2, . . . , em and let F be a French sentence f1, f2, . . . , fn. We define a link l(ei, fj) to exist if ei and fj are a translation (or part of a translation) of one another. We define the null link l(ei, f0) to exist if ei does not correspond to a translation for any French word in F. The null link l(e0, fj) is defined similarly. 
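The notation just introduced can be rendered directly as data: sentences as 1-indexed word lists and links as (i, j) index pairs, with index 0 reserved for the null words e0 and f0. The sentence pair and links below are invented for illustration.

    E = ["a", "small", "house"]            # e1, e2, e3
    F = ["une", "maison"]                  # f1, f2

    # l(a, une), l(house, maison), and the null link l(small, f0):
    links = [(1, 1), (3, 2), (2, 0)]

    def link_to_str(link, E, F):
        i, j = link
        e = "e0" if i == 0 else E[i - 1]
        f = "f0" if j == 0 else F[j - 1]
        return "l(%s, %s)" % (e, f)

    print([link_to_str(l, E, F) for l in links])
    # ['l(a, une)', 'l(house, maison)', 'l(small, f0)']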
An alignment A for two sentences E and F is a set of links such that every word in E and F participates in at least one link, and a word linked to e0 or f0 participates in no other links. If e occurs in E x times and f occurs in F y times, we say that e and f co-occur xy times in this sentence pair. We define the alignment problem as finding the alignment A that maximizes P(A|E, F). This corresponds to finding the Viterbi alignment in the IBM translation systems. Those systems model P(F, A|E), which when maximized is equivalent to maximizing P(A|E, F). We propose here a system which models P(A|E, F) directly, using a different decomposition of terms. In the IBM models of translation, alignments exist as artifacts of which English words generated which French words. Our model does not state that one sentence generates the other. Instead it takes both sentences as given, and uses the sentences to determine an alignment. An alignment A consists of t links {l1, l2, . . . , lt}, where each lk = l(eik, fjk) for some ik and jk. We will refer to consecutive subsets of A as lj i = {li, li+1, . . . , lj}. Given this notation, P(A|E, F) can be decomposed as follows: P(A|E, F) = P(lt 1|E, F) = tY k=1 P(lk|E, F, lk−1 1 ) At this point, we must factor P(lk|E, F, lk−1 1 ) to make computation feasible. Let Ck = {E, F, lk−1 1 } represent the context of lk. Note that both the context Ck and the link lk imply the occurrence of eik and fjk. We can rewrite P(lk|Ck) as: P(lk|Ck) = P(lk, Ck) P(Ck) = P(Ck|lk)P(lk) P(Ck, eik, fjk) = P(Ck|lk) P(Ck|eik, fjk) × P(lk, eik, fjk) P(eik, fjk) = P(lk|eik, fjk) × P(Ck|lk) P(Ck|eik, fjk) Here P(lk|eik, fjk) is link probability given a cooccurrence of the two words, which is similar in spirit to Melamed’s explicit noise model (Melamed, 2000). This term depends only on the words involved directly in the link. The ratio P(Ck|lk) P(Ck|eik,fjk) modifies the link probability, providing contextsensitive information. Up until this point, we have made no simplifying assumptions in our derivation. Unfortunately, Ck = {E, F, lk−1 1 } is too complex to estimate context probabilities directly. Suppose FTk is a set of context-related features such that P(lk|Ck) can be approximated by P(lk|eik, fjk, FTk). Let C′ k = {eik, fjk}∪FTk. P(lk|C′ k) can then be decomposed using the same derivation as above. P(lk|C′ k) = P(lk|eik, fjk) × P(C′ k|lk) P(C′ k|eik, fjk) = P(lk|eik, fjk) × P(FTk|lk) P(FTk|eik, fjk) In the second line of this derivation, we can drop eik and fjk from C′ k, leaving only FTk, because they are implied by the events which the probabilities are conditionalized on. Now, we are left with the task of approximating P(FTk|lk) and P(FTk|eik, fjk). To do so, we will assume that for all ft ∈FTk, ft is conditionally independent given either lk or (eik, fjk). This allows us to approximate alignment probability P(A|E, F) as follows: tY k=1  P(lk|eik, fjk) × Y ft∈FTk P(ft|lk) P(ft|eik, fjk)   In any context, only a few features will be active. The inner product is understood to be only over those features ft that are present in the current context. This approximation will cause P(A|E, F) to no longer be a well-behaved probability distribution, though as in Naive Bayes, it can be an excellent estimator for the purpose of ranking alignments. If we have an aligned training corpus, the probabilities needed for the above equation are quite easy to obtain. Link probabilities can be determined directly from |lk| (link counts) and |eik, fj,k| (co-occurrence counts). 
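A sketch of how the link-probability table can be filled from a word-aligned training corpus as just described, estimating P(lk|eik, fjk) as link count over co-occurrence count. Feature counting and the sample weighting used later in training are omitted. The one-sentence toy corpus is the sentence pair of the illustrative example below (the particular link indices are one consistent choice), so the output reproduces the values in Table 1(a), with "NULL" standing in for e0 and f0.

    from collections import Counter

    def link_probabilities(corpus):
        """corpus: iterable of (E_words, F_words, links); links are (i, j) pairs, 0 = null word."""
        link_counts, cooc_counts = Counter(), Counter()
        for E, F, links in corpus:
            E0, F0 = ["NULL"] + list(E), ["NULL"] + list(F)   # position 0 holds the null word
            for e in set(E0):
                for f in set(F0):
                    cooc_counts[(e, f)] += E0.count(e) * F0.count(f)
            for (i, j) in links:
                link_counts[(E0[i], F0[j])] += 1
        return {pair: link_counts[pair] / cooc_counts[pair] for pair in link_counts}

    # One-sentence toy corpus, E = "a b a", F = "u v v", with links
    # l(b, u), l(a, f0), l(a, v) and l(e0, v):
    corpus = [(["a", "b", "a"], ["u", "v", "v"], [(2, 1), (1, 0), (3, 2), (0, 3)])]
    print(link_probabilities(corpus))
    # {('b', 'u'): 1.0, ('a', 'NULL'): 0.5, ('a', 'v'): 0.25, ('NULL', 'v'): 0.5}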
For any co-occurring pair of words (eik, fjk), we check whether it has the feature ft. If it does, we increment the count of |ft, eik, fjk|. If this pair is also linked, then we increment the count of |ft, lk|. Note that our definition of FTk allows for features that depend on previous links. For this reason, when determining whether or not a feature is present in a given context, one must impose an ordering on the links. This ordering can be arbitrary as long as the same ordering is used in training(1) and probability evaluation. A simple solution would be to order links according to their French words. We choose to order links according to the link probability P(lk|eik, fjk), as it has an intuitive appeal of allowing more certain links to provide context for others.

We store probabilities in two tables. The first table stores link probabilities P(lk|eik, fjk). It has an entry for every word pair that was linked at least once in the training corpus. Its size is the same as the translation table in the IBM models. The second table stores feature probabilities, P(ft|lk) and P(ft|eik, fjk). For every linked word pair, this table has two entries for each active feature. In the worst case this table will be of size 2 × |FT| × |E| × |F|. In practice, it is much smaller as most contexts activate only a small number of features. In the next subsection we will walk through a simple example of this probability model in action. We will describe the features used in our implementation of this model in Section 3.2.

(1) In our experiments, the ordering is not necessary during training to achieve good performance.

2.1 An Illustrative Example

Figure 1 shows an aligned corpus consisting of one sentence pair. Suppose that we are concerned with only one feature ft that is active(2) for eik and fjk if an adjacent pair is an alignment, i.e., l(eik−1, fjk−1) ∈ lk−1 1 or l(eik+1, fjk+1) ∈ lk−1 1. This example would produce the probability tables shown in Table 1. Note how ft is active for the (a, v) link, and is not active for the (b, u) link. This is due to our selected ordering.

(2) Throughout this paper we will assume that null alignments are special cases, and do not activate or participate in features unless otherwise stated in the feature description.

Figure 1: An Example Aligned Corpus (a single sentence pair with E = "a b a" and F = "u v v", plus the null words e0 and f0)

Table 1: Example Probability Tables

(a) Link Counts and Probabilities
  eik   fjk   |lk|   |eik, fjk|   P(lk|eik, fjk)
  b     u     1      1            1
  a     f0    1      2            1/2
  e0    v     1      2            1/2
  a     v     1      4            1/4

(b) Feature Counts
  eik   fjk   |ft, lk|   |ft, eik, fjk|
  a     v     1          1

(c) Feature Probabilities
  eik   fjk   P(ft|lk)   P(ft|eik, fjk)
  a     v     1          1/4

Table 1 allows us to calculate the probability of this alignment as:

  P(A|E, F) = P(l(b, u)|b, u)
            × P(l(a, f0)|a, f0)
            × P(l(e0, v)|e0, v)
            × P(l(a, v)|a, v) × P(ft|l(a, v)) / P(ft|a, v)
            = 1 × 1/2 × 1/2 × 1/4 × (1 / (1/4))
            = 1/4

3 Word-Alignment Algorithm

In this section, we describe a word-alignment algorithm guided by the alignment probability model derived above. In designing this algorithm we have selected constraints, features and a search method in order to achieve high performance. The model, however, is general, and could be used with any instantiation of the above three factors. This section will describe and motivate the selection of our constraints, features and search method. The input to our word-alignment algorithm consists of a pair of sentences E and F, and the dependency tree TE for E. TE allows us to make use of features and constraints that are based on linguistic intuitions.
3.1 Constraints

The reader will note that our alignment model as described above has very few factors to prevent undesirable alignments, such as having all French words align to the same English word. To guide the model to correct alignments, we employ two constraints to limit our search for the most probable alignment. The first constraint is the one-to-one constraint (Melamed, 2000): every word (except the null words e0 and f0) participates in exactly one link. The second constraint, known as the cohesion constraint (Fox, 2002), uses the dependency tree (Mel'čuk, 1987) of the English sentence to restrict possible link combinations. Given the dependency tree TE, the alignment can induce a dependency tree for F (Hwa et al., 2002). The cohesion constraint requires that this induced dependency tree does not have any crossing dependencies. The details about how the cohesion constraint is implemented are outside the scope of this paper.(3) Here we will use a simple example to illustrate the effect of the constraint. Consider the partial alignment in Figure 2. When the system attempts to link of and de, the new link will induce the dotted dependency, which crosses a previously induced dependency between service and données. Therefore, of and de will not be linked.

(3) The algorithm for checking the cohesion constraint is presented in a separate paper which is currently under review.

Figure 2: An Example of Cohesion Constraint (the figure shows a partial alignment of "the status of the data service" with "l'état du service de données")

3.2 Features

In this section we introduce two types of features that we use in our implementation of the probability model described in Section 2. The first feature type fta concerns surrounding links. It has been observed that words close to each other in the source language tend to remain close to each other in the translation (Vogel et al., 1996; Ker and Change, 1997). To capture this notion, for any word pair (ei, fj), if a link l(ei′, fj′) exists where i − 2 ≤ i′ ≤ i + 2 and j − 2 ≤ j′ ≤ j + 2, then we say that the feature fta(i−i′, j−j′, ei′) is active for this context. We refer to these as adjacency features.

The second feature type ftd uses the English parse tree to capture regularities among grammatical relations between languages. For example, when dealing with French and English, the location of the determiner with respect to its governor(4) is never swapped during translation, while the location of adjectives is swapped frequently. For any word pair (ei, fj), let ei′ be the governor of ei, and let rel be the relationship between them. If a link l(ei′, fj′) exists, then we say that the feature ftd(j−j′, rel) is active for this context. We refer to these as dependency features.

Figure 3: Feature Extraction Example (the figure shows a partial alignment of "the host discovers all the devices" with "l'hôte repère tous les périphériques", glossed word for word as "the host locate all the peripherals")

Take for example Figure 3, which shows a partial alignment with all links completed except for those involving "the". Given this sentence pair and English parse tree, we can extract features of both types to assist in the alignment of the1. The word pair (the1, l′) will have an active adjacency feature fta(+1, +1, host) as well as a dependency feature ftd(−1, det). These two features will work together to increase the probability of this correct link.
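The two feature types can be read off a partial alignment mechanically. The sketch below extracts both for a candidate pair (i, j), given the links placed so far and a governor table for the English words. The sentence, links and governor table are invented rather than Minipar output; offsets are computed literally as (i − i′, j − j′) and (j − j′) following the definitions above, and links involving the null words are skipped, per the earlier footnote.

    def adjacency_features(i, j, links, E):
        feats = []
        for (i2, j2) in links:
            if i2 != 0 and j2 != 0 and abs(i - i2) <= 2 and abs(j - j2) <= 2:
                feats.append(("fta", i - i2, j - j2, E[i2 - 1]))
        return feats

    def dependency_features(i, j, links, governor):
        feats = []
        head_rel = governor.get(i)
        if head_rel is None:                       # e.g. the root word has no governor
            return feats
        head, rel = head_rel
        for (i2, j2) in links:
            if i2 == head and j2 != 0:
                feats.append(("ftd", j - j2, rel))
        return feats

    E = ["the", "cat", "sat"]                      # e1 e2 e3
    governor = {1: (2, "det"), 2: (3, "subj")}     # the -> cat (det), cat -> sat (subj)
    links = [(2, 2), (3, 3)]                       # cat and sat already linked to f2, f3
    print(adjacency_features(1, 1, links, E))
    print(dependency_features(1, 1, links, governor))
    # [('fta', -1, -1, 'cat'), ('fta', -2, -2, 'sat')]  and  [('ftd', -1, 'det')]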
In contrast, the incorrect link (the1, les) will have only ftd(+3, det), which will work to lower the link probability, since most determiners are located before their governors.

(4) The parent node in the dependency tree.

3.3 Search

Due to our use of constraints, when seeking the highest probability alignment, we cannot rely on a method such as dynamic programming to (implicitly) search the entire alignment space. Instead, we use a best-first search algorithm (with constant beam and agenda size) to search our constrained space of possible alignments. A state in this space is a partial alignment. A transition is defined as the addition of a single link to the current state. Any link which would create a state that does not violate any constraint is considered to be a valid transition. Our start state is the empty alignment, where all words in E and F are linked to null. A terminal state is a state in which no more links can be added without violating a constraint. Our goal is to find the terminal state with highest probability.

For the purposes of our best-first search, nonterminal states are evaluated according to a greedy completion of the partial alignment. We build this completion by adding valid links in the order of their unmodified link probabilities P(l|e, f) until no more links can be added. The score the state receives is the probability of its greedy completion. These completions are saved for later use (see Section 4.2).

4 Training

As was stated in Section 2, our probability model needs an initial alignment in order to create its probability tables. Furthermore, to avoid having our model learn mistakes and noise, it helps to train on a set of possible alignments for each sentence, rather than one Viterbi alignment. In the following subsections we describe the creation of the initial alignments used for our experiments, as well as our sampling method used in training.

4.1 Initial Alignment

We produce an initial alignment using the same algorithm described in Section 3, except we maximize summed φ2 link scores (Gale and Church, 1991), rather than alignment probability. This produces a reasonable one-to-one word alignment that we can refine using our probability model.

4.2 Alignment Sampling

Our use of the one-to-one constraint and the cohesion constraint precludes sampling directly from all possible alignments. These constraints tie words in such a way that the space of alignments cannot be enumerated as in IBM models 1 and 2 (Brown et al., 1993). Taking our lead from IBM models 3, 4 and 5, we will sample from the space of those high-probability alignments that do not violate our constraints, and then redistribute our probability mass among our sample. At each search state in our alignment algorithm, we consider a number of potential links, and select between them using a heuristic completion of the resulting state. Our sample S of possible alignments will be the most probable alignment, plus the greedy completions of the states visited during search. It is important to note that any sampling method that concentrates on complete, valid and high-probability alignments will accomplish the same task.

When collecting the statistics needed to calculate P(A|E, F) from our initial φ2 alignment, we give each s ∈ S a uniform weight. This is reasonable, as we have no probability estimates at this point. When training from the alignments produced by our model, we normalize P(s|E, F) so that Σ_{s∈S} P(s|E, F) = 1. We then count links and features in S according to these normalized probabilities.
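A small sketch of the greedy completion used to score search states: remaining candidate links are added in decreasing order of their unmodified link probabilities P(l|e, f) for as long as the alignment stays valid. Only the one-to-one constraint is enforced here; the cohesion constraint (which needs the dependency tree) and the final attachment of leftover words to the null words are left out, and the probability table is a made-up placeholder.

    def greedy_completion(partial, candidates, link_prob):
        """partial, candidates: collections of (i, j) links, with 0 meaning e0/f0."""
        state = set(partial)
        used_e = {i for (i, j) in state if i != 0}
        used_f = {j for (i, j) in state if j != 0}
        for (i, j) in sorted(candidates, key=lambda l: -link_prob.get(l, 0.0)):
            if (i == 0 or i not in used_e) and (j == 0 or j not in used_f):
                state.add((i, j))
                if i != 0:
                    used_e.add(i)
                if j != 0:
                    used_f.add(j)
        return state

    link_prob = {(1, 1): 0.9, (2, 2): 0.8, (2, 1): 0.4, (1, 2): 0.1}
    print(sorted(greedy_completion(set(), link_prob, link_prob)))
    # [(1, 1), (2, 2)]: the two strongest links compatible with one-to-one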
5 Experimental Results

We adopted the same evaluation methodology as in (Och and Ney, 2000), which compared alignment outputs with manually aligned sentences. Och and Ney classify manual alignments into two categories: Sure (S) and Possible (P) (S ⊆ P). They defined the following metrics to evaluate an alignment A:

  recall = |A ∩ S| / |S|
  precision = |A ∩ P| / |P|
  alignment error rate (AER) = 1 − (|A ∩ S| + |A ∩ P|) / (|S| + |P|)

We trained our alignment program with the same 50K pairs of sentences as (Och and Ney, 2000) and tested it on the same 500 manually aligned sentences. Both the training and testing sentences are from the Hansard corpus. We parsed the training and testing corpora with Minipar.(5) We then ran the training procedure in Section 4 for three iterations.

(5) Available at http://www.cs.ualberta.ca/~lindek/minipar.htm

We conducted three experiments using this methodology. The goal of the first experiment is to compare the algorithm in Section 3 to a state-of-the-art alignment system. The second will determine the contributions of the features. The third experiment aims to keep all factors constant except for the model, in an attempt to determine its performance when compared to an obvious alternative.

5.1 Comparison to state-of-the-art

  Method            Prec   Rec    AER
  Ours              95.7   86.4   8.7
  IBM-4 F→E         80.5   91.2   15.6
  IBM-4 E→F         80.0   90.8   16.0
  IBM-4 Intersect   95.7   85.6   9.0
  IBM-4 Refined     85.9   92.3   11.7

Table 2: Comparison with (Och and Ney, 2000)

Table 2 compares the results of our algorithm with the results in (Och and Ney, 2000), where an HMM model is used to bootstrap IBM Model 4. The rows IBM-4 F→E and IBM-4 E→F are the results obtained by IBM Model 4 when treating French as the source and English as the target, or vice versa. The row IBM-4 Intersect shows the results obtained by taking the intersection of the alignments produced by IBM-4 E→F and IBM-4 F→E. The row IBM-4 Refined shows results obtained by refining the intersection of alignments in order to increase recall.

Our algorithm achieved over 44% relative error reduction when compared with IBM-4 used in either direction and a 25% relative error rate reduction when compared with IBM-4 Refined. It also achieved a slight relative error reduction when compared with IBM-4 Intersect. This demonstrates that we are competitive with the methods described in (Och and Ney, 2000). In Table 2, one can see that our algorithm is high precision, low recall. This was expected, as our algorithm uses the one-to-one constraint, which rules out many of the possible alignments present in the evaluation data.

5.2 Contributions of Features

  Algorithm          Prec   Rec    AER
  initial (φ2)       88.9   84.6   13.1
  without features   93.7   84.8   10.5
  with ftd only      95.6   85.4   9.3
  with fta only      95.9   85.8   9.0
  with fta and ftd   95.7   86.4   8.7

Table 3: Evaluation of Features

Table 3 shows the contributions of features to our algorithm's performance. The initial (φ2) row is the score for the algorithm (described in Section 4.1) that generates our initial alignment. The without features row shows the score after 3 iterations of refinement with an empty feature set. Here we can see that our model in its simplest form is capable of producing a significant improvement in alignment quality. The rows with ftd only and with fta only describe the scores after 3 iterations of training using only dependency and adjacency features, respectively. The two features provide significant contributions, with the adjacency feature being slightly more important.
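A minimal sketch of these metrics as defined above, with alignments represented as sets of (English index, French index) pairs and S ⊆ P. The example sets are invented, and details such as the treatment of null links in the gold standard are not modeled.

    def evaluate(A, S, P):
        recall = len(A & S) / len(S)
        precision = len(A & P) / len(P)
        aer = 1.0 - (len(A & S) + len(A & P)) / (len(S) + len(P))
        return precision, recall, aer

    A = {(1, 1), (2, 2), (3, 4)}                   # system alignment
    S = {(1, 1), (2, 2)}                           # sure gold links
    P = {(1, 1), (2, 2), (3, 3), (3, 4)}           # possible gold links (superset of S)
    print(evaluate(A, S, P))                       # (0.75, 1.0, 1 - 5/6 ≈ 0.17)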
The final row shows that both features can work together to create a greater improvement, despite the independence assumptions made in Section 2. 5.3 Model Evaluation Even though we have compared our algorithm to alignments created using IBM statistical models, it is not clear if our model is essential to our performance. This experiment aims to determine if we could have achieved similar results using the same initial alignment and search algorithm with an alternative model. Without using any features, our model is similar to IBM’s Model 1, in that they both take into account only the word types that participate in a given link. IBM Model 1 uses P(f|e), the probability of f being generated by e, while our model uses P(l|e, f), the probability of a link existing between e and f. In this experiment, we set Model 1 translation probabilities according to our initial φ2 alignment, sampling as we described in Section 4.2. We then use the Qn j=1 P(fj|eaj) to evaluate candidate alignments in a search that is otherwise identical to our algorithm. We ran Model 1 refinement for three iterations and Table 4: P(l|e, f) vs. P(f|e) Algorithm Prec Rec AER initial (φ2) 88.9 84.6 13.1 P(l|e, f) model 93.7 84.8 10.5 P(f|e) model 89.2 83.0 13.7 recorded the best results that it achieved. It is clear from Table 4 that refining our initial φ2 alignment using IBM’s Model 1 is less effective than using our model in the same manner. In fact, the Model 1 refinement receives a lower score than our initial alignment. 6 Related Work 6.1 Probability models When viewed with no features, our probability model is most similar to the explicit noise model defined in (Melamed, 2000). In fact, Melamed defines a probability distribution P(links(u, v)|cooc(u, v), λ+, λ−) which appears to make our work redundant. However, this distribution refers to the probability that two word types u and v are linked links(u, v) times in the entire corpus. Our distribution P(l|e, f) refers to the probability of linking a specific co-occurrence of the word tokens e and f. In Melamed’s work, these probabilities are used to compute a score based on a probability ratio. In our work, we use the probabilities directly. By far the most prominent probability models in machine translation are the IBM models and their extensions. When trying to determine whether two words are aligned, the IBM models ask, “What is the probability that this English word generated this French word?” Our model asks instead, “If we are given this English word and this French word, what is the probability that they are linked?” The distinction is subtle, yet important, introducing many differences. For example, in our model, E and F are symmetrical. Furthermore, we model P(l|e, f′) and P(l|e, f′′) as unrelated values, whereas the IBM model would associate them in the translation probabilities t(f′|e) and t(f′′|e) through the constraint P f t(f|e) = 1. Unfortunately, by conditionalizing on both words, we eliminate a large inductive bias. This prevents us from starting with uniform probabilities and estimating parameters with EM. This is why we must supply the model with a noisy initial alignment, while IBM can start from an unaligned corpus. In the IBM framework, when one needs the model to take new information into account, one must create an extended model which can base its parameters on the previous model. In our model, new information can be incorporated modularly by adding features. 
This makes our work similar to maximum entropy-based machine translation methods, which also employ modular features. Maximum entropy can be used to improve IBM-style translation probabilities by using features, such as improvements to P(f|e) in (Berger et al., 1996). By the same token we can use maximum entropy to improve our estimates of P(lk|eik, fjk, Ck). We are currently investigating maximum entropy as an alternative to our current feature model which assumes conditional independence among features. 6.2 Grammatical Constraints There have been many recent proposals to leverage syntactic data in word alignment. Methods such as (Wu, 1997), (Alshawi et al., 2000) and (Lopez et al., 2002) employ a synchronous parsing procedure to constrain a statistical alignment. The work done in (Yamada and Knight, 2001) measures statistics on operations that transform a parse tree from one language into another. 7 Future Work The alignment algorithm described here is incapable of creating alignments that are not one-to-one. The model we describe, however is not limited in the same manner. The model is currently capable of creating many-to-one alignments so long as the null probabilities of the words added on the “many” side are less than the probabilities of the links that would be created. Under the current implementation, the training corpus is one-to-one, which gives our model no opportunity to learn many-to-one alignments. We are pursuing methods to create an extended algorithm that can handle many-to-one alignments. This would involve training from an initial alignment that allows for many-to-one links, such as one of the IBM models. Features that are related to multiple links should be added to our set of feature types, to guide intelligent placement of such links. 8 Conclusion We have presented a simple, flexible, statistical model for computing the probability of an alignment given a sentence pair. This model allows easy integration of context-specific features. Our experiments show that this model can be an effective tool for improving an existing word alignment. References Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Learning dependency translation models as collections of finite state head transducers. Computational Linguistics, 26(1):45–60. Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. P. F. Brown, V. S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–312. Jaime Carbonell, Katharina Probst, Erik Peterson, Christian Monson, Alon Lavie, Ralf Brown, and Lori Levin. 2002. Automatic rule learning for resource-limited mt. In Proceedings of AMTA-02, pages 1–10. Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61–74, March. Heidi J. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proceedings of EMNLP-02, pages 304–311. W.A. Gale and K.W. Church. 1991. Identifying word correspondences in parallel texts. In Proceedings of the 4th Speech and Natural Language Workshop, pages 152–157. DARPA, Morgan Kaufmann. Rebecca Hwa, Philip Resnik, Amy Weinberg, and Okan Kolak. 2002. Evaluating translational correspondence using annotation projection. In Proceeding of ACL-02, pages 392–399. Sue J. Ker and Jason S. Change. 1997. 
Aligning more words with high precision for small bilingual corpora. Computational Linguistics and Chinese Language Processing, 2(2):63–96, August. Adam Lopez, Michael Nossal, Rebecca Hwa, and Philip Resnik. 2002. Word-level alignment for multilingual resource acquisition. In Proceedings of the Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data. I. Dan Melamed. 1996. Automatic construction of clean broad-coverage translation lexicons. In Proceedings of the 2nd Conference of the Association for Machine Translation in the Americas, pages 125–134, Montreal. I. Dan Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2):221–249, June. Igor A. Mel’ˇcuk. 1987. Dependency syntax: theory and practice. State University of New York Press, Albany. Arul Menezes and Stephen D. Richardson. 2001. A bestfirst alignment algorithm for automatic extraction of transfer mappings from bilingual corpora. In Proceedings of the Workshop on Data-Driven Machine Translation. Franz J. Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 440–447, Hong Kong, China, October. S. Vogel, H. Ney, and C. Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceedings of COLING-96, pages 836–841, Copenhagen, Denmark, August. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):374–403. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Meeting of the Association for Computational Linguistics, pages 523–530.
2003
12
Probabilistic Parsing for German using Sister-Head Dependencies Amit Dubey Department of Computational Linguistics Saarland University PO Box 15 11 50 66041 Saarbr¨ucken, Germany [email protected] Frank Keller School of Informatics University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW, UK [email protected] Abstract We present a probabilistic parsing model for German trained on the Negra treebank. We observe that existing lexicalized parsing models using head-head dependencies, while successful for English, fail to outperform an unlexicalized baseline model for German. Learning curves show that this effect is not due to lack of training data. We propose an alternative model that uses sister-head dependencies instead of head-head dependencies. This model outperforms the baseline, achieving a labeled precision and recall of up to 74%. This indicates that sister-head dependencies are more appropriate for treebanks with very flat structures such as Negra. 1 Introduction Treebank-based probabilistic parsing has been the subject of intensive research over the past few years, resulting in parsing models that achieve both broad coverage and high parsing accuracy (e.g., Collins 1997; Charniak 2000). However, most of the existing models have been developed for English and trained on the Penn Treebank (Marcus et al., 1993), which raises the question whether these models generalize to other languages, and to annotation schemes that differ from the Penn Treebank markup. The present paper addresses this question by proposing a probabilistic parsing model trained on Negra (Skut et al., 1997), a syntactically annotated corpus for German. German has a number of syntactic properties that set it apart from English, and the Negra annotation scheme differs in important respects from the Penn Treebank markup. While Negra has been used to build probabilistic chunkers (Becker and Frank, 2002; Skut and Brants, 1998), the research reported in this paper is the first attempt to develop a probabilistic full parsing model for German trained on a treebank (to our knowledge). Lexicalization can increase parsing performance dramatically for English (Carroll and Rooth, 1998; Charniak, 1997, 2000; Collins, 1997), and the lexicalized model proposed by Collins (1997) has been successfully applied to Czech (Collins et al., 1999) and Chinese (Bikel and Chiang, 2000). However, the resulting performance is significantly lower than the performance of the same model for English (see Table 1). Neither Collins et al. (1999) nor Bikel and Chiang (2000) compare the lexicalized model to an unlexicalized baseline model, leaving open the possibility that lexicalization is useful for English, but not for other languages. This paper is structured as follows. Section 2 reviews the syntactic properties of German, focusing on its semi-flexible wordorder. Section 3 describes two standard lexicalized models (Carroll and Rooth, 1998; Collins, 1997), as well as an unlexicalized baseline model. Section 4 presents a series of experiments that compare the parsing performance of these three models (and several variants) on Negra. The results show that both lexicalized models fail to outperform the unlexicalized baseline. This is at odds with what has been reported for English. Learning curves show that the poor performance of the lexicalized models is not due to lack of training data. 
Section 5 presents an error analysis for Collins’s (1997) lexicalized model, which shows that the head-head dependencies used in this model fail to cope well with the flat structures in Negra. We propose an alternative model that uses sister-head dependencies instead. This model outperforms the two original lexicalized models, as well as the unlexicalized baseline. Based on this result and on the review of the previous literature (Section 6), we argue (Section 7) that sister-head models are more appropriate for treebanks with very flat structures (such as Negra), typically used to annotate languages with semifree wordorder (such as German). 2 Parsing German 2.1 Syntactic Properties German exhibits a number of syntactic properties that distinguish it from English, the language that has been the focus of most research in parsing. Prominent among these properties is the semi-free Language Size LR LP Source English 40,000 87.4% 88.1% (Collins, 1997) Chinese 3,484 69.0% 74.8% (Bikel and Chiang, 2000) Czech 19,000 —- 80.0% —- (Collins et al., 1999) Table 1: Results for the Collins (1997) model for various languages (dependency precision for Czech) wordorder, i.e., German wordorder is fixed in some respects, but variable in others. Verb order is largely fixed: in subordinate clauses such as (1a), both the finite verb hat ‘has’ and the non-finite verb komponiert ‘composed’ are in sentence final position. (1) a. Weil because er er gestern yesterday Musik music komponiert composed hat. has ‘Because he has composed music yesterday.’ b. Hat er gestern Musik komponiert? c. Er hat gestern Musik komponiert. In yes/no questions such as (1b), the finite verb is sentence initial, while the non-finite verb is sentence final. In declarative main clauses (see (1c)), on the other hand, the finite verb is in second position (i.e., preceded by exactly one constituent), while the non-finite verb is final. While verb order is fixed in German, the order of complements and adjuncts is variable, and influenced by a variety of syntactic and non-syntactic factors, including pronominalization, information structure, definiteness, and animacy (e.g., Uszkoreit 1987). The first position in a declarative sentence, for example, can be occupied by various constituents, including the subject (er ‘he’ in (1c)), the object (Musik ‘music’ in (2a)), an adjunct (gestern ‘yesterday’ in (2b)), or the non-finite verb (komponiert ‘composed’ in (2c)). (2) a. Musik hat er gestern komponiert. b. Gestern hat er Musik komponiert . c. Komponiert hat er gestern Musik. The semi-free wordorder in German means that a context-free grammar model has to contain more rules than for a fixed wordorder language. For transitive verbs, for instance, we need the rules S → V NP NP, S →NP V NP, and S →NP NP V to account for verb initial, verb second, and verb final order (assuming a flat S, see Section 2.2). 2.2 Negra Annotation Scheme The Negra corpus consists of around 350,000 words of German newspaper text (20,602 sentences). The annotation scheme (Skut et al., 1997) is modeled to a certain extent on that of the Penn Treebank (Marcus et al., 1993), with crucial differences. Most importantly, Negra follows the dependency grammar tradition in assuming flat syntactic representations: (a) There is no S →NP VP rule. Rather, the subject, the verb, and its objects are all sisters of each other, dominated by an S node. This is a way of accounting for the semi-free wordorder of German (see Section 2.1): the first NP within an S need not be the subject. 
(b) There is no SBAR →Comp S rule. Main clauses, subordinate clauses, and relative clauses all share the category S in Negra; complementizers and relative pronouns are simply sisters of the verb. (c) There is no PP →P NP rule, i.e., the preposition and the noun it selects (and determiners and adjectives, if present) are sisters, dominated by a PP node. An argument for this representation is that prepositions behave like case markers in German; a preposition and a determiner can merge into a single word (e.g., in dem ‘in the’ becomes im). Another idiosyncrasy of Negra is that it assumes special coordinate categories. A coordinated sentence has the category CS, a coordinate NP has the category CNP, etc. While this does not make the annotation more flat, it substantially increases the number of non-terminal labels. Negra also contains grammatical function labels that augment phrasal and lexical categories. Example are MO (modifier), HD (head), SB (subject), and OC (clausal object). 3 Probabilistic Parsing Models 3.1 Probabilistic Context-Free Grammars Lexicalization has been shown to improve parsing performance for the Penn Treebank (e.g., Carroll and Rooth 1998; Charniak 1997, 2000; Collins 1997). The aim of the present paper is to test if this finding carries over to German and to the Negra corpus. We therefore use an unlexicalized model as our baseline against which to test the lexicalized models. More specifically, we used a standard probabilistic context-free grammar (PCFG; see Charniak 1993). Each context-free rule RHS →LHS is annotated with an expansion probability P(RHS|LHS). The probabilities for all rules with the same lefthand side have to sum to one, and the probability of a parse tree T is defined as the product of the probabilities of all rules applied in generating T. 3.2 Carroll and Rooth’s Head-Lexicalized Model The head-lexicalized PCFG model of Carroll and Rooth (1998) is a minimal departure from the standard unlexicalized PCFG model, which makes it ideal for a direct comparison.1 A grammar rule LHS →RHS can be written as P →C1 ...Cn, where P is the mother category, and C1 ...Cn are daughters. Let l(C) be the lexical head 1Charniak (1997) proposes essentially the same model; we will nevertheless use the label ‘Carroll and Rooth model’ as we are using their implementation (see Section 4.1). of the constituent C. The rule probability is then defined as (see also Beil et al. 2002): P(RHS|LHS) = Prule(C1 ...Cn|P,l(P)) (3) · n ∏ i=1 Pchoice(l(Ci)|Ci,P,l(P)) Here Prule(C1 ...Cn|P,l(P)) is the probability that category P with lexical head l(P) is expanded by the rule P →C1 ...Cn, and Pchoice(l(C)|C,P,l(P)) is the probability that the (non-head) category C has the lexical head l(C) given that its mother is P with lexical head l(P). 3.3 Collins’s Head-Lexicalized Model In contrast to Carroll and Rooth’s (1998) approach, the model proposed by Collins (1997) does not compute rule probabilities directly. Rather, they are generated using a Markov process that makes certain independence assumptions. A grammar rule LHS → RHS can be written as P →Lm ...L1 H R1 ...Rn where P is the mother and H is the head daughter. Let l(C) be the head word of C and t(C) the tag of the head word of C. 
Then the probability of a rule is defined as:

P(RHS|LHS) = P(L_m ... L_1 H R_1 ... R_n | P)    (4)
           = P_h(H|P) P_l(L_m ... L_1 | P, H) P_r(R_1 ... R_n | P, H)
           = P_h(H|P) \prod_{i=0}^{m} P_l(L_i | P, H, d(i)) \prod_{i=0}^{n} P_r(R_i | P, H, d(i))

Here, P_h is the probability of generating the head, and P_l and P_r are the probabilities of generating the nonterminals to the left and right of the head, respectively; d(i) is a distance measure. (L_0 and R_0 are stop categories.) At this point, the model is still unlexicalized. To add lexical sensitivity, the P_h, P_r and P_l probability functions also take into account head words and their POS tags:

P(RHS|LHS) = P_h(H | P, t(P), l(P))    (5)
             \cdot \prod_{i=0}^{m} P_l(L_i, t(L_i), l(L_i) | P, H, t(H), l(H), d(i))
             \cdot \prod_{i=0}^{n} P_r(R_i, t(R_i), l(R_i) | P, H, t(H), l(H), d(i))

4 Experiment 1

This experiment was designed to compare the performance of the three models introduced in the last section. Our main hypothesis was that the lexicalized models would outperform the unlexicalized baseline model. Another prediction was that adding Negra-specific information to the models would increase parsing performance. We therefore tested a model variant that included grammatical function labels, i.e., the set of categories was augmented by the function tags specified in Negra (see Section 2.2). Adding grammatical functions is a way of dealing with the wordorder facts of German (see Section 2.1) in the face of Negra's very flat annotation scheme. For instance, subject and object NPs have different wordorder preferences (subjects tend to be preverbal, while objects tend to be postverbal), a fact that is captured if subjects have the label NP-SB, while objects are labeled NP-OA (accusative object), NP-DA (dative object), etc. The fact that verb order differs between main and subordinate clauses is also captured by the function labels: main clauses are labeled S, while subordinate clauses are labeled S-OC (object clause), S-RC (relative clause), etc. Another idiosyncrasy of the Negra annotation is that conjoined categories have separate labels (S and CS, NP and CNP, etc.), and that PPs do not contain an NP node. We tested a variant of the Carroll and Rooth (1998) model that takes this into account.

4.1 Method

Data Sets  All experiments reported in this paper used the treebank format of Negra. This format, which is included in the Negra distribution, was derived from the native format by replacing crossing branches with traces. We split the corpus into three subsets. The first 18,602 sentences constituted the training set. Of the remaining 2,000 sentences, the first 1,000 served as the test set, and the last 1,000 as the development set. To increase parsing efficiency, we removed all sentences with more than 40 words. This resulted in a test set of 968 sentences and a development set of 975 sentences. Early versions of the models were tested on the development set, and the test set remained unseen until all parameters were fixed. The final results reported in this paper were obtained on the test set, unless stated otherwise.

Grammar Induction  For the unlexicalized PCFG model (henceforth baseline model), we used the probabilistic left-corner parser Lopar (Schmid, 2000). When run in unlexicalized mode, Lopar implements the model described in Section 3.1. A grammar and a lexicon for Lopar were read off the Negra training set, after removing all grammatical function labels. As Lopar cannot handle traces, these were also removed from the training data.
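As a rough illustration of how such an unlexicalized treebank grammar can be read off training trees, the sketch below collects productions and turns them into relative-frequency estimates of P(RHS|LHS). It is a schematic stand-in, not Lopar's induction procedure; the tree encoding and the toy sentence are invented for the example.

    from collections import Counter, defaultdict

    def collect_productions(tree, rules):
        """Count LHS -> RHS productions in a tree given as (label, [children]);
        leaves are plain strings (terminals)."""
        label, children = tree
        rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
        rules[(label, rhs)] += 1
        for child in children:
            if not isinstance(child, str):
                collect_productions(child, rules)

    def estimate_pcfg(treebank):
        rules = Counter()
        for tree in treebank:
            collect_productions(tree, rules)
        lhs_totals = defaultdict(int)
        for (lhs, _), count in rules.items():
            lhs_totals[lhs] += count
        # Maximum likelihood: P(RHS|LHS) = count(LHS -> RHS) / count(LHS)
        return {(lhs, rhs): count / lhs_totals[lhs]
                for (lhs, rhs), count in rules.items()}

    # A flat, Negra-style clause: subject, verb and object are sisters under S.
    toy_tree = ("S", [("NP", ["er"]), ("VVFIN", ["komponiert"]), ("NP", ["Musik"])])
    print(estimate_pcfg([toy_tree]))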
The head-lexicalized model of Carroll and Rooth (1998) (henceforth C&R model) was again realized using Lopar, which in lexicalized mode implements the model in Section 3.2. Lexicalization requires that each rule in a grammar has one of the categories on its righthand side annotated as the head. For the categories S, VP, AP, and AVP, the head is marked in Negra. For the other categories, we used rules to heuristically determine the head, as is standard practice for the Penn Treebank. The lexicalized model proposed by Collins (1997) (henceforth Collins model) was re-implemented by one of the authors. For training, empty categories were removed from the training data, as the model cannot handle them. The same head finding strategy was applied as for the C&R model. In this experiment, only head-head statistics were used (see (5)). The original Collins model uses sister-head statistics for non-recursive NPs. This will be discussed in detail in Section 5. Training and Testing For all three models, the model parameters were estimated using maximum likelihood estimation. Both Lopar and the Collins model use various backoff distributions to smooth the estimates. The reader is referred to Schmid (2000) and Collins (1997) for details. For the C&R model, we used a cutoff of one for rule frequencies Prule and lexical choice frequencies Pchoice (the cutoff value was optimized on the development set). We also tested variants of the baseline model and the C&R model that include grammatical function information, as we hypothesized that this information might help the model to handle wordorder variation more adequately, as explained above. Finally, we tested variant of the C&R model that uses Lopar’s parameter pooling feature. This feature makes it possible to collapse the lexical choice distribution Pchoice for either the daughter or the mother categories of a rule (see Section 3.2). We pooled the estimates for pairs of conjoined and nonconjoined daughter categories (S and CS, NP and CNP, etc.): these categories should be treated as the same daughters; e.g., there should be no difference between S →NP V and S →CNP V. We also pooled the estimates for the mother categories NPs and PPs. This is a way of dealing with the fact that there is no separate NP node within PPs in Negra. Lopar and the Collins model differ in their handling of unknown words. In Lopar, a POS tag distribution for unknown words has to be specified, which is then used to tag unknown words in the test data. The Collins model treats any word seen fewer than five times in the training data as unseen and uses an external POS tagger to tag unknown words. In order to make the models comparable, we used a uniform approach to unknown words. All models were run on POS-tagged input; this input was created by tagging the test set with a separate POS tagger, for both known and unknown words. We used TnT (Brants, 2000), trained on the Negra training set. The tagging accuracy was 97.12% on the development set. In order to obtain an upper bound for the performance of the parsing models, we also ran the parsers on the test set with the correct tags (as specified in Negra), again for both known and unknown words. We will refer to this mode as ‘perfect tagging’. All models were evaluated using standard PARSEVAL measures. We report labeled recall (LR) labeled precision (LP), average crossing brackets (CBs), zero crossing brackets (0CB), and two or less crossing brackets (≤2CB). 
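As a rough illustration of how these bracket-based measures are computed, the sketch below assumes gold and parsed trees have already been reduced to sets of labeled spans (label, start, end); it only unpacks the definitions and is not the evaluation software actually used.

    def parseval(gold, test):
        """gold, test: labeled spans (label, start, end) for one sentence."""
        gold_set, test_set = set(gold), set(test)
        match = len(gold_set & test_set)
        lp = match / len(test_set) if test_set else 0.0
        lr = match / len(gold_set) if gold_set else 0.0

        def crosses(a, b):
            # Two spans cross if they overlap without one containing the other.
            (_, s1, e1), (_, s2, e2) = a, b
            return s1 < s2 < e1 < e2 or s2 < s1 < e2 < e1

        cbs = sum(any(crosses(t, g) for g in gold_set) for t in test_set)
        return lp, lr, cbs

    gold = [("S", 0, 5), ("NP", 0, 2), ("NP", 3, 5)]
    test = [("S", 0, 5), ("NP", 1, 4)]
    print(parseval(gold, test))   # labeled precision, labeled recall, crossing brackets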
We also give the coverage (Cov), i.e., the percentage of sentences that the parser was able to parse. 4.2 Results The results for all three models and their variants are given in Table 2, for both TnT tags and perfect tags. The baseline model achieves 70.56% LR and 66.69% LP with TnT tags. Adding grammatical functions reduces both figures slightly, and coverage drops by about 15%. The C&R model performs worse than the baseline, at 68.04% LR and 60.07% LP (for TnT tags). Adding grammatical function again reduces performance slightly. Parameter pooling increases both LR and LP by about 1%. The Collins models also performs worse than the baseline, at 67.91% LR and 66.07% LP. Performance using perfect tags (an upper bound of model performance) is 2–3% higher for the baseline and for the C&R model. The Collins model gains only about 1%. Perfect tagging results in a performance increase of over 10% for the models with grammatical functions. This is not surprising, as the perfect tags (but not the TnT tags) include grammatical function labels. However, we also observe a dramatic reduction in coverage (to about 65%). 4.3 Discussion We added grammatical functions to both the baseline model and the C&R model, as we predicted that this would allow the model to better capture the wordorder facts of German. However, this prediction was not borne out: performance with grammatical functions (on TnT tags) was slightly worse than without, and coverage dropped substantially. A possible reason for this is sparse data: a grammar augmented with grammatical functions contains many additional categories, which means that many more parameters have to be estimated using the same training set. On the other hand, a performance increase occurs if the tagger also provides grammatical function labels (simulated in the perfect tags condition). However, this comes at the price of an unacceptable reduction in coverage. When training the C&R model, we included a variant that makes use of Lopar’s parameter pooling feature. We pooled the estimates for conjoined daughter categories, and for NP and PP mother categories. This is a way of taking the idiosyncrasies of the Negra annotation into account, and resulted in a small improvement in performance. The most surprising finding is that the best performance was achieved by the unlexicalized PCFG TnT tagging Perfect tagging LR LP CBs 0CB ≤2CB Cov LR LP CBs 0CB ≤2CB Cov Baseline 70.56 66.69 1.03 58.21 84.46 94.42 72.99 70.00 0.88 60.30 87.42 95.25 Baseline + GF 70.45 65.49 1.07 58.02 85.01 79.24 81.14 78.37 0.46 74.25 95.26 65.39 C&R 68.04 60.07 1.31 52.08 79.54 94.42 70.79 63.38 1.17 54.99 82.21 95.25 C&R + pool 69.07 61.41 1.28 53.06 80.09 94.42 71.74 64.73 1.11 56.40 83.08 95.25 C&R + GF 67.66 60.33 1.31 55.67 80.18 79.24 81.17 76.83 0.48 73.46 94.15 65.39 Collins 67.91 66.07 0.73 65.67 89.52 95.21 68.63 66.94 0.71 64.97 89.73 96.23 Table 2: Results for Experiment 1: comparison of lexicalized and unlexicalized models (GF: grammatical functions; pool: parameter pooling for NPs/PPs and conjoined categories) 0 20 40 60 80 100 percent of training corpus 45 50 55 60 65 70 75 f-score unlexicalized PCFG lexicalized PCFG (Collins) lexicalized PCFG (C&R) Figure 1: Learning curves for all three models baseline model. Both lexicalized models (C&R and Collins) performed worse than the baseline. This results is at odds with what has been found for English, where lexicalization is standardly reported to increase performance by about 10%. 
The poor performance of the lexicalized models could be due to a lack of sufficient training data: our Negra training set contains approximately 18,000 sentences, and is therefore significantly smaller than the Penn Treebank training set (about 40,000 sentences). Negra sentences are also shorter: they contain, on average, 15 words compared to 22 in the Penn Treebank. We computed learning curves for the unmodified variants (without grammatical functions or parameter pooling) of all three models (on the development set). The result (see Figure 1) shows that there is no evidence for an effect of sparse data. For both the baseline and the C&R model, a fairly high f-score is achieved with only 10% of the training data. A slow increase occurs as more training data is added. The performance of the Collins model is even less affected by training set size. This is probably due to the fact that it does not use rule probabilities directly, but generates rules using a Markov chain. 5 Experiment 2 As we saw in the last section, lack of training data is not a plausible explanation for the sub-baseline performance of the lexicalized models. In this experiment, we therefore investigate an alternative hypothesis, viz., that the lexicalized models do not cope Penn Negra NP 2.20 3.08 PP 2.03 2.66 Penn Negra VP 2.32 2.59 S 2.22 4.22 Table 3: Average number of daughters for the grammatical categories in the Penn Treebank and Negra well with the fact that Negra rules are so flat (see Section 2.2). We will focus on the Collins model, as it outperformed the C&R model in Experiment 1. An error analysis revealed that many of the errors of the Collins model in Experiment 1 are chunking errors. For example, the PP neben den Mitteln des Theaters should be analyzed as (6a). But instead the parser produces two constituents as in (6b)): (6) a. [PP neben apart den the Mitteln means [NP des the Theaters]] theater’s ‘apart from the means of the theater’. b. [PP neben den Mitteln] [NP des Theaters] The reason for this problem is that neben is the head of the constituent in (6), and the Collins model uses a crude distance measure together with head-head dependencies to decide if additional constituents should be added to the PP. The distance measure is inadequate for finding PPs with high precision. The chunking problem is more widespread than PPs. The error analysis shows that other constituents, including Ss and VPs, also have the wrong boundary. This problem is compounded by the fact that the rules in Negra are substantially flatter than the rules in the Penn Treebank, for which the Collins model was developed. Table 3 compares the average number of daughters in both corpora. The flatness of PPs is easy to reduce. As detailed in Section 2.2, PPs lack an intermediate NP projection, which can be inserted straightforwardly using the following rule: (7) [PP P . . . ] →[PP P [NP . . . ]] In the present experiment, we investigated if parsing performance improves if we test and train on a version of Negra on which the transformation in (7) has been applied. In a second series of experiments, we investigated a more general way of dealing with the flatness of C&R Collins Charniak Current Head sister category X X X Head sister head word X X X Head sister head tag X X Prev. sister category X X X Prev. sister head word X Prev. 
sister head tag X Table 4: Linguistic features in the current model compared to the models of Carroll and Rooth (1998), Collins (1997), and Charniak (2000) Negra, based on Collins’s (1997) model for nonrecursive NPs in the Penn Treebank (which are also flat). For non-recursive NPs, Collins (1997) does not use the probability function in (5), but instead substitutes Pr (and, by analogy, Pl) by: Pr(Ri,t(Ri),l(Ri)|P,Ri−1,t(Ri−1),l(Ri−1),d(i)) (8) Here the head H is substituted by the sister Ri−1 (and Li−1). In the literature, the version of Pr in (5) is said to capture head-head relationships. We will refer to the alternative model in (8) as capturing sister-head relationships. Using sister-head relationships is a way of counteracting the flatness of the grammar productions; it implicitly adds binary branching to the grammar. Our proposal is to extend the use of sister-head relationship from non-recursive NPs (as proposed by Collins) to all categories. Table 4 shows the linguistic features of the resulting model compared to the models of Carroll and Rooth (1998), Collins (1997), and Charniak (2000). The C&R model effectively includes category information about all previous sisters, as it uses contextfree rules. The Collins (1997) model does not use context-free rules, but generates the next category using zeroth order Markov chains (see Section 3.3), hence no information about the previous sisters is included. Charniak’s (2000) model extends this to higher order Markov chains (first to third order), and therefore includes category information about previous sisters.The current model differs from all these proposals: it does not use any information about the head sister, but instead includes the category, head word, and head tag of the previous sister, effectively treating it as the head. 5.1 Method We first trained the original Collins model on a modified versions of the training test from Experiment 1 in which the PPs were split by applying rule (7). In a second series of experiments, we tested a range of models that use sister-head dependencies instead of head-head dependencies for different categories. We first added sister-head dependencies for NPs (following Collins’s (1997) original proposal) and then for PPs, which are flat in Negra, and thus similar in structure to NPs (see Section 2.2). Then we tested a model in which sister-head relationships are applied to all categories. In a third series of experiments, we trained models that use sister-head relationships everywhere except for one category. This makes it possible to determine which sister-head dependencies are crucial for improving performance of the model. 5.2 Results The results of the PP experiment are listed in Table 5. Again, we give results obtained using TnT tags and using perfect tags. The row ‘Split PP’ contains the performance figures obtained by including split PPs in both the training and in the testing set. This leads to a substantial increase in LR (6–7%) and LP (around 8%) for both tagging schemes. Note, however, that these figures are not directly comparable to the performance of the unmodified Collins model: it is possible that the additional brackets artificially inflate LR and LP. Presumably, the brackets for split PPs are easy to detect, as they are always adjacent to a preposition. An honest evaluation should therefore train on the modified training set (with split PPs), but collapse the split categories for testing, i.e., test on the unmodified test set. 
The results for this evaluation are listed in rows ‘Collapsed PP’. Now there is no increase in performance compared to the unmodified Collins model; rather, a slight drop in LR and LP is observed. Table 5 also displays the results of our experiments with the sister-head model. For TnT tags, we observe that using sister-head dependencies for NPs leads to a small decrease in performance compared to the unmodified Collins model, resulting in 67.84% LR and 65.96% LP. Sister-head dependencies for PPs, however, increase performance substantially to 70.27% LR and 68.45% LP. The highest improvement is observed if head-sister dependencies are used for all categories; this results in 71.32% LR and 70.93% LP, which corresponds to an improvement of 3% in LP and 5% in LR compared to the unmodified Collins model. Performance with perfect tags is around 2–4% higher than with TnT tags. For perfect tags, sister-head dependencies lead to an improvement for NPs, PPs, and all categories. The third series of experiments was designed to determine which categories are crucial for achieving this performance gain. This was done by training models that use sister-head dependencies for all categories but one. Table 6 shows the change in LR and LP that was found for each individual category (again for TnT tags and perfect tags). The highest drop in performance (around 3%) is observed when the PP category is reverted to head-head dependencies. For S and for the coordinated categories (CS, TnT tagging Perfect tagging LR LP CBs 0CB ≤2CB Cov LR LP CBs 0CB ≤2CB Cov Unmod. Collins 67.91 66.07 0.73 65.67 89.52 95.21 68.63 66.94 0.71 64.97 89.73 96.23 Split PP 73.84 73.77 0.82 62.89 88.98 95.11 75.93 75.27 0.77 65.36 89.03 93.79 Collapsed PP 66.45 66.07 0.89 66.60 87.04 95.11 68.22 67.32 0.94 66.67 85.88 93.79 Sister-head NP 67.84 65.96 0.75 65.85 88.97 95.11 71.54 70.31 0.60 68.03 93.33 94.60 Sister-head PP 70.27 68.45 0.69 66.27 90.33 94.81 73.20 72.44 0.60 68.53 93.21 94.50 Sister-head all 71.32 70.93 0.61 69.53 91.72 95.92 73.93 74.24 0.54 72.30 93.47 95.21 Table 5: Results for Experiment 2: performance for models using split phrases and sister-head dependencies CNP, etc.), a drop in performance of around 1% each is observed. A slight drop is observed also for VP (around 0.5%). Only minimal fluctuations in performance are observed when the other categories are removed (AP, AVP, and NP): there is a small effect (around 0.5%) if TnT tags are used, and almost no effect for perfect tags. 5.3 Discussion We showed that splitting PPs to make Negra less flat does not improve parsing performance if testing is carried out on the collapsed categories. However, we observed that LR and LP are artificially inflated if split PPs are used for testing. This finding goes some way towards explaining why the parsing performance reported for the Penn Treebank is substantially higher than the results for Negra: the Penn Treebank contains split PPs, which means that there are lot of brackets that are easy to get right. The resulting performance figures are not directly comparable to figures obtained on Negra, or other corpora with flat PPs.2 We also obtained a positive result: we demonstrated that a sister-head model outperforms the unlexicalized baseline model (unlike the C&R model and the Collins model in Experiment 1). LR was about 1% higher and LP about 4% higher than the baseline if lexical sister-head dependencies are used for all categories. This holds both for TnT tags and for perfect tags (compare Tables 2 and 5). 
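To make the difference between the two conditioning regimes concrete, the sketch below only exposes the change in the conditioning variable when the sisters of a flat rule are generated. The real models additionally condition on head words, head tags and a distance measure, generate left and right sisters separately, and use stop categories; all of that is omitted here.

    def head_head_contexts(parent, head, sisters):
        """Collins-style: each sister is generated conditioned on (parent, head)."""
        return [(sister, parent, head) for sister in sisters]

    def sister_head_contexts(parent, head, sisters):
        """Sister-head model: each sister is conditioned on the previous sister."""
        contexts, previous = [], head
        for sister in sisters:
            contexts.append((sister, parent, previous))
            previous = sister
        return contexts

    # A flat Negra-style S: the finite verb is the head, everything else a sister.
    sisters = ["NP-SB", "NP-OA", "PP", "VVPP"]
    print(head_head_contexts("S", "VVFIN", sisters))
    print(sister_head_contexts("S", "VVFIN", sisters))

Each context tuple stands for one factor looked up during generation; under sister-head conditioning the flat rule is implicitly binarised.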
We also found that using lexical sister-head dependencies for all categories leads to a larger improvement than using them only for NPs or PPs (see Table 5). This result was confirmed by a second series of experiments, where we reverted individual categories back to head-head dependencies, which triggered a decrease in performance for all categories, with the exception of NP, AP, and AVP (see Table 6). On the whole, the results of Experiment 2 are at odds with what is known about parsing for English. The progression in the probabilistic parsing literature has been to start with lexical head-head dependencies (Collins, 1997) and then add non-lexical sis2This result generalizes to Ss, which are also flat in Negra (see Section 2.2). We conducted an experiment in which we added an SBAR above the S. No increase in performance was obtained if the evaluation was carried using collapsed Ss. TnT tagging Perfect tagging ∆LR ∆LP ∆LR ∆LP PP −3.45 −1.60 −4.21 −3.35 S −1.28 0.11 −2.23 −1.22 Coord −1.87 −0.39 −1.54 −0.80 VP −0.72 0.18 −0.58 −0.30 AP −0.57 0.10 0.08 −0.07 AVP −0.32 0.44 0.10 0.11 NP 0.06 0.78 −0.15 0.02 Table 6: Change in performance when reverting to head-head statistics for individual categories ter information (Charniak, 2000), as illustrated in Table 4. Lexical sister-head dependencies have only been found useful in a limited way: in the original Collins model, they are used for non-recursive NPs. Our results show, however, that for parsing German, lexical sister-head information is more important than lexical head-head information. Only a model that replaced lexical head-head with lexical sister-head dependencies was able to outperform a baseline model that uses no lexicalization.3 Based on the error analysis for Experiment 1, we claim that the reason for the success of the sister-head model is the fact that the rules in Negra are so flat; using a sister-head model is a way of binarizing the rules. 6 Comparison with Previous Work There are currently no probabilistic, treebanktrained parsers available for German (to our knowledge). A number of chunking models have been proposed, however. Skut and Brants (1998) used Negra to train a maximum entropy-based chunker, and report LR and LP of 84.4% for NP and PP chunking. Using cascaded Markov models, Brants (2000) reports an improved performance on the same task (LR 84.4%, LP 88.3%). Becker and Frank (2002) train an unlexicalized PCFG on Negra to perform a different chunking task, viz., the identification of topological fields (sentence-based chunks). They report an LR and LP of 93%. The head-lexicalized model of Carroll and Rooth (1998) has been applied to German by Beil et al. 3It is unclear what effect bi-lexical statistics have on the sister-head model; while Gildea (2001) shows bi-lexical statistics are sparse for some grammars, Hockenmaier and Steedman (2002) found they play a greater role in binarized grammars. (1999, 2002). However, this approach differs in the number of ways from the results reported here: (a) a hand-written grammar (instead of a treebank grammar) is used; (b) training is carried out on unannotated data; (c) the grammar and the training set cover only subordinate and relative clauses, not unrestricted text. Beil et al. (2002) report an evaluation using an NP chunking task, achieving 92% LR and LP. They also report the results of a task-based evaluation (extraction of sucategorization frames). There is some research on treebank-based parsing of languages other than English. The work by Collins et al. 
(1999) and Bikel and Chiang (2000) has demonstrated the applicability of the Collins (1997) model for Czech and Chinese. The performance reported by these authors is substantially lower than the one reported for English, which might be due to the fact that less training data is available for Czech and Chinese (see Table 1). This hypothesis cannot be tested, as the authors do not present learning curves for their models. However, the learning curve for Negra (see Figure 1) indicates that the performance of the Collins (1997) model is stable, even for small training sets. Collins et al. (1999) and Bikel and Chiang (2000) do not compare their models with an unlexicalized baseline; hence it is unclear if lexicalization really improves parsing performance for these languages. As Experiment 1 showed, this cannot be taken for granted. 7 Conclusions We presented the first probabilistic full parsing model for German trained on Negra, a syntactically annotated corpus. This model uses lexical sisterhead dependencies, which makes it particularly suitable for parsing Negra’s flat structures. The flatness of the Negra annotation reflects the syntactic properties of German, in particular its semi-free wordorder. In Experiment 1, we applied three standard parsing models from the literature to Negra: an unlexicalized PCFG model (the baseline), Carroll and Rooth’s (1998) head-lexicalized model, and Collins’s (1997) model based on head-head dependencies. The results show that the baseline model achieves a performance of up to 73% recall and 70% precision. Both lexicalized models perform substantially worse. This finding is at odds with what has been reported for parsing models trained on the Penn Treebank. As a possible explanation we considered lack of training data: Negra is about half the size of the Penn Treebank. However, the learning curves for the three models failed to produce any evidence that they suffer from sparse data. In Experiment 2, we therefore investigated an alternative hypothesis: the poor performance of the lexicalized models is due to the fact that the rules in Negra are flatter than in the Penn Treebank, which makes lexical head-head dependencies less useful for correctly determining constituent boundaries. Based on this assumption, we proposed an alternative model hat replaces lexical head-head dependencies with lexical sister-head dependencies. This can the thought of as a way of binarizing the flat rules in Negra. The results show that sister-head dependencies improve parsing performance not only for NPs (which is well-known for English), but also for PPs, VPs, Ss, and coordinate categories. The best performance was obtained for a model that uses sister-head dependencies for all categories. This model achieves up to 74% recall and precision, thus outperforming the unlexicalized baseline model. It can be hypothesized that this finding carries over to other treebanks that are annotated with flat structures. Such annotation schemes are often used for languages that (unlike English) have a free or semi-free wordorder. Testing our sister-head model on these languages is a topic for future research. References Becker, Markus and Anette Frank. 2002. A stochastic topological parser of German. In Proceedings of the 19th International Conference on Computational Linguistics. Taipei. Beil, Franz, Glenn Carroll, Detlef Prescher, Stefan Riezler, and Mats Rooth. 1999. Inside-outside estimation of a lexicalized PCFG for German. 
In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. College Park, MA. Beil, Franz, Detlef Prescher, Helmut Schmid, and Sabine Schulte im Walde. 2002. Evaluation of the Gramotron parser for German. In Proceedings of the LREC Workshop Beyond Parseval: Towards Improved Evaluation Measures for Parsing Systems. Las Palmas, Gran Canaria. Bikel, Daniel M. and David Chiang. 2000. Two statistical parsing models applied to the Chinese treebank. In Proceedings of the 2nd ACL Workshop on Chinese Language Processing. Hong Kong. Brants, Thorsten. 2000. TnT: A statistical part-of-speech tagger. In Proceedings of the 6th Conference on Applied Natural Language Processing. Seattle. Carroll, Glenn and Mats Rooth. 1998. Valence induction with a head-lexicalized PCFG. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Granada. Charniak, Eugene. 1993. Statistical Language Learning. MIT Press, Cambridge, MA. Charniak, Eugene. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the 14th National Conference on Artificial Intelligence. AAAI Press, Cambridge, MA. Charniak, Eugene. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics. Seattle. Collins, Michael. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Conference of the European Chapter of the Association for Computational Linguistics. Madrid. Collins, Michael, Jan Hajiˇc, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for Czech. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. College Park, MA. Gildea, Daniel. 2001. Corpus variation and parser performance. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Pittsburgh. Hockenmaier, Julia and Mark Steedman. 2002. Generative models for statistical parsing with combinatory categorial grammar. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia. Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19(2). Schmid, Helmut. 2000. LoPar: Design and implementation. Ms., Institute for Computational Linguistics, University of Stuttgart. Skut, Wojciech and Thorsten Brants. 1998. A maximum-entropy partial parser for unrestricted text. In Proceedings of the 6th Workshop on Very Large Corpora. Montr´eal. Skut, Wojciech, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An annotation scheme for free word order languages. In Proceedings of the 5th Conference on Applied Natural Language Processing. Washington, DC. Uszkoreit, Hans. 1987. Word Order and Constituent Structure in German. CSLI Publications, Stanford, CA.
2003
13
Integrated Shallow and Deep Parsing: TopP meets HPSG Anette Frank, Markus Becker z, Berthold Crysmann, Bernd Kiefer and Ulrich Sch¨afer DFKI GmbH School of Informatics z 66123 Saarbr¨ucken, Germany University of Edinburgh, UK [email protected] [email protected] Abstract We present a novel, data-driven method for integrated shallow and deep parsing. Mediated by an XML-based multi-layer annotation architecture, we interleave a robust, but accurate stochastic topological field parser of German with a constraintbased HPSG parser. Our annotation-based method for dovetailing shallow and deep phrasal constraints is highly flexible, allowing targeted and fine-grained guidance of constraint-based parsing. We conduct systematic experiments that demonstrate substantial performance gains.1 1 Introduction One of the strong points of deep processing (DNLP) technology such as HPSG or LFG parsers certainly lies with the high degree of precision as well as detailed linguistic analysis these systems are able to deliver. Although considerable progress has been made in the area of processing speed, DNLP systems still cannot rival shallow and medium depth technologies in terms of throughput and robustness. As a net effect, the impact of deep parsing technology on application-oriented NLP is still fairly limited. With the advent of XML-based hybrid shallowdeep architectures as presented in (Grover and Lascarides, 2001; Crysmann et al., 2002; Uszkoreit, 2002) it has become possible to integrate the added value of deep processing with the performance and robustness of shallow processing. So far, integration has largely focused on the lexical level, to improve upon the most urgent needs in increasing the robustness and coverage of deep parsing systems, namely 1This work was in part supported by a BMBF grant to the DFKI project WHITEBOARD (FKZ 01 IW 002). lexical coverage. While integration in (Grover and Lascarides, 2001) was still restricted to morphological and PoS information, (Crysmann et al., 2002) extended shallow-deep integration at the lexical level to lexico-semantic information, and named entity expressions, including multiword expressions. (Crysmann et al., 2002) assume a vertical, ‘pipeline’ scenario where shallow NLP tools provide XML annotations that are used by the DNLP system as a preprocessing and lexical interface. The perspective opened up by a multi-layered, data-centric architecture is, however, much broader, in that it encourages horizontal cross-fertilisation effects among complementary and/or competing components. One of the culprits for the relative inefficiency of DNLP parsers is the high degree of ambiguity found in large-scale grammars, which can often only be resolved within a larger syntactic domain. Within a hybrid shallow-deep platform one can take advantage of partial knowledge provided by shallow parsers to pre-structure the search space of the deep parser. In this paper, we will thus complement the efforts made on the lexical side by integration at the phrasal level. We will show that this may lead to considerable performance increase for the DNLP component. More specifically, we combine a probabilistic topological field parser for German (Becker and Frank, 2002) with the HPSG parser of (Callmeier, 2000). The HPSG grammar used is the one originally developed by (M¨uller and Kasper, 2000), with significant performance enhancements by B. Crysmann. 
In Section 2 we discuss the mapping problem involved with syntactic integration of shallow and deep analyses and motivate our choice to combine the HPSG system with a topological parser. Section 3 outlines our basic approach towards syntactic shallow-deep integration. Section 4 introduces various confidence measures, to be used for fine-tuning of phrasal integration. Sections 5 and 6 report on experiments and results of integrated shallow-deep parsing, measuring the effect of various integration parameters on performance gains for the DNLP component. Section 7 concludes and discusses possible extensions, to address robustness issues. 2 Integrated Shallow and Deep Processing The prime motivation for integrated shallow-deep processing is to combine the robustness and efficiency of shallow processing with the accuracy and fine-grainedness of deep processing. Shallow analyses could be used to pre-structure the search space of a deep parser, enhancing its efficiency. Even if deep analysis fails, shallow analysis could act as a guide to select partial analyses from the deep parser’s chart – enhancing the robustness of deep analysis, and the informativeness of the combined system. In this paper, we concentrate on the usage of shallow information to increase the efficiency, and potentially the quality, of HPSG parsing. In particular, we want to use analyses delivered by an efficient shallow parser to pre-structure the search space of HPSG parsing, thereby enhancing its efficiency, and guiding deep parsing towards a best-first analysis suggested by shallow analysis constraints. The search space of an HPSG chart parser can be effectively constrained by external knowledge sources if these deliver compatible partial subtrees, which would then only need to be checked for compatibility with constituents derived in deep parsing. Raw constituent span information can be used to guide the parsing process by penalizing constituents which are incompatible with the precomputed ‘shape’. Additional information about proposed constituents, such as categorial or featural constraints, provide further criteria for prioritising compatible, and penalising incompatible constituents in the deep parser’s chart. An obvious challenge for our approach is thus to identify suitable shallow knowledge sources that can deliver compatible constraints for HPSG parsing. 2.1 The Shallow-Deep Mapping Problem However, chunks delivered by state-of-the-art shallow parsers are not isomorphic to deep syntactic analyses that explicitly encode phrasal embedding structures. As a consequence, the boundaries of deep grammar constituents in (1.a) cannot be predetermined on the basis of a shallow chunk analysis (1.b). Moreover, the prevailing greedy bottom-up processing strategies applied in chunk parsing do not take into account the macro-structure of sentences. They are thus easily trapped in cases such as (2). (1) a. [ CLThere was [ NP a rumor [ CL it was going to be bought by [ NP a French company [ CL that competes in supercomputers]]]]]. b. [ CLThere was [ NP a rumor]] [ CL it was going to be bought by [ NP a French company]] [ CL that competes in supercomputers]. (2) Fred eats [ NP pizza and Mary] drinks wine. In sum, state-of-the-art chunk parsing does neither provide sufficient detail, nor the required accuracy to act as a ‘guide’ for deep syntactic analysis. 2.2 Stochastic Topological Parsing Recently, there is revived interest in shallow analyses that determine the clausal macro-structure of sentences. 
The topological field model of (German) syntax (H¨ohle, 1983) divides basic clauses into distinct fields – pre-, middle-, and post-fields – delimited by verbal or sentential markers, which constitute the left/right sentence brackets. This model of clause structure is underspecified, or partial as to non-sentential constituent structure, but provides a theory-neutral model of sentence macro-structure. Due to its linguistic underpinning, the topological field model provides a pre-partitioning of complex sentences that is (i) highly compatible with deep syntactic analysis, and thus (ii) maximally effective to increase parsing efficiency if interleaved with deep syntactic analysis; (iii) partiality regarding the constituency of non-sentential material ensures robustness, coverage, and processing efficiency. (Becker and Frank, 2002) explored a corpusbased stochastic approach to topological field parsing, by training a non-lexicalised PCFG on a topological corpus derived from the NEGRA treebank of German. Measured on the basis of hand-corrected PoS-tagged input as provided by the NEGRA treebank, the parser achieves 100% coverage for length  40 (99.8% for all). Labelled precision and recall are around 93%. Perfect match (full tree identity) is about 80% (cf. Table 1, disamb +). In this paper, the topological parser was provided a tagger front-end for free text processing, using the TnT tagger (Brants, 2000). The grammar was ported to the efficient LoPar parser of (Schmid, 2000). Tagging inaccuracies lead to a drop of 5.1/4.7 percentCL-V2 VF-TOPIC LK-VFIN MF RK-VPART NF ART NN VAFIN ART ADJA NN VAPP CL-SUBCL Der,1 Zehnkampf,2 h¨atte,3 eine,4 andere,5 Dimension,6 gehabt,7 , The decathlon would have a other dimension had LK-COMPL MF RK-VFIN KOUS PPER PROAV VAPP VAFIN wenn,9 er,10 dabei,11 gewesen,12 w¨are,13 . if he there been had . <TOPO2HPSG type=”root” id=”5608” > <MAP CONSTR id=”T1” constr=”v2 cp” conf en t=”0.87” left=”W1” right=”W13”/ > <MAP CONSTR id=”T2” constr=”v2 vf” conf en t=”0.87” left=”W1” right=”W2”/ > <MAP CONSTR id=”T3” constr=”vfronted vfin+rk” conf en t=”0.87” left=”W3” right=”W3”/ > <MAP CONSTR id=”T6” constr=”vfronted rk-complex” conf en t=”0.87” left=”W7” right=”W7”/ > <MAP CONSTR id=”T4” constr=”vfronted vfin+vp+rk” conf en t=”0.87” left=”W3” right=”W13”/ > <MAP CONSTR id=”T5” constr=”vfronted vp+rk” conf en t=”0.87” left=”W4” right=”W13”/ > <MAP CONSTR id=”T10” constr=”extrapos rk+nf” conf en t=”0.87” left=”W7” right=”W13”/ > <MAP CONSTR id=”T7” constr=”vl cpfin compl” conf en t=”0.87” left=”W9” right=”W13”/ > <MAP CONSTR id=”T8” constr=”vl compl vp” conf en t=”0.87” left=”W10” right=”W13”/> <MAP CONSTR id=”T9” constr=”vl rk fin+complex+finlast” conf en t=”0.87” left=”W12” right=”W13”/> </TOPO2HPSG> Der D Zehnkampf N’ NP-NOM-SG haette V eine D andere AP-ATT Dimension N’ N’ NP-ACC-SG gehabt V EPS wenn C er NP-NOM-SG dabei PP gewesen V waere V-LE V V S CP-MOD EPS EPS EPS/NP-NOM-SG S/NP-NOM-SG S Figure 1: Topological tree w/param. cat., TOPO2HPSG map-constraints, tree skeleton of HPSG analysis dis- cove- perfect LP LR 0CB 2CB amb rage match in % in % in % in % + 100.0 80.4 93.4 92.9 92.1 98.9 99.8 72.1 88.3 88.2 87.8 97.9 Table 1: Disamb: correct (+) / tagger () PoS input. Eval. on atomic (vs. parameterised) category labels. age points in LP/LR, and 8.3 percentage points in perfect match rate (Table 1, disamb ). 
As seen in Figure 1, the topological trees abstract away from non-sentential constituency – phrasal fields MF (middle-field) and VF (pre-field) directly expand to PoS tags. By contrast, they perfectly render the clausal skeleton and embedding structure of complex sentences. In addition, parameterised category labels encode larger syntactic contexts, or ‘constructions’, such as clause type (CL-V2, -SUBCL, -REL), or inflectional patterns of verbal clusters (RKVFIN,-VPART). These properties, along with their high accuracy rate, make them perfect candidates for tight integration with deep syntactic analysis. Moreover, due to the combination of scrambling and discontinuous verb clusters in German syntax, a deep parser is confronted with a high degree of local ambiguity that can only be resolved at the clausal level. Highly lexicalised frameworks such as HPSG, however, do not lend themselves naturally to a topdown parsing strategy. Using topological analyses to guide the HPSG will thus provide external top-down information for bottom-up parsing. 3 TopP meets HPSG Our work aims at integration of topological and HPSG parsing in a data-centric architecture, where each component acts independently2 – in contrast to the combination of different syntactic formalisms within a unified parsing process.3 Data-based integration not only favours modularity, but facilitates flexible and targeted dovetailing of structures. 3.1 Mapping Topological to HPSG Structures While structurally similar, topological trees are not fully isomorphic to HPSG structures. In Figure 1, e.g., the span from the verb ‘h¨atte’ to the end of the sentence forms a constituent in the HPSG analysis, while in the topological tree the same span is dominated by a sequence of categories: LK, MF, RK, NF. Yet, due to its linguistic underpinning, the topological tree can be used to systematically predict key constituents in the corresponding ‘target’ HPSG 2See Section 6 for comparison to recent work on integrated chunk-based and dependency parsing in (Daum et al., 2003). 3As, for example, in (Duchier and Debusmann, 2001). analysis. We know, for example, that the span from the fronted verb (LK-VFIN) till the end of its clause CL-V2 corresponds to an HPSG phrase. Also, the first position that follows this verb, here the leftmost daughter of MF, demarcates the left edge of the traditional VP. Spans of the vorfeld VF and clause categories CL exactly match HPSG constituents. Category CL-V2 tells us that we need to reckon with a fronted verb in position of its LK daughter, here 3, while in CL-SUBCL we expect a complementiser in the position of LK, and a finite verb within the right verbal complex RK, which spans positions 12 to 13. In order to communicate such structural constraints to the deep parser, we scan the topological tree for relevant configurations, and extract the span information for the target HPSG constituents. The resulting ‘map constraints’ (Fig. 1) encode a bracket type name4 that identifies the target constituent and its left and right boundary, i.e. the concrete span in the sentence under consideration. The span is encoded by the word position index in the input, which is identical for the two parsing processes.5 In addition to pure constituency constraints, a skilled grammar writer will be able to associate specific HPSG grammar constraints – positive or negative – with these bracket types. These additional constraints will be globally defined, to permit finegrained guidance of the parsing process. 
This and further information (cf. Section 4) is communicated to the deep parser by way of an XML interface. 3.2 Annotation-based Integration In the annotation-based architecture of (Crysmann et al., 2002), XML-encoded analysis results of all components are stored in a multi-layer XML chart. The architecture employed in this paper improves on (Crysmann et al., 2002) by providing a central Whiteboard Annotation Transformer (WHAT) that supports flexible and powerful access to and transformation of XML annotation based on standard XSLT engines6 (see (Sch¨afer, 2003) for more details on WHAT). Shallow-deep integration is thus fully annotation driven. Complex XSLT transformations are applied to the various analyses, in order to 4We currently extract 34 different bracket types. 5We currently assume identical tokenisation, but could accommodate for distinct tokenisation regimes, using map tables. 6Advantages we see in the XSLT approach are (i) minimised programming effort in the target implementation language for XML access, (ii) reuse of transformation rules in multiple modules, (iii) fast integration of new XML-producing components. extract or combine independent knowledge sources, including XPath access to information stored in shallow annotation, complex XSLT transformations to the output of the topological parser, and extraction of bracket constraints. 3.3 Shaping the Deep Parser’s Search Space The HPSG parser is an active bidirectional chart parser which allows flexible parsing strategies by using an agenda for the parsing tasks.7 To compute priorities for the tasks, several information sources can be consulted, e.g. the estimated quality of the participating edges or external resources like PoS tagger results. Object-oriented implementation of the priority computation facilitates exchange and, moreover, combination of different ranking strategies. Extending our current regime that uses PoS tagging for prioritisation,8 we are now utilising phrasal constraints (brackets) from topological analysis to enhance the hand-crafted parsing heuristic employed so far. Conditions for changing default priorities Every bracket pair br x computed from the topological analysis comes with a bracket type x that defines its behaviour in the priority computation. Each bracket type can be associated with a set of positive and negative constraints that state a set of permissible or forbidden rules and/or feature structure configurations for the HPSG analysis. The bracket types fall into three main categories: left-, right-, and fully matching brackets. A rightmatching bracket may affect the priority of tasks whose resulting edge will end at the right bracket of a pair, like, for example, a task that would combine edges C and F or C and D in Fig. 2. Left-matching brackets work analogously. For fully matching brackets, only tasks that produce an edge that matches the span of the bracket pair can be affected, like, e.g., a task that combines edges B and C in Fig. 2. If, in addition, specified rule as well as feature structure constraints hold, the task is rewarded if they are positive constraints, and penalised if they are negative ones. All tasks that produce crossing edges, i.e. where one endpoint lies strictly inside the bracket pair and the other lies strictly outside, are penalised, e.g., a task that combines edges A and B. This behaviour can be implemented efficiently when we assume that the computation of a task pri7A parsing task encodes the possible combination of a passive and an active chart edge. 
ority takes into account the priorities of the tasks it builds upon. This guarantees that the effect of changing one task in the parsing process will propagate to all depending tasks without having to check the bracket conditions repeatedly. For each task, it is sufficient to examine the start- and endpoints of the building edges to determine if its priority is affected by some bracket. Only four cases can occur:

1. The new edge spans a pair of brackets: a match
2. The new edge starts or ends at one of the brackets, but does not match: a left or right hit
3. One bracket of a pair is at the joint of the building edges and a start- or endpoint lies strictly inside the brackets: a crossing (edges A and B in Fig. 2)
4. No bracket at the endpoints of both edges: use the default priority

[Figure 2: An example chart with a bracket pair br_x of type x and edges A-F; the dashed edges are active.]

8 See e.g. (Prins and van Noord, 2001) for related work.

For left-/right-matching brackets, a match behaves exactly like the corresponding left or right hit.

Computing the new priority. If the priority of a task is changed, the change is computed relative to the default priority. We use two alternative confidence values, and a hand-coded parameter γ(x), to adjust the impact on the default priority heuristics. conf_ent(br_x) specifies the confidence for a concrete bracket pair br_x of type x in a given sentence, based on the tree entropy of the topological parse. conf_pr(x) specifies a measure of 'expected accuracy' for each bracket type. Sec. 4 will introduce these measures. The priority p(t) of a task t involving a bracket br_x is computed from the default priority p̃(t) by:

p(t) = p̃(t) * (1 ± conf_ent(br_x) * conf_pr(x) * γ(x))

where the sign is positive for rewarded tasks and negative for penalised ones.

4 Confidence Measures

This way of calculating priorities allows flexible parameterisation for the integration of bracket constraints. While the topological parser's accuracy is high, we need to reckon with (partially) wrong analyses that could counter the expected performance gains. An important factor is therefore the confidence we can have, for any new sentence, in the best parse delivered by the topological parser: if confidence is high, we want it to be fully considered for prioritisation – if it is low, we want to lower its impact, or completely ignore the proposed brackets. We will experiment with two alternative confidence measures: (i) expected accuracy of particular bracket types extracted from the best parse delivered, and (ii) tree entropy based on the probability distribution encountered in a topological parse, as a measure of the overall accuracy of the best parse proposed – and thus the extracted brackets.9

4.1 Conf_pr: Accuracy of map-constraints

To determine a measure of 'expected accuracy' for the map constraints, we computed precision and recall for the 34 bracket types by comparing the extracted brackets from the suite of best delivered topological parses against the brackets we extracted from the trees in the manually annotated evaluation corpus in (Becker and Frank, 2002). We obtain 88.3% precision, 87.8% recall for brackets extracted from the best topological parse, run with a TnT front end. We chose precision of extracted bracket types as a static confidence weight for prioritisation. Precision figures are distributed as follows: 26.5% of the bracket types have precision ≥ 90% (93.1% in avg, 53.5% of bracket mass), 50% have precision ≥ 80% (88.9% avg, 77.7% bracket mass), and 20.6% have precision ≤ 50% (41.26% in avg, 2.7% bracket mass).
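A per-bracket-type confidence of this kind can be computed along the following lines. The sketch below is a minimal Python illustration with invented bracket sets and function names, not the evaluation code actually used; it only mirrors the idea of using (optionally thresholded) precision as the static weight conf_pr(x).

from collections import defaultdict

def bracket_type_precision(predicted, gold):
    """predicted, gold: one set of (type, left, right) brackets per sentence.
    Returns precision per bracket type, used as the static confidence conf_pr(x)."""
    correct, proposed = defaultdict(int), defaultdict(int)
    for pred_sent, gold_sent in zip(predicted, gold):
        for bracket in pred_sent:
            proposed[bracket[0]] += 1
            if bracket in gold_sent:
                correct[bracket[0]] += 1
    return {btype: correct[btype] / proposed[btype] for btype in proposed}

def conf_pr(precision_by_type, btype, threshold=None):
    """Static confidence weight of a bracket type; with a threshold, types below
    it are excluded from prioritisation altogether (weight 0)."""
    p = precision_by_type.get(btype, 0.0)
    return 0.0 if threshold is not None and p < threshold else p

# Two invented sentences with two invented bracket types.
predicted = [{("v2_vp", 3, 6), ("vf_phrase", 1, 2)}, {("v2_vp", 2, 5)}]
gold      = [{("v2_vp", 3, 6), ("vf_phrase", 1, 3)}, {("v2_vp", 2, 5)}]
prec = bracket_type_precision(predicted, gold)
print(prec)                                       # e.g. {'v2_vp': 1.0, 'vf_phrase': 0.0}
print(conf_pr(prec, "vf_phrase", threshold=0.7))  # 0.0: dropped by the threshold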
For experiments using a threshold on conf_pr(x) for bracket type x, we set a threshold value of 0.7, which excludes 32.35% of the low-confidence bracket types (and 22.1% bracket mass), and includes chunk-based brackets (see Section 5).

4.2 Conf_ent: Entropy of Parse Distribution

While precision over bracket types is a static measure that is independent from the structural complexity of a particular sentence, tree entropy is defined as the entropy over the probability distribution of the set of parsed trees for a given sentence. It is a useful measure to assess how certain the parser is about the best analysis, e.g. to measure the training utility value of a data point in the context of sample selection (Hwa, 2000). We thus employ tree entropy as a confidence measure for the quality of the best topological parse, and the extracted bracket constraints.9

9 Further measures are conceivable: We could extract brackets from some n-best topological parses, associating them with weights, using methods similar to (Carroll and Briscoe, 2002).

We carry out an experiment to assess the effect of varying entropy thresholds on precision and recall of topological parsing, in terms of perfect match rate, and show a way to determine an optimal threshold value. We compute tree entropy over the full probability distribution, and normalise the values to be distributed in a range between 0 and 1. The normalisation factor is empirically determined as the highest entropy over all sentences of the training set.10

10 Possibly higher values in the test set will be clipped to 1.

Experimental setup. We randomly split the manually corrected evaluation corpus of (Becker and Frank, 2002) (for sentence length ≤ 40) into a training set of 600 sentences and a test set of 408 sentences. This yields the following values for the training set (test set in brackets): initial perfect match rate is 73.5% (70.0%), LP 88.8% (87.6%), and LR 88.5% (87.8%).11 Coverage is 99.8% for both.

11 Evaluation figures for this experiment are given disregarding parameterisation (and punctuation), corresponding to the first row of figures in Table 1.

Evaluation measures. For the task of identifying the perfect matches from a set of parses we give the following standard definitions: precision is the proportion of selected parses that have a perfect match – thus being the perfect match rate, and recall is the proportion of perfect matches that the system selected. Coverage is usually defined as the proportion of attempted analyses with at least one parse. We extend this definition to treat successful analyses with a high tree entropy as being out of coverage. Fig. 3 shows the effect of decreasing entropy thresholds on precision, recall and coverage. The unfiltered set of all sentences is found at a threshold of 1. Lowering the threshold increases precision, and decreases recall and coverage. We determine f-measure as a composite measure with equal weighting of precision and recall.

[Figure 3: Effect of different thresholds of normalized entropy on precision, recall, and coverage.]

[Figure 4: Maximise f-measure on the training set to determine the best entropy threshold.]

Results. We use f-measure as a target function on the training set to determine a plausible threshold. F-measure is maximal at a threshold of 0.236 with 88.9%, see Figure 4. Precision and recall are 83.7% and 94.8% resp., while coverage goes down to 83.0%. Applying the same threshold on the test set, we get the following results: 80.5% precision, 93.0% recall. Coverage goes down to 80.6%. LP is 93.3%, LR is 91.2%.

Confidence Measure. We distribute the complement of the associated tree entropy of a parse tree tr as a global confidence measure over all brackets br extracted from that parse: conf_ent(br) = 1 − ent(tr). For the thresholded version of conf_ent(br), we set the threshold to 1 − 0.236 = 0.764.

5 Experiments

Experimental Setup. In the experiments we use the subset of the NEGRA corpus (5060 sents, 24.57%) that is currently parsed by the HPSG grammar.12 Average sentence length is 8.94, ignoring punctuation; average lexical ambiguity is 3.05 entries/word. As baseline, we performed a run without topological information, yet including PoS prioritisation from tagging.13 A series of tests explores the effects of alternative parameter settings. We further test the impact of chunk information. To this end, phrasal fields determined by topological parsing were fed to the chunk parser of (Skut and Brants, 1998). Extracted NP and PP bracket constraints are defined as left-matching bracket types, to compensate for the non-embedding structure of chunks. Chunk brackets are tested in conjunction with topological brackets, and in isolation, using the labelled precision value of 71.1% in (Skut and Brants, 1998) as a uniform confidence weight.14

12 This test set is different from the corpus used in Section 4.
13 In a comparative run without PoS-prioritisation, we established a speed-up factor of 1.13 towards the baseline used in our experiment, with a slight increase in coverage (1%). This compares to a speed-up factor of 2.26 reported in (Daum et al., 2003), by integration of PoS guidance into a dependency parser.

Measures. For all runs we measure the absolute time and the number of parsing tasks needed to compute the first reading. The times in the individual runs were normalised according to the number of executed tasks per second. We noticed that the coverage of some integrated runs decreased by up to 1% of the 5060 test items, with a typical loss of around 0.5%. To warrant that we are not just trading coverage for speed, we derived two measures from the primary data: an upper bound, where we associated every unsuccessful parse with the time and number of tasks used when the limit of 70000 passive edges was hit, and a lower bound, where we removed the most expensive parses from each run, until we reached the same coverage. Whereas the upper bound is certainly more realistic in an application context, the lower bound gives us a worst case estimate of expectable speed-up.

Integration Parameters. We explored the following range of weighting parameters for prioritisation (see Section 3.3 and Table 2). We use two global settings for the heuristic parameter γ. Setting γ to 1/2 without using any confidence measure causes the priority of every affected parsing task to be in- or decreased by half its value. Setting γ to 1 drastically increases the influence of topological information: the priority for rewarded tasks is doubled and set to zero for penalised ones. The first two runs (rows with −P −E) ignore both confidence parameters (conf_pr = conf_ent = 1), measuring only the effect of higher or lower influence of topological information. In the remaining six runs, the impact of the confidence measures conf_pr and conf_ent is tested individually, namely +P −E and −P +E, by setting the resp. alternative value to 1. For two runs, we set the resp.
confidence values that drop below a certain threshold to zero (PT, ET) to exclude un14The experiments were run on a 700 MHz Pentium III machine. For all runs, the maximum number of passive edges was set to the comparatively high value of 70000. factor msec (1st) tasks low-b up-b low-b up-b low-b up-b Baseline 524 675 3813 4749 Integration of topological brackets w/ parameters P E 1 2 2.21 2.17 237 310 1851 2353 P E 1 2.04 2.10 257 320 2037 2377 +P E 1 2 2.15 2.21 243 306 1877 2288 PT E 1 2 2.20 2.30 238 294 1890 2268 P +E 1 2 2.27 2.23 230 302 1811 2330 P ET 1 2 2.10 2.00 250 337 1896 2503 +P E 1 2.06 2.12 255 318 2021 2360 PT E 1 2.08 2.10 252 321 1941 2346 PT with chunk and topological brackets PT E 1 2 2.13 2.16 246 312 1929 2379 PT with chunk brackets only PT E 1 2 0.89 1.10 589 611 4102 4234 Table 2: Priority weight parameters and results certain candidate brackets or bracket types. For runs including chunk bracketing constraints, we chose thresholded precision (PT) as confidence weights for topological and/or chunk brackets. 6 Discussion of Results Table 2 summarises the results. A high impact on bracket constraints ( 1) results in lower performance gains than using a moderate impact ( 1 2) (rows 2,4,5 vs. 3,8,9). A possible interpretation is that for high , wrong topological constraints and strong negative priorities can mislead the parser. Use of confidence weights yields the best performance gains (with 1 2), in particular, thresholded precision of bracket types PT, and tree entropy +E, with comparable speed-up of factor 2.2/2.3 and 2.27/2.23 (2.25 if averaged). Thresholded entropy ET yields slightly lower gains. This could be due to a non-optimal threshold, or the fact that – while precision differentiates bracket types in terms of their confidence, such that only a small number of brackets are weakened – tree entropy as a global measure penalizes all brackets for a sentence on an equal basis, neutralizing positive effects which – as seen in +/P – may still contribute useful information. Additional use of chunk brackets (row 10) leads to a slight decrease, probably due to lower precision of chunk brackets. Even more, isolated use of chunk information (row 11) does not yield signifi0 1000 2000 3000 4000 5000 6000 7000 0 5 10 15 20 25 30 35 baseline +PT γ(0.5) 12867 12520 11620 9290 0 100 200 300 400 500 600 #sentences msec Figure 5: Performance gain/loss per sentence length cant gains over the baseline (0.89/1.1). Similar results were reported in (Daum et al., 2003) for integration of chunk- and dependency parsing.15 For PT -E 1 2, Figure 5 shows substantial performance gains, with some outliers in the range of length 25–36. 962 sentences (length >3, avg. 11.09) took longer parse time as compared to the baseline (with 5% variance margin). For coverage losses, we isolated two factors: while erroneous topological information could lead the parser astray, we also found cases where topological information prevented spurious HPSG parses to surface. This suggests that the integrated system bears the potential of crossvalidation of different components. 7 Conclusion We demonstrated that integration of shallow topological and deep HPSG processing results in significant performance gains, of factor 2.25—at a high level of deep parser efficiency. We show that macrostructural constraints derived from topological parsing improve significantly over chunk-based constraints. Fine-grained prioritisation in terms of confidence weights could further improve the results. 
Our annotation-based architecture is now easily extended to address robustness issues beyond lexical matters. By extracting spans for clausal fragments from topological parses, in case of deep parsing fail15(Daum et al., 2003) report a gain of factor 2.76 relative to a non-PoS-guided baseline, which reduces to factor 1.21 relative to a PoS-prioritised baseline, as in our scenario. ure the chart can be inspected for spanning analyses for sub-sentential fragments. Further, we can simplify the input sentence, by pruning adjunct subclauses, and trigger reparsing on the pruned input. References M. Becker and A. Frank. 2002. A Stochastic Topological Parser of German. In Proceedings of COLING 2002, pages 71–77, Taipei, Taiwan. T. Brants. 2000. Tnt - A Statistical Part-of-Speech Tagger. In Proceedings of Eurospeech, Rhodes, Greece. U. Callmeier. 2000. PET — A platform for experimentation with efficient HPSG processing techniques. Natural Language Engineering, 6 (1):99 – 108. C. Carroll and E. Briscoe. 2002. High precision extraction of grammatical relations. In Proceedings of COLING 2002, pages 134–140. B. Crysmann, A. Frank, B. Kiefer, St. M¨uller, J. Piskorski, U. Sch¨afer, M. Siegel, H. Uszkoreit, F. Xu, M. Becker, and H.-U. Krieger. 2002. An Integrated Architecture for Deep and Shallow Processing. In Proceedings of ACL 2002, Pittsburgh. M. Daum, K.A. Foth, and W. Menzel. 2003. Constraint Based Integration of Deep and Shallow Parsing Techniques. In Proceedings of EACL 2003, Budapest. D. Duchier and R. Debusmann. 2001. Topological Dependency Trees: A Constraint-based Account of Linear Precedence. In Proceedings of ACL 2001. C. Grover and A. Lascarides. 2001. XML-based data preparation for robust deep parsing. In Proceedings of ACL/EACL 2001, pages 252–259, Toulouse, France. T. H¨ohle. 1983. Topologische Felder. Unpublished manuscript, University of Cologne. R. Hwa. 2000. Sample selection for statistical grammar induction. In Proceedings of EMNLP/VLC-2000, pages 45–52, Hong Kong. S. M¨uller and W. Kasper. 2000. HPSG analysis of German. In W. Wahlster, editor, Verbmobil: Foundations of Speech-to-Speech Translation, Artificial Intelligence, pages 238–253. Springer, Berlin. R. Prins and G. van Noord. 2001. Unsupervised postagging improves parsing accuracy and parsing efficiency. In Proceedings of IWPT, Beijing. U. Sch¨afer. 2003. WHAT: An XSLT-based Infrastructure for the Integration of Natural Language Processing Components. In Proceedings of the SEALTS Workshop, HLT-NAACL03, Edmonton, Canada. H. Schmid, 2000. LoPar: Design and Implementation. IMS, Stuttgart. Arbeitspapiere des SFB 340, Nr. 149. W. Skut and T. Brants. 1998. Chunk tagger: statistical recognition of noun phrases. In ESSLLI-1998 Workshop on Automated Acquisition of Syntax and Parsing. H. Uszkoreit. 2002. New Chances for Deep Linguistic Processing. In Proceedings of COLING 2002, pages xiv–xxvii, Taipei, Taiwan.
Combining Deep and Shallow Approaches in Parsing German Michael Schiehlen Institute for Computational Linguistics, University of Stuttgart, Azenbergstr. 12, D-70174 Stuttgart [email protected] Abstract The paper describes two parsing schemes: a shallow approach based on machine learning and a cascaded finite-state parser with a hand-crafted grammar. It discusses several ways to combine them and presents evaluation results for the two individual approaches and their combination. An underspecification scheme for the output of the finite-state parser is introduced and shown to improve performance. 1 Introduction In several areas of Natural Language Processing, a combination of different approaches has been found to give the best results. It is especially rewarding to combine deep and shallow systems, where the former guarantees interpretability and high precision and the latter provides robustness and high recall. This paper investigates such a combination consisting of an n-gram based shallow parser and a cascaded finite-state parser1 with hand-crafted grammar and morphological checking. The respective strengths and weaknesses of these approaches are brought to light in an in-depth evaluation on a treebank of German newspaper texts (Skut et al., 1997) containing ca. 340,000 tokens in 19,546 sentences. The evaluation format chosen (dependency tuples) is used as the common denominator of the systems 1Although not everyone would agree that finite-state parsers constitute a ‘deep’ approach to parsing, they still are knowledge-based, require efforts of grammar-writing, a complex linguistic lexicon, manage without training data, etc. in building a hybrid parser with improved performance. An underspecification scheme allows the finite-state parser partially ambiguous output. It is shown that the other parser can in most cases successfully disambiguate such information. Section 2 discusses the evaluation format adopted (dependency structures), its advantages, but also some of its controversial points. Section 3 formulates a classification problem on the basis of the evaluation format and applies a machine learner to it. Section 4 describes the architecture of the cascaded finite-state parser and its output in a novel underspecification format. Section 5 explores several combination strategies and tests them on several variants of the two base components. Section 6 provides an in-depth evaluation of the component systems and the hybrid parser. Section 7 concludes. 2 Parser Evaluation The simplest method to evaluate a parser is to count the parse trees it gets correct. This measure is, however, not very informative since most applications do not require one hundred percent correct parse trees. Thus, an important question in parser evaluation is how to break down parsing results. In the PARSEVAL evaluation scheme (Black et al., 1991), partially correct parses are gauged by the number of nodes they produce and have in common with the gold standard (measured in precision and recall). Another figure (crossing brackets) only counts those incorrect nodes that change the partial order induced by the tree. A problematic aspect of the PARSEVAL approach is that the weight given to particular constructions is again grammar-specific, since some grammars may need more nodes to describe them than others. Further, the approach does not pay sufficient heed to the fact that parsing decisions are often intricately twisted: One wrong decision may produce a whole series of other wrong decisions. 
Both these problems are circumvented when parsing results are evaluated on a more abstract level, viz. dependency structure (Lin, 1995). Dependency structure generally follows predicateargument structure, but departs from it in that the basic building blocks are words rather than predicates. In terms of parser evaluation, the first property guarantees independence of decisions (every link is relevant also for the interpretation level), while the second property makes for a better empirical justification. for evaluation units. Dependency structure can be modelled by a directed acylic graph, with word tokens at the nodes. In labelled dependency structure, the links are furthermore classified into a certain set of grammatical roles. Dependency can be easily determined from constituent structure if in every phrase structure rule a constituent is singled out as the head (Gaifman, 1965). To derive a labelled dependency structure, all non-head constituents in a rule must be labelled with the grammatical role that links their head tokens to the head token of the head constituent. There are two cases where the divergence between predicates and word tokens makes trouble: (1) predicates expressed by more than one token, and (2) predicates expressed by no token (as they occur in ellipsis). Case 1 frequently occurs within the verb complex (of both English and German). The solution proposed in the literature (Black et al., 1991; Lin, 1995; Carroll et al., 1998; Kübler and Telljohann, 2002) is to define a normal form for dependency structure, where every adjunct or argument attaches to some distinguished part of the verb complex. The underlying assumption is that those cases where scope decisions in the verb complex are semantically relevant (e.g. with modal verbs) are not resolvable in syntax anyway. There is no generally accepted solution for case 2 (ellipsis). Most authors in the evaluation literature neglect it, perhaps due to its infrequency (in the NEGRA corpus, ellipsis only occurs in 1.2% of all dependency relations). Robinson (1970, 280) proposes to promote one of the dependents (preferably an obligatory one) (1a) or even all dependents (1b) to head status. (1) a. the very brave b. John likes tea and Harry coffee. A more sweeping solution to these problems is to abandon dependency structure at all and directly go for predicate-argument structure (Carroll et al., 1998). But as we argued above, moving to a more theoretical level is detrimental to comparability across grammatical frameworks. 3 A Direct Approach: Learning Dependency Structure According to the dependency structure approach to evaluation, the task of the parser is to find the correct dependency structure for a string, i.e. to associate every word token with pairs of head token and grammatical role or else to designate it as independent. To make the learning task easier, the number of classes should be reduced as much as possible. For one, the task could be simplified by focusing on unlabelled dependency structure (measured in “unlabelled” precision and recall (Eisner, 1996; Lin, 1995)), which is, however, in general not sufficient for further semantic processing. 3.1 Tree Property Another possibility for reduction is to associate every word with at most one pair of head token and grammatical role, i.e. to only look at dependency trees rather than graphs. There is one case where the tree property cannot easily be maintained: coordination. 
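Setting ellipsis aside, the head-marking scheme described above – every rule singles out a head daughter, and each non-head daughter carries its grammatical role – can be turned into labelled dependency tuples as in the following sketch. The tree encoding and the miniature example are assumptions made purely for illustration; the role labels (SB, OA, SPR, MO) follow the inventory used later in the evaluation.

# A constituent is (category, [(child, role_or_"HD"), ...]); a leaf is just a word.
# "HD" marks the head daughter; every other daughter carries its grammatical role.
sentence = ("S", [
    (("NP", [("the", "SPR"), ("parser", "HD")]), "SB"),
    ("finds", "HD"),
    (("NP", [("good", "MO"), ("heads", "HD")]), "OA"),
])

def lexical_head(node):
    """Percolate the head word up from the head daughter."""
    if isinstance(node, str):
        return node
    _, daughters = node
    for child, role in daughters:
        if role == "HD":
            return lexical_head(child)
    raise ValueError("every rule must mark exactly one head daughter")

def dependencies(node, triples=None):
    """Emit labelled dependency triples <head, ROLE, modifier> from the tree."""
    if triples is None:
        triples = []
    if isinstance(node, str):
        return triples
    head = lexical_head(node)
    _, daughters = node
    for child, role in daughters:
        if role != "HD":
            triples.append((head, role, lexical_head(child)))
        dependencies(child, triples)
    return triples

print(dependencies(sentence))
# [('finds', 'SB', 'parser'), ('parser', 'SPR', 'the'),
#  ('finds', 'OA', 'heads'), ('heads', 'MO', 'good')]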
Conceptually, all the conjuncts are head constituents in coordination, since the conjunction could be missing, and selectional restrictions work on the individual conjuncts (2). (2) John ate (fish and chips|*wish and ships). But if another word depends on the conjoined heads (see (4a)), the tree property is violated. A way out of the dilemma is to select a specific conjunct as modification site (Lin, 1995; Kübler and Telljohann, 2002). But unless care is taken, semantically vital information is lost in the process: Example (4) shows two readings which should be distinguished in dependency structure. A comparison of the two readings shows that if either the first conjunct or the last conjunct is unconditionally selected certain readings become undistinguishable. Rather, in order to distinguish a maximum number of readings, pre-modifiers must attach to the last conjunct and post-modifiers and coordinating conjunctions to the first conjunct2. The fact that the modifier refers to a conjunction rather than to the conjunct is recorded in the grammatical role (by adding c to it). (4) a. the [fans and supporters] of Arsenal b. [the fans] and [supporters of Arsenal] Other constructions contradicting the tree property are arguably better treated in the lexicon anyway (e.g. control verbs (Carroll et al., 1998)) or could be solved by enriching the repertory of grammatical roles (e.g. relative clauses with null relative pronouns could be treated by adding the dependency relation between head verb and missing element to the one between head verb and modified noun). In a number of linguistic phenomena, dependency theorists disagree on which constituent should be chosen as the head. A case in point are PPs. Few grammars distinguish between adjunct and subcategorized PPs at the level of prepositions. In predicateargument structure, however, the embedded NP is in one case related to the preposition, in the other to the subcategorizing verb. Accordingly, some approaches take the preposition to be the head of a PP (Robinson, 1970; Lin, 1995), others the NP (Kübler and Telljohann, 2002). Still other approaches (Tesnière, 1959; Carroll et al., 1998) conflate verb, preposition and head noun into a triple, and thus only count content words in the evaluation. For learning, the matter can be resolved empirically: 2Even in this setting some readings cannot be distinguished (see e.g. (3) where a conjunction of three modifiers would be retrieved). Nevertheless, the proposed scheme fails in only 0.0017% of all dependency tuples. (3) In New York, we never meet, but in Boston. Note that by this move we favor interpretability over projectivity, but example (4a) is non-projective from the start. Taking prepositions as the head somewhat improves performance, so we took PPs to be headed by prepositions. 3.2 Encoding Head Tokens Another question is how to encode the head token. The simplest method, encoding the word by its string position, generates a large space of classes. A more efficient approach uses the distance in string position between dependent and head token. Finally, Lin (1995) proposes a third type of representation: In his work, a head is described by its word type, an indication of the direction from the dependent (left or right) and the number of tokens of the same type that lie between head and dependent. An illustrative representation would be »paper which refers to the second nearest token paper to the right of the current token. 
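The two head encodings just described – signed string distance and the direction-plus-count representation – can be illustrated as follows. The function, the toy dependency structure and the arrow notation are assumptions made for the sketch; substituting PoS tags for the word types gives the reduced encoding discussed next.

def encode_heads(tokens, head_index):
    """tokens[i]: word (or PoS tag); head_index[i]: position of token i's head, or None.
    Returns per token (distance_encoding, direction_and_count_encoding)."""
    result = []
    for i, head in enumerate(head_index):
        if head is None:
            result.append((None, None))
            continue
        distance = head - i                          # signed string distance
        if head > i:                                 # head lies to the right
            between, arrow = range(i + 1, head + 1), ">"
        else:                                        # head lies to the left
            between, arrow = range(head, i), "<"
        # how many tokens of the head's type occur up to and including the head
        count = sum(1 for j in between if tokens[j] == tokens[head])
        result.append((distance, arrow * count + tokens[head]))
    return result

tokens = ["the", "paper", "cites", "the", "paper"]
heads  = [1, 2, None, 4, 2]                          # hypothetical head positions
print(encode_heads(tokens, heads))
# [(1, '>paper'), (1, '>cites'), (None, None), (1, '>paper'), (-2, '<cites')]
# '>>paper' would denote the second-nearest 'paper' to the right, as in the text.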
Obviously there are far too many word tokens, but we can use Part-Of-Speech tags instead. Furthermore information on inflection and type of noun (proper versus common nouns) is irrelevant, which cuts down the size even more. We will call this approach nth-tag. A further refinement of the nth-tag approach makes use of the fact that dependency structures are acylic. Hence, only those words with the same POS tag as the head between dependent and head must be counted that do not depend directly or indirectly on the dependent. We will call this approach covered-nth-tag. pos dist nth-tag cover labelled 1,924 1,349 982 921 unlabelled 97 119 162 157 Figure 1: Number of Classes in NEGRA Treebank Figure 1 shows the number of classes the individual approaches generate on the NEGRA Treebank. Note that the longest sentence has 115 tokens (with punctuation marks) but that punctuation marks do not enter dependency structure. The original treebank exhibits 31 non-head syntactic3 grammatical roles. We added three roles for marker complements (CMP), specifiers (SPR), and floating quantifiers (NK+), and subtracted the roles for conjunction markers (CP) and coreference with expletive (RE). 3i.e. grammatical roles not merely used for tokenization 22 roles were copied to mark reference to conjunction. Thus, all in all there was a stock of 54 grammatical roles. 3.3 Experiments We used -grams (3-grams and 5-grams) of POS tags as context and C4.5 (Quinlan, 1993) for machine learning. All results were subjected to 10-fold cross validation. The learning algorithm always returns a result. We counted a result as not assigned, however, if it referred to a head token outside the sentence. See Figure 2 for results4 of the learner. The left column shows performance with POS tags from the treebank (ideal tags, I-tags), the right column values obtained with POS tags as generated automatically by a tagger with an accuracy of 95% (tagger tags, T-tags). I-tags T-tags F-val prec rec F-val prec rec dist, 3 .6071 .6222 .5928 .5902 .6045 .5765 dist, 5 .6798 .6973 .6632 .6587 .6758 .6426 nth-tag, 3 .7235 .7645 .6866 .6965 .7364 .6607 nth-tag, 5 .7716 .7961 .7486 .7440 .7682 .7213 cover, 3 .7271 .7679 .6905 .7009 .7406 .6652 cover, 5 .7753 .7992 .7528 .7487 .7724 .7264 Figure 2: Results for C4.5 The nth-tag head representation outperforms the distance representation by 10%. Considering acyclicity (cover) slightly improves performance, but the gain is not statistically significant (t-test with 99%). The results are quite impressive as they stand, in particular the nth-tag 5-gram version seems to achieve quite good results. It should, however, be stressed that most of the dependencies correctly determined by the n-gram methods extend over no more than 3 tokens. With the distance method, such ‘short’ dependencies make up 98.90% of all dependencies correctly found, with the nth-tag method still 82%, but only 79.63% with the finite-state parser (see section 4) and 78.91% in the treebank. 4If the learner was given a chance to correct its errors, i.e. if it could train on its training results in a second round, there was a statistically significant gain in F-value with recall rising and precision falling (e.g. F-value .7314, precision .7397, recall .7232 for nth-tag trigrams, and F-value .7763, precision .7826, recall .7700 for nth-tag 5-grams). 4 Cascaded Finite-State Parser In addition to the learning approach, we used a cascaded finite-state parser (Schiehlen, 2003), to extract dependency structures from the text. 
The layout of this parser is similar to Abney’s parser (Abney, 1991): First, a series of transducers extracts noun chunks on the basis of tokenized and POS-tagged text. Since center-embedding is frequent in German noun phrases, the same transducer is used several times over. It also has access to inflectional information which is vital for checking agreement and determining case for subsequent phases (see (Schiehlen, 2002) for a more thorough description). Second, a series of transducers extracts verb-final, verb-first, and verb-second clauses. In contrast to Abney, these are full clauses, not just simplex clause chunks, so that again recursion can occur. Third, the resulting parse tree is refined and decorated with grammatical roles, using non-deterministic ‘interpretation’ transducers (the same technique is used by Abney (1991)). Fourth, verb complexes are examined to find the head verb and auxiliary passive or raising verbs. Only then subcategorization frames can be checked on the clause elements via a nondeterministic transducer, giving them more specific grammatical roles if successful. Fifth, dependency tuples are extracted from the parse tree. 4.1 Underspecification Some parsing decisions are known to be not resolvable by grammar. Such decisions are best handed over to subsequent modules equipped with the relevant knowledge. Thus, in chart parsing, an underspecified representation is constructed, from which all possible analyses can be easily and efficiently read off. Elworthy et al. (2001) describe a cascaded parser which underspecifies PP attachment by allowing modifiers to be linked to several heads in a dependency tree. Example (5) illustrates this scheme. (5) I saw a man in a car on the hill. The main drawback of this scheme is its overgeneration. In fact, it allows six readings for example (5), which only has five readings (the speaker could not have been in the car, if the man was asserted to be on the hill). A similar clause with 10 PPs at the end would receive 39,916,800 readings rather than 58,786. So a more elaborate scheme is called for, but one that is just as easy to generate. A device that often comes in handy for underspecification are context variables (Maxwell III and Kaplan, 1989; Dörre, 1997). First let us give every sequence of prepositional phrases in every clause a specific name (e.g. 1B for the second sequence in the first clause). Now we generate the ambiguous dependency relations (like (Elworthy et al., 2001)) but label them with context variables. Such context variables consist of the sequence name , a number  designating the dependent in left-to-right order (e.g. 0 for in, 1 for on in example (5)), and a number  designating the head in left-to-right (e.g. 0 for saw, 1 for man, 2 for hill in (5)). If the links are stored with the dependents, the number  can be left implicit. Generation of such a representation is straightforward and, in particular, does not lead to a higher class of complexity of the full system. Example (6) shows a tuple representation for the two prepositions of sentence (5). (6) in [1A00] saw ADJ, [1A01] man ADJ on [1A10] saw ADJ, [1A11] man ADJ, [1A12] car ADJ In general, a dependent  can modify  heads, viz. the heads numbered     . Now we put the following constraint on resolution: A dependent  can only modify a head  if no previous dependent  which could have attached to  (i.e.      ) chose some head   to the left of   rather than  . The condition is formally expressed in (7). In example (6) there are only two dependents (  in,  on). 
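A minimal sketch of this resolution constraint, enumerating the admissible readings of sentence (5), is given below. The encoding of a resolution as a tuple of chosen head indices, and the helper names, are assumptions made for the illustration.

from itertools import product

def admissible(choice, max_head):
    """choice[j]: head index chosen by the j-th right modifier (heads are numbered
    0..max_head[j] left to right).  A modifier may pick head k only if no earlier
    modifier that could also have reached k settled on a head left of k."""
    for j, k in enumerate(choice):
        for i in range(j):
            if max_head[i] >= k and choice[i] < k:
                return False
    return True

# Sentence (5): heads are saw=0, man=1, car=2;
# 'in' may modify saw or man, 'on' may modify saw, man or car.
max_head = [1, 2]
readings = [c for c in product(*(range(m + 1) for m in max_head))
            if admissible(c, max_head)]
print(readings)     # 5 admissible resolutions instead of 2*3 = 6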
If in attaches to saw, on cannot attach to a head between saw and in; conversely if on attaches to man, in cannot attach to a head before man. Nothing follows if on attaches to car. (7) Constraint:   !"$# %&#'(*)+&#   ), -   .0/ for all PP sequences The cascaded parser described adopts this underspecification scheme for right modification. Left modification (see (8)) is usually not stacked so the simpler scheme of Elworthy et al. (2001) suffices. (8) They are usually competent people. German is a free word order language, so that subcategorization can be ambiguous. Such ambiguities should also be underspecified. Again we introduce a context variable for every ambiguous subcategorization frame (e.g. 1 in (9)) and count the individual readings 1 (with letters a,b in (9)). (9) Peter kennt Karl. (Peter knows Karl / Karl knows Peter.) Peter kennt [1a] SBJ/[1b] OA kennt TOP Karl kennt [1a] OA/[1b] SBJ Since subcategorization ambiguity interacts with attachment ambiguity, context variables sometimes need to be coupled: In example (10) the attachment ambiguity only occurs if the PP is read as adjunct. (10) Karl fügte einige Gedanken zu dem Werk hinzu. (Karl added some thoughts on/to the work.) Gedanken fügte [1a] OA/[1b] OA zu [1A0] fügte [1a] PP:zu/[1b] ADJ [1A1] Gedanken PP:zu 1A1 < 1b 4.2 Evaluation of the Underspecified Representation In evaluating underspecified representations, Riezler et al. (2002) distinguish upper and lower bound, standing for optimal performance in disambiguation and average performance, respectively. In I-tags T-tags F-val prec rec F-val prec rec upper .8816 .9137 .8517 .8377 .8910 .7903 direct .8471 .8779 .8183 .8073 .8588 .7617 lower .8266 .8567 .7986 .7895 .8398 .7449 Figure 3: Results for Cascaded Parser Figure 3, values are also given for the performance of the parser without underspecification, i.e. always favoring maximal attachment and word order without scrambling (direct). Interestingly this method performs significantly better than average, an effect mainly due to the preference for high attachment. 5 Combining the Parsers We considered several strategies to combine the results of the diverse parsing approaches: simple voting, weighted voting, Bayesian learning, Maximum Entropy, and greedy optimization of F-value. Simple Voting. The result predicted by the majority of base classifiers is chosen. The finite-state parser, which may give more than one result, distributes its vote evenly on the possible readings. Weighted Voting. In weighted voting, the result which gets the most votes is chosen, where the number of votes given to a base classifier is correlated with its performance on a training set. Bayesian Learning. The Bayesian approach of Xu et al. (1992) chooses the most probable prediction. The probability of a prediction is computed by the product    / of the probability of given the predictions  made by the individual base classifiers . The probability     / of a correct prediction  given a learned prediction  is approximated by relative frequency in a training set. Maximum Entropy. Combining the results can also be seen as a classification task, with base predictions added to the original set of features. We used the Maximum Entropy approach5 (Berger et al., 1996) as a machine learner for this task. Underspecified features were assigned multiple values. Greedy Optimization of F-value. Another method uses a decision list of prediction–classifier pairs to choose a prediction by a classifier. 
The list is obtained by greedy optimization: In each step, the prediction–classifier pair whose addition results in the highest gain in F-value for the combined model on the training set is appended to the list. The algorithm terminates when F-value cannot be improved by any of the remaining candidates. A finer distinction is possible if the decision is made dependent on the POS tag as well. For greedy optimization, the predictions of the finite-state parser were classified only in grammatical roles, not head positions. We used 10-fold cross validation to determine the decision lists. 5More specifically, the OpenNLP implementation (http://maxent.sourceforge.net/) was used with 10 iterations and a cut-off frequency for features of 10. F-val prec rec simple voting .7927 .8570 .7373 weighted voting .8113 .8177 .8050 Bayesian learning .8463 .8509 .8417 Maximum entropy .8594 .8653 .8537 greedy optim .8795 .8878 .8715 greedy optim+tag .8849 .8957 .8743 Figure 4: Combination Strategies We tested the various combination strategies for the combination Finite-State parser (lower bound) and C4.5 5-gram nth-tag on ideal tags (results in Figure 4). Both simple and weighted voting degrade the results of the base classifiers. Greedy optimization outperforms all other strategies. Indeed it comes near the best possible choice which would give an F-score of .9089 for 5-gram nth-tag and finite-state parser (upper bound) (cf. Figure 5). without POS tag with POS tag I-tags F-val prec rec F-val prec rec upp, nth 5 .9008 .9060 .8956 .9068 .9157 .8980 low, nth 5 .8795 .8878 .8715 .8849 .8957 .8743 low, dist 5 .8730 .8973 .8499 .8841 .9083 .8612 low, nth 3 .8722 .8833 .8613 .8773 .8906 .8644 low, dist 3 .8640 .9034 .8279 .8738 .9094 .8410 dir, nth 5 .8554 .8626 .8483 .8745 .8839 .8653 Figure 5: Combinations via Optimization Figure 5 shows results for some combinations with the greedy optimization strategy on ideal tags. All combinations listed yield an improvement of more than 1% in F-value over the base classifiers. It is striking that combination with a shallow parser does not help the Finite-State parser much in coverage (upper bound), but that it helps both in disambiguation (pushing up the lower bound to almost the level of upper bound) and robustness (remedying at least some of the errors). The benefit of underspecification is visible when lower bound and direct are compared. The nth-tag 5-gram method was the best method to combine the finite-state parser with. Even on T-tags, this combination achieved an F-score of .8520 (lower, upper: .8579, direct: .8329) without POS tag and an F-score of .8563 (lower, upper: .8642, direct: .8535) with POS tags. 6 In-Depth Evaluation Figure 6 gives a survey of the performance of the parsing approaches relative to grammatical role. These figures are more informative than overall Fscore (Preiss, 2003). The first column gives the name of the grammatical role, as explained below. The second column shows corpus frequency in percent. The third column gives the standard deviation of distance between dependent and head. The three last columns give the performance (recall) of C4.5 with distance representation and 5-grams, C4.5 with nth-tag representation and 5-grams, and the cascaded finite-state parser, respectively. For the finite-state parser, the number shows performance with optimal disambiguation (upper bound) and, if the grammatical role allows underspecification, the number for average disambiguation (lower bound) in parentheses. 
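As a rough illustration of this greedy construction, the following sketch builds such a decision list over simplified predictions (single labels, no POS conditioning, and no separation of head positions from grammatical roles); the data layout, the helper names and the toy training data are assumptions, not the actual implementation.

def f_value(predictions, gold):
    """predictions[i]: a label or None (unassigned); gold[i]: the reference label."""
    assigned = sum(1 for p in predictions if p is not None)
    correct = sum(1 for p, g in zip(predictions, gold) if p is not None and p == g)
    if assigned == 0 or correct == 0:
        return 0.0
    precision, recall = correct / assigned, correct / len(gold)
    return 2 * precision * recall / (precision + recall)

def apply_list(decision_list, base_outputs):
    """Assign each instance by the first (classifier, label) pair that matches."""
    n = len(next(iter(base_outputs.values())))
    combined = [None] * n
    for i in range(n):
        for name, label in decision_list:
            if base_outputs[name][i] == label:
                combined[i] = label
                break
    return combined

def greedy_decision_list(base_outputs, gold):
    """Greedily grow a decision list of (classifier, label) pairs, adding in each
    step the pair that yields the highest gain in F-value on the training data."""
    candidates = {(name, p) for name, outs in base_outputs.items()
                  for p in outs if p is not None}
    decision_list, best_f = [], 0.0
    while True:
        best_candidate = None
        for cand in sorted(candidates):
            f = f_value(apply_list(decision_list + [cand], base_outputs), gold)
            if f > best_f:
                best_f, best_candidate = f, cand
        if best_candidate is None:
            return decision_list
        decision_list.append(best_candidate)
        candidates.discard(best_candidate)

# Invented toy data: two base classifiers, four training instances.
base = {"c45": ["SB", "OA", None, "MO"], "fsp": ["SB", "SB", "MO", "MO"]}
gold = ["SB", "OA", "MO", "SPR"]
print(greedy_decision_list(base, gold))   # [('c45', 'OA'), ('c45', 'SB'), ('fsp', 'MO')]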
Relations between function words and content words (e.g. specifier (SPR), marker complement (CMP), infinitival zu marker (PM)) are frequent and easy for all approaches. The cascaded parser has an edge over the learners with arguments (subject (SB), clausal (OC), accusative (OA), second accusative (OA2), genitive (OG), dative object (DA)). For all these argument roles a slight amount of ambiguity persists (as can be seen from the divergence between upper and lower bound), which is due to free word order. No ambiguity is found with reported speech (RS). The cascaded parser also performs quite well where verb complexes are concerned (separable verb prefix (SVP), governed verbs (OC), and predicative complements (PD, SP)). Another clearly discernible complex are adjuncts (modifier (MO), negation (NG), passive subject (SBP); oneplace coordination (JUnctor) and discourse markers (DM); finally postnominal modifier (MNR), genitive (GR), or von-phrase (PG)), which all exhibit attachment ambiguities. No attachment ambiguities are attested for prenominal genitives (GL). Some types of adjunction have not yet been implemented in the cascaded parser, so that it performs badly on them (e.g. relative clauses (RC), which are usually extraposed to the right (average distance is 11.6) and thus quite difficult also for the learners; comparative constructions (CC, CM), measure phrases (AMS), floating quantifiers (NK+)). Attachment ambiguities also occur with appositions (APP, NK6). Notoriously difficult is coordination (attachrole freq dev dist nth-t FS-parser MO 24.922 4.5 65.4 75.2 86.9(75.7) SPR 14.740 1.0 97.4 98.5 99.4 CMP 13.689 2.7 83.4 93.4 98.7 SB 9.682 5.7 48.4 64.7 84.5(82.6) TOP 7.781 0.0 47.6 46.7 49.8 OC 4.859 7.4 43.9 85.1 91.9(91.2) OA 4.594 5.8 24.1 37.7 83.5(70.6) MNR 3.765 2.8 76.2 73.9 89.0(48.1) CD 2.860 4.6 67.7 74.8 77.4 GR 2.660 1.3 66.9 65.6 95.0(92.8) APP 2.480 3.4 72.6 72.5 81.6(77.4) PD 1.657 4.6 31.3 39.7 55.1 RC 0.899 5.8 5.5 1.6 19.1 c 0.868 7.8 13.1 13.3 34.4(26.1) SVP 0.700 5.8 29.2 96.0 97.4 DA 0.693 5.4 1.9 1.8 77.1(71.9) NG 0.672 3.1 63.1 73.8 81.7(70.7) PM 0.572 0.0 99.7 99.9 99.2 PG 0.381 1.5 1.9 1.4 94.9(53.2) JU 0.304 4.6 35.8 47.3 62.1(45.5) CC 0.285 4.4 22.3 20.9 4.0( 3.1) CM 0.227 1.4 85.8 85.8 0 GL 0.182 1.1 70.3 67.2 87.5 SBP 0.177 4.1 4.7 3.6 93.7(77.0) AC 0.110 2.5 63.9 60.6 91.9 AMS 0.078 0.7 63.6 60.5 1.5( 0.9) RS 0.076 8.9 0 0 25.0 NK 0.020 3.4 0 0 46.2(40.4) OG 0.019 4.5 0 0 57.4(54.3) DM 0.017 3.1 9.1 18.2 63.6(59.1) NK+ 0.013 3.3 16.1 16.1 0 VO 0.010 3.2 50.0 25.0 0 OA2 0.005 5.7 0 0 33.3(29.2) SP 0.004 7.0 0 0 55.6(33.3) Figure 6: Grammatical Roles ment of conjunction to conjuncts (CD), and dependency on multiple heads ( c)). Vocatives (VO) are not treated in the cascaded parser. AC is the relation between parts of a circumposition. 6Other relations classified as NK in the original treebank have been reclassified: prenominal determiners to SPR, prenominal adjective phrases to MO. 7 Conclusion The paper has presented two approaches to German parsing (n-gram based machine learning and cascaded finite-state parsing), and evaluated them on the basis of a large amount of data. A new representation format has been introduced that allows underspecification of select types of syntactic ambiguity (attachment and subcategorization) even in the absence of a full-fledged chart. Several methods have been discussed for combining the two approaches. 
It has been shown that while combination with the shallow approach can only marginally improve performance of the cascaded parser if ideal disambiguation is assumed, a quite substantial rise is registered in situations closer to the real world where POS tagging is deficient and resolution of attachment and subcategorization ambiguities less than perfect. In ongoing work, we look at integrating a statistic context-free parser called BitPar, which was written by Helmut Schmid and achieves .816 F-score on NEGRA. Interestingly, the performance goes up to .9474 F-score when BitPar is combined with the FS parser (upper bound) and .9443 for the lower bound. So at least for German, combining parsers seems to be a pretty good idea. Thanks are due to Helmut Schmid and Prof. C. Rohrer for discussions, and to the reviewers for their detailed comments. References Steven Abney. 1991. Parsing by Chunks. In Robert C. Berwick, Steven P. Abney, and Carol Tenny, editors, Principle-based Parsing: computation and psycholinguistics, pages 257–278. Kluwer, Dordrecht. Adam Berger, Stephen Della Pietra, and Vincent Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71, March. E. Black, S. Abney, D. Flickinger, C. Gdaniec, R. Grishman, P. Harrison, D. Hindle, R. Ingria, F. Jelinek, J. Klavans, M. Liberman, M. Marcus, S. Roukos, B. Santorini, and T. Strzalkowski. 1991. A procedure for quantitatively comparing the syntactic coverage of English grammars. In Proceedings of the DARPA Speech and Natural Language Workshop 1991, Pacific Grove, CA. John Carroll, Ted Briscoe, and Antonio Sanfilippo. 1998. Parser Evaluation: a Survey and a New Proposal. In Proceedings of LREC, pages 447–454, Granada. Jochen Dörre. 1997. Efficient Construction of Underspecified Semantics under Massive Ambiguity. ACL’97, pages 386–393, Madrid, Spain. Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. COLING ’96, Copenhagen. David Elworthy, Tony Rose, Amanda Clare, and Aaron Kotcheff. 2001. A natural language system for retrieval of captioned images. Journal of Natural Language Engineering, 7(2):117–142. Haim Gaifman. 1965. Dependency Systems and Phrase-Structure Systems. Information and Control, 8(3):304–337. Sandra Kübler and Heike Telljohann. 2002. Towards a Dependency-Oriented Evaluation for Partial Parsing. In Beyond PARSEVAL – Towards Improved Evaluation Measures for Parsing Systems (LREC Workshop). Dekang Lin. 1995. A Dependency-based Method for Evaluating Broad-Coverage Parsers. In Proceedings of the IJCAI-95, pages 1420–1425, Montreal. John T. Maxwell III and Ronald M. Kaplan. 1989. An overview of disjunctive constraint satisfaction. In Proceedings of the International Workshop on Parsing Technologies, Pittsburgh, PA. Judita Preiss. 2003. Using Grammatical Relations to Compare Parsers. EACL’03, Budapest. J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and Discriminative Estimation Techniques. ACL’02, Philadelphia. Jane J. Robinson. 1970. Dependency Structures and Transformational Rules. Language, 46:259–285. Michael Schiehlen. 2002. Experiments in German Noun Chunking. COLING’02, Taipei. Michael Schiehlen. 2003. A Cascaded Finite-State Parser for German. Research Note in EACL’03. 
Wojciech Skut, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1997. An Annotation Scheme for Free Word Order Languages. ANLP-97, Washington. Lucien Tesnière. 1959. Elements de syntaxe structurale. Librairie Klincksieck, Paris. Lei Xu, Adam Krzyzak, and Ching Y. Suen. 1992. Several Methods for Combining Multiple Classifiers and Their Applications in Handwritten Character Recognition. IEEE Trans. on System, Man and Cybernetics, SMC-22(3):418–435.
Synonymous Collocation Extraction Using Translation Information Hua WU, Ming ZHOU Microsoft Research Asia 5F Sigma Center, No.49 Zhichun Road, Haidian District Beijing, 100080, China [email protected], [email protected] Abstract Automatically acquiring synonymous collocation pairs such as <turn on, OBJ, light> and <switch on, OBJ, light> from corpora is a challenging task. For this task, we can, in general, have a large monolingual corpus and/or a very limited bilingual corpus. Methods that use monolingual corpora alone or use bilingual corpora alone are apparently inadequate because of low precision or low coverage. In this paper, we propose a method that uses both these resources to get an optimal compromise of precision and coverage. This method first gets candidates of synonymous collocation pairs based on a monolingual corpus and a word thesaurus, and then selects the appropriate pairs from the candidates using their translations in a second language. The translations of the candidates are obtained with a statistical translation model which is trained with a small bilingual corpus and a large monolingual corpus. The translation information is proved as effective to select synonymous collocation pairs. Experimental results indicate that the average precision and recall of our approach are 74% and 64% respectively, which outperform those methods that only use monolingual corpora and those that only use bilingual corpora. 1 Introduction This paper addresses the problem of automatically extracting English synonymous collocation pairs using translation information. A synonymous collocation pair includes two collocations which are similar in meaning, but not identical in wording. Throughout this paper, the term collocation refers to a lexically restricted word pair with a certain syntactic relation. For instance, <turn on, OBJ, light> is a collocation with a syntactic relation verb-object, and <turn on, OBJ, light> and <switch on, OBJ, light> are a synonymous collocation pair. In this paper, translation information means translations of collocations and their translation probabilities. Synonymous collocations can be considered as an extension of the concept of synonymous expressions which conventionally include synonymous words, phrases and sentence patterns. Synonymous expressions are very useful in a number of NLP applications. They are used in information retrieval and question answering (Kiyota et al., 2002; Dragomia et al., 2001) to bridge the expression gap between the query space and the document space. For instance, “buy book” extracted from the users’ query should also in some way match “order book” indexed in the documents. Besides, the synonymous expressions are also important in language generation (Langkilde and Knight, 1998) and computer assisted authoring to produce vivid texts. Up to now, there have been few researches which directly address the problem of extracting synonymous collocations. However, a number of studies investigate the extraction of synonymous words from monolingual corpora (Carolyn et al., 1992; Grefenstatte, 1994; Lin, 1998; Gasperin et al., 2001). The methods used the contexts around the investigated words to discover synonyms. The problem of the methods is that the precision of the extracted synonymous words is low because it extracts many word pairs such as “cat” and “dog”, which are similar but not synonymous. 
In addition, some studies investigate the extraction of synonymous words and/or patterns from bilingual corpora (Barzilay and Mckeown, 2001; Shimohata and Sumita, 2002). However, these methods can only extract synonymous expressions which occur in the bilingual corpus. Due to the limited size of the bilingual corpus, the coverage of the extracted expressions is very low. Given the fact that we usually have large monolingual corpora (unlimited in some sense) and very limited bilingual corpora, this paper proposes a method that tries to make full use of these different resources to get an optimal compromise of precision and coverage for synonymous collocation extraction. We first obtain candidates of synonymous collocation pairs based on a monolingual corpus and a word thesaurus. We then select those appropriate candidates using their translations in a second language. Each translation of the candidates is assigned a probability with a statistical translation model that is trained with a small bilingual corpus and a large monolingual corpus. The similarity of two collocations is estimated by computing the similarity of their vectors constructed with their corresponding translations. Those candidates with larger similarity scores are extracted as synonymous collocations. The basic assumption behind this method is that two collocations are synonymous if their translations are similar. For example, <turn on, OBJ, light> and <switch on, OBJ, light> are synonymous because both of them are translated into <ᓔ, OBJ, ♃> (<kai1, OBJ, deng1>) and <ᠧᓔ, OBJ, ♃> (<da3 kai1, OBJ, deng1>) in Chinese. In order to evaluate the performance of our method, we conducted experiments on extracting three typical types of synonymous collocations. Experimental results indicate that our approach achieves 74% average precision and 64% recall respectively, which considerably outperform those methods that only use monolingual corpora or only use bilingual corpora. The remainder of this paper is organized as follows. Section 2 describes our synonymous collocation extraction method. Section 3 evaluates the proposed method, and the last section draws our conclusion and presents the future work. 2 Our Approach Our method for synonymous collocation extraction comprises of three steps: (1) extract collocations from large monolingual corpora; (2) generate candidates of synonymous collocation pairs with a word thesaurus WordNet; (3) select synonymous collocation candidates using their translations. 2.1 Collocation Extraction This section describes how to extract English collocations. Since Chinese collocations will be used to train the language model in Section 2.3, they are also extracted in the same way. Collocations in this paper take some syntactical relations (dependency relations), such as <verb, OBJ, noun>, <noun, ATTR, adj>, and <verb, MOD, adv>. These dependency triples, which embody the syntactic relationship between words in a sentence, are generated with a parser—we use NLPWIN in this paper1. For example, the sentence “She owned this red coat” is transformed to the following four triples after parsing: <own, SUBJ, she>, <own, OBJ, coat>, <coat, DET, this>, and <coat, ATTR, red>. These triples are generally represented in the form of <Head, Relation Type, Modifier>. 
The measure we use to extract collocations from the parsed triples is weighted mutual information (WMI) (Fung and Mckeown, 1997), defined as

WMI(w1, r, w2) = p(w1, r, w2) * log [ p(w1, r, w2) / ( p(w1|r) * p(w2|r) * p(r) ) ]

Those triples whose WMI values are larger than a given threshold are taken as collocations. We do not use the point-wise mutual information because it tends to overestimate the association between two words with low frequencies. Weighted mutual information meliorates this effect by adding p(w1, r, w2). For expository purposes, we will only look into three kinds of collocations for synonymous collocation extraction: <verb, OBJ, noun>, <noun, ATTR, adj> and <verb, MOD, adv>.

Table 1. English Collocations
Class             #Type      #Token
verb, OBJ, noun   506,628    7,005,455
noun, ATTR, adj   333,234    4,747,970
verb, MOD, adv     40,748      483,911

Table 2. Chinese Collocations
Class             #Type        #Token
verb, OBJ, noun   1,579,783    19,168,229
noun, ATTR, adj     311,560     5,383,200
verb, MOD, adv      546,054     9,467,103

The English collocations are extracted from the Wall Street Journal (1987-1992) and Associated Press (1988-1990), and the Chinese collocations are extracted from People's Daily (1980-1998). The statistics of the extracted collocations are shown in Tables 1 and 2. The thresholds are set to 5 for both English and Chinese. Token refers to the total number of collocation occurrences and Type refers to the number of unique collocations in the corpus.

1 The NLPWIN parser is developed at Microsoft Research, which parses several languages including Chinese and English. Its output can be a phrase structure parse tree or a logical form which is represented with dependency triples.

2.2 Candidate Generation

Candidate generation is based on the following assumption: for a collocation <Head, Relation Type, Modifier>, its synonymous expressions also take the form of <Head, Relation Type, Modifier>, although sometimes they may also be a single word or a sentence pattern. The synonymous candidates of a collocation are obtained by expanding a collocation <Head, Relation Type, Modifier> using the synonyms of Head and Modifier. The synonyms of a word are obtained from WordNet 1.6. In WordNet, one synset consists of several synonyms which represent a single sense. Therefore, polysemous words occur in more than one synset. The synonyms of a given word are obtained from all the synsets including it. For example, the word "turn on" is a polysemous word and is included in several synsets. For the sense "cause to operate by flipping a switch", "switch on" is one of its synonyms. For the sense "be contingent on", "depend on" is one of its synonyms. We take both of them as the synonyms of "turn on" regardless of its meanings since we do not have sense tags for words in collocations. If we use Cw to indicate the synonym set of a word w and U to denote the English collocation set generated in Section 2.1, the detailed algorithm on generating candidates of synonymous collocation pairs is described in Figure 1. For example, given a collocation <turn on, OBJ, light>, we expand "turn on" to "switch on", "depend on", and then expand "light" to "lump", "illumination". With these synonyms and the relation type OBJ, we generate synonymous collocation candidates of <turn on, OBJ, light>. The candidates are <switch on, OBJ, light>, <turn on, OBJ, lump>, <depend on, OBJ, illumination>, <depend on, OBJ, light>, etc.
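A minimal sketch of this candidate generation step is given below. The small synonym table stands in for a WordNet 1.6 lookup, and the tiny collocation set U is invented for the illustration.

# Stand-in for a WordNet 1.6 lookup: all synonyms of a word across all its synsets.
SYNONYMS = {
    "turn on": {"switch on", "depend on"},
    "light":   {"lump", "illumination"},
}

def synonyms(word):
    return SYNONYMS.get(word, set())

def candidate_pairs(collocations):
    """collocations: the observed collocation set U of (head, rel, modifier) triples.
    Expands head and modifier with their synonyms and pairs the original collocation
    with every expansion that itself occurs in U."""
    pairs = set()
    for head, rel, mod in collocations:
        for h in {head} | synonyms(head):
            for m in {mod} | synonyms(mod):
                candidate = (h, rel, m)
                if candidate != (head, rel, mod) and candidate in collocations:
                    pairs.add(frozenset([(head, rel, mod), candidate]))
    return pairs

U = {("turn on", "OBJ", "light"), ("switch on", "OBJ", "light"),
     ("depend on", "OBJ", "illumination")}
for pair in candidate_pairs(U):
    print(sorted(pair))
# <switch on, OBJ, light> pairs with <turn on, OBJ, light>; the pair involving
# <depend on, OBJ, illumination> is also generated and must be filtered out later.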
Both these candidates and the original collocation <turn on, OBJ, light> are used to generate the synonymous collocation pairs. With the above method, we obtain candidates of synonymous collocation pairs. For example, <switch on, OBJ, light> and <turn on, OBJ, light> form a synonymous collocation pair. However, this method also produces wrong candidates: for example, <depend on, OBJ, illumination> and <turn on, OBJ, light> are not a synonymous pair. Thus, it is important to filter out these inappropriate candidates.

Figure 1. Candidate Set Generation Algorithm
(1) For each collocation Col_i = <Head, R, Modifier> in U, do the following:
    a. Use the synonyms in WordNet 1.6 to expand Head and Modifier and obtain their synonym sets C_Head and C_Modifier.
    b. Generate the candidate set of its synonymous collocations
       S_i = {<w_1, R, w_2> | w_1 \in {Head} \cup C_Head, w_2 \in {Modifier} \cup C_Modifier, <w_1, R, w_2> \in U, <w_1, R, w_2> \neq Col_i}.
(2) Generate the candidate set of synonymous collocation pairs SC = {(Col_i, Col_j) | Col_i \in U, Col_j \in S_i}.

2.3 Candidate Selection

In synonymous word extraction, the similarity of two words can be estimated from the similarity of their contexts. However, this method cannot be effectively extended to collocation similarity estimation. For example, in the sentences "They turned on the lights" and "They depend on the illumination", the meanings of the two collocations <turn on, OBJ, light> and <depend on, OBJ, illumination> are different although their contexts are the same. Therefore, monolingual information is not enough to estimate the similarity of two collocations. However, the meanings of the above two collocations can be distinguished if they are translated into a second language (e.g., Chinese). For example, <turn on, OBJ, light> is translated into <开, OBJ, 灯> (<kai1, OBJ, deng1>) and <打开, OBJ, 灯> (<da3 kai1, OBJ, deng1>) in Chinese, while <depend on, OBJ, illumination> is translated into <取决于, OBJ, 光照度> (<qu3 jue2 yu2, OBJ, guang1 zhao4 du4>). Thus, they are not a synonymous pair because their translations are completely different. In this paper, we select the synonymous collocation pairs from the candidates in the following way. First, given a candidate synonymous collocation pair generated in Section 2.2, we translate the two collocations into Chinese with a simple statistical translation model. Second, we calculate the similarity of the two collocations with the feature vectors constructed from their translations. A candidate is selected as a synonymous collocation pair if its similarity exceeds a certain threshold.

2.3.1 Collocation Translation

For an English collocation e_col = <e_1, r_e, e_2>, we translate it into Chinese collocations using an English-Chinese dictionary. (Some English collocations can be translated into Chinese words, phrases or patterns; here we only consider the case of translation into collocations.) If the translation sets of e_1 and e_2 are represented as CS_1 and CS_2 respectively, the Chinese translations can be represented as S = {<c_1, r_c, c_2> | c_1 \in CS_1, c_2 \in CS_2, r_c \in R}, with R denoting the relation set. Given an English collocation e_col = <e_1, r_e, e_2> and one of its Chinese collocations c_col = <c_1, r_c, c_2> \in S, the probability that e_col is translated into c_col is calculated as in Equation (1):

p(c_col | e_col) = \frac{p(e_1, r_e, e_2 | c_1, r_c, c_2) \, p(c_1, r_c, c_2)}{p(e_col)}    (1)

According to Equation (1), we need to calculate the translation probability p(e_col | c_col) and the target language probability p(c_col). Calculating the translation probability requires a bilingual corpus. If the above equation is used directly, we will run into the data sparseness problem. Thus, model simplification is necessary.
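Before turning to the simplified model, the sketch below compresses the whole selection step into code. It presupposes three resources that the following subsections (2.3.2-2.3.4) show how to estimate: a bilingual dictionary, word translation probabilities p(e|c), and a Chinese collocation language model p(c_col). All function and variable names here are illustrative, not taken from the paper's implementation.

```python
# Compressed sketch of candidate selection. Assumed inputs:
#   dictionary[e]   -> set of Chinese translations of English word e
#   p_trans(e, c)   -> word translation probability p(e|c)  (Section 2.3.4)
#   p_lm(c_col)     -> Chinese collocation language model p(c_col)  (Section 2.3.3)
import math

def translation_vector(e_col, dictionary, p_trans, p_lm):
    """Map an English collocation to {chinese_collocation: unnormalised P(c_col|e_col)}."""
    e1, rel, e2 = e_col
    vec = {}
    for c1 in dictionary.get(e1, ()):
        for c2 in dictionary.get(e2, ()):
            c_col = (c1, rel, c2)  # Assumption 3: the relation type is preserved
            score = p_trans(e1, c1) * p_trans(e2, c2) * p_lm(c_col)
            if score > 0:
                vec[c_col] = score
    return vec

def cosine(v1, v2):
    shared = set(v1) & set(v2)
    num = sum(v1[c] * v2[c] for c in shared)
    den = math.sqrt(sum(x * x for x in v1.values())) * math.sqrt(sum(x * x for x in v2.values()))
    return num / den if den else 0.0

def is_synonymous(col_a, col_b, dictionary, p_trans, p_lm, threshold=0.01):
    va = translation_vector(col_a, dictionary, p_trans, p_lm)
    vb = translation_vector(col_b, dictionary, p_trans, p_lm)
    return cosine(va, vb) >= threshold
```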
2.3.2 Translation Model

Our simplification is made according to the following three assumptions.

Assumption 1: Given a Chinese collocation c_col and r_e, we assume that e_1 and e_2 are conditionally independent. The translation model is rewritten as:

p(e_col | c_col) = p(e_1, r_e, e_2 | c_col) = p(e_1 | r_e, c_col) \, p(e_2 | r_e, c_col) \, p(r_e | c_col)    (2)

Assumption 2: Given a Chinese collocation <c_1, r_c, c_2>, we assume that the translation probability p(e_i | c_col) only depends on e_i and c_i (i = 1, 2), and that p(r_e | c_col) only depends on r_e and r_c. Equation (2) is rewritten as:

p(e_col | c_col) = p(e_1 | c_col) \, p(e_2 | c_col) \, p(r_e | c_col) = p(e_1 | c_1) \, p(e_2 | c_2) \, p(r_e | r_c)    (3)

This is equal to a word translation model if we treat the relation type in the collocations as an element like a word, which is similar to Model 1 in (Brown et al., 1993).

Assumption 3: We assume that one type of English collocation can only be translated into the same type of Chinese collocation. (Zhou et al. (2001) found that about 70% of the Chinese translations have the same relation type as the source English collocations.) Thus, p(r_e | r_c) = 1 in our case, and Equation (3) is rewritten as:

p(e_col | c_col) = p(e_1 | c_1) \, p(e_2 | c_2) \, p(r_e | r_c) = p(e_1 | c_1) \, p(e_2 | c_2)    (4)

2.3.3 Language Model

The language model p(c_col) is calculated with the Chinese collocation database extracted in Section 2.1. To tackle the data sparseness problem, we smooth the language model with an interpolation method. When the given Chinese collocation occurs in the corpus, we calculate it as in (5):

p(c_col) = \frac{count(c_col)}{N}    (5)

where count(c_col) represents the count of the Chinese collocation c_col, and N represents the total count of all the Chinese collocations in the training corpus. For a collocation <c_1, r_c, c_2>, if we assume that the two words c_1 and c_2 are conditionally independent given the relation r_c, Equation (5) can be rewritten as in (6):

p(c_col) = p(c_1 | r_c) \, p(c_2 | r_c) \, p(r_c)    (6)

where
p(c_1 | r_c) = count(c_1, r_c, *) / count(*, r_c, *),  p(c_2 | r_c) = count(*, r_c, c_2) / count(*, r_c, *),  p(r_c) = count(*, r_c, *) / N,
count(c_1, r_c, *): frequency of the collocations with c_1 as the head and r_c as the relation type;
count(*, r_c, c_2): frequency of the collocations with c_2 as the modifier and r_c as the relation type;
count(*, r_c, *): frequency of the collocations with r_c as the relation type.

With Equations (5) and (6), we get the interpolated language model shown in (7):

p(c_col) = \lambda \frac{count(c_col)}{N} + (1 - \lambda) \, p(c_1 | r_c) \, p(c_2 | r_c) \, p(r_c)    (7)

where 0 < \lambda < 1 is a constant set so that the probabilities sum to 1.

2.3.4 Word Translation Probability Estimation

Many methods have been used to estimate word translation probabilities from non-parallel or parallel bilingual corpora (Koehn and Knight, 2000; Brown et al., 1993). In this paper, we use a parallel bilingual corpus to train the word translation probabilities based on the result of word alignment with a bilingual Chinese-English dictionary. The alignment method is described in (Wang et al., 2001). To deal with the data sparseness problem, we apply a simple smoothing by adding 0.5 to the counts of each translation pair, as in (8).
p(e | c) = \frac{count(e, c) + 0.5}{count(c) + 0.5 \times |trans\_e|}    (8)

where |trans_e| represents the number of English translations for a given Chinese word c.

2.3.5 Collocation Similarity Calculation

For each synonymous collocation pair, we get the corresponding Chinese translations and calculate the translation probabilities as in Section 2.3.1. These Chinese collocations, with their corresponding translation probabilities, are taken as the feature vectors of the English collocations, which can be represented as:

Fe_col^i = <(c_col^{i1}, p_col^{i1}), (c_col^{i2}, p_col^{i2}), ..., (c_col^{im}, p_col^{im})>

The similarity of two collocations is defined as in (9). The candidate pairs whose similarity scores exceed a given threshold are selected.

sim(e_col^1, e_col^2) = cos(Fe_col^1, Fe_col^2) = \frac{\sum_{c_col^{1i} = c_col^{2j}} p_col^{1i} \, p_col^{2j}}{\sqrt{\sum_i (p_col^{1i})^2} \, \sqrt{\sum_j (p_col^{2j})^2}}    (9)

For example, given the synonymous collocation pair <turn on, OBJ, light> and <switch on, OBJ, light>, we first get their corresponding feature vectors.
The feature vector of <turn on, OBJ, light>: <(<开, OBJ, 灯>, 0.04692), (<打开, OBJ, 灯>, 0.01602), ..., (<依赖, OBJ, 光>, 0.0002710), (<依赖, OBJ, 光照度>, 0.0000305)>
The feature vector of <switch on, OBJ, light>: <(<打开, OBJ, 灯>, 0.04238), (<开, OBJ, 灯>, 0.01257), (<打开, OBJ, 灯光>, 0.002531), ..., (<开, OBJ, 信号灯>, 0.00003542)>
The values in the feature vectors are translation probabilities. With these two vectors, we get the similarity of <turn on, OBJ, light> and <switch on, OBJ, light>, which is 0.2348.

2.4 Implementation of our Approach

We use an English-Chinese dictionary to get the Chinese translations of collocations; it includes 219,404 English words, and each source word has 3 translation words on average. The word translation probabilities are estimated from a bilingual corpus that contains 170,025 pairs of Chinese-English sentences, including about 2.1 million English words and about 2.5 million Chinese words. With these data and the collocations from Section 2.1, we produced 93,523 synonymous collocation pairs and filtered out 1,060,788 candidate pairs with our translation method when the similarity threshold is set to 0.01.

3 Evaluation

To evaluate the effectiveness of our method, two experiments were conducted. The first is designed to compare our method with two methods that use monolingual corpora. The second is designed to compare our method with a method that uses a bilingual corpus.

3.1 Comparison with Methods using Monolingual Corpora

We compared our approach with two methods that use monolingual corpora. These two methods also employ the candidate generation described in Section 2.2; the difference is that they use different strategies to select appropriate candidates. The training corpus for these two methods is the same English corpus as in Section 2.1.

3.1.1 Method Description

Method 1: This method uses monolingual contexts to select synonymous candidates. The purpose of this experiment is to see whether the context method for synonymous word extraction can be effectively extended to synonymous collocation extraction. The similarity of two collocations is calculated with their feature vectors. The feature vector of a collocation is constructed from all the words in the sentences which surround the given collocation. The context vector for collocation i is represented as in (10):

Fe_col^i = <(w_{i1}, p_{i1}), (w_{i2}, p_{i2}), ..., (w_{im}, p_{im})>    (10)

where p_{ij} = count(w_{ij}, e_col^i) / N
w_{ij}: context word j of collocation i;
p_{ij}: probability of w_{ij} co-occurring with e_col^i;
count(w_{ij}, e_col^i): frequency of the context word w_{ij} co-occurring with the collocation e_col^i;
N: the total count of all words in the training corpus.

With the feature vectors, the similarity of two collocations is calculated as in (11). Those candidates whose similarities exceed a given threshold are selected as synonymous collocations.

sim(e_col^1, e_col^2) = cos(Fe_col^1, Fe_col^2) = \frac{\sum_{w_{1i} = w_{2j}} p_{1i} \, p_{2j}}{\sqrt{\sum_i p_{1i}^2} \, \sqrt{\sum_j p_{2j}^2}}    (11)

Method 2: Instead of using contexts to calculate the similarity of two words, this method calculates the similarity of collocations from the similarity of their components. The formula is given in Equation (12):

sim(e_col^1, e_col^2) = sim(e_1^1, e_1^2) \times sim(e_2^1, e_2^2) \times sim(rel^1, rel^2)    (12)

where e_col^i = (e_1^i, rel^i, e_2^i). We assume that the relation type stays the same, so sim(rel^1, rel^2) = 1. The similarity of the words is calculated with the same method as described in (Lin, 1998), restated in Equation (13); it is computed from the surrounding context words which have dependency relationships with the words under investigation.

Sim(e_1, e_2) = \frac{\sum_{(rel, e) \in T(e_1) \cap T(e_2)} (w(e_1, rel, e) + w(e_2, rel, e))}{\sum_{(rel, e) \in T(e_1)} w(e_1, rel, e) + \sum_{(rel, e) \in T(e_2)} w(e_2, rel, e)}    (13)

where T(e_i) denotes the set of words which have the dependency relation rel with e_i, and

w(e_i, rel, e_j) = p(e_i, rel, e_j) \log \frac{p(e_i, rel, e_j)}{p(e_i | rel) \, p(e_j | rel) \, p(rel)}

3.1.2 Test Set

With the candidate generation method described in Section 2.2, we generated 1,154,311 candidate synonymous collocation pairs for 880,600 collocations, from which we randomly selected 1,300 pairs to construct a test set. Each pair was evaluated independently by two judges to determine whether it is synonymous; only those agreed upon by both judges are considered synonymous pairs. The statistics of the test set are shown in Table 3. We evaluated three types of synonymous collocations: <verb, OBJ, noun>, <noun, ATTR, adj>, and <verb, MOD, adv>. For the type <verb, OBJ, noun>, 197 of the 630 synonymous collocation candidate pairs are correct. For <noun, ATTR, adj>, 163 pairs (among 324 pairs) are correct, and for <verb, MOD, adv>, 124 pairs (among 346 pairs) are correct.

Table 3. The Test Set
Type               #total   #correct
verb, OBJ, noun    630      197
noun, ATTR, adj    324      163
verb, MOD, adv     346      124

3.1.3 Evaluation Results

With the test set, we evaluate the performance of each method. The evaluation metrics are precision, recall, and f-measure. A development set including 500 synonymous pairs is used to determine the threshold for each method; for each method, the threshold yielding the highest f-measure score on the development set is selected. As a result, the thresholds for Method 1, Method 2 and our approach are 0.02, 0.02, and 0.01 respectively. With these thresholds, the experimental results on the test set of Table 3 are shown in Tables 4, 5 and 6.

Table 4. Results for <verb, OBJ, noun>
Method     Precision   Recall   F-measure
Method 1   0.3148      0.8934   0.4656
Method 2   0.3886      0.7614   0.5146
Ours       0.6811      0.6396   0.6597

Table 5. Results for <noun, ATTR, adj>
Method     Precision   Recall   F-measure
Method 1   0.5161      0.9816   0.6765
Method 2   0.5673      0.8282   0.6733
Ours       0.8739      0.6380   0.7376

Table 6. Results for <verb, MOD, adv>
Method     Precision   Recall   F-measure
Method 1   0.3662      0.9597   0.5301
Method 2   0.4163      0.7339   0.5291
Ours       0.6641      0.7016   0.6824

It can be seen that our approach obtains the highest precision (74% on average) for all three types of synonymous collocations. Although the recall of our approach (64% on average) is below that of the other methods, the f-measure scores, which combine precision and recall, are the highest. In order to compare our method with the other methods at the same recall value, we conducted a further experiment on the type <verb, OBJ, noun> (the results for the other two types of collocations show the same pattern; we omit them for reasons of space). We set the recalls of the two other methods to the same value as our method, which is 0.6396 in Table 4. The precisions are then 0.3190, 0.4922, and 0.6811 for Method 1, Method 2, and our method, respectively. Thus, the precision of our approach is higher than that of the other two methods even when their recalls are the same. This demonstrates that our method of using translation information to select the candidates is effective for synonymous collocation extraction. The results of Method 1 show that it is difficult to extract synonymous collocations with monolingual contexts. Although Method 1 achieves higher recall than the other methods, it admits a large number of wrong candidates, which results in lower precision. If we set higher thresholds to obtain comparable precision, its recall is much lower than that of our approach. This indicates that the contexts of collocations are not discriminative enough for extracting synonymous collocations. The results also show that Method 2 is not suitable for the task. The main reason is that high scores for both sim(e_1^1, e_1^2) and sim(e_2^1, e_2^2) do not imply a high similarity of the two collocations. The reason our method outperforms the other two is that when a collocation is translated into another language, its translations indirectly disambiguate the senses of the words in the collocation. For example, the probability of <turn on, OBJ, light> being translated into <打开, OBJ, 灯> (<da3 kai1, OBJ, deng1>) is much higher than that of it being translated into <取决于, OBJ, 光照度> (<qu3 jue2 yu2, OBJ, guang1 zhao4 du4>), while the situation is reversed for <depend on, OBJ, illumination>. Thus, the similarity between <turn on, OBJ, light> and <depend on, OBJ, illumination> is low and, therefore, this candidate is filtered out.

3.2 Comparison with Methods using Bilingual Corpora

Barzilay and McKeown (2001) and Shimohata and Sumita (2002) used a bilingual corpus to extract synonymous expressions: if the same source expression has more than one different translation in the second language, these different translations are extracted as synonymous expressions. In order to compare our method with these methods that only use a bilingual corpus, we implemented a method that is similar to the above two studies. The detailed procedure is described in Method 3.

Method 3: The method proceeds as follows: (1) all the source and target sentences (here Chinese and English, respectively) are parsed; (2) the Chinese and English collocations in the bilingual corpus are extracted; (3) a Chinese collocation c_col = <c_1, r_c, c_2> and an English collocation e_col = <e_1, r_e, e_2> are aligned if c_1 is aligned with e_1 and c_2 is aligned with e_2; (4) two English collocations are taken as synonymous if they are different and are aligned with the same Chinese collocation, and if they occur more than once in the corpus.
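The sketch below illustrates step (4) of Method 3. It assumes steps (1)-(3) have already produced `aligned`, a list of (English collocation, Chinese collocation) pairs found in the parsed bilingual corpus; the variable names and the reading of the "occur more than once" condition (applied to the alignment counts) are this sketch's assumptions, not details from the paper.

```python
# Grouping aligned collocations by their shared Chinese translation (Method 3, step 4).
from collections import defaultdict, Counter
from itertools import combinations

def method3_pairs(aligned):
    pair_counts = Counter(aligned)          # how often each (e_col, c_col) alignment occurs
    by_chinese = defaultdict(set)
    for (e_col, c_col), n in pair_counts.items():
        if n > 1:                           # one reading of "occur more than once in the corpus"
            by_chinese[c_col].add(e_col)
    synonymous = set()
    for c_col, e_cols in by_chinese.items():
        # two different English collocations aligned to the same Chinese collocation
        for e1, e2 in combinations(sorted(e_cols), 2):
            synonymous.add((e1, e2))
    return synonymous
```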
The training bilingual corpus is the same one described in Section 2. With Method 3, we obtain 9,368 synonymous collocation pairs in total. This number is only 10% of that extracted by our approach, which extracts 93,523 pairs with the same bilingual corpus. In order to evaluate Method 3 and our approach on the same test set, we randomly selected 100 collocations which have synonymous collocations in the bilingual corpus. For these 100 collocations, Method 3 extracts 121 synonymous collocation pairs, of which 83% (100 among 121) are correct. (These synonymous collocation pairs were evaluated by two judges, and only those agreed on by both are counted as correct.) Our method described in Section 2 generates 556 synonymous collocation pairs with the threshold set as in the above section, of which 75% (417 among 556) are correct. If we set a higher threshold (0.08) for our method, we get 360 pairs, of which 295 are correct (82%). If we use |A|, |B|, |C| to denote the correct pairs extracted by Method 3, by our method, and by both Method 3 and our method respectively, we get |A| = 100, |B| = 295, and |C| = |A ∩ B| = 78. Thus, the synonymous collocation pairs extracted by our method cover 78% (|C|/|A|) of those extracted by Method 3, while those extracted by Method 3 only cover 26% (|C|/|B|) of those extracted by our method. It can be seen that the coverage of Method 3 is much lower than that of our method even when their precisions are set to the same value. This is mainly because Method 3 can only extract synonymous collocations which occur in the bilingual corpus. In contrast, our method uses the bilingual corpus only to train the translation probabilities, so the translations need not occur in the bilingual corpus. The advantage of our method is thus that it can extract synonymous collocations that do not occur in the bilingual corpus.

4 Conclusions and Future Work

This paper proposes a novel method to automatically extract synonymous collocations by using translation information. Our contribution is that, given a large monolingual corpus and a very limited bilingual corpus, we can make full use of these resources to reach an optimal compromise between precision and recall. In particular, with a small bilingual corpus, a statistical translation model is trained for the translations of synonymous collocation candidates. The translation information is then used to select synonymous collocation pairs from the candidates obtained with a monolingual corpus. Experimental results indicate that our approach extracts synonymous collocations with an average precision of 74% and recall of 64%. This result significantly outperforms the methods that only use monolingual corpora and the method that only uses a bilingual corpus. Our future work will extend the synonymous expressions of collocations to include single words and patterns besides collocations. In addition, we are also interested in extending this method to the extraction of synonymous words, so that pairs like "black" and "white" or "dog" and "cat" can be assigned to different synsets.

Acknowledgements

We thank Jianyun Nie, Dekang Lin, Jianfeng Gao, Changning Huang, and Ashley Chang for their valuable comments on an early draft of this paper.

References

Barzilay R. and McKeown K. (2001). Extracting Paraphrases from a Parallel Corpus. In Proc. of ACL/EACL. Brown P.F., S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer (1993). The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2), pp. 263-311. Carolyn J. Crouch and Bokyung Yang (1992).
Experiments in automatic statistical thesaurus construction. In Proc. of the Fifteenth Annual International ACM SIGIR conference on Research and Development in Information Retrieval, pp77-88. Dragomir R. Radev, Hong Qi, Zhiping Zheng, Sasha Blair-Goldensohn, Zhu Zhang, Waiguo Fan, and John Prager (2001). Mining the web for answers to natural language questions. In ACM CIKM 2001: Tenth International Conference on Information and Knowledge Management, Atlanta, GA. Fung P. and Mckeown K. (1997). A Technical Word- and Term- Translation Aid Using Noisy Parallel Corpora across Language Groups. In: Machine Translation, Vol.1-2 (special issue), pp53-87. Gasperin C., Gamallo P, Agustini A., Lopes G., and Vera de Lima (2001) Using Syntactic Contexts for Measuring Word Similarity. Workshop on Knowledge Acquisition & Categorization, ESSLLI. Grefenstette G. (1994) Explorations in Automatic Thesaurus Discovery. Kluwer Academic Press, Boston. Kiyota Y., Kurohashi S., and Kido F. (2002) "Dialog Navigator": A Question Answering System based on Large Text Knowledge Base. In Proc. of the 19th International Conference on Computational Linguistics, Taiwan. Koehn. P and Knight K. (2000). Estimating Word Translation Probabilities from Unrelated Monolingual Corpora using the EM Algorithm. National Conference on Artificial Intelligence (AAAI 2000) Langkilde I. and Knight K. (1998). Generation that Exploits Corpus-based Statistical Knowledge. In Proc. of the COLING-ACL 1998. Lin D. (1998) Automatic Retrieval and Clustering of Similar Words. In Proc. of the 36th Annual Meeting of the Association for Computational Linguistics. Shimohata M. and Sumita E.(2002). Automatic Paraphrasing Based on Parallel Corpus for Normalization. In Proc. of the Third International Conference on Language Resources and Evaluation. Wang W., Huang J., Zhou M., and Huang C.N. (2001). Finding Target Language Correspondence for Lexicalized EBMT System. In Proc. of the Sixth Natural Language Processing Pacific Rim Symposium. Zhou M., Ding Y., and Huang C.N. (2001). Improving Translation Selection with a New Translation Model Trained by Independent Monolingual Corpora. Computational Linguistics & Chinese Language Processing. Vol. 6 No, 1, pp1-26.
2003
16
Constructing Semantic Space Models from Parsed Corpora Sebastian Padó Department of Computational Linguistics Saarland University PO Box 15 11 50 66041 Saarbrücken, Germany [email protected] Mirella Lapata Department of Computer Science University of Sheffield Regent Court, 211 Portobello Street Sheffield S1 4DP, UK [email protected] Abstract Traditional vector-based models use word co-occurrence counts from large corpora to represent lexical meaning. In this paper we present a novel approach for constructing semantic spaces that takes syntactic relations into account. We introduce a formalisation for this class of models and evaluate their adequacy on two modelling tasks: semantic priming and automatic discrimination of lexical relations. 1 Introduction Vector-based models of word co-occurrence have proved a useful representational framework for a variety of natural language processing (NLP) tasks such as word sense discrimination (Schütze, 1998), text segmentation (Choi et al., 2001), contextual spelling correction (Jones and Martin, 1997), automatic thesaurus extraction (Grefenstette, 1994), and notably information retrieval (Salton et al., 1975). Vector-based representations of lexical meaning have been also popular in cognitive science and figure prominently in a variety of modelling studies ranging from similarity judgements (McDonald, 2000) to semantic priming (Lund and Burgess, 1996; Lowe and McDonald, 2000) and text comprehension (Landauer and Dumais, 1997). In this approach semantic information is extracted from large bodies of text under the assumption that the context surrounding a given word provides important information about its meaning. The semantic properties of words are represented by vectors that are constructed from the observed distributional patterns of co-occurrence of their neighbouring words. Co-occurrence information is typically collected in a frequency matrix, where each row corresponds to a unique target word and each column represents its linguistic context. Contexts are defined as a small number of words surrounding the target word (Lund and Burgess, 1996; Lowe and McDonald, 2000) or as entire paragraphs, even documents (Landauer and Dumais, 1997). Context is typically treated as a set of unordered words, although in some cases syntactic information is taken into account (Lin, 1998; Grefenstette, 1994; Lee, 1999). A word can be thus viewed as a point in an n-dimensional semantic space. The semantic similarity between words can be then mathematically computed by measuring the distance between points in the semantic space using a metric such as cosine or Euclidean distance. In the variants of vector-based models where no linguistic knowledge is used, differences among parts of speech for the same word (e.g., to drink vs. a drink) are not taken into account in the construction of the semantic space, although in some cases word lexemes are used rather than word surface forms (Lowe and McDonald, 2000; McDonald, 2000). Minimal assumptions are made with respect to syntactic dependencies among words. In fact it is assumed that all context words within a certain distance from the target word are semantically relevant. The lack of syntactic information makes the building of semantic space models relatively straightforward and language independent (all that is needed is a corpus of written or spoken text). However, this entails that contextual information contributes indiscriminately to a word’s meaning. 
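As a point of reference for the discussion that follows, here is a minimal sketch of the traditional word-based construction described above: co-occurrence counts of context words within a symmetric window around each target word. The window size and the choice of target and context vocabularies are arbitrary in this sketch.

```python
# Minimal window-based co-occurrence matrix: one row per target word,
# one column per context word; all parameters here are illustrative.
from collections import defaultdict

def word_space(sentences, targets, context_words, window=5):
    counts = {t: defaultdict(int) for t in targets}
    for tokens in sentences:                          # each sentence: list of tokens
        for i, w in enumerate(tokens):
            if w in targets:
                lo = max(0, i - window)
                hi = min(len(tokens), i + window + 1)
                for j in range(lo, hi):
                    if j != i and tokens[j] in context_words:
                        counts[w][tokens[j]] += 1
    return counts
```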
Some studies have tried to incorporate syntactic information into vector-based models. In this view, the semantic space is constructed from words that bear a syntactic relationship to the target word of interest. This makes semantic spaces more flexible: different types of contexts can be selected, and words do not have to physically co-occur to be considered contextually relevant. However, existing models either concentrate on specific relations for constructing the semantic space, such as objects (e.g., Lee, 1999), or collapse all types of syntactic relations available for a given target word (Grefenstette, 1994; Lin, 1998). Although syntactic information is now used to select a word's appropriate contexts, this information is not explicitly captured in the contexts themselves (which are still represented by words) and is therefore not amenable to further processing. A commonly raised criticism for both types of semantic space models (i.e., word-based and syntax-based) concerns the notion of semantic similarity. Proximity between two words in the semantic space cannot indicate the nature of the lexical relations between them. Distributionally similar words can be antonyms, synonyms, hyponyms or in some cases semantically unrelated. This limits the application of semantic space models for NLP tasks which require distinguishing between lexical relations. In this paper we generalise semantic space models by proposing a flexible conceptualisation of context which is parametrisable in terms of syntactic relations. We develop a general framework for vector-based models which can be optimised for different tasks. Our framework allows the construction of the semantic space to take place over words or syntactic relations, thus bridging the distance between word-based and syntax-based models. Furthermore, we show how our model can incorporate well-defined, informative contexts in a principled way which retains information about the syntactic relations available for a given target word. We first evaluate our model on semantic priming, a phenomenon that has received much attention in computational psycholinguistics and is typically modelled using word-based semantic spaces. We next conduct a study that shows that our model is sensitive to different types of lexical relations.

2 Dependency-based Vector Space Models

Once we move away from words as the basic context unit, the issue of the representation of syntactic information becomes pertinent. Information about the dependency relations between words abstracts over word order and can be considered as an intermediate layer between surface syntax and semantics.

[Figure 1: A dependency parse of a short sentence — "a lorry might carry sweet apples", with part-of-speech tags Det, N, Aux, V, A, N and dependency relations det, subj, aux, obj, mod.]

More formally, dependencies are asymmetric binary relationships between a head and a modifier (Tesnière, 1959). The structure of a sentence can be represented by a set of dependency relationships that form a tree, as shown in Figure 1. Here the head of the sentence is the verb carry, which is in turn modified by its subject lorry and its object apples. It is the dependencies in Figure 1 that will form the context over which the semantic space will be constructed. The construction mechanism sets out by identifying the local context of a target word, which is a subset of all dependency paths starting from it. The paths consist of the dependency edges of the tree, labelled with dependency relations such as subj, obj, or aux (see Figure 1).
The paths can be ranked by a path value function which gives different weight to different dependency types (for example, it can be argued that subjects and objects convey more semantic information than determiners). Target words are then represented in terms of syntactic features which form the dimensions of the semantic space. Paths are mapped to features by the path equivalence relation and the appropriate cells in the matrix are incremented. 2.1 Definition of Semantic Space We assume the semantic space formalisation proposed by Lowe (2001). A semantic space is a matrix whose rows correspond to target words and columns to dimensions which Lowe calls basis elements: Definition 1. A Semantic Space Model is a matrix K = B × T, where bi ∈B denotes the basis element of column i, t j ∈T denotes the target word of row j, and Kij the cell (i, j). T is the set of words for which the matrix contains representations; this can be either word types or word tokens. In this paper, we assume that cooccurrence counts are constructed over word types, but the framework can be easily adapted to represent word tokens instead. In traditional semantic spaces, the cells Kij of the matrix correspond to word co-occurrence counts. This is no longer the case for dependency-based models. In the following we explain how cooccurrence counts are constructed. 2.2 Building the Context The first step in constructing a semantic space from a large collection of dependency relations is to construct a word’s local context. Definition 2. The dependency parse p of a sentence s is an undirected graph p(s) = (Vp,Ep). The set of nodes corresponds to words of the sentence: Vp = {w1,...,wn}. The set of edges is Ep ⊆Vp ×Vp. Definition 3. A class q is a three-tuple consisting of a POS-tag, a relation, and another POS-tag. We write Q for the set of all classes Cat × R ×Cat. For each parse p, the labelling function Lp : Ep →Q assigns a class to every edge of the parse. In Figure 1, the labelling function labels the leftmost edge as Lp((a,lorry)) = ⟨Det,det,N⟩. Note that Det represents the POS-tag “determiner” and det the dependency relation “determiner”. In traditional models, the target words are surrounded by context words. In a dependency-based model, the target words are surrounded by dependency paths. Definition 4. A path φ is an ordered tuple of edges ⟨e1,...,en⟩∈En p so that ∀i : (ei−1 = (v1,v2) ∧ei = (v3,v4)) ⇒v2 = v3 Definition 5. A path anchored at a word w is a path ⟨e1,...,en⟩so that e1 = (v1,v2) and w = v1. Write Φw for the set of all paths over Ep anchored at w. In words, a path is a tuple of connected edges in a parse graph and it is anchored at w if it starts at w. In Figure 1, the set of paths anchored at lorry1 is: {⟨(lorry,carry)⟩,⟨(lorry,carry),(carry,apples)⟩, ⟨(lorry,a)⟩,⟨(lorry,carry),(carry,might)⟩,...} The local context of a word is the set or a subset of its anchored paths. The class information can always be recovered by means of the labelling function. Definition 6. A local context of a word w from a sentence s is a subset of the anchored paths at w. A function c : W →2Φw which assigns a local context to a word is called a context specification function. 1For the sake of brevity, we only show paths up to length 2. The context specification function allows to eliminate paths on the basis of their classes. For example, it is possible to eliminate all paths from the set of anchored paths but those which contain immediate subject and direct object relations. 
This can be formalised as: c(w) = {φ ∈Φw |φ = ⟨e⟩∧ (Lp(e) = ⟨V,obj,N⟩∨Lp(e) = ⟨V,subj,N⟩)} In Figure 1, the labels of the two edges which form paths of length 1 and conform to this context specification are marked in boldface. Notice that the local context of lorry contains only one anchored path (c(lorry) = {⟨(lorry,carry)⟩}). 2.3 Quantifying the Context The second step in the construction of the dependency-based semantic models is to specify the relative importance of different paths. Linguistic information can be incorporated into our framework through the path value function. Definition 7. The path value function v assigns a real number to a path: v : Φ →R. For instance, the path value function could penalise longer paths for only expressing indirect relationships between words. An example of a lengthbased path value function is v(φ) = 1 n where φ = ⟨e1,...,en⟩. This function assigns a value of 1 to the one path from c(lorry) and fractions to longer paths. Once the value of all paths in the local context is determined, the dimensions of the space must be specified. Unlike word-based models, our contexts contain syntactic information and dimensions can be defined in terms of syntactic features. The path equivalence relation combines functionally equivalent dependency paths that share a syntactic feature into equivalence classes. Definition 8. Let ∼be the path equivalence relation on Φ. The partition induced by this equivalence relation is the set of basis elements B. For example, it is possible to combine all paths which end at the same word: A path which starts at wi and ends at w j, irrespectively of its length and class, will be the co-occurrence of wi and w j. This word-based equivalence function can be defined in the following manner: ⟨(v1,v2),...,(vn−1,vn)⟩∼⟨(v′ 1,v′ 2),...,(v′ m−1,v′ m)⟩ iff vn = v′ m This means that in Figure 1 the set of basis elements is the set of words at which paths end. Although cooccurrence counts are constructed over words like in traditional semantic space models, it is only words which stand in a syntactic relationship to the target that are taken into account. Once the value of all paths in the local context is determined, the local observed frequency for the co-occurrence of a basis element b with the target word w is just the sum of values of all paths φ in this context which express the basis element b. The global observed frequency is the sum of the local observed frequencies for all occurrences of a target word type t and is therefore a measure for the cooccurrence of t and b over the whole corpus. Definition 9. Global observed frequency: ˆf(b,t) = ∑ w∈W(t) ∑ φ∈C(w)∧φ∼b v(φ) As Lowe (2001) notes, raw frequency counts are likely to give misleading results. Due to the Zipfian distribution of word types, words occurring with similar frequencies will be judged more similar than they actually are. A lexical association function can be used to explicitly factor out chance cooccurrences. Definition 10. Write A for the lexical association function which computes the value of a cell of the matrix from a co-occurrence frequency: Kij = A( ˆf(bi,tj)) 3 Evaluation 3.1 Parameter Settings All our experiments were conducted on the British National Corpus (BNC), a 100 million word collection of samples of written and spoken language (Burnard, 1995). We used Lin’s (1998) broad coverage dependency parser MINIPAR to obtain a parsed version of the corpus. 
MINIPAR employs a manually constructed grammar and a lexicon derived from WordNet with the addition of proper names (130,000 entries in total). Lexicon entries contain part-of-speech and subcategorization information. The grammar is represented as a network of 35 nodes (i.e., grammatical categories) and 59 edges (i.e., types of syntactic (dependency) relationships). MINIPAR uses a distributed chart parsing algorithm. Grammar rules are implemented as constraints associated with the nodes and edges.

Figure 2: Distance measures
Cosine distance: cos(\vec{x}, \vec{y}) = \frac{\sum_i x_i y_i}{\sqrt{\sum_i x_i^2} \, \sqrt{\sum_i y_i^2}}
Skew divergence: s_\alpha(\vec{x}, \vec{y}) = \sum_i x_i \log \frac{x_i}{\alpha y_i + (1 - \alpha) x_i}

The dependency-based semantic space was constructed with the word-based path equivalence function from Section 2.3. As basis elements for our semantic space the 1000 most frequent words in the BNC were used. Each element of the resulting vector was replaced with its log-likelihood value (see Definition 10 in Section 2.3), which can be considered an estimate of how surprising or distinctive a co-occurrence pair is (Dunning, 1993). We experimented with a variety of distance measures such as cosine, Euclidean distance, L1 norm, Jaccard's coefficient, Kullback-Leibler divergence and the Skew divergence (see Lee 1999 for an overview). We obtained the best results for cosine (Experiment 1) and Skew divergence (Experiment 2). The two measures are shown in Figure 2. The Skew divergence represents a generalisation of the Kullback-Leibler divergence and was proposed by Lee (1999) as a linguistically motivated distance measure. We use a value of α = .99. We explored in detail the influence of different types and sizes of context by varying the context specification and path value functions. Contexts were defined over a set of the 23 most frequent dependency relations, which accounted for half of the dependency edges found in our corpus. From these, we constructed four context specification functions: (a) minimum contexts containing paths of length 1 (in Figure 1 sweet and carry are the minimum context for apples), (b) np context adds dependency information relevant for noun compounds to minimum context, (c) wide takes into account paths of length longer than 1 that represent meaningful linguistic relations such as argument structure, but also prepositional phrases and embedded clauses (in Figure 1 the wide context of apples is sweet, carry, lorry, and might), and (d) maximum combines all of the above into a rich context representation. Four path valuation functions were used: (a) plain assigns the same value to every path, (b) length assigns a value inversely proportional to a path's length, (c) oblique ranks paths according to the obliqueness hierarchy of grammatical relations (Keenan and Comrie, 1977), and (d) oblength combines length and oblique.

Table 1: The fourteen models
Model   Context specification   Path value function
1       minimum                 plain
2       minimum                 oblique
3       np                      plain
4       np                      length
5       np                      oblique
6       np                      oblength
7       wide                    plain
8       wide                    length
9       wide                    oblique
10      wide                    oblength
11      maximum                 plain
12      maximum                 length
13      maximum                 oblique
14      maximum                 oblength

The resulting 14 parametrisations are shown in Table 1. Length-based and length-neutral path value functions are collapsed for the minimum context specification since it only considers paths of length 1. In Experiments 1 and 2 we further compare our dependency-based model against a state-of-the-art vector-based model where context is defined as a "bag of words".
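Pulling the definitions of Section 2 and the parameters above together, the sketch below builds a dependency-based space with the word-based path equivalence function and computes skew divergence between two of its rows. For brevity it only uses paths of length 1 (the minimum context) with the plain path value, normalises raw counts rather than applying the log-likelihood weighting, and follows Lee's (1999) formulation of skew divergence; these simplifications and all names are assumptions of the sketch.

```python
# Illustrative dependency-based co-occurrence construction plus skew divergence.
import math
from collections import defaultdict

def dependency_space(parsed_sentences, targets, basis_words):
    """parsed_sentences: iterable of lists of (head, relation, modifier) triples."""
    counts = {t: defaultdict(float) for t in targets}
    for triples in parsed_sentences:
        for head, rel, mod in triples:
            # each edge is an anchored path of length 1 in both directions
            if head in targets and mod in basis_words:
                counts[head][mod] += 1.0      # plain path value v(phi) = 1
            if mod in targets and head in basis_words:
                counts[mod][head] += 1.0
    return counts

def skew_divergence(x, y, basis, alpha=0.99):
    """Skew divergence between two rows, after normalising counts to distributions."""
    total_x = sum(x.values()) or 1.0
    total_y = sum(y.values()) or 1.0
    s = 0.0
    for b in basis:
        xi = x.get(b, 0.0) / total_x
        yi = y.get(b, 0.0) / total_y
        if xi > 0:
            s += xi * math.log(xi / (alpha * yi + (1 - alpha) * xi))
    return s
```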
Note that considerable latitude is allowed in setting parameters for vector-based models. In order to allow a fair comparison, we selected parameters for the traditional model that have been considered optimal in the literature (Patel et al., 1998), namely a symmetric 10 word window and the most frequent 500 content words from the BNC as dimensions. These parameters were similar to those used by Lowe and McDonald (2000) (symmetric 10 word window and 536 content words). Again the log-likelihood score is used to factor out chance co-occurrences. 3.2 Experiment 1: Priming A large number of modelling studies in psycholinguistics have focused on simulating semantic priming studies. The semantic priming paradigm provides a natural test bed for semantic space models as it concentrates on the semantic similarity or dissimilarity between a prime and its target, and it is precisely this type of lexical relations that vectorbased models capture. In this experiment we focus on Balota and Lorch’s (1986) mediated priming study. In semantic priming transient presentation of a prime word like tiger directly facilitates pronunciation or lexical decision on a target word like lion. Mediated priming extends this paradigm by additionally allowing indirectly related words as primes – like stripes, which is only related to lion by means of the intermediate concept tiger. Balota and Lorch (1986) obtained small mediated priming effects for pronunciation tasks but not for lexical decision. For the pronunciation task, reaction times were reduced significantly for both direct and mediated primes, however the effect was larger for direct primes. There are at least two semantic space simulations that attempt to shed light on the mediated priming effect. Lowe and McDonald (2000) replicated both the direct and mediated priming effects, whereas Livesay and Burgess (1997) could only replicate direct priming. In their study, mediated primes were farther from their targets than unrelated words. 3.2.1 Materials and Design Materials were taken form Balota and Lorch (1986). They consist of 48 target words, each paired with a related and a mediated prime (e.g., lion-tigerstripes). Each related-mediated prime tuple was paired with an unrelated control randomly selected from the complement set of related primes. 3.2.2 Procedure One stimulus was removed as it had a low corpus frequency (less than 100), which meant that the resulting vector would be unreliable. We constructed vectors from the BNC for all stimuli with the dependency-based models and the traditional model, using the parametrisations given in Section 3.1 and cosine as a distance measure. We calculated the distance in semantic space between targets and their direct primes (TarDirP), targets and their mediated primes (TarMedP), targets and their unrelated controls (TarUnC) for both models. 3.2.3 Results We carried out a one-way Analysis of Variance (ANOVA) with the distance as dependent variable (TarDirP, TarMedP, TarUnC). Recall from Table 1 that we experimented with fourteen different context definitions. A reliable effect of distance was observed for all models (p < .001). We used the η2 statistic to calculate the amount of variance accounted for by the different models. Figure 3 plots η2 against the different contexts. The best result was obtained for model 7 which accounts for 23.1% of the variance (F(2,140) = 20.576, p < .001) and corresponds to the wide context specification and the plain path value function. 
A reliable distance effect was also observed for the traditional vector-based model (F(2,138) = 9.384, p < .001).

[Figure 3: η2 scores for mediated priming materials — η2 (0 to 0.25) plotted against models 1-14, with separate curves for TarDirP--TarMedP--TarUnC, TarDirP--TarUnC, and TarMedP--TarUnC.]

Table 2: Size of direct and mediated priming effects
Model         TarDirP – TarUnC           TarMedP – TarUnC
Model 7       F = 25.290 (p < .001)      F = .001 (p = .790)
Traditional   F = 12.185 (p = .001)      F = .172 (p = .680)
L & McD       F = 24.105 (p < .001)      F = 13.107 (p < .001)

Pairwise ANOVAs were further performed to examine the size of the direct and mediated priming effects individually (see Table 2). There was a reliable direct priming effect (F(1,94) = 25.290, p < .001) but we failed to find a reliable mediated priming effect (F(1,93) = .001, p = .790). A reliable direct priming effect (F(1,92) = 12.185, p = .001) but no mediated priming effect was also obtained for the traditional vector-based model. We used the η2 statistic to compare the effect sizes obtained for the dependency-based and traditional model. The best dependency-based model accounted for 23.1% of the variance, whereas the traditional model accounted for 12.2% (see also Table 2). Our results indicate that dependency-based models are able to model direct priming across a wide range of parameters. Our results also show that larger contexts (see models 7 and 11 in Figure 3) are more informative than smaller contexts (see models 1 and 3 in Figure 3), but note that the wide context specification performed better than maximum. At least for mediated priming, a uniform path value as assigned by the plain path value function outperforms all other functions (see Figure 3). Neither our dependency-based model nor the traditional model were able to replicate the mediated priming effect reported by Lowe and McDonald (2000) (see L & McD in Table 2). This may be due to differences in lemmatisation of the BNC, the parametrisations of the model or the choice of context words (Lowe and McDonald use a special procedure to identify "reliable" context words). Our results also differ from Livesay and Burgess (1997), who found that mediated primes were further from their targets than unrelated controls, using however a model and corpus different from the ones we employed for our comparative studies. In the dependency-based model, mediated primes were virtually indistinguishable from unrelated words. In sum, our results indicate that a model which takes syntactic information into account outperforms a traditional vector-based model which simply relies on word occurrences. Our model is able to reproduce the well-established direct priming effect but not the more controversial mediated priming effect. Our results point to the need for further comparative studies among semantic space models where variables such as corpus choice and size as well as preprocessing (e.g., lemmatisation, tokenisation) are controlled for.

3.3 Experiment 2: Encoding of Relations

In this experiment we examine whether dependency-based models construct a semantic space that encapsulates different lexical relations. More specifically, we will assess whether word pairs capturing different types of semantic relations (e.g., hyponymy, synonymy) can be distinguished in terms of their distances in the semantic space.
3.3.1 Materials and Design

Our experimental materials were taken from Hodgson (1991) who, in an attempt to investigate which types of lexical relations induce priming, collected a set of 142 word pairs exemplifying the following semantic relations: (a) synonymy (words with the same meaning, value and worth), (b) superordination and subordination (one word is an instance of the kind expressed by the other word, pain and sensation), (c) category coordination (words which express two instances of a common superordinate concept, truck and train), (d) antonymy (words with opposite meaning, friend and enemy), (e) conceptual association (the first word subjects produce in free association given the other word, leash and dog), and (f) phrasal association (words which co-occur in phrases, private and property). The pairs were selected to be unambiguous examples of the relation type they instantiate and were matched for frequency. The pairs cover a wide range of parts of speech, like adjectives, verbs, and nouns.

[Figure 4: η2 scores for the Hodgson materials — η2 (0.14 to 0.21) plotted against models 1-14, for the skew divergence measure.]

Table 3: Mean skew divergences and Tukey test results for model 7
       Mean    PA   SUP   CO   ANT   SYN
CA     16.25        ×     ×    ×     ×
PA     15.13                   ×     ×
SUP    11.04
CO     10.45
ANT    10.07
SYN    8.87

3.3.2 Procedure

As in Experiment 1, six words with low frequencies (less than 100) were removed from the materials. Vectors were computed for the remaining 278 words for both the traditional and the dependency-based models, again with the parametrisations detailed in Section 3.1. We calculated the semantic distance for every word pair, this time using Skew divergence as the distance measure.

3.3.3 Results

We carried out an ANOVA with the lexical relation as factor and the distance as dependent variable. The lexical relation factor had six levels, namely the relations detailed in Section 3.3.1. We found no effect of semantic distance for the traditional semantic space model (F(5,141) = 1.481, p = .200). The η2 statistic revealed that only 5.2% of the variance was accounted for. On the other hand, a reliable effect of distance was observed for all dependency-based models (p < .001). Model 7 (wide context specification and plain path value function) accounted for the highest amount of variance in our data (20.3%). Our results can be seen in Figure 4. We examined whether there are any significant differences among the six relations using post-hoc Tukey tests. The pairwise comparisons for model 7 are given in Table 3. The mean distances for conceptual associates (CA), phrasal associates (PA), superordinates/subordinates (SUP), category coordinates (CO), antonyms (ANT), and synonyms (SYN) are also shown in Table 3. There is no significant difference between PA and CA, although SUP, CO, ANT, and SYN are all significantly different from CA (see Table 3, where × indicates statistical significance, α = .05). Furthermore, ANT and SYN are significantly different from PA. Kilgarriff and Yallop (2000) point out that manually constructed taxonomies or thesauri are typically organised according to synonymy and hyponymy for nouns and verbs and antonymy for adjectives. They further argue that for automatically constructed thesauri similar words are words that either co-occur with each other or with the same words. The relations SYN, SUP, CO, and ANT can be thought of as representing taxonomy-related knowledge, whereas CA and PA correspond to the word clusters found in automatically constructed thesauri.
In fact an ANOVA reveals that the distinction between these two classes of relations can be made reliably (F(1,136) = 15.347, p < .001), after collapsing SYN, SUP, CO, and ANT into one class and CA and PA into another. Our results suggest that dependency-based vector space models can, at least to a certain degree, distinguish among different types of lexical relations, while this seems to be more difficult for traditional semantic space models. The Tukey test revealed that category coordination is reliably distinguished from all other relations and that phrasal association is reliably different from antonymy and synonymy. Taxonomy related relations (e.g., synonymy, antonymy, hyponymy) can be reliably distinguished from conceptual and phrasal association. However, no reliable differences were found between closely associated relations such as antonymy and synonymy. Our results further indicate that context encoding plays an important role in discriminating lexical relations. As in Experiment 1 our best results were obtained with the wide context specification. Also, weighting schemes such as the obliqueness hierarchy length again decreased the model’s performance (see conditions 2, 5, 9, and 13 in Figure 4), showing that dependency relations contribute equally to the representation of a word’s meaning. This points to the fact that rich context encodings with a wide range of dependency relations are promising for capturing lexical semantic distinctions. However, the performance for maximum context specification was lower, which indicates that collapsing all dependency relations is not the optimal method, at least for the tasks attempted here. 4 Discussion In this paper we presented a novel semantic space model that enriches traditional vector-based models with syntactic information. The model is highly general and can be optimised for different tasks. It extends prior work on syntax-based models (Grefenstette, 1994; Lin, 1998), by providing a general framework for defining context so that a large number of syntactic relations can be used in the construction of the semantic space. Our approach differs from Lin (1998) in three important ways: (a) by introducing dependency paths we can capture non-immediate relationships between words (i.e., between subjects and objects), whereas Lin considers only local context (dependency edges in our terminology); the semantic space is therefore constructed solely from isolated head/modifier pairs and their inter-dependencies are not taken into account; (b) Lin creates the semantic space from the set of dependency edges that are relevant for a given word; by introducing dependency labels and the path value function we can selectively weight the importance of different labels (e.g., subject, object, modifier) and parametrize the space accordingly for different tasks; (c) considerable flexibility is allowed in our formulation for selecting the dimensions of the semantic space; the latter can be words (see the leaves in Figure 1), parts of speech or dependency edges; in Lin’s approach, it is only dependency edges (features in his terminology) that form the dimensions of the semantic space. Experiment 1 revealed that the dependency-based model adequately simulates semantic priming. Experiment 2 showed that a model that relies on rich context specifications can reliably distinguish between different types of lexical relations. Our results indicate that a number of NLP tasks could potentially benefit from dependency-based models. 
These are particularly relevant for word sense discrimination, automatic thesaurus construction, automatic clustering and in general similarity-based approaches to NLP. References Balota, David A. and Robert Lorch, Jr. 1986. Depth of automatic spreading activation: Mediated priming effects in pronunciation but not in lexical decision. Journal of Experimental Psychology: Learning, Memory and Cognition 12(3):336–45. Burnard, Lou. 1995. Users Guide for the British National Corpus. British National Corpus Consortium, Oxford University Computing Service. Choi, Freddy, Peter Wiemer-Hastings, and Johanna Moore. 2001. Latent Semantic Analysis for text segmentation. In Proceedings of EMNLP 2001. Seattle, WA. Dunning, Ted. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics 19:61–74. Grefenstette, Gregory. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers. Hodgson, James M. 1991. Informational constraints on prelexical priming. Language and Cognitive Processes 6:169– 205. Jones, Michael P. and James H. Martin. 1997. Contextual spelling correction using Latent Semantic Analysis. In Proceedings of the ANLP 97. Keenan, E. and B. Comrie. 1977. Noun phrase accessibility and universal grammar. Linguistic Inquiry (8):62–100. Kilgarriff, Adam and Colin Yallop. 2000. What’s in a thesaurus. In Proceedings of LREC 2000. pages 1371–1379. Landauer, T. and S. Dumais. 1997. A solution to Platos problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review 104(2):211–240. Lee, Lillian. 1999. Measures of distributional similarity. In Proceedings of ACL ’99. pages 25–32. Lin, Dekang. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL 1998. Montréal, Canada, pages 768–511. Lin, Dekang. 2001. LaTaT: Language and text analysis tools. In J. Allan, editor, Proceedings of HLT 2001. Morgan Kaufmann, San Francisco. Livesay, K. and C. Burgess. 1997. Mediated priming in highdimensional meaning space: What is "mediated" in mediated priming? In Proceedings of COGSCI 1997. Lawrence Erlbaum Associates. Lowe, Will. 2001. Towards a theory of semantic space. In Proceedings of COGSCI 2001. Lawrence Erlbaum Associates, pages 576–81. Lowe, Will and Scott McDonald. 2000. The direct route: Mediated priming in semantic space. In Proceedings of COGSCI 2000. Lawrence Erlbaum Associates, pages 675–80. Lund, Kevin and Curt Burgess. 1996. Producing highdimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, and Computers 28:203–8. McDonald, Scott. 2000. Environmental Determinants of Lexical Processing Effort. Ph.D. thesis, University of Edinburgh. Patel, Malti, John A. Bullinaria, and Joseph P. Levy. 1998. Extracting semantic representations from large text corpora. In Proceedings of the 4th Neural Computation and Psychology Workshop. London, pages 199–212. Salton, G, A Wang, and C Yang. 1975. A vector-space model for information retrieval. Journal of the American Society for Information Science 18(613–620). Schütze, Hinrich. 1998. Automatic word sense discrimination. Computational Linguistics 24(1):97–124. Tesnière, Lucien. 1959. Elements de syntaxe structurale. Klincksieck, Paris.
2003
17
Orthogonal Negation in Vector Spaces for Modelling Word-Meanings and Document Retrieval Dominic Widdows ∗ Stanford University [email protected] Abstract Standard IR systems can process queries such as “web NOT internet”, enabling users who are interested in arachnids to avoid documents about computing. The documents retrieved for such a query should be irrelevant to the negated query term. Most systems implement this by reprocessing results after retrieval to remove documents containing the unwanted string of letters. This paper describes and evaluates a theoretically motivated method for removing unwanted meanings directly from the original query in vector models, with the same vector negation operator as used in quantum logic. Irrelevance in vector spaces is modelled using orthogonality, so query vectors are made orthogonal to the negated term or terms. As well as removing unwanted terms, this form of vector negation reduces the occurrence of synonyms and neighbours of the negated terms by as much as 76% compared with standard Boolean methods. By altering the query vector itself, vector negation removes not only unwanted strings but unwanted meanings. 1 Introduction Vector spaces enjoy widespread use in information retrieval (Salton and McGill, 1983; Baeza-Yates and ∗This research was supported in part by the Research Collaboration between the NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University, and by EC/NSF grant IST-1999-11438 for the MUCHMORE project. Ribiero-Neto, 1999), and from this original application vector models have been applied to semantic tasks such as word-sense acquisition (Landauer and Dumais, 1997; Widdows, 2003) and disambiguation (Sch¨utze, 1998). One benefit of these models is that the similarity between pairs of terms or between queries and documents is a continuous function, automatically ranking results rather than giving just a YES/NO judgment. In addition, vector models can be freely built from unlabelled text and so are both entirely unsupervised, and an accurate reflection of the way words are used in practice. In vector models, terms are usually combined to form more complicated query statements by (weighted) vector addition. Because vector addition is commutative, terms are combined in a “bag of words” fashion. While this has proved to be effective, it certainly leaves room for improvement: any genuine natural language understanding of query statements cannot rely solely on commutative addition for building more complicated expressions out of primitives. Other algebraic systems such as Boolean logic and set theory have well-known operations for building composite expressions out of more basic ones. Settheoretic models for the logical connectives ‘AND’, ‘NOT’ and ‘OR’ are completely understood by most researchers, and used by Boolean IR systems for assembling the results to complicated queries. It is clearly desirable to develop a calculus which combines the flexible ranking of results in a vector model with the crisp efficiency of Boolean logic, a goal which has long been recognised (Salton et al., 1983) and attempted mainly for conjunction and disjunction. This paper proposes such a scheme for negation, based upon well-known linear algebra, and which also implies a vector form of disjunction. It turns out that these vector connectives are precisely those used in quantum logic (Birkhoffand von Neumann, 1936), a development which is discussed in much more detail in (Widdows and Peters, 2003). 
Because of its simplicity, our model is easy to understand and to implement. Vector negation is based on the intuition that unrelated meanings should be orthogonal to one another, which is to say that they should have no features in common at all. Thus vector negation generates a ‘meaning vector’ which is completely orthogonal to the negated term. Document retrieval experiments demonstrate that vector negation is not only effective at removing unwanted terms: it is also more effective than other methods at removing their synonyms and related terms. This justifies the claim that, by producing a single query vector for “a NOT b”, we remove not only unwanted strings but also unwanted meanings. We describe the underlying motivation behind this model and define the vector negation and disjunction operations in Section 2. In Section 3 we review other ways negation is implemented in Information Retrieval, comparing and contrasting with vector negation. In Section 4 we describe experiments demonstrating the benefits and drawbacks of vector negation compared with two other methods for negation. 2 Negation and Disjunction in Vector Spaces In this section we use well-known linear algebra to define vector negation in terms of orthogonality and disjunction as the linear sum of subspaces. The mathematical apparatus is covered in greater detail in (Widdows and Peters, 2003). If A is a set (in some universe of discourse U), then ‘NOT A’ corresponds to the complement A⊥of the set A in U (by definition). By a simple analogy, let A be a vector subspace of a vector space V (equipped with a scalar product). Then the concept ‘NOT A’ should correspond to the orthogonal complement A⊥of A under the scalar product (Birkhoffand von Neumann, 1936, §6). If we think of a basis for V as a set of features, this says that ‘NOT A’ refers to the subspace of V which has no features in common with A. We make the following definitions. Let V be a (real) vector space equipped with a scalar product. We will use the notation A ≤V to mean “A is a vector subspace of V .” For A ≤V , define the orthogonal subspace A⊥to be the subspace A⊥≡{v ∈V : ∀a ∈A, a · v = 0}. For the purposes of modelling word-meanings, we might think of ‘orthogonal’ as a model for ‘completely unrelated’ (having similarity score zero). This makes perfect sense for information retrieval, where we assume (for example) that if two words never occur in the same document then they have no features in common. Definition 1 Let a, b ∈V and A, B ≤V . By NOT A we mean A⊥and by NOT a, we mean ⟨a⟩⊥, where ⟨a⟩= {λa : λ ∈R} is the 1-dimensional subspace subspace generated by a. By a NOT B we mean the projection of a onto B⊥and by a NOT b we mean the projection of a onto ⟨b⟩⊥. We now show how to use these notions to perform calculations with individual term or query vectors in a form which is simple to program and efficient to run. Theorem 1 Let a, b ∈V . Then a NOT b is represented by the vector a NOT b ≡a −a · b |b|2 b. where |b|2 = b · b is the modulus of b. Proof. A simple proof is given in (Widdows and Peters, 2003). For normalised vectors, Theorem 1 takes the particularly simple form a NOT b = a −(a · b)b, (1) which in practice is then renormalised for consistency. One computational benefit is that Theorem 1 gives a single vector for a NOT b, so finding the similarity between any other vector and a NOT b is just a single scalar product computation. Disjunction is also simple to envisage, the expression b1 OR . . . OR bn being modelled by the subspace B = {λ1b1 + . . . 
+ λnbn : λi ∈R}. Theoretical motivation for this formulation can be found in (Birkhoffand von Neumann, 1936, §1,§6) and (Widdows and Peters, 2003): for example, B is the smallest subspace of V which contains the set {bj}. Computing the similarity between a vector a and this subspace B is computationally more expensive than for the negation of Theorem 1, because the scalar product of a with (up to) n vectors in an orthogonal basis for B must be computed. Thus the gain we get by comparing each document with the query a NOT b using only one scalar product operation is absent for disjunction. However, this benefit is regained in the case of negated disjunction. Suppose we negate not only one argument but several. If a user specifies that they want documents related to a but not b1, b2, . . . , bn, then (unless otherwise stated) it is clear that they only want documents related to none of the unwanted terms bi (rather than, say, the average of these terms). This motivates a process which can be thought of as a vector formulation of the classical de Morgan equivalence ∼a∧∼b ≡∼(a ∨b), by which the expression a AND NOT b1 AND NOT b2 . . . AND NOT bn is translated to a NOT (b1 OR . . . OR bn). (2) Using Definition 1, this expression can be modelled with a unique vector which is orthogonal to all of the unwanted arguments {b1}. However, unless the vectors b1, . . . , bn are orthogonal (or identical), we need to obtain an orthogonal basis for the subspace b1 OR . . . OR bn before we can implement a higherdimensional version of Theorem 1. This is because the projection operators involved are in general noncommutative, one of the hallmark differences between Boolean and quantum logic. In this way vector negation generates a meaningvector which takes into account the similarities and differences between the negative terms. A query for chip NOT computer, silicon is treated differently from a query for chip NOT computer, potato. Vector negation is capable of realising that for the first query, the two negative terms are referring to the same general topic area, but in the second case the task is to remove radically different meanings from the query. This technique has been used to remove several meanings from a query iteratively, allowing a user to ‘home in on’ the desired meaning by systematically pruning away unwanted features. 2.1 Initial experiments modelling word-senses Our first experiments with vector negation were to determine whether the negation operator could find different senses of ambiguous words by negating a word closely related to one of the meanings. A vector space model was built using Latent Semantic Analysis, similar to the systems of (Landauer and Dumais, 1997; Sch¨utze, 1998). The effect of LSA is to increase linear dependency between terms, and for this reason it is likely that LSA is a crucial step in our approach. Terms were indexed depending on their co-occurrence with 1000 frequent “content-bearing words” in a 15 word context-window, giving each term 1000 coordinates. This was reduced to 100 dimensions using singular value decomposition. Later on, document vectors were assigned in the usual manner by summation of term vectors using tf-idf weighting (Salton and McGill, 1983, p. 121). Vectors were normalised, so that the standard (Euclidean) scalar product and cosine similarity coincided. This scalar product was used as a measure of term-term and term-document similarity throughout our experiments. 
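The negation operator of Theorem 1 and the negated disjunction of Equation 2 are straightforward to implement once term vectors are available. The following is a minimal Python/NumPy sketch, not the authors' implementation: the orthonormal basis for the negated subspace is obtained with a QR decomposition rather than explicit Gram-Schmidt, and the random vectors are hypothetical stand-ins for real term vectors from the reduced space.

```python
import numpy as np

def negate(a, b):
    """a NOT b (Theorem 1): remove from a its projection onto b."""
    b_hat = b / np.linalg.norm(b)
    v = a - np.dot(a, b_hat) * b_hat
    return v / np.linalg.norm(v)

def negate_all(a, negated_terms):
    """a NOT (b1 OR ... OR bn) (Equation 2): make a orthogonal to the
    subspace spanned by the negated vectors. QR supplies an orthonormal
    basis for that subspace, standing in for explicit Gram-Schmidt."""
    B = np.column_stack(negated_terms)
    Q, _ = np.linalg.qr(B)                 # orthonormal basis of span{b1, ..., bn}
    v = a - Q @ (Q.T @ a)                  # subtract the projection onto the subspace
    return v / np.linalg.norm(v)

def similarity(u, v):
    """Scalar product of normalised vectors, i.e. cosine similarity."""
    return float(np.dot(u, v))

# Hypothetical 100-dimensional term vectors (real ones would come from the
# SVD-reduced co-occurrence space described above).
rng = np.random.default_rng(0)
suit, lawsuit, court = (rng.normal(size=100) for _ in range(3))
q = negate_all(suit, [lawsuit, court])
print(similarity(q, lawsuit / np.linalg.norm(lawsuit)))   # ~0 by construction
```

Because a NOT b is itself a single vector, scoring a document against a negated query still costs only one scalar product, which is the efficiency point made above.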
This method was used because it has been found to be effective at producing good term-term similarities for word-sense disambiguation (Sch¨utze, 1998) and automatic lexical acquisition (Widdows, 2003), and these similarities were used to generate interesting queries and to judge the effectiveness of different forms of negation. More details on the building of this vector space model can be found in (Widdows, 2003; Widdows and Peters, 2003). suit suit NOT lawsuit suit 1.000000 pants 0.810573 lawsuit 0.868791 shirt 0.807780 suits 0.807798 jacket 0.795674 plaintiff 0.717156 silk 0.781623 sued 0.706158 dress 0.778841 plaintiffs 0.697506 trousers 0.771312 suing 0.674661 sweater 0.765677 lawsuits 0.664649 wearing 0.764283 damages 0.660513 satin 0.761530 filed 0.655072 plaid 0.755880 behalf 0.650374 lace 0.755510 appeal 0.608732 worn 0.755260 Terms related to ‘suit NOT lawsuit’ (NYT data) play play NOT game play 1.000000 play 0.779183 playing 0.773676 playing 0.658680 plays 0.699858 role 0.594148 played 0.684860 plays 0.581623 game 0.626796 versatility 0.485053 offensively 0.597609 played 0.479669 defensively 0.546795 roles 0.470640 preseason 0.544166 solos 0.448625 midfield 0.540720 lalas 0.442326 role 0.535318 onstage 0.438302 tempo 0.504522 piano 0.438175 score 0.475698 tyrone 0.437917 Terms related to ‘play NOT game’ (NYT data) Table 1: First experiments with negation and wordsenses Two early results using negation to find senses of ambiguous words are given in Table 1, showing that vector negation is very effective for removing the ‘legal’ meaning from the word suit and the ‘sporting’ meaning from the word play, leaving respectively the ‘clothing’ and ‘performance’ meanings. Note that removing a particular word also removes concepts related to the negated word. This gives credence to the claim that our mathematical model is removing the meaning of a word, rather than just a string of characters. This encouraged us to set up a larger scale experiment to test this hypothesis, which is described in Section 4. 3 Other forms of Negation in IR There have been rigourous studies of Boolean operators for information retrieval, including the pnorms of Salton et al. (1983) and the matrix forms of Turtle and Croft (1989), which have focussed particularly on mathematical expressions for conjunction and disjunction. However, typical forms of negation (such as NOT p = 1−p) have not taken into account the relationship between the negated argument and the rest of the query. Negation has been used in two main forms in IR systems: for the removal of unwanted documents after retrieval and for negative relevance feedback. We describe these methods and compare them with vector negation. 3.1 Negation by filtering results after retrieval A traditional Boolean search for documents related to the query a NOT b would return simply those documents which contain the term a and do not contain the term b. More formally, let D be the document collection and let Di ⊂D be the subset of documents containing the term i. Then the results to the Boolean query for a NOT b would be the set Da∩D′ b, where D′ b is the complement of Db in D. Variants of this are used within a vector model, by using vector retrieval to retrieve a (ranked) set of relevant documents and then ‘throwing away’ documents containing the unwanted terms (Salton and McGill, 1983, p. 26). This paper will refer to such methods under the general heading of ‘post-retrieval filtering’. 
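For comparison, the post-retrieval filtering baseline just described can be sketched as follows; the document representation (`doc_vectors`, `doc_tokens`) is a hypothetical stand-in for whatever index a vector IR system maintains.

```python
import numpy as np

def post_retrieval_filter(query_vec, doc_vectors, doc_tokens, negated_term, k=20):
    """Retrieve by cosine similarity to the positive term only, then discard
    any retrieved document containing the unwanted string (the Da intersected
    with the complement of Db behaviour described above)."""
    sims = doc_vectors @ query_vec            # all vectors assumed normalised
    ranked = np.argsort(-sims)
    kept = [int(i) for i in ranked if negated_term not in doc_tokens[i]]
    return kept[:k]
```

Note that the query vector itself is untouched here; the contrast with vector negation, which alters the query before retrieval, is what the following arguments turn on.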
There are at least three reasons for preferring vector negation to post-retrieval filtering. Firstly, postretrieval filtering is not very principled and is subject to error: for example, it would remove a long document containing only one instance of the unwanted term. One might argue here that if a document containing unwanted terms is given a ‘negative-score’ rather than just disqualified, this problem is avoided. This would leaves us considering a combined score, sim(d, a NOT b) = d · a −λd · b for some parameter λ. However, since this is the same as d · (a −λb), it is computationally more efficient to treat a −λb as a single vector. This is exactly what vector negation accomplishes, and also determines a suitable value of λ from a and b. Thus a second benefit for vector negation is that it produces a combined vector for a NOT b which enables the relevance score of each document to be computed using just one scalar product operation. The third gain is that vector retrieval proves to be better at removing not only an unwanted term but also its synonyms and related words (see Section 4), which is clearly desirable if we wish to remove not only a string of characters but the meaning represented by this string. 3.2 Negative relevance feedback Relevance feedback has been shown to improve retrieval (Salton and Buckley, 1990). In this process, documents judged to be relevant have (some multiple of) their document vector added to the query: documents judged to be non-relevant have (some multiple of) their document vector subtracted from the query, producing a new query according to the formula Qi+1 = αQi + β X rel Di |Di| −γ X nonrel Di |Di|, where Qi is the ith query vector, Di is the set of documents returned by Qi which has been partitioned into relevant and non-relevant subsets, and α, β, γ ∈ R are constants. Salton and Buckley (1990) report best results using β = 0.75 and γ = 0.25. The positive feedback part of this process has become standard in many search engines with options such as “More documents like this” or “Similar pages”. The subtraction option (called ‘negative relevance feedback’) is much rarer. A widely held opinion is that that negative feedback is liable to harm retrieval, because it may move the query away from relevant as well as non-relevant documents (Kowalski, 1997, p. 160). The concepts behind negative relevance feedback are discussed instructively by Dunlop (1997). Negative relevance feedback introduces the idea of subtracting an unwanted vector from a query, but gives no general method for deciding “how much to subtract”. We shall refer to such methods as ‘Constant Subtraction’. Dunlop (1997, p. 139) gives an analysis which leads to a very intuitive reason for preferring vector negation over constant subtraction. If a user removes an unwanted term which the model deems to be closely related to the desired term, this should have a strong effect, because there is a significant ‘difference of opinion’ between the user and the model. (From an even more informal point of view, why would anyone take the trouble to remove a meaning that isn’t there anyway?). With any kind of constant subtraction, however, the removal of distant points has a greater effect on the final querystatement than the removal of nearby points. Vector negation corrects this intuitive mismatch. Recall from Equation 1 that (using normalised vectors for simplicity) the vector a NOT b is given by a −(a · b)b. The similarity of a with a NOT b is therefore a · (a −(a · b)b) = 1 −(a · b)2. 
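The identity just derived is easy to confirm numerically; the vectors below are random stand-ins for normalised term vectors.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.normal(size=50), rng.normal(size=50)
a, b = a / np.linalg.norm(a), b / np.linalg.norm(b)

a_not_b = a - np.dot(a, b) * b                      # Equation 1, before renormalising
print(np.dot(a, a_not_b), 1 - np.dot(a, b) ** 2)    # the two values agree
```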
The closer a and b are, the greater the (a · b)2 factor becomes, so the similarity of a with a NOT b becomes smaller the closer a is to b. This coincides exactly with Dunlop’s intuitive view: removing a concept which in the model is very close to the original query has a large effect on the outcome. Negative relevance feedback introduces the idea of subtracting an unwanted vector from a query, but gives no general method for deciding ‘how much to subtract’. We shall refer to such methods as ‘Constant Subtraction’. 4 Evaluation and Results This section describes experiments which compare the three methods of negation described above (postretrieval filtering, constant subtraction and vector negation) with the baseline alternative of no negation at all. The experiments were carried out using the vector space model described in Section 2.1. To judge the effectiveness of different methods at removing unwanted meanings, with a large number of queries, we made the following assumptions. A document which is relevant to the meaning of ‘term a NOT term b’ should contain as many references to term a and as few references to term b as possible. Close neighbours and synonyms of term b are undesirable as well, since if they occur the document in question is likely to be related to the negated term even if the negated term itself does not appear. 4.1 Queries and results for negating single and multiple terms 1200 queries of the form ‘term a NOT term b’ were generated for 3 different document collections. The terms chosen were the 100 most frequently occurring (non-stop) words in the collection, 100 mid-frequency words (the 1001st to 1100th most frequent), and 100 low-frequency words (the 5001st to 5100th most frequent). The nearest neighbour (word with highest cosine similarity) to each positive term was taken to be the negated term. (This assumes that a user is most likely to want to remove a meaning closely related to the positive term: there is no point in removing unrelated information which would not be retrieved anyway.) In addition, for the 100 most frequent words, an extra retrieval task was performed with the roles of the positive term and the negated term reversed, so that in this case the system was being asked to remove the very most common words in the collection from a query generated by their nearest neighbour. We anticipated that this would be an especially difficult task, and a particularly realistic one, simulating a user who is swamped with information about a ‘popular topic’ in which they are not interested.1 The document collections used were from the British National Corpus (published by Oxford University, the textual data consisting of ca 90M words, 85K documents), the New York Times News Syndicate (1994-96, from the North American News Text Corpus published by the Linguistic Data Consortium, ca 143M words, 370K documents) and the Ohsumed corpus of medical documents (Hersh et al., 1994) (ca 40M words, 230K documents). The 20 documents most relevant to each query were obtained using each of the following four techniques. • No negation. The query was just the positive term and the negated term was ignored. • Post-retrieval filtering. After vector retrieval using only the positive term as the query term, documents containing the negated term were eliminated. • Constant subtraction. Experiments were performed with a variety of subtraction constants. The query a NOT b was thus given the vector a−λb for some λ ∈[0, 1]. 
The results recorded in this paper were obtained using λ = 0.75, which gives a direct comparison with vector negation. • Vector negation, as described in this paper. For each set of retrieved documents, the following results were counted. • The relative frequency of the positive term. • The relative frequency of the negated term. • The relative frequency of the ten nearest neighbours of the negative term. One slight subtlety here is that the positive term was itself a close 1For reasons of space we do not show the retrieval performance on query terms of different frequencies in this paper, though more detailed results are available from the author on request. neighbour of the negated term: to avoid inconsistency, we took as ‘negative neighbours’ only those which were closer to the negated term than to the positive term. • The relative frequency of the synonyms of the negated term, as given by the WordNet database (Fellbaum, 1998). As above, words which were also synonyms of the positive term were discounted. On the whole fewer such synonyms were found in the Ohsumed and NYT documents, which have many medical terms and proper names which are not in WordNet. Additional experiments were carried out to compare the effectiveness of different forms of negation at removing several unwanted terms. The same 1200 queries were used as above, and the next nearest neighbour was added as a further negative argument. For two negated terms, the post-retrieval filtering process worked by discarding documents containing either of the negative terms. Constant subtraction worked by subtracting a constant multiple of each of the negated terms from the query. Vector negation worked by making the query vector orthogonal to the plane generated by the two negated terms, as in Equation 2. Results were collected in much the same way as the results for single-argument negation. Occurrences of each of the negated terms were added together, as were occurrences of the neighbours and WordNet synonyms of either of the negated words. The results of our experiments are collected in Table 2 and summarised in Figure 1. The results for a single negated term demonstrate the following points. • All forms of negation proved extremely good at removing the unwanted words. This is trivially true for post-retrieval filtering, which works by discarding any documents that contain the negated term. It is more interesting that constant subtraction and vector negation performed so well, cutting occurrences of the negated word by 82% and 85% respectively compared with the baseline of no negation. • On average, using no negation at all retrieved the most positive terms, though not in every case. While this upholds the claim that any form of negation is likely to remove relevant as well as irrelevant results, the damage done was only around 3% for post-retrieval filtering and 25% for constant and vector negation. • These observations alone would suggest that post-retrieval filtering is the best method for the simple goal of maximising occurrences of the positive term while minimising the occurrences of the negated term. However, vector negation and constant subtraction dramatically outperformed post-retrieval filtering at removing neighbours of the negated terms, and were reliably better at removing WordNet synonyms as well. 
We believe this to be good evidence that, while post-search filtering is by definition better at removing unwanted strings, the vector methods (either orthogonal or constant subtraction) are much better at removing unwanted meanings. Preliminary observations suggest that in the cases where vector negation retrieves fewer occurrences of the positive term than other methods, the other methods are often retrieving documents that are still related in meaning to the negated term. • Constant subtraction can give similar results to vector negation on these queries (though the vector negation results are slightly better). This is with queries where the negated term is the closest neighbour of the positive term, and the assumption that the similarity between these pairs is around 0.75 is a reasonable approximation. However, further experiments with a variety of negated arguments chosen at random from a list of neighbours demonstrated that in this more general setting, the flexibility provided by vector negation produced conclusively better results than constant subtraction for any single fixed constant. In addition, the results for removing multiple negated terms demonstrate the following points. • Removing another negated term further reduces the retrieval of the positive term for all forms of negation. Constant subtraction is the worst affected, performing noticeably worse than vector negation. • All three forms of negation still remove many occurrences of the negated term. Vector negation and (trivially) post-search filtering perform as well as they do with a single negated term. However, constant subtraction performs much worse, retrieving more than twice as many unwanted terms as vector negation. • Post-retrieval filtering was even less effective at removing neighbours of the negated term than with a single negated term. Constant subtraction also performed much less well. Vector negation was by far the best method for removing negative neighbours. The same observation 1 negated term 2 negated terms BNC NYT Ohsumed BNC NYT Ohsumed No Negation Positive term 0.53 1.18 2.57 0.53 1.18 2.57 Negated term 0.37 0.66 1.26 0.45 0.82 1.51 Negative neighbours 0.49 0.74 0.45 0.69 1.10 0.71 Negative synonyms 0.24 0.22 0.10 0.42 0.42 0.20 Post-retrieval Positive term 0.61 1.03 2.51 0.58 0.91 2.35 filtering Negated term 0 0 0 0 0 0 Negative neighbours 0.31 0.46 0.39 0.55 0.80 0.67 Negative synonyms 0.19 0.22 0.10 0.37 0.39 0.37 Constant Positive term 0.52 0.82 1.88 0.42 0.70 1.38 Subtraction Negated term 0.09 0.13 0.20 0.18 0.21 0.35 Negative neighbours 0.08 0.11 0.14 0.30 0.33 0.18 Negative synonyms 0.19 0.16 0.07 0.33 0.29 0.12 Vector Positive term 0.50 0.83 1.85 0.45 0.69 1.51 Negation Negated term 0.08 0.12 0.16 0.08 0.11 0.15 Negative neighbours 0.10 0.10 0.10 0.17 0.16 0.16 Negative synonyms 0.18 0.16 0.07 0.31 0.27 0.12 Table 2: Table of results showing the percentage frequency of different terms in retrieved documents Average results across corpora for one negated term 0 1 No negation Post-retrieval filtering Constant Subtraction Vector negation % frequency Average results across corpora for two negated terms 0 1 No negation Post-retrieval filtering Constant Subtraction Vector negation % frequency Positive Term Negated Term Vector Neighbours of Negated Word WordNet Synonyms of Negated Word Figure 1: Barcharts summarising results of Table 2 holds for WordNet synonyms, though the results are less pronounced. 
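One plausible way to implement the bookkeeping behind Table 2 is sketched below; the exact normalisation used for "relative frequency" is an assumption on our part, as are the function and argument names.

```python
from collections import Counter

def relative_frequencies(retrieved_docs, positive, negated, neighbours, synonyms):
    """Percentage of tokens in the retrieved documents falling into each of the
    four classes counted in Section 4 (assumed here to be token-level percentages)."""
    counts, total = Counter(), 0
    for tokens in retrieved_docs:                 # each document as a token list
        total += len(tokens)
        for t in tokens:
            if t == positive:
                counts["positive term"] += 1
            elif t == negated:
                counts["negated term"] += 1
            elif t in neighbours:
                counts["negative neighbours"] += 1
            elif t in synonyms:
                counts["negative synonyms"] += 1
    return {label: 100.0 * n / total for label, n in counts.items()}
```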
This shows that vector negation is capable of removing unwanted terms and their related words from retrieval results, while retaining more occurrences of the original query term than constant subtraction. Vector negation does much better than other methods at removing neighbours and synonyms, and we therefore expect that it is better at removing documents referring to unwanted meanings of ambiguous words. Experiments with sense-tagged data are planned to test this hypothesis. The goal of these experiments was to evaluate the extent to which the different methods could remove unwanted meanings, which we measured by counting the frequency of unwanted terms and concepts in retrieved documents. This leaves the problems of determining the optimal scope for the negation quantifier for an IR system, and of developing a natural user interface for this process for complex queries. These important challenges are beyond the scope of this paper, but would need to be addressed to incorporate vector negation into a state-of-the-art IR system. 5 Conclusions Traditional branches of science have exploited the structure inherent in vector spaces and developed rigourous techniques which could contribute to natural language processing. As an example of this potential fertility, we have adapted the negation and disjunction connectives used in quantum logic to the tasks of word-sense discrimination and information retrieval. Experiments focussing on the use of vector negation to remove individual and multiple terms from queries have shown that this is a powerful and efficient tool for removing both unwanted terms and their related meanings from retrieved documents. Because it associates a unique vector to each query statement involving negation, the similarity between each document and the query can be calculated using just one scalar product computation, a considerable gain in efficiency over methods which involve some form of post-retrieval filtering. We hope that these preliminary aspects will be initial gains in developing a concrete and effective system for learning, representing and composing aspects of lexical meaning. Demonstration An interactive demonstration of negation for word similarity and document retrieval is publicly available at http://infomap.stanford.edu/webdemo. References Ricardo Baeza-Yates and Berthier Ribiero-Neto. 1999. Modern Information Retrieval. Addison Wesley / ACM Press. Garrett Birkhoffand John von Neumann. 1936. The logic of quantum mechanics. Annals of Mathematics, 37:823–843. Mark Dunlop. 1997. The effect of accessing nonmatching documents on relevance feedback. ACM Transactions on Information Systems, 15(2):137– 153, April. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge MA. William Hersh, Chris Buckley, T. J. Leone, and David Hickam. 1994. Ohsumed: An interactive retrieval evaluation and new large test collection for research. In Proceedings of the 17th Annual ACM SIGIR Conference, pages 192–201. Gerald Kowalski. 1997. Information retrieval systems: theory and implementation. Kluwer academic publishers, Norwell, MA. Thomas Landauer and Susan Dumais. 1997. A solution to plato’s problem: The latent semantic analysis theory of acquisition. Psychological Review, 104(2):211–240. Gerard Salton and Chris Buckley. 1990. Improving retrieval performance by relevance feedback. Journal of the American society for information science, 41(4):288–297. Gerard Salton and Michael McGill. 1983. Introduction to modern information retrieval. 
McGrawHill, New York, NY. Gerard Salton, Edward A. Fox, and Harry Wu. 1983. Extended boolean information retrieval. Communications of the ACM, 26(11):1022–1036, November. Hinrich Sch¨utze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97– 124. Howard Turtle and W. Bruce Croft. 1989. Inference networks for document retrieval. In Proceedings of the 13th Annual ACM SIGIR Conference, pages 1–24. Dominic Widdows and Stanley Peters. 2003. Word vectors and quantum logic. In Mathematics of Language 8, Bloomington, Indiana. Dominic Widdows. 2003. Unsupervised methods for developing taxonomies by combining syntactic and statistical information. HLT-NAACL, Edmonton, Canada.
2003
18
A Comparative Study on Reordering Constraints in Statistical Machine Translation Richard Zens and Hermann Ney Chair of Computer Science VI RWTH Aachen - University of Technology {zens,ney}@cs.rwth-aachen.de Abstract In statistical machine translation, the generation of a translation hypothesis is computationally expensive. If arbitrary word-reorderings are permitted, the search problem is NP-hard. On the other hand, if we restrict the possible word-reorderings in an appropriate way, we obtain a polynomial-time search algorithm. In this paper, we compare two different reordering constraints, namely the ITG constraints and the IBM constraints. This comparison includes a theoretical discussion on the permitted number of reorderings for each of these constraints. We show a connection between the ITG constraints and the Schröder numbers, which have been known since 1870. We evaluate these constraints on two tasks: the Verbmobil task and the Canadian Hansards task. The evaluation consists of two parts: First, we check how many of the Viterbi alignments of the training corpus satisfy each of these constraints. Second, we restrict the search to each of these constraints and compare the resulting translation hypotheses. The experiments will show that the baseline ITG constraints are not sufficient on the Canadian Hansards task. Therefore, we present an extension to the ITG constraints. These extended ITG constraints increase the alignment coverage from about 87% to 96%. 1 Introduction In statistical machine translation, we are given a source language (‘French’) sentence $f_1^J = f_1 \ldots f_j \ldots f_J$, which is to be translated into a target language (‘English’) sentence $e_1^I = e_1 \ldots e_i \ldots e_I$. Among all possible target language sentences, we will choose the sentence with the highest probability: $$\hat{e}_1^I = \operatorname*{argmax}_{e_1^I} \{\Pr(e_1^I \mid f_1^J)\} \quad (1)$$ $$= \operatorname*{argmax}_{e_1^I} \{\Pr(e_1^I) \cdot \Pr(f_1^J \mid e_1^I)\} \quad (2)$$ The decomposition into two knowledge sources in Eq. 2 is the so-called source-channel approach to statistical machine translation (Brown et al., 1990). It allows an independent modeling of the target language model $\Pr(e_1^I)$ and the translation model $\Pr(f_1^J \mid e_1^I)$. The target language model describes the well-formedness of the target language sentence. The translation model links the source language sentence to the target language sentence. It can be further decomposed into alignment and lexicon models. The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. We have to maximize over all possible target language sentences. In this paper, we will focus on the alignment problem, i.e. the mapping between source sentence positions and target sentence positions. As the word order in source and target language may differ, the search algorithm has to allow certain word-reorderings. If arbitrary word-reorderings are allowed, the search problem is NP-hard (Knight, 1999). Therefore, we have to restrict the possible reorderings in some way to make the search problem feasible. Here, we will discuss two such constraints in detail. The first constraints are based on inversion transduction grammars (ITG) (Wu, 1995; Wu, 1997). In the following, we will call these the ITG constraints. The second constraints are the IBM constraints (Berger et al., 1996). In the next section, we will describe these constraints from a theoretical point of view. Then, we will describe the resulting search algorithm and its extension for word graph generation. 
Afterwards, we will analyze the Viterbi alignments produced during the training of the alignment models. Then, we will compare the translation results when restricting the search to either of these constraints. 2 Theoretical Discussion In this section, we will discuss the reordering constraints from a theoretical point of view. We will answer the question of how many word-reorderings are permitted for the ITG constraints as well as for the IBM constraints. Since we are only interested in the number of possible reorderings, the specific word identities are of no importance here. Furthermore, we assume a one-to-one correspondence between source and target words. Thus, we are interested in the number of word-reorderings, i.e. permutations, that satisfy the chosen constraints. First, we will consider the ITG constraints. Afterwards, we will describe the IBM constraints. 2.1 ITG Constraints Let us now consider the ITG constraints. Here, we interpret the input sentence as a sequence of blocks. In the beginning, each position is a block of its own. Then, the permutation process can be seen as follows: we select two consecutive blocks and merge them to a single block by choosing between two options: either keep them in monotone order or invert the order. This idea is illustrated in Fig. 1. The white boxes represent the two blocks to be merged. Now, we investigate, how many permutations are obtainable with this method. A permutation derived by the above method can be represented as a binary tree where the inner nodes are colored either black or white. At black nodes the resulting sequences of the children are inverted. At white nodes they are kept in monotone order. This representation is equivalent to source positions target positions without inversion with inversion Figure 1: Illustration of monotone and inverted concatenation of two consecutive blocks. the parse trees of the simple grammar in (Wu, 1997). We observe that a given permutation may be constructed in several ways by the above method. For instance, let us consider the identity permutation of 1, 2, ..., n. Any binary tree with n nodes and all inner nodes colored white (monotone order) is a possible representation of this permutation. To obtain a unique representation, we pose an additional constraint on the binary trees: if the right son of a node is an inner node, it has to be colored with the opposite color. With this constraint, each of these binary trees is unique and equivalent to a parse tree of the ’canonical-form’ grammar in (Wu, 1997). In (Shapiro and Stephens, 1991), it is shown that the number of such binary trees with n nodes is the (n −1)th large Schr¨oder number Sn−1. The (small) Schr¨oder numbers have been first described in (Schr¨oder, 1870) as the number of bracketings of a given sequence (Schr¨oder’s second problem). The large Schr¨oder numbers are just twice the Schr¨oder numbers. Schr¨oder remarked that the ratio between two consecutive Schr¨oder numbers approaches 3 + 2 √ 2 = 5.8284... . A second-order recurrence for the large Schr¨oder numbers is: (n + 1)Sn = 3(2n −1)Sn−1 −(n −2)Sn−2 with n ≥2 and S0 = 1, S1 = 2. The Schr¨oder numbers have many combinatorical interpretations. Here, we will mention only two of them. The first one is another way of viewing at the ITG constraints. The number of permutations of the sequence 1, 2, ..., n, which avoid the subsequences (3, 1, 4, 2) and (2, 4, 1, 3), is the large Schr¨oder number Sn−1. More details on forbidden subsequences can be found in (West, 1995). 
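The recurrence and the forbidden-subsequence characterisation can be cross-checked with a short brute-force script; this is only a sanity check for small n under our reading of the definitions, not part of any translation system.

```python
from itertools import combinations, permutations

def large_schroeder(n):
    """Large Schröder numbers S_0, S_1, ... via the recurrence above:
    (n+1) S_n = 3(2n-1) S_{n-1} - (n-2) S_{n-2}, with S_0 = 1, S_1 = 2."""
    S = [1, 2]
    for k in range(2, n + 1):
        S.append((3 * (2 * k - 1) * S[k - 1] - (k - 2) * S[k - 2]) // (k + 1))
    return S[n]

def contains(perm, pattern):
    """True if perm has a subsequence in the same relative order as pattern."""
    m = len(pattern)
    for idx in combinations(range(len(perm)), m):
        sub = [perm[i] for i in idx]
        if all((sub[i] < sub[j]) == (pattern[i] < pattern[j])
               for i in range(m) for j in range(m)):
            return True
    return False

def itg_count(n):
    """Permutations of 1..n avoiding the 'inside-out' patterns (3,1,4,2), (2,4,1,3)."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not contains(p, (3, 1, 4, 2)) and not contains(p, (2, 4, 1, 3)))

for n in range(1, 7):
    assert itg_count(n) == large_schroeder(n - 1)   # 1, 2, 6, 22, 90, 394
```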
The interesting point is that a search with the ITG constraints cannot generate a word-reordering that contains one of these two subsequences. In (Wu, 1997), these forbidden subsequences are called ’inside-out’ transpositions. Another interpretation of the Schr¨oder numbers is given in (Knuth, 1973): The number of permutations that can be sorted with an output-restricted doubleended queue (deque) is exactly the large Schr¨oder number. Additionally, Knuth presents an approximation for the large Schr¨oder numbers: Sn ≈ c · (3 + √ 8)n · n−3 2 (3) where c is set to 1 2 q (3 √ 2 −4)/π. This approximation function confirms the result of Schr¨oder, and we obtain Sn ∈Θ((3 + √ 8)n), i.e. the Schr¨oder numbers grow like (3 + √ 8)n ≈5.83n. 2.2 IBM Constraints In this section, we will describe the IBM constraints (Berger et al., 1996). Here, we mark each position in the source sentence either as covered or uncovered. In the beginning, all source positions are uncovered. Now, the target sentence is produced from bottom to top. A target position must be aligned to one of the first k uncovered source positions. The IBM constraints are illustrated in Fig. 2. J uncovered position covered position uncovered position for extension 1 j Figure 2: Illustration of the IBM constraints. For most of the target positions there are k permitted source positions. Only towards the end of the sentence this is reduced to the number of remaining uncovered source positions. Let n denote the length of the input sequence and let rn denote the permitted number of permutations with the IBM constraints. Then, we obtain: rn = ½ kn−k · k! n > k n! n ≤k (4) Typically, k is set to 4. In this case, we obtain an asymptotic upper and lower bound of 4n, i.e. rn ∈ Θ(4n). In Tab. 1, the ratio of the number of permitted reorderings for the discussed constraints is listed as a function of the sentence length. We see that for longer sentences the ITG constraints allow for more reorderings than the IBM constraints. For sentences of length 10 words, there are about twice as many reorderings for the ITG constraints than for the IBM constraints. This ratio steadily increases. For longer sentences, the ITG constraints allow for much more flexibility than the IBM constraints. 3 Search Now, let us get back to more practical aspects. Reordering constraints are more or less useless, if they do not allow the maximization of Eq. 2 to be performed in an efficient way. Therefore, in this section, we will describe different aspects of the search algorithm for the ITG constraints. First, we will present the dynamic programming equations and the resulting complexity. Then, we will describe pruning techniques to accelerate the search. Finally, we will extend the basic algorithm for the generation of word graphs. 3.1 Algorithm The ITG constraints allow for a polynomial-time search algorithm. It is based on the following dynamic programming recursion equations. During the search a table Qjl,jr,eb,et is constructed. Here, Qjl,jr,eb,et denotes the probability of the best hypothesis translating the source words from position jl (left) to position jr (right) which begins with the target language word eb (bottom) and ends with the word et (top). This is illustrated in Fig. 3. Here, we initialize this table with monotone translations of IBM Model 4. Therefore, Q0 jl,jr,eb,et denotes the probability of the best monotone hypothesis of IBM Model 4. Alternatively, we could use any other single-word based lexicon as well as phrasebased models for this initialization. 
Our choice is the IBM Model4 to make the results as comparable Table 1: Ratio of the number of permitted reorderings with the ITG constraints Sn−1 and the IBM constraints rn for different sentence lengths n. n 1 ... 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 Sn−1/rn ≈1.0 1.2 1.4 1.7 2.1 2.6 3.4 4.3 5.6 7.4 9.8 13.0 17.4 23.3 31.4 jl jr e b et Figure 3: Illustration of the Q-table. as possible to the search with the IBM constraints. We introduce a new parameter pm (mˆ= monotone), which denotes the probability of a monotone combination of two partial hypotheses. Qjl,jr,eb,et = (5) max jl≤k<jr, e′,e′′ n Q0 jl,jr,eb,et, Qjl,k,eb,e′ · Qk+1,jr,e′′,et · p(e′′|e′) · pm, Qk+1,jr,eb,e′ · Qjl,k,e′′,et · p(e′′|e′) · (1 −pm) o We formulated this equation for a bigram language model, but of course, the same method can also be applied for a trigram language model. The resulting algorithm is similar to the CYK-parsing algorithm. It has a worst-case complexity of O(J3 · E4). Here, J is the length of the source sentence and E is the vocabulary size of the target language. 3.2 Pruning Although the described search algorithm has a polynomial-time complexity, even with a bigram language model the search space is very large. A full search is possible but time consuming. The situation gets even worse when a trigram language model is used. Therefore, pruning techniques are obligatory to reduce the translation time. Pruning is applied to hypotheses that translate the same subsequence fjr jl of the source sentence. We use pruning in the following two ways. The first pruning technique is histogram pruning: we restrict the number of translation hypotheses per sequence fjr jl . For each sequence fjr jl , we keep only a fixed number of translation hypotheses. The second pruning technique is threshold pruning: the idea is to remove all hypotheses that have a low probability relative to the best hypothesis. Therefore, we introduce a threshold pruning parameter q, with 0 ≤q ≤1. Let Q∗ jl,jr denote the maximum probability of all translation hypotheses for fjr jl . Then, we prune a hypothesis iff: Qjl,jr,eb,et < q · Q∗ jl,jr Applying these pruning techniques the computational costs can be reduced significantly with almost no loss in translation quality. 3.3 Generation of Word Graphs The generation of word graphs for a bottom-top search with the IBM constraints is described in (Ueffing et al., 2002). These methods cannot be applied to the CYK-style search for the ITG constraints. Here, the idea for the generation of word graphs is the following: assuming we already have word graphs for the source sequences fk jl and fjr k+1, then we can construct a word graph for the sequence fjr jl by concatenating the partial word graphs either in monotone or inverted order. Now, we describe this idea in a more formal way. A word graph is a directed acyclic graph (dag) with one start and one end node. The edges are annotated with target language words or phrases. We also allow ϵ-transitions. These are edges annotated with the empty word. Additionally, edges may be annotated with probabilities of the language or translation model. Each path from start node to end node represents one translation hypothesis. The probability of this hypothesis is calculated by multiplying the probabilities along the path. During the search, we have to combine two word graphs in either monotone or inverted order. This is done in the following way: we are given two word graphs w1 and w2 with start and end nodes (s1, g1) and (s2, g2), respectively. 
First, we add an ϵ-transition (g1, s2) from the end node of the first graph w1 to the start node of the second graph w2 and annotate this edge with the probability of a monotone concatenation pm. Second, we create a copy of each of the original word graphs w1 and w2. Then, we add an ϵ-transition (g2, s1) from the end node of the copied second graph to the start node of the copied first graph. This edge is annotated with the probability of a inverted concatenation 1 −pm. Now, we have obtained two word graphs: one for a monotone and one for a inverted concatenation. The final word graphs is constructed by merging the two start nodes and the two end nodes, respectively. Let W(jl, jr) denote the word graph for the source sequence fjr jl . This graph is constructed from the word graphs of all subsequences of (jl, jr). Therefore, we assume, these word graphs have already been produced. For all source positions k with jl ≤k < jr, we combine the word graphs W(jl, k) and W(k + 1, jr) as described above. Finally, we merge all start nodes of these graphs as well as all end nodes. Now, we have obtained the word graph W(jl, jr) for the source sequence fjr jl . As initialization, we use the word graphs of the monotone IBM4 search. 3.4 Extended ITG constraints In this section, we will extend the ITG constraints described in Sec. 2.1. This extension will go beyond basic reordering constraints. We already mentioned that the use of consecutive phrases within the ITG approach is straightforward. The only thing we have to change is the initialization of the Q-table. Now, we will extend this idea to phrases that are non-consecutive in the source language. For this purpose, we adopt the view of the ITG constraints as a bilingual grammar as, e.g., in (Wu, 1997). For the baseline ITG constraints, the resulting grammar is: A →[AA] | ⟨AA⟩| f/e | f/ϵ | ϵ/e Here, [AA] denotes a monotone concatenation and ⟨AA⟩denotes an inverted concatenation. Let us now consider the case of a source phrase consisting of two parts f1 and f2. Let e denote the corresponding target phrase. We add the productions A →[e/f1 A ϵ/f2] | ⟨e/f1 A ϵ/f2⟩ to the grammar. The probabilities of these productions are, dependent on the translation direction, p(e|f1, f2) or p(f1, f2|e), respectively. Obviously, these productions are not in the normal form of an ITG, but with the method described in (Wu, 1997), they can be normalized. 4 Corpus Statistics In the following sections we will present results on two tasks. Therefore, in this section we will show the corpus statistics for each of these tasks. 4.1 Verbmobil The first task we will present results on is the Verbmobil task (Wahlster, 2000). The domain of this corpus is appointment scheduling, travel planning, and hotel reservation. It consists of transcriptions of spontaneous speech. Table 2 shows the corpus statistics of this corpus. The training corpus (Train) was used to train the IBM model parameters. The remaining free parameters, i.e. pm and the model scaling factors (Och and Ney, 2002), were adjusted on the development corpus (Dev). The resulting system was evaluated on the test corpus (Test). Table 2: Statistics of training and test corpus for the Verbmobil task (PP=perplexity, SL=sentence length). 
German English Train Sentences 58 073 Words 519 523 549 921 Vocabulary 7 939 4 672 Singletons 3 453 1 698 average SL 8.9 9.5 Dev Sentences 276 Words 3 159 3 438 Trigram PP 28.1 average SL 11.5 12.5 Test Sentences 251 Words 2 628 2 871 Trigram PP 30.5 average SL 10.5 11.4 Table 3: Statistics of training and test corpus for the Canadian Hansards task (PP=perplexity, SL=sentence length). French English Train Sentences 1.5M Words 24M 22M Vocabulary 100 269 78 332 Singletons 40 199 31 319 average SL 16.6 15.1 Test Sentences 5432 Words 97 646 88 773 Trigram PP – 179.8 average SL 18.0 16.3 4.2 Canadian Hansards Additionally, we carried out experiments on the Canadian Hansards task. This task contains the proceedings of the Canadian parliament, which are kept by law in both French and English. About 3 million parallel sentences of this bilingual data have been made available by the Linguistic Data Consortium (LDC). Here, we use a subset of the data containing only sentences with a maximum length of 30 words. Table 3 shows the training and test corpus statistics. 5 Evaluation in Training In this section, we will investigate for each of the constraints the coverage of the training corpus alignment. For this purpose, we compute the Viterbi alignment of IBM Model 5 with GIZA++ (Och and Ney, 2000). This alignment is produced without any restrictions on word-reorderings. Then, we check for every sentence if the alignment satisfies each of the constraints. The ratio of the number of satisfied alignments and the total number of sentences is referred to as coverage. Tab. 4 shows the results for the Verbmobil task and for the Canadian Hansards task. It contains the results for both translation directions German-English (S→T) and English-German (T→S) for the Verbmobil task and French-English (S→T) and English-French (T→S) for the Canadian Hansards task, respectively. For the Verbmobil task, the baseline ITG constraints and the IBM constraints result in a similar coverage. It is about 91% for the German-English translation direction and about 88% for the EnglishGerman translation direction. A significantly higher Table 4: Coverage on the training corpus for alignment constraints for the Verbmobil task (VM) and for the Canadian Hansards task (CH). coverage [%] task constraint S→T T→S VM IBM 91.0 88.1 ITG baseline 91.6 87.0 extended 96.5 96.9 CH IBM 87.1 86.7 ITG baseline 81.3 73.6 extended 96.1 95.6 coverage of about 96% is obtained with the extended ITG constraints. Thus with the extended ITG constraints, the coverage increases by about 8% absolute. For the Canadian Hansards task, the baseline ITG constraints yield a worse coverage than the IBM constraints. Especially for the English-French translation direction, the ITG coverage of 73.6% is very low. Again, the extended ITG constraints obtained the best results. Here, the coverage increases from about 87% for the IBM constraints to about 96% for the extended ITG constraints. 6 Translation Experiments 6.1 Evaluation Criteria In our experiments, we use the following error criteria: • WER (word error rate): The WER is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated sentence into the target sentence. • PER (position-independent word error rate): A shortcoming of the WER is the fact that it requires a perfect word order. The PER compares the words in the two sentences ignoring the word order. 
• mWER (multi-reference word error rate): For each test sentence, not only a single reference translation is used, as for the WER, but a whole set of reference translations. For each translation hypothesis, the WER to the most similar sentence is calculated (Nießen et al., 2000). • BLEU score: This score measures the precision of unigrams, bigrams, trigrams and fourgrams with respect to a whole set of reference translations with a penalty for too short sentences (Papineni et al., 2001). BLEU measures accuracy, i.e. large BLEU scores are better. • SSER (subjective sentence error rate): For a more detailed analysis, subjective judgments by test persons are necessary. Each translated sentence was judged by a human examiner according to an error scale from 0.0 to 1.0 (Nießen et al., 2000). 6.2 Translation Results In this section, we will present the translation results for both the IBM constraints and the baseline ITG constraints. We used a single-word based search with IBM Model 4. The initialization for the ITG constraints was done with monotone IBM Model 4 translations. So, the only difference between the two systems are the reordering constraints. In Tab. 5 the results for the Verbmobil task are shown. We see that the results on this task are similar. The search with the ITG constraints yields slightly lower error rates. Some translation examples of the Verbmobil task are shown in Tab. 6. We have to keep in mind, that the Verbmobil task consists of transcriptions of spontaneous speech. Therefore, the source sentences as well as the reference translations may have an unorthodox grammatical structure. In the first example, the German verb-group (“w¨urde vorschlagen”) is split into two parts. The search with the ITG constraints is able to produce a correct translation. With the IBM constraints, it is not possible to translate this verb-group correctly, because the distance between the two parts is too large (more than four words). As we see in the second example, in German the verb of a subordinate clause is placed at the end (“¨ubernachten”). The IBM search is not able to perform the necessary long-range reordering, as it is done with the ITG search. 7 Related Work The ITG constraints were introduced in (Wu, 1995). The applications were, for instance, the segmentation of Chinese character sequences into Chinese “words” and the bracketing of the source sentence into sub-sentential chunks. In (Wu, 1996) the baseline ITG constraints were used for statistical machine translation. The resulting algorithm is similar to the one presented in Sect. 3.1, but here, we use monotone translation hypotheses of the full IBM Model 4 as initialization, whereas in (Wu, 1996) a single-word based lexicon model is used. In (Vilar, 1998) a model similar to Wu’s method was considered. 8 Conclusions We have described the ITG constraints in detail and compared them to the IBM constraints. We draw the following conclusions: especially for long sentences the ITG constraints allow for higher flexibility in word-reordering than the IBM constraints. Regarding the Viterbi alignment in training, the baseline ITG constraints yield a similar coverage as the IBM constraints on the Verbmobil task. On the Canadian Hansards task the baseline ITG constraints were not sufficient. With the extended ITG constraints the coverage improves significantly on both tasks. On the Canadian Hansards task the coverage increases from about 87% to about 96%. 
We have presented a polynomial-time search algorithm for statistical machine translation based on the ITG constraints and its extension for the generation of word graphs. We have shown the translation results for the Verbmobil task. On this task, the translation quality of the search with the baseline ITG constraints is already competitive with the results for the IBM constraints. Therefore, we expect the search with the extended ITG constraints to outperform the search with the IBM constraints. Future work will include the automatic extraction of the bilingual grammar as well as the use of this grammar for the translation process. References A. L. Berger, P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, J. R. Gillett, A. S. Kehler, and R. L. Mercer. 1996. Language translation apparatus and method of using context-based translation models, United States patent, patent number 5510981, April. P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine Table 5: Translation results on the Verbmobil task. type automatic human System WER [%] PER [%] mWER [%] BLEU [%] SSER [%] IBM 46.2 33.3 40.0 42.5 40.8 ITG 45.6 33.9 40.0 37.1 42.0 Table 6: Verbmobil: translation examples. source ja, ich w¨urde den Flug um viertel nach sieben vorschlagen. reference yes, I would suggest the flight at a quarter past seven. ITG yes, I would suggest the flight at seven fifteen. IBM yes, I would be the flight at quarter to seven suggestion. source ich schlage vor, dass wir in Hannover im Hotel Gr¨unschnabel ¨ubernachten. reference I suggest to stay at the hotel Gr¨unschnabel in Hanover. ITG I suggest that we stay in Hanover at hotel Gr¨unschnabel. IBM I suggest that we are in Hanover at hotel Gr¨unschnabel stay. translation. Computational Linguistics, 16(2):79–85, June. K. Knight. 1999. Decoding complexity in wordreplacement translation models. Computational Linguistics, 25(4):607–615, December. D. E. Knuth. 1973. The Art of Computer Programming, volume 1 - Fundamental Algorithms. AddisonWesley, Reading, MA, 2nd edition. S. Nießen, F. J. Och, G. Leusch, and H. Ney. 2000. An evaluation tool for machine translation: Fast evaluation for MT research. In Proc. of the Second Int. Conf. on Language Resources and Evaluation (LREC), pages 39–45, Athens, Greece, May. F. J. Och and H. Ney. 2000. Improved statistical alignment models. In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics (ACL), pages 440–447, Hong Kong, October. F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 295–302, July. K. A. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center, September. E. Schr¨oder. 1870. Vier combinatorische Probleme. Zeitschrift f¨ur Mathematik und Physik, 15:361–376. L. Shapiro and A. B. Stephens. 1991. Boostrap percolation, the Schr¨oder numbers, and the n-kings problem. SIAM Journal on Discrete Mathematics, 4(2):275– 280, May. N. Ueffing, F. J. Och, and H. Ney. 2002. Generation of word graphs in statistical machine translation. In Proc. Conf. on Empirical Methods for Natural Language Processing, pages 156–163, Philadelphia, PA, July. J. M. Vilar. 1998. 
Aprendizaje de Transductores Subsecuenciales para su empleo en tareas de Dominio Restringido. Ph.D. thesis, Universidad Politecnica de Valencia. W. Wahlster, editor. 2000. Verbmobil: Foundations of speech-to-speech translations. Springer Verlag, Berlin, Germany, July. J. West. 1995. Generating trees and the Catalan and Schr¨oder numbers. Discrete Mathematics, 146:247– 262, November. D. Wu. 1995. Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora. In Proc. of the 14th International Joint Conf. on Artificial Intelligence (IJCAI), pages 1328–1334, Montreal, August. D. Wu. 1996. A polynomial-time algorithm for statistical machine translation. In Proc. of the 34th Annual Conf. of the Association for Computational Linguistics (ACL ’96), pages 152–158, Santa Cruz, CA, June. D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403, September.
Using Predicate-Argument Structures for Information Extraction Mihai Surdeanu and Sanda Harabagiu and John Williams and Paul Aarseth Language Computer Corp. Richardson, Texas 75080, USA mihai,[email protected] Abstract In this paper we present a novel, customizable IE paradigm that takes advantage of predicate-argument structures. We also introduce a new way of automatically identifying predicate argument structures, which is central to our IE paradigm. It is based on: (1) an extended set of features; and (2) inductive decision tree learning. The experimental results prove our claim that accurate predicate-argument structures enable high quality IE results. 1 Introduction The goal of recent Information Extraction (IE) tasks was to provide event-level indexing into news stories, including news wire, radio and television sources. In this context, the purpose of the HUB Event-99 evaluations (Hirschman et al., 1999) was to capture information on some newsworthy classes of events, e.g. natural disasters, deaths, bombings, elections, financial fluctuations or illness outbreaks. The identification and selective extraction of relevant information is dictated by templettes. Event templettes are frame-like structures with slots representing the event basic information, such as main event participants, event outcome, time and location. For each type of event, a separate templette is defined. The slots fills consist of excerpts from text with pointers back into the original source material. Templettes are designed to support event-based browsing and search. Figure 1 illustrates a templette defined for “market changes” as well as the source of the slot fillers. <MARKET_CHANGE_PRI199804281700.1717−1>:= CURRENT_VALUE: $308.45 LOCATION: London DATE: daily INSTRUMENT: London [gold] AMOUNT_CHANGE: fell [$4.70] cents London gold fell $4.70 cents to $308.35 Time for our daily market report from NASDAQ. Figure 1: Templette filled with information about a market change event. To date, some of the most successful IE techniques are built around a set of domain relevant linguistic patterns based on select verbs (e.g. fall, gain or lose for the “market change” topic). These patterns are matched against documents for identifying and extracting domain-relevant information. Such patterns are either handcrafted or acquired automatically. A rich literature covers methods of automatically acquiring IE patterns. Some of the most recent methods were reported in (Riloff, 1996; Yangarber et al., 2000). To process texts efficiently and fast, domain patterns are ideally implemented as finite state automata (FSAs), a methodology pioneered in the FASTUS IE system (Hobbs et al., 1997). Although this paradigm is simple and elegant, it has the disadvantage that it is not easily portable from one domain of interest to the next. In contrast, a new, truly domain-independent IE paradigm may be designed if we know (a) predicates relevant to a domain; and (b) which of their arguments fill templette slots. Central to this new way of extracting information from texts are systems that label predicate-argument structures on the output of full parsers. One such augmented parser, trained on data available from the PropBank project has been recently presented in (Gildea and Palmer, 2002). 
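For concreteness, a templette such as the one in Figure 1 can be thought of as a small record whose slots hold extracted text excerpts together with pointers back into the source document. The sketch below is illustrative only; the class and field names are hypothetical, not the authors' data structures.

```python
from dataclasses import dataclass, field

@dataclass
class MarketChangeTemplette:
    """Hypothetical rendering of the 'market change' templette of Figure 1.
    Each slot holds an extracted excerpt; offsets point back into the source."""
    current_value: str = ""
    location: str = ""
    date: str = ""
    instrument: str = ""
    amount_change: str = ""
    offsets: dict = field(default_factory=dict)   # slot name -> (start, end) in source text

t = MarketChangeTemplette(
    current_value="$308.45", location="London", date="daily",
    instrument="London [gold]", amount_change="fell [$4.70] cents")
```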
In this paper we describe a domain-independent IE paradigm that is based on predicate-argument structures identified automatically by two different methods: (1) the statistical method reported in (Gildea and Palmer, 2002); and (2) a new method based on inductive learning which obtains 17% higher Fscore over the first method when tested on the same data. The accuracy enhancement of predicate argument recognition determines up to 14% better IE results. These results enforce our claim that predicate argument information for IE needs to be recognized with high accuracy. The remainder of this paper is organized as follows. Section 2 reports on the parser that produces predicate-argument labels and compares it against the parser introduced in (Gildea and Palmer, 2002). Section 3 describes the pattern-free IE paradigm and compares it against FSA-based IE methods. Section 4 describes the integration of predicate-argument parsers into the IE paradigm and compares the results against a FSA-based IE system. Section 5 summarizes the conclusions. 2 Learning to Recognize Predicate-Argument Structures 2.1 The Data Proposition Bank or PropBank is a one million word corpus annotated with predicateargument structures. The corpus consists of the Penn Treebank 2 Wall Street Journal texts (www.cis.upenn.edu/ treebank). The PropBank annotations, performed at University of Pennsylvania (www.cis.upenn.edu/ ace) were described in (Kingsbury et al., 2002). To date PropBank has addressed only predicates lexicalized by verbs, proceeding from the most to the least common verbs while annotating verb predicates in the corpus. For any given predicate, a survey was made to determine the predicate usage and if required, the usages were divided in major senses. However, the senses are divided more on syntactic grounds than VP NP S VP PP NP Big Board floor traders ARG0 by assailed P was The futures halt ARG1 Figure 2: Sentence with annotated arguments semantic, under the fundamental assumption that syntactic frames are direct reflections of underlying semantics. The set of syntactic frames are determined by diathesis alternations, as defined in (Levin, 1993). Each of these syntactic frames reflect underlying semantic components that constrain allowable arguments of predicates. The expected arguments of each predicate are numbered sequentially from Arg0 to Arg5. Regardless of the syntactic frame or verb sense, the arguments are similarly labeled to determine near-similarity of the predicates. The general procedure was to select for each verb the roles that seem to occur most frequently and use these roles as mnemonics for the predicate arguments. Generally, Arg0 would stand for agent, Arg1 for direct object or theme whereas Arg2 represents indirect object, benefactive or instrument, but mnemonics tend to be verb specific. For example, when retrieving the argument structure for the verb-predicate assail with the sense ”to tear attack” from www.cis.upenn.edu/ cotton/cgibin/pblex fmt.cgi, we find Arg0:agent, Arg1:entity assailed and Arg2:assailed for. Additionally, the argument may include functional tags from Treebank, e.g. ArgM-DIR indicates a directional, ArgM-LOC indicates a locative, and ArgM-TMP stands for a temporal. 2.2 The Model In previous work using the PropBank corpus, (Gildea and Palmer, 2002) proposed a model predicting argument roles using the same statistical method as the one employed by (Gildea and Jurafsky, 2002) for predicting semantic roles based on the FrameNet corpus (Baker et al., 1998). 
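As an informal illustration of the annotation scheme just described, the roleset for assail and the sentence of Figure 2 could be represented roughly as below. This is a sketch only; the actual PropBank frame files and annotation format differ.

```python
# Roleset for the predicate "assail" (sense "to attack"), as described above.
assail_roles = {"Arg0": "agent", "Arg1": "entity assailed", "Arg2": "assailed for"}

# Predicate-argument structure for the sentence of Figure 2:
# "The futures halt was assailed by Big Board floor traders."
annotation = {
    "predicate": "assailed",
    "arguments": [
        ("Arg1", "The futures halt"),
        ("Arg0", "Big Board floor traders"),
    ],
}
```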
This statistical technique of labeling predicate argument operates on the output of the probabilistic parser reported in (Collins, 1997). It consists of two tasks: (1) identifying the parse tree constituents corresponding to arguments of each predicate encoded in PropBank; and (2) recognizing the role corresponding to each argument. Each task can be cast a separate classifier. For example, the result of the first classifier on the sentence illustrated in Figure 2 is the identification of the two NPs as arguments. The second classifier assigns the specific roles ARG1 and ARG0 given the predicate “assailed”. − POSITION (pos) − Indicates if the constituent appears before or after the the predicate in the sentence. − VOICE (voice) − This feature distinguishes between active or passive voice for the predicate phrase. are preserved. of the evaluated phrase. Case and morphological information − HEAD WORD (hw) − This feature contains the head word − PARSE TREE PATH (path): This feature contains the path in the parse tree between the predicate phrase and the argument phrase, expressed as a sequence of nonterminal labels linked by direction symbols (up or down), e.g. − PHRASE TYPE (pt): This feature indicates the syntactic NP for ARG1 in Figure 2. type of the phrase labeled as a predicate argument, e.g. noun phrases only, and it indicates if the NP is dominated by a sentence phrase (typical for subject arguments with active−voice predicates), or by a verb phrase (typical for object arguments). − GOVERNING CATEGORY (gov) − This feature applies to − PREDICATE WORD − In our implementation this feature consists of two components: (1) VERB: the word itself with the case and morphological information preserved; and (2) LEMMA which represents the verb normalized to lower case and infinitive form. NP S VP VP for ARG1 in Figure 2. Figure 3: Feature Set 1 Statistical methods in general are hindered by the data sparsity problem. To achieve high accuracy and resolve the data sparsity problem the method reported in (Gildea and Palmer, 2002; Gildea and Jurafsky, 2002) employed a backoff solution based on a lattice that combines the model features. For practical reasons, this solution restricts the size of the feature sets. For example, the backoff lattice in (Gildea and Palmer, 2002) consists of eight connected nodes for a five-feature set. A larger set of features will determine a very complex backoff lattice. Consequently, no new intuitions may be tested as no new features can be easily added to the model. In our studies we found that inductive learning through decision trees enabled us to easily test large sets of features and study the impact of each feature BOOLEAN NAMED ENTITY FLAGS − A feature set comprising: PHRASAL VERB COLOCATIONS − Comprises two features: − pvcSum: the frequency with which a verb is immediately followed by − pvcMax: the frequency with which a verb is followed by its any preposition or particle. predominant preposition or particle. − neOrganization: set to 1 if an organization is recognized in the phrase − neLocation: set to 1 a location is recognized in the phrase − nePerson: set to 1 if a person name is recognized in the phrase − neMoney: set to 1 if a currency expression is recognized in the phrase − nePercent: set to 1 if a percentage expression is recognized in the phrase − neTime: set to 1 if a time of day expression is recognized in the phrase − neDate: set to 1 if a date temporal expression is recognized in the phrase word from the constituent, different from the head word. 
− CONTENT WORD (cw) − Lexicalized feature that selects an informative PART OF SPEECH OF HEAD WORD (hPos) − The part of speech tag of the head word. PART OF SPEECH OF CONTENT WORD (cPos) −The part of speech tag of the content word. NAMED ENTITY CLASS OF CONTENT WORD (cNE) − The class of the named entity that includes the content word Figure 4: Feature Set 2 in NP last June PP to VP be VP declared VP SBAR S that VP occurred NP yesterday (a) (b) (c) Figure 5: Sample phrases with the content word different than the head word. The head words are indicated by the dashed arrows. The content words are indicated by the continuous arrows. on the augmented parser that outputs predicate argument structures. For this reason we used the C5 inductive decision tree learning algorithm (Quinlan, 2002) to implement both the classifier that identifies argument constituents and the classifier that labels arguments with their roles. Our model considers two sets of features: Feature Set 1 (FS1): features used in the work reported in (Gildea and Palmer, 2002) and (Gildea and Jurafsky, 2002) ; and Feature Set 2 (FS2): a novel set of features introduced in this paper. FS1 is illustrated in Figure 3 and FS2 is illustrated in Figure 4. In developing FS2 we used the following observations: Observation 1: Because most of the predicate arguments are prepositional attachments (PP) or relative clauses (SBAR), often the head word (hw) feature from FS1 is not in fact the most informative word in H1: if phrase type is PP then select the right−most child Example: phrase = "in Texas", cw = "Texas" if H2: phrase type is SBAR then select the left−most sentence (S*) clause Example: phrase = "that occurred yesterday", cw = "occurred" if then H3: phrase type is VP if there is a VP child then else select the head word select the left−most VP child Example: phrase = "had placed", cw = "placed" if H4: phrase type is ADVP then select the right−most child not IN or TO Example: phrase = "more than", cw = "more" if H5: phrase type is ADJP then select the right−most adjective, verb, noun, or ADJP Example: phrase = "61 years old", cw = "old" H6: for for all other phrase types do select the head word Example: phrase = "red house", cw = "house" Figure 6: Heuristics for the detection of content words the phrase. Figure 5 illustrates three examples of this situation. In Figure 5(a), the head word of the PP phrase is the preposition in, but June is at least as informative as the head word. Similarly, in Figure 5(b), the relative clause is featured only by the relative pronoun that, whereas the verb occurred should also be taken into account. Figure 5(c) shows another example of an infinitive verb phrase, in which the head word is to, whereas the verb declared should also be considered. Based on these observations, we introduced in FS2 the CONTENT WORD (cw), which adds a new lexicalization from the argument constituent for better content representation. To select the content words we used the heuristics illustrated in Figure 6. Observation 2: After implementing FS1, we noticed that the hw feature was rarely used, and we believe that this happens because of data sparsity. The same was noticed for the cw feature from FS2. Therefore we decided to add two new features, namely the parts of speech of the head word and the content word respectively. These features are called hPos and cPos and are illustrated in Figure 4. 
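The content-word heuristics of Figure 6 can be sketched as follows; the hPos and cPos features are then simply the part-of-speech tags of the selected words. The recursive application and the node accessors (.label, .children, .head_word) are assumptions made for the sketch, not the authors' code.

```python
def content_word(node):
    """Heuristics H1-H6 of Figure 6.  A `node` is assumed to expose
    .label (phrase type, or POS tag for a leaf), .children (left to right,
    empty for leaves) and .head_word (for a leaf, the word itself)."""
    kids = node.children
    if node.label == "PP" and kids:                       # H1: right-most child
        return content_word(kids[-1])
    if node.label == "SBAR":                              # H2: left-most S* clause
        s = next((k for k in kids if k.label.startswith("S")), None)
        if s is not None:
            return content_word(s)
    if node.label == "VP":                                # H3: left-most VP child, else head
        vp = next((k for k in kids if k.label == "VP"), None)
        if vp is not None:
            return content_word(vp)
    if node.label == "ADVP":                              # H4: right-most child not IN or TO
        for k in reversed(kids):
            if k.label not in ("IN", "TO"):
                return content_word(k)
    if node.label == "ADJP":                              # H5: right-most JJ/VB*/NN*/ADJP
        for k in reversed(kids):
            if k.label == "ADJP" or k.label[:2] in ("JJ", "VB", "NN"):
                return content_word(k)
    return node.head_word                                 # H6: default to the head word

# cPos = pos_tag_of(content_word(node));  hPos = pos_tag_of(node.head_word)
```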
Both these features generate an implicit yet simple backoff solution for the lexicalized features HEAD WORD (hw) and CONTENT WORD (cw). Observation 3: Predicate arguments often contain names or other expressions identified by named entity (NE) recognizers, e.g. dates, prices. Thus we believe that this form of semantic information should be introduced in the learning model. In FS2 we added the following features: (a) the named entity class of the content word (cNE); and (b) a set of NE features that can take only Boolean values grouped as BOOLEAN NAMED ENTITY FEATURES and defined in Figure 4. The cNE feature helps recognize the argument roles, e.g. ARGM-LOC and ARGM-TMP, when location or temporal expressions are identified. The Boolean NE flags provide information useful in processing complex nominals occurring in argument constituents. For example, in Figure 2 ARG0 is featured not only by the word traders but also by ORGANIZATION, the semantic class of the name Big Board. Observation 4: Predicate argument structures are recognized accurately when both predicates and arguments are correctly identified. Often, predicates are lexicalized by phrasal verbs, e.g. put up, put off. To identify correctly the verb particle and capture it in the structure of predicates instead of the argument structure, we introduced two collocation features that measure the frequency with which verbs and succeeding prepositions cooccurr in the corpus. The features are pvcSum and pvcMax and are defined in Figure 4. 2.3 The Experiments The results presented in this paper were obtained by training on Proposition Bank (PropBank) release 2002/7/15 (Kingsbury et al., 2002). Syntactic information was extracted from the gold-standard parses in TreeBank Release 2. As named entity information is not available in PropBank/TreeBank we tagged the training corpus with NE information using an open-domain NE recognizer, having 96% F-measure on the MUC61 data. We reserved section 23 of PropBank/TreeBank for testing, and we trained on the rest. Due to memory limitations on our hardware, for the argument finding task we trained on the first 150 KB of TreeBank (about 11% of TreeBank), and 1The Message Understanding Conferences (MUC) were IE evaluation exercises in the 90s. Starting with MUC6 named entity data was available. for the role assignment task on the first 75 KB of argument constituents (about 60% of PropBank annotations). Table 1 shows the results obtained by our inductive learning approach. The first column describes the feature sets used in each of the 7 experiments performed. The following three columns indicate the precision (P), recall (R), and F-measure (  )2 obtained for the task of identifying argument constituents. The last column shows the accuracy (A) for the role assignment task using known argument constituents. The first row in Table 1 lists the results obtained when using only the FS1 features. The next five lines list the individual contributions of each of the newly added features when combined with the FS1 features. The last line shows the results obtained when all features from FS1 and FS2 were used. Table 1 shows that the new features increase the argument identification F-measure by 3.61%, and the role assignment accuracy with 4.29%. For the argument identification task, the head and content word features have a significant contribution for the task precision, whereas NE features contribute significantly to the task recall. 
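For reference, the F-measure reported in Table 1 and in the remainder of the paper is the usual harmonic mean of precision and recall, which is consistent with the tabulated values (e.g. P = 84.96, R = 84.26 give F = 84.61):

```latex
F_{\beta=1} = \frac{2\,P\,R}{P + R}
```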
For the role assignment task the best features from the feature set FS2 are the content word features (cw and cPos) and the Boolean NE flags, which show that semantic information, even if minimal, is important for role classification. Surprisingly, the phrasal verb collocation features did not help for any of the tasks, but they were useful for boosting the decision trees. Decision tree learning provided by C5 (Quinlan, 2002) has built in support for boosting. We used it and obtained improvements for both tasks. The best Fmeasure obtained for argument constituent identification was 88.98% in the fifth iteration (a 0.76% improvement). The best accuracy for role assignment was 83.74% in the eight iteration (a 0.69% improvement)3. We further analyzed the boosted trees and noticed that phrasal verb collocation features were mainly responsible for the improvements. This is the rationale for including them in the FS2 set. We also were interested in comparing the results 2     3These results, listed also on the last line of Table 2, differ from those in Table 1 because they were produced after the boosting took place. Features Arg P Arg R Arg  Role A FS1 84.96 84.26 84.61 78.76 FS1 + hPos 92.24 84.50 88.20 79.04 FS1 + cw, cPos 92.19 84.67 88.27 80.80 FS1 + cNE 83.93 85.69 84.80 79.85 FS1 + NE flags 87.78 85.71 86.73 81.28 FS1 + pvcSum + 84.88 82.77 83.81 78.62 pvcMax FS1 + FS2 91.62 85.06 88.22 83.05 Table 1: Inductive learning results for argument identification and role assignment Model Implementation Arg  Role A Statistical (Gildea and Palmer) 82.8 This study 71.86 78.87 Decision Trees FS1 84.61 78.76 FS1 + FS2 88.98 83.74 Table 2: Comparison of statistical and decision tree learning models of the decision-tree-based method against the results obtained by the statistical approach reported in (Gildea and Palmer, 2002). Table 2 summarizes the results. (Gildea and Palmer, 2002) report the results listed on the first line of Table 2. Because no Fscores were reported for the argument identification task, we re-implemented the model and obtained the results listed on the second line. It looks like we had some implementation differences, and our results for the argument role classification task were slightly worse. However, we used our results for the statistical model for comparing with the inductive learning model because we used the same feature extraction code for both models. Lines 3 and 4 list the results of the inductive learning model with boosting enabled, when the features were only from FS1, and from FS1 and FS2 respectively. When comparing the results obtained for both models when using only features from FS1, we find that almost the same results were obtained for role classification, but an enhancement of almost 13% was obtained when recognizing argument constituents. When comparing the statistical model with the inductive model that uses all features, there is an enhancement of 17.12% for argument identification and 4.87% for argument role recognition. 
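As a purely illustrative rendering of the inductive setup compared in Table 2: the paper uses the C5 decision-tree learner with boosting, for which a generic decision tree and feature encoder are substituted below, and all feature values are invented for the example.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# Two toy training examples: FS1/FS2-style features for the two argument
# constituents of Figure 2 (feature values invented for illustration).
examples = [
    {"pt": "NP", "gov": "S",  "pos": "before", "voice": "passive", "lemma": "assail",
     "hw": "halt",    "hPos": "NN",  "cw": "halt",    "cPos": "NN",  "nePerson": 0},
    {"pt": "NP", "gov": "VP", "pos": "after",  "voice": "passive", "lemma": "assail",
     "hw": "traders", "hPos": "NNS", "cw": "traders", "cPos": "NNS", "nePerson": 0},
]
roles = ["ARG1", "ARG0"]

vec = DictVectorizer(sparse=False)     # one-hot encodes the symbolic features
X = vec.fit_transform(examples)

# Role-assignment classifier; the argument-identification classifier is trained
# the same way, with a binary is-argument label instead of a role label.
role_clf = DecisionTreeClassifier().fit(X, roles)
print(role_clf.predict(vec.transform([examples[0]])))
```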
Another significant advantage of our inductive learning approach is that it scales better to unDocument(s) POS Tagger NPB Identifier Dependency Parser Named Entity Recognizer Entity Coreference Document(s) Named Entity Recognizer Phrasal Parser (FSA) Combiner (FSA) Entity Coreference Event Recognizer (FSA) Event Coreference Event Merging Template(s) Pred/Arg Identification Predicate Arguments Mapping into Template Slots Event Coreference Event Merging Template(s) Full Parser (b) (a) Figure 7: IE architectures: (a) Architecture based on predicate/argument relations; (b) FSA-based IE system known predicates. The statistical model introduced in Gildea and Jurafsky (2002) uses predicate lexical information at most levels in the probability lattice, hence its scalability to unknown predicates is limited. In contrast, the decision tree approach uses predicate lexical information only for 5% of the branching decisions recorded when testing the role assignment task, and only for 0.01% of the branching decisions seen during the argument constituent identification evaluation. 3 The IE Paradigm Figure 7(a) illustrates an IE architecture that employs predicate argument structures. Documents are processed in parallel to: (1) parse them syntactically, and (2) recognize the NEs. The full parser first performs part-of-speech (POS) tagging using transformation based learning (TBL) (Brill, 1995). Then non-recursive, or basic, noun phrases (NPB) are identified using the TBL method reported in (Ngai and Florian, 2001). At last, the dependency parser presented in (Collins, 1997) is used to generate the full parse. This approach allows us to parse the sentences with less than 40 words from TreeBank section 23 with an F-measure slightly over 85% at an average of 0.12 seconds/sentence on a 2GHz Pentium IV computer. The parse texts marked with NE tags are passed to a module that identifies entity coreference in documents, resolving pronominal and nominal anaphors and normalizing coreferring expressions. The parses are also used by a module that recognizes predicate argument structures with any of the methods described in Section 2. For each templette modeling a different domain a mapping between predicate arguments and templette slots is produced. Figure 8 illustrates the mapping produced for two Event99 doINSTRUMENT ARG1 and MARKET_CHANGE_VERB ARG2 and (MONEY or PERCENT or NUMBER or QUANTITY) and MARKET_CHANGE_VERB AMOUNT_CHANGE MARKET_CHANGE_VERB CURRENT_VALUE (PERSON and ARG0 and DIE_VERB) or (PERSON and ARG1 and KILL_VERB) DECEASED (ARG0 and KILL_VERB) or (ARG1 and DIE_VERB) AGENT_OF_DEATH (ARGM−TMP and ILNESS_NOUN) or KILL_VERB or DIE_VERB MANNER_OF_DEATH ARGM−TMP and DATE DATE (ARGM−LOC or ARGM−TMP) and LOCATION LOCATION (a) (b) (ARG4 or ARGM_DIR) and NUMBER and Figure 8: Mapping rules between predicate arguments and templette slots for: (a) the “market change” domain, and (b) the “death” domain mains. The “market change” domain monitors changes (AMOUNT CHANGE) and current values (CURRENT VALUE) for financial instruments (INSTRUMENT). The “death” domain extracts the description of the person deceased (DECEASED), the manner of death (MANNER OF DEATH), and, if applicable, the person to whom the death is attributed (AGENT OF DEATH). To produce the mappings we used training data that consists of: (1) texts, and (2) their corresponding filled templettes. Each templette has pointers back to the source text similarly to the example presented in Figure 1. 
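The mapping rules of Figure 8 are, in essence, boolean constraints over argument labels, named-entity classes, and small predicate word lists. The following is a minimal sketch of the Figure 8(b) "death"-domain rules; the membership of the DIE_VERB and KILL_VERB classes shown here is hypothetical.

```python
DIE_VERBS = {"die", "perish"}                              # illustrative members only
KILL_VERBS = {"kill", "murder", "execute", "behead", "slay"}

def map_death_slots(predicate, args, ne_types):
    """args: {role: text}; ne_types: {role: set of NE labels found in that argument}.
    Implements the DECEASED, AGENT_OF_DEATH and DATE rules of Figure 8(b)."""
    slots = {}
    lemma = predicate.lower()
    for role, text in args.items():
        ne = ne_types.get(role, set())
        # DECEASED: (PERSON and ARG0 and DIE_VERB) or (PERSON and ARG1 and KILL_VERB)
        if ("PERSON" in ne and role == "ARG0" and lemma in DIE_VERBS) or \
           ("PERSON" in ne and role == "ARG1" and lemma in KILL_VERBS):
            slots["DECEASED"] = text
        # AGENT_OF_DEATH: (ARG0 and KILL_VERB) or (ARG1 and DIE_VERB)
        if (role == "ARG0" and lemma in KILL_VERBS) or \
           (role == "ARG1" and lemma in DIE_VERBS):
            slots["AGENT_OF_DEATH"] = text
        # DATE: ARGM-TMP and DATE
        if role == "ARGM-TMP" and "DATE" in ne:
            slots["DATE"] = text
    return slots
```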
When the predicate argument structures were identified, the mappings were collected as illustrated in Figure 9. Figure 9(a) shows an interesting aspect of the mappings. Although the role classification of the last argument is incorrect (it should have been identified as ARG4), it is mapped into the CURRENT-VALUE slot. This shows how the mappings resolve incorrect but consistent classifications. Figure 9(b) shows the flexibility of the system to identify and classify constituents that are not close to the predicate phrase (ARG0). This is a clear ad5 1/4 ARG2 34 1/2 to ARGM−DIR flew The space shuttle Challenger apart over Florida like a billion−dollar confetti killing six astronauts NP VP S NP PP NP fell Norwalk−based Micro Warehouse ARG1 NP ADVP PP PP S VP VP NP S ARG0 P ARG1 INSTRUMENT AMOUNT_CHANGE CURRENT_VALUE AGENT_OF_DEATH MANNER_OF_DEATH DECEASED Mappings (a) (b) Figure 9: Predicate argument mapping examples for: (a) the “market change” domain, and (b) the “death” domain vantage over the FSA-based system, which in fact missed the AGENT-OF-DEATH in this sentence. Because several templettes might describe the same event, event coreference is processed and, based on the results, templettes are merged when necessary. The IE architecture in Figure 7(a) may be compared with the IE architecture with cascaded FSA represented in Figure 7(b) and reported in (Surdeanu and Harabagiu, 2002). Both architectures share the same NER, coreference and merging modules. Specific to the FSA-based architecture are the phrasal parser, which identifies simple phrases such as basic noun or verb phrases (some of them domain specific), the combiner, which builds domain-dependent complex phrases, and the event recognizer, which detects the domain-specific Subject-Verb-Object (SVO) patterns. An example of a pattern used by the FSA-based architecture is: DEATH-CAUSE KILL-VERB PERSON  , where DEATH-CAUSE may identify more than 20 lexemes, e.g. wreck, catastrophe, malpractice, and more than 20 verbs are KILL-VERBS, e.g. murder, execute, behead, slay. Most importantly, each pattern must recognize up to 26 syntactic variations, e.g. determined by the active or passive form of the verb, relative subjects or objects etc. Predicate argument structures offer the great advantage that syntactic variations do not need to be accounted by IE systems anymore. Because entity and event coreference, as well as templette merging will attempt to recover from partial patterns or predicate argument recognitions, and our goal is to compare the usage of FSA patterns versus predicate argument structures, we decided to disable the coreference and merging modules. This explains why in Figure 7 these modules are repreSystem Market Change Death Pred/Args Statistical 68.9% 58.4% Pred/Args Inductive 82.8% 67.0% FSA 91.3% 72.7% Table 3: Templette F-measure (  ) scores for the two domains investigated System Correct Missed Incorrect Pred/Args Statistical 26 16 3 Pred/Args Inductive 33 9 2 FSA 38 4 2 Table 4: Number of event structures (FSA patterns or predicate argument structures) matched sented with dashed lines. 4 Experiments with The Integration of Predicate Argument Structures in IE To evaluate the proposed IE paradigm we selected two Event99 domains: “market change”, which tracks changes in stock indexes, and “death”, which extracts all manners of human deaths. These domains were selected because most of the domain information can be processed without needing entity or event coreference. 
Moreover, one of the domains (market change) uses verbs commonly used in PropBank/TreeBank, while the other (death) uses relatively unknown verbs, so we can also evaluate how well the system scales to verbs unseen in training. Table 3 lists the F-scores for the two domains. The first line of the Table lists the results obtained by the IE architecture illustrated in Figure 7(a) when the predicate argument structures were identified by the statistical model. The next line shows the same results for the inductive learning model. The last line shows the results for the IE architecture in Figure 7(b). The results obtained by the FSA-based IE were the best, but they were made possible by handcrafted patterns requiring an effort of 10 person days per domain. The only human effort necessary in the new IE paradigm was imposed by the generation of mappings between arguments and templette slots, accomplished in less than 2 hours per domain, given that the training templettes are known. Additionally, it is easier to automatically learn these mappings than to acquire FSA patterns. Table 3 also shows that the new IE paradigm performs better when the predicate argument structures are recognized with the inductive learning model. The cause is the substantial difference in quality of the argument identification task between the two models. The Table shows that the new IE paradigm with the inductive learning model achieves about 90% of the performance of the FSA-based system for both domains, even though one of the domains uses mainly verbs rarely seen in training (e.g. “die” appears 5 times in PropBank). Another way of evaluating the integration of predicate argument structures in IE is by comparing the number of events identified by each architecture. Table 4 shows the results. Once again, the new IE paradigm performs better when the predicate argument structures are recognized with the inductive learning model. More events are missed by the statistical model which does not recognize argument constituents as well the inductive learning model. 5 Conclusion This paper reports on a novel inductive learning method for identifying predicate argument structures in text. The proposed approach achieves over 88% F-measure for the problem of identifying argument constituents, and over 83% accuracy for the task of assigning roles to pre-identified argument constituents. Because predicate lexical information is used for less than 5% of the branching decisions, the generated classifier scales better than the statistical method from (Gildea and Palmer, 2002) to unknown predicates. This way of identifying predicate argument structures is a central piece of an IE paradigm easily customizable to new domains. The performance degradation of this paradigm when compared to IE systems based on hand-crafted patterns is only 10%. References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In Proceedings of COLING/ACL ’98:86-90,. Montreal, Canada. Eric Brill. 1995. Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging. Computational Linguistics. Michael Collins. 1997. Three Generative, Lexicalized Models for Statistical Parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL 1997):16-23, Madrid, Spain. Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics, 28(3):245288. Daniel Gildea and Martha Palmer. 2002. 
The Necessity of Parsing for Predicate Argument Recognition. In Proceedings of the 40th Meeting of the Association for Computational Linguistics (ACL 2002):239-246, Philadelphia, PA. Lynette Hirschman, Patricia Robinson, Lisa Ferro, Nancy Chinchor, Erica Brown, Ralph Grishman, Beth Sundheim 1999. Hub-4 Event99 General Guidelines and Templettes. Jerry R. Hobbs, Douglas Appelt, John Bear, David Israel, Megumi Kameyama, Mark E. Stickel, and Mabry Tyson. 1997. FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural-Language Text. In FiniteState Language Processing, pages 383-406, MIT Press, Cambridge, MA. Paul Kingsbury, Martha Palmer, and Mitch Marcus. 2002. Adding Semantic Annotation to the Penn TreeBank. In Proceedings of the Human Language Technology Conference (HLT 2002):252-256, San Diego, California. Beth Levin. 1993. English Verb Classes and Alternations a Preliminary Investigation. University of Chicago Press. Grace Ngai and Radu Florian. 2001. TransformationBased Learning in The Fast Lane. In Proceedings of the North American Association for Computational Linguistics (NAACL 2001):40-47. Ross Quinlan. 2002. Data Mining Tools See5 and C5.0. http://www.rulequest.com/see5-info.html. Ellen Riloff and Rosie Jones. 1996. Automatically Generating Extraction Patterns from Untagged Text. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96)):1044-1049. Mihai Surdeanu and Sanda Harabagiu. 2002. Infrastructure for Open-Domain Information Extraction In Proceedings of the Human Language Technology Conference (HLT 2002):325330. Roman Yangarber, Ralph Grishman, Pasi Tapainen and Silja Huttunen, 2000. Automatic Acquisition of Domain Knowledge for Information Extraction. In Proceedings of the 18th International Conference on Computational Linguistics (COLING-2000): 940-946, Saarbrucken, Germany.
tRuEcasIng Lucian Vlad Lita ♠ Carnegie Mellon [email protected] Abe Ittycheriah IBM T.J. Watson [email protected] Salim Roukos IBM T.J. Watson [email protected] Nanda Kambhatla IBM T.J. Watson [email protected] Abstract Truecasing is the process of restoring case information to badly-cased or noncased text. This paper explores truecasing issues and proposes a statistical, language modeling based truecaser which achieves an accuracy of ∼98% on news articles. Task based evaluation shows a 26% F-measure improvement in named entity recognition when using truecasing. In the context of automatic content extraction, mention detection on automatic speech recognition text is also improved by a factor of 8. Truecasing also enhances machine translation output legibility and yields a BLEU score improvement of 80.2%. This paper argues for the use of truecasing as a valuable component in text processing applications. 1 Introduction While it is true that large, high quality text corpora are becoming a reality, it is also true that the digital world is flooded with enormous collections of low quality natural language text. Transcripts from various audio sources, automatic speech recognition, optical character recognition, online messaging and gaming, email, and the web are just a few examples of raw text sources with content often produced in a hurry, containing misspellings, insertions, deletions, grammatical errors, neologisms, jargon terms ♠Work done at IBM TJ Watson Research Center etc. We want to enhance the quality of such sources in order to produce better rule-based systems and sharper statistical models. This paper focuses on truecasing, which is the process of restoring case information to raw text. Besides text rEaDaBILiTY, truecasing enhances the quality of case-carrying data, brings into the picture new corpora originally considered too noisy for various NLP tasks, and performs case normalization across styles, sources, and genres. Consider the following mildly ambiguous sentence “us rep. james pond showed up riding an it and going to a now meeting”. The case-carrying alternative “US Rep. James Pond showed up riding an IT and going to a NOW meeting” is arguably better fit to be subjected to further processing. Broadcast news transcripts contain casing errors which reduce the performance of tasks such as named entity tagging. Automatic speech recognition produces non-cased text. Headlines, teasers, section headers - which carry high information content - are not properly cased for tasks such as question answering. Truecasing is an essential step in transforming these types of data into cleaner sources to be used by NLP applications. “the president” and “the President” are two viable surface forms that correctly convey the same information in the same context. Such discrepancies are usually due to differences in news source, authors, and stylistic choices. Truecasing can be used as a normalization tool across corpora in order to produce consistent, context sensitive, case information; it consistently reduces expressions to their statistical canonical form. In this paper, we attempt to show the benefits of truecasing in general as a valuable building block for NLP applications rather than promoting a specific implementation. We explore several truecasing issues and propose a statistical, language modeling based truecaser, showing its performance on news articles. Then, we present a straight forward application of truecasing on machine translation output. 
Finally, we demonstrate the considerable benefits of truecasing through task based evaluations on named entity tagging and automatic content extraction. 1.1 Related Work Truecasing can be viewed in a lexical ambiguity resolution framework (Yarowsky, 1994) as discriminating among several versions of a word, which happen to have different surface forms (casings). Wordsense disambiguation is a broad scope problem that has been tackled with fairly good results generally due to the fact that context is a very good predictor when choosing the sense of a word. (Gale et al., 1994) mention good results on limited case restoration experiments on toy problems with 100 words. They also observe that real world problems generally exhibit around 90% case restoration accuracy. (Mikheev, 1999) also approaches casing disambiguation but models only instances when capitalization is expected: first word in a sentence, after a period, and after quotes. (Chieu and Ng, 2002) attempted to extract named entities from non-cased text by using a weaker classifier but without focusing on regular text or case restoration. Accents can be viewed as additional surface forms or alternate word casings. From this perspective, either accent identification can be extended to truecasing or truecasing can be extended to incorporate accent restoration. (Yarowsky, 1994) reports good results with statistical methods for Spanish and French accent restoration. Truecasing is also a specialized method for spelling correction by relaxing the notion of casing to spelling variations. There is a vast literature on spelling correction (Jones and Martin, 1997; Golding and Roth, 1996) using both linguistic and statistical approaches. Also, (Brill and Moore, 2000) apply a noisy channel model, based on generic string to string edits, to spelling correction. 2 Approach In this paper we take a statistical approach to truecasing. First we present the baseline: a simple, straight forward unigram model which performs reasonably well in most cases. Then, we propose a better, more flexible statistical truecaser based on language modeling. From a truecasing perspective we observe four general classes of words: all lowercase (LC), first letter uppercase (UC), all letters uppercase (CA), and mixed case word MC). The MC class could be further refined into meaningful subclasses but for the purpose of this paper it is sufficient to correctly identify specific true MC forms for each MC instance. We are interested in correctly assigning case labels to words (tokens) in natural language text. This represents the ability to discriminate between class labels for the same lexical item, taking into account the surrounding words. We are interested in casing word combinations observed during training as well as new phrases. The model requires the ability to generalize in order to recognize that even though the possibly misspelled token “lenon” has never been seen before, words in the same context usually take the UC form. 2.1 Baseline: The Unigram Model The goal of this paper is to show the benefits of truecasing in general. The unigram baseline (presented below) is introduced in order to put task based evaluations in perspective and not to be used as a strawman baseline. The vast majority of vocabulary items have only one surface form. Hence, it is only natural to adopt the unigram model as a baseline for truecasing. In most situations, the unigram model is a simple and efficient model for surface form restoration. 
This method associates with each surface form a score based on the frequency of occurrence. The decoding is very simple: the true case of a token is predicted by the most likely case of that token. The unigram model’s upper bound on truecasing performance is given by the percentage of tokens that occur during decoding under their most frequent case. Approximately 12% of the vocabulary items have been observed under more than one surface form. Hence it is inevitable for the unigram model to fail on tokens such as “new”. Due to the overwhelming frequency of its LC form, “new” will take this particular form regardless of what token follows it. For both “information” and “york” as subsequent words, “new” will be labeled as LC. For the latter case, “new” occurs under one of its less frequent surface forms. 2.2 Truecaser The truecasing strategy that we are proposing seeks to capture local context and bootstrap it across a sentence. The case of a token will depend on the most likely meaning of the sentence - where local meaning is approximated by n-grams observed during training. However, the local context of a few words alone is not enough for case disambiguation. Our proposed method employs sentence level context as well. We capture local context through a trigram language model, but the case label is decided at a sentence level. A reasonable improvement over the unigram model would have been to decide the word casing given the previous two lexical items and their corresponding case content. However, this greedy approach still disregards global cues. Our goal is to maximize the probability of a larger text segment (i.e. a sentence) occurring under a certain surface form. Towards this goal, we first build a language model that can provide local context statistics. 2.2.1 Building a Language Model Language modeling provides features for a labeling scheme. These features are based on the probability of a lexical item and a case content conditioned on the history of previous two lexical items and their corresponding case content: Pmodel(w3|w2, w1) = λtrigramP(w3|w2, w1) + λbigramP(w3|w2) + λunigramP(w3) + λuniformP0 (1) where trigram, bigram, unigram, and uniform probabilities are scaled by individual λis which are learned by observing training examples. wi represents a word with a case tag treated as a unit for probability estimation. 2.2.2 Sentence Level Decoding Using the language model probabilities we decode the case information at a sentence level. We construct a trellis (figure 1) which incorporates all the sentence surface forms as well as the features computed during training. A node in this trellis consists of a lexical item, a position in the sentence, a possible casing, as well as a history of the previous two lexical items and their corresponding case content. Hence, for each token, all surface forms will appear as nodes carrying additional context information. In the trellis, thicker arrows indicate higher transition probabilities. Figure 1: Given individual histories, the decodings delay and DeLay, are most probable - perhaps in the context of “time delay” and respectively “Senator Tom DeLay” The trellis can be viewed as a Hidden Markov Model (HMM) computing the state sequence which best explains the observations. The states (q1, q2, · · · , qn) of the HMM are combinations of case and context information, the transition probabilities are the language model (λ) based features, and the observations (O1O2 · · · Ot) are lexical items. 
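A minimal sketch of the interpolated probability of Equation 1 is given below. The λ weights are assumed to have been estimated on held-out data, as in deleted interpolation; the count bookkeeping here is a simplification.

```python
def interpolated_prob(w3, w2, w1, counts, total_tokens, vocab_size, lambdas):
    """P(w3 | w2, w1) as in Equation 1: a weighted mixture of trigram, bigram,
    unigram and uniform estimates.  Each w is a cased lexical item (word plus
    case tag treated as one unit).  `counts` maps n-gram tuples to training
    counts; the four lambdas sum to one."""
    def rel_freq(numerator, denominator):
        return counts.get(numerator, 0) / counts[denominator] \
               if counts.get(denominator) else 0.0

    p_tri  = rel_freq((w1, w2, w3), (w1, w2))
    p_bi   = rel_freq((w2, w3), (w2,))
    p_uni  = counts.get((w3,), 0) / total_tokens
    p_unif = 1.0 / vocab_size
    l_tri, l_bi, l_uni, l_unif = lambdas
    return l_tri * p_tri + l_bi * p_bi + l_uni * p_uni + l_unif * p_unif
```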
During decoding, the Viterbi algorithm (Rabiner, 1989) is used to compute the highest probability state sequence (q∗ τ at sentence level) that yields the desired case information: q∗ τ = argmaxqi1qi2···qitP(qi1qi2 · · · qit|O1O2 · · · Ot, λ) (2) where P(qi1qi2 · · · qit|O1O2 · · · Ot, λ) is the probability of a given sequence conditioned on the observation sequence and the model parameters. A more sophisticated approach could be envisioned, where either the observations or the states are more expressive. These alternate design choices are not explored in this paper. Testing speed depends on the width and length of the trellis and the overall decoding complexity is: Cdecoding = O(SM H+1) where S is the sentence size, M is the number of surface forms we are willing to consider for each word, and H is the history size (H = 3 in the trigram case). 2.3 Unknown Words In order for truecasing to be generalizable it must deal with unknown words — words not seen during training. For large training sets, an extreme assumption is that most words and corresponding casings possible in a language have been observed during training. Hence, most new tokens seen during decoding are going to be either proper nouns or misspellings. The simplest strategy is to consider all unknown words as being of the UC form (i.e. people’s names, places, organizations). Another approach is to replace the less frequent vocabulary items with case-carrying special tokens. During training, the word mispeling is replaced with by UNKNOWN LC and the word Lenon with UNKNOWN UC. This transformation is based on the observation that similar types of infrequent words will occur during decoding. This transformation creates the precedent of unknown words of a particular format being observed in a certain context. When a truly unknown word will be seen in the same context, the most appropriate casing will be applied. This was the method used in our experiments. A similar method is to apply the case-carrying special token transformation only to a small random sample of all tokens, thus capturing context regardless of frequency of occurrence. 2.4 Mixed Casing A reasonable truecasing strategy is to focus on token classification into three categories: LC, UC, and CA. In most text corpora mixed case tokens such as McCartney, CoOl, and TheBeatles occur with moderate frequency. Some NLP tasks might prefer mapping MC tokens starting with an uppercase letter into the UC surface form. This technique will reduce the feature space and allow for sharper models. However, the decoding process can be generalized to include mixed cases in order to find a closer fit to the true sentence. In a clean version of the AQUAINT (ARDA) news stories corpus, ∼90% of the tokens occurred under the most frequent surface form (figure 2). Figure 2: News domain casing distribution The expensive brute force approach will consider all possible casings of a word. Even with the full casing space covered, some mixed cases will not be seen during training and the language model probabilities for n-grams containing certain words will back off to an unknown word strategy. A more feasible method is to account only for the mixed case items observed during training, relying on a large enough training corpus. A variable beam decoding will assign non-zero probabilities to all known casings of each word. An n-best approximation is somewhat faster and easier to implement and is the approach employed in our experiments. 
During the sentence-level decoding only the n-most-frequent mixed casings seen during training are considered. If the true capitalization is not among these n-best versions, the decoding is not correct. Additional lexical and morphological features might be needed if identifying MC instances is critical. 2.5 First Word in the Sentence The first word in a sentence is generally under the UC form. This sentence-begin indicator is sometimes ambiguous even when paired with sentenceend indicators such as the period. While sentence splitting is not within the scope of this paper, we want to emphasize the fact that many NLP tasks would benefit from knowing the true case of the first word in the sentence, thus avoiding having to learn the fact that beginning of sentences are artificially important. Since it is uneventful to convert the first letter of a sentence to uppercase, a more interesting problem from a truecasing perspective is to learn how to predict the correct case of the first word in a sentence (i.e. not always UC). If the language model is built on clean sentences accounting for sentence boundaries, the decoding will most likely uppercase the first letter of any sentence. On the other hand, if the language model is trained on clean sentences disregarding sentence boundaries, the model will be less accurate since different casings will be presented for the same context and artificial n-grams will be seen when transitioning between sentences. One way to obtain the desired effect is to discard the first n tokens in the training sentences in order to escape the sentence-begin effect. The language model is then built on smoother context. A similar effect can be obtained by initializing the decoding with n-gram state probabilities so that the boundary information is masked. 3 Evaluation Both the unigram model and the language model based truecaser were trained on the AQUAINT (ARDA) and TREC (NIST) corpora, each consisting of 500M token news stories from various news agencies. The truecaser was built using IBM’s ViaVoiceTMlanguage modeling tools. These tools implement trigram language models using deleted interpolation for backing off if the trigram is not found in the training data. The resulting model’s perplexity is 108. Since there is no absolute truth when truecasing a sentence, the experiments need to be built with some reference in mind. Our assumption is that professionally written news articles are very close to an intangible absolute truth in terms of casing. Furthermore, we ignore the impact of diverging stylistic forms, assuming the differences are minor. Based on the above assumptions we judge the truecasing methods on four different test sets. The first test set (APR) consists of the August 25, 2002 ∗top 20 news stories from Associated Press and Reuters excluding titles, headlines, and section headers which together form the second test set (APR+). The third test set (ACE) consists of ear∗Randomly chosen test date Figure 3: LM truecaser vs. unigram baseline. lier news stories from AP and New York Times belonging to the ACE dataset. The last test set (MT) includes a set of machine translation references (i.e. human translations) of news articles from the Xinhua agency. The sizes of the data sets are as follows: APR - 12k tokens, ACE - 90k tokens, and MT - 63k tokens. For both truecasing methods, we computed the agreement with the original news story considered to be the ground truth. 
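For reference, the unigram baseline of Section 2.1, against which the language model based truecaser is compared below, amounts to a frequency lookup over surface forms. A minimal sketch, with one of several possible policies for unseen words:

```python
from collections import Counter, defaultdict

def train_unigram(tokens):
    """Count the observed surface forms of each lowercased vocabulary item."""
    forms = defaultdict(Counter)
    for tok in tokens:
        forms[tok.lower()][tok] += 1
    return forms

def unigram_truecase(words, forms):
    """Predict each token's case as its most frequent training surface form;
    unseen words are left unchanged here (one possible policy)."""
    return [forms[w.lower()].most_common(1)[0][0] if w.lower() in forms else w
            for w in words]

forms = train_unigram("New York is in the news .".split())
print(unigram_truecase("new york is in the news .".split(), forms))
# -> ['New', 'York', 'is', 'in', 'the', 'news', '.']
```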
3.1 Results The language model based truecaser consistently displayed a significant error reduction in case restoration over the unigram model (figure 3). On current news stories, the truecaser agreement with the original articles is ∼98%. Titles and headlines usually have a higher concentration of named entities than normal text. This also means that they need a more complex model to assign case information more accurately. The LM based truecaser performs better in this environment while the unigram model misses named entity components which happen to have a less frequent surface form. 3.2 Qualitative Analysis The original reference articles are assumed to have the absolute true form. However, differences from these original articles and the truecased articles are not always casing errors. The truecaser tends to modify the first word in a quotation if it is not proper name: “There has been” becomes “there has been”. It also makes changes which could be considered a correction of the original article: “Xinhua BLEU Breakdown System BLEU 1gr Precision 2gr Precision 3gr Precision 4gr Precision all lowercase 0.1306 0.6016 0.2294 0.1040 0.0528 rule based 0.1466 0.6176 0.2479 0.1169 0.0627 1gr truecasing 0.2206 0.6948 0.3328 0.1722 0.0988 1gr truecasing+ 0.2261 0.6963 0.3372 0.1734 0.0997 lm truecasing 0.2596 0.7102 0.3635 0.2066 0.1303 lm truecasing+ 0.2642 0.7107 0.3667 0.2066 0.1302 Table 1: BLEU score for several truecasing strategies. (truecasing+ methods additionally employ the “first sentence letter uppercased” rule adjustment). Baseline With Truecasing Class Recall Precision F Recall Precision F ENAMEX 48.46 36.04 41.34 59.02 52.65 55.66 (+34.64%) NUMEX 64.61 72.02 68.11 70.37 79.51 74.66 (+9.62%) TIMEX 47.68 52.26 49.87 61.98 75.99 68.27 (+36.90%) Overall 52.50 44.84 48.37 62.01 60.42 61.20 (+26.52%) Table 2: Named Entity Recognition performance with truecasing and without (baseline). news agency” becomes “Xinhua News Agency” and “northern alliance” is truecased as “Northern Alliance”. In more ambiguous cases both the original version and the truecased fragment represent different stylistic forms: “prime minister Hekmatyar” becomes “Prime Minister Hekmatyar”. There are also cases where the truecaser described in this paper makes errors. New movie names are sometimes miss-cased: “my big fat greek wedding” or “signs”. In conducive contexts, person names are correctly cased: “DeLay said in”. However, in ambiguous, adverse contexts they are considered to be common nouns: “pond” or “to delay that”. Unseen organization names which make perfectly normal phrases are erroneously cased as well: “international security assistance force”. 3.3 Application: Machine Translation Post-Processing We have applied truecasing as a post-processing step to a state of the art machine translation system in order to improve readability. For translation between Chinese and English, or Japanese and English, there is no transfer of case information. In these situations the translation output has no case information and it is beneficial to apply truecasing as a post-processing step. This makes the output more legible and the system performance increases if case information is required. We have applied truecasing to Chinese-to-English translation output. The data source consists of news stories (2500 sentences) from the Xinhua News Agency. The news stories are first translated, then subjected to truecasing. 
The translation output is evaluated with BLEU (Papineni et al., 2001), which is a robust, language independent automatic machine translation evaluation method. BLEU scores are highly correlated to human judges scores, providing a way to perform frequent and accurate automated evaluations. BLEU uses a modified n-gram precision metric and a weighting scheme that places more emphasis on longer n-grams. In table 1, both truecasing methods are applied to machine translation output with and without uppercasing the first letter in each sentence. The truecasing methods are compared against the all letters lowercased version of the articles as well as against an existing rule-based system which is aware of a limited number of entity casings such as dates, cities, and countries. The LM based truecaser is very effective in increasing the readability of articles and captures an important aspect that the BLEU score is sensitive to. Truecasig the translation output yields Baseline With Truecasing Source Recall Precision F Recall Precision F BNEWS ASR 23 3 5 56 39 46 (+820.00%) BNEWS HUMAN 77 66 71 77 68 72 (+1.41%) XINHUA 76 71 73 79 72 75 (+2.74%) Table 3: Results of ACE mention detection with and without truecasing. an improvement † of 80.2% in BLEU score over the existing rule base system. 3.4 Task Based Evaluation Case restoration and normalization can be employed for more complex tasks. We have successfully leveraged truecasing in improving named entity recognition and automatic content extraction. 3.4.1 Named Entity Tagging In order to evaluate the effect of truecasing on extracting named entity labels, we tested an existing named entity system on a test set that has significant case mismatch to the training of the system. The base system is an HMM based tagger, similar to (Bikel et al., 1997). The system has 31 semantic categories which are extensions on the MUC categories. The tagger creates a lattice of decisions corresponding to tokenized words in the input stream. When tagging a word wi in a sentence of words w0...wN, two possibilities. If a tag begins: p(tN 1 |wN 1 )i = p(ti|ti−1, wi−1)p†(wi|ti, wi−1) If a tag continues: p(tN 1 |wN 1 )i = p(wi|ti, wi−1) The † indicates that the distribution is formed from words that are the first words of entities. The p† distribution predicts the probability of seeing that word given the tag and the previous word instead of the tag and previous tag. Each word has a set of features, some of which indicate the casing and embedded punctuation. These models have several levels of back-off when the exact trigram has not been seen in training. A trellis spanning the 31 futures is built for each word in a sentence and the best path is derived using the Viterbi algorithm. †Truecasing improves legibility, not the translation itself The performance of the system shown in table 2 indicate an overall 26.52% F-measure improvement when using truecasing. The alternative to truecasing text is to destroy case information in the training material ⊖SNORIFY procedure in (Bikel et al., 1997). Case is an important feature in detecting most named entities but particularly so for the title of a work, an organization, or an ambiguous word with two frequent cases. Truecasing the sentence is essential in detecting that “To Kill a Mockingbird” is the name of a book, especially if the quotation marks are left off. 3.4.2 Automatic Content Extraction Automatic Content Extraction (ACE) is task focusing on the extraction of mentions of entities and relations between them from textual data. 
3.4.2 Automatic Content Extraction
Automatic Content Extraction (ACE) is a task focusing on the extraction of mentions of entities and relations between them from textual data. The textual documents are from newswire, broadcast news with text derived from automatic speech recognition (ASR), and newspaper with text derived from optical character recognition (OCR) sources. The mention detection task (ace, 2001) comprises the extraction of named (e.g. "Mr. Isaac Asimov"), nominal (e.g. "the complete author"), and pronominal (e.g. "him") mentions of Persons, Organizations, Locations, Facilities, and Geo-Political Entities. The automatically transcribed (using ASR) broadcast news documents and the translated Xinhua News Agency (XINHUA) documents in the ACE corpus do not contain any case information, while human transcribed broadcast news documents contain casing errors (e.g. "George bush"). This problem occurs especially when the data source is noisy or the articles are poorly written. For all documents from broadcast news (human transcribed and automatically transcribed) and XINHUA sources, we extracted mentions before and after applying truecasing. The ASR transcribed broadcast news data comprised 86 documents containing a total of 15,535 words; the human transcribed version contained 15,131 words. There were only two XINHUA documents in the ACE test set, containing a total of 601 words. None of this data or any ACE data was used for training the truecasing models. Table 3 shows the result of running our ACE participating maximum entropy mention detection system on the raw text, as well as on truecased text. For ASR transcribed documents, we obtained an eightfold improvement in mention detection, from 5% F-measure to 46% F-measure. The low baseline score is mostly due to the fact that our system has been trained on newswire stories available from previous ACE evaluations, while the latest test data included ASR output. It is very likely that the improvement due to truecasing will be more modest for the next ACE evaluation, when our system will be trained on ASR output as well.

4 Possible Improvements & Future Work
Although the statistical model we have considered performs very well, further improvements must go beyond language modeling, enhancing how expressive the model is. Additional features are needed during decoding to capture context outside of the current lexical item, medium-range context, as well as discontinuous context. Another potentially helpful feature to consider would provide a distribution over similar lexical items, perhaps using an edit/phonetic distance. Truecasing can be extended to cover a more general notion of surface form that includes accents. Depending on the context, words might take different surface forms. Since punctuation can be seen as an extension of surface form, shallow punctuation restoration (e.g. a word followed by a comma) can also be addressed through truecasing.

5 Conclusions
We have discussed truecasing, the process of restoring case information to badly-cased or non-cased text, and we have proposed a statistical, language modeling based truecaser which has an agreement of ~98% with professionally written news articles. Although its most direct impact is improving legibility, truecasing is useful in case normalization across styles, genres, and sources. Truecasing is a valuable component in further natural language processing. Task based evaluation shows a 26% F-measure improvement in named entity recognition when using truecasing. In the context of automatic content extraction, mention detection on automatic speech recognition text is improved by a factor of 8.
Truecasing also enhances machine translation output legibility and yields a BLEU score improvement of 80.2% over the original system. References 2001. Entity detection and tracking. ACE Pilot Study Task Definition. D. Bikel, S. Miller, R. Schwartz, and R. Weischedel. 1997. Nymble: A high-performance learning name finder. pages 194–201. E. Brill and R. C. Moore. 2000. An improved error model for noisy channel spelling correction. ACL. H.L. Chieu and H.T. Ng. 2002. Teaching a weaker classifier: Named entity recognition on upper case text. William A. Gale, Kenneth W. Church, and David Yarowsky. 1994. Discrimination decisions for 100,000-dimensional spaces. Current Issues in Computational Linguistics, pages 429–450. Andrew R. Golding and Dan Roth. 1996. Applying winnow to context-sensitive spelling correction. ICML. M. P. Jones and J. H. Martin. 1997. Contextual spelling correction using latent semantic analysis. ANLP. A. Mikheev. 1999. A knowledge-free method for capitalized word disambiguation. Kishore Papineni, Salim Roukos, Todd Ward, and Wei Jing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. IBM Research Report. L. R. Rabiner. 1989. A tutorial on hidden markov models and selected applications in speech recognition. Readings in Speech Recognition, pages 267–295. David Yarowsky. 1994. Decision lists for ambiguity resolution: Application to accent restoration in spanish and french. ACL, pages 88–95.
2003
20
Minimum Error Rate Training in Statistical Machine Translation
Franz Josef Och
Information Sciences Institute, University of Southern California
4676 Admiralty Way, Suite 1001, Marina del Rey, CA 90292
[email protected]

Abstract
Often, the training procedure for statistical machine translation models is based on maximum likelihood or related criteria. A general problem of this approach is that there is only a loose relation to the final translation quality on unseen text. In this paper, we analyze various training criteria which directly optimize translation quality. These training criteria make use of recently proposed automatic evaluation metrics. We describe a new algorithm for efficient training of an unsmoothed error count. We show that significantly better results can often be obtained if the final evaluation criterion is taken directly into account as part of the training procedure.

1 Introduction
Many tasks in natural language processing have evaluation criteria that go beyond simply counting the number of wrong decisions the system makes. Some often used criteria are, for example, F-Measure for parsing, mean average precision for ranked retrieval, and BLEU or multi-reference word error rate for statistical machine translation. The use of statistical techniques in natural language processing often starts out with the simplifying (often implicit) assumption that the final scoring is based on simply counting the number of wrong decisions, for instance, the number of sentences incorrectly translated in machine translation. Hence, there is a mismatch between the basic assumptions of the used statistical approach and the final evaluation criterion used to measure success in a task. Ideally, we would like to train our model parameters such that the end-to-end performance in some application is optimal. In this paper, we investigate methods to efficiently optimize model parameters with respect to machine translation quality as measured by automatic evaluation criteria such as word error rate and BLEU.

2 Statistical Machine Translation with Log-linear Models
Let us assume that we are given a source ('French') sentence $f_1^J = f_1, \ldots, f_j, \ldots, f_J$, which is to be translated into a target ('English') sentence $e_1^I = e_1, \ldots, e_i, \ldots, e_I$. Among all possible target sentences, we will choose the sentence with the highest probability:1

$\hat{e}_1^I = \arg\max_{e_1^I} \{ \Pr(e_1^I \mid f_1^J) \}$    (1)

The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. The decision in Eq. 1 minimizes the number of decision errors. Hence, under a so-called zero-one loss function this decision rule is optimal (Duda and Hart, 1973). Note that using a different loss function—for example, one induced by the BLEU metric—a different decision rule would be optimal.

1The notational convention will be as follows. We use the symbol $\Pr(\cdot)$ to denote general probability distributions with (nearly) no specific assumptions. In contrast, for model-based probability distributions, we use the generic symbol $p(\cdot)$.

As the true probability distribution $\Pr(e_1^I \mid f_1^J)$ is unknown, we have to develop a model $p(e_1^I \mid f_1^J)$ that approximates $\Pr(e_1^I \mid f_1^J)$. We directly model the posterior probability $\Pr(e_1^I \mid f_1^J)$ by using a log-linear model. In this framework, we have a set of $M$ feature functions $h_m(e_1^I, f_1^J)$, $m = 1, \ldots, M$. For each feature function, there exists a model parameter $\lambda_m$, $m = 1, \ldots, M$. The direct translation probability is given by:

$\Pr(e_1^I \mid f_1^J) = p_{\lambda_1^M}(e_1^I \mid f_1^J)$    (2)
$= \dfrac{\exp\left[\sum_{m=1}^{M} \lambda_m h_m(e_1^I, f_1^J)\right]}{\sum_{e'} \exp\left[\sum_{m=1}^{M} \lambda_m h_m(e', f_1^J)\right]}$    (3)
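As an illustration of how Eqs. 1-3 turn feature functions and weights into a choice among candidate translations, here is a minimal sketch (Python, with hypothetical two-feature candidates; this is not the actual decoder, only the arithmetic of the log-linear model).

import math

def loglinear_posteriors(candidates, lambdas):
    """candidates: list of (translation, feature_vector h_1..h_M) pairs.
    Returns p(e|f) under Eqs. 2-3 and the argmax translation of Eq. 1."""
    scores = [sum(l * h for l, h in zip(lambdas, feats)) for _, feats in candidates]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]    # numerically stable softmax
    z = sum(exps)
    posteriors = [e / z for e in exps]
    best = max(range(len(candidates)), key=lambda i: scores[i])
    return posteriors, candidates[best][0]

# Hypothetical 2-feature example: h_1 = translation-model log-prob, h_2 = LM log-prob.
cands = [("the house is small", [-2.1, -1.3]), ("the home is little", [-1.9, -2.6])]
probs, best = loglinear_posteriors(cands, lambdas=[1.0, 0.8])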
In this framework, the modeling problem amounts to developing suitable feature functions that capture the relevant properties of the translation task. The training problem amounts to obtaining suitable parameter values $\lambda_1^M$. A standard criterion for log-linear models is the MMI (maximum mutual information) criterion, which can be derived from the maximum entropy principle:

$\hat{\lambda}_1^M = \arg\max_{\lambda_1^M} \left\{ \sum_{s=1}^{S} \log p_{\lambda_1^M}(e_s \mid f_s) \right\}$    (4)

The optimization problem under this criterion has very nice properties: there is one unique global optimum, and there are algorithms (e.g. gradient descent) that are guaranteed to converge to the global optimum. Yet, the ultimate goal is to obtain good translation quality on unseen test data. Experience shows that good results can be obtained using this approach, yet there is no reason to assume that an optimization of the model parameters using Eq. 4 yields parameters that are optimal with respect to translation quality. The goal of this paper is to investigate alternative training criteria and corresponding training algorithms, which are directly related to translation quality measured with automatic evaluation criteria. In Section 3, we review various automatic evaluation criteria used in statistical machine translation. In Section 4, we present two different training criteria which try to directly optimize an error count. In Section 5, we sketch a new training algorithm which efficiently optimizes an unsmoothed error count. In Section 6, we describe the used feature functions and our approach to compute the candidate translations that are the basis for our training procedure. In Section 7, we evaluate the different training criteria in the context of several MT experiments.

3 Automatic Assessment of Translation Quality
In recent years, various methods have been proposed to automatically evaluate machine translation quality by comparing hypothesis translations with reference translations. Examples of such methods are word error rate, position-independent word error rate (Tillmann et al., 1997), generation string accuracy (Bangalore et al., 2000), multi-reference word error rate (Nießen et al., 2000), BLEU score (Papineni et al., 2001), NIST score (Doddington, 2002). All these criteria try to approximate human assessment and often achieve an astonishing degree of correlation to human subjective evaluation of fluency and adequacy (Papineni et al., 2001; Doddington, 2002). In this paper, we use the following methods:

multi-reference word error rate (mWER): When this method is used, the hypothesis translation is compared to various reference translations by computing the edit distance (minimum number of substitutions, insertions, deletions) between the hypothesis and the closest of the given reference translations.

multi-reference position independent error rate (mPER): This criterion ignores the word order by treating a sentence as a bag-of-words and computing the minimum number of substitutions, insertions, deletions needed to transform the hypothesis into the closest of the given reference translations.

BLEU score: This criterion computes the geometric mean of the precision of $n$-grams of various lengths between a hypothesis and a set of reference translations multiplied by a brevity penalty factor BP that penalizes short sentences:

$\text{BLEU} = \text{BP} \cdot \exp\left( \sum_{n=1}^{N} \frac{\log p_n}{N} \right)$

Here $p_n$ denotes the precision of $n$-grams in the hypothesis translation. We use $N = 4$.

NIST score: This criterion computes a weighted precision of $n$-grams between a hypothesis and a set of reference translations multiplied by a factor BP' that penalizes short sentences:

$\text{NIST} = \text{BP'} \cdot \sum_{n=1}^{N} p_n$

Here $p_n$ denotes the weighted precision of $n$-grams in the translation. We use $N = 5$.

Both NIST and BLEU are accuracy measures, and thus larger values reflect better translation quality. Note that NIST and BLEU scores are not additive for different sentences, i.e. the score for a document cannot be obtained by simply summing over scores for individual sentences.
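For concreteness, here is a minimal single-reference, sentence-level illustration of the BLEU computation above (Python; this is not the official BLEU scorer, and it ignores multiple references and smoothing).

import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hyp, ref, max_n=4):
    """Illustrative sentence-level BLEU: BP * exp(sum_n log p_n / N)."""
    hyp, ref = hyp.split(), ref.split()
    log_prec_sum = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        # Modified (clipped) n-gram precision.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        log_prec_sum += math.log(max(overlap, 1e-9) / total)
    # Brevity penalty for hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1.0 - len(ref) / max(len(hyp), 1))
    return bp * math.exp(log_prec_sum / max_n)

# All n-gram orders partially match here, so the score is moderate rather than zero.
score = bleu("the cat sat on the mat", "the cat sat on a mat")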
4 Training Criteria for Minimum Error Rate Training
In the following, we assume that we can measure the number of errors in sentence $e$ by comparing it with a reference sentence $r$ using a function $E(r, e)$. However, the following exposition can be easily adapted to accuracy metrics and to metrics that make use of multiple references. We assume that the number of errors for a set of sentences $e_1^S$ is obtained by summing the errors for the individual sentences: $E(r_1^S, e_1^S) = \sum_{s=1}^{S} E(r_s, e_s)$. Our goal is to obtain a minimal error count on a representative corpus $f_1^S$ with given reference translations $r_1^S$ and a set of $K$ different candidate translations $C_s = \{e_{s,1}, \ldots, e_{s,K}\}$ for each input sentence $f_s$.

$\hat{\lambda}_1^M = \arg\min_{\lambda_1^M} \left\{ \sum_{s=1}^{S} E\big(r_s, \hat{e}(f_s; \lambda_1^M)\big) \right\}$    (5)
$= \arg\min_{\lambda_1^M} \left\{ \sum_{s=1}^{S} \sum_{k=1}^{K} E(r_s, e_{s,k}) \, \delta\big(\hat{e}(f_s; \lambda_1^M), e_{s,k}\big) \right\}$

with

$\hat{e}(f_s; \lambda_1^M) = \arg\max_{e \in C_s} \left\{ \sum_{m=1}^{M} \lambda_m h_m(e \mid f_s) \right\}$    (6)

The above stated optimization criterion is not easy to handle: It includes an argmax operation (Eq. 6). Therefore, it is not possible to compute a gradient and we cannot use gradient descent methods to perform optimization. The objective function has many different local optima. The optimization algorithm must handle this. In addition, even if we manage to solve the optimization problem, we might face the problem of overfitting the training data. In Section 5, we describe an efficient optimization algorithm. To be able to compute a gradient and to make the objective function smoother, we can use the following error criterion which is essentially a smoothed error count, with a parameter $\alpha$ to adjust the smoothness:

$\hat{\lambda}_1^M = \arg\min_{\lambda_1^M} \left\{ \sum_{s,k} E(r_s, e_{s,k}) \, \frac{p(e_{s,k} \mid f_s)^{\alpha}}{\sum_{k'} p(e_{s,k'} \mid f_s)^{\alpha}} \right\}$    (7)

In the extreme case, for $\alpha \to \infty$, Eq. 7 converges to the unsmoothed criterion of Eq. 5 (except in the case of ties). Note that the resulting objective function might still have local optima, which makes the optimization hard compared to using the objective function of Eq. 4 which does not have different local optima. The use of this type of smoothed error count is a common approach in the speech community (Juang et al., 1995; Schlüter and Ney, 2001). Figure 1 shows the actual shape of the smoothed and the unsmoothed error count for two parameters in our translation system. We see that the unsmoothed error count has many different local optima and is very unstable. The smoothed error count is much more stable and has fewer local optima. But as we show in Section 7, the performance on our task obtained with the smoothed error count does not differ significantly from that obtained with the unsmoothed error count.
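The smoothed error count of Eq. 7 is cheap to evaluate for a fixed parameter vector once candidate feature vectors and error counts have been precomputed; the following sketch (Python, with made-up data-structure conventions rather than the author's code) shows one way to compute it.

import math

def smoothed_error(lambdas, feats, errors, alpha=3.0):
    """Smoothed error count of Eq. 7 for one parameter vector.

    feats[s][k]  : feature vector h(e_{s,k}, f_s) of candidate k for sentence s
    errors[s][k] : error count E(r_s, e_{s,k}) of that candidate
    alpha        : smoothness parameter (alpha -> infinity recovers Eq. 5)
    """
    total = 0.0
    for sent_feats, sent_errors in zip(feats, errors):
        # Unnormalized log-linear scores of all K candidates for this sentence.
        scores = [sum(l * h for l, h in zip(lambdas, f)) for f in sent_feats]
        # p(e_{s,k}|f_s)^alpha / sum_k' p(e_{s,k'}|f_s)^alpha, computed stably:
        # the log-linear normalizer cancels, so raw scores can be used directly.
        m = max(scores)
        weights = [math.exp(alpha * (sc - m)) for sc in scores]
        z = sum(weights)
        total += sum(e * w / z for e, w in zip(sent_errors, weights))
    return total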
5 Optimization Algorithm for Unsmoothed Error Count
A standard algorithm for the optimization of the unsmoothed error count (Eq. 5) is Powell's algorithm combined with a grid-based line optimization method (Press et al., 2002). We start at a random point in the $M$-dimensional parameter space and try to find a better scoring point in the parameter space by making a one-dimensional line minimization along the directions given by optimizing one parameter while keeping all other parameters fixed. To avoid finding a poor local optimum, we start from different initial parameter values. A major problem with the standard approach is the fact that grid-based line optimization is hard to adjust such that both good performance and efficient search are guaranteed. If a fine-grained grid is used then the algorithm is slow. If a large grid is used then the optimal solution might be missed.

Figure 1: Shape of error count and smoothed error count for two different model parameters. These curves have been computed on the development corpus (see Section 7, Table 1) using $K$ alternatives per source sentence. The smoothed error count has been computed with a smoothing parameter $\alpha = 3$.

In the following, we describe a new algorithm for efficient line optimization of the unsmoothed error count (Eq. 5) using a log-linear model (Eq. 3) which is guaranteed to find the optimal solution. The new algorithm is much faster and more stable than the grid-based line optimization method. Computing the most probable sentence out of a set of candidate translations $C = \{e_1, \ldots, e_K\}$ (see Eq. 6) along a line $\lambda_1^M + \gamma \cdot d_1^M$ with parameter $\gamma$ results in an optimization problem of the following functional form:

$\hat{e}(f; \gamma) = \arg\max_{e \in C} \{ a(e, f) + \gamma \cdot b(e, f) \}$    (8)

Here, $a(e, f)$ and $b(e, f)$ are constants with respect to $\gamma$. Hence, every candidate translation in $C$ corresponds to a line. The function

$\gamma \mapsto \max_{e \in C} \{ a(e, f) + \gamma \cdot b(e, f) \}$    (9)

is piecewise linear (Papineni, 1999). This allows us to compute an efficient exhaustive representation of that function. In the following, we sketch the new algorithm to optimize Eq. 5: We compute the ordered sequence of linear intervals constituting this piecewise linear function for every sentence, together with the incremental change in error count from the previous to the next interval. Hence, we obtain for every sentence a sequence $\gamma_1 < \gamma_2 < \cdots < \gamma_N$ of interval boundaries and a corresponding sequence of error count changes $\Delta E_1, \Delta E_2, \ldots, \Delta E_N$, where $\Delta E_n$ denotes the change in the error count when moving from the interval ending at $\gamma_n$ to the interval starting at $\gamma_n$. By merging the sequences of interval boundaries and error count changes for all different sentences of our corpus, the complete set of interval boundaries and error count changes on the whole corpus is obtained. The optimal $\gamma$ can now be computed easily by traversing the sequence of interval boundaries while updating an error count. It is straightforward to refine this algorithm to also handle the BLEU and NIST scores instead of sentence-level error counts by accumulating the relevant statistics for computing these scores (n-gram precision, translation length and reference length).
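The interval-based line optimization just described can be summarized compactly. The sketch below (Python) is an illustrative re-implementation under the assumption that every candidate of every sentence has been reduced to the pair (a, b) of Eq. 8 plus its additive sentence-level error count; ties are ignored, and handling BLEU or NIST would require accumulating sufficient statistics instead of adding error counts.

def line_search(per_sentence_candidates):
    """Exact error minimization along one line (the algorithm of Section 5).

    per_sentence_candidates[s] is a list of (a, b, error) triples, one per
    candidate translation: its score along the line is a + gamma * b and
    `error` is the sentence-level error count E(r_s, e).  Returns the gamma
    with minimal corpus-level error.
    """
    events = []        # (interval boundary gamma, change in corpus error count)
    base_error = 0.0   # corpus error count for gamma -> -infinity
    for cands in per_sentence_candidates:
        # For gamma -> -infinity the line with the smallest slope b wins.
        current = min(range(len(cands)), key=lambda i: (cands[i][1], -cands[i][0]))
        base_error += cands[current][2]
        gamma = float("-inf")
        while True:
            a_c, b_c, _ = cands[current]
            best_cross, best_j = None, None
            for j, (a, b, _) in enumerate(cands):
                if b <= b_c:
                    continue                      # never overtakes to the right
                cross = (a_c - a) / (b - b_c)     # where line j meets the current line
                if cross > gamma and (best_cross is None or cross < best_cross):
                    best_cross, best_j = cross, j
            if best_j is None:
                break                             # current line wins up to +infinity
            events.append((best_cross, cands[best_j][2] - cands[current][2]))
            gamma, current = best_cross, best_j
    # Merge all interval boundaries and sweep them once, tracking the error count.
    boundaries = sorted(events)
    best_gamma = boundaries[0][0] - 1.0 if boundaries else 0.0
    best_error = err = base_error
    for i, (g, delta) in enumerate(boundaries):
        err += delta
        right = boundaries[i + 1][0] if i + 1 < len(boundaries) else g + 1.0
        if err < best_error:
            best_error, best_gamma = err, (g + right) / 2.0   # middle of the interval
    return best_gamma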
6 Baseline Translation Approach
The basic feature functions of our model are identical to the alignment template approach (Och and Ney, 2002). In this translation model, a sentence is translated by segmenting the input sentence into phrases, translating these phrases and reordering the translations in the target language. In addition to the feature functions described in (Och and Ney, 2002), our system includes a phrase penalty (the number of alignment templates used) and special alignment features. Altogether, the log-linear model includes $M$ different features. Note that many of the used feature functions are derived from probabilistic models: the feature function is defined as the negative logarithm of the corresponding probabilistic model. Therefore, the feature functions are much more 'informative' than for instance the binary feature functions used in standard maximum entropy models in natural language processing. For search, we use a dynamic programming beam-search algorithm to explore a subset of all possible translations (Och et al., 1999) and extract n-best candidate translations using A* search (Ueffing et al., 2002).

Using an n-best approximation, we might face the problem that the parameters trained are good for the list of n-best translations used, but yield worse translation results if these parameters are used in the dynamic programming search. Hence, it is possible that our new search produces translations with more errors on the training corpus. This can happen because with the modified model scaling factors the n-best list can change significantly and can include sentences not in the existing n-best list. To avoid this problem, we adopt the following solution: First, we perform search (using a manually defined set of parameter values) and compute an n-best list, and use this n-best list to train the model parameters. Second, we use the new model parameters in a new search and compute a new n-best list, which is combined with the existing n-best list. Third, using this extended n-best list new model parameters are computed. This is iterated until the resulting n-best list does not change. In this algorithm convergence is guaranteed as, in the limit, the n-best list will contain all possible translations. In our experiments, we compute in every iteration about 200 alternative translations. In practice, the algorithm converges after about five to seven iterations. As a result, error rate cannot increase on the training corpus.

A major problem in applying the MMI criterion is the fact that the reference translations need to be part of the provided n-best list. Quite often, none of the given reference translations is part of the n-best list because the search algorithm performs pruning, which in principle limits the possible translations that can be produced given a certain input sentence. To solve this problem, we define for the MMI training new pseudo-references by selecting from the n-best list all the sentences which have a minimal number of word errors with respect to any of the true references. Note that due to this selection approach, the results of the MMI criterion might be biased toward the mWER criterion. It is a major advantage of the minimum error rate training that it is not necessary to choose pseudo-references.

7 Results
We present results on the 2002 TIDES Chinese–English small data track task. The goal is the translation of news text from Chinese to English. Table 1 provides some statistics on the training, development and test corpus used. The system we use does not include rule-based components to translate numbers, dates or names. The basic feature functions were trained using the training corpus. The development corpus was used to optimize the parameters of the log-linear model. Translation results are reported on the test corpus.
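Schematically, the iterative procedure of Section 6 is an outer loop around the error-rate optimizer; in the sketch below (Python), decode_nbest and optimize are hypothetical stand-ins for the beam-search decoder and the line-search training step, so this is an outline rather than the actual training pipeline.

def mert_outer_loop(dev_corpus, initial_lambdas, decode_nbest, optimize, max_iter=10):
    """Iterative n-best training loop (schematic).

    decode_nbest(lambdas, sentence) -> list of candidate translations
    optimize(lambdas, nbest)        -> new lambdas minimizing the error count
    """
    lambdas = initial_lambdas
    nbest = {s: [] for s in dev_corpus}            # accumulated n-best lists
    for _ in range(max_iter):
        grew = False
        for s in dev_corpus:
            for cand in decode_nbest(lambdas, s):  # e.g. ~200 alternatives per iteration
                if cand not in nbest[s]:
                    nbest[s].append(cand)
                    grew = True
        if not grew:                               # n-best lists unchanged: converged
            break
        lambdas = optimize(lambdas, nbest)         # e.g. the line search of Section 5
    return lambdas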
Table 2 shows the results obtained on the development corpus and Table 3 shows the results obtained Table 2: Effect of different error criteria in training on the development corpus. Note that better results correspond to larger BLEU and NIST scores and to smaller error rates. Italic numbers refer to results for which the difference to the best result (indicated in bold) is not statistically significant. error criterion used in training mWER [%] mPER [%] BLEU [%] NIST # words confidence intervals +/- 2.4 +/- 1.8 +/- 1.2 +/- 0.2 MMI 70.7 55.3 12.2 5.12 10382 mWER 69.7 52.9 15.4 5.93 10914 smoothed-mWER 69.8 53.0 15.2 5.93 10925 mPER 71.9 51.6 17.2 6.61 11671 smoothed-mPER 71.8 51.8 17.0 6.56 11625 BLEU 76.8 54.6 19.6 6.93 13325 NIST 73.8 52.8 18.9 7.08 12722 Table 1: Characteristics of training corpus (Train), manual lexicon (Lex), development corpus (Dev), test corpus (Test). Chinese English Train Sentences 5 109 Words 89 121 111 251 Singletons 3 419 4 130 Vocabulary 8 088 8 807 Lex Entries 82 103 Dev Sentences 640 Words 11 746 13 573 Test Sentences 878 Words 24 323 26 489 on the test corpus. Italic numbers refer to results for which the difference to the best result (indicated in bold) is not statistically significant. For all error rates, we show the maximal occurring 95% confidence interval in any of the experiments for that column. The confidence intervals are computed using bootstrap resampling (Press et al., 2002). The last column provides the number of words in the produced translations which can be compared with the average number of reference words occurring in the development and test corpora given in Table 1. We observe that if we choose a certain error criterion in training, we obtain in most cases the best results using the same criterion as the evaluation metric on the test data. The differences can be quite large: If we optimize with respect to word error rate, the results are mWER=68.3%, which is better than if we optimize with respect to BLEU or NIST and the difference is statistically significant. Between BLEU and NIST, the differences are more moderate, but by optimizing on NIST, we still obtain a large improvement when measured with NIST compared to optimizing on BLEU. The MMI criterion produces significantly worse results on all error rates besides mWER. Note that, due to the re-definition of the notion of reference translation by using minimum edit distance, the results of the MMI criterion are biased toward mWER. It can be expected that by using a suitably defined . gram precision to define the pseudo-references for MMI instead of using edit distance, it is possible to obtain better BLEU or NIST scores. An important part of the differences in the translation scores is due to the different translation length (last column in Table 3). The mWER and MMI criteria prefer shorter translations which are heavily penalized by the BLEU and NIST brevity penalty. We observe that the smoothed error count gives almost identical results to the unsmoothed error count. This might be due to the fact that the number of parameters trained is small and no serious overfitting occurs using the unsmoothed error count. 8 Related Work The use of log-linear models for statistical machine translation was suggested by Papineni et al. (1997) and Och and Ney (2002). The use of minimum classification error training and using a smoothed error count is common in the pattern recognition and speech Table 3: Effect of different error criteria used in training on the test corpus. 
Note that better results correspond to larger BLEU and NIST scores and to smaller error rates. Italic numbers refer to results for which the difference to the best result (indicated in bold) is not statistically significant. error criterion used in training mWER [%] mPER [%] BLEU [%] NIST # words confidence intervals +/- 2.7 +/- 1.9 +/- 0.8 +/- 0.12 MMI 68.0 51.0 11.3 5.76 21933 mWER 68.3 50.2 13.5 6.28 22914 smoothed-mWER 68.2 50.2 13.2 6.27 22902 mPER 70.2 49.8 15.2 6.71 24399 smoothed-mPER 70.0 49.7 15.2 6.69 24198 BLEU 76.1 53.2 17.2 6.66 28002 NIST 73.3 51.5 16.4 6.80 26602 recognition community (Duda and Hart, 1973; Juang et al., 1995; Schl¨uter and Ney, 2001). Paciorek and Rosenfeld (2000) use minimum classification error training for optimizing parameters of a whole-sentence maximum entropy language model. A technically very different approach that has a similar goal is the minimum Bayes risk approach, in which an optimal decision rule with respect to an application specific risk/loss function is used, which will normally differ from Eq. 3. The loss function is either identical or closely related to the final evaluation criterion. In contrast to the approach presented in this paper, the training criterion and the statistical models used remain unchanged in the minimum Bayes risk approach. In the field of natural language processing this approach has been applied for example in parsing (Goodman, 1996) and word alignment (Kumar and Byrne, 2002). 9 Conclusions We presented alternative training criteria for loglinear statistical machine translation models which are directly related to translation quality: an unsmoothed error count and a smoothed error count on a development corpus. For the unsmoothed error count, we presented a new line optimization algorithm which can efficiently find the optimal solution along a line. We showed that this approach obtains significantly better results than using the MMI training criterion (with our method to define pseudoreferences) and that optimizing error rate as part of the training criterion helps to obtain better error rate on unseen test data. As a result, we expect that actual ’true’ translation quality is improved, as previous work has shown that for some evaluation criteria there is a correlation with human subjective evaluation of fluency and adequacy (Papineni et al., 2001; Doddington, 2002). However, the different evaluation criteria yield quite different results on our Chinese–English translation task and therefore we expect that not all of them correlate equally well to human translation quality. The following important questions should be answered in the future: How many parameters can be reliably estimated using unsmoothed minimum error rate criteria using a given development corpus size? We expect that directly optimizing error rate for many more parameters would lead to serious overfitting problems. Is it possible to optimize more parameters using the smoothed error rate criterion? Which error rate should be optimized during training? This relates to the important question of which automatic evaluation measure is optimally correlated to human assessment of translation quality. Note, that this approach can be applied to any evaluation criterion. Hence, if an improved automatic evaluation criterion is developed that has an even better correlation with human judgments than BLEU and NIST, we can plug this alternative criterion directly into the training procedure and optimize the model parameters for it. 
This means that improved translation evaluation measures lead directly to improved machine translation quality. Of course, the approach presented here places a high demand on the fidelity of the measure being optimized. It might happen that by directly optimizing an error measure in the way described above, weaknesses in the measure might be exploited that could yield better scores without improved translation quality. Hence, this approach poses new challenges for developers of automatic evaluation criteria. Many tasks in natural language processing, for instance summarization, have evaluation criteria that go beyond simply counting the number of wrong system decisions and the framework presented here might yield improved systems for these tasks as well. Acknowledgements This work was supported by DARPA-ITO grant 66001-00-1-9814. References Srinivas Bangalore, O. Rambox, and S. Whittaker. 2000. Evaluation metrics for generation. In Proceedings of the International Conference on Natural Language Generation, Mitzpe Ramon, Israel. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proc. ARPA Workshop on Human Language Technology. Richhard O. Duda and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. John Wiley, New York, NY. Joshua Goodman. 1996. Parsing algorithms and metrics. In Proceedings of the 34th Annual Meeting of the ACL, pages 177–183, Santa Cruz, CA, June. B. H. Juang, W. Chou, and C. H. Lee. 1995. Statistical and discriminative methods for speech recognition. In A. J. Rubio Ayuso and J. M. Lopez Soler, editors, Speech Recognition and Coding - New Advances and Trends. Springer Verlag, Berlin, Germany. Shankar Kumar and William Byrne. 2002. Minimum bayes-risk alignment of bilingual texts. In Proc. of the Conference on Empirical Methods in Natural Language Processing, Philadelphia, PA. Sonja Nießen, Franz J. Och, G. Leusch, and Hermann Ney. 2000. An evaluation tool for machine translation: Fast evaluation for machine translation research. In Proc. of the Second Int. Conf. on Language Resources and Evaluation (LREC), pages 39–45, Athens, Greece, May. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, July. Franz J. Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20–28, University of Maryland, College Park, MD, June. Chris Paciorek and Roni Rosenfeld. 2000. Minimum classification error training in exponential language models. In NIST/DARPA Speech Transcription Workshop, May. Kishore A. Papineni, Salim Roukos, and R. T. Ward. 1997. Feature-based language understanding. In European Conf. on Speech Communication and Technology, pages 1435–1438, Rhodes, Greece, September. Kishore A. Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center, Yorktown Heights, NY, September. Kishore A. Papineni. 1999. Discriminative training via linear programming. In Proceedings of the 1999 IEEE International Conference on Acoustics, Speech & Signal Processing, Atlanta, March. William H. Press, Saul A. 
Teukolsky, William T. Vetterling, and Brian P. Flannery. 2002. Numerical Recipes in C++. Cambridge University Press, Cambridge, UK. Ralf Schlüter and Hermann Ney. 2001. Model-based MCE bound to the true Bayes' error. IEEE Signal Processing Letters, 8(5):131–133, May. Christoph Tillmann, Stephan Vogel, Hermann Ney, Alex Zubiaga, and Hassan Sawaf. 1997. Accelerated DP based search for statistical translation. In European Conf. on Speech Communication and Technology, pages 2667–2670, Rhodes, Greece, September. Nicola Ueffing, Franz Josef Och, and Hermann Ney. 2002. Generation of word graphs in statistical machine translation. In Proc. Conference on Empirical Methods for Natural Language Processing, pages 156–163, Philadelphia, PA, July.
2003
21
A Machine Learning Approach to Pronoun Resolution in Spoken Dialogue Michael Strube and Christoph M¨uller European Media Laboratory GmbH Villa Bosch Schloß-Wolfsbrunnenweg 33 69118 Heidelberg, Germany michael.strube|christoph.mueller  @eml.villa-bosch.de Abstract We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NPand non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron’s (2002) manually tuned system. 1 Introduction Corpus-based methods and machine learning techniques have been applied to anaphora resolution in written text with considerable success (Soon et al., 2001; Ng & Cardie, 2002, among others). It has been demonstrated that systems based on these approaches achieve a performance that is comparable to hand-crafted systems. Since they can easily be applied to new domains it seems also feasible to port a given corpus-based anaphora resolution system from written text to spoken dialogue. This paper describes the extensions and adaptations needed for applying our anaphora resolution system (M¨uller et al., 2002; Strube et al., 2002) to pronoun resolution in spoken dialogue. There are important differences between written text and spoken dialogue which have to be accounted for. The most obvious difference is that in spoken dialogue there is an abundance of (personal and demonstrative) pronouns with non-NP-antecedents or no antecedents at all. Corpus studies have shown that a significant amount of pronouns in spoken dialogue have non-NP-antecedents: Byron & Allen (1998) report that about 50% of the pronouns in the TRAINS93 corpus have non-NP-antecedents. Eckert & Strube (2000) note that only about 45% of the pronouns in a set of Switchboard dialogues have NP-antecedents. The remainder consists of 22% which have non-NP-antecedents and 33% without antecedents. These studies suggest that the performance of a pronoun resolution algorithm can be improved considerably by enabling it to resolve also pronouns with non-NP-antecedents. Because of the difficulties a pronoun resolution algorithm encounters in spoken dialogue, previous approaches were applied only to tiny domains, they needed deep semantic analysis and discourse processing and relied on hand-crafted knowledge bases. In contrast, we build on our existing anaphora resolution system and incrementally add new features specifically devised for spoken dialogue. That way we are able to determine relatively powerful yet computationally cheap features. To our knowledge the work presented here describes the first implemented system for corpus-based anaphora resolution dealing also with non-NP-antecedents. 2 NP- vs. Non-NP-Antecedents Spoken dialogue contains more pronouns with nonNP-antecedents than written text does. However, pronouns with NP-antecedents (like 3rd pers. masculine/feminine pronouns, cf. he in the example below) still constitute the largest fraction of all coreferential pronouns in the Switchboard corpus. In spoken dialogue there are considerable numbers of pronouns that pick up different kinds of abstract objects from the previous discourse, e.g. events, states, concepts, propositions or facts (Webber, 1991; Asher, 1993). These anaphors then have VP-antecedents (“it ” in (B6) below) or sentential antecedents (“that  ” in (B5)). A1: . . . [he]  ’s nine months old. . . . 
A2: [He]  likes to dig around a little bit. A3: [His]  mother comes in and says, why did you let [him]  [play in the dirt]  , A:4 I guess [[he]  ’s enjoying himself]  . B5: [That]  ’s right. B6: [It]  ’s healthy, . . . A major problem for pronoun resolution in spoken dialogue is the large number of personal and demonstrative pronouns which are either not referential at all (e.g. expletive pronouns) or for which a particular antecedent cannot easily be determined by humans (called vague anaphors by Eckert & Strube (2000)). In the following example, the “that  ” in utterance (A3) refers back to utterance (A1). As for the first two pronouns in (B4), following Eckert & Strube (2000) and Byron (2002) we assume that referring expressions in disfluencies, abandoned utterances etc. are excluded from the resolution. The third pronoun in (B4) is an expletive. The pronoun in (A5) is different in that it is indeed referential: it refers back to“that  ” from (A3). A1: . . . [There is a lot of theft, a lot of assault dealing with, uh, people trying to get money for drugs.  ] B2: Yeah. A3: And, uh, I think [that  ]’s a national problem, though. B4: It, it, it’s pretty bad here, too. A5: [It  ]’s not unique . . . Pronoun resolution in spoken dialogue also has to deal with the whole range of difficulties that come with processing spoken language: disfluencies, hesitations, abandoned utterances, interruptions, backchannels, etc. These phenomena have to be taken into account when formulating constraints on e.g. the search space in which an anaphor looks for its antecedent. E.g., utterance (B2) in the previous example does not contain any referring expressions. So the demonstrative pronoun in (A3) has to have access not only to (B2) but also to (A1). 3 Data 3.1 Corpus Our work is based on twenty randomly chosen Switchboard dialogues. Taken together, the dialogues contain 30810 tokens (words and punctuation) in 3275 sentences / 1771 turns. The annotation consists of 16601 markables, i.e. sequences of words and attributes associated with them. On the top level, different types of markables are distinguished: NPmarkables identify referring expressions like noun phrases, pronouns and proper names. Some of the attributes for these markables are derived from the Penn Treebank version of the Switchboard dialogues, e.g. grammatical function, NP form, grammatical case and depth of embedding in the syntactical structure. VP-markables are verb phrases, S-markables sentences. Disfluency-markables are noun phrases or pronouns which occur in unfinished or abandoned utterances. Among other (typedependent) attributes, markables contain a member attribute with the ID of the coreference class they are part of (if any). If an expression is used to refer to an entity that is not referred to by any other expression, it is considered a singleton. Table 1 gives the distribution of the npform attribute for NP-markables. The second and third row give the number of non-singletons and singletons respectively that add up to the total number given in the first row. Table 2 shows the distribution of the agreement attribute (i.e. person, gender, and number) for the pronominal expressions in our corpus. The left figure in each cell gives the total number of expressions, the right figure gives the number of nonsingletons. Note the relatively high number of singletons among the personal and demonstrative pronouns (223 for it, 60 for they and 82 for that). 
These pronouns are either expletive or vague, and cause the most trouble for a pronoun resolution algorithm, which will usually attempt to find an antecedent nonetheless. Singleton they pronouns, in particular, are typical for spoken language (as opposed to defNP indefNP NNP prp prp$ dtpro Total 1080 1899 217 1075 70 392 In coreference relation 219 163 94 786 56 309 Singletons 861 1736 123 289 14 83 Table 1: Distribution of npform Feature on Markables (w/o 1st and 2nd Persons) 3m 3f 3n 3p prp 67 63 49 47 541 318 418 358 prp$ 18 15 14 11 3 3 35 27 dtpro 0 0 0 0 380 298 12 11 85 78 63 58 924 619 465 396 Table 2: Distribution of Agreement Feature on Pronominal Expressions written text). The same is true for anaphors with non-NP-antecedents. However, while they are far more frequent in spoken language than in written text, they still constitute only a fraction of all coreferential expressions in our corpus. This defines an upper limit for what the resolution of these kinds of anaphors can contribute at all. These facts have to be kept in mind when comparing our results to results of coreference resolution in written text. 3.2 Data Generation Training and test data instances were generated from our corpus as follows. All markables were sorted in document order, and markables for first and second person pronouns were removed. The resulting list was then processed from top to bottom. If the list contained an NP-markable at the current position and if this markable was not an indefinite noun phrase, it was considered a potential anaphor. In that case, pairs of potentially coreferring expressions were generated by combining the potential anaphor with each compatible1 NP-markable preceding2 it in the list. The resulting pairs were labelled P if both markables had the same (non-empty) value in their member attribute, N otherwise. For anaphors with non-NP-antecedents, additional training and test data instances had to be generated. This process was triggered by the markable at the current position being it or that. In that case, a small set of potential non-NP-antecedents was generated by selecting S- and VP-markables from the last two valid sentences preceding the potential anaphor. The choice 1Markables are considered compatible if they do not mismatch in terms of agreement. 2We disregard the phenomenon of cataphor here. of the last two sentences was motivated pragmatically by considerations to keep the search space (and the number of instances) small. A sentence was considered valid if it was neither unfinished nor a backchannel utterance (like e.g. ”Uh-huh”, ”Yeah”, etc.). From the selected markables, inaccessible non-NP-expressions were automatically removed. We considered an expression inaccessible if it ended before the sentence in which it was contained. This was intended to be a rough approximation of the concept of the right frontier (Webber, 1991). The remaining expressions were then combined with the potential anaphor. Finally, the resulting pairs were labelled P or N and added to the instances generated with NP-antecedents. 4 Features We distinguish two classes of features: NP-level features specify e.g. the grammatical function, NP form, morpho-syntax, grammatical case and the depth of embedding in the syntactical structure. For these features, each instance contains one value for the antecedent and one for the anaphor. Coreference-level features, on the other hand, describe the relation between antecedent and anaphor in terms of e.g. 
distance (in words, markables and sentences), compatibility in terms of agreement and identity of syntactic function. For these features, each instance contains only one value. In addition, we introduce a set of features which is partly tailored to the processing of spoken dialogue. The feature ante exp type (17) is a rather obvious yet useful feature to distinguish NP- from non-NP-antecedents. The features ana np , vp and NP-level features 1. ante gram func grammatical function of antecedent 2. ante npform form of antecedent 3. ante agree person, gender, number 4. ante case grammatical case of antecedent 5. ante s depth the level of embedding in a sentence 6. ana gram func grammatical function of anaphor 7. ana npform form of anaphor 8. ana agree person, gender, number 9. ana case grammatical case of anaphor 10. ana s depth the level of embedding in a sentence Coreference-level features 11. agree comp compatibility in agreement between anaphor and antecedent 12. npform comp compatibilty in NP form between anaphor and antecedent 13. wdist distance between anaphor and antecedent in words 14. mdist distance between anaphor and antecedent in markables 15. sdist distance between anaphor and antecedent in sentences 16. syn par anaphor and antecedent have the same grammatical function (yes, no) Features introduced for spoken dialogue 17. ante exp type type of antecedent (NP, S, VP) 18. ana np pref preference for NP arguments 19. ana vp pref preference for VP arguments 20. ana s pref preference for S arguments 21. mdist 3mf3p (see text) 22. mdist 3n (see text) 23. ante tfidf (see text) 24. ante ic (see text) 25. wdist ic (see text) Table 3: Our Features s pref (18, 19, 20) describe a verb’s preference for arguments of a particular type. Inspired by the work of Eckert & Strube (2000) and Byron (2002), these features capture preferences for NP- or nonNP-antecedents by taking a pronoun’s predicative context into account. The underlying assumption is that if a verb preceding a personal or demonstrative pronoun preferentially subcategorizes sentences or VPs, then the pronoun will be likely to have a nonNP-antecedent. The features are based on a verb list compiled from 553 Switchboard dialogues.3 For every verb occurring in the corpus, this list contains up to three entries giving the absolute count of cases where the verb has a direct argument of type NP, VP or S. When the verb list was produced, pronominal arguments were ignored. The features mdist 3mf3p and mdist 3n (21, 22) are refinements of the mdist feature. They measure the distance in markables between antecedent and anaphor, but in doing so they take the agreement value of the anaphor into account. For anaphors with an agreement value of 3mf or 3p, mdist 3mf3p is measured as D = 1 + the num3It seemed preferable to compile our own list instead of using existing ones like Briscoe & Carroll (1997). ber of NP-markables between anaphor and potential antecedent. Anaphors with an agreement value of 3n, (i.e. it or that), on the other hand, potentially have non-NP-antecedents, so mdist 3n is measured as D + the number of anaphorically accessible4 Sand VP-markables between anaphor and potential antecedent. The feature ante tfifd (23) is supposed to capture the relative importance of an expression for a dialogue. The underlying assumption is that the higher the importance of a non-NP expression, the higher the probability of its being referred back to. 
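As an illustration of the agreement-sensitive distance features mdist 3mf3p and mdist 3n described above, the following sketch (Python, with a hypothetical markable representation rather than the authors' code) computes them from a document-ordered markable list.

def mdist_feature(anaphor_idx, antecedent_idx, markables, anaphor_agreement):
    """Distance in markables between anaphor and antecedent, following the
    description of mdist 3mf3p / mdist 3n above.

    markables: document-ordered list of dicts with a 'type' key
               ('NP', 'VP' or 'S') and an 'accessible' flag for non-NPs.
    The antecedent is assumed to precede the anaphor in the document.
    """
    between = markables[antecedent_idx + 1:anaphor_idx]
    d = 1 + sum(1 for m in between if m["type"] == "NP")
    if anaphor_agreement in ("3mf", "3p"):
        return d                      # mdist 3mf3p: NP-markables only
    # 3n anaphors may have S/VP antecedents, so anaphorically accessible
    # S- and VP-markables between the two expressions are counted as well.
    return d + sum(1 for m in between
                   if m["type"] in ("S", "VP") and m.get("accessible", False))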
For our purposes, we calculated TF for every word by counting its frequency in each of our twenty Switchboard dialogues separately. The calculation of IDF was based on a set of 553 Switchboard dialogues. For every word, we calculated IDF as log(553/N ), with N =number of documents containing the word. For every non-NP-markable, an average TF*IDF value was calculated as the TF*IDF sum of all words comprising the markable, divided by the number of 4As mentioned earlier, the definition of accessibility of nonNP-antecedents is inspired by the concept of the right frontier (Webber, 1991). words in the markable. The feature ante ic (24) as an alternative to ante tfidf is based on the same assumptions as the former. The information content of a non-NP-markable is calculated as follows, based on a set of 553 Switchboard dialogues: For each word in the markable, the IC value was calculated as the negative log of the total frequency of the word divided by the total number of words in all 553 dialogues. The average IC value was then calculated as the IC sum of all words in the markable, divided by the number of words in the markable. Finally, the feature wdist ic (25) measures the word-based distance between two expressions. It does so in terms of the sum of the individual words’ IC. The calculation of the IC was done as described for the ante ic feature. 5 Experiments and Results 5.1 Experimental Setup All experiments were performed using the decision tree learner RPART (Therneau & Atkinson, 1997), which is a CART (Breiman et al., 1984) reimplementation for the S-Plus and R statistical computing environments (we use R, Ihaka & Gentleman (1996), see http://www.r-project.org). We used the standard pruning and control settings for RPART (cp=0.0001, minsplit=20, minbucket=7). All results reported were obtained by performing 20-fold crossvalidation. In the prediction phase, the trained classifier is exposed to unlabeled instances of test data. The classifier’s task is to label each instance. When an instance is labeled as coreferring, the IDs of the anaphor and antecedent are kept in a response list for the evaluation according to Vilain et al. (1995). For determining the relevant feature set we followed an iterative procedure similar to the wrapper approach for feature selection (Kohavi & John, 1997). We start with a model based on a set of predefined baseline features. Then we train models combining the baseline with all additional features separately. We choose the best performing feature (fmeasure according to Vilain et al. (1995)), adding it to the model. We then train classifiers combining the enhanced model with each of the remaining features separately. We again choose the best performing classifier and add the corresponding new feature to the model. This process is repeated as long as significant improvement can be observed. 5.2 Results In our experiments we split the data in three sets according to the agreement of the anaphor: third person masculine and feminine pronouns (3mf), third person neuter pronouns (3n), and third person plural pronouns (3p). Since only 3n-pronouns have nonNP-antecedents, we were mainly interested in improvements in this data set. We used the same baseline model for each data set. The baseline model corresponds to a pronoun resolution algorithm commonly applied to written text, i.e., it uses only the features in the first two parts of Table 3. For the baseline model we generated training and test data which included only NPantecedents. 
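The iterative procedure of Section 5.1, which the experiments below follow, amounts to greedy forward feature selection; a schematic version is given here (Python), where train_and_score stands in for 20-fold cross-validated RPART training returning the f-measure of Vilain et al. (1995), and the stopping threshold is an assumption of this sketch.

def greedy_feature_selection(baseline, candidates, train_and_score, min_gain=0.005):
    """Greedy forward selection over candidate features (Section 5.1).

    train_and_score(features) -> f-measure of a classifier trained with `features`.
    min_gain is a placeholder for "significant improvement".
    """
    selected = list(baseline)
    best_score = train_and_score(selected)
    remaining = list(candidates)
    while remaining:
        scored = [(train_and_score(selected + [f]), f) for f in remaining]
        score, feature = max(scored)
        if score - best_score < min_gain:      # no significant improvement: stop
            break
        selected.append(feature)
        remaining.remove(feature)
        best_score = score
    return selected, best_score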
Then we performed experiments using the features introduced for spoken dialogue. The training and test data for the models using additional features included NP- and non-NP-antecedents. For each data set we followed the iterative procedure outlined in Section 5.1. In the following tables we present the results of our experiments. The first column gives the number of coreference links correctly found by the classifier, the second column gives the number of all coreference links found. The third column gives the total number of coreference links (1250) in the corpus. During evaluation, the list of all correct links is used as the key list against which the response list produced by the classifier (cf. above) is compared. The remaining three columns show precision, recall and f-measure, respectively.

Table 4: Results for Third Person Masculine and Feminine Pronouns (3mf)
                          correct found  total found  total correct  precision  recall  f-measure
baseline, features 1-16   120            150          1250           80.00      9.60    17.14
plus mdist 3mf3p          121            153          1250           79.08      9.68    17.25

Table 5: Results for Third Person Neuter Pronouns (3n)
                          correct found  total found  total correct  precision  recall  f-measure
baseline, features 1-16   109            235          1250           46.38      8.72    14.68
plus none                 97             232          1250           41.81      7.76    13.09
plus ante exp type        137            359          1250           38.16      10.96   17.03
plus wdist ic             154            389          1250           39.59      12.32   18.79
plus ante tfidf           158            391          1250           40.41      12.64   19.26

Table 4 gives the results for 3mf pronouns. The baseline model performs very well on this data set (the low recall figure is due to the fact that the 3mf data set contains only a small subset of the coreference links expected by the evaluation). The results are comparable to any pronoun resolution algorithm dealing with written text. This shows that our pronoun resolution system could be ported to the spoken dialogue domain without sacrificing performance. Table 5 shows the results for 3n pronouns. The baseline model does not perform very well. As mentioned above, for evaluating the performance of the baseline model we removed all potential non-NP-antecedents from the data. This corresponds to a naive application of a model developed for written text to spoken dialogue. First, we applied the same model to the data set containing all kinds of antecedents. The performance drops somewhat as the classifier is exposed to non-NP-antecedents without being able to differentiate between NP- and non-NP-antecedents. By adding the feature ante exp type the classifier is enabled to address NP- and non-NP-antecedents differently, which results in a considerable gain in performance. Substituting the wdist feature with the wdist ic feature also improves the performance considerably. The ante tfidf feature only contributes marginally to the overall performance. These results show that it pays off to consider features particularly designed for spoken dialogue. Table 6 presents the results for 3p pronouns, which do not have non-NP-antecedents. Many of these pronouns do not have an antecedent at all. Others are vague in that human annotators felt them to be referential, but could not determine an antecedent. Since we did not address that issue in depth, the classifier tries to find antecedents for these pronouns indiscriminately, which results in rather low precision figures, as compared to e.g. those for 3mf. Only the feature wdist ic leads to an improvement over the baseline. Table 7 shows the results for the combined classifiers.
The improvement in f-measure is due to the increase in recall while the precision shows only a slight decrease. Though some of the features of the baseline model (features 1-16) still occur in the decision tree learned, the feature ante exp type divides major parts of the tree quite nicely (see Figure 1). Below that node the feature ana npform is used to distinguish between negative (personal pronouns) and potential positive cases (demonstrative pronouns). This confirms the hypothesis by Eckert & Strube (2000) and Byron (2002) to give high priority to these features. The decision tree fragment in Figure 1 correctly assigns the P label to 23-7=16 sentential antecedents.

Figure 1: Decision Tree Fragment
split, n, loss, yval; * denotes terminal node
...
anteexptype=s,vp 1110 55 N
  ananpform=prp 747 11 N *
  ananpform=dtpro 363 44 N
    anteexptype=vp 177 3 N *
    anteexptype=s 186 41 N
      udist>=1.5 95 14 N *
      udist<1.5 91 27 N
        wdistic<43.32 33 4 N *
        wdistic>=43.32 58 23 N
          anasdepth>=2.5 23 4 N *
          anasdepth<2.5 35 16 N
            wdistic>=63.62 24 11 N
              wdistic<80.60 12 3 N *
              wdistic>=80.60 12 4 P *
            wdistic<63.62 11 3 P *

However, the most important problem is the large number of pronouns without antecedents. The model does find (wrong) antecedents for a lot of pronouns which should not have one. Only a small fraction of these pronouns are true expletives (i.e., they precede a "weather" verb or are in constructions like "It seems that . . . "). The majority of these cases are referential, but have no antecedent in the data (i.e., they are vague pronouns).

Table 6: Results for Third Person Plural Pronouns (3p)
                          correct found  total found  total correct  precision  recall  f-measure
baseline, features 1-16   227            354          1250           64.12      18.16   28.30
plus wdist ic             230            353          1250           65.16      18.40   28.70

Table 7: Combined Results for All Pronouns
                          correct found  total found  total correct  precision  recall  f-measure
baseline, features 1-16   456            739          1250           61.71      36.48   45.85
combined                  509            897          1250           56.74      40.72   47.42

The overall numbers for precision, recall and f-measure are fairly low. One reason is that we did not attempt to resolve anaphoric definite NPs and proper names though these coreference links are contained in the evaluation key list. If we removed them from there, the recall of our experiments would approach the 51% Byron (2002) mentioned for her system using only domain-independent semantic restrictions.

6 Comparison to Related Work
Our approach for determining the feature set for pronoun resolution resembles the so-called wrapper approach for feature selection (Kohavi & John, 1997). This is in contrast to the majority of other work on feature selection for anaphora resolution, which was hardly ever done systematically. E.g. Soon et al. (2001) only compared baseline systems consisting of one feature each, only three of which yielded an f-measure greater than zero. Then they combined these features and achieved results which were close to the best overall results they report. While this tells us which features contribute a lot, it does not give any information about potential (positive or negative) influence of the rest. Ng & Cardie (2002) select the set of features by hand, giving a preference to high precision features. They admit that this method is quite subjective. Corpus-based work about pronoun resolution in spoken dialogue is almost non-existent. However, there are a few papers dealing with neuter pronouns with NP-antecedents.
E.g., Dagan & Itai (1991) presented a corpus-based approach to the resolution of the pronoun it, but they use a written text corpus and do not mention non-NP-antecedents at all. Paul et al. (1999) presented a corpus-based anaphora resolution algorithm for spoken dialogue. For their experiments, however, they restricted anaphoric relations to those with NP-antecedents. Byron (2002) presented a symbolic approach which resolves pronouns with NP- and non-NPantecedents in spoken dialogue in the TRAINS domain. Byron extends a pronoun resolution algorithm (Tetrault, 2001) with semantic filtering, thus enabling it to resolve anaphors with non-NPantecedents as well. Semantic filtering relies on knowledge about semantic restrictions associated with verbs, like semantic compatibility between subject and predicative noun or predicative adjective. An evaluation on ten TRAINS93 dialogues with 80 3rd person pronouns and 100 demonstrative pronouns shows that semantic filtering and the implementation of different search strategies for personal and demonstrative pronouns yields a success rate of 72%. As Byron admits, the major limitation of her algorithm is its dependence on domain-dependent resources which cover the domain entirely. When evaluating her algorithm with only domain-independent semantics, Byron achieved 51% success rate. What is problematic with her approach is that she assumes the input to her algorithm to be only referential pronouns. This simplifies the task considerably. 7 Conclusions and Future Work We presented a machine learning approach to pronoun resolution in spoken dialogue. We built upon a system we used for anaphora resolution in written text and extended it with a set of features designed for spoken dialogue. We refined distance features and used metrics from information retrieval for determining non-NP-antecedents. Inspired by the more linguistically oriented work by Eckert & Strube (2000) and Byron (2002) we also evaluated the contribution of features which used the predicative context of the pronoun to be resolved. However, these features did not show up in the final models since they did not lead to an improvement. Instead, rather simple distance metrics were preferred. While we were (almost) satisfied with the performance of these features, the major problem for a spoken dialogue pronoun resolution algorithm is the abundance of pronouns without antecedents. Previous research could avoid dealing with this phenomenon by either applying the algorithm by hand (Eckert & Strube, 2000) or excluding these cases (Byron, 2002) from the evaluation. Because we included these cases in our evaluation we consider our approach at least comparable to Byron’s system when she uses only domain-independent semantics. We believe that our system is more robust than hers and that it can more easily be ported to new domains. Acknowledgements. The work presented here has been partially funded by the German Ministry of Research and Technology as part of the EMBASSI project (01 IL 904 D/2) and by the Klaus Tschira Foundation. We would like to thank Susanne Wilhelm and Lutz Wind for doing the annotations, Kerstin Sch¨urmann, Torben Pastuch and Klaus Rothenh¨ausler for helping with the data preparation. References Asher, Nicholas (1993). Reference to Abstract Objects in Discourse. Dordrecht, The Netherlands: Kluwer. Breiman, Leo, Jerome H. Friedman, Charles J. Stone & R.A. Olshen (1984). Classification and Regression Trees. Belmont, Cal.: Wadsworth and Brooks/Cole. Briscoe, Ted & John Carroll (1997). 
Automatic extraction of subcategorization from corpora. In Proceedings of the 5th Conference on Applied Natural Language Processing, Washington, D.C., 31 March – 3 April 1997, pp. 356–363. Byron, Donna K. (2002). Resolving pronominal reference to abstract entities. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Penn., 7–12 July 2002, pp. 80–87. Byron, Donna K. & James F. Allen (1998). Resolving demonstrative pronouns in the TRAINS93 corpus. In New Approaches to Discourse Anaphora: Proceedings of the Second Colloquium on Discourse Anaphora and Anaphor Resolution (DAARC2), pp. 68–81. Dagan, Ido & Alon Itai (1991). A statistical filter for resolving pronoun references. In Y.A. Feldman & A. Bruckstein (Eds.), Artificial Intelligence and Computer Vision, pp. 125– 135. Amsterdam: Elsevier. Eckert, Miriam & Michael Strube (2000). Dialogue acts, synchronising units and anaphora resolution. Journal of Semantics, 17(1):51–89. Ihaka, Ross & Robert Gentleman (1996). R: A language for data analysis and graphics. Journal of Computational and Graphical Statistics, 5:299–314. Kohavi, Ron & George H. John (1997). Wrappers for feature subset selection. Artificial Intelligence Journal, 97(12):273–324. M¨uller, Christoph, Stefan Rapp & Michael Strube (2002). Applying Co-Training to reference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Penn., 7–12 July 2002, pp. 352–359. Ng, Vincent & Claire Cardie (2002). Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia, Penn., 7–12 July 2002, pp. 104–111. Paul, Michael, Kazuhide Yamamoto & Eiichiro Sumita (1999). Corpus-based anaphora resolution towards antecedent preference. In Proc. of the 37th ACL, Workshop Coreference and Its Applications, College Park, Md., 1999, pp. 47–52. Soon, Wee Meng, Hwee Tou Ng & Daniel Chung Yong Lim (2001). A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521– 544. Strube, Michael, Stefan Rapp & Christoph M¨uller (2002). The influence of minimum edit distance on reference resolution. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, Philadelphia, Pa., 6–7 July 2002, pp. 312–319. Tetrault, Joel R. (2001). A corpus-based evaluation of centering and pronoun resolution. Computational Linguistics, 27(4):507–520. Therneau, Terry M. & Elizabeth J. Atkinson (1997). An introduction to recursive partitioning using the RPART routines. Technical Report: Mayo Foundation. Distributed with the RPART package. Vilain, Marc, John Burger, John Aberdeen, Dennis Connolly & Lynette Hirschman (1995). A model-theoretic coreference scoring scheme. In Proceedings of the 6th Message Understanding Conference (MUC-6), pp. 45–52. San Mateo, Cal.: Morgan Kaufmann. Webber, Bonnie L. (1991). Structure and ostension in the interpretation of discourse deixis. Language and Cognitive Processes, 6(2):107–135.
Coreference Resolution Using Competition Learning Approach Xiaofeng Yang*+ Guodong Zhou* Jian Su* Chew Lim Tan + *Institute for Infocomm Research, 21 Heng Mui Keng Terrace, Singapore 119613 +Department of Computer Science, National University of Singapore, Singapore 117543 *{xiaofengy,zhougd,sujian}@ i2r.a-star.edu.sg +(yangxiao,tancl)@comp.nus.edu.sg Abstract In this paper we propose a competition learning approach to coreference resolution. Traditionally, supervised machine learning approaches adopt the singlecandidate model. Nevertheless the preference relationship between the antecedent candidates cannot be determined accurately in this model. By contrast, our approach adopts a twin-candidate learning model. Such a model can present the competition criterion for antecedent candidates reliably, and ensure that the most preferred candidate is selected. Furthermore, our approach applies a candidate filter to reduce the computational cost and data noises during training and resolution. The experimental results on MUC-6 and MUC-7 data set show that our approach can outperform those based on the singlecandidate model. 1 Introduction Coreference resolution is the process of linking together multiple expressions of a given entity. The key to solve this problem is to determine the antecedent for each referring expression in a document. In coreference resolution, it is common that two or more candidates compete to be the antecedent of an anaphor (Mitkov, 1999). Whether a candidate is coreferential to an anaphor is often determined by the competition among all the candidates. So far, various algorithms have been proposed to determine the preference relationship between two candidates. Mitkov’s knowledge-poor pronoun resolution method (Mitkov, 1998), for example, uses the scores from a set of antecedent indicators to rank the candidates. And centering algorithms (Brennan et al., 1987; Strube, 1998; Tetreault, 2001), sort the antecedent candidates based on the ranking of the forward-looking or backwardlooking centers. In recent years, supervised machine learning approaches have been widely used in coreference resolution (Aone and Bennett, 1995; McCarthy, 1996; Soon et al., 2001; Ng and Cardie, 2002a), and have achieved significant success. Normally, these approaches adopt a single-candidate model in which the classifier judges whether an antecedent candidate is coreferential to an anaphor with a confidence value. The confidence values are generally used as the competition criterion for the antecedent candidates. For example, the “Best-First” selection algorithms (Aone and Bennett, 1995; Ng and Cardie, 2002a) link the anaphor to the candidate with the maximal confidence value (above 0.5). One problem of the single-candidate model, however, is that it only takes into account the relationships between an anaphor and one individual candidate at a time, and overlooks the preference relationship between candidates. Consequently, the confidence values cannot accurately represent the true competition criterion for the candidates. In this paper, we present a competition learning approach to coreference resolution. Motivated by the research work by Connolly et al. (1997), our approach adopts a twin-candidate model to directly learn the competition criterion for the antecedent candidates. In such a model, a classifier is trained based on the instances formed by an anaphor and a pair of its antecedent candidates. 
The classifier is then used to determine the preference between any two candidates of an anaphor encountered in a new document. The candidate that wins the most comparisons is selected as the antecedent. In order to reduce the computational cost and data noises, our approach also employs a candidate filter to eliminate the invalid or irrelevant candidates. The layout of this paper is as follows. Section 2 briefly describes the single-candidate model and analyzes its limitation. Section 3 proposes in details the twin-candidate model and Section 4 presents our coreference resolution approach based on this model. Section 5 reports and discusses the experimental results. Section 6 describes related research work. Finally, conclusion is given in Section 7. 2 The Single-Candidate Model The main idea of the single-candidate model for coreference resolution is to recast the resolution as a binary classification problem. During training, a set of training instances is generated for each anaphor in an annotated text. An instance is formed by the anaphor and one of its antecedent candidates. It is labeled as positive or negative based on whether or not the candidate is tagged in the same coreferential chain of the anaphor. After training, a classifier is ready to resolve the NPs1 encountered in a new document. For each NP under consideration, every one of its antecedent candidates is paired with it to form a test instance. The classifier returns a number between 0 and 1 that indicates the likelihood that the candidate is coreferential to the NP. The returned confidence value is commonly used as the competition criterion to rank the candidate. Normally, the candidates with confidences less than a selection threshold (e.g. 0.5) are discarded. Then some algorithms are applied to choose one of the remaining candidates, if any, as the antecedent. For example, “Closest-First” (Soon et al., 2001) selects the candidate closest to the anaphor, while “Best-First” (Aone and Bennett, 1995; Ng and Cardie, 2002a) selects the candidate with the maximal confidence value. One limitation of this model, however, is that it only considers the relationships between a NP encountered and one of its candidates at a time during its training and testing procedures. The confidence value reflects the probability that the candidate is coreferential to the NP in the overall 1 In this paper a NP corresponds to a Markable in MUC coreference resolution tasks. distribution 2, but not the conditional probability when the candidate is concurrent with other competitors. Consequently, the confidence values are unreliable to represent the true competition criterion for the candidates. To illustrate this problem, just suppose a data set where an instance could be described with four exclusive features: F1, F2, F3 and F4. The ranking of candidates obeys the following rule: CSF1 >> CSF2 >> CSF3 >> CSF4 Here CSFi ( 4 1 ≤ ≤i ) is the set of antecedent candidates with the feature Fi on. The mark of “>>” denotes the preference relationship, that is, the candidates in CSF1 is preferred to those in CSF2, and to those in CSF3 and CSF4. Let CF2 and CF3 denote the class value of a leaf node “F2 = 1” and “F3 = 1”, respectively. It is possible that CF2 < CF3, if the anaphors whose candidates all belong to CSF3 or CSF4 take the majority in the training data set. In this case, a candidate in CSF3 would be assigned a larger confidence value than a candidate in CSF2. This nevertheless contradicts the ranking rules. 
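To make the problem concrete, consider the following toy sketch with invented confidence values: once the single-candidate classifier has assigned a higher class value to the "F3 = 1" leaf than to the "F2 = 1" leaf (CF2 < CF3), the standard "Best-First" selection links the anaphor to the CSF3 candidate, against the ranking rule.

```python
# Invented confidence values: the single-candidate classifier has ended up
# assigning CF2 < CF3, even though candidates in CSF2 should be preferred.
confidence = {"F2": 0.55, "F3": 0.70}

def best_first(candidates, threshold=0.5):
    """'Best-First' selection: discard candidates below the threshold and
    link the anaphor to the one with the maximal confidence value."""
    scored = [(confidence[feature], cand) for cand, feature in candidates]
    scored = [(conf, cand) for conf, cand in scored if conf >= threshold]
    return max(scored)[1] if scored else None

candidates = [("candidate_in_CSF2", "F2"), ("candidate_in_CSF3", "F3")]
print(best_first(candidates))   # candidate_in_CSF3, contradicting CSF2 >> CSF3
```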
If during resolution, the candidates of an anaphor all come from CSF2 or CSF3, the anaphor may be wrongly linked to a candidate in CSF3 rather than in CSF2. 3 The Twin-Candidate Model Different from the single-candidate model, the twin-candidate model aims to learn the competition criterion for candidates. In this section, we will introduce the structure of the model in details. 3.1 Training Instances Creation Consider an anaphor ana and its candidate set candidate_set, {C1, C2, …, Ck}, where Cj is closer to ana than Ci if j > i. Suppose positive_set is the set of candidates that occur in the coreferential chain of ana, and negative_set is the set of candidates not in the chain, that is, negative_set = candidate_set - positive_set. The set of training instances based on ana, inst_set, is defined as follows: 2 Suppose we use C4.5 algorithm and the class value takes the smoothed ration, 2 1 + + t p , where p is the number of positive instances and t is the total number of instances contained in the corresponding leaf node. } _ C , _ C j, i | { } _ C , _ C j, i | { _ j i ) , , ( j i ) , , ( set positve set negative inst set negative set positve inst set inst ana Cj Ci ana Cj Ci ∈ ∈ > ∈ ∈ > = U From the above definition, an instance is formed by an anaphor, one positive candidate and one negative candidate. For each instance, ) ana , cj , ci ( inst , the candidate at the first position, Ci, is closer to the anaphor than the candidate at the second position, Cj. A training instance ) ana , cj , ci ( inst is labeled as positive if Ci ∈ positive-set and Cj ∈ negative-set; or negative if Ci ∈ negative-set and Cj ∈ positiveset. See the following example: Any design to link China's accession to the WTO with the missile tests1 was doomed to failure. “If some countries2 try to block China TO accession, that will not be popular and will fail to win the support of other countries3” she said. Although no governments4 have suggested formal sanctions5 on China over the missile tests6, the United States has called them7 “provocative and reckless” and other countries said they could threaten Asian stability. In the above text segment, the antecedent candidate set of the pronoun “them7” consists of six candidates highlighted in Italics. Among the candidates, Candidate 1 and 6 are in the coreferential chain of “them7”, while Candidate 2, 3, 4, 5 are not. Thus, eight instances are formed for “them7”: (2,1,7) (3,1,7) (4,1,7) (5,1,7) (6,5,7) (6,4,7) (6,3,7) (6,2,7) Here the instances in the first line are negative, while those in the second line are all positive. 3.2 Features Definition A feature vector is specified for each training or testing instance. Similar to those in the singlecandidate model, the features may describe the lexical, syntactic, semantic and positional relationships of an anaphor and any one of its candidates. Besides, the feature set may also contain intercandidate features characterizing the relationships between the pair of candidates, e.g. the distance between the candidates in the number distances or paragraphs. 3.3 Classifier Generation Based on the feature vectors generated for each anaphor encountered in the training data set, a classifier can be trained using a certain machine learning algorithm, such as C4.5, RIPPER, etc. Given the feature vector of a test instance ) ana , cj , ci ( inst (i > j), the classifier returns the positive class indicating that Ci is preferred to Cj as the antecedent of ana; or negative indicating that Cj is preferred. 
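The definition of inst_set can also be spelled out procedurally. The sketch below (illustrative names, not the authors' code) implements the instance-creation scheme of Section 3.1 and reproduces the eight instances listed for "them7".

```python
def make_training_instances(candidates, positive_set):
    """Twin-candidate training instances for one anaphor (Section 3.1).

    candidates   -- [C1, ..., Ck]; a higher index means the candidate is
                    closer to the anaphor.
    positive_set -- candidates occurring in the coreferential chain of the
                    anaphor; the remaining candidates form negative_set.
    Returns triples (Ci, Cj, label) with i > j (Ci closer than Cj), pairing
    one positive with one negative candidate; the label is "positive" iff
    the closer candidate Ci is the positive one.
    """
    instances = []
    for i in range(1, len(candidates)):     # Ci, the closer candidate
        for j in range(i):                  # Cj, a farther candidate
            ci, cj = candidates[i], candidates[j]
            ci_pos, cj_pos = ci in positive_set, cj in positive_set
            if ci_pos == cj_pos:            # both positive or both negative
                continue
            instances.append((ci, cj, "positive" if ci_pos else "negative"))
    return instances

# The "them7" example: candidates 1 and 6 are in its chain, 2-5 are not.
print(make_training_instances(["C1", "C2", "C3", "C4", "C5", "C6"],
                              positive_set={"C1", "C6"}))
# Reproduces the eight instance pairs listed in the text: (2,1), (3,1),
# (4,1), (5,1) labelled negative and (6,2), (6,3), (6,4), (6,5) positive.
```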
3.4 Antecedent Identification Let CR( ) ana , cj , ci ( inst ) denote the classification result for an instance ) ana , cj , ci ( inst . The antecedent of an anaphor is identified using the algorithm shown in Figure 1. Algorithm ANTE-SEL Input: ana: the anaphor under consideration candidate_set: the set of antecedent candidates of ana, {C1, C2,…,Ck} for i = 1 to K do Score[ i ] = 0; for i = K downto 2 do for j = i – 1 downto 1 do if CR( ) ana , cj , ci ( inst ) = = positive then Score[ i ]++; else Score[ j ] ++; endif SelectedIdx= ] [ max arg _ i Score set candidate Ci i ∈ return CselectedIdx; Figure 1:The antecedent identification algorithm Algorithm ANTE-SEL takes as input an anaphor and its candidate set candidate_set, and returns one candidate as its antecedent. In the algorithm, each candidate is compared against any other candidate. The classifier acts as a judge during each comparison. The score of each candidate increases by one every time when it wins. In this way, the final score of a candidate records the total times it wins. The candidate with the maximal score is singled out as the antecedent. If two or more candidates have the same maximal score, the one closest to the anaphor would be selected. 3.5 Single-Candidate Model: A Special Case of Twin-Candidate Model? While the realization and the structure of the twincandidate model are significantly different from the single-candidate model, the single-candidate model in fact can be regarded as a special case of the twin-candidate model. To illustrate this, just consider a virtual “blank” candidate C0 such that we could convert an instance ) ana , ci ( inst in the single-candidate model to an instance ) ana , c , ci ( 0 inst in the twin-candidate model. Let ) ana , c , ci ( 0 inst have the same class label as ) ana , ci ( inst , that is, ) ana , c , ci ( 0 inst is positive if Ci is the antecedent of ana; or negative if not. Apparently, the classifier trained on the instance set { ) ana , ci ( inst }, T1, is equivalent to that trained on { ) ana , c , ci ( 0 inst }, T2. T1 and T2 would assign the same class label for the test instances ) ana , ci ( inst and ) ana , c , ci ( 0 inst , respectively. That is to say, determining whether Ci is coreferential to ana by T1 in the single-candidate model equals to determining whether Ci is better than C0 w.r.t ana by T2 in the twin-candidate model. Here we could take C0 as a “standard candidate”. While the classification in the single-candidate model can find its interpretation in the twincandidate model, it is not true vice versa. Consequently, we can safely draw the conclusion that the twin-candidate model is more powerful than the single-candidate model in characterizing the relationships among an anaphor and its candidates. 4 The Competition Learning Approach Our competition learning approach adopts the twin-candidate model introduced in the Section 3. The main process of the approach is as follows: 1. The raw input documents are preprocessed to obtain most, if not all, of the possible NPs. 2. During training, for each anaphoric NP, we create a set of candidates, and then generate the training instances as described in Section 3. 3. Based on the training instances, we make use of the C5.0 learning algorithm (Quinlan, 1993) to train a classifier. 4. During resolution, for each NP encountered, we also construct a candidate set. If the set is empty, we left this NP unresolved; otherwise we apply the antecedent identification algorithm to choose the antecedent and then link the NP to it. 
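The round-robin scoring of algorithm ANTE-SEL given above can be rendered as the following sketch, where classify stands in for the trained classifier CR; this is an illustration of the pseudocode, not the authors' implementation.

```python
def ante_sel(candidates, classify):
    """Antecedent identification, after algorithm ANTE-SEL.

    candidates -- [C1, ..., Ck]; a higher index means closer to the anaphor.
    classify   -- stand-in for CR(inst(Ci, Cj, ana)): returns "positive" if
                  the closer candidate Ci is preferred over Cj, else "negative".
    """
    k = len(candidates)
    score = [0] * k
    for i in range(k - 1, 0, -1):            # Ci, from the closest down to C2
        for j in range(i - 1, -1, -1):       # Cj, every farther competitor
            if classify(candidates[i], candidates[j]) == "positive":
                score[i] += 1
            else:
                score[j] += 1
    # The candidate winning the most comparisons is the antecedent; ties are
    # broken in favour of the closest candidate (highest index).
    best = max(range(k), key=lambda idx: (score[idx], idx))
    return candidates[best]

# Toy run: a classifier that always prefers the farther candidate Cj makes
# C1 win every comparison it takes part in.
print(ante_sel(["C1", "C2", "C3"], lambda ci, cj: "negative"))   # C1
```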
4.1 Preprocessing To determine the boundary of the noun phrases, a pipeline of Nature Language Processing components are applied to an input raw text: z Tokenization and sentence segmentation z Named entity recognition z Part-of-speech tagging z Noun phrase chunking Among them, named entity recognition, part-ofspeech tagging and text chunking apply the same Hidden Markov Model (HMM) based engine with error-driven learning capability (Zhou and Su, 2000 & 2002). The named entity recognition component recognizes various types of MUC-style named entities, i.e., organization, location, person, date, time, money and percentage. 4.2 Features Selection For our study, in this paper we only select those features that can be obtained with low annotation cost and high reliability. All features are listed in Table 1 together with their respective possible values. 4.3 Candidates Filtering For a NP under consideration, all of its preceding NPs could be the antecedent candidates. Nevertheless, since in the twin-candidate model the number of instances for a given anaphor is about the square of the number of its antecedent candidates, the computational cost would be prohibitively large if we include all the NPs in the candidate set. Moreover, many of the preceding NPs are irrelevant or even invalid with regard to the anaphor. These data noises may hamper the training of a goodperformanced classifier, and also damage the accuracy of the antecedent selection: too many comparisons are made between incorrect candidates. Therefore, in order to reduce the computational cost and data noises, an effective candidate filtering strategy must be applied in our approach. During training, we create the candidate set for each anaphor with the following filtering algorithm: 1. If the anaphor is a pronoun, (a) Add to the initial candidate set all the preceding NPs in the current and the previous two sentences. (b) Remove from the candidate set those that disagree in number, gender, and person. (c) If the candidate set is empty, add the NPs in an earlier sentence and go to 1(b). 2. If the anaphor is a non-pronoun, (a) Add all the non-pronominal antecedents to the initial candidate set. (b) For each candidate added in 2(a), add the non-pronouns in the current, the previous and the next sentences into the candidate set. During resolution, we filter the candidates for each encountered pronoun in the same way as during training. That is, we only consider the NPs in the current and the preceding 2 sentences. Such a context window is reasonable as the distance between a pronominal anaphor and its antecedent is generally short. In the MUC-6 data set, for example, the immediate antecedents of 95% pronominal anaphors can be found within the above distance. Comparatively, candidate filtering for nonpronouns during resolution is complicated. A potential problem is that for each non-pronoun under consideration, the twin-candidate model always chooses a candidate as the antecedent, even though all of the candidates are “low-qualified”, that is, unlikely to be coreferential to the non-pronoun under consideration. In fact, the twin-candidate model in itself can identify the qualification of a candidate. We can compare every candidate with a virtual “standard candidate”, C0. Only those better than C0 are deemed qualified and allowed to enter the “round robin”, whereas the losers are eliminated. As we have discussed in Section 3.5, the classifier on the pairs of a candidate and C0 is just a singlecandidate classifier. 
Thus, we can safely adopt the single-candidate classifier as our candidate filter. The candidate filtering algorithm during resolution is as follows: Features describing the candidate: 1. 2. 3. 4. 5. 6. 7. 8. 9. 10 ante_DefNp_1(2) ante_IndefNP_1(2) ante_Pron_1(2) ante_ProperNP_1(2) ante_M_ProperNP_1(2) ante_ProperNP_APPOS_1(2) ante_Appositive_1(2) ante_NearestNP_1(2) ante_Embeded_1(2) ante_Title_1(2) 1 if Ci (Cj) is a definite NP; else 0 1 if Ci (Cj) is an indefinite NP; else 0 1 if Ci (Cj) is a pronoun; else 0 1 if Ci (Cj) is a proper NP; else 0 1 if Ci (Cj) is a mentioned proper NP; else 0 1 if Ci (Cj) is a proper NP modified by an appositive; else 0 1 if Ci (Cj) is in a apposition structure; else 0 1 if Ci (Cj) is the nearest candidate to the anaphor; else 0 1 if Ci (Cj) is in an embedded NP; else 0 1 if Ci (Cj) is in a title; else 0 Features describing the anaphor: 11. 12. 13. 14. 15. 16. ana_DefNP ana_IndefNP ana_Pron ana_ProperNP ana_PronType ana_FlexiblePron 1 if ana is a definite NP; else 0 1 if ana is an indefinite NP; else 0 1 if ana is a pronoun; else 0 1 if ana is a proper NP; else 0 1 if ana is a third person pronoun; 2 if a single neuter pronoun; 3 if a plural neuter pronoun; 4 if other types 1 if ana is a flexible pronoun; else 0 Features describing the candidate and the anaphor: 17. 18. 18. 20. 21. ante_ana_StringMatch_1(2) ante_ana_GenderAgree_1(2) ante_ana_NumAgree_1(2) ante_ana_Appositive_1(2) ante_ana_Alias_1(2) 1 if Ci (Cj) and ana match in string; else 0 1 if Ci (Cj) and ana agree in gender; else 0 if disagree; -1 if unknown 1 if Ci (Cj) and ana agree in number; 0 if disagree; -1 if unknown 1 if Ci (Cj) and ana are in an appositive structure; else 0 1 if Ci (Cj) and ana are in an alias of the other; else 0 Features describing the two candidates 22. 23. inter_SDistance inter_Pdistance Distance between Ci and Cj in sentences Distance between Ci and Cj in paragraphs Table 1: Feature set for coreference resolution (Feature 22, 23 and features involving Cj are not used in the single-candidate model) 1. If the current NP is a pronoun, construct the candidate set in the same way as during training. 2. If the current NP is a non-pronoun, (a) Add all the preceding non-pronouns to the initial candidate set. (b) Calculate the confidence value for each candidate using the single-candidate classifier. (c) Remove the candidates with confidence value less than 0.5. 5 Evaluation and Discussion Our coreference resolution approach is evaluated on the standard MUC-6 (1995) and MUC-7 (1998) data set. For MUC-6, 30 “dry-run” documents annotated with coreference information could be used as training data. There are also 30 annotated training documents from MUC-7. For testing, we utilize the 30 standard test documents from MUC-6 and the 20 standard test documents from MUC-7. 5.1 Baseline Systems In the experiment we compared our approach with the following research works: 1. Strube’s S-list algorithm for pronoun resolution (Stube, 1998). 2. Ng and Cardie’s machine learning approach to coreference resolution (Ng and Cardie, 2002a). 3. Connolly et al.’s machine learning approach to anaphora resolution (Connolly et al., 1997). 
Among them, S-List, a version of centering algorithm, uses well-defined heuristic rules to rank the antecedent candidates; Ng and Cardie’s approach employs the standard single-candidate model and “Best-First” rule to select the antecedent; Connolly et al.’s approach also adopts the twin-candidate model, but their approach lacks of candidate filtering strategy and uses greedy linear search to select the antecedent (See “Related work” for details). We constructed three baseline systems based on the above three approaches, respectively. For comparison, in the baseline system 2 and 3, we used the similar feature set as in our system (see table 1). 5.2 Results and Discussion Table 2 and 3 show the performance of different approaches in the pronoun and non-pronoun resolution, respectively. In these tables we focus on the abilities of different approaches in resolving an anaphor to its antecedent correctly. The recall measures the number of correctly resolved anaphors over the total anaphors in the MUC test data set, and the precision measures the number of correct anaphors over the total resolved anaphors. The F-measure F=2*RP/(R+P) is the harmonic mean of precision and recall. The experimental result demonstrates that our competition learning approach achieves a better performance than the baseline approaches in resolving pronominal anaphors. As shown in Table 2, our approach outperforms Ng and Cardie’s singlecandidate based approach by 3.7 and 5.4 in Fmeasure for MUC-6 and MUC-7, respectively. Besides, compared with Strube’s S-list algorithm, our approach also achieves gains in the F-measure by 3.2 (MUC-6), and 1.6 (MUC-7). In particular, our approach obtains significant improvement (21.1 for MUC-6, and 13.1 for MUC-7) over Connolly et al.’s twin-candidate based approach. MUC-6 MUC-7 R P F R P F Strube (1998) 76.1 74.3 75.1 62.9 60.3 61.6 Ng and Cardie (2002a) 75.4 73.8 74.6 58.9 56.8 57.8 Connolly et al. (1997) 57.2 57.2 57.2 50.1 50.1 50.1 Our approach 79.3 77.5 78.3 64.4 62.1 63.2 Table 2: Results for the pronoun resolution MUC-6 MUC-7 R P F R P F Ng and Cardie (2002a) 51.0 89.9 65.0 39.1 86.4 53.8 Connolly et al. (1997) 52.2 52.2 52.2 43.7 43.7 43.7 Our approach 51.3 90.4 65.4 39.7 87.6 54.6 Table 3: Results for the non-pronoun resolution MUC-6 MUC-7 R P F R P F Ng and Cardie (2002a) 62.2 78.8 69.4 48.4 74.6 58.7 Our approach 64.0 80.5 71.3 50.1 75.4 60.2 Table 4: Results for the coreference resolution Compared with the gains in pronoun resolution, the improvement in non-pronoun resolution is slight. As shown in Table 3, our approach resolves non-pronominal anaphors with the recall of 51.3 (39.7) and the precision of 90.4 (87.6) for MUC-6 (MUC-7). In contrast to Ng and Cardie’s approach, the performance of our approach improves only 0.3 (0.6) in recall and 0.5 (1.2) in precision. The reason may be that in non-pronoun resolution, the coreference of an anaphor and its candidate is usually determined only by some strongly indicative features such as alias, apposition, string-matching, etc (this explains why we obtain a high precision but a low recall in non-pronoun resolution). Therefore, most of the positive candidates are coreferential to the anaphors even though they are not the “best”. As a result, we can only see comparatively slight difference between the performances of the two approaches. Although Connolly et al.’s approach also adopts the twin-candidate model, it achieves a poor performance for both pronoun resolution and nonpronoun resolution. 
The main reason is the absence of candidate filtering strategy in their approach (this is why the recall equals to the precision in the tables). Without candidate filtering, the recall may rise as the correct antecedents would not be eliminated wrongly. Nevertheless, the precision drops largely due to the numerous invalid NPs in the candidate set. As a result, a significantly low Fmeasure is obtained in their approach. Table 4 summarizes the overall performance of different approaches to coreference resolution. Different from Table 2 and 3, here we focus on whether a coreferential chain could be correctly identified. For this purpose, we obtain the recall, the precision and the F-measure using the standard MUC scoring program (Vilain et al. 1995) for the coreference resolution task. Here the recall means the correct resolved chains over the whole coreferential chains in the data set, and precision means the correct resolved chains over the whole resolved chains. In line with the previous experiments, we see reasonable improvement in the performance of the coreference resolution: compared with the baseline approach based on the single-candidate model, the F-measure of approach increases from 69.4 to 71.3 for MUC-6, and from 58.7 to 60.2 for MUC-7. 6 Related Work A similar twin-candidate model was adopted in the anaphoric resolution system by Connolly et al. (1997). The differences between our approach and theirs are: (1) In Connolly et al.’s approach, all the preceding NPs of an anaphor are taken as the antecedent candidates, whereas in our approach we use candidate filters to eliminate invalid or irrelevant candidates. (2) The antecedent identification in Connolly et al.’s approach is to apply the classifier to successive pairs of candidates, each time retaining the better candidate. However, due to the lack of strong assumption of transitivity, the selection procedure is in fact a greedy search. By contrast, our approach evaluates a candidate according to the times it wins over the other competitors. Comparatively this algorithm could lead to a better solution. (3) Our approach makes use of more indicative features, such as Appositive, Name Alias, String-matching, etc. These features are effective especially for non-pronoun resolution. 7 Conclusion In this paper we have proposed a competition learning approach to coreference resolution. We started with the introduction of the singlecandidate model adopted by most supervised machine learning approaches. We argued that the confidence values returned by the single-candidate classifier are not reliable to be used as ranking criterion for antecedent candidates. Alternatively, we presented a twin-candidate model that learns the competition criterion for antecedent candidates directly. We introduced how to adopt the twincandidate model in our competition learning approach to resolve the coreference problem. Particularly, we proposed a candidate filtering algorithm that can effectively reduce the computational cost and data noises. The experimental results have proved the effectiveness of our approach. Compared with the baseline approach using the single-candidate model, the F-measure increases by 1.9 and 1.5 for MUC-6 and MUC-7 data set, respectively. The gains in the pronoun resolution contribute most to the overall improvement of coreference resolution. Currently, we employ the single-candidate classifier to filter the candidate set during resolution. 
While the filter guarantees the qualification of the candidates, it removes too many positive candidates, and thus the recall suffers. In our future work, we intend to adopt a looser filter together with an anaphoricity determination module (Bean and Riloff, 1999; Ng and Cardie, 2002b). Only if an encountered NP is determined as an anaphor, we will select an antecedent from the candidate set generated by the looser filter. Furthermore, we would like to incorporate more syntactic features into our feature set, such as grammatical role or syntactic parallelism. These features may be helpful to improve the performance of pronoun resolution. References Chinatsu Aone and Scott W.Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Pages 122-129. D.Bean and E.Riloff. 1999. Corpus-Based identification of non-anaphoric noun phrases. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, Pages 373-380. Brennan, S, E., M. W. Friedman and C. J. Pollard. 1987. A Centering approach to pronouns. In Proceedings of the 25th Annual Meeting of The Association for Computational Linguistics, Page 155-162. Dennis Connolly, John D. Burger and David S. Day. 1997. A machine learning approach to anaphoric reference. New Methods in Language Processing, Page 133-144. Joseph F. McCarthy. 1996. A trainable approach to coreference resolution for Information Extraction. Ph.D. thesis. University of Massachusetts. Ruslan Mitkov. 1998. Robust pronoun resolution with limited knowledge. In Proceedings of the 17th Int. Conference on Computational Linguistics (COLINGACL'98), Page 869-875. Ruslan Mitkov. 1999. Anaphora resolution: The state of the art. Technical report. University of Wolverhampton, Wolverhampton. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6). Morgan Kaufmann, San Francisco, CA. MUC-7. 1998. Proceedings of the Seventh Message Understanding Conference (MUC-7). Morgan Kaufmann, San Francisco, CA. Vincent Ng and Claire Cardie. 2002a. Improving machine learning approaches to coreference resolution. In Proceedings of the 40rd Annual Meeting of the Association for Computational Linguistics, Pages 104111. Vincent Ng and Claire Cardie. 2002b. Identifying anaphoric and non-anaphoric noun phrases to improve coreference resolution. In Proceedings of 19th International Conference on Computational Linguistics (COLING-2002). J R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA. Wee Meng Soon, Hwee Tou Ng and Daniel Chung Yong Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4), Page 521-544. Michael Strube. Never look back: An alternative to Centering. 1998. In Proceedings of the 17th Int. Conference on Computational Linguistics and 36th Annual Meeting of ACL, Page 1251-1257 Joel R. Tetreault. 2001. A Corpus-Based evaluation of Centering and pronoun resolution. Computational Linguistics, 27(4), Page 507-520. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L.Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message understanding Conference (MUC-6), Pages 42-52. GD Zhou and J. Su, 2000. Error-driven HMM-based chunk tagger with context-dependent lexicon. In Proceedings of the Joint Conference on Empirical Methods on Natural Language Processing and Very Large Corpus (EMNLP/ VLC'2000). 
GD Zhou and J. Su. 2002. Named Entity recognition using a HMM-based chunk tagger. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, P473-478.
Generating parallel multilingual LFG-TAG grammars from a MetaGrammar Lionel Cl´ement Inria-Roquencourt France [email protected] Alexandra Kinyon CIS Dpt - Univ. of Pennsylvania [email protected] Abstract We introduce a MetaGrammar, which allows us to automatically generate, from a single and compact MetaGrammar hierarchy, parallel Lexical Functional Grammars (LFG) and Tree-Adjoining Grammars (TAG) for French and for English: the grammar writer specifies in compact manner syntactic properties that are potentially framework-, and to some extent language-independent (such as subcategorization, valency alternations and realization of syntactic functions), from which grammars for several frameworks and languages are automatically generated offline.1 1 Introduction Expensive dedicated tools and resources (e.g. grammars, parsers, lexicons, etc.) have been developed for a variety of grammar formalisms, which all have the same goal: model the syntactic properties of natural language, but resort to a different machinery to achieve that goal. However, there are some core syntactic phenomena on which a cross-framework (and to some extent a cross-language) consensus exists, such as the notions of subcategorization, valency alternations, syntactic function. From a theoretical perspective, a MetaGrammatical level of representation allows one to encode such consensual pieces of syntactic knowledge and to compare different frameworks and languages. From a practical perspective, encoding syntactic phenomena at a metagrammatical level, from which grammars for different frameworks and languages are generated offline, has several advantages such as portability among grammatical frameworks, better parallelism, increased coherence and consistency in the grammars generated and less need for human intervention in the grammar development process. In section 2, we explain the notion of MetaGrammar (MG), present the MG tool we use to generate TAGs, and how we extend the approach to generate LFGs. In section 3, we justify the use of a MetaGrammar for generating LFGs and explore several options, i.e. domains of locality, for doing so. In sections 4 and 5, we discus the handling of valency alternations without resorting to LFG lexical 1We assume the reader has a basic knowledge of TAGs and LFGs and refer respectively to (Joshi, 1987) and (Bresnan and Kaplan, 1982) for an introduction to these frameworks. rules, and the treatment of long-distance dependencies. In sections 6 and 7, we discuss the advantages of a MG approach and the automatic generation of parallel TAG-LFG grammars for English and for French with an explicit sharing of both cross-language and cross-framework syntactic knowledge in the MG. 2 What is a MetaGrammar ? The notion of MetaGrammar was originally presented in (Candito, 1996) to automatically generate wide-coverage TAGs for French and Italian2, using a compact higher-level layer of linguistic description which imposes a general organization for syntactic information in a three-dimensional hierarchy: • Dimension 1: initial subcategorization • Dimension 2: valency alternations and redistribution of functions • Dimension 3: surface realization of arguments. Each terminal class in dimension 1 encodes an initial subcategorization (i.e. transitive, ditransitive etc...); Each terminal class in dimension 2 - a list of ordered redistributions of functions (e.g. 
to add an argument for causatives, to erase one for passive with no agents ...); Each terminal class in dimension 3 - the surface realization of a syntactic function (e.g. declares if a direct-object is pronominalized, wh-extracted, etc.). Each class in the hierarchy is associated to the partial description of a tree (Rogers and Vijay-Shanker, 1994) which encodes father, dominance, equality and precedence relations between nodes. A well-formed tree is generated by inheriting from exactly one terminal class from dimension 1, one terminal class from dimension 23, and n terminal classes from dimension 3 (where n is the number of arguments of the elementary tree being generated). For instance, the elementary tree for “Par qui sera accompagn´ee Marie” (By whom will Mary be accompanied) is generated by inheriting from transitive in dimension 1, from passive in dimension 2 and subject-nominal-inverted for its subject and Wh-questioned-object for its object in dimension 3. This particular tool was used to develop from a compact hand-coded hierarchy of a few dozen nodes, a wide-coverage TAG for French of 5000 elementary trees (Abeill´e et al., 1999), as well as a medium-size 2A Similar MetaGrammar type of organization for TAGs was independently presented in (Xia, 2001) for English. 3This terminal class may be the result of the crossing of several super-classes, to handle complex phenomena such as Passive+Causative. TAG for Italian (Candito, 1999). The compactness of the hierarchy is due to the fact that nodes are defined only for simple syntactic phenomena: classes for complex syntactic phenomena (e.g. Topicalizedobject+Pronominalized) are generated by automatic crossings of classes for simple phenomena. In addition to proposing a compact representation of syntactic knowledge, (Candito, 1999) explored whether some components of the hierarchy could be re-used across similar languages (French and Italian). However, she developed two distinct hierarchies to generate grammars for these two languages and generated only TAG grammars. We extend the use of the MetaGrammar to generate LFGs and also push further its cross-language and cross-framework potential by generating parallel TAGs and LFGs for English and French from one single hierarchy 4. 2.1 HyperTags The grammar rules we generate are sorted by syntactic phenomena, thanks to the notion of HyperTag, introduced in (Kinyon, 2000). The main idea behind HyperTags is to keep track, when trees (i.e. grammar rules) are generated from a MetaGrammar hierarchy, of which terminal classes were used for generating the tree. This allows one to obtain a frameworkindependent feature structure containing the salient syntactic characteristics of each grammar rule5. For instance, the verb give in A book was given to Mary could be assigned the HyperTag:   Subcat Ditransitive Valency alternations Passive no Agent Argument Realization   Subject: Canonical NP Object: Not realized By-Phrase: Canonical PP     Although we retain the linguistic insights presented in (Candito, 1996), that is the three dimensions to model syntax, (subcategorization, valency alternation, realization of syntactic arguments), we slightly alter it, and add sub-dimensions for the realization of predicates as well as modifiers. Moreover, we use a different MetaGrammar tool which is less framework-dependent and supports the notion of HyperTag. 2.2 The LORIA MetaGrammar tool To generate TAGs and LFGs, we use the MG compiler presented in (Gaiffe et al., 2002)6. 
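For readability, the HyperTag assigned to give above can be written out as a nested feature structure. The rendering below, together with a naive unification routine of the kind applied when classes are crossed (described in the next section), is only an illustrative sketch and not the MG compiler's actual syntax.

```python
# The HyperTag for "give" in "A book was given to Mary" (Section 2.1),
# written out as a nested feature structure (illustrative rendering only).
hypertag_give = {
    "Subcat": "Ditransitive",
    "Valency alternations": "Passive no Agent",
    "Argument Realization": {
        "Subject": "Canonical NP",
        "Object": "Not realized",
        "By-Phrase": "Canonical PP",
    },
}

def unify(ht1, ht2):
    """Naive feature-structure unification: returns None on a clash.
    A simplification of what happens when the HyperTags of crossed
    classes are combined (see the class-crossing step described below)."""
    result = dict(ht1)
    for key, value in ht2.items():
        if key not in result:
            result[key] = value
        elif isinstance(result[key], dict) and isinstance(value, dict):
            sub = unify(result[key], value)
            if sub is None:
                return None
            result[key] = sub
        elif result[key] != value:
            return None
    return result
```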
Each class in the MG hierarchy encodes: • Its SuperClasse(s) • A HyperTag which captures the salient linguistic characteristics of that class. 4We also generate Range Concatenation Grammars (Boullier, 1998), but do not develop this point here. 5The notion of HyperTag was inspired by that of supertags (Srinivas, 1997), which consists in assigning a TAG elementary tree to lexical items, hence enriching traditional POS tagging. However, HyperTags are framework-independent. 6This compiler is freely available on http://www.loria.fr/equipes/led/outils/mgc/mgc.html • What the class needs and provides. • A set of quasi-nodes (i.e. variables) • Topological relations between these nodes (father, dominates, precedes, equals)7 • A function for each quasi-nodes to decorate the tree (e.g. traditional agreement features and/or LFG functional equations). The MG tool automatically crosses the nodes in the hierarchy, looking to create “balanced” classes, that is classes that do not need nor provide any resource8. Then for each balanced terminal class, the HyperTags are unified, and the structural constraints between quasi-nodes are unified; If the unification succeeds, one or more <HyperTag, tree> pairs are generated. When generating a TAG, tree is interpreted as a TAG elementary tree (i.e. a grammar rule). When generating an LFG, tree is a tree decorated with traditional LFG functional annotations (in a way which is similar to constituent trees decorated with functional annotation e.g. by (Frank, 2000)), and is in a second step broken down into one or more LFG rules. Figure 1 illustrates how a simple decorated tree is generated with the MG compiler, and how the decorated tree corresponds to one TAG elementary tree and to two LFG rewriting rules for a canonical transitive construction. In addition, to facilitate the grammar-lexicon interface, each decorated tree yields an LFG lexical template (here, SubjObj:V (↑Pred=‘x<(↑Subj)(↑Obj)>’). 3 Why use a MetaGrammar for LFGs 3.1 Redundancies in LFG Because TAGs are a tree rewriting system, there are intrinsic redundancies in the rules of a TAG. E.g., all the rules for verbs with a canonical NP subject and a canonical realization of the verb will have a redundant piece of structure (S NP0↓(VP (V⋄))) . This piece of structure will be present not only for each new subcategorization frame (intransitive, transitive, ditransitive...), but also for all related non-canonical syntactic constructions such as in each grammar rule encoding a Wh-extracted object. This redundancy justifies the use of a MetaGrammar for TAGs. Since LFG rules rely on a context free backbone, it is generally admitted that there is less redundancy in LFG than in TAG. However, there are still redundancies, at the level of rewriting rules, at the level of functional equations, and at the level of lexical entries. To illustrate such redundancies, we take the example of French ditransitives with the insertion of one or more modifiers. The direct object is realized as an NP, the second object as a PP. Both orders NP PP and PP NP are acceptable. On top of that, one or more modifiers may be inserted before, after or between the two arguments, and can be of almost any category (PP, ADVP, 7We have augmented the tool to support free variables for nodes, optional resources, as well as additional relations such as sister and c-command. We do not detail these technical points for sake of brevity. 8Another way to see this is by analogy to a resource allocation graph. 
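The search for "balanced" classes, and the resource-allocation-graph analogy of footnote 8, can be illustrated by the following sketch; the class encoding and the class names are hypothetical and do not reflect the MG compiler's internals.

```python
from collections import Counter

def is_balanced(classes):
    """A terminal class obtained by crossing MG classes is "balanced" when
    every needed resource is provided and vice versa (cf. footnote 8 and
    the resource-allocation-graph analogy)."""
    needs, provides = Counter(), Counter()
    for cls in classes:
        needs.update(cls.get("needs", []))
        provides.update(cls.get("provides", []))
    return needs == provides

# A transitive class needs a Subject and an Object; realization classes
# (dimension 3) provide them.
transitive        = {"needs": ["Subject", "Object"], "provides": []}
canonical_subject = {"needs": [], "provides": ["Subject"]}
wh_object         = {"needs": [], "provides": ["Object"]}

print(is_balanced([transitive, canonical_subject, wh_object]))   # True
print(is_balanced([transitive, canonical_subject]))              # False: Object missing
```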
Figure 1: A simple hierarchy which yields one decorated tree, corresponding to one TAG rule and two LFG rules ( →stands for father, < for precedes in the MG hierarchy. ⋄↓resp. stand for “anchor” and substitution nodes in TAGs. ↓and ↑stand for standard LFGs functional equations. NP etc.). Here is a non exhaustive list of acceptable word-order variations: - Jean donne une pomme `a Marie (lit: J. gives an apple to M.) - Jean donne `a Marie une pomme (lit: J. gives to M. an apple) - Jean aujourd’hui donne `a Marie une pomme (lit: J. today gives to M. an apple) - Jean donne `a Marie chaque matin une pomme avant le d´epart du train (lit: J gives to M. every morning an apple before the departure of the train) - Jean donne chaque matin `a Marie une pomme (lit: J. gives each morning to M. an apple) - Aujourd’hui Jean donne `a Marie une pomme (lit: Today J. gives to M. an apple) A first rule for VP expansion, accounting for the free order between the first and second object without modifiers, is shown below: VP →V (NP) PP (NP) ↑=↓(↑Obj)=↓(↑SecondObj)=↓(↑Obj)=↓ This VP rule is redundant: the NP is mentioned twice, with its associated functional equation. The NPs are both marked optional because at least one of them has to be not realized, else no well-formed Fstructure could be built since the uniqueness condition would be violated by the presence of two directobjects: for a sentence such as “*Jean donne une pomme `a Mary une pomme”/J. gives an apple to M. an apple, a C-structure would be built but, as expected, no corresponding well-formed F-structure. Let us now enrich the rule to account for modifier insertion. This yields the VP expansion shown in 2(a). The rule for VP expansion is now highly redundant, although the syntactic phenomena handled by this rule are very simple ones: the NP for the direct object is repeated twice, along with its functional equation, the disjunction (ADVP|NP|PP) is repeated 5 times, again with its functional equation. This gives us grounds to support a MetaGrammar type of organization for LFG. In practice, as described in (Kaplan and Maxwell, 1996), additional LFG notation is available such as operators like “insert or ignore”, ”shuffle” ”ID/LP”, ”Macros” etc. However, these operators, which are motivated from a formal perspective, but not so much from a linguistic perspective, yield two major problems: first, not all LFG parsers support those additional operators. Second, the proliferation of operators allows for a same rule to be expressed in many different ways, which is helpful for grammar writing purpose, but not so desirable for maintenance purpose 9. Although nothing pre9This can be compared to computer programs written in Perl, which are easy to develop, but hard to read and maintain. A (a) VP →(ADVP|NP|PP)* V (ADVP|NP|PP)* (NP) (ADVP|NP|PP)* PP (ADVP|NP|PP)* (NP) (ADVP|NP|PP)* (↑Modif) ∋↓ ↑=↓(↑Modif)∋↓ (↑Obj)=↓(↑Modif)∋↓ (↑SecObj)=↓(↑Modif)∋↓ (↑Obj)=↓(↑Modif)∋↓ (b) VP →(ADVP|NP|PP)* V (ADVP|NP|PP)* NP (ADVP|NP|PP)* PP (ADVP|NP|PP)* (↑Modif) ∋↓ ↑=↓(↑Modif)∋↓ (↑Obj)=↓(↑Modif)∋↓ (↑SecObj)=↓(↑Modif)∋↓ (c) VP →(ADVP|NP|PP)* V (ADVP|NP|PP)* PP (ADVP|NP|PP)* NP (ADVP|NP|PP)* (↑Modif) ∋↓ ↑=↓ (↑Modif)∋↓ (↑SecObj)=↓(↑Modif)∋↓ (↑Obj)=↓(↑Modif)∋↓ Figure 2: VP expansion vents the MG generator to create rules with operators such as “ignore or insert”, we chose not to do so. 
Instead of generating rules with operators or rules like (2a), we generate two rules (2b) and (2c) in order to have uniqueness, completeness and coherence not only at the F-structure level but also at the C-structure level.10. Moreover, for lexical organization, practical LFGs resort to the notion of lexical template but from a linguistic perspective, the lexicon is not cleanly organized in LFG11. 3.2 Exploring different domains of locality We have seen in section 2.2 that the MG tool we use outputs <HyperTag, tree> pairs, where tree is decorated with functional equations and corresponds to one or more LFG rewriting rules (Figure 1). VP V (↑Family)=SubjObjPrepObj ↑Pred=’x<(↑Subj)(↑Obj)(↑de-Obj)>’ NP (↑Obj)=↓ PP (↑(↓pcase)Obj)=↓ VP → V PP N2 ↑=↓ (↑(↓pcase)Obj)=↓ (↑object)=↓ SubjObjectPrepObject:V (↑pred = ‘x <(↑Subj) (↑Obj) (↑de-Obj)>’ Figure 3: LFG Rule and a lexical entry In order to generate LFG rules with a MG, we have two options. The first option consists in generating “standard” LFG rules, that is trees of depth 1 decorated with functional equations. Figure 3 illustrates detailed discussion of the (Kaplan and Maxwell, 1996) operators is found in (Cl´ement and Kinyon, 2003). 10Thus the grammars we generate exhibit redundancies for modifiers, but, since the MG hierarchy has relatively few redundancies, and since these grammars are automatically generated, the problem is minor. 11As opposed for instance to lexical organization not only in TAGs and TAG related framework (e.g. DATR (Evans et al., 2000)), but in HPSG (Flickinger, 1987). such as decorated tree, which yields one LFG rewriting rule, and one lexical entry for French verbs such as “´eloigner” ( take away from), which take an NP object and a PP object introduced by “de”. (Ex: “Peter ´eloigne son enfant de la fenˆetre”/ P. takes his child away from the window). The second option, which is the one we have opted for, consists in generating constituent trees which may be of depth superior to one, decorated with feature equations. It has the following advantages: • It allows for a more natural parallelism between the TAG and LFG grammars generated • It allows for a more natural encoding of syntax at the MetaGrammar level • It allows us to generate LFGs without Lexical Rules • It allows us to easily handle long-distance dependencies. The trees decorated with LFG functional annotations are then decomposed into standard LFG rewriting rules and lexical entries12. The grammar we obtain is then interfaced with a parser 13. Concerning the first point (TAG-LFG parallelism), the trees decorated with functional equations and TAG elementary trees are very similar, as was first discussed in (Kameyama, 1986). Concerning the second point (more natural encoding of the MetaGrammar level), the “resource model” of the MetaGrammar, based on “needs” and “provides”, allows for a natural encoding and enforcement of LFG coherence, completeness and uniqueness principles: A transitive verb needs exactly one resource “Subject” and one resource “Object”. Violations result in invalid classes which do not yield any rules. So from that perspective, it makes little sense, apart from practical reasons such as interfacing the grammar with an existing parser, to force the rules generated to be trees of depth one. Moreover, classical completeness/coherence 12Non terminal symbols symbols are renamed and, in a second phase, rules which differ only by the name of their non terminals are merged, in a manner similar to that used in (Hepple and van Genabith, 2000). 
For space reasons, we do not detail the algorithm here. 13We use the freely available XLFG parser described in (Cl´ement and Kinyon, 2001) and have also experimented with the Xerox parser (Kaplan and Maxwell, 1996). conditions have received a similar resource-sensitive re-interpretation in LFG to compute semantic structures using linear logic (Dalrymple et al., 1995). We devote the next two sections to the third (lexical rules) and fourth (wh) points. 4 Lexical rules Figure 4: An alternative to lexical rules Traditional LFGs encode phrase structure realizations of syntactic functions such as the wh-extraction or pronominalization of an object in phrase structure rules. In the MetaGrammar, these are encoded in the “Argument Realization” dimension (dimension 3 in Candito’s terminology). For valency alternations, i.e. when initial syntactic functions are modified, LFG resorts to the additional machinery of lexical rules 14. However, these valency alternations are encoded directly in the MetaGrammar in the “valency alternation” dimension (dimension 2 in Candito’s terminology). Hence, when a rule is generated for a canonical transitive verb, rules are generated not only for all possible argument realization for the subject and direct object (wh-questioned, relativized, cliticized for French etc.), but also for all the valency alternations allowed for the subcategory frame concerned (here, passive with/without agent, causative etc). Therefore, there is no need to generate usual LFG lexical rules, and the absence of lexical rules has no effect on interfacing the grammars we generate with existing LFG parsers. Fig. 4 illustrates the generation of a decorated tree for passive-with-no-agent. 5 Long distance dependencies When generating TAGs and LFGs from a single MG hierarchy, we must make sure that long-distance phenomena are correctly handled. The only difference between TAG and LFG is that for TAG, we must make sure that bridge verbs are auxiliary trees, i.e. have a foot node, whereas for LFG we must make sure that extraction rules have a node decorated with a functional uncertainty equation. In TAGs, long 14Or, alternatively, some notion of lexical mapping, which we do not discuss here. Sy NPo (What) S2 Aux (did) NPx (Mary) V Px Vx (say) Sbarx Compl (that) Sx NPs (John) V Py Vy (ate)   Pred ’say(Subj,Comp)’ Topic h Pred What 1 i Subj Pred ’Mary’ Comp   Pred ’ate(Subj,Obj)’ Subj Pred John Obj 1     Figure 6: Long distance dependencies in LFG: C and F structures for What did M. say that J. ate Figure 7: Tree decorated with f. uncertainty distance dependencies are handled through the domain of locality of elementary trees, the argumentpredicate co-occurrence principle and the adjunction operation (Joshi and Vijay-Shanker, 1989). Figure 5 illustrates the TAG analysis of What did Mary say that John ate: the extracted element is in the same grammar rule as its predicate “ate” 15 and the tree anchored by the bridge verb is inserted in the “ate” tree thanks to the adjunction operation. More trees can adjoin in to analyze What does P. think that M. said ... that John ate using the same mechanism, which we retain in the TAGs we generate by generating auxiliary tree for bridge verbs (i.e. trees with a foot node). In LFG, long-distance dependencies are handled by functional uncertainty (Kaplan and Zaenen, 1989). Here is a small LFG grammar to analyze What did M. say that John ate. 
15Although a trace is present in rule for “ate”, following the convention of the Xtag project, it is not compulsory and not needed from a formal point of view. Adjunction Adjunction Substitution Substitution Substitution Figure 5: Long distance dependencies in TAGs (What did M. say that J. ate ) 1- Sx →Aux NPx VPx (↑Subj)=↓ ↑=↓ 2- VPx →Vx Sbarx ↑=↓ (↑Comp)=↓ 3- Sbarx →Compl Sx ↑=↓ 4- Sy →NPo S2 (↑topic)=↓↑=↓ (↑topic)=(↑Comp*.Obj) 5- S2 →NPs VPy (↑Subj)=↓ ↑=↓ 6- VPy →Vy ↑=↓ The extracted element (node NPo in rule 4) is associated to a function path (in bold characters), which is unknown since an arbitrary number of clauses can appear between “NPo” and its regent (Vy in rule 6). The result of the LFG analysis for What did M. say that J. ate, using this standard LFG grammar is shown in Figure 6. A constituent structure is built using the the rewriting rules. The functional equations associated to nodes compute an F-structure which ensures that each predicate of the sentence (i.e. “say” and “ate”) have their arguments realized. The need for functional uncertainty results from the fact that in LFG, contrary to TAGs, the extracted element (NPo) and its governor (Vy) are located in different grammar rules. Hence, when generating LFGs, we must make sure that the decorated tree bears a functional uncertainty equation at the site of the extraction. 7 illustrates the generation of such a decorated tree (identical to the TAG tree for ”ate” modulo the functional equations), which will be decomposed into rules 4, 5 and 6.16 16Because the MG does not impose a restricted domain of locality, (Kinyon, 2003) proposes an alternative to functional uncertainty, which we do not present here for space reasons. 6 Advantages of a MetaGrammatical level A first advantage of using a MetaGrammar, discussed in (Kinyon and Prolo, 2002), is that the syntactic phenomena covered are quite systematic: if rules are generated for “transitive-passivewhExtractedByPhrase” (e.g. By whom was the mouse eaten), and if the hierarchy includes ditransitive verbs, then the automatic crossing of phenomena ensures that sentences will be generated for “ditransitive-passive-whExtractedByPhrase” (i.e. By whom was Peter given a present). All rules for word order variations are automatically generated by underspecifying relations between quasi-nodes in the MG hierarchy (e.g. precedence relation between first and second object for ditransitives in French). A second advantage of the MG is to minimize the need for human intervention in the grammar development process. Humans encode the linguistic knowledge in a compact manner i.e. the MG hierarchy, and then verify the validity of the rules generated. If some grammar rules are missing or incorrect, then changes are made directly in the MG hierarchy and never in the generated rules17. This ensures a homogeneity not necessarily present with traditional hand-crafted grammars. A third and essential advantage is that it is straightforward to obtain from a single hierarchy parallel multi-lingual grammars similar to the parallel LFG grammars presented in (Butt et al., 1999) and (Butt et al., 2002), but with an explicit sharing 17Exceptionality is handled in the MG hierarchy as well. We do not have much to say about it: only that the MG does not impose any additional burden to handle syntactic “exceptions” compared to hand-crafted grammars. of classes 18 in the MetaGrammar hierarchy plus a cross-framework application. 
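To make the crossing mechanism concrete, the following minimal Python sketch (an editorial illustration, not part of the MG tool; all class, feature and rule names are invented) crosses three toy dimensions — subcategorization frame, valency alternation and argument realization — and discards incoherent combinations, in the spirit of the needs/provides resource model: a passive without an agent is only generated for frames that supply an object, and a wh-realization of a function that the alternation has removed yields no rule.

from itertools import product

# Toy stand-ins for MetaGrammar classes; all names are invented for
# illustration and do not correspond to the actual hierarchy of the MG tool.

# Dimension 1: initial subcategorization frames (functions the verb needs).
SUBCAT = {"transitive": {"Subj", "Obj"}, "intransitive": {"Subj"}}

# Dimension 2: valency alternations, as partial maps over function sets.
def active(funcs):
    return set(funcs)

def passive_no_agent(funcs):
    if "Obj" not in funcs:        # passive needs an object to promote
        return None               # invalid crossing -> no rule generated
    return (funcs - {"Subj", "Obj"}) | {"Subj"}

VALENCY = {"active": active, "passive-no-agent": passive_no_agent}

# Dimension 3: argument realizations; each non-canonical one targets a function.
REALIZATION = {"canonical": None,
               "wh-extracted-Subj": "Subj",
               "wh-extracted-Obj": "Obj"}

def cross_classes():
    """Cross the three dimensions, discarding incoherent combinations."""
    rules = []
    for (s, funcs), (v, alt), (r, target) in product(
            SUBCAT.items(), VALENCY.items(), REALIZATION.items()):
        out = alt(funcs)
        if out is None:                   # alternation not applicable
            continue
        if target is not None and target not in out:
            continue                      # cannot realize a function that is gone
        rules.append(f"{s}-{v}-{r}")
    return rules

if __name__ == "__main__":
    for rule in cross_classes():
        print(rule)

Running the sketch prints, for instance, transitive-passive-no-agent-canonical and transitive-active-wh-extracted-Obj, but neither intransitive-passive-no-agent-* nor transitive-passive-no-agent-wh-extracted-Obj, mirroring the automatic crossing of phenomena described above.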
19 7 Cross-language and -framework generation So far, we have implemented a non trivial hierarchy which consists of 189 classes. A fragment of the hierarchy is shown in Figure 8. From this hierarchy, we generate 550 decorated trees, which correspond to approx. 550 TAG trees and 140 LFG rules. We cover the following syntactic phenomena: 50 verb subcategorization frames (including auxiliaries, modals, sentential and infinitival complements), dative-shift for English, clitics (and their placement) for French, passives with and without agent, long distance dependencies (relatives, wh-questions, clefts) and a few idiomatic expressions. A more detailed presentation of the LFG grammar is presented in (Cl´ement and Kinyon, 2003). A more detailed discussion of the cross-language aspects with a comparison to related work such as the LFG ParGram project, or HPSG matrix grammars (Bender et al., 2002) may be found in (Kinyon and Rambow, 2003a)20. The cross-language and cross-framework parallelism is insured by the HyperTags: Most classes in the hierarchy are shared for French and for English. Language specific classes are marked using the binary features “English” and “French” in their HyperTag. So for instance, classes encoding clitic placement are marked [French=+;English=-] and classes pertaining to dative-shift are marked [French=-;English=+]. This prevents the crossing of incompatible classes and hence the generation of incorrect rules (such as “Dative-shift-withCliticizedObject”). Similarly, most classes in the hierarchy are shared for TAGs and LFGs. Classes specific to TAGs are marked [TAG=+;LFG=-] (and conversely for LFGs)21 8 Conclusion We have presented a MetaGrammar tool which allows us to automatically generate parallel TAG and LFG grammars for English and French. We have discussed the handling of long-distance dependencies. We keep enriching our hierarchy in order to 18To the best of our knowledge, (Butt et al., 2002) apply similar linguistic choices for grammars in different languages when possible, but do not explicitly resort to rule-sharing. 19(Kinyon and Rambow, 2003b) have used the tool to generate from a single hierarchy cross-framework and cross-language annotated test-suites, including English and German sentences annotated for F-structure, as well as for constituent and dependency structure 20The main difference with HPSG approaches such as Matrix is that HPSG type-hierarchies are an inherent part of the grammar, and deal only with one framework:HPSG, whereas our MG hierarchy is not an inherent part of the grammar, since it is used to generate cross-framework grammars offline. 21We use binary features in order to add more languages and frameworks to the hierarchy. E.g. when adding German, some classes are shared for English and German, but not French and are marked [English=+;German=+;French=-]. This would not be possible if we had a non binary feature [Language=X]. The same reasoning applies for generating additional frameworks. increase the coverage of our grammars, are adding new languages (German) and exploring the extension of the domain of locality to sentence level (Kinyon and Rambow, 2003a). The ultimate goal of this work is twofold: first, to maximize cross-language rule-sharing at the metagrammatical level; Second, to automatic extract MetaGrammars from a treebank (Kinyon, 2003), and then automatically generate grammars for different frameworks. References A. Abeill´e, M. Candito, and A. Kinyon. 1999. FTAG: current status and parsing scheme. In Proc. Vextal-99, Venice. 
E. Bender, D. Flickinger, and S. Oepen. 2002. The Grammar Matrix: an open-source starter-kit for the rapid development of cross-linguistically consistent broad-coverage precision grammars. In Proc. GEE-COLING, Taipei. P. Boullier. 1998. Proposal for a natural language processing syntactic backbone. Technical report, Inria. France. J. Bresnan and R. Kaplan. 1982. Introduction: grammars as mental representations of language. In The Mental Representation of Grammatical Relations, pages xvii–lii. MIT Press, Cambridge, MA. M. Butt, S. Dipper, A. Frank, and T. Holloway-King. 1999. Writing large-scale parallel grammars for English, French, and German. In Proc. LFG-99. M. Butt, H. Dyvik, T.H. King, H. Masuichi, and C. Rohrer. 2002. The parallel grammar project. In proc. GEE-COLING, Taipei. M.H. Candito. 1996. A principle-based hierarchical representation of LTAGs. In Proc. COLING-96, Copenhagen. M.H. Candito. 1999. Repr´esentation modulaire et param´etrable de grammaires ´electroniques lexicalis´ees. Ph.D. thesis, Univ. Paris 7. L. Cl´ement and A. Kinyon. 2003. Generating LFGs with a MetaGrammar. In Proc. LFG-03, Saratoga Springs. L. Cl´ement and A. Kinyon. 2001. XLFG: an LFG parsing scheme for french. In Proc LFG-01, Hong-Kong. M. Dalrymple, J. Lamping, F. Pereira, and V. Saraswat. 1995. Linear logic for meaning assembly. In Proc. CLNLP, Edinburgh. R. Evans, G. Gazdar, and D. Weir. 2000. Lexical rules are just lexical rules. In Abeille Rambow, editor, Tree Adjoining Grammars, CSLI. D. Flickinger. 1987. Lexical rules in the hierarchical lexicon. Ph.D. thesis, Stanford. A. Frank. 2000. Automatic F-Structure annotation of treebank trees. In Proc. LFG-00, Berkeley. B. Gaiffe, B. Crabb´e, and A. Roussanaly. 2002. A new metagrammar compiler. In Proc. TAG+6, Venice. M. Hepple and J. van Genabith. 2000. Experiments in structure preserving grammar compaction. In Proc. 1st meeting on Speech Technology Transfer, Sevilla. Figure 8: Screen capture of a fragment of our MetaGrammar hierarchy A. K. Joshi and K. Vijay-Shanker. 1989. Treatment of long distance dependencies in LFG and TAG: Functional uncertainty in LFG is a corollary in TAG. In Proc. ACL-89, Vancouver. A.K. Joshi. 1987. An introduction to tree adjoining grammars. In Mathematics of language, John Benjamins Publishing Company. M. Kameyama. 1986. Characterising LFG in terms of TAG. In Unpublished Manuscript, Univ. of Pennsylvania. R. Kaplan and J. Maxwell. 1996. LFG grammar writer’s workbench. Technical Report version 3.1, Xerox corporation. R Kaplan and A. Zaenen. 1989. Long distance dependencies, constituent structure and functional uncertainty. In Alternatives conceptions of phrase-structure, Univ. of Chicago press. A. Kinyon and C. Prolo. 2002. A classification of grammar development strategies. In Proc. GEE-COLING, Taipei. A. Kinyon and O. Rambow. 2003a. Using the metagrammar for parallel multilingual grammar development and generation. In Proc. ESSLLI workshop on multilingual grammar engineering, Vienna. A. Kinyon and O. Rambow. 2003b. Using the MetaGrammar to generate cross-language and cross-framework annotated testsuites. In Proc. LINC-EACL, Budapest. A. Kinyon. 2000. Hypertags. In Proc. COLING-00, Sarrebrucken. A. Kinyon. 2003. MetaGrammars for efficient development, extraction and generation of parallel grammars. Ph.D. thesis, Proposal. Univ. of Pennsylvania. J. Rogers and K. Vijay-Shanker. 1994. Obtaining trees from their description: an application to TAGS. In Computational Intelligence 10:4. B. Srinivas. 1997. 
Complexity of lexical descriptions and its relevance for partial parsing. Ph.D. thesis, Univ. of Pennsylvania. F. Xia. 2001. Automatic grammar generation from two perspectives. Ph.D. thesis, Univ. of Pennsylvania.
2003
24
Compounding and derivational morphology in a finite-state setting Jonas Kuhn Department of Linguistics The University of Texas at Austin 1 University Station, B5100 Austin, TX 78712-11196, USA [email protected] Abstract This paper proposes the application of finite-state approximation techniques on a unification-based grammar of word formation for a language like German. A refinement of an RTN-based approximation algorithm is proposed, which extends the state space of the automaton by selectively adding distinctions based on the parsing history at the point of entering a context-free rule. The selection of history items exploits the specific linguistic nature of word formation. As experiments show, this algorithm avoids an explosion of the size of the automaton in the approximation construction. 1 The locus of word formation rules in grammars for NLP In English orthography, compounds following productive word formation patterns are spelled with spaces or hyphens separating the components (e.g., classic car repair workshop). This is convenient from an NLP perspective, since most aspects of word formation can be ignored from the point of view of the conceptually simpler token-internal processes of inflectional morphology, for which standard finite-state techniques can be applied. (Let us assume that to a first approximation, spaces and punctuation are used to identify token boundaries.) It makes it also very easy to access one or more of the components of a compound (like classic car in the example), which is required in many NLP techniques (e.g., in a vector space model). If an NLP task for English requires detailed information about the structure of compounds (as complex multi-token units), it is natural to use the formalisms of computational syntax for English, i.e., context-free grammars, or possibly unificationbased grammars. This makes it possible to deal with the bracketing structure of compounding, which would be impossible to cover in full generality in the finite-state setting. In languages like German, spelling conventions for compounds do not support such a convenient split between sub-token processing based on finitestate technology and multi-token processing based on context-free grammars or beyond—in German, even very complex compounds are written without spaces or hyphens: words like Verkehrswegeplanungsbeschleunigungsgesetz (‘law for speeding up the planning of traffic routes’) appear in corpora. So, for a fully adequate and general account, the tokenlevel analysis in German has to be done at least with a context-free grammar:1 For checking the selection features of derivational affixes, in the general case a tree or bracketing structure is required. For instance, the prefix Fehl- combines with nouns (compare (1)); however, it can appear linearly adjacent with a verb, including its own prefix, and only then do we get the suffix -ung, which turns the verb into a noun. (1) N N V N  V  V N  Fehl ver arbeit ung mis work ‘misprocessing’ 1For a fully general account of derivational morphology in English, the token-level analysis has to go beyond finite-state means too: the prefix non- in nonrealizability combines with the complex derived adjective realizable, not with the verbal stem realize (and non- could combine with a more complex form). However, since in English there is much less token-level interaction between derivation and compounding, a finite-state approximation of the relevant facts at token-level is more straightforward than in German. 
Furthermore, context-free power is required to parse the internal bracketing structure of complex words like (2), which occur frequently and productively. (2) N N A A N V N A N  V  V A  N  V N  Gesund heits ver träg lich keits prüf ung healthy bear examine ‘check for health compatibility’ As the results of the DeKo project on derivational and compositional morphology of German show (Schmid et al. 2001), an adequate account of the word formation principles has to rely on a number of dimensions (or features/attributes) of the morphological units. An affix’s selection of the element it combines with is based on these dimensions. Besides part-of-speech category, the dimensions include origin of the morpheme (Germanic vs. classical, i.e., Latinate or Greek2), complexity of the unit (simplex/derived), and stem type (for many lemmata, different base stems, derivation stems and compounding stems are stored; e.g., träg in (2) is a derivational stem for the lemma trag(en) (‘bear’); heits is the compositional stem for the affix heit). Given these dimensions in the affix feature selection, we need a unification-based (attribute) grammar to capture the word formation principles explicitly in a formal account. A slightly simplified such grammar is given in (3), presented in a PATR-IIstyle notation:3 (3) a. X0 X1 X2  X1 CAT  = PREFIX  X0 CAT  =  X1 MOTHER-CAT   X0 COMPLEXITY  = PREFIX-DERIVED  X1 SELECTION  = X2 b. X0 X1 X2  X2 CAT  = SUFFIX  X0 CAT  =  X2 MOTHER-CAT   X0 COMPLEXITY  = SUFFIX-DERIVED  X2 SELECTION  = X1 2Of course, not the true ethymology is relevant here; ORIGIN is a category in the synchronic grammar of speakers, and for individual morphemes it may or may not be in accordance with diachronic facts. 3An implementation of the DeKo rules in the unification formalism YAP is discussed in (Wurster 2003). c. X0 X1 X2  X0 CAT  =  X2 CAT   X0 COMPLEXITY  = COMPOUND (4) Sample lexicon entries a. X0: intellektual X0 CAT  = A  X0 ORIGIN  = CLASSICAL  X0 COMPLEXITY  = SIMPLEX  X0 STEM-TYPE  = DERIVATIONAL  X0 LEMMA  = ‘intellektuell’ b. X0: -isier X0 CAT  = SUFFIX  X0 MOTHER-CAT  = V  X0 SELECTION CAT  = A  X0 SELECTION ORIGIN  = CLASSICAL Applying the suffixation rule, we can derive intellektual.isier- (the stem of ‘intellectualize’) from the two sample lexicon entries in (4). Note how the selection feature (SELECTION) of prefixes and affixes are unified with the selected category’s features (triggered by the last feature equation in the prefixation and suffixation rules (3a,b)). Context-freeness Since the range of all atomicvalued features is finite and we can exclude lexicon entries specifying the SELECTION feature embedded in their own SELECTION value, the three attribute grammar rewrite rules can be compiled out into an equivalent context-free grammar. 2 Arguments for a finite-state word formation component While there is linguistic justification for a contextfree (or unification-based) model of word formation, there are a number of considerations that speak in favor of a finite-state account. (A basic assumption made here is that a morphological analyzer is typically used in a variety of different system contexts, so broad usability, consistency, simplicity and generality of the architecture are important criteria.) First, there are a number of NLP applications for which a token-based finite-state analysis is standardly used as the only linguistic analysis. 
It would be impractical to move to a context-free technology in these areas; at the same time it is desirable to include an account of word formation in these tasks. In particular, it is important to be able to break down complex compounds into the individual components, in order to reach an effect similar to the way compounds are treated in English orthography. Second, inflectional morphology has mostly been treated in the finite-state two-level paradigm. Since any account of word formation has to be combined with inflectional morphology, using the same technology for both parts guarantees consistency and reusability.4 Third, when a morphological analyzer is used in a linguistically sophisticated application context, there will typically be other linguistic components, most notably a syntactic grammar. In these components, more linguistic information will be available to address derivation/compounding. Since the necessary generative capacity is available in the syntactic grammar anyway, it seems reasonable to leave more sophisticated aspects of morphological analysis to this component (very much like the syntaxbased account of English compounds we discussed initially). Given the first two arguments, we will however nevertheless aim for maximal exactness of the finite-state word formation component. 3 Previous strategies of addressing compounding and derivation Naturally, existing morphological analyzers of languages like German include a treatment of compositional morphology (e.g., Schiller 1995). An overgeneration strategy has been applied to ensure coverage of corpus data. Exactness was aspired to for the inflected head of a word (which is always rightperipheral in German), but not for the non-head part of a complex word. The non-head may essentially be a flat concatenation of lexical elements or even an arbitrary sequence of symbols. Clearly, an account making use of morphological principles would be desirable. While the internal structure of a word is not relevant for the identification of the part-ofspeech category and morphosyntactic agreement information, it is certainly important for information extraction, information retrieval, and higher-level tasks like machine translation. 4An alternative is to construct an interface component between a finite-state inflectional morphology and a context-free word formation component. While this can be conceivably done, it restricts the applicability of the resulting overall system, since many higher-level applications presuppose a finite-state analyzer; this is for instance the case for the Xerox Linguistic Environment (http://www.parc.com/istl/groups/nltt/xle/), a development platform for syntactic Lexical-Functional Grammars (Butt et al. 1999). An alternative strategy—putting emphasis on a linguistically satisfactory account of word formation—is to compile out a higher-level word formation grammar into a finite-state automaton (FSA), assuming a bound to the depth of recursive selfembedding. This strategy was used in a finite-state implementation of the rules in the DeKo project (Schmid et al. 2001), based on the AT&T Lextools toolkit by Richard Sproat.5 The toolkit provides a compilation routine which transforms a certain class of regular-grammar-equivalent rewrite grammars into finite-state transducers. Full context-free recursion has to be replaced by an explicit cascading of special category symbols (e.g., N1, N2, N3, etc.). 
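As an editorial illustration of this cascading (a sketch only — the toy grammar, the depth bound and the symbol-naming scheme are ours, not those of the Lextools compilation routine), the following Python fragment compiles a recursive compounding rule into depth-indexed categories N1, N2, N3 whose deepest level admits no further self-embedding:

# A minimal sketch of replacing context-free recursion by an explicit cascade
# of depth-indexed category symbols (N1, N2, N3, ...), assuming a toy grammar.
# The depth bound and the grammar itself are illustrative only.

TOY_GRAMMAR = {                     # right-hand sides per nonterminal
    "N": [["N", "N"], ["n"]],       # noun-noun compounding, or a bare stem
}
TERMINALS = {"n"}

def cascade(grammar, depth):
    """Compile out recursion up to a fixed embedding depth."""
    out = {}
    for d in range(1, depth + 1):
        for lhs, rhss in grammar.items():
            new_rhss = []
            for rhs in rhss:
                if d == depth and any(sym in grammar for sym in rhs):
                    continue        # no further self-embedding at max depth
                new_rhss.append(
                    [sym if sym in TERMINALS else f"{sym}{d + 1}" for sym in rhs])
            out[f"{lhs}{d}"] = new_rhss
    return out

if __name__ == "__main__":
    for lhs, rhss in cascade(TOY_GRAMMAR, 3).items():
        for rhs in rhss:
            print(lhs, "->", " ".join(rhs))

The cascade accepts compounds only up to the chosen embedding depth; anything deeper is lost, and the number of compiled symbols and rules grows with the bound.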
Unfortunately, the depth of embedding occurring in real examples is at least four, even if we assume that derivations like ver.träg.lich (‘compatible’; in (2)) are stored in the lexicon as complex units: in the initially mentioned compound Verkehrs.wege.planungs.beschleunigungs.gesetz (‘law for speeding up the planning of traffic routes’), we might assume that Verkehrs.wege (‘traffic routes’) is stored as a unit, but the remainder of the analysis is rule-based. With this depth of recursion (and a realistic morphological grammar), we get an unmanagable explosion of the number of states in the compiled (intermediate) FSA. 4 Proposed strategy We propose a refinement of finite-state approximation techniques for context-free grammars, as they have been developed for syntax (Pereira and Wright 1997, Grimley-Evans 1997, Johnson 1998, Nederhof 2000). Our strategy assumes that we want to express and develop the morphological grammar at the linguistically satisfactory level of a (contextfree-equivalent) unification grammar. In processing, a finite-state approximation of this grammar is used. Exploiting specific facts about morphology, the number of states for the constructed FSA can be kept relatively low, while still being in a position to cover realistic corpus example in an exact way. The construction is based on the following observation: Intuitively, context-free expressiveness is not needed to constrain grammaticality for most of the 5Lextools: a toolkit for finite-state linguistic analysis, AT&T Labs Research; http://www.research.att.com/sw/tools/lextools/ word formation combinations. This is because in most cases, either (i) morphological feature selection is performed between string-adjacent terminal symbols, or (ii) there are no categorial restrictions on possible combinations. (i) is always the case for suffixation, since German morphology is exclusively right-headed.6 So the head of the unit selected by the suffix is always adjacent to it, no matter how complex the unit is: (5) X Y . . . Y X  (i) is also the case for prefixes combining with a simple unit. (ii) is the case for compounding: while affix-derivation is sensitive to the mentioned dimensions like category and origin, no such grammatical restrictions apply in compounding.7 So the fact that in compounding, the heads of the two combined units may not be adjacent (since the right unit may be complex) does not imply that context-freeness is required to exclude impossible combinations: (6) X X  X  X  X  X or X X  X X  X  X  X or X X X  X  X  X The only configuration requiring context-freeness to exclude ungrammatical examples is the combination of a prefix with a complex morphological unit: (7) X X X  . . . X As (1) showed, such examples do occur; so they should be given an exact treatment. However, the depth of recursive embeddings of this particular type (possibly with other embeddings intervening) in realistic text is limited. So a finite-state approximation 6This may appear to be falsified by examples like ver- (V  ) + Urteil (N, ‘judgement’) = verurteilen (V, ‘convict’); however, in this case, a noun-to-verb conversion precedes the prefix derivation. Note that the inflectional marking is always rightperipheral. 7Of course, when speakers disambiguate the possible bracketings of a complex compound, they can exclude many combinations as implausible. But this is a defeasible world knowledge-based effect, which should not be modeled as strict selection in a morphological grammar. 
keeping track of prefix embeddings in particular, but leaving the other operations unrestricted seems well justified. We will show in sec. 6 how such a technique can be devised, building on the algorithm reviewed in sec. 5. 5 RTN-based approximation techniques A comprehensive overview and experimental comparison of finite-state approximation techniques for context-free grammars is given in (Nederhof 2000). In Nederhof’s approximation experiments based on an HPSG grammar, the so-called RTN method provided the best trade-off between exactness and the resources required in automaton construction. (Techniques that involve a heavy explosion of the number of states are impractical for non-trivial grammars.) More specifically, a parameterized version of the RTN method, in which the FSA keeps track of possible derivational histories, was considered most adequate. The RTN method of finite-state approximation is inspired by recursive transition networks (RTNs). RTNs are collections of sub-automata. For each rule   in a context-free grammar, a subautomaton with  states is constructed: (8)    . . .       . . .   As a symbol is processed in the  automaton (say,  ), the RTN control jumps to the respective subautomaton’s initial state (so, from   in (8) to a state    in the sub-automaton for  ), keeping the return address on a stack representation. When the subautomaton is in its final state (     ), control jumps back to the next state in the  automaton:  . In the RTN-based finite-state approximation of a context-free grammar (which does not have an unlimited stack representation available), the jumps to sub-automata are hard-wired, i.e., transitions for non-terminal symbols like the  transition from   to  are replaced by direct  -transitions to the initial state and from the end state of the respective sub-automata: (9). (Of course, the resulting nondeterministic FSA is then determinized and minimized by standard techniques.) (9)          . . .            . . .  . . .  . . .   The technique is approximative, since on jumping back, the automaton “forgets” where it had come from, so if there are several rules with a right-hand side occurrence of, say   , the automaton may nondeterministically jump back to the wrong rule. For instance, if our grammar consists of a recursive production B  a B c for category B, and a production B  b, we will get the following FSA: (10)         b  a   c  The approximation loses the original balancing of a’s and c’s, so “abcc” is incorrectly accepted. In the parameterized version of the RTN method that Nederhof (2000) proposes, the state space is enlarged: different copies of each state are created to keep track of what the derivational history was at the point of entering the present subautomaton. For representing the derivational history, Nederhof uses a list of “dotted” productions, as known from Earley parsing. So, for state  in (10), we would get copies   ,      , etc., likewise for the states  "!  ! The  -transitions for jumping to and from embedded categories observe the laws for legal context-free derivations, as far as recorded by the dotted rules.8 Of course, the window for looking back in history is bounded; there is a parameter (which Nederhof calls # ) for the size of the history list in the automaton construction. Beyond the recorded history, the automaton’s approximation will again get inexact. 
(11) shows the parameterized variant of (10), with parameter #%$'& , i.e., a maximal length of one element for the history ( ( is used as a short-hand for item ) * ,+.*0/21 ). (11) will not accept “abcc” (but it will accept “aabccc”). 8For the exact conditions see (Nederhof 2000, 25). (11)     3  3  43  43     56   756  756  4756  4756   56 b  a c    b  a   c  The number of possible histories (and thus the number of states in the non-deterministic FSA) grows exponentially with the depth parameter, but only polynomially with the size of the grammar. Hence, with parameter #8$9& (“RTN2”), the technique is usable for non-trivial syntactic grammars. Nederhof (2000) discusses an important additional step for avoiding an explosion of the size of the intermediate, non-deterministic FSA: before the described approximation is performed, the contextfree grammar is split up into subgrammars of mutually recursive categories (i.e., categories which can participate in a recursive cycle); in each subgrammar, all other categories are treated as nonterminal symbols. For each subgrammar, the RTN construction and FSA minimization is performed separately, so in the end, the relatively small minimized FSAs can be reassembled. 6 A selective history-based RTN-method In word formation, the split of the original grammar into subgrammars of mutually recursive (MR) categories has no great complexity-reducing effect (if any), contrary to the situation in syntax. Essentially, all recursive categories are part of a single large equivalence class of MR categories. Hence, the size of the grammar that has to be effectively approximated is fairly large (recall that we are dealing with a compiled-out unification grammar). For a realistic grammar, the parameterized RTN technique is unusable with parameter #:$  or higher. Moreover, a history of just two previous embeddings (as we get it with #;$  ) is too limited in a heavily recursive setting like word formation: recursive embeddings of depth four occur in realistic text. However, we can exploit more effectively the “mildly context-free” characteristics of morphological grammars (at least of German) discussed in sec. 4. We propose a refined version of the parameterized RTN-method, with a selective recording of derivational history. We stipulate a distinction of two types of rules: “historically important” h-rules (written     ) and non-h-rules (written    ). The h-rules are treated as in the parameterized RTN-method. The non-h-rules are not recorded in the construction of history lists; they are however taken into account in the determination of legal histories. For instance, )  *0/ 1 will appear as a legal history for the sub-automaton for some category D only if there is a derivation B   D (i.e., a sequence of rule rewrites making use of non-h-rules). By classifying certain rules as non-h-rules, we can concentrate record-keeping resources on a particular subset of rules. In sec. 4, we saw that for most rules in the compiled-out context-free grammar for German morphology (all rules compiled from (3b) and (3c)), the inexactness of the RTN-approximation does not have any negative effect (either due to headadjacency, which is preserved by the non-parametric version of RTN, or due to lack of category-specific constraints, which means that no context-free balancing is checked). Hence, it is safe to classify these rules as non-h-rules. The only rules in which the inexactness may lead to overgeneration are the ones compiled from the prefix rule (3a). 
Marking these rules as h-rules and doing selective history-based RTN construction gives us exactly the desired effect: we will get an FSA that will accept a free alternation of all three word-formation types (as far as compatible with the lexical affixes’ selection), but stacking of prefixes is kept track of. Suffix derivations and compounding steps do not increase the length of our history list, so even with a #%$'& or # $  , we can get very far in exact coverage. 7 Additional optimizations Besides the selective history list construction, two further optimizations were applied to Nederhof’s (2000) parameterized RTN-method: First, Earley items with the same remainder to the right of the dot were collapsed ( )   1 and )   1 ). Since they are indistinguishable in terms of future behavior, making a distinction results in an unnecessary increase of the state space. (Effectively, only the material to the right of the dot was used to build the history items.) Second, for immediate right-peripheral recursion, the history list was collapsed; i.e., if the current history has the form )   1 !  , and the next item to be added would be again )   1 , the present list is left unchanged. This is correct because completion of )   1 will automatically result in the completion of all immediately stacked such items. Together, the two optimizations help to keep the number of different histories small, without losing relevant distinctions. Especially the second optimization is very effective in a selective history setting, since the “immediate” recursion need not be literally immediate, but an arbitrary number of nonh-rules may intervene. So if we find a noun prefix [N  N  - N], i.e., we are looking for a noun, we need not pay attention (in terms of coveragerelevant history distinctions) whether we are running into compounds or suffixations: we know, when we find another noun prefix (with the same selection features, i.e., origin etc.), one analysis will always be to close off both prefixations with the same noun: (12) N N  N N  N . . . Of course, the second prefixation need not have happened on the right-most branch, so at the point of having accepted N  N  N, we may actually be in the configuration sketched in (13a): (13) a. N N  ? N ? N  N . . . b. ? N ? N  N N  N . . . Note however that in terms of grammatically legal continuations, this configuration is “subsumed” by (13b), which is compatible with (12) (the top ‘?’ category will be accessible using  -transitions back from a completed N—recall that suffixation and compounding is not controlled by any history items). So we can note that the only examples for which the approximating FSA is inexact are those where the stacking depth of distinct prefixes (i.e., selecting # diff. pairs of interm. non-deterministic fsa minimized fsa categ./hist. list # states # -trans. #  -trans. # states # trans. plain # $ & 169 1,118 640 963 2 16 parameterized # $  1,861 13,149 7,595 11,782 11 198 RTN-method #:$ 22,333 selective #:$ & 229 2,934 1,256 4,000 14 361 history-based # $  2,011 26,343 11,300 36,076 14 361 RTN-method #:$ 18,049 Figure 1: Experimental results for sample grammar with 185 rules for a different set of features) is greater than our parameter # . Thanks to the second optimization, the relatively frequent case of stacking of two verbal prefixes as in vor.ver.arbeiten ‘preprocess’ counts as a single prefix for book-keeping purposes. 
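To summarize the bookkeeping, the following Python sketch (an editorial illustration with invented rule labels; per the first optimization, items are represented only by the material to the right of the dot) shows how the history list evolves when prefixation steps are interleaved with compounding and suffixation:

def push_history(history, remainder, is_h_rule, d):
    """History-list update at the point of entering a sub-automaton.

    `remainder` stands for the material to the right of the dot in the
    calling item (optimization 1: items with the same remainder collapse, so
    only the remainder is kept).  Only h-rules (prefixation) are recorded; an
    item equal to the current top is collapsed (optimization 2,
    right-peripheral recursion); the list is capped at d elements.
    """
    if not is_h_rule:                   # suffixation / compounding: not recorded
        return history
    if history and history[0] == remainder:
        return history                  # immediately stacked identical item
    return ([remainder] + history)[:d]

if __name__ == "__main__":
    d = 2
    h = []
    steps = [("N", False),   # compounding rule: leaves the history untouched
             ("N", True),    # noun prefix (e.g. Fehl-): recorded
             ("V", True),    # verb prefix (e.g. ver-): recorded
             ("V", True),    # second stacked verb prefix: collapsed
             ("N", False)]   # -ung suffixation: not recorded
    for rem, is_h in steps:
        h = push_history(h, rem, is_h, d)
        print(rem, is_h, "->", h)

With d = 2, only the two prefixation steps are recorded, and the immediately stacked verb prefix leaves the list unchanged, as in the vor.ver.arbeiten case above.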
8 Implementation and experiments We implemented the selective history-based RTNconstruction in Prolog, as a conversion routine that takes as input a definite-clause grammar with compiled-out grounded feature values; it produces as output a Prolog representation of an FSA. The resulting automaton is determinized and minimized, using the FSA library for Prolog by Gertjan van Noord.9 Emphasis was put on identifying the most suitable strategy for dealing with word formation taking into account the relative size of the FSAs generated (other techniques than the selective history strategy were tried out and discarded). The algorithm was applied on a sample word formation grammar with 185 compiled-out context-free rules, displaying the principled mechanism of category and other feature selection, but not the full set of distinctions made in the DeKo project. 9 of the rules were compiled from the prefixation rule, and were thus marked as h-rules for the selective method. We ran a comparison between a version of the non-selective parameterized RTN-method of (Nederhof 2000) and the selective history method proposed in this paper. An overview of the results is given in fig. 1.10 It should be noted that the optimizations of sec. 7 were applied in both methods (the non-selective method was simulated by mark9FSA6.2xx: Finite State Automata Utilities; http://odur.let.rug.nl/˜vannoord/Fsa/ 10The fact that the minimized FSAs for   are identical for the selective method is an artefact of the sample grammar. ing all rules as h-rules). As the size results show, the non-deterministic FSAs constructed by the selective method are more complex (and hence resource-intensive in minimization) than the ones produced by the “plain” parameterized version. However, the difference in exactness of the approximizations has to be taken into account. As a tentative indication for this, note that the minimized FSA for #;$ & in the plain version has only two states; so obviously too many distinctions from the context-free grammar have been lost. In the plain version, all word formation operations are treated alike, hence the history list of length one or two is quickly filled up with items that need not be recorded. A comparison of the number of different pairs of categories and history lists used in the construction shows that the selective method is more economical in the use of memory space as the depth parameter grows larger. (For # $  , the selective method would even have fewer different category/history list pairs than the plain method, since the patterns become repetitive. However, the approximations were impractical for # $ .) Since the selective method uses non-h-rules only in the determination of legal histories (as discussed in sec. 6), it can actually “see” further back into the history than the length of the history list would suggest. What the comparison clearly indicates is that in terms of resource requirements, our selective method with a parameter #  is much closer to the #  -version of the plain RTN-method than to the next higher #   version. But since the selective method focuses its record-keeping resources on the crucial aspects of the finite-state approximation, it brings about a much higher gain in exactness than just extending the history list by one in the plain method. We also ran the selective method on a more finegrained morphological grammar with 403 rules (including 12 h-rules). Parameter # $ & was applicable, leading to a non-deterministic FSA with 7,345 states, which could be minimized. 
Parameter # $  led to a non-deterministic FSA with 87,601 states, for which minimization could not be completed due to a memory overflow. It is one goal for future research to identify possible ways of breaking down the approximation construction into smaller subproblems for which minimization can be run separately (even though all categories belong to the same equivalence class of mutually recursive categories).11 Another goal is to experiment with the use of transduction as a means of adding structural markings from which the analysis trees can be reconstructed (to the extent they are not underspecified by the finite-state approach); possible approaches are discussed in Johnson 1996 and Boullier 2003. Inspection of the longest few hundred prefixcontaining word forms in a large German newspaper corpus indicates that prefix stacking is rare. (If there are several prefixes in a word form, this tends to arise through compounding.) No instance of stacking of depth 3 was observed. So, the range of phenomena for which the approximation is inexact is of little practical relevance. For a full evaluation of the coverage and exactness of the approach, a comprehensive implementation of the morphological grammar would be required. We ran a preliminary experiment with a small grammar, focusing on the cases that might be problematic: we extracted from the corpus a random sample of 100 word forms containing prefixes. From these 100 forms, we generated about 3700 grammatical and ungrammatical test examples by omission, addition and permutation of stems and affixes. After making sure that the required affixes and stems were included in the lexicon of the grammar, we ran a comparison of exact parsing with the unification-based grammar and the selective history-based RTN-approximation, with parameter # $ & (which means that there is a history window of one item). For 97% of the test items, the two methods agreed; 3% of the items were accepted by the approximation method, but not by the full grammar. The approximation does not lose any 11A related possibility pointed out by a reviewer would be to expand features from the original unification-grammar only where necessary (cf. Kiefer and Krieger 2000). test items parsed by the full grammar. Some obvious improvements should make it possible soon to run experiments with a larger history window, reaching exactness of the finite-state method for almost all relevant data. 9 Acknowledgement I’d like to thank my former colleagues at the Institut für Maschinelle Sprachverarbeitung at the University of Stuttgart for invaluable discussion and input: Arne Fitschen, Anke Lüdeling, Bettina Säuberlich and the other people working in the DeKo project and the IMS lexicon group. I’d also like to thank Christian Rohrer and Helmut Schmid for discussion and support. References Boullier, Pierre. 2003. Supertagging: A non-statistical parsing-based approach. In Proceedings of the 8th International Workshop on Parsing Technologies (IWPT’03), Nancy, France. Butt, Miriam, Tracy King, Maria-Eugenia Niño, and Frédérique Segond. 1999. A Grammar Writer’s Cookbook. Number 95 in CSLI Lecture Notes. Stanford, CA: CSLI Publications. Grimley-Evans, Edmund. 1997. Approximating context-free grammars with a finite-state calculus. In ACL, pp. 452–459, Madrid, Spain. Johnson, Mark. 1996. Left corner transforms and finite state approximations. Ms., Rank Xerox Research Centre, Grenoble. Johnson, Mark. 1998. Finite-state approximation of constraintbased grammars using left-corner grammar transforms. 
In COLING-ACL, pp. 619–623, Montreal, Canada. Kiefer, Bernd, and Hans-Ulrich Krieger. 2000. A contextfree approximation of head-driven phrase structure grammar. In Proceedings of the 6th International Workshop on Parsing Technologies (IWPT’00), February 23-25, pp. 135–146, Trento, Italy. Nederhof, Mark-Jan. 2000. Practical experiments with regular approximation of context-free languages. Computational Linguistics 26:17–44. Pereira, Fernando, and Rebecca Wright. 1997. Finite-state approximation of phrase-structure grammars. In Emmanuel Roche and Yves Schabes (eds.), Finite State Language Processing, pp. 149–173. Cambridge: MIT Press. Schiller, Anne. 1995. DMOR: Entwicklerhandbuch [developer’s handbook]. Technical report, Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart. Schmid, Tanja, Anke Lüdeling, Bettina Säuberlich, Ulrich Heid, and Bernd Möbius. 2001. DeKo: Ein System zur Analyse komplexer Wörter. In GLDV Jahrestagung, pp. 49– 57. Wurster, Melvin. 2003. Entwicklung einer Wortbildungsgrammatik fuer das Deutsche in YAP. Studienarbeit [Intermediate student research thesis], Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart.
2003
25
A Tabulation-Based Parsing Method that Reduces Copying Gerald Penn and Cosmin Munteanu Department of Computer Science University of Toronto Toronto M5S 3G4, Canada gpenn,mcosmin  @cs.toronto.edu Abstract This paper presents a new bottom-up chart parsing algorithm for Prolog along with a compilation procedure that reduces the amount of copying at run-time to a constant number (2) per edge. It has applications to unification-based grammars with very large partially ordered categories, in which copying is expensive, and can facilitate the use of more sophisticated indexing strategies for retrieving such categories that may otherwise be overwhelmed by the cost of such copying. It also provides a new perspective on “quick-checking” and related heuristics, which seems to confirm that forcing an early failure (as opposed to seeking an early guarantee of success) is in fact the best approach to use. A preliminary empirical evaluation of its performance is also provided. 1 Introduction This paper addresses the cost of copying edges in memoization-based, all-paths parsers for phrasestructure grammars. While there have been great advances in probabilistic parsing methods in the last five years, which find one or a few most probable parses for a string relative to some grammar, allpaths parsing is still widely used in grammar development, and as a means of verifying the accuracy of syntactically more precise grammars, given a corpus or test suite. Most if not all efficient all-paths phrase-structurebased parsers for natural language are chart-based because of the inherent ambiguity that exists in large-scale natural language grammars. Within WAM-based Prolog, memoization can be a fairly costly operation because, in addition to the cost of copying an edge into the memoization table, there is the additional cost of copying an edge out of the table onto the heap in order to be used as a premise in further deductions (phrase structure rule applications). All textbook bottom-up Prolog parsers copy edges out: once for every attempt to match an edge to a daughter category, based on a matching endpoint node, which is usually the first-argument on which the memoization predicate is indexed. Depending on the grammar and the empirical distribution of matching mother/lexical and daughter descriptions, this number could approach  copies for an edge added early to the chart, where  is the length of the input to be parsed. For classical context-free grammars, the category information that must be copied is normally quite small in size. For feature-structure-based grammars and other highly lexicalized grammars with large categories, however, which have become considerably more popular since the advent of the standard parsing algorithms, it becomes quite significant. The ALE system (Carpenter and Penn, 1996) attempts to reduce this by using an algorithm due to Carpenter that traverses the string breadth-first, right-to-left, but matches rule daughters rule depth-first, left-toright in a failure-driven loop, which eliminates the need for active edges and keeps the sizes of the heap and call stack small. It still copies a candidate edge every time it tries to match it to a daughter description, however, which can approach    because of its lack of active edges. The OVIS system (van Noord, 1997) employs selective memoization, which tabulates only maximal projections in a head-corner parser — partial projections of a head are still recomputed. A chart parser with zero copying overhead has yet to be discovered, of course. 
This paper presents one that reduces this worst case to two copies per non-empty edge, regardless of the length of the input string or when the edge was added to the chart. Since textbook chart parsers require at least two copies per edge as well (assertion and potentially matching the next lexical edge to the left/right), this algorithm always achieves the best-case number of copies attainable by them on non-empty edges. It is thus of some theoretical interest in that it proves that at least a constant bound is attainable within a Prolog setting. It does so by invoking a new kind of grammar transformation, called EFD-closure, which ensures that a grammar need not match an empty category to the leftmost daughter of any rule. This transformation is similar to many of the myriad of earlier transformations proposed for exploring the decidability of recognition under various parsing control strategies, but the property it establishes is more conservative than brute-force epsilon elimination for unification-based grammars (Dymetman, 1994). It also still treats empty categories distinctly from non-empty ones, unlike the linking tables proposed for treating leftmost daughters in left-corner parsing (Pereira and Shieber, 1987). Its motivation, the practical consideration of copying overhead, is also rather different, of course. The algorithm will be presented as an improved version of ALE’s parser, although other standard bottom-up parsers can be similarly adapted. 2 Why Prolog? Apology! This paper is not an attempt to show that a Prolog-based parser could be as fast as a phrasestructure parser implemented in an imperative programming language such as C. Indeed, if the categories of a grammar are discretely ordered, chart edges can be used for further parsing in situ, i.e., with no copying out of the table, in an imperative programming language. Nevertheless, when the categories are partially ordered, as in unificationbased grammars, there are certain breadth-first parsing control strategies that require even imperatively implemented parsers to copy edges out of their tables. What is more important is the tradeoff at stake between efficiency and expressiveness. By improving the performance of Prolog-based parsing, the computational cost of its extra available expressive devices is effectively reduced. The alternative, simple phrase-structure parsing, or extended phrase-structure-based parsing with categories such as typed feature structures, is extremely cumbersome for large-scale grammar design. Even in the handful of instances in which it does seem to have been successful, which includes the recent HPSG English Resource Grammar and a handful of Lexical-Functional Grammars, the results are by no means graceful, not at all modular, and arguably not reusable by anyone except their designers. The particular interest in Prolog’s expressiveness arises, of course, from the interest in generalized context-free parsing beginning with definite clause grammars (Pereira and Shieber, 1987), as an instance of a logic programming control strategy. The connection between logic programming and parsing is well-known and has also been a very fruitful one for parsing, particularly with respect to the application of logic programming transformations (Stabler, 1993) and constraint logic programming techniques to more recent constraint-based grammatical theories. Relational predicates also make grammars more modular and readable than pure phrasestructure-based grammars. 
Commercial Prolog implementations are quite difficult to beat with imperative implementations when it is general logic programming that is required. This is no less true with respect to more recent data structures in lexicalized grammatical theories. A recent comparison (Penn, 2000) of a version between ALE (which is written in Prolog) that reduces typed feature structures to Prolog term encodings, and LiLFeS (Makino et al., 1998), the fastest imperative re-implementation of an ALE-like language, showed that ALE was slightly over 10 times faster on large-scale parses with its HPSG reference grammar than LiLFeS was with a slightly more efficient version of that grammar. 3 Empirical Efficiency Whether this algorithm will outperform standard Prolog parsers is also largely empirical, because: 1. one of the two copies is kept on the heap itself and not released until the end of the parse. For large parses over large data structures, that can increase the size of the heap significantly, and will result in a greater number of cache misses and page swaps. 2. the new algorithm also requires an off-line partial evaluation of the grammar rules that increases the number of rules that must be iterated through at run-time during depth-first closure. This can result in redundant operations being performed among rules and their partially evaluated instances to match daughter categories, unless those rules and their partial evaluations are folded together with local disjunctions to share as much compiled code as possible. A preliminary empirical evaluation is presented in Section 8. Oepen and Carroll (2000), by far the most comprehensive attempt to profile and optimize the performance of feature-structure-based grammars, also found copying to be a significant issue in their imperative implementations of several HPSG parsers — to the extent that it even warranted recomputing unifications in places, and modifying the manner in which active edges are used in their fastest attempt (called hyper-active parsing). The results of the present study can only cautiously be compared to theirs so far, because of our lack of access to the successive stages of their implementations and the lack of a common grammar ported to all of the systems involved. Some parallels can be drawn, however, particularly with respect to the utility of indexing and the maintenance of active edges, which suggest that the algorithm presented below makes Prolog behave in a more “C-like” manner on parsing tasks. 4 Theoretical Benefits The principal benefits of this algorithm are that: 1. it reduces copying, as mentioned above. 2. it does not suffer from a problem that textbook algorithms suffer from when running under non-ISO-compatible Prologs (which is to say most of them). On such Prologs, asserted empty category edges that can match leftmost daughter descriptions of rules are not able to combine with the outputs of those rules. 3. keeping a copy of the chart on the heap allows for more sophisticated indexing strategies to apply to memoized categories that would otherwise be overwhelmed by the cost of copying an edge before matching it against an index. Indexing is also briefly considered in Section 8. Indexing is not the same thing as filtering (Torisawa and Tsuji, 1995), which extracts an approximation grammar to parse with first, in order to increase the likelihood of early unification failure. 
If the filter parse succeeds, the system then proceeds to perform the entire unification operation, as if the approximation had never been applied. Malouf et al. (2000) cite an improvement of 35–45% using a “quickcheck” algorithm that they appear to believe finds the optimal selection of  feature paths for quickchecking. It is in fact only a greedy approximation — the optimization problem is exponential in the number of feature paths used for the check. Penn (1999) cites an improvement of 15-40% simply by re-ordering the sister features of only two types in the signature of the ALE HPSG grammar during normal unification. True indexing re-orders required operations without repeating them. Penn and Popescu (1997) build an automaton-based index for surface realization with large lexica, and suggest an extension to statistically trained decision trees. Ninomiya et al. (2002) take a more computationally brute-force approach to index very large databases of feature structures for some kind of information retrieval application. Neither of these is suitable for indexing chart edges during parsing, because the edges are discarded after every sentence, before the expense of building the index can be satisfactorily amortized. There is a fair amount of relevant work in the database and programming language communities, but many of the results are negative (Graf, 1996) — very little time can be spent on constructing the index. A moment’s thought reveals that the very notion of an active edge, tabulating the well-formed prefixes of rule right-hand-sides, presumes that copying is not a significant enough issue to merit the overhead of more specialized indexing. While the present paper proceeds from Carpenter’s algorithm, in which no active edges are used, it will become clear from our evaluation that active edges or their equivalent within a more sophisticated indexing strategy are an issue that should be re-investigated now that the cost of copying can provably be reduced in Prolog. 5 The Algorithm In this section, it will be assumed that the phrasestructure grammar to be parsed with obeys the following property: Definition 1 An (extended) context-free grammar, , is empty-first-daughter-closed (EFD-closed) iff, for every production rule,    in ,   and there are no empty productions (empty categories) derivable from non-terminal   . The next section will show how to transform any phrase-structure grammar into an EFD-closed grammar. This algorithm, like Carpenter’s algorithm, proceeds breadth-first, right-to-left through the string, at each step applying the grammar rules depthfirst, matching daughter categories left-to-right. The first step is then to reverse the input string, and compute its length (performed by reverse count/5) and initialize the chart: rec(Ws,FS) :retractall(edge(_,_,_)), reverse_count(Ws,[],WsRev,0,Length), CLength is Length - 1, functor(Chart,chart,CLength), build(WsRev,Length,Chart), edge(0,Length,FS). Two copies of the chart are used in this presentation. One is represented by a term chart(E1,...,EL), where the  th argument holds the list of edges whose left node is  . Edges at the beginning of the chart (left node 0) do not need to be stored in this copy, nor do edges beginning at the end of the chart (specifically, empty categories with left node and right node Length). This will be called the term copy of the chart. The other copy is kept in a dynamic predicate, edge/3, as a textbook Prolog chart parser would. This will be called the asserted copy of the chart. 
Neither copy of the chart stores empty categories. These are assumed to be available in a separate predicate, empty_cat/1. Since the grammar is EFD-closed, no grammar rule can produce a new empty category. Lexical items are assumed to be available in the predicate lex/2.
The predicate build/3 actually builds the chart:

build([W|Ws],R,Chart) :-
    RMinus1 is R - 1,
    ( lex(W,FS),
      add_edge(RMinus1,R,FS,Chart)
    ; ( RMinus1 =:= 0 -> true
      ; rebuild_edges(RMinus1,Edges),
        arg(RMinus1,Chart,Edges),
        build(Ws,RMinus1,Chart)
      )
    ).
build([],_,_).

The precondition upon each call to build(Ws,R,Chart) is that Chart contains the complete term copy of the non-loop edges of the parsing chart from node R to the end, while Ws contains the (reversed) input string from node R to the beginning. Each pass through the first clause of build/3 then decrements Right, and seeds the chart with every category for the lexical item that spans from R-1 to R. The predicate add_edge/4 actually adds the lexical edge to the asserted copy of the chart, and then closes the chart depth-first under rule applications in a failure-driven loop. When it has finished, if Ws is not empty (RMinus1 is not 0), then build/3 retracts all of the new edges from the asserted copy of the chart (with rebuild_edges/2, described below) and adds them to the (R-1)st argument of the term copy before continuing to the next word.
add_edge/4 matches non-leftmost daughter descriptions from either the term copy of the chart, thus eliminating the need for additional copying of non-empty edges, or from empty_cat/1. Whenever it adds an edge, however, it adds it to the asserted copy of the chart. This is necessary because add_edge/4 works in a failure-driven loop, and any edges added to the term copy of the chart would be removed during backtracking:

add_edge(Left,Right,FS,Chart) :-
    assert(edge(Left,Right,FS)),
    rule(FS,Left,Right,Chart).

rule(FS,L,R,Chart) :-
    (Mother ===> [FS|DtrsRest]),        % PS rule
    match_rest(DtrsRest,R,Chart,Mother,L).

match_rest([],R,Chart,Mother,L) :-      % all Dtrs matched
    add_edge(L,R,Mother,Chart).
match_rest([Dtr|Dtrs],R,Chart,Mother,L) :-
      arg(R,Chart,Edges),
      member(edge(Dtr,NewR),Edges),
      match_rest(Dtrs,NewR,Chart,Mother,L)
    ; empty_cat(Dtr),
      match_rest(Dtrs,R,Chart,Mother,L).

Note that we never need to be concerned with updating the term copy of the chart during the operation of add_edge/4, because EFD-closure guarantees that all non-leftmost daughters must have left nodes strictly greater than the Left passed as the first argument to add_edge/4. Moving new edges from the asserted copy to the term copy is straightforwardly achieved by rebuild_edges/2:

rebuild_edges(Left,Edges) :-
      retract(edge(Left,R,FS))
   -> Edges = [edge(FS,R)|EdgesRest],
      rebuild_edges(Left,EdgesRest)
    ; Edges = [].

The two copies required by this algorithm are thus: 1) copying a new edge to the asserted copy of the chart by add_edge/4, and 2) copying new edges from the asserted copy of the chart to the term copy of the chart by rebuild_edges/2. The asserted copy is only being used to protect the term copy from being unwound by backtracking. Asymptotically, this parsing algorithm has the same cubic complexity as standard chart parsers; only its memory consumption and copying behavior are different.

6 EFD-closure

To convert an (extended) context-free grammar to one in which EFD-closure holds, we must partially evaluate those rules for which empty categories could be the first daughter over the available empty categories.
If all daughters can be empty categories in some rule, then that rule may create new empty categories, over which rules must be partially evaluated again, and so on. The closure algorithm is presented in Figure 1 in pseudo-code and assumes the existence of six auxiliary lists:
Es: a list of empty categories over which partial evaluation is to occur,
Rs: a list of rules to be used in partial evaluation,
NEs: new empty categories, created by partial evaluation (when all daughters have matched empty categories),
NRs: new rules, created by partial evaluation (consisting of a rule to the leftmost daughter of which an empty category has applied, with only its non-leftmost daughters remaining),
EAs: an accumulator of empty categories already partially evaluated once on Rs, and
RAs: an accumulator of rules already used in partial evaluation once on Es.

Initialize Es to empty cats of grammar;
initialize Rs to rules of input grammar;
initialize the other four lists to [];
loop:
while Es =/= [] do
    for each E in Es do
        for each R in Rs do
            unify E with the leftmost unmatched category description of R;
            if it does not match, continue;
            if the leftmost category was rightmost (unary rule),
                then add the new empty category to NEs,
                otherwise add the new rule (with leftmost category
                    marked as matched) to NRs
        od
    od;
    EAs := append(Es,EAs); Rs := append(Rs,RAs); RAs := [];
    Es := NEs; NEs := []
od;
if NRs = []
    then end: EAs are the closed empty cats, Rs are the closed rules
    else Es := EAs; EAs := []; RAs := Rs; Rs := NRs; NRs := [];
         go to loop

Figure 1: The off-line EFD-closure algorithm.

Each pass through the while-loop attempts to match the empty categories in Es against the leftmost daughter description of every rule in Rs. If new empty categories are created in the process (because some rule in Rs is unary and its daughter matches), they are also attempted; EAs holds the others until they are done. Every time a rule's leftmost daughter matches an empty category, this effectively creates a new rule consisting only of the non-leftmost daughters of the old rule. In a unification-based setting, these non-leftmost daughters could also have some of their variables instantiated to information from the matching empty category.
If the while-loop terminates (see the next section), then the rules of Rs are stored in an accumulator, RAs, until the new rules, NRs, have had a chance to match their leftmost daughters against all of the empty categories that Rs has. Partial evaluation with NRs may create new empty categories that Rs have never seen and therefore must be applied to. This is taken care of within the while-loop when RAs are added back to Rs for second and subsequent passes through the loop.

7 Termination Properties

The parsing algorithm itself always terminates because the leftmost daughter always consumes input. Off-line EFD-closure may not terminate when infinitely many new empty categories can be produced by the production rules. We say that an extended context-free grammar, by which classical CFGs as well as unification-based phrase-structure grammars are implied, is ε-offline-parseable (ε-OP) iff the empty string is not infinitely ambiguous in the grammar. Every ε-OP grammar can be converted to a weakly equivalent grammar which has the EFD-closure property. The proof of this statement, which establishes the correctness of the algorithm, is omitted for brevity.
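The closure step in Figure 1 is compact enough to prototype directly. The following is a minimal Python sketch of that loop for the special case of atomic, CFG-like categories, where unification reduces to symbol equality; it is not the paper's implementation (which operates over typed feature structures in Prolog), and the function and variable names are our own.

def efd_close(rules, empty_cats):
    # rules: list of (mother, [daughter, ...]) with atomic (string) categories.
    # empty_cats: categories known to derive the empty string.
    # Returns (closed_rules, closed_empty_cats) in the spirit of Figure 1.
    es = list(dict.fromkeys(empty_cats))    # Es: empties still to be applied
    rs = list(rules)                        # Rs: rules to partially evaluate
    eas, ras = [], []                       # EAs, RAs: accumulators
    known_empty = set(es)
    while True:
        nrs = []                            # NRs: new rules from this round
        while es:
            nes = []                        # NEs: new empty categories
            for e in es:
                for mother, dtrs in rs:
                    if dtrs and dtrs[0] == e:
                        if len(dtrs) == 1:              # unary rule: new empty cat
                            if mother not in known_empty:
                                known_empty.add(mother)
                                nes.append(mother)
                        else:                           # strip the matched daughter
                            nrs.append((mother, list(dtrs[1:])))
            eas.extend(es)
            rs = rs + ras
            ras = []
            es = nes
        if not nrs:
            return rs, eas                  # closed rules, closed empty categories
        es, eas = eas, []
        ras, rs = rs, nrs

For example, efd_close([("S", ["A", "B"])], ["A"]) returns both the original rule and the partially evaluated rule ("S", ["B"]), together with the single empty category "A". With atomic categories the sketch always terminates, because the sets of derivable empty categories and of rule suffixes are finite; the non-termination risk discussed in Section 7 arises only when unification can keep producing new, distinct empty categories.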
EFD-closure bears some resemblance in its intentions to Greibach Normal Form, but: (1) it is far more conservative in the number of extra rules it must create; (2) it is linked directly to the derivable empty categories of the grammar, whereas GNF conversion proceeds from an already ε-eliminated grammar (EFD-closure of any ε-free grammar, in fact, is the grammar itself); (3) GNF is rather more difficult to define in the case of unification-based grammars than with classical CFGs, and in the one generalization we are aware of (Dymetman, 1992), EFD-closure is actually not guaranteed by it; and Dymetman's generalization only works for classically offline-parseable grammars.
In the case of non-ε-OP grammars, a standard bottom-up parser without EFD-closure would not terminate at run-time either. Our new algorithm is thus neither better nor worse than a textbook bottom-up parser with respect to termination. A remaining topic for consideration is the adaptation of this method to strategies with better termination properties than the pure bottom-up strategy.

8 Empirical Evaluation

The details of how to integrate an indexing strategy for unification-based grammars into the EFD-based parsing algorithm are too numerous to present here, but a few empirical observations can be made. First, EFD-based parsing is faster than Carpenter's algorithm even with atomic, CFG-like categories, where the cost of copying is at a minimum, even with no indexing. We defined several sizes of CFG by extracting local trees from successively increasing portions of the Penn Treebank II, as shown in Table 1, and then computed the average time to parse a corpus of sentences (5 times each) drawn from the initial section. All of the parsers were written in SICStus Prolog.

WSJ directories   Number of WSJ files   Lexicon size   Number of rules
00                4                     131            77
00                5                     188            124
00                6                     274            168
00                8                     456            282
00                10                    756            473
00                15                    1167           736
00                20                    1880           1151
00                25                    2129           1263
00                30                    2335           1369
00                35                    2627           1589
00                40                    3781           2170
00                50                    5645           3196
00-01             100                   8948           5246
00-01             129                   11242          6853
00-02             200                   13164          7984
00-02             250                   14730          9008
00-03             300                   17555          10834
00-03             350                   18861          11750
00-04             400                   20359          12696
00-05             481                   20037          13159
00-07             700                   27404          17682
00-09             901                   32422          20999
Table 1: The grammars extracted from the Wall Street Journal directories of the PTB II.

These average times are shown in Figure 2 as a function of the number of rules. Storing active edges is always the worst option, followed by Carpenter's algorithm, followed by the EFD-based algorithm. In this atomic case, indexing simply takes on the form of a hash by phrase structure category. This can be implemented on top of EFD because the overhead of copying has been reduced. This fourth option is the fastest, by a factor of approximately 2.18 on average over EFD without indexing.

[Figure 2: Parsing times for simple CFGs. Average parsing time, on a log(sec) scale, plotted against the number of rules for the Active, Carpenter, EFD, and EFD-index parsers.]

One may also refer to Table 2, in which the number of successful and failed unifications (matches) was counted over the test suite for each rule set.

Number of rules   Successful unifications   Failed unifications   Success rate (%)
124               104                       1,766                 5.56
473               968                       51,216                1.85
736               2,904                     189,528               1.51
1369              7,152                     714,202               0.99
3196              25,416                    3,574,138             0.71
5246              78,414                    14,644,615            0.53
6853              133,205                   30,743,123            0.43
7984              158,352                   40,479,293            0.39
9008              195,382                   56,998,866            0.34
10834             357,319                   119,808,018           0.30
11750             441,332                   151,226,016           0.29
12696             479,612                   171,137,168           0.28
14193             655,403                   250,918,711           0.26
17682             911,480                   387,453,422           0.23
20999             1,863,523                 847,204,674           0.21
Table 2: Successful unification rate for the (non-indexing) EFD parser.

Asymptotically, the success rate does not decrease by very much from rule set to rule set. There are so many more failures early on, however, that the sheer quantity of failed unifications makes it more important to dispense with these quickly.
Of the grammars to which we have access that use larger categories, this ranking of parsing algorithms is generally preserved, although we have found no correlation between category size and the factor of improvement. John Carroll's Prolog port of the Alvey grammar of English (Figure 3), for example, is EFD-closed, but the improvement of EFD over Carpenter's algorithm is much smaller, presumably because there are so few edges when compared to the CFGs extracted from the Penn Treebank. EFD-index is also slower than EFD without indexing because of our poor choice of index for that grammar. With subsumption testing (Figure 4), the active edge algorithm and Carpenter's algorithm are at an even greater disadvantage because edges must be copied to be compared for subsumption. On a pre-release version of MERGE (Figure 5),[1] a modification of the English Resource Grammar that uses more macros and fewer types, the sheer size of the categories combined with a scarcity of edges seems to cost EFD due to the loss of locality of reference, although that loss is more than compensated for by indexing.

[Figure 3: Alvey grammar with no subsumption. Parsing times, on a log(msec) scale, over 200 test cases for the Active, Carp, EFD-Index, and EFD parsers.]
[Figure 4: Alvey grammar with subsumption testing. Parsing times, on a log(msec) scale, over 200 test cases for the Active, Carp, EFD, and EFD-index parsers.]
[Figure 5: MERGE on the CSLI test-set. Parsing times, on a log(msec) scale, over 20 test cases for the Active, EFD, Carp, and EFD-index parsers.]

[1] We are indebted to Kordula DeKuthy and Detmar Meurers of Ohio State University for making this pre-release version available to us.

9 Conclusion

This paper has presented a bottom-up parsing algorithm for Prolog that reduces the copying of edges from either linear or quadratic to a constant number of two per non-empty edge. Its termination properties and asymptotic complexity are the same as those of a standard bottom-up chart parser, but in practice it performs better. Further optimizations can be incorporated by compiling rules in a way that localizes the disjunctions that are implicit in the creation of extra rules in the compile-time EFD-closure step, and by integrating automaton- or decision-tree-based indexing with this algorithm. With copying now being unnecessary for matching a daughter category description, these two areas should result in a substantial improvement to parse times for highly lexicalized grammars. The adaptation of this algorithm to active edges, other control strategies, and scheduling concerns such as finding the first parse as quickly as possible remains an interesting area of further extension. Apart from this empirical issue, this algorithm is of theoretical interest in that it proves that a constant number of edge copies can be attained by an all-paths parser, even in the presence of partially ordered categories.
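As a concrete illustration of the grammar-extraction setup used in Section 8, the sketch below collects local-tree rules from treebank-style bracketed parses in Python. It is not the authors' extraction scripts; the function names, the simple tokenizer, and the choice to skip preterminal-to-word productions are assumptions of this sketch.

import re
from collections import Counter

TOKEN = re.compile(r"\(|\)|[^\s()]+")

def parse_tree(text):
    # Parse one bracketed tree into nested (label, children) tuples; leaves are strings.
    tokens = TOKEN.findall(text)
    pos = 0
    def node():
        nonlocal pos
        assert tokens[pos] == "("
        pos += 1
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                children.append(node())
            else:
                children.append(tokens[pos])
                pos += 1
        pos += 1
        return (label, children)
    return node()

def local_tree_rules(tree, rules):
    # Record one rule (mother -> daughter labels) per local tree whose
    # daughters are all non-terminal nodes (i.e., skip POS -> word productions).
    label, children = tree
    daughters = [c[0] for c in children if isinstance(c, tuple)]
    if len(daughters) == len(children):
        rules[(label, tuple(daughters))] += 1
    for child in children:
        if isinstance(child, tuple):
            local_tree_rules(child, rules)
    return rules

rules = Counter()
local_tree_rules(parse_tree("(S (NP (DT the) (NN bomb)) (VP (VBD exploded)))"), rules)
print(sorted(rules))
# [('NP', ('DT', 'NN')), ('S', ('NP', 'VP')), ('VP', ('VBD',))]

Applied to increasing portions of the treebank, rule sets collected this way grow in size much as the grammars in Table 1 do (real treebank files wrap each tree in an extra pair of parentheses and carry function tags that would need to be stripped first).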
References

B. Carpenter and G. Penn. 1996. Compiling typed attribute-value logic grammars. In H. Bunt and M. Tomita, editors, Recent Advances in Parsing Technologies, pages 145-168. Kluwer.
M. Dymetman. 1992. A generalized Greibach normal form for definite clause grammars. In Proceedings of the International Conference on Computational Linguistics.
M. Dymetman. 1994. A simple transformation for offline-parsable grammars and its termination properties. In Proceedings of the International Conference on Computational Linguistics.
P. Graf. 1996. Term Indexing. Springer Verlag.
T. Makino, K. Torisawa, and J. Tsujii. 1998. LiLFeS — practical unification-based programming system for typed feature structures. In Proceedings of COLING/ACL-98, volume 2, pages 807-811.
R. Malouf, J. Carroll, and A. Copestake. 2000. Efficient feature structure operations without compilation. Natural Language Engineering, 6(1):29-46.
T. Ninomiya, T. Makino, and J. Tsujii. 2002. An indexing scheme for typed feature structures. In Proceedings of the 19th International Conference on Computational Linguistics (COLING-02).
S. Oepen and J. Carroll. 2000. Parser engineering and performance profiling. Natural Language Engineering.
G. Penn and O. Popescu. 1997. Head-driven generation and indexing in ALE. In Proceedings of the ENVGRAM workshop, ACL/EACL-97.
G. Penn. 1999. Optimising don't-care non-determinism with statistical information. Technical Report 140, Sonderforschungsbereich 340, Tübingen.
G. Penn. 2000. The Algebraic Structure of Attributed Type Signatures. Ph.D. thesis, Carnegie Mellon University.
F. C. N. Pereira and S. M. Shieber. 1987. Prolog and Natural-Language Analysis, volume 10 of CSLI Lecture Notes. University of Chicago Press.
E. Stabler. 1993. The Logical Approach to Syntax: Foundations, Specifications, and Implementations of Theories of Government and Binding. MIT Press.
K. Torisawa and J. Tsujii. 1995. Compiling HPSG-style grammar to object-oriented language. In Proceedings of NLPRS-1995, pages 568-573.
G. van Noord. 1997. An efficient implementation of the head-corner parser. Computational Linguistics.
2003
26
Recognizing Expressions of Commonsense Psychology in English Text Andrew Gordon, Abe Kazemzadeh, Anish Nair and Milena Petrova University of Southern California Los Angeles, CA 90089 USA [email protected], {kazemzad, anair, petrova}@usc.edu Abstract Many applications of natural language processing technologies involve analyzing texts that concern the psychological states and processes of people, including their beliefs, goals, predictions, explanations, and plans. In this paper, we describe our efforts to create a robust, large-scale lexical-semantic resource for the recognition and classification of expressions of commonsense psychology in English Text. We achieve high levels of precision and recall by hand-authoring sets of local grammars for commonsense psychology concepts, and show that this approach can achieve classification performance greater than that obtained by using machine learning techniques. We demonstrate the utility of this resource for large-scale corpus analysis by identifying references to adversarial and competitive goals in political speeches throughout U.S. history. 1 Commonsense Psychology in Language Across all text genres it is common to find words and phrases that refer to the mental states of people (their beliefs, goals, plans, emotions, etc.) and their mental processes (remembering, imagining, prioritizing, problem solving). These mental states and processes are among the broad range of concepts that people reason about every day as part of their commonsense understanding of human psychology. Commonsense psychology has been studied in many fields, sometimes using the terms Folk psychology or Theory of Mind, as both a set of beliefs that people have about the mind and as a set of everyday reasoning abilities. Within the field of computational linguistics, the study of commonsense psychology has not received special attention, and is generally viewed as just one of the many conceptual areas that must be addressed in building large-scale lexical-semantic resources for language processing. Although there have been a number of projects that have included concepts of commonsense psychology as part of a larger lexical-semantic resource, e.g. the Berkeley FrameNet Project (Baker et al., 1998), none have attempted to achieve a high degree of breadth or depth over the sorts of expressions that people use to refer to mental states and processes. The lack of a large-scale resource for the analysis of language for commonsense psychological concepts is seen as a barrier to the development of a range of potential computer applications that involve text analysis, including the following: • Natural language interfaces to mixed-initiative planning systems (Ferguson & Allen, 1993; Traum, 1993) require the ability to map expressions of users’ beliefs, goals, and plans (among other commonsense psychology concepts) onto formalizations that can be manipulated by automated planning algorithms. • Automated question answering systems (Voorhees & Buckland, 2002) require the ability to tag and index text corpora with the relevant commonsense psychology concepts in order to handle questions concerning the beliefs, expectations, and intentions of people. • Research efforts within the field of psychology that employ automated corpus analysis techniques to investigate developmental and mental illness impacts on language production, e.g. 
Reboul & Sabatier's (2001) study of the discourse of schizophrenic patients, require the ability to identify all references to certain psychological concepts in order to draw statistical comparisons.
In order to enable future applications, we undertook a new effort to meet this need for a linguistic resource. This paper describes our efforts in building a large-scale lexical-semantic resource for automated processing of natural language text about mental states and processes. Our aim was to build a system that would analyze natural language text and recognize, with high precision and recall, every expression therein related to commonsense psychology, even in the face of an extremely broad range of surface forms. Each recognized expression would be tagged with an appropriate concept from a broad set of those that participate in our commonsense psychological theories.
Section 2 demonstrates the utility of a lexical-semantic resource of commonsense psychology in automated corpus analysis through a study of the changes in mental state expressions over the course of over 200 years of U.S. Presidential State of the Union Addresses. Section 3 of this paper describes the methodology that we followed to create this resource, which involved the hand authoring of local grammars on a large scale. Section 4 describes a set of evaluations to determine the performance levels that these local grammars could achieve and to compare these levels to those of machine learning approaches. Section 5 concludes this paper with a discussion of the relative merits of this approach to the creation of lexical-semantic resources as compared to other approaches.

2 Applications to corpus analysis

One of the primary applications of a lexical-semantic resource for commonsense psychology is toward the automated analysis of large text corpora. The research value of identifying commonsense psychology expressions has been demonstrated in work on children's language use, where researchers have manually annotated large text corpora consisting of parent/child discourse transcripts (Bartsch & Wellman, 1995) and children's storybooks (Dyer et al., 2000). While these previous studies have yielded interesting results, they required enormous amounts of human effort to manually annotate texts. In this section we aim to show how a lexical-semantic resource for commonsense psychology can be used to automate this annotation task, with an example not from the domain of children's language acquisition, but rather political discourse.
We conducted a study to determine how political speeches have been tailored over the course of U.S. history throughout changing climates of military action. Specifically, we wondered if politicians were more likely to talk about goals having to do with conflict, competition, and aggression during wartime than in peacetime. In order to automatically recognize references to goals of this sort in text, we used a set of local grammars authored using the methodology described in Section 3 of this paper. The corpus we selected to apply these concept recognizers to was the U.S. State of the Union Addresses from 1790 to 2003. The reasons for choosing this particular text corpus were its uniform distribution over time and its easy availability in electronic form from Project Gutenberg (www.gutenberg.net). Our set of local grammars identified 4290 references to these goals in this text corpus, the vast majority of them being references to goals of an adversarial nature (rather than competitive).
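To sketch how the normalized counts behind Figure 1 can be computed: for each address, divide the number of recognized references by the address's word count and scale to 100 words. The Python snippet below uses a small regular expression as a crude stand-in for the compiled Intex local grammars (the real recognizers are hand-authored finite-state transducers), so the pattern, function name, and example texts are illustrative assumptions only.

import re

# Crude stand-in for the adversarial/competitive-goal recognizers.
GOAL_PATTERN = re.compile(
    r"\b(vanquish\w*|combat\w*|frustrat\w*|vied|vying|defeat\w*)\b",
    re.IGNORECASE)

def features_per_100_words(text):
    # Return (hits, word count, hits per 100 words) for one address.
    words = len(text.split())
    hits = len(GOAL_PATTERN.findall(text))
    return hits, words, 100.0 * hits / max(words, 1)

addresses = {
    1945: "The nearer we come to vanquishing our enemies ...",
    1995: "... legislation to strengthen our hand in combating terrorists.",
}
for year in sorted(addresses):
    hits, words, rate = features_per_100_words(addresses[year])
    print(year, hits, words, round(rate, 2))

Plotting these per-year rates over the whole 1790-2003 corpus yields the kind of curve shown in Figure 1.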
Examples of the references that were identified include the following: • They sought to use the rights and privileges they had obtained in the United Nations, to frustrate its purposes [adversarial-goal] and cut down its powers as an effective agent of world progress. (Truman, 1953) • The nearer we come to vanquishing [adversarial-goal] our enemies the more we inevitably become conscious of differences among the victors. (Roosevelt, 1945) • Men have vied [competitive-goal] with each other to do their part and do it well. (Wilson, 1918) • I will submit to Congress comprehensive legislation to strengthen our hand in combating [adversarial-goal] terrorists. (Clinton, 1995) Figure 1 summarizes the results of applying our local grammars for adversarial and competitive goals to the U.S. State of the Union Addresses. For each year, the value that is plotted represents the number of references to these concepts that were identified per 100 words in the address. The interesting result of this analysis is that references to adversarial and competitive goals in this corpus increase in frequency in a pattern that directly corresponds to the major military conflicts that the U.S. has participated in throughout its history. Each numbered peak in Figure 1 corresponds to a period in which the U.S. was involved in a military conflict. These are: 1) 1813, War of 1812, US and Britain; 2) 1847, Mexican American War; 3) 1864, Civil War; 4) 1898, Spanish American War; 5) 1917, World War I; 6) 1943, World War II; 7) 1952, Korean War; 8) 1966, Vietnam War; 9) 1991, Gulf War; 10) 2002, War on Terrorism. The wide applicability of a lexical-semantic resource for commonsense psychology will require that the identified concepts are well defined and are of broad enough scope to be relevant to a wide range of tasks. Additionally, such a resource must achieve high levels of accuracy in identifying these concepts in natural language text. The remainder of this paper describes our efforts in authoring and evaluating such a resource. 3 Authoring recognition rules The first challenge in building any lexical-semantic resource is to identify the concepts that are to be recognized in text and used as tags for indexing or markup. For expressions of commonsense psychology, these concepts must describe the broad scope of people’s mental states and processes. An ontology of commonsense psychology with a high degree of both breadth and depth is described by Gordon (2002). In this work, 635 commonsense psychology concepts were identified through an analysis of the representational requirements of a corpus of 372 planning strategies collected from 10 real-world planning domains. These concepts were grouped into 30 conceptual areas, corresponding to various reasoning functions, and full formal models of each of these conceptual areas are being authored to support automated inference about commonsense psychology (Gordon & Hobbs, 2003). We adopted this conceptual framework in our current project because of the broad scope of the concepts in this ontology and its potential for future integration into computational reasoning systems. 
[Figure 1: Adversarial and competitive goals in the U.S. State of the Union Addresses from 1790-2003, plotted as recognized references ("features") per 100 words for each year; the numbered peaks 1-10 mark the major U.S. military conflicts listed above.]

The full list of the 30 concept areas identified is as follows: 1) Managing knowledge, 2) Similarity comparison, 3) Memory retrieval, 4) Emotions, 5) Explanations, 6) World envisionment, 7) Execution envisionment, 8) Causes of failure, 9) Managing expectations, 10) Other agent reasoning, 11) Threat detection, 12) Goals, 13) Goal themes, 14) Goal management, 15) Plans, 16) Plan elements, 17) Planning modalities, 18) Planning goals, 19) Plan construction, 20) Plan adaptation, 21) Design, 22) Decisions, 23) Scheduling, 24) Monitoring, 25) Execution modalities, 26) Execution control, 27) Repetitive execution, 28) Plan following, 29) Observation of execution, and 30) Body interaction.
Our aim for this lexical-semantic resource was to develop a system that could automatically identify every expression of commonsense psychology in English text, and assign to each a tag corresponding to one of the 635 concepts in this ontology. For example, the following passage (from William Makepeace Thackeray's 1848 novel, Vanity Fair) illustrates the format of the output of this system, where references to commonsense psychology concepts are underlined and followed by a tag indicating their specific concept type delimited by square brackets:

Perhaps [partially-justified-proposition] she had mentioned the fact [proposition] already to Rebecca, but that young lady did not appear to [partially-justified-proposition] have remembered it [memory-retrieval]; indeed, vowed and protested that she expected [add-expectation] to see a number of Amelia's nephews and nieces. She was quite disappointed [disappointment-emotion] that Mr. Sedley was not married; she was sure [justified-proposition] Amelia had said he was, and she doted so on [liking-emotion] little children.

The approach that we took was to author (by hand) a set of local grammars that could be used to identify each concept. For this task we utilized the Intex Corpus Processor software developed by the Laboratoire d'Automatique Documentaire et Linguistique (LADL) of the University of Paris 7 (Silberztein, 1999). This software allowed us to author a set of local grammars using a graphical user interface, producing lexical/syntactic structures that can be compiled into finite-state transducers. To simplify the authoring of these local grammars, Intex includes a large-coverage English dictionary compiled by Blandine Courtois, allowing us to specify them at a level that generalized over noun and verb forms. For example, there are a variety of ways of expressing in English the concept of reaffirming a belief that is already held, as exemplified in the following sentences:
1) The finding was confirmed by the new data.
2) She told the truth, corroborating his story.
3) He reaffirms his love for her.
4) We need to verify the claim.
5) Make sure it is true.
Although the verbs in these sentences differ in tense, the dictionaries in Intex allowed us to recognize each using the following simple description:

(<confirm> by | <corroborate> | <reaffirm> | <verify> | <make> sure)

While constructing local grammars for each of the concepts in the original ontology of commonsense psychology, we identified several conceptual distinctions that were made in language that were not expressed in the specific concepts that Gordon had identified. For example, the original ontology included only three concepts in the conceptual area of memory retrieval (the sparsest of the 30 areas), namely memory, memory cue, and memory retrieval. English expressions such as "to forget" and "repressed memory" could not be easily mapped directly to one of these three concepts, which prompted us to elaborate the original sets of concepts to accommodate these and other distinctions made in language. In the case of the conceptual area of memory retrieval, a total of twelve unique concepts were necessary to achieve coverage over the distinctions evident in English.
These local grammars were authored one conceptual area at a time. At the time of the writing of this paper, our group had completed 6 of the original 30 commonsense psychology conceptual areas. The remainder of this paper focuses on the first 4 of the 6 areas that were completed, which were evaluated to determine the recall and precision performance of our hand-authored rules. These four areas are Managing knowledge, Memory, Explanations, and Similarity judgments. Figure 2 presents each of these four areas with a single fabricated example of an English expression for each of the final set of concepts. Local grammars for the two additional conceptual areas, Goals (20 concepts) and Goal management (17 concepts), were authored using the same approach as the others, but were not completed in time to be included in our performance evaluation.
After authoring these local grammars using the Intex Corpus Processor, finite-state transducers were compiled for each commonsense psychology concept in each of the different conceptual areas. To simplify the application of these transducers to text corpora and to aid in their evaluation, transducers for individual concepts were combined into a single finite state machine (one for each conceptual area). By examining the number of states and transitions in the compiled finite state graphs, some indication of their relative size can be given for the four conceptual areas that we evaluated: Managing knowledge (348 states / 932 transitions), Memory (203 / 725), Explanations (208 / 530), and Similarity judgments (121 / 500).

4 Performance evaluation

In order to evaluate the utility of our set of hand-authored local grammars, we conducted a study of their precision and recall performance. In order to calculate the performance levels, it was first necessary to create a test corpus that contained references to the sorts of commonsense psychological concepts that our rules were designed to recognize. To accomplish this, we administered a survey to collect novel sentences that could be used for this purpose.

1. Managing knowledge (37 concepts) He's got a logical mind (managing-knowledge-ability). She's very gullible (bias-toward-belief). He's skeptical by nature (bias-toward-disbelief). It is the truth (true). That is completely false (false). We need to know whether it is true or false (truth-value). His claim was bizarre (proposition). I believe what you are saying (belief). I didn't know about that (unknown).
I used to think like you do (revealed-incorrect-belief). The assumption was widespread (assumption). There is no reason to think that (unjustified-proposition). There is some evidence you are right (partially-justified-proposition). The fact is well established (justified-proposition). As a rule, students are generally bright (inference). The conclusion could not be otherwise (consequence). What was the reason for your suspicion (justification)? That isn’t a good reason (poor-justification). Your argument is circular (circular-justification). One of these things must be false (contradiction). His wisdom is vast (knowledge). He knew all about history (knowledge-domain). I know something about plumbing (partial-knowledge-domain). He’s got a lot of real-world experience (world-knowledge). He understands the theory behind it (world-modelknowledge). That is just common sense (shared-knowledge). I’m willing to believe that (add-belief). I stopped believing it after a while (remove-belief). I assumed you were coming (add-assumption). You can’t make that assumption here (remove-assumption). Let’s see what follows from that (check-inferences). Disregard the consequences of the assumption (ignore-inference). I tried not to think about it (suppress-inferences). I concluded that one of them must be wrong (realize-contradiction). I realized he must have been there (realize). I can’t think straight (knowledge-management-failure). It just confirms what I knew all along (reaffirm-belief). 2. Memory (12 concepts) He has a good memory (memory-ability). It was one of his fondest memories (memory-item). He blocked out the memory of the tempestuous relationship (repressed-memory-item). He memorized the words of the song (memory-storage). She remembered the last time it rained (memory-retrieval). I forgot my locker combination (memory-retrieval-failure). He repressed the memories of his abusive father (memory-repression). The widow was reminded of her late husband (reminding). He kept the ticket stub as a memento (memory-cue). He intended to call his brother on his birthday (schedule-plan). He remembered to set the alarm before he fell asleep (scheduled-plan-retrieval). I forgot to take out the trash (scheduled-plan-retrieval-failure). 3. Explanations (20 concepts) He’s good at coming up with explanations (explanation-ability). The cause was clear (cause). Nobody knew how it had happened (mystery). There were still some holes in his account (explanation-criteria). It gave us the explanation we were looking for (explanation). It was a plausible explanation (candidate-explanation). It was the best explanation I could think of (best-candidate-explanation). There were many contributing factors (factor). I came up with an explanation (explain). Let’s figure out why it was so (attempt-to-explain). He came up with a reasonable explanation (generate-candidate-explanation). We need to consider all of the possible explanations (assess-candidate-explanations). That is the explanation he went with (adopt-explanation). We failed to come up with an explanation (explanation-failure). I can’t think of anything that could have caused it (explanation-generation-failure). None of these explanations account for the facts (explanation-satisfaction-failure). Your account must be wrong (unsatisfying-explanation). I prefer non-religious explanations (explanationpreference). You should always look for scientific explanations (add-explanation-preference). 
We're not going to look at all possible explanations (remove-explanation-preference).
4. Similarity judgments (13 concepts) She's good at picking out things that are different (similarity-comparison-ability). Look at the similarities between the two (make-comparison). He saw that they were the same at an abstract level (draw-analogy). She could see the pattern unfolding (find-pattern). It depends on what basis you use for comparison (comparison-metric). They have that in common (same-characteristic). They differ in that regard (different-characteristic). If a tree were a person, its leaves would correspond to fingers (analogical-mapping). The pattern in the rug was intricate (pattern). They are very much alike (similar). It is completely different (dissimilar). It was an analogous example (analogous).
Figure 2. Example sentences referring to 92 concepts in 4 areas of commonsense psychology.

This survey was administered over the course of one day to anonymous adult volunteers who stopped by a table that we had set up on our university's campus. We instructed the survey taker to author 3 sentences that included words or phrases related to a given concept, and 3 sentences that they felt did not contain any such references. Each survey taker was asked to generate these 6 sentences for each of the 4 concept areas that we were evaluating, described on the survey in the following manner:
• Managing knowledge: Anything about the knowledge, assumptions, or beliefs that people have in their mind
• Memory: When people remember things, forget things, or are reminded of things
• Explanations: When people come up with possible explanations for unknown causes
• Similarity judgments: When people find similarities or differences in things
A total of 99 people volunteered to take our survey, resulting in a corpus of 297 positive and 297 negative sentences for each conceptual area, with a few exceptions due to incomplete surveys.
Using this survey data, we calculated the precision and recall performance of our hand-authored local grammars. Every sentence that had at least one concept detected for the corresponding concept area was treated as a "hit". Table 1 presents the precision and recall performance for each concept area. The results show that the precision of our system is very high, with marginal recall performance.
The low recall scores raised a concern over the quality of our test data. In reviewing the sentences that were collected, it was apparent that some survey participants were not able to complete the task as we had specified. To improve the validity of the test data, we enlisted six volunteers (native English speakers not members of our development team) to judge whether or not each sentence in the corpus was produced according to the instructions. The corpus of sentences was divided evenly among these six raters, and each sentence that the rater judged as not satisfying the instructions was filtered from the data set. In addition, each rater also judged half of the sentences given to a different rater in order to compute the degree of inter-rater agreement for this filtering task. After filtering sentences from the corpus, a second precision/recall evaluation was performed. Table 2 presents the results of our hand-authored local grammars on the filtered data set, and lists the inter-rater agreement for each conceptual area among our six raters.
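The quantities reported in Tables 1 and 2 below are straightforward to compute: precision and recall from the per-area hit counts, and Cohen's Kappa for the inter-rater filtering agreement. The Python sketch that follows is a generic illustration rather than the authors' code; the exact agreement statistic they used is not specified beyond "Kappa", and all names here are ours.

def precision_recall(correct_hits, wrong_hits, total_positive):
    precision = correct_hits / (correct_hits + wrong_hits)
    recall = correct_hits / total_positive
    return precision, recall

def cohen_kappa(labels_a, labels_b):
    # Cohen's Kappa for two raters making keep/filter judgments on the same sentences.
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    chance = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                 for c in set(labels_a) | set(labels_b))
    return (observed - chance) / (1 - chance)

# Memory area on the filtered data set (cf. Table 2): 209 correct hits,
# 0 wrong hits, 221 positive sentences -> precision 1.000, recall ~0.946.
print(precision_recall(209, 0, 221))
print(cohen_kappa(["keep", "keep", "filter", "keep"],
                  ["keep", "filter", "filter", "keep"]))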
The results show that the system achieves a high level of precision, and the recall performance is much better than earlier indicated.

Concept area           Correct hits (a)   Wrong hits (b)   Total positive sentences (c)   Total negative sentences   Precision (a/(a+b))   Recall (a/c)
Managing knowledge     205                16               297                            297                        92.76%                69.02%
Memory                 240                4                297                            297                        98.36%                80.80%
Explanations           126                7                296                            296                        94.73%                42.56%
Similarity judgments   178                18               296                            297                        90.81%                60.13%
Overall                749                45               1186                           1187                       94.33%                63.15%
Table 1. Precision and recall results on the unfiltered data set.

Concept area           Inter-rater agreement (K)   Correct hits (a)   Wrong hits (b)   Total positive sentences (c)   Total negative sentences   Precision (a/(a+b))   Recall (a/c)
Managing knowledge     0.5636                      141                12               168                            259                        92.15%                83.92%
Memory                 0.8069                      209                0                221                            290                        100%                  94.57%
Explanations           0.7138                      83                 5                120                            290                        94.21%                69.16%
Similarity judgments   0.6551                      136                12               189                            284                        91.89%                71.95%
Overall                0.6805                      569                29               698                            1123                       95.15%                81.51%
Table 2. Precision and recall results on the filtered data set, with inter-rater agreement on filtering.

The performance of our hand-authored local grammars was then compared to the performance that could be obtained using more traditional machine-learning approaches. In these comparisons, the recognition of commonsense psychology concepts was treated as a classification problem, where the task was to distinguish between positive and negative sentences for any given concept area. Sentences in the filtered data sets were used as training instances, and feature vectors for each sentence were composed of word-level unigram and bi-gram features, using no stop-lists and ignoring punctuation and case. By using a toolkit of machine learning algorithms (Witten & Frank, 1999), we were able to compare the performance of a wide range of different techniques, including Naïve Bayes, C4.5 rule induction, and Support Vector Machines, through stratified cross-validation (10-fold) of the training data. The highest performance levels were achieved using a sequential minimal optimization algorithm for training a support vector classifier with polynomial kernels (Platt, 1998). These performance results are presented in Table 3. The percentage correctness of classification (Pa) of our hand-authored local grammars (column A) was higher than could be attained using this machine-learning approach (column B) in three out of the four concept areas.
We then conducted an additional study to determine if the two approaches (hand-authored local grammars and machine learning) could be complementary. The concepts that are recognized by our hand-authored rules could be conceived as additional bimodal features for use in machine learning algorithms. We constructed an additional set of support vector machine classifiers trained on the filtered data set that included these additional concept-level features in the feature vector of each instance alongside the existing unigram and bi-gram features. Performance of these enhanced classifiers, also obtained through stratified cross-validation (10-fold), is reported in Table 3 as well (column C). The results show that these enhanced classifiers perform at a level that is the greater of that of each independent approach.

                       A. Hand-authored local grammars   B. SVM with word-level features   C. SVM with word and concept features
Concept area           Pa         K                      Pa          K                     Pa          K
Managing knowledge     90.86%     0.8148                 86.0789%    0.6974                89.5592%    0.7757
Memory                 97.65%     0.8973                 93.5922%    0.8678                97.4757%    0.9483
Explanations           89.75%     0.7027                 85.9564%    0.6212                89.3462%    0.7186
Similarity judgments   86.25%     0.7706                 92.4528%    0.8409                92.0335%    0.8309
Table 3. Percent agreement (Pa) and Kappa statistics (K) for classification using hand-authored local grammars (A), SVMs with word features (B), and SVMs with word and concept features (C).

5 Discussion

The most significant challenge facing developers of large-scale lexical-semantic resources is coming to some agreement on the way that natural language can be mapped onto specific concepts. This challenge is particularly evident in consideration of our survey data and subsequent filtering. The abilities that people have in producing and recognizing sentences containing related words or phrases differed significantly across concept areas. While raters could agree on what constitutes a sentence containing an expression about memory (Kappa=.8069), the agreement on expressions of managing knowledge is much lower than we would hope for (Kappa=.5636). We would expect much greater inter-rater agreement if we had trained our six raters for the filtering task, that is, described exactly which concepts we were looking for and given them examples of how these concepts can be realized in English text. However, this approach would have invalidated our performance results on the filtered data set, as the task of the raters would be biased toward identifying examples that our system would likely perform well on rather than identifying references to concepts of commonsense psychology.
Our inter-rater agreement concern is indicative of a larger problem in the construction of large-scale lexical-semantic resources. The deeper we delve into the meaning of natural language, the less we are likely to find strong agreement among untrained people concerning the particular concepts that are expressed in any given text. Even with lexical-semantic resources about commonsense knowledge (e.g. commonsense psychology), finer distinctions in meaning will require the efforts of trained knowledge engineers to successfully map between language and concepts. While this will certainly create a problem for future precision/recall performance evaluations, the concern is even more serious for other methodologies that rely on large amounts of hand-tagged text data to create the recognition rules in the first place. We expect that this problem will become more evident as projects using algorithms to induce local grammars from manually-tagged corpora, such as the Berkeley FrameNet efforts (Baker et al., 1998), broaden and deepen their encodings in conceptual areas that are more abstract (e.g. commonsense psychology).
The approach that we have taken in our research does not offer a solution to the growing problem of evaluating lexical-semantic resources. However, by hand-authoring local grammars for specific concepts rather than inducing them from tagged text, we have demonstrated a successful methodology for creating lexical-semantic resources with a high degree of conceptual breadth and depth. By employing linguistic and knowledge engineering skills in a combined manner we have been able to make strong ontological commitments about the meaning of an important portion of the English language. We have demonstrated that the precision and recall performance of this approach is high, achieving classification performance greater than that of standard machine-learning techniques. Furthermore, we have shown that hand-authored local grammars can be used to identify concepts that can be easily combined with word-level features (e.g.
unigrams, bi-grams) for integration into statistical natural language processing systems. Our early exploration of the application of this work for corpus analysis (U.S. State of the Union Addresses) has produced interesting results, and we expect that the continued development of this resource will be important to the success of future corpus analysis and human-computer interaction projects. Acknowledgments This paper was developed in part with funds from the U.S. Army Research Institute for the Behavioral and Social Sciences under ARO contract number DAAD 19-99-D-0046. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the Department of the Army. References Baker, C., Fillmore, C., & Lowe, J. (1998) The Berkeley FrameNet project. in Proceedings of the COLING-ACL, Montreal, Canada. Bartsch, K. & Wellman, H. (1995) Children talk about the mind. New York: Oxford University Press. Dyer, J., Shatz, M., & Wellman, H. (2000) Young children’s storybooks as a source of mental state information. Cognitive Development 15:17-37. Ferguson, G. & Allen, J. (1993) Cooperative Plan Reasoning for Dialogue Systems, in AAAI-93 Fall Symposium on Human-Computer Collaboration: Reconciling Theory, Synthesizing Practice, AAAI Technical Report FS-93-05. Menlo Park, CA: AAAI Press. Gordon, A. (2002) The Theory of Mind in Strategy Representations. 24th Annual Meeting of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates. Gordon, A. & Hobbs (2003) Coverage and competency in formal theories: A commonsense theory of memory. AAAI Spring Symposium on Formal Theories of Commonsense knowledge, March 24-26, Stanford. Platt, J. (1998). Fast Training of Support Vector Machines using Sequential Minimal Optimization. In B. Schölkopf, C. Burges, and A. Smola (eds.) Advances in Kernel Methods - Support Vector Learning, Cambridge, MA: MIT Press. Reboul A., Sabatier P., Noël-Jorand M-C. (2001) Le discours des schizophrènes: une étude de cas. Revue française de Psychiatrie et de Psychologie Médicale, 49, pp 6-11. Silberztein, M. (1999) Text Indexing with INTEX. Computers and the Humanities 33(3). Traum, D. (1993) Mental state in the TRAINS-92 dialogue manager. In Working Notes of the AAAI Spring Symposium on Reasoning about Mental States: Formal Theories and Applications, pages 143-149, 1993. Menlo Park, CA: AAAI Press. Voorhees, E. & Buckland, L. (2002) The Eleventh Text REtrieval Conference (TREC 2002). Washington, DC: Department of Commerce, National Institute of Standards and Technology. Witten, I. & Frank, E. (1999) Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufman.
2003
27
Closing the Gap: Learning-Based Information Extraction Rivaling Knowledge-Engineering Methods
Hai Leong Chieu, DSO National Laboratories, 20 Science Park Drive, Singapore 118230, [email protected]
Hwee Tou Ng, Department of Computer Science, National University of Singapore, 3 Science Drive 2, Singapore 117543, [email protected]
Yoong Keok Lee, DSO National Laboratories, 20 Science Park Drive, Singapore 118230, [email protected]

Abstract

In this paper, we present a learning approach to the scenario template task of information extraction, where information filling one template could come from multiple sentences. When tested on the MUC-4 task, our learning approach achieves accuracy competitive with the best of the MUC-4 systems, which were all built with manually engineered rules. Our analysis reveals that our use of full parsing and state-of-the-art learning algorithms has contributed to the good performance. To our knowledge, this is the first research to have demonstrated that a learning approach to the full-scale information extraction task could achieve performance rivaling that of the knowledge-engineering approach.

1 Introduction

The explosive growth of online texts written in natural language has prompted much research into information extraction (IE), the task of automatically extracting specific information items of interest from natural language texts. The extracted information is used to fill database records, also known as templates in the IE literature.
Research efforts on IE tackle a variety of tasks. They include extracting information from semi-structured texts, such as seminar announcements, rental and job advertisements, etc., as well as from free texts, such as newspaper articles (Soderland, 1999). IE from semi-structured texts is easier than from free texts, since the layout and format of a semi-structured text provide additional useful clues to aid in extraction.

AYACUCHO, 19 JAN 89 – TODAY TWO PEOPLE WERE WOUNDED WHEN A BOMB EXPLODED IN SAN JUAN BAUTISTA MUNICIPALITY. OFFICIALS SAID THAT SHINING PATH MEMBERS WERE RESPONSIBLE FOR THE ATTACK ...
... POLICE SOURCES STATED THAT THE BOMB ATTACK INVOLVING THE SHINING PATH CAUSED SERIOUS DAMAGES ...
Figure 1: Snippet of a MUC-4 document

Several benchmark data sets have been used to evaluate IE approaches on semi-structured texts (Soderland, 1999; Ciravegna, 2001; Chieu and Ng, 2002a). For the task of extracting information from free texts, a series of Message Understanding Conferences (MUC) provided benchmark data sets for evaluation. Several subtasks for IE from free texts have been identified. The named entity (NE) task extracts person names, organization names, location names, etc. The template element (TE) task extracts information centered around an entity, like the acronym, category, and location of a company. The template relation (TR) task extracts relations between entities. Finally, the full-scale IE task, the scenario template (ST) task, deals with extracting generic information items from free texts. To tackle the full ST task, an IE system needs to merge information from multiple sentences in general, since the information needed to fill one template can come from multiple sentences, and thus discourse processing is needed. The full-scale ST task is considerably harder than all the other IE tasks or subtasks outlined above.
As is the case with many other natural language processing (NLP) tasks, there are two main approaches to IE, namely the knowledge-engineering approach and the learning approach.
Most early IE systems adopted the knowledge-engineering approach, where manually engineered rules were used for IE. More recently, machine learning approaches have been used for IE from semi-structured texts (Califf and Mooney, 1999; Soderland, 1999; Roth and Yih, 2001; Ciravegna, 2001; Chieu and Ng, 2002a), named entity extraction (Chieu and Ng, 2002b), template element extraction, and template relation extraction (Miller et al., 1998). These machine learning approaches have been successful for these tasks, achieving accuracy comparable to the knowledge-engineering approach. However, for the full-scale ST task of generic IE from free texts, the best reported method to date is still the knowledge-engineering approach. For example, almost all participating IE systems in MUC used the knowledge-engineering approach for the full-scale ST task. The one notable exception is the work of UMass at MUC-6 (Fisher et al., 1995). Unfortunately, their learning approach did considerably worse than the best MUC-6 systems. Soderland (1999) and Chieu and Ng (2002a) attempted machine learning approaches for a scaled-down version of the ST task, where it was assumed that the information needed to fill one template came from one sentence only.
In this paper, we present a learning approach to the full-scale ST task of extracting information from free texts. The task we tackle is considerably more complex than that of (Soderland, 1999; Chieu and Ng, 2002a), since we need to deal with merging information from multiple sentences to fill one template. We evaluated our learning approach on the MUC-4 task of extracting terrorist events from free texts. We chose the MUC-4 task since manually prepared templates required for training are available.[1] When trained and tested on the official benchmark data of MUC-4, our learning approach achieves accuracy competitive with the best MUC-4 systems, which were all built using manually engineered rules. To our knowledge, our work is the first learning-based approach to have achieved performance competitive with the knowledge-engineering approach on the full-scale ST task.

2 Task Definition

The task addressed in this paper is the Scenario Template (ST) task defined in the Fourth Message Understanding Conference (MUC-4).[2] The objective of this task is to extract information on terrorist events occurring in Latin American countries from free text documents. For example, given the input document in Figure 1, an IE system is to extract information items related to any terrorist events to fill zero or more database records, or templates. Each distinct terrorist event is to fill one template.

0  MESSAGE: ID                    TST3-MUC4-0014
1  MESSAGE: TEMPLATE              1
2  INCIDENT: DATE                 19-JAN-89
3  INCIDENT: LOCATION             PERU: SAN JUAN BAUTISTA (MUNICIPALITY)
4  INCIDENT: TYPE                 BOMBING
5  INCIDENT: STAGE OF EXECUTION   ACCOMPLISHED
6  INCIDENT: INSTRUMENT ID        "BOMB"
7  INCIDENT: INSTRUMENT TYPE      BOMB:"BOMB"
8  PERP: INCIDENT CATEGORY        TERRORIST ACT
9  PERP: INDIVIDUAL ID            "SHINING PATH MEMBERS"
10 PERP: ORGANIZATION ID          "SHINING PATH"
11 PERP: ORGANIZATION CONFIDENCE  SUSPECTED OR ACCUSED BY AUTHORITIES:"SHINING PATH"
12 PHYS TGT: ID
13 PHYS TGT: TYPE
14 PHYS TGT: NUMBER
15 PHYS TGT: FOREIGN NATION
16 PHYS TGT: EFFECT OF INCIDENT   SOME DAMAGE:"-"
17 PHYS TGT: TOTAL NUMBER
18 HUM TGT: NAME
19 HUM TGT: DESCRIPTION           "PEOPLE"
20 HUM TGT: TYPE                  CIVILIAN:"PEOPLE"
21 HUM TGT: NUMBER                2:"PEOPLE"
22 HUM TGT: FOREIGN NATION
23 HUM TGT: EFFECT OF INCIDENT    INJURY:"PEOPLE"
24 HUM TGT: TOTAL NUMBER
Figure 2: Example of a MUC-4 template
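One way to hold a template like the one in Figure 2 in memory is sketched below. This is only an illustration: the MUC-4 answer keys are distributed as formatted text, the paper does not describe ALICE's internal representation at this level, and the class, field, and constant names here are our own.

from dataclasses import dataclass, field
from typing import Dict, List

# String and text-conversion slot numbers of the MUC-4 template (see the
# slot categories discussed below).
STRING_SLOTS = {6, 9, 10, 12, 18, 19}
TEXT_CONVERSION_SLOTS = {2, 14, 17, 21, 24}

@dataclass
class Muc4Template:
    message_id: str
    template_no: int
    # Map from slot number (0-24) to the fills for that slot; a slot may be
    # empty or carry several fills.
    fills: Dict[int, List[str]] = field(default_factory=dict)

    def add_fill(self, slot: int, value: str) -> None:
        self.fills.setdefault(slot, []).append(value)

bombing = Muc4Template(message_id="TST3-MUC4-0014", template_no=1)
bombing.add_fill(2, "19-JAN-89")     # INCIDENT: DATE
bombing.add_fill(6, "BOMB")          # INCIDENT: INSTRUMENT ID
bombing.add_fill(19, "PEOPLE")       # HUM TGT: DESCRIPTION
print(bombing.fills[6])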
An example of an output template is shown in Figure 2 above. Each of the 25 fields in the template is called a slot, and the string or value that fills a slot is called a slot fill. Different slots in the MUC-4 template need to be treated differently. Besides slot 0 (MESSAGE: ID) and slot 1 (MESSAGE: TEMPLATE), the other 23 slots have to be extracted or inferred from the text document. These slots can be divided into the following categories:
String Slots. These slots are filled using strings extracted directly from the text document (slots 6, 9, 10, 12, 18, 19).
Text Conversion Slots. These slots have to be inferred from strings in the document (slots 2, 14, 17, 21, 24). For example, INCIDENT: DATE has to be inferred from temporal expressions such as "TODAY", "LAST WEEK", etc.
Set Fill Slots. This category includes the rest of the slots. The value of a set fill slot comes from a finite set of possible values. They often have to be inferred from the document.

[1] http://www.itl.nist.gov/iaui/894.02/related_projects/muc/muc_data/muc_data_index.html
[2] The full-scale IE task is called the ST task only in MUC-6 and MUC-7, when other subtasks like NE and TE tasks were defined. Here, we adopted this terminology also in describing the full-scale IE task for MUC-4.

3 The Learning Approach

Our supervised learning approach is illustrated in Figure 3.

[Figure 3: ALICE, our information extraction system]

Our system, called ALICE (Automated Learning-based Information Content Extraction), requires manually extracted templates paired with their corresponding documents that contain terrorist events for training. After the training phase, ALICE is then able to extract relevant templates from new documents, using the model learnt during training.
In the training phase, each input training document is first preprocessed through a chain of preprocessing modules. The outcome of the preprocessing is a full parse tree for each sentence, and coreference chains linking various coreferring noun phrases both within and across sentences. The core of ALICE uses supervised learning to build one classifier for each string slot. The candidates to fill a template slot are base (non-recursive) noun phrases. A noun phrase that occurs in a training document and fills a given template slot is used to generate one positive training example for the classifier of that slot. Other noun phrases in the training document are negative training examples for the classifier of that slot. The features of a training example generated from a candidate noun phrase are the verbs and other noun phrases (serving roles like agent and patient) related to it in the same sentence, as well as similar features for its coreferring noun phrases. Thus, our features for a template slot classifier encode semantic (agent and patient roles) and discourse (coreference) information.
During testing, a new document is preprocessed through the same chain of preprocessing modules. Each candidate noun phrase generates one test example, which is presented to the classifier of a template slot to determine whether the noun phrase fills that slot. A separate template manager decides whether a new template should be created to include the extracted slot fill, or whether the slot fill should go into an existing template. Our experimental results in this paper demonstrate that such features are effective in learning what to fill a template slot.
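The training-example generation just described can be sketched compactly for one string slot: every candidate base noun phrase in a training document becomes an instance, labeled positive if it matches a fill of that slot in the answer template and negative otherwise. The features below are deliberately minimal (head word and named-entity class only), standing in for the richer parse- and coreference-based features of Section 3.2 below; the function and field names are our own assumptions, not ALICE's actual data structures.

from typing import Dict, List, Tuple

def make_slot_examples(candidates: List[Dict], slot_fills: List[str]
                       ) -> List[Tuple[Dict[str, int], int]]:
    # Build (feature_dict, label) pairs for one string slot of one document.
    # candidates: one dict per base NP, e.g.
    #   {"text": "SHINING PATH MEMBERS", "head": "MEMBER", "ne": "ORG"}
    # slot_fills: the strings filling this slot in the answer template.
    gold = {f.upper() for f in slot_fills}
    examples = []
    for np in candidates:
        label = 1 if np["text"].upper() in gold else 0
        features = {"H=" + np["head"]: 1, "NE=" + np["ne"]: 1}
        examples.append((features, label))
    return examples

candidates = [
    {"text": "SHINING PATH MEMBERS", "head": "MEMBER", "ne": "ORG"},
    {"text": "THE ATTACK", "head": "ATTACK", "ne": "NONE"},
]
print(make_slot_examples(candidates, ["SHINING PATH MEMBERS"]))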
They include sentence segmentation (Ratnaparkhi, 1998), part-of-speech tagging (Charniak et al., 1993), named entity recognition (Chieu and Ng, 2002b), full parsing (Collins, 1999), and coreference resolution (Soon et al., 2001). Each module performs at or near state-of-the-art accuracy, but errors are unavoidable, and later modules in the preprocessing chain have to deal with errors made by the previous modules. 3.2 Features in Training and Test Examples As mentioned earlier, the features of an example are generated based on a base noun phrase (denoted as baseNP), which is a candidate for filling a template slot. While most strings that fill a string slot are base noun phrases, this is not always the case. For instance, consider the two examples in Figure 4. In the first example, "BOMB" should fill the string slot INCIDENT: INSTRUMENT ID, while in the second example, "FMLN" should fill the string slot PERP: ORGANIZATION ID. However, "BOMB" is itself not a baseNP (the baseNP is "A BOMB EXPLOSION"). Similarly for "FMLN". As such, a string that fills a template slot but is itself not a baseNP (like "BOMB") is also used to generate a training example, by using its smallest encompassing noun phrase (like "A BOMB EXPLOSION") to generate the training example features. During training, a list of such words is compiled for slots 6 and 10 from the training templates. During testing, these words are also used as candidates for generating test examples for slots 6 and 10, in addition to base NPs.
(1) ONE PERSON WAS KILLED TONIGHT AS THE RESULT OF A BOMB EXPLOSION IN SAN SALVADOR. (2) FORTUNATELY, NO CASUALTIES WERE REPORTED AS A RESULT OF THIS INCIDENT, FOR WHICH THE FMLN GUERRILLAS ARE BEING HELD RESPONSIBLE.
Figure 4: Sentences illustrating string slots that cannot be filled by baseNPs.
(1) MEMBERS OF THAT SECURITY GROUP ARE COMBING THE AREA TO DETERMINE THE FINAL OUTCOME OF THE FIGHTING. (2) A BOMB WAS THROWN AT THE HOUSE OF FREDEMO CANDIDATE FOR DEPUTY MIGUEL ANGEL BARTRA BY TERRORISTS.
Figure 5: Sample sentences for the illustration of features
The features of an example are derived from the treebank-style parse tree output by an implementation of Collins' parser (Collins, 1999). In particular, we traverse the full parse tree to determine the verbs, agents, patients, and indirect objects related to a noun phrase candidate. While a machine learning approach is used in (Gildea and Jurafsky, 2000) to determine general semantic roles, we used a simple rule-based traversal of the parse tree instead, which could also reliably determine the generic agent and patient role of a sentence, and this suffices for our current purpose. Specifically, for a given noun phrase candidate, the following groups of features are used: Verb of Agent NP (VAg) When the candidate NP is an agent in a sentence, each of its associated verbs is a VAg feature. For example, in sentence (1) of Figure 5, if the candidate is MEMBERS, then its VAg features are COMB and DETERMINE. Verb of Patient NP (VPa) When the candidate NP is a patient in a sentence, each of its associated verbs is a VPa feature. For example, in sentence (2) of Figure 5, if the candidate is BOMB, then its VPa feature is THROW. Verb-Preposition of NP-in-PP (V-Prep) When the candidate NP is the NP in a prepositional phrase PP, then this feature is the main verb and the preposition of PP. For example, in sentence (2) of Figure 5, if the candidate is HOUSE, its V-Prep feature is THROW-AT. VPa and related NPs/PPs (VPaRel) If the candidate NP is a patient in a sentence, each of its VPa may have its own agents (Ag) and prepositional phrases (PrepNP).
In this case, the tuples (VPa, Ag) and (VPa, Prep-NP) are used as features. For example, in “GUARDS WERE SHOT TO DEATH”, if  is GUARDS, then its VPa SHOOT, and the prepositional phrase TO-DEATH form the feature (SHOOT, TO-DEATH). VAg and related NPs/PPs (VAgRel) This is similar to VPa above, but for VAg. V-Prep and related NPs (V-PrepRel) When  is the NP in a prepositional phrase PP, then the main verb (V) may have its own agents (Ag) and patients (Pa). In this case, the tuples (Ag, V-Prep) and (V-Prep, Pa) are used as features. For example, HOUSE in sentence (2) of Figure 5 will have the features (TERRORIST, THROW-AT) and (THROWAT, BOMB). Noun-Preposition (N-Prep) This feature aims at capturing information in phrases such as “MURDER OF THE PRIESTS”. If  is PRIESTS, this feature will be MURDER-OF. Head Word (H) The head word of each  is also used as a feature. In a parse tree, there is a head word at each tree node. In cases where a phrase does not fit into a parse tree node, the last word of the phrase is used as the head word. This feature is useful as the system has no information of the semantic class of  . From the head word, the system can get some clue to help decide if  is a possible candidate for a slot. For example, an  with head word PEASANT is more likely to fill the human target slot compared to another  with head word CLASH. Named Entity Class (NE) The named entity class of  is used as a feature. Real Head (RH) For a phrase that does not fit into a parse node, the head word feature is taken to be the last word of the phrase. The real head word of its encompassing parse node is used as another feature. For example, in the NP “FMLN GUERRILLAS”, “FMLN” is a positive example for slot 10, with head word “FMLN” and real head “GUERRILLA”. Coreference features Coreference chains found by our coreference resolution module based on decision tree learning are used to determine the noun phrases that corefer with  . In particular, we use the two noun phrases   and   , where   (   ) is the noun phrase that corefers with  and immediately precedes (follows)  . If such a preceding (or following) noun phrase  exists, we generate the following features based on   : VAg, VPa, and N-Prep. To give an idea of the informative features used in the classifier of a slot, we rank the features used for a slot classifier according to their correlation metric values (Chieu and Ng, 2002a), where informative features are ranked higher. Table 1 shows the top-ranking features for a few feature groups and template slots. The bracketed number behind each feature indicates the rank of this feature for that slot classifier, ordered by the correlation metric value. We observed that certain feature groups are more useful for certain slots. For example, DIE is the top VAg verb for the human target slot, and is ranked 12 among all features used for the human target slot. On the other hand, VAg is so unimportant for the physical target slot that the top VAg verb is due to a preprocessing error that made MONSERRAT a verb. 3.3 Supervised Learning Algorithms We evaluated four supervised learning algorithms. Maximum Entropy Classifier (Alice-ME) The maximum entropy (ME) framework is a recent learning approach which has been successfully used in various NLP tasks such as sentence segmentation, part-of-speech tagging, and parsing (Ratnaparkhi, 1998). However, to our knowledge, ours is the first research effort to have applied ME learning to the full-scale ST task. 
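To make the per-slot learning setup concrete before the individual learners are described, the sketch below assembles feature dictionaries in the spirit of Section 3.2 (VAg, VPa, V-Prep, head word, NE class) and trains one binary classifier per string slot. It uses scikit-learn's logistic regression as a stand-in for a maximum entropy learner; the candidate representation and helper functions are assumptions, not the authors' implementation.

```python
# Hedged sketch: one binary classifier per string slot, trained on feature
# dictionaries built from a candidate NP and its precomputed relations.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def candidate_features(cand):
    """Turn one candidate NP (with precomputed relations) into a feature dict."""
    feats = {"H=" + cand["head"]: 1, "NE=" + cand["ne_class"]: 1}
    for verb in cand.get("agent_of", []):    # VAg features
        feats["VAg=" + verb] = 1
    for verb in cand.get("patient_of", []):  # VPa features
        feats["VPa=" + verb] = 1
    for vp in cand.get("v_prep", []):        # V-Prep features, e.g. "THROW-AT"
        feats["V-Prep=" + vp] = 1
    return feats

def train_slot_classifier(candidates, slot):
    """candidates: list of dicts, each with a 'fills' set naming the slots it fills."""
    X = [candidate_features(c) for c in candidates]
    y = [1 if slot in c["fills"] else 0 for c in candidates]
    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000)  # maxent-like linear model
    clf.fit(vec.fit_transform(X), y)
    return vec, clf

def fills_slot(vec, clf, cand):
    """At test time, a candidate fills the slot if its classifier says so."""
    return clf.predict(vec.transform([candidate_features(cand)]))[0] == 1
```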
We used the implementation of maximum entropy modeling from the opennlp.maxent package.3. Support Vector Machine (Alice-SVM) The Support Vector Machine (SVM) (Vapnik, 1995) has been successfully used in many recent applications such as text categorization and handwritten digit recognition. The learning algorithm finds a hyperplane that separates the training data with the largest margin. We used a linear kernel for all our experiments. 3http://maxent.sourceforge.net Naive Bayes (Alice-NB) The Naive Bayes (NB) algorithm (Duda and Hart, 1973) assumes the independence of features given the class and assigns a test example to the class which has the highest posterior probability. Add-one smoothing was used. Decision Tree (Alice-DT) The decision tree (DT) algorithm (Quinlan, 1993) partitions training examples using the feature with the highest information gain. It repeats this process recursively for each partition until all examples in each partition belong to one class. We used the WEKA package4 for the implementation of SVM, NB, and DT algorithms. A feature cutoff  is used for each algorithm: features occurring less than  times are rejected. For all experiments,  is set to 3. For ME and SVM, no other feature selection is applied. For NB and DT, the top 100 features as determined by chi-square are selected. While not trying to do a serious comparison of machine learning algorithms, ME and SVM seem to be able to perform well without feature selection, whereas NB and DT require some form of feature selection in order to perform reasonably well. 3.4 Template Manager As each sentence is processed, phrases classified as positive for any of the string slots are sent to the Template Manager (TM), which will decide if a new template should be created when it receives a new slot fill. The system first attempts to attach a date and a location to each slot fill  . Dates and locations are first attached to their syntactically nearest verb, by traversing the parse tree. Then, for each string fill  , we search its syntactically nearest verb  in the same manner and assign the date and location attached to  to  . When a new slot fill is found, the Template Manager will decide to start a new template if one of the following conditions is true: Date The date attached to the current slot fill is different from the date of the current template. Location The location attached to the current slot fill is not compatible with the location of the current template (one location does not contain the other). 4http://www.cs.waikato.ac.nz/ml/weka Slot VAg VPa V-Prep N-Prep Human Target DIE(12) KILL(2) IDENTIFY-AS(47) MURDER-OF(3) Perpetrator Individual KIDNAP(5) IMPLICATE(17) ISSUE-FOR(73) WARRANT-FOR(64) Physical Target MONSERRAT(420) DESTROY(1) THROW-AT(32) ATTACK-ON(11) Perpetrator Organization KIDNAP(16) BLAME(25) SUSPEND-WITH(87) GUERRILLA-OF(31) Instrument ID EXPLODE(4) PLACE(5) EQUIP-WITH(31) EXPLOSION-OF(17) Table 1: The top-ranking feature for each group of features and the classifier of a slot Incident Type Seed Words ATTACK JESUIT, MURDER, KILL, ATTACK BOMBING BOMB, EXPLOS, DYNAMIT, EXPLOD, INJUR KIDNAPPING KIDNAP, ELN, RELEAS Table 2: Stemmed seed words for each incident type This is determined by using location lists provided by the MUC-4 conference, which specify whether one location is contained in another. An entry in this list has the format of “PLACE-NAME1:PLACENAME2”, where PLACE-NAME2 is contained in PLACE-NAME1 (e.g., CUBA: HAVANA (CITY)). 
Seed Word The sentence of the current slot fill contains a seed word for a different incident type. A number of seed words are automatically learned for each of the incident types ATTACK, BOMBING, and KIDNAPPING. They are automatically derived based on the correlation metric value used in (Chieu and Ng, 2002a). For the remaining incident types, there are too few incidents in the training data for seed words to be collected. The seeds words used are shown in Table 2. 3.5 Enriching Templates In the last stage before output, the template content is further enriched in the following manner: Removal of redundant slot fills For each slot in the template, there might be several slot fills referring to the same thing. For example, for HUM TGT: DESCRIPTION, the system might have found both “PRIESTS” and “JESUIT PRIESTS”. A slot fill that is a substring of another slot fill will be removed from the template. Effect/Confidence and Type Classifiers are also trained for effect and confidence slots 11, 16, and 23 (ES slots), as well as type slots 7, 13, and 20 (TS slots). ES slots used exactly the same features as string slots, while TS slots used only head words and adjectives as features. For such slots, each entry refers to another slot fill. For example, slot 23 may contain the entry “DEATH” : “PRIESTS”, where “PRIESTS” fills slot 19. During training, each training example is a fill of a reference slot (e.g., for slot 23, the reference slots are slot 18 and 19). For slot 23, for example, each instance will have a class such as DEATH or INJURY, or if there is no entry in slot 23, UNKNOWN EFFECT. During testing, slot fills of reference slots will be classified to determine if they should have an entry in an ES or a TS slot. Date and Location. If the system is unable to fill the DATE or LOCATION slot of a template, it will use as default value the date and country of the city in the dateline of the document. Other Slots. The remaining slots are filled with default values. For example, slot 5 has the default value “ACCOMPLISHED”, and slot 8 “TERRORIST ACT” (except when the perpetrator contains strings such as “GOVERNMENT”, in which case it will be changed to “STATE-SPONSORED VIOLENCE”). Slot 15, 17, 22, and 24 are always left unfilled. 4 Evaluation There are 1,300 training documents, of which 700 are relevant (i.e., have one or more event templates). There are two official test sets, i.e., TST3 and TST4, containing 100 documents each. We trained our system ALICE using the 700 documents with relevant templates, and then tested it on the two official test sets. The output templates were scored using the scorer provided on the official website. The accuracy figures of ALICE (with different learning algorithms) on string slots and all slots are listed in Table 3 and Table 4, respectively. Accuracy is measured in terms of recall (R), precision (P), and F-measure (F). We also list in the two tables the accuracy figures of the top 7 (out of a total of 17) systems that participated in MUC-4. 
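Stepping back briefly to the template-manager decision of Section 3.4: the three conditions (a clashing date, an incompatible location, or a seed word of a different incident type) can be read as a small predicate over the current slot fill and the open template. The sketch below is a hypothetical rendering; the record layout, the contains oracle built from the MUC-4 location lists, and the token handling are ours.

```python
# Hedged sketch of the template-manager decision: start a new template when
# the date clashes, the location is incompatible, or the sentence of the
# current fill contains a seed word of a different incident type.

SEED_WORDS = {  # stemmed seed words per incident type, as in Table 2
    "ATTACK": ["JESUIT", "MURDER", "KILL", "ATTACK"],
    "BOMBING": ["BOMB", "EXPLOS", "DYNAMIT", "EXPLOD", "INJUR"],
    "KIDNAPPING": ["KIDNAP", "ELN", "RELEAS"],
}

def location_compatible(loc_a, loc_b, contains):
    """contains(x, y) is true when place x contains place y (assumed oracle)."""
    return loc_a == loc_b or contains(loc_a, loc_b) or contains(loc_b, loc_a)

def needs_new_template(fill, template, contains):
    """fill: {'date', 'location', 'sentence'}; template: {'date', 'location', 'incident_type'}."""
    if fill["date"] and template["date"] and fill["date"] != template["date"]:
        return True
    if (fill["location"] and template["location"]
            and not location_compatible(fill["location"], template["location"], contains)):
        return True
    tokens = fill["sentence"].upper().split()
    for incident_type, seeds in SEED_WORDS.items():
        if incident_type != template["incident_type"] and any(
                tok.startswith(seed) for tok in tokens for seed in seeds):
            return True
    return False
```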
The accuracy figures in the two tables are obtained by running the official scorer on the output templates of ALICE, and those of the MUC-4 participating systems (available TST3 TST4 R P F R P F GE 55 54 54 GE 60 54 57 GE-CMU 43 52 47 GE-CMU 48 52 50 Alice-ME 41 51 45 Alice-ME 44 49 46 Alice-SVM 41 45 43 Alice-SVM 45 44 44 SRI 37 51 43 NYU 42 45 43 UMASS 36 49 42 SRI 39 49 43 Alice-DT 31 51 39 Alice-DT 36 50 42 NYU 35 43 39 UMASS 42 42 42 Alice-NB 41 30 35 Alice-NB 51 32 39 UMICH 32 36 34 BBN 35 42 38 BBN 22 40 28 UMICH 32 34 33 Table 3: Accuracy of string slots on the TST3 and TST4 test set TST3 TST4 R P F R P F GE 58 54 56 GE 62 53 57 GE-CMU 48 55 51 GE-CMU 53 53 53 UMASS 45 56 50 SRI 44 51 47 Alice-ME 46 51 48 Alice-ME 46 46 46 SRI 43 54 48 NYU 46 46 46 Alice-SVM 45 46 45 UMASS 47 45 46 Alice-DT 38 53 44 Alice-SVM 47 40 43 NYU 40 46 43 Alice-DT 41 46 43 UMICH 40 39 39 BBN 40 43 41 Alice-NB 45 34 39 Alice-NB 52 33 40 BBN 29 43 35 UMICH 36 34 35 Table 4: Accuracy of all slots on the TST3 and TST4 test set on the official web site). The same history file downloaded from the official web site is uniformly used for scoring the output templates of all systems (the history file contains the arbitration decisions for ambiguous cases). We conducted statistical significance test, using the approximate randomization method adopted in MUC-4. Table 5 shows the systems that are not significantly different from Alice-ME. Our system ALICE-ME, using a learning approach, is able to achieve accuracy competitive to the best of the MUC-4 participating systems, which were all built using manually engineered rules. We also observed that ME and SVM, the more recent machine learning algorithms, performed better than DT and NB. Full Parsing. To illustrate the benefit of full parsing, we conducted experiments using a subset of features, with and without full parsing. We used ME as the learning algorithm in these experiments. The results on string slots are summarized in Table 6. The Test set/slots Systems in the same group TST3/string GE-CMU, SRI, UMASS, NYU TST4/string GE-CMU, Alice-SVM, NYU, SRI, Alice-DT, UMASS TST3/all GE-CMU, UMASS, SRI, NYU TST4/all SRI, NYU, UMASS, Alice-SVM, Alice-DT, BBN Table 5: Systems whose F-measures are not significantly different from Alice-ME at the 0.10 significance level with 0.99 confidence TST3 TST4 System R P F R P F H + NE 23 44 30 18 30 23 H + NE + V (w/o parsing) 26 42 32 28 40 33 H + NE + V (with parsing) 38 49 43 40 45 42 Table 6: Accuracy of string slots with and without full parsing baseline system used only two features, head word (H) and named entity class (NE). Next, we added three features, VAg, VPa, and V-Prep. Without full parsing, these verbs were obtained based on the immediately preceding (or following) verb of a noun phrase, and the voice of the verb. With full parsing, these verbs were obtained based on traversing the full parse tree. The results indicate that verb features contribute to the performance of the system, even without full parsing. With full parsing, verbs can be determined more accurately, leading to better overall performance. 5 Discussion Although the best MUC-4 participating systems, GE/GE-CMU, still outperform ALICE-ME, it must be noted that for GE, “10 1/2 person months” were spent on MUC-4 using the GE NLTOOLSET , after spending “15 person months” on MUC-3 (Rau et al., 1992). With a learning approach, IE systems are more portable across domains. 
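The recall, precision, and F-measure figures above follow the usual definitions; as a reminder of the arithmetic, here is a small sketch that counts exact slot-fill matches only. The official MUC scorer additionally handles partial credit and template alignment/arbitration, so this is a simplification.

```python
# Simplified arithmetic behind the R/P/F columns of Tables 3-6.

def muc_score(system_fills, key_fills):
    """Both arguments are sets of (doc_id, slot, fill) tuples; template
    alignment is assumed to have been resolved already."""
    correct = len(system_fills & key_fills)
    precision = correct / len(system_fills) if system_fills else 0.0
    recall = correct / len(key_fills) if key_fills else 0.0
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return recall, precision, f

# With recall = precision = 0.46, F is also 0.46, as in the TST4 all-slots
# row for Alice-ME.
```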
Not all occurrences of a string in a document that match a slot fill of a template provide good positive training examples. For example, in the same document, there might be the following sentences “THE MNR REPORTS THE KIDNAPPING OF OQUELI COLINDRES...”, followed by “OQUELI COLINDRES ARRIVED IN GUATEMALA ON 11 JANUARY”. In this case, only the first occurrence of OQUELI COLINDRES should be used as a positive example for the human target slot. However, ALICE does not have access to such information, since the MUC-4 training documents are not annotated (i.e., only templates are provided, but the text strings in a document are not marked). Thus, ALICE currently uses all occurrences of “OQUELI COLINDRES” as positive training examples, which introduces noise in the training data. We believe that annotating the string occurrences in training documents will provide higher quality training data for the learning approach and hence further improve accuracy. Although part-of-speech taggers often boast of accuracy over 95%, the errors they make can be fatal to the parsing of sentences. For example, they often tend to confuse “VBN” with “VBD”, which could change the entire parse tree. The MUC-4 corpus was provided as uppercase text, and this also has a negative impact on the named entity recognizer and part-of-speech tagger, which both make use of case information. Learning approaches have been shown to perform on par or even outperform knowledge-engineering approaches in many NLP tasks. However, the full-scale scenario template IE task was still dominated by knowledge-engineering approaches. In this paper, we demonstrate that using both stateof-art learning algorithms and full parsing, learning approaches can rival knowledge-engineering ones, bringing us a step closer to building full-scale IE systems in a domain-independent fashion with stateof-the-art accuracy. Acknowledgements We thank Kian Ming Adam Chai for the implementation of the full parser. References M. E. Califf and R. J. Mooney. 1999. Relational learning of pattern-match rules for information extraction. In Proceedings of AAAI99, pages 328–334. E. Charniak, C. Hendrickson, N. Jacobson, and M. Perkowitz. 1993. Equations for part-of-speech tagging. In Proceedings of AAAI93, pages 784–789. H. L. Chieu and H. T. Ng. 2002a. A maximum entropy approach to information extraction from semistructured and free text. In Proceedings of AAAI02, pages 786–791. H. L. Chieu and H. T. Ng. 2002b. Named entity recognition: A maximum entropy approach using global information. In Proceedings of COLING02, pages 190– 196. F. Ciravegna. 2001. Adaptive information extraction from text by rule induction and generalisation. In Proceedings of IJCAI01, pages 1251–1256. M. Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania. R. O. Duda and P. E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York. D. Fisher, S. Soderland, J. McCarthy, F. Feng, and W. Lehnert. 1995. Description of the UMass system as used for MUC-6. In Proceedings of MUC-6, pages 127–140. D. Gildea and D. Jurafsky. 2000. Automatic labelling of semantic roles. In Proceedings of ACL00, pages 512– 520. S. Miller, M. Crystal, H. Fox, L. Ramshaw, R. Schwartz, R. Stone, R. Weischedel, and the Annotation Group. 1998. Algorithms that learn to extract information BBN: Description of the SIFT system as used for MUC-7. In Proceedings of MUC-7. J. R. Quinlan. 1993. 
C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco. A. Ratnaparkhi. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania. L. Rau, G. Krupka, and P. Jacobs. 1992. GE NLTOOLSET: MUC-4 test results and analysis. In Proceedings of MUC-4, pages 94–99. D. Roth and W. Yih. 2001. Relational learning via propositional algorithms: An information extraction case study. In Proceedings of IJCAI01, pages 1257–1263. S. Soderland. 1999. Learning information extraction rules for semi-structured and free text. Machine Learning, 34(1/2/3):233–272. W. M. Soon, H. T. Ng, and D. C. Y. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521– 544. V. N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, New York.
2003
28
An Improved Extraction Pattern Representation Model for Automatic IE Pattern Acquisition Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman Department of Computer Science New York University 715 Broadway, 7th Floor, New York, NY 10003 USA sudo,sekine,grishman  @cs.nyu.edu Abstract Several approaches have been described for the automatic unsupervised acquisition of patterns for information extraction. Each approach is based on a particular model for the patterns to be acquired, such as a predicate-argument structure or a dependency chain. The effect of these alternative models has not been previously studied. In this paper, we compare the prior models and introduce a new model, the Subtree model, based on arbitrary subtrees of dependency trees. We describe a discovery procedure for this model and demonstrate experimentally an improvement in recall using Subtree patterns. 1 Introduction Information Extraction (IE) is the process of identifying events or actions of interest and their participating entities from a text. As the field of IE has developed, the focus of study has moved towards automatic knowledge acquisition for information extraction, including domain-specific lexicons (Riloff, 1993; Riloff and Jones, 1999) and extraction patterns (Riloff, 1996; Yangarber et al., 2000; Sudo et al., 2001). In particular, methods have recently emerged for the acquisition of event extraction patterns without corpus annotation in view of the cost of manual labor for annotation. However, there has been little study of alternative representation models of extraction patterns for unsupervised acquisition. In the prior work on extraction pattern acquisition, the representation model of the patterns was based on a fixed set of pattern templates (Riloff, 1996), or predicate-argument relations, such as subject-verb, and object-verb (Yangarber et al., 2000). The model of our previous work (Sudo et al., 2001) was based on the paths from predicate nodes in dependency trees. In this paper, we discuss the limitations of prior extraction pattern representation models in relation to their ability to capture the participating entities in scenarios. We present an alternative model based on subtrees of dependency trees, so as to extract entities beyond direct predicate-argument relations. An evaluation on scenario-template tasks shows that the proposed Subtree model outperforms the previous models. Section 2 describes the Subtree model for extraction pattern representation. Section 3 shows the method for automatic acquisition. Section 4 gives the experimental results of the comparison to other methods and Section 5 presents an analysis of these results. Finally, Section 6 provides some concluding remarks and perspective on future research. 2 Subtree model Our research on improved representation models for extraction patterns is motivated by the limitations of the prior extraction pattern representations. In this section, we review two of the previous models in detail, namely the Predicate-Argument model (Yangarber et al., 2000) and the Chain model (Sudo et al., 2001). The main cause of difficulty in finding entities by extraction patterns is the fact that the participating entities can appear not only as an argument of the predicate that describes the event type, but also in other places within the sentence or in the prior text. In the MUC-3 terrorism scenario, WEAPON entities occur in many different relations to event predicates in the documents. 
Even if WEAPON entities appear in the same sentence with the event predicate, they rarely serve as a direct argument of such predicates. (e.g., “One person was killed as the result of a bomb explosion.”) Predicate-Argument model The PredicateArgument model is based on a direct syntactic relation between a predicate and its arguments1 (Yangarber et al., 2000). In general, a predicate provides a strong context for its arguments, which leads to good accuracy. However, this model has two major limitations in terms of its coverage, clausal boundaries and embedded entities inside a predicate’s arguments. Figure 12 shows an example of an extraction task in the terrorism domain where the event template consists of perpetrator, date, location and victim. With the extraction patterns based on the PredicateArgument model, only perpetrator and victim can be extracted. The location (downtown Jerusalem) is embedded as a modifier of the noun (heart) within the prepositional phrase, which is an adjunct of the main predicate, triggered3. Furthermore, it is not clear whether the extracted entities are related to the same event, because of the clausal boundaries.4 1Since the case marking for a nominalized predicate is significantly different from the verbal predicate, which makes it hard to regularize the nominalized predicates automatically, the constraint for the Predicate-Argument model requires the root node to be a verbal predicate. 2Throughout this paper, extraction patterns are defined as one or more word classes with their context in the dependency tree, where the actual word matched with the class is associated to one of the slots in the template. The notation of the patterns in this paper is based on a dependency tree where ( (  -  )..(  -  )) denotes is the head, and, for each in  ,  is its argument and the relation between and  is labeled with   . The labels introduced in this paper are SBJ (subject), OBJ (object), ADV (adverbial adjunct), REL (relative), APPOS (apposition) and prepositions (IN, OF, etc.). Also, we assume that the order of the arguments does not matter. Symbols beginning with C- represent NE (Named Entity) types. 3Yangarber refers this as a noun phrase pattern in (Yangarber et al., 2000). 4This is the problem of merging the result of entity extraction. Most IE systems have hard-coded inference rules, such Chain model Our previous work, the Chain model (Sudo et al., 2001)5 attempts to remedy the limitations of the Predicate-Argument model. The extraction patterns generated by the Chain model are any chain-shaped paths in the dependency tree.6 Thus it successfully avoids the clausal boundary and embedded entity limitation. We reported a 5% gain in recall at the same precision level in the MUC-6 management succession task compared to the Predicate-Argument model. However, the Chain model also has its own weakness in terms of accuracy due to the lack of context. For example, in Figure 1(c), (triggered (  C-DATE  ADV)) is needed to extract the date entity. However, the same pattern is likely to be applied to texts in other domains as well, such as “The Mexican peso was devalued and triggered a national financial crisis last week.” Subtree model The Subtree model is a generalization of previous models, such that any subtree of a dependency tree in the source sentence can be regarded as an extraction pattern candidate. As shown in Figure 1(d), the Subtree model, by its definition, contains all the patterns permitted by either the Predicate-Argument model or the Chain model. 
It is also capable of providing more relevant context, such as (triggered (explosion-OBJ)(  C-DATE  -ADV)). The obvious advantage of the Subtree model is the flexibility it affords in creating suitable patterns, spanning multiple levels and multiple branches. Pattern coverage is further improved by relaxing the constraint that the root of the pattern tree be a predicate node. However, this flexibility can also be a disadvantage, since it means that a very large number of pattern candidates — all possible subtrees of the dependency tree of each sentence in the corpus — must be considered. An efficient procedure is required to select the appropriate patterns from among the candidates. Also, as the number of pattern candidates increases, the amount of noise and complexity inas “triggering an explosion is related to killing or injuring and therefore constitutes one terrorism action.” 5Originally we called it “Tree-Based Representation of Patterns”. We renamed it to avoid confusion with the proposed approach that is also based on dependency trees. 6(Sudo et al., 2001) required the root node of the chain to be a verbal predicate, but we have relaxed that constraint for our experiments. (a) JERUSALEM, March 21 – A smiling Palestinian suicide bomber triggered a massive explosion in the heavily policed heart of downtown Jerusalem today, killing himself and three other people and injuring scores. (b) (c) Predicate-Argument Chain model (triggered (  C-PERSON  -SBJ)(explosion-OBJ)(  C-DATE  -ADV)) (triggered (  C-PERSON  -SBJ)) (killing (  C-PERSON  -OBJ)) (triggered (heart-IN (  C-LOCATION  -OF))) (injuring (  C-PERSON  -OBJ)) (triggered (killing-ADV (  C-PERSON  -OBJ))) (triggered (injuring-ADV (  C-PERSON  -OBJ))) (triggered (  C-DATE  -ADV)) (d) Subtree model (triggered (  C-PERSON  -SBJ)(explosion-OBJ)) (triggered (explosion-OBJ)(  C-DATE  -ADV)) (killing (  C-PERSON  -OBJ)) (triggered (  C-DATE  -ADV)(killing-ADV)) (injuring (  C-PERSON  -OBJ)) (triggered (  C-DATE  -ADV)(killing-ADV(  C-PERSON  -OBJ))) (triggered (heart-IN (  C-LOCATION  -OF))) (triggered (  C-DATE  -ADV)(injuring-ADV)) (triggered (killing-ADV (  C-PERSON  -OBJ))) (triggered (explosion-OBJ)(killing (  C-PERSON  -OBJ))) (triggered (  C-DATE  -ADV)) ... Figure 1: (a) Example sentence on terrorism scenario. (b) Dependency Tree of the example sentence (The entities to be extracted are shaded in the tree). (c) Predicate-Argument patterns and Chain-model patterns that contribute to the extraction task. (d) Subtree model patterns that contribute the extraction task. creases. In particular, many of the pattern candidates overlap one another. For a given set of extraction patterns, if pattern A subsumes pattern B (say, A is (shoot (  C-PERSON  -OBJ)(to death)) and B is (shoot (  CPERSON  -OBJ))), there is no added contribution for extraction by pattern matching with A (since all the matches with pattern A must be covered with pattern B). Therefore, we need to pay special attention to the ranking function for pattern candidates, so that patterns with more relevant contexts get higher score. 3 Acquisition Method This section discusses an automatic procedure to learn extraction patterns. Given a narrative description of the scenario and a set of source documents, the following three stages obtain the relevant extraction patterns for the scenario; preprocessing, document retrieval, and ranking pattern candidates. 
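Before walking through the three stages, it helps to fix a concrete representation. The sketch below shows one hypothetical way to encode dependency nodes and to test whether a Subtree-model pattern matches at a node; the same test also expresses the subsumption relation between overlapping patterns discussed above. The Node class and the examples are ours, not the authors' code.

```python
# Hedged sketch: a dependency node and a recursive test of whether a
# Subtree-model pattern can be embedded at that node. Patterns and trees
# share the same shape: a label plus a list of (relation, child) pairs.

class Node:
    def __init__(self, label, children=None):
        self.label = label              # lexical item or NE class, e.g. "C-DATE"
        self.children = children or []  # list of (relation, Node), e.g. ("OBJ", ...)

def matches(pattern, node):
    """True if `pattern` can be embedded at `node`, preserving labels and relations."""
    if pattern.label != node.label:
        return False
    for rel, p_child in pattern.children:
        if not any(rel == n_rel and matches(p_child, n_child)
                   for n_rel, n_child in node.children):
            return False
    return True

def subsumes(a, b):
    """Pattern a subsumes pattern b when b (the pattern with less context)
    can be embedded in a, so every match of a is also a match of b."""
    return matches(b, a)

# (triggered (explosion-OBJ)(<C-DATE>-ADV)) subsumes (triggered (<C-DATE>-ADV)):
bigger = Node("triggered", [("OBJ", Node("explosion")), ("ADV", Node("C-DATE"))])
smaller = Node("triggered", [("ADV", Node("C-DATE"))])
assert subsumes(bigger, smaller)
```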
3.1 Stage 1: Preprocessing Morphological analysis and Named Entities (NE) tagging are performed at this stage.7 Then all the sentences are converted into dependency trees by an appropriate dependency analyzer.8 7We used Extended NE hierarchy based on (Sekine et al., 2002), which is structured and contains 150 classes. 8Any degree of detail can be chosen through entire procedure, from lexicalized dependency to chunk-level dependency. For the following experiment in Japanese, we define a node in the dependency tree as a bunsetsu, phrasal unit. The NE tagging replaces named entities by their class, so the resulting dependency trees contain some NE class names as leaf nodes. This is crucial to identifying common patterns, and to applying these patterns to new text. 3.2 Stage 2: Document Retrieval The procedure retrieves a set of documents that describe the events of the scenario of interest, the relevant document set. A set of narrative sentences describing the scenario is selected to create a query for the retrieval. Any IR system of sufficient accuracy can be used at this stage. For this experiment, we retrieved the documents using CRL's stochastic-model-based IR system (Murata et al., 1999). 3.3 Stage 3: Ranking Pattern Candidates Given the dependency trees of parsed sentences in the relevant document set, all the possible subtrees can be candidates for extraction patterns. The ranking of pattern candidates is inspired by TF/IDF scoring in IR literature; a pattern is more relevant when it appears more in the relevant document set and less across the entire collection of source documents. The right-most expansion base subtree discovery algorithm (Abe et al., 2002) was implemented to calculate term frequency (raw frequency of a pattern) and document frequency (the number of documents where a pattern appears) for each pattern candidate. The algorithm finds the subtrees appearing more frequently than a given threshold by constructing the subtrees level by level, while keeping track of their occurrence in the corpus. Thus, it efficiently avoids the construction of duplicate patterns and runs almost linearly in the total size of the maximal tree patterns contained in the corpus. The following ranking function was used to rank each pattern candidate. The score of subtree t, score(t), is score(t) = ( f(t) / Σ_{u ∈ T(D_R)} f(u) ) · ( log( N / df(t) ) )^β (1) where f(t) is the number of times that subtree t appears across the documents in the relevant document set, D_R. T(D_R) is the set of subtrees that appear in D_R. df(t) is the number of documents in the collection containing subtree t, and N is the total number of documents in the collection. The first term roughly corresponds to the term frequency and the second term to the inverse document frequency in TF/IDF scoring. β is used to control the weight on the IDF portion of this scoring function. Figure 2: Comparison of Extraction Performance with Different β (precision-recall curves of SUBT for β=1, β=3, and β=8; Recall (%) vs. Precision (%)). 3.4 Parameter Tuning for Ranking Function The β in Equation (1) is used to parameterize the weight on the IDF portion of the ranking function. As we pointed out in Section 2, we need to pay special attention to overlapping patterns; the more relevant context a pattern contains, the higher it should be ranked. The weight β serves to focus on how specific a pattern is to a given scenario.
Therefore, for high % value, (triggered (explosion-OBJ)(  C-DATE  ADV)) is ranked higher than (triggered (  C-DATE  ADV)) in the terrorism scenario, for example. Figure 2 shows the improvement of the extraction performance by tuning % on the entity extraction task which will be discussed in the next section. For unsupervised tuning of % , we used a pseudoextraction task, instead of using held-out data for supervised learning. We used an unsupervised version of the text classification task to optimize % , assuming that all the documents retrieved by the IR system are relevant to the scenario and the pattern set that performs well on the text classification task also works well on the entity extraction task. The unsupervised text classification task is to measure how close a pattern matching system, given a set of extraction patterns, simulates the document retrieval of the same IR system as in the previous sub-section. The % value is optimized so that the cumulative performance of the precision-recall curve over the entire range of recall for the text classification task is maximized. The document set for text classification is composed of the documents retrieved by the same IR system as in Section 3.2 plus the same number of documents picked up randomly, where all the documents are taken from a different document set from the one used for pattern learning. The pattern matching system, given a set of extraction patterns, classifies a document as retrieved if any of the patterns match any portion of the document, and as random otherwise. Thus, we can get the performance of text classification of the pattern matching system in the form of a precision-recall curve, without any supervision. Next, the area of the precision-recall curve is computed by connecting every point in the precision-recall curve from 0 to the maximum recall the pattern matching system reached, and we compare the area for each possible % value. Finally, the % value which gets the greatest area under the precision-recall curve is used for extraction. The comparison to the same procedure based on the precision-recall curve of the actual extraction performance shows that this tuning has high correlation with the extraction performance (Spearman correlation coefficient    with 2% confidence). 3.5 Filtering For efficiency and to eliminate low-frequency noise, we filtered out the pattern candidates that appear in less than 3 documents throughout the entire collection. Also, since the patterns with too much context are unlikely to match with new text, we added another filtering criterion based on the number of nodes in a pattern candidate; the maximum number of nodes is 8. Since all the slot-fillers in the extraction task of our experiment are assumed to be instances of the 150 classes in the extended Named Entity hierarchy (Sekine et al., 2002), further filtering was done by requiring a pattern candidate to contain at least one Named Entity class. 4 Experiment The experiment of this study is focused on comparing the performance of the earlier extraction pattern models to the proposed Subtree Model (SUBT). The compared models are the direct predicate-argument model (PA)9, and the Chain model (CH) in (Sudo et al., 2001). The task for this experiment is entity extraction, which is to identify all the entities participating in relevant events in a set of given Japanese texts. 
Note that all NEs in the test documents were identified manually, so that the task can measure only how well extraction patterns can distinguish the participating entities from the entities that are not related to any events. This task does not involve grouping entities associated with the same event into a single template to avoid possible effect of merging failure on extraction performance for entities. We accumulated the test set of documents of two scenarios; the Management Succession scenario of (MUC-6, 1995), with a simpler template structure, where corporate managers assumed and/or left their posts, and the Murderer Arrest scenario, where a law enforcement organization arrested a murder suspect. The source document set from which the extraction patterns are learned consists of 117,109 Mainichi Newspaper articles from 1995. All the sentences are morphologically analyzed by JUMAN (Kurohashi, 1997) and converted into dependency trees by KNP (Kurohashi and Nagao, 1994). Regardless of the model of extraction patterns, the pattern acquisition follows the procedure described in Section 3. We retrieved 300 documents as a relevant document set. The association of NE classes and slots in the template is made automatically; Person, Organization, Post (slots) correspond to C-PERSON, CORG, C-POST (NE-classes), respectively, in the Succession scenario, and Suspect, Arresting Agency, Charge (slots) correspond to C-PERSON, C-ORG, C-OFFENCE (NE-classes), respectively, in the Ar9This is a restricted version of (Yangarber et al., 2000) constrained to have a single place-holder for each pattern, while (Yangarber et al., 2000) allowed more than one place-holder. However, the difference does not matter for the entity extraction task which does not require merging entities in a single template. Succession Arrest IR description (translation of Japanese) Management Succession: Management Succession at the level of executives of a company. The topic of interest should not be limited to the promotion inside the company mentioned, but also includes hiring executives from outside the company or their resignation. A relevant document must describe the arrest of the suspect of murder. The document should be regarded as interesting if it discusses the suspect under suspicion for multiple crimes including murder, such as murder-robbery. Slots Person, Organization, Post Arresting Agency, Suspect, Charge # of Test Documents 148 205 (relevant + irrelevant)        Slots Person: 135 Arresting Agency: 128 Organization: 172 Suspect: 129 Post: 215 Charge: 148 Table 1: Task Description and Statistics of Test Data rest scenario. 10 For each model, we get a list of the pattern candidates ordered by the ranking function discussed in Section 3.3 after filtering. The result of the performance is shown (Figure 3) as a precision-recall graph for each subset of top ranked patterns where  ranges from 1 to the number of the pattern candidates. The test set was accumulated from Mainichi Newspaper in 1996 by a simple keyword search, with some additional irrelevant documents. (See Table 1 for detail.) Figure 3(a) shows the precision-recall curve of top relevant extraction patterns for each model on the Succession Scenario. At lower recall levels (up to 35%), all the models performed similarly. However, the precision of Chain patterns dropped suddenly by 20% at recall level 38%, while the SUBT patterns keep the precision significantly higher than Chain patterns until it reaches 58% recall. 
Even after SUBT hit the drop at 56%, SUBT is consistently a few percent higher in precision than Chain patterns for most recall levels. Figure 3(a) also shows that although PA keeps high precision at low recall level it has a significantly lower ceiling of recall (52%) compared to other models. Figure 3(b) shows the extraction performance on 10Since there is no subcategory of C-PERSON to distinguish Suspect and victim (which is not extracted in this experiment) for the Arrest scenario, the learned pattern candidates may extract victims as Suspect entities by mistake. the Arrest scenario task. Again, the PredicateArgument model has a much lower recall ceiling (25%). The difference in the performance between the Subtree model and the Chain model does not seem as obvious as in the Succession task. However, it is still observable that the Subtree model gains a few percent precision over the Chain model at recall levels around 40%. A possible explanation of the subtleness in performance difference in this scenario is the smaller number of contributing patterns compared to the Succession scenario. 5 Discussion One of the advantages of the proposed model is the ability to capture more varied context. The Predicate-Argument model relies for its context on the predicate and its direct arguments. However, some Predicate-Argument patterns may be too general, so that they could be applied to texts about a different scenario and mistakenly detect entities from them. For example, ((  C-ORG  -SBJ) happyo-suru), “  C-ORG  reports” may be the pattern used to extract an Organization in the Succession scenario but it is too general — it could match irrelevant sentences by mistake. The proposed Subtree Model can acquire a more scenario-specific pattern ((  C-ORG  SBJ)((shunin-suru-REL) jinji-OBJ) happyo-suru) “  C-ORG  reports a personnel affair to appoint”. Any scoring function that penalizes the generality of a pattern match, such as inverse document frequency, can successfully lessen the significance of too general patterns. (a) 50 55 60 65 70 75 80 85 90 95 100 0 10 20 30 40 50 60 70 80 Precision (%) Recall (%) Precision-Recall SUBT CH PA (b) 50 55 60 65 70 75 80 85 90 95 100 0 10 20 30 40 50 60 70 80 Precision (%) Recall (%) Precision-Recall SUBT CH PA Figure 3: Extraction Performance: (a) Succession Scenario ( %  ), (b) Arrest Scenario ( %  ) The detailed analysis of the experiment revealed that the overly-general patterns are more severely penalized in the Subtree model compared to the Chain model. Although both models penalize general patterns in the same way, the Subtree model also promotes more scenario-specific patterns than the Chain model. In Figure 3, the large drop was caused by the pattern ((  C-DATE  -ON)  C-POST  ), which was mainly used to describe the date of appointment to the C-POST in the list of one’s professional history (which is not regarded as a Succession event), but also used in other scenarios in the business domain (18% precision by itself). Although the scoring function described in Section 3.3 is the same for both models, the Subtree model can also produce contributing patterns, such as ((  C-PERSON   C-POST  -SBJ)(  C-POST  -TO) shuninsuru) “  C-PERSON   C-POST  was appointed to  C-POST  ” whose ranks were higher than the problematic pattern. Without generalizing case marking for nominalized predicates, the Predicate-Argument model excludes some highly contributing patterns with nominalized predicates, as some example patterns show in Figure 4. 
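A scoring rule of that kind is easy to state in code. The sketch below implements a ranking in the spirit of Equation (1): normalized frequency in the relevant document set times an IDF term raised to β. The input counts are assumed to come from the subtree discovery step, and the example patterns and numbers are invented for illustration.

```python
# Hedged sketch of an Equation (1)-style ranking: higher beta favors patterns
# that are specific to the scenario and penalizes overly general ones.
import math

def rank_patterns(tf_relevant, df_collection, n_docs, beta):
    """
    tf_relevant:   {pattern: count across the relevant document set}
    df_collection: {pattern: number of documents in the whole collection containing it}
    n_docs:        total number of documents in the collection
    """
    total = sum(tf_relevant.values()) or 1
    scores = {
        p: (tf / total) * (math.log(n_docs / df_collection[p])) ** beta
        for p, tf in tf_relevant.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# With a large enough beta, the scenario-specific pattern outranks the
# frequent-everywhere one even though it is rarer in the relevant set.
ranked = rank_patterns(
    {"(triggered (explosion-OBJ)(<C-DATE>-ADV))": 12,
     "(triggered (<C-DATE>-ADV))": 30},
    {"(triggered (explosion-OBJ)(<C-DATE>-ADV))": 15,
     "(triggered (<C-DATE>-ADV))": 600},
    n_docs=117109, beta=3)
```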
Also, chains of modifiers could be extracted only by the Subtree and Chain models. A typical and highly relevant expression for the Succession scenario is (((daihyo-ken-SBJ) aru-REL)  CPOST  ) “  C-POST  with ministerial authority”. Although, in the Arrest scenario, the superiority of the Subtree model to the other models is not clear, the general discussion about the capability of capturing additional context still holds. In Figure 4, the short pattern ((  C-PERSON   C-POST  -APPOS)  CNUM  ), which is used for a general description of a person with his/her occupation and age, has relatively low precision (71%). However, with more relevant context, such as “arrest” or “unemployed”, the patterns become more relevant to Arrest scenario. 6 Conclusion and Future Work In this paper, we explored alternative models for the automatic acquisition of extraction patterns. We proposed a model based on arbitrary subtrees of dependency trees. The result of the experiment confirmed that the Subtree model allows a gain in recall while preserving high precision. We also discussed the effect of the weight tuning in TF/IDF scoring and showed an unsupervised way of adjusting it. There are several ways in which our pattern model may be further improved. In particular, we would like to relax the restraint that all the fills must be tagged with their proper NE tags by introducing a GENERIC place-holder into the extraction patterns. By allowing a GENERIC place-holder to match with anything as long as the context of the pattern is matched, the extraction patterns can extract the entities that are not tagged properly. Also patterns with a GENERIC place-holder can be applied to slots that are not names. Thus, the acquisition method described in Section 3 can be used to find the patterns for any type of slot fill. 11(  C-POST  is used as a title of  C-PERSON  as in President Bush.) Pattern Correct Incorrect SUBT Chain PA ((  C-PERSON   C-POST  -OF)  C-PERSON  shoukaku) 26 1 yes yes no promotion of  C-POST   C-PERSON  11 (((daihyo-ken-SBJ) aru-REL)  C-POST  ) 4 0 yes yes no  C-POST  with ministerial authority ((((daihyo-ken-(no)SBJ) aru-REL)  C-POST  -TO) shunin-suru) 2 0 yes yes no be appointed to  C-POST  with ministerial authority ((  C-ORG  -SBJ) happyo-suru) 16 3 yes yes yes  C-ORG  reports ((  C-ORG  -SBJ) (jinji-OBJ) happyo-suru) 4 0 yes no yes  C-ORG  report personnel affair ((  C-PERSON   C-POST  -APPOS)  C-NUM  ) 54 22 yes yes no (((  C-PERSON   C-POST  -APPOS)  C-NUM  ) taiho-suru 17 1 yes yes no arrest  C-PERSON   C-POST  ,  C-NUM  (((mushoku-APPOS)  C-PERSON   C-POST  -APPOS)  C-NUM  ) 11 0 yes yes no  C-PERSON   C-POST  ,  C-NUM  , unemployed Figure 4: Examples of extraction patterns and their contribution Acknowledgments Thanks to Taku Kudo for his implementation of the subtree discovery algorithm and the anonymous reviewers for useful comments. This research is supported by the Defense Advanced Research Projects Agency as part of the Translingual Information Detection, Extraction and Summarization (TIDES) program, under Grant N66001-001-8917 from the Space and Naval Warfare Systems Center San Diego. References Kenji Abe, Shinji Kawasoe, Tatsuya Asai, Hiroki Arimura, and Setsuo Arikawa. 2002. Optimized Substructure Discovery for Semi-structured Data. In Proceedings of the 6th European Conference on Principles and Practice of Knowledge in Databases (PKDD2002). Sadao Kurohashi and Makoto Nagao. 1994. KN Parser : Japanese Dependency/Case Structure Analyzer. 
In Proceedings of the Workshop on Sharable Natural Language Resources. Sadao Kurohashi, 1997. Japanese Morphological Analyzing System: JUMAN. http://www.kc.t.utokyo.ac.jp/nl-resource/juman-e.html. MUC-6. 1995. Proceedings of the Sixth Message Understanding Conference (MUC-6). Masaki Murata, Kiyotaka Uchimoto, Hiromi Ozaku, and Qing Ma. 1999. Information Retrieval Based on Stochastic Models in IREX. In Proceedings of the IREX Workshop. Ellen Riloff and Rosie Jones. 1999. Learning Dictionaries for Information Extraction by Multi-level Bootstrapping. In Proceedings of the Sixteenth National Conference on Artificial Intelligence (AAAI-99). Ellen Riloff. 1993. Automatically Constructing a Dictionary for Information Extraction Tasks. In Proceedings of the Eleventh National Conference on Artificial Intelligence (AAAI-93). Ellen Riloff. 1996. Automatically Generating Extraction Patterns from Untagged Text. In Proceedings of Thirteenth National Conference on Artificial Intelligence (AAAI-96). Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended Named Entity Hierarchy. In Proceedings of Third International Conference on Language Resources and Evaluation (LREC 2002). Kiyoshi Sudo, Satoshi Sekine, and Ralph Grishman. 2001. Automatic Pattern Acquisition for Japanese Information Extraction. In Proceedings of the Human Language Technology Conference (HLT2001). Roman Yangarber, Ralph Grishman, Pasi Tapanainen, and Silja Huttunen. 2000. Unsupervised Discovery of Scenario-Level Patterns for Information Extraction. In Proceedings of 18th International Conference on Computational Linguistics (COLING-2000).
2003
29
A Noisy-Channel Approach to Question Answering Abdessamad Echihabi and Daniel Marcu Information Sciences Institute Department of Computer Science University of Southern California 4676 Admiralty Way, Suite 1001 Marina Del Rey, CA 90292 {echihabi,marcu}@isi.edu Abstract We introduce a probabilistic noisychannel model for question answering and we show how it can be exploited in the context of an end-to-end QA system. Our noisy-channel system outperforms a stateof-the-art rule-based QA system that uses similar resources. We also show that the model we propose is flexible enough to accommodate within one mathematical framework many QA-specific resources and techniques, which range from the exploitation of WordNet, structured, and semi-structured databases to reasoning, and paraphrasing. 1 Introduction Current state-of-the-art Question Answering (QA) systems are extremely complex. They contain tens of modules that do everything from information retrieval, sentence parsing (Ittycheriah and Roukos, 2002; Hovy et al., 2001; Moldovan et al, 2002), question-type pinpointing (Ittycheriah and Roukos, 2002; Hovy et al., 2001; Moldovan et al, 2002), semantic analysis (Xu et al., Hovy et al., 2001; Moldovan et al, 2002), and reasoning (Moldovan et al, 2002). They access external resources such as the WordNet (Hovy et al., 2001, Pasca and Harabagiu, 2001, Prager et al., 2001), the web (Brill et al., 2001), structured, and semistructured databases (Katz et al., 2001; Lin, 2002; Clarke, 2001). They contain feedback loops, ranking, and re-ranking modules. Given their complexity, it is often difficult (and sometimes impossible) to understand what contributes to the performance of a system and what doesn’t. In this paper, we propose a new approach to QA in which the contribution of various resources and components can be easily assessed. The fundamental insight of our approach, which departs significantly from the current architectures, is that, at its core, a QA system is a pipeline of only two modules: • An IR engine that retrieves a set of M documents/N sentences that may contain answers to a given question Q. • And an answer identifier module that given a question Q and a sentence S (from the set of sentences retrieved by the IR engine) identifies a sub-string SA of S that is likely to be an answer to Q and assigns a score to it. Once one has these two modules, one has a QA system because finding the answer to a question Q amounts to selecting the sub-string SA of highest score. Although this view is not made explicit by QA researchers, it is implicitly present in all systems we are aware of. In its simplest form, if one accepts a whole sentence as an answer (SA = S), one can assess the likelihood that a sentence S contains the answer to a question Q by measuring the cosine similarity between Q and S. However, as research in QA demonstrates, word-overlap is not a good enough metric for determining whether a sentence contains the answer to a question. Consider, for example, the question “Who is the leader of France?” The sentence “Henri Hadjenberg, who is the leader of France’s Jewish community, endorsed confronting the specter of the Vichy past” overlaps with all question terms, but it does not contain the correct answer; while the sentence “Bush later met with French President Jacques Chirac” does not overlap with any question term, but it does contain the correct answer. 
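The failure of pure overlap on this pair is easy to reproduce. The sketch below scores both sentences against the question with a bag-of-words cosine; the tokenization and the cosine itself are our choices, and the point is only that the answer-less sentence wins.

```python
# Hedged sketch: bag-of-words cosine between the question and each candidate.
import math
import re
from collections import Counter

def cosine(a, b):
    va = Counter(re.findall(r"[a-z]+", a.lower()))
    vb = Counter(re.findall(r"[a-z]+", b.lower()))
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

q = "Who is the leader of France?"
s1 = ("Henri Hadjenberg, who is the leader of France's Jewish community, "
      "endorsed confronting the specter of the Vichy past")
s2 = "Bush later met with French President Jacques Chirac"

assert cosine(q, s1) > cosine(q, s2)  # the answer-less sentence scores higher
```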
To circumvent this limitation of word-based similarity metrics, QA researchers have developed methods through which they first map questions and sentences that may contain answers in different spaces, and then compute the “similarity” between them there. For example, the systems developed at IBM and ISI map questions and answer sentences into parse trees and surfacebased semantic labels and measure the similarity between questions and answer sentences in this syntactic/semantic space, using QA-motivated metrics. The systems developed by CYC and LCC map questions and answer sentences into logical forms and compute the “similarity” between them using inference rules. And systems such as those developed by IBM and BBN map questions and answers into feature sets and compute the similarity between them using maximum entropy models that are trained on question-answer corpora. From this perspective then, the fundamental problem of question answering is that of finding spaces where the distance between questions and sentences that contain correct answers is small and where the distance between questions and sentences that contain incorrect answers is large. In this paper, we propose a new space and a new metric for computing this distance. Being inspired by the success of noisy-channel-based approaches in applications as diverse as speech recognition (Jelinek, 1997), part of speech tagging (Church, 1988), machine translation (Brown et al., 1993), information retrieval (Berger and Lafferty, 1999), and text summarization (Knight and Marcu, 2002), we develop a noisy channel model for QA. This model explains how a given sentence SA that contains an answer sub-string A to a question Q can be rewritten into Q through a sequence of stochastic operations. Given a corpus of questionanswer pairs (Q, SA), we can train a probabilistic model for estimating the conditional probability P(Q | SA). Once the parameters of this model are learned, given a question Q and the set of sentences Σ returned by an IR engine, one can find the sentence Si ∈ Σ and an answer in it Ai,j by searching for the Si,Ai,j that maximizes the conditional probability P(Q | Si,Ai,j). In Section 2, we first present the noisy-channel model that we propose for this task. In Section 3, we describe how we generate training examples. In Section 4, we describe how we use the learned models to answer factoid questions, we evaluate the performance of our system using a variety of experimental conditions, and we compare it with a rule-based system that we have previously used in several TREC evaluations. In Section 5, we demonstrate that the framework we propose is flexible enough to accommodate a wide range of resources and techniques that have been employed in state-of-the-art QA systems. 2 A noisy-channel for QA Assume that we want to explain why “1977” in sentence S in Figure 1 is a good answer for the question “When did Elvis Presley die?” To do this, we build a noisy channel model that makes explicit how answer sentence parse trees are mapped into questions. Consider, for example, the automatically derived answer sentence parse tree in Figure 1, which associates to nodes both syntactic and shallow semantic, named-entity-specific tags. In order to rewrite this tree into a question, we assume the following generative story: 1. In general, answer sentences are much longer than typical factoid questions. 
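Viewed as code, the selection step this model supports is just an argmax over candidate answer spans in the retrieved sentences. Below is a minimal sketch of that outer loop, with retrieval, candidate enumeration, and the channel model P(Q | S, A) left as stand-in interfaces of our own naming.

```python
# Hedged sketch of the two-module QA pipeline: an IR step plus an answer
# identifier that scores candidate spans with a channel model.
# `retrieve`, `candidate_spans`, and `channel_log_prob` are stand-ins.

def best_answer(question, retrieve, candidate_spans, channel_log_prob, n_sentences=50):
    best_score, best_span = float("-inf"), None
    for sentence in retrieve(question, n_sentences):
        for span in candidate_spans(sentence):  # syntactic/semantic nodes
            score = channel_log_prob(question, sentence, span)
            if score > best_score:
                best_score, best_span = score, span
    return best_span
```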
To reduce the length gap between questions and answers and to increase the likelihood that our models can be adequately trained, we first make a “cut” in the answer parse tree and select a sequence of words, syntactic, and semantic tags. The “cut” is made so that every word in the answer sentence or one of its ancestors belongs to the “cut” and no two nodes on a path from a word to the root of the tree are in the “cut”. Figure 1 depicts graphically such a cut. 2. Once the “cut” has been identified, we mark one of its elements as the answer string. In Figure 1, we decide to mark DATE as the answer string (A_DATE). 3. There is no guarantee that the number of words in the cut and the number of words in the question match. To account for this, we stochastically assign to every element si in a cut a fertility according to table n(φ | si). We delete elements of fertility 0 and duplicate elements of fertility 2, etc. With probability p1 we also increment the fertility of an invisible word NULL. NULL and fertile words, i.e. words with fertility strictly greater than 1 enable us to align long questions with short answers. Zero fertility words enable us to align short questions with long answers. 4. Next, we replace answer words (including the NULL word) with question words according to the table t(qi | sj). 5. In the last step, we permute the question words according to a distortion table d, in order to obtain a well-formed, grammatical question. The probability P(Q | SA) is computed by multiplying the probabilities in all the steps of our generative story (Figure 1 lists some of the factors specific to this computation.) The readers familiar with the statistical machine translation (SMT) literature should recognize that steps 3 to 5 are nothing but a one-to-one reproduction of the generative story proposed in the SMT context by Brown et al. (see Brown et al., 1993 for a detailed mathematical description of the model and the formula for computing the probability of an alignment and target string given a source string).1 Figure 1: A generative model for Question answering To simplify our work and to enable us exploit existing off-the-shelf software, in the experiments we carried out in conjunction with this paper, we assumed a flat distribution for the two steps in our 1 The distortion probabilities depicted in Figure 1 are a simplification of the distortions used in the IBM Model 4 model by Brown et al. (1993). We chose this watered down representation only for illustrative purposes. Our QA system implements the full-blown Model 4 statistical model described by Brown et al. generative story. That is, we assumed that it is equally likely to take any cut in the tree and equally likely to choose as Answer any syntactic/semantic element in an answer sentence. 3 Generating training and testing material 3.1 Generating training cases Assume that the question-answer pair in Figure 1 appears in our training corpus. When this happens, we know that 1977 is the correct answer. To generate a training example from this pair, we tokenize the question, we parse the answer sentence, we identify the question terms and answer in the parse tree, and then we make a "cut" in the tree that satisfies the following conditions: a) Terms overlapping with the question are preserved as surface text b) The answer is reduced to its semantic or syntactic class prefixed with the symbol “A_” c) Non-leaves, which don’t have any question term or answer offspring, are reduced to their semantic or syntactic class. 
d) All remaining nodes (leaves) are preserved as surface text. Condition a) ensures that the question terms will be identified in the sentence. Condition b) helps learn answer types. Condition c) brings the sentence closer to the question by compacting portions that are syntactically far from question terms and answer. And finally the importance of lexical cues around question terms and answer motivates condition d). For the question-answer pair in Figure 1, the algorithm above generates the following training example: Q: When did Elvis Presley die ? SA: Presley died PP PP in A_DATE, and SNT. Figure 2 represents graphically the conditions that led to this training example being generated. Our algorithm for generating training pairs implements deterministically the first two steps in our generative story. The algorithm is constructed so as to be consistent with our intuition that a generative process that makes the question and answer as similar-looking as possible is most likely to enable us learn a useful model. Each questionanswer pair results in one training example. It is the examples generated through this procedure that we use to estimate the parameters of our model. Figure 2: Generation of QA examples for training. 3.2 Generating test cases Assume now that the sentence in Figure 1 is returned by an IR engine as a potential candidate for finding the answer to the question “When did Elvis Presley die?” In this case, we don’t know what the answer is, so we assume that any semantic/syntactic node in the answer sentence can be the answer, with the exception of the nodes that subsume question terms and stop words. In this case, given a question and a potential answer sentence, we generate an exhaustive set of question-answer test cases, each test case labeling as answer (A_) a different syntactic/semantic node. Here are some of the test cases we consider for the question-answer pair in Figure 1: Q: When did Elvis Presley die ? SA1: Presley died A_PP PP PP , and SNT . Q: When did Elvis Presley die ? SAi: Presley died PP PP in A_DATE, and SNT . Q: When did Elvis Presley die ? SAj: Presley died PP PP PP , and NP return by A_NP NP . If we learned a good model, we would expect it to assign a higher probability to P(Q | Sai) than to P(Q | Sa1) and P(Q | Saj). 4 Experiments 4.1 Training Data For training, we use three different sets. (i) The TREC9-10 set consists of the questions used at TREC9 and 10. We automatically generate answer-tagged sentences using the TREC9 and 10 judgment sets, which are lists of answer-document pairs evaluated as either correct or wrong. For every question, we first identify in the judgment sets a list of documents containing the correct answer. For every document, we keep only the sentences that overlap with the question terms and contain the correct answer. (ii) In order to have more variation of sentences containing the answer, we have automatically extended the first data set using the Web. For every TREC9-10 question/answer pair, we used our Web-based IR to retrieve sentences that overlap with the question terms and contain the answer. We call this data set TREC9-10Web. (iii) The third data set consists of 2381 question/answer pairs collected from http://www.quiz-zone.co.uk. We use the same method to automatically enhance this set by retrieving from the web sentences containing answers to the questions. We call this data set Quiz-Zone. 
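Before turning to corpus sizes, the "cut" of Section 3.1 can be sketched over a toy parse tree; the tuple-based tree, node labels, and question-term matching below are simplifications for illustration, not the parser output the system actually uses.

```python
# Sketch of conditions a)-d): flatten an answer parse tree, keeping question
# terms and the answer class, and collapsing unrelated subtrees to their label.
def flatten(node, q_terms, answer_node):
    label, payload = node
    if node is answer_node:                       # condition b): mark the answer
        return ["A_" + label]
    if isinstance(payload, str):                  # leaf: conditions a) and d)
        return [payload]
    if not contains(node, q_terms, answer_node):  # condition c): collapse subtree
        return [label]
    out = []
    for child in payload:
        out.extend(flatten(child, q_terms, answer_node))
    return out

def contains(node, q_terms, answer_node):
    label, payload = node
    if node is answer_node:
        return True
    if isinstance(payload, str):
        return payload in q_terms
    return any(contains(c, q_terms, answer_node) for c in payload)

date = ("DATE", "1977")
tree = ("SNT", [("NP", "presley"), ("VP", [("VB", "died"),
        ("PP", [("IN", "in"), date]),
        ("PP", [("IN", "at"), ("NP", [("NNP", "graceland")])])])])
print(" ".join(flatten(tree, {"presley", "die", "died"}, date)))
# -> presley died in A_DATE PP
```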
Table 1 shows the size of the three training corpora:

Training Set    # distinct questions    # question-answer pairs
TREC9-10        1091                    18618
TREC9-10Web     1091                    54295
Quiz-Zone       2381                    17614

Table 1: Size of Training Corpora

To train our QA noisy-channel model, we apply the algorithm described in Section 3.1 to generate training cases for all QA pairs in the three corpora. To help our model learn that it is desirable to copy answer words into the question, we add to each corpus a list of identical dictionary word pairs wi-wi. For each corpus, we use GIZA (Al-Onaizan et al., 1999), a publicly available SMT package that implements the IBM models (Brown et al., 1993), to train a QA noisy-channel model that maps flattened answer parse trees, obtained using the “cut” procedure described in Section 3.1, into questions.

4.2 Test Data

We used two different data sets for the purpose of testing. The first set consists of the 500 questions used at TREC 2002; the second set consists of 500 questions that were randomly selected from the Knowledge Master (KM) repository (http://www.greatauk.com). The KM questions tend to be longer and quite different in style compared to the TREC questions.

[Figure 2 graphic: the parse tree of the answer sentence about Presley's death in 1977, with the spans affected by conditions a)-d) marked.]

4.3 A noisy-channel-based QA system

Our QA system is straightforward. It has only two modules: an IR module, and an answer-identifier/ranker module. The IR module is the same one we used in previous participations at TREC. Like the learner, the answer-identifier/ranker module is also publicly available – the GIZA package can be configured to automatically compute the probability of the Viterbi alignment between a flattened answer parse tree and a question. For each test question, we automatically generate a web query and use the top 300 answer sentences returned by our IR engine to look for an answer. For each question Q and for each answer sentence Si, we use the algorithm described in Section 3.2 to exhaustively generate all Q–Si,Ai,j pairs. Hence we examine all syntactic constituents in a sentence and use GIZA to assess their likelihood of being a correct answer. We select the answer Ai,j that maximizes P(Q | Si,Ai,j) over all answer sentences Si and all answers Ai,j that can be found in the list retrieved by the IR module. Figure 3 depicts graphically our noisy-channel-based QA system. Figure 3: The noisy-channel-based QA system.

4.4 Experimental Results

We evaluate the results by automatically computing the mean reciprocal rank (MRR), using the TREC 2002 patterns and the Quiz-Zone original answers when testing on the TREC 2002 and Quiz-Zone test sets respectively. Our baseline is a state-of-the-art QA system, QA-base, which was ranked from second to seventh in the last 3 years at TREC. To ensure a fair comparison, we use the same Web-based IR system in all experiments with no answer retrofitting. For the same reason, we use the QA-base system with the post-processing module disabled. (This module re-ranks the answers produced by QA-base on the basis of their redundancy, frequency on the web, etc.) Table 2 summarizes results for different combinations of training and test sets:

Trained on\Tested on       TREC 2002    KM
A = TREC9-10               0.325        0.108
B = A + TREC9-10Web        0.329        0.120
C = B + Quiz-Zone          0.354        0.132
QA-base                    0.291        0.128

Table 2: Impact of training and test sets.
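The MRR scores reported in Table 2 (and in the remaining experiments) are computed as sketched below, assuming each test question comes with a ranked answer list and a set of acceptable answer patterns; the cut-off of five ranked answers is an assumption, not a detail given in the text.

```python
# Sketch of mean reciprocal rank (MRR) scoring over regex answer patterns.
import re

def mrr(ranked_answers_per_q, patterns_per_q, max_rank=5):
    total = 0.0
    for answers, patterns in zip(ranked_answers_per_q, patterns_per_q):
        for rank, answer in enumerate(answers[:max_rank], start=1):
            if any(re.search(p, answer, re.I) for p in patterns):
                total += 1.0 / rank
                break
    return total / len(ranked_answers_per_q)

print(mrr([["1935", "1977", "1958"]], [[r"\b1977\b"]]))  # -> 0.5
```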
For the TREC 2002 corpus, the relatively low MRRs are due to the small answer coverage of the TREC 2002 patterns. For the KM corpus, the relatively low MRRs are explained by two factors: (i) for this corpus, each evaluation pattern consists of only one string – the original answer; (ii) the KM questions are more complex than TREC questions (What piece of furniture is associated with Modred, Percival, Gawain, Arthur, and Lancelot?). It is interesting to see that using only the TREC9-10 data as training (system A in Table 2), we are able to beat the baseline when testing on TREC 2002 questions; however, this is not true when testing on KM questions. This can be explained by the fact that the TREC9-10 training set is similar to the TREC 2002 test set while it is significantly different from the KM test set. We also notice that expanding the training to TREC910Web (System B) and then to Quiz-Zone (System C) improved the performance on both test sets, which confirms that both the variability across answer tagged sentences (Trec9-10Web) and the abundance of distinct questions (Quiz-Zone) contribute to the diversity of a QA training corpus, and implicitly to the performance of our system. 5 Framework flexibility Another characteristic of our framework is its flexibility. We can easily extend it to span other question-answering resources and techniques that have been employed in state-of-the art QA systems. In the rest of this section, we assess the impact of such resources and techniques in the context of three case studies. 5.1 Statistical-based “Reasoning” The LCC TREC-2002 QA system (Moldovan et al., 2002) implements a reasoning mechanism for justifying answers. In the LCC framework, Test question Q S i,A i,j QA M odel trained using GIZA S x,A x,y= argm ax (P(Q | S i,A i,j)) A = A x,y GIZA S 1 S m S 1,A 1,1 S 1,A 1,v S m ,A m ,1 S m ,A m ,w IR questions and answers are first mapped into logical forms. A resolution-based module then proves that the question logically follows from the answer using a set of axioms that are automatically extracted from the WordNet glosses. For example, to prove the logical form of “What is the age of our solar system?” from the logical form of the answer “The solar system is 4.6 billion years old.”, the LCC theorem prover shows that the atomic formula that corresponds to the question term “age” can be inferred from the atomic formula that corresponds to the answer term “old” using an axiom that connects “old” and “age”, because the WordNet gloss for “old” contains the word “age”. Similarly, the LCC system can prove that “Voting is mandatory for all Argentines aged over 18” provides a good justification for the question “What is the legal age to vote in Argentina?” because it can establish through logical deduction using axioms induced from WordNet glosses that “legal” is related to “rule”, which in turn is related to “mandatory”; that “age” is related to “aged”; and that “Argentine” is related to “Argentina”. It is not difficult to see by now that these logical relations can be represented graphically as alignments between question and answer terms (see Figure 4). Figure 4: Gloss-based reasoning as word-level alignment. The exploitation of WordNet synonyms, which is part of many QA systems (Hovy et al., 2001; Prager et al., 2001; Pasca and Harabagiu, 2001), is a particular case of building such alignments between question and answer terms. 
For example, using WordNet synonymy relations, it is possible to establish a connection between “U.S.” and “United States” and between “buy” and “purchase” in the question-answer pair (Figure 5), thus increasing the confidence that the sentence contains a correct answer. Figure 5: Synonym-based alignment.

[Figure 4 graphic: word-level alignment of “What is the legal age to vote in Argentina?” with “Voting is mandatory for all Argentines aged over 18”.]
[Figure 5 graphic: synonym-based alignment of “What year did the U.S. buy Alaska?” with “In 1867, Secretary of State William H. Seward arranged for the United-States to purchase Alaska for 2 cents per acre.”]

The noisy-channel framework we proposed in this paper can approximate the reasoning mechanism employed by LCC and accommodate the exploitation of gloss- and synonymy-based relations found in WordNet. In fact, if we had a very large training corpus, we would expect such connections to be learned automatically from the data. However, since we have a relatively small training corpus available, we rewrite the WordNet glosses into a dictionary by creating word-pair entries that establish connections between all WordNet words and the content words in their glosses. For example, from the word “age” and its gloss “a historic period”, we create the dictionary entries “age – historic” and “age – period”. To exploit synonymy relations, for every WordNet synset Si, we add to our training data all possible combinations of synonym pairs Wi,x–Wi,y. Our dictionary creation procedure is a crude version of the axiom extraction algorithm described by Moldovan et al. (2002), and our exploitation of the glosses in the noisy-channel framework amounts to a simplified, statistical version of the semantic proofs implemented by LCC. Table 3 shows the impact of WordNet synonyms (WNsyn) and WordNet glosses (WNgloss) on our system. Adding WordNet synonyms and glosses slightly improved the performance on the KM questions. On the other hand, it is surprising to see that the performance dropped when testing on TREC 2002 questions.

Trained on\Tested on    TREC 2002    KM
C                       0.354        0.132
C + WNsyn               0.345        0.138
C + WNgloss             0.343        0.136

Table 3: WordNet synonyms and glosses impact.

5.2 Question reformulation

Hermjakob et al. (2002) showed that reformulations (syntactic and semantic) improve the answer-pinpointing process in a QA system. To make use of this technique, we extend our training data set by expanding every question-answer pair Q-SA to a list (Qr-SA), Qr ⊂ Θ, where Θ is the set of question reformulations.2 We also expand in a similar way the answer candidates in the test corpus. Using reformulations improved the performance of our system on the TREC 2002 test set, while it was not beneficial for the KM test set (see Table 4). We believe this is explained by the fact that the reformulation engine was fine-tuned on TREC-specific questions, which are significantly different from KM questions.

2 We are grateful to Ulf Hermjakob for sharing his reformulations with us.

Trained on\Tested on    TREC 2002    KM
C                       0.354        0.132
C + reformulations      0.365        0.128

Table 4: Reformulations impact.

5.3 Exploiting data in structured and semi-structured databases

Structured and semi-structured databases have proved to be very useful for question-answering systems. Lin (2002) showed through his federated approach that 47% of TREC-2001 questions could be answered using Web-based knowledge sources. Clarke et al. (2001) obtained a 30% improvement by using an auxiliary database created from web documents as an additional resource. We adopted a different approach to exploit external knowledge bases.
In our work, we first generated a natural language collection of factoids by mining different structured and semi-structured databases (World Fact Book, Biography.com, WordNet…). The generation is based on manually written questionfactoid template pairs, which are applied on the different sources to yield simple natural language question-factoid pairs. Consider, for example, the following two factoid-question template pairs: Qt1: What is the capital of _c? St1: The capital of _c is capital(_c). Qt2: How did _p die? St2: _p died of causeDeath(_p). Using extraction patterns (Muslea, 1999), we apply these two templates on the World Fact Book database and on biography.com pages to instantiate question and answer-tagged sentence pairs such as: Q1: What is the capital of Greece? S1: The capital of Greece is Athens. Q2: How did Jean-Paul Sartre die? S2: Jean-Paul Sartre died of a lung ailment. These question-factoid pairs are useful both in training and testing. In training, we simply add all these pairs to the training data set. In testing, for every question Q, we select factoids that overlap sufficiently enough with Q as sentences that potentially contain the answer. For example, given the question “Where was Sartre born?” we will select the following factoids: 1-Jean-Paul Sartre was born in 1905. 2-Jean-Paul Sartre died in 1980. 3-Jean-Paul Sartre was born in Paris. 4-Jean-Paul Sartre died of a lung ailment. Up to now, we have collected about 100,000 question-factoid pairs. We found out that these pairs cover only 24 of the 500 TREC 2002 questions. And so, in order to evaluate the value of these factoids, we reran our system C on these 24 questions and then, we used the question-factoid pairs as the only resource for both training and testing as described earlier (System D). Table 5 shows the MRRs for systems C and D on the 24 questions covered by the factoids. System 24 TREC 2002 questions C 0.472 D 0.812 Table 5: Factoid impact on system performance. It is very interesting to see that system D outperforms significantly system C. This shows that, in our framework, in order to benefit from external databases, we do not need any additional machinery (question classifiers, answer type identifiers, wrapper selectors, SQL query generators, etc.) All we need is a one-time conversion of external structured resources to simple natural language factoids. The results in Table 5 also suggest that collecting natural language factoids is a useful research direction: if we collect all the factoids in the world, we could probably achieve much higher MRR scores on the entire TREC collection. 6 Conclusion In this paper, we proposed a noisy-channel model for QA that can accommodate within a unified framework the exploitation of a large number of resources and QA-specific techniques. We believe that our work will lead to a better understanding of the similarities and differences between the approaches that make up today’s QA research landscape. We also hope that our paper will reduce the high barrier to entry that is explained by the complexity of current QA systems and increase the number of researchers working in this field: because our QA system uses only publicly available software components (an IR engine; a parser; and a statistical MT system), it can be easily reproduced by other researchers. However, one has to recognize that the reliance of our system on publicly available components is not ideal. 
The generative story that our noisy-channel employs is rudimentary; we have chosen it only because we wanted to exploit to the best extent possible existing software components (GIZA). The empirical results we obtained are extremely encouraging: our noisy-channel system is already outperforming a state-of-the-art rule-based system that took many person years to develop. It is remarkable that a statistical machine translation system can do so well in a totally different context, in question answering. However, building dedicated systems that employ more sophisticated, QA-motivated generative stories is likely to yield significant improvements. Acknowledgments. This work was supported by the Advanced Research and Development Activity (ARDA)’s Advanced Question Answering for Intelligence (AQUAINT) Program under contract number MDA908-02-C-0007. References Yaser Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz-Josef Och, David Purdy, Noah A. Smith, and David Yarowsky. 1999. Statistical machine translation. Final Report, JHU Summer Workshop. Adam L. Berger, John D. Lafferty. 1999. Information Retrieval as Statistical Translation. In Proceedings of the SIGIR 1999, Berkeley, CA. Eric Brill, Jimmy Lin, Michele Banko, Susan Dumais, Andrew Ng. 2001. Data-Intensive Question Answering. In Proceedings of the TREC-2001 Conference, NIST. Gaithersburg, MD. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263--312. Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, Austin, TX. Charles L. A. Clarke, Gordon V. Cormack, Thomas R. Lynam, C. M. Li, G. L. McLearn. 2001. Web Reinforced Question Answering (MultiText Experiments for TREC 2001). In Proceedings of the TREC-2001Conference, NIST. Gaithersburg, MD. Ulf Hermjakob, Abdessamad Echihabi, and Daniel Marcu. 2002. Natural Language Based Reformulation Resource and Web Exploitation for Question Answering. In Proceedings of the TREC2002 Conference, NIST. Gaithersburg, MD. Edward H. Hovy, Ulf Hermjakob, Chin-Yew Lin. 2001. The Use of External Knowledge in Factoid QA. In Proceedings of the TREC-2001 Conference, NIST. Gaithersburg, MD. Abraham Ittycheriah and Salim Roukos. 2002. IBM's Statistical Question Answering System-TREC 11. In Proceedings of the TREC-2002 Conference, NIST. Gaithersburg, MD. Frederick Jelinek. 1997. Statistical Methods for Speech Recognition. MIT Press, Cambridge, MA. Boris Katz, Deniz Yuret, Sue Felshin. 2001. Omnibase: A universal data source interface. In MIT Artificial Intelligence Abstracts. Kevin Knight, Daniel Marcu. 2002. Summarization beyond sentence extraction: A probabilistic approach to sentence compression. Artificial Intelligence 139(1): 91-107. Jimmy Lin. 2002. The Web as a Resource for Question Answering: Perspective and Challenges. In LREC 2002, Las Palmas, Canary Islands, Spain. Dan Moldovan, Sanda Harabagiu, Roxana Girju, Paul Morarescu, Finley Lacatusu, Adrian Novischi, Adriana Badulescu, Orest Bolohan. 2002. LCC Tools for Question Answering. In Proceedings of the TREC-2002 Conference, NIST. Gaithersburg, MD. Ion Muslea. 1999. Extraction Patterns for Information Extraction Tasks: A Survey. In Proceedings of Workshop on Machine Learning and Information Extraction (AAAI-99), Orlando, FL. 
Marius Pasca, Sanda Harabagiu, 2001. The Informative Role of WordNet in Open-Domain Question Answering. In Proceedings of the NAACL 2001 Workshop on WordNet and Other Lexical Resources, Carnegie Mellon University, Pittsburgh PA. John M. Prager, Jennifer Chu-Carroll, Krysztof Czuba. 2001. Use of WordNet Hypernyms for Answering What-Is Questions. In Proceedings of the TREC2002 Conference, NIST. Gaithersburg, MD. Jinxi Xu, Ana Licuanan, Jonathan May, Scott Miller, Ralph Weischedel. 2002. TREC 2002 QA at BBN: Answer Selection and Confidence Estimation. In Proceedings of the TREC-2002 Conference, NIST. Gaithersburg, MD.
2003
3
Optimizing Story Link Detection is not Equivalent to Optimizing New Event Detection Ayman Farahat PARC 3333 Coyote Hill Rd Palo Alto, CA 94304 [email protected] Francine Chen PARC 3333 Coyote Hill Rd Palo Alto, CA 94304 [email protected] Thorsten Brants PARC 3333 Coyote Hill Rd Palo Alto, CA 94304 [email protected] Abstract Link detection has been regarded as a core technology for the Topic Detection and Tracking tasks of new event detection. In this paper we formulate story link detection and new event detection as information retrieval task and hypothesize on the impact of precision and recall on both systems. Motivated by these arguments, we introduce a number of new performance enhancing techniques including part of speech tagging, new similarity measures and expanded stop lists. Experimental results validate our hypothesis. 1 Introduction Topic Detection and Tracking (TDT) research is sponsored by the DARPA Translingual Information Detection, Extraction, and Summarization (TIDES) program. The research has five tasks related to organizing streams of data such as newswire and broadcast news (Wayne, 2000): story segmentation, topic tracking, topic detection, new event detection (NED), and link detection (LNK). A link detection system detects whether two stories are “linked”, or discuss the same event. A story about a plane crash and another story about the funeral of the crash victims are considered to be linked. In contrast, a story about hurricane Andrew and a story about hurricane Agnes are not linked because they are two different events. A new event detection system detects when a story discusses a previously unseen or “not linked” event. Link detection is considered to be a core technology for new event detection and the other tasks. Several groups are performing research in the TDT tasks of link detection and new event detection. Based on their findings, we incorporated a number of their ideas into our baseline system. CMU (Yang et al., 1998) and UMass (Allan et al., 2000a) found that for new event detection it was better to compare a new story against all previously seen stories than to cluster previously seen stories and compare a new story against the clusters. CMU (Carbonell et al., 2001) found that NED results could be improved by developing separate models for different news sources to that could capture idiosyncrasies of different sources, which we also extended to link detection. UMass reported on adapting a tracking system for NED detection (Allan et al., 2000b). Allan et. al , (Allan et al., 2000b) developed a NED system based upon a tracking technology and showed that to achieve high-quality first story detection, tracking effectiveness must improve to a degree that experience suggests is unlikely. In this paper, while we reach a similar conclusion as (Allan et al., 2000b) for LNK and NED systems , we give specific directions for improving each system separately. We compare the link detection and new event detection tasks and discuss ways in which we have observed that techniques developed for one task do not always perform similarly for the other task. 2 Common Processing and Models This section describes those parts of the processing steps and the models that are the same for New Event Detection and for Link Detection. 
2.1 Pre-Processing

For pre-processing, we tokenize the data, recognize abbreviations, normalize abbreviations, remove stop-words, replace spelled-out numbers by digits, add part-of-speech tags, replace the tokens by their stems, and then generate term-frequency vectors.

2.2 Incremental TF-IDF Model

Our similarity calculations of documents are based on an incremental TF-IDF model. In a TF-IDF model, the frequency of a term in a document (TF) is weighted by the inverse document frequency (IDF). In the incremental model, document frequencies df_t(w) are not static but change in time steps t. At time t, a new set of test documents C_t is added to the model by updating the frequencies

df_t(w) = df_{t-1}(w) + d_{C_t}(w)    (1)

where d_{C_t}(w) denotes the document frequencies in the newly added set of documents C_t. The initial document frequencies df_0(w) are generated from a (possibly empty) training set. In a static TF-IDF model, new words (i.e., those words that did not occur in the training set) are ignored in further computations. An incremental TF-IDF model uses the new vocabulary in similarity calculations. This is an advantage because new events often contain new vocabulary. Very low frequency terms w tend to be uninformative. We therefore set a threshold \theta_{df}. Only terms with df_t(w) \ge \theta_{df} are used at time t. We use \theta_{df} = 2.

2.3 Term Weighting

The document frequencies as described in the previous section are used to calculate weights for the terms w in the documents d. At time t, we use

w_t(d, w) = \frac{1}{Z_t(d)} f(d, w) \log \frac{N_t}{df_t(w)}    (2)

where N_t is the total number of documents at time t. Z_t(d) is a normalization value such that either the weights sum to 1 (if we use Hellinger distance, KL-divergence, or Clarity-based distance), or their squares sum to 1 (if we use cosine distance).

2.4 Similarity Calculation

The vectors consisting of normalized term weights w_t(d, w) are used to calculate the similarity between two documents q and d. In our current implementation, we use the Clarity metric, which was introduced by (Croft et al., 2001; Lavrenko et al., 2002) and gets its name from the distance to general English, which is called Clarity. We used a symmetric version that is computed as:

sim(q, d) = -KL(q \| d) + KL(q \| GE) - KL(d \| q) + KL(d \| GE)    (3)

KL(q \| d) = \sum_w w_t(q, w) \log \frac{w_t(q, w)}{w_t(d, w)}    (4)

where KL is the Kullback-Leibler divergence and GE is the probability distribution of words for “general English” as derived from the training corpus. The idea behind this metric is that we want to give credit to similar pairs of documents that are very different from general English, and we want to discount similar pairs of documents that are close to general English (which can be interpreted as being the noise). The motivation for using the Clarity metric will be given in Section 6.1. Another metric is the Hellinger distance

sim(q, d) = \sum_w \sqrt{ w_t(q, w) \, w_t(d, w) }    (5)

Other possible similarity metrics are the cosine distance, the Kullback-Leibler divergence, or the symmetric form of it, Jensen-Shannon distance.

2.5 Source-Specific TF-IDF Model

Documents in the stream of news stories may stem from different sources, e.g., there are 20 different sources in the data for TDT 2002 (ABC News, Associated Press, New York Times, etc.). Each source might use the vocabulary differently. For example, the names of the sources, names of shows, or names of news anchors are much more frequent in their own source than in the other ones.
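The pieces so far (Eqs. 1, 2 and 5) fit together as in the following sketch; the class and toy documents are illustrative only, and the Clarity computation, smoothing, and the general-English model of the actual systems are omitted.

```python
# A compact sketch of the incremental TF-IDF model and Hellinger similarity.
from collections import Counter
from math import log, sqrt

class IncrementalTfIdf:
    def __init__(self, theta=2):
        self.df = Counter()   # document frequencies df_t(w)
        self.n_docs = 0       # N_t
        self.theta = theta    # low-frequency threshold theta_df

    def add_documents(self, docs):                 # Eq. (1)
        for tokens in docs:
            self.n_docs += 1
            for w in set(tokens):
                self.df[w] += 1

    def weights(self, tokens):                     # Eq. (2), weights sum to 1
        tf = Counter(tokens)
        raw = {w: tf[w] * log(self.n_docs / self.df[w])
               for w in tf if self.df[w] >= self.theta}
        z = sum(raw.values()) or 1.0
        return {w: v / z for w, v in raw.items()}

def hellinger(wq, wd):                             # Eq. (5)
    return sum(sqrt(wq[w] * wd[w]) for w in wq if w in wd)

model = IncrementalTfIdf()
model.add_documents([["plane", "crash", "victims"],
                     ["plane", "crash", "funeral"],
                     ["hurricane", "andrew", "florida"]])
q = model.weights(["plane", "crash"])
d = model.weights(["plane", "crash", "funeral"])
print(hellinger(q, d))
```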
In order to reflect the source-specific differences, we do not build one incremental TF-IDF model, but as many as we have different sources, and use frequencies

df_{s,t}(w)    (6)

for source s at time t. The frequencies are updated according to equation (1), but only using those documents in C_t that are from the same source s. As a consequence, a term like “CNN” receives a high document frequency (thus low weight) in the model for the source CNN and a low document frequency (thus high weight) in the New York Times model. Instead of the overall document frequencies df_t(w), we now use the source-specific df_{s,t}(w) when calculating the term weights in equation (2). Sources s for which no training data is available (i.e., no data to generate df_{s,0}(w) is available) might be initialized in two different ways:

1. Use an empty model: df_{s,0}(w) = 0 for all w;
2. Identify one or more other but similar sources s' for which training data is available and use

df_{s,0}(w) = \sum_{s'} df_{s',0}(w).    (7)

2.6 Source-Pair-Specific Normalization

Due to stylistic differences between various sources, e.g., newspaper vs. broadcast news, translation errors, and automatic speech recognition errors (Allan et al., 1999), the similarity measures for both on-topic and off-topic pairs will in general depend on the source pair. Errors due to these differences can be reduced by using thresholds conditioned on the sources (Carbonell et al., 2001), or, as we do, by normalizing the similarity values based on similarities for the source pairs found in the story history.

3 New Event Detection

In order to decide whether a new document q that is added to the collection at time t describes a new event, it is individually compared to all previous documents d using the steps described in Section 2. We identify the document d^* with highest similarity:

d^* = \arg\max_d sim(q, d).    (8)

The value score(q) = 1 - sim(q, d^*) is used to determine whether a document q is about a new event and at the same time is an indication of the confidence in our decision. If the score exceeds a threshold \theta_{NED}, then there is no sufficiently similar previous document, thus q describes a new event (decision YES). If the score is smaller than \theta_{NED}, then d^* is sufficiently similar, thus q describes an old event (decision NO). The threshold \theta_{NED} can be determined by using labeled training data and calculating similarity scores for document pairs on the same event and on different events.

4 Link Detection

In order to decide whether a pair of stories q and d are linked, we identify a set of similarity metrics that capture the similarity between the two documents, using the Clarity and Hellinger metrics:

sim(q, d) = ( sim_{CL}(q, d), sim_{HE}(q, d) ).    (9)

The value sim(q, d) is used to determine whether stories “q” and “d” are linked. If the similarity exceeds a threshold \theta_{LNK}, the two stories are deemed sufficiently similar (decision YES). If the similarity is smaller than \theta_{LNK}, the two stories are deemed sufficiently different (decision NO). The threshold \theta_{LNK} can be determined using labeled training data.

5 Evaluation

All TDT systems are evaluated by calculating a detection cost:

C_{Det} = C_{Miss} \cdot P_{Miss} \cdot P_{target} + C_{FA} \cdot P_{FA} \cdot P_{non-target}    (10)

where C_{Miss} and C_{FA} are the costs of a miss and a false alarm. They are set to 1 and 0.1, respectively, for all tasks. P_{Miss} and P_{FA} are the conditional probabilities of a miss and a false alarm in the system output. P_{target} and P_{non-target} are the a priori target and non-target probabilities. They are set to 0.02 and 0.98 for LNK and NED.
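The NED and LNK decisions just described reduce to a few lines; the sketch below assumes sim is one of the similarity functions above and that the thresholds have been tuned on labeled data (the Jaccard overlap in the demo is only a stand-in, not the metric the systems use).

```python
# Sketch of the decision rules of Sections 3 (NED) and 4 (LNK).
def ned_decision(query, history, sim, theta_ned):
    """Return (is_new_event, score) for a new story against all prior stories."""
    best = max((sim(query, d) for d in history), default=0.0)
    score = 1.0 - best                 # high score -> nothing similar seen before
    return score > theta_ned, score

def lnk_decision(story1, story2, sim, theta_lnk):
    """Return True iff the two stories are judged to discuss the same event."""
    return sim(story1, story2) > theta_lnk

overlap = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
history = [{"plane", "crash", "victims"}, {"storm", "florida"}]
print(ned_decision({"plane", "crash"}, history, overlap, theta_ned=0.5))
print(lnk_decision({"plane", "crash"}, {"plane", "crash", "victims"}, overlap, theta_lnk=0.5))
```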
The detection cost is normalized such that a perfect system scores 0, and a random baseline scores 1:

(C_{Det})_{Norm} = C_{Det} / \min( C_{Miss} \cdot P_{target}, C_{FA} \cdot P_{non-target} ).    (11)

TDT evaluates all systems with a topic-weighted method: error probabilities are accumulated separately for each topic and then averaged. This is motivated by the different sizes of the topics. The evaluation yields two costs: the detection cost is the cost when using the actual decisions made by the system; the minimum detection cost is the cost when using the confidence scores that each system has to emit with each decision and selecting the optimal threshold based on the score. In the TDT-2002 evaluation, our Link Detection system was the best of three systems, and our New Event Detection system was ranked second of four.

6 Differences between LNK and NED

In this section, we draw on information retrieval tools to analyze the LNK and NED tasks. Motivated by the results of this analysis, we compare a number of techniques in the LNK and NED tasks; in particular, we compare the utility of two similarity measures, part-of-speech tagging, stop wording, and normalizing abbreviations and numerals. The comparisons were performed on corpora developed for TDT, including TDT2 and TDT3.

6.1 Information Retrieval and TDT

The conditions for false alarms and misses are reversed for the LNK and NED tasks. In the LNK task, incorrectly flagging two stories as being on the same event is considered a false alarm. In contrast, in the NED task, incorrectly flagging two stories as being on the same event will cause the true first story to be missed. Conversely, in LNK, incorrectly labeling two stories that are on the same event as not linked is a miss, but in the NED task, incorrectly labeling two stories on the same event as not linked can result in a false alarm where a story is incorrectly identified as a new event. The detection cost in Eqn. (10) assigns a higher weight to false alarms: C_{Miss} \cdot P_{target} = 0.02, whereas C_{FA} \cdot P_{non-target} = 0.098. A LNK system wants to minimize false alarms, and to do this it should identify stories as being linked only if they are linked, which translates to high precision. In contrast, a NED system will minimize false alarms by identifying all stories that are linked, which translates to high recall. Motivated by this discussion, we investigated the use of a number of precision- and recall-enhancing techniques with the LNK and NED systems. We investigated the use of the Clarity metric (Lavrenko et al., 2002), which was shown to correlate positively with precision.

Figure 1: CDF for Clarity and Hellinger similarity on the LNK task for on-topic and off-topic pairs.

We investigated the use of part-of-speech tagging, which was shown by Allan and Raghavan (Allan and Raghavan, 2002) to improve query clarity. In Section 6.2.1 we will show how POS helps recall. We also investigated the use of an expanded stop-list, which improves precision. We also investigated normalizing abbreviations and transforming spelled-out numbers into numerals. On the one hand, the enhanced processing list includes most of the terms in the ASR stop-list, and removing these terms will improve precision.
On the other hand, normalizing these terms will have the same effect as stemming, a recall-enhancing device (Xu and Croft, 1998; Kraaij and Pohlmann, 1996). In addition to these techniques, we also investigated the use of different similarity measures.

6.2 Similarity Measures

The systems developed for TDT primarily use cosine similarity as the similarity measure. We have developed systems based on cosine similarity (Chen et al., 2003). In work on text segmentation, Brants et al. (2002) observed that system performance was much better when the Hellinger measure was used instead. In this work, we decided to use the Clarity metric, a precision-enhancing device (Croft et al., 2001). For both our LNK and NED systems, we compared the performance of the systems using each of the similarity measures separately. Table 1 shows that for LNK, the system based on Clarity similarity performed better than the system based on Hellinger similarity; in contrast, for NED, the system based on Hellinger similarity performed better.

Figure 2: CDF for Clarity and Hellinger similarity on the NED task for on-topic and off-topic pairs.

Table 1: Effect of different similarity measures on topic-weighted minimum normalized detection costs for LNK and NED on the TDT 2002 dry run data.

System    Clarity    Hellinger    Change     % Chg
LNK       0.3054     0.3777       -0.0597    -19.2
NED       0.8419     0.5873       +0.2546    +30.24

Figure 1 shows the cumulative density function for the Hellinger and Clarity similarities for on-topic (about the same event) and off-topic (about different events) pairs for the LNK task. While there are a number of statistics to measure the overall difference between two cumulative distribution functions, we used the Kolmogorov-Smirnov distance (K-S distance; the largest difference between two cumulative distributions) for two reasons. First, the K-S distance is invariant under re-parametrization. Second, the significance of the K-S distance in case of the null hypothesis (the data sets are drawn from the same distribution) can be calculated (Press et al., 1993). The K-S distance between the on-topic and off-topic similarities is larger for Clarity similarity (cf. Table 2), indicating that it is the better metric for LNK. Figure 2 shows the cumulative distribution functions for the Hellinger and Clarity similarities in the NED task. The plot is based on pairs that contain the current story and its most similar story in the story history. When the most similar story is on the same event (approx. 75% of the cases), its similarity is part of the on-topic distribution; otherwise (approx. 25% of the cases) it is plotted as off-topic.

Table 2: K-S distance between on-topic and off-topic story pairs.

       Clarity    Hellinger    Change (%)
LNK    0.7680     0.7251       -0.0429 (-5.6)
NED    0.5353     0.6055       +0.0702 (+13.1)

Table 3: Effect of using part-of-speech on minimum normalized detection costs for LNK and NED on the TDT 2002 dry run data.

System    -PoS      +PoS      Change (%)
LNK       0.3054    0.4224    -0.117 (-38.3)
NED       0.6403    0.5873    +0.0530 (+8.3)

The K-S distance between the Hellinger on-topic and off-topic CDFs is larger than that for Clarity (cf. Table 2). For both NED and LNK, we can reject the null hypothesis for both metrics with over 99.99% confidence. To get the high precision required for a LNK system, we need to have a large separation between the on-topic and off-topic distributions.
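The K-S distance itself can be computed with an off-the-shelf routine; the sketch below assumes SciPy is available and uses invented similarity samples rather than the TDT data.

```python
# Sketch: Kolmogorov-Smirnov distance between on-topic and off-topic
# similarity samples.
from scipy.stats import ks_2samp

on_topic  = [0.81, 0.77, 0.69, 0.74, 0.88, 0.65]
off_topic = [0.22, 0.31, 0.18, 0.40, 0.27, 0.35]

stat, p_value = ks_2samp(on_topic, off_topic)
print(stat, p_value)   # large distance, small p -> the distributions differ
```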
Examining Figure 1 and Table 2 , indicates that the Clarity metric has a larger separation than the Hellinger metric. At high recall required by NED system (low CDF values for on-topic), there is a greater separation with the Hellinger metric. For example, at 10% recall, the Hellinger metric has 71 % false alarm rate as compared to 75 % for the Clarity metric. 6.2.1 Part-of-Speech (PoS) Tagging We explored the idea that noting the part-ofspeech of the terms in a document may help to reduce confusion among some of the senses of a word. During pre-processing, we tagged the terms as one of five categories: adjective, noun, proper nouns, verb, or other. A “tagged term” was then created by combining the stem and part-of-speech. For example, ‘N train’ represents the term ‘train’ when used as a noun, and ‘V train’ represents the term ‘train’ when used as a verb. We then ran our NED and LNK systems using the tagged terms. The systems were tested in the Dry Run 2002 TDT data. A comparison of the performance of the systems when part-of-speech is used against a baseline sysTable 4: Comparison of using an “ASR stop-list” and “enhanced preprocessing” for handling ASR differences. No ASR stop ASR stop Std Preproc Std Preproc LNK 0.3153 0.3054 NED 0.6062 0.6407 tem when part-of-speech is not used is shown in Table 3. For Story Link Detection, performance decreases by 38.3%, while for New Event Detection, performance improves by 8.3%. Since POS tagging helps differentiates between the different senses of the same root, it also reduces the number of matching terms between two documents. In the LNK task for example, the total number of matches drops from 177,550 to 151,132. This has the effect of placing a higher weight on terms that match, i.e. terms that have the same sense and for the TDT corpus will increase recall and decrease. Consider for example matching “food server to “food service” and “java server”. When using POS both terms will have the same similarity to the query and the use of POS will retrieve the relevant documents but will also retrieve other documents that share the same sense. 6.2.2 Stop Words A large portion of the documents in the TDT collection has been automatically transcribed using Automatic Speech Recognition (ASR) systems which can achieve over 95% accuracies. However, some of the words not recognized by the ASR tend to be very informative words that can significantly impact the detection performance (Allan et al., 1999). Furthermore, there are systematic differences between ASR and manually transcribed text, e.g., numbers are often spelled out thus “30” will be spelled out “thirty”. Another situation where ASR is different from transcribed text is abbreviations, e.g. ASR system will recognize ‘CNN” as three separate tokens “C”, “N”, and “N”. In order to account for these differences, we identified the set of tokens that are problematic for ASR. Our approach was to identify a parallel corpus of manually and automatically transcribed documents, the TDT2 corpus, and then use a statistical approach (Dunning, 1993) to identify tokens with significantly Table 5: Impact of recall and precision enhancing devices. Device Impact LNK NED ASR stop precision +3.1% -5.5 % POS recall -38.8 % 8.3 % Clarity precision +19 % -30 % different distributions in the two corpora. We compiled the problematic ASR terms into an “ASR stoplist”. This list was primarily composed of spelledout numbers, numerals and a few other terms. 
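A sketch of the statistical test behind the ASR stop-list: Dunning's log-likelihood ratio for a term whose counts differ between the manual and the ASR transcripts. The counts below are invented, and the helper is a generic implementation rather than the authors' tooling; counts must lie strictly between 0 and the corpus size for the logarithms to be defined.

```python
# Sketch of Dunning's (1993) log-likelihood ratio for comparing a term's
# frequency in two corpora (e.g. ASR vs. manual transcripts of TDT2).
from math import log

def llr(k1, n1, k2, n2):
    """k = occurrences of the term, n = total tokens, in corpus 1 and 2."""
    def loglik(k, n, p):
        return k * log(p) + (n - k) * log(1 - p)
    p, p1, p2 = (k1 + k2) / (n1 + n2), k1 / n1, k2 / n2
    return 2 * (loglik(k1, n1, p1) + loglik(k2, n2, p2)
                - loglik(k1, n1, p) - loglik(k2, n2, p))

# "thirty" is common in ASR output but rare in manual transcripts ("30" instead).
print(llr(k1=250, n1=100000, k2=20, n2=100000))
```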
Table 4 shows the topic-weighted minimum detection costs for LNK and NED on the TDT 2002 dry run data. The table shows results for standard preprocessing without an ASR stop-list and with and ASR stop-list. For Link Detection, the ASR stoplist improves results, while the same list decreases performance for New Event Detection. In (Chen et al., 2003) we investigated normalizing abbreviations and transforming spelled-out numbers into numerals, “enhanced preprocessing”, and then compared this approach with using an “ASR stoplist”. 6.2.3 Impact of Recall and Precision The previous two sections examined the impact of four different techniques on the performance of LNK and NED systems. The Part-of-speech is a recall enhancing devices while the ASR stop-list is a precision enhancing device. The enhanced preprocessing improves precision and recall. The results which are summarized in Table 5 indicate that precision enhancing devices improved the performance of the LNK task while recall enhancing devices improved the NED task. 6.3 Final Remarks on Differences In the extreme case, a perfect link detection system performs perfectly on the NED task. We gave empirical evidence that there is not necessarily such a correlation at lower accuracies. These findings are in accordance with the results reported in (Allan et al., 2000b) for topic tracking and first story detection. To test the impact of the cost function on the performance of LNK and NED systems, we repeated the evaluation with "%&' ]] and * both set to 1, and we found that the difference between the two reTable 6: Topic-weighted minimum normalized detection cost for NED when using parameter settings that are best for NED (1) and those that are best for LNK (2). Columns (3) and (4) show the detection costs using uniform costs for misses and false alarms. (1) (2) (3) (4) Metric Hel Cla Hel Cla POS  B  B ASR stop B  B  * 0.1 0.1 1 1 = % 7  !  4 , % 0.5873 0.8419 0.8268 0.9498 % change – +30.24% – +14.73% sults decreases from 30.24% to 14.73%. The result indicates that the setting (Hel,  PoS, B ASRstop) is better at recall (identifying same-event stories), while (Clarity, B PoS,  ASRstop) is better at precision (identifying different-event stories). In addition to the different costs assigned to misses and false alarms, there is a difference in the number of positives and negatives in the data set (the TDT cost function uses  +*6,/. #  Z  " ). This might explain part of the remaining difference of 14.73%. Another view on the differences is that a NED system must perform very well on the higher penalized first stories when it does not have any training data for the new event, event though it may perform worse on follow-up stories. A LNK system, however, can afford to perform worse on the first story if it compensates by performing well on follow-up stories (because here not flagged follow-up stories are considered misses and thus higher penalized than in NED). This view explains the benefits of using partof-speech information and the negative effect of the ASR stop-list on NED : different part-of-speech tags help discriminate new events from old events; removing words by using the ASR stoplist makes it harder to discriminate new events. We conjecture that the Hellinger metric helps improve recall, and in a study similar to (Allan et al., 2000b) we plan to further evaluate the impact of the Hellinger metric on a closed collection e.g. TREC. 
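A small sketch of the normalized detection cost of Eqs. (10)-(11) makes the effect of the cost weighting concrete; the miss and false-alarm rates below are invented, chosen only to contrast the TDT weighting (C_Miss = 1, C_FA = 0.1, P_target = 0.02) with the uniform-cost setting used above.

```python
# Sketch of the normalized detection cost, Eqs. (10)-(11).
def norm_detection_cost(p_miss, p_fa, c_miss=1.0, c_fa=0.1, p_target=0.02):
    p_nt = 1.0 - p_target
    cost = c_miss * p_miss * p_target + c_fa * p_fa * p_nt    # Eq. (10)
    return cost / min(c_miss * p_target, c_fa * p_nt)         # Eq. (11)

p_miss, p_fa = 0.2, 0.05
print(norm_detection_cost(p_miss, p_fa))             # TDT costs      -> 0.445
print(norm_detection_cost(p_miss, p_fa, c_fa=1.0))   # uniform costs  -> 2.65
```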
7 Conclusions and Future Work We have compared the effect of several techniques on the performance of a story link detection system and a new event detection system. Although many of the processing techniques used by our systems are the same, a number of core technologies affect the performance of the LNK and NED systems differently. The Clarity similarity measure was more effective for LNK, Hellinger similarity measure was more effective for NED, part-of-speech was more useful for NED, and stop-list adjustment was more useful for LNK. These differences may be due in part to a reversal in the tasks: a miss in LNK means the system does not flag two stories as being on the same event when they actually are, while a miss in NED means the system does flag two stories as being on the same event when actually they are not. In future work, we plan to evaluate the impact of the Hellinger metric on recall. In addition, we plan to use Anaphora resolution which was shown to improve recall (Pirkola and Jrvelin, 1996) to enhance the NED system. References James Allan and Hema Raghavan. 2002. Using part-ofspeech patterns to reduce query ambiguity. In ACM SIGIR2002, Tampere, Finland. James Allan, Hubert Jin, Martin Rajman, Charles Wayne, and et. al. 1999. Topic-based novelty detection. Summer workshop final report, Center for Language and Speech Processing, Johns Hopkins University. J. Allan, V. Lavrenko, D. Malin, and R. Swan. 2000a. Detections, bounds, and timelines: Umass and tdt-3. In Proceedings of Topic Detection and Tracking Workshop (TDT-3), Vienna, VA. James Allan, Victor Lavrenko, and Hubert Jin. 2000b. First story detection in TDT is hard. In CIKM, pages 374–381. Thorsten Brants, Francine Chen, and Ioannis Tsochantaridis. 2002. Topic-based document segmentation with probabilistic latent semantic analysis. In International Conference on Information and Knowledge Management (CIKM), McLean, VA. Jaime Carbonell, Yiming Yang, Ralf Brown, Chun Jin, and Jian Zhang. 2001. Cmu tdt report. Slides at the TDT-2001 meeting, CMU. Francine Chen, Ayman Farahat, and Thorsten Brants. 2003. Story link detection and new event detection are asymmetric. In Proceedings of NAACL-HLT-2002, Edmonton, AL. W. Bruce Croft, Stephen Cronen-Townsend, and Victor Larvrenko. 2001. Relevance feedback and personalization: A language modeling perspective. In DELOS Workshop: Personalisation and Recommender Systems in Digital Libraries. Ted E. Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61–74. Wessel Kraaij and Renee Pohlmann. 1996. Viewing stemming as recall enhancement. In ACM SIGIR1996. Victor Lavrenko, James Allan, Edward DeGuzman, Daniel LaFlamme, Veera Pollard, and Stephen Thomas. 2002. Relevance models for topic detection and tracking. In Proceedings of HLT-2002, San Diego, CA. A. Pirkola and K. Jrvelin. 1996. The effect of anaphora and ellipsis resolution on proximity searching in a text database. Information Processing and Management, 32(2):199–216. William H. Press, Saul A. Teukolsky, William Vetterling, and Brian Flannery. 1993. Numerical Recipes. Cambridge Unv. Press. Charles Wayne. 2000. Multilingual topic detection and tracking: Successful research enabled by corpora and evaluation. In Language Resources and Evaluation Conference (LREC), pages 1487–1494, Athens, Greece. Jinxi Xu and W. Bruce Croft. 1998. Corpus-based stemming using cooccurrence of word variants. ACM Transactions on Information Systems, 16(1):61–81. 
Yiming Yang, Tom Pierce, and Jaime Carbonell. 1998. A study on retrospective and on-line event detection. In Proceedings of SIGIR-98, Melbourne, Australia.
2003
30
Corpus-based Discourse Understanding in Spoken Dialogue Systems Ryuichiro Higashinaka and Mikio Nakano and Kiyoaki Aikawa† NTT Communication Science Laboratories Nippon Telegraph and Telephone Corporation 3-1 Morinosato Wakamiya Atsugi, Kanagawa 243-0198, Japan {rh,nakano}@atom.brl.ntt.co.jp, [email protected] Abstract This paper concerns the discourse understanding process in spoken dialogue systems. This process enables the system to understand user utterances based on the context of a dialogue. Since multiple candidates for the understanding result can be obtained for a user utterance due to the ambiguity of speech understanding, it is not appropriate to decide on a single understanding result after each user utterance. By holding multiple candidates for understanding results and resolving the ambiguity as the dialogue progresses, the discourse understanding accuracy can be improved. This paper proposes a method for resolving this ambiguity based on statistical information obtained from dialogue corpora. Unlike conventional methods that use hand-crafted rules, the proposed method enables easy design of the discourse understanding process. Experiment results have shown that a system that exploits the proposed method performs sufficiently and that holding multiple candidates for understanding results is effective. †Currently with the School of Media Science, Tokyo University of Technology, 1404-1 Katakuracho, Hachioji, Tokyo 192-0982, Japan. 1 Introduction For spoken dialogue systems to correctly understand user intentions to achieve certain tasks while conversing with users, the dialogue state has to be appropriately updated (Zue and Glass, 2000) after each user utterance. Here, a dialogue state means all the information that the system possesses concerning the dialogue. For example, a dialogue state includes intention recognition results after each user utterance, the user utterance history, the system utterance history, and so forth. Obtaining the user intention and the content of an utterance using only the single utterance is called speech understanding, and updating the dialogue state based on both the previous utterance and the current dialogue state is called discourse understanding. In general, the result of speech understanding can be ambiguous, because it is currently difficult to uniquely decide on a single speech recognition result out of the many recognition candidates available, and because the syntactic and semantic analysis process normally produce multiple hypotheses. The system, however, has to be able to uniquely determine the understanding result after each user utterance in order to respond to the user. The system therefore must be able to choose the appropriate speech understanding result by referring to the dialogue state. Most conventional systems uniquely determine the result of the discourse understanding, i.e., the dialogue state, after each user utterance. However, multiple dialogue states are created from the current dialogue state and the speech understanding results corresponding to the user utterance, which leads to ambiguity. When this ambiguity is ignored, the discourse understanding accuracy is likely to decrease. Our idea for improving the discourse understanding accuracy is to make the system hold multiple dialogue states after a user utterance and use succeeding utterances to resolve the ambiguity among dialogue states. 
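A minimal sketch of this idea — a dialogue act treated as a command that deterministically updates a frame-style dialogue state, with several candidate states kept in parallel — might look as follows; the frame fields and act types follow the meeting-room reservation example used later in the paper and are illustrative only.

```python
# Sketch: applying ambiguous dialogue-act candidates to a frame-based
# dialogue state, keeping all resulting states until later turns disambiguate.
import copy

def apply_act(state, act):
    """Return a new dialogue state obtained by applying one dialogue act."""
    new_state = copy.deepcopy(state)
    if act["act-type"] == "refer-start-time":
        new_state["start"] = act["time"]
    elif act["act-type"] == "refer-end-time":
        new_state["end"] = act["time"]
    elif act["act-type"] == "refer-start-and-end-time":
        new_state["start"], new_state["end"] = act["start"], act["end"]
    return new_state

state0 = {"room": None, "start": None, "end": None}
# Ambiguous speech understanding: two dialogue-act candidates for one utterance.
candidates = [{"act-type": "refer-start-time", "time": "14:00"},
              {"act-type": "refer-end-time", "time": "14:00"}]
states = [apply_act(state0, act) for act in candidates]
print(states)   # both hypotheses are kept until later utterances disambiguate
```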
Although the concept of combining multiple dialogue states and speech understanding results has already been reported (Miyazaki et al., 2002), they use intuition-based hand-crafted rules for the disambiguation of dialogue states, which are costly and sometimes lead to inaccuracy. To resolve the ambiguity of dialogue states and reduce the cost of rule making, we propose using statistical information obtained from dialogue corpora, which comprise dialogues conducted between the system and users. The next section briefly illustrates the basic architecture of a spoken dialogue system. Section 3 describes the problem to be solved in detail. Then after introducing related work, our approach is described with an example dialogue. After that, we describe the experiments we performed to verify our approach, and discuss the results. The last section summarizes the main points and mentions future work. 2 Discourse Understanding Here, we describe the basic architecture of a spoken dialogue system (Figure 1). When receiving a user utterance, the system behaves as follows. 1. The speech recognizer receives a user utterance and outputs a speech recognition hypothesis. 2. The language understanding component receives the speech recognition hypothesis. The syntactic and semantic analysis is performed to convert it into a form called a dialogue act. Table 1 shows an example of a dialogue act. In the example, “refer-start-and-end-time” is called the dialogue act type, which briefly describes the meaning of a dialogue act, and “start=14:00” and “end=15:00” are add-on information.1 1In general, a dialogue act corresponds to one sentence. However, in dialogues where user utterances are unrestricted, smaller units, such as phrases, can be regarded as dialogue acts. Speech Recognizer Language Understanding Component Discourse Understanding Component Dialogue State Dialogue Manager Speech Synthesizer Update Update Refer Refer Speech Recognition Hypothesis Dialogue Act Figure 1: Architecture of a spoken dialogue system. 3. The discourse understanding component receives the dialogue act, refers to the current dialogue state, and updates the dialogue state. 4. The dialogue manager receives the current dialogue state, decides the next utterance, and outputs the next words to speak. The dialogue state is updated at the same time so that it contains the content of system utterances. 5. The speech synthesizer receives the output of the dialogue manager and responds to the user by speech. This paper deals with the discourse understanding component. Since we are resolving the ambiguity of speech understanding from the discourse point of view and not within the speech understanding candidates, we assume that a dialogue state is uniquely determined given a dialogue state and the next dialogue act, which means that a dialogue act is a command to change a dialogue state. We also assume that the relationship between the dialogue act and the way to update the dialogue state can be easily described without expertise in dialogue system research. We found that these assumptions are reasonable from our experience in system development. Note also that this paper does not separately deal with reference resolution; we assume that it is performed by a command. A speech understanding result is considered to be equal to a dialogue act in this article. In this paper, we consider frames as representations of dialogue states. To represent dialogue states, plans have often been used (Allen and Perrault, 1980; Carberry, 1990). 
Traditionally, plan-based discourse understanding methods have been implemented mostly in keyboard-based dialogue systems, although there are some recent attempts to apply them to spoken dialogue systems as well (Allen et al., 2001; Rich et al., 2001); however, considering the current performance of speech recognizers and the limitations in task domains, we believe frame-based discourse understanding and dialogue management are sufficient (Chu-Carroll, 2000; Seneff, 2002; Bobrow et al., 1977).

Table 1: A user utterance and the corresponding dialogue act.
  User Utterance:  "from two p.m. to three p.m."
  Dialogue Act:    [act-type=refer-start-and-end-time, start=14:00, end=15:00]

3 Problem

Most conventional spoken dialogue systems uniquely determine the dialogue state after a user utterance. Normally, however, there are multiple candidates for the result of speech understanding, which leads to the creation of multiple dialogue state candidates. We believe that there are cases where it is better to hold more than one dialogue state and resolve the ambiguity as the dialogue progresses rather than to decide on a single dialogue state after each user utterance.

As an example, consider a piece of dialogue in which the user utterance "from two p.m." has been misrecognized as "uh two p.m." (Figure 2). Figure 3 shows the description of the example dialogue in detail including the system's inner states, such as dialogue acts corresponding to the speech recognition hypotheses and the intention recognition results. (In this example, for convenience of explanation, the n-best speech recognition input is not considered. An intention recognition result is one of the elements of a dialogue state.)

Figure 2: Example dialogue. (S means a system utterance and U a user utterance. Recognition results are enclosed in square brackets.)
  S1: what time would you like to reserve a meeting room?
  U1: from two p.m. [uh two p.m.]
  S2: uh-huh
  U2: to three p.m. [to three p.m.]
  S3: from two p.m. to three p.m.?
  U3: yes [yes]

After receiving the speech recognition hypothesis "uh two p.m.," the system cannot tell whether the user utterance corresponds to a dialogue act specifying the start time or the end time (da1, da2). Therefore, the system tries to obtain further information about the time. In this case, the system utters a backchannel to prompt the next user utterance to resolve the ambiguity from the discourse (a yes/no question may be an appropriate choice as well). At this stage, the system holds two dialogue states having different intention recognition results (ds1, ds2). The next utterance, "to three p.m.," is one that uniquely corresponds to a dialogue act specifying the end time (da3), and thus updates the two current dialogue states. As a result, two dialogue states still remain (ds3, ds4). If the system can tell that the previous dialogue act was about the start time at this moment, it can understand the user intention correctly. The correct understanding result, ds3, is derived from the combination of ds1 and da3, where ds1 is induced by ds0 and da1. As shown here, holding multiple understanding results can be better than just deciding on the best speech understanding hypothesis and discarding other possibilities.

In this paper, we consider a discourse understanding component that deals with multiple dialogue states. Such a component must choose the best combination of a dialogue state and a dialogue act out of all possibilities. An appropriate scoring method for the dialogue states is therefore required.
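To make this bookkeeping concrete, the following minimal Python sketch (our illustration, not the authors' implementation) treats a dialogue state as a frame of slots and a dialogue act as a command that updates it; an ambiguous speech understanding result simply spawns several candidate states, as in the ds0-ds4 example above. The slot and act names follow the example; everything else is invented for illustration.

import copy

def apply_act(state, act):
    # A dialogue act is treated as a command that deterministically updates a frame.
    new_state = copy.deepcopy(state)
    if act["act-type"] == "refer-start-time":
        new_state["start"] = act["time"]
    elif act["act-type"] == "refer-end-time":
        new_state["end"] = act["time"]
    # further act types (refer-room, confirm-time, ...) would be handled here
    return new_state

# ds0: the initial frame, before any informative user utterance.
ds0 = {"room": None, "start": None, "end": None}

# "uh two p.m." is ambiguous between two dialogue acts, so both induced
# states are kept instead of committing to one of them (ds1 and ds2).
da1 = {"act-type": "refer-start-time", "time": "14:00"}
da2 = {"act-type": "refer-end-time", "time": "14:00"}
states = [apply_act(ds0, da1), apply_act(ds0, da2)]

# "to three p.m." corresponds to a single act and updates every candidate,
# leaving ds3 and ds4 for the scoring step to rank.
da3 = {"act-type": "refer-end-time", "time": "15:00"}
states = [apply_act(s, da3) for s in states]
print(states)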
Figure 3: Detailed description of the understanding of the example dialogue.
  [System utterance (S1)] "What time would you like to reserve a meeting room?"
    [Dialogue act] [act-type=ask-time]
    [Intention recognition result candidates] 1. [room=nil, start=nil, end=nil] (ds0)
  [User utterance (U1)] "From two p.m."
    [Speech recognition hypotheses] 1. "uh two p.m."
    [Dialogue act candidates] 1. [act-type=refer-start-time, time=14:00] (da1)  2. [act-type=refer-end-time, time=14:00] (da2)
    [Intention recognition result candidates] 1. [room=nil, start=14:00, end=nil] (ds1, induced from ds0 and da1)  2. [room=nil, start=nil, end=14:00] (ds2, induced from ds0 and da2)
  [System utterance (S2)] "uh-huh"
    [Dialogue act] [act-type=backchannel]
  [User utterance (U2)] "To three p.m."
    [Speech recognition hypotheses] 1. "to three p.m."
    [Dialogue act candidates] 1. [act-type=refer-end-time, time=15:00] (da3)
    [Intention recognition result candidates] 1. [room=nil, start=14:00, end=15:00] (ds3, induced from ds1 and da3)  2. [room=nil, start=nil, end=15:00] (ds4, induced from ds2 and da3)
  [System utterance (S3)] "from two p.m. to three p.m.?"
    [Dialogue act] [act-type=confirm-time, start=14:00, end=15:00]
  [User utterance (U3)] "yes"
    [Speech recognition hypotheses] 1. "yes"
    [Dialogue act candidates] 1. [act-type=acknowledge]
    [Intention recognition result candidates] 1. [room=nil, start=14:00, end=15:00]  2. [room=nil, start=nil, end=15:00]

4 Related Work

Nakano et al. (1999) proposed a method that holds multiple dialogue states ordered by priority to deal with the problem that some utterances convey meaning over several speech intervals and that the understanding result cannot be determined at each interval end. Miyazaki et al. (2002) proposed a method combining Nakano et al.'s (1999) method and n-best recognition hypotheses, and reported improvement in discourse understanding accuracy. They used a metric similar to the concept error rate for the evaluation of discourse accuracy, comparing reference dialogue states with hypothesis dialogue states. Both these methods employ hand-crafted rules to score the dialogue states to decide the best dialogue state. Creating such rules requires expert knowledge, and is also time consuming.

There are approaches that propose statistically estimating the dialogue act type from several previous dialogue act types using N-gram probability (Nagata and Morimoto, 1994; Reithinger and Maier, 1995). Although their approaches can be used for disambiguating user utterances using discourse information, they do not consider holding multiple dialogue states.

In the context of plan-based utterance understanding (Allen and Perrault, 1980; Carberry, 1990), when there is ambiguity in the understanding result of a user utterance, an interpretation best suited to the estimated plan should be selected. In addition, the system must choose the most plausible plans from multiple possible candidates. Although we do not adopt plan-based representation of dialogue states as noted before, this problem is close to what we are dealing with. Unfortunately, however, it seems that no systematic ways to score the candidates for disambiguation have been proposed.

5 Approach

The discourse understanding method that we propose takes the same approach as Miyazaki et al. (2002). However, our method is different in that, when ordering the multiple dialogue states, the statistical information derived from the dialogue corpora is used. We propose using two kinds of statistical information: 1.
the probability of a dialogue act type sequence, and 2. the collocation probability of a dialogue state and the next dialogue act. 5.1 Statistical Information Probability of a dialogue act type sequence Based on the same idea as Nagata and Morimoto (1994) and Reithinger and Maier (1995), we use the probability of a dialogue act type sequence, namely, the N-gram probability of dialogue act types. System utterances and the transcription of user utterances are both converted to dialogue acts using a dialogue act conversion parser, then the N-gram probability of the dialogue act types is calculated. # explanation 1. whether slots asked previously by the system are changed 2. whether slots being confirmed are changed 3. whether slots already confirmed are changed 4. whether the dialogue act fills slots that do not have values 5. whether the dialogue act tries changing slots that have values 6. when 5 is true, whether slot values are not changed as a result 7. whether the dialogue act updates the initial dialogue state 5 Table 2: Seven binary attributes to classify collocation patterns of a dialogue state and the next dialogue act. Collocation probability of a dialogue state and the next dialogue act From the dialogue corpora, dialogue states and the succeeding user utterances are extracted. Then, pairs comprising a dialogue state and a dialogue act are created after converting user utterances into dialogue acts. Contrary to the probability of sequential patterns of dialogue act types that represents a brief flow of a dialogue, this collocation information expresses a local detailed flow of a dialogue, such as dialogue state changes caused by the dialogue act. The simple bigram of dialogue states and dialogue acts is not sufficient due to the complexity of the data that a dialogue state possesses, which can cause data sparseness problems. Therefore, we classify the ways that dialogue states are changed by dialogue acts into 64 classes characterized by seven binary attributes (Table 2) and compute the occurrence probability of each class in the corpora. We assume that the understanding result of the user intention contained in a dialogue state is expressed as a frame, which is common in many systems (Bobrow et al., 1977). A frame is a bundle of slots that consist of attributevalue pairs concerning a certain domain. 5The first user utterance should be treated separately, because the system’s initial utterance is an open question leading to an unrestricted utterance of a user. 5.2 Scoring of Dialogue Acts Each speech recognition hypothesis is converted to a dialogue act or acts. When there are several dialogue acts corresponding to a speech recognition hypothesis, all possible dialogue acts are created as in Figure 3, where the utterance “uh two p.m.” produces two dialogue act candidates. Each dialogue act is given a score using its linguistic and acoustic scores. The linguistic score represents the grammatical adequacy of a speech recognition hypothesis from which the dialogue act originates, and the acoustic score the acoustic reliability of a dialogue act. Sometimes, there is a case that a dialogue act has such a low acoustic or linguistic score and that it is better to ignore the act. We therefore create a dialogue act called null act, and add this null act to our list of dialogue acts. A null act is a dialogue act that does not change the dialogue state at all. 
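The collocation statistic can be illustrated with a short sketch of our own (hypothetical code, not from the paper): each (dialogue state, dialogue act) pair is reduced to the seven binary attributes of Table 2, and the occurrence probability of the resulting pattern is estimated by counting over a corpus of such pairs. The back-off floor for unseen patterns is our own assumption; the paper does not specify how they are handled.

from collections import Counter

def collocation_class(attrs):
    # attrs: the seven binary attributes of Table 2, e.g. (0, 0, 0, 1, 0, 0, 0)
    # means "the dialogue act fills a slot that did not have a value".
    assert len(attrs) == 7 and all(a in (0, 1) for a in attrs)
    return tuple(attrs)

def collocation_probability(attrs, corpus_classes, floor=1e-4):
    counts = Counter(corpus_classes)
    total = sum(counts.values())
    cls = collocation_class(attrs)
    if total == 0 or cls not in counts:
        return floor            # unseen pattern: back off to a small constant (our assumption)
    return counts[cls] / total

# Toy corpus of classes extracted from logged (dialogue state, dialogue act) pairs.
corpus = [(0, 0, 0, 0, 0, 0, 0)] * 27 + [(0, 0, 0, 1, 0, 0, 0)] * 16 + [(0, 0, 0, 0, 0, 1, 0)]
print(collocation_probability((0, 0, 0, 1, 0, 0, 0), corpus))   # 16/44, about 0.364
print(collocation_probability((1, 1, 1, 1, 1, 1, 1), corpus))   # unseen pattern -> floor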
5.3 Scoring of Dialogue States Since the dialogue state is uniquely updated by a dialogue act, if there are l dialogue acts derived from speech understanding and m dialogue states, m × l new dialogue states are created. In this case, we define the score of a dialogue state St+1 as St+1 = St + α · sact + β · sngram + γ · scol where St is the score of a dialogue state just before the update, sact the score of a dialogue act, sngram the score concerning the probability of a dialogue act type sequence, scol the score concerning the collocation probability of dialogue states and dialogue acts, and α, β, and γ are the weighting factors. 5.4 Ordering of Dialogue States The newly created dialogue states are ordered based on the score. The dialogue state that has the best score is regarded as the most probable one, and the system responds to the user by referring to it. The maximum number of dialogue states is needed in order to drop low-score dialogue states and thereby perform the operation in real time. This dropping process can be considered as a beam search in view of the entire discourse process, thus we name the maximum number of dialogue states the dialogue state beam width. 6 Experiment 6.1 Extracting Statistical Information from Dialogue Corpus Dialogue Corpus We analyzed a corpus of dialogues between naive users and a Japanese spoken dialogue system, which were collected in acoustically insulated booths. The task domain was meeting room reservation. Subjects were instructed to reserve a meeting room on a certain date from a certain time to a certain time. As a speech recognition engine, Julius3.1p1 (Lee et al., 2001) was used with its attached acoustic model. For the language model, we used a trigram trained from randomly generated texts of acceptable phrases. For system response, NTT’s speech synthesis engine FinalFluet (Takano et al., 2001) was used. The system had a vocabulary of 168 words, each registered with a category and a semantic feature in its lexicon. The system used hand-crafted rules for discourse understanding. The corpus consists of 240 dialogues from 15 subjects (10 males and 5 females), each one performing 16 dialogues. Dialogues that took more than three minutes were regarded as failures. The task completion rate was 78.3% (188/240). Extraction of Statistical Information From the transcription, we created a trigram of dialogue act types using the CMU-Cambridge Toolkit (Clarkson and Rosenfeld, 1997). Figure 3 shows an example of the trigram information starting from {refer-starttime backchannel}. The bigram information used for smoothing is also shown. The collocation probability was obtained from the recorded dialogue states and the transcription following them. Out of 64 possible patterns, we found 17 in the corpus as shown in Figure 4. Taking the case of the example dialogue in Figure 3, it happened that the sequence {refer-starttime backchannel refer-end-time} does not appear in the corpus; thus, the probability is calculated based on the bigram probability using the backoff weight, which is 0.006. The trigram probability for {referend-time backchannel refer-end-time} is 0.031. The collocation probability of the sequence ds1 + da3 →ds3 fits collocation pattern 12, where a slot having no value was changed. The sequence ds2 + da3 →ds4 fits collocation pattern 17, where a slot having a value was changed to have a different value. 
The probabilities were 0.155 and 0.009, respectively. By the simple adding of the two probabilities in common logarithms in each case, ds3 has the probability score -3.015 and ds4 -3.549, suggesting that the sequence ds3 is the most probable discourse understanding result after U2.

Table 3: An example of bigram and trigram of dialogue act types with their probability score in common logarithm.
  dialogue act type sequence (trigram)             probability score
  refer-start-time backchannel backchannel         -1.0852
  refer-start-time backchannel ask-date            -2.0445
  refer-start-time backchannel ask-start-time      -0.8633
  refer-start-time backchannel request             -2.0445
  refer-start-time backchannel refer-day           -1.7790
  refer-start-time backchannel refer-month         -0.4009
  refer-start-time backchannel refer-room          -0.8633
  refer-start-time backchannel refer-start-time    -0.7172
  dialogue act type sequence (bigram)    backoff weight    probability score
  refer-start-time backchannel           -1.1337           -0.7928
  refer-end-time backchannel              0.4570           -0.6450
  backchannel refer-end-time             -0.5567           -1.0716

Table 4: The 17 collocation patterns and their occurrence probabilities. See Table 2 for the detail of binary attributes. Attributes 1-7 are ordered from left to right.
  #    collocation pattern    occurrence probability
  1.   0 1 1 1 0 0 1          0.001
  2.   0 1 1 0 0 1 0          0.053
  3.   0 0 0 0 0 0 0          0.273
  4.   1 0 0 0 1 0 0          0.001
  5.   1 0 1 1 0 0 0          0.005
  6.   0 0 1 1 0 0 0          0.036
  7.   0 0 0 0 1 0 0          0.047
  8.   0 1 1 0 1 0 0          0.041
  9.   0 0 1 1 0 0 1          0.010
  10.  0 0 1 0 0 1 0          0.016
  11.  0 0 0 0 0 0 1          0.064
  12.  0 0 0 1 0 0 0          0.155
  13.  1 0 0 1 0 0 0          0.043
  14.  0 0 1 0 1 0 0          0.061
  15.  1 0 0 1 0 0 1          0.001
  16.  0 0 0 1 0 0 1          0.186
  17.  0 0 0 0 0 1 0          0.009

6.2 Verification of our approach

To verify the effectiveness of the proposed approach, we built a Japanese spoken dialogue system in the meeting reservation domain that employs the proposed discourse understanding method and performed dialogue experiments. The speech recognition engine was Julius3.3p1 (Lee et al., 2001) with its attached acoustic models. For the language model, we made a trigram from the transcription obtained from the corpora. The system had a vocabulary of 243. The recognition engine outputs 5-best recognition hypotheses. This time, values for sact, sngram, scol are the logarithm of the inverse number of n-best ranks (in this experiment, only the acoustic score of a dialogue act was considered), the log likelihood of dialogue act type trigram probability, and the common logarithm of the collocation probability, respectively. For the experiment, weighting factors are all set to one (α = β = γ = 1). The dialogue state beam width was 15.

We collected 256 dialogues from 16 subjects (7 males and 9 females). The speech recognition accuracy (word error rate) was 65.18%. Dialogues that took more than five minutes were regarded as failures. The task completion rate was 88.3% (226/256). (It should be noted that due to the creation of an enormous number of dialogue states in discourse understanding, the proposed system takes a few seconds to respond after the user input.) From all user speech intervals, the number of times that dialogue states below second place became first place was 120 (7.68%), showing a relative frequency of shuffling within the dialogue states.

6.3 Effectiveness of Holding Multiple Dialogue States

The main reason that we developed the proposed corpus-based discourse understanding method was that it is difficult to manually create rules to deal with multiple dialogue states. It is yet to be examined, however, whether holding multiple dialogue states is really effective for accurate discourse understanding.
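For concreteness, the following small sketch of ours reproduces the arithmetic of the scoring example in Section 6.1 using the entries of Tables 3 and 4 above; the data structures and the backoff bookkeeping are illustrative only.

import math

log_bigram = {("backchannel", "refer-end-time"): -1.0716}
backoff_weight = {("refer-start-time", "backchannel"): -1.1337}
log_trigram = {("refer-end-time", "backchannel", "refer-end-time"): math.log10(0.031)}

def trigram_logprob(a, b, c):
    if (a, b, c) in log_trigram:
        return log_trigram[(a, b, c)]
    # backoff: weight of the history bigram (a, b) plus the bigram score of (b, c)
    return backoff_weight[(a, b)] + log_bigram[(b, c)]

log_colloc = {12: math.log10(0.155), 17: math.log10(0.009)}

# ds3: unseen trigram (backoff gives about log10(0.006)) plus collocation pattern 12
score_ds3 = trigram_logprob("refer-start-time", "backchannel", "refer-end-time") + log_colloc[12]
# ds4: seen trigram (log10(0.031)) plus collocation pattern 17
score_ds4 = trigram_logprob("refer-end-time", "backchannel", "refer-end-time") + log_colloc[17]

# Roughly -3.015 and -3.554 (the text reports -3.549, presumably from unrounded values).
print(round(score_ds3, 3), round(score_ds4, 3))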
To verify that holding multiple dialogue states is effective, we fixed the speech recognizer's output to 1-best, and studied the system performance changes when the dialogue state beam width was changed from 1 to 30. When the dialogue state beam width is too large, the computational cost becomes high and the system cannot respond in real time. We therefore selected 30 for empirical reasons. The task domain and other settings were the same as in the previous experiment except for the dialogue state beam width changes.

We collected 448 dialogues from 28 subjects (4 males and 24 females), each one performing 16 dialogues. Each subject was instructed to reserve the same meeting room twice, once with the 1-beam-width system and again with the 30-beam-width system. The order of what room to reserve and what system to use was randomized. The speech recognition accuracy was 69.17%. Dialogues that took more than five minutes were regarded as failures. The task completion rates for the 1-beam-width system and the 30-beam-width system were 88.3% and 91.0%, and the average task completion times were 107.66 seconds and 95.86 seconds, respectively. A statistical hypothesis test showed that times taken to carry out a task with the 30-beam-width system are significantly shorter than those with the 1-beam-width system (Z = −2.01, p < .05). In this test, we used a kind of censored mean computed by taking the mean of the times only for subjects that completed the tasks with both systems. The population distribution was estimated by the bootstrap method (Cohen, 1995).

It may be possible to evaluate the discourse understanding by comparing the best dialogue state with the reference dialogue state, and calculate a metric such as the CER (concept error rate) as Miyazaki et al. (2002) do; however, it is not clear whether the discourse understanding can be evaluated this way, since it is not certain whether the CER correlates closely with the system's performance (Higashinaka et al., 2002). Therefore, this time, we used the task completion time and the task completion rate for comparison.

7 Discussion

Cost of creating the discourse understanding component: The best task completion rate in the experiments was 91.0% (the case of 1-best recognition input and a 30 dialogue state beam width). This high rate suggests that the proposed approach is effective in reducing the cost of creating the discourse understanding component in that no hand-crafted rules are necessary. For statistical discourse understanding, an initial system, e.g., a system that employs the proposed approach with only sact for scoring the dialogue states, is needed in order to create the dialogue corpus; however, once it has been made, the creation of the discourse understanding component requires no expert knowledge.

Effectiveness of holding multiple dialogue states: The result of the examination of dialogue state beam width changes suggests that holding multiple dialogue states shortens the task completion time. As far as task-oriented spoken dialogue systems are concerned, holding multiple dialogue states contributes to the accuracy of discourse understanding.
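The significance test described in Section 6.3 can be illustrated roughly as follows. This is a hypothetical sketch of a paired bootstrap comparison over censored completion times, not the authors' exact procedure, and the numbers are invented.

import random

def bootstrap_mean_diff(pairs, n_boot=10000, seed=0):
    # pairs: (time_with_1_beam, time_with_30_beam) for subjects who completed both tasks,
    # i.e. the censored sample; a positive difference means the 30-beam system was faster.
    rng = random.Random(seed)
    diffs = [a - b for a, b in pairs]
    observed = sum(diffs) / len(diffs)
    resampled = []
    for _ in range(n_boot):
        sample = [rng.choice(diffs) for _ in diffs]   # bootstrap resample of the differences
        resampled.append(sum(sample) / len(sample))
    # one-sided p-value: how often a resampled mean difference is <= 0
    p = sum(1 for m in resampled if m <= 0) / n_boot
    return observed, p

# Toy numbers, not the experimental data.
pairs = [(110, 96), (120, 99), (98, 101), (130, 92), (105, 90), (115, 100)]
print(bootstrap_mean_diff(pairs))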
8 Summary and Future Work We proposed a new discourse understanding method that orders multiple dialogue states created from multiple dialogue states and the succeeding speech understanding results based on statistical information obtained from dialogue corpora. The results of the experiments show that our approach is effective in reducing the cost of creating the discourse understanding component, and the advantage of keeping multiple dialogue states was also shown. There still remain several issues that we need to explore. These include the use of statistical information other than the probability of a dialogue act type sequence and the collocation probability of dialogue states and dialogue acts, the optimization of weighting factors α, β, γ, other default parameters that we used in the experiments, and more experiments in larger domains. Despite these issues, the present results have shown that our approach is promising. Acknowledgements We thank Dr. Hiroshi Murase and all members of the Dialogue Understanding Research Group for useful discussions. Thanks also go to the anonymous reviewers for their helpful comments. References James F. Allen and C. Raymond Perrault. 1980. Analyzing intention in utterances. Artif. Intel., 15:143–178. James Allen, George Ferguson, and Amanda Stent. 2001. An architecture for more realistic conversational systems. In Proc. IUI, pages 1–8. Daniel G. Bobrow, Ronald M. Kaplan, Martin Kay, Donald A. Norman, Henry Thompson, and Terry Winograd. 1977. GUS, a frame driven dialog system. Artif. Intel., 8:155–173. Sandra Carberry. 1990. Plan Recognition in Natural Language Dialogue. MIT Press, Cambridge, Mass. Junnifer Chu-Carroll. 2000. MIMIC: An adaptive mixed initiative spoken dialogue system for information queries. In Proc. 6th Applied NLP, pages 97–104. P.R. Clarkson and R. Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. In Proc. Eurospeech, pages 2707–2710. Paul R. Cohen. 1995. Empirical Methods for Artificial Intelligence. MIT Press. Ryuichiro Higashinaka, Noboru Miyazaki, Mikio Nakano, and Kiyoaki Aikawa. 2002. A method for evaluating incremental utterance understanding in spoken dialogue systems. In Proc. ICSLP, pages 829–832. Akinobu Lee, Tatsuya Kawahara, and Kiyohiro Shikano. 2001. Julius – an open source real-time large vocabulary recognition engine. In Proc. Eurospeech, pages 1691–1694. Noboru Miyazaki, Mikio Nakano, and Kiyoaki Aikawa. 2002. Robust speech understanding using incremental understanding with n-best recognition hypotheses. In SIG-SLP-40, Information Processing Society of Japan., pages 121–126. (in Japanese). Masaaki Nagata and Tsuyoshi Morimoto. 1994. First steps toward statistical modeling of dialogue to predict the speech act type of the next utterance. Speech Communication, 15:193–203. Mikio Nakano, Noboru Miyazaki, Jun-ichi Hirasawa, Kohji Dohsaka, and Takeshi Kawabata. 1999. Understanding unsegmented user utterances in real-time spoken dialogue systems. In Proc. 37th ACL, pages 200–207. Norbert Reithinger and Elisabeth Maier. 1995. Utilizing statistical dialogue act processing in Verbmobil. In Proc. 33th ACL, pages 116–121. Charles Rich, Candace Sidner, and Neal Lesh. 2001. COLLAGEN: Applying collaborative discourse theory. AI Magazine, 22(4):15–25. Stephanie Seneff. 2002. Response planning and generation in the MERCURY flight reservation system. Computer Speech and Language, 16(3–4):283–312. Satoshi Takano, Kimihito Tanaka, Hideyuki Mizuno, Masanobu Abe, and ShiN’ya Nakajima. 2001. 
A Japanese TTS system based on multi-form units and a speech modification algorithm with harmonics reconstruction. IEEE Transactions on Speech and Processing, 9(1):3–10. Victor W. Zue and James R. Glass. 2000. Conversational interfaces: Advances and challenges. Proceedings of IEEE, 88(8):1166–1180.
Extracting Key Semantic Terms from Chinese Speech Query for Web Searches Gang WANG National University of Singapore [email protected] Tat-Seng CHUA National University of Singapore [email protected] Yong-Cheng WANG Shanghai Jiao Tong University, China, 200030 [email protected] Abstract This paper discusses the challenges and proposes a solution to performing information retrieval on the Web using Chinese natural language speech query. The main contribution of this research is in devising a divide-and-conquer strategy to alleviate the speech recognition errors. It uses the query model to facilitate the extraction of main core semantic string (CSS) from the Chinese natural language speech query. It then breaks the CSS into basic components corresponding to phrases, and uses a multi-tier strategy to map the basic components to known phrases in order to further eliminate the errors. The resulting system has been found to be effective. 1 Introduction We are entering an information era, where information has become one of the major resources in our daily activities. With its wide spread adoption, Internet has become the largest information wealth for all to share. Currently, most (Chinese) search engines can only support term-based information retrieval, where the users are required to enter the queries directly through keyboards in front of the computer. However, there is a large segment of population in China and the rest of the world who are illiterate and do not have the skills to use the computer. They are thus unable to take advantage of the vast amount of freely available information. Since almost every person can speak and understand spoken language, the research on “(Chinese) natural language speech query retrieval” would enable average persons to access information using the current search engines without the need to learn special computer skills or training. They can simply access the search engine using common devices that they are familiar with such as the telephone, PDA and so on. In order to implement a speech-based information retrieval system, one of the most important challenges is how to obtain the correct query terms from the spoken natural language query that convey the main semantics of the query. This requires the integration of natural language query processing and speech recognition research. Natural language query processing has been an active area of research for many years and many techniques have been developed (Jacobs and Rau1993; Kupie, 1993; Strzalkowski, 1999; Yu et al, 1999). Most of these techniques, however, focus only on written language, with few devoted to the study of spoken language query processing. Speech recognition involves the conversion of acoustic speech signals to a stream of text. Because of the complexity of human vocal tract, the speech signals being observed are different, even for multiple utterances of the same sequence of words by the same person (Lee et al 1996). Furthermore, the speech signals can be influenced by the differences across different speakers, dialects, transmission distortions, and speaking environments. These have contributed to the noise and variability of speech signals. As one of the main sources of errors in Chinese speech recognition come from substitution (Wang 2002; Zhou 1997), in which a wrong but similar sounding term is used in place of the correct term, confusion matrix has been used to record confused sound pairs in an attempt to eliminate this error. 
Confusion matrix has been employed effectively in spoken document retrieval (Singhal et al, 1999 and Srinivasan et al 2000) and to minimize speech recognition errors (Shen et al, 1998). However, when such method is used directly to correct speech recognition errors, it tends to bring in too many irrelevant terms (Ng 2000). Because important terms in a long document are often repeated several times, there is a good chance that such terms will be correctly recognized at least once by a speech recognition engine with a reasonable level of word recognition rate. Many spoken document retrieval (SDR) systems took advantage of this fact in reducing the speech recognition and matching errors (Meng et al 2001; Wang et al 2001; Chen et al 2001). In contrast to SDR, very little work has been done on Chinese spoken query processing (SQP), which is the use of spoken queries to retrieval textual documents. Moreover, spoken queries in SQP tend to be very short with few repeated terms. In this paper, we aim to integrate the spoken language and natural language research to process spoken queries with speech recognition errors. The main contribution of this research is in devising a divide-and-conquer strategy to alleviate the speech recognition errors. It first employs the Chinese query model to isolate the Core Semantic String (CSS) that conveys the semantics of the spoken query. It then breaks the CSS into basic components corresponding to phrases, and uses a multitier strategy to map the basic components to known phrases in a dictionary in order to further eliminate the errors. In the rest of this paper, an overview of the proposed approach is introduced in Section 2. Section 3 describes the query model, while Section 4 outlines the use of multi-tier approach to eliminate errors in CSS. Section 5 discusses the experimental setup and results. Finally, Section 6 contains our concluding remarks. 2 Overview of the proposed approach There are many challenges in supporting surfing of Web by speech queries. One of the main challenges is that the current speech recognition technology is not very good, especially for average users that do not have any speech trainings. For such unlimited user group, the speech recognition engine could achieve an accuracy of less than 50%. Because of this, the key phrases we derived from the speech query could be in error or missing the main semantic of the query altogether. This would affect the effectiveness of the resulting system tremendously. Given the speech-to-text output with errors, the key issue is on how to analyze the query in order to grasp the Core Semantic String (CSS) as accurately as possible. CSS is defined as the key term sequence in the query that conveys the main semantics of the query. For example, given the query: “         !" #$ %& (') ” (Please tell me the information on how the U.S. separates the most-favored-nation status from human rights issue in china). The CSS in the query is underlined. We can segment the CSS into several basic components that correspond to key concepts such as: * (U.S.),  (China), + (human rights issue), !" # $ (the most-favored-nation status) and %& (separate). Because of the difficulty in handling speech recognition errors involving multiple segments of CSSs, we limit our research to queries that contain only one CSS string. However, we allow a CSS to include multiple basic components as depicted in the above example. 
This is reasonable as most queries posed by the users on the Web tend to be short with only a few characters (Pu 2000). Thus the accurate extraction of CSS and its separation into basic components is essential to alleviate the speech recognition errors. First of all, isolating CSS from the rest of speech enables us to ignore errors in other parts of speech, such as the greetings and polite remarks, which have no effects on the outcome of the query. Second, by separating the CSS into basic components, we can limit the propagation of errors, and employ the set of known phrases in the domain to help correct the errors in these components separately. Figure 1: Overview of the proposed approach To achieve this, we process the query in three main stages as illustrated in Figure 1. First, given the user’s oral query, the system uses a speech recognition engine to convert the speech to text. Second, we analyze the query using a query model (QM) to extract CSS from the query with minimum errors. QM defines the structures and some of the standard phrases used in typical queries. Third, we divide the CSS into basic components, and employ a multi-tier approach to match the baQM Confusion matrix Phrase Dictionary Multi-Tier mapping Basic Components Speech Query CSS sic components to the nearest known phrases in order to correct the speech recognition errors. The aim here is to improve recall without excessive lost in precision. The resulting key components are then used as query to standard search engine. The following sections describe the details of our approach. 3 Query Model (QM) Query model (QM) is used to analyze the query and extract the core semantic string (CSS) that contains the main semantic of the query. There are two main components for a query model. The first is query component dictionary, which is a set of phrases that has certain semantic functions, such as the polite remarks, prepositions, time etc. The other component is the query structure, which defines a sequence of acceptable semantically tagged tokens, such as “Begin, Core Semantic String, Question Phrase, and End”. Each query structure also includes its occurrence probability within the query corpus. Table 2 gives some examples of query structures. 3.1 Query Model Generation In order to come up with a set of generalized query structures, we use a query log of typical queries posed by users. The query log consists of 557 queries, collected from twenty-eight human subjects at the Shanghai Jiao Tong University (Ying 2002). Each subject is asked to pose 20 separate queries to retrieve general information from the Web. After analyzing the queries, we derive a query model comprising 51 query structures and a set of query components. For each query structure, we compute its probability of occurrence, which is used to determine the more likely structure containing CSS in case there are multiple CSSs found. As part of the analysis of the query log, we classify the query components into ten classes, as listed in Table 1. These ten classes are called semantic tags. They can be further divided into two main categories: the closed class and open class. Closed classes are those that have relatively fixed word lists. These include question phrases, quantifiers, polite remarks, prepositions, time and commonly used verb and subject-verb phrases. We collect all the phrases belonging to closed classes from the query log and store them in the query component dictionary. The open class is the CSS, which we do not know in advance. 
CSS typically includes person’s names, events and country’s names etc. Table 1: Definition and Examples of Semantic tags Sem Tag Name of tag Example 1. Verb-Object Phrase  give   (me) 2. Question Phrase  (is there ) 3. Question Field (news),  (report) 4. Quantifier  (some) 5. Verb Phrase  (find)   collect  6. Polite Remark   (please help me) 7. Preposition  (about),  (about) 8. Subject-Verb phrase  (I)  (want) 9. Core Semantic String 9.11  (9.11 event) 10. Time ! (today) Table 2: Examples of Query Structure 1 Q1: 0, 2, 7, 9, 3, 0: 0.0025,   9.11  " 2 7 9 3 Is there any information on September 11? 2 Q2: 0, 1, 7, 9, 3, 0 :0.01   #$% "  1 7 9 3 Give me some information about Ben laden. Given the set of sample queries, a heuristic rulebased approach is used to analyze the queries, and break them into basic components with assigned semantic tags by matching the words listed in Table 1. Any sequences of words or phrases not found in the closed class are tagged as CSS (with Semantic Tag 9). We can thus derive the query structures of the form given in Table 2. 3.2 Modeling of Query Structure as FSA Due to speech recognition errors, we do not expect the query components and hence the query structure to be recognized correctly. Instead, we parse the query structure in order to isolate and extract CSS. To facilitate this, we employ the Finite State Automata (FSA) to model the query structure. FSA models the expected sequences of tokens in typical queries and annotate the semantic tags, including CSS. A FSA is defined for each of the 51 query structures. An example of FSA is given in Figure 2. Because CSS is an open set, we do not know its content in advance. Instead, we use the following two rules to determine the candidates for CSS: (a) it is an unknown string not present in the Query Component Dictionary; and (b) its length is not less than two, as the average length of concepts in Chinese is greater than one (Wang 1992). At each stage of parsing the query using FSA (Hobbs et al 1997), we need to make decision on which state to proceed and how to handle unexpected tokens in the query. Thus at each stage, FSA needs to perform three functions: a) Goto function: It maps a pair consisting of a state and an input symbol into a new state or the fail state. We use G(N,X) =N’ to define the goto function from State N to State N’, given the occurrence of token X. b) Fail function: It is consulted whenever the goto function reports a failure when encountering an unexpected token. We use f(N) =N’ to represent the fail function. c) Output function: In the FSA, certain states are designated as output states, which indicate that a sequence of tokens has been found and are tagged with the appropriate semantic tag. To construct a goto function, we begin with a graph consisting of one vertex which represents State 0.We then enter each token X into the graph by adding a directed path to the graph that begins at the start state. New vertices and edges are added to the graph so that there will be, starting at the start state, a path in the graph that spells out the token X. The token X is added to the output function of the state at which the path terminates. For example, suppose that our Query Component Dictionary consists of seven phrases as follows: “  (please help me);  (some);  (about);  (news); (collect);  (tell me);  (what do you have)”. Adding these tokens into the graph will result in a FSA as shown in Figure 2. 
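As a rough illustration (our own code, not the system's), the dictionary lookup and the CSS rule just described might be realized as follows. English words stand in for the Chinese phrases, greedy longest-match over a trie replaces the full goto/fail construction, and the tag values follow Table 1.

def build_trie(dictionary):
    # dictionary: phrase -> semantic tag; the trie plays the role of the goto function.
    root = {}
    for phrase, tag in dictionary.items():
        node = root
        for ch in phrase:
            node = node.setdefault(ch, {})
        node["TAG"] = (phrase, tag)
    return root

def parse(query, trie, css_tag=9, min_css_len=2):
    tokens, i, unknown = [], 0, ""
    def flush():
        nonlocal unknown
        u = unknown.strip()
        if len(u) >= min_css_len:        # rule (b): a CSS candidate must have length >= 2
            tokens.append((u, css_tag))
        unknown = ""
    while i < len(query):
        node, j, match = trie, i, None
        while j < len(query) and query[j] in node:   # follow goto transitions
            node = node[query[j]]
            j += 1
            if "TAG" in node:
                match = (j, node["TAG"])             # remember the longest phrase ending here
        if match:
            flush()                                  # rule (a): unknown span becomes a CSS token
            i, (phrase, tag) = match
            tokens.append((phrase.strip(), tag))
        else:                                        # failure: the character joins the unknown buffer
            unknown += query[i]
            i += 1
    flush()
    return tokens

trie = build_trie({"please help me ": 6, "collect ": 5, "some ": 4, "about ": 7, "news": 3})
print(parse("please help me collect some about bin laden news", trie))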
The path from State 0 to State 3 spells out the phrase "  (Please help me)", and on completion of this path, we associate its output with semantic tag 6. Similarly, the output of "  (some)" is associated with State 5, and semantic tag 4, and so on.

Figure 2: FSA for part of Query Component Dictionary. (Note: indicates the semantic tag.)

We now use an example to illustrate the process of parsing the query. Suppose the user issues a speech query: "         " (please help me to collect some information about Bin Laden). However, the result of speech recognition with errors is: " (please) (help)  (me) (receive)  (send)   (some)  (about)  (half)  (pull)  (light)  (of)  (news)". Note that there are 4 mis-recognized characters which are underlined.

The FSA begins with State 0. When the system encounters the sequence of characters (please) (help)  (me), the state changes from 0 to 1, 2 and eventually to 3. At State 3, the system recognizes a polite remark phrase and outputs a token with semantic tag 6. Next, when the system meets the character (receive), it will transit to State 10, because of g(0, )=10. When the system sees the next character  (send), which does not have a corresponding transition rule, the goto function reports a failure. Because the length of the string is 2 and the string is not in the Query Component Dictionary, the semantic tag 9 is assigned to the token "  " according to the definition of CSS. By repeating the above process, we obtain the following result:

           6 9 4 7 9 3

Here the semantic tags are as defined in Table 1. It is noted that because of speech recognition errors, the system detected two CSSs, and both of them contain speech recognition errors.

3.3 CSS Extraction by Query Model

Given that we may find multiple CSSs, the next stage is to analyze the CSSs found along with their surrounding context in order to determine the most probable CSS. The approach is based on the premise that choosing the best sense for an input vector amounts to choosing the most probable sense given that vector. The input vector i has three components: left context (Li), the CSS itself (CSSi), and right context (Ri). The probability of such a structure occurring in the Query Model is as follows:

  si = Σ_{j=0}^{n} (Cij * pj)    (1)

where Cij is set to 1 if the input vector i (Li, Ri) matches the two corresponding left and right CSS contexts of the query structure j, and 0 otherwise. pj is the probability of occurrence of the jth query structure, and n is the total number of the structures in the Query Model. Note that Equation (1) gives a detected CSS higher weight if it matches more query structures with higher occurrence probabilities. We simply select the best CSSi, i.e., the one that maximizes si (argmax_i si), according to Eqn (1).

For illustration, let's consider the above example with 2 detected CSSs. The two CSS vectors are: [6, 9, 4] and [7, 9, 3]. From the Query Model, we know that the probability of occurrence, pj, of structure [6, 9, 4] is 0, and that of structure [7, 9, 3] is 0.03, with the latter matching only one structure. Hence the si values for them are 0 and 0.03 respectively. Thus the most probable core semantic structure is [7, 9, 3] and the CSS "  (half)  (pull)  (light)" is extracted.
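Equation (1) can be evaluated with a few lines of code; the sketch below is ours, and the structure inventory and probabilities are toy values chosen to mirror the example rather than the actual query model.

def css_score(left_tag, right_tag, query_structures):
    # query_structures: list of (tag_sequence, probability) pairs from the query model.
    # C_ij is 1 when the detected CSS's left/right context tags match the tags
    # around the CSS slot (tag 9) in structure j; Eqn (1) sums p_j over those j.
    score = 0.0
    for tags, p in query_structures:
        for k, t in enumerate(tags):
            if t == 9 and 0 < k < len(tags) - 1 and tags[k - 1] == left_tag and tags[k + 1] == right_tag:
                score += p
                break
    return score

# Hypothetical structures; only the second one has the (7, 9, 3) context.
structures = [((0, 7, 9, 3, 0), 0.03), ((0, 1, 9, 3, 0), 0.005)]

# The two CSS candidates detected in the example, with contexts (6, 4) and (7, 3).
candidates = {"[6,9,4]": (6, 4), "[7,9,3]": (7, 3)}
scores = {name: css_score(l, r, structures) for name, (l, r) in candidates.items()}
print(max(scores, key=scores.get), scores)   # "[7,9,3]" wins, as in the text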
4 Query Terms Generation

Because of speech recognition error, the CSS obtained is likely to contain errors, or in the worst case, miss the main semantics of the query altogether. We now discuss how we alleviate the errors in CSS for the former case. We will first break the CSS into one or more basic semantic parts, and then apply the multi-tier method to map the query components to known phrases.

4.1 Breaking CSS into Basic Components

In many cases, the CSS obtained may be made up of several semantic components equivalent to base noun phrases. Here we employ a technique based on Chinese cut marks (Wang 1992) to perform the segmentation. The Chinese cut marks are tokens that can separate a Chinese sentence into several semantic parts. Zhou (1997) used such a technique to detect new Chinese words, and reported good results with precision and recall of 92% and 70% respectively. By separating the CSS into basic key components, we can limit the propagation of errors.

4.2 Multi-tier query term mapping

In order to further eliminate the speech recognition errors, we propose a multi-tier approach to map the basic components in CSS into known phrases by using a combination of matching techniques. To do this, we need to build up a phrase dictionary containing typical concepts used in general and specific domains. Most basic CSS components should be mapped to one of these phrases. Thus even if a basic component contains errors, as long as we can find a sufficiently similar phrase in the phrase dictionary, we can use this in place of the erroneous CSS component, thus eliminating the errors. We collected a phrase dictionary containing about 32,842 phrases, covering mostly base noun phrases and named entities. The phrases are derived from two sources. We first derived a set of common phrases from the digital dictionary and the logs in the search engine used at the Shanghai Jiao Tong University. We also derived a set of domain specific phrases by extracting the base noun phrases and named entities from the on-line news articles obtained during the period. This approach is reasonable as in practice we can use recent web or news articles to extract concepts to update the phrase dictionary.

Given the phrase dictionary, the next problem then is to map the basic CSS components to the nearest phrases in the dictionary. As the basic components may contain errors, we cannot match them exactly just at the character level. We thus propose to match each basic component with the known phrases in the dictionary at three levels: (a) character level; (b) syllable string level; and (c) confusion syllable string level. The purpose of matching at levels b and c is to overcome the homophone problem in CSS. For example, "  (Laden)" is wrongly recognized as "   (pull lamp)" by the speech recognition engine. Such errors cannot be resolved at the character matching level, but they can probably be matched at the syllable string level. The confusion matrix is used to further reduce the effect of speech recognition errors due to similar sounding characters.

To account for possible errors in CSS components, we perform similarity, instead of exact, matching at the three levels. Given the basic CSS component qi, and a phrase cj in the dictionary, we compute:

  Sim(qi, cj) = [LCS(qi, cj) / max{|qi|, |cj|}] * Σ_{k=0}^{LCS(qi,cj)} Mk    (2)

where LCS(qi, cj) gives the number of characters/syllables matched between qi and cj in the order of their appearance using the longest common subsequence matching (LCS) algorithm (Cormen et al 1990). Mk is introduced to account for the similarity between the two matching units, and is dependent on the level of matching.
If the matching is performed at the character or syllable string levels, the basic matching unit is one character or one syllable and the similarity between the two matching units is 1. If the matching is done at the confusion syllable string level, Mk is the corresponding coefficients in the confusion matrix. Hence LCS (qi,cj) gives the degree of match between qi and cj, normalized by the maximum length of qi or cj; and ΣM gives the degree of similarity between the units being matched. The three level of matching also ranges from being more exact at the character level, to less exact at the confusion syllable level. Thus if we can find a relevant phrase with sim(qi,cj)>  at the higher character level, we will not perform further matching at the lower levels. Otherwise, we will relax the constraint to perform the matching at successively lower levels, probably at the expense of precision. The detail of algorithm is listed as follows: Input: Basic CSS Component, qi a. Match qi with phrases in dictionary at character level using Eqn.(2). b. If we cannot find a match, then match qi with phrases at the syllable level using Eqn.(2). c. If we still cannot find a match, match qi with phrases at the confusion syllable level using Eqn.(2). d. If we found a match, set q’i=cj; otherwise set q’i=qi. For example, given a query: “      ” (please tell me some news about Iraq). If the query is wrongly recognized as “         ”. If, however, we could correctly extract the CSS “   (Iraq) from this mis-recognized query, then we could ignore the speech recognition errors in other parts of the above query. Even if there are errors in the CSS extracted, such as “  (chen)  (waterside)” instead of “  (chen shui bian)”, we could apply the syllable string level matching to correct the homophone errors. For CSS errors such as “ ! (corrupt) " (usually)” instead of the correct CSS “ #$% (Taliban)”, which could not be corrected at the syllable string matching level, we could apply the confusion syllable string matching to overcome this error. 5 Experiments and analysis As our system aims to correct the errors and extract CSS components in spoken queries, it is important to demonstrate that our system is able to handle queries of different characteristics. To this end, we devised two sets of test queries as follows. a) Corpus with short queries We devised 10 queries, each containing a CSS with only one basic component. This is the typical type of queries posed by the users on the web. We asked 10 different people to “speak” the queries, and used the IBM ViaVoice 98 to perform the speech to text conversion. This gives rise to a collection of 100 spoken queries. There is a total of 1,340 Chinese characters in the test queries with a speech recognition error rate of 32.5%. b) Corpus with long queries In order to test on queries used in standard test corpuses, we adopted the query topics (1-10) employed in TREC-5 Chinese-Language track. Here each query contains more than one key semantic component. We rephrased the queries into natural language query format, and asked twelve subjects to “read” the queries. We again used the IBM ViaVoice 98 to perform the speech recognition on the resulting 120 different spoken queries, giving rise to a total of 2,354 Chinese characters with a speech recognition error rate of 23.75%. We devised two experiments to evaluate the performance of our techniques. The first experiment was designed to test the effectiveness of our query model in extracting CSSs. 
The second was designed to test the accuracy of our overall system in extracting basic query components.

5.1 Test 1: Accuracy of extracting CSSs

The test results show that by using our query model, we could correctly extract 99% and 96% of CSSs from the spoken queries for the short and long query categories respectively. The errors are mainly due to the wrong tagging of some query components, which caused the query model to miss the correct query structure, or match to a wrong structure. For example, consider the query "    # $%   " (please tell me some news about Taliban). Suppose it is wrongly recognized as:

     $ %   
  9 7 9 10

which is a nonsensical sentence. Since the probabilities of occurrence of both query structures [0,9,7] and [7,9,10] are 0, we could not find the CSS at all. This error is mainly due to the mis-recognition of the last query component "  (news)" as "  (afternoon)". It confuses the Query Model, which could not find the correct CSS. The overall results indicate that there are fewer errors in short queries as such queries contain only one CSS component. This is encouraging as in practice most users issue only short queries.

5.2 Test 2: Accuracy of extracting basic query components

In order to test the accuracy of extracting basic query components, we asked one subject to manually divide the CSS into basic components, and used that as the ground truth. We compared the following two methods of extracting CSS components:

a) As a baseline, we simply performed the standard stop word removal and divided the query into components with the help of a dictionary. However, there is no attempt to correct the speech recognition errors in these components. Here we assume that the natural language query is a bag of words with stop words removed (Ricardo, 1999). Currently, most search engines are based on this approach.

b) We applied our query model to extract CSS and employed the multi-tier mapping approach to extract and correct the errors in the basic CSS components.

Tables 3 and 4 give the comparisons between Methods (a) and (b), which clearly show that our method outperforms the baseline method by over 20.2% and 20.0% in F1 measure for the short and long queries respectively.

Table 3: Comparison of Methods a and b for short query
            Average Precision   Average Recall   F1
  Method a  31%                 58.5%            40.5%
  Method b  53.98%              69.4%            60.7%
            +22.98%             +10.9%           +20.2%

Table 4: Comparison of Methods a and b for long query
            Average Precision   Average Recall   F1
  Method a  39.23%              85.99%           53.9%
  Method b  67.75%              81.31%           73.9%
            +28.52%             -4.68%           +20.0%

The improvement is largely due to the use of our approach to extract CSS and correct the speech recognition errors in the CSS components. More detailed analysis of the long queries in Table 4 reveals that our method performs worse than the baseline method in recall. This is mainly due to errors in extracting and breaking CSS into basic components. Although we used the multi-tier mapping approach to reduce the errors from speech recognition, its improvement is insufficient to offset the loss in recall due to errors in extracting CSS. On the other hand, for the short query cases, without the errors in breaking CSS, our system is more effective than the baseline in recall. It is noted that in both cases, our system performs significantly better than the baseline in terms of precision and F1 measures.
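The multi-tier matching of Section 4.2 that underlies Method (b) can be sketched as follows. This is a simplified illustration of ours: the per-unit similarity Mk is fixed to 1, the confusion-syllable tier is omitted because it needs the learned confusion matrix, the threshold is arbitrary, and the pinyin table and the Chinese characters are our own reconstruction of the Laden / "pull lamp" homophone example.

def lcs_len(a, b):
    # Length of the longest common subsequence (Cormen et al.), used by Eqn (2).
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def similarity(q, c, unit_sim=1.0):
    # Eqn (2) with a constant per-unit similarity Mk; the confusion-syllable tier
    # would draw unit_sim from the confusion matrix instead.
    m = lcs_len(q, c)
    return (m / max(len(q), len(c))) * (m * unit_sim)

def multi_tier_match(component, dictionary, to_syllables, threshold=0.6):
    # Tier 1: characters; tier 2: syllable strings.
    for convert in (lambda s: s, to_syllables):
        score, best = max((similarity(convert(component), convert(c)), c) for c in dictionary)
        if score > threshold:
            return best
    return component          # no sufficiently similar phrase: keep the component unchanged

# Toy pinyin table standing in for a real syllable converter.
pinyin = {"拉登": "la1deng1", "拉灯": "la1deng1", "伊拉克": "yi1la1ke4"}
print(multi_tier_match("拉灯", ["拉登", "伊拉克"], lambda s: pinyin.get(s, s)))   # -> 拉登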
6 Conclusion Although research on natural language query processing and speech recognition has been carried out for many years, the combination of these two approaches to help a large population of infrequent users to “surf the web by voice” has been relatively recent. This paper outlines a divide-and-conquer approach to alleviate the effect of speech recognition error, and in extracting key CSS components for use in a standard search engine to retrieve relevant documents. The main innovative steps in our system are: (a) we use a query model to isolate CSS in speech queries; (b) we break the CSS into basic components; and (c) we employ a multi-tier approach to map the basic components to known phrases in the dictionary. The tests demonstrate that our approach is effective. The work is only the beginning. Further research can be carried out as follows. First, as most of the queries are about named entities such as the persons or organizations, we need to perform named entity analysis on the queries to better extract its structure, and in mapping to known named entities. Second, most speech recognition engine will return a list of probable words for each syllable. This could be incorporated into our framework to facilitate multi-tier mapping. References Berlin Chen, Hsin-min Wang, and Lin-Shan Lee (2001), “Improved Spoken Document Retrieval by Exploring Extra Acoustic and Linguistic Cues”, Proceedings of the 7th European Conference on Speech Communication and Technology located at http://homepage.iis.sinica.edu.tw/ Paul S. Jacobs and Lisa F. Rau (1993), Innovations in Text Interpretation, Artificial Intelligence, Volume 63, October 1993 (Special Issue on Text Understanding) pp.143-191 Thomas H. Cormen, Charles E. Leiserson and Ronald L. Rivest (1990), “Introduction to algorithms”, published by McGraw-Hill. Jerry R. Hobbs, et al,(1997) , FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural-Language Text, FiniteState Language Processing, Emmanuel Roche and Yves Schabes, pp. 383 - 406, MIT Press, Julian Kupiec (1993), MURAX: “A robust linguistic approach for question answering using an one-line encyclopedia”, Proceedings of 16th annual conference on Research and Development in Information Retrieval (SIGIR), pp.181-190 Chin-Hui Lee et al (1996), “A Survey on Automatic Speech Recognition with an Illustrative Example On Continuous Speech Recognition of Mandarin”, in Computational Linguistics and Chinese Language Processing, pp. 1-36 Helen Meng and Pui Yu Hui (2001), “Spoken Document Retrieval for the languages of Hong Kong”, International Symposium on Intelligent Multimedia, Video and Speech Processing, May 2001, located at www.se.cuhk.edu.hk/PEOPLE/ Kenney Ng (2000), “Information Fusion For Spoken Document Retrieval”, Proceedings of ICASSP’00, Istanbul, Turkey, Jun, located at http://www.sls.lcs.mit.edu/sls/publications/ Hsiao Tieh Pu (2000), “Understanding Chinese Users’ Information Behaviors through Analysis of Web Search Term Logs”, Journal of Computers, pp.75-82 Liqin, Shen, Haixin Chai, Yong Qin and Tang Donald (1998), “Character Error Correction for Chinese Speech Recognition System”, Proceedings of International Symposium on Chinese Spoken Language Processing Symposium Proceedings, pp.136-138 Amit Singhal and Fernando Pereira (1999), “Document Expansion for Speech Retrieval”, Proceedings of the 22nd Annual International conference on Research and Development in Information Retrieval (SIGIR), pp. 
34~41 Tomek Strzalkowski (1999), “Natural language information retrieval”, Boston: Kluwer Publishing. Gang Wang (2002), “Web surfing by Chinese Speech”, Master thesis, National University of Singapore. Hsin-min Wang, Helen Meng, Patrick Schone, Berlin Chen and Wai-Kt Lo (2001), “Multi-Scale Audio Indexing for translingual spoken document retrieval”, Proceedings of IEEE International Conference on Acoustics, Speech, Signal processing , Salt Lake City, USA, May 2001, located at http://www.iis.sinica.edu.tw/~whm/ Yongcheng Wang (1992), Technology and basis of Chinese Information Processing, Shanghai Jiao Tong University Press Baeza-Yates, Ricardo and Ribeiro-Neto, Berthier (1999), “Introduction to modern information retrieval”, Published by London: Library Association Publishing. Hai-nan Ying, Yong Ji and Wei Shen, (2002), “report of query log”, internal report in Shanghai Jiao Tong University Guodong Zhou and Kim Teng Lua (1997) Detection of Unknown Chinese Words Using a Hybrid Approach Computer Processing of Oriental Languages, Vol 11, No 1, 1997, 63-75 Guodong Zhou (1997), “Language Modelling in Mandarin Speech Recognition”, Ph.D. Thesis, National University of Singapore.
2003
32
Flexible Guidance Generation using User Model in Spoken Dialogue Systems Kazunori Komatani Shinichi Ueno Tatsuya Kawahara Hiroshi G. Okuno Graduate School of Informatics Kyoto University Yoshida-Hommachi, Sakyo, Kyoto 606-8501, Japan fkomatani,ueno,kawahara,[email protected]
Abstract
We address appropriate user modeling for generating cooperative responses to each user in spoken dialogue systems. Unlike previous studies that focus on the user's knowledge or on typical kinds of users, the user model we propose is more comprehensive. Specifically, we set up three dimensions of user models: skill level to the system, knowledge level on the target domain and the degree of hastiness. Moreover, the models are derived automatically by decision tree learning using real dialogue data collected by the system. We obtained reasonable classification accuracy for all dimensions. Dialogue strategies based on the user modeling are implemented in the Kyoto City Bus Information System that has been developed at our laboratory. Experimental evaluation shows that cooperative responses adapted to individual users serve as good guidance for novice users without increasing the dialogue duration for skilled users.
1 Introduction
A spoken dialogue system is one of the promising applications of speech recognition and natural language understanding technologies. A typical task of spoken dialogue systems is database retrieval, and some IVR (interactive voice response) systems using speech recognition are already in practical use as its simplest form. With the spread of cellular phones, spoken dialogue systems accessible by telephone enable us to obtain information from various places without any special apparatus. However, the speech interface involves two inevitable problems: one is speech recognition errors, and the other is that only a limited amount of information can be conveyed at once in spoken communication. Therefore, the dialogue strategies, which determine when to provide guidance and what the system should tell the user, are essential factors. To cope with speech recognition errors, several confirmation strategies have been proposed: confirmation management methods based on confidence measures of speech recognition results (Komatani and Kawahara, 2000; Hazen et al., 2000) and implicit confirmation that includes previous recognition results in the system's prompts (Sturm et al., 1999). In terms of determining what to say to the user, several studies have addressed not only outputting answers corresponding to the user's questions but also generating cooperative responses (Sadek, 1999). Furthermore, methods have been proposed to change the dialogue initiative based on various cues (Litman and Pan, 2000; Chu-Carroll, 2000; Lamel et al., 1999). Nevertheless, whether a particular response is cooperative or not depends on the individual user's characteristics. For example, when a user says nothing, the appropriate response differs depending on whether he/she is unaccustomed to using spoken dialogue systems or simply does not know much about the target domain. Unless we detect the cause of the silence, the system may fall into the same situation repeatedly. In order to adapt the system's behavior to individual users, it is necessary to model the user's patterns (Kass and Finin, 1988). Most conventional studies on user models have focused on the user's knowledge. Others tried to infer and utilize the user's goals to generate responses adapted to the user (van Beek, 1987; Paris, 1988).
Elzer et al. (2000) proposed a method to generate adaptive suggestions according to users' preferences. However, these studies depend greatly on knowledge of the target domain, and therefore the user models must be designed manually before they can be applied to new domains. Moreover, they assume that the input is text only and contains no errors. Spoken utterances, on the other hand, include various information such as the interval between utterances and the presence of barge-in, which can be utilized to judge the user's characteristics. These features also generalize across spoken dialogue systems because they do not depend on domain-specific knowledge. We propose more comprehensive user models for generating user-adapted responses in spoken dialogue systems, taking account of all available information specific to spoken dialogue. The models change both the dialogue initiative and the generated responses. In (Eckert et al., 1997), typical user behaviors are defined in order to evaluate spoken dialogue systems by simulation, and stereotypes of users such as patient, submissive and experienced are assumed. We introduce user models not to define user behaviors beforehand, but to detect user patterns in real-time interaction. We define three dimensions in the user models: 'skill level to the system', 'knowledge level on the target domain' and 'degree of hastiness'. The former two are related to the strategies for initiative management and response generation; they enable the system to adaptively generate dialogue management information and domain-specific information, respectively. The last one is used to manage the situation when users are in a hurry; namely, it controls the generation of the additive content determined by the former two user models. Handling such situations becomes even more crucial in speech communications over cellular phones. The user models are trained by a decision tree learning algorithm using real data collected from the Kyoto City Bus Information System. We then implement the user models and adaptive dialogue strategies in the system and evaluate them using data collected from 20 novice users.
2 Kyoto City Bus Information System
We have developed the Kyoto City Bus Information System, which locates the bus a user wants to take and tells him/her how long it will be before its arrival. The system can be accessed via telephone, including cellular phones (+81-75-326-3116). From any place, users can easily get bus information that changes every minute. Users are requested to input, by speech, the bus stop where they will get on, the destination, or the bus route number, and they then receive the corresponding bus information. The bus stops can be specified by the names of famous places or public facilities nearby. Figure 1 shows a simple example of the dialogue.
Figure 1: Example dialogue of the bus system
Sys: Please tell me your current bus stop, your destination or the specific bus route.
User: Shijo-Kawaramachi.
Sys: Do you take a bus from Shijo-Kawaramachi?
User: Yes.
Sys: Where will you get off the bus?
User: Arashiyama.
Sys: Do you go from Shijo-Kawaramachi to Arashiyama?
User: Yes.
Sys: Bus number 11 bound for Arashiyama has departed Sanjo-Keihanmae, two bus stops away.
Figure 2 shows an overview of the system. The system operates by dynamically generating VoiceXML scripts. The real-time bus information database is provided on the Web and can be accessed via the Internet. We describe each module below.
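To make "dynamically generating VoiceXML scripts" concrete before describing the individual modules, the following minimal sketch (illustrative only; the form id, field name and grammar URI are invented and not taken from the actual system) shows how a prompt and a recognition grammar could be wrapped into a VoiceXML document:

# A stand-in for the dynamic VoiceXML generation mentioned above.
# Element names follow standard VoiceXML 2.0; everything else is hypothetical.
def generate_voicexml(prompt_text: str, grammar_uri: str) -> str:
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0">
  <form id="bus_query">
    <field name="answer">
      <grammar src="{grammar_uri}"/>
      <prompt>{prompt_text}</prompt>
    </field>
  </form>
</vxml>"""

print(generate_voicexml("Please tell me your current bus stop.",
                        "grammars/busstop.grxml"))

In the real system, such documents are produced by the VoiceXML Generator module described below from the dialogue manager's output.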
Figure 2: Overview of the bus system with user models
VWS (Voice Web Server): The Voice Web Server drives the speech recognition engine and the TTS (Text-To-Speech) module according to the specifications in the generated VoiceXML.
Speech Recognizer: The speech recognizer decodes user utterances based on the grammar rules and vocabulary specified by the VoiceXML at each dialogue state.
Dialogue Manager: The dialogue manager generates response sentences based on the speech recognition results (bus stop names or a route number) received from the VWS. If sufficient information to locate a bus has been obtained, it retrieves the corresponding information from the real-time bus information database.
VoiceXML Generator: This module dynamically generates VoiceXML files that contain the response sentences and the specifications of the speech recognition grammars, both of which are given by the dialogue manager.
User Model Identifier: This module classifies the user's characteristics according to the user models, using features specific to spoken dialogue as well as semantic attributes. The obtained user profiles are sent to the dialogue manager and are utilized in dialogue management and response generation.
3 Response Generation using User Models
3.1 Classification of User Models
We define the three dimensions of user models listed below.
- Skill level to the system
- Knowledge level on the target domain
- Degree of hastiness
Skill Level to the System: Since spoken dialogue systems are not yet widespread, users differ in their skill at operating them. It is desirable that the system change its behavior, including response generation and initiative management, in accordance with the user's skill level. In conventional systems, system-initiated guidance has been invoked on the spur of the moment, either when the user says nothing or when speech recognition is not successful. In our framework, by modeling the skill level as a property of the user, we address a more fundamental solution for unskilled users.
Knowledge Level on the Target Domain: Users also differ in their knowledge of the target domain, so the system needs to change what information it presents to them. For example, it is not cooperative to present overly detailed information to strangers. On the other hand, for local residents it is useful to omit obvious information and to output additional information. Therefore, we introduce a dimension that represents the knowledge level on the target domain.
Degree of Hastiness: In speech communications, it is more important to present information promptly and concisely than in other communication modes such as browsing. Especially in the bus system, conciseness is preferred because the bus information is urgent for most users. Therefore, we also take account of the user's degree of hastiness and change the system's responses accordingly.
3.2 Response Generation Strategy using User Models
Next, we describe the response generation strategies adapted to individual users based on the proposed user models: skill level, knowledge level and hastiness.
The basic design of the dialogue management is mixed-initiative dialogue, in which the system asks follow-up questions and gives guidance when necessary while allowing the user to speak freely. Adding various content to system responses as cooperative responses was investigated in (Sadek, 1999). Such additive information is usually cooperative, but some people may find such responses redundant. Thus, we introduce the user models to control the generation of additive information. With the proposed user models, the system changes the generated responses in two respects: the dialogue procedure and the contents of responses.
Dialogue Procedure: The dialogue procedure is changed based on the skill level and the hastiness. If a user is identified as having a high skill level, the dialogue management is carried out in a user-initiated manner; namely, the system generates only open-ended prompts. On the other hand, when the user's skill level is detected as low, the system takes the initiative and prompts for the necessary items in order. When the degree of hastiness is low, the system confirms the input content. Conversely, when the hastiness is detected as high, such confirmation is omitted.
Contents of Responses: The information that should be included in a system response can be classified into two items: (1) dialogue management information and (2) domain-specific information. The dialogue management information specifies how to carry out the dialogue, including instructions on the user's expressions, such as "Please reply with either yes or no.", and explanations about the subsequent dialogue procedure, such as "Now I will ask in order." This dialogue management information is determined by the user's skill level to the system, and is added to system responses when the skill level is considered low. The domain-specific information is generated according to the user's knowledge level on the target domain. Namely, for users unacquainted with the local information, the system adds an explanation about the nearest bus stop and omits complicated content such as a proposal of another route. The contents described above are also controlled by the hastiness. For users who are not in a hurry, the system generates the additional content as cooperative responses. On the other hand, for hasty users, the content is omitted in order to prevent the dialogue from becoming redundant.
3.3 Classification of User based on Decision Tree
In order to implement the proposed user models as a classifier, we adopt a decision tree. It is constructed by the decision tree learning algorithm C5.0 (Quinlan, 1993) with data collected by our dialogue system. Figure 3 shows the derived decision tree for the skill level.
Figure 3: Decision tree for the skill level
We use the features listed in Figure 4. They include not only semantic information contained in the utterances but also information specific to spoken dialogue systems, such as the silence duration prior to the utterance and the presence of barge-in. Except for the last category of Figure 4, which includes "attribute of specified bus stops", most of the features are domain-independent. The classification of each dimension is done for every user utterance except for the knowledge level.
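The following schematic ties Sections 3.2 and 3.3 together by showing how a per-utterance classification of the three dimensions could drive the strategy switches described above. It is entirely illustrative: the thresholds, feature names and rules are invented, not the learned trees.

def classify(history):
    """Stand-in for the learned decision trees (invented, hand-written rules)."""
    last = history[-1]
    skill = "low" if last["n_filled_slots"] == 0 and not last["barge_in"] else "high"
    hastiness = "high" if last["barge_in"] else "low"
    knowledge = "low" if last.get("used_landmark", False) else "high"
    return {"skill": skill, "knowledge": knowledge, "hastiness": hastiness}

def plan_response(model):
    plan = {"initiative": "user" if model["skill"] == "high" else "system",
            "confirm": model["hastiness"] == "low",
            "add_instructions": model["skill"] == "low",        # dialogue management info
            "add_domain_details": model["knowledge"] == "low"}  # e.g. nearest bus stop
    if model["hastiness"] == "high":      # keep responses concise for hasty users
        plan["add_instructions"] = False
        plan["add_domain_details"] = False
    return plan

history = [{"n_filled_slots": 1, "barge_in": False, "used_landmark": True}]
print(plan_response(classify(history)))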
The model of a user can change during a dialogue. Features extracted from utterances are accumulated as history information during the session.
Figure 4: Features to classify the user models
- Features obtained from a single utterance: dialogue state (defined by the already filled slots); presence of barge-in; lapsed time of the current utterance; recognition result (something recognized / uncertain / no input); score of the speech recognizer; the number of slots filled by the current utterance.
- Features obtained from the session: the number of utterances; dialogue state of the previous utterance; lapsed time from the beginning of the session; the number of repetitions of a same question; the average number of repetitions of a same question; ratio of the total time of user utterances in the whole elapsed time; ratio of the occurrence of barge-in out of the whole number of utterances; recognition result of the previous utterance; ratio of something recognized; ratio of getting uncertain results; ratio of no input; the number of barge-ins; the number of something recognized; the number of getting uncertain results; the number of no input; average of recognition scores; the maximum number of slots filled by a single utterance.
- Features obtained from the session (used only in classification of the knowledge level): attribute of specified bus stops; the way of specifying bus stops (whether a bus stop is specified by its correct name or not); ratio of each attribute of specified bus stops; ratio of each way of specifying bus stops.
Figure 5 shows an example of the system behavior with the proposed user models. The skill level is classified as low by the decision tree, because the first user utterance includes only one content word. The dialogue procedure is therefore changed to the system-initiated one. Similarly, the hastiness is classified as low by the decision tree, and the system includes an explanation of the dialogue procedure and instructions on the expected expressions in its responses. These are omitted if the hastiness is identified as high.
Figure 5: An example dialogue with the proposed user models
case: [skill level: low] [knowledge level: low] [hastiness: low]
Sys: Please tell me your current bus stop, your destination or the specific bus route number.
User: Maruyama Park.
Sys: Let me confirm one by one. (The explanation is added by [skill level: low].) Will you get on at Maruyama Park? (The content is confirmed by [hastiness: low].) Please reply with yes or no. (The instruction is added by [skill level: low].)
User: Yes.
Sys: Where will you get off the bus? Please specify by a bus stop name or a well-known landmark. (The explanation is added by [skill level: low].)
User: Tofukuji temple.
Sys: Will you get off the bus at Tofukuji temple? Please reply with yes or no.
User: Yes.
Sys: The following buses are located. (The explanation is added by [skill level: low].) The nearest bus stop to Maruyama Park is Gion. (The domain information is added by [knowledge level: low].) Bus number 202 bound for Nishioji-Kujo has departed Higashiyama-Sanjo, which is two stops away. ...
3.4 Decision Tree Learning for User Models
We train and evaluate the decision tree for the user models using dialogue data collected by our system. The data was collected from December 10th, 2001 to May 10th, 2002.
Table 1: Number of manually labeled items for decision tree learning
                   low   indeterminable   high   total
skill level        743        253          496    1492
knowledge level    275        808          409    1492
hastiness          421        932          139    1492
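As a hedged sketch of the training and evaluation procedure described in the next paragraphs: the paper uses C5.0, but here scikit-learn's decision tree stands in, with class_weight="balanced" playing the role of the reciprocal-ratio misclassification cost; the feature vectors and labels below are random placeholders, not the real data.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((1492, 6))                                   # placeholder feature vectors
y = rng.choice(["low", "indeterminable", "high"], size=1492, p=[0.5, 0.17, 0.33])

# "balanced" weights each class inversely to its frequency, analogous to the
# reciprocal-ratio cost used to smooth the unbalanced label distribution.
clf = DecisionTreeClassifier(class_weight="balanced")
print(cross_val_score(clf, X, y, cv=10).mean())             # 10-fold cross-validation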
The number of the sessions (telephone calls) is 215, and the total number of utterances included in the sessions is 1492. We annotated the subjective labels by hand. The annotator judges the user models for every utterances based on recorded speech data and logs. The labels were given to the three dimensions described in section 3.3 among ’high’, ’indeterminable’ or ’low’. It is possible that annotated models of a user change during a dialogue, especially from ’indeterminable’ to ’low’ or ’high’. The number of labeled utterances is shown in Table 1. Using the labeled data, we evaluated the classification accuracy of the proposed user models. All the experiments were carried out by the method of 10-fold cross validation. The process, in which one tenth of all data is used as the test data and the remainder is used as the training data, is repeated ten times, and the average of the accuracy is computed. The result is shown in Table 2. The conditions #1, #2 and #3 in Table 2 are described as follows. #1: The 10-fold cross validation is carried out per utterance. #2: The 10-fold cross validation is carried out per session (call). #3: We calculate the accuracy under more realistic condition. The accuracy is calculated not in three classes (high / indeterminable / low) but in two classes that actually affect the dialogue strategies. For example, the accuracy for the skill level is calculated for the two classes: low and the others. As to the classification of knowledge level, the accuracy is calculated for dialogue sessions because the features such as the attribute of a specified bus stop are not obtained in every utterance. Moreover, in order to smooth unbalanced distribution of the training data, a cost corresponding to the reciprocal ratio of the number of samples in each class is introduced. By the cost, the chance rate of two classes becomes 50%. The difference between condition #1 and #2 is that the training was carried out in a speaker-closed or speaker-open manner. The former shows better performance. The result in condition #3 shows useful accuracy in the skill level. The following features play important part in the decision tree for the skill level: the number of filled slots by the current utterance, presence of barge-in and ratio of no input. For the knowledge level, recognition result (something recognized / uncertain / no input), ratio of no input and the way to specify bus stops (whether a bus stop is specified by its exact name or not) are effective. The hastiness is classified mainly by the three features: presence of barge-in, ratio of no input and lapsed time of the current utterance. condition #1 #2 #3 skill level 80.8% 75.3% 85.6% knowledge level 73.9% 63.7% 78.2% hastiness 74.9% 73.7% 78.6% Table 2: Classification accuracy of the proposed user models 4 Experimental Evaluation of the System with User Models We evaluated the system with the proposed user models using 20 novice subjects who had not used the system. The experiment was performed in the laboratory under adequate control. For the speech input, the headset microphone was used. 4.1 Experiment Procedure First, we explained the outline of the system to subjects and gave the document in which experiment conditions and the scenarios were described. We prepared two sets of eight scenarios. Subjects were requested to acquire the bus information using the system with/without the user models. In the scenarios, neither the concrete names of bus stops nor the bus number were given. 
For example, one of the scenarios was as follows: "You are in Kyoto for sightseeing. After visiting the Ginkakuji temple, you go to Maruyama Park. Supposing such a situation, please get information on the bus." We also set constraints intended to vary the subjects' hastiness, such as "Please hurry as much as possible in order to save the charge of your cellular phone." The subjects were also told to look over the questionnaire items before the experiment and to fill them in after using each system. This aims to reduce the subjects' cognitive load and possible confusion due to switching between the systems (Over, 1999). The questionnaire consisted of eight items, for example, "When the dialogue did not go well, did the system provide intelligible guidance?" Each item was rated on a seven-point scale, from which the subject selected one level. Furthermore, subjects were asked to write down the obtained information: the name of the bus stop to get on, the bus number, and how long it would be before the bus arrived. With this procedure, we aimed to make the experimental conditions close to realistic ones.
The subjects were divided into two groups: half of them (group 1) used the system in the order "with user models → without user models", and the other half (group 2) used the systems in the reverse order. The dialogue management in the system without user models is also based on mixed-initiative dialogue: the system generates follow-up questions and guidance when necessary, but behaves in a fixed manner. Namely, the additive cooperative content corresponding to the skill level described in section 3.2 is not generated, and the dialogue procedure is changed only after recognition errors occur. The system without user models thus behaves equivalently to the initial state of the user models: the hastiness is low, the knowledge level is low and the skill level is high.
Table 3: Duration and the number of turns in dialogue (UM: user model)
                                       duration (sec.)   # turns
group 1 (with UM → w/o UM)   with UM        51.9            4.03
                             w/o UM         47.1            4.18
group 2 (w/o UM → with UM)   w/o UM         85.4            8.23
                             with UM        46.7            4.08
4.2 Results
All of the subjects successfully completed the given tasks, although they had been allowed to give up if the system did not work well; that is, the task success rate was 100%. The average dialogue duration and number of turns in the respective cases are shown in Table 3. Though the users had no previous experience with the system, they got accustomed to it very rapidly. Therefore, as shown in Table 3, both the duration and the number of turns decreased markedly in the latter half of the experiment for both groups. However, in the initial half of the experiment, group 1 completed the task with significantly shorter dialogues than group 2. This means that the incorporation of the user models is effective for novice users. Table 4 shows the ratio of utterances for which the skill level was identified as high, calculated by dividing the number of utterances judged as high skill level by the total number of utterances in the eight sessions. The ratio is much larger for group 1, who initially used the system with the user models. This fact means that novice users got accustomed to the system more rapidly with the user models, because they were instructed on its usage by the cooperative responses generated when the skill level is low.
Table 4: Ratio of utterances for which the skill level was judged as high
group 1 (with UM → w/o UM):   with UM 0.72,  w/o UM 0.70
group 2 (w/o UM → with UM):   w/o UM 0.41,   with UM 0.63
The results demonstrate that cooperative responses generated according to the proposed user models can serve as good guidance for novice users. In the latter half of the experiment, the dialogue duration and the number of turns were almost the same for the two groups. This result shows that the proposed models prevent the dialogue from becoming redundant for skilled users, whereas generating cooperative responses for all users would make the dialogues verbose in general. It suggests that the proposed user models appropriately control the generation of cooperative responses by detecting the characteristics of individual users.
5 Conclusions
We have proposed and evaluated user models for generating cooperative responses adapted to individual users. The proposed user models consist of three dimensions: skill level to the system, knowledge level on the target domain and the degree of hastiness. The user models are identified using features specific to spoken dialogue systems as well as semantic attributes. They are derived automatically by decision tree learning, and all the features used for the skill level and hastiness are independent of domain-specific knowledge, so the derived user models can be expected to generalize to other domains. The experimental evaluation with 20 novice users shows that the skill level of novice users improved more rapidly when the user models were incorporated, and accordingly the dialogue duration decreased more quickly. This result is achieved by the cooperative responses generated on the basis of the proposed user models. The proposed user models also suppress redundancy by changing the dialogue procedure and selecting the contents of responses. Thus, they realize user-adaptive dialogue strategies in which the generated cooperative responses serve as good guidance for novice users without increasing the dialogue duration for skilled users.
References
Jennifer Chu-Carroll. 2000. MIMIC: An adaptive mixed initiative spoken dialogue system for information queries. In Proc. of the 6th Conf. on Applied Natural Language Processing, pages 97–104. Wieland Eckert, Esther Levin, and Roberto Pieraccini. 1997. User modeling for spoken dialogue system evaluation. In Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, pages 80–87. Stephanie Elzer, Jennifer Chu-Carroll, and Sandra Carberry. 2000. Recognizing and utilizing user preferences in collaborative consultation dialogues. In Proc. of the 4th Int'l Conf. on User Modeling, pages 19–24. Timothy J. Hazen, Theresa Burianek, Joseph Polifroni, and Stephanie Seneff. 2000. Integrating recognition confidence scoring with language understanding and dialogue modeling. In Proc. ICSLP. Robert Kass and Tim Finin. 1988. Modeling the user in natural language systems. Computational Linguistics, 14(3):5–22. Kazunori Komatani and Tatsuya Kawahara. 2000. Flexible mixed-initiative dialogue management using concept-level confidence measures of speech recognizer output. In Proc. Int'l Conf. Computational Linguistics (COLING), pages 467–473. Lori Lamel, Sophie Rosset, Jean-Luc Gauvain, and Samir Bennacef. 1999. The LIMSI ARISE system for train travel information. In IEEE Int'l Conf. Acoust., Speech & Signal Process. Diane J. Litman and Shimei Pan. 2000. Predicting and adapting to poor speech recognition in a spoken dialogue system. In Proc. of the 17th National Conference on Artificial Intelligence (AAAI-2000). Paul Over. 1999. TREC-7 interactive track report. In Proc. of the 7th Text REtrieval Conference (TREC-7).
Cecile L. Paris. 1988. Tailoring object descriptions to a user's level of expertise. Computational Linguistics, 14(3):64–78. J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA. http://www.rulequest.com/see5-info.html. David Sadek. 1999. Design considerations on dialogue systems: From theory to technology - the case of ARTIMIS. In Proc. ESCA Workshop on Interactive Dialogue in Multi-Modal Systems. Janienke Sturm, Els den Os, and Lou Boves. 1999. Issues in spoken dialogue systems: Experiences with the Dutch ARISE system. In Proc. ESCA Workshop on Interactive Dialogue in Multi-Modal Systems. Peter van Beek. 1987. A model for generating better explanations. In Proc. of the 25th Annual Meeting of the Association for Computational Linguistics (ACL-87), pages 215–220.
2003
33
Integrating Discourse Markers into a Pipelined Natural Language Generation Architecture Charles B. Callaway ITC-irst, Trento, Italy via Sommarive, 18 Povo (Trento), Italy, I-38050 [email protected]
Abstract
Pipelined Natural Language Generation (NLG) systems have grown increasingly complex as architectural modules were added to support language functionalities such as referring expressions, lexical choice, and revision. This has given rise to discussions about the relative placement of these new modules in the overall architecture. Recent work on another aspect of multi-paragraph text, discourse markers, indicates it is time to consider where a discourse marker insertion algorithm fits in. We present examples which suggest that in a pipelined NLG architecture, the best approach is to strongly tie it to a revision component. Finally, we evaluate the approach in a working multi-page system.
1 Introduction
Historically, work on NLG architecture has focused on integrating major disparate architectural modules such as discourse and sentence planners and surface realizers. More recently, as it was discovered that these components by themselves did not create highly readable prose, new types of architectural modules were introduced to deal with newly desired linguistic phenomena such as referring expressions, lexical choice, revision, and pronominalization. Adding each new module typically entailed that an NLG system designer would justify not only the reason for including the new module (i.e., what linguistic phenomena it produced that had been previously unattainable) but how it was integrated into their architecture and why its placement was reasonably optimal (cf. (Elhadad et al., 1997), pp. 4–7). At the same time, Reiter (1994) argued that implemented NLG systems were converging toward a de facto pipelined architecture (Figure 1) with minimal-to-nonexistent feedback between modules.
Figure 1: A Typical Pipelined NLG Architecture
Although several NLG architectures were proposed in opposition to such a linear arrangement (Kantrowitz and Bates, 1992; Cline, 1994), these research projects have not continued, while pipelined architectures are still actively being pursued. In addition, Reiter concludes that although complete integration of architectural components is theoretically a good idea, in practical engineering terms such a system would be too inefficient to operate and too complex to actually implement. Significantly, Reiter states that fully interconnecting every module would entail constructing N(N−1) interfaces between them. As the number of modules rises (i.e., as the number of large-scale features an NLG engineer wants to implement rises), the implementation cost rises quadratically. Moreover, this cost does not include modifications that are not component specific, such as multilingualism. As text planners scale up to produce ever larger texts, the switch to multi-page prose will introduce new features, and consequently the number of architectural modules will increase. For example, Mooney's EEG system (Mooney, 1994), which created a full-page description of the Three-Mile Island nuclear plant disaster, contains components for discourse knowledge, discourse organization, rhetorical relation structuring, sentence planning, and surface realization. Similarly, the STORYBOOK system (Callaway and Lester, 2002), which generated 2 to 3 pages of narrative prose in the Little Red Riding Hood fairy tale domain, contained seven separate components.
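A toy rendering of the pipelined arrangement of Figure 1 (not code from any of the systems cited) makes the interface-count argument concrete: each module consumes only its predecessor's output, so N modules require N−1 interfaces rather than N(N−1).

from typing import Callable, List

Stage = Callable[[object], object]

def run_pipeline(stages: List[Stage], document_plan: object) -> object:
    data = document_plan
    for stage in stages:   # discourse planning -> sentence planning -> revision -> realization
        data = stage(data)
    return data

stages = [lambda d: d + ["discourse plan"], lambda d: d + ["sentence plan"],
          lambda d: d + ["revised plan"], lambda d: d + ["surface text"]]
print(run_pipeline(stages, []))
print("interfaces: pipeline =", len(stages) - 1, ", fully connected =", len(stages) * (len(stages) - 1))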
This paper examines the interactions of two linguistic phenomena at the paragraph level: revision (specifically, clause aggregation, migration and demotion) and discourse markers. Clause aggregation involves the syntactic joining of two simple sentences into a more complex sentence. Discourse markers link two sentences semantically without necessarily joining them syntactically. Because both of these phenomena produce changes in the text at the clause-level, a lack of coordination between them can produce interference effects. We thus hypothesize that the architectural modules corresponding to revision and discourse marker selection should be tightly coupled. We then first summarize current work in discourse markers and revision, provide examples where these phenomena interfere with each other, describe an implemented technique for integrating the two, and report on a preliminary system evaluation. 2 Discourse Markers in NLG Discourse markers, or cue words, are single words or small phrases which mark specific semantic relations between adjacent sentences or small groups of sentences in a text. Typical examples include words like however, next, and because. Discourse markers pose a problem for both the parsing and generation of clauses in a way similar to the problems that referring expressions pose to noun phrases: changing the lexicalization of a discourse marker can change the semantic interpretation of the clauses affected. Recent work in the analysis of both the distribution and role of discourse markers has greatly extended our knowledge over even the most expansive previous accounts of discourse connectives (Quirk et al., 1985) from previous decades. For example, using a large scale corpus analysis and human subjects employing a substitution test over the corpus sentences containing discourse markers, Knott and Mellish (1996) distilled a taxonomy of individual lexical discourse markers and 8 binary-valued features that could be used to drive a discourse marker selection algorithm. Other work often focuses on particular semantic categories, such as temporal discourse markers. For instance, Grote (1998) attempted to create declarative lexicons that contain applicability conditions and other constraints to aid in the process of discourse marker selection. Other theoretical research consists, for example, of adapting existing grammatical formalisms such as TAGs (Webber and Joshi, 1998) for discourse-level phenomena. Alternatively, there are several implemented systems that automatically insert discourse markers into multi-sentential text. In an early instance, Elhadad and McKeown (1990) followed Quirk’s pre-existing non-computational account of discourse connectives to produce single argumentative discourse markers inside a functional unification surface realizer (and thereby postponing lexicalization till the last possible moment). More recent approaches have tended to move the decision time for marker lexicalization higher up the pipelined architecture. For example, the MOOSE system (Stede and Umbach, 1998; Grote and Stede, 1999) lexicalized discourse markers at the sentence planning level by pushing them directly into the lexicon. Similarly, Power et al. (1999) produce multiple discourse markers for Patient Information Leaflets using a constraint-based method applied to RST trees during sentence planning. 
Finally, in the CIRC-SIM intelligent tutoring system (Yang et al., 2000) that generates connected dialogues for students studying heart ailments, discourse marker lexicalization has been pushed all the way up to the discourse planning level. In this case, CIRC-SIM lexicalizes discourse markers inside of the discourse schema templates themselves. Given that these different implemented discourse marker insertion algorithms lexicalize their markers at three distinct places in a pipelined NLG architecture, it is not clear if lexicalization can occur at any point without restriction, or if it is in fact tied to the particular architectural modules that a system designer chooses to include. The answer becomes clearer after noting that none of the implemented discourse marker algorithms described above have been incorporated into a comprehensive NLG architecture containing additional significant components such as revision (with the exception of MOOSE’s lexical choice component, which Stede considers to be a submodule of the sentence planner). 3 Current Implemented Revision Systems Revision (or clause aggregation) is principally concerned with taking sets of small, single-proposition sentences and finding ways to combine them into more fluent, multiple-proposition sentences. Sentences can be combined using a wide range of different syntactic forms, such as conjunction with “and”, making relative clauses with noun phrases common to both sentences, and introducing ellipsis. Typically, revision modules arise because of dissatisfaction with the quality of text produced by a simple pipelined NLG system. As noted by Reape and Mellish (1999), there is a wide variety in revision definitions, objectives, operating level, and type. Similarly, Dalianis and Hovy (1993) tried to distinguish between different revision parameters by having users perform revision thought experiments and proposing rules in RST form which mimic the behavior they observed. While neither of these were implemented revision systems, there have been several attempts to improve the quality of text from existing NLG systems. There are two approaches to the architectural position of revision systems: those that operate on semantic representations before the sentence planning level, of which a prototypical example is (Horacek, 2002), and those placed after the sentence planner, operating on syntactic/linguistic data. Here we treat mainly the second type, which have typically been conceived of as “add-on” components to existing pipelined architectures. An important implication of this architectural order is that the revision components expect to receive lexicalized sentence plans. Of these systems, Robin’s STREAK system (Robin, 1994) is the only one that accepts both lexicalized and non-lexicalized data. After a sentence planner produces the required lexicalized information that can form a complete and grammatical sentence, STREAK attempts to gradually aggregate that data. It then proceeds to try to opportunistically include additional optional information from a data set of statistics, performing aggregation operations at various syntactic levels. Because STREAK only produces single sentences, it does not attempt to add discourse markers. In addition, there is no a priori way to determine whether adjacent propositions in the input will remain adjacent in the final sentence. 
The REVISOR system (Callaway and Lester, 1997) takes an entire sentence plan at once and iterates through it in paragraph-sized chunks, employing clause- and phrase-level aggregation and reordering operations before passing a revised sentence plan to the surface realizer. However, at no point does it add information that did not previously exist in the sentence plan. The RTPI system (Harvey and Carberry, 1998) takes in sets of multiple, lexicalized sentential plans over a number of medical diagnoses from different critiquing systems and produces a single, unified sentence plan which is both coherent and cohesive. Like STREAK, Shaw's CASPER system (Shaw, 1998) produces single sentences from sets of sentences and does not attempt to deal with discourse markers. CASPER also delays lexicalization when aggregating by looking at the lexicon twice during the revision process. This is due mainly to the efficiency costs of the unification procedure. However, CASPER's sentence planner essentially uses the first lexicon lookup to find a "set of lexicalizations" before eventually selecting a particular one. An important similarity of these pipelined revision systems is that they all manipulate lexicalized representations at the clause level. Given that both aggregation and reordering operators may separate clauses that were previously adjacent upon leaving the sentence planner, the inclusion of a revision component has important implications for any upstream architectural module which assumed that initially adjacent clauses would remain adjacent throughout the generation process.
4 Architectural Implications
The current state of the art in NLG can be described as small pipelined generation systems that incorporate some, but not all, of the available pipelined NLG modules. Specifically, there is no system to date which both revises its output and inserts appropriate discourse markers. Additionally, there are no systems which utilize the latest theoretical work on discourse markers described in Section 2. But as NLG systems begin to reach toward multi-page text, combining both modules into a single architecture will quickly become a necessity if such systems are to achieve the quality of prose that is routinely achieved by human authors. This integration will not come without constraints. For instance, discourse marker insertion algorithms assume that sentence plans are static objects; any change to the static nature of sentence plans will inevitably disrupt them. On the other hand, revision systems currently do not add information not specified by the discourse planner, and do not perform true lexicalization: any new lexemes not present in the sentence plan are merely delayed lexicon entry lookups. Finally, because revision is potentially destructive, the sentence elements that led to a particular discourse marker being chosen may be significantly altered or may not even exist in a post-revision sentence plan. These factors lead to two partial-order constraints on a system that both inserts discourse markers and revises at the clause level after sentence planning:
- Discourse marker lexicalization cannot precede revision
- Revision cannot precede discourse marker lexicalization
In the first case, assume that a sentence plan arrives at the revision module with discourse markers already lexicalized. Then the original discourse marker may no longer be appropriate in the revised sentence plan.
For example, consider how the application of the following revision types requires different lexicalizations for the initial discourse markers:
- Clause Aggregation: the merging of two main clauses into one main clause and one subordinate clause:
  John had always liked to ride motorbikes. + On account of this, his wife passionately hated motorbikes.
  ⇒ John had always liked to ride motorbikes, which his wife {* on account of this | thus} passionately hated.
- Reordering: two originally adjacent main clauses no longer have the same fixed position relative to each other:
  Diesel motors are well known for emitting excessive pollutants. + Furthermore, diesel is often transported unsafely. + However, diesel motors are becoming cleaner.
  ⇒ Diesel motors are well known for emitting excessive pollutants, {* however | although} they are becoming cleaner. Furthermore, diesel is often transported unsafely.
- Clause Demotion: two main clauses are merged where one of them no longer has a clause structure:
  The happy man went home. + However, the man was poor.
  ⇒ The happy {* however | but} poor man went home.
These examples show that if discourse marker lexicalization occurs before clause revision, the changes that the revision module makes can render those discourse markers undesirable or even grammatically incorrect. Furthermore, these effects span a wide range of potential revision types. In the second case, assume that a sentence plan is passed to the revision component, which performs various revision operations before discourse markers are considered. In order to insert appropriate discourse markers, the insertion algorithm must access the appropriate rhetorical structure produced by the discourse planner. However, there is no guarantee that the revision module has not altered the initial organization imposed by the discourse planner. In such a case, the underlying data used for discourse marker selection may no longer be valid. For example, consider the following generically represented discourse plan:
C1: "John and his friends went to the party." [temporal "before" relation, time(C1, C2)]
C2: "John and his friends gathered at the mall." [causal relation, cause(C2, C3)]
C3: "John had been grounded."
One possible revision that preserves the discourse plan might be: "Before John and his friends went to the party, they gathered at the mall since he had been grounded." In this case, the discourse marker algorithm has selected "before" and "since" as lexicalized discourse markers prior to revision. But there are other possible revisions that would destroy the ordering established by the discourse plan and make the selected discourse markers unwieldy:
"John, {* since | ∅} who had been grounded, gathered with his friends at the mall before going to the party."
"{* Since | Because} he had been grounded, John and his friends gathered at the mall and {* before | then} went to the party."
Reordering sentences without updating the discourse relations in the discourse plan itself would result in many wrong or misplaced discourse marker lexicalizations. Given that discourse markers cannot be lexicalized before clause revision is enacted, and that clause revision may alter the original discourse plan upon which a later discourse marker insertion algorithm may rely, it follows that the revision algorithm should update the discourse plan as it progresses, and that the discourse marker insertion algorithm should be responsive to these changes, thus delaying discourse marker lexicalization.
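A small self-contained illustration of the first interference pattern above (the mini marker table and helper function are invented; this is not REVISOR or STORYBOOK code): if "on account of this" is fixed before aggregation, the relative-clause revision keeps an ill-fitting marker, whereas re-lexicalizing after revision yields "thus".

PRE_REVISION_MARKER = "on account of this"          # chosen from the unrevised plan
POST_REVISION_MARKERS = {"cause-main-clause": "on account of this",
                         "cause-relative-clause": "thus"}   # invented mini-table

def aggregate_with_relative_clause(main, subordinate, marker):
    # "X." + "<marker>, Y VP." -> "X, which Y <marker> VP."
    subject, verb_phrase = subordinate
    return f"{main[:-1]}, which {subject} {marker} {verb_phrase}."

s1 = "John had always liked to ride motorbikes."
sub = ("his wife", "passionately hated")
print(aggregate_with_relative_clause(s1, sub, PRE_REVISION_MARKER))                      # awkward marker
print(aggregate_with_relative_clause(s1, sub, POST_REVISION_MARKERS["cause-relative-clause"]))  # "thus"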
5 Implementation
To demonstrate the relevance of this problem to real-world discourse, we took the STORYBOOK NLG system (Callaway and Lester, 2001; Callaway and Lester, 2002), which generates multi-page text in the form of Little Red Riding Hood stories and New York Times articles, using a pipelined architecture with a large number of modules, including revision (Callaway and Lester, 1997). Although it was capable of inserting discourse markers, it did so in an ad-hoc way, and required that the document author notice possible interferences between revision and discourse marker insertion and hard-wire the document representation accordingly. Upon adding a principled discourse marker selection algorithm to the system, we soon noticed various unwanted interactions between revision and discourse markers of the type described in Section 4 above. Thus, in addition to the other constraints already considered during clause aggregation, we altered the revision module to also take into account the information available to our discourse marker insertion algorithm (in our case, intention and rhetorical predicates). We were thus able to incorporate the discourse marker selection algorithm into the revision module itself. This is contrary to most NLG systems, where discourse marker lexicalization is performed as late as possible, using the modified discourse plan leaves after the revision rules have reorganized all the original clauses. In an architecture that does not consider discourse markers, a generic revision rule without access to the original discourse plan might appear like this (where type refers to the main clause syntax, and rhetorical type refers to its intention):
If   type(clause1) = <type>
     type(clause2) = <type>
     subject(clause1) = subject(clause2)
then make-subject-relative-clause(clause1, clause2)
But by making available the intentional and rhetorical information from the discourse plan, our modified revision rules instead have this form:
If   rhetorical-type(clause1) = <type>
     rhetorical-type(clause2) = <type>
     subject(clause1) = subject(clause2)
     rhetorical-relation(clause1, clause2) ∈ set-of-features
then make-subject-relative-clause(clause1, clause2)
     lexicalize-discourse-marker(clause1, set-of-features)
     update-rhetorical-relation(clause1, current-relations)
where the function lexicalize-discourse-marker determines the appropriate discourse marker lexicalization given a set of features such as those described in (Knott and Mellish, 1996) or (Grote and Stede, 1999), and update-rhetorical-relation makes the appropriate changes to the running discourse plan so that future revision rules can take those alterations into account. STORYBOOK takes a discourse plan augmented with appropriate low-level (i.e., unlexicalized, or conceptual) rhetorical features and produces a sentence plan without discarding rhetorical information. It then revises and lexicalizes discourse markers concurrently before passing the results to the surface realization module for production of the surface text. Consider the following sentences in a short text plan produced by the generation system:
1. "In this case, Mr. Curtis could no longer be tried for the shooting of his former girlfriend's companion." <agent-action> [causal relation]
2. "There is a five-year statute of limitations on that crime." <existential> [opposition relation]
3. "There is no statute of limitations in murder cases." <existential>
Without revision, a discourse marker insertion algorithm is only capable of adding discourse markers before or after a clause boundary: "In this case, Mr. Curtis could no longer be tried for the shooting of his former girlfriend's companion. This is because there is a five-year statute of limitations on that crime. However, there is no statute of limitations in murder cases." But a revised version with access to the discourse plan and integrated discourse markers, which our system generates, is: "In this case, Mr. Curtis could no longer be tried for the shooting of his former girlfriend's companion, because there is a five-year statute of limitations on that crime even though there is no statute of limitations in murder cases." A revision module without access to the discourse plan and a method for lexicalizing discourse markers would be unable to generate the second, improved version. Furthermore, a discourse marker insertion algorithm that lexicalizes before the revision algorithm begins will not have enough basis to decide and will frequently produce wrong lexicalizations. The actual implemented rules in our system (which generate the example above) are consistent with the abstract rule presented earlier.
Revising sentence 1 with 2:
If   rhetorical-type(clause1) = agent-action
     rhetorical-type(clause2) = existential
     rhetorical-relation(clause1, clause2) ∈ {causation, simple, ...}
then make-subordinate-bound-clause(clause2, clause1)
     lexicalize-discourse-marker(clause2, {causation, simple})
     update-rhetorical-relation(clause1, clause2, agent-action, existential, causation)
Revising sentence 2 with 3:
If   rhetorical-type(clause2) = existential
     rhetorical-type(clause3) = existential
     rhetorical-relation(clause2, clause3) ∈ {opposition, simple, ...}
then make-subject-relative-clause(clause2, clause3)
     lexicalize-discourse-marker(clause1, {opposition, simple})
     update-rhetorical-relation(clause1, clause2, existential, existential, current-relations)
Given these parameters, the discourse markers are lexicalized as because and even though, respectively, and the revision component is able to combine all three base sentences plus the discourse markers into the single sentence shown above.
6 Preliminary Evaluation
Evaluation of multi-paragraph text generation is exceedingly difficult: empirically-driven methods are not yet sufficiently sophisticated, and subjective human evaluations that require multiple comparisons of large quantities of text are both difficult to control for and time-consuming. Evaluating our approach is even more difficult in that interference between discourse markers and revision is not a highly frequent occurrence in multi-page text. For instance, in our corpora we found that these interference effects occurred 23% of the time for revised clauses and 56% of the time for discourse markers. In other words, almost one of every four clause revisions potentially forces a change in discourse marker lexicalization, and one in every two discourse markers occurs near a clause revision boundary. However, the "penalty" associated with incorrectly selecting discourse markers is fairly high, leading to confusing sentences, although there is no cognitive science evidence that states exactly how high it is for a typical reader, despite recent work in this direction (Tree and Schrock, 1999). Furthermore, there is little agreement on exactly what constitutes a discourse marker, especially between the spoken and written dialogue communities (e.g., many members of the latter consider "uh" to be a discourse marker). We thus present an analysis of the frequencies of various features from three separate New York Times articles generated by the STORYBOOK system. We then describe the results of running our combined revision and discourse marker module on the discourse plans used to generate them. While three NYT articles do not constitute a substantial evaluation in ideal terms, the cost of evaluation in such a knowledge-intensive undertaking will continue to be prohibitive until large-scale automatic or semi-automatic techniques are developed. The left side of Table 1 presents an analysis of the frequencies of revisions and discourse markers found in each of the three NYT articles. In addition, we have indicated the number of times, in our opinion, that revisions and discourse markers co-occurred (i.e., a discourse marker was present at the junction site of the clauses being aggregated). The right side of the table indicates the difference in accuracy between two versions of the system: separate signifies the initial configuration of the STORYBOOK system, where discourse marker insertion and revision were performed as separate processes, while integrated signifies that discourse markers were lexicalized during revision as described in this paper. The difference between these two numbers thus represents the number of times per article that the integrated clause aggregation and discourse marker module was able to improve the resulting text.
Table 1: Interactions between revision and discourse markers
            # Sentences  # Revisions  # DMs  # Co-occurring DM/Rev   Separate     Integrated
Article 1       112           90        29            14            17 (56.8%)   26 (89.7%)
Article 2        54           93        50            30            24 (48.0%)   45 (90.0%)
Article 3        72          117        46            26            21 (45.7%)   42 (91.3%)
7 Conclusion
Efficiency and software engineering considerations dictate that current large-scale NLG systems must be constructed in a pipelined fashion that minimizes backtracking and communication between modules. Yet discourse markers and revision both operate at the clause level, which leads to potential interference effects if they are not resolved at the same location in a pipelined architecture. We have analyzed recent theoretical and applied work on both discourse markers and revision, showing that although no previous NLG system has yet integrated both components into a single architecture, an architecture for multi-paragraph generation which separated the two into distinct, unlinked modules would not be able to guarantee that the final text contained appropriately lexicalized discourse markers. Instead, our combined revision and discourse marker module in an implemented pipelined NLG system is able to correctly insert appropriate discourse markers despite changes made by the revision system. A corpus analysis indicated that significant interference effects between revision and discourse marker lexicalization are possible. Future work may show that similar interference effects arise as successive modules are added to pipelined NLG systems.
References
Charles B. Callaway and James C. Lester. 1997. Dynamically improving explanations: A revision-based approach to explanation generation. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence, pages 952–958, Nagoya, Japan.
Charles B. Callaway and James C. Lester. 2001. Narrative prose generation. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, pages 1241–1248, Seattle, WA. Charles B. Callaway and James C. Lester. 2002. Narrative prose generation. Artificial Intelligence, 139(2):213–252. Ben E. Cline. 1994. Knowledge Intensive Natural Language Generation with Revision. Ph.D. thesis, Virginia Polytechnic and State University, Blacksburg, Virginia. Hercules Dalianis and Eduard Hovy. 1993. Aggregation in natural language generation. In Proceedings of the Fourth European Workshop on Natural Language Generation, Pisa, Italy. Michael Elhadad and Kathy McKeown. 1990. Generating connectives. In COLING '90: Proceedings of the Thirteenth International Conference on Computational Linguistics, pages 97–101, Helsinki, Finland. Michael Elhadad, Kathleen McKeown, and Jacques Robin. 1997. Floating constraints in lexical choice. Computational Linguistics, 23(2):195–240. Brigitte Grote. 1998. Representing temporal discourse markers for generation purposes. In Proceedings of the Discourse Relations and Discourse Markers Workshop, pages 22–28, Montréal, Canada. Brigitte Grote and Manfred Stede. 1999. Ontology and lexical semantics for generating temporal discourse markers. In Proceedings of the 7th European Workshop on Natural Language Generation, Toulouse, France, May. Terrence Harvey and Sandra Carberry. 1998. Integrating text plans for conciseness and coherence. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, pages 512–518, August. Helmut Horacek. 2002. Aggregation with strong regularities and alternatives. In Second International Natural Language Generation Conference, pages 105–112, Harriman, NY, July. M. Kantrowitz and J. Bates. 1992. Integrated natural language generation systems. In R. Dale, E. Hovy, D. Rosner, and O. Stock, editors, Aspects of Automated Natural Language Generation, pages 247–262. Springer-Verlag, Berlin. Alistair Knott and Chris Mellish. 1996. A data-driven method for classifying connective phrases. Journal of Language and Speech, 39. David J. Mooney. 1994. Generating High-Level Structure for Extended Explanations. Ph.D. thesis, The University of Delaware, Newark, Delaware. Richard Power, Christine Doran, and Donia Scott. 1999. Generating embedded discourse markers from rhetorical structure. In Proceedings of the Seventh European Workshop on Natural Language Generation, Toulouse, France. R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman Publishers. Mike Reape and Chris Mellish. 1999. Just what is aggregation anyway? In Proceedings of the 7th European Workshop on Natural Language Generation, Toulouse, France, May. Ehud Reiter. 1994. Has a consensus NL generation architecture appeared, and is it psycholinguistically plausible? In Proceedings of the Seventh International Workshop on Natural Language Generation, pages 163–170, Kennebunkport, ME. Jacques Robin. 1994. Revision-Based Generation of Natural Language Summaries Providing Historical Background. Ph.D. thesis, Columbia University, December. James Shaw. 1998. Clause aggregation using linguistic knowledge. In Proceedings of the 9th International Workshop on Natural Language Generation, pages 138–147, Niagara-on-the-Lake, Canada. Manfred Stede and Carla Umbach. 1998. DiM-Lex: A lexicon of discourse markers for text generation and understanding.
In Proceedings of the Joint 36th Meeting of the ACL and the 17th Meeting of COLING, pages 1238–1242, Montr´eal, Canada, August. J. E. Fox Tree and J. C. Schrock. 1999. Discourse markers in spontaneous speech. Journal of Memory and Language, 27:35–53. Bonnie Webber and Aravind Joshi. 1998. Anchoring a lexicalized tree-adjoining grammar for discourse. In Proceedings of the COLING-ACL ’96 Discourse Relations and Discourse Markers Workshop, pages 86–92, Montr´eal, Canada, August. Feng-Jen Yang, Jung Hee Kim, Michael Glass, and Martha Evens. 2000. Lexical usage in the tutoring schemata of Circsim-Tutor: Analysis of variable references and discourse markers. In The Fifth Annual Conference on Human Interaction and Complex Systems, pages 27–31, Urbana, IL.
2003
34
Improved Source-Channel Models for Chinese Word Segmentation Jianfeng Gao, Mu Li and Chang-Ning Huang Microsoft Research, Asia Beijing 100080, China {jfgao, t-muli, cnhuang}@microsoft.com [Footnote 1: We would like to thank Ashley Chang, Jian-Yun Nie, Andi Wu and Ming Zhou for many useful discussions, and for comments on earlier versions of this paper. We would also like to thank Xiaoshan Fang, Jianfeng Li, Wenfeng Yang and Xiaodan Zhu for their help with evaluating our system.] Abstract This paper presents a Chinese word segmentation system that uses improved source-channel models of Chinese sentence generation. Chinese words are defined as one of the following four types: lexicon words, morphologically derived words, factoids, and named entities. Our system provides a unified approach to the four fundamental features of word-level Chinese language processing: (1) word segmentation, (2) morphological analysis, (3) factoid detection, and (4) named entity recognition. The performance of the system is evaluated on a manually annotated test set, and is also compared with several state-of-the-art systems, taking into account the fact that the definition of Chinese words often varies from system to system. 1 Introduction Chinese word segmentation is the initial step of many Chinese language processing tasks, and has attracted a lot of attention in the research community. It is a challenging problem due to the fact that there is no standard definition of Chinese words. In this paper, we define Chinese words as one of the following four types: entries in a lexicon, morphologically derived words, factoids, and named entities. We then present a Chinese word segmentation system which provides a solution to the four fundamental problems of word-level Chinese language processing: word segmentation, morphological analysis, factoid detection, and named entity recognition (NER). There are no word boundaries in written Chinese text. Therefore, unlike English, it may not be desirable to separate the solution to word segmentation from the solutions to the other three problems. Ideally, we would like to propose a unified approach to all the four problems. The unified approach we used in our system is based on the improved source-channel models of Chinese sentence generation, with two components: a source model and a set of channel models. The source model is used to estimate the generative probability of a word sequence, in which each word belongs to one word type. For each word type, a channel model is used to estimate the generative probability of a character string given the word type. So there are multiple channel models. We shall show in this paper that our models provide a statistical framework to incorporate a wide variety of linguistic knowledge and statistical models in a unified way. We evaluate the performance of our system using an annotated test set. We also compare our system with several state-of-the-art systems, taking into account the fact that the definition of Chinese words often varies from system to system. In the rest of this paper: Section 2 discusses previous work. Section 3 gives the detailed definition of Chinese words. Sections 4 to 6 describe in detail the improved source-channel models. Section 7 describes the evaluation results. Section 8 presents our conclusion. 2 Previous Work Many methods of Chinese word segmentation have been proposed: reviews include (Wu and Tseng, 1993; Sproat and Shih, 2001).
These methods can be roughly classified into dictionary-based methods and statistical-based methods, while many state-of- the-art systems use hybrid approaches. In dictionary-based methods (e.g. Cheng et al., 1999), given an input character string, only words that are stored in the dictionary can be identified. The performance of these methods thus depends to a large degree upon the coverage of the dictionary, which unfortunately may never be complete because new words appear constantly. Therefore, in addition to the dictionary, many systems also contain special components for unknown word identification. In particular, statistical methods have been widely applied because they utilize a probabilistic or cost-based scoring mechanism, instead of the dictionary, to segment the text. These methods however, suffer from three drawbacks. First, some of these methods (e.g. Lin et al., 1993) identify unknown words without identifying their types. For instance, one would identify a string as a unit, but not identify whether it is a person name. This is not always sufficient. Second, the probabilistic models used in these methods (e.g. Teahan et al., 2000) are trained on a segmented corpus which is not always available. Third, the identified unknown words are likely to be linguistically implausible (e.g. Dai et al., 1999), and additional manual checking is needed for some subsequent tasks such as parsing. We believe that the identification of unknown words should not be defined as a separate problem from word segmentation. These two problems are better solved simultaneously in a unified approach. One example of such approaches is Sproat et al. (1996), which is based on weighted finite-state transducers (FSTs). Our approach is motivated by the same inspiration, but is based on a different mechanism: the improved source-channel models. As we shall see, these models provide a more flexible framework to incorporate various kinds of lexical and statistical information. Some types of unknown words that are not discussed in Sproat’s system are dealt with in our system. 3 Chinese Words There is no standard definition of Chinese words – linguists may define words from many aspects (e.g. Packard, 2000), but none of these definitions will completely line up with any other. Fortunately, this may not matter in practice because the definition that is most useful will depend to a large degree upon how one uses and processes these words. We define Chinese words in this paper as one of the following four types: (1) entries in a lexicon (lexicon words below), (2) morphologically derived words, (3) factoids, and (4) named entities, because these four types of words have different functionalities in Chinese language processing, and are processed in different ways in our system. For example, the plausible word segmentation for the sentence in Figure 1(a) is as shown. Figure 1(b) is the output of our system, where words of different types are processed in different ways: (a) 朋友们/十二点三十分/高高兴兴/到/李俊生/教授/家/ 吃饭 (Friends happily go to professor Li Junsheng’s home for lunch at twelve thirty.) (b) [朋友+们 MA_S] [十二点三十分 12:30 TIME] [高兴 MR_AABB] [到] [李俊生 PN] [教授] [家] [吃饭] Figure 1: (a) A Chinese sentence. Slashes indicate word boundaries. (b) An output of our word segmentation system. Square brackets indicate word boundaries. + indicates a morpheme boundary. • For lexicon words, word boundaries are detected. • For morphologically derived words, their morphological patterns are detected, e.g. 
朋友们 ‘friend+s’ is derived by affixation of the plural affix 们 to the noun 朋友 (MA_S indicates a suffixation pattern), and 高高兴兴 ‘happily’ is a reduplication of 高兴 ‘happy’ (MR_AABB indicates an AABB reduplication pattern). • For factoids, their types and normalized forms are detected, e.g. 12:30 is the normalized form of the time expression 十二点三十分 (TIME indicates a time expression). • For named entities, their types are detected, e.g. 李俊生 ‘Li Junsheng’ is a person name (PN indicates a person name). In our system, we use a unified approach to detecting and processing the above four types of words. This approach is based on the improved source-channel models described below. 4 Improved Source-Channel Models Let S be a Chinese sentence, which is a character string. For all possible word segmentations W, we will choose the most likely one W* which achieves the highest conditional probability P(W|S): W* = argmax_W P(W|S). According to Bayes’ decision rule and dropping the constant denominator, we can equivalently perform the following maximization: W* = argmax_W P(W) P(S|W). (1) Following the Chinese word definition in Section 3, we define word class C as follows: (1) Each lexicon word is defined as a class; (2) each morphologically derived word is defined as a class; (3) each type of factoids is defined as a class, e.g. all time expressions belong to a class TIME; and (4) each type of named entities is defined as a class, e.g. all person names belong to a class PN. We therefore convert the word segmentation W into a word class sequence C. Eq. 1 can then be rewritten as: C* = argmax_C P(C) P(S|C). (2) Eq. 2 is the basic form of the source-channel models for Chinese word segmentation. The models assume that a Chinese sentence S is generated as follows: First, a person chooses a sequence of concepts (i.e., word classes C) to output, according to the probability distribution P(C); then the person attempts to express each concept by choosing a sequence of characters, according to the probability distribution P(S|C). The source-channel models can be interpreted in another way as follows: P(C) is a stochastic model estimating the probability of word class sequence. It indicates, given a context, how likely a word class occurs. For example, person names are more likely to occur before a title such as 教授 ‘professor’. So P(C) is also referred to as context model afterwards. P(S|C) is a generative model estimating how likely a character string is generated given a word class. For example, the character string 李俊生 is more likely to be a person name than 里俊生 ‘Li Junsheng’ because 李 is a common family name in China while 里 is not.
So P(S|C) is also referred to as class model afterwards. In our system, we use the improved source-channel models, which contain one context model (i.e., a trigram language model in our case) and a set of class models of different types, each of which is for one class of words, as shown in Figure 2.
Figure 2. Class models. Each row lists the word class, its class model, and the linguistic constraints used:
Lexicon word (LW): P(S|LW) = 1 if S forms a word lexicon entry, 0 otherwise; constraint: word lexicon.
Morphologically derived word (MW): P(S|MW) = 1 if S forms a morph lexicon entry, 0 otherwise; constraint: morph-lexicon.
Person name (PN): character bigram; constraints: family name list, Chinese PN patterns.
Location name (LN): character bigram; constraints: LN keyword list, LN lexicon, LN abbr. list.
Organization name (ON): word class bigram; constraints: ON keyword list, ON abbr. list.
Transliteration names (FN): character bigram; constraint: transliterated name character list.
Factoid (FT): P(S|FT) = 1 if S can be parsed using a factoid grammar G, 0 otherwise; constraint: factoid rules (presented by FSTs).
[Footnote 2: In our system, we define ten types of factoid: date, time (TIME), percentage, money, number (NUM), measure, e-mail, phone number, and WWW.]
Although Eq. 2 suggests that class model probability and context model probability can be combined through simple multiplication, in practice some weighting is desirable. There are two reasons. First, some class models are poorly estimated, owing to the sub-optimal assumptions we make for simplicity and the insufficiency of the training corpus. Combining the context model probability with poorly estimated class model probabilities according to Eq. 2 would give the context model too little weight. Second, as seen in Figure 2, the class models of different word classes are constructed in different ways (e.g. named entity models are n-gram models trained on corpora, and factoid models are compiled using linguistic knowledge). Therefore, the quantities of class model probabilities are likely to have vastly different dynamic ranges among different word classes. One way to balance these probability quantities is to add several class model weights CW, each for one word class, to adjust the class model probability P(S|C) to P(S|C)^CW. In our experiments, these class model weights are determined empirically to optimize the word segmentation performance on a development set. Given the source-channel models, the procedure of word segmentation in our system involves two steps: First, given an input string S, all word candidates are generated (and stored in a lattice). Each candidate is tagged with its word class and the class model probability P(S’|C), where S’ is any substring of S. Second, Viterbi search is used to select (from the lattice) the most probable word segmentation (i.e. word class sequence C*) according to Eq. (2). 5 Class Model Probabilities Given an input string S, all class models in Figure 2 are applied simultaneously to generate word class candidates whose class model probabilities are assigned using the corresponding class models: • Lexicon words: For any substring S’ ⊆ S, we assume P(S’|C) = 1 and tag the class as lexicon word if S’ forms an entry in the word lexicon, P(S’|C) = 0 otherwise. • Morphologically derived words: Similar to lexicon words, but a morph-lexicon is used instead of the word lexicon (see Section 5.1). • Factoids: For each type of factoid, we compile a set of finite-state grammars G, represented as FSTs. For all S’ ⊆ S, if it can be parsed using G, we assume P(S’|FT) = 1, and tag S’ as a factoid candidate. As the example in Figure 1 shows, 十二点三十分 is a factoid (time) candidate with the class model probability P(十二点三十分|TIME) = 1, and 十二 and 三十 are also factoid (number) candidates, with P(十二|NUM) = P(三十|NUM) = 1. • Named entities: For each type of named entities, we use a set of grammars and statistical models to generate candidates as described in Section 5.2. 5.1 Morphologically derived words In our system, the morphologically derived words are generated using five morphological patterns: (1) affixation: 朋友们 (friend - plural) ‘friends’; (2) reduplication: 高兴 ‘happy’ → 高高兴兴 ‘happily’; (3) merging: 上班 ‘on duty’ + 下班 ‘off duty’ → 上下班 ‘on-off duty’; (4) head particle (i.e. expressions that are verb + comp): 走 ‘walk’ + 出去 ‘out’ → 走出去 ‘walk out’; and (5) split (i.e.
a set of expressions that are separate words at the syntactic level but single words at the semantic level): 吃了饭 ‘already ate’, where the bi-character word 吃饭 ‘eat’ is split by the particle 了 ‘already’. It is difficult to simply extend the well-known techniques for English (i.e., finite-state morphology) to Chinese due to two reasons. First, Chinese morphological rules are not as ‘general’ as their English counterparts. For example, English plural nouns can be in general generated using the rule ‘noun + s → plural noun’. But only a small subset of Chinese nouns can be pluralized (e.g. 朋友们) using its Chinese counterpart ‘noun + 们 → plural noun’ whereas others (e.g. 南瓜 ‘pumpkins’) cannot. Second, the operations required by Chinese morphological analysis such as copying in reduplication, merging and splitting, cannot be implemented using the current finite-state networks.3 [Footnote 3: Sproat et al. (1996) also studied such problems (with the same example) and uses weighted FSTs to deal with the affixation.] Our solution is the extended lexicalization. We simply collect all morphologically derived word forms of the above five types and incorporate them into the lexicon, called morph lexicon. The procedure involves three steps: (1) Candidate generation. It is done by applying a set of morphological rules to both the word lexicon and a large corpus. For example, the rule ‘noun + 们 → plural noun’ would generate candidates like 朋友们. (2) Statistical filtering. For each candidate, we obtain a set of statistical features such as frequency, mutual information, left/right context dependency from a large corpus. We then use an information gain-like metric described in (Chien, 1997; Gao et al., 2002) to estimate how likely a candidate is to form a morphologically derived word, and remove ‘bad’ candidates. The basic idea behind the metric is that a Chinese word should appear as a stable sequence in the corpus. That is, the components within the word are strongly correlated, while the components at both ends should have low correlations with words outside the sequence. (3) Linguistic selection. We finally manually check the remaining candidates, and construct the morph-lexicon, where each entry is tagged by its morphological pattern. 5.2 Named entities We consider four types of named entities: person names (PN), location names (LN), organization names (ON), and transliterations of foreign names (FN). Because in principle any character string can be a named entity of one or more types, to limit the number of candidates for a more effective search, we generate named entity candidates, given an input string, in two steps: First, for each type, we use a set of constraints (which are compiled by linguists and are represented as FSTs) to generate only those ‘most likely’ candidates. Second, each of the generated candidates is assigned a class model probability. These class models are defined as generative models which are respectively estimated on their corresponding named entity lists using maximum likelihood estimation (MLE), together with smoothing methods.4 [Footnote 4: The detailed description of these models is in Sun et al. (2002), which also describes the use of a cache model and the way the abbreviations of LN and ON are handled.] We will describe briefly the constraints and the class models below. 5.2.1 Chinese person names There are two main constraints. (1) PN patterns: We assume that a Chinese PN consists of a family name F and a given name G, and is of the pattern F+G. Both F and G are one or two characters long. (2) Family name list: We only consider PN candidates that begin with an F stored in the family name list (which contains 373 entries in our system).
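As a rough illustration only, not the authors' code, the following Python sketch enumerates person-name candidate spans under the two constraints just listed; the toy family-name set, the helper name pn_candidates, and the example sentence are hypothetical stand-ins for the 373-entry list and real input.

```python
# Hypothetical sketch of PN candidate generation under the constraints above:
# a candidate must start with a family name (1-2 characters) from the list,
# followed by a given name of 1-2 characters.

FAMILY_NAMES = {"李", "王", "欧阳"}  # toy stand-in for the 373-entry family name list

def pn_candidates(sentence):
    """Yield (start, end, substring) spans of possible person names."""
    n = len(sentence)
    for i in range(n):
        for flen in (1, 2):                      # family name length
            fam = sentence[i:i + flen]
            if fam not in FAMILY_NAMES:
                continue
            for glen in (1, 2):                  # given name length
                j = i + flen + glen
                if j <= n:
                    yield (i, j, sentence[i:j])

print(list(pn_candidates("到李俊生教授家吃饭")))
```

Each span produced this way would then be scored with the character bigram class model described next and added to the candidate lattice.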
Given a PN candidate, which is a character string S’, the class model probability P(S’|PN) is computed by a character bigram model as follows: (1) Generate the family name sub-string SF, with the probability P(SF|F); (2) Generate the given name sub-string SG, with the probability P(SG|G) (or P(SG1|G1)); and (3) Generate the second given name, with the probability P(SG2|SG1,G2). For example, the generative probability of the string 李俊生 given that it is a PN would be estimated as P(李俊生|PN) = P(李|F) P(俊|G1) P(生|俊,G2). 5.2.2 Location names Unlike PNs, there are no patterns for LNs. We assume that a LN candidate is generated given S’ (which is less than 10 characters long), if one of the following conditions is satisfied: (1) S’ is an entry in the LN list (which contains 30,000 LNs); (2) S’ ends in a keyword in a 120-entry LN keyword list such as 市 ‘city’.5 [Footnote 5: For a better understanding, the constraint is a simplified version of that used in our system.] The probability P(S’|LN) is computed by a character bigram model. Consider a string 乌苏里江 ‘Wusuli river’. It is a LN candidate because it ends in a LN keyword 江 ‘river’. The generative probability of the string given it is a LN would be estimated as P(乌苏里江|LN) = P(乌|<LN>) P(苏|乌) P(里|苏) P(江|里) P(</LN>|江), where <LN> and </LN> are symbols denoting the beginning and the end of a LN, respectively. 5.2.3 Organization names ONs are more difficult to identify than PNs and LNs because ONs are usually nested named entities. Consider an ON 中国国际航空公司 ‘Air China Corporation’; it contains an LN 中国 ‘China’. Like the identification of LNs, an ON candidate is only generated given a character string S’ (less than 15 characters long), if it ends in a keyword in a 1,355-entry ON keyword list such as 公司 ‘corporation’. To estimate the generative probability of a nested ON, we introduce word class segmentations of S’, C, as hidden variables. In principle, the ON class model recovers P(S’|ON) over all possible C: P(S’|ON) = Σ_C P(S’,C|ON) = Σ_C P(C|ON) P(S’|C,ON). Since P(S’|C,ON) = P(S’|C), we have P(S’|ON) = Σ_C P(C|ON) P(S’|C). We then assume that the sum is approximated by a single pair of terms P(C*|ON) P(S’|C*), where C* is the most probable word class segmentation discovered by Eq. 2. That is, we also use our system to find C*, but the source-channel models are estimated on the ON list. Consider the earlier example. Assuming that C* = LN/国际/航空/公司, where 中国 is tagged as a LN, the probability P(S’|ON) would be estimated using a word class bigram model as: P(中国国际航空公司|ON) ≈ P(LN/国际/航空/公司|ON) P(中国|LN) = P(LN|<ON>) P(国际|LN) P(航空|国际) P(公司|航空) P(</ON>|公司) P(中国|LN), where P(中国|LN) is the class model probability of 中国 given that it is a LN, <ON> and </ON> are symbols denoting the beginning and the end of a ON, respectively. 5.2.4 Transliterations of foreign names As described in Sproat et al. (1996): FNs are usually transliterated using Chinese character strings whose sequential pronunciation mimics the source language pronunciation of the name. Since FNs can be of any length and their original pronunciation is effectively unlimited, the recognition of such names is tricky. Fortunately, there are only a few hundred Chinese characters that are particularly common in transliterations.
Therefore, an FN candidate would be generated given S’, if it contains only characters stored in a transliterated name character list (which contains 618 Chinese characters). The probability P(S’|FN) is estimated using a character bigram model. Notice that in our system a FN can be a PN, a LN, or an ON, depending on the context. Then, given a FN candidate, three named entity candidates, each for one category, are generated in the lattice, with the class probabilities P(S’|PN) = P(S’|LN) = P(S’|ON) = P(S’|FN). In other words, we delay the determination of its type until decoding, where the context model is used. 6 Context Model Estimation This section describes the way the class model probability P(C) (i.e. trigram probability) in Eq. 2 is estimated. Ideally, given an annotated corpus, where each sentence is segmented into words which are tagged by their classes, the trigram word class probabilities can be calculated using MLE, together with a backoff schema (Katz, 1987) to deal with the sparse data problem. Unfortunately, building such annotated training corpora is very expensive. Our basic solution is the bootstrapping approach described in Gao et al. (2002). It consists of three steps: (1) Initially, we use a greedy word segmentor to annotate the corpus, and obtain an initial context model based on the initial annotated corpus; (2) we re-annotate the corpus using the obtained models; and (3) re-train the context model using the re-annotated corpus. [Footnote 6: The greedy word segmentor is based on a forward maximum matching (FMM) algorithm: it processes through the sentence from left to right, taking the longest match with the lexicon entry at each point.] Steps 2 and 3 are iterated until the performance of the system converges. In the above approach, the quality of the context model depends to a large degree upon the quality of the initial annotated corpus, which is, however, not satisfactory due to two problems. First, the greedy segmentor cannot deal with the segmentation ambiguities, and even after iterations, these ambiguities can only be partially resolved. Second, many factoids and named entities cannot be identified using the greedy word segmentor, which is based on the dictionary. To solve the first problem, we use two methods to resolve segmentation ambiguities in the initial segmented training data. We classify word segmentation ambiguities into two classes: overlap ambiguity (OA), and combination ambiguity (CA). Consider a character string ABC: if it can be segmented into two words either as AB/C or A/BC depending on different context, ABC is called an overlap ambiguity string (OAS). If a character string AB can be segmented either into two words, A/B, or as one word depending on different context, AB is called a combination ambiguity string (CAS). To resolve OA, we identify all OASs in the training data and replace them with a single token <OAS>. By doing so, we actually remove the portion of training data that are likely to contain OA errors. To resolve CA, we select 70 high-frequency two-character CASs (e.g. 才能 ‘talent’ and 才/能 ‘just able’). For each CAS, we train a binary classifier (which is based on vector space models) using manually segmented sentences that contain the CAS. Then for each occurrence of a CAS in the initial segmented training data, the corresponding classifier is used to determine whether or not the CAS should be segmented.
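For concreteness, here is a minimal sketch, in our own Python and under assumed inputs, of the forward maximum matching segmentor described in the footnote above; the toy lexicon and the maximum word length are assumptions of the example, not values from the paper.

```python
# Minimal forward maximum matching (FMM) segmentor: scan left to right,
# taking the longest lexicon match at each position, falling back to a
# single character when nothing matches.
LEXICON = {"朋友", "朋友们", "教授", "家", "吃饭"}  # toy lexicon
MAX_WORD_LEN = 4                                   # assumed upper bound on word length

def fmm_segment(sentence):
    words, i = [], 0
    while i < len(sentence):
        for length in range(min(MAX_WORD_LEN, len(sentence) - i), 0, -1):
            candidate = sentence[i:i + length]
            if length == 1 or candidate in LEXICON:
                words.append(candidate)   # single characters are the fallback
                i += length
                break
    return words

print(fmm_segment("朋友们到教授家吃饭"))
```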
For the second problem, though we can simply use the finite-state machines described in Section 5 (extended by using the longest-matching constraint for disambiguation) to detect factoids in the initial segmented corpus, our method of NER in the initial step (i.e. step 1) is a little more complicated. First, we manually annotate named entities on a small subset (called the seed set) of the training data. Then, we obtain a context model on the seed set (called the seed model). We thus improve the context model which is trained on the initial annotated training corpus by interpolating it with the seed model. Finally, we use the improved context model in steps 2 and 3 of the bootstrapping. Our experiments show that a relatively small seed set (e.g., 10 million characters, which takes approximately three weeks for 4 persons to annotate the NE tags) is enough to get a good improved context model for initialization. 7 Evaluation To conduct a reliable evaluation, a manually annotated test set was developed. The text corpus contains approximately half a million Chinese characters that have been proofread and balanced in terms of domain, styles, and times. Before we annotate the corpus, several questions have to be answered: (1) Does the segmentation depend on a particular lexicon? (2) Should we assume a single correct segmentation for a sentence? (3) What are the evaluation criteria? (4) How to perform a fair comparison across different systems?
Table 1: System results (P% / R% for word segmentation, factoid, PN, LN and ON):
1 FMM: word segmentation 83.7 / 92.7.
2 Baseline: word segmentation 84.4 / 93.8.
3 (= 2 + Factoid): word segmentation 89.9 / 95.5; factoid 84.4 / 80.0.
4 (= 3 + PN): word segmentation 94.1 / 96.7; factoid 84.5 / 80.0; PN 81.0 / 90.0.
5 (= 4 + LN): word segmentation 94.7 / 97.0; factoid 84.5 / 80.0; PN 86.4 / 90.0; LN 79.4 / 86.0.
6 (= 5 + ON): word segmentation 96.3 / 97.4; factoid 85.2 / 80.0; PN 87.5 / 90.0; LN 89.2 / 85.4; ON 81.4 / 65.6.
As described earlier, it is more useful to define words depending on how the words are used in real applications. In our system, a lexicon (containing 98,668 lexicon words and 59,285 morphologically derived words) has been constructed for several applications, such as Asian language input and web search. Therefore, we annotate the text corpus based on the lexicon. That is, we segment each sentence as much as possible into words that are stored in our lexicon, and tag only the new words, which otherwise would be segmented into strings of one-character words. When there are multiple segmentations for a sentence, we keep only one that contains the least number of words. The annotated test set contains in total 247,039 tokens (including 205,162 lexicon/morph-lexicon words, 4,347 PNs, 5,311 LNs, 3,850 ONs, and 6,630 factoids, etc.) Our system is measured through multiple precision-recall (P/R) pairs, and F-measures (Fβ=1, which is defined as 2PR/(P+R)) for each word class. Since the annotated test set is based on a particular lexicon, some of the evaluation measures are meaningless when we compare our system to other systems that use different lexicons. So in comparison with different systems, we consider only the precision-recall of NER and the number of OAS errors (i.e. crossing brackets) because these measures are lexicon independent and there is always a single unambiguous answer. The training corpus for the context model contains approximately 80 million Chinese characters from various domains of text such as newspapers, novels, magazines etc. The training corpora for class models are described in Section 5.
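As a small reading aid for Tables 1 and 2, the following generic Python sketch, with made-up counts, shows how precision, recall and the balanced F-measure Fβ=1 = 2PR/(P+R) are computed from raw counts; it is not part of the authors' evaluation code.

```python
# Precision, recall and balanced F-measure from raw counts.
def prf(num_correct, num_proposed, num_reference):
    precision = num_correct / num_proposed if num_proposed else 0.0
    recall = num_correct / num_reference if num_reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical counts for one word class, e.g. person names:
print(prf(num_correct=90, num_proposed=100, num_reference=104))
```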
7.1 System results Our system is designed in the way that components such as the factoid detector and NER can be ‘switched on or off’, so that we can investigate the relative contribution of each component to the overall word segmentation performance. The main results are shown in Table 1. For comparison, we also include in the table (Row 1) the results of using the greedy segmentor (FMM) described in Section 6. Row 2 shows the baseline results of our system, where only the lexicon is used. It is interesting to find, in Rows 1 and 2, that the dictionary-based methods already achieve quite good recall, but the precisions are not very good because they cannot correctly identify unknown words that are not in the lexicon, such as factoids and named entities. We also find that even using the same lexicon, our approach that is based on the improved source-channel models outperforms the greedy approach (with a slight but statistically significant difference, i.e., P < 0.01 according to the t test) because the use of the context model resolves more ambiguities in segmentation. The most promising property of our approach is that the source-channel models provide a flexible framework where a wide variety of linguistic knowledge and statistical models can be combined in a unified way. As shown in Rows 3 to 6, when components are switched on in turn by activating corresponding class models, the overall word segmentation performance increases consistently. We also conduct an error analysis, showing that 86.2% of errors come from NER and factoid detection, although the tokens of these word types consist of only 8.7% of all that are in the test set. 7.2 Comparison with other systems We compare our system (henceforth SCM) with two other Chinese word segmentation systems: [Footnote 7: Although the two systems are widely accessible in mainland China, to our knowledge no standard evaluations on Chinese word segmentation of the two systems have been published by press time. More comprehensive comparisons (with other well-known systems) and detailed error analysis form one area of our future work.] 1. The MSWS system is one of the best available products. It is released by Microsoft® (as a set of Windows APIs). MSWS first conducts the word breaking using MM (augmented by heuristic rules for disambiguation), then conducts factoid detection and NER using rules. 2. The LCWS system is one of the best research systems in mainland China. It is released by Beijing Language University. The system works similarly to MSWS, but has a larger dictionary containing more PNs and LNs.
Table 2: Comparison results (# OAS errors; P% / R% / Fβ=1 for LN, PN and ON):
MSWS: 63 OAS errors; LN 93.5 / 44.2 / 60.0; PN 90.7 / 74.4 / 81.8; ON 64.2 / 46.9 / 60.0.
LCWS: 49 OAS errors; LN 85.4 / 72.0 / 78.2; PN 94.5 / 78.1 / 85.6; ON 71.3 / 13.1 / 22.2.
SCM: 7 OAS errors; LN 87.6 / 86.4 / 87.0; PN 83.0 / 89.7 / 86.2; ON 79.9 / 61.7 / 69.6.
As mentioned above, to achieve a fair comparison, we compare the above three systems only in terms of NER precision-recall and the number of OAS errors. However, we find that due to the different annotation specifications used by these systems, it is still very difficult to compare their results automatically. For example, 北京市政府 ‘Beijing city government’ has been segmented inconsistently as 北京市/政府 ‘Beijing city’ + ‘government’ or 北京/市政府 ‘Beijing’ + ‘city government’ even in the same system. Even worse, some LNs tagged in one system are tagged as ONs in another system. Therefore, we have to manually check the results.
We picked 933 sentences at random containing 22,833 words (including 329 PNs, 617 LNs, and 435 ONs) for testing. We also did not differentiate LNs and ONs in evaluation. That is, we only checked the word boundaries of LNs and ONs and treated both tags exchangeable. The results are shown in Table 2. We can see that in this small test set SCM achieves the best overall performance of NER and the best performance of resolving OAS. 8 Conclusion The contributions of this paper are three-fold. First, we formulate the Chinese word segmentation problem as a set of correlated problems, which are better solved simultaneously, including word breaking, morphological analysis, factoid detection and NER. Second, we present a unified approach to these problems using the improved source-channel models. The models provide a simple statistical framework to incorporate a wide variety of linguistic knowledge and statistical models in a unified way. Third, we evaluate the system’s performance on an annotated test set, showing very promising results. We also compare our system with several state-of-the-art systems, taking into account the fact that the definition of Chinese words varies from system to system. Given the comparison results, we can say with confidence that our system achieves at least the performance of state-of-the-art word segmentation systems. References Cheng, Kowk-Shing, Gilbert H. Yong and Kam-Fai Wong. 1999. A study on word-based and integral-bit Chinese text compression algorithms. JASIS, 50(3): 218-228. Chien, Lee-Feng. 1997. PAT-tree-based keyword extraction for Chinese information retrieval. In SIGIR97, 27-31. Dai, Yubin, Christopher S. G. Khoo and Tech Ee Loh. 1999. A new statistical formula for Chinese word segmentation incorporating contextual information. SIGIR99, 82-89. Gao, Jianfeng, Joshua Goodman, Mingjing Li and Kai-Fu Lee. 2002. Toward a unified approach to statistical language modeling for Chinese. ACM TALIP, 1(1): 3-33. Lin, Ming-Yu, Tung-Hui Chiang and Keh-Yi Su. 1993. A preliminary study on unknown word problem in Chinese word segmentation. ROCLING 6, 119-141. Katz, S. M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE ASSP 35(3):400-401. Packard, Jerome. 2000. The morphology of Chinese: A Linguistics and Cognitive Approach. Cambridge University Press, Cambridge. Sproat, Richard and Chilin Shih. 2002. Corpus-based methods in Chinese morphology and phonology. In: COOLING 2002. Sproat, Richard, Chilin Shih, William Gale and Nancy Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics. 22(3): 377-404. Sun, Jian, Jianfeng Gao, Lei Zhang, Ming Zhou and Chang-Ning Huang. 2002. Chinese named entity identification using class-based language model. In: COLING 2002. Teahan, W. J., Yingying Wen, Rodger McNad and Ian Witten. 2000. A compression-based algorithm for Chinese word segmentation. Computational Linguistics, 26(3): 375-393. Wu, Zimin and Gwyneth Tseng. 1993. Chinese text segmentation for text retrieval achievements and problems. JASIS, 44(9): 532-542.
2003
35
Unsupervised Segmentation of Words Using Prior Distributions of Morph Length and Frequency Mathias Creutz Neural Networks Research Centre, Helsinki University of Technology P.O.Box 9800, FIN-02015 HUT, Finland [email protected] Abstract We present a language-independent and unsupervised algorithm for the segmentation of words into morphs. The algorithm is based on a new generative probabilistic model, which makes use of relevant prior information on the length and frequency distributions of morphs in a language. Our algorithm is shown to outperform two competing algorithms, when evaluated on data from a language with agglutinative morphology (Finnish), and to perform well also on English data. 1 Introduction In order to artificially “understand” or produce natural language, a system presumably has to know the elementary building blocks, i.e., the lexicon, of the language. Additionally, the system needs to model the relations between these lexical units. Many existing NLP (natural language processing) applications make use of words as such units. For instance, in statistical language modelling, probabilities of word sequences are typically estimated, and bag-of-word models are common in information retrieval. However, for some languages it is infeasible to construct lexicons for NLP applications, if the lexicons contain entire words. In especially agglutinative languages,1 such as Finnish and Turkish, the number of possible different word forms is simply too high. [Footnote 1: In agglutinative languages words are formed by the concatenation of morphemes.] For example, in Finnish, a single verb may appear in thousands of different forms (Karlsson, 1987). According to linguistic theory, words are built from smaller units, morphemes. Morphemes are the smallest meaning-bearing elements of language and could be used as lexical units instead of entire words. However, the construction of a comprehensive morphological lexicon or analyzer based on linguistic theory requires a considerable amount of work by experts. This is both time-consuming and expensive and hardly applicable to all languages. Furthermore, as language evolves the lexicon must be updated continuously in order to remain up-to-date. Alternatively, an interesting field of research lies open: Minimally supervised algorithms can be designed that automatically discover morphemes or morpheme-like units from data. There exist a number of such algorithms, some of which are entirely unsupervised and others that use some knowledge of the language. In the following, we discuss recent unsupervised algorithms and refer the reader to (Goldsmith, 2001) for a comprehensive survey of previous research in the whole field. Many algorithms proceed by segmenting (i.e., splitting) words into smaller components. Often the limiting assumption is made that words consist of only one stem followed by one (possibly empty) suffix (Déjean, 1998; Snover and Brent, 2001; Snover et al., 2002). This limitation is reduced in (Goldsmith, 2001) by allowing a recursive structure, where stems can have inner structure, so that they in turn consist of a substem and a suffix. Also prefixes are possible. However, for languages with agglutinative morphology this may not be enough. In Finnish, a word can consist of lengthy sequences of alternating stems and affixes. Some morphology discovery algorithms learn relationships between words by comparing the orthographic or semantic similarity of the words (Schone and Jurafsky, 2000; Neuvel and Fulop, 2002; Baroni et al., 2002).
Here a small number of components per word are assumed, which makes the approaches difficult to apply as such to agglutinative languages. We previously presented two segmentation algorithms suitable for agglutinative languages (Creutz and Lagus, 2002). The algorithms learn a set of segments, which we call morphs, from a corpus. Stems and affixes are not distinguished as separate categories by the algorithms, and in that sense they resemble algorithms for text segmentation and word discovery, such as (Deligne and Bimbot, 1997; Brent, 1999; Kit and Wilks, 1999; Yu, 2000). However, we observed that for the corpus size studied (100 000 words), our two algorithms were somewhat prone to excessive segmentation of words. In this paper, we aim at overcoming the problem of excessive segmentation, particularly when small corpora (up to 200 000 words) are used for training. We present a new segmentation algorithm, which is language independent and works in an unsupervised fashion. Since the results obtained suggest that the algorithm performs rather well, it could possibly be suitable for languages for which only small amounts of written text are available. The model is formulated in a probabilistic Bayesian framework. It makes use of explicit prior information in the form of probability distributions for morph length and morph frequency. The model is based on the same kind of reasoning as the probabilistic model in (Brent, 1999). While Brent’s model displays a prior probability that exponentially decreases with word length (with one character as the most common length), our model uses a probability distribution that more accurately models the real length distribution. Also Brent’s frequency distribution differs from ours, which we derive from Mandelbrot’s correction of Zipf’s law (cf. Section 2.5). Our model requires that the values of two parameters be set: (i) our prior belief of the most common morph length, and (ii) our prior belief of the proportion of morph types2 that occur only once in the corpus. These morph types are called hapax legomena. While the former is a rather intuitive measure, the latter may not appear as intuitive. However, the proportion of hapax legomena may be interpreted as a measure of the richness of the text. Also note that since the most common morph length is calculated for morph types, not tokens, it is not independent of the corpus size. A larger corpus usually requires a higher average morph length, a fact that is stated for word lengths in (Baayen, 2001). As an evaluation criterion for the performance of our method and two reference methods we use a measure that reflects the ability to recognize real morphemes of the language by examining the morphs found by the algorithm. 2 Probabilistic generative model In this section we derive the new model. We follow a step-by-step process, during which a morph lexicon and a corpus are generated. The morphs in the lexicon are strings that emerge as a result of a stochastic process. The corpus is formed through another stochastic process that picks morphs from the lexicon and places them in a sequence. At two points of the process, prior knowledge is required in the form of two real numbers: the most common morph length and the proportion of hapax legomena morphs. The model can be used for segmentation of words by requiring that the corpus created is exactly the input data. 
By selecting the most probable morph lexicon that can produce the input data, we obtain a segmentation of the words in the corpus, since we can rewrite every word as a sequence of morphs. 2.1 Size of the morph lexicon We start the generation process by deciding the number of morphs in the morph lexicon (type count). This number is denoted by nµ and its probability p(nµ) follows the uniform distribution. This means that, a priori, no lexicon size is more probable than another.3 [Footnote 2: We use standard terminology: Morph types are the set of different, distinct morphs. By contrast, morph tokens are the instances (or occurrences) of morphs in the corpus.] [Footnote 3: This is an improper prior, but it is of little practical significance for two reasons: (i) This stage of the generation process only contributes with one probability value, which will have a negligible effect on the model as a whole. (ii) A proper probability density function would presumably be very flat, which would hardly help guiding the search towards an optimal model.] 2.2 Morph lengths For each morph in the lexicon, we independently choose its length in characters according to the gamma distribution: p(l_µi) = 1/(Γ(α) β^α) · l_µi^(α−1) e^(−l_µi/β), (1) where l_µi is the length in characters of the ith morph, and α and β are constants. Γ(α) is the gamma function: Γ(α) = ∫_0^∞ z^(α−1) e^(−z) dz. (2) The maximum value of the density occurs at l_µi = (α − 1)β, which corresponds to the most common morph length in the lexicon. When β is set to one, and α to one plus our prior belief of the most common morph length, the pdf (probability density function) is completely defined. We have chosen the gamma distribution for morph lengths, because it corresponds rather well to the real length distribution observed for word types in Finnish and English corpora that we have studied. The distribution also fits the length distribution of the morpheme labels used as a reference (cf. Section 3). A Poisson distribution can be justified and has been used in order to model the length distribution of word and morph tokens [e.g., (Creutz and Lagus, 2002)], but for morph types we have chosen the gamma distribution, which has a thicker tail. 2.3 Morph strings For each morph µi, we decide the character string it consists of: We independently choose l_µi characters at random from the alphabet in use. The probability of each character cj is the maximum likelihood estimate of the occurrence of this character in the corpus:4 p(cj) = n_cj / Σ_k n_ck, (3) where n_cj is the number of occurrences of the character cj in the corpus, and Σ_k n_ck is the total number of characters in the corpus. [Footnote 4: Alternatively, the maximum likelihood estimate of the occurrence of the character in the lexicon could be used.] 2.4 Morph order in the lexicon The lexicon consists of a set of nµ morphs and it makes no difference in which order these morphs have emerged. Regardless of their initial order, the morphs can be sorted into a uniquely defined (e.g., alphabetical) order. Since there are nµ! ways to order nµ different elements,5 we multiply the probability accumulated so far by nµ!: p(lexicon) = p(nµ) · [Π_{i=1..nµ} p(l_µi) Π_{j=1..l_µi} p(cj)] · nµ! (4) [Footnote 5: Strictly speaking, our probabilistic model is not perfect, since we do not make sure that no morph can appear more than once in the lexicon.] 2.5 Morph frequencies The next step is to generate a corpus using the morph lexicon obtained in the previous steps. First, we independently choose the number of times each morph occurs in the corpus.
We pursue the following line of thought: Zipf has studied the relationship between the frequency of a word, f, and its rank, z.6 [Footnote 6: The rank of a word is the position of the word in a list, where the words have been sorted according to falling frequency.] He suggests that the frequency of a word is inversely proportional to its rank. Mandelbrot has refined Zipf’s formula, and suggests a more general relationship [see, e.g., (Baayen, 2001)]: f = C(z + b)^(−a), (5) where C, a and b are parameters of a text. Let us derive a probability distribution from Mandelbrot’s formula. The rank of a word as a function of its frequency can be obtained by solving for z from (5): z = C^(1/a) f^(−1/a) − b. (6) Suppose that one wants to know the number of words that have a frequency close to f rather than the rank of the word with frequency f. In order to obtain this information, we choose an arbitrary interval around f: [(1/γ)f . . . γf[, where γ > 1, and compute the rank at the endpoints of the interval. The difference is an estimate of the number of words that fall within the interval, i.e., have a frequency close to f: n_f = z_{1/γ} − z_γ = (γ^(1/a) − γ^(−1/a)) C^(1/a) f^(−1/a). (7) This can be transformed into an exponential pdf by (i) binning the frequency axis so that there are no overlapping intervals. (This means that the frequency axis is divided into non-overlapping intervals [(1/γ)f̂ . . . γf̂[, which is equivalent to having f̂ values that are powers of γ^2: f̂_0 = γ^0 = 1, f̂_1 = γ^2, f̂_2 = γ^4, . . . All frequencies f are rounded to the closest f̂.) Next (ii), we normalize the number of words with a frequency close to f̂ with the total number of words Σ_f̂ n_f̂. Furthermore (iii), f̂ is written as e^(log f̂), and (iv) C must be chosen so that the normalization coefficient equals 1/a, which yields a proper pdf that integrates to one. Note also the factor log γ^2. Like f̂, log f̂ is a discrete variable. We approximate the integral of the density function around each value log f̂ by multiplying with the difference between two successive log f̂ values, which equals log γ^2: p(f ∈ [(1/γ)f̂ . . . γf̂[) = ((γ^(1/a) − γ^(−1/a)) C^(1/a) / Σ_f̂ n_f̂) e^(−(1/a) log f̂) = (1/a) e^(−(1/a) log f̂) · log γ^2. (8) Now, if we assume that Zipf’s and Mandelbrot’s formulae apply to morphs as well as to words, we can use formula (8) for every morph frequency f_µi, which is the number of occurrences (or frequency) of the morph µi in the corpus (token count). However, values for a and γ^2 must be chosen. We set γ^2 to 1.59, which is the lowest value for which no empty frequency bins will appear.7 [Footnote 7: Empty bins can appear for small values of f_µi due to f_µi’s being rounded to the closest f̂_µi, which is a power of γ^2.] For f_µi = 1, (8) reduces to log γ^2/a. We set this value equal to our prior belief of the proportion of morph types that are to occur only once in the corpus (hapax legomena). 2.6 Corpus The morphs and their frequencies have been set. The order of the morphs in the corpus remains to be decided. The probability of one particular order is the inverse of the multinomial: p(corpus) = [(Σ_{i=1..nµ} f_µi)! / Π_{i=1..nµ} f_µi!]^(−1) = [N! / Π_{i=1..nµ} f_µi!]^(−1). (9) The numerator of the multinomial is the factorial of the total number of morph tokens, N, which equals the sum of frequencies of every morph type. The denominator is the product of the factorial of the frequency of each morph type.
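To tie the pieces of Sections 2.1 to 2.6 together, here is a rough Python sketch, ours rather than the author's implementation, of the log-probability the model assigns to a lexicon and corpus; the default prior values, the uniform character model in the usage example, the omission of the improper size prior, and the skipped frequency binning are all simplifying assumptions of the sketch.

```python
# Rough sketch of the log-probability assigned by the generative model of
# Section 2 to a morph lexicon and corpus.  morph_freqs maps each morph type
# to its corpus frequency; char_probs maps each character to its ML estimate.
# prior_length and hapax_share are the two prior parameters; frequency
# binning (powers of gamma^2) and the uniform size prior are omitted.
from math import lgamma, log

def log_gamma_pdf(length, alpha, beta=1.0):
    # Eq. (1): gamma prior on morph length, mode at (alpha - 1) * beta.
    return (alpha - 1) * log(length) - length / beta - lgamma(alpha) - alpha * log(beta)

def model_log_prob(morph_freqs, char_probs, prior_length=7, hapax_share=0.5):
    alpha = prior_length + 1        # most common length = prior_length when beta = 1
    a = log(1.59) / hapax_share     # Eq. (8) with gamma^2 = 1.59 and p(f = 1) = hapax_share
    logp = 0.0
    for morph in morph_freqs:       # lexicon: length prior and character string, Eqs. (1), (3)
        logp += log_gamma_pdf(len(morph), alpha)
        logp += sum(log(char_probs[c]) for c in morph)
    logp += lgamma(len(morph_freqs) + 1)            # n_mu! lexicon orderings, Eq. (4)
    for f in morph_freqs.values():                  # frequency prior, Eq. (8)
        logp += log(log(1.59) / a) - log(f) / a
    n_tokens = sum(morph_freqs.values())            # inverse multinomial, Eq. (9)
    logp -= lgamma(n_tokens + 1) - sum(lgamma(f + 1) for f in morph_freqs.values())
    return logp

# Toy usage with a crude uniform character model:
freqs = {"talo": 4, "ssa": 3, "auto": 1}
alphabet = set("".join(freqs))
chars = {c: 1.0 / len(alphabet) for c in alphabet}
print(model_log_prob(freqs, chars))
```

Maximizing a quantity of this kind over candidate lexicons, or equivalently over segmentations of the corpus, is what the search procedure described in the next subsection approximates.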
2.7 Search for the optimal model The search for the optimal model given our input data corresponds closely to the recursive segmentation algorithm presented in (Creutz and Lagus, 2002). The search takes place in batch mode, but could as well be done incrementally. All words in the data are randomly shuffled, and for each word, every split into two parts is tested. The most probable split location (or no split) is selected and in case of a split, the two parts are recursively split in two. All words are iteratively reprocessed until the probability of the model converges. 3 Evaluation From the point of view of linguistic theory, it is possible to come up with different plausible suggestions for the correct location of morpheme boundaries. Some of the solutions may be more elegant than others,8 but it is difficult to say if the most elegant scheme will work best in practice, when real NLP applications are concerned. [Footnote 8: Cf. “hop + ed” vs. “hope + d” (past tense of “to hope”).] We utilize an evaluation method for segmentation of words presented in (Creutz and Lagus, 2002). In this method, segments are not compared to one single “correct” segmentation. The evaluation criterion can rather be interpreted from the point of view of language “understanding”. A morph discovered by the segmentation algorithm is considered to be “understood”, if there is a low-ambiguity mapping from the morph to a corresponding morpheme. Alternatively, a morph may correspond to a sequence of morphemes, if these morphemes are very likely to occur together. The idea is that if an entirely new word form is encountered, the system will “understand” it by decomposing it into morphs that it “understands”. A segmentation algorithm that segments words into too small parts will perform poorly due to high ambiguity. At the other extreme, an algorithm that is reluctant at splitting words will have bad generalization ability to new word forms. Reference morpheme sequences for the words are obtained using existing software for automatic morphological analysis based on the two-level morphology of Koskenniemi (1983). For each word form, the analyzer outputs the base form of the word together with grammatical tags. By filtering the output, we get a sequence of morpheme labels that appear in the correct order and represent correct morphemes rather closely. Note, however, that the morpheme labels are not necessarily orthographically similar to the morphemes they represent. The exact procedure for evaluating the segmentation of a set of words consists of the following steps: (1) Segment the words in the corpus using the automatic segmentation algorithm. (2) Divide the segmented data into two parts of equal size. Collect all segmented word forms from the first part into a training vocabulary and collect all segmented word forms from the second part into a test vocabulary. (3) Align the segmentation of the words in the training vocabulary with the corresponding reference morpheme label sequences. Each morph must be aligned with one or more consecutive morpheme labels and each morpheme label must be aligned with at least one morph; e.g., for a hypothetical segmentation of the English word winners’:
Morpheme labels: win -ER PL GEN
Morph sequence: w inn er s’
(4) Estimate conditional probabilities for the morph/morpheme mappings computed over the whole training vocabulary: p(morpheme | morph). Re-align using the Viterbi algorithm and employ the Expectation-Maximization algorithm iteratively until convergence of the probabilities.
(5) The quality of the segmentation is evaluated on the test vocabulary. The segmented words in the test vocabulary are aligned against their reference morpheme label sequences according to the conditional probabilities learned from the training vocabulary. To measure the quality of the segmentation we compute the expectation of the proportion of correct mappings from morphs to morpheme labels, E{p(morpheme | morph)}: (1/N) Σ_{i=1..N} p_i(morpheme | morph), (10) where N is the number of morph/morpheme mappings, and p_i(·) is the probability associated with the ith mapping. Thus, we measure the proportion of morphemes in the test vocabulary that we can expect to recognize correctly by examining the morph segments.9 [Footnote 9: In (Creutz and Lagus, 2002) the results are reported less intuitively as the “alignment distance”, i.e., the negative logprob of the entire test set: −log Π p_i(morpheme | morph).] 4 Experiments We have conducted experiments involving (i) three different segmentation algorithms, (ii) two corpora in different languages (Finnish and English), and (iii) data sizes ranging from 2000 words to 200 000 words. 4.1 Segmentation algorithms The new probabilistic method is compared to two existing segmentation methods: the Recursive MDL method presented in (Creutz and Lagus, 2002)10 and John Goldsmith’s algorithm called Linguistica (Goldsmith, 2001).11 [Footnote 10: Online demo at http://www.cis.hut.fi/projects/morpho/.] [Footnote 11: The software can be downloaded from http://humanities.uchicago.edu/faculty/goldsmith/Linguistica2000/.] Both methods use MDL (Minimum Description Length) (Rissanen, 1989) as a criterion for model optimization. The effect of using prior information on the distribution of morph length and frequency can be assessed by comparing the probabilistic method to Recursive MDL, since both methods utilize the same search algorithm, but Recursive MDL does not make use of explicit prior information. Furthermore, the possible benefit of using the two sources of prior information can be compared against the possible benefit of grouping stems and suffixes into signatures. The latter technique is employed by Linguistica. 4.2 Data The Finnish data consists of subsets of a newspaper text corpus from CSC,12 [Footnote 12: http://www.csc.fi/kielipankki/] from which nonwords (numbers and punctuation marks) have been
We thus assume that we can make a good guess of the final morph length and frequency distributions. Note, however, that our reference is an approximation of a morpheme representation. As the segmentation algorithms produce morphs, not morphemes, we can expect to obtain a larger number of morphs due to allomorphy. Note also that we do not optimize for segmentation performance on the development test set; we only choose the best fit for the morph length and frequency distributions. As for the two other segmentation algorithms, Recursive MDL has no parameters to adjust. In Linguistica we have used Method A Suffixes + Find prefixes from stems with other parameters left at their default values. We are unaware whether another configuration could be more advantageous for Linguistica. 13http://www.connexor.fi/ 14The Brown corpus is available at the Linguistic Data Consortium at http://www.ldc.upenn.edu/. 15http://www.lingsoft.fi/ 2 5 10 50 100 200 0 10 20 30 40 50 60 Finnish Corpus size [1000 words] (log. scaled axis) Expectation(recognized morphemes) [%] Probabilistic Recursive MDL Linguistica No segmentation Figure 1: Expectation of the percentage of recognized morphemes for Finnish data. 4.4 Results The expected proportion of morphemes recognized by the three segmentation methods are plotted in Figures 1 and 2 for different sizes of the Finnish and English corpora. The search algorithm used in the probabilistic method and Recursive MDL involve randomness and therefore every value shown for these two methods is the average obtained over ten runs with different random seeds. However, the fluctuations due to random behaviour are very small and paired t-tests show significant differences at the significance level of 0.01 for all pair-wise comparisons of the methods at all corpus sizes. For Finnish, all methods show a curve that mainly increases as a function of the corpus size. The probabilistic method is the best with morpheme recognition percentages between 23.5% and 44.2%. Linguistica performs worst with percentages between 16.5% and 29.1%. None of the methods are close to ideal performance, which, however, is lower than 100%. This is due to the fact that the test vocabulary contains a number of morphemes that are not present in the training vocabulary, and thus are impossible to recognize. The proportion of unrecognizable morphemes is highest for the smallest corpus size (32.5%) and decreases to 8.8% for the largest corpus size. The evaluation measure used unfortunately scores 2 5 10 50 100 200 0 10 20 30 40 50 60 English Corpus size [1000 words] (log. scaled axis) Expectation(recognized morphemes) [%] Probabilistic Recursive MDL Linguistica No segmentation Figure 2: Expectation of the percentage of recognized morphemes for English data. a baseline of no segmentation fairly high. The nosegmentation baseline corresponds to a system that recognizes the training vocabulary fully, but has no ability to generalize to any other word form. The results for English are different. Linguistica is the best method for corpus sizes below 50 000 words, but its performance degrades from the maximum of 39.6% at 10 000 words to 29.8% for the largest data set. The probabilistic method is constantly better than Recursive MDL and both methods outperform Linguistica beyond 50 000 words. The recognition percentages of the probabilistic method vary between 28.2% and 43.6%. However, for corpus sizes above 10 000 words none of the three methods outperform the no-segmentation baseline. 
Overall, the results for English are closer to ideal performance than was the case for Finnish. This is partly due to the fact that the proportion of unseen morphemes that are impossible to recognize is higher for English (44.5% at 2000 words, 19.0% at 200 000 words). As far as the time consumption of the algorithms is concerned, the largest Finnish corpus took 20 minutes to process for the probabilistic method and Recursive MDL, and 40 minutes for Linguistica. The largest English corpus was processed in less than three minutes by all the algorithms. The tests were run on a 900 MHz AMD Duron processor with 256 MB RAM. 5 Discussion For small data sizes, Recursive MDL has a tendency to split words into too small segments, whereas Linguistica is much more reluctant at splitting words, due to its use of signatures. The extent to which the probabilistic method splits words lies somewhere in between the two other methods. Our evaluation measure favours low ambiguity as long as the ability to generalize to new word forms does not suffer. This works against all segmentation methods for English at larger data sizes. The English language has rather simple morphology, which means that the number of different possible word forms is limited. The larger the training vocabulary, the broader coverage of the test vocabulary, and therefore the no-segmentation approach works surprisingly well. Segmentation always increases ambiguity, which especially Linguistica suffers from as it discovers more and more signatures and short suffixes as the amount of data increases. For instance, a final ’s’ stripped off its stem can be either a noun or a verb ending, and a final ’e’ is very ambiguous, as it belongs to orthography rather than morphology and does not correspond to any morpheme. Finnish morphology is more complex and there are endless possibilities to construct new word forms. As can be seen from Figure 1, the probabilistic method and Recursive MDL perform better than the no-segmentation baseline for all data sizes. The segmentations could be evaluated using other measures, but for language modelling purposes, we believe that the evaluation measure should not favour shattering of very common strings, even though they correspond to more than one morpheme. These strings should rather work as individual vocabulary items in the model. It has been shown that increased performance of n-gram models can be obtained by adding larger units consisting of common word sequences to the vocabulary; see e.g., (Deligne and Bimbot, 1995). Nevertheless, in the near future we wish to explore possibilities of using complementary and more standard evaluation measures, such as precision, recall, and F-measure of the discovered morph boundaries. Concerning the length and frequency prior distributions in the probabilistic model, one notes that they are very general and do not make far-reaching assumptions about the behaviour of natural language. In fact, Zipf’s law has been shown to apply to randomly generated artificial texts (Li, 1992). In our implementation, due to the independence assumptions made in the model and due to the search algorithm used, the choice of a prior value for the most common morph length is more important than the hapax legomena value. If a very bad prior value for the most common morph length is used performance drops by twelve percentage units, whereas extreme hapax legomena values only reduces performance by two percentage units. 
But note that the two values are dependent: A greater average morph length means a greater number of hapax legomena and vice versa. There is always room for improvement. Our current model does not represent contextual dependencies, such as phonological rules or morphotactic limitations on morph order. Nor does it identify which morphs are allomorphs of the same morpheme, e.g., “city” and “citi + es”. In the future, we expect to address these problems by using statistical language modelling techniques. We will also study how the algorithms scale to considerably larger corpora. 6 Conclusions The results we have obtained suggest that the performance of a segmentation algorithm can indeed be increased by using prior information of general nature, when this information is expressed mathematically as part of a probabilistic model. Furthermore, we have reasons to believe that the morph segments obtained can be useful as components of a statistical language model. Acknowledgements I am most grateful to Krista Lagus, Krister Lind´en, and Anders Ahlb¨ack, as well as the anonymous reviewers for their valuable comments. References R. H. Baayen. 2001. Word Frequency Distributions. Kluwer Academic Publishers. M. Baroni, J. Matiasek, and H. Trost. 2002. Unsupervised learning of morphologically related words based on orthographic and semantic similarity. In Proc. ACL Workshop Morphol. & Phonol. Learning, pp. 48–57. M. R. Brent. 1999. An efficient, probabilistically sound algorithm for segmentation and word discovery. Machine Learning, 34:71–105. M. Creutz and K. Lagus. 2002. Unsupervised discovery of morphemes. In Proc. ACL Workshop on Morphol. and Phonological Learning, pp. 21–30, Philadelphia. H. D´ejean. 1998. Morphemes as necessary concept for structures discovery from untagged corpora. In Workshop on Paradigms and Grounding in Nat. Lang. Learning, pp. 295–299, Adelaide. S. Deligne and F. Bimbot. 1995. Language modeling by variable length sequences: Theoretical formulation and evaluation of multigrams. In Proc. ICASSP. S. Deligne and F. Bimbot. 1997. Inference of variablelength linguistic and acoustic units by multigrams. Speech Communication, 23:223–241. J. Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):153–198. F. Karlsson. 1987. Finnish Grammar. WSOY, 2nd ed. C. Kit and Y. Wilks. 1999. Unsupervised learning of word boundary with description length gain. In Proc. CoNLL99 ACL Workshop, Bergen. K. Koskenniemi. 1983. Two-level morphology: A general computational model for word-form recognition and production. Ph.D. thesis, University of Helsinki. W. Li. 1992. Random texts exhibit Zipf’s-Law-like word frequency distribution. IEEE Transactions on Information Theory, 38(6):1842–1845. S. Neuvel and S. A. Fulop. 2002. Unsupervised learning of morphology without morphemes. In Proc. ACL Workshop on Morphol. & Phonol. Learn., pp. 31–40. J. Rissanen. 1989. Stochastic Complexity in Statistical Inquiry, vol. 15. World Scientific Series in Computer Science, Singapore. P. Schone and D. Jurafsky. 2000. Knowledge-free induction of morphology using Latent Semantic Analysis. In Proc. CoNLL-2000 & LLL-2000, pp. 67–72. M. G. Snover and M. R. Brent. 2001. A Bayesian model for morpheme and paradigm identification. In Proc. 39th Annual Meeting of the ACL, pp. 482–490. M. G. Snover, G. E. Jarosz, and M. R. Brent. 2002. Unsupervised learning of morphology using a novel directed search algorithm: Taking the first step. In Proc. ACL Worksh. Morphol. & Phonol. 
Learn., pp. 11–20. H. Yu. 2000. Unsupervised word induction using MDL criterion. In Proc. ISCSL, Beijing.
Parametric Models of Linguistic Count Data Martin Jansche Department of Linguistics The Ohio State University Columbus, OH 43210, USA [email protected] Abstract It is well known that occurrence counts of words in documents are often modeled poorly by standard distributions like the binomial or Poisson. Observed counts vary more than simple models predict, prompting the use of overdispersed models like Gamma-Poisson or Beta-binomial mixtures as robust alternatives. Another deficiency of standard models is due to the fact that most words never occur in a given document, resulting in large amounts of zero counts. We propose using zeroinflated models for dealing with this, and evaluate competing models on a Naive Bayes text classification task. Simple zero-inflated models can account for practically relevant variation, and can be easier to work with than overdispersed models. 1 Introduction Linguistic count data often violate the simplistic assumptions of standard probability models like the binomial or Poisson distribution. In particular, the inadequacy of the Poisson distribution for modeling word (token) frequency is well known, and robust alternatives have been proposed (Mosteller and Wallace, 1984; Church and Gale, 1995). In the case of the Poisson, a commonly used robust alternative is the negative binomial distribution (Pawitan, 2001, §4.5), which has the ability to capture extra-Poisson variation in the data, in other words, it is overdispersed compared with the Poisson. When a small set of parameters controls all properties of the distribution it is important to have enough parameters to model the relevant aspects of one’s data. Simple models like the Poisson or binomial do not have enough parameters for many realistic applications, and we suspect that the same might be true of loglinear models. When applying robust models like the negative binomial to linguistic count data like word occurrences in documents, it is natural to ask to what extent the extra-Poisson variation has been captured by the model. Answering that question is our main goal, and we begin by reviewing some of the classic results of Mosteller and Wallace (1984). 2 Word Frequency in Fixed-Length Texts In preparation of their authorship study of The Federalist, Mosteller and Wallace (1984, §2.3) investigated the variation of word frequency across contiguous passages of similar length, drawn from papers of known authorship. The occurrence frequencies of any in papers by Hamilton (op. cit., Table 2.3–3) are repeated here in Figure 1: out of a total of 247 passages there are 125 in which the word any does not occur; it occurs once in 88 passages, twice in 26 passages, etc. Figure 1 also shows the counts predicted by a Poisson distribution with mean 0.67. Visual inspection (“chi by eye”) indicates an acceptable fit between the model and the data, which is confirmed by a χ2 goodness-of-fit test. This demonstrates that certain words seem to be adequately modeled by a Poisson distribution, whose probability mass function is shown in (1): Poisson(λ)(x) = λ x x! 1 expλ (1) 1 26 50 75 100 125 0 1 2 3 4 5 88 7 frequency (number of passages) occurrences of "any" [Hamilton] observed Poisson(0.67) Figure 1: Occurrence counts of any in Hamilton passages: raw counts and counts predicted under a Poisson model. For other words the Poisson distribution gives a much worse fit. Take the occurrences of were in papers by Madison, as shown in Figure 2 (ibid.). 
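The χ2 goodness-of-fit computation used throughout this section can be illustrated with the counts for any quoted above. In the sketch below (ours), the cells for 0, 1 and 2 occurrences follow the text, while the 3-or-more cell is only the remainder and is illustrative, since the full table is not reproduced here.

from math import exp, factorial

lam, total = 0.67, 247
# Observed counts for 0, 1, 2 and 3-or-more occurrences of "any".
observed = [125, 88, 26, 8]

def poisson_pmf(x, lam):
    return lam ** x / factorial(x) * exp(-lam)

expected = [total * poisson_pmf(x, lam) for x in range(3)]
expected.append(total - sum(expected))          # remaining mass for 3+
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
df = len(observed) - 1 - 1                      # one parameter estimated
print(chi2, df)   # a small chi-square value here indicates a good fit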
We calculate the χ2 statistic for the counts expected under a Poisson model for three bins (0, 1, and 2–5, to ensure that the expected counts are greater than 5) and obtain 6.17 at one degree of freedom (number of bins minus number of parameters minus one), which is enough to reject the null hypothesis that the data arose from a Poisson(0.45) distribution. On the other hand, the χ2 statistic for a negative binomial distribution NegBin(0.45,1.17) is only 0.013 for four bins (0, 1, 2, and 3–5), i. e., again 1 degree of freedom, as two parameters were estimated from the data. Now we are very far from rejecting the null hypothesis. This provides some quantitative backing for Mosteller and Wallace’s statement that ‘even the most motherly eye can scarcely make twins of the [Poisson vs. empirical] distributions’ for certain words (op. cit., 31). The probability mass function of the negative binomial distribution, using Mosteller and Wallace’s parameterization, is shown in (2): NegBin(λ,κ)(x) = λ x x! Γ(κ +x) (λ +κ)κ+x κκ Γ(κ) (2) If one recalls that the Gamma function is well behaved and that expλ = lim κ→∞  1+ λ κ κ = lim κ→∞ (λ +κ)κ κκ , it is easy to see that NegBin(λ,κ) converges to Poisson(λ) for λ constant and κ →∞. On the other 5 25 50 75 100 125 150 179 0 1 2 3 4 5 1 18 58 167 frequency (number of passages) occurrences of "were" [Madison] observed Poisson(0.45) NegBin(0.45, 1.17) Figure 2: Occurrence counts of were in Madison passages: raw counts and counts predicted under Poisson and negative binomial models. hand, small values of κ drag the mode of the negative binomial distribution towards zero and increase its variance, compared with the Poisson. As more and more probability mass is concentrated at 0, the negative binomial distribution starts to depart from the empirical distribution. One can already see this tendency in Mosteller and Wallace’s data, although they themselves never comment on it. The problem with a huge chunk of the probability mass at 0 is that one is forced to say that the outcome 1 is still fairly likely and that the probability should drop rapidly from 2 onwards as the term 1/x! starts to exert its influence. This is often at odds with actual data. Take the word his in papers by Hamilton and Madison (ibid., pooled from individual sections of Table 2.3–3). It is intuitively clear that his may not occur at all in texts that deal with certain aspects of the US Constitution, since many aspects of constitutional law are not concerned with any single (male) person. For example, Federalist No. 23 (The Necessity of a Government as Energetic as the One Proposed to the Preservation of the Union, approx. 1800 words, by Hamilton) does not contain a single occurrence of his, whereas Federalist No. 72 (approx. 2000 words, a continuation of No. 71 The Duration in Office of the Executive, also by Hamilton) contains 35 occurrences. The difference is that No. 23 is about the role of a federal government in the abstract, and Nos. 71/72 are about term limits for offices filled by (male) individuals. We might therefore expect the occurrences of his to vary more, de405 200 100 48 26 12 0 1 2 3 71 39 18 frequency (number of passages) occurrences of "his" [Hamilton, Madison] observed NegBin(0.54, 0.15) NegBin(0.76, 0.11) 0.34 NegBin(1.56, 0.89) Figure 3: Occurrence counts of his in Hamilton and Madison passages (NB: y-axis is logarithmic). pending on topic, than any or were. The overall distribution of his is summarized in Figure 3; full details can be found in Table 1. 
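Equation (2) and its Poisson limit can be transcribed directly into code. The sketch below (ours) uses lgamma for numerical stability and the parameter values quoted for were (λ = 0.45, κ = 1.17), together with a very large κ to illustrate the convergence to Poisson(λ).

from math import exp, log, lgamma

def poisson_pmf(x, lam):
    return exp(x * log(lam) - lgamma(x + 1) - lam)

def negbin_pmf(x, lam, kappa):
    # Eq. (2) in the Mosteller-Wallace parameterization: lam is the
    # mean, kappa controls the extra-Poisson variation.
    log_p = (x * log(lam) - lgamma(x + 1)
             + lgamma(kappa + x) - lgamma(kappa)
             + kappa * log(kappa)
             - (kappa + x) * log(lam + kappa))
    return exp(log_p)

# NegBin(0.45, 1.17) versus Poisson(0.45), and the convergence of the
# negative binomial to the Poisson as kappa grows (lam held constant):
print([round(negbin_pmf(x, 0.45, 1.17), 4) for x in range(4)])
print([round(negbin_pmf(x, 0.45, 1e6), 4) for x in range(4)])
print([round(poisson_pmf(x, 0.45), 4) for x in range(4)])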
Observe the huge number of passages with zero occurrences of his, which is ten times the number of passages with exactly one occurrence. Also notice how the negative binomial distribution fitted using the Method of Maximum Likelihood (MLE model, first line in Figure 3, third column in Table 1) overshoots at 1, but underestimates the number of passages with 2 and 3 occurrences. The problem cannot be solved by trying to fit the two parameters of the negative binomial based on the observed counts of two points. The second line in Figure 3 is from a distribution fitted to match the observed counts at 0 and 1. Although it fits those two points perfectly, the overall fit is worse than that of the MLE model, since it underestimates the observed counts at 2 and 3 more heavily. The solution we propose is illustrated by the third line in Figure 3. It accounts for only about a third of the data, but covers all passages with one or more occurrences of his. Visual inspection suggests that it provides a much better fit than the other two models, if we ignore the outcome 0; a quantitative comparison will follow below. This last model has relaxed the relationship between the probability of the outcome 0 and the probabilities of the other outcomes. In particular, we obtain appropriate counts for the outcome 1 by pretending that the outcome 0 occurs only about 71 times, compared with an actual 405 observed occurrences. Recall that the model accounts for only 34% of the data; the remaining NegBin ZINB obsrvd expctd expctd 0 405 403.853 405.000 1 39 48.333 40.207 2 26 21.686 24.206 3 18 12.108 14.868 4 5 7.424 9.223 5–6 9 8.001 9.361 7–14 7 6.996 5.977 χ2 statistic 6.447 2.952 df 4 3 χ2 cumul. prob 0.832 0.601 −logL( ˆθ) 441.585 439.596 Table 1: Occurrence counts of his in Hamilton and Madison passages. counts for the outcome 0 are supplied entirely by a second component whose probability mass is concentrated at zero. The expected counts under the full model are found in the rightmost column of Table 1. The general recipe for models with large counts for the zero outcome is to construe them as twocomponent mixtures, where one component is a degenerate distribution whose entire probability mass is assigned to the outcome 0, and the other component is a standard distribution, call it F(θ). Such a nonstandard mixture model is sometimes known as a ‘modified’ distribution (Johnson and Kotz, 1969, §8.4) or, more perspicuously, as a zero-inflated distribution. The probability mass function of a zeroinflated F distribution is given by equation (3), where 0 ≤z ≤1 (z < 0 may be allowable subject to additional constraints) and x ≡0 is the Kronecker delta δx,0. ZIF(z,θ)(x) = z(x ≡0)+(1−z)F(θ)(x) (3) It corresponds to the following generative process: toss a z-biased coin; if it comes up heads, generate 0; if it comes up tails, generate according to F(θ). If we apply this to word frequency in documents, what this is saying is, informally: whether a given word appears at all in a document is one thing; how often it appears, if it does, is another thing. This is reminiscent of Church’s statement that ‘[t]he first mention of a word obviously depends on frequency, but surprisingly, the second does not.’ (Church, 2000) However, Church was concerned with language modeling, and in particular cache-based models that overcome some of the limitations introduced by a Markov assumption. 
In such a setting it is natural to make a distinction between the first occurrence of a word and subsequent occurrences, which according to Church are influenced by adaptation (Church and Gale, 1995), referring to an increase in a word’s chance of re-occurrence after it has been spotted for the first time. For empirically demonstrating the effects of adaptation, Church (2000) worked with nonparametric methods. By contrast, our focus is on parametric methods, and unlike in language modeling, we are also interested in words that fail to occur in a document, so it is natural for us to distinguish between zero and nonzero occurrences. In Table 1, ZINB refers to the zero-inflated negative binomial distribution, which takes a parameter z in addition to the two parameters of its negative binomial component. Since the negative binomial itself can already accommodate large fractions of the probability mass at 0, we must ask whether the ZINB model fits the data better than a simple negative binomial. The bottom row of Table 1 shows the negative log likelihood of the maximum likelihood estimate ˆθ for each model. Log odds of 2 in favor of ZINB are indeed sufficient (on Akaike’s likelihoodbased information criterion; see e. g. Pawitan 2001, §13.5) to justify the introduction of the additional parameter. Also note that the cumulative χ2 probability of the χ2 statistic at the appropriate degrees of freedom is lower for the zero-inflated distribution. It is clear that a large amount of the observed variation of word occurrences is due to zero inflation, because virtually all words are rare and many words are simply not “on topic” for a given document. Even a seemingly innocent word like his turns out to be “loaded” (and we are not referring to gender issues), since it is not on topic for certain discussions of constitutional law. One can imagine that this effect is even more pronounced for taboo words, proper names, or technical jargon (cf. Church 2000). Our next question is whether the observed variation is best accounted for in terms of zero-inflation or overdispersion. We phrase the discussion in terms of a practical task for which it matters whether a word is on topic for a document. 3 Word Frequency Conditional on Document Length Word occurrence counts play an important role in document classification under an independent feature model (commonly known as “Naive Bayes”). This is not entirely uncontroversial, as many approaches to document classification use binary indicators for the presence and absence of each word, instead of full-fledged occurrence counts (see Lewis 1998 for an overview). In fact, McCallum and Nigam (1998) claim that for small vocabulary sizes one is generally better off using Bernoulli indicator variables; however, for a sufficiently large vocabulary, classification accuracy is higher if one takes word frequency into account. Comparing different probability models in terms of their effects on classification under a Naive Bayes assumption is likely to yield very conservative results, since the Naive Bayes classifier can perform accurate classifications under many kinds of adverse conditions and even when highly inaccurate probability estimates are used (Domingos and Pazzani, 1996; Garg and Roth, 2001). On the other hand, an evaluation in terms of document classification has the advantages, compared with language modeling, of computational simplicity and the ability to benefit from information about non-occurrences of words. 
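Both Equation (3) and the model comparison of Table 1 reduce to a few lines of code. The sketch below is ours: the zero-inflated component uses the mixture suggested by the last line of Figure 3 (a point mass at zero plus 0.34 · NegBin(1.56, 0.89), i.e. z of roughly 0.66), negbin_pmf is the function from the earlier sketch, and the AIC arithmetic uses the negative log-likelihoods reported in Table 1.

def zero_inflated_pmf(x, z, base_pmf):
    # Eq. (3): a z-weighted point mass at zero mixed with a standard
    # distribution F(theta), supplied here as the callable `base_pmf`.
    return z * (1.0 if x == 0 else 0.0) + (1.0 - z) * base_pmf(x)

# Zero-inflated negative binomial in the spirit of Figure 3's last line;
# negbin_pmf is defined in the earlier sketch.
zinb = lambda x: zero_inflated_pmf(x, 0.66, lambda k: negbin_pmf(k, 1.56, 0.89))
print([round(zinb(x), 3) for x in range(4)])

# Model comparison via AIC = 2 * n_params + 2 * neg_loglik, with the
# negative log-likelihoods and parameter counts of Table 1.
aic_negbin = 2 * 2 + 2 * 441.585   # about 887.17
aic_zinb = 2 * 3 + 2 * 439.596     # about 885.19
print(aic_negbin - aic_zinb)
# The log-likelihood difference is about 1.99 (the "log odds of 2"), and
# the AIC difference of about 1.98 favours ZINB despite its extra
# mixing parameter.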
Making a direct comparison of overdispersed and zero-inflated models with those used by McCallum and Nigam (1998) is difficult, since McCallum and Nigam use multivariate models – for which the “naive” independence assumption is different (Lewis, 1998) – that are not as easily extended to the cases we are concerned about. For example, the natural overdispersed variant of the multinomial model is the Dirichlet-multinomial mixture, which adds just a single parameter that globally controls the overall variation of the entire vocabulary. However, Church, Gale and other have demonstrated repeatedly (Church and Gale, 1995; Church, 2000) that adaptation or “burstiness” are clearly properties of individual words (word types). Using joint independent models (one model per word) brings us back into the realm of standard independence assumptions, makes it easy to add parameters that control overdispersion and/or zero-inflation for each word individually, and simplifies parameter estimation. 0 20 40 60 80 100 10 100 1000 10000 100000 classification accuracy (percent) vocabulary size (number of word types) Newsgroups Binomial Bernoulli Figure 4: A comparison of event models for different vocabulary sizes on the Newsgroup data set. So instead of a single multinomial distribution we use independent binomials, and instead of a multivariate Bernoulli model we use independent Bernoulli models for each word. The overall joint model is clearly wrong since it wastes probability mass on events that are known a priori to be impossible, like observing documents for which the sum of the occurrences of each word is greater than the document length. On the other hand, it allows us to take the true document length into account while using only a subset of the vocabulary, whereas on McCallum and Nigam’s approach one has to either completely eliminate all out-of-vocabulary words and adjust the document length accordingly, or else map out-of-vocabulary words to an unknown-word token whose observed counts could then easily dominate. In practice, using joint independent models does not cause problems. We replicated McCallum and Nigam’s Newsgroup experiment1 and did not find any major discrepancies. The reader is encouraged to compare our Figure 4 with McCallum and Nigam’s Figure 3. Not only are the accuracy figures comparable, we also obtained the same critical vocabulary size of 200 words below which the Bernoulli model results in higher classification accuracy. The Newsgroup data set (Lang, 1995) is a strati1Many of the data sets used by McCallum and Nigam (1998) are available at http://www.cs.cmu.edu/~TextLearning/ datasets.html. fied sample of approximately 20,000 messages total, drawn from 20 Usenet newsgroups. The fact that 20 newsgroups are represented in equal proportions makes this data set well suited for comparing different classifiers, as class priors are uniform and baseline accuracy is low at 5%. Like McCallum and Nigam (1998) we used (Rain)bow (McCallum, 1996) for tokenization and to obtain the word/ document count matrix. Even though we followed McCallum and Nigam’s tokenization recipe (skipping message headers, forming words from contiguous alphabetic characters, not using a stemmer), our total vocabulary size of 62,264 does not match McCallum and Nigam’s figure of 62,258, but does come reasonably close. Also following McCallum and Nigam (1998) we performed a 4:1 random split into training and test data. 
The reported results were obtained by training classification models on the training data and evaluating on the unseen test data. We compared four models of token frequency. Each model is conditional on the document length n (but assumes that the parameters of the distribution do not depend on document length), and is derived from the binomial distribution Binom(p)(x | n) = n x  px (1−p)n−x, (4) which we view as a one-parameter conditional model, our first model: x represents the token counts (0 ≤x ≤n); and n is the length of the document measured as the total number of token counts, including out-of-vocabulary items. The second model is the Bernoulli model, which is derived from the binomial distribution by replacing all non-zero counts with 1: Bernoulli(p)(x | n) = Binom(p)  x x+1  |  n n+1  (5) Our third model is an overdispersed binomial model, a “natural” continuous mixture of binomials with the integrated binomial likelihood – i. e. the Beta density (6), whose normalizing term involves the Beta function – as the mixing distribution. Beta(α,β)(p) = pα−1(1−p)β−1 B(α,β) (6) The resulting mixture model (7) is known as the P´olya–Eggenberger distribution (Johnson and Kotz, 1969) or as the beta-binomial distribution. It has been used for a comparatively small range of NLP applications (Lowe, 1999) and certainly deserves more widespread attention. BetaBin(α,β)(x | n) = Z 1 0 Binom(p)(x | n) Beta(α,β)(p) dp = n x B(x+α,n−x+β) B(α,β) (7) As was the case with the negative binomial (which is to the Poisson as the beta-binomial is to the binomial), it is convenient to reparameterize the distribution. We choose a slightly different parameterization than Lowe (1999); we follow Ennis and Bi (1998) and use the identities p = α/(α +β), γ = 1/(α +β +1). To avoid confusion, we will refer to the distribution parameterized in terms of p and γ as BB: BB(p,γ) = BetaBin  p1−γ γ , (1−p)1−γ γ  (8) After reparameterization the expectation and variance are E[x;BB(p,γ)(x | n)] = n p, Var[x;BB(p,γ)(x | n)] = n p (1−p) (1+(n−1)γ). Comparing this with the expectation and variance of the standard binomial model, it is obvious that the beta-binomial has greater variance when γ > 0, and for γ = 0 the beta-binomial distribution coincides with a binomial distribution. Using the method of moments for estimation is particularly straightforward under this parameterization (Ennis and Bi, 1998). Suppose one sample consists of observing x successes in n trials (x occurrences of the target word in a document of length n), where the number of trials may vary across samples. Now we want to estimate parameters based on a sequence of s samples ⟨x1,n1⟩,...,⟨xs,ns⟩. We equate sample moments with distribution moments ∑ i ni ˆp = ∑ i xi, ∑ i ni ˆp (1−ˆp) (1+(ni −1) ˆγ) = ∑ i (xi −ni ˆp)2, and solve for the unknown parameters: ˆp = ∑i xi ∑i ni , (9) ˆγ = ∑i(xi −ni ˆp)2/( ˆp(1−ˆp))−∑i ni ∑i n2 i −∑i ni . (10) In our experience, the resulting estimates are sufficiently close to the maximum likelihood estimates, while method-of-moment estimation is much faster than maximum likelihood estimation, which requires gradient-based numerical optimization2 in this case. Since we estimate parameters for up to 400,000 models (for 20,000 words and 20 classes), we prefer the faster procedure. Note that the maximum likelihood estimates may be suboptimal (Lowe, 1999), but full-fledged Bayesian methods (Lee and Lio, 1997) would require even more computational resources. 
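Equations (9) and (10) translate directly into the following method-of-moments estimator (our sketch; degenerate cases such as words that never occur, for which p̂(1 − p̂) = 0, are not handled).

def bb_method_of_moments(samples):
    # `samples` is a list of (x_i, n_i) pairs: x_i occurrences of the
    # target word in a document of n_i tokens (Eqs. 9 and 10).
    sum_x = sum(x for x, _ in samples)
    sum_n = sum(n for _, n in samples)
    sum_n2 = sum(n * n for _, n in samples)
    p = sum_x / sum_n
    resid = sum((x - n * p) ** 2 for x, n in samples)
    gamma = (resid / (p * (1.0 - p)) - sum_n) / (sum_n2 - sum_n)
    return p, gamma

# Hypothetical documents: (occurrences of the word, document length).
print(bb_method_of_moments([(0, 120), (3, 150), (0, 90), (7, 200)]))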
The fourth and final model is a zero-inflated binomial distribution, which is derived straightforwardly via equation (3): ZIBinom(z, p)(x | n) = z(x ≡0)+(1−z)Binom(p)(x | n) =    z+(1−z)(1−p)n if x = 0 (1−z) n x  px (1−p)n−x if x > 0 (11) Since the one parameter p of a single binomial model can be estimated directly using equation (9), maximum likelihood estimation for the zero-inflated binomial model is straightforward via the EM algorithm for finite mixture models. Figure 5 shows pseudo-code for a single EM update. Accuracy results of Naive Bayes document classification using each of the four word frequency models are shown in Table 2. One can observe that the differences between the binomial models are small, 2Not that there is anything wrong with that. In fact, we calculated the MLE estimates for the negative binomial models using a multidimensional quasi-Newton algorithm. 1: Z ←0; X ←0; N ←0 2: {E step} 3: for i ←1 to s do 4: if xi = 0 then 5: ˆzi ←z/(z+(1−p)ni) 6: Z ←Z + ˆzi 7: X ←X +(1−ˆzi)xi 8: N ←X +(1−ˆzi)ni 9: else {xi ̸= 0, ˆzi = 0} 10: X ←X +xi 11: N ←N +ni 12: end if 13: end for 14: {M step} 15: z ←Z/s 16: p ←X/N Figure 5: Maximum likelihood estimation of ZIBinom parameters z and p: Pseudo-code for a single EM iteration that updates the two parameters. but even small effects can be significant on a test set of about 4,000 messages. More importantly, note that the beta-binomial and zero-inflated binomial models outperform both the simple binomial and the Bernoulli, except on unrealistically small vocabularies (intuitively, 20 words are hardly adequate for discriminating between 20 newsgroups, and those words would have to be selected much more carefully). In light of this we can revise McCallum and Nigam’s McCallum and Nigam (1998) recommendation to use the Bernoulli distribution for small vocabularies. Instead we recommend that neither the Bernoulli nor the binomial distributions should be used, since in all reasonable cases they are outperformed by the more robust variants of the binomial distribution. (The case of a 20,000 word vocabulary is quickly declared unreasonable, since most of the words occur precisely once in the training data, and so any parameter estimate is bound to be unreliable.) We want to know whether the differences between the three binomial models could be dismissed as a chance occurrence. The McNemar test (Dietterich, 1998) provides appropriate answers, which are summarized in Table 3. As we can see, the classification results under the zero-inflated binomial and beta-binomial models are never significantly differBernoulli Binom ZIBinom BetaBin 20 30.94 28.19 29.48 29.93 50 45.28 44.04 44.85 45.15 100 53.36 52.57 53.84 54.16 200 59.72 60.15 60.47 61.16 500 66.58 68.30 67.95 68.58 1,000 69.31 72.24 72.46 73.20 2,000 71.45 75.92 76.35 77.03 5,000 73.80 80.64 80.51 80.19 10,000 74.18 82.61 82.58 82.58 20,000 74.05 83.70 83.06 83.06 Table 2: Accuracy of the four models on the Newsgroup data set for different vocabulary sizes. Binom Binom ZIBinom ZIBinom BetaBin BetaBin 20   50   100   200  500 1,000  2,000  5,000 10,000 20,000  Table 3: Pairwise McNemar test results. A  indicates a significant difference of the classification results when comparing a pair of of models. ent, in most cases not even approaching significance at the 5% level. 
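Returning to the estimation of the zero-inflated binomial: the update of Figure 5 can be transcribed into Python roughly as follows. This is our reading of the pseudo-code, with the E step using the full posterior z/(z + (1 − z)(1 − p)^n) and the count accumulator N updated in both branches; in practice the update is iterated from some initial z and p until the parameters stop changing.

def zibinom_em_update(samples, z, p):
    # One EM iteration for the zero-inflated binomial of Eq. (11).
    # `samples` is a list of (x_i, n_i) pairs; z and p are the current
    # parameter values, and the updated values are returned.
    Z = X = N = 0.0
    for x, n in samples:
        if x == 0:
            # E step: posterior probability that this zero count was
            # generated by the point mass at zero.
            z_i = z / (z + (1.0 - z) * (1.0 - p) ** n)
            Z += z_i
            N += (1.0 - z_i) * n      # x contributes nothing when x = 0
        else:
            X += x
            N += n
    # M step (assumes at least some effective counts, i.e. N > 0).
    return Z / len(samples), X / N

A natural initialization is z = 0.5 together with the plain binomial estimate of Equation (9) for p.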
A classifier based on the betabinomial model is significantly different from one based on the binomial model; the difference for a vocabulary of 20,000 words is marginally significant (the χ2 value of 3.8658 barely exceeds the critical value of 3.8416 required for significance at the 5% level). Classification based on the zero-inflated binomial distribution differs most from using a standard binomial model. We conclude that the zeroinflated binomial distribution captures the relevant extra-binomial variation just as well as the overdispersed beta-binomial distribution, since their classification results are never significantly different. The differences between the four models can be seen more visually clearly on the WebKB data set 70 75 80 85 90 20k 10k 5k 2k 1k 500 200 100 50 20 classification accuracy (percent) vocabulary size (number of word types) WebKB 4 Bernoulli Binomial ZIBinom BetaBin Figure 6: Accuracy of the four models on the WebKB data set as a function of vocabulary size. (McCallum and Nigam, 1998, Figure 4). Evaluation results for Naive Bayes text classification using the four models are displayed in Figure 6. The zeroinflated binomial model provides the overall highest classification accuracy, and clearly dominates the beta-binomial model. Either one should be preferred over the simple binomial model. The early peak and rapid decline of the Bernoulli model had already been observed by McCallum and Nigam (1998). We recommend that the zero-inflated binomial distribution should always be tried first, unless there is substantial empirical or prior evidence against it: the zero-inflated binomial model is computationally attractive (maximum likelihood estimation using EM is straightforward and numerically stable, most gradient-based methods are not), and its z parameter is independently meaningful, as it can be interpreted as the degree to which a given word is “on topic” for a given class of documents. 4 Conclusion We have presented theoretical and empirical evidence for zero-inflation among linguistic count data. Zero-inflated models can account for increased variation at least as well as overdispersed models on standard document classification tasks. Given the computational advantages of simple zero-inflated models, they can and should be used in place of standard models. For document classification, an event model based on a zero-inflated binomial distribution outperforms conventional Bernoulli and binomial models. Acknowledgements Thanks to Chris Brew and three anonymous reviewers for valuable feedback. Cue the usual disclaimers. References Kenneth W. Church. 2000. Empirical estimates of adaptation: The chance of two Noriegas is closer to p/2 than p2. In 18th International Conference on Computational Linguistics, pages 180–186. ACL Anthology C00-1027. Kenneth W. Church and William A. Gale. 1995. Poisson mixtures. Natural Language Engineering, 1:163–190. Thomas G. Dietterich. 1998. Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation, 10:1895–1924. Pedro Domingos and Michael J. Pazzani. 1996. Beyond independence: Conditions for the optimality of the simple Bayesian classifier. In 13th International Conference on Machine Learning, pages 105–112. Daniel M. Ennis and Jian Bi. 1998. The beta-binomial model: Accounting for inter-trial variation in replicated difference and preference tests. Journal of Sensory Studies, 13:389– 412. Ashutosh Garg and Dan Roth. 2001. Understanding probabilistic classifiers. 
In 12th European Conference on Machine Learning, pages 179–191. Norman L. Johnson and Samuel Kotz. 1969. Discrete Distributions, volume 1. Wiley, New York, NY, first edition. Ken Lang. 1995. Newsweeder: Learning to filter netnews. In 12th International Conference on Machine Learning, pages 331–339. Jack C. Lee and Y. L. Lio. 1997. A note on Bayesian estimation and prediction for the beta-binomial model. Journal of Statistical Computation and Simulation, 63:73–91. David D. Lewis. 1998. Naive (Bayes) at forty: The independence assumption in information retrieval. In 10th European Conference on Machine Learning, pages 4–15. Stephen A. Lowe. 1999. The beta-binomial mixture model for word frequencies in documents with applications to information retrieval. In 6th European Conference on Speech Communication and Technology, pages 2443–2446. Andrew McCallum and Kamal Nigam. 1998. A comparison of event models for naive Bayes text classification. In AAAI Workshop on Learning for Text Categorization, pages 41–48. Andrew Kachites McCallum. 1996. Bow: A toolkit for statistical language modeling, text retrieval, classification and clustering. http://www.cs.cmu.edu/~mccallum/bow/. Frederick Mosteller and David L. Wallace. 1984. Applied Bayesian and Classical Inference: The Case of The Federalist Papers. Springer, New York, NY, second edition. Yudi Pawitan. 2001. In All Likelihood: Statistical Modelling and Inference Using Likelihood. Oxford University Press, New York, NY.
Self-Organizing Markov Models and Their Application to Part-of-Speech Tagging Jin-Dong Kim Dept. of Computer Science University of Tokyo [email protected] Hae-Chang Rim Dept. of Computer Science Korea University [email protected] Jun’ich Tsujii Dept. of Computer Science University of Tokyo, and CREST, JST [email protected] Abstract This paper presents a method to develop a class of variable memory Markov models that have higher memory capacity than traditional (uniform memory) Markov models. The structure of the variable memory models is induced from a manually annotated corpus through a decision tree learning algorithm. A series of comparative experiments show the resulting models outperform uniform memory Markov models in a part-of-speech tagging task. 1 Introduction Many major NLP tasks can be regarded as problems of finding an optimal valuation for random processes. For example, for a given word sequence, part-of-speech (POS) tagging involves finding an optimal sequence of syntactic classes, and NP chunking involves finding IOB tag sequences (each of which represents the inside, outside and beginning of noun phrases respectively). Many machine learning techniques have been developed to tackle such random process tasks, which include Hidden Markov Models (HMMs) (Rabiner, 1989), Maximum Entropy Models (MEs) (Ratnaparkhi, 1996), Support Vector Machines (SVMs) (Vapnik, 1998), etc. Among them, SVMs have high memory capacity and show high performance, especially when the target classification requires the consideration of various features. On the other hand, HMMs have low memory capacity but they work very well, especially when the target task involves a series of classifications that are tightly related to each other and requires global optimization of them. As for POS tagging, recent comparisons (Brants, 2000; Schr¨oder, 2001) show that HMMs work better than other models when they are combined with good smoothing techniques and with handling of unknown words. While global optimization is the strong point of HMMs, developers often complain that it is difficult to make HMMs incorporate various features and to improve them beyond given performances. For example, we often find that in some cases a certain lexical context can improve the performance of an HMM-based POS tagger, but incorporating such additional features is not easy and it may even degrade the overall performance. Because Markov models have the structure of tightly coupled states, an arbitrary change without elaborate consideration can spoil the overall structure. This paper presents a way of utilizing statistical decision trees to systematically raise the memory capacity of Markov models and effectively to make Markov models be able to accommodate various features. 2 Underlying Model The tagging model is probabilistically defined as finding the most probable tag sequence when a word sequence is given (equation (1)). T(w1,k) = arg max t1,k P(t1,k|w1,k) (1) = arg max t1,k P(t1,k)P(w1,k|t1,k) (2) ≈ arg max t1,k k Y i=1 P(ti|ti−1)P(wi|ti) (3) By applying Bayes’ formula and eliminating a redundant term not affecting the argument maximization, we can obtain equation (2) which is a combination of two separate models: the tag language model, P(t1,k) and the tag-to-word translation model, P(w1,k|t1,k). Because the number of word sequences, w1,k and tag sequences, t1,k is infinite, the model of equation (2) is not computationally tractable. 
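Equation (3) is the familiar first-order hidden Markov model, and the maximization over tag sequences can be carried out with the Viterbi algorithm. The sketch below is a generic illustration, not the authors' implementation: the probability tables are assumed to be given as dictionaries, and smoothing and unknown-word handling are omitted.

def viterbi(words, tags, trans, emit, init="$"):
    # trans[(prev_tag, tag)] approximates P(tag | prev_tag);
    # emit[(tag, word)] approximates P(word | tag);
    # "$" marks sentence initialization, as in the paper's figures.
    best = {init: (1.0, [])}
    for w in words:
        new_best = {}
        for t in tags:
            cands = [(p * trans.get((prev, t), 0.0) * emit.get((t, w), 0.0),
                      hist + [t])
                     for prev, (p, hist) in best.items()]
            new_best[t] = max(cands, key=lambda c: c[0])
        best = new_best
    return max(best.values(), key=lambda c: c[0])[1]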
Introduction of Markov assumption reduces the complexity of the tag language model and independent assumption between words makes the tag-to-word translation model simple, which result in equation (3) representing the well-known Hidden Markov Model. 3 Effect of Context Classification Let’s focus on the Markov assumption which is made to reduce the complexity of the original tagging problem and to make the tagging problem tractable. We can imagine the following process through which the Markov assumption can be introduced in terms of context classification: P(T = t1,k) = k Y i=1 P(ti|t1,i−1) (4) ≈ k Y i=1 P(ti|Φ(t1,i−1)) (5) ≈ k Y i=1 P(ti|ti−1) (6) In equation (5), a classification function Φ(t1,i−1) is introduced, which is a mapping of infinite contextual patterns into a set of finite equivalence classes. By defining the function as follows we can get equation (6) which represents a widely-used bi-gram model: Φ(t1,i−1) ≡ti−1 (7) Equation (7) classifies all the contextual patterns ending in same tags into the same classes, and is equivalent to the Markov assumption. The assumption or the definition of the above classification function is based on human intuition. ( ) conj P | ∗ ( ) conj fw P , | ∗ ( ) conj vb P , | ∗ ( ) conj vbp P , | ∗ vb vb vbp vbp Figure 1: Effect of 1’st and 2’nd order context at at prep prep nn nn ( ) prep P | ∗ ( ) in' ', | prep P ∗ ( ) with' ', | prep P ∗ ( ) out' ', | prep P ∗ Figure 2: Effect of context with and without lexical information Although this simple definition works well mostly, because it is not based on any intensive analysis of real data, there is room for improvement. Figure 1 and 2 illustrate the effect of context classification on the compiled distribution of syntactic classes, which we believe provides the clue to the improvement. Among the four distributions showed in Figure 1, the top one illustrates the distribution of syntactic classes in the Brown corpus that appear after all the conjunctions. In this case, we can say that we are considering the first order context (the immediately preceding words in terms of part-of-speech). The following three ones illustrates the distributions collected after taking the second order context into consideration. In these cases, we can say that we have extended the context into second order or we have classified the first order context classes again into second order context classes. It shows that distributions like P(∗|vb, conj) and P(∗|vbp, conj) are very different from the first order ones, while distributions like P(∗|fw, conj) are not. Figure 2 shows another way of context extension, so called lexicalization. Here, the initial first order context class (the top one) is classified again by referring the lexical information (the following three ones). We see that the distribution after the preposition, out is quite different from distribution after other prepositions. From the above observations, we can see that by applying Markov assumptions we may miss much useful contextual information, or by getting a better context classification we can build a better context model. 4 Related Works One of the straightforward ways of context extension is extending context uniformly. Tri-gram tagging models can be thought of as a result of the uniform extension of context from bi-gram tagging models. TnT (Brants, 2000) based on a second order HMM, is an example of this class of models and is accepted as one of the best part-of-speech taggers used around. 
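The distributions compared in Figures 1 and 2 can be compiled from a tagged corpus once a classification function Φ is fixed; the following sketch (ours) makes the role of Φ explicit.

from collections import Counter, defaultdict

def context_distributions(corpus, context_fn):
    # corpus: list of sentences, each a list of (word, tag) pairs.
    # context_fn plays the role of Phi, mapping the history to a
    # context class, e.g.
    #   lambda h: h[-1][1]                 (first order, Eq. 7)
    #   lambda h: (h[-2][1], h[-1][1])     (second order)
    #   lambda h: (h[-1][1], h[-1][0])     (lexicalized first order)
    dists = defaultdict(Counter)
    for sent in corpus:
        history = [("<s>", "$"), ("<s>", "$")]
        for word, tag in sent:
            dists[context_fn(history)][tag] += 1
            history.append((word, tag))
    return dists

# Comparing P(* | conj) with P(* | vb, conj) then amounts to comparing
# the Counter for "conj" under the first classification with the
# Counter for ("vb", "conj") under the second.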
The uniform extension can be achieved (relatively) easily, but due to the exponential growth of the model size, it can only be performed in restrictive a way. Another way of context extension is the selective extension of context. In the case of context extension from lower context to higher like the examples in figure 1, the extension involves taking more information about the same type of contextual features. We call this kind of extension homogeneous context extension. (Brants, 1998) presents this type of context extension method through model merging and splitting, and also prediction suffix tree learning (Sch¨utze and Singer, 1994; D. Ron et. al, 1996) is another well-known method that can perform homogeneous context extension. On the other hand, figure 2 illustrates heterogeneous context extension, in other words, this type of extension involves taking more information about other types of contextual features. (Kim et. al, 1999) and (Pla and Molina, 2001) present this type of context extension method, so called selective lexicalization. The selective extension can be a good alternative to the uniform extension, because the growth rate of the model size is much smaller, and thus various contextual features can be exploited. In the followV V PP N N C C $$ $$ C C N N PP V V P-1 P-1 $ C N P V Figure 3: a Markov model and its equivalent decision tree ing sections, we describe a novel method of selective extension of context which performs both homogeneous and heterogeneous extension simultaneously. 5 Self-Organizing Markov Models Our approach to the selective context extension is making use of the statistical decision tree framework. The states of Markov models are represented in statistical decision trees, and by growing the trees the context can be extended (or the states can be split). We have named the resulting models SelfOrganizing Markov Models to reflect their ability to automatically organize the structure. 5.1 Statistical Decision Tree Representation of Markov Models The decision tree is a well known structure that is widely used for classification tasks. When there are several contextual features relating to the classification of a target feature, a decision tree organizes the features as the internal nodes in a manner where more informative features will take higher levels, so the most informative feature will be the root node. Each path from the root node to a leaf node represents a context class and the classification information for the target feature in the context class will be contained in the leaf node1. In the case of part-of-speech tagging, a classification will be made at each position (or time) of a word sequence, where the target feature is the syntactic class of the word at current position (or time) and the contextual features may include the syntactic 1While ordinary decision trees store deterministic classification information in their leaves, statistical decision trees store probabilistic distribution of possible decisions. V V P,* P,* N N C C $$ $$ C C N N W-1 W-1 V V P-1 P-1 $ C N P V P,out P,out P,* P,* P,out P,out Figure 4: a selectively lexicalized Markov model and its equivalent decision tree V V P,* P,* N N (N)C (N)C $$ $$ P-2 P-2 N N W-1 W-1 V V P-1 P-1 $ C N P V P,out P,out P,* P,* P,out P,out (V)C (V)C (*)C (*)C (*)C (*)C (N)C (N)C (V)C (V)C Figure 5: a selectively extended Markov model and its equivalent decision tree classes or the lexical form of preceding words. 
Figure 3 shows an example of Markov model for a simple language having nouns (N), conjunctions (C), prepositions (P) and verbs (V). The dollar sign ($) represents sentence initialization. On the left hand side is the graph representation of the Markov model and on the right hand side is the decision tree representation, where the test for the immediately preceding syntactic class (represented by P-1) is placed on the root, each branch represents a result of the test (which is labeled on the arc), and the corresponding leaf node contains the probabilistic distribution of the syntactic classes for the current position2. The example shown in figure 4 involves a further classification of context. On the left hand side, it is represented in terms of state splitting, while on the right hand side in terms of context extension (lexicalization), where a context class representing contextual patterns ending in P (a preposition) is extended by referring the lexical form and is classified again into the preposition, out and other prepositions. Figure 5 shows another further classification of 2The distribution doesn’t appear in the figure explicitly. Just imagine each leaf node has the distribution for the target feature in the corresponding context. context. It involves a homogeneous extension of context while the previous one involves a heterogeneous extension. Unlike prediction suffix trees which grow along an implicitly fixed order, decision trees don’t presume any implicit order between contextual features and thus naturally can accommodate various features having no underlying order. In order for a statistical decision tree to be a Markov model, it must meet the following restrictions: • There must exist at least one contextual feature that is homogeneous with the target feature. • When the target feature at a certain time is classified, all the requiring context features must be visible The first restriction states that in order to be a Markov model, there must be inter-relations between the target features at different time. The second restriction explicitly states that in order for the decision tree to be able to classify contextual patterns, all the context features must be visible, and implicitly states that homogeneous context features that appear later than the current target feature cannot be contextual features. Due to the second restriction, the Viterbi algorithm can be used with the self-organizing Markov models to find an optimal sequence of tags for a given word sequence. 5.2 Learning Self-Organizing Markov Models Self-organizing Markov models can be induced from manually annotated corpora through the SDTL algorithm (algorithm 1) we have designed. It is a variation of ID3 algorithm (Quinlan, 1986). SDTL is a greedy algorithm where at each time of the node making phase the most informative feature is selected (line 2), and it is a recursive algorithm in the sense that the algorithm is called recursively to make child nodes (line 3), Though theoretically any statistical decision tree growing algorithms can be used to train selforganizing Markov models, there are practical problems we face when we try to apply the algorithms to language learning problems. One of the main obstacles is the fact that features used for language learning often have huge sets of values, which cause intensive fragmentation of the training corpus along with the growing process and eventually raise the sparse data problem. 
To deal with this problem, the algorithm incorporates a value selection mechanism (line 1) where only meaningful values are selected into a reduced value set. The meaningful values are statistically defined as follows: if the distribution of the target feature varies significantly by referring to the value v, v is accepted as a meaningful value. We adopted the χ2-test to determine the difference between the distributions of the target feature before and after referring to the value v. The use of χ2-test enables us to make a principled decision about the threshold based on a certain confidence level3. To evaluate the contribution of contextual features to the target classification (line 2), we adopted Lopez distance (L´opez, 1991). While other measures including Information Gain or Gain Ratio (Quinlan, 1986) also can be used for this purpose, the Lopez distance has been reported to yield slightly better results (L´opez, 1998). The probabilistic distribution of the target feature estimated on a node making phase (line 4) is smoothed by using Jelinek and Mercer’s interpolation method (Jelinek and Mercer, 1980) along the ancestor nodes. The interpolation parameters are estimated by deleted interpolation algorithm introduced in (Brants, 2000). 6 Experiments We performed a series of experiments to compare the performance of self-organizing Markov models with traditional Markov models. Wall Street Journal as contained in Penn Treebank II is used as the reference material. As the experimental task is partof-speech tagging, all other annotations like syntactic bracketing have been removed from the corpus. Every figure (digit) in the corpus has been changed into a special symbol. From the whole corpus, every 10’th sentence from the first is selected into the test corpus, and the remaining ones constitute the training corpus. Table 6 shows some basic statistics of the corpora. We implemented several tagging models based on equation (3). For the tag language model, we used 3We used 95% of confidence level to extend context. In other words, only when there are enough evidences for improvement at 95% of confidence level, a context is extended. Algorithm 1: SDTL(E, t, F) Data : E: set of examples, t: target feature, F: set of contextual features Result : Statistical Decision Tree predicting t initialize a null node; for each element f in the set F do 1 sort meaningful value set V for f ; if |V | > 1 then 2 measure the contribution of f to t; if f contributes the most then select f as the best feature b; end end end if there is b selected then set the current node to an internal node; set b as the test feature of the current node; 3 for each v in |V | for b do make SDTL(Eb=v, t, F −{b}) as the subtree for the branch corresponding to v; end end else set the current node to a leaf node; 4 store the probability distribution of t over E ; end return current node; 1,289,201 68,590 Total 129,100 6,859 Test 1,160,101 61,731 Training                     1,289,201 68,590 Total 129,100 6,859 Test 1,160,101 61,731 Training                     Figure 6: Basic statistics of corpora the following 6 approximations: P(t1,k) ≈ k Y i=1 P(ti|ti−1) (8) ≈ k Y i=1 P(ti|ti−2,i−1) (9) ≈ k Y i=1 P(ti|Φ(ti−2,i−1)) (10) ≈ k Y i=1 P(ti|Φ(ti−1, wi−1)) (11) ≈ k Y i=1 P(ti|Φ(ti−2,i−1, wi−1)) (12) ≈ k Y i=1 P(ti|Φ(ti−2,i−1, wi−2,i−1))(13) Equation (8) and (9) represent first- and secondorder Markov models respectively. 
Equation (10) ∼(13) represent self-organizing Markov models at various settings where the classification functions Φ(•) are intended to be induced from the training corpus. For the estimation of the tag-to-word translation model we used the following model: P(wi|ti) = ki × P(ki|ti) × ˆP(wi|ti) +(1 −ki) × P(¬ki|ti) × ˆP(ei|ti) (14) Equation (14) uses two different models to estimate the translation model. If the word, wi is a known word, ki is set to 1 so the second model is ignored. ˆP means the maximum likelihood probability. P(ki|ti) is the probability of knownness generated from ti and is estimated by using Good-Turing estimation (Gale and Samson, 1995). If the word, wi is an unknown word, ki is set to 0 and the first term is ignored. ei represents suffix of wi and we used the last two letters for it. With the 6 tag language models and the 1 tag-toword translation model, we construct 6 HMM models, among them 2 are traditional first- and secondhidden Markov models, and 4 are self-organizing hidden Markov models. Additionally, we used T3, a tri-gram-based POS tagger in ICOPOST release 1.8.3 for comparison. The overall performances of the resulting models estimated from the test corpus are listed in figure 7. From the leftmost column, it shows the model name, the contextual features, the target features, the performance and the model size of our 6 implementations of Markov models and additionally the performance of T3 is shown. Our implementation of the second-order hidden Markov model (HMM-P2) achieved a slightly worse performance than T3, which, we are interpreting, is due to the relatively simple implementation of our unknown word guessing module4. While HMM-P2 is a uniformly extended model from HMM-P1, SOHMM-P2 has been selectively extended using the same contextual feature. It is encouraging that the self-organizing model suppress the increase of the model size in half (2,099Kbyte vs 5,630Kbyte) without loss of performance (96.5%). In a sense, the results of incorporating word features (SOHMM-P1W1, SOHMM-P2W1 and SOHMM-P2W2) are disappointing. The improvements of performances are very small compared to the increase of the model size. Our interpretation for the results is that because the distribution of words is huge, no matter how many words the models incorporate into context modeling, only a few of them may actually contribute during test phase. We are planning to use more general features like word class, suffix, etc. Another positive observation is that a homogeneous context extension (SOHMM-P2) and a heterogeneous context extension (SOHMM-P1W1) yielded significant improvements respectively, and the combination (SOHMM-P2W1) yielded even more improvement. This is a strong point of using decision trees rather than prediction suffix trees. 7 Conclusion Through this paper, we have presented a framework of self-organizing Markov model learning. The experimental results showed some encouraging aspects of the framework and at the same time showed the direction towards further improvements. Because all the Markov models are represented as decision trees in the framework, the models are hu4T3 uses a suffix trie for unknown word guessing, while our implementations use just last two letters. 
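Equation (14) switches between a known-word model and a suffix-based unknown-word model; a minimal sketch (ours) is given below, with the probability tables assumed to be estimated beforehand (Good-Turing for P(k|t), maximum likelihood for the word and suffix models).

def word_given_tag(word, tag, known_vocab, p_known, p_word, p_suffix):
    # Eq. (14): for a known word use P(known | tag) * P_hat(word | tag),
    # otherwise use P(unknown | tag) * P_hat(suffix | tag), with the
    # suffix taken to be the last two letters as in the paper.
    if word in known_vocab:
        return p_known.get(tag, 0.0) * p_word.get((word, tag), 0.0)
    return (1.0 - p_known.get(tag, 0.0)) * p_suffix.get((word[-2:], tag), 0.0)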
• 96.6 • • T3 96.9 96.8 96.3 96.5 96.5 95.6                        24,628K T0 P-2, W-1, P-1 SOHMM-P2W1 W-2, P-2, W-1, P-1 W-1, P-1 P-2, P-1 P-2, P-1 P-1 T0 T0 T0 T0 T0 14,247K SOHMM-P1W1 35,494K 2,099K 5,630K 123K SOHMM-P2 SOHMM-P2W2 HMM-P2 HMM-P1 • 96.6 • • T3 96.9 96.8 96.3 96.5 96.5 95.6                        24,628K T0 P-2, W-1, P-1 SOHMM-P2W1 W-2, P-2, W-1, P-1 W-1, P-1 P-2, P-1 P-2, P-1 P-1 T0 T0 T0 T0 T0 14,247K SOHMM-P1W1 35,494K 2,099K 5,630K 123K SOHMM-P2 SOHMM-P2W2 HMM-P2 HMM-P1 Figure 7: Estimated Performance of Various Models man readable and we are planning to develop editing tools for self-organizing Markov models that help experts to put human knowledge about language into the models. By adopting χ2-test as the criterion for potential improvement, we can control the degree of context extension based on the confidence level. Acknowledgement The research is partially supported by Information Mobility Project (CREST, JST, Japan) and Genome Information Science Project (MEXT, Japan). References L. Rabiner. 1989. A tutorial on Hidden Markov Models and selected applications in speech recognition. in Proceedings of the IEEE, 77(2):257–285 A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). V. Vapnik. 1998. Statistical Learning Theory. Wiley, Chichester, UK. I. Schr¨oder. 2001. ICOPOST - Ingo’s Collection Of POS Taggers. In http://nats-www.informatik.unihamburg.de/∼ingo/icopost/. T. Brants. 1998 Estimating HMM Topologies. In The Tbilisi Symposium on Logic, Language and Computation: Selected Papers. T. Brants. 2000 TnT - A Statistical Part-of-Speech Tagger. In 6’th Applied Natural Language Processing. H. Sch¨utze and Y. Singer. 1994. Part-of-speech tagging using a variable memory Markov model. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). D. Ron, Y. Singer and N. Tishby. 1996 The Power of Amnesia: Learning Probabilistic Automata with Variable Memory Length. In Machine Learning, 25(23):117–149. J.-D. Kim, S.-Z. Lee and H.-C. Rim. 1999 HMM Specialization with Selective Lexicalization. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in NLP and Very Large Corpora(EMNLP/VLC99). F. Pla and A. Molina. 2001 Part-of-Speech Tagging with Lexicalized HMM. In Proceedings of the International Conference on Recent Advances in Natural Language Processing(RANLP2001). R. Quinlan. 1986 Induction of decision trees. In Machine Learning, 1(1):81–106. R. L´opez de M´antaras. 1991. A Distance-Based Attribute Selection Measure for Decision Tree Induction. In Machine Learning, 6(1):81–92. R. L´opez de M´antaras, J. Cerquides and P. Garcia. 1998. Comparing Information-theoretic Attribute Selection Measures: A statistical approach. In Artificial Intelligence Communications, 11(2):91–100. F. Jelinek and R. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice. W. Gale and G. Sampson. 1995. Good-Turing frequency estimatin without tears. In Jounal of Quantitative Linguistics, 2:217–237
2003
38
Chunk-based Statistical Translation Taro Watanabe†, Eiichiro Sumita† and Hiroshi G. Okuno‡ {taro.watanabe, eiichiro.sumita}@atr.co.jp † ATR Spoken Language Translation ‡Department of Intelligence Science Research Laboratories and Technology 2-2-2 Hikaridai, Keihanna Science City Graduate School of Informatics, Kyoto Uniersity Kyoto 619-0288 JAPAN Kyoto 606-8501 JAPAN Abstract This paper describes an alternative translation model based on a text chunk under the framework of statistical machine translation. The translation model suggested here first performs chunking. Then, each word in a chunk is translated. Finally, translated chunks are reordered. Under this scenario of translation modeling, we have experimented on a broadcoverage Japanese-English traveling corpus and achieved improved performance. 1 Introduction The framework of statistical machine translation formulates the problem of translating a source sentence in a language J into a target language E as the maximization problem of the conditional probability ˆE = argmaxE P(E|J). The application of the Bayes Rule resulted in ˆE = argmaxE P(E)P(J|E). The former term P(E) is called a language model, representing the likelihood of E. The latter term P(J|E) is called a translation model, representing the generation probability from E into J. As an implementation of P(J|E), the word alignment based statistical translation (Brown et al., 1993) has been successfully applied to similar language pairs, such as French–English and German– English, but not to drastically different ones, such as Japanese–English. This failure has been due to the limited representation by word alignment and the weak model structure for handling complicated word correspondence. This paper provides a chunk-based statistical translation as an alternative to the word alignment based statistical translation. The translation process inside the translation model is structured as follows. A source sentence is first chunked, and then each chunk is translated into target language with local word alignments. Next, translated chunks are reordered to match the target language constraints. Based on this scenario, the chunk-based statistical translation model is structured with several components and trained by a variation of the EMalgorithm. A translation experiment was carried out with a decoder based on the left-to-right beam search. It was observed that the translation quality improved from 46.5% to 52.1% in BLEU score and from 59.2% to 65.1% in subjective evaluation. The next section briefly reviews the word alignment based statistical machine translation (Brown et al., 1993). Section 3 discusses an alternative approach, a chunk-based translation model, ranging from its structure to training procedure and decoding algorithm. Then, Section 4 provides experimental results on Japanese-to-English translation in the traveling domain, followed by discussion. 2 Word Alignment Based Statistical Translation Word alignment based statistical translation represents bilingual correspondence by the notion of word alignment A, allowing one-to-many generation from each source word. Figure 1 illustrates an example of English and Japanese sentences, E and J, with sample word alignments. In this example, “show1” has generated two words, “mise5” and “tekudasai6”. 
E = NULL0 show1 me2 the3 one4 in5 the6 window7 J = uindo1 no2 shinamono3 o4 mise5 tekudasai6 A = ( 7 0 4 0 1 1 ) Figure 1: Example of word alignment Under this word alignment assumption, the translation model P(J|E) can be further decomposed without approximation. P(J|E) =  A P(J, A|E) 2.1 IBM Model During the generation process from E to J, P(J, A|E) is assumed to be structured with a couple of processes, such as insertion, deletion and reorder. A scenario for the word alignment based translation model defined by Brown et al. (1993), for instance IBM Model 4, goes as follows (refer to Figure 2). 1. Choose the number of words to generate for each source word according to the Fertility Model. For example, “show” was increased to 2 words, while “me” was deleted. 2. Insert NULLs at appropriate positions by the NULL Generation Model. Two NULLs were inserted after each “show” in Figure 2. 3. Translate word-by-word for each generated word by looking up the Lexicon Model. One of the two “show” words was translated to “mise.” 4. Reorder the translated words by referring to the Distortion Model. The word “mise” was reordered to the 5th position, and “uindo” was reordered to the 1st position. Positioning is determined by the previous word’s alignment to capture phrasal constraints. For the meanings of each symbol in each model, refer to Brown et al. (1993). 2.2 Problems of Word Alignment Based Translation Model The strategy for the word alignment based translation model is to translate each word by generating multiple single words (a bag of words) and to determine the position of each translated word. Although show1 show show mise uindo1 me2 show NULL no no2 the3 one show tekudasai shinamono3 one4 window NULL o o4 in5 one shinamono mise5 the6 window uindo tekudasai6 window7 n(2|E1) n(0|E2) n(0|E3) ... Fertility 4 2  p4−2 0 p2 1 NULL t(J5|E1) t(J6|E1) t(J3|E4) ... Lexicon d1(1 −⌈3 1 ⌉|E4, J1) d1(3 −⌈5+6 2 ⌉|E1, J3) d1(5 −⌈2+4 2 ⌉|NULL, J5) d>1(6 −5|J6) Distortion Figure 2: Word alignment based translation model P(J, A|E) (IBM Model 4) this procedure is sufficient to capture the bilingual correspondence for similar language pairs, some issues remain for drastically different pairs: Insertion/Deletion Modeling Although deletion was modeled in the Fertility Model, it merely assigns zero to each deleted word without considering context. Similarly, inserted words are selected by the Lexical Model parameter and inserted at the positions determined by a binomial distribution. This insertion/deletion scheme contributed to the simplicity of this representation of the translation processes, allowing a sophisticated application to run on an enormous bilingual sentence collection. However, it is apparent that the weak modeling of those phenomena will lead to inferior performance for language pairs such as Japanese and English. Local Alignment Modeling The IBM Model 4 (and 5) simulates phrasal constraints, although there were implicitly implemented as its Distortion Model parameters. In addition, the entire reordering is determined by a collection of local reorderings insufficient to capture the long-distance phrasal constraints. The next section introduces an alternative modeling, chunk-based statistical translation, which was intended to resolve the above two issues. 
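Before turning to the chunk-based model, the word alignment of Figure 1 can be unpacked mechanically: the fertility of each source word is the number of target words aligned to it, and target words aligned to index 0 are the NULL-generated insertions. A small sketch over the Figure 1 example (the variable names are ours):

```python
from collections import Counter

E = ["NULL", "show", "me", "the", "one", "in", "the", "window"]  # E_0 is NULL
J = ["uindo", "no", "shinamono", "o", "mise", "tekudasai"]
A = [7, 0, 4, 0, 1, 1]   # A_j: index of the source word that generated J_j

fertility = Counter(A)                    # words generated by each source position
null_insertions = [J[j] for j, a in enumerate(A) if a == 0]

assert fertility[1] == 2                  # "show" generated "mise" and "tekudasai"
assert null_insertions == ["no", "o"]     # the two NULL-generated particles
print([(E[i], fertility.get(i, 0)) for i in range(1, len(E))])
```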
3 Chunk-based Statistical Translation Chunk-based statistical translation models the process of chunking for both the source and target sentences, E and J, P(J|E) =  J  E P(J, J, E|E) where J and E are the chunked sentences for J and E, respectively, defined as two-dimentional arE = show1 me2 1 the3 one4 2 in5 the6 window7 3 mise5 tekudasai6 shinamono3 o4 uindo1 no2 J = uindo1 no2 1 shinamono3 o4 2 mise5 tekudasai6 3 A = ( 3 2 1 ) A = ( [ 7, 0 ] [ 4, 0 ] [ 1, 1 ] ) Figure 3: Example of chunk-based alignment rays. For instance, Ji, j represents the jth word of the ith chunk. The number of chunks for source and target is assumed to be equal, |J| = |E|, so that each chunk can convey a unit of meaning without added/subtracted information. The term P(J, J, E|E) is further decomposed by chunk alignment A and word alignment for each chunk translation A. P(J, J, E|E) =  A  A P(J, J, A, A, E|E) The notion of alignment A is the same as those found in the word alignment based translation model, which assigns a source chunk index for each target chunk. A is a two-dimensional array which assigns a source word index for each target word per chunk. For example, Figure 3 shows two-level alignments taken from the example in Figure 1. The target chunk at position 3, J3, “mise tekudasai” is aligned to the first position (A3 = 1), and both the words “mise” and “tekudasai” are aligned to the first position of the source sentence (A3,1 = 1, A3,2 = 1). 3.1 Translation Model Structure The term P(J, J, A, A, E|E) is further decomposed with approximation according to the scenario described below (refer to Figure 4). 1. Perform chunking for source sentence E by P(E|E). For instance, chunks of “show me” and “the one” were derived. The process is modeled by two steps: (a) Selection of chunk size (Head Model). For each word Ei, assign the chunk size ϕi using the head model ϵ(ϕi|Ei). A word with chunk size more than 0 (ϕi > 0) is treated as a head word, otherwise a nonhead (refer to the words in bold in Figure 4). (b) Associate each non-head word to a head word (Chunk Model). Each non-head word Ei is associated to a head word Eh by the probability η(c(Eh)|h −i, c(Ei)), where h is the position of a head word and c(E) is a function to map a word E to its word class (i.e. POS). For instance, “the3” is associated with the head word “one4” located at 4 −3 = +1. 2. Select words to be translated with Deletion and Fertility Model. (a) Select the number of head words. For each head word Eh (ϕh > 0), choose fertility φh according to the Fertility Model ν(φh|Eh). We assume that the head word must be translated, therefore φh > 0. In addition, one of them is selected as a head word at target position using a uniform distribution 1/φh. (b) Delete some non-head words. For each non-head word Ei (ϕi = 0), delete it according to the Deletion Model δ(di|c(Ei), c(Eh)), where Eh is the head word in the same chunk and di is 1 if Ei is deleted, otherwise 0. 3. Insert some words. In Figure 4, NULLs were inserted for two chunks. For each chunk Ei, select the number of spurious words φ′ i by Insertion Model ι(φ′ i|c(Eh)), where Eh is the head word of Ei. 4. Translate word-by-word. Each source word Ei, including spurious words, is translated to Jj according to the Lexicon Model, τ(Jj|Ei). 5. Reorder words. Each word in a chunk is reordered according to the Reorder Model P(A j|EAj, Jj). 
The chunk reordering is taken after the Distortion Model of IBM Model 4, where the position is determined by the relative position from the head word, P(A j|EAj, Jj) = |Aj|  k=1 ρ(k −h|c(EAAj,k), c(Jj,k)) where h is the position of a head word for the chunk Jj. For example, “no” is positioned −1 of “uindo”. show1 show show show mise mise uindo1 me2 me show show tekudasai tekudasai no2 the3 the NULL o shinamono shinamono3 one4 one one one shinamono o o4 in5 in mise5 the6 the NULL no uindo tekudasai6 window7 window window window uindo no Chunking Deletion & Fertility Insertion Lexicon Reorder Chunk Reorder Figure 4: Chunk-based translation model. The words in bold are head words. 6. Reorder chunks. All of the chunks are reordered according to the Chunk Reorder Model, P(A|E, J). The chunk reordering is also similar to the Distortion Model, where the positioning is determined by the relative position from the previous alignment P(A|E, J) = |J|  j=1 ϱ( j −j′|c(EAj−1,h′), c(Jj,h)) where j′ is the chunk alignment of the the previous chunk aEAj−1. h and h′ are the head word indices for Jj and EAj−1, respectively. Note that the reordering is dependent on head words. To summarize, the chunk-based translation model can be formulated as P(J|E) =  E,J,A,A  i ϵ(ϕi|Ei) ×  i:ϕi=0 η(c(Ehi)|hi −i, c(Ei)) ×  i:ϕi>0 ν(φi|Ei)/φi ×  i:ϕi=0 δ(di|c(Ei), c(Ehi)) ×  i:ϕi>0 ι(φ′ i|c(Ei)) ×  j  k τ(Jj,k|EAj,k) ×  j P(A j|EAj, Jj) × P(A|E, J) . 3.2 Characteristics of chunk-based Translation Model The main difference to the word alignment based translation model is the treatment of the bag of word translations. The word alignment based translation model generates a bag of words for each source word, while the chunk-based model constructs a set of target words from a set of source words. The behavior is modeled as a chunking procedure by first associating words to the head word of its chunk and then performing chunk-wise translation/insertion/deletion. The complicated word alignment is handled by the determination of word positions in two stages: translation of chunk and chunk reordering. The former structures local orderings while the latter constitutes global orderings. In addition, the concept of head associated with each chunk plays the central role in constraining different levels of the reordering by the relative positions from heads. 3.3 Parameter Estimation The parameter estimation for the chunk-based translation model relies on the EM-algorithm (Dempster et al., 1977). Given a large bilingual corpus the conditional probability of P(J, A, A, E|J, E) = P(J, J, A, A, E|E)/  J,A,A,E P(J, J, A, A, E|E) is first estimated for each pair of J and E (E-step), then each model parameters is computed based on the estimated conditional probability (M-step). The above procedure is iterated until the set of parameters converge. However, this naive algorithm will suffer from severe computational problems. The enumeration of all possible chunkings J and E together with word alignment A and chunk alignment A requires a significant amount of computation. Therefore, we have introduced a variation of the Inside-Outside algorithm as seen in (Yamada and Knight, 2001) for Estep computation. The details of the procedure are described in Appendix A. In addition to the computational problem, there exists a local-maximum problem, where the EMAlgorithm converges to a maximum solution but does not guarantee finding the global maximum. 
In order to solve this problem and to make the parameters converge quickly, IBM Model 4 parameters were used as the initial parameters for training. We directly applied the Lexicon Model and Fertility Model to the chunk-based translation model but set other parameters as uniform. 3.4 Decoding The decoding algorithm employed for this chunkbased statistical translation is based on the beam search algorithm for word alignment statistical translation presented in (Tillmann and Ney, 2000), which generates outputs in left-to-right order by consuming input in an arbitrary order. The decoder consists of two stages: 1. Generate possible output chunks for all possible input chunks. 2. Generate hypothesized output by consuming input chunks in arbitrary order and combining possible output chunks in left-to-right order. The generation of possible output chunks is estimated through an inverted lexicon model and sequences of inserted strings (Tillmann and Ney, 2000). In addition, an example-based method is also introduced, which generates candidate chunks by looking up the viterbi chunking and alignment from a training corpus. Since the combination of all possible chunks is computationally very expensive, we have introduced the following pruning and scoring strategies. beam pruning: Since the search space is enormous, we have set up a size threshold to maintain partial hypotheses for both of the above two stages. We also incorporated a threshold for scoring, which allows partial hypotheses with a certain score to be processed. example-based scoring: Input/output chunk pairs that appeared in a training corpus are “rewarded” so that they are more likely kept in the beam. During the decoding process, when a pair of chunks appeared in the first stage, the score is boosted by using this formula in the log domain, log Ptm(J|E) + log Plm(E) Table 1: Basic Travel Expression Corpus Japanese English # of sentences 171,894 # of words 1,181,188 1,009,065 vocabulary size 20472 16232 # of singletons 82,06 5,854 3-gram perplexity 23.7 35.8 + weight ×  j freq(EAj, Jj) in which Ptm(J|E) and Plm(E) are translation model and language model probability, respectively1, freq(EAj, Jj) is the frequency for the pair EAj and Jj appearing in the training corpus, and weight is a tuning parameter. 4 Experiments The corpus for this experiment was extracted from the Basic Travel Expression Corpus (BTEC), a collection of conversational travel phrases for Japanese and English (Takezawa et al., 2002) as seen in Table 1. The entire corpus was split into three parts: 152,169 sentences for training, 4,846 sentences for testing, and the remaining 10,148 sentences for parameter tuning, such as the termination criteria for the training iteration and the parameter tuning for decoders. Three translation systems were tested for comparison: model4: Word alignment based translation model, IBM Model 4 with a beam search decoder. chunk3: Chunk-based translation model, limiting the maximum allowed chunk size to 3. model3+: chunk3 with example-based chunk candidate generation. Figure 5 shows some examples of viterbi chunking and chunk alignment for chunk3. Translations were carried out on 510 sentences selected randomly from the test set and evaluated according to the following criteria with 16 reference sets. WER: Word-error-rate, which penalizes the edit distance against reference translations. 1For simplicity of notation, dependence on other variables are omitted, such as J. 
[ i * have ] [ the * number ] [ of my * passport ] [ * パスポートのe][ * 番号の控え] [ は* あります] [ i * have ] [ a * stomach ache ][ please * give me ][ some * medicine ] [ お腹が* 痛い] [ * ので] [ * 薬を] [ * 下さい] [ * i ] [ * ’d ] [ * like ] [ a * table ] [ * for ] [ * two ] [ by the * window ] [ * if possible ] [ * できれば][ 窓側][ に* ある][ * 二人用][ の* テーブルを][ 一つお* 願い] [ * したい] [ * のですが] [ i ∗have ][ a ∗reservation ] [ ∗for ] [ two ∗nights ] [ my ∗name is ] [ ∗risa kobayashi ] [ 二∗泊] [ ∗の] [ 予約を∗し][ ている∗のです] [ が∗名前は] [ 小林∗リサです] Figure 5: Examples of viterbi chunking and chunk alignment for English-to-Japanese translation model. Chunks are bracketed and the words with ∗to the left are head words. Table 2: Experimental results for Japanese–English translation Model WER PER BLEU SE [%] [%] [%] [%] A A+B A+B+C model4 43.3 37.2 46.5 59.2 74.1 80.2 chunk3 40.9 36.1 48.4 59.8 73.5 78.8 chunk3+ 38.5 33.7 52.1 65.1 76.3 80.6 PER: Position independent WER, which penalizes without considering positional disfluencies. BLEU: BLEU score, which computes the ratio of n-gram for the translation results found in reference translations (Papineni et al., 2002). SE: Subjective evaluation ranks ranging from A to D (A:Perfect, B:Fair, C:Acceptable and D:Nonsense), judged by native speakers. Table 2 summarizes the evaluation of Japanese-toEnglish translations, and Figure 6 presents some of the results by model4 and chunk3+. As Table 2 indicates, chunk3 performs better than model4 in terms of the non-subjective evaluations, although it scores almost equally in subjective evaluations. With the help of example-based decoding, chunk3+ was evaluated as the best among the three systems. 5 Discussion The chunk-based translation model was originally inspired by transfer-based machine translation but modeled by chunks in order to capture syntax-based correspondence. However, the structures evolved into complicated modeling: The translation model involves many stages, notably chunking and two kinds of reordering, word-based and chunk-based alignments. This is directly reflected in parameter input: 一五二便の荷物はこれで全部ですか reference: is this all the baggage from flight one five two model4: is this all you baggage for flight one five two chunk3: is this all the baggage from flight one five two input: 朝食をルームサービスでお願いします reference: may i have room service for breakfast please model4: please give me some room service please chunk3: i ’d like room service for breakfast input: もしもし三月十九日の予約を変更したいのですが reference: hello i ’d like to change my reservation for march nineteenth model4: i ’d like to change my reservation for ninety days be march hello chunk3: hello i ’d like to change my reservation on march nineteenth input: 二三分待って下さい今電話中なんです reference: wait a couple of minutes i ’m telephoning now model4: is this the line is busy now a few minutes chunk3: i ’m on another phone now please wait a couple of minutes Figure 6: Translation examples by word alignment based model and chunk-based model estimation, where chunk3 took 20 days for 40 iterations, which is roughly the same amount of time required for training IBM Model 5 with pegging. The unit of chunk in the statistical machine translation framework has been extensively discussed in the literature. Och et al. (1999) proposed a translation template approach that computes phrasal mappings from the viterbi alignments of a training corpus. Watanabe et al. (2002) used syntax-based phrase alignment to obtain chunks. 
Marcu and Wong (2002) argued for a different phrase-based translation modeling that directly induces a phrase-by-phrase lexicon model from word-wise data. All of these methods bias the training and/or decoding with phrase-level examples obtained by preprocessing a corpus (Och et al., 1999; Watanabe et al., 2002) or by allowing a lexicon model to hold phrases (Marcu and Wong, 2002). On the other hand, the chunk-based translation model holds the knowledge of how to construct a sequence of chunks from a sequence of words. The former approach is suitable for inputs with less deviation from a training corpus, while the latter approach will be able to perform well on unseen word sequences, although chunk-based examples are also useful for decoding to overcome the limited context of a n-gram based language model. Wang (1998) presented a different chunk-based method by treating the translation model as a phraseto-string process. Yamada and Knight (2001) further extended the model to a syntax-to-string translation modeling. Both assume that the source part of a translation model is structured either with a sequence of chunks or with a parse tree, while our method directly models a string-to-string procedure. It is clear that the string-to-string modeling with hiden chunk-layers is computationally more expensive than those structure-to-string models. However, the structure-to-string approaches are already biased by a monolingual chunking or parsing, which, in turn, might not be able to uncover the bilingual phrasal or syntactical constraints often observed in a corpus. Alshawi et al. (2000) also presented a two-level arranged word ordering and chunk ordering by a hierarchically organized collection of finite state transducers. The main difference from our work is that their approach is basically deterministic, while the chunk-based translation model is non-deterministic. The former method, of course, performs more efficient decoding but requires stronger heuristics to generate a set of transducers. Although the latter approach demands a large amount of decoding time and hypothesis space, it can operate on a very broadcoverage corpus with appropriate translation modeling. Acknowledgments The research reported here was supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled “A study of speech dialogue translation technology based on a large corpus”. References Hiyan Alshawi, Srinivas Bangalore, and Shona Douglas. 2000. Learning dependency translation models as collections of finite state head transducers. Computational Linguistics, 26(1):45–60. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. A. P. Dempster, N.M. Laird, and D.B.Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society, B(39):1–38. Daniel Marcu and William Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. of EMNLP-2002, Philadelphia, PA, July. Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of EMNLP/WVLC, University of Maryland, College Park, MD, June. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proc. of ACL 2002, pages 311–318. 
Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proc. of LREC 2002, pages 147–152, Las Palmas, Canary Islands, Spain, May. Christoph Tillmann and Hermann Ney. 2000. Word re-ordering and dp-based search in statistical machine translation. In Proc. of the COLING 2000, JulyAugust. Ye-Yi Wang. 1998. Grammar Inference and Statistical Machine Translation. Ph.D. thesis, School of Computer Science, Language Technologies Institute, Carnegie Mellon University. Taro Watanabe, Kenji Imamura, and Eiichiro Sumita. 2002. Statistical machine translation based on hierarchical phrase alignment. In Proc. of TMI 2002, pages 188–198, Keihanna, Japan, March. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. In Proc. of ACL 2001, Toulouse, France. Appendix A Inside-Outside Algorithm for Chunk-based Translation Model The basic idea of inside-outside computation is to separate the whole process into two parts, chunk translation and chunk reordering. Chunk translation handles translation of each chunk, while chunk reordering performs chunking and chunk reoprdering. The inside (backward or beta) probabilities can be derived, which represent the probability of source/target paring of chunks and sentences. The outside (forward or alpha) probabilities can be defined as the probability of a particular source and target pair appearing at a particular chunking and reordering. Inside Probability First, given E and J, compute chunk translation inside probabilities for all the possible source and target chunks pairing Ei′ i and J j′ j in which Ei′ i is the chunk ranging from index i to i′, β(Ei′ i , J j′ j ) =  A′ P(A, J j′ j |Ei′ i ) =  A′  θ Pθ(A′, J j′ j , Ei′ i ) where Pθ is the probability of a model with associated values for corresponding random variables, such as ϵ(ϕi|Ei) or τ(Jj|Ei), except for the chunk reorder model ϱ. A′ is a word alignment for the chunks Ei′ i and J j′ j . Second, compute the inside probability for sentence pairs E and J by considering all possible chunkings and chunk alignments. β(E, J) =  E,J:|E|=|J|  A P(A, E, J, J|E) =  E,J:|E|=|J|  A P(A|E, J)  j β(EAj, Jj) Outside Probability The outside probability for sentence pairing is always 1. α(E, J) = 1.0 The outside probabilities for each chunk pair is α(Ei′ i , J j′ j ) = α(E, J)  E,J:|E|=|J|  A P(A|E, J) ×  EAkEi′ i ,JkJ j′ j β(EAk, Jk) . Inside-Outside Computation The combination of the above inside-outside probabilities yields the following formulas for the accumulated counts of pair occurrences. First, the counts for each model parameter θ with associated random variables countθ(Θ) is countθ(Θ) =  <E,J>  Θ(A′,Ei′ i ,J j′ j ) α(Ei′ i , J j′ j )/β(E, J) ×  θ′ Pθ′(A′, J j′ j , Ei′ i ) . Second, the count for chunk reordering with associated random variables countϱ(Θ) is countϱ(Θ) =  <E,J> α(E, J)/β(E, J)  Θ(A,E,J) Pϱ(A|E, J)  k β(EAk, Jk) . Approximation Even with the introduction of the inside-outside parameter estimation paradigm, the enumeration of all possible chunk pairing and word alignment requires O(lmk4(k + 1)k) computations, where l and m are sentence length for E and J, respectively, and k is the maximum allowed number of words per chunk. In addition, the enumeration of all possible alignments for all possible chunked sentences is O(2l2mn!), where n = |J| = |E|. 
In order to handle the massive computational demand, we have applied an approximation to the inside-outside estimation procedure. First, the enumeration of word alignments for chunk translations was approximated by a set of alignments: the viterbi alignment and the neighboring alignments obtained through move/swap operations on particular word alignments. Second, the enumeration of chunk alignments was also approximated by a set of chunkings and chunk alignments, computed as follows: (1) determine the number of chunks per sentence; (2) determine an initial chunking and alignment; (3) compute the viterbi chunking-alignment via hill-climbing using three operators (move a chunk boundary, swap a chunk alignment, and move a head position); (4) compute the neighboring chunking-alignments using the same operators.
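The hill-climbing search in steps (3) and (4) can be pictured as a steepest-ascent loop over the three operators. A schematic sketch only: the state encoding, the operator functions and the scoring callable are all assumptions, and the real E-step additionally accumulates inside-outside counts over the resulting states.

```python
def viterbi_by_hill_climbing(initial_state, operators, score):
    """Greedy approximation of the viterbi chunking-alignment.
    operators: list of functions, each mapping a state to a list of neighboring
               states (move a chunk boundary, swap a chunk alignment, move a head)
    score    : returns the model probability of a chunking-alignment state"""
    current, best = initial_state, score(initial_state)
    while True:
        candidates = [n for op in operators for n in op(current)]
        if not candidates:
            return current
        neighbor = max(candidates, key=score)
        if score(neighbor) <= best:
            return current          # no operator improves the score: local maximum
        current, best = neighbor, score(neighbor)

def neighborhood(state, operators):
    """Neighboring chunking-alignments of the approximate viterbi state,
    used in place of the full enumeration."""
    return [n for op in operators for n in op(state)]
```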
2003
39
Fast Methods for Kernel-based Text Analysis Taku Kudo and Yuji Matsumoto Graduate School of Information Science, Nara Institute of Science and Technology {taku-ku,matsu}@is.aist-nara.ac.jp Abstract Kernel-based learning (e.g., Support Vector Machines) has been successfully applied to many hard problems in Natural Language Processing (NLP). In NLP, although feature combinations are crucial to improving performance, they are heuristically selected. Kernel methods change this situation. The merit of the kernel methods is that effective feature combination is implicitly expanded without loss of generality and increasing the computational costs. Kernel-based text analysis shows an excellent performance in terms in accuracy; however, these methods are usually too slow to apply to large-scale text analysis. In this paper, we extend a Basket Mining algorithm to convert a kernel-based classifier into a simple and fast linear classifier. Experimental results on English BaseNP Chunking, Japanese Word Segmentation and Japanese Dependency Parsing show that our new classifiers are about 30 to 300 times faster than the standard kernel-based classifiers. 1 Introduction Kernel methods (e.g., Support Vector Machines (Vapnik, 1995)) attract a great deal of attention recently. In the field of Natural Language Processing, many successes have been reported. Examples include Part-of-Speech tagging (Nakagawa et al., 2002) Text Chunking (Kudo and Matsumoto, 2001), Named Entity Recognition (Isozaki and Kazawa, 2002), and Japanese Dependency Parsing (Kudo and Matsumoto, 2000; Kudo and Matsumoto, 2002). It is known in NLP that combination of features contributes to a significant improvement in accuracy. For instance, in the task of dependency parsing, it would be hard to confirm a correct dependency relation with only a single set of features from either a head or its modifier. Rather, dependency relations should be determined by at least information from both of two phrases. In previous research, feature combination has been selected manually, and the performance significantly depended on these selections. This is not the case with kernel-based methodology. For instance, if we use a polynomial kernel, all feature combinations are implicitly expanded without loss of generality and increasing the computational costs. Although the mapped feature space is quite large, the maximal margin strategy (Vapnik, 1995) of SVMs gives us a good generalization performance compared to the previous manual feature selection. This is the main reason why kernel-based learning has delivered great results to the field of NLP. Kernel-based text analysis shows an excellent performance in terms in accuracy; however, its inefficiency in actual analysis limits practical application. For example, an SVM-based NE-chunker runs at a rate of only 85 byte/sec, while previous rulebased system can process several kilobytes per second (Isozaki and Kazawa, 2002). Such slow execution time is inadequate for Information Retrieval, Question Answering, or Text Mining, where fast analysis of large quantities of text is indispensable. This paper presents two novel methods that make the kernel-based text analyzers substantially faster. These methods are applicable not only to the NLP tasks but also to general machine learning tasks where training and test examples are represented in a binary vector. More specifically, we focus on a Polynomial Kernel of degree d, which can attain feature combinations that are crucial to improving the performance of tasks in NLP. 
Second, we introduce two fast classification algorithms for this kernel. One is PKI (Polynomial Kernel Inverted), which is an extension of Inverted Index in Information Retrieval. The other is PKE (Polynomial Kernel Expanded), where all feature combinations are explicitly expanded. By applying PKE, we can convert a kernel-based classifier into a simple and fast liner classifier. In order to build PKE, we extend the PrefixSpan (Pei et al., 2001), an efficient Basket Mining algorithm, to enumerate effective feature combinations from a set of support examples. Experiments on English BaseNP Chunking, Japanese Word Segmentation and Japanese Dependency Parsing show that PKI and PKE perform respectively 2 to 13 times and 30 to 300 times faster than standard kernel-based systems, without a discernible change in accuracy. 2 Kernel Method and Support Vector Machines Suppose we have a set of training data for a binary classification problem: (x1, y1), . . . , (xL, yL) xj ∈ℜN, yj ∈{+1, −1}, where xj is a feature vector of the j-th training sample, and yj is the class label associated with this training sample. The decision function of SVMs is defined by y(x) = sgn ³ X j∈SV yjαjφ(xj) · φ(x) + b ´ , (1) where: (A) φ is a non-liner mapping function from ℜN to ℜH (N ≪H). (B) αj, b ∈ℜ, αj ≥0. The mapping function φ should be designed such that all training examples are linearly separable in ℜH space. Since H is much larger than N, it requires heavy computation to evaluate the dot products φ(xi) · φ(x) in an explicit form. This problem can be overcome by noticing that both construction of optimal parameter αi (we will omit the details of this construction here) and the calculation of the decision function only require the evaluation of dot products φ(xi)·φ(x). This is critical, since, in some cases, the dot products can be evaluated by a simple Kernel Function: K(x1, x2) = φ(x1) · φ(x2). Substituting kernel function into (1), we have the following decision function. y(x) = sgn ³ X j∈SV yjαjK(xj, x) + b ´ (2) One of the advantages of kernels is that they are not limited to vectorial object x, but that they are applicable to any kind of object representation, just given the dot products. 3 Polynomial Kernel of degree d For many tasks in NLP, the training and test examples are represented in binary vectors; or sets, since examples in NLP are usually represented in socalled Feature Structures. Here, we focus on such cases 1. Suppose a feature set F = {1, 2, . . . , N} and training examples Xj(j = 1, 2, . . . , L), all of which are subsets of F (i.e., Xj ⊆F). In this case, Xj can be regarded as a binary vector xj = (xj1, xj2, . . . , xjN) where xji = 1 if i ∈Xj, xji = 0 otherwise. The dot product of x1 and x2 is given by x1 · x2 = |X1 ∩X2|. Definition 1 Polynomial Kernel of degree d Given sets X and Y , corresponding to binary feature vectors x and y, Polynomial Kernel of degree d Kd(X, Y ) is given by Kd(x, y) = Kd(X, Y ) = (1 + |X ∩Y |)d, (3) where d = 1, 2, 3, . . .. In this paper, (3) will be referred to as an implicit form of the Polynomial Kernel. 1In the Maximum Entropy model widely applied in NLP, we usually suppose binary feature functions fi(Xj) ∈{0, 1}. This formalization is exactly same as representing an example Xj in a set {k|fk(Xj) = 1}. It is known in NLP that a combination of features, a subset of feature set F in general, contributes to overall accuracy. In previous research, feature combination has been selected manually. 
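Decision function (2), instantiated with the set form of the polynomial kernel in (3), can be written in a few lines; this is essentially the baseline classifier evaluated later as PKB. A minimal sketch with examples represented as Python sets (the container choices are ours):

```python
def poly_kernel(X, Y, d):
    """K_d(X, Y) = (1 + |X ∩ Y|)^d for binary-feature examples given as sets."""
    return (1 + len(X & Y)) ** d

def classify_pkb(X, support, d, b):
    """sgn( sum_j y_j * alpha_j * K_d(X_j, X) + b ).
    support: iterable of (X_j, y_j, alpha_j) triples."""
    score = sum(y_j * a_j * poly_kernel(X_j, X, d) for X_j, y_j, a_j in support) + b
    return 1 if score >= 0 else -1
```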
The use of a polynomial kernel allows such feature expansion without loss of generality or an increase in computational costs, since the Polynomial Kernel of degree d implicitly maps the original feature space F into F d space. (i.e., φ : F →F d). This property is critical and some reports say that, in NLP, the polynomial kernel outperforms the simple linear kernel (Kudo and Matsumoto, 2000; Isozaki and Kazawa, 2002). Here, we will give an explicit form of the Polynomial Kernel to show the mapping function φ(·). Lemma 1 Explicit form of Polynomial Kernel. The Polynomial Kernel of degree d can be rewritten as Kd(X, Y ) = d X r=0 cd(r) · |Pr(X ∩Y )|, (4) where • Pr(X) is a set of all subsets of X with exactly r elements in it, • cd(r) = Pd l=r ¡d l ¢³Pr m=0(−1)r−m · ml¡ r m ¢´ . Proof See Appendix A. cd(r) will be referred as a subset weight of the Polynomial Kernel of degree d. This function gives a prior weight to the subset s, where |s| = r. Example 1 Quadratic and Cubic Kernel Given sets X = {a, b, c, d} and Y = {a, b, d, e}, the Quadratic Kernel K2(X, Y ) and the Cubic Kernel K3(X, Y ) can be calculated in an implicit form as: K2(X, Y ) = (1 + |X ∩Y |)2 = (1 + 3)2 = 16, K3(X, Y ) = (1 + |X ∩Y |)3 = (1 + 3)3 = 64. Using Lemma 1, the subset weights of the Quadratic Kernel and the Cubic Kernel can be calculated as c2(0) = 1, c2(1) = 3, c2(2) = 2 and c3(0)=1, c3(1)=7, c3(2)=12, c3(3)=6. In addition, subsets Pr(X ∩Y ) (r = 0, 1, 2, 3) are given as follows: P0(X ∩Y ) = {φ}, P1(X ∩Y ) = {{a}, {b}, {d}}, P2(X ∩Y ) = {{a, b}, {a, d}, {b, d}}, P3(X ∩Y ) = {{a, b, d}}. K2(X, Y ) and K3(X, Y ) can similarly be calculated in an explicit form as: function PKI classify (X) r = 0 # an array, initialized as 0 foreach i ∈X foreach j ∈h(i) rj = rj + 1 end end result = 0 foreach j ∈SV result = result + yjαj · (1 + rj)d end return sgn(result + b) end Figure 1: Pseudo code for PKI K2(X, Y ) = 1 · 1 + 3 · 3 + 2 · 3 = 16, K3(X, Y ) = 1 · 1 + 7 · 3 + 12 · 3 + 6 · 1 = 64. 4 Fast Classifiers for Polynomial Kernel In this section, we introduce two fast classification algorithms for the Polynomial Kernel of degree d. Before describing them, we give the baseline classifier (PKB): y(X) = sgn ³ X j∈SV yjαj · (1 + |Xj ∩X|)d + b ´ . (5) The complexity of PKB is O(|X| · |SV |), since it takes O(|X|) to calculate (1 + |Xj ∩X|)d and there are a total of |SV | support examples. 4.1 PKI (Inverted Representation) Given an item i ∈F, if we know in advance the set of support examples which contain item i ∈F, we do not need to calculate |Xj ∩X| for all support examples. This is a naive extension of Inverted Indexing in Information Retrieval. Figure 1 shows the pseudo code of the algorithm PKI. The function h(i) is a pre-compiled table and returns a set of support examples which contain item i. The complexity of the PKI is O(|X| · B + |SV |), where B is an average of |h(i)| over all item i ∈F. The PKI can make the classification speed drastically faster when B is small, in other words, when feature space is relatively sparse (i.e., B ≪|SV |). The feature space is often sparse in many tasks in NLP, since lexical entries are used as features. The algorithm PKI does not change the final accuracy of the classification. 4.2 PKE (Expanded Representation) 4.2.1 Basic Idea of PKE Using Lemma 1, we can represent the decision function (5) in an explicit form: y(X) = sgn ³ X j∈SV yjαj ¡ d X r=0 cd(r) · |Pr(Xj ∩X)|¢ + b ´ . 
(6) If we, in advance, calculate w(s) = X j∈SV yjαjcd(|s|)I(s ∈P|s|(Xj)) (where I(t) is an indicator function 2) for all subsets s ∈Sd r=0 Pr(F), (6) can be written as the following simple linear form: y(X) = sgn ³ X s∈Γd(X) w(s) + b ´ . (7) where Γd(X) = Sd r=0 Pr(X). The classification algorithm given by (7) will be referred to as PKE. The complexity of PKE is O(|Γd(X)|) = O(|X|d), independent on the number of support examples |SV |. 4.2.2 Mining Approach to PKE To apply the PKE, we first calculate |Γd(F)| degree of vectors w = (w(s1), w(s2), . . . , w(s|Γd(F)|)). This calculation is trivial only when we use a Quadratic Kernel, since we just project the original feature space F into F × F space, which is small enough to be calculated by a naive exhaustive method. However, if we, for instance, use a polynomial kernel of degree 3 or higher, this calculation becomes not trivial, since the size of feature space exponentially increases. Here we take the following strategy: 1. Instead of using the original vector w, we use w′, an approximation of w. 2. We apply the Subset Mining algorithm to calculate w′ efficiently. 2I(t) returns 1 if t is true,returns 0 otherwise. Definition 2 w′: An approximation of w An approximation of w is given by w′ = (w′(s1), w′(s2), . . . , w′(s|Γd(F)|)), where w′(s) is set to 0 if w(s) is trivially close to 0. (i.e., σneg < w(s) < σpos (σneg < 0, σpos > 0), where σpos and σneg are predefined thresholds). The algorithm PKE is an approximation of the PKB, and changes the final accuracy according to the selection of thresholds σpos and σneg. The calculation of w′ is formulated as the following mining problem: Definition 3 Feature Combination Mining Given a set of support examples and subset weight cd(r), extract all subsets s and their weights w(s) if w(s) holds w(s) ≥σpos or w(s) ≤σneg . In this paper, we apply a Sub-Structure Mining algorithm to the feature combination mining problem. Generally speaking, sub-structures mining algorithms efficiently extract frequent sub-structures (e.g., subsets, sub-sequences, sub-trees, or subgraphs) from a large database (set of transactions). In this context, frequent means that there are no less than ξ transactions which contain a sub-structure. The parameter ξ is usually referred to as the Minimum Support. Since we must enumerate all subsets of F, we can apply subset mining algorithm, in some times called as Basket Mining algorithm, to our task. There are many subset mining algorithms proposed, however, we will focus on the PrefixSpan algorithm, which is an efficient algorithm for sequential pattern mining, originally proposed by (Pei et al., 2001). The PrefixSpan was originally designed to extract frequent sub-sequence (not subset) patterns, however, it is a trivial difference since a set can be seen as a special case of sequences (i.e., by sorting items in a set by lexicographic order, the set becomes a sequence). The basic idea of the PrefixSpan is to divide the database by frequent sub-patterns (prefix) and to grow the prefix-spanning pattern in a depth-first search fashion. We now modify the PrefixSpan to suit to our feature combination mining. • size constraint We only enumerate up to subsets of size d. when we plan to apply the Polynomial Kernel of degree d. • Subset weight cd(r) In the original PrefixSpan, the frequency of each subset does not change by its size. However, in our mining task, it changes (i.e., the frequency of subset s is weighted by cd(|s|)). 
Here, we process the mining algorithm by assuming that each transaction (support example Xj) has its frequency Cdyjαj, where Cd = max(cd(1), cd(2), . . . , cd(d)). The weight w(s) is calculated by w(s) = ω(s) × cd(|s|)/Cd, where ω(s) is a frequency of s, given by the original PrefixSpan. • Positive/Negative support examples We first divide the support examples into positive (yi > 0) and negative (yi < 0) examples, and process mining independently. The result can be obtained by merging these two results. • Minimum Supports σpos, σneg In the original PrefixSpan, minimum support is an integer. In our mining task, we can give a real number to minimum support, since each transaction (support example Xj) has possibly non-integer frequency Cdyjαj. Minimum supports σpos and σneg control the rate of approximation. For the sake of convenience, we just give one parameter σ, and calculate σpos and σneg as follows σpos = σ · ³#of positive examples #of support examples ´ , σneg = −σ · ³#of negative examples #of support examples ´ . After the process of mining, a set of tuples Ω= {⟨s, w(s)⟩} is obtained, where s is a frequent subset and w(s) is its weight. We use a TRIE to efficiently store the set Ω. The example of such TRIE compression is shown in Figure 2. Although there are many implementations for TRIE, we use a Double-Array (Aoe, 1989) in our task. The actual classification of PKE can be examined by traversing the TRIE for all subsets s ∈Γd(X). 5 Experiments To demonstrate performances of PKI and PKE, we examined three NLP tasks: English BaseNP Chunking (EBC), Japanese Word Segmentation (JWS) and                                              !#"$"&% '(#) * '+,'-+ . '-(#) * . +-/ . '0 . '-+ . '-+ s 1 Figure 2: Ωin TRIE representation Japanese Dependency Parsing (JDP). A more detailed description of each task, training and test data, the system parameters, and feature sets are presented in the following subsections. Table 1 summarizes the detail information of support examples (e.g., size of SVs, size of feature set etc.). Our preliminary experiments show that a Quadratic Kernel performs the best in EBC, and a Cubic Kernel performs the best in JWS and JDP. The experiments using a Cubic Kernel are suitable to evaluate the effectiveness of the basket mining approach applied in the PKE, since a Cubic Kernel projects the original feature space F into F 3 space, which is too large to be handled only using a naive exhaustive method. All experiments were conducted under Linux using XEON 2.4 Ghz dual processors and 3.5 Gbyte of main memory. All systems are implemented in C++. 5.1 English BaseNP Chunking (EBC) Text Chunking is a fundamental task in NLP – dividing sentences into non-overlapping phrases. BaseNP chunking deals with a part of this task and recognizes the chunks that form noun phrases. Here is an example sentence: [He] reckons [the current account deficit] will narrow to [only $ 1.8 billion] . A BaseNP chunk is represented as sequence of words between square brackets. BaseNP chunking task is usually formulated as a simple tagging task, where we represent chunks with three types of tags: B: beginning of a chunk. I: non-initial word. O: outside of the chunk. In our experiments, we used the same settings as (Kudo and Matsumoto, 2002). We use a standard data set (Ramshaw and Marcus, 1995) consisting of sections 15-19 of the WSJ corpus as training and section 20 as testing. 
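As an aside on Section 4.2, for small feature sets the expanded classifier can be built by brute force rather than by PrefixSpan mining, which also makes the subset weights of Lemma 1 easy to check against Example 1. The sketch below is an illustration of the idea only (it enumerates all subsets of every support example, which is exactly the cost the mining algorithm avoids), and the data layout is assumed.

```python
from itertools import combinations
from math import comb

def subset_weight(d, r):
    """c_d(r) of Lemma 1: sum_{l=r..d} C(d,l) * sum_{m=0..r} (-1)^(r-m) m^l C(r,m)."""
    return sum(comb(d, l) * sum((-1) ** (r - m) * m ** l * comb(r, m)
                                for m in range(r + 1))
               for l in range(r, d + 1))

# Reproduces the subset weights quoted in Example 1.
assert [subset_weight(2, r) for r in range(3)] == [1, 3, 2]
assert [subset_weight(3, r) for r in range(4)] == [1, 7, 12, 6]

def expand_weights(support, d, sigma_pos, sigma_neg):
    """Brute-force alternative to mining: compute w(s) for every subset s
    (|s| <= d) occurring in some support example, then keep only weights
    outside (sigma_neg, sigma_pos), as in Definition 2.
    support: iterable of (X_j, y_j, alpha_j) with X_j a set of items."""
    cd = [subset_weight(d, r) for r in range(d + 1)]
    w = {}
    for X_j, y_j, alpha_j in support:
        for r in range(d + 1):
            for s in combinations(sorted(X_j), r):
                w[s] = w.get(s, 0.0) + y_j * alpha_j * cd[r]
    return {s: v for s, v in w.items() if v >= sigma_pos or v <= sigma_neg}

def classify_pke(X, w, b, d):
    """Equation (7): sgn( sum_{s in Gamma_d(X)} w(s) + b ), a plain linear classifier."""
    score = b + sum(w.get(s, 0.0)
                    for r in range(d + 1)
                    for s in combinations(sorted(X), r))
    return 1 if score >= 0 else -1
```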
5.2 Japanese Word Segmentation (JWS) Since there are no explicit spaces between words in Japanese sentences, we must first identify the word boundaries before analyzing deep structure of a sentence. Japanese word segmentation is formalized as a simple classification task. Let s = c1c2 · · · cm be a sequence of Japanese characters, t = t1t2 · · · tm be a sequence of Japanese character types 3 associated with each character, and yi ∈{+1, −1}, (i = (1, 2, . . . , m−1)) be a boundary marker. If there is a boundary between ci and ci+1, yi = 1, otherwise yi = −1. The feature set of example xi is given by all characters as well as character types in some constant window (e.g., 5): {ci−2, ci−1, · · · , ci+2, ci+3, ti−2, ti−1, · · · , ti+2, ti+3}. Note that we distinguish the relative position of each character and character type. We use the Kyoto University Corpus (Kurohashi and Nagao, 1997), 7,958 sentences in the articles on January 1st to January 7th are used as training data, and 1,246 sentences in the articles on January 9th are used as the test data. 5.3 Japanese Dependency Parsing (JDP) The task of Japanese dependency parsing is to identify a correct dependency of each Bunsetsu (base phrase in Japanese). In previous research, we presented a state-of-the-art SVMs-based Japanese dependency parser (Kudo and Matsumoto, 2002). We combined SVMs into an efficient parsing algorithm, Cascaded Chunking Model, which parses a sentence deterministically only by deciding whether the current chunk modifies the chunk on its immediate right hand side. The input for this algorithm consists of a set of the linguistic features related to the head and modifier (e.g., word, part-of-speech, and inflections), and the output from the algorithm is either of the value +1 (dependent) or -1 (independent). We use a standard data set, which is the same corpus described in the Japanese Word Segmentation. 3Usually, in Japanese, word boundaries are highly constrained by character types, such as hiragana and katakana (both are phonetic characters in Japanese), Chinese characters, English alphabets and numbers. 5.4 Results Tables 2, 3 and 4 show the execution time, accuracy4, and |Ω| (size of extracted subsets), by changing σ from 0.01 to 0.0005. The PKI leads to about 2 to 12 times improvements over the PKB. In JDP, the improvement is significant. This is because B, the average of h(i) over all items i ∈F, is relatively small in JDP. The improvement significantly depends on the sparsity of the given support examples. The improvements of the PKE are more significant than the PKI. The running time of the PKE is 30 to 300 times faster than the PKB, when we set an appropriate σ, (e.g., σ = 0.005 for EBC and JWS, σ = 0.0005 for JDP). In these settings, we could preserve the final accuracies for test data. 5.5 Frequency-based Pruning The PKE with a Cubic Kernel tends to make Ωlarge (e.g., |Ω| = 2.32 million for JWS, |Ω| = 8.26 million for JDP). To reduce the size of Ω, we examined simple frequency-based pruning experiments. Our extension is to simply give a prior threshold ξ(= 1, 2, 3, 4 . . .), and erase all subsets which occur in less than ξ support examples. The calculation of frequency can be similarly conducted by the PrefixSpan algorithm. Tables 5 and 6 show the results of frequency-based pruning, when we fix σ=0.005 for JWS, and σ=0.0005 for JDP. In JDP, we can make the size of set Ωabout one third of the original size. This reduction gives us not only a slight speed increase but an improvement of accuracy (89.29%→89.34%). 
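The feature extraction for JWS in Section 5.2, i.e., characters and character types in a fixed window around the candidate boundary with their relative positions kept distinct, can be sketched as follows; the position-tagged string encoding of features is our own choice, not the authors' implementation.

```python
def jws_features(chars, types, i, window=(-2, 3)):
    """Features for the boundary between chars[i] and chars[i+1]:
    characters c_{i-2}..c_{i+3} and character types t_{i-2}..t_{i+3},
    each marked with its relative offset so that the same character at
    different positions remains a distinct feature."""
    feats = set()
    lo, hi = window
    for offset in range(lo, hi + 1):
        k = i + offset
        if 0 <= k < len(chars):
            feats.add(f"C{offset:+d}:{chars[k]}")
            feats.add(f"T{offset:+d}:{types[k]}")
    return feats
```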
Frequency-based pruning allows us to remove subsets that have large weight and small frequency. Such subsets may be generated from errors or special outliers in the training examples, which sometimes cause an overfitting in training. In JWS, the frequency-based pruning does not work well. Although we can reduce the size of Ωby half, the accuracy is also reduced (97.94%→97.83%). It implies that, in JWS, features even with frequency of one contribute to the final decision hyperplane. 4In EBC, accuracy is evaluated using F measure, harmonic mean between precision and recall. Table 1: Details of Data Set Data Set EBC JWS JDP # of examples 135,692 265,413 110,355 |SV| # of SVs 11,690 57,672 34,996 # of positive SVs 5,637 28,440 17,528 # of negative SVs 6,053 29,232 17,468 |F| (size of feature) 17,470 11,643 28,157 Avg. of |Xj| 11.90 11.73 17.63 B (Avg. of |h(i)|)) 7.74 58.13 21.92 (Note: In EBC, to handle K-class problems, we use a pairwise classification; building K×(K−1)/2 classifiers considering all pairs of classes, and final class decision was given by majority voting. The values in this column are averages over all pairwise classifiers.) 6 Discussion There have been several studies for efficient classification of SVMs. Isozaki et al. propose an XQK (eXpand the Quadratic Kernel) which can make their Named-Entity recognizer drastically fast (Isozaki and Kazawa, 2002). XQK can be subsumed into PKE. Both XQK and PKE share the basic idea; all feature combinations are explicitly expanded and we convert the kernel-based classifier into a simple linear classifier. The explicit difference between XQK and PKE is that XQK is designed only for Quadratic Kernel. It implies that XQK can only deal with feature combination of size up to two. On the other hand, PKE is more general and can also be applied not only to the Quadratic Kernel but also to the general-style of polynomial kernels (1 + |X ∩Y |)d. In PKE, there are no theoretical constrains to limit the size of combinations. In addition, Isozaki et al. did not mention how to expand the feature combinations. They seem to use a naive exhaustive method to expand them, which is not always scalable and efficient for extracting three or more feature combinations. PKE takes a basket mining approach to enumerating effective feature combinations more efficiently than their exhaustive method. 7 Conclusion and Future Works We focused on a Polynomial Kernel of degree d, which has been widely applied in many tasks in NLP Table 2: Results of EBC PKE Time Speedup F1 |Ω| σ (sec./sent.) Ratio (× 1000) 0.01 0.0016 105.2 93.79 518 0.005 0.0016 101.3 93.85 668 0.001 0.0017 97.7 93.84 858 0.0005 0.0017 96.8 93.84 889 PKI 0.020 8.3 93.84 PKB 0.164 1.0 93.84 Table 3: Results of JWS PKE Time Speedup Acc.(%) |Ω| σ (sec./sent.) Ratio (× 1000) 0.01 0.0024 358.2 97.93 1,228 0.005 0.0028 300.1 97.95 2,327 0.001 0.0034 242.6 97.94 4,392 0.0005 0.0035 238.8 97.94 4,820 PKI 0.4989 1.7 97.94 PKB 0.8535 1.0 97.94 Table 4: Results of JDP PKE Time Speedup Acc.(%) |Ω| σ (sec./sent.) Ratio (× 1000) 0.01 0.0042 66.8 88.91 73 0.005 0.0060 47.8 89.05 1,924 0.001 0.0086 33.3 89.26 6,686 0.0005 0.0090 31.8 89.29 8,262 PKI 0.0226 12.6 89.29 PKB 0.2848 1.0 89.29 Table 5: Frequency-based pruning (JWS) PKE time Speedup Acc.(%) |Ω| ξ (sec./sent.) Ratio (× 1000) 1 0.0028 300.1 97.95 2,327 2 0.0025 337.3 97.83 954 3 0.0023 367.0 97.83 591 PKB 0.8535 1.0 97.94 Table 6: Frequency-based pruning (JDP) PKE time Speedup Acc.(%) |Ω| ξ (sec./sent.) 
Ratio (× 1000) 1 0.0090 31.8 89.29 8,262 2 0.0072 39.3 89.34 2,450 3 0.0068 41.8 89.31 1,360 PKB 0.2848 1.0 89.29 and can attain feature combination that is crucial to improving the performance of tasks in NLP. Then, we introduced two fast classification algorithms for this kernel. One is PKI (Polynomial Kernel Inverted), which is an extension of Inverted Index. The other is PKE (Polynomial Kernel Expanded), where all feature combinations are explicitly expanded. The concept in PKE can also be applicable to kernels for discrete data structures, such as String Kernel (Lodhi et al., 2002) and Tree Kernel (Kashima and Koyanagi, 2002; Collins and Duffy, 2001). For instance, Tree Kernel gives a dot product of an ordered-tree, and maps the original ordered-tree onto its all sub-tree space. To apply the PKE, we must efficiently enumerate the effective sub-trees from a set of support examples. We can similarly apply a sub-tree mining algorithm (Zaki, 2002) to this problem. Appendix A.: Lemma 1 and its proof cd(r) = d X l=r µ d l ¶³ r X m=0 (−1)r−m · ml µ r m ¶´ . Proof. Let X, Y be subsets of F = {1, 2, . . . , N}. In this case, |X ∩ Y | is same as the dot product of vector x, y, where x = {x1, x2, . . . , xN}, y = {y1, y2, . . . , yN} (xj, yj ∈{0, 1}) xj = 1 if j ∈X, xj = 0 otherwise. (1 + |X ∩Y |)d = (1 + x · y)d can be expanded as follows (1 + x · y)d = d X l=0 µ d l ¶³ N X j=1 xjyj ´l = d X l=0 µ d l ¶ · τ(l) where τ(l) = k1+...+kN =l X kn≥0 l! k1! . . . kN!(x1y1)k1 . . . (xNyN)kN . Note that x kj j is binary (i.e., x kj j ∈ {0, 1}), the number of r-size subsets can be given by a coefficient of (x1y1x2y2 . . . xryr). Thus, cd(r) = d X l=r µ d l ¶µ k1+...+kr=l X kn≥1,n=1,2,...,r l! k1! . . . kr! ¶ = d X l=r µ d l ¶µ rl− µ r 1 ¶ (r−1)l+ µ r 2 ¶ (r−2)l −. . . ¶ = d X l=r µ d l ¶³ r X m=0 (−1)r−m · ml µ r m ¶´ . 2 References Junichi Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9). Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14, Vol.1 (NIPS 2001), pages 625–632. Hideki Isozaki and Hideto Kazawa. 2002. Efficient support vector classifiers for named entity recognition. In Proceedings of the COLING-2002, pages 390–396. Hisashi Kashima and Teruo Koyanagi. 2002. Svm kernels for semi-structured data. In Proceedings of the ICML-2002, pages 291–298. Taku Kudo and Yuji Matsumoto. 2000. Japanese Dependency Structure Analysis based on Support Vector Machines. In Proceedings of the EMNLP/VLC-2000, pages 18–25. Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of the the NAACL, pages 192–199. Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analyisis using cascaded chunking. In Proceedings of the CoNLL-2002, pages 63–69. Sadao Kurohashi and Makoto Nagao. 1997. Kyoto University text corpus project. In Proceedings of the ANLP-1997, pages 115–118. Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2. Tetsuji Nakagawa, Taku Kudo, and Yuji Matsumoto. 2002. Revision learning and its application to part-of-speech tagging. In Proceedings of the ACL 2002, pages 497–504. Jian Pei, Jiawei Han, and et al. 2001. Prefixspan: Mining sequential patterns by prefix-projected growth. In Proc. of International Conference of Data Engineering, pages 215– 224. Lance A. Ramshaw and Mitchell P. 
References

Junichi Aoe. 1989. An efficient digital search algorithm by using a double-array structure. IEEE Transactions on Software Engineering, 15(9).
Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems 14 (NIPS 2001), pages 625–632.
Hideki Isozaki and Hideto Kazawa. 2002. Efficient support vector classifiers for named entity recognition. In Proceedings of COLING-2002, pages 390–396.
Hisashi Kashima and Teruo Koyanagi. 2002. SVM kernels for semi-structured data. In Proceedings of ICML-2002, pages 291–298.
Taku Kudo and Yuji Matsumoto. 2000. Japanese dependency structure analysis based on support vector machines. In Proceedings of EMNLP/VLC-2000, pages 18–25.
Taku Kudo and Yuji Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL, pages 192–199.
Taku Kudo and Yuji Matsumoto. 2002. Japanese dependency analysis using cascaded chunking. In Proceedings of CoNLL-2002, pages 63–69.
Sadao Kurohashi and Makoto Nagao. 1997. Kyoto University text corpus project. In Proceedings of ANLP-1997, pages 115–118.
Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2.
Tetsuji Nakagawa, Taku Kudo, and Yuji Matsumoto. 2002. Revision learning and its application to part-of-speech tagging. In Proceedings of ACL 2002, pages 497–504.
Jian Pei, Jiawei Han, et al. 2001. PrefixSpan: Mining sequential patterns by prefix-projected growth. In Proceedings of the International Conference on Data Engineering, pages 215–224.
Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the VLC, pages 88–94.
Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer.
Mohammed Zaki. 2002. Efficiently mining frequent trees in a forest. In Proceedings of the 8th International Conference on Knowledge Discovery and Data Mining (KDD), pages 71–80.
2003
4
Feature-Rich Statistical Translation of Noun Phrases
Philipp Koehn and Kevin Knight
Information Sciences Institute, Department of Computer Science, University of Southern California
[email protected], [email protected]

Abstract
We define noun phrase translation as a subtask of machine translation. This enables us to build a dedicated noun phrase translation subsystem that improves over the currently best general statistical machine translation methods by incorporating special modeling and special features. We achieve 65.5% translation accuracy on a German-English translation task, vs. 53.2% with IBM Model 4.

1 Introduction
Recent research in machine translation challenges us with the exciting problem of combining statistical methods with prior linguistic knowledge. The power of statistical methods lies in the quick acquisition of knowledge from vast amounts of data, while linguistic analysis both provides a fitting framework for these methods and contributes additional knowledge sources useful for finding correct translations. We present work that successfully defines a subtask of machine translation: the translation of noun phrases. We demonstrate through analysis and experiments that it is feasible and beneficial to treat noun phrase translation as a subtask. This opens the path to dedicated modeling of other types of syntactic constructs, e.g., verb clauses, where issues of subcategorization of the verb play a big role. Focusing on a narrower problem allows not only more dedicated modeling, but also the use of computationally more expensive methods. We go on to tackle the task of noun phrase translation in a maximum entropy reranking framework. Treating translation as a reranking problem instead of as a search problem enables us to use features over the full translation pair. We integrate both empirical and symbolic knowledge sources as features into our system, which outperforms the best known methods in statistical machine translation. Previous work on defining subtasks within statistical machine translation has addressed, e.g., noun-noun pair translation (Cao and Li, 2002) and named entity translation (Al-Onaizan and Knight, 2002).

2 Noun Phrase Translation as a Subtask
In this work, we consider both noun phrases and prepositional phrases, which we will refer to as NP/PPs. We include prepositional phrases for a number of reasons. Both are attached at the clause level. Also, the translation of the preposition often depends heavily on the noun phrase (in the morning). Moreover, the distinction between noun phrases and prepositional phrases is not always clear (note the Japanese bunsetsu), or the two can be hard to separate (German joins preposition and determiner into one lexical unit, e.g., ins for in das, "in the").

2.1 Definition
We define the NP/PPs in a sentence as follows: Given a sentence and its syntactic parse tree, the NP/PPs of the sentence are the subtrees that contain at least one noun and no verb, and are not part of a larger subtree that contains no verb. The NP/PPs are thus the maximal noun phrases of the sentence, not just the base NPs. This definition excludes NP/PPs that consist of only a pronoun. It also excludes noun phrases that contain relative clauses. NP/PPs may have connectives such as and. For an illustration, see Figure 1.

[Figure 1: The noun phrases and prepositional phrases (NP/PPs) addressed in this work: a parse tree of the sentence "the Bush administration has decided to renounce any involvement in a treaty", with the two NP/PPs "the Bush administration" and "any involvement in a treaty" marked.]
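This definition translates almost directly into a detection procedure over a constituency parse: descend the tree and emit every maximal verb-free subtree that contains a noun. The sketch below, using NLTK's Tree class and Penn Treebank tags, is only an illustration of the definition; the tag sets and the example parse are our own simplifications, not the authors' extraction code (pronoun-only NPs fall out automatically here because PRP is not counted as a noun tag).

```python
from nltk import Tree

NOUN_TAGS = {"NN", "NNS", "NNP", "NNPS"}
VERB_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "MD"}

def np_pps(t):
    """Yield maximal subtrees with at least one noun and no verb (Definition 2.1)."""
    tags = {pos for _, pos in t.pos()}
    if not (tags & VERB_TAGS):
        # Verb-free: this subtree is maximal; keep it if it contains a noun.
        if tags & NOUN_TAGS:
            yield t
        return
    # Contains a verb: recurse into the children.
    for child in t:
        if isinstance(child, Tree):
            yield from np_pps(child)

parse = Tree.fromstring(
    "(S (NP (DT the) (NNP Bush) (NN administration))"
    " (VP (VBZ has) (VP (VBN decided)"
    "  (VP (TO to) (VP (VB renounce)"
    "   (NP (NP (DT any) (NN involvement))"
    "       (PP (IN in) (NP (DT a) (NN treaty)))))))))")
for phrase in np_pps(parse):
    print(" ".join(phrase.leaves()))
# -> "the Bush administration" and "any involvement in a treaty", as in Figure 1
```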
2.2 Translation of NP/PPs To understand the behavior of noun phrases in the translation process, we carried out a study to examine how they are translated in a typical parallel corpus. Clearly, we cannot simply expect that certain syntactic types in one language translate to equivalent types in another language. Equivalent types might not even exist. This study answers the questions: Do human translators translate noun phrases in foreign texts into noun phrases in English? If all noun phrases in a foreign text are translated into noun phrases in English, is an acceptable sentence translation possible? What are the properties of noun phrases which cannot be translated as noun phrases without rendering the overall sentence translation unacceptable? Using the Europarl corpus1, we consider a translation task from German to English. We marked the NP/PPs in the German side of a small 100 sentence parallel corpus manually. This yielded 168 NP/PPs according to our definition. We examined if these units are realized as noun phrases in the English side of the parallel corpus. This is the case for 75% of the NP/PPs. Second, we tried to construct translations of these NP/PPs that take the form of NP/PPs in English in an overall acceptable translation of the sentence. We could do this for 98% of the NP/PPs. The four exceptions are: in Anspruch genommen; Gloss: take in demand Abschied nehmen; take good-bye meine Zustimmung geben; give my agreement in der Hauptsache; in the main-thing The first three cases are noun phrases or prepositional phrases that merge with the verb. This is similar to the English construction make an observation, which translates best into some languages as a verb equivalent to observe. The fourth example, literally translated as in the main thing, is best translated as mainly. 1Available at http://www.isi.edu/  koehn/ Why is there such a considerable discrepancy between the number of noun phrases that can be translated as noun phrases into English and noun phrases that are translated as noun phrases? The main reason is that translators generally try to translate the meaning of a sentence, and do not feel bound to preserve the same syntactic structure. This leads them to sometimes arbitrarily restructure the sentence. Also, occasionally the translations are sloppy. The conclusion of this study is: Most NP/PPs in German are translated to English as NP/PPs. Nearly all of them, 98%, can be translated as NP/PPs into English. The exceptions to this rule should be treated as special cases and handled separately. We carried out studies for Chinese-English and Portuguese-English NP/PPs with similar results. 2.3 The Role of External Context One interesting question is if external context is necessary for the translation of noun phrases. While the sentence and document context may be available to the NP/PP subsystem, the English output is only assembled later and therefore harder to integrate. To address this issue, we carried out a manual experiment to check if humans can translate NP/PPs without any external context. Using the same corpus of 168 NP/PPs as in the previous section, a human translator translated 89% of the noun phrases correctly, 9% had the wrong leading preposition, and only 2% were mistranslated with the wrong content word meaning. Picking the right phrase start (e.g., preposition or determiner) can sometimes only be resolved when the English verb is chosen and its subcategorization is known. 
Otherwise, sentence context does not play a big role: word choice can almost always be resolved within the internal context of the noun phrase.

2.4 Integration into an MT System
The findings of the previous section indicate that NP/PP translation can be conceived as a separate subsystem of a complete machine translation system (with due attention to special cases). We will now estimate the importance of such a system. As a general observation, we note that NP/PPs cover roughly half of the words in news or similar texts. All nouns are covered by NP/PPs. Nouns are the biggest group of open-class words in terms of the number of distinct words. New nouns are constantly added to the vocabulary of a language, be it by borrowing foreign words such as Fahrvergnügen or Zeitgeist, by creating new words from acronyms such as AIDS, or by other means. In addition to new words, new phrases with distinct meanings are constantly formed: web server, home page, instant messaging, etc. Learning new concepts from text sources when they become available is an elegant solution for this knowledge acquisition problem.

In a preliminary study, we assess the impact of an NP/PP subsystem on the quality of an overall machine translation system. We try to answer the following questions: What is the impact on a machine translation system if noun phrases are translated in isolation? What is the performance gain for a machine translation system if an NP/PP subsystem provides perfect translations of the noun phrases?

We built a subsystem for NP/PP translation that uses the same modeling as the overall system (IBM Model 4), but is trained only on NP/PPs. With this system, we translate the NP/PPs in isolation, without the assistance of sentence context. These translations are fixed and provided to the general machine translation system, which does not change the fixed NP/PP translations. In a different experiment, we also provided correct translations (motivated by the reference translation) for the NP/PPs to the general machine translation system. We carried out these experiments on the same 100-sentence corpus as in the previous sections. The 164 translatable NP/PPs are marked and translated in isolation. The results are summarized in Table 1.

Table 1: Integration of an NP/PP subsystem: correct sentence translations and BLEU score

  System                            Correct  BLEU
  Basic MT system                   7%       0.16
  NP/PPs translated in isolation    8%       0.17
  Perfect NP/PP translation         24%      0.35

Treating NP/PPs as isolated units, and translating them in isolation with the same methods as the overall system, has little impact on overall translation quality. In fact, we achieved a slight improvement in results due to the fact that NP/PPs are consistently translated as NP/PPs. A perfect NP/PP subsystem would triple the number of correctly translated sentences. Performance is also measured by the BLEU score (Papineni et al., 2002), which measures similarity to the reference translation taken from the English side of the parallel corpus. These findings indicate that solving the NP/PP translation problem would be a significant step toward improving overall translation quality, even if the overall system is not changed in any way. The findings also indicate that isolating the NP/PP translation task as a subtask does not harm performance.

[Figure 2: Design of the noun phrase translation subsystem: the base model generates an n-best list that is rescored using additional features.]
3 Framework When translating a foreign input sentence, we detect its NP/PPs and translate them with an NP/PP translation subsystem. The best translation (or multiple best translations) is then passed on to the full sentence translation system which in turn translates the remaining parts of the sentence and integrates the chosen NP/PP translations. Our NP/PP translation subsystem is designed as follows: We train a translation system on a NP/PP parallel corpus. We use this system to generate an n-best list of possible translations. We then rescore this n-best list with the help of additional features. This design is illustrated by Figure 2. 3.1 Evaluation To evaluate our methods, we automatically detected all of the 1362 NP/PPs in 534 sentences from parts of the Europarl corpus which are not already used as training data. Our evaluation metric is human assessment: Can the translation provided by the system be part of an acceptable translation of the whole sentence? In other words, the noun phrase has to be translated correctly given the sentence context. The NP/PPs are extracted in the same way that NP/PPs are initially detected for the acquisition of the NP/PP training corpus. This means that there are some problems with parse errors, leading to sentence fragments extracted as NP/PPs that cannot be translated correctly. Also, the test corpus contains all detected NP/PPs, even untranslatable ones, as discussed in Section 2.2. 3.2 Acquisition of an NP/PP Training Corpus To train a statistical machine translation model, we need a training corpus of NP/PPs paired with their translation. We create this corpus by extracting NP/PPs from a parallel corpus. First, we word-align the corpus with Giza++ (Och and Ney, 2000). Then, we parse both sides with syntactic parsers (Collins, 1997; Schmidt and Schulte im Walde, 2000)2. Our definition easily translates into an algorithm to detect NP/PPs in a sentence. Recall that in such a corpus, only part of the NP/PPs are translated as such into the foreign language. In addition, the word-alignment and syntactic parses may be faulty. As a consequence, initially only 43.4% of all NP/PPs could be aligned. We raise this number to 67.2% with a number of automatic data cleaning steps: NP/PPs that partially align are broken up Systematic parse errors are fixed Certain word types that are inconsistently tagged as nouns in the two languages are harmonized (e.g., the German wo and the English today). Because adverb + NP/PP constructions (e.g., specifically this issue are inconsistently parsed, 2English parser available at http://www.ai.mit. edu/people/mcollins/code.html, German parser available at http://www.ims.uni-stuttgart.de/ projekte/gramotron/SOFTWARE/LoPar-en.html we always strip the adverb from these constructions. German verbal adjective constructions are broken up if they involve arguments or adjuncts (e.g., der von mir gegessene Kuchen = the by me eaten cake), because this poses problems more related to verbal clauses. Alignment points involving punctuation are stripped from the word alignment. Punctuation is also stripped from the edges of NP/PPs. A total of 737,388 NP/PP pairs are collected from the German-English Europarl corpus as training data. Certain German NP/PPs consistently do not align to NP/PPs in English (see the example in Section 2.2). These are detected at this point. The obtained data of unaligned NP/PPs can be used for dealing with these special cases. 
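To make the corpus-acquisition step more concrete, the sketch below shows one plausible way to pair a detected source-side NP/PP with a target phrase through the word alignment: take the target span covered by the source span's alignment points, and keep the pair only if no alignment link connects that target span to a source word outside the NP/PP. This is a generic consistency check in the spirit of the description above, not the authors' exact procedure; it omits the parsing of the target side and the data-cleaning steps, and the example alignment is invented.

```python
def align_np_pp(src_start, src_end, alignment):
    """Project a source span [src_start, src_end) onto the target side.

    alignment: set of (src_index, tgt_index) word-alignment links.
    Returns (tgt_start, tgt_end) or None if the projection is inconsistent.
    """
    covered = [j for i, j in alignment if src_start <= i < src_end]
    if not covered:
        return None                      # unaligned NP/PP, discard
    tgt_start, tgt_end = min(covered), max(covered) + 1
    # Consistency: no link may tie the target span to a source word
    # outside the NP/PP, otherwise the pair is discarded.
    for i, j in alignment:
        if tgt_start <= j < tgt_end and not (src_start <= i < src_end):
            return None
    return tgt_start, tgt_end

# Toy example: "das neue Haus am Ufer" -> "the new house on the riverbank"
alignment = {(0, 0), (1, 1), (2, 2), (3, 3), (3, 4), (4, 5)}
print(align_np_pp(0, 3, alignment))   # (0, 3): "the new house"
print(align_np_pp(3, 5, alignment))   # (3, 6): "on the riverbank"
```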
3.3 Base Model
Given the NP/PP corpus, we can use any general statistical machine translation method to train a translation system for noun phrases. As a baseline, we use an IBM Model 4 (Brown et al., 1993) system3 with a greedy decoder4 (Germann et al., 2001). We found that phrase-based models achieve better translation quality than IBM Model 4. Such models segment the input sequence into a number of (non-linguistic) phrases, translate each phrase using a phrase translation table, and allow for reordering of phrases in the output. No phrases may be dropped or added. We use a phrase translation model that extracts its phrase translation table from word alignments generated by the Giza++ toolkit. Details of this model are described by Koehn et al. (2003). To obtain an n-best list of candidate translations, we developed a beam search decoder. This decoder employs hypothesis recombination and stores the search states in a search graph, similar to work by Ueffing et al. (2002), which can be mined with standard finite state machine methods5 for n-best lists.

3 Available at http://www-i6.informatik.rwthaachen.de/och/software/GIZA++.html
4 Available at http://www.isi.edu/licensed-sw/rewrite-decoder/
5 We use the Carmel toolkit, available at http://www.isi.edu/licensed-sw/carmel/

3.4 Acceptable Translations in the n-Best List
One key question for our approach is how often an acceptable translation can be found in an n-best list. The answer is illustrated in Figure 3: while an acceptable translation comes out on top for only about 60% of the NP/PPs in our test corpus, one can be found in the 100-best list for over 90% of the NP/PPs.6 This means that rescoring has the potential to raise performance by 30%.

[Figure 3: Acceptable NP/PP translations in the n-best list for different n-best sizes: the fraction of NP/PPs with an acceptable translation rises from about 60% for the 1-best output toward 100% as the list grows (n-best sizes 1 to 64 shown).]

6 Note that these numbers are obtained after compound splitting, described in Section 4.1.

What are the problems with the remaining 10% for which no translation can be found? To investigate this, we carried out an error analysis of these NP/PPs. Results are given in Table 2. The main sources of error are unknown words (34%), words for which the correct translation does not occur in the training data (14%), and errors during tagging and parsing that lead to incorrectly detected NP/PPs (28%). There are also problems with NP/PPs that require complex syntactic restructuring (7%), and NP/PPs that are too long, so that an acceptable translation could not be found in the 100-best list but only further down the list (6%). Finally, there are NP/PPs that cannot be translated as NP/PPs into English (2%), as discussed in Section 2.2.

Table 2: Error analysis for NP/PPs without an acceptable translation in the 100-best list

  Error                             Frequency
  Unknown word                      34%
  Tagging or parsing error          28%
  Unknown translation               14%
  Complex syntactic restructuring   7%
  Too long                          6%
  Untranslatable                    2%
  Other                             9%

3.5 Maximum Entropy Reranking
Given an n-best list of candidates and additional features, we transform the translation task from a search problem into a reranking problem, which we address using a maximum entropy approach. As training data for finding feature values, we collected a development corpus of 683 NP/PPs. Each NP/PP comes with an n-best list of candidate translations that are generated from our base model and are annotated with accuracy judgments.
The initial features are the logarithm of the probability scores that the model assigns to each candidate translation: the language model score, the phrase translation score, and the reordering (distortion) score. The task for the learning method is to find a probability distribution p(e|f) that indicates whether the candidate translation e is an accurate translation of the input f. The decision rule to pick the best translation is e_best = argmax_e p(e|f). The development corpus provides the empirical probability distribution by distributing the probability mass uniformly over the c acceptable translations e_a of each input: \tilde{p}(e_a|f) = 1/c. If none of the candidate translations for a given input f is acceptable, we pick the candidates that are closest to the reference translation as measured by minimum edit distance. We use a maximum entropy framework to parametrize this probability distribution as

  p_{\lambda}(e|f) \propto \exp\Big( \sum_i \lambda_i \, h_i(e, f) \Big),

where the h_i are the feature values and the \lambda_i are the feature weights. Since we have only a sample of the possible translations e for the given input f, we normalize the probability distribution so that \sum_{e \in E_s} p_{\lambda}(e|f) = 1 for our sample E_s of candidate translations. Maximum entropy learning finds a set of feature weights \lambda_i such that E_{p_{\lambda}}[h_i] = E_{\tilde{p}}[h_i] for each feature h_i. These expectations are computed as sums over all candidate translations e for all inputs f:

  \sum_{f} \sum_{e \in E_s} p_{\lambda}(e|f) \, h_i(e, f) = \sum_{f} \sum_{e \in E_s} \tilde{p}(e|f) \, h_i(e, f).

A nice property of maximum entropy training is that it converges to a global optimum. There are a number of methods and tools available to carry out this training of feature weights. We use the toolkit7 developed by Malouf (2002). Berger et al. (1996) and Manning and Schütze (1999) provide good introductions to maximum entropy learning. Note that any other machine learning method, such as support vector machines, could be used as well. We chose maximum entropy for its ability to deal with both real-valued and binary features. This method is also similar to work by Och and Ney (2002), who use maximum entropy to tune model parameters.

7 Available at http://www-rohan.sdsu.edu/malouf/pubs.html
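The reranking step itself is compact once the feature weights are known. The sketch below scores an n-best list with p_λ(e|f) ∝ exp(Σ_i λ_i h_i(e, f)), normalized over the sample of candidates as described above, and returns the argmax. The feature functions and weights are placeholders of our own; weight estimation would be done separately with a maximum entropy toolkit (or any optimizer matching the two expectations above) and is omitted here.

```python
import math

def rerank(candidates, weights):
    """candidates: list of (translation, feature_vector) pairs for one input f.
    weights: list of lambda_i. Returns (best_translation, posterior)."""
    scores = [sum(w * h for w, h in zip(weights, feats))
              for _, feats in candidates]
    m = max(scores)                                # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)                                  # normalize over the n-best sample only
    posteriors = [e / z for e in exps]
    best = max(range(len(candidates)), key=lambda i: posteriors[i])
    return candidates[best][0], posteriors[best]

# Toy n-best list: features = (log LM score, log phrase score, web 7-gram flag)
nbest = [("the action plan",     (-4.1, -2.0, 1.0)),
         ("plan of the action",  (-5.0, -1.7, 0.0)),
         ("the plan for action", (-4.3, -2.2, 1.0))]
lambdas = (1.0, 0.7, 0.5)
print(rerank(nbest, lambdas))
```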
4 Properties of NP/PP Translation
We will now discuss the properties of NP/PP translation that we exploit in order to improve our NP/PP translation subsystem. The first of these (compounding of words) is addressed by preprocessing, while the others motivate features that are used in n-best list reranking.

4.1 Compound Splitting
Compounding of words, especially nouns, is common in a number of languages (German, Dutch, Finnish, Greek) and poses a serious problem for machine translation: the word Aktionsplan may not be known to the system, but if the word were broken up into Aktion and Plan, the system could easily translate it into action plan, or plan for action. The issues in breaking up compounds are knowing the morphological rules for joining words, resolving ambiguities in breaking up a word (Hauptsturm into Haupt-Turm or Haupt-Sturm), and finding the right level of splitting granularity (Frei-Tag or Freitag). Here, we follow an approach introduced by Koehn and Knight (2003): First, we collect frequency statistics over words in our training corpus. Compounds may be broken up only into known words in the corpus. For each potential compound we check whether morphological splitting rules allow us to break it up into such known words. Finally, we pick a splitting option (perhaps not breaking up the compound at all). This decision is based on the frequencies of the words involved: specifically, we pick the splitting option S with the highest geometric mean of the word frequencies of its n parts p_i,

  best = \arg\max_S \Big( \prod_{p_i \in S} \text{count}(p_i) \Big)^{1/n}.

The German side of both the training and test corpus is broken up in this way. The base model is trained on a compound-split corpus, and input is broken up before being passed on to the system. This method works especially well with our phrase-based machine translation model, which can recover more easily from too eager or too timid splits than word-based models can. After performing this type of compound splitting, hardly any errors occur with respect to compounded words.

4.2 Web n-Grams
Generally speaking, the performance of statistical machine translation systems can be improved by better translation modeling (which ensures correspondence between input and output) and language modeling (which ensures fluent English output). Language modeling can be improved by different types of language models (e.g., syntactic language models) or by additional training data for the language model. Here, we investigate the use of the web as a language model. In preliminary studies we found that 30% of all 7-grams in new text can also be found on the web, as measured by consulting the search engine Google,8 which currently indexes 3 billion web pages. This is only the case for 15% of the 7-grams generated by the base translation system. There are various ways one might integrate this vast resource into a machine translation system: by building a traditional n-gram language model, by using the web frequencies of the n-grams in a candidate translation, or by checking whether all n-grams in a candidate translation occur on the web. We settled on using the following binary features: Does the candidate translation as a whole occur on the web? Do all n-grams in the candidate translation occur on the web? Do all n-grams in the candidate translation occur at least 10 times on the web? We use both positive and negative features for n-grams of size 2 to 7. We were not successful in improving performance by building a web n-gram language model or by using the actual frequencies as features. The web may be too noisy to be used in such a straightforward way without significant smoothing efforts.

8 http://www.google.com/
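As a concrete illustration of the splitting heuristic in Section 4.1, the following sketch enumerates segmentations of a word into known corpus words and picks the option with the highest geometric mean of word frequencies, possibly leaving the word unsplit. The corpus counts are invented, and the handling of German filler letters (e.g., the linking -s- that would yield Aktion + Plan rather than Aktions + Plan) is omitted; this is a simplification of the published method, not a reimplementation of it.

```python
counts = {"aktionsplan": 5, "aktion": 120, "aktions": 2, "plan": 710,
          "freitag": 2500, "frei": 880, "tag": 1500}

def splits(word, min_len=3):
    """All segmentations of `word` into known corpus words."""
    if word in counts:
        yield [word]
    for i in range(min_len, len(word) - min_len + 1):
        head, tail = word[:i], word[i:]
        if head in counts:
            for rest in splits(tail, min_len):
                yield [head] + rest

def geo_mean(parts):
    prod = 1.0
    for p in parts:
        prod *= counts[p]
    return prod ** (1.0 / len(parts))

def best_split(word):
    options = list(splits(word.lower())) or [[word.lower()]]
    return max(options, key=geo_mean)

print(best_split("Aktionsplan"))  # ['aktions', 'plan'] with these toy counts
print(best_split("Freitag"))      # the unsplit word wins here
```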
4.3 Syntactic Features
Unlike in decoding, for reranking we have the complete candidate translation available. This means that we can define features that address any property of the full NP/PP translation pair. One such set of features is syntactic features, which are computed over the syntactic parse trees of both the input and the candidate translation. For the input NP/PPs, we keep the syntactic parse tree we inherit from the NP/PP detection process. For the candidate translation, we use a part-of-speech tagger and syntactic parser to annotate it with its most likely syntactic parse tree. We use the following three syntactic features: preservation of the number of nouns (plural nouns generally translate as plural nouns, while singular nouns generally translate as singular nouns); preservation of prepositions (base prepositional phrases within NP/PPs generally translate as prepositional phrases unless movement is involved, baseNPs generally translate as baseNPs, and German genitive baseNPs are treated as basePPs); and determiner-noun agreement (within a baseNP/PP the determiner generally agrees in number with the final noun, e.g., not: this nice green flowers). The features are realized as integers, i.e., how many nouns did not preserve their number during translation? These features encode relevant general syntactic knowledge about the translation of noun phrases. They constitute soft constraints that may be overruled by other components of the system.

5 Results
As described in Section 3.1, we evaluate the performance of our NP/PP translation subsystem on a blind test set of 1362 NP/PPs extracted from 534 sentences. The contributions of the different components of our system are displayed in Table 3. Starting from the IBM Model 4 baseline, we achieve gains by using our phrase-based translation model (+5.5%), applying compound splitting to training and test data (+2.8%), re-estimating the weights for the system components using the maximum entropy reranking framework (+1.5%), adding web count features (+1.7%), and adding syntactic features (+0.8%). Overall we achieve an improvement of 12.3% over the baseline. Improvements of 2.5% are statistically significant given the size of our test corpus.

Table 3: Improving noun phrase translation with special modeling and additional features: correct NP/PPs and BLEU score for overall sentence translation

  System                NP/PP Correct    BLEU
  IBM Model 4           724  (53.2%)     0.172
  Phrase Model          800  (58.7%)     0.188
  Compound Splitting    838  (61.5%)     0.195
  Re-Estimated Param.   858  (63.0%)     0.197
  Web Count Features    881  (64.7%)     0.198
  Syntactic Features    892  (65.5%)     0.199

Table 3 also provides scores for overall sentence translation quality. The chosen NP/PP translations are integrated into a general IBM Model 4 system that translates whole sentences. Performance is measured by the BLEU score, which measures similarity to a reference translation. As reference translation we used the English side of the parallel corpus. The BLEU scores track the improvements of our components, with an overall gain of 0.027.

6 Conclusions
We have shown that noun phrase translation can be separated out as a subtask. Our manual experiments show that NP/PPs can almost always be translated as NP/PPs across many languages, and that the translation of NP/PPs usually does not require additional external context. We also demonstrated that the reduced complexity of noun phrase translation allows us to address the problem in a maximum entropy reranking framework, where we only consider the 100-best candidates of a base translation system. This enables us to introduce any features that can be computed over a full translation pair, instead of being limited to features that can be integrated into the search algorithm of the decoder, which only has access to partial translations. We improved the performance of noun phrase translation by 12.3% by using a phrase translation model, a maximum entropy reranking method, and by addressing specific properties of noun phrase translation: compound splitting, using the web as a language model, and syntactic features. We showed not only improvement on NP/PP translation over the best known methods, but also improved overall sentence translation quality. Our long-term goal is to address additional syntactic constructs in a similarly dedicated fashion. The next step would be verb clauses, where modeling of the subcategorization of the verb is important.

References
Al-Onaizan, Y. and Knight, K. (2002). Translating named entities using monolingual and bilingual resources. In Proceedings of ACL. Berger, A. L., Pietra, S. A. D., and Pietra, V. J. D. (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–69. Brown, P. F., Pietra, S. A.
D., Pietra, V. J. D., and Mercer, R. L. (1993). The mathematics of statistical machine translation. Computational Linguistics, 19(2):263–313. Cao, Y. and Li, H. (2002). Base noun phrase translation using web data and the EM algorithm. In Proceedings of CoLing. Collins, M. (1997). Three generative, lexicalized models for statistical parsing. In Proceedings of ACL 35. Germann, U., Jahr, M., Knight, K., Marcu, D., and Yamada, K. (2001). Fast decoding and optimal decoding for machine translation. In Proceedings of ACL 39. Koehn, P. and Knight, K. (2003). Empirical methods for compound splitting. In Proceedings of EACL. Koehn, P., Och, F. J., and Marcu, D. (2003). Statistical phrase based translation. In Proceedings of HLT/NAACL. Malouf, R. (2002). A comparison of algorithms for maximum entropy parameter estimation. In Proceedings of CoNLL. Manning, C. D. and Sch¨utze, H. (1999). Foundations of Statistical Natural Language Processing. MIT Press. Och, F. J. and Ney, H. (2000). Improved statistical alignment models. In Proceedings of ACL, pages 440–447, Hongkong, China. Och, F. J. and Ney, H. (2002). Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of ACL. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL. Schmidt, H. and Schulte im Walde, S. (2000). Robust German noun chunking with a probabilistic context-free grammar. In Proceedings of COLING. Ueffing, N., Och, F. J., and Ney, H. (2002). Generation of word graphs in statistical machine translation. In Proceedings of EMNLP.
2003
40
Effective Phrase Translation Extraction from Alignment Models Ashish Venugopal Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Stephan Vogel Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Alex Waibel Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213 [email protected] Abstract Phrase level translation models are effective in improving translation quality by addressing the problem of local re-ordering across language boundaries. Methods that attempt to fundamentally modify the traditional IBM translation model to incorporate phrases typically do so at a prohibitive computational cost. We present a technique that begins with improved IBM models to create phrase level knowledge sources that effectively represent local as well as global phrasal context. Our method is robust to noisy alignments at both the sentence and corpus level, delivering high quality phrase level translation pairs that contribute to significant improvements in translation quality (as measured by the BLEU metric) over word based lexica as well as a competing alignment based method. 1 Introduction Statistical Machine Translation defines the task of translating a source language sentence      into a target language sentence       . The traditional framework presented in (Brown et al., 1993) assumes a generative process where the source sentence is passed through a noisy stochastic process to produce the target sentence. The task can be formally stated as finding the   s.t   =    ! " where the search component is commonly referred to as the decoding step (Wang and Waibel, 1998). Within the generative model, the Bayes reformulation is used to estimate  #!  %$    &! ' where  ' is considered the language model, and  &!  is the translation model; the IBM (Brown et al., 1993) models being the de facto standard. Direct translation approaches (Foster, 2000) consider estimating   ! " directly, and work by (Och and Ney, 2002) show that similar or improved results are achieved by replacing  ! ' in the optimization with  #!  , at the cost of deviating from the Bayesian framework. Regardless of the approach, the question of accurately estimating a model of translation from a large parallel or comparable corpus is one of the defining components within statistical machine translation. Re-ordering effects across languages have been modeled in several ways, including word-based (Brown et al., 1993), template-based (Och et al., 1999) and syntax-based (Yamada, Knight, 2001). Analyzing these models from a generative mindset, they all assume that the atomic unit of lexical content is the word, and re-ordering effects are applied above that level. (Marcu, Wong, 2002) illustrate the effects of assuming that lexical correspondence can only be modeled at the word level, and motivate a joint probability model that explicitly generates phrase level lexical content across both languages. (Wu, 1995) presents a bracketing method that models re-ordering at the sentence level. Both (Marcu, Wong, 2002; Wu, 1995) model the reordering phenomenon effectively, but at significant computational expense, and tend to be difficult to scale to long sentences. Reasons to introduce phrase level translation knowledge sources have been adequately shown and confirmed by (Och, Ney, 2000), and we focus on methods to build these sources from existing, mature components within the translation process. 
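Before the specifics, the decision rule sketched in this introduction can be written compactly: the noisy-channel decomposition scores a candidate e by P(f|e)·P(e), while the direct variant scores it by P(e|f). The toy comparison below only illustrates these two decision rules over a fixed candidate set; all of the probabilities are made-up placeholders, not model estimates from this paper.

```python
candidates  = ["the plan", "a plan", "plan the"]
p_e         = {"the plan": 0.5, "a plan": 0.4, "plan the": 0.1}   # language model P(e)
p_f_given_e = {"the plan": 0.3, "a plan": 0.2, "plan the": 0.6}   # translation model P(f|e)
p_e_given_f = {"the plan": 0.6, "a plan": 0.3, "plan the": 0.1}   # direct model P(e|f)

noisy_channel = max(candidates, key=lambda e: p_f_given_e[e] * p_e[e])
direct        = max(candidates, key=lambda e: p_e_given_f[e])
print(noisy_channel, direct)
```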
This paper presents a method of phrase extraction from alignment data generated by IBM Models. By working directly from alignment data with appropriate measures taken to extract accurate translation pairs, we try to avoid the computational complexity that can result from methods that try to create globally consistent alignment model phrase segmentations. We first describe the information available within alignment data, and go on to describe a method for extracting high quality phrase translation pairs from such data. We then discuss the implications of adding phrasal translation pairs to the decoding process, and present evaluation results that show significant improvements when applying the described extraction technique. We end with a discussion of strengths and weaknesses of this method and the potential for future work. 2 Motivation Alignment models associate words and their translations at the sentence level creating a translation lexicon across the language pair. For each sentence pair, the model also presents the maximally likely association between each source and target word across the sentence pair, forming an alignment map for each sentence pair in the training corpus. The most likely alignment pattern between a source and target sentence under the trained alignment model will be referred to as the maximum approximation, which under HMM alignment (Vogel et al., 1996) model corresponds to the Viterbi path. A set of words in the source sentence associated with a set of words in the target sentence is considered a phrasal pair and forms a partition within the alignment map. Figure ( . shows a source and target sentence pair with points indicating alignment points. A phrasal translation pair within a sentence pair can be represented as the 4-tuple hypothesis )+* ,'-'./#-10-'.324 representing an index ,'-10 and length 5.6/#-'.327 within the source and the target sentence pair  , respectively. The phrasal extraction task involves selecting phrasal hypotheses based on the alignment Figure 1: Sample source 8:9 and target ;9 aligment map. Partitions/Potential translations for source phrase s2s3 are shown by rounded boxes. model (both the translation lexicon as well as the maximal approximation). The maximal approximation captures context at the sentence level, while the lexicon provides a corpus level translation estimate, motivating the alignment model as a starting point for phrasal extraction. The extraction technique must be able to handle alignments that are only partially correct, as well as cases where the sentence pairs have been incorrectly matched as parallel translations within the corpus. Accommodating for the noisy corpus is an increasingly important component of the translation process, especially when considering languages where no manually aligned parallel corpus is available. Building a phrasal lexicon involves Generation, Scoring, and Pruning steps, corresponding to generating a set of candidate translation pairs, scoring them based on the translation model, and pruning them to account for noise within the data as well as the extraction process. 3 Generation The generation step refers to the process of identifying source phrases that require translations and then extracting translations from the alignment model data. We begin by identifying all source language ngrams upto some < within the training corpus. When the test sentences that require translation are known, we can simply extract those n-grams that appear in the test sentences. 
For each of these n-grams, we create a set of candidate translations extracted from the corpus. The primary motivation to restrict the identification step to the test sentence n-grams is savings in computational expense, and the result is a phrasal translation source that extracts translation pairs limited to the test sentences. For each source language n-gram within the pool, we have to find a set of candidate translations. The generation task is formally defined as finding  )>= in Equation (1)  ) = ?@:A )+* ,'-'.6/ -10B-'.C2D FE  ) G9     93HIKJ   (1) where  is the source n-gram for which we are extracting translations,  ) is the set of all partitions, and L9 refers to the word at position , in the source sentence  .  ) = is then the set of all translations for source n-gram  , and M is a specific translation hypothesis within this set. When considering only those hypothesis translation extracted from a particular sentence pair  , we use  ) =  . We extract these candidates from the alignment map by examining each sentence pair where the source n-gram occurs, and extracting all possible target phrase translations using a sliding window approach. We extract candidate translations of phrase length ( to N , starting at offset O to NQPR( . Figure 1. shows circular boxes indicating each potential partition region. One particular partition is indicated by the shading. Over all occurrences of the n-gram within the sentences as well as across sentences, a sizeable candidate pool is generated that attempts the cover the translated usage of the source n-gram  within the corpus. This set is large, and contains several spurious translations, and does not consider other source side n-grams within each sentence. The deliberate choice to avoid creating a consistent partitioning of the sentence pairs across n-grams reflects the ability to model partially correct alignments within sentences. This sliding window can be restricted to exclude word-word translations, ie .6/TS  ( , .32US  ( if other sources are available that are known to be more accurate. Now that the candidate pool has been generated, it needs to be scored and pruned to reflect relative confidence between candidate translations and to remove spurious translations due to the sliding window approach. 4 Scoring The candidate translations for the source n-gram now need to be scored and ranked according to some measure of confidence. Each candidate translation pair defines a partition within the sentence map, and this partitioning can be scored for confidence in translation quality. We estimate translation confidence by measures from three models; the estimation from the maximum approximation (alignment map), estimation from the word based translation lexicon, and language specific measures. Each of the scoring methods discussed below contributes to the final score under (2) V , <W . 8YX[Z\ M E  ) = ]_^ 9 8YX[Z`\#9 M E  ) =  7aGb (2) where c 9ed 9 = ( and M refers to a translation hypothesis for a given source n-gram  . From now on we will refer to a 8YX[Z\ with regard to a particular  implicitly. 4.1 Alignment Map We define two kinds of scores, within sentence consistency and across sentence consistency from the alignment map, in order to represent local and global context effects. 4.2 Within Sentence The partition defined by each candidate translation pair imposes constraints over the maximum approximation hypothesis for sentences in which it occurs. 
We evaluate the partition by examining its consistency with the maximum approximation hypothesis by considering the alignment hypothesis points within the sentence. An alignment point f *  -gh (source, target) is said to be consistent if it occurs within the partition defined by ) * ,'-'. / -10-'. 2 . fji"k l is considered inconsistent in two cases. ,Tm  m,:no.6/ and LgqpF0 or gsrF0tno.321 (3) 0umgqmv0wn .C2 and  px, or  ro,:n ./ (4) Each )+* ,'-'.6/ -10B-'.C24 in  ) =  ( ,s  y, + ./ defines  ) determines a set of consistent and inconsistent points. Figure 1. shows inconsistent points with respect to the shaded partition by drawing an X over the alignment point. The within sentence consistency scoring metric is defined in Equation (5). 8YX[Z\ a / )+* ,'-'./#-10-'.324  e z X[Z<  z , <WX[Z< yn z X[Z`<  (5) This measure represents consistency of )+* ,'-'./#-10-'.324 within the maximal approximation alignment for sentence pair  . 4.3 Across Sentence Several hypothesis within  ) =  are similar or identical to those in  ) = 5{ where  S |{ . We want to score hypothesis that are consistent across sentences higher than those that occur rarely, as the former are assumed to be the correct translations in context. We want to account for different contexts across sentences; therefore we want to highlight similar translations, not simply exact matches. We use a word level Levenstein distance to compare the target side hypotheses within  ) = . Each element M within  ) = (the complete candidate translation list for  ) is assigned the average Levenstein distance with all other elements as its across sentence consistence score; effectively performing a single pass average link clustering to identify the correct translations. 8YX[Z\#} /`M ~ (   c€ ‚"ƒ"„†…e‡ M -Gˆ M (6) where …e‡ calculates the Levenshein distance between the target phrases within two hypothesis M and ˆ M , ‰ is the number of elements in  ) = . The higher the 8YX[ZB\ } / , the more likely the hypothesis pair is a correct translation. The clustering approach accounts for noise due to incorrect sentence alignment, as well as the different contexts in which a particular source n-gram can be used. As predicted by the formulation of this method, preference is given towards shorter target translations. This effect can be countered by introducing a phrase length model to approximate the difference in phrases lengths across the language boundary. This will be discussed further as a language specific scoring method. 4.4 Alignment Lexicon The methods presented above used the maximum approximation to score candidate translation hypotheses. The translation lexicon generated by the IBM models provides translation estimates at the word level built on the complete training corpus. These corpus level estimates can be integrated into our scoring paradigm to balance the sentence level estimates from the alignment map methods. The translation lexicon provides a conditional probability estimate   i !  l for each f *  -gh (  i refers to the word at position  in sentence  ) within the maximum approximation. Depending on the direction in which the traditional IBM models are trained, we can either condition on the source or target side, while joint probability models can give us a bidirectional estimate. These translation probability estimates are used to weight the f *  -gh within the methods described above. Instead of simply counting the number of consistent/inconsistent f *  -gŠ , we sum the probability estimates   i !  l for each f *  -gh . 
So far we have only considered the points within the partition where alignment points are predicted by the maximal approximation. The translation lexicon provides estimates at the word level, so we can construct a scoring measure for the complete region within )+* ,'-'./#-10-'.324 that models the complete probability of the partition. The lexical scoring equation below models this effect. 8‹X Z\ IŒ i )+* ,'-'./#-10-'.324  e ^ 9Ž i ŽLIKJ   Ž l ŽLI’‘   i !  l (7) This method prefers longer target side phrases due to the sum over the target words within the partition. Although it would also prefer short source side phrases, we are only concerned with comparing hypothesis partitions for a given source n-gram  . 4.5 Language Specific The nature of the phrasal association between languages varies depending on the level of inflexion, morphology as well as other factors. The predominant language specific correction to the scoring techniques discussed above models differences in phrase lengths across languages. For example, when comparing English and Chinese translations, we see that on average, the English sentence is approximately 1.3 times longer (under our current segmentation in the small data track). To model these language specific effects, we introduce a phrase length scoring component that is based on the ratio of sentence length between languages. We build a sentence length model based on the DiffRatio statistic defined as ‡ ,7“%“”  4, Z  [•   where I is the source sentence length and J is the target sentence length. Let –˜—‹™ be the average ‡ ,7“%“”  4, Z over the sentences in the corpus, and š%› —‹™ be the variance; thereby defining a normal distribution over the DiffRatio statistic. Using the standard Z normalization technique under a normal distribution parameterized by – —‹™ š › —‹™ , we can estimate the probability that a new DiffRatio calculated on the phrasal pair can be generated by the model, giving us the scoring estimate below. 8YX[ZB\ I’Œ7œ )+* ,-'.6/ -10B-'.32D  ž Ÿ¡5./ -'.32!’¢ –˜—‹™ š › —‹™ž£ (8) To improve the model we might consider examining known phrase translation pairs if this data is available. We explore the language specific difference further by noting that English phrases contain several function words that typically align to the empty Chinese word. We accounted for this effect within the scoring process by treating all target language (English) phrases that only differed by the function words on the phrase boundary as the same translation. The burden of selecting the appropriate hypothesis within the decoding process is moved towards the language model under this corrective strategy. 5 Pruning The list of candidate translations for each source ngram  is large, and must be pruned to select the most likely set of translations. This pruning is required to ensure that the decoding process remains computationally tractable. Simple threshold methods that rank hypotheses by their final score and only save the top ‰ hypotheses will not work here, since phrases differ in the number of possible correct translations they could have when used in different contexts. Given the score ordered set of candidate phrases  ) = , we would like to label some subset as incorrect translations and remove them from the set. We approach this task as a density estimation problem where we need to separate the distribution of the incorrectly translated hypothesis from the distribution of the likely translations. 
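The phrase-length component of Section 4.5 can be sketched as follows: estimate the mean and variance of the target/source length ratio on the training corpus, then score a candidate pair by how typical its own ratio is under a normal model. Because the published form of the score is garbled in this copy, the density-based scoring below is an approximation of the described idea under our own assumptions, not a verbatim reimplementation, and the corpus lengths are toy values.

```python
import math

def fit_ratio_model(sentence_pairs):
    """sentence_pairs: iterable of (source_len, target_len)."""
    ratios = [t / s for s, t in sentence_pairs if s > 0]
    mu = sum(ratios) / len(ratios)
    var = sum((r - mu) ** 2 for r in ratios) / len(ratios)
    return mu, var

def length_score(src_len, tgt_len, mu, var):
    """Normal density of the candidate pair's length ratio."""
    r = tgt_len / src_len
    return math.exp(-((r - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

corpus = [(10, 13), (7, 9), (22, 30), (5, 6), (14, 19)]
mu, var = fit_ratio_model(corpus)
print(length_score(3, 4, mu, var), length_score(3, 9, mu, var))
```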
Instead of using the maximum likelihood criteria, we use the maximal separation criteria ie. selecting a splitting point within the scores to maximize the difference of the mean score between distributions as shown below. 8: .,¤ 8‹X Z\   * – ‚"¥ * P¦– ‚B§ * (9) where – ‚"¥ * is the mean score of those hypothesis with a score less than  , and – ‚"§ * is the mean score of those hypothesis with a greater than or equal to  . Once pruning is completed, we convert the scores into a probability measure conditioned on the source n-gram  and assign the probability estimate as the translation probability for the hypothesis M as shown below.  ¨E M ! w©  ž V , <W . 8YX[ZB\ M c € ‚ƒ«ª „˜¬ V , <W . 8YX[ZB\ ˆ M (10) (10) calculates direct translation probabilities, ie  #!  . As mentioned earlier, (Och and Ney, 2002), show that using direction translation estimates in the decoding process as compared with calculating  &!  as prescribed by the Bayesian framework does not reduce translation quality. Our results corroborate these findings and we use (10) as the phrase level translation model estimate within our decoder. 6 Integration Phrase translation pairs that are generated by the method described in this paper are finally scored with estimates of translation probability, which can be conditioned on the target language if necessary. These estimates fit cleanly into the decoding process, except for the issue of phrase length. Traditional word lexicons propose translations for one source word, while with phrase translations, a single hypothesis pair can span several words in the source or target language. Comparing between a path that uses a phrase compared to one that uses multiple words (even if the constituent words are the same) is difficult. The word level pathway involves the product of several probabilities, whereas the phrasal path is represented by one probability score. Potential solutions are to introduce translation length models or to learn scaling factors for phrases of different lengths. Results in this paper have been generated by empirically determining a scaling factor that was inversely proportional to the lenth of the phrase, causing each translation to have a score comparable to the product of the word to word translations within the phrase. 7 HMM Phrase Extraction In order to compare our method to a well understood phrase baseline, we present a method that ex‰†U\ Ÿ  ,   ­ M , <˜\  \ ®+< .6,4 M Small 3540 90K 115K Large 77558 2.46M 2.69M Testing 993 27K NA Table 1: Corpus figures indicating no. of sentence pairs, no. of Chinese and English words tracts phrases by harvesting the Viterbi path from an HMM alignment model (Vogel et al., 1996). The HMM alignment model is computationally feasible even for very long sentences, and the phrase extraction method does not have limits on the length of extracted target side phrase. For each source phrase ranging from positions , to , › the target phrase is given by 0#¯ 9°œ   , <%9 ¢0±  ,4 £ and 0 ¯ } i  L9 ¢0T  ,4 £ , where ,²³,#?´?´?@, › and 0 refers to an index in the target sentence pair. We calculate phrase translation probabilities (the scores for each extracted phrase) based on a statistical lexicon for the constituent words in the phrase. As the IBM1 alignment model gives the global optimum for the lexical probabilities, this is the natural choice. This leads to the phrase translation probability  µ ! µ  ž ( ¶ ^ 9     9 !   (11) where ¶ and N denotes the length of the target phrase µ  , source phrase µ  , and the word probabilities   9 !  
 are estimated using the IBM1 word alignment model. The phrases extracted from this method can be used directly within our in-house decoder without the significant changes that other phrase based methods could require. 8 Experimentation IBM alignment models were trained up to model 4 using GIZA (Al Onaizan et al., 1999) from Chinese to English and Chinese to English on two tracks of data. Figures describing the characteristics of each track as well as the test sentences are shown in Table (1). All the data were extracted from a newswire source. We applied our in house segmentation toolkit on the Chinese data and performed basic preprocessing which included; lowercasing, tagging dates, times and numbers on both languages. Translation quality is evaluated by two metrics, (MTEval, 2002) and BLEU (Papeneni et al., 2001), both of which measure n-gram matches between the translated text and the reference translations. NIST is more sensitive to unigram precision due to its emphasis toward high perplexity words. Four reference translations were available for each test sentence. We first compare against a system built using word level lexica only to reiterate the impact of phrase translation, and then show gains by our method over a system that utilizes phrase extracted from the HMM method. The word level system consisted of a hand crafted (Linguistics Data Consortium) bilingual dictionary and a statistical lexicon derived from training IBM model 1. In our experiments we found that although training higher order IBM models does yield lower alignment error rates when measured against manually aligned sentences, the highest translation quality is achieved by using a lexicon extracted from the Model 1 alignment. Experiments were run with a language model (LM) built on a 20 million word news source corpus using our in house decoder which performs a monotone decoding without reordering. To implement our phrase extraction technique, the maximum approximation alignments were combined with the union operation as described in (Och et al., 1999), resulting in a dense but inaccurate alignment map as measured against a human aligned gold standard. Since bi-directional translation models are available, scoring was performed in both directions, using IBM Model 1 lexica for the within sentence scoring. The final phrase level scores computed in each direction were combined by a weighted average before the pruning step. Source side phrases were restricted to be of length 2 or higher since word lexica were available. Weights for each scoring metric were determined empirically against a validation set (alignment map scores were assigned the highest weighting). Table (2) shows results on the small data track, while Table (3) shows results on the large data track. The technique described in this paper is labelled Ÿ MŠB  \  in the tables. The results show that the phrase extraction method described in this paper contribute to statistically significant improvements over the baseline word and phrase level(HMM) systems. When compared against the HMM phrases, our technique show statistically significant improvements. 
Statistical significance is evaluated by considering deviations in sentence-level NIST scores over the 993-sentence test set; a NIST improvement of 0.05 is statistically significant at the 0.01 alpha level. In combination with the HMM method, our technique delivers further gains, providing evidence that different kinds of phrases have been learnt by each method. The improvement from our method is more apparent in the NIST score than in the BLEU score. We attribute this effect to the language-specific correction that treats target phrases differing only in boundary function words as the same phrase. This correction places the burden on the language model to select the correct phrase instance from several possible translations. Correctly translating function words dramatically boosts the NIST measure, as it places emphasis on high-perplexity words, i.e., those with diverse contexts.

Table 2: Small track results

  System                   BLEU    NIST
  Baseline-Word            0.135   6.19
  Baseline-Word+Phrases    0.167   6.71
  Baseline-HMM             0.166   6.49
  Baseline-HMM+Phrases     0.174   6.71

Table 3: Large track results

  System                   BLEU    NIST
  Baseline-Word            0.147   6.62
  Baseline-Word+Phrases    0.190   7.48
  Baseline-HMM             0.187   7.42
  Baseline-HMM+Phrases     0.197   7.60

9 Conclusions
We have presented a method to efficiently extract phrase relationships from IBM word alignment models by leveraging the maximum approximation as well as the word lexicon. Our method is significantly less computationally expensive than methods that attempt to explicitly model phrase-level interactions within alignment models, and it recovers well from noisy alignments at the sentence and corpus level. The significant improvements over the baseline carry through when this method is combined with other phrasal and word-level methods. Further experimentation is required to fully appreciate the robustness of this technique, especially when considering a comparable, but not parallel, corpus. The language-specific scoring methods have a significant impact on translation quality, and further work to extend these methods to represent specific characteristics of each language promises to deliver further improvements. Although the method performs well, it lacks an explanatory framework for the extraction process; instead it leverages the well-understood fundamentals of the traditional IBM models. Combining phrase-level knowledge sources within a decoder in an effective manner is currently our primary research interest, specifically integrating knowledge sources of varying reliability. Our method has been shown to be an effective contributing component within the translation framework, and we expect to continue to improve the state of the art in machine translation by improving phrasal extraction and integration.
Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. Proc. of the North American Association for Computational Linguistics.
Franz Josef Och and Hermann Ney. 2000. A Comparison of Alignment Models for Statistical Machine Translation. Proc. of the 18th International Conference on Computational Linguistics, Saarbrucken, Germany.
Franz Josef Och, Christoph Tillmann, Hermann Ney. 1999. Improved Alignment Models for Statistical Machine Translation. Proc. of the Joint Conference on Empirical Methods in Natural Language Processing, pp. 20-28, MD.
Al-Onaizan, Jan Curin, Michael Jahr, Kevin Knight, John Lafferty, Dan Melamed, Franz-Josef Och, David Purdy, Noah H. Smith and David Yarowsky. 1999. Statistical Machine Translation, Final Report, JHU Summer Workshop.
Kishore Papineni, Salim Roukos, Todd Ward. 2001. BLEU: A Method for Automatic Evaluation of Machine Translation. IBM Research Report, RC22176.
Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based Word Alignment in Statistical Translation. Proc. of COLING '96: The 16th International Conference on Computational Linguistics, pp. 836-841, Copenhagen, Denmark.
Yeyi Wang, Alex Waibel. 1998. Fast Decoding for Statistical Machine Translation. Proc. of the International Conference on Spoken Language Processing.
Dekai Wu. 1995. Stochastic Inversion Transduction Grammars, with Application to Segmentation, Bracketing, and Alignment of Parallel Corpora. Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), pp. 1328-1335, Montreal.
Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. Proc. of the 39th Annual Meeting of the Association for Computational Linguistics, France.
2003
41
Uncertainty Reduction in Collaborative Bootstrapping: Measure and Algorithm

Yunbo Cao, Microsoft Research Asia, 5F Sigma Center, No. 49 Zhichun Road, Haidian, Beijing, China, 100080, [email protected]
Hang Li, Microsoft Research Asia, 5F Sigma Center, No. 49 Zhichun Road, Haidian, Beijing, China, 100080, [email protected]
Li Lian, Computer Science Department, Fudan University, No. 220 Handan Road, Shanghai, China, 200433, [email protected]

Abstract

This paper proposes the use of uncertainty reduction in machine learning methods such as co-training and bilingual bootstrapping, which are referred to collectively as 'collaborative bootstrapping'. The paper indicates that uncertainty reduction is an important factor for enhancing the performance of collaborative bootstrapping. It proposes a new measure for representing the degree of uncertainty correlation of the two classifiers in collaborative bootstrapping and uses the measure in the analysis of collaborative bootstrapping. Furthermore, it proposes a new algorithm for collaborative bootstrapping on the basis of uncertainty reduction. Experimental results have verified the correctness of the analysis and have demonstrated the significance of the new algorithm.

1 Introduction

We consider here the problem of collaborative bootstrapping. It includes co-training (Blum and Mitchell, 1998; Collins and Singer, 1998; Nigam and Ghani, 2000) and bilingual bootstrapping (Li and Li, 2002). Collaborative bootstrapping begins with a small number of labelled data and a large number of unlabelled data. It trains two (types of) classifiers from the labelled data, uses the two classifiers to label some unlabelled data, trains again two new classifiers from all the labelled data, and repeats the above process. During the process, the two classifiers help each other by exchanging the labelled data. In co-training, the two classifiers have different feature structures, and in bilingual bootstrapping, the two classifiers have different class structures. Dasgupta et al. (2001) and Abney (2002) conducted theoretical analyses on the performance (generalization error) of co-training. Their analyses, however, cannot be directly used in studies of co-training in (Nigam & Ghani, 2000) and bilingual bootstrapping. In this paper, we propose the use of uncertainty reduction in the study of collaborative bootstrapping (both co-training and bilingual bootstrapping). We point out that uncertainty reduction is an important factor for enhancing the performances of the classifiers in collaborative bootstrapping. Here, the uncertainty of a classifier is defined as the portion of instances on which it cannot make classification decisions. Exchanging labelled data in bootstrapping can help reduce the uncertainties of classifiers. Uncertainty reduction was previously used in active learning; to the best of our knowledge, this paper is the first to use it for bootstrapping. We propose a new measure for representing the uncertainty correlation between the two classifiers in collaborative bootstrapping and refer to it as the 'uncertainty correlation coefficient' (UCC). We use UCC for the analysis of collaborative bootstrapping. We also propose a new algorithm to improve the performance of existing collaborative bootstrapping algorithms. In the algorithm, one classifier always asks the other classifier to label the most uncertain instances for it. Experimental results indicate that our theoretical analysis is correct. Experimental results also indicate that our new algorithm outperforms existing algorithms.
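To make the setting concrete, the following sketch shows a generic collaborative bootstrapping loop of the kind described above. It is an illustration under stated assumptions, not code from the paper: `train_pair` and `select` are hypothetical hooks, and each classifier is assumed to map an instance to a (label, confidence) pair.

```python
def collaborative_bootstrap(labeled, unlabeled, train_pair, select, rounds=20):
    """Generic collaborative bootstrapping loop (a sketch).

    train_pair(labeled) -> (h1, h2): the two collaborating classifiers.
    select(labeler, receiver, pool) -> list of (instance, label) pairs the
        labeler adds to the shared training set.  The choice of `select` is
        exactly where the existing algorithms (most certain for the labeler)
        and the proposed one (also most uncertain for the receiver) differ.
    """
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        h1, h2 = train_pair(labeled)
        newly = select(h1, h2, pool) + select(h2, h1, pool)
        if not newly:
            break
        labeled = labeled + newly
        chosen = {id(x) for x, _ in newly}
        pool = [x for x in pool if id(x) not in chosen]
    return train_pair(labeled)
```

The same skeleton covers both co-training (the two classifiers see different feature views) and bilingual bootstrapping (the two classifiers have different class structures); only the construction inside `train_pair` changes.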
2 Related Work

2.1 Co-Training and Bilingual Bootstrapping

Co-training, proposed by Blum and Mitchell (1998), conducts two bootstrapping processes in parallel and makes them collaborate with each other. More specifically, it repeatedly trains two classifiers from the labelled data, labels some unlabelled data with the two classifiers, and exchanges the newly labelled data between the two classifiers. Blum and Mitchell assume that the two classifiers are based on two subsets of the entire feature set and that the two subsets are conditionally independent of one another given a class. This assumption is called 'view independence'. In their algorithm of co-training, one classifier always asks the other classifier to label the most certain instances for the collaborator. The word sense disambiguation method proposed in Yarowsky (1995) can also be viewed as a kind of co-training. Since the assumption of view independence cannot always be met in practice, Collins and Singer (1998) proposed a co-training algorithm based on 'agreement' between the classifiers. As for theoretical analysis, Dasgupta et al. (2001) gave a bound on the generalization error of co-training within the framework of PAC learning. The generalization error is a function of 'disagreement' between the two classifiers. Dasgupta et al.'s result is based on the view independence assumption, which is strict in practice. Abney (2002) refined Dasgupta et al.'s result by relaxing the view independence assumption with a new constraint. He also proposed a new co-training algorithm on the basis of the constraint. Nigam and Ghani (2000) empirically demonstrated that bootstrapping with a random feature split (i.e., co-training), even violating the view independence assumption, can still work better than bootstrapping without a feature split (i.e., bootstrapping with a single classifier). For other work on co-training, see (Muslea et al. 2000; Pierce and Cardie 2001). Li and Li (2002) proposed an algorithm for word sense disambiguation in translation between two languages, which they called 'bilingual bootstrapping'. Instead of making an assumption on the features, bilingual bootstrapping makes an assumption on the classes. Specifically, it assumes that the classes of the classifiers in bootstrapping do not overlap. Thus, bilingual bootstrapping is different from co-training. Because the notion of agreement is not involved in bootstrapping in (Nigam & Ghani 2000) and bilingual bootstrapping, Dasgupta et al.'s and Abney's analyses cannot be directly used on them.

2.2 Active Learning

Active learning is a learning paradigm. Instead of passively using all the given labelled instances for training as in supervised learning, active learning repeatedly asks a supervisor to label what it considers to be the most critical instances and performs training with the labelled instances. Thus, active learning can eventually create a reliable classifier with fewer labelled instances than supervised learning. One of the strategies for selecting critical instances is called 'uncertainty reduction' (e.g., Lewis and Gale, 1994). Under this strategy, the instances that are most uncertain for the current classifier are selected and asked to be labelled by a supervisor. The notion of uncertainty reduction was not previously used for bootstrapping, to the best of our knowledge.

3 Collaborative Bootstrapping and Uncertainty Reduction

We consider the collaborative bootstrapping problem. Let X denote a set of instances (feature vectors) and let Y denote a set of labels (classes).
Given a number of labelled instances, we are to construct a function h : X → Y. We also refer to it as a classifier. In collaborative bootstrapping, we consider the use of two partial functions h_1 and h_2, which either output a class label or a special symbol ⊥ denoting 'no decision'. Co-training and bilingual bootstrapping are two examples of collaborative bootstrapping.

In co-training, the two collaborating classifiers are assumed to be based on two different views, namely two different subsets of the entire feature set. Formally, the two views are respectively interpreted as two functions X_1(x) and X_2(x), x ∈ X. Thus, the two collaborating classifiers h_1 and h_2 in co-training can be respectively represented as h_1(X_1(x)) and h_2(X_2(x)).

In bilingual bootstrapping, a number of classifiers are created in the two languages. The classes of the classifiers correspond to word senses and do not overlap, as shown in Figure 1. For example, the classifier h_1(x | E_1) in language 1 takes sense 2 and sense 3 as classes. The classifier h_2(x | C_1) in language 2 takes sense 1 and sense 2 as classes, and the classifier h_2(x | C_2) takes sense 3 and sense 4 as classes. Here we use E_1, C_1, C_2 to denote different words in the two languages. Collaborative bootstrapping is performed between the classifiers h_1(·) in language 1 and the classifiers h_2(·) in language 2 (see Li and Li 2002 for details). For the classifier h_1(x | E_1) in language 1, we assume that there is a pseudo classifier h_2(x | C_1, C_2) in language 2, which functions as a collaborator of h_1(x | E_1). The pseudo classifier h_2(x | C_1, C_2) is based on h_2(x | C_1) and h_2(x | C_2), and takes sense 2 and sense 3 as classes. Formally, the two collaborating classifiers (one real classifier and one pseudo classifier) in bilingual bootstrapping are respectively represented as h_1(x | E) and h_2(x | C), x ∈ X.

Next, we introduce the notion of uncertainty reduction in collaborative bootstrapping.

Definition 1  The uncertainty U(h) of a classifier h is defined as:

    U(h) = P({x | h(x) = ⊥, x ∈ X})    (1)

In practice, we define U(h) as

    U(h) = P({x | C(h(x) = y) < θ, ∀y ∈ Y, x ∈ X})    (2)

where θ denotes a predetermined threshold and C(·) denotes the confidence score of the classifier h.

Definition 2  The conditional uncertainty U(h | y) of a classifier h given a class y is defined as:

    U(h | y) = P({x | h(x) = ⊥, x ∈ X} | Y = y)    (3)

We note that the uncertainty (or conditional uncertainty) of a classifier (a partial function) is an indicator of the accuracy of the classifier. Let us consider an ideal case in which the classifier achieves 100% accuracy when it can make a classification decision and achieves 50% accuracy when it cannot (assume that there are only two classes). Thus, the total accuracy on the entire data space is 1 − 0.5 × U(h).

Definition 3  Given the two classifiers h_1 and h_2 in collaborative bootstrapping, the uncertainty reduction of h_1 with respect to h_2 (denoted as UR(h_1 \ h_2)) is defined as

    UR(h_1 \ h_2) = P({x | h_1(x) = ⊥, h_2(x) ≠ ⊥, x ∈ X})    (4)

Similarly, we have

    UR(h_2 \ h_1) = P({x | h_2(x) = ⊥, h_1(x) ≠ ⊥, x ∈ X})

Uncertainty reduction is an important factor for determining the performance of collaborative bootstrapping.
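A minimal sketch of how these quantities could be estimated on a sample is given below; the 'no decision' symbol ⊥ is represented by None, and the function names and data layout are assumptions made here for illustration rather than anything specified in the paper.

```python
ABSTAIN = None  # stands for the "no decision" symbol ⊥ in the definitions above

def uncertainty(preds):
    """Empirical U(h): the fraction of instances on which h abstains.
    `preds` is a list of h(x) outputs, with ABSTAIN meaning no decision."""
    return sum(p is ABSTAIN for p in preds) / len(preds)

def conditional_uncertainty(preds, gold, y):
    """Empirical U(h | y): abstention rate among instances whose true class is y."""
    idx = [i for i, g in enumerate(gold) if g == y]
    if not idx:
        return 0.0  # class y not present in the sample
    return sum(preds[i] is ABSTAIN for i in idx) / len(idx)

def uncertainty_reduction(preds1, preds2):
    """Empirical UR(h1 \\ h2): fraction of instances where h1 abstains but h2
    does not, i.e. instances on which h2 could resolve h1's uncertainty."""
    n = len(preds1)
    return sum(p1 is ABSTAIN and p2 is not ABSTAIN
               for p1, p2 in zip(preds1, preds2)) / n
```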
In collaborative bootstrapping, the more the uncertainty of one classifier can be reduced by the other classifier, the higher the performance that can be achieved by the classifier (the more effective the collaboration is).

4 Uncertainty Correlation Coefficient Measure

4.1 Measure

We introduce the measure of uncertainty correlation coefficient (UCC) to collaborative bootstrapping.

Definition 4  Given the two classifiers h_1 and h_2, the conditional uncertainty correlation coefficient (CUCC) between h_1 and h_2 given a class y (denoted as r^y_{h_1 h_2}) is defined as

    r^y_{h_1 h_2} = P(h_1(x) = ⊥, h_2(x) = ⊥ | Y = y) / ( P(h_1(x) = ⊥ | Y = y) · P(h_2(x) = ⊥ | Y = y) )    (5)

Definition 5  The uncertainty correlation coefficient (UCC) between h_1 and h_2 (denoted as R_{h_1 h_2}) is defined as

    R_{h_1 h_2} = Σ_y P(y) · r^y_{h_1 h_2}    (6)

Figure 1: Bilingual Bootstrapping

UCC represents the degree to which the uncertainties of the two classifiers are related. If UCC is high, then there is a large portion of instances which are uncertain for both of the classifiers. Note that UCC is a symmetric measure from both classifiers' perspectives, while UR is an asymmetric measure from one classifier's perspective (either UR(h_1 \ h_2) or UR(h_2 \ h_1)).

4.2 Theoretical Analysis

Theorem 1 reveals the relationship between the CUCC (UCC) measure and uncertainty reduction. Assume that the classifier h_1 can collaborate with either of the two classifiers h_2 and h_2'. The two classifiers h_2 and h_2' have equal conditional uncertainties, and the CUCC values between h_1 and h_2' are smaller than the CUCC values between h_1 and h_2. Then, according to Theorem 1, h_1 should collaborate with h_2', because h_2' can help reduce its uncertainty more and thus improve its accuracy more.

Theorem 1  Given the two classifier pairs (h_1, h_2) and (h_1, h_2'), if r^y_{h_1 h_2} ≥ r^y_{h_1 h_2'} for all y ∈ Y, and U(h_2 | y) = U(h_2' | y) for all y ∈ Y, then

    UR(h_1 \ h_2) ≤ UR(h_1 \ h_2')

Proof: We can decompose the uncertainty U(h_1) of h_1 as follows:

    U(h_1) = Σ_y P({x | h_1(x) = ⊥, x ∈ X} | Y = y) P(Y = y)
           = Σ_y ( P({x | h_1(x) = ⊥, h_2(x) = ⊥, x ∈ X} | Y = y) + P({x | h_1(x) = ⊥, h_2(x) ≠ ⊥, x ∈ X} | Y = y) ) P(Y = y)
           = Σ_y ( r^y_{h_1 h_2} · P({x | h_1(x) = ⊥, x ∈ X} | Y = y) · P({x | h_2(x) = ⊥, x ∈ X} | Y = y) + P({x | h_1(x) = ⊥, h_2(x) ≠ ⊥, x ∈ X} | Y = y) ) P(Y = y)
           = Σ_y r^y_{h_1 h_2} U(h_1 | y) U(h_2 | y) P(Y = y) + P({x | h_1(x) = ⊥, h_2(x) ≠ ⊥, x ∈ X})

Thus,

    UR(h_1 \ h_2) = P({x | h_1(x) = ⊥, h_2(x) ≠ ⊥, x ∈ X}) = U(h_1) − Σ_y r^y_{h_1 h_2} U(h_1 | y) U(h_2 | y) P(Y = y)

Similarly, we have

    UR(h_1 \ h_2') = U(h_1) − Σ_y r^y_{h_1 h_2'} U(h_1 | y) U(h_2' | y) P(Y = y)

Under the conditions r^y_{h_1 h_2} ≥ r^y_{h_1 h_2'}, ∀y ∈ Y, and U(h_2 | y) = U(h_2' | y), ∀y ∈ Y, we have UR(h_1 \ h_2) ≤ UR(h_1 \ h_2'). ∎

Theorem 1 states that the lower the CUCC values are, the higher the performances that can be achieved in collaborative bootstrapping.

Definition 6  The two classifiers in co-training are said to satisfy the view independence assumption (Blum and Mitchell, 1998) if the following equations hold for any class y:
    P(X_1 = x_1 | Y = y, X_2 = x_2) = P(X_1 = x_1 | Y = y)
    P(X_2 = x_2 | Y = y, X_1 = x_1) = P(X_2 = x_2 | Y = y)

Theorem 2  If the view independence assumption holds, then r^y_{h_1 h_2} = 1.0 holds for any class y.

Proof: According to (Abney, 2002), view independence implies classifier independence:

    P(h_1 = u | Y = y, h_2 = v) = P(h_1 = u | Y = y)
    P(h_2 = v | Y = y, h_1 = u) = P(h_2 = v | Y = y)

We can rewrite them as

    P(h_1 = u, h_2 = v | Y = y) = P(h_1 = u | Y = y) P(h_2 = v | Y = y)

Thus, we have

    P({x | h_1(x) = ⊥, h_2(x) = ⊥, x ∈ X} | Y = y) = P({x | h_1(x) = ⊥, x ∈ X} | Y = y) · P({x | h_2(x) = ⊥, x ∈ X} | Y = y)

This means r^y_{h_1 h_2} = 1.0, ∀y ∈ Y. ∎

Theorem 2 indicates that in co-training with view independence, the CUCC values (r^y_{h_1 h_2}, ∀y ∈ Y) are small, since by definition 0 < r^y_{h_1 h_2} < ∞. According to Theorem 1, it is then easy to reduce the uncertainties of the classifiers. That is to say, co-training with view independence can perform well. How to conduct a theoretical evaluation of the CUCC measure in bilingual bootstrapping is still an open problem.

4.3 Experimental Results

We conducted experiments to empirically evaluate the UCC values of collaborative bootstrapping. We also investigated the relationship between UCC and accuracy. The results indicate that the theoretical analysis in Section 4.2 is correct. In the experiments, we define accuracy as the percentage of instances whose assigned labels agree with their 'true' labels. Moreover, when we refer to UCC, we mean the UCC value on the test data. We set the value of θ in Equation (2) to 0.8.

Co-Training for Artificial Data Classification

We used the data in (Nigam and Ghani 2000) to conduct co-training. We utilized the articles from four newsgroups (see Table 1). Each group had 1000 texts. By joining together randomly selected texts from each of the two newsgroups in the first row as positive instances and joining together randomly selected texts from each of the two newsgroups in the second row as negative instances, we created two-class classification data with view independence. The joining was performed under the condition that the words in the two newsgroups in the first column came from one vocabulary, while the words in the newsgroups in the second column came from the other vocabulary. We also created a set of classification data without view independence. To do so, we randomly split all the features of the pseudo texts into two subsets such that each of the subsets contained half of the features. We next applied the co-training algorithm to the two data sets. We conducted the same pre-processing in the two experiments. We discarded the header of each text, removed stop words from each text, and made each text have the same length, as was done in (Nigam and Ghani, 2000). We discarded 18 texts from the entire 2000 texts, because their main contents were binary codes, encoding errors, etc. We randomly separated the data and performed co-training with a random feature split and co-training with the natural feature split five times. The results obtained (cf. Table 2) were thus averaged over five trials. In each trial, we used 3 texts for each class as labelled training instances, 976 texts as testing instances, and the remaining 1000 texts as unlabelled training instances. From Table 2, we see that the UCC value of the natural split (in which view independence holds) is lower than that of the random split (in which view independence does not hold).
That is to say, with the natural split, there are fewer instances which are uncertain for both of the classifiers. The accuracy of the natural split is higher than that of the random split. Theorem 1 states that the lower the CUCC values are, the higher the performances that can be achieved. The results in Table 2 agree with the claim of Theorem 1. (Note that it is easier to use CUCC for theoretical analysis, but easier to use UCC for empirical analysis.)

Table 1: Artificial Data for Co-Training
Class   Feature Set A               Feature Set B
Pos     comp.os.ms-windows.misc    talk.politics.misc
Neg     comp.sys.ibm.pc.hardware   talk.politics.guns

Table 2: Results with Artificial Data
Feature         Accuracy   UCC
Natural Split   0.928      1.006
Random Split    0.712      2.399

We also see that the UCC value of the natural split (view independence) is about 1.0. This result agrees with Theorem 2.

Co-Training for Web Page Classification

We used the same data as in (Blum and Mitchell, 1998) to perform co-training for web page classification. The web page data consisted of 1051 web pages collected from the computer science departments of four universities. The goal of classification was to determine whether a web page was concerned with an academic course. 22% of the pages were actually related to academic courses. The features for each page could be separated into two independent parts: one part consisted of words occurring in the current page, and the other part consisted of words occurring in the anchor texts pointing to the current page. We randomly split the data into three subsets: a labelled training set, an unlabelled training set, and a test set. The labelled training set had 3 course pages and 9 non-course pages. The test set had 25% of the pages. The unlabelled training set had the remaining data. We used the data to perform co-training and web page classification. The setting for the experiment was almost the same as that of Nigam and Ghani's. One exception was that we did not conduct feature selection, because we were not able to follow their method from their paper. We repeated the experiment five times and evaluated the results in terms of UCC and accuracy. Table 3 shows the average accuracy and UCC value over the five trials.

Table 3: Results with Web Page Data and Bilingual Bootstrapping Data
Data                          Accuracy   UCC
Web Page                      0.943      1.147
Word Sense Disambiguation:
  bass                        0.925      2.648
  drug                        0.868      0.986
  duty                        0.751      0.840
  palm                        0.924      1.174
  plant                       0.959      1.226
  space                       0.878      1.007
  tank                        0.844      1.177

Bilingual Bootstrapping

We also used the same data as in (Li and Li, 2002) to conduct bilingual bootstrapping and word sense disambiguation. The sense disambiguation data were related to seven ambiguous English words, each having two Chinese translations. The goal was to determine the correct Chinese translations of the ambiguous English words, given English sentences containing the ambiguous words. For each word, there were two seed words used as labelled instances for training, a large number of unlabelled instances (sentences) in both English and Chinese for training, and about 200 labelled instances (sentences) for testing. Details on the data are shown in Table 4. We used the data to perform bilingual bootstrapping and word sense disambiguation. The setting for the experiment was exactly the same as that of Li and Li's. Table 3 shows the accuracy and UCC value for each word. From Table 3 we see that both co-training and bilingual bootstrapping have low UCC values (around 1.0).
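For concreteness, a rough sketch of how the UCC statistic reported in these tables could be computed from held-out predictions is given below. Abstentions are represented by None, and classes with no observed abstentions for either classifier are simply skipped; that handling is a choice made here for illustration, not something stated in the paper.

```python
from collections import Counter

ABSTAIN = None

def ucc(preds1, preds2, gold):
    """Empirical UCC (Definition 5): sum over classes y of P(y) * r^y, where
    r^y = P(h1=ABSTAIN, h2=ABSTAIN | y) / (P(h1=ABSTAIN | y) * P(h2=ABSTAIN | y)).
    `gold` holds the true class of each test instance."""
    n = len(gold)
    total = 0.0
    for y, n_y in Counter(gold).items():
        idx = [i for i, g in enumerate(gold) if g == y]
        p1 = sum(preds1[i] is ABSTAIN for i in idx) / n_y
        p2 = sum(preds2[i] is ABSTAIN for i in idx) / n_y
        p12 = sum(preds1[i] is ABSTAIN and preds2[i] is ABSTAIN for i in idx) / n_y
        if p1 > 0 and p2 > 0:            # skip degenerate classes
            total += (n_y / n) * (p12 / (p1 * p2))
    return total
```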
With lower UCC (CUCC) values, higher performances can be achieved, according to Theorem 1. The accuracies are indeed high. Note that since the features and classes for each word in bilingual bootstrapping and those for web page classification in co-training are different, it is not meaningful to directly compare their UCC values.

Table 4: Data for Bilingual Bootstrapping
Word    Unlabelled (English)   Unlabelled (Chinese)   Seed words             Test instances
bass    142                    8811                   fish / music           200
drug    3053                   5398                   treatment / smuggler   197
duty    1428                   4338                   discharge / export     197
palm    366                    465                    tree / hand            197
plant   7542                   24977                  industry / life        197
space   3897                   14178                  volume / outer         197
tank    417                    1400                   combat / fuel          199
Total   16845                  59567                  -                      1384

5 Uncertainty Reduction Algorithm

5.1 Algorithm

We propose a new algorithm for collaborative bootstrapping (both co-training and bilingual bootstrapping). In the algorithm, the collaboration between the classifiers is driven by uncertainty reduction. Specifically, one classifier always selects the unlabelled instances that are most uncertain for it and asks the other classifier to label them. Thus, the two classifiers can help each other more effectively. There exists, therefore, a similarity between our algorithm and active learning. In active learning the learner always asks the supervisor to label the most uncertain examples for it, while in our algorithm one classifier always asks the other classifier to label the most uncertain examples for it. Figure 2 shows the algorithm.

Input: A set of labeled instances and a set of unlabelled instances.
Loop while there exist unlabelled instances {
    Create classifier h_1 using the labeled instances;
    Create classifier h_2 using the labeled instances;
    For each class (Y = y) {
        Pick up b_y unlabelled instances whose labels (Y = y) are most certain for h_1 and are most uncertain for h_2, label them with h_1, and add them into the set of labeled instances;
        Pick up b_y unlabelled instances whose labels (Y = y) are most certain for h_2 and are most uncertain for h_1, label them with h_2, and add them into the set of labeled instances;
    }
}
Output: Two classifiers h_1 and h_2
Figure 2: Uncertainty Reduction Algorithm

Actually, our new algorithm is different from the previous algorithm in only one point; Figure 2 highlights the point in italics. In the previous algorithm, when a classifier labels unlabelled instances, it labels those instances whose labels are most certain for the classifier. In contrast, in our new algorithm, when a classifier labels unlabelled instances, it labels those instances whose labels are most certain for the classifier but at the same time most uncertain for the other classifier. As one implementation, for each class y, h_1 first selects its most certain a_y instances, h_2 next selects from them its most uncertain b_y instances (a_y ≥ b_y), and finally h_1 labels the b_y instances with label y (collaboration in the opposite direction is performed similarly). We use this implementation in our experiments described below.

5.2 Experimental Results

We conducted experiments to test the effectiveness of our new algorithm. The experimental results indicate that the new algorithm performs better than the previous algorithm. We refer to them as 'new' and 'old' respectively.

Co-Training for Artificial Data Classification

We used the artificial data in Section 4.3 and conducted co-training with both the old and new algorithms. Table 5 shows the results.
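A minimal sketch of the selection step of Figure 2, following the two-stage a_y/b_y implementation just described, is shown here. It assumes each classifier returns a (label, confidence) pair, uses the receiving classifier's confidence as a simple proxy for its uncertainty, and picks illustrative values for a_y and b_y; none of these details are prescribed by the paper.

```python
def select_for_exchange(h1, h2, pool, y, a_y=20, b_y=5):
    """One direction of the exchange in Figure 2 for class y (a sketch).
    h1 labels instances for the shared training set; h2 is the collaborator."""
    # Stage 1: unlabelled instances that h1 assigns to class y, most confident first.
    candidates = [x for x in pool if h1(x)[0] == y]
    most_certain = sorted(candidates, key=lambda x: h1(x)[1], reverse=True)[:a_y]
    # Stage 2: among those, keep the ones h2 is least confident about
    # (the "most uncertain for the other classifier" condition).
    least_certain_for_h2 = sorted(most_certain, key=lambda x: h2(x)[1])
    return [(x, y) for x in least_certain_for_h2[:b_y]]  # labelled with h1's class y
```

Plugged into the generic loop sketched earlier, this selection function is the only piece that changes between the old and new algorithms.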
We see that in co-training the new algorithm performs as well as the old algorithm when UCC is low (view independence holds), and the new algorithm performs significantly better than the old algorithm when UCC is high (view independence does not hold).

Table 5: Accuracies with Artificial Data
Feature         Accuracy (Old)   Accuracy (New)   UCC
Natural Split   0.928            0.924            1.006
Random Split    0.712            0.775            2.399

Co-Training for Web Page Classification

We used the web page classification data in Section 4.3 and conducted co-training using both the old and new algorithms. Table 6 shows the results.

Table 6: Accuracies with Web Page Data
Data       Accuracy (Old)   Accuracy (New)   UCC
Web Page   0.943            0.943            1.147

We see that the new algorithm performs as well as the old algorithm for this data set. Note that here UCC is low.

Bilingual Bootstrapping

We used the word sense disambiguation data in Section 4.3 and conducted bilingual bootstrapping using both the old and new algorithms. Table 7 shows the results.

Table 7: Accuracies with Bilingual Bootstrapping Data
Word      Accuracy (Old)   Accuracy (New)   UCC
bass      0.925            0.955            2.648
drug      0.868            0.863            0.986
duty      0.751            0.797            0.840
palm      0.924            0.914            1.174
plant     0.959            0.944            1.226
space     0.878            0.888            1.007
tank      0.844            0.854            1.177
Average   0.878            0.888            -

We see that the performance of the new algorithm is slightly better than that of the old algorithm. Note that here the UCC values are also low. We conclude that for both co-training and bilingual bootstrapping, the new algorithm performs significantly better than the old algorithm when UCC is high, and performs as well as the old algorithm when UCC is low. Recall that when UCC is high, there are more instances which are uncertain for both classifiers, and when UCC is low, there are fewer instances which are uncertain for both classifiers. Note that in practice it is difficult to find a situation in which UCC is completely low (e.g., one in which the view independence assumption completely holds), and thus the new algorithm will be more useful than the old algorithm in practice. To verify this, we conducted an additional experiment. Again, since the features and classes for each word in bilingual bootstrapping and those for web page classification in co-training are different, it is not meaningful to directly compare their UCC values.

Co-Training for News Article Classification

In the additional experiment, we used the data from two newsgroups (comp.graphics and comp.os.ms-windows.misc) in the dataset of (Joachims, 1997) to conduct co-training and text classification. There were 1000 texts for each group. We viewed the former group as the positive class and the latter group as the negative class. We applied the new and old algorithms. We conducted 20 trials in the experimentation. In each trial we randomly split the data into labelled training, unlabelled training, and test data sets. We used 3 texts per class as labelled instances for training, 994 texts for testing, and the remaining 1000 texts as unlabelled instances for training. We performed the same preprocessing as that in (Nigam and Ghani 2000). Table 8 shows the results of the 20 trials, with accuracies averaged over each group of 5 trials. From the table, we see that co-training with the new algorithm significantly outperforms co-training using the old algorithm and also 'single bootstrapping'. Here, 'single bootstrapping' refers to the conventional bootstrapping method in which a single classifier repeatedly boosts its performance using all the features.
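For contrast, here is a sketch of the 'single bootstrapping' baseline as characterized above: one classifier over all features repeatedly labels its most confident unlabelled instances and retrains on them. The batch size and stopping condition are illustrative assumptions, not values from the experiments.

```python
def single_bootstrap(labeled, unlabeled, train, rounds=20, batch=10):
    """Conventional self-training with a single classifier (a sketch).
    train(labeled) -> h, where h(x) returns a (label, confidence) pair."""
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        h = train(labeled)
        ranked = sorted(pool, key=lambda x: h(x)[1], reverse=True)
        chosen, pool = ranked[:batch], ranked[batch:]
        labeled = labeled + [(x, h(x)[0]) for x in chosen]
    return train(labeled)
```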
The above experimental results indicate that our new algorithm for collaborative bootstrapping performs significantly better than the old algorithm when the collaboration is difficult, and performs as well as the old algorithm when the collaboration is easy. Therefore, it is better to always employ the new algorithm. Another conclusion from the results is that we can apply our new algorithm to any single bootstrapping problem: more specifically, we can randomly split the feature set and use our algorithm to perform co-training with the split subsets.

Table 8: Accuracies with News Data (average accuracy)
              Single Bootstrapping   Collaborative (Old)   Collaborative (New)
Trial 1-5     0.725                  0.737                 0.768
Trial 6-10    0.708                  0.702                 0.793
Trial 11-15   0.679                  0.647                 0.769
Trial 16-20   0.699                  0.689                 0.767
All           0.703                  0.694                 0.774

6 Conclusion

This paper has theoretically and empirically demonstrated that uncertainty reduction is the essence of collaborative bootstrapping, which includes both co-training and bilingual bootstrapping. The paper has conducted a new theoretical analysis of collaborative bootstrapping, and has proposed a new algorithm for collaborative bootstrapping, both on the basis of uncertainty reduction. Experimental results have verified the correctness of the analysis and have indicated that the new algorithm performs better than the existing algorithms.

References

S. Abney, 2002. Bootstrapping. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.
A. Blum and T. Mitchell, 1998. Combining Labeled Data and Unlabelled Data with Co-training. In Proceedings of the 11th Annual Conference on Computational Learning Theory.
M. Collins and Y. Singer, 1999. Unsupervised Models for Named Entity Classification. In Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora.
S. Dasgupta, M. Littman and D. McAllester, 2001. PAC Generalization Bounds for Co-Training. In Proceedings of Neural Information Processing Systems, 2001.
T. Joachims, 1997. A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. In Proceedings of the 14th International Conference on Machine Learning.
D. Lewis and W. Gale, 1994. A Sequential Algorithm for Training Text Classifiers. In Proceedings of the 17th International ACM-SIGIR Conference on Research and Development in Information Retrieval.
C. Li and H. Li, 2002. Word Translation Disambiguation Using Bilingual Bootstrapping. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics.
I. Muslea, S. Minton, and C. A. Knoblock, 2000. Selective Sampling With Redundant Views. In Proceedings of the Seventeenth National Conference on Artificial Intelligence.
K. Nigam and R. Ghani, 2000. Analyzing the Effectiveness and Applicability of Co-Training. In Proceedings of the 9th International Conference on Information and Knowledge Management.
D. Pierce and C. Cardie, 2001. Limitations of Co-Training for Natural Language Learning from Large Datasets. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP-2001).
D. Yarowsky, 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics.
2003
42
A Bootstrapping Approach to Named Entity Classification Using Successive Learners

Cheng Niu, Wei Li, Jihong Ding, Rohini K. Srihari
Cymfony Inc., 600 Essjay Road, Williamsville, NY 14221, USA.
{cniu, wei, jding, rohini}@cymfony.com

Abstract

This paper presents a new bootstrapping approach to named entity (NE) classification. This approach only requires a few common noun/pronoun seeds that correspond to the concept for the target NE type, e.g. he/she/man/woman for PERSON NE. The entire bootstrapping procedure is implemented as training two successive learners: (i) a decision list is used to learn the parsing-based high-precision NE rules; (ii) a Hidden Markov Model is then trained to learn string sequence-based NE patterns. The second learner uses the training corpus automatically tagged by the first learner. The resulting NE system approaches supervised NE performance for some NE types. The system also demonstrates intuitive support for tagging user-defined NE types. The differences of this approach from co-training-based NE bootstrapping are also discussed.

1 Introduction

Named Entity (NE) tagging is a fundamental task for natural language processing and information extraction. An NE tagger recognizes and classifies text chunks that represent various proper names, time, or numerical expressions. Seven types of named entities are defined in the Message Understanding Conference (MUC) standards, namely, PERSON (PER), ORGANIZATION (ORG), LOCATION (LOC), TIME, DATE, MONEY, and PERCENT (MUC-7 1998). (This paper only focuses on classifying proper names; time and numerical NEs are not yet explored using this method.)

There is considerable research on NE tagging using different techniques. These include systems based on handcrafted rules (Krupka 1998), as well as systems using supervised machine learning, such as the Hidden Markov Model (HMM) (Bikel 1997) and the Maximum Entropy Model (Borthwick 1998). The state-of-the-art rule-based systems and supervised learning systems can reach near-human performance for NE tagging in a targeted domain. However, both approaches face a serious knowledge bottleneck, making rapid domain porting difficult. Such systems cannot effectively support user-defined named entities. That is the motivation for using unsupervised or weakly-supervised machine learning that only requires a raw corpus from a given domain for this NE research.

(Cucchiarelli & Velardi 2001) discussed boosting the performance of an existing NE tagger by unsupervised learning based on parsing structures. (Cucerzan & Yarowsky 1999), (Collins & Singer 1999) and (Kim 2002) presented various techniques using co-training schemes for NE extraction seeded by a small list of proper names or handcrafted NE rules. NE tagging has two tasks: (i) NE chunking; (ii) NE classification. Parsing-supported NE bootstrapping systems, including ours, only focus on NE classification, assuming NE chunks have been constructed by the parser.

The key idea of co-training is the separation of features into several orthogonal views. In the case of NE classification, usually one view uses the context evidence and the other relies on the lexicon evidence. Learners corresponding to different views learn from each other iteratively. One issue of co-training is the error propagation problem in the process of the iterative learning. The rule precision drops iteration by iteration. In the early stages, only a few instances are available for learning.
This makes some powerful statistical models such as HMM difficult to use due to the extremely sparse data. This paper presents a new bootstrapping approach using successive learning and concept-based seeds. The successive learning is as follows. First, some parsing-based NE rules are learned with high precision but limited recall. Then, these rules are applied to a large raw corpus to automatically generate a tagged corpus. Finally, an HMM-based NE tagger is trained using this corpus. There is no iterative learning between the two learners, hence the process is free of the error propagation problem. The resulting NE system approaches supervised NE performance for some NE types.

To derive the parsing-based learner, instead of seeding the bootstrapping process with NE instances from a proper name list or handcrafted NE rules as in (Cucerzan & Yarowsky 1999), (Collins & Singer 1999) and (Kim 2002), the system only requires a few common noun or pronoun seeds that correspond to the concept for the targeted NE, e.g. he/she/man/woman for PERSON NE. Such concept-based seeds share grammatical structures with the corresponding NEs, hence a parser is utilized to support bootstrapping. Since pronouns and common nouns occur more often than NE instances, richer contextual evidence is available for effective learning. Using concept-based seeds, the parsing-based NE rules can be learned in one iteration so that the error propagation problem in iterative learning can be avoided. This method is also shown to be effective for supporting NE domain porting and is intuitive for configuring an NE system to tag user-defined NE types.

The remaining part of the paper is organized as follows. The overall system design is presented in Section 2. Section 3 describes the parsing-based NE learning. Section 4 presents the automatic construction of an annotated NE corpus by parsing-based NE classification. Section 5 presents the string-level HMM NE learning. Benchmarks are shown in Section 6. Section 7 is the Conclusion.

2 System Design

Figure 1 shows the overall system architecture. Before the bootstrapping is started, a large raw training corpus is parsed by the English parser from our InfoXtract system (Srihari et al. 2003). The bootstrapping experiment reported in this paper is based on a corpus containing ~100,000 news articles and a total of ~88,000,000 words. The parsed corpus is saved into a repository, which supports fast retrieval by a keyword-based indexing scheme. Although the parsing-based NE learner is found to suffer from a recall problem, we can apply the learned rules to a huge parsed corpus. In other words, the availability of an almost unlimited raw corpus compensates for the modest recall. As a result, large quantities of NE instances are automatically acquired. An automatically annotated NE corpus can then be constructed by extracting the tagged instances plus their neighboring words from the repository.

Figure 1. Bootstrapping System Architecture

The bootstrapping is performed as follows:
1. Concept-based seeds are provided by the user.
2. Parsing structures involving concept-based seeds are retrieved from the repository to train a decision list for NE classification.
3. The learned rules are applied to the NE candidates stored in the repository.
4.
The proper names tagged in Step 3 and their neighboring words are put together as an NE annotated corpus.
5. An HMM is trained based on the annotated corpus.

3 Parsing-based NE Rule Learning

The training of the first NE learner has three major properties: (i) the use of concept-based seeds, (ii) support from the parser, and (iii) representation as a decision list.

This new bootstrapping approach is based on the observation that there is an underlying concept for any proper name type and this concept can be easily expressed by a set of common nouns or pronouns, similar to how concepts are defined by synsets in WordNet (Beckwith 1991). Concept-based seeds are conceptually equivalent to the proper name types that they represent. These seeds can be provided by a user intuitively. For example, a user can use pill, drug, medicine, etc. as concept-based seeds to guide the system in learning rules to tag MEDICINE names. This process is fairly intuitive, creating a favorable environment for configuring the NE system to the types of names sought by the user. An important characteristic of concept-based seeds is that they occur much more often than proper name seeds, hence they are effective in guiding the non-iterative NE bootstrapping.

A parser is necessary for concept-based NE bootstrapping. This is due to the fact that concept-based seeds only share pattern similarity with the corresponding NEs at the structural level, not at the string sequence level. For example, at the string sequence level, PERSON names are often preceded by a set of prefixing title words Mr./Mrs./Miss/Dr. etc., but the corresponding common noun seeds man/woman etc. cannot appear in such patterns. However, at the structural level, the concept-based seeds share the same or similar linguistic patterns (e.g. Subject-Verb-Object patterns) with the corresponding types of proper names. The rationale behind using concept-based seeds in NE bootstrapping is similar to that for parsing-based word clustering (Lin 1998): conceptually similar words occur in structurally similar context. In fact, the anaphoric function of pronouns and common nouns to represent antecedent NEs indicates the substitutability of proper names by the corresponding common nouns or pronouns. For example, this man can be substituted for the proper name John Smith in almost all structural patterns. Following the same rationale, a bootstrapping approach is applied to the semantic lexicon acquisition task (Thelen & Riloff 2002).

The InfoXtract parser supports dependency parsing based on the linguistic units constructed by our shallow parser (Srihari et al. 2003). Five types of the decoded dependency relationships are used for parsing-based NE rule learning. These are all directional, binary dependency links between linguistic units:

(1) Has_Predicate: from logical subject to verb
e.g. He said she would want him to join. →
he: Has_Predicate(say)
she: Has_Predicate(want)
him: Has_Predicate(join)

(2) Object_Of: from logical object to verb
e.g. This company was founded to provide new telecommunication services. →
company: Object_Of(found)
service: Object_Of(provide)

(3) Has_AMod: from noun to its adjective modifier
e.g. He is a smart, handsome young man. →
man: Has_AMod(smart)
man: Has_AMod(handsome)
man: Has_AMod(young)

(4) Possess: from the possessive noun-modifier to head noun
e.g. His son was elected as mayor of the city. →
his: Possess(son)
city: Possess(mayor)

(5) IsA: equivalence relation from one NP to another NP
e.g. Microsoft spokesman John Smith is a popular man. →
spokesman: IsA(John Smith)
John Smith: IsA(man)

The concept-based seeds used in the experiments are:
1. PER: he, she, his, her, him, man, woman
2. LOC: city, province, town, village
3. ORG: company, firm, organization, bank, airline, army, committee, government, school, university
4. PRO: car, truck, vehicle, product, plane, aircraft, computer, software, operating system, data-base, book, platform, network

Note that the last target tag PRO (PRODUCT) is beyond the MUC NE standards: we added this NE type for the purpose of testing the system's capability in supporting user-defined NE types.

From the parsed corpus in the repository, all instances of the concept-based seeds associated with one or more of the five dependency relations are retrieved: 821,267 instances in total in our experiment. Each seed instance was assigned a concept tag corresponding to its NE type. For example, each instance of he is marked as PER. The marked instances plus their associated parsing relationships form an annotated NE corpus, as shown below:

he/PER: Has_Predicate(say)
she/PER: Has_Predicate(get)
company/ORG: Object_Of(compel)
city/LOC: Possess(mayor)
car/PRO: Object_Of(manufacture) Has_AMod(high-quality)
…

This training corpus supports the Decision List Learning, which learns homogeneous rules (Segal & Etzioni 1994). The accuracy of each rule was evaluated using Laplace smoothing:

    accuracy = (positive + 1) / (positive + negative + number of NE categories)

It is noteworthy that the PER tag dominates the corpus due to the fact that the pronouns he and she occur much more often than the seeded common nouns. So the proportion of NE types in the instances of concept-based seeds is not the same as the proportion of NE types in the proper name instances. For example, considering a running text containing one instance of John Smith and one instance of the city name Rochester, it is more likely that John Smith will be referred to by he/him than Rochester by (the) city. Learning based on such a corpus is biased towards PER as the answer. To correct this bias, we employ the following modification scheme for instance counts. Suppose there are a total of N_PER PER instances, N_LOC LOC instances, N_ORG ORG instances, and N_PRO PRO instances; then, in the process of rule accuracy evaluation, the instance count involved for any NE type is adjusted by the coefficient min(N_PER, N_LOC, N_ORG, N_PRO) / N_NE. For example, if the number of training instances of PER is ten times that of PRO, then when evaluating rule accuracy, any positive/negative count associated with PER will be discounted by 0.1 to correct the bias. A total of 1,290 parsing-based NE rules are learned, with accuracy higher than 0.9.
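A small sketch of this scoring step, with the Laplace-smoothed accuracy and the bias-correction coefficient written out, is given below; the helper names and the layout of `seed_counts` are assumptions made here for illustration.

```python
def rule_accuracy(positive, negative, n_categories=4):
    """Laplace-smoothed rule accuracy as in the formula above:
    (positive + 1) / (positive + negative + number of NE categories)."""
    return (positive + 1.0) / (positive + negative + n_categories)

def corrected_count(count, seed_counts, ne_type):
    """Bias-correction coefficient described above: discount a count for an NE
    type by min(N_PER, N_LOC, N_ORG, N_PRO) / N_type.
    `seed_counts` maps NE type -> number of seed instances, e.g.
    {"PER": 500000, "LOC": 120000, "ORG": 110000, "PRO": 90000} (made-up numbers)."""
    return count * min(seed_counts.values()) / seed_counts[ne_type]
```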
The following are sample rules of the learned decision list:

Possess(wife) → PER
Possess(husband) → PER
Possess(daughter) → PER
Possess(bravery) → PER
Possess(father) → PER
Has_Predicate(divorce) → PER
Has_Predicate(remarry) → PER
Possess(brother) → PER
Possess(son) → PER
Possess(mother) → PER
Object_Of(deport) → PER
Possess(sister) → PER
Possess(colleague) → PER
Possess(career) → PER
Possess(forehead) → PER
Has_Predicate(smile) → PER
Possess(respiratory system) → PER
{Has_Predicate(threaten), Has_Predicate(kill)} → PER
…
Possess(concert hall) → LOC
Has_AMod(coastal) → LOC
Has_AMod(northern) → LOC
Has_AMod(eastern) → LOC
Has_AMod(northeastern) → LOC
Possess(undersecretary) → LOC
Possess(mayor) → LOC
Has_AMod(southern) → LOC
Has_AMod(northwestern) → LOC
Has_AMod(populous) → LOC
Has_AMod(rogue) → LOC
Has_AMod(southwestern) → LOC
Possess(medical examiner) → LOC
Has_AMod(edgy) → LOC
…
Has_AMod(broad-base) → ORG
Has_AMod(advisory) → ORG
Has_AMod(non-profit) → ORG
Possess(ceo) → ORG
Possess(operate loss) → ORG
Has_AMod(multinational) → ORG
Has_AMod(non-governmental) → ORG
Possess(filings) → ORG
Has_AMod(interim) → ORG
Has_AMod(for-profit) → ORG
Has_AMod(not-for-profit) → ORG
Has_AMod(nongovernmental) → ORG
Object_Of(undervalue) → ORG
…
Has_AMod(handheld) → PRO
Has_AMod(unman) → PRO
Has_AMod(well-sell) → PRO
Has_AMod(value-add) → PRO
Object_Of(refuel) → PRO
Has_AMod(fuel-efficient) → PRO
Object_Of(vend) → PRO
Has_Predicate(accelerate) → PRO
Has_Predicate(collide) → PRO
Object_Of(crash) → PRO
Has_AMod(scalable) → PRO
Possess(patch) → PRO
Object_Of(commercialize) → PRO
Has_AMod(custom-design) → PRO
Possess(rollout) → PRO
Object_Of(redesign) → PRO
…

Due to the unique equivalence nature of the IsA relation, the above bootstrapping procedure can hardly learn IsA-based rules. Therefore, we add the following IsA-based rules to the top of the decision list: IsA(seed) → tag of the seed, for example:

IsA(man) → PER
IsA(city) → LOC
IsA(company) → ORG
IsA(software) → PRO

4 Automatic Construction of Annotated NE Corpus

In this step, we use the parsing-based first learner to tag a raw corpus in order to train the second NE learner. One issue with the parsing-based NE rules is modest recall. For incoming documents, approximately 35%-40% of the proper names are associated with at least one of the five parsing relations. Among these proper names associated with parsing relations, only ~5% are recognized by the parsing-based NE rules. So we adopted the strategy of applying the parsing-based rules to a large corpus (88 million words), and letting the quantity compensate for the sparseness of tagged instances. A repository-level consolidation scheme is also used to improve the recall.

The NE classification procedure is as follows. From the repository, all the named entity candidates associated with at least one of the five parsing relationships are retrieved. An NE candidate is defined as any chunk in the parsed corpus that is marked with a proper name Part-Of-Speech (POS) tag (i.e. NNP or NNPS). A total of 1,607,709 NE candidates were retrieved in our experiment. A small sample of the retrieved NE candidates with the associated parsing relationships is shown below:

Deep South : Possess(project)
Ramada : Possess(president)
Argentina : Possess(first lady)
…

After applying the decision list to the above NE candidates, 33,104 PER names, 16,426 LOC names, 11,908 ORG names and 6,280 PRO names were extracted. It is a common practice in bootstrapping research to make use of heuristics that suggest conditions under which instances should share the same answer.
For example, the one sense per discourse principle is often used for word sense disambiguation (Gale et al. 1992). In this research, we used the heuristic one tag per domain for multi-word NEs in addition to the one sense per discourse principle. These heuristics were found to be very helpful in improving the performance of the bootstrapping algorithm, for the purpose of both increasing positive instances (i.e. tag propagation) and decreasing spurious instances (i.e. tag elimination). The following are two examples showing how the tag propagation and elimination scheme works. Tyco Toys occurs 67 times in the corpus; 11 instances are recognized as ORG and only one instance is recognized as PER. Based on the heuristic one tag per domain for multi-word NEs, the minority tag of PER is removed, and all 67 instances of Tyco Toys are tagged as ORG. Three instances of Postal Service are recognized as ORG, and two instances are recognized as PER. These tags are regarded as noise, hence they are removed by the tag elimination scheme. The tag propagation/elimination scheme is adopted from (Yarowsky 1995).

After this step, a total of 386,614 proper names were recognized, including 134,722 PER names, 186,488 LOC names, 46,231 ORG names and 19,173 PRO names. The overall precision was ~90%. The benchmark details will be shown in Section 6. The extracted proper name instances then led to the construction of a fairly large training corpus sufficient for training the second NE learner. Unlike a manually annotated running-text corpus, this corpus consists only of sample string sequences containing the automatically tagged NE instances and their left and right neighboring words within the same sentence. The two neighboring words are always regarded as common words while constructing the corpus. This is based on the observation that proper names usually do not occur continuously without any punctuation in between. A small sample of the automatically constructed corpus is shown below:

in <LOC> Argentina </LOC> .
<LOC> Argentina </LOC> 's
and <PER> Troy Glaus </PER> walk
call <ORG> Prudential Associates </ORG> .
, <PRO> Photoshop </PRO> has
not <PER> David Bonderman </PER> ,
…

This corpus is used for training the second NE learner based on evidence from string sequences, to be described in Section 5 below.

5 String Sequence-based NE Learning

String sequence-based HMM learning is set as our final goal for NE bootstrapping because of the demonstrated high performance of this type of NE tagger. In this research, a bi-gram HMM is trained based on the sample strings in the annotated corpus constructed in Section 4. During training, each sample string sequence is regarded as an independent sentence. The training process is similar to (Bikel 1997). The HMM is defined as follows. Given a word sequence W = w_0 f_0 w_1 f_1 … w_n f_n (where f_j denotes a single token feature, which will be defined below), the goal of the NE tagging task is to find the optimal NE tag sequence T = t_0 t_1 t_2 … t_n which maximizes the conditional probability Pr(T | W) (Bikel 1997). By Bayesian equality, this is equivalent to maximizing the joint probability Pr(W, T).
This joint probability can be computed by a bi-gram HMM as follows:

    Pr(W, T) = ∏_i Pr(w_i, f_i, t_i | w_{i-1}, f_{i-1}, t_{i-1})

The back-off model is as follows:

    Pr(w_i, f_i, t_i | w_{i-1}, f_{i-1}, t_{i-1}) = λ_1 P_0(w_i, f_i, t_i | w_{i-1}, f_{i-1}, t_{i-1}) + (1 − λ_1) Pr(w_i, f_i | t_i, t_{i-1}) Pr(t_i | w_{i-1}, t_{i-1})

    Pr(w_i, f_i | t_i, t_{i-1}) = λ_2 P_0(w_i, f_i | t_i, t_{i-1}) + (1 − λ_2) Pr(w_i, f_i | t_i)

    Pr(t_i | w_{i-1}, t_{i-1}) = λ_3 P_0(t_i | w_{i-1}, t_{i-1}) + (1 − λ_3) Pr(t_i | w_{i-1})

    Pr(w_i, f_i | t_i) = λ_4 P_0(w_i, f_i | t_i) + (1 − λ_4) Pr(w_i | t_i) P_0(f_i | t_i)

    Pr(t_i | w_{i-1}) = λ_5 P_0(t_i | w_{i-1}) + (1 − λ_5) P_0(t_i)

    Pr(w_i | t_i) = λ_6 P_0(w_i | t_i) + (1 − λ_6) (1 / V)

where V denotes the size of the vocabulary, and the back-off coefficients λ are determined using the Witten-Bell smoothing algorithm. The quantities P_0(w_i, f_i, t_i | w_{i-1}, f_{i-1}, t_{i-1}), P_0(w_i, f_i | t_i, t_{i-1}), P_0(t_i | w_{i-1}, t_{i-1}), P_0(w_i, f_i | t_i), P_0(f_i | t_i), P_0(t_i | w_{i-1}), P_0(t_i), and P_0(w_i | t_i) are computed by maximum likelihood estimation.

We use the following single-token feature set for HMM training. The definitions of these features are the same as in (Bikel 1997): twoDigitNum, fourDigitNum, containsDigitAndAlpha, containsDigitAndDash, containsDigitAndSlash, containsDigitAndComma, containsDigitAndPeriod, otherNum, allCaps, capPeriod, initCap, lowerCase, other.

6 Benchmarking and Discussion

Two types of benchmarks were measured: (i) the quality of the automatically constructed NE corpus, and (ii) the performance of the HMM NE tagger. The HMM NE tagger is considered to be the resulting system for application. The benchmarking shows that this system approaches the performance of a supervised NE tagger for two of the three proper name NE types in MUC, namely, PER NE and LOC NE. We used the same blind testing corpus of 300,000 words containing 20,000 PER, LOC and ORG instances that were truthed in-house originally for benchmarking the existing supervised NE tagger (Srihari, Niu & Li 2000). This has the benefit of precisely measuring performance degradation from supervised learning to unsupervised learning. The performance of our supervised NE tagger using the MUC scorer is shown in Table 1.

Table 1. Performance of Supervised NE Tagger
Type           Precision   Recall   F-Measure
PERSON         92.3%       93.1%    92.7%
LOCATION       89.0%       87.7%    88.3%
ORGANIZATION   85.7%       87.8%    86.7%

To benchmark the quality of the automatically constructed corpus (Table 2), the testing corpus is first processed by our parser and then saved into the repository. The repository-level NE classification scheme, as discussed in Section 4, is applied. From the recognized NE instances, the instances occurring in the testing corpus are compared with the answer key.

Table 2. Quality of the Constructed Corpus
Type           Precision
PERSON         94.3%
LOCATION       91.7%
ORGANIZATION   88.5%

To benchmark the performance of the HMM tagger, the testing corpus is parsed. The noun chunks with proper name POS tags (NNP and NNPS) are extracted as NE candidates. The preceding word and the succeeding word of the NE candidates are also extracted. Then we apply the HMM to the NE candidates with their neighboring context. The NE classification results are shown in Table 3.
Table 3. Performance of the second HMM NE tagger
Type           Precision   Recall   F-Measure
PERSON         86.6%       88.9%    87.7%
LOCATION       82.9%       81.7%    82.3%
ORGANIZATION   57.1%       48.9%    52.7%

Compared with our existing supervised NE tagger, the degradation using the presented bootstrapping method for PER NE, LOC NE, and ORG NE is 5%, 6%, and 34% respectively. The performance for PER and LOC is above 80%, approaching the performance of supervised learning. The reason for the low recall of ORG (~50%) is not difficult to understand. For PERSON and LOCATION, a few concept-based seeds seem to be sufficient to cover their sub-types (e.g. the sub-types COUNTRY, CITY, etc. for LOCATION). But there are hundreds of sub-types of ORG that cannot be covered by the fewer than a dozen concept-based seeds which we used. As a result, the recall of ORG is significantly affected. Due to the same fact that ORG contains many more sub-types, the results are also noisier, leading to lower precision than that of the other two NE types. Some threshold can be introduced, e.g. perplexity per word, to remove spurious ORG tags and improve the precision. As for the recall issue, fortunately, in a real-life application, the organization type that a user is interested in usually falls within a fairly narrow spectrum. We believe that the performance will be better if only company names or military organization names are targeted.

In addition to the key NE types in MUC, our system is able to recognize another NE type, namely, PRODUCT (PRO) NE. We instructed our truthing team to add this NE type to the testing corpus, which contains ~2,000 PRO instances. Table 4 shows the performance of the HMM on the PRO tag.

Table 4. Performance of PRODUCT NE
Type      Precision   Recall   F-Measure
PRODUCT   67.3%       72.5%    69.8%

Similar to the case of ORG NEs, the number of concept-based seeds is found to be insufficient to cover the variations of PRO sub-types, so the performance is not as good as for PER and LOC NEs. Nevertheless, the benchmark shows the system works fairly effectively in extracting the user-specified NEs. It is noteworthy that domain knowledge, such as knowing the major sub-types of the user-specified NE type, is valuable in assisting the selection of appropriate concept-based seeds for performance enhancement. The performance of our HMM tagger is comparable with the reported performance in (Collins & Singer 1999), but our benchmarking is more extensive, as we used a much larger data set (20,000 NE instances in the testing corpus) than theirs (1,000 NE instances).

7 Conclusion

A novel bootstrapping approach to NE classification is presented. This approach does not require iterative learning, which may suffer from error propagation. With minimal human supervision in providing a handful of concept-based seeds, the resulting NE tagger approaches supervised NE performance for the NE types PERSON and LOCATION. The system also demonstrates effective support for user-defined NE classification.

Acknowledgement

This work was partly supported by a grant from the Air Force Research Laboratory's Information Directorate (AFRL/IF), Rome, NY, under contract F30602-01-C-0035. The authors wish to thank Carrie Pine and Sharon Walter of AFRL for supporting and reviewing this work.

References

Bikel, D. M. 1997. Nymble: a high-performance learning name-finder. Proceedings of ANLP 1997, 194-201, Morgan Kaufmann Publishers.
Beckwith, R. et al. 1991. WordNet: A Lexical Database Organized on Psycholinguistic Principles.
Lexicons: Using On-line Resources to build a Lexicon, Uri Zernik, editor, Lawrence Erlbaum, Hillsdale, NJ. Borthwick, A. et al. 1998. Description of the MENE named Entity System. Proceedings of MUC-7. Collins, M. and Y. Singer. 1999. Unsupervised Models for Named Entity Classification. Proceedings of the 1999 Joint SIGDAT Conference on EMNLP and VLC. Cucchiarelli, A. and P. Velardi. 2001. Unsupervised Named Entity Recognition Using Syntactic and Semantic Contextual Evidence. Computational Linguistics, Volume 27, Number 1, 123-131. Cucerzan, S. and D. Yarowsky. 1999. Language Independent Named Entity Recognition Combining Morphological and Contextual Evidence. Proceedings of the 1999 Joint SIGDAT Conference on EMNLP and VLC, 90-99. Gale, W., K. Church, and D. Yarowsky. 1992. One Sense Per Discourse. Proceedings of the 4th DARPA Speech and Natural Language Workshop. 233-237. Kim, J., I. Kang, and K. Choi. 2002. Unsupervised Named Entity Classification Models and their Ensembles. COLING 2002. Krupka, G. R. and K. Hausman. 1998. IsoQuest Inc: Description of the NetOwl Text Extraction System as used for MUC-7. Proceedings of MUC-7. Lin, D.K. 1998. Automatic Retrieval and Clustering of Similar Words. COLING-ACL 1998. MUC-7, 1998. Proceedings of the Seventh Message Understanding Conference (MUC-7). Thelen, M. and E. Riloff. 2002. A Bootstrapping Method for Learning Semantic Lexicons using Extraction Pattern Contexts. Proceedings of EMNLP 2002. Segal, R. and O. Etzioni. 1994. Learning decision lists using homogeneous rules. Proceedings of the 12th National Conference on Artificial Intelligence. Srihari, R., W. Li, C. Niu and T. Cornell. 2003. InfoXtract: An Information Discovery Engine Supported by New Levels of Information Extraction. Proceeding of HLT-NAACL 2003 Workshop on Software Engineering and Architecture of Language Technology Systems, Edmonton, Canada. Srihari, R., C. Niu, & W. Li. 2000. A Hybrid Approach for Named Entity and Sub-Type Tagging. Proceedings of ANLP 2000, Seattle. Yarowsky, David. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Method. ACL 1995.
2003
43
Counter-Training in Discovery of Semantic Patterns Roman Yangarber Courant Institute of Mathematical Sciences New York University [email protected] Abstract This paper presents a method for unsupervised discovery of semantic patterns. Semantic patterns are useful for a variety of text understanding tasks, in particular for locating events in text for information extraction. The method builds upon previously described approaches to iterative unsupervised pattern acquisition. One common characteristic of prior approaches is that the output of the algorithm is a continuous stream of patterns, with gradually degrading precision. Our method differs from the previous pattern acquisition algorithms in that it introduces competition among several scenarios simultaneously. This provides natural stopping criteria for the unsupervised learners, while maintaining good precision levels at termination. We discuss the results of experiments with several scenarios, and examine different aspects of the new procedure. 1 Introduction The work described in this paper is motivated by research into automatic pattern acquisition. Pattern acquisition is considered important for a variety of “text understanding” tasks, though our particular reference will be to Information Extraction (IE). In IE, the objective is to search through text for entities and events of a particular kind—corresponding to the user’s interest. Many current systems achieve this by pattern matching. The problem of recall, or coverage, in IE can then be restated to a large extent as a problem of acquiring a comprehensive set of good patterns which are relevant to the scenario of interest, i.e., which describe events occurring in this scenario. Among the approaches to pattern acquisition recently proposed, unsupervised methods1 have gained some popularity, due to the substantial reduction in amount of manual labor they require. We build upon these approaches for learning IE patterns. The focus of this paper is on the problem of convergence in unsupervised methods. As with a variety of related iterative, unsupervised methods, the output of the system is a stream of patterns, in which the quality is high initially, but then gradually degrades. This degradation is inherent in the trade-off, or tension, in the scoring metrics: between trying to achieve higher recall vs. higher precision. Thus, when the learning algorithm is applied against a reference corpus, the result is a ranked list of patterns, and going down the list produces a curve which trades off precision for recall. Simply put, the unsupervised algorithm does not know when to stop learning. In the absence of a good stopping criterion, the resulting list of patterns must be manually reviewed by a human; otherwise one can set ad-hoc thresholds, e.g., on the number of allowed iterations, as in (Riloff and Jones, 1999), or else to resort to supervised training to determine such thresholds—which is unsatisfactory when our 1As described in, e.g., (Riloff, 1996; Riloff and Jones, 1999; Yangarber et al., 2000). goal from the outset is to try to limit supervision. Thus, the lack of natural stopping criteria renders these algorithms less unsupervised than one would hope. More importantly, this lack makes the algorithms difficult to use in settings where training must be completely automatic, such as in a generalpurpose information extraction system, where the topic may not be known in advance. 
At the same time, certain unsupervised learning algorithms in other domains exhibit inherently natural stopping criteria. One example is the algorithm for word sense disambiguation in (Yarowsky, 1995). Of particular relevance to our method are the algorithms for semantic classification of names or NPs described in (Thelen and Riloff, 2002; Yangarber et al., 2002). Inspired in part by these algorithms, we introduce the counter-training technique for unsupervised pattern acquisition. The main idea behind countertraining is that several identical simple learners run simultaneously to compete with one another in different domains. This yields an improvement in precision, and most crucially, it provides a natural indication to the learner when to stop learning—namely, once it attempts to wander into territory already claimed by other learners. We review the main features of the underlying unsupervised pattern learner and related work in Section 2. In Section 3 we describe the algorithm; 3.2 gives the details of the basic learner, and 3.3 introduces the counter-training framework which is super-imposed on it. We present the results with and without counter-training on several domains, Section 4, followed by discussion in Section 5. 2 Background 2.1 Unsupervised Pattern Learning We outline those aspects of the prior work that are relevant to the algorithm developed in our presentation. We are given an IE scenario  , e.g., “Management Succession” (as in MUC-6). We have a raw general news corpus for training, i.e., an unclassified and un-tagged set of documents  . The problem is to find a good set of patterns in   , which cover events relevant to  . We presuppose the existence of two generalpurpose, lower-level language tools—a name recognizer and a parser. These tools are used to extract all potential patterns from the corpus. The user provides a small number of seed patterns for  . The algorithm uses the corpus to iteratively bootstrap a larger set of good patterns for  . The algorithm/learner achieves this bootstrapping by utilizing the duality between the space of documents and the space of patterns: good extraction patterns select documents relevant to the chosen scenario; conversely, relevant documents typically contain more than one good pattern. This duality drives the bootstrapping process. The primary aim of the learning is to train a strong recognizer  for  ;  is embodied in the set of good patterns. However, as a result of training  , the procedure also produces the set   of documents that it deems relevant to  —the documents selected by  . Evaluation: to evaluate the quality of discovered patterns, (Riloff, 1996) describes a direct evaluation strategy, where precision of the patterns resulting from a given run is established by manual review. (Yangarber et al., 2000) uses an automatic but indirect evaluation of the recognizer  : they retrieve a test sub-set      from the training corpus and manually judge the relevance of every document in   ; one can then obtain standard IR-style recall and precision scores for    relative to    . In presenting our results, we will discuss both kinds of evaluation. The recall/precision curves produced by the indirect evaluation generally reach some level of recall at which precision begins to drop. This happens because at some point in the learning process the algorithm picks up patterns that are common in  , but are not sufficiently specific to  alone. These patterns then pick up irrelevant documents, and precision drops. 
Our goal is to prevent this kind of degradation, by helping the learner stop when precision is still high, while achieving maximal recall. 2.2 Related Work We briefly mention some of the unsupervised methods for acquiring knowledge for NL understanding, in particular in the context of IE. A typical architecture for an IE system includes knowledge bases (KBs), which must be customized when the system is ported to new domains. The KBs cover different levels, viz. a lexicon, a semantic conceptual hierarchy, a set of patterns, a set of inference rules, a set of logical representations for objects in the domain. Each KB can be expected to be domain-specific, to a greater or lesser degree. Among the research that deals with automatic acquisition of knowledge from text, the following are particularly relevant to us. (Strzalkowski and Wang, 1996) proposed a method for learning concepts belonging to a given semantic class. (Riloff and Jones, 1999; Riloff, 1996; Yangarber et al., 2000) present different combinations of learners of patterns and concept classes specifically for IE. In (Riloff, 1996) the system AutoSlog-TS learns patterns for filling an individual slot in an event template, while simultaneously acquiring a set of lexical elements/concepts eligible to fill the slot. AutoSlogTS, does not require a pre-annotated corpus, but does require one that has been split into subsets that are relevant vs. non-relevant subsets to the scenario. (Yangarber et al., 2000) attempts to find extraction patterns, without a pre-classified corpus, starting from a set of seed patterns. This is the basic unsupervised learner on which our approach is founded; it is described in the next section. 3 Algorithm We first present the basic algorithm for pattern acquisition, similar to that presented in (Yangarber et al., 2000). Section 3.3 places the algorithm in the framework of counter-training. 3.1 Pre-processing Prior to learning, the training corpus undergoes several steps of pre-processing. The learning algorithm depends on the fundamental redundancy in natural language, and the pre-processing the text is designed to reduce the sparseness of data, by reducing the effects of phenomena which mask redundancy. Name Factorization: We use a name classifier to tag all proper names in the corpus as belonging to one of several categories—person, location, and organization, or as an unidentified name. Each name is replaced with its category label, a single token. The name classifier also factors out other out-ofvocabulary (OOV) classes of items: dates, times, numeric and monetary expressions. Name classification is a well-studied subject, e.g., (Collins and Singer, 1999). The name recognizer we use is based on lists of common name markers—such as personal titles (Dr., Ms.) and corporate designators (Ltd., GmbH)—and hand-crafted rules. Parsing: After name classification, we apply a general English parser, from Conexor Oy, (Tapanainen and J¨arvinen, 1997). The parser recognizes the name tags generated in the preceding step, and treats them as atomic. The parser’s output is a set of syntactic dependency trees for each document. Syntactic Normalization: To reduce variation in the corpus further, we apply a tree-transforming program to the parse trees. For every (non-auxiliary) verb heading its own clause, the transformer produces a corresponding active tree, where possible. This converts for passive, relative, subordinate clauses, etc. into active clauses. 
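A rough sketch of the name-factorization step described above, assuming a tokenized sentence and pre-identified name spans, might look as follows. The class labels and the stubbed classify_name heuristics are illustrative assumptions, not the classifier used in the paper (which relies on marker lists and hand-crafted rules).

```python
def classify_name(span):
    """Stub classifier: person, location, organization, or unidentified name.
    The real classifier uses marker lists (Dr., Ms., Ltd., GmbH, ...) and hand-crafted rules."""
    if span.endswith(("Ltd.", "GmbH", "Inc.")):
        return "C-ORGANIZATION"
    if span.startswith(("Dr.", "Ms.", "Mr.")):
        return "C-PERSON"
    return "C-NAME"  # unidentified name

def factor_names(tokens, name_spans):
    """Replace each classified span (start, end) with a single class token;
    dates, times, numeric and monetary expressions would be handled the same way."""
    out, i, spans = [], 0, sorted(name_spans)
    while i < len(tokens):
        span = next(((s, e) for (s, e) in spans if s == i), None)
        if span:
            out.append(classify_name(" ".join(tokens[span[0]:span[1]])))
            i = span[1]
        else:
            out.append(tokens[i])
            i += 1
    return out
```

For example, factor_names("Dr. Smith of Acme Ltd. resigned".split(), [(0, 2), (3, 5)]) yields ['C-PERSON', 'of', 'C-ORGANIZATION', 'resigned'].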
Pattern Generalization: A "primary" tuple is extracted from each clause: the verb and its main arguments, subject and object. The tuple consists of three literals [s,v,o]; if the direct object is missing the tuple contains in its place the subject complement; if the object is a subordinate clause, the tuple contains in its place the head verb of that clause. Each primary tuple produces three generalized tuples, with one of the literals replaced by a wildcard. A pattern is simply a primary or generalized tuple. The pre-processed corpus is thus a many-to-many mapping between the patterns and the document set.

3.2 Unsupervised Learner
We now outline the main steps of the algorithm, followed by the formulas used in these steps.
1. Given: a seed set of patterns, expressed as primary or generalized tuples.
2. Partition: divide the corpus into relevant vs. non-relevant documents. A document d is relevant—receives a weight of 1—if some seed matches d, and non-relevant otherwise, receiving weight 0. After the first iteration, documents are assigned relevance weights between 0 and 1. So at each iteration, there is a distribution of relevance weights on the corpus, rather than a binary partition.
3. Pattern Ranking: Every pattern appearing in a relevant document is a candidate pattern. Assign a score to each candidate; the score depends on how accurately the candidate predicts the relevance of a document, with respect to the current weight distribution, and on how much support it has—the total weight of the relevant documents it matches in the corpus (Equation 2). Rank the candidates according to their score. On the i-th iteration, we select the pattern most correlated with the documents that have high relevance, add it to the growing set of seeds (the accepted patterns), and record its accuracy.
4. Document Relevance: For each document d covered by any of the accepted patterns, recompute the relevance Rel(d) of d to the target scenario. The relevance of d is based on the cumulative accuracy of the accepted patterns that match d.
5. Repeat: Back to Partition in step 2. The expanded pattern set induces a new relevance distribution on the corpus. Repeat the procedure as long as learning is possible.

The formula used for scoring candidate patterns in step 3 is similar to that in (Riloff, 1996):

Score(p) = (Sup(p) / |H|) · log Sup(p)    (1)

where H = H(p) is the set of documents where p matched, and the support Sup(p) is computed as the sum of their relevance:

Sup(p) = Σ_{d ∈ H(p)} Rel(d)    (2)

Document relevance is computed as in (Yangarber et al., 2000):

Rel(d) = 1 − Π_{p ∈ K(d)} (1 − Prec(p))    (3)

where K(d) is the set of accepted patterns that match d; this is a rough estimate of the likelihood of relevance of d, based on the pattern accuracy measure. Pattern accuracy, or precision, is given by the average relevance of the documents matched by p:

Prec(p) = Sup(p) / |H| = (1/|H|) Σ_{d ∈ H(p)} Rel(d)    (4)

Equation 1 can therefore be written simply as:

Score(p) = Prec(p) · log Sup(p)    (5)

3.3 Counter-Training
The two terms in Equation 5 capture the trade-off between precision and recall. As mentioned in Section 2.1, a learner running in isolation will eventually acquire patterns that are too general for the scenario, which will cause it to assign positive relevance to non-relevant documents and to learn more irrelevant patterns. From that point onward pattern accuracy will decline.
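Before turning to counter-training, note that the scoring in Equations 1-5 is straightforward to operationalize. The following is a rough sketch, not the paper's implementation: it assumes that pattern matching has already been reduced to pattern-to-document and document-to-pattern indices, and it simplifies acceptance to one greedy pick per iteration and initializes seed precision to 1.

```python
import math

def sup(pattern, docs_by_pattern, rel):
    """Support of a pattern: summed relevance of the documents it matches (Eq. 2)."""
    return sum(rel.get(d, 0.0) for d in docs_by_pattern.get(pattern, ()))

def prec(pattern, docs_by_pattern, rel):
    """Pattern precision: average relevance of the documents it matches (Eq. 4)."""
    docs = docs_by_pattern.get(pattern, ())
    return sup(pattern, docs_by_pattern, rel) / len(docs) if docs else 0.0

def score(pattern, docs_by_pattern, rel):
    """Score(p) = Prec(p) * log Sup(p) (Eq. 5)."""
    s = sup(pattern, docs_by_pattern, rel)
    return prec(pattern, docs_by_pattern, rel) * math.log(s) if s > 0 else float("-inf")

def doc_relevance(doc, patterns_by_doc, accepted, precision):
    """Rel(d) = 1 - product over accepted patterns matching d of (1 - Prec(p)) (Eq. 3)."""
    r = 1.0
    for p in patterns_by_doc.get(doc, ()):
        if p in accepted:
            r *= (1.0 - precision[p])
    return 1.0 - r

def bootstrap(seeds, docs_by_pattern, patterns_by_doc, iterations):
    accepted = set(seeds)
    precision = {p: 1.0 for p in seeds}                      # assumption: seeds treated as fully accurate
    rel = {d: 1.0 for p in seeds for d in docs_by_pattern.get(p, ())}  # seed-matched documents get weight 1
    for _ in range(iterations):
        candidates = {p for d, w in rel.items() if w > 0
                      for p in patterns_by_doc.get(d, ())} - accepted
        if not candidates:
            break
        best = max(candidates, key=lambda p: score(p, docs_by_pattern, rel))
        accepted.add(best)
        precision[best] = prec(best, docs_by_pattern, rel)
        rel = {d: doc_relevance(d, patterns_by_doc, accepted, precision)
               for d in patterns_by_doc}                     # step 4: recompute relevance distribution
    return accepted
```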
To deal with this problem, we arrange n different learners, for n different scenarios S_i, i = 1, ..., n, to train simultaneously on each iteration. Each learner stores its own bag of good patterns, and each assigns its own relevance, Rel_i(d), to the documents. Documents that are "ambiguous" will have high relevance in more than one scenario. Now, given multiple learners, we can refine the measure of pattern precision in Equation 4 for scenario S_i, to take into account the negative evidence—i.e., how much weight the documents matched by the pattern received in other scenarios:

Prec(p) = (1/|H|) Σ_{d ∈ H(p)} ( Rel_i(d) − Σ_{j ≠ i} Rel_j(d) )    (6)

If Prec(p) drops to zero or below, the candidate is not considered for acceptance. Equations 6 and 5 imply that the learner will disfavor a pattern if it has too much opposition from other scenarios. The algorithm proceeds as long as two or more scenarios are still learning patterns. When the number of surviving scenarios drops to one, learning terminates, since, running unopposed, the surviving scenario may start learning non-relevant patterns, which will degrade its precision. Scenarios may be represented with different density within the corpus, and may be learned at different rates. To account for this, each learner is allowed to acquire not just a single pattern on each iteration, but up to a small fixed number of patterns (3 in this paper), as long as their scores are near (within 5% of) the top-scoring pattern.

4 Experiments
We tested the algorithm on documents from the Wall Street Journal (WSJ). The training corpus consisted of 15,000 articles from 3 months between 1992 and 1994. This included the MUC-6 training corpus of 100 tagged WSJ articles (from 1993). We used the scenarios shown in Table 1 to compete with each other in different combinations. The seed patterns for the scenarios, and the number of documents initially picked up by the seeds, are shown in the table. The seeds were kept small, and they yielded high precision; it is evident that these scenarios are represented to a varying degree within the corpus. We also introduced an additional "negative" scenario (the row labeled "Don't care"), seeded with patterns for earnings reports and interest rate fluctuations. The last column shows the number of iterations before learning stopped.

Table 1: Scenarios in Competition
Scenario | Seed Patterns | # Documents | Last Iteration
Management Succession | [Company appoint Person], [Person quit] | 220 | 143
Merger&Acquisition | [buy Company], [Company merge] | 231 | 210
Legal Action | [sue Organization], [bring/settle suit] | 169 | 132
Bill/Law Passing | [pass bill] | 89 | 79
Political Election | [run/win/lose election/campaign] | 42 | 24
Sports Event | [run/win/lose competition/event] | 25 | 19
Layoff | [expect/announce layoff] | 43 | 15
Bankruptcy | [file/declare bankruptcy] | 7 | 4
Natural Disaster | [disaster kill/damage people/property] | 16 | 0
Don't Care | [cut/raise/lower rate], [report/post earning] | 413 | —

In the seed patterns, capitalized entries refer to Named Entity classes, and italicized entries refer to small classes of synonyms, containing about 3 words each; e.g., appoint stands for {appoint, name, promote}.

A sample of the discovered patterns appears in Table 2. For an indirect evaluation of the quality of the learned patterns, we employ the text-filtering evaluation strategy, as in (Yangarber et al., 2000). As a by-product of pattern acquisition, the algorithm acquires a set of relevant documents (more precisely, a distribution of document relevance weights). Rather than inspecting the patterns accepted on the i-th iteration by hand, we can judge the quality of this pattern set based on the quality of the documents that those patterns match.
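In terms of the scoring sketch given earlier, the only change counter-training requires is a precision estimate that subtracts the relevance mass claimed by the competing scenarios, as in Equation 6. The following hedged sketch assumes a per-scenario relevance map; the data structures and function names are illustrative assumptions.

```python
def counter_prec(pattern, docs_by_pattern, rel_by_scenario, i):
    """Eq. 6: precision of `pattern` for scenario i, penalized by the relevance
    that every other scenario j != i assigns to the same documents."""
    docs = docs_by_pattern.get(pattern, ())
    if not docs:
        return 0.0
    total = 0.0
    for d in docs:
        own = rel_by_scenario[i].get(d, 0.0)
        others = sum(rel.get(d, 0.0) for j, rel in rel_by_scenario.items() if j != i)
        total += own - others
    return total / len(docs)

def surviving_scenarios(scenarios, docs_by_pattern, rel_by_scenario, candidates_by_scenario):
    """A scenario keeps learning while some candidate still has positive counter-trained precision;
    training stops globally once fewer than two scenarios survive."""
    alive = []
    for i in scenarios:
        if any(counter_prec(p, docs_by_pattern, rel_by_scenario, i) > 0
               for p in candidates_by_scenario[i]):
            alive.append(i)
    return alive
```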
Viewed as a categorization task on a set of documents, this indirect evaluation is similar to the text-filtering task in the MUC competitions. We use the text-filtering power of the accepted pattern set as a quantitative measure of the goodness of the patterns.

Table 2: Sample Acquired Patterns
Management Succession: demand/announce resignation; Person succeed/replace person; Person continue run/serve; Person continue/serve/remain/step-down chairman; Person retain/leave/hold/assume/relinquish post; Company hire/fire/dismiss/oust Person
Merger&Acquisition: Company plan/expect/offer/agree buy/merge; complete merger/acquisition/purchase; agree sell/pay/acquire; get/buy/take-over business/unit/interest/asset; agreement creates company; hold/exchange/offer unit/subsidiary
Legal Action: deny charge/wrongdoing/allegation; appeal ruling/decision; settle/deny claim/charge; judge/court dismiss suit; Company mislead investor/public

The algorithm learns hundreds of patterns; the sample in Table 2 is meant to give the reader a sense of their shape and content.

To conduct the text-filtering evaluation we need a binary relevance judgement for each document. This is obtained as follows. We introduce a cutoff threshold on document relevance; if the system's internal confidence that a document d is relevant exceeds this threshold, it labels d as relevant externally, for the purpose of scoring recall and precision; otherwise it labels d as non-relevant. (The cutoff was set to 0.3 for the mono-trained experiments and to 0.2 for counter-training; these values were obtained from empirical trials, which suggest that a lower confidence is acceptable in the presence of negative evidence. The internal relevance measures Rel_i(d) are maintained by the algorithm, and the external, binary measures are used only for evaluation of performance.)

[Figure 1: Management Succession. Recall/precision curves for the Counter and Mono runs, against the 54% Baseline.]

The results of the pattern learner for the "Management Succession" scenario, with and without counter-training, are shown in Figure 1. The test sub-corpus consists of the 100 MUC-6 documents. The initial seed yields about 15% recall at 86% precision. The curve labeled Mono shows the performance of the baseline algorithm up to 150 iterations. It stops learning good patterns after 60 iterations, at 73% recall, from which point precision drops. The reason the recall appears to continue improving is that, after this point, the learner begins to acquire patterns describing secondary events, derivative of or commonly co-occurring with the focal topic. Examples of such events are fluctuations in stock prices, revenue estimates, and other common business news elements. The Baseline of 54% is the precision we would expect to get by randomly marking the documents as relevant to the scenario. The performance of the Management Succession learner counter-trained against other learners is traced by the curve labeled Counter. It is important to recall that the counter-trained algorithm terminates at the final point on the curve, whereas in the mono-trained case it does not.

[Figure 2: Legal Action/Lawsuit. Recall/precision curves for the Counter-Strong, Counter and Mono runs, against the 52% Baseline.]

We checked the quality of the discovered patterns by hand. Termination occurs at 142 iterations.
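For reference, the text-filtering scores plotted in the figures reduce to ordinary document-level recall and precision once the internal relevance weights are binarized at the cutoff quoted above; a minimal sketch, with the cutoff values taken from the text:

```python
def text_filtering_scores(rel, gold_relevant, cutoff):
    """Binarize internal relevance weights at `cutoff` and score against manual judgements."""
    predicted = {d for d, w in rel.items() if w > cutoff}
    tp = len(predicted & gold_relevant)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold_relevant) if gold_relevant else 0.0
    return precision, recall

# Cutoffs quoted in the text: 0.3 for mono-training, 0.2 for counter-training.
MONO_CUTOFF, COUNTER_CUTOFF = 0.3, 0.2
```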
We observed that after iteration 103 only 10% of the patterns are “good”, the rest are secondary. However, in the first 103 iterations, over 90% of the patterns are good Management Succession patterns. In the same experiment the behaviour of the learner of the “Legal Action” scenario is shown in Figure 2. The test corpus for this learner consists of 250 documents: the 100 MUC-6 training documents and 150 WSJ documents which we retrieved using a set of keywords and categorized manually. The curves labeled Mono, Counter and Baseline are as in the preceding figure. We observe that the counter-training termination point is near the mono-trained curve, and has a good recall-precision trade-off. However, the improvement from counter-training is less pronounced here than for the Succession scenario. This is due to a subtle interplay between the combination of scenarios, their distribution in the corpus, and the choice of seeds. We return to this in the next section. 5 Discussion Although the results we presented here are encouraging, there remains much research, experimentation and theoretical work to be done. Ambiguity and Document Overlap When a learner runs in isolation, it is in a sense undergoing “mono-training”: the only evidence it has on a given iteration is derived from its own guesses on previous iterations. Thus once it starts to go astray, it is difficult to set it back on course. Counter-training provides a framework in which other recognizers, training in parallel with a given recognizer  , can label documents as belonging to their own, other categories, and therefore as being less likely to belong to  ’s category. This likelihood is proportional to the amount of anticipated ambiguity or overlap among the counter-trained scenarios. We are still in the early stages of exploring the space of possibilities provided by this methodology, though it is clear that it is affected by several factors. One obvious contributing factor is the choice of seed patterns, since seeds may cause the learner to explore different parts of the document space first, which may affect the subsequent outcome. Another factor is the particular combination of competing scenarios. If two scenarios are very close—i.e., share many semantic features—they will inhibit each other, and result in lower recall. This closeness will need to be qualified at a future time. There is “ambiguity” both at the level of documents as well as at the level of patterns. Document ambiguity means that some documents cover more than one topic, which will lead to high relevance scores in multiple scenarios. This is more common for longer documents, and may therefore disfavor patterns contained in such documents. An important issue is the extent of overlap among scenarios: Management Succession and Mergers and Acquisitions are likely to have more documents in common than either has with Natural Disasters. Patterns may be pragmatically or semantically ambiguous; “Person died” is an indicator for Management Succession, as well as for Natural Disasters. The pattern “win race” caused the sports scenario to learn patterns for political elections. Some of the chosen scenarios will be better represented in the corpus than others, which may block learning of the under-represented scenarios. The scenarios that are represented well may be learned at different rates, which again may inhibit other learners. This effect is seen in Figure 2; the Lawsuit learner is inhibited by the other, stronger scenarios. 
The curve labeled Counter-Strong is obtained from a separate experiment. The Lawsuit learner ran against the same scenarios as in Table 1, but some of the other learners were “weakened”: they were given smaller seeds, and therefore picked up fewer documents initially.5 This enabled them to provide sufficient guidance to the Lawsuit learner to maintain high precision, without inhibiting high recall. The initial part of the curve is difficult to see because it overlaps largely with the Counter curve. However, they diverge substantially toward the end, above the 80% recall mark. We should note that the objective of the proposed methodology is to learn good patterns, and that reaching for the maximal document recall may not necessarily serve the same objective. Finally, counter-training can be applied to discovering knowledge of other kinds. (Yangarber et al., 2002) presents the same technique successfully applied to learning names of entities of a given semantic class, e.g., diseases or infectious agents.6 The main differences are: a. the data-points in (Yangarber et al., 2002) are instances of names in text (which are to be labeled with their semantic categories), whereas here the data-points are documents; b. the intended product there is a list of categorized names, whereas here the focus is on the patterns that categorize documents. (Thelen and Riloff, 2002) presents a very similar technique, in the same application as the one described in (Yangarber et al., 2002).7 However, (Thelen and Riloff, 2002) did not focus on the issue of convergence, and on leveraging negative categories to achieve or improve convergence. Co-Training The type of learning described in this paper differs from the co-training method, covered, e.g., in (Blum and Mitchell, 1998). In co-training, learning centers on labeling a set of data-points in situations where these data-points have multiple disjoint and redundant views.8 Examples of spaces of such data-points are strings of text containing proper names, (Collins and Singer, 1999), or Web pages relevant to a query 5The seeds for Management Succession and M&A scenarios were reduced to pick up fewer than 170 documents, each. 6These are termed generalized names, since they may not abide by capitalization rules of conventional proper names. 7The two papers appeared within two months of each other. 8A view, in the sense of relational algebra, is a sub-set of features of the data-points. In the cited papers, these views are exemplified by internal and external contextual cues. (Blum and Mitchell, 1998). Co-training iteratively trains, or refines, two or more n-way classifiers.9 Each classifier utilizes only one of the views on the data-points. The main idea is that the classifiers can start out weak, but will strengthen each other as a result of learning, by labeling a growing number of data-points based on the mutually independent sets of evidence that they provide to each other. In this paper the context is somewhat different. A data-point for each learner is a single document in the corpus. The learner assigns a binary label to each data-point: relevant or non-relevant to the learner’s scenario. The classifier that is being trained is embodied in the set of acquired patterns. A data-point can be thought of having one view: the patterns that match on the data-point. In both frameworks, the unsupervised learners help one another to bootstrap. In co-training, they do so by providing reliable positive examples to each other. 
In counter-training they proceed by finding their own weakly reliable positive evidence, and by providing each other with reliable negative evidence. Thus, in effect, the unsupervised learners “supervise” each other. 6 Conclusion In this paper we have presented counter-training, a method for strengthening unsupervised strategies for knowledge acquisition. It is a simple way to combine unsupervised learners for a kind of “mutual supervision”, where they prevent each other from degradation of accuracy. Our experiments in acquisition of semantic patterns show that counter-training is an effective way to combat the otherwise unlimited expansion in unsupervised search. Counter-training is applicable in settings where a set of data points has to be categorized as belonging to one or more target categories. The main features of counter-training are: Training several simple learners in parallel; Competition among learners; Convergence of the overall learning process; 9The cited literature reports results with exactly two classifiers. Termination with good recall-precision tradeoff, compared to the single-trained learner. Acknowledgements This research is supported by the Defense Advanced Research Projects Agency as part of the Translingual Information Detection, Extraction and Summarization (TIDES) program, under Grant N66001-001-1-8917 from the Space and Naval Warfare Systems Center San Diego, and by the National Science Foundation under Grant IIS-0081962. References A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In Proc. 11th Annl. Conf Computational Learning Theory (COLT98), New York. M. Collins and Y. Singer. 1999. Unsupervised models for named entity classification. In Proc. Joint SIGDAT Conf. on EMNLP/VLC, College Park, MD. E. Riloff and R. Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proc. 16th Natl. Conf. on AI (AAAI-99), Orlando, FL. E. Riloff. 1996. Automatically generating extraction patterns from untagged text. In Proc. 13th Natl. Conf. on AI (AAAI-96). T. Strzalkowski and J. Wang. 1996. A self-learning universal concept spotter. In Proc. 16th Intl. Conf. Computational Linguistics (COLING-96), Copenhagen. P. Tapanainen and T. J¨arvinen. 1997. A non-projective dependency parser. In Proc. 5th Conf. Applied Natural Language Processing, Washington, D.C. M. Thelen and E. Riloff. 2002. A bootstrapping method for learning semantic lexicons using extraction pattern contexts. In Proc. 2002 Conf. Empirical Methods in NLP (EMNLP 2002). R. Yangarber, R. Grishman, P. Tapanainen, and S. Huttunen. 2000. Automatic acquisition of domain knowledge for information extraction. In Proc. 18th Intl. Conf. Computational Linguistics (COLING 2000), Saarbr¨ucken. R. Yangarber, W. Lin, and R. Grishman. 2002. Unsupervised learning of generalized names. In Proc. 19th Intl. Conf. Computational Linguistics (COLING 2002), Taipei. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. 33rd Annual Meeting of ACL, Cambridge, MA.
2003
44
k-valued Non-Associative Lambek Categorial Grammars are not Learnable from Strings Denis B´echet INRIA, IRISA Campus Universitaire de Beaulieu Avenue du G´en´eral Leclerc 35042 Rennes Cedex France [email protected] Annie Foret Universit´e de Rennes1, IRISA Campus Universitaire de Beaulieu Avenue du G´en´eral Leclerc 35042 Rennes Cedex France [email protected] Abstract This paper is concerned with learning categorial grammars in Gold’s model. In contrast to k-valued classical categorial grammars, k-valued Lambek grammars are not learnable from strings. This result was shown for several variants but the question was left open for the weakest one, the non-associative variant NL. We show that the class of rigid and kvalued NL grammars is unlearnable from strings, for each k; this result is obtained by a specific construction of a limit point in the considered class, that does not use product operator. Another interest of our construction is that it provides limit points for the whole hierarchy of Lambek grammars, including the recent pregroup grammars. Such a result aims at clarifying the possible directions for future learning algorithms: it expresses the difficulty of learning categorial grammars from strings and the need for an adequate structure on examples. 1 Introduction Categorial grammars (Bar-Hillel, 1953) and Lambek grammars (Lambek, 1958; Lambek, 1961) have been studied in the field of natural language processing. They are well adapted to learning perspectives since they are completely lexicalized and an actual way of research is to determine the sub-classes of such grammars that remain learnable in the sense of Gold (Gold, 1967). We recall that learning here consists to define an algorithm on a finite set of sentences that converge to obtain a grammar in the class that generates the examples. Let G be a class of grammars, that we wish to learn from positive examples. Formally, let L(G) denote the language associated with grammar G, and let V be a given alphabet, a learning algorithm is a function φ from finite sets of words in V ∗to G, such that for all G ∈G with L(G) =< ei >i∈N there exists a grammar G′ ∈G and there exists n0 ∈ N such that: ∀n > n0 φ({e1, . . . , en}) = G′ ∈G with L(G′) = L(G). After pessimistic unlearnability results in (Gold, 1967), learnability of non trivial classes has been proved in (Angluin, 1980) and (Shinohara, 1990). Recent works from (Kanazawa, 1998) and (Nicolas, 1999) following (Buszkowski and Penn, 1990) have answered the problem for different sub-classes of classical categorial grammars (we recall that the whole class of classical categorial grammars is equivalent to context free grammars; the same holds for the class of Lambek grammars (Pentus, 1993) that is thus not learnable in Gold’s model). The extension of such results for Lambek grammars is an interesting challenge that is addressed by works on logic types from (Dudau-Sofronie et al., 2001) (these grammars enjoy a direct link with Montague semantics), learning from structures in (Retor and Bonato, september 2001), complexity results from (Florˆencio, 2002) or unlearnability results from (Foret and Le Nir, 2002a; Foret and Le Nir, 2002b); this result was shown for several variants but the question was left open for the basic variant, the nonassociative variant NL. In this paper, we consider the following question: is the non-associative variant NL of k-valued Lambek grammars learnable from strings; we answer by constructing a limit point for this class. 
Our construction is in some sense more complex than those for the other systems since they do not directly translate as limit point in the more restricted system NL. The paper is organized as follows. Section 2 gives some background knowledge on three main aspects: Lambek categorial grammars ; learning in Gold’s model ; Lambek pregroup grammars that we use later as models in some proofs. Section 3 then presents our main result on NL (NL denotes nonassociative Lambek grammars not allowing empty sequence): after a construction overview, we discuss some corollaries and then provide the details of proof. Section 4 concludes. 2 Background 2.1 Categorial Grammars The reader not familiar with Lambek Calculus and its non-associative version will find nice presentation in the first ones written by Lambek (Lambek, 1958; Lambek, 1961) or more recently in (Kandulski, 1988; Aarts and Trautwein, 1995; Buszkowski, 1997; Moortgat, 1997; de Groote, 1999; de Groote and Lamarche, 2002). The types Tp, or formulas, are generated from a set of primitive types Pr, or atomic formulas by three binary connectives “ / ” (over), “ \ ” (under) and “•” (product): Tp ::= Pr | Tp \ Tp | Tp / Tp | Tp •Tp. As a logical system, we use a Gentzen-style sequent presentation. A sequent Γ ⊢A is composed of a sequence of formulas Γ which is the antecedent configuration and a succedent formula A. Let Σ be a fixed alphabet. A categorial grammar over Σ is a finite relation G between Σ and Tp. If < c, A >∈G, we say that G assigns A to c, and we write G : c 7→A. 2.1.1 Lambek Derivation ⊢L The relation ⊢L is the smallest relation ⊢between Tp+ and Tp, such that for all Γ, Γ′ ∈Tp+, ∆, ∆′ ∈ Tp∗and for all A, B, C ∈Tp : ∆, A, ∆′ ⊢C Γ ⊢A (Cut) ∆, Γ, ∆′ ⊢C A ⊢A (Id) Γ ⊢A ∆, B, ∆′ ⊢C /L ∆, B / A, Γ, ∆′ ⊢C Γ, A ⊢B /R Γ ⊢B / A Γ ⊢A ∆, B, ∆′ ⊢C \L ∆, Γ, A \ B, ∆′ ⊢C A, Γ ⊢B \R Γ ⊢A \ B ∆, A, B, ∆′ ⊢C •L ∆, A • B, ∆′ ⊢C Γ ⊢A Γ′ ⊢B •R Γ, Γ′ ⊢A • B We write L∅for the Lambek calculus with empty antecedents (left part of the sequent). 2.1.2 Non-associative Lambek Derivation ⊢NL In the Gentzen presentation, the derivability relation of NL holds between a term in S and a formula in Tp, where the term language is S ::= Tp|(S, S). Terms in S are also called G-terms. A sequent is a pair (Γ, A) ∈S × Tp. The notation Γ[∆] represents a G-term with a distinguished occurrence of ∆ (with the same position in premise and conclusion of a rule). The relation ⊢NL is the smallest relation ⊢between S and Tp, such that for all Γ, ∆∈S and for all A, B, C ∈Tp : Γ[A] ⊢C ∆⊢A (Cut) Γ[∆] ⊢C A ⊢A (Id) Γ ⊢A ∆[B] ⊢C /L ∆[(B / A, Γ)] ⊢C (Γ, A) ⊢B /R Γ ⊢B / A Γ ⊢A ∆[B] ⊢C \L ∆[(Γ, A \ B)] ⊢C (A, Γ) ⊢B \R Γ ⊢A \ B ∆[(A, B)] ⊢C •L ∆[A • B] ⊢C Γ ⊢A ∆⊢B •R (Γ, ∆) ⊢(A • B) We write NL∅for the non-associative Lambek calculus with empty antecedents (left part of the sequent). 2.1.3 Notes Cut elimination. We recall that cut rule can be eliminated in ⊢L and ⊢NL: every derivable sequent has a cut-free derivation. Type order. The order ord(A) of a type A of L or NL is defined by: ord(A) = 0 if A is a primitive type ord(C1 / C2) = max(ord(C1), ord(C2) + 1) ord(C1 \ C2) = max(ord(C1) + 1, ord(C2)) ord(C1 • C2) = max(ord(C1), ord(C2)) 2.1.4 Language. Let G be a categorial grammar over Σ. G generates a string c1 . . . cn ∈Σ+ iff there are types A1, . . . , An ∈Tp such that: G : ci 7→Ai (1 ≤i ≤ n) and A1, . . . , An ⊢L S. The language of G, written LL(G) is the set of strings generated by G. 
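For readers who wish to experiment with the definitions above, the types of Section 2.1 and the order measure are straightforward to encode; the following sketch is an illustration only (the constructor names are assumptions) and is not part of the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prim:            # primitive type, e.g. S, N, p, q
    name: str

@dataclass(frozen=True)
class Over:            # B / A
    left: object
    right: object

@dataclass(frozen=True)
class Under:           # A \ B
    left: object
    right: object

@dataclass(frozen=True)
class Prod:            # A * B (product)
    left: object
    right: object

def ord_(t):
    """Order of a type, as defined in Section 2.1.3."""
    if isinstance(t, Prim):
        return 0
    if isinstance(t, Over):                      # ord(C1 / C2) = max(ord(C1), ord(C2) + 1)
        return max(ord_(t.left), ord_(t.right) + 1)
    if isinstance(t, Under):                     # ord(C1 \ C2) = max(ord(C1) + 1, ord(C2))
        return max(ord_(t.left) + 1, ord_(t.right))
    if isinstance(t, Prod):                      # ord(C1 * C2) = max(ord(C1), ord(C2))
        return max(ord_(t.left), ord_(t.right))
    raise TypeError(t)
```

For instance, ord_(Under(Prim("p"), Prim("q"))) evaluates to 1, and the type-raised q / (p \ q) used later in the construction has order 2.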
We define similarly LL∅(G), LNL(G) and LNL∅(G) replacing ⊢L by ⊢L∅, ⊢NL and ⊢NL∅in the sequent where the types are parenthesized in some way. 2.1.5 Notation. In some sections, we may write simply ⊢instead of ⊢L, ⊢L∅, ⊢NL or ⊢NL∅. We may simply write L(G) accordingly. 2.1.6 Rigid and k-valued Grammars. Categorial grammars that assign at most k types to each symbol in the alphabet are called k-valued grammars; 1-valued grammars are also called rigid grammars. Example 1 Let Σ1 = {John, Mary, likes} and let Pr = {S, N} for sentences and nouns respectively. Let G1 = {John 7→N, Mary 7→N, likes 7→ N \ (S / N)}. We get (John likes Mary) ∈ LNL(G1) since ((N, N \ (S / N)), N) ⊢NL S. G1 is a rigid (or 1-valued) grammar. 2.2 Learning and Limit Points We now recall some useful definitions and known properties on learning. 2.2.1 Limit Points A class CL of languages has a limit point iff there exists an infinite sequence < Ln >n∈N of languages in CL and a language L ∈CL such that: L0 ⊊L1 . . . ⊊... ⊊Ln ⊊. . . and L = S n∈N Ln (L is a limit point of CL). 2.2.2 Limit Points Imply Unlearnability The following property is important for our purpose. If the languages of the grammars in a class G have a limit point then the class G is unlearnable. 1 2.3 Some Useful Models For ease of proof, in next section we use two kinds of models that we now recall: free groups and pregroups introduced recently by (Lambek, 1999) as an alternative of existing type grammars. 2.3.1 Free Group Interpretation. Let FG denote the free group with generators Pr, operation · and with neutral element 1. We associate with each formula C of L or NL, an element in FG written [[C]] as follows: [[A]] = A if A is a primitive type [[C1 \ C2]] = [[C1]]−1 · [[C2]] [[C1 / C2]] = [[C1]] · [[C2]]−1 [[C1 • C2]] = [[C1]] · [[C2]] We extend the notation to sequents by: [[C1, C2, . . . , Cn]] = [[C1]] · [[C2]] · · · · · [[Cn]] The following property states that FG is a model for L (hence for NL): if Γ ⊢L C then [[Γ]] =F G [[C]] 2.3.2 Free Pregroup Interpretation Pregroup. A pregroup is a structure (P, ≤ , ·, l, r, 1) such that (P, ≤, ·, 1) is a partially ordered monoid2 and l, r are two unary operations on P that satisfy for all a ∈P ala ≤1 ≤aal and aar ≤1 ≤ara. Free pregroup. Let (P, ≤) be an ordered set of primitive types, P ( ) = {p(i) | p ∈P, i ∈Z} is the set of atomic types and T(P,≤) = P ( )∗= {p(i1) 1 · · · p(in) n | 0 ≤k ≤n, pk ∈P and ik ∈Z} is the set of types. For X and Y ∈T(P,≤), X ≤Y iif this relation is deductible in the following system where p, q ∈P, n, k ∈Z and X, Y, Z ∈T(P,≤): 1This implies that the class has infinite elasticity. A class CL of languages has infinite elasticity iff ∃< ei >i∈N sentences ∃< Li >i∈N languages in CL ∀i ∈N : ei ̸∈Li and {e1, . . . , en} ⊆Ln+1 . 2We briefly recall that a monoid is a structure < M, ·, 1 >, such that · is associative and has a neutral element 1 (∀x ∈ M : 1 · x = x · 1 = x). A partially ordered monoid is a monoid M, ·, 1) with a partial order ≤that satisfies ∀a, b, c: a ≤b ⇒c · a ≤c · b and a · c ≤b · c. X ≤X (Id) X ≤Y Y ≤Z (Cut) X ≤Z XY ≤Z (AL) Xp(n)p(n+1)Y ≤Z X ≤Y Z (AR) X ≤Y p(n+1)p(n)Z Xp(k)Y ≤Z (INDL) Xq(k)Y ≤Z X ≤Y p(k)Z (INDR) X ≤Y q(k)Z q ≤p if k is even, and p ≤q if k is odd This construction, proposed by Buskowski, defines a pregroup that extends ≤on primitive types P to T(P,≤)3. Cut elimination. As for L and NL, cut rule can be eliminated: every derivable inequality has a cut-free derivation. Simple free pregroup. 
A simple free pregroup is a free pregroup where the order on primitive type is equality. Free pregroup interpretation. Let FP denotes the simple free pregroup with Pr as primitive types. We associate with each formula C of L or NL, an element in FP written [C] as follows: [A] = A if A is a primitive type [C1 \ C2] = [C1]r[C2] [C1 / C2] = [C1][C2]l [C1 • C2] = [C1][C2] We extend the notation to sequents by: [A1, . . . , An] = [A1] · · · [An] The following property states that FP is a model for L (hence for NL): if Γ ⊢L C then [Γ] ≤FP [C]. 3 Limit Point Construction 3.1 Method overview and remarks Form of grammars. We define grammars Gn where A, B, Dn and En are complex types and S is the main type of each grammar: Gn = {a 7→A / B; b 7→Dn; c 7→En \ S} Some key points. • We prove that {akbc | 0 ≤k ≤n} ⊆L(Gn) using the following properties: 3Left and right adjoints are defined by (p(n))l = p(n−1), (p(n))r = p(n+1), (XY )l = Y lXl and (XY )r = Y rXr. We write p for p(0). B ⊢A (but A ̸⊢B) (A / B, Dn+1) ⊢Dn Dn ⊢En En ⊢En+1 we get: bc ∈L(Gn) since Dn ⊢En if w ∈L(Gn) then aw ∈L(Gn+1) since (A / B, Dn+1) ⊢Dn ⊢En ⊢En+1 • The condition A ̸⊢B is crucial for strictness of language inclusion. In particular: (A / B, A) ̸⊢A, where A = D0 • This construction is in some sense more complex than those for the other systems (Foret and Le Nir, 2002a; Foret and Le Nir, 2002b) since they do not directly translate as limit points in the more restricted system NL. 3.2 Definition and Main Results Definitions of Rigid grammars Gn and G∗ Definition 1 Let p, q, S, three primitive types. We define: A = D0 = E0 = q / (p \ q) B = p Dn+1 = (A / B) \ Dn En+1 = (A / A) \ En Let Gn =    a 7→A / B = (q / (p \ q)) / p b 7→Dn c 7→En \ S    Let G∗= {a 7→(p / p) b 7→p c 7→(p \ S)} Main Properties Proposition 1 (language description) • L(Gn) = {akbc | 0 ≤k ≤n} • L(G∗) = {akbc | 0 ≤k}. From this construction we get a limit point and the following result. Proposition 2 (NL-non-learnability) The class of languages of rigid (or k-valued for an arbitrary k) non-associative Lambek grammars (not allowing empty sequence and without product) admits a limit point ; the class of rigid (or k-valued for an arbitrary k) non-associative Lambek grammars (not allowing empty sequence and without product) is not learnable from strings. 3.3 Details of proof for Gn Lemma {akbc | 0 ≤k ≤n} ⊆L(Gn) Proof: It is relatively easy to see that for 0 ≤ k ≤n, akbc ∈L(Gn). We have to consider ((a · · · (a(a | {z } k b)) · · · )c) and prove the following sequent in NL: ( (a···(a z }| { ((A / B), . . . , ((A / B), | {z } k b z }| { ((A / B) \ · · · \ ((A / B) \ | {z } n A) · · · ), · · · ), c z }| { ((A / A) \ · · · \ ((A / A) \ | {z } n A) · · · ) \ S)) ⊢NL S Models of NL For the converse, (for technical reasons and to ease proofs) we use both free group and free pregroup models of NL since a sequent is valid in NL only if its interpretation is valid in both models. 
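The grammars G_n and G* of Definition 1 can be generated mechanically from the type encoding sketched in Section 2; the helper names below are assumptions, and the snippet is only a companion to the definition, not part of the construction itself.

```python
# Assumes the Prim, Over, Under dataclasses from the earlier sketch.
p, q, S = Prim("p"), Prim("q"), Prim("S")

A = Over(q, Under(p, q))         # A = D0 = E0 = q / (p \ q)
B = p

def D(n):                        # D_{n+1} = (A / B) \ D_n, with D_0 = A
    t = A
    for _ in range(n):
        t = Under(Over(A, B), t)
    return t

def E(n):                        # E_{n+1} = (A / A) \ E_n, with E_0 = A
    t = A
    for _ in range(n):
        t = Under(Over(A, A), t)
    return t

def G(n):
    """Rigid lexicon of the grammar G_n."""
    return {"a": Over(A, B), "b": D(n), "c": Under(E(n), S)}

G_star = {"a": Over(p, p), "b": p, "c": Under(p, S)}
```

Feeding these types into the free group interpretation of the next paragraph, one can check mechanically that [[A / B]] reduces to 1 and [[D_n]] to p, as stated there.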
Translation in free groups The free group translation for the types of Gn is: [[p]] = p, [[q]] = q, [[S]] = S [[x / y]] = [[x]] · [[y]]−1 [[x \ y]] = [[x]]−1 · [[y]] [[x • y]] = [[x]] · [[y]] Type-raising disappears by translation: [[x / (y \ x)]] = [[x]] · ([[y]]−1 · [[x]])−1 = [[y]] Thus, we get : [[A]] = [[D0]] = [[E0]] = [[q / (p \ q)]] = p [[B]] = p [[A / B]] = [[A]] · [[B]]−1 = pp−1 = 1 [[Dn+1]] = [[(A / B) \ Dn]] = [[Dn]] = [[D0]] = p [[En+1]] = [[(A / A) \ En]] = [[En]] = [[E0]] = p Translation in free pregroups The free pregroup translation for the types of Gn is: [p] = p, [q] = q, [S] = S [x \ y] = [x]r[y] [y / x] = [y][x]l [x • y] = [x][y] Type-raising translation: [x / (y \ x)] = [x]([y]r[x])l = [x][x]l[y] [x / (x \ x)] = [x]([x]r[x])l = [x][x]l[x] = [x] Thus, we get: [A] = [D0] = [E0] = [q / (p \ q)] = qqlp [B] = p [A / B] = [A][B]l = qqlppl [Dn+1] = [(A / B)]r[Dn] = pprqqr | {z } n+1 qqlp [En+1] = [(A / A) \ En] = [A][A]lqqlp = qqlp Lemma L(Gn) ⊆{akbak′cak′′; 0 ≤k, 0 ≤k′, 0 ≤k′′} Proof: Let τn denote the type assignment by the rigid grammar Gn. Suppose τn(w) ⊢S, using free groups [[τn(w)]] = S; - This entails that w has exactly one occurrence of c (since [[τn(c)]] = p−1S and the other type images are either 1 or p) - Then, this entails that w has exactly one occurrence of b on the left of the occurrence of c (since [[τn(c)]] = p−1S, [[τn(b)]] = p and [[τn(a)]] = 1) Lemma L(Gn) ⊆{akbc | 0 ≤k} Proof: Suppose τn(w) ⊢ S, using pregroups [τn(w)] ≤S. We can write w = akbak′cak′′ for some k, k′, k′′, such that: [τn(w)] = qqlppl | {z } k pprqqr | {z } n qqlp qqlppl | {z } k′ prqqrS qqlppl | {z } k′′ For q = 1, we get ppl |{z} k ppr |{z} n p ppl |{z} k′ prS ppl |{z} k′′ ≤S and it yields p ppl |{z} k′ prS ppl |{z} k′′ ≤S. We now discuss possible deductions (note that pplppl · · · ppl = ppl): • if k′ and k′′ ̸= 0: ppplprSppl ≤S impossible. • if k′ ̸= 0 and k′′ = 0: ppplprS ≤S impossible. • if k′ = 0 and k′′ ̸= 0: pprSppl ≤S impossible. • if k′ = k′′ = 0: w ∈{akbc | 0 ≤k} (Final) Lemma L(Gn) ⊆{akbc | 0 ≤k ≤n} Proof: Suppose τn(w) ⊢ S, using pregroups [τn(w)] ≤S. We can write w = akbc for some k, such that : [τn(w)] = qqlppl | {z } k pprqqr | {z } n qqlpprqqrS We use the following property (its proof is in Appendix A) that entails that 0 ≤k ≤n. (Auxiliary) Lemma: if (1) X, Y, qqlp, prqqr, S ≤S where X ∈{ppl, qql}∗and Y ∈{qqr, ppr}∗ then  (2) nbalt(Xqql) ≤nbalt(qqrY ) (2bis) nbalt(Xppl) ≤nbalt(pprY ) where nbalt counts the alternations of p’s and q’s sequences (forgetting/dropping their exponents). 3.4 Details of proof for G∗ Lemma {akbc | 0 ≤k} ⊆L(G∗) Proof: As with Gn, it is relatively easy to see that for k ≥0, akbc ∈L(G∗). We have to consider ((a · · · (a(a | {z } k b)) · · · )c) and prove the following sequent in NL: (((p / p), . . . , ((p / p), | {z } k p) · · · ), (p \ S)) ⊢NL S Lemma L(G∗) ⊆{akbc | 0 ≤k} Proof: Like for w ∈Gn, due to free groups, a word of L(G∗) has exactly one occurrence of c and one occurrence of b on the left of c (since [[τ∗(c)]] = p−1S, [[τ∗(b)]] = p and [[τ∗(a)]] = 1). Suppose w = akbak′cak′′ a similar discussion as for Gn in pregroups, gives k′ = k′′ = 0, hence the result 3.5 Non-learnability of a Hierarchy of Systems An interest point of this construction: It provides a limit point for the whole hierarchy of Lambek grammars, and pregroup grammars. Limit point for pregroups The translation [·] of Gn gives a limit point for the simple free pregroup since for i ∈{∗, 0, 1, 2, . . . 
}: τi(w) ⊢NL S iff w ∈LNL(Gi) by definition ; τi(w) ⊢NL S implies [τi(w)] ≤S by models ; [τi(w)] ≤S implies w ∈LNL(Gi) from above. Limit point for NL∅ The same grammars and languages work since for i ∈{∗, 0, 1, 2, . . . }: τi(w) ⊢NL S iff [τi(w)] ≤S from above ; τi(w) ⊢NL S implies τi(w) ⊢NL∅S by hierarchy ; τi(w) ⊢NL∅S implies [τi(w)] ≤S by models. Limit point for L and L∅ The same grammars and languages work since for i ∈{∗, 0, 1, 2, . . . } : τi(w) ⊢NL S iff [τi(w)] ≤S from above ; τi(w) ⊢NL S implies τi(w) ⊢L S using hierarchy ; τi(w) ⊢L S implies τi(w) ⊢L∅S using hierarchy ; τi(w) ⊢L∅S implies [τi(w)] ≤S by models. To summarize : w ∈LNL(Gi) iff [τi(w)] ≤S iff w ∈LNL∅(Gi) iff w ∈LL(Gi) iff w ∈LL∅(Gi) 4 Conclusion and Remarks Lambek grammars. We have shown that without empty sequence, non-associative Lambek rigid grammars are not learnable from strings. With this result, the whole landscape of Lambek-like rigid grammars (or k-valued for an arbitrary k) is now described as for the learnability question (from strings, in Gold’s model). Non-learnability for subclasses. Our construct is of order 5 and does not use the product operator. Thus, we have the following corollaries: • Restricted connectives: k-valued NL, NL∅, L and L∅grammars without product are not learnable from strings. • Restricted type order: - k-valued NL, NL∅, L and L∅grammars (without product) with types not greater than order 5 are not learnable from strings4. - k-valued free pregroup grammars with types not greater than order 1 are not learnable from strings5. The learnability question may still be raised for NL grammars of order lower than 5. 4Even less for some systems. For example in L∅, all En collapse to A 5The order of a type pi1 1 · · · pik k is the maximum of the absolute value of the exponents: max(|i1|, . . . , |ik|). Special learnable subclasses. Note that however, we get specific learnable subclasses of k-valued grammars when we consider NL, NL∅, L or L∅ without product and we bind the order of types in grammars to be not greater than 1. This holds for all variants of Lambek grammars as a corollary of the equivalence between generation in classical categorial grammars and in Lambek systems for grammars with such product-free types (Buszkowski, 2001). Restriction on types. An interesting perspective for learnability results might be to introduce reasonable restrictions on types. From what we have seen, the order of type alone (order 1 excepted) does not seem to be an appropriate measure in that context. Structured examples. These results also indicate the necessity of using structured examples as input of learning algorithms. What intermediate structure should then be taken as a good alternative between insufficient structures (strings) and linguistic unrealistic structures (full proof tree structures) remains an interesting challenge. References E. Aarts and K. Trautwein. 1995. Non-associative Lambek categorial grammar in polynomial time. Mathematical Logic Quaterly, 41:476–484. Dana Angluin. 1980. Inductive inference of formal languages from positive data. Information and Control, 45:117–135. Y. Bar-Hillel. 1953. A quasi arithmetical notation for syntactic description. Language, 29:47–58. Wojciech Buszkowski and Gerald Penn. 1990. Categorial grammars determined from linguistic data by unification. Studia Logica, 49:431–454. W. Buszkowski. 1997. Mathematical linguistics and proof theory. In van Benthem and ter Meulen (van Benthem and ter Meulen, 1997), chapter 12, pages 683–736. Wojciech Buszkowski. 2001. 
Lambek grammars based on pregroups. In Philippe de Groote, Glyn Morill, and Christian Retor´e, editors, Logical aspects of computational linguistics: 4th International Conference, LACL 2001, Le Croisic, France, June 2001, volume 2099. Springer-Verlag. Philippe de Groote and Franc¸ois Lamarche. 2002. Classical non-associative lambek calculus. Studia Logica, 71.1 (2). Philippe de Groote. 1999. Non-associative Lambek calculus in polynomial time. In 8th Workshop on theorem proving with analytic tableaux and related methods, number 1617 in Lecture Notes in Artificial Intelligence. Springer-Verlag, March. Dudau-Sofronie, Tellier, and Tommasi. 2001. Learning categorial grammars from semantic types. In 13th Amsterdam Colloquium. C. Costa Florˆencio. 2002. Consistent Identification in the Limit of the Class k-valued is NP-hard. In LACL. Annie Foret and Yannick Le Nir. 2002a. Lambek rigid grammars are not learnable from strings. In COLING’2002, 19th International Conference on Computational Linguistics, Taipei, Taiwan. Annie Foret and Yannick Le Nir. 2002b. On limit points for some variants of rigid lambek grammars. In ICGI’2002, the 6th International Colloquium on Grammatical Inference, number 2484 in Lecture Notes in Artificial Intelligence. Springer-Verlag. E.M. Gold. 1967. Language identification in the limit. Information and control, 10:447–474. Makoto Kanazawa. 1998. Learnable classes of categorial grammars. Studies in Logic, Language and Information. FoLLI & CSLI. distributed by Cambridge University Press. Maciej Kandulski. 1988. The non-associative lambek calculus. In W. Marciszewski W. Buszkowski and J. Van Bentem, editors, Categorial Grammar, pages 141–152. Benjamins, Amsterdam. Joachim Lambek. 1958. The mathematics of sentence structure. American mathematical monthly, 65:154– 169. Joachim Lambek. 1961. On the calculus of syntactic types. In Roman Jakobson, editor, Structure of language and its mathematical aspects, pages 166–178. American Mathematical Society. J. Lambek. 1999. Type grammars revisited. In Alain Lecomte, Franc¸ois Lamarche, and Guy Perrier, editors, Logical aspects of computational linguistics: Second International Conference, LACL ’97, Nancy, France, September 22–24, 1997; selected papers, volume 1582. Springer-Verlag. Michael Moortgat. 1997. Categorial type logic. In van Benthem and ter Meulen (van Benthem and ter Meulen, 1997), chapter 2, pages 93–177. Jacques Nicolas. 1999. Grammatical inference as unification. Rapport de Recherche RR-3632, INRIA. http://www.inria.fr/RRRT/publications-eng.html. Mati Pentus. 1993. Lambek grammars are context-free. In Logic in Computer Science. IEEE Computer Society Press. Christian Retor´e and Roberto Bonato. september 2001. Learning rigid lambek grammars and minimalist grammars from struc tured sentences. Third workshop on Learning Language in Logic, Strasbourg. T. Shinohara. 1990. Inductive inference from positive data is powerful. In The 1990 Workshop on Computational Learning Theory, pages 97–110, San Mateo, California. Morgan Kaufmann. J. van Benthem and A. ter Meulen, editors. 1997. Handbook of Logic and Language. North-Holland Elsevier, Amsterdam. Appendix A. Proof of Auxiliary Lemma (Auxiliary) Lemma: if (1) XY qqlpprqqrS ≤S where X ∈{ppl, qql}∗and Y ∈{qqr, ppr}∗ then  (2) nbalt(Xqql) ≤nbalt(qqrY ) (2bis) nbalt(Xppl) ≤nbalt(pprY ) where nbalt counts the alternations of p’s and q’s sequences (forgetting/dropping their exponents). Proof: By induction on derivations in Gentzen style presentation of free pregroups (without Cut). 
Suppose XY ZS ≤S where    X ∈{ppl, qql}∗ Y ∈{qqr, ppr}∗ Z ∈{(qqlpprqqr), (qqlqqr), (qqr), 1} We show that  nbalt(Xqql) ≤nbalt(qqrY ) nbalt(Xppl) ≤nbalt(pprY ) The last inference rule can only be (AL) • Case (AL) on X: The antecedent is similar with X′ instead of X, where X is obtained from X′ by insertion (in fact inserting qlq in the middle of qql as the replacement of qql with qqlqql or similarly with p instead of q). - By such an insertion: (i) nbalt(X′qql) = nbalt(Xqql) (similar for p). - By induction hypothesis: (ii) nbalt(X′qql) ≤ nbalt(qqrY ) (similar for p). - Therefore from (i) (ii): nbalt(Xqql) ≤ nbalt(qqrY ) (similar for p). • Case (AL) on Y : The antecedent is XY ′ZS ≤ S where Y is obtained from Y ′ by insertion (in fact insertion of ppr or qqr), such that Y ′ ∈{ppr, qqr}∗. Therefore the induction applies nbalt(Xqql) ≤nbalt(qqrY ′) and nbalt(qqrY ) ≥ nbalt(qqrY ′) (similar for p) hence the result. • Case (AL) on Z ( Z non empty): - if Z = (qqlpprqqr) the antecedent is XY Z′S ≤S, where Z′ = qqlqqr. - if Z = (qqlqqr) the antecedent is XY Z′S ≤ S, where Z′ = qqr ; - if Z = (qqr) the antecedent is XY Z′S ≤S, where Z′ = ϵ. In all three cases the hypothesis applies to XY Z ′ and gives the relationship between X and Y . • case (AL) between X and Y : Either X = X′′qql and Y = qqrY ′′ or X = X′′ppl and Y = pprY ′′. In the q case, the last inference step is the introduction of qlq: X′′qqrY ′′ZS≤S X′′qql | {z } X qqrY ′′ | {z } Y ZS≤S We now detail the q case. The antecedent can be rewritten as X′′Y ZS ≤S and we have: (i) nbalt(Xqql) = nbalt(X′′qqlqql) = nbalt(X′′qql) nbalt(Xppl) = nbalt(X′′qqlppl) = 1 + nbalt(X′′qql) nbalt(qqrY ) = nbalt(qqrqqrY ′′) = nbalt(qqrY ′′) nbalt(pprY ) = nbalt(pprqqrY ′′) = 1 + nbalt(qqrY ′′) We can apply the induction hypothesis to X′′Y ZS ≤S and get (ii): nbalt(X′′qql) ≤nbalt(qqrY ) Finally from (i) (ii) and the induction hypothesis: nbalt(Xqql) = nbalt(X′′qql) ≤ nbalt(qqrY ) nbalt(Xppl) = 1 + nbalt(X′′qql) ≤ 1 + nbalt(qqrY ) = 1 + nbalt(qqrqqrY ′′) = 1 + nbalt(qqrY ′′) = nbalt(pprY ) The second case with p instead of q is similar.
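As an informal aid to the counting argument above, the sketch below (not part of the original paper) computes nbalt on a toy encoding of pregroup types: a type sequence is represented as a list of (letter, exponent) pairs, exponents are dropped, and nbalt is taken to be the number of maximal blocks of equal base letters. The encoding, the exponent convention (l read as -1, r as +1) and the function name are illustrative assumptions only.

from typing import List, Tuple

# Illustrative sketch of the nbalt count used in the auxiliary lemma.
def nbalt(types: List[Tuple[str, int]]) -> int:
    """Number of alternations of p- and q-blocks, ignoring exponents."""
    letters = [letter for letter, _exp in types]
    blocks = 0
    previous = None
    for letter in letters:
        if letter != previous:      # a new maximal block of p's or q's starts here
            blocks += 1
            previous = letter
    return blocks

# Example corresponding to X = q q^l, then appending q^l versus p^l:
X = [('q', 0), ('q', -1)]
print(nbalt(X + [('q', -1)]))   # same letter block  -> nbalt unchanged (1)
print(nbalt(X + [('p', -1)]))   # new p-block starts -> nbalt incremented (2)

This mirrors the two recurrences used in the case analysis: appending another q-type to a sequence ending in q leaves nbalt unchanged, while appending a p-type increments it by one.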
2003
45
Parsing with generative models of predicate-argument structure Julia Hockenmaier IRCS, University of Pennsylvania, Philadelphia, USA and Informatics, University of Edinburgh, Edinburgh, UK [email protected] Abstract The model used by the CCG parser of Hockenmaier and Steedman (2002b) would fail to capture the correct bilexical dependencies in a language with freer word order, such as Dutch. This paper argues that probabilistic parsers should therefore model the dependencies in the predicate-argument structure, as in the model of Clark et al. (2002), and defines a generative model for CCG derivations that captures these dependencies, including bounded and unbounded long-range dependencies. 1 Introduction State-of-the-art statistical parsers for Penn Treebank-style phrase-structure grammars (Collins, 1999), (Charniak, 2000), but also for Categorial Grammar (Hockenmaier and Steedman, 2002b), include models of bilexical dependencies defined in terms of local trees. However, this paper demonstrates that such models would be inadequate for languages with freer word order. We use the example of Dutch ditransitives, but our argument equally applies to other languages such as Czech (see Collins et al. (1999)). We argue that this problem can be avoided if instead the bilexical dependencies in the predicate-argument structure are captured, and propose a generative model for these dependencies. The focus of this paper is on models for Combinatory Categorial Grammar (CCG, Steedman (2000)). Due to CCG’s transparent syntax-semantics interface, the parser has direct and immediate access to the predicate-argument structure, which includes not only local, but also long-range dependencies arising through coordination, extraction and control. These dependencies can be captured by our model in a sound manner, and our experimental results for English demonstrate that their inclusion improves parsing performance. However, since the predicate-argument structure itself depends only to a degree on the grammar formalism, it is likely that parsers that are based on other grammar formalisms could equally benefit from such a model. The conditional model used by the CCG parser of Clark et al. (2002) also captures dependencies in the predicate-argument structure; however, their model is inconsistent. First, we review the dependency model proposed by Hockenmaier and Steedman (2002b). We then use the example of Dutch ditransitives to demonstrate its inadequacy for languages with a freer word order. This leads us to define a new generative model of CCG derivations, which captures word-word dependencies in the underlying predicate-argument structure. We show how this model can capture long-range dependencies, and deal with the presence of multiple dependencies that arise through the presence of long-range dependencies. In our current implementation, the probabilities of derivations are computed during parsing, and we discuss the difficulties of integrating the model into a probabilistic chart parsing regime. Since there is no CCG treebank for other languages available, experimental results are presented for English, using CCGbank (Hockenmaier and Steedman, 2002a), a translation of the Penn Treebank to CCG. These results demonstrate that this model benefits greatly from the inclusion of long-range dependencies. 2 A model of surface dependencies Hockenmaier and Steedman (2002b) define a surface dependency model (henceforth: SD) HWDep which captures word-word dependencies that are defined in terms of the derivation tree itself. 
It assumes that binary trees (with parent category P) have one head child (with category H) and one nonhead child (with category D), and that each node has one lexical head h = hc; w i. In the following tree, P = S[dcl]nNP, H = (S[dcl]nNP)=NP, D= NP, h H = h(S[dcl]nNP)=NP ; opened i, and h D = hN ; doors i. S[dcl]nNP (S[dcl]nNP)=NP opened NP its doors The model conditions w D on its own lexical category c D, on h H = hc H ; w H i and on the local tree  in which the D is generated (represented in terms of the categories hP ; H ; D i): P (w D jc D ;  = hP ; H ; D i; h H = hc H ; w H i) 3 Predicate-argument structure in CCG Like Clark et al. (2002), we define predicateargument structure for CCG in terms of the dependencies that hold between words with lexical functor categories and their arguments. We assume that a lexical head is a pair hc; w i, consisting of a word w and its lexical category c. Each constituent has at least one lexical head (more if it is a coordinate construction). The arguments of functor categories are numbered from 1 to n, starting at the innermost argument, where n is the arity of the functor, eg. (S[dcl]nNP 1 )=NP 2, (NPnNP 1 )=(S[dcl]=NP) 2. Dependencies hold between lexical heads whose category is a functor category and the lexical heads of their arguments. Such dependencies can be expressed as 3-tuples hhc; w i; i; hc 0 ; w 0 ii, where c is a functor category with arity  i, and hc 0 ; w 0 i is a lexical head of the ith argument of c. The predicate-argument structure that corresponds to a derivation contains not only local, but also long-range dependencies that are projected from the lexicon or through some rules such as the coordination of functor categories. For details, see Hockenmaier (2003). 4 Word-word dependencies in Dutch Dutch has a much freer word order than English. The analyses given in Steedman (2000) assume that this can be accounted for by an extended use of composition. As indicated by the indices (which are only included to improve readability), in the following examples, hij is the subject (NP 3) of geeft, de politieman the indirect object (NP 2), and een bloem the direct object (NP 1).1 Hij geeft de politieman een bloem (He gives the policeman a flower) S=(S=NP 3 ) ((S=NP 1 )=NP 2 )=NP 3 Tn(T=NP 2 ) Tn(T=NP 1 ) <B Tn((T=NP 1 )=NP 2 ) < B  S=NP 3 > S Een bloem geeft hij de politieman S=(S=NP 1 ) ((S=NP 1 )=NP 2 )=NP 3 Tn(T=NP 3 ) Tn(T=NP 2 ) < (S=NP 1 )=NP 2 < S=NP 1 > S De politieman geeft hij een bloem S=(S=NP 2 ) ((S=NP 1 )=NP 2 )=NP 3 Tn(T=NP 3 ) Tn(T=NP 1 ) < (S=NP 1 )=NP 2 < B  S=NP 2 > S A SD model estimated from a corpus containing these three sentences would not be able to capture the correct dependencies. Unless we assume that the above indices are given as a feature on the NP categories, the model could not distinguish between the dependency relations of Hij and geeft in the first sentence, bloem and geeft in the second sentence and politieman and geeft in the third sentence. Even with the indices, either the dependency between politieman and geeft or between bloem and geeft in the first sentence could not be captured by a model that assumes that each local tree has exactly one head. Furthermore, if one of these sentences occurred in the training data, all of the dependencies in the other variants of this sentence would be unseen to the model. However, in terms of the predicateargument structure, all three examples express the same relations. 
The model we propose here would therefore be able to generalize from one example to the word-word dependencies in the other examples. 1The variables T are uninstantiated for reasons of space. The cross-serial dependencies of Dutch are one of the syntactic constructions that led people to believe that more than context-free power is required for natural language analysis. Here is an example together with the CCG derivation from Steedman (2000): dat ik Cecilia de paarden zag voeren (that I Cecilia the horses saw feed) NP 1 NP 2 NP 3 ((Sn NP 1 ) n NP 2 ) = VP VP n NP 3 > B  ((SnNP 1 )nNP 2 )nNP 3 < (SnNP 1 )nNP 2 < SnNP 1 < S Again, a local dependency model would systematically model the wrong dependencies in this case, since it would assume that all noun phrases are arguments of the same verb. However, since there is no Dutch corpus that is annotated with CCG derivations, we restrict our attention to English in the remainder of this paper. 5 A model of predicate-argument structure We first explain how word-word dependencies in the predicate-argument structure can be captured in a generative model, and then describe how these probabilities are estimated in the current implementation. 5.1 Modelling local dependencies We first define the probabilities for purely local dependencies without coordination. By excluding nonlocal dependencies and coordination, at most one dependency relation holds for each word. Consider the following sentence: S[dcl] NP N Smith S[dcl]nNP S[dcl]nNP resigned (SnNP)n(SnNP) yesterday This derivation expresses the following dependencies: hhS[dcl]nNP; resigned i; 1; hN; Smith ii hh(Sn NP)n (Sn NP) ; yesterday i; 2; hS[dcl]nNP ; resigned ii We assume again that heads are generated before their modifiers or arguments, and that word-word dependencies are expressed by conditioning modifiers or arguments on heads. Therefore, the head words of arguments (such as Smith) are generated in the following manner: P (w a jc a ; hhc h ;w h i; i; hc a ; w a ii) The head word of modifiers (such as yesterday) are generated differently: P (w m jc m ; hhc m ;w m i; i; hc h ;w h i) Like Collins (1999) and Charniak (2000), the SD model assumes that word-word dependencies can be defined at the maximal projection of a constituent. However, as the Dutch examples show, the argument slot i can only be determined if the head constituent is fully expanded. For instance, if S[dcl] expands to a non-head S=(S=NP) and to a head S[dcl]=NP, it is necessary to know how the S[dcl]=NP expands to determine which argument is filled by the nonhead, even if we already know that the lexical category of the head word of S[dcl]=NP is a ditransitive ((S[dcl]=NP)=NP)=NP. Therefore, we assume that the non-head child of a node is only expanded after the head child has been fully expanded. 5.2 Modelling long-range dependencies The predicate-argument structure that corresponds to a derivation contains not only local, but also longrange dependencies that are projected from the lexicon or through some rules such as the coordination of functor categories. In the following derivation, Smith is the subject of resigned and of left: S[dcl] NP N Smith S[dcl]nNP S[dcl]nNP resigned S[dcl]nNP[conj] conj and S[dcl]nNP left In order to express both dependencies, Smith has to be conditioned on resigned and on left: P (w =Smith j N;hhS[dcl]nNP; resigned i; 1; hN; w ii; hhS[dcl]nNP; lefti; 1; hN ; w ii) In terms of the predicate-argument structure, resigned and left are both lexical heads of this sentence. 
Since neither fills an argument slot of the other, we assume that they are generated independently. This is different from the SD model, which conditions the head word of the second and subsequent conjuncts on the head word of the first conjunct. Similarly, in a sentence such as Miller and Smith resigned, the current model assumes that the two heads of the subject noun phrase are conditioned on the verb, but not on each other. Argument-cluster coordination constructions such as give a dog a bone and a policeman a flower are another example where the dependencies in the predicate-argument structure cannot be expressed at the level of the local trees that combine the individual arguments. Instead, these dependencies are projected down through the category of the argument cluster: SnNP 1 ((SnNP 1 )=NP 2 )=NP 3 give (SnNP 1 )n(((SnNP 1 )=NP 2 )=NP 3 ) Lexical categories that project long-range dependencies include cases such as relative pronouns, control verbs, auxiliaries, modals and raising verbs. This can be expressed by co-indexing their arguments, eg. (NPnNP i )=(S[dcl]nNP i ) for relative pronouns. Here, Smith is also the subject of resign: S[dcl] NP N Smith S[dcl]nNP (S[dcl]nNP)=(S[b]nNP) will S[b]nNP resign Again, in order to capture this dependency, we assume that the entire verb phrase is generated before the subject. In relative clauses, there is a dependency between the verbs in the relative clause and the head of the noun phrase that is modified by the relative clause: NP NP N Smith NPnNP (NPnNP)=(S[dcl]nNP) who S[dcl]nNP resigned Since the entire relative clause is an adjunct, it is generated after the noun phrase Smith. Therefore, we cannot capture the dependency between Smith and resigned by conditioning Smith on resigned. Instead, resigned needs to be conditioned on the fact that its subject is Smith. This is similar to the way in which head words of adjuncts such as yesterday are generated. In addition to this dependency, we also assume that there is a dependency between who and resigned. It follows that if we want to capture unbounded long-range dependencies such as object extraction, words cannot be generated at the maximal projection of constituents anymore. Consider the following examples: NP NP The woman NPnNP (NPnNP)=(S[dcl]=NP) that S[dcl]=NP S=(SnNP) NP I (S[dcl]nNP)=NP saw NP NP The woman NPnNP (NPnNP)=(S[dcl]=NP) that S[dcl]=NP S=(SnNP) NP I (S[dcl]nNP)=NP (S[dcl]nNP)=NP saw NP=NP NP=PP a picture PP=NP of In both cases, there is a S[dcl]=NP with lexical head (S[dcl]nNP)=NP; however, in the second case, the NP argument is not the object of the transitive verb. This problem can be solved by generating words at the leaf nodes instead of at the maximal projection of constituents. After expanding the (S[dcl]nNP)=NP node to (S[dcl]nNP)=NP and NP=NP, the NP that is co-indexed with woman cannot be unified with the object of saw anymore. These examples have shown that two changes to the generative process are necessary if word-word dependencies in the predicate-argument structure are to be captured. First, head constituents have to be fully expanded before non-head constituents are generated. Second, words have to be generated at the leaves of the tree, not at the maximal projection of constituents. 5.3 The word probabilities Not all words have functor categories or fill argument slots of other functors. For instance, punctuation marks, conjunctions, and the heads of entire sentences are not conditioned on any other words. 
Therefore, they are only conditioned on their lexical categories. Therefore, this model contains the following three kinds of word probabilities: 1. Argument probabilities: P (w jc;hhc 0 ; w 0 i; i; hc; w ii) The probability of generating word w, given that its lexical category is c and that hc; w i is head of the ith argument of hc 0 ; w 0 i. 2. Functor probabilities: P (w jc;hhc; w i; i; hc 0 ; w 0 ii) The probability of generating word w, given that its lexical category is c and that hc 0 ; w 0 i is head of the ith argument of hc; w i. 3. Other word probabilities: P (w jc) If a word does not fill any dependency relation, it is only conditioned on its lexical category. 5.4 The structural probabilities Like the SD model, we assume an underlying process which generates CCG derivation trees starting from the root node. Each node in a derivation tree has a category, a list of lexical heads and a (possibly empty) list of dependency relations to be filled by its lexical heads. As discussed in the previous section, head words cannot in general be generated at the maximal projection if unbounded long-range dependencies are to be captured. This is not the case for lexical categories. We therefore assume that a node’s lexical head category is generated at its maximal projection, whereas head words are generated at the leaf nodes. Since lexical categories are generated at the maximal projection, our model has the same structural probabilities as the LexCat model of Hockenmaier and Steedman (2002b). 5.5 Estimating word probabilities This model generates words in three different ways—as arguments of functors that are already generated, as functors which have already one (or more) arguments instantiated, or independent of the surrounding context. The last case is simple, as this probability can be estimated directly, by counting the number of times c is the lexical category of w in the training corpus, and dividing this by the number of times c occurs as a lexical category in the training corpus: ^ P (w jc) = C (w ; c) C (c) In order to estimate the probability of an argument w, we count the number of times it occurs with lexical category c and is the ith argument of the lexical functor hc 0 ; w 0 i in question, divided by the number of times the ith argument of hc 0 ; w 0 i is instantiated with a constituent whose lexical head category is c: ^ P (w jc; hhc 0 ; w 0 i; i; hc; w ii) = C (hhc 0 ; w 0 i; i; hc; w ii) P w 00 C (hhc 0 ; w 0 i; i; hc; w 00 ii) The probability of a functor w, given that its ith argument is instantiated by a constituent whose lexical head is hc 0 ; w 0 i can be estimated in a similar manner: ^ P (w jc; hhc; w i; i; hc 0 ; w 0 ii) = C (hhc; w i; i; hc 0 ; w 0 ii) P w 00 C (hhc; w 00 i; i; hc 0 ; w 0 ii) Here we count the number of times the ith argument of hc; w i is instantiated with hc 0 ; w 0 i, and divide this by the number of times that hc 0 ; w 0 i is the ith argument of any lexical head with category c. For instance, in order to compute the probability of yesterday modifying resigned as in the previous section, we count the number of times the transitive verb resigned was modified by the adverb yesterday and divide this by the number of times resigned was modified by any adverb of the same category. We have seen that functor probabilities are not only necessary for adjuncts, but also for certain types of long-range dependencies such as the relation between the noun modified by a relative clause and the verb in the relative clause. 
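To make these estimates concrete, here is a small sketch (not code from the paper) of the three relative-frequency estimators just defined, computed from a hypothetical list of gold dependency tuples of the form ⟨⟨c, w⟩, i, ⟨c′, w′⟩⟩ introduced in Section 3. The data structures, toy counts and function names are assumptions for illustration only.

from collections import Counter

# A dependency is ((functor_cat, functor_word), slot, (arg_cat, arg_word)).
# The toy corpus below is purely illustrative.
deps = [
    (("(S[dcl]\\NP)/NP", "opened"), 1, ("N", "doors")),
    (("(S[dcl]\\NP)/NP", "opened"), 1, ("N", "shares")),
    (("(S\\NP)\\(S\\NP)", "yesterday"), 2, ("S[dcl]\\NP", "resigned")),
]
lexical = [("N", "doors"), ("N", "shares"), ("N", "Smith")]   # (category, word) tokens

dep_counts = Counter(deps)
lex_counts = Counter(lexical)

def p_word_given_cat(w, c):
    """P(w | c): plain relative frequency over lexical tokens."""
    tokens_with_c = sum(1 for cat, _ in lexical if cat == c)
    return lex_counts[(c, w)] / tokens_with_c if tokens_with_c else 0.0

def p_argument(w, c, functor, i):
    """P(w | c, <functor, i, <c, w>>): w (category c) fills slot i of `functor`."""
    num = dep_counts[(functor, i, (c, w))]
    den = sum(n for (f, j, (c2, _)), n in dep_counts.items()
              if f == functor and j == i and c2 == c)
    return num / den if den else 0.0

def p_functor(w, c, i, argument):
    """P(w | c, <<c, w>, i, argument>): w (category c) takes `argument` in slot i."""
    num = dep_counts[((c, w), i, argument)]
    den = sum(n for ((c2, _), j, a), n in dep_counts.items()
              if c2 == c and j == i and a == argument)
    return num / den if den else 0.0

print(p_argument("doors", "N", ("(S[dcl]\\NP)/NP", "opened"), 1))   # 0.5
print(p_word_given_cat("doors", "N"))                               # 0.3333...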
In the case of zero or reduced relative clauses, some of these dependencies are also captured by the SD model. However, in that model, only counts from the same type of construction could be used, whereas in our model, the functor probability for a verb in a zero or reduced relative clause can be estimated from all occurrences of the head noun. In particular, all instances of the noun and verb occurring together in the training data (with the same predicate-argument relation between them, but not necessarily with the same surface configuration) are taken into account by the new model. To obtain the model probabilities, the relative frequency estimates of the functor and argument probabilities are both interpolated with the word probabilities P̂(w|c).

5.6 Conditioning events on multiple heads
In the presence of long-range dependencies and coordination, the new model requires the conditioning of certain events on multiple heads. Since it is unlikely that such probabilities can be estimated directly from data, they have to be approximated in some manner. If we assume that all dependencies dep_i that hold for a word are equally likely, we can approximate P(w | c, dep_1, ..., dep_n) as the average of the individual dependency probabilities:

P(w | c, dep_1, ..., dep_n) ≈ (1/n) Σ_{i=1}^{n} P(w | c, dep_i)

This approximation has the advantage that it is easy to compute, but might not give a good estimate, since it averages over all individual distributions.

6 Dynamic programming and beam search
This section describes how this model is integrated into a CKY chart parser. Dynamic programming and effective beam search strategies are essential to guarantee efficient parsing in the face of the high ambiguity of wide-coverage grammars. Both use the inside probability of constituents. In lexicalized models where each constituent has exactly one lexical head, and where this lexical head can only depend on the lexical head of one other constituent, the inside probability of a constituent is the probability that a node with the label and lexical head of this constituent expands to the tree below this node. The probability of generating a node with this label and lexical head is given by the outside probability of the constituent. In the model defined here, the lexical head of a constituent can depend on more than one other word. As explained in section 5.2, there are instances where the categorial functor is conditioned on its arguments – the example given above showed that verbs in relative clauses are conditioned on the lexical head of the noun which is modified by the relative clause. Therefore, the inside probability of a constituent cannot include the probability of any lexical head whose argument slots are not all filled. This means that the equivalence relation defined by the probability model needs to take into account not only the head of the constituent itself, but also all other lexical heads within this constituent which have at least one unfilled argument slot. As a consequence, dynamic programming becomes less effective. There is a related problem for the beam search: in our model, the inside probabilities of constituents within the same cell cannot be directly compared anymore. Instead, the number of unfilled lexical heads needs to be taken into account. If a lexical head ⟨c, w⟩ is unfilled, the evaluation of the probability of w is delayed. This creates a problem for the beam search strategy.
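Before moving on, here is a sketch tying together the averaging approximation of Section 5.6 and the delayed evaluation just described: the individual probabilities P(w | c, dep_i) only become available once all argument slots touching w are filled, at which point they can be averaged. The helper below is a hypothetical illustration, not the paper's implementation; it could reuse estimators like those sketched earlier.

# Sketch of P(w | c, dep_1, ..., dep_n) ~ (1/n) * sum_i P(w | c, dep_i).
# `dep_probs` is assumed to hold the individual probabilities P(w | c, dep_i),
# which the parser can only compute once all of w's dependencies are known;
# until then the word probability stays unevaluated (delayed).

def averaged_word_prob(dep_probs, p_word_given_cat):
    """Average the individual dependency probabilities; fall back to P(w|c)."""
    if not dep_probs:                  # word fills/takes no dependency at all
        return p_word_given_cat
    return sum(dep_probs) / len(dep_probs)

# e.g. "Smith" as subject of both "resigned" and "left" under coordination:
print(averaged_word_prob([0.02, 0.008], p_word_given_cat=0.001))   # 0.014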
The fact that constituents can have more than one lexical head causes similar problems for dynamic programming and the beam search. In order to be able to parse efficiently with our model, we use the following approximations for dynamic programming and the beam search: Two constituents with the same span and the same category are considered equivalent if they delay the evaluation of the probabilities of the same words, if they have the same number of lexical heads, and if the first two elements of their lists of lexical heads are identical (the same words and lexical categories). This is only an approximation to true equivalence, since we do not check the entire list of lexical heads. Furthermore, if a cell contains more than 100 constituents, we iteratively narrow the beam (by halving it in size; beam search is otherwise as in Hockenmaier and Steedman (2002b)) until the beam search has no further effect or the cell contains less than 100 constituents. This is a very aggressive strategy, and it is likely to adversely affect parsing accuracy. However, more lenient strategies were found to require too much space for the chart to be held in memory. A better way of dealing with the space requirements of our model would be to implement a packed shared parse forest, but we leave this to future work.

7 An experiment
We use sections 02-21 of CCGbank for training, section 00 for development, and section 23 for testing. The input is POS-tagged using the tagger of Ratnaparkhi (1996). However, since parsing with the new model is less efficient, only sentences of at most 40 tokens are used to test the model. A frequency cutoff of 20 was used to determine rare words in the training data, which are replaced with their POS-tags. Unknown words in the test data are also replaced by their POS-tags. The models are evaluated according to their Parseval scores and to the recovery of dependencies in the predicate-argument structure. Like Clark et al. (2002), we do not take the lexical category of the dependent into account, and evaluate ⟨⟨c, w⟩, i, ⟨_, w′⟩⟩ for labelled and ⟨⟨_, w⟩, _, ⟨_, w′⟩⟩ for unlabelled recovery. Undirectional recovery (UdirP/UdirR) evaluates only whether there is a dependency between w and w′. Unlike unlabelled recovery, this does not penalize the parser if it mistakes a complement for an adjunct or vice versa. In order to determine the impact of capturing different kinds of long-range dependencies, four different models were investigated: The baseline model is like the LexCat model of Hockenmaier and Steedman (2002b), since the structural probabilities of our model are like those of that model. Local only takes local dependencies into account. LeftArgs only takes long-range dependencies that are projected through left arguments (\X) into account. This includes, for instance, long-range dependencies projected by subjects, subject and object control verbs, subject extraction and left-node raising. All takes all long-range dependencies into account; in particular, it extends LeftArgs by also capturing the unbounded dependencies arising through right-node raising and object extraction. Local, LeftArgs and All are all tested with the aggressive beam strategy described above. In all cases, the CCG derivation includes all long-range dependencies. However, with the models that exclude certain kinds of dependencies, it is possible that a word is conditioned on no dependencies. In these cases, the word is generated with P(w|c).
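To make the evaluation measures concrete, the sketch below computes labelled, unlabelled and undirectional precision and recall over gold and predicted sets of dependency tuples. The tuple layout and function names are assumptions for illustration; this is not the evaluation code behind Table 1.

def prf(gold, predicted):
    """Precision and recall of a set of predicted items against a gold set."""
    correct = len(gold & predicted)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall

def evaluate(gold_deps, pred_deps):
    """Labelled, unlabelled and undirectional recovery of dependency tuples.

    Each dependency is ((functor_cat, functor_word), slot, dependent_word);
    the dependent's lexical category is already ignored, as described above.
    """
    labelled = prf(set(gold_deps), set(pred_deps))
    # unlabelled: keep only the two words plus the functor->dependent direction
    unlab = lambda deps: {(f_word, dep_word)
                          for ((_c, f_word), _i, dep_word) in deps}
    unlabelled = prf(unlab(gold_deps), unlab(pred_deps))
    # undirectional: only ask whether some dependency links the two words
    undir = lambda deps: {frozenset((f_word, dep_word))
                          for ((_c, f_word), _i, dep_word) in deps}
    undirectional = prf(undir(gold_deps), undir(pred_deps))
    return labelled, unlabelled, undirectional

gold = [(("(S[dcl]\\NP)/NP", "opened"), 1, "doors")]
pred = [(("(S\\NP)\\(S\\NP)", "opened"), 2, "doors")]    # wrong label, right word pair
print(evaluate(gold, pred))   # labelled (0,0), unlabelled (1,1), undirectional (1,1)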
Table 1 gives the performance of all four models on section 23 in terms of the accuracy of lexical categories, Parseval scores, and in terms of the recovery of word-word dependencies in the predicate-argument structure. Here, results are further broken up into the recovery of local, all long-range, bounded long-range and unbounded long-range dependencies.

                                        LexCat   Local   LeftArgs   All
Lex. cats                                88.2     89.9     90.1     90.1
Parseval
  LP                                     76.3     78.4     78.5     78.5
  LR                                     75.9     78.5     79.0     78.7
  UP                                     82.0     83.4     83.6     83.2
  UR                                     81.6     83.6     83.8     83.4
Predicate-argument structure (all)
  LP                                     77.3     80.8     81.6     81.5
  LR                                     78.2     80.6     81.5     81.4
  UP                                     86.4     88.3     88.9     88.7
  UR                                     87.4     88.1     88.8     88.6
  UdirP                                  88.0     89.7     90.2     90.0
  UdirR                                  89.0     89.5     90.1     90.0
Non-long-range dependencies
  LP                                     78.9     82.5     83.0     82.9
  LR                                     79.5     82.3     82.7     82.6
  UP                                     87.5     89.7     89.9     89.8
  UR                                     88.1     89.4     89.6     89.4
All long-range dependencies
  LP                                     60.8     62.6     67.1     66.3
  LR                                     64.4     63.0     68.5     68.8
  UP                                     75.3     74.2     78.9     78.1
  UR                                     80.2     74.9     80.5     80.9
Bounded long-range dependencies
  LP                                     63.9     64.8     69.0     69.2
  LR                                     65.9     64.1     70.2     70.0
  UP                                     79.8     77.1     81.4     81.4
  UR                                     82.4     76.7     82.6     82.6
Unbounded long-range dependencies
  LP                                     46.0     50.4     55.6     52.4
  LR                                     54.7     55.8     58.7     61.2
  UP                                     54.1     58.2     63.8     61.1
  UR                                     66.5     63.7     66.8     69.9

Table 1: Evaluation (sec. 23, ≤ 40 words).

LexCat does not capture any word-word dependencies. Its performance on the recovery of predicate-argument structure can be improved by 3% by capturing only local word-word dependencies (Local). This excludes certain kinds of dependencies that were captured by the SD model. For instance, the dependency between the head of a noun phrase and the head of a reduced relative clause (the shares bought by John) is captured by the SD model, since shares and bought are both heads of the local trees that are combined to form the complex noun phrase. However, in the SD model the probability of this dependency can only be estimated from occurrences of the same construction, since dependency relations are defined in terms of local trees and not in terms of the underlying predicate-argument structure. By including long-range dependencies on left arguments (such as subjects) (LeftArgs), a further improvement of 0.7% on the recovery of predicate-argument structure is obtained. This model captures the dependency between shares and bought. In contrast to the SD model, it can use all instances of shares as the subject of a passive verb in the training data to estimate this probability. Therefore, even if shares and bought do not co-occur in this particular construction in the training data, the event that is modelled by our dependency model might not be unseen, since it could have occurred in another syntactic context. Our results indicate that in order to perform well on long-range dependencies, they have to be included in the model, since Local, the model that captures only local dependencies, performs worse on long-range dependencies than LexCat, the model that captures no word-word dependencies. However, with more than 5% difference on labelled precision and recall on long-range dependencies, the model which captures long-range dependencies on left arguments performs significantly better on recovering long-range dependencies than Local. The greatest difference in performance between the models which do capture long-range dependencies and the models which do not is on long-range dependencies. This indicates that, at least in the kind of model considered here, it is very important to model not just local, but also long-range dependencies.
It is not clear why All, the model that includes all dependencies, performs slightly worse than the model which includes only long-range dependencies on subjects. On the Wall Street Journal task, the overall performance of this model is lower than that of the SD model of Hockenmaier and Steedman (2002b). In that model, words are generated at the maximal projection of constituents; therefore, the structural probabilities can also be conditioned on words, which improves the scores by about 2%. It is also very likely that the performance of the new models is harmed by the very aggressive beam search. 8 Conclusion and future work This paper has defined a new generative model for CCG derivations which captures the word-word dependencies in the corresponding predicate-argument structure, including bounded and unbounded longrange dependencies. In contrast to the conditional model of Clark et al. (2002), our model captures these dependencies in a sound and consistent manner. The experiments presented here demonstrate that the performance of a simple baseline model can be improved significantly if long-range dependencies are also captured. In particular, our results indicate that it is important not to restrict the model to local dependencies. Future work will address the question whether these models can be run with a less aggressive beam search strategy, or whether a different parsing algorithm is more suitable. The problems that arise due to the overly aggressive beam search strategy might be overcome if we used an n-best parser with a simpler probability model (eg. of the kind proposed by Hockenmaier and Steedman (2002b)) and used the new model as a re-ranker. The current implementation uses a very simple method of estimating the probabilities of multiple dependencies, and more sophisticated techniques should be investigated. We have argued that a model of the kind proposed in this paper is essential for parsing languages with freer word order, such as Dutch or Czech, where the model of Hockenmaier and Steedman (2002b) (and other models of surface dependencies) would systematically capture the wrong dependencies, even if only local dependencies are taken into account. For English, our experimental results demonstrate that our model benefits greatly from modelling not only local, but also long-range dependencies, which are beyond the scope of surface dependency models. Acknowledgements I would like to thank Mark Steedman and Stephen Clark for many helpful discussions, and gratefully acknowledge support from an EPSRC studentship and grant GR/M96889, the School of Informatics, and NSF ITR grant 0205 456. References Eugene Charniak. 2000. A Maximum-Entropy-Inspired Parser. In Proceedings of the First Meeting of the NAACL, Seattle. Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building Deep Dependency Structures using a WideCoverage CCG Parser. In Proceedings of the 40th Annual Meeting of the ACL. Michael Collins, Jan Hajic, Lance Ramshaw, and Christoph Tillmann. 1999. A Statistical Parser for Czech. In Proceedings of the 37th Annual Meeting of the ACL. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania. Julia Hockenmaier and Mark Steedman. 2002a. Acquiring Compact Lexicalized Grammars from a Cleaner Treebank. In Proceedings of the Third LREC, pages 1974–1981, Las Palmas, May. Julia Hockenmaier and Mark Steedman. 2002b. Generative Models for Statistical Parsing with Combinatory Categorial Grammar. 
In Proceedings of the 40th Annual Meeting of the ACL. Julia Hockenmaier. 2003. Data and Models for Statistical Parsing with CCG. Ph.D. thesis, School of Informatics, University of Edinburgh. Adwait Ratnaparkhi. 1996. A Maximum Entropy Part-OfSpeech Tagger. In Proceedings of the EMNLP Conference, pages 133–142, Philadelphia, PA. Mark Steedman. 2000. The Syntactic Process. The MIT Press, Cambridge Mass.
2003
46
Bridging the Gap Between Underspecification Formalisms: Minimal Recursion Semantics as Dominance Constraints Joachim Niehren Programming Systems Lab Universit¨at des Saarlandes [email protected] Stefan Thater Computational Linguistics Universit¨at des Saarlandes [email protected] Abstract Minimal Recursion Semantics (MRS) is the standard formalism used in large-scale HPSG grammars to model underspecified semantics. We present the first provably efficient algorithm to enumerate the readings of MRS structures, by translating them into normal dominance constraints. 1 Introduction In the past few years there has been considerable activity in the development of formalisms for underspecified semantics (Alshawi and Crouch, 1992; Reyle, 1993; Bos, 1996; Copestake et al., 1999; Egg et al., 2001). The common idea is to delay the enumeration of all readings for as long as possible. Instead, they work with a compact underspecified representation; readings are enumerated from this representation by need. Minimal Recursion Semantics (MRS) (Copestake et al., 1999) is the standard formalism for semantic underspecification used in large-scale HPSG grammars (Pollard and Sag, 1994; Copestake and Flickinger, ). Despite this clear relevance, the most obvious questions about MRS are still open: 1. Is it possible to enumerate the readings of MRS structures efficiently? No algorithm has been published so far. Existing implementations seem to be practical, even though the problem whether an MRS has a reading is NPcomplete (Althaus et al., 2003, Theorem 10.1). 2. What is the precise relationship to other underspecification formalism? Are all of them the same, or else, what are the differences? We distinguish the sublanguages of MRS nets and normal dominance nets, and show that they can be intertranslated. This translation answers the first question: existing constraint solvers for normal dominance constraints can be used to enumerate the readings of MRS nets in low polynomial time. The translation also answers the second question restricted to pure scope underspecification. It shows the equivalence of a large fragment of MRSs and a corresponding fragment of normal dominance constraints, which in turn is equivalent to a large fragment of Hole Semantics (Bos, 1996) as proven in (Koller et al., 2003). Additional underspecified treatments of ellipsis or reinterpretation, however, are available for extensions of dominance constraint only (CLLS, the constraint language for lambda structures (Egg et al., 2001)). Our results are subject to a new proof technique which reduces reasoning about MRS structures to reasoning about weakly normal dominance constraints (Bodirsky et al., 2003). The previous proof techniques for normal dominance constraints (Koller et al., 2003) do not apply. 2 Minimal Recursion Semantics We define a simplified version of Minimal Recursion Semantics and discuss differences to the original definitions presented in (Copestake et al., 1999). MRS is a description language for formulas of first order object languages with generalized quantifiers. Underspecified representations in MRS consist of elementary predications and handle constraints. Roughly, elementary predications are object language formulas with “holes” into which other formulas can be plugged; handle constraints restrict the way these formulas can be plugged into each other. More formally, MRSs are formulas over the following vocabulary: 1. Variables. An infinite set of variables ranged over by h. Variables are also called handles. 2. 
Constants. An infinite set of constants ranged over by x,y,z. Constants are the individual variables of the object language. 3. Function symbols. (a) A set of function symbols written as P. (b) A set of quantifier symbols ranged over by Q (such as every and some). Pairs Qx are further function symbols (the variable binders of x in the object language). 4. The symbol ≤for the outscopes relation. Formulas of MRS have three kinds of literals, the first two are called elementary predications (EPs) and the third handle constraints: 1. h:P(x1,...,xn,h1,...,hm) where n,m ≥0 2. h:Qx(h1,h2) 3. h1 ≤h2 Label positions are to the left of colons ‘:’ and argument positions to the right. Let M be a set of literals. The label set lab(M) contains those handles of M that occur in label but not in argument position. The argument handle set arg(M) contains the handles of M that occur in argument but not in label position. Definition 1 (MRS). An MRS is finite set M of MRS-literals such that: M1 Every handle occurs at most once in label and at most once in argument position in M. M2 Handle constraints h1 ≤h2 in M always relate argument handles h1 to labels h2 of M. M3 For every constant (individual variable) x in argument position in M there is a unique literal of the form h:Qx(h1,h2) in M. We call an MRS compact if it additionally satisfies: M4 Every handle of M occurs exactly once in an elementary predication of M. We say that a handle h immediately outscopes a handle h′ in an MRS M iff there is an EP E in M such that h occurs in label and h′ in argument position of E. The outscopes relation is the reflexive, transitive closure of the immediate outscopes relation. everyx studentx readx,y somey booky {h1 :everyx(h2,h4),h3 :student(x),h5 :somey(h6,h8), h7 :book(y),h9 :read(x,y),h2 ≤h3,h6 ≤h7} Figure 1: MRS for “Every student reads a book”. An example MRS for the scopally ambiguous sentence “Every student reads a book” is given in Fig. 1. We often represent MRSs by directed graphs whose nodes are the handles of the MRS. Elementary predications are represented by solid edges and handle constraints by dotted lines. Note that we make the relation between bound variables and their binders explicit by dotted lines (as from everyx to readx,y); redundant “binding-edges” that are subsumed by sequences of other edges are omitted however (from everyx to studentx for instance). A solution for an underspecified MRS is called a configuration, or scope-resolved MRS. Definition 2 (Configuration). An MRS M is a configuration if it satisfies the following conditions. C1 The graph of M is a tree of solid edges: handles don’t properly outscope themselves or occur in different argument positions and all handles are pairwise connected by elementary predications. C2 If two EPs h:P(...,x,...) and h0 :Qx(h1,h2) belong to M, then h0 outscopes h in M (so that the binding edge from h0 to h is redundant). We call M a configuration for another MRS M′ if there exists some substitution σ : arg(M′) 7→lab(M′) which states how to identify argument handles of M′ with labels of M′, so that: C3 M = {σ(E) | E is EP in M′}, and C4 σ(h1) outscopes h2 in M, for all h1 ≤h2 ∈M′. The value σ(E) is obtained by substituting all argument handles in E, leaving all others unchanged. The MRS in Fig. 1 has precisely two configurations displayed in Fig. 2 which correspond to the two readings of the sentence. In this paper, we present an algorithm that enumerates the configurations of MRSs efficiently. 
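To make Definitions 1 and 2 concrete, the following sketch (an illustrative assumption about data structures, not code from the paper) encodes the MRS of Fig. 1 with plain Python tuples and checks the handle conditions M1/M4 and M2, computing lab(M) and arg(M) along the way.

# An MRS as two sets of literals: elementary predications and handle constraints.
# EPs are (label, relation, const_args, handle_args); handle constraints are
# (h1, h2), meaning h1 <= h2. This encoding is an illustrative assumption.
eps = [
    ("h1", "every_x", ["x"], ["h2", "h4"]),
    ("h3", "student", ["x"], []),
    ("h5", "some_y",  ["y"], ["h6", "h8"]),
    ("h7", "book",    ["y"], []),
    ("h9", "read",    ["x", "y"], []),
]
hcons = [("h2", "h3"), ("h6", "h7")]

labels = [label for label, *_ in eps]
arg_handles = [h for *_, hs in eps for h in hs]

# M1/M4: every handle occurs at most once in label position and at most once
# in argument position (and, for compact MRSs, in exactly one EP).
assert len(labels) == len(set(labels))
assert len(arg_handles) == len(set(arg_handles))

# lab(M) and arg(M) as in the definitions above:
lab = set(labels) - set(arg_handles)      # {'h1', 'h3', 'h5', 'h7', 'h9'}
arg = set(arg_handles) - set(labels)      # {'h2', 'h4', 'h6', 'h8'}

# M2: handle constraints relate argument handles to labels.
assert all(h1 in arg and h2 in lab for h1, h2 in hcons)
print(sorted(lab), sorted(arg))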
everyx studentx somey booky readx,y somey booky everyx studentx readx,y Figure 2: Graphs of Configurations. Differences to Standard MRS. Our version departs from standard MRS in some respects. First, we assume that different EPs must be labeled with different handles, and that labels cannot be identified. In standard MRS, however, conjunctions are encoded by labeling different EPs with the same handle. These EP-conjunctions can be replaced in a preprocessing step introducing additional EPs that make conjunctions explicit. Second, our outscope constraints are slightly less restrictive than the original “qeq-constraints.” A handle h is qeq to a handle h′ in an MRS M, h =qh′, if either h = h′ or a quantifier h:Qx(h1,h2) occurs in M and h2 is qeq to h′ in M. Thus, h =q h′ implies h ≤h′, but not the other way round. We believe that the additional strength of qeq-constraints is not needed in practice for modeling scope. Recent work in semantic construction for HPSG (Copestake et al., 2001) supports our conjecture: the examples discussed there are compatible with our simplification. Third, we depart in some minor details: we use sets instead of multi-sets and omit top-handles which are useful only during semantics construction. 3 Dominance Constraints Dominance constraints are a general framework for describing trees, and thus syntax trees of logical formulas. Dominance constraints are the core language underlying CLLS (Egg et al., 2001) which adds parallelism and binding constraints. 3.1 Syntax and Semantics We assume a possibly infinite signature Σ of function symbols with fixed arities and an infinite set Var of variables ranged over by X,Y,Z. We write f,g for function symbols and ar(f) for the arity of f. A dominance constraint ϕ is a conjunction of dominance, inequality, and labeling literals of the following forms where ar(f) = n: ϕ ::= X◁∗Y | X ̸= Y | X : f(X1,...,Xn) | ϕ∧ϕ′ Dominance constraints are interpreted over finite constructor trees, i.e. ground terms constructed from the function symbols in Σ. We identify ground terms with trees that are rooted, ranked, edge-ordered and labeled. A solution for a dominance constraint consists of a tree τ and a variable assignment α that maps variables to nodes of τ such that all constraints are satisfied: a labeling literal X : f(X1,...,Xn) is satisfied iff the node α(X) is labeled with f and has daughters α(X1),...,α(Xn) in this order; a dominance literal X◁∗Y is satisfied iff α(X) is an ancestor of α(Y) in τ; and an inequality literal X ̸= Y is satisfied iff α(X) and α(Y) are distinct nodes. Note that solutions may contain additional material. The tree f(a,b), for instance, satisfies the constraint Y :a∧Z :b. 3.2 Normality and Weak Normality The satisfiability problem of arbitrary dominance constraints is NP-complete (Koller et al., 2001) in general. However, Althaus et al. (2003) identify a natural fragment of so called normal dominance constraints, which have a polynomial time satisfiability problem. Bodirsky et al. (2003) generalize this notion to weakly normal dominance constraints. We call a variable a hole of ϕ if it occurs in argument position in ϕ and a root of ϕ otherwise. Definition 3. A dominance constraint ϕ is normal (and compact) if it satisfies the following conditions. N1 (a) each variable of ϕ occurs at most once in the labeling literals of ϕ. (b) each variable of ϕ occurs at least once in the labeling literals of ϕ. N2 for distinct roots X and Y of ϕ, X ̸= Y is in ϕ. N3 (a) if X ◁∗Y occurs in ϕ, Y is a root in ϕ. 
(b) if X ◁∗Y occurs in ϕ, X is a hole in ϕ. A dominance constraint is weakly normal if it satisfies all above properties except for N1(b) and N3(b). The idea behind (weak) normality is that the constraint graph (see below) of a dominance constraint consists of solid fragments which are connected by dominance constraints; these fragments may not properly overlap in solutions. Note that Definition 3 always imposes compactness, meaning that the heigth of solid fragments is at most one. As for MRS, this is not a serious restriction, since more general weakly normal dominance constraints can be compactified, provided that dominance links relate either roots or holes with roots. Dominance Graphs. We often represent dominance constraints as graphs. A dominance graph is the directed graph (V,◁∗⊎◁). The graph of a weakly normal constraint ϕ is defined as follows: The nodes of the graph of ϕ are the variables of ϕ. A labeling literal X : f(X1,...,Xn) of ϕ contributes tree edges (X,Xi) ∈◁for 1 ≤i ≤n that we draw as X Xi; we freely omit the label f and the edge order in the graph. A dominance literal X◁∗Y contributes a dominance edge (X,Y) ∈◁∗that we draw as X Y. Inequality literals in ϕ are also omitted in the graph. f a g For example, the constraint graph on the right represents the dominance constraint X : f(X′)∧Y :g(Y ′)∧X′◁∗Z ∧ Y ′◁∗Z ∧Z :a∧X̸=Y ∧X̸=Z ∧Y̸=Z. A dominance graph is weakly normal or a wndgraph if it does not contain any forbidden subgraphs: Dominance graphs of a weakly normal dominance constraints are clearly weakly normal. Solved Forms and Configurations. The main difference between MRS and dominance constraints lies in their notion of interpretation: solutions versus configurations. Every satisfiable dominance constraint has infinitely many solutions. Algorithms for dominance constraints therefore do not enumerate solutions but solved forms. We say that a dominance constraint is in solved form iff its graph is in solved form. A wndgraph Φ is in solved form iff Φ is a forest. The solved forms of Φ are solved forms Φ′ that are more specific than Φ, i.e. Φ and Φ′ differ only in their dominance edges and the reachability relation of Φ extends the reachability of Φ′. A minimal solved form of Φ is a solved form of Φ that is minimal with respect to specificity. The notion of configurations from MRS applies to dominance constraints as well. Here, a configuration is a dominance constraint whose graph is a tree without dominance edges. A configuration of a constraint ϕ is a configuration that solves ϕ in the obvious sense. Simple solved forms are tree-shaped solved forms where every hole has exactly one outgoing dominance edge. L1 L2 L3 L4 L2 L1 L4 L3 Figure 3: A dominance constraint (left) with a minimal solved form (right) that has no configuration. Lemma 1. Simple solved forms and configurations correspond: Every simple solved form has exactly one configuration, and for every configuration there is exactly one solved form that it configures. Unfortunately, Lemma 1 does not extend to minimal as opposed to simple solved forms: there are minimal solved forms without configurations. The constraint on the right of Fig. 3, for instance, has no configuration: the hole of L1 would have to be filled twice while the right hole of L2 cannot be filled. 4 Representing MRSs We next map (compact) MRSs to weakly normal dominance constraints so that configurations are preserved. Note that this translation is based on a non-standard semantics for dominance constraints, namely configurations. 
We address this problem in the following sections. The translation of an MRS M to a dominance constraint ϕM is quite trivial. The variables of ϕM are the handles of M and its literal set is: {h:Px1,...,xn(h1,...) | h:P(x1,...,xn,h1,...) ∈M} ∪{h:Qx(h1,h2) | h:Qx(h1,h2) ∈M} ∪{h1◁∗h2 | h1 ≤h2 ∈M} ∪{h◁∗h0 | h:Qx(h1,h2),h0 :P(...,x,...) ∈M} ∪{h̸=h′ | h,h′ in distinct label positions of M} Compact MRSs M are clearly translated into (compact) weakly normal dominance constraints. Labels of M become roots in ϕM while argument handles become holes. Weak root-to-root dominance literals are needed to encode variable binding condition C2 of MRS. It could be formulated equivalently through lambda binding constraints of CLLS (but this is not necessary here in the absence of parallelism). Proposition 1. The translation of a compact MRS M into a weakly normal dominance constraint ϕM preserves configurations. This weak correctness property follows straightforwardly from the analogy in the definitions. 5 Constraint Solving We recall an algorithm from (Bodirsky et al., 2003) that efficiently enumerates all minimal solved forms of wnd-graphs or constraints. All results of this section are proved there. The algorithm can be used to enumerate configurations for a large subclass of MRSs, as we will see in Section 6. But equally importantly, this algorithm provides a powerful proof method for reasoning about solved forms and configurations on which all our results rely. 5.1 Weak Connectedness Two nodes X and Y of a wnd-graph Φ = (V,E) are weakly connected if there is an undirected path from X to Y in (V,E). We call Φ weakly connected if all its nodes are weakly connected. A weakly connected component (wcc) of Φ is a maximal weakly connected subgraph of Φ. The wccs of Φ = (V,E) form proper partitions of V and E. Proposition 2. The graph of a solved form of a weakly connected wnd-graph is a tree. 5.2 Freeness The enumeration algorithm is based on the notion of freeness. Definition 4. A node X of a wnd-graph Φ is called free in Φ if there exists a solved form of Φ whose graph is a tree with root X. A weakly connected wnd-graph without free nodes is unsolvable. Otherwise, it has a solved form whose graph is a tree (Prop. 2) and the root of this tree is free in Φ. Given a set of nodes V ′ ⊆V, we write Φ|V ′ for the restriction of Φ to nodes in V ′ and edges in V ′ ×V ′. The following lemma characterizes freeness: Lemma 2. A wnd-graph Φ with free node X satisfies the freeness conditions: F1 node X has indegree zero in graph Φ, and F2 no distinct children Y and Y ′ of X in Φ that are linked to X by immediate dominance edges are weakly connected in the remainder Φ|V\{X}. 5.3 Algorithm The algorithm for enumerating the minimal solved forms of a wnd-graph (or equivalently constraint) is given in Fig. 4. We illustrate the algorithm for the problematic wnd-graph Φ in Fig. 3. The graph of Φ is weakly connected, so that we can call solve(Φ). This procedure guesses topmost fragments in solved forms of Φ (which always exist by Prop. 2). The only candidates are L1 or L2 since L3 and L4 have incoming dominance edges, which violates F1. Let us choose the fragment L2 to be topmost. The graph which remains when removing L2 is still weakly connected. It has a single minimal solved form computed by a recursive call of the solver, where L1 dominates L3 and L4. The solved form of the restricted graph is then put below the left hole of L2, since it is connected to this hole. As a result, we obtain the solved form on the right of Fig. 3. 
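As an illustration of the freeness conditions F1 and F2 that drive the choice of top-most fragments, here is a sketch over a minimal graph encoding (tree edges and dominance edges as sets of pairs). The encoding and helper names are assumptions for illustration; this is not the implementation of Bodirsky et al. (2003).

from collections import defaultdict

def weakly_connected(nodes, edges, start, goal):
    """Is `goal` reachable from `start` over undirected edges within `nodes`?"""
    adjacency = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adjacency[u].add(v)
            adjacency[v].add(u)
    seen, stack = {start}, [start]
    while stack:
        u = stack.pop()
        if u == goal:
            return True
        for v in adjacency[u] - seen:
            seen.add(v)
            stack.append(v)
    return False

def satisfies_freeness(root, tree_edges, dom_edges, all_nodes):
    """Check conditions F1 and F2 for a candidate root of a wnd-graph."""
    edges = tree_edges | dom_edges
    # F1: the candidate root has no incoming tree or dominance edge.
    if any(target == root for _source, target in edges):
        return False
    # F2: no two holes of the root may be weakly connected once the root is removed.
    holes = [target for source, target in tree_edges if source == root]
    rest = all_nodes - {root}
    for a in holes:
        for b in holes:
            if a < b and weakly_connected(rest, edges, a, b):
                return False
    return True

# Toy fragment r with holes h1, h2 whose dominance edges meet below: F2 fails.
tree = {("r", "h1"), ("r", "h2")}
dom = {("h1", "low"), ("h2", "low")}
print(satisfies_freeness("r", tree, dom, {"r", "h1", "h2", "low"}))   # False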
Theorem 1. The function solved-form(Φ) computes all minimal solved forms of a weakly normal dominance graph Φ; it runs in quadratic time per solved form. 6 Full Translation Next, we explain how to encode a large class of MRSs into wnd-constraints such that configurations correspond precisely to minimal solved forms. The result of the translation will indeed be normal. 6.1 Problems and Examples The naive representation of MRSs as weakly normal dominance constraints is only correct in a weak sense. The encoding fails in that some MRSs which have no configurations are mapped to solvable wndconstraints. For instance, this holds for the MRS on the right in Fig 3. We cannot even hope to translate arbitrary MRSs correctly into wnd-constraints: the configurability problem of MRSs is NP-complete, while satisfiability of wnd-constraints can be solved in polynomial time. Instead, we introduce the sublanguages of MRS-nets and equivalent wnd-nets, and show that they can be intertranslated in quadratic time. solved-form(Φ) ≡ Let Φ1,...,Φk be the wccs of Φ = (V,E) Let (Vi,Ei) be the result of solve(Φi) return (V,∪k i=1Ei) solve(Φ) ≡ precond: Φ = (V,◁⊎◁∗) is weakly connected choose a node X satisfying (F1) and (F2) in Φ else fail Let Y1,...,Yn be all nodes s.t. X ◁Yi Let Φ1,...,Φk be the weakly connected components of Φ|V−{X,Y1,...,Yn} Let (Wj,E j) be the result of solve(Φj), and Xj ∈Wj its root return (V,∪k j=1E j ∪◁∪◁∗ 1 ∪◁∗ 2) where ◁∗ 1 = {(Yi,Xj) | ∃X′ : (Yi,X′) ∈◁∗∧X′ ∈Wj}, ◁∗ 2 = {(X,Xj) | ¬∃X′ : (Yi,X′) ∈◁∗∧X′ ∈Wj} Figure 4: Enumerating the minimal solved-forms of a wnd-graph. ... ... (a) strong .... ... (b) weak . ... ... (c) island Figure 5: Fragment Schemas of Nets 6.2 Dominance and MRS-Nets A hypernormal path (Althaus et al., 2003) in a wndgraph is a sequence of adjacent edges that does not traverse two outgoing dominance edges of some hole X in sequence, i.e. a wnd-graph without situations Y1 X Y2. A dominance net Φ is a weakly normal dominance constraint whose fragments all satisfy one of the three schemas in Fig. 5. MRS-nets can be defined analogously. This means that all roots of Φ are labeled in Φ, and that all fragments X : f(X1,...,Xn) of Φ satisfy one of the following three conditions: strong. n ≥0 and for all Y ∈{X1,...,Xn} there exists a unique Z such that Y ◁∗Z in Φ, and there exists no Z such that X ◁∗Z in Φ. weak. n ≥1 and for all Y ∈{X1,...,Xn−1,X} there exists a unique Z such that Y ◁∗Z in Φ, and there exists no Z such that Xn ◁∗Z in Φ. island. n = 1 and all variables in {Y | X1 ◁∗Y} are connected by a hypernormal path in the graph of the restricted constraint Φ|V−{X1}, and there exists no Z such that X ◁∗Z in Φ. The requirement of hypernormal connections in islands replaces the notion of chain-connectedness in (Koller et al., 2003), which fails to apply to dominance constraints with weak dominance edges. For ease of presentation, we restrict ourselves to a simple version of island fragments. In general, we should allow for island fragments with n > 1. 6.3 Normalizing Dominance Nets Dominance nets are wnd-constraints. We next translate dominance nets Φ to normal dominance constraints Φ′ so that Φ has a configuration iff Φ′ is satisfiable. The trick is to normalize weak dominance edges. The normalization norm(Φ) of a weakly normal dominance constraint Φ is obtained by converting all root-to-root dominance literals X ◁∗Y as follows: X ◁∗Y ⇒Xn ◁∗Y if X roots a fragment of Φ that satisfies schema weak of net fragments. 
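The sketch below (again an illustrative assumption about data structures, not the authors' code) applies this normalization step to a simple graph encoding of a net: for every fragment that matches the weak schema, the root-to-root dominance edge is redirected so that it leaves the one hole without an outgoing dominance edge.

def normalize(fragments, dom_edges):
    """Turn root-to-root dominance edges of weak fragments into hole-to-root edges.

    `fragments` maps each root to the list of its holes; `dom_edges` is a set
    of (source, target) dominance edges. Encoding is an illustrative assumption.
    """
    normalized = set(dom_edges)
    for root, holes in fragments.items():
        outgoing = [(s, t) for (s, t) in normalized if s == root]
        open_holes = [h for h in holes if not any(s == h for s, _ in normalized)]
        # A weak fragment has a dominance edge at its root and exactly one hole
        # without an outgoing dominance edge (the hole X_n of the weak schema).
        if outgoing and len(open_holes) == 1:
            hole = open_holes[0]
            for (s, t) in outgoing:
                normalized.discard((s, t))
                normalized.add((hole, t))
    return normalized

# Toy net: root "f" has holes "f.1" and "f.2"; "f.1" already has a dominance
# edge, "f.2" does not, and there is a weak root-to-root edge from "f" to "g".
fragments = {"f": ["f.1", "f.2"], "g": []}
dom = {("f", "g"), ("f.1", "g")}
print(normalize(fragments, dom))   # {("f.1", "g"), ("f.2", "g")}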
If Φ is a dominance net then norm(Φ) is indeed a normal dominance net. Theorem 2. The configurations of a weakly connected dominance net Φ correspond bijectively to the minimal solved forms of its normalization norm(Φ). For illustration, consider the problematic wndconstraint Φ on the left of Fig. 3. Φ has two minimal solved forms with top-most fragments L1 and L2 respectively. The former can be configured, in contrast to the later which is drawn on the right of Fig. 3. Normalizing Φ has an interesting consequence: norm(Φ) has (in contrast to Φ) a single minimal solved form with L1 on top. Indeed, norm(Φ) cannot be satisfied while placing L2 topmost. Our algorithm detects this correctly: the normalization of fragment L2 is not free in norm(Φ) since it violates property F2. The proof of Theorem 2 captures the rest of this section. We show in a first step (Prop. 3) that the configurations are preserved when normalizing weakly connected and satisfiable nets. In the second step, we show that minimal solved forms of normalized nets, and thus of norm(Φ), can always be configured (Prop. 4). Corollary 1. Configurability of weakly connected MRS-nets can be decided in polynomial time; configurations of weakly connected MRS-nets can be enumerated in quadratic time per configuration. 6.4 Correctness Proof Most importantly, nets can be recursively decomposed into nets as long as they have configurations: Lemma 3. If a dominance net Φ has a configuration whose top-most fragment is X : f(X1,...,Xn), then the restriction Φ|V−{X,X1,...,Xn} is a dominance net. Note that the restriction of the problematic net Φ by L2 on the left in Fig. 3 is not a net. This does not contradict the lemma, as Φ does not have a configuration with top-most fragment L2. Proof. First note that as X is free in Φ it cannot have incoming edges (condition F1). This means that the restriction deletes only dominance edges that depart from nodes in {X,X1,...,Xn}. Other fragments thus only lose ingoing dominance edges by normality condition N3. Such deletions preserve the validity of the schemas weak and strong. The island schema is more problematic. We have to show that the hypernormal connections in this schema can never be cut. So suppose that Y : f(Y1) is an island fragment with outgoing dominance edges Y1 ◁∗Z1 and Y1 ◁∗Z2, so that Z1 and Z2 are connected by some hypernormal path traversing the deleted fragment X : f(X1,...,Xn). We distinguish the three possible schemata for this fragment: ... (a) strong .... (b) weak . ... (c) island Figure 6: Traversals through fragments of free roots strong: since X does not have incoming dominance edges, there is only a single non-trival kind of traversal, drawn in Fig. 6(a). But such traversals contradict the freeness of X according to F2. weak: there is one other way of traversing weak fragments, shown in Fig. 6(b). Let X ◁∗Y be the weak dominance edge. The traversal proves that Y belongs to the weakly connected components of one of the Xi, so the Φ ∧Xn ◁∗Y is unsatisfiable. This shows that the hole Xn cannot be identified with any root, i.e. Φ does not have any configuration in contrast to our assumption. island: free island fragments permit one single nontrivial form of traversals, depicted in Fig. 6(c). But such traversals are not hypernormal. Proposition 3. A configuration of a weakly connected dominance net Φ configures its normalization norm(Φ), and vice versa of course. Proof. Let C be a configuration of Φ. We show that it also configures norm(Φ). 
Let S be the simple solved form of Φ that is configured by C (Lemma 1), and S′ be a minimal solved form of Φ which is more general than S. Let X : f(Y1,...,Yn) be the top-most fragment of the tree S. This fragment must also be the top-most fragment of S′, which is a tree since Φ is assumed to be weakly connected (Prop. 2). S′ is constructed by our algorithm (Theorem 1), so that the evaluation of solve(Φ) must choose X as free root in Φ. Since Φ is a net, some literal X : f(Y1,...,Yn) must belong to Φ. Let Φ′ = Φ|{X,Y1,...,Yn} be the restriction of Φ to the lower fragments. The weakly connected components of all Y1, ..., Yn−1 must be pairwise disjoint by F2 (which holds by Lemma 2 since X is free in Φ). The X-fragment of net Φ must satisfy one of three possible schemata of net fragments: weak fragments: there exists a unique weak dominance edge X ◁∗Z in Φ and a unique holeYn without outgoing dominance edges. The variable Z must be a root in Φ and thus be labeled. If Z is equal to X then Φ is unsatisfiable by normality condition N2, which is impossible. Hence, Z occurs in the restriction Φ′ but not in the weakly connected components of any Y1, ..., Yn−1. Otherwise, the minimal solved form S′ could not be configured since the hole Yn could not be identified with any root. Furthermore, the root of the Z-component must be identified with Yn in any configuration of Φ with root X. Hence, C satisfies Yn ◁∗Z which is add by normalization. The restriction Φ′ must be a dominance net by Lemma 3, and hence, all its weakly connected components are nets. For all 1 ≤i ≤n −1, the component of Yi in Φ′ is configured by the subtree of C at node Yi, while the subtree of C at node Yn configures the component of Z in Φ′. The induction hypothesis yields that the normalizations of all these components are configured by the respective subconfigurations of C. Hence, norm(Φ) is configured by C. strong or island fragments are not altered by normalization, so we can recurse to the lower fragments (if there exist any). Proposition 4. Minimal solved forms of normal, weakly connected dominance nets have configurations. Proof. By induction over the construction of minimal solved forms, we can show that all holes of minimal solved forms have a unique outgoing dominance edge at each hole. Furthermore, all minimal solved forms are trees since we assumed connectedness (Prop.2). Thus, all minimal solved forms are simple, so they have configurations (Lemma 1). 7 Conclusion We have related two underspecification formalism, MRS and normal dominance constraints. We have distinguished the sublanguages of MRS-nets and normal dominance nets that are sufficient to model scope underspecification, and proved their equivalence. Thereby, we have obtained the first provably efficient algorithm to enumerate the readings of underspecified semantic representations in MRS. Our encoding has the advantage that researchers interested in dominance constraints can benefit from the large grammar resources of MRS. This requires further work in order to deal with unrestricted versions of MRS used in practice. Conversely, one can now lift the additional modeling power of CLLS to MRS. References H. Alshawi and R. Crouch. 1992. Monotonic semantic interpretation. In Proc. 30th ACL, pages 32–39. E. Althaus, D. Duchier, A. Koller, K. Mehlhorn, J. Niehren, and S. Thiel. 2003. An efficient graph algorithm for dominance constraints. Journal of Algorithms. In press. Manuel Bodirsky, Denys Duchier, Joachim Niehren, and Sebastian Miele. 2003. 
An efficient algorithm for weakly normal dominance constraints. Available at www.ps.uni-sb.de/Papers. Johan Bos. 1996. Predicate logic unplugged. In Amsterdam Colloquium, pages 133–143. Ann Copestake and Dan Flickinger. An opensource grammar development environment and broadcoverage English grammar using HPSG. In Conference on Language Resources and Evaluation. Ann Copestake, Dan Flickinger, Ivan Sag, and Carl Pollard. 1999. Minimal Recursion Semantics: An Introduction. Manuscript, Stanford University. Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of the 39th ACL, pages 132–139, Toulouse, France. Markus Egg, Alexander Koller, and Joachim Niehren. 2001. The Constraint Language for Lambda Structures. Logic, Language, and Information, 10:457–485. Alexander Koller, Joachim Niehren, and Ralf Treinen. 2001. Dominance constraints: Algorithms and complexity. In LACL’98, volume 2014 of LNAI, pages 106–125. Alexander Koller, Joachim Niehren, and Stefan Thater. 2003. Bridging the gap between underspecification formalisms: Hole semantics as dominance constraints. In EACL’03, April. In press. Carl Pollard and Ivan Sag. 1994. Head-driven Phrase Structure Grammar. University of Chicago Press. Uwe Reyle. 1993. Dealing with ambiguities by underspecification: Construction, representation and deduction. Journal of Semantics, 10(1).
2003
47
Evaluation challenges in large-scale document summarization Dragomir R. Radev U. of Michigan [email protected] Wai Lam Chinese U. of Hong Kong [email protected] Arda C¸ elebi USC/ISI [email protected] Simone Teufel U. of Cambridge [email protected] John Blitzer U. of Pennsylvania [email protected] Danyu Liu U. of Alabama [email protected] Horacio Saggion U. of Sheffield [email protected] Hong Qi U. of Michigan [email protected] Elliott Drabek Johns Hopkins U. [email protected] Abstract We present a large-scale meta evaluation of eight evaluation measures for both single-document and multi-document summarizers. To this end we built a corpus consisting of (a) 100 Million automatic summaries using six summarizers and baselines at ten summary lengths in both English and Chinese, (b) more than 10,000 manual abstracts and extracts, and (c) 200 Million automatic document and summary retrievals using 20 queries. We present both qualitative and quantitative results showing the strengths and drawbacks of all evaluation methods and how they rank the different summarizers. 1 Introduction Automatic document summarization is a field that has seen increasing attention from the NLP community in recent years. In part, this is because summarization incorporates many important aspects of both natural language understanding and natural language generation. In part it is because effective automatic summarization would be useful in a variety of areas. Unfortunately, evaluating automatic summarization in a standard and inexpensive way is a difficult task (Mani et al., 2001). Traditional large-scale evaluations are either too simplistic (using measures like precision, recall, and percent agreement which (1) don’t take chance agreement into account and (2) don’t account for the fact that human judges don’t agree which sentences should be in a summary) or too expensive (an approach using manual judgements can scale up to a few hundred summaries but not to tens or hundreds of thousands). In this paper, we present a comparison of six summarizers as well as a meta-evaluation including eight measures: Precision/Recall, Percent Agreement, Kappa, Relative Utility, Relevance Correlation, and three types of Content-Based measures (cosine, longest common subsequence, and word overlap). We found that while all measures tend to rank summarizers in different orders, measures like Kappa, Relative Utility, Relevance Correlation and Content-Based each offer significant advantages over the more simplistic methods. 2 Data, Annotation, and Experimental Design We performed our experiments on the Hong Kong News corpus provided by the Hong Kong SAR of the People’s Republic of China (LDC catalog number LDC2000T46). It contains 18,146 pairs of parallel documents in English and Chinese. The texts are not typical news articles. The Hong Kong Newspaper mainly publishes announcements of the local administration and descriptions of municipal events, such as an anniversary of the fire department, or seasonal festivals. We tokenized the corpus to identify headlines and sentence boundaries. For the English text, we used a lemmatizer for nouns and verbs. We also segmented the Chinese documents using the tool provided at http://www.mandarintools.com. Several steps of the meta evaluation that we performed involved human annotator support. First, we Cluster 2 Meetings with foreign leaders Cluster 46 Improving Employment Opportunities Cluster 54 Illegal immigrants Cluster 60 Customs staff doing good job. 
Cluster 61 Permits for charitable fund raising Cluster 62 Y2K readiness Cluster 112 Autumn and sports carnivals Cluster 125 Narcotics Rehabilitation Cluster 199 Intellectual Property Rights Cluster 241 Fire safety, building management concerns Cluster 323 Battle against disc piracy Cluster 398 Flu results in Health Controls Cluster 447 Housing (Amendment) Bill Brings Assorted Improvements Cluster 551 Natural disaster victims aided Cluster 827 Health education for youngsters Cluster 885 Customs combats contraband/dutiable cigarette operations Cluster 883 Public health concerns cause food-business closings Cluster 1014 Traffic Safety Enforcement Cluster 1018 Flower shows Cluster 1197 Museums: exhibits/hours Figure 1: Twenty queries created by the LDC for this experiment. asked LDC to build a set of queries (Figure 1). Each of these queries produced a cluster of relevant documents. Twenty of these clusters were used in the experiments in this paper. Additionally, we needed manual summaries or extracts for reference. The LDC annotators produced summaries for each document in all clusters. In order to produce human extracts, our judges also labeled sentences with “relevance judgements”, which indicate the relevance of sentence to the topic of the document. The relevance judgements for sentences range from 0 (irrelevant) to 10 (essential). As in (Radev et al., 2000), in order to create an extract of a certain length, we simply extract the top scoring sentences that add up to that length. For each target summary length, we produce an extract using a summarizer or baseline. Then we compare the output of the summarizer or baseline with the extract produced from the human relevance judgements. Both the summarizers and the evaluation measures are described in greater detail in the next two sections. 2.1 Summarizers and baselines This section briefly describes the summarizers we used in the evaluation. All summarizers take as input a target length (n%) and a document (or cluster) split into sentences. Their output is an n% extract of the document (or cluster). • MEAD (Radev et al., 2000): MEAD is a centroid-based extractive summarizer that scores sentences based on sentence-level and inter-sentence features which indicate the quality of the sentence as a summary sentence. It then chooses the top-ranked sentences for inclusion in the output summary. MEAD runs on both English documents and on BIG5-encoded Chinese. We tested the summarizer in both languages. • WEBS (Websumm (Mani and Bloedorn, 2000)): can be used to produce generic and query-based summaries. Websumm uses a graph-connectivity model and operates under the assumption that nodes which are connected to many other nodes are likely to carry salient information. • SUMM (Summarist (Hovy and Lin, 1999)): an extractive summarizer based on topic signatures. • ALGN (alignment-based): We ran a sentence alignment algorithm (Gale and Church, 1993) for each pair of English and Chinese stories. We used it to automatically generate Chinese “manual” extracts from the English manual extracts we received from LDC. • LEAD (lead-based): n% sentences are chosen from the beginning of the text. • RAND (random): n% sentences are chosen at random. The six summarizers were run at ten different target lengths to produce more than 100 million summaries (Figure 2). For the purpose of this paper, we only focus on a small portion of the possible experiments that our corpus can facilitate. 
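As a concrete illustration of how the judgement-based extracts and the two simplest baselines described above can be produced, the sketch below takes a sentence-split document and a target length expressed as a percentage of sentences. It is a simplified reading of the procedure, not the released evaluation code: only sentence-count targets are handled (word-based lengths, subsumption links, and tie-breaking are ignored), and the function names are ours.

```python
import random

def judgement_extract(sentences, relevance_scores, target_pct):
    """Extract from human relevance judgements (0-10): take the top-scoring
    sentences until the target length is reached, in original document order."""
    n_target = max(1, round(len(sentences) * target_pct / 100.0))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: relevance_scores[i], reverse=True)
    chosen = sorted(ranked[:n_target])
    return [sentences[i] for i in chosen]

def lead_extract(sentences, target_pct):
    """LEAD baseline: the first n% sentences of the document."""
    n_target = max(1, round(len(sentences) * target_pct / 100.0))
    return sentences[:n_target]

def random_extract(sentences, target_pct, seed=0):
    """RAND baseline: n% sentences chosen at random (seeded for reproducibility)."""
    n_target = max(1, round(len(sentences) * target_pct / 100.0))
    rng = random.Random(seed)
    chosen = sorted(rng.sample(range(len(sentences)), n_target))
    return [sentences[i] for i in chosen]
```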
3 Summary Evaluation Techniques We used three general types of evaluation measures: co-selection, content-based similarity, and relevance correlation. Co-selection measures include precision and recall of co-selected sentences, relative utility (Radev et al., 2000), and Kappa (Siegel and Castellan, 1988; Carletta, 1996). Co-selection methods have some restrictions: they only work for extractive summarizers. Two manual summaries of the same input do not in general share many identical sentences. We address this weakness of co-selection Lengths #dj 05W 05S 10W 10S 20W 20S 30W 30S 40W 40S FD E-FD x 40 E-LD X X X X x x X X X X 440 E-RA X X X X x x X X X X 440 E-MO x x X x x x X x X x 540 E-M2 X 20 E-M3 X 8 E-S2 X 8 E-WS X X x x X X 160 E-WQ X 10 E-LC x 40 E-CY X X x X X 120 E-AL X X X X X X X X X X 200 E-AR X X X X X X X X X X 200 E-AM X X X X X X X X X X 200 C-FD x 40 C-LD X X X X x x X X X X 240 C-RA X X X X x x X X X X 240 C-MO X x X x x x X x X x 320 C-M2 X 20 C-CY X X x X X 120 C-AL X X X X X X X X X X 180 C-AR X X X X X X X X X X 200 C-AM X X X X X X X X 120 X-FD x 40 X-LD X X X X x x X X X X 240 X-RA X X X X x x X X X X 240 X-MO X x X x x x X x X x 320 X-M2 X 20 X-CY X X x X X 120 X-AL X X X X X X X X X X 140 X-AR X X X X X X X X X X 160 X-AM X X X X X X X X 120 Figure 2: All runs performed (X = 20 clusters, x = 10 clusters). Language: E = English, C = Chinese, X = cross-lingual; Summarizer: LD=LEAD, RA=RAND, WS=WEBS, WQ=WEBS-query based, etc.; S = sentence-based, W = word-based; #dj = number of “docjudges” (ranked lists of documents and summaries). Target lengths above 50% are not shown in this table for lack of space. Each run is available using two different retrieval schemes. We report results using the cross-lingual retrievals in a separate paper. measures with several content-based similarity measures. The similarity measures we use are word overlap, longest common subsequence, and cosine. One advantage of similarity measures is that they can compare manual and automatic extracts with manual abstracts. To our knowledge, no systematic experiments about agreement on the task of summary writing have been performed before. We use similarity measures to measure interjudge agreement among three judges per topic. We also apply the measures between human extracts and summaries, which answers the question if human extracts are more similar to automatic extracts or to human summaries. The third group of evaluation measures includes relevance correlation. It shows the relative performance of a summary: how much the performance of document retrieval decreases when indexing summaries rather than full texts. Task-based evaluations (e.g., SUMMAC (Mani et al., 2001), DUC (Harman and Marcu, 2001), or (Tombros et al., 1998) measure human performance using the summaries for a certain task (after the summaries are created). Although they can be a very effective way of measuring summary quality, task-based evaluations are prohibitively expensive at large scales. In this project, we didn’t perform any task-based evaluations as they would not be appropriate at the scale of millions of summaries. 3.1 Evaluation by sentence co-selection For each document and target length we produce three extracts from the three different judges, which we label throughout as J1, J2, and J3. We used the rates 5%, 10%, 20%, 30%, 40% for most experiments. For some experiments, we also consider summaries of 50%, 60%, 70%, 80% and 90% of the original length of the documents. 
Figure 3 shows some abbreviations for co-selection that we will use throughout this section.

3.1.1 Precision and Recall

Precision and recall are defined as:

P_J2(J1) = A / (A + C),   R_J2(J1) = A / (A + B)

                              J2: sentence in extract   J2: sentence not in extract
J1: sentence in extract                  A                           B                    A + B
J1: sentence not in extract              C                           D                    C + D
                                       A + C                       B + D          N = A + B + C + D

Figure 3: Contingency table comparing sentences extracted by the system and the judges.

In our case, each set of documents which is compared has the same number of sentences, and the same number of sentences is extracted; thus P = R. The average precision P_avg(SYSTEM) and recall R_avg(SYSTEM) are calculated by summing over individual judges and normalizing. The average interjudge precision and recall are computed by averaging over all judge pairs. However, precision and recall do not take chance agreement into account. The amount of agreement one would expect two judges to reach by chance depends on the number and relative proportions of the categories used by the coders. The next section on Kappa shows that chance agreement is very high in extractive summarization.

3.1.2 Kappa

Kappa (Siegel and Castellan, 1988) is an evaluation measure which is increasingly used in NLP annotation work (Krippendorff, 1980; Carletta, 1996). Kappa has the following advantages over P and R:
• It factors out random agreement. Random agreement is defined as the level of agreement which would be reached by random annotation using the same distribution of categories as the real annotators.
• It allows for comparisons between arbitrary numbers of annotators and items.
• It treats less frequent categories (in our case: selected sentences) as more important, similarly to precision and recall, but it also considers the more frequent categories, with a smaller weight.

The Kappa coefficient relates the observed agreement P(A) to the agreement expected by chance P(E):

K = (P(A) − P(E)) / (1 − P(E))

No matter how many items or annotators there are, or how the categories are distributed, K = 0 when there is no agreement other than what would be expected by chance, and K = 1 when agreement is perfect. If two annotators agree less than expected by chance, Kappa can also be negative. We report Kappa between three annotators in the case of human agreement, and between three humans and a system (i.e., four judges) in the next section.

3.1.3 Relative Utility

Relative Utility (RU) (Radev et al., 2000) is tested on a large corpus for the first time in this project. RU takes into account chance agreement as a lower bound and interjudge agreement as an upper bound of performance. RU allows judges and summarizers to pick different sentences with similar content in their summaries without penalizing them for doing so. Each judge is asked to indicate the importance of each sentence in a cluster on a scale from 0 to 10. Judges also specify which sentences subsume or paraphrase each other. In relative utility, the score of an automatic summary increases with the importance of the sentences that it includes but goes down with the inclusion of redundant sentences.

3.2 Content-based Similarity measures

Content-based similarity measures compute the similarity between two summaries at a more fine-grained level than just sentences.
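Before turning to those measures, here is a small sketch of the co-selection computations defined in Section 3.1. It is an illustration only: extracts are represented as sets of sentence indices, P = R follows from equal extract sizes as noted above, and the multi-annotator Kappa is written in the Fleiss-style form of the Siegel and Castellan statistic; the exact bookkeeping of the evaluation toolkit used in the experiments may differ.

```python
def precision_recall(system_extract, judge_extract):
    """Co-selection precision/recall of a system extract against one judge extract.
    Extracts are sets of sentence indices; with equal extract sizes, P = R."""
    a = len(system_extract & judge_extract)                            # A: selected by both
    precision = a / len(system_extract) if system_extract else 0.0     # A / (A + C)
    recall = a / len(judge_extract) if judge_extract else 0.0          # A / (A + B)
    return precision, recall

def kappa(extracts, n_sentences):
    """Multi-annotator Kappa, K = (P(A) - P(E)) / (1 - P(E)), over the binary
    in-extract / not-in-extract decision made for each sentence."""
    k = len(extracts)                                # number of annotators (judges, plus system)
    # votes[i] = how many annotators selected sentence i
    votes = [sum(1 for e in extracts if i in e) for i in range(n_sentences)]
    # observed agreement P(A): average pairwise agreement per sentence
    p_a = sum(v * (v - 1) + (k - v) * (k - v - 1) for v in votes) / (n_sentences * k * (k - 1))
    # chance agreement P(E) from the overall category proportions
    p_in = sum(votes) / (n_sentences * k)
    p_e = p_in ** 2 + (1 - p_in) ** 2
    return (p_a - p_e) / (1 - p_e) if p_e < 1 else 1.0
```

Passing the three judge extracts gives the human-agreement figure; adding the system extract as a fourth "annotator" gives the four-judge variant used when a summarizer is scored against the humans.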
For each automatic extract S and similarity measure M we compute the following number:

sim(M, S, {J1, J2, J3}) = [M(S, J1) + M(S, J2) + M(S, J3)] / 3

We used several content-based similarity measures that take into account different properties of the text:

Cosine similarity is computed using the following formula (Salton, 1988):

cos(X, Y) = Σ_i (x_i · y_i) / (√(Σ_i x_i²) · √(Σ_i y_i²))

where X and Y are text representations based on the vector space model.

Longest Common Subsequence is computed as follows:

lcs(X, Y) = (length(X) + length(Y) − d(X, Y)) / 2

where X and Y are representations based on sequences, lcs(X, Y) is the length of the longest common subsequence between X and Y, length(X) is the length of the string X, and d(X, Y) is the minimum number of deletions and insertions needed to transform X into Y (Crochemore and Rytter, 1994).

3.3 Relevance Correlation

Relevance correlation (RC) is a new measure for assessing the relative decrease in retrieval performance when indexing summaries instead of full documents. The idea behind it is similar to (Sparck-Jones and Sakai, 2001). In that experiment, Sparck-Jones and Sakai determine that short summaries are good substitutes for full documents at the high precision end.

With RC we attempt to rank all documents given a query. Suppose that given a query Q and a corpus of documents Di, a search engine ranks all documents in Di according to their relevance to the query Q. If, instead of the corpus Di, the respective summaries of all documents are substituted for the full documents and the resulting corpus of summaries Si is ranked by the same retrieval engine for relevance to the query, a different ranking will be obtained. If the summaries are good surrogates for the full documents, it can be expected that the rankings will be similar.

There exist several methods for measuring the similarity of rankings. One such method is Kendall's tau and another is Spearman's rank correlation. Both methods are quite appropriate for the task that we want to perform; however, since search engines produce relevance scores in addition to rankings, we can use a stronger similarity test, linear correlation between retrieval scores. When two identical rankings are compared, their correlation is 1. Two completely independent rankings result in a score of 0, while two rankings that are reverse versions of one another have a score of −1. Although rank correlation seems to be another valid measure, given the large number of irrelevant documents per query resulting in a large number of tied ranks, we opted for linear correlation. Interestingly enough, linear correlation and rank correlation agreed with each other.

Relevance correlation r is defined as the linear correlation of the relevance scores (x and y) assigned by two different IR algorithms on the same set of documents, or by the same IR algorithm on different data sets:

r = Σ_i (x_i − x̄)(y_i − ȳ) / (√(Σ_i (x_i − x̄)²) · √(Σ_i (y_i − ȳ)²))

Here x̄ and ȳ are the means of the relevance scores for the document sequence. We preprocess the documents and use Smart to index and retrieve them. After the retrieval process, each summary is associated with a score indicating the relevance of the summary to the query. The relevance score is actually calculated as the inner product of the summary vector and the query vector. Based on the relevance score, we can produce a full ranking of all the summaries in the corpus.
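The three formulas above translate directly into code. The sketch below is for illustration only: texts are assumed to be already tokenized and term-weighted, and the tf·idf weighting scheme and Smart-based indexing of the actual experiments are not reproduced here.

```python
import math

def cosine(vec_x, vec_y):
    """cos(X, Y) over sparse term-weight dictionaries (vector space model)."""
    dot = sum(w * vec_y.get(t, 0.0) for t, w in vec_x.items())
    norm_x = math.sqrt(sum(w * w for w in vec_x.values()))
    norm_y = math.sqrt(sum(w * w for w in vec_y.values()))
    return dot / (norm_x * norm_y) if norm_x and norm_y else 0.0

def lcs_length(x, y):
    """Length of the longest common subsequence of two token sequences
    (dynamic programming; equivalent to (length(X) + length(Y) - d(X, Y)) / 2)."""
    prev = [0] * (len(y) + 1)
    for xi in x:
        curr = [0]
        for j, yj in enumerate(y, start=1):
            curr.append(prev[j - 1] + 1 if xi == yj else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def relevance_correlation(scores_full, scores_summary):
    """Pearson (linear) correlation of two lists of retrieval scores obtained for
    the same documents, indexed once as full texts and once as summaries."""
    n = len(scores_full)
    mean_x = sum(scores_full) / n
    mean_y = sum(scores_summary) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(scores_full, scores_summary))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in scores_full))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in scores_summary))
    return cov / (sd_x * sd_y) if sd_x and sd_y else 0.0
```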
In contrast to (Brandow et al., 1995) who run 12 Boolean queries on a corpus of 21,000 documents and compare three types of documents (full documents, lead extracts, and ANES extracts), we measure retrieval performance under more than 300 conditions (by language, summary length, retrieval policy for 8 summarizers or baselines). 4 Results This section reports results for the summarizers and baselines described above. We relied directly on the relevance judgements to create “manual extracts” to use as gold standards for evaluating the English systems. To evaluate Chinese, we made use of a table of automatically produced alignments. While the accuracy of the alignments is quite high, we have not thoroughly measured the errors produced when mapping target English summaries into Chinese. This will be done in future work. 4.1 Co-selection results Co-selection agreement (Section 3.1) is reported in Figures 4, and 5). The tables assume human performance is the upper bound, the next rows compare the different summarizers. Figure 4 shows results for precision and recall. We observe the effect of a dependence of the numerical results on the length of the summary, which is a well-known fact from information retrieval evaluations. Websumm has an advantage over MEAD for longer summaries but not for 20% or less. Lead summaries perform better than all the automatic summarizers, and better than the human judges. This result usually occurs when the judges choose different, but early sentences. Human judgements overtake the lead baseline for summaries of length 50% or more. 5% 10% 20% 30% 40% Humans .187 .246 .379 .467 .579 MEAD .160 .231 .351 .420 .519 WEBS .310 .305 .358 .439 .543 LEAD .354 .387 .447 .483 .583 RAND .094 .113 .224 .357 .432 Figure 4: Results in precision=recall (averaged over 20 clusters). Figure 5 shows results using Kappa. Random agreement is 0 by definition between a random process and a non-random process. While the results are overall rather low, the numbers still show the following trends: • MEAD outperforms Websumm for all but the 5% target length. • Lead summaries perform best below 20%, whereas human agreement is higher after that. • There is a rather large difference between the two summarizers and the humans (except for the 5% case for Websumm). This numerical difference is relatively higher than for any other co-selection measure treated here. • Random is overall the worst performer. • Agreement improves with summary length. Figures 6 and 7 summarize the results obtained through Relative Utility. As the figures indicate, random performance is quite high although all nonrandom methods outperform it significantly. Further, and in contrast with other co-selection evaluation criteria, in both the single- and multi-document 5% 10% 20% 30% 40% Humans .127 .157 .194 .225 .274 MEAD .109 .136 .168 .192 .230 WEBS .138 .128 .146 .159 .192 LEAD .180 .198 .213 .220 .261 RAND .064 .081 .097 .116 .137 Figure 5: Results in kappa, averaged over 20 clusters. case MEAD outperforms LEAD for shorter summaries (5-30%). The lower bound (R) represents the average performance of all extracts at the given summary length while the upper bound (J) is the interjudge agreement among the three judges. 5% 10% 20% 30% 40% R 0.66 0.68 0.71 0.74 0.76 RAND 0.67 0.67 0.71 0.75 0.77 WEBS 0.72 0.73 0.76 0.79 0.82 LEAD 0.72 0.73 0.77 0.80 0.83 MEAD 0.78 0.79 0.79 0.81 0.83 J 0.80 0.81 0.83 0.85 0.87 Figure 6: RU per summarizer and summary length (Single-document). 
5% 10% 20% 30% 40% R 0.64 0.66 0.69 0.72 0.74 RAND 0.63 0.65 0.71 0.72 0.74 LEAD 0.71 0.71 0.76 0.79 0.82 MEAD 0.73 0.75 0.78 0.79 0.81 J 0.76 0.78 0.81 0.83 0.85 Figure 7: RU per summarizer and summary length (Multi-document). 4.2 Content-based results The results obtained for a subset of target lengths using content-based evaluation can be seen in Figures 8 and 9. In all our experiments with tf ∗idfweighted cosine, the lead-based summarizer obtained results close to the judges in most of the target lengths while MEAD is ranked in second position. In all our experiments using longest common subsequence, no system obtained better results in the majority of the cases. 10% 20% 30% 40% LEAD 0.55 0.65 0.70 0.79 MEAD 0.46 0.61 0.70 0.78 RAND 0.31 0.47 0.60 0.69 WEBS 0.52 0.60 0.68 0.77 Figure 8: Cosine (tf∗idf). Average over 10 clusters. 10% 20% 30% 40% LEAD 0.47 0.55 0.60 0.70 MEAD 0.37 0.52 0.61 0.70 RAND 0.25 0.38 0.50 0.58 WEBS 0.39 0.45 0.53 0.64 Figure 9: Longest Common Subsequence. Average over 10 clusters. The numbers obtained in the evaluation of Chinese summaries for cosine and longest common subsequence can be seen in Figures 10 and 11. Both measures identify MEAD as the summarizer that produced results closer to the ideal summaries (these results also were observed across measures and text representations). 10% 20% 30% 40% SUMM 0.44 0.65 0.71 0.78 LEAD 0.54 0.63 0.68 0.77 MEAD 0.49 0.65 0.74 0.82 RAND 0.31 0.50 0.65 0.71 Figure 10: Chinese Summaries. Cosine (tf ∗idf). Average over 10 clusters. Vector space of Words as Text Representation. 10% 20% 30% 40% SUMM 0.32 0.53 0.57 0.65 LEAD 0.42 0.49 0.54 0.64 MEAD 0.35 0.50 0.60 0.70 RAND 0.21 0.35 0.49 0.54 Figure 11: Chinese Summaries. Longest Common Subsequence. Average over 10 clusters. Chinese Words as Text Representation. We have based this evaluation on target summaries produced by LDC assessors, although other alternatives exist. Content-based similarity measures do not require the target summary to be a subset of sentences from the source document, thus, content evaluation based on similarity measures can be done using summaries published with the source documents which are in many cases available (Teufel and Moens, 1997; Saggion, 2000). 4.3 Relevance Correlation results We present several results using Relevance Correlation. Figures 12 and 13 show how RC changes depending on the summarizer and the language used. RC is as high as 1.0 when full documents (FD) are compared to themselves. One can notice that even random extracts get a relatively high RC score. It is also worth observing that Chinese summaries score lower than their corresponding English summaries. Figure 14 shows the effects of summary length and summarizers on RC. As one might expect, longer summaries carry more of the content of the full document than shorter ones. At the same time, the relative performance of the different summarizers remains the same across compression rates. C112 C125 C241 C323 C551 AVG10 FD 1.00 1.00 1.00 1.00 1.00 1.000 MEAD 0.91 0.92 0.93 0.92 0.90 0.903 WEBS 0.88 0.82 0.89 0.91 0.88 0.843 LEAD 0.80 0.80 0.84 0.85 0.81 0.802 RAND 0.80 0.78 0.87 0.85 0.79 0.800 SUMM 0.77 0.79 0.85 0.88 0.81 0.775 Figure 12: RC per summarizer (English 20%). C112 C125 C241 C323 C551 AVG10 FD 1.00 1.00 1.00 1.00 1.00 1.000 MEAD 0.78 0.87 0.93 0.66 0.91 0.850 SUMM 0.76 0.75 0.85 0.84 0.75 0.755 RAND 0.71 0.75 0.85 0.60 0.74 0.744 ALGN 0.74 0.72 0.83 0.95 0.72 0.738 LEAD 0.72 0.71 0.83 0.58 0.75 0.733 Figure 13: RC per summarizer (Chinese, 20%). 
5% 10% 20% 30% 40% FD 1.000 1.000 1.000 1.000 1.000 MEAD 0.724 0.834 0.916 0.946 0.962 WEBS 0.730 0.804 0.876 0.912 0.936 LEAD 0.660 0.730 0.820 0.880 0.906 SUMM 0.622 0.710 0.820 0.848 0.862 RAND 0.554 0.708 0.818 0.884 0.922 Figure 14: RC per summary length and summarizer. 5 Conclusion This paper describes several contributions to text summarization: First, we observed that different measures rank summaries differently, although most of them showed that “intelligent” summarizers outperform lead-based summaries which is encouraging given that previous results had cast doubt on the ability of summarizers to do better than simple baselines. Second, we found that measures like Kappa, Relative Utility, Relevance Correlation and ContentBased, each offer significant advantages over more simplistic methods like Precision, Recall, and Percent Agreement with respect to scalability, applicability to multidocument summaries, and ability to include human and chance agreement. Figure 15 Property Prec, recall Kappa Normalized RU Word overlap, cosine, LCS Relevance Correlation Intrinsic (I)/extrinsic (E) I I I I E Agreement between human extracts X X X X X Agreement human extracts and automatic extracts X X X X X Agreement human abstracts and human extracts X Non-binary decisions X X Takes random agreement into account by design X X Full documents vs. extracts X X Systems with different sentence segmentation X X Multidocument extracts X X X X Full corpus coverage X X Figure 15: Properties of evaluation measures used in this project. presents a short comparison of all these evaluation measures. Third, we performed extensive experiments using a new evaluation measure, Relevance Correlation, which measures how well a summary can be used to replace a document for retrieval purposes. Finally, we have packaged the code used for this project into a summarization evaluation toolkit and produced what we believe is the largest and most complete annotated corpus for further research in text summarization. The corpus and related software is slated for release by the LDC in mid 2003. References Ron Brandow, Karl Mitze, and Lisa F. Rau. 1995. Automatic Condensation of Electronic Publications by Sentence Selection. Information Processing and Management, 31(5):675–685. Jean Carletta. 1996. Assessing Agreement on Classification Tasks: The Kappa Statistic. CL, 22(2):249–254. Maxime Crochemore and Wojciech Rytter. 1994. Text Algorithms. Oxford University Press. William A. Gale and Kenneth W. Church. 1993. A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(1):75–102. Donna Harman and Daniel Marcu, editors. 2001. Proceedings of the 1st Document Understanding Conference. New Orleans, LA, September. Eduard Hovy and Chin Yew Lin. 1999. Automated Text Summarization in SUMMARIST. In Inderjeet Mani and Mark T. Maybury, editors, Advances in Automatic Text Summarization, pages 81–94. The MIT Press. Klaus Krippendorff. 1980. Content Analysis: An Introduction to its Methodology. Sage Publications, Beverly Hills, CA. Inderjeet Mani and Eric Bloedorn. 2000. Summarizing Similarities and Differences Among Related Documents. Information Retrieval, 1(1). Inderjeet Mani, Th´er`ese Firmin, David House, Gary Klein, Beth Sundheim, and Lynette Hirschman. 2001. The TIPSTER SUMMAC Text Summarization Evaluation. In Natural Language Engineering. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. 
Centroid-Based Summarization of Multiple Documents: Sentence Extraction, UtilityBased Evaluation, and User Studies. In Proceedings of the Workshop on Automatic Summarization at the 6th Applied Natural Language Processing Conference and the 1st Conference of the North American Chapter of the Association for Computational Linguistics, Seattle, WA, April. Horacio Saggion. 2000. G´en´eration automatique de r´esum´es par analyse s´elective. Ph.D. thesis, D´epartement d’informatique et de recherche op´erationnelle. Facult´e des arts et des sciences. Universit´e de Montr´eal, August. Gerard Salton. 1988. Automatic Text Processing. Addison-Wesley Publishing Company. Sidney Siegel and N. John Jr. Castellan. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, Berkeley, CA, 2nd edition. Karen Sparck-Jones and Tetsuya Sakai. 2001. Generic Summaries for Indexing in IR. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 190–198, New Orleans, LA, September. Simone Teufel and Marc Moens. 1997. Sentence Extraction as a Classification Task. In Proceedings of the Workshop on Intelligent Scalable Text Summarization at the 35th Meeting of the Association for Computational Linguistics, and the 8th Conference of the European Chapter of the Assocation for Computational Linguistics, Madrid, Spain. Anastasios Tombros, Mark Sanderson, and Phil Gray. 1998. Advantages of Query Biased Summaries in Information Retrieval. In Eduard Hovy and Dragomir R. Radev, editors, Proceedings of the AAAI Symposium on Intelligent Text Summarization, pages 34–43, Stanford, California, USA, March 23–25,. The AAAI Press.
2003
48
    ! " #$%'&(%) +*, -. /10 2 ) -  34 "5$6 78 " 9 " : ; < "5$=>% "%?&(%) @ 75  "5 ACB:DEBGFHACDEIJLKMJ NPORQTSVUXW/OZY[\W/]^QTS_QT`aQTbUXcd[\W/ceUXfg(ORQTSVhi] jlknm\kojqp S_QTURQT]r`/s/OX]rt/SVuv t/S_wU7x/O koy ` z U y wUXu ml{m|ko}Z~XZ{ u:€OX/OZW ‚Tƒ…„Tƒ…†T‡Lˆ‰„L„Š‚7‹ŒŠ^LŽ AC%’‘’“nJ/”‘ •:–˜— ™›š…œrš/ŸžCœn—¡ ¢™£œo¤MšLž\¥l¦/—¨§— ©ªœ(¦— Ÿ« ¥"¬¤\®­a¤¯ž|±°³²/°®´ a§µ«1— ¤|–˜— ©¶œ›™|–L¥Tž|¤·¤R—¨ ¢±µ ¸ ² ¤\–L œo¹¤|–¥nž\™º¤\–La ™|a´ ¦TŸ™l» ¼C ½ ž\™|¤šLž\®™|Ÿ©a¤¾œ¿™l—¡ ¢š…´ ?œo©˜§ÀªnŸ©LŸžlœT´  ¢Ÿ¤|–¥G§?¬Á¥nžÂŸ­Z¤\žlœn°Ã¤l— ©Lª¾ž|Ÿ¹L™|±§À¤\®žl  ™|aÄ®¹LŸ©…°Ã®™lµÅœo©˜§Æ¤\–L®©¾œo©˜œn´ ²ZǟÈ™|®¦’®žRœn´ œo¹¤|–¥nž\É—¡§’Ÿ©a¤i— ½ a§Ê¤\®­a¤Ê°Ã¥"´¡´ ±°³¤R— ¥n©™Ë¤\¥ °Ã¥" ¢š…œrž|¤\–LŒ™|¤Rœo¤R— ™|¤R—¡°®œn´ZÄR¹˜œo©±¤l— ¤R— Ÿ™l»•:– žlœr¤l— ¥(¥"¬…ž|±°³²/°®´¡— ©LªÌ— ™.œn´ ™|¥Í ¢±œo™|¹Lž\a§Å¬Á¥nž ±œn°E–6°Ã¥"´¡´ ±°³¤R— ¥n©˜» Î:— ©˜œn´¡´ ²nµ›ž|±´¨œr¤\a§ž|ŸÉ ™|aœržl°e–Ϥ\¥nš…—¡°Ã™Cœož\,— ©a¤\ž|¥G§i¹…°Ãa§Å¤\¥nªnŸ¤\–L®ž «1— ¤|–™|¥" !§L— ™l°Ã¹L™|™l— ¥n©¥"¬L¬Á¹L¤\¹Lž|ž|Ÿ™|aœržl°e– §L— ž|±°³¤R— ¥n©™i» Ð ÑRÒ ‘i“nF:ÓdÔd”‘iDEF Ò Õ ©À°Ã¥n©a¦’®©±¤l— ¥n©˜œn´Ö— ©…¬Á¥nžl œo¤R— ¥T©ž|Ÿ¤|žR— Ÿ¦œT´·™|¤\¹…§L— Ÿ™lµ ¤\–L·™l—¡ ·—¨´¡œržl— ¤×² ¸ Ÿ¤×«%®Ÿ©Å¤^«d¥Å§’¥°³¹˜ ¢®©±¤|™M— ™.°Ÿœn´¡°³¹É ´¡œo¤\a§ ¸ œr™|a§¥n©$¤\–L §L— ™|¤\žl— ¸ ¹L¤R— ¥n©¥"¬’¤\®žl ¢™¤\–…œo¤œošÉ š/±œož— ©C±œn°E–ا’¥°³¹˜ ¢®©±¤l»MÙ%¥R«d®¦’®žRµÚ— ©Û§’¥°³¹˜ ¢®©±¤ §Lœr¤lœ ¸ œr™|®™lµ¥nž ¥n©<¤\–L·¼C ¸ µG¤\–L®ž\¯Ÿ­— ™|¤ ©a¹˜  ¸ Ÿž|™ ¥"¬M§’¥°³¹˜ ¢®©±¤|™Å¤\–…œo¤Û´¡— ¤\®žRœn´¡´ ²Ü°Ã¥n©a¤Rœn— ©Ý¤\–L™lœn  šL–žlœr™9Ÿ™l»Þ•:–®™|,§’¥°³¹˜ ¢®©±¤|™q©L¥T¤ß¥n©˜´ ²; ·œT— ©±¤lœT— © œÍªn¥Z¥G§à™|¤Rœo¤R— ™|¤R—¡°®œn´ž|Ÿ™|a  ¸ ´¨œr©…°Ã ¸ ¹¤,œn´ ™|¥Ø™|–…œrž| œ(´ ¥n©ªq™|a°Ã¤R— ¥T©<¥"¬…¤\®žl ¢™lµ™|¥" ®¤R—¨ ¢Ÿ™ ™|šLž\aœT§ß¥l¦’®ž ™|®©±¤|Ÿ©…°Ã®™l» ¼Ø–LŸ©á¤\–Lß§’Ÿªnž\®$¥"¬/¤\–Lq ·œr¤l°e–Û— ™ ¸ Ÿ²T¥T©…§¯¤\–L ´ ®¦’a´n¥"¬Gœ™l—¡ ¢š…´ $°Ã¥"— ©…°Ÿ—¨§i®©˜°³±µ/— ¤— ™âœ©…œr¤|¹žlœT´"°Ã¥n©LÉ ™|aÄ®¹LŸ©…°Ã ¤\–…œo¤¤\–L®™| ™|a°Ã¤R— ¥T©L™¥"¬L¤\®žl ¢™1œož\ß§’¹š…´¡— É °Ÿœo¤\a§ãœo©˜§·ž|Ÿ¹L™|±§ ¸ ²q¤\–Lßœo¹¤|–¥nž\™l»PÎT¹Lž\¤|–®žl ¥nž|±µ «d1°Ÿœo©qœo™|™|¹˜ ¢¤\–…œo¤Rµ— ©¯¤\–…— ™8§L— ª7— ¤Rœn´XœoªTaµT¤\–…— ™:¤^²Zš/ ¥"¬Lä ž|a°Ã²°Ÿ´¡— ©ª"å— ™œo©$¥nžR§L— ©…œT´lšLžRœn°Ã¤l—¡°³«8–®©›œo¹¤|–¥nž\É — ©Lª1¤\®­a¤|É ¸ œr™|a§!šLž\¥G§i¹…°³¤\™ ¸ ±°®œr¹L™|¤\®­a¤|™¢œož\¢±œo™l—¡´ ² °Ã¥nš…— ±§Ûœo©˜§<ž|Ÿ¹L™|±§L»àæ©¥n¤\–LŸžC—¡ ¢š/¥Tž|¤lœr©a¤Öœo™|š a°Ã¤ — ™¤\–…œo¤¤\–Lßž|Ÿ¹L™|±§á¤\®­a¤|™Öœož\ß¥"¬Á¤\®©,™|a œo©a¤R—¡°®œT´¨´ ²  ¢±œo©…— ©ª"¬Á¹…´¡ç˜¤\–La— žâ™|¹Lž\¦— ¦œT´œn°Ãž|¥T™|™ß§’¥°³¹˜ ¢®©±¤|™·— ¤|É ™|a´¡¬ — ™œo©·Ÿ¦—¡§’Ÿ©…°³Œ¥"¬n¤\–La— ž%¹L™|±¬Á¹…´ ©LŸ™|™i»ÚÎT¥nžŸ­œn ¢É š…´ ±µ/™|¥" Ÿ­ZšLž\®™|™l— ¥n©™M°Ã¥n©a¤Rœn— ©<¤\–LÖ§’ ½ ©˜— ¤R— ¥n©™â¥"¬ ©…œT ¢a§1Ÿ©a¤l— ¤R— Ÿ™¤\–…œo¤âœož\ ™|–…œrž|±§ ¸ Ÿ¤×«%®Ÿ©q¤\–L¢¤^«d¥ §’¥°³¹˜ ¢®©±¤|™l» Õ ¤8™|–L¥T¹…´¡§ ¸  ± ¢šL–…œr™l— Ç®±§·–LŸž|!¤\–…œo¤¤\–L1™|¤Rœo¤R— ™|É ¤R—¨°Ÿœn´%™l—¡ ·—¨´¡œržl— ¤×²;œo©˜§Å¤\–Lá¤\®žl ¾™|aÄ®¹LŸ©…°Ã, ·œr¤l°e–LÉ — ©LªÌœož\M™|¤\ž|¥n©ª"´ ²Ûœo™|™|¥°®—¡œr¤|±§Lµ ¸ ¹¤1Ÿ™9™|Ÿ©a¤R—¡œn´¡´ ²Û§L—¡¬ÁÉ ¬Á®ž|Ÿ©a¤Rµ šL–®©¥" ¢Ÿ©…œT»1•:–q¬Á¥nžl ¢ŸžP— ™¯§’Ÿžl— ¦T±§ã¬Áž|¥"  ¤\–LM¤\¥nš…—¡°Ÿœn´Gž|±´¨œr¤R— ¥T©L™|–…— š ¸ Ÿ¤×«%®Ÿ©Ì¤\–LM¤^«d¥,§’¥°³¹É  ¢Ÿ©a¤\™lµ¢«8–®ž|±œo™(¤\–Lœo¹¤|–¥nžRå ™(±§L— ¤l— ©Lª7µž|Ÿ¦— ™l— ©Lª"µ ¥nžÅÄR¹¥n¤R— ©LªÈœà§’¥°³¹˜ ¢®©±¤lµ$— ©…§L—¡°Ÿœo¤R— ©ªà™|¥" Û¬Á¥nžl  ¥"¬8ä ™|¥G°Ÿ—¨œT´¨å˜ž|±´¨œr¤\a§’©®™|™lµ8°Ÿœo¹L™|Ÿ™$¤\–L<´¡œo¤\¤|Ÿžl»,Ù%¥R«8É Ÿ¦T®žRµ"¤\–L®ž\–…œ\¦T ¸ Ÿ®©Í¬Á®«èœo¤\¤\a ¢š¤|™lµ"¤\¥C§Lœr¤|±µ ¤\¥ œo©˜œn´ ²Zǟ.¤\®­a¤›°Ã¥nž|š ¥nžlœŸ­Zš…´¡—¡°®— ¤R´ ²,¬Á¥G°Ã¹L™l— 
©LªÖ¥n©Å¤\–L ž|Ÿ¹L™|Mœo©˜§$ž|Ÿ¹L™lœ ¸ —¡´¨— ¤^²<— ™|™|¹LŸ™l» é œo™|±§q¥n©Å¤\–Láœ ¸ ¥l¦’ß¥ ¸ ™|®ž\¦œo¤R— ¥n©™lµ‰¤\–…— ™$š…œrš/Ÿž œn—¡ ¢™.œo¤ Ÿ™9¤Rœ ¸ ´¡— ™|–˜— ©ªÍœ( ¢Ÿ¤|–¥G§’¥7´ ¥nª"—¡°Ÿœn´ ¸ œr™l— ™›¬Á¥nž Ÿ­Z¤\žlœn°Ã¤l— ©LªÍ¬Áaœr¤|¹ž|a§ß¤\®žl è™|aÄ®¹LŸ©…°Ã®™Pž|Ÿ¹L™|±§<— ©êœ ªnž\¥n¹š¯¥"¬/§’¥°³¹˜ ¢®©±¤|™l»ÚÎ:— ž\™|¤lµ±«d$§’ ½ ©¤\–L1¬Á¥"´¡´ ¥l«8É — ©LªÌ¤\–Lž|ŸC¤^²Zš/Ÿ™q¤\–…œo¤£°Ã¥nž|ž\®™|š/¥T©…§£¤\¥È§L— ™|¤R— ©˜°³¤R— ¦T ž|Ÿ¹L™| š…œr¤|¤\Ÿž|©L™¥"¬"¤\®žl ¶™|aÄ®¹LŸ©…°Ã®™l» ë\ìní<î‰ï±ðñZï±òZóLô±õ.ö±óLôqñ ÷±øÃönõùRõúHû ®žR ·œo©LŸ©X¤q´ ®­/— É °Ã¥n©Ìœo©˜§Ö—¡§L— ¥" ·œr¤l—¡°Ÿ­ZšLž\®™|™l— ¥n©™¢¤\–…œo¤Pœož\ß¬Áž|®É ÄR¹®©±¤l´ ²£œo©˜§$¹L©˜— ¦’®ž\™lœn´¡´ ².¹L™|±§(— ©›¤\®­a¤|™l» ë\üníßý\óLõþeö±óLþEÿHÿ ù9ï±ó 9ùôêþeùlþeõú û œo™|™lœrªnŸ™Ûœo©˜§ °Ã¥n©a¦’®©±¤l— ¥n©‰œn´nŸ­ZšLž\®™|™l— ¥n©™d¤\–…œo¤œož\¥n©˜´ ²$¤\a ¢É š/¥Tžlœržl—¡´ ²(œo©˜§M´ ¥G°Ÿœn´¡´ ²ž|Ÿ¹L™|±§L» ®¹™lœ ¸ ´ «1— ¤|–É ¥n¹¤M°Ãž|a§— ¤\™¢¤\¥(¤\–Lqœo¹¤|–¥nž\™lµ%œn´ ™|¥›ž|±¬ Ÿž|ž\a§ß¤\¥ œo™  ó7õ9þEönó7þdÿ ù9ï±ó » ë níÚòZï±þeù9ô(þeùlþeõú¯û œo™|™lœrªnŸ™â¤\–…œo¤œož\ßœo¤\¤\žl— ¸ ¹L¤\a§ ¤\¥.œÚš…œrž|¤R—¨°Ã¹…´¡œožœo¹¤|–¥nžR»…¼Ø–LŸ©·¹L™|±§ ¸ ²$¥n¤\–LŸž œo¹¤|–¥nž\™lµ¹L™|¹˜œn´¡´ ²;°Ã¥nš…— ±§à«1— ¤|–°Ãž|a§— ¤\™lµœn´ ™|¥ ž|±¬ Ÿž|ž\a§$¤\¥ãœo™ ö±òZþE÷nï±øÃù9ô›þeùlþeõ »  ¥T¤|Ü¤\–…œo¤>«d4°Ã¥n©L™l—¡§’Ÿž¥n©˜´ ²4¤\–L4§’Ÿ™l— ªn©…œr¤|±§ ä «8žl— ¤|Ÿžlå¥"¬G¤\–L·¤Rœož|ªT®¤1¤\®­a¤!–LŸž|±» Õ ™|™|¹L®™ß— ©Ø—¡§’Ÿ©LÉ ¤R—¨¬Á²— ©LªÅœ›°Ã¥nša²ažl— ªn–±¤1–L¥7´¡§’®ž ¥"¬dœ™|š/±°®— ½ °¤\®­a¤.œož\ ¥n¹¤|™l—¡§’â¥"¬L¤\–L ™l°³¥Tš/ ¥"¬L¤\–…— ™8š…œrš/Ÿžl» ¼Ø–…—¡´ ·¤\®žl ¢™ßœo©˜§<°Ã¥" ¢š/¥T¹L©˜§’™ –…œ\¦T<´ ¥n©ª ¸ Ÿ®© œà°Ã®©a¤\žlœT´ — ™|™|¹L<¥"¬8©…œr¤|¹žlœT´¢´¡œo©ªn¹…œrªnCšLž\¥G°Ã®™|™l— ©Lª ™|¤\¹…§L— Ÿ™lµ›´¡— ¤\¤R´ êœo¤\¤\®©±¤l— ¥n©>–…œr™ ¸ Ÿ®©¶š…œT—¡§¤\¥ê¤\–L Ÿ­Z¤\žlœn°Ã¤l— ¥n©Ëœo©˜§¹L¤R—¡´¨— DZœo¤R— ¥T©Â¥"¬ß´ ¥n©ªnŸžÍš…œr™|™lœoªT®™lµ ©…œT ¢a´ ²’µ¯¤\–L  ó7õ9þEönó7þqÿ ù9ï±ó œo©˜§Ï¤\–L ö±òZþE÷nï±øÃù9ô þeùlþeõ œo™¢šLž\®¦/— ¥T¹L™l´ ²à§’ ½ ©a§»  Ÿ¦TŸž|¤|–a´ Ÿ™|™lµ/¤\–L®™| œož\Ì¤\–LH¬Áaœr¤|¹ž|a§,¤\®­a¤C±´ ± ¢®©±¤|™ã¤\–…œo¤àœož\Ï ¢¥T™|¤ ™|¤\ž|¥n©ª"´ ²›ž|±´¨œr¤\a§$¤\¥¯š…œrž|¤R—¨°Ã¹…´¡œož¤\¥nš…—¡°Ã™¥nž$œo¹¤|–¥nž\™lµ œo©˜§ß¤\–L®ž\a¬Á¥nž\q°Ã¥n¹…´¡§ ¸ $¹L™|±¬Á¹…´7ž|Ÿ™|¥n¹žl°³Ÿ™M— ©C¦œržl— É ¥n¹™:¤\®­a¤šLž\¥G°Ã®™|™l— ©LªPœošš…´¡—¡°®œr¤l— ¥n©™lµi™|¹…°e–(œo™8œo¹¤|–¥nž\É ™|–…— šÈ—¡§’Ÿ©a¤R— ½ °®œr¤l— ¥n©˜µ§’¹š…´¡—¡°®œr¤l— ¥n©Ï°e–La°— ©Lª7µ§’¥°³¹É  ¢Ÿ©a¤P°Ÿ´ ¹™|¤|Ÿžl— ©LªÖœo©˜§$™|¹… · œožl— Çaœr¤R— ¥T©…» é a°Ÿœo¹™|P¤\–L·Ÿ­Zš…´ ¥nžRœo¤R— ¥T©Û— ©C¤\–…— ™·§L— ž|±°³¤R— ¥n©C–…œr™  ¹L™|¤q™|¤Rœož|¤\a§µP— ©È¤\–…— ™qš…œrš/Ÿž›«d´¡—¨ ·— ¤›¥n¹žÅ¬Á¥G°Ã¹L™ ¤\¥£¤\–L<¬Á¥"´¡´ ¥l«1— ©Lª<¤\–Lž|ŸÍ— ™|™|¹LŸ™l»êÎ:— ž\™|¤lµ— ©à™|a°Ã¤R— ¥T©  µX«d šLž\®™|Ÿ©a¤!œo©qÖ°®— ®©±¤1 ¢Ÿ¤|–¥G§q¬Á¥nžŸ­Z¤\žlœn°Ã¤l— ©Lª ž|Ÿ¹L™|±§ß¤\®žl Â™|aÄ®¹LŸ©…°Ã®™â¤\¥nªnŸ¤\–L®žâ«1— ¤|–C¤\–Lq°Ã¥nž|ž\®É ™|š/¥T©…§— ©ªÛ§’¥°³¹˜ ¢®©±¤¯™|¹ ¸ ™|®¤\™l»’š a°Ÿ—¡œn´Œœo¤\¤\®©±¤l— ¥n© — ™š…œT—¡§M¤\¥C ·œT1¤\–Lq ¢Ÿ¤|–¥G§$™l—¡ ¢š…´ qœo©˜§·ªnŸ©LŸžlœT´ ™|¥M¤\–…œo¤— ¤— ™±œo™l—¡´ ²Åœošš…´¡—¡°®œ ¸ ´ P¤\¥.«1—¡§’P¦œržl— ®¤^²(¥"¬ ¤\®­a¤.ž|Ÿ™|¥n¹žl°³Ÿ™l»  Ÿ­Z¤Rµâ— ©Ø™|a°Ã¤R— ¥T©˜µd™|¥" àœo©˜œoÉ ´ ²Z¤R—¨°Ÿœn´ ž|Ÿ™|¹…´ ¤|™.œož\·ž|Ÿš/¥Tž|¤|±§.«8–®ž|·¤\–L·šLž\¥nš ¥n™|a§  ¢Ÿ¤|–¥G§Ö«¢œr™áœošš…´¡— a§Ö¤\¥£™|®¦’®žRœn´:¤\®­a¤(°Ã¥"´¡´ ±°³¤R— ¥n©™ œo©˜§ß¤\–L·™|¤Rœo¤R— ™|¤R—¡°®œn´©…œr¤|¹ž|Ÿ™ «dŸž|C°Ã¥" ¢š…œrž|±§L»qÎ:— É ©…œT´¡´ ²Tµ — ©¯™|a°Ã¤R— ¥T©˜µ’«d1— ©a¤\ž|¥G§i¹…°Ãž|±´¨œr¤\a§¢ž|Ÿ™|aœržl°e– ¤\¥nš…—¡°Ã™œo©˜§ §L— ™l°Ã¹L™|™‰¤\–L¹L¤R—¡´¨— DZœo¤R— ¥T©P¥"¬®¤\–LdšLž\¥nš ¥n™|a§  
¢Ÿ¤|–¥G§¢— ©.°Ã¥n©L©a°Ã¤l— ¥T©!«1— ¤|–❟­— ™|¤l— ©Lª8¤\®­a¤ž|Ÿ¤|žR— Ÿ¦œT´ œošš…´¡—¡°®œr¤l— ¥n©™l»  Ô"!$#&%“(''͏dJ/)'7Ó+*-,EÔd’‘)'L“nD Ò/. 1032 46587:9<;3=>;3?@9A?1BDC@EDFGIH3J<K>=>58LK M ®©¥n¤\ œn´¡´®¤\–L¢§’¥°³¹˜ ¢®©±¤|™8— ©P¤\–Ld¤Rœož|ªT®¤8°Ã¥nž|š¹L™Œœo™ N µ œn´¡´n¤\–L¤\®žl ¢™¢— ©.¤\–L¢¤Rœož|ªT®¤¢°Ã¥nž|š¹L™¢œo™POÈ»•:– Q ï±øÃôR/SUTRøÃönð¿ë RWVËìní — ™›œ™|aÄ®¹LŸ©…°Ãß¥"¬ R ¤\®žl ¢™ ª"— ¦’®© ¸ ²YX ZI[ \D] ë Z \_^```^ Z [ í ë Zba<c O í d ë\ìní  Ÿ­Z¤Rµ±°Ã¥n©L™l—¡§’ŸžÚœ õòeA8þeøÃù9ù µ®±œn°E–!©L¥§’¥"¬®«8–˜—¨°e– °Ã¥nž|ž\®™|š/¥T©…§i™(¤\¥¶œÛ§L— ™|¤R— ©˜°³¤R— ¦TÍ«d¥Tžl§ R ÉEªTžlœn Ê¥ ¸ É ™|®ž\¦T±§£— © N »›ÎT¥nž¢Ÿ¦T®ž\²á©L¥§’¥n©á¤\–L·¤\ž|®±µ¤\–L®ž\ Ÿ­— ™|¤|™êœo©Æ¹L©˜—¨Ä®¹L±´ ²Ë§’Ÿ¤|Ÿžl ·— ©L±§Ï™|¹ ¸ ™|®¤Afhg ë Z [ \ í ëi N í ª"— ¦’®©;œo™áœ›™|¹ ¸ ™|®¤¯¥"¬œn´¡´¤\–LÅ§’¥°³¹˜ ¢®©±¤|™ ¤\–…œo¤ã°Ã¥n©a¤Rœn— © Z [ \ » Õ ©¥n¤\–LŸž.«d¥Tžl§’™lµ Z [ \ — ™Cœ(™|®É ÄR¹®©˜°³â¥"¬L¤\®žl ¢™8¤\–…œo¤!— ™™|–…œrž|±§ ¸ Ÿ¤×«%®Ÿ©jfhg ë Z [ \ í »  ¥T¤l— ©Lªê¤\–…œo¤Û 8¹…´ ¤R— š˜´ ,©L¥§’Ÿ™Û ·œ\²;ž|±¬ Ÿž<¤\¥ê¤\–L ™lœn à§’¥°³¹˜ ¢®©±¤¯™|¹ ¸ ™|®¤RµÚ«dÌ§’ ½ ©àœ õòeA<þeøÃù9ù k ö±õù9ôjRÿ òXõ9þeùløPë L•:É9°®´ ¹L™|¤\®ž í œo™qœ¯™|¹ ¸ ™|®¤P¥"¬:©L¥§’Ÿ™ ¥n©Ì¤\–Lq™|¹·­Å¤\ž|®›¤\–…œo¤(— ™( ·œršLš/±§á¤\¥C¤\–Lq™lœn  §’¥°³¹˜ ¢®©±¤™|®¤R»  œT ¢a´ ²Tµ lnm 587<9:;3=>;3?19porqts:u Svlÿ òXõþeùRøw õÌô±ù xŒó7ù9ô;ö±õàö ñZöy ø:ë f ^_z í%õò8R÷ þe÷±ö±þ f  õöâõò k õùRþïU{|R/SUTRøÃönðõ~} z  õáöØõò k õùRþ¯ïU{Íô±ïRòZð$ùlóLþeõ}.ö±óLô€:‚i f } fhg ëií ] z }@€:„ƒ i f }… g ëiíbƒ ] z ú ÎT¥nž Ÿ­œn ¢š˜´ aµ— ©ÛÎ:— ªT¹Lž\ ì µ/©L¥§’Ÿ™âæËœo©˜§ é ¸ ¥T¤|– ž|±¬ Ÿž¤\¥¯™|¹ ¸ ™|®¤‡† M‰ˆPŠŒ‹ 쎍 µ M‰ˆPŠŒ‹ ìŽ  œo©˜§Öœož\ ¤\–L®ž\a¬Á¥nž\M ¢Ÿž|ªn±§q— ©a¤\¥áœ™l— ©Lª7´ ‘L•‰É°Ÿ´ ¹™|¤|Ÿžl» ’“”• –˜— ”• –˜™ ’›š œ 3–˜3“>• œ ™›’›ž3Ÿ › — ¡˜™ ’› ¢ ™›’›£¤š Ÿ ™›’›š • ž¥¡˜— ™ ’ ¦ š Ÿ œ š3–˜— ™›’˜¢  Ÿ ™›’›— ¡˜š ™ ’ ¦ ”3§ — ™›’–˜• –˜š ™ ’›”— — Ÿ ”œ — 𠍙 ’˜§ ©ª›§ — ”–›— • ”¥« ™ ’›¢  Ÿ š • ž–˜™›’›œ ”¥¦ • — ”« ™ ’˜”¥–˜¨™›’›— š œ3¡›–˜3«  ž ¬¤™ ­_®)¯)°±³²˜´¥­®)¯)°± µ ¶ ·¹¸nº »½¼ ¸nº »Ž¾U¿UÀU»¥ÀU·¹º ¿yÁU Ànü ĎÀnÅÀUÆ ¾½Â ¾vº ÁĽ¼ÇȾ¥Â ¿v¾U»v¼¥Å À½Â½¼ Ä¥¾yÇȸvÉʼ»¥º »¥¾¶ ¶ ¸È¼ ¼  ¸¿U¼ ¾UËyÉ¤Ì Ívɼ ¸»½¼ º ¸nΛŠÀ½Â ¾vº ÁÈ»Ž¿U¸UǺ ¼ ¸nÎ ¸»½Ëh¼ ¾U¿UÄ¥»½ÀvÎ ÀvÁÏ3¶ Î:— ªT¹Lž\ ì XpЭœT ¢š…´ !¥"¬…œÑL•‰É°Ÿ´ ¹™|¤|Ÿž 103 ÒÓL?ÕÔy5 m J<L5ÖB½?1L×C@EDFGIH3J<K>=>58L;39<Ø •:– ¸ œr™l—¡°!šLž\¥G°Ãa§i¹Lž|á¬Á¥nžPŸ­Z¤\žlœn°Ã¤l— ©LªwL•‰É°Ÿ´ ¹™|¤|Ÿž|™ — ™‰™l—¡ ·—¨´¡œrž…¤\¥¤\–…œo¤˜¹L™|±§ ¸ ²ÚÙGœT ·— ž ÛÜФ|DZ— ¥n©…— ë\ì(Ý(ݎÞní œo©˜§q— ™™|¹… · œožl— Ç®±§›œo™1¬Á¥"´¡´ ¥l«8™X ë\ìní Š ¥T©a¦TŸž|¤¤\–LŒ¤Rœož|ªT®¤8°Ã¥"´¡´ ±°³¤R— ¥n©›— ©a¤\¥ ™|aÄ®¹LŸ©…°Ã®™ ¥"¬l¤\®žl ¢™lµR±œn°E–!¥"¬l«8–˜—¨°e–.°Ã¥nž|ž\®™|š/¥T©…§i™˜¤\¥œ™l— ©LÉ ª"´ ß§’¥°³¹˜ ¢®©±¤l»dæšš…´ ²Å ¢¥Tž|šL–¥"´ ¥nª7—¨°Ÿœn´Gœo©˜œn´ ²ZÉ ™l— ™d¥nž¥n¤\–LŸž%šLž\®ÉešLž|¥°³Ÿ™|™l— ©Lª¯ ¢Ÿ¤|–¥G§’™«8–®©ã— ¤ — ™â©L±°³Ÿ™|™lœož\²q¤\¥C§’Ÿ¤|Ÿžl ·— ©L$¤\–L1«d¥Tžl§ ¸ ¥T¹L©˜§’É œožR— Ÿ™l»  ±— ¤\–LŸž™|¤\a · ·— ©LªM©L¥Tž¢©L¥Tžl ·œT´¨— DZœo¤R— ¥T© — ™Pœošš…´¡— a§» ë\üní‘ß Ÿ©LŸžlœr¤|ÏœÍ™|¹·­+œož\žlœ|² ¸ ²+œ£™l— ©Lª7´ Ì™|¥nž\¤l» ’¹h·­£¤\ž|®›©L¥§’Ÿ™lµ:¤\¥nªnŸ¤\–L®ž«1— ¤|–ͤ\–La— ž(°Ã¥nž|É ž|Ÿ™|š/¥T©…§L— ©Lª(§’¥°³¹˜ ¢®©±¤™|¹ ¸ ™|®¤P´¡— ™|¤\™lµGœož\1¤\–L®© —¡§’Ÿ©a¤R— ½ a§.œo™¢œn§  œn°³Ÿ©a¤  ¢±  ¸ Ÿž|™Ú¥"¬a¤\–L™|¹·­ œož\žlœ|²’»ÎT¥nž…±œn°E–P©L¥§’±µR™|¥nž\¤˜¤\–L¢§’¥°³¹˜ ¢®©±¤´¡— ™|¤ œn°Ÿ°³¥Tžl§— ©ª¯¤\¥™|¥" !šLž\®É9§’Ÿ¤|Ÿžl ·— ©LZ§1¥nžR§’Ÿžl» ë ní ’¥Tž|¤¶œn´¡´·¤\–L™|¹·­+¤\ž|®È©L¥§’Ÿ™Ø¹L™l— ©LªÆ¤\–L ™|¥nž\¤|±§,§’¥°³¹˜ ¢®©±¤ã´¡— ™|¤(œo™áœTŸ²±»¶•:–®©˜µ%¤\–L œn§  œn°³Ÿ©a¤1 ¢±  ¸ Ÿž|™¥"¬L¤\–L¢©L¥§’¯´¡— ™|¤Ú«1— ¤|–›¤\–L ™lœn ÓTŸ²C°Ã¥n©L™|¤R— 
¤|¹¤|Mœâ™l— ©Lª7´ ÚL•‰É°Ÿ´ ¹™|¤|Ÿžl» •:– °Ã¥" ¢šL¹¤lœr¤l— ¥T©·¤R—¨ ¢¥"¬’¤\–L1œ ¸ ¥l¦’8šLž\¥G°Ãa§i¹Lž| — ™ ¸ œr™l—¡°®œn´¡´ ²£§’Ÿ¤|Ÿžl ·— ©L±§ ¸ ²ß¤\–L ™|¥nž\¤¥nš ®žRœo¤R— ¥T©Í— © ™|¤\®š ë ní$ëà£ëRÜá3â~ãáë R%í\í|í µ8œo©˜§q«d¥T¹…´¡§ ¸ á¬Áaœr™l— ¸ ´  «1— ¤|–(¤\–L š/¥R«dŸž¥"¬L¤\¥G§Lœ\²å ™°Ã¥" ¢šL¹¤|Ÿž|™l»¼Ø–…œr¤«d ¬Á¥n¹©…§Å ¢¥Tž|.šLž\¥ ¸ ´ ± ·œo¤R—¡°M— ™P¤\–Lá°Ã¥n™|¤!¥"¬Œ ¢± ¢¥nž|² ¤\¥›™|¤\¥nž|qœn´¡´"¤\–L$™|¹·­Ö¤\ž|®P©L¥§’Ÿ™¯œo©˜§M¤\–Lq°Ã¥nž|ž\®É ™|š/¥T©…§— ©ªà§’¥°³¹˜ ¢®©±¤(´¡— ™|¤\™(œo¤$™|¤\®š ë\üní »,Î:— ªT¹Lž\ ü ™|–L¥R«8™ ¤\–Lq°Ã¥n¹L©±¤!™|¤Rœo¤R— ™|¤R—¡°³™·¬Á¥nž¯§L— ä Ÿž|Ÿ©a¤M´ ®¦’a´ ™ ¥"¬ ¤\–LM™|¹·­á¤\ž|®.ªnŸ©LŸžlœr¤|±§Í¬Áž|¥" Ü¤\–L× ®¹¤|Ÿž|™ß°Ã¥"´ É ´ a°Ã¤l— ¥n©˜µŒ«8–˜—¨°e–— ™Cœn´ ™|¥Í¹L™|±§Ø— ©Ø¥n¹žá´¡œo¤\®ž.Ÿ­Zš/Ÿž|É —¡ ¢®©±¤|™l» é œo™|±§<¥n©,¤\–L ½ ªT¹Lž\aµ¢— ¤ ¸ ±°³¥7 ¢®™(°Ÿ´ ±œož ¤\–…œo¤%™|–L¥Tž|¤¢´ ®©ªn¤\– R ÉEªTžlœn ¢™¢œož\¤\–L$ ¢¥T™|¤ ¢± ¢¥nž|² °Ã¥n©L™|¹˜ ·— ©Lª"» é a°Ÿœo¹™|(¥n¹žá¬Á¥G°Ã¹L™Ö— ™¯ž|Ÿ™|¤|žR—¨°Ã¤|±§<¤\¥ ´ ¥n©ªnŸž R ÉEªTžlœn ¢™lµÚ— ©£¤\–…— ™1š…œrš/Ÿž!«dÖ°Ã¥n©L™l—¡§’Ÿž!¥n©˜´ ² ¤\–L¢™|¹·­·¤\ž|®©L¥§’Ÿ™«1— ¤|–C´ ¥n©ªnŸž¤\–…œo©C¬Á¥n¹ž¤\®žl  ™|aÄ®¹LŸ©…°Ã®™l» å æç ç ç ç ç è é é é é é êë ë ë ë ë ìí í í í í î ï ï ï ï ï ï ð ñò ò ò ò ò óô õ õ õ õ õ ö ÷ø ø ø ø ø ù úû û û û û üý ý ý ý ý ý þ ÿ            ! "# $!&% ' !#  ()# *(+#   # $,-(+" " ' ./#  &! 1012(431#25 26,1587 9:;<=>!?@A B!=&C?D 9!A => E)A ?*E+A ?> = A B,=-E+:@ @ D F/A > ==-G1H I,J+K LM J4N1K2H L2O,L1H8P Q R S R Q T U V W R X Y U UZ [ V \ R R ])^)_a`cbcdegf1h+i1bajcegk ]h bcd l Î:— ªT¹Lž\ ü X  ¹˜  ¸ Ÿž|™8¥"¬Lš/¥7— ©a¤\®ž|™Pœo¤1´ ®¦’a´nm 10  o 5qpK>J<L58KÑB½?1LC@EDFGIH3J<K>=>58LK •:–jL•‰É°Ÿ´ ¹™|¤|Ÿž|™ªnŸ©LŸžlœr¤|±§àœož\qŸ¦œn´ ¹…œr¤\a§Ö¹L™l— ©Lª ¤\–L ¬Á¥"´¡´ ¥l«1— ©Lª ¤^«d¥M ¢±œo™|¹Lž\®™l»ÚÎ:— ž\™|¤8— ™¤\–L þeùRørðõùS r òXùRó9ù9ïy ó@ ô±ùRó@9ù ¤\–…œo¤¯ÄR¹˜œo©±¤l— ½ Ÿ™ ¤\–L·™|¤\ž|®©ªn¤\– ¥"¬$¤\–LÆ°Ã¥"— ©…°Ÿ—¨§i®©˜°³H¥"¬$¤\–LHŸ­Z¤\žlœn°Ã¤|±§;¤\®žl  ™|®É ÄR¹®©˜°³Ÿ™l»+’±°³¥T©…§,— ™.¤\–L þeùRørð'ôy õþeø k òZþ ï±óØõ ð„S  ÿ ö±ø þ¤\–…œo¤â°Ÿœn´¡°³¹˜´¨œr¤|Ÿ™Ú¤\–L$§L— ¦’®ž|ªT®©˜°³¥"¬a¤\–L$§’¥°³É ¹… ¢Ÿ©a¤\™1— ©.¤\–L·°Ÿ´ ¹™|¤|Ÿž ¸ œr™|a§1¥n©M¤\–L·°Ã¥n©a¦’®©±¤l— ¥n©‰œn´ §’¥°³¹˜ ¢®©±¤™l—¡ ·—¨´¡œržl— ¤×²á ¢±œo™|¹Lž\a» ë\ìní •Ÿžl Æ™|aÄ®¹LŸ©…°ÃM°Ã¥"— ©…°Ÿ—¨§i®©˜°³ •:–°Ã¥"— ©…°Ÿ—¨§i®©˜°³<™l°³¥Tž|<¥"¬8¤\®žl Ê™|aÄ®¹LŸ©…°Ã Z [ \ — ™°Ÿœn´¡°³¹˜´¨œr¤|±§ãœo™¤\–L1™|š/±°®— ½ °! 
8¹L¤\¹…œT´:— ©…¬Á¥nžl œo¤R— ¥T© s ë Z [ \ í ª"— ¦’®©…µ ¸ ²<§’ ½ ©˜— ¤R— ¥n©˜µœo™X s ë Z [ \ í ] á3â~ã t ë Z [ \ í t ë Z \ í ``` t ë Zvu í d ë\üní •:–˜œo¤à— ™lµ s ë Z [ \ í — ™ã¤\–LÏ§L— ä Ÿž|Ÿ©…°Ã ¸ Ÿ¤×«%®Ÿ© ë — í ¤\–L¢Ÿ©a¤|ž\¥nš±²á°Ÿœn´¡°³¹˜´¨œr¤|±§ ¸ œr™|a§1¥n©M¤\–L·œo™|™|¹˜ ¢šL¤R— ¥n© ¤\–…œo¤%¤\–Lwmâ¤\®žl ¢™ ë Z \_^```^ Zvu í ¥G°Ÿ°³¹ž|ž|±§— ©…§’Ÿš/Ÿ©LÉ §’Ÿ©a¤R´ ²Tµ‰œo©˜§ ë —¨— í ¤\–L Ÿ©a¤|ž\¥nš±²Å°Ÿœn´¡°³¹˜´¨œr¤|±§ ¸ œr™|a§1¥n© ¤\–LMœn°Ã¤\¹…œn´X¥ ¸ ™|®ž\¦œo¤R— ¥n©˜» Õ ©a¤\¹…— ¤l— ¦T±´ ²Tµ s ë Z [ \ í ¸ ±°³¥7 ¢®™‰ªnž\aœr¤|Ÿž¬Á¥nž´ ¥n©ªnŸž ™|aÄ®¹LŸ©…°Ã®™l» Ù%¥R«d®¦’®žRµ›¤\–L™l°E–a ¢— ™ê§L— ä Ÿž|Ÿ©a¤ ¬Áž|¥" Ï™l—¡ ¢š…´ ²q°Ã¥n¹L©±¤l— ©Lª!¤\–L$´ ®©ªn¤\–¯¥"¬a¤\–L8™|aÄ®¹LŸ©…°Ã ¸ ±°®œr¹L™|£— ¤¯šL¹¤|™ã ¢¥Tž|›«d±— ªT–a¤M¥n©´ ¥l«À¬Áž|aÄ®¹LŸ©…°Ã² ¤\®žl ¢™l» Õ ©M¥n¹ždšLž\a´¡—¡ ·— ©˜œož|²PŸ­Zš/Ÿžl—¡ ¢®©±¤|™lµT«d$°Ã¥" ¢É š…œrž|±§›¤^«d¥Ì§L— ä Ÿž|Ÿ©a¤1žlœr©/— ©ªn™1¥"¬G¤\–L×L•‰É°Ÿ´ ¹™|¤|Ÿž|™ ¹L™l— ©Lª s ë Z [ \ í œo©˜§M¤\–L1™|aÄ®¹LŸ©…°Ãq´ ®©ªn¤\–…µ:œo©˜§M¥ ¸ É ™|®ž\¦T±§.¤\–Lq¬Á¥nžl ¢Ÿžâ–…œr™·œ ¸ Ÿ¤|¤\Ÿž°Ã¥nž|ž\a´¡œo¤R— ¥T©á«1— ¤|– ¤\–L ¤\®žl è§L— ™|¤\žl— ¸ ¹L¤R— ¥n©(™l—¡ ·—¨´¡œržl— ¤×²’» •:– ¥G°Ÿ°³¹ž|ž|Ÿ©…°Ã1šLž\¥ ¸ œ ¸ —¡´¡— ¤^² t ë Z [ \ í — ©wÐ%ÄZ» ë\üní — ™˜™l—¡ ¢š…´ ²·§’Ÿ¤|Ÿžl ·— ©L±§ ¸ ²yxqz|{~} ë Z [ \ í µR¤\–L¬Áž|aÄ®¹LŸ©…°Ã² ¥"¬ Z [ \ — © N µÚœo©˜§ß¤\–LM¥l¦’®žRœn´¡´˜¤\¥n¤Rœn´%¬Áž|aÄ®¹LŸ©…°Ã²€!µ ª"— ¦’®©Åœo™w ] ƒ‚„/… xqz|{~} ë Zba í µ˜œo™1¬Á¥"´¡´ ¥l«8™X t ë ZI[ \ í ] xqz|{~} ë Z [ \ í  d ë ní •:–ß¥G°Ÿ°³¹ž|ž|Ÿ©…°ÃMšLž\¥ ¸ œ ¸ —¡´¡— ¤^²<¥"¬ Zba — ™(œn´ ™|¥Í§’Ÿ¤|Ÿž|É  ·— ©L±§ ¸ ²Ð%ÄZ» ë ní µ…°Ã¥n©L™l—¡§’Ÿžl— ©Lª¯¤\–…œo¤ Zba — ™œâ¹L©˜— ¤ ´ ®©ªn¤\–à™|aÄ®¹LŸ©…°Ãß¥"¬¤\®žl ¢™l» é a°Ÿœo¹™|›šLž\¥ ¸ œ ¸ —¡´¡— ¤^² Ÿ™9¤R—¡ ·œo¤R— ¥n©Å¥"¬…¹L©¥ ¸ ™|®ž|¦’a§q¤\®žl ¢™ß— ™P©L¥T¤ßœo©— ™|™|¹L –LŸž|±µ«d<–…œ\¦TÅ©L¥T¤Åœošš…´¡— a§Ïœo©±²Æ§L— ™l°Ã¥n¹L©±¤l— ©Lªà¥nž ™l ¢¥Z¥n¤\–…— ©Lª< ¢Ÿ¤|–¥G§’™¯¬Á¥nžâ™l—¡ ¢š…´¡—¡°®— ¤×²’µ‰¹L©˜´¨— ’á ·œr©a² ´¡œo©ªn¹…œrªnŸÉ ¥G§’±´¡— ©7ªP™|¤\¹…§L— Ÿ™l» •:–Ö°Ã¥"— ©…°Ÿ—¨§i®©˜°³·™l°³¥Tž|á— ™.°Ÿœn´¡°³¹˜´¨œr¤|±§£¬Á¥nž!Ÿ¦T®ž\² ¤\®žl ™|aÄ®¹LŸ©…°ÃÅ— ©Í¤\–LL•‰É°Ÿ´ ¹™|¤|Ÿžlµœo©˜§Ö¤\–L®©˜µ‰±— É ¤\–L®ž¤\–Lß ·œr­—¡ 8¹… Ý¥nž¤\–L1¤\¥n¤Rœn´7¦œT´ ¹Lß— ™¹L™|±§(œo™ œo©M¥l¦’®žRœn´¡´±Ÿ¦œn´ ¹…œr¤R— ¥T©…µ §’Ÿš/Ÿ©…§— ©ª!¥n©·¤\–LšL¹ž|š/¥T™| ¥"¬:¤\–LÅœo©˜œn´ ²Z™l— ™l» Õ ©Ì¤\–…— ™·š…œrš/Ÿžlµ:«dÅ°Ã¥n©L™l— ™|¤\®©a¤R´ ² ¹L™|â¤\–LM ·œr­—¡ 8¹… Æ¦œT´ ¹L®™l» ë\üní •Ÿžl  §L— ™|¤\žl— ¸ ¹L¤R— ¥n©›™l—¡ ·—¨´¡œržl— ¤×² •:–q§’¥°³¹˜ ¢®©±¤!™l—¡ ·—¨´¡œržl— ¤×²Ö¥"¬G¤\–L L•‰É°Ÿ´ ¹™|¤|Ÿž·— ™ §’ ½ ©a§H¹L™l— ©LªÏ¤\–L;°Ã¥n™l— ©L,™l—¡ ·—¨´¡œržl— ¤×²Ü°Ã¥" · ¢¥T©…´ ² ¹L™|±§£— ©ê— ©…¬Á¥nžl œo¤R— ¥T©£ž|Ÿ¤|žR— Ÿ¦œT´G™|¤\¹…§L— Ÿ™l»CÎT¥nž ±œn°E– §’¥°³¹˜ ¢®©±¤‡†.— ©(¤\–LM°Ÿ´ ¹™|¤|Ÿžlµ˜— ©…§’Ÿ­.¤\®žl ¢™1œož\ ½ ž\™|¤ Ÿ­Z¤\žlœn°Ã¤|±§ ¸ ²Hœošš…´ ²— ©Lªá™|¤Rœo©…§œožR§à ¢Ÿ¤|–¥G§’™lµ…™|¹…°e– œo™< ¢¥Tž|šL–¥"´ ¥nª7—¨°Ÿœn´œo©˜œn´ ²Z™l— ™lµ™|¤\a · ·— ©LªHœo©˜§Å™|¤\¥nš «d¥Tžl§›ž|± ¢¥l¦Xœn´¡»ß•:–®©˜µ¤\–L·¤\®žl Â¦T±°³¤\¥nž‰ˆ †(— ™!ªnŸ©LÉ Ÿžlœo¤\a§Ö¬Á¥nž±œn°E–à§’¥°³¹˜ ¢®©±¤¹L™l— ©Lª þ {Sv ôU{ «d±— ªT–a¤R— ©Lª •˜œ ¸ ´  ì X M œo¤Rœâ™|¥n¹žl°Ã ¹L™|±§(— ©›¤\–L Ÿ­Zš/Ÿžl—¡ ¢®©±¤|™ ŠŒ‹Ž‹Œ’‘”“•4– —*–4“˜™/š ›œ‹~ž Ÿ Š¡/•4 Ÿ ¢c£ •4¤8‘~–4“ ¥a¦ –4•4‘~˜8’ ˜™§ – ¨ª©«­¬ ®°¯ ± –4“²–4³c´ Ÿ¶µ·¸“š~ ± –4“²–4³c´ ¹º–4‘~–4“ »’¼’¼’½’´8¾’´8¿ÀÂÁ€»’¼’¼’Ã’´8¾’´8»’¼ ¥ ~ž »À-¼’Ä Å-Æ’Æ »’Ä8ƒƒ¾’Ä8Ã’Æ’Ç ¿’½ŒÅȏ–4•°´ ƒÆÀ ¿Å-´8Ç ¢~‹yÉ1’2–vÊȖ4“•4‘”“Ë »’¼’¼’»’´8»’´8»ÌÁ€»’¼’¼’»’´8»’¿’´8ƒ» ¥ ~ž Ò¿’Ä8¼Å-à ƒ¿À-Ä Å-ǒà ǒ¼’Dz–4•°´ ƒ½’» ÆÀ-´8» Ê͋1˜™~˜8•1Î~˜ 
»’¼’¼’¾’´8»’´8»ÌÁ€»’¼’¼’¾’´8»’¿’´8ƒ» É ±  »À-Ä8¾’Ç’Ç »’»’»’Ä ŒÀ-½ Ò¾²–4•°´ ƒ¼Šƒƒ´8½ ϶˜8Ð/и–4˜ »’¼’¼’½’´8»’´8»ÌÁ€»’¼’¼’½’´8»’¿’´8ƒ» É ±  ¼’»’» »’¼’Ä8ÃÅ-Ç »Àȏ–4•°´ ¿’ÃÅ ¿’Ã’´8Ç °•4ÑÓÒ4—a¢~É »’¼’¾’¾’´8ǒ´8»’¼ÔÁ€»’¼’¼’Ã’´8Ò´8¿’Ç É ±  ¿’½’Ä8Ò¼’½ ¿’¿’½’Ä8½ŒÀ ¼’¼²–4•°´ Å-¿À ƒ¿’´ Å °•4Ñ1ɸ¢~Õ ¥ »’¼’¼’»’´8¼’´8»’ÃÔÁ€»’¼’¼’½’´8¼’´8»’Ã É ±  ¿’»’Ä8¿’Ç’¼ »’¾À-Ä8ǒƒ¾ ÃÀȏ–4•°´ Å-ÆŠƒǒ´8Æ ™l°E–a ¢±» Õ ©Ïœn§§L— ¤R— ¥T©…µ%¤\–L<°Ã®©a¤\žlœT´:¦T±°³¤\¥nž¯¥"¬:¤\–L °Ÿ´ ¹™|¤|Ÿžlµ§’Ÿ©L¥T¤|±§Ìœo™ ˆ Ö µ8— ™(°Ÿœn´¡°³¹˜´¨œr¤|±§Íœo™qœo©Hœ|¦’®ž\É œoªTá¥"¬ œn´¡´d¤\–Lá¤\®žl ¾¦T±°³¤\¥nž\™l»  Ÿ­Z¤Rµ¤\–L,°Ã¥n™l— ©L ™l—¡ ·—¨´¡œržl— ¤l— ®™ ¸ Ÿ¤×«%®Ÿ©,¤\–L<°Ã®©a¤\žlœT´Œœo©˜§á±œn°E–à¤\®žl  ¦T±°³¤\¥nžPœož\1¥ ¸ ¤lœT— ©La§»âÎ:— ©˜œn´¡´ ²nµ7¤\–LMœ|¦’®žRœoªTa§Mš…œT— ž|É «1— ™|™l—¡ ·—¨´¡œržl— ¤×²Ö¦œT´ ¹L®™ ¸ ±°³¥7 ¢®™¤\–L·¥l¦’®žRœn´¡´ Ÿ¦œn´ É ¹…œr¤l— ¥T©(¥"¬L¤\–L ¤\®žl è§L— ™|¤\žl— ¸ ¹L¤R— ¥n©›™l—¡ ·—¨´¡œržl— ¤×²ß¥"¬L¤\–L L•‰É°Ÿ´ ¹™|¤|ŸžX fƒ×2Ø ë z í ] g „°Ù ˆ † ` ˆ Ö Ú ˆ † ÚÚ ˆ Ö Ú d ë!Û"í  ¥T¤|·¤\–…œo¤ ÝÜ fƒ×2Ø ë z íÍÜèì µdœo©˜§ß¤\–LM¦œT´ ¹L ¸ ŸÉ °Ã¥" ¢Ÿ™°Ÿ´ ¥T™|®žd¤\¥!¥n©1¬Á¥nž%¤\–L„L•‰É°Ÿ´ ¹™|¤|Ÿž«8–®ž|8¤\–L §’¥°³¹˜ ¢®©±¤|™Œœož\d™|¤Rœo¤R— ™|¤R—¡°®œn´¡´ ²™l—¡ ·—¨´¡œrž…¤\¥±œn°E–!¥n¤\–LŸžl» Þ ß #áàŒ'L“nDâA' Ò ‘i  032 EãpL Ø@58=-G‰?@L”ä"?@L”p •:–™l— ­·¤\®­a¤â°Ã¥"´¡´ ±°³¤R— ¥n©™¹L™|±§.— ©.¥n¹ždŸ­Zš/Ÿžl—¡ ¢®©±¤|™ œož\Ì™|–L¥R«8© — ©•˜œ ¸ ´  ì »À¼CÛ¹L™|±§Ø¤^«d¥Ï™|®¤\™C¥"¬ ЩLª7´¨— ™|–;©LŸ«8™|š…œrš/ŸžÌœož\¤l—¡°Ÿ´ Ÿ™<Ÿ­Z¤\žlœn°Ã¤|±§;¬Áž|¥" º±— É ¤\–L®ž ®¹¤|Ÿž|™ ë Ÿ¹L¤\®ž\™lµ ü(Ž(Tí ¥nž‘Lœr©æåŸ¥n™|è篝Ÿž|É °Ã¹Lž|² ë |å&ç í$ë Ùœožl œo©$Ûéçãœožµ ì(ݎÝ( Tí µ…¤^«d¥<™|®¤\™ ¥"¬ê屜oš˜œo©®™|(©LŸ«8™|š…œrš/ŸžÖœož\¤l—¡°Ÿ´ Ÿ™MŸ­Z¤\žlœn°Ã¤|±§,¬Áž|¥"  ±— ¤\–LŸžëçãœn— ©˜—¨°e–…— ë çãœn— ©…—¡°E–˜—¨µ ü(ŽnìTí ¥nž %Õ¸ìíì Ð Õ ë  — ¹T±—¡µ ü(ŽnìTí µ œo©˜§Å¤^«d¥,™|®¤\™.¥"¬í屜oš˜œo©®™|àœn°ŸœoÉ §’± ·—¡°Íš…œrš/Ÿž|™lå.œ ¸ ™|¤|žRœn°Ã¤|™ ¸ ¥T¤|–Ÿ­Z¤\žlœn°Ã¤|±§+¬Áž|¥"   • Š‰Õ É ìMë  • Š:Õ µ ü(ŽnìTí µL¥n©1šLž\®™|Ÿ©a¤\a§áœo¤8¤\–L Õ ©…¬Á¥nžl ·œr¤R— ¥T© û ž|¥G°ÃŸ™9™l— ©ª$’¥°®— Ÿ¤×²— ©‰å±œoš˜œo© ë ©a¤R°³É Õ û ºå í µ¢œo©˜§Å¤\–L<¥n¤\–LŸžCœo¤ß¤\–Læå±œoš˜œo© ’¥°®— Ÿ¤×²¥"¬ Š — ¦—¡´:ЩLª7— ©®Ÿž|™ ë ©a¤R°³ÉŽåy Š Ð í »ÎT¥nž¤\–L$©LŸ«8™|š…œrš/Ÿž œož\¤l—¡°Ÿ´ Ÿ™lµ7«d1™|a´ ±°³¤\a§¯¥n©˜´ ²Åœož\¤l—¡°Ÿ´ Ÿ™«1— ¤|–ã¤\–La— žPœo¹É ¤\–L¥nž\™%™|š/±°®— ½ ±§Lµ’±— ¤\–LŸž— ©¯¤\–L$ä ¸ ²/´¨— ©L±å ë — ©·¤\–L$°Ÿœo™| ¥"¬Œ ®¹¤|Ÿž|™qœo©˜§ |å&ç í ¥nž$±  ¸ ±§L§’±§Í— ©à¤\–Lß¤\®­a¤ — ©ãœš…œrž|¤R—¨°Ã¹…´¡œož¬Á¥nžl  ë — ©·¤\–L1°Ÿœo™|¥"¬ƒçãœn— ©˜—¨°e–…—Lœo©˜§ %Õ¸ìíì Ð Õ í »î篥nž\šL–¥"´ ¥nª7—¨°Ÿœn´Úœo©˜œn´ ²Zǟ®ž Š –˜œ(’Ÿ©Å«¢œr™ ¹L™|±§(¬Á¥nž·å±œoš˜œo©®™|â¤\®­a¤ ë çãœo¤\™|¹… ¢¥T¤|¥7µ ü(ŽnìTí » ÎT¥nžŒ±œn°E–£°Ã¥"´¡´ ±°³¤R— ¥n©˜µ|L•‰É°Ÿ´ ¹™|¤|Ÿž|™«1— ¤|–›¤\®žl ¶™|®É ÄR¹®©˜°³Ÿ™´ ¥n©ªnŸž¤\–…œo©ã¬Á¥n¹ž«dŸž|Ÿ©a¹… ¢Ÿžlœr¤|±§¢¹L™l— ©Lª ¤\–L ¢Ÿ¤|–¥G§ §’Ÿ™l°³žR— ¸ a§ — ©„ 03 »Ú•:–©a¹˜  ¸ Ÿž|™‰¥"¬®ž|ŸÉ ™|¹…´ ¤R— ©ª·°Ÿ´ ¹™|¤|Ÿž|™œo©˜§¤\–L °Ã¥nž|ž\®™|š/¥T©…§— ©ª¢Ÿ­Za°Ã¹L¤R— ¥n© ¤R—¨ ¢> ¢±œo™|¹Lž\a§Ï¥n© ü » ÞŽß çãÙ%ÇæïŸ¥n©²ðñ˜— ©a¹L­Ëœož\ œn´ ™|¥.™|–L¥R«8©,— ©à•˜œ ¸ ´  ì » ë  ¥n¤\1¤\–…œo¤¬Á¥nž°Ã¥" ¢š…œržl— É ™|¥n©!šL¹ž|š/¥T™|aµ®¤\–LdŸ­Za°Ã¹L¤R— ¥n©1¤R—¨ ¢§’¥Z®™˜©L¥T¤d— ©…°Ÿ´ ¹˜§’ ¤\–L<¤R—¨ ¢Û¬Á¥nž£ ¢¥Tž|šL–¥"´ ¥nª7—¨°Ÿœn´¢œo©˜œn´ ²Z™l— ™Åœo©˜§Ì«d¥Tžl§ §L—¡°Ã¤l— ¥n©˜œož|²<ªnŸ©LŸžlœr¤l— ¥n©˜» í ÎT¥nž$ž|±¬ Ÿž|Ÿ©…°Ãaµ‰«dß–…œ\¦T œn´ ™|¥C™|®ª7 ¢Ÿ©a¤\a§<¤\–Lq¤Rœož|ªT®¤q°Ã¥"´¡´ ±°³¤R— ¥n©™ã— ©a¤\¥Å™|®©É ¤\®©…°ÃŸ™iµœo©˜§,°Ÿœn´¡°³¹˜´¨œr¤|±§C¤\–LÅœ|¦’®žRœoªTÍ°Ã¥"— ©…°Ÿ—¨§i®©˜°³ ™l°³¥Tž|Œ¤\¥nªnŸ¤\–L®ž«1— ¤|–P¤\–L 
œ|¦’®žRœoªT©a¹˜  ¸ Ÿž¥"¬’¤\®žl ¢™ š/Ÿž1œ™|®©±¤|Ÿ©…°Ãa»  03 òôónä 58L;õj5¹9|= 2aö o 5qpK>J<L;39<Øj=°÷:5 ;39<K>=°p9Y=>HørH35ùóÕ; Ôy?19<;!úy5 m =>5ùó|=>K Õ ©1¥n¹ž ½ ž\™|¤‰Ÿ­Zš/Ÿžl—¡ ¢®©±¤lµl«ddŸ­œn ·— ©Ú¤\–L¢§L— ™|¤\žl— ¸ ¹LÉ ¤R— ¥T©<¥"¬G¤\–Lq°Ã¥"— ©…°Ÿ—¨§i®©˜°³·™l°³¥Tž|·¥"¬/¤\®žl è™|aÄ®¹LŸ©…°Ã®™ ¤\–…œo¤‰«dŸž|Úž|ŸšLž|¥§’¹˜°³±§1œn°Ÿ°®—¡§i®©a¤RœT´¨´ ² «1— ¤|–¥n¹¤:ž|±¬ Ÿž|É žl— ©Lª¤\¥¯¤\–L ¥nžR— ª7— ©…œn´§’¥°³¹˜ ¢®©±¤l» ÎT¥nž·¤\–…— ™MšL¹ž|š/¥T™|aµ«d ½ ž\™|¤ã ·œT§’£ ·— ­Z¤\¹Lž|Ÿ™M¥"¬ ¤\–LÏ¤^«d¥Æ™|¥n¹žl°Ã®™X ë œ í ®¹¤|Ÿž|™œo©˜§ |å&çãµ ë ¸ í çãœn— ©˜—¨°e–…—…œo©˜§  — 8’a—¡µ‰œo©˜§ ë ° í ©a¤R°³É Õ û ºå.œo©˜§·©a¤R°³É åy Š Ðd»  Ÿ­Z¤RµT«d œošš…´¡— a§¤\–LPL•‰É°Ÿ´ ¹™|¤|Ÿžl— ©Lª¢¤\¥¤\–L ªnŸ©LŸžlœr¤|±§! ·— ­Z¤\¹Lž|Ÿ™l»Ú•:–®©˜µ8L•‰É°Ÿ´ ¹™|¤|Ÿž|™¤\–…œo¤Ú°Ã¥n©LÉ ¤Rœn— ©§’¥°³¹˜ ¢®©±¤|™á¬Áž|¥"  ¸ ¥T¤|–Ȱå"´¡´ ±°³¤R— ¥n©™¯«dŸž|(™|®É ´ a°Ã¤|±§Ûœo©˜§á¤\®žl ¿™|aÄ®¹LŸ©…°Ã®™¯«1— ¤|–à¤\–LÌ ·œr­—¡ 8¹…  °Ã¥"— ©…°Ÿ—¨§i®©˜°³™l°³¥Tž|¢«dŸž|Ÿ­œn ·— ©a§L»  ¥T¤|¤\–…œo¤Ú¤\–L š…œT— ž|™«dŸž|¯œož\žlœo©ªn±§P™|¥$¤\–…œo¤ ¸ ¥T¤|–C°Ã¥"´¡´ ±°³¤R— ¥n©™ ¸ ŸÉ ´ ¥n©ª,¤\¥Û¤\–L<™lœn £¤^²Zš/ ë —¨» a»¡µ±— ¤\–LŸž(©LŸ«8™|š…œrš/Ÿž ™|¤\¥nžl— ®™¥nž$œn°Ÿœn§ia ·—¡°š…œrš/Ÿž|™låœ ¸ ™|¤|žRœn°Ã¤|™ í ¸ ¹¤¥nžR— ªTÉ — ©…œr¤l— ©Lª,¬Áž|¥" À§L— ä Ÿž|Ÿ©a¤$šL¹ ¸ ´¡—¡°®œr¤l— ¥n©Ì™|¥n¹žl°Ã®™ ë —¨» a»¡µ §L— ä Ÿž|Ÿ©a¤ ©LŸ«8™|š…œrš/ŸžM°Ã¥" ¢š…œr©…— Ÿ™!¥nžMœn°Ÿœn§ia ·—¡°â™|¥nÉ °Ÿ— Ÿ¤l— Ÿ™ í »Ú•:–Œ¤\¥nš…—¡°Ÿœn´®¥l¦’®žR´¨œrš«¢œr™œn´ ™|¥ITŸšL¤…™l ·œT´¨´¡µ ±— ¤\–LŸž ¸ ²Å°e–L¥X¥T™l— ©ªã°Ã¥"´¡´ ±°³¤R— ¥n©™¥"¬:§L— ä Ÿž|Ÿ©a¤²T±œož\™lµ — ©à¤\–L<°Ÿœo™|›¥"¬©LŸ«8™|š…œrš/Ÿžqœož\¤l—¡°Ÿ´ Ÿ™lµ¥nž ¸ ²H¬Á¥G°Ã¹L™|É — ©Lªá¥n©Ï§L— ä Ÿž|Ÿ©a¤Öœn°Ÿœn§ia ·—¡° ½ ±´¡§’™lµ8— ©à¤\–L<°Ÿœo™|›¥"¬ š…œrš/Ÿž|™låœ ¸ ™|¤|žRœn°Ã¤|™l» Î:— ªT¹Lž\ ™|–L¥R«8™,¤\–LÏ©L¥Tžl ·œT´¨— ǟa§;–…— ™|¤\¥nªTžlœn ¢™ ë —¨» a»¡µ± ¢š…— žl—¡°®œT´˜š…»¡§L»¡¬ » í ¥"¬:¤\–L<°Ã¥"— ©…°Ÿ—¨§i®©˜°³ß™l°³¥Tž| ¥"¬d¤\–Lq¤\–Lž|ŸÍ§L— ä Ÿž|Ÿ©a¤á ·— ­Z±§<š…œT— ž|™l»•:–Ì ·œr­— É  8¹… Ë™l°³¥Tž|<— ™™|–L¥R«8©— ©È•˜œ ¸ ´  ü µ:¤\¥nªnŸ¤\–L®ž«1— ¤|– ¤\–La— ž›´ ®©ªn¤\–ê— ©Í¤\–Lßš…œrž|Ÿ©a¤\–LŸ™i— ™l»qæ´ ™|¥á™|–L¥R«8©Ï— © ¤\–L·¤Rœ ¸ ´ ã— ™!¤\–Lüû&ýÿþ þe÷±øÃùRõ÷±ïnÿ ôRö±ÿ òZù µG™|¹…°e–£¤\–…œo¤ Ý ¥"¬d¤\–LqŸ­Z¤\žlœn°Ã¤|±§rL•‰É°Ÿ´ ¹™|¤|Ÿž|™¯–…œ\¦TÖ™l ·œT´¨´ ®ž ¦œT´ ¹L®™¤\–…œo©(¤\–L ¤\–Lž|Ÿ™|–L¥7´¨§1¦œT´ ¹La» •:–®™|ž|Ÿ™|¹…´ ¤|™™|–L¥R«ê¤\–…œo¤Ú¤\–L·°Ã¥"— ©…°Ÿ—¨§i®©˜°³¢™l°³¥Tž| — ™Pž|±œo™|¥T©…œ ¸ ´ ²°Ã¥n©L™l— ™|¤\®©a¤›¬Á¥nž›œn´¡´G¤\–Lž|Ÿ¯š…œT— ž|™lç ¸ ŸÉ ¤^«d®Ÿ© ì(Ž¿ü(Ž œo¤$¤\–L< ·œr­—¡ 8¹… ·µœo©˜§ ¸ ±´ ¥l« ( ¬Á¥nžÚ¤\–L Ý ¤\–Lž|Ÿ™|–L¥7´¨§1¦œT´ ¹La» ß Ÿ©LŸžlœT´¨´ ²’µ7¤\®žl  ™|aÄ®¹LŸ©…°Ã®™ ¸ Ÿ²T¥T©…§¤\–…— ™¦œT´ ¹L ¸ ±°³¥7 ¢M°Ÿœo©…§—¨§œo¤\®™ ¬Á¥nž!¤\–Lá— ©L™|¤Rœo©a¤.´ ®­/—¨°Ã¥n©˜µÚ— ©…§L—¡°Ÿœo¤R— ©ª(™|¥" ¯¤\¥nš…—¡°Ÿœn´ ž|±´¨œr¤\a§’©®™|™ ¸ Ÿ¤×«%®Ÿ©¤\–LÌ§’¥°³¹˜ ¢®©±¤|™l»ÈÙ%¥R«d®¦’®žRµ — ¤(™|–L¥T¹…´¡§ ¸ <©L¥T¤|±§à¤\–…œo¤(¤\–LÅ¤\–Lž|Ÿ™|–L¥7´¨§Ï°Ÿœo©L©¥n¤ ¸ q¹L™|±§Ø¬Á¥nžã§L— ™l°Ãžl—¡ ·— ©˜œo¤R— ©ªÅž|±´ Ÿ¦œr©a¤áœo©˜§— ž|ž|±´ ŸÉ ¦œr©a¤á§’¥°³¹˜ ¢®©±¤|™ ¸ ±°®œr¹L™|Í§’¥°³¹˜ ¢®©±¤|™M«1— ¤|–¥n¹¤áœ ™|aÄ®¹LŸ©…°Ã ·œr¤l°e–›°Ÿœo© ™|¤R—¨´¡´ ¸ :™l—¡ ·—¨´¡œrž…¤\¥±œn°E–!¥n¤\–LŸžl» •˜œ ¸ ´  ü X L•‰É°Ÿ´ ¹™|¤|Ÿž|™êœo™|™|¥°®—¡œr¤|±§Ï«1— ¤|–Ëœož\¤l—¡°Ÿ´ Ÿ™ ¬Áž|¥" Ü§L— ä Ÿž|Ÿ©a¤1°Ã¥nž|š ¥nžlœ ŠŒ‹Ž‹È¢c’‘~“2•)– £  ± •4’“– ¼’Ç èÎ ´ ~‹¤™‘”– ¹º–4‘~–4“ ¢~É/Ê »’½’¿’´8½ © ¼ ¯ ƒ¾’´8à © ½ ¯ Ê͋1˜™~˜8•1Î~˜ Ô϶˜8Ð/и–4˜ »’»’½’´8¾ © »’¿ ¯ Å-Ò´ Å © à ¯ °•4ÑÓÒ4—a¢~É ԝ°•4Ñ1ɸ¢~Õ ¥ »’Æ’Æ’´8¾ © »’¿ ¯ Å-ǒ´8Ç © ¾ ¯ 0  0  .01 0  .02 0  .03 0  .04 0  .05 0  .06 0  .07 0  5  0 100 
150 200     ! "$# % $# &%  '() *+ , . / 01 2 3 (4 ) + *5 ( 6 ' 3 7   98;:=<?> >!@# # A# 8B C D%DE=C  $F C G=:=<?8H $F <I:KJE Î:— ªT¹Lž\ XM•Ÿžl  ™|aÄ®¹LŸ©…°Ãá°Ã¥"— ©…°Ÿ—¨§i®©˜°³·™l°³¥Tž| ¸ ŸÉ ¤^«d®Ÿ©à— ž|ž|±´ Ÿ¦œr©a¤!§’¥°³¹˜ ¢®©±¤|™ Õ ¤— ™œn´ ™|¥.—¡ ¢š/¥Tž|¤lœr©a¤:¤\¥!©L¥T¤|¤\–…œo¤¤\–L8Ÿ­Z¤\žlœn°Ã¤|±§ ™|aÄ®¹LŸ©…°Ã®™. ¢¥T™|¤l´ ²<ž|ŸšLž|Ÿ™|®©±¤1™|a œo©a¤R—¡°®œT´¨´ ²Ø ¢±œo©LÉ — ©Lª7¬ ¹˜´lš…— ±°³Ú¥"¬L— ©…¬Á¥nžl œo¤R— ¥T©…»d•:–Œ¤\¥nš$¤\–Lž|ŸÚŸ­œn ¢É š…´ Ÿ™ ¸ ²r ®¹¤|Ÿž|™Mœo©˜§×|å&ç4š…œT— ž|™â«dŸž| ë — íML ì œo¤\aµ ñ˜œož\ž|²Tµ ç㗠©…§’²Tµ  —¡°E–¥"´¡œo™lµ ˆ §i®¤\¤|±µ û ®¤\®žRµ6 ¥n™|±µ LœT ·µ!•Ÿž|®™lœTµON ë ™l°³¥Tž| ìPTü » P 篜ã–a¹ž|žl—¡°Ÿœo©Lá©…œT ¢ ´¡— ™|¤ í µ ë —¨— íQL ™lœT—¡§ M œ|¦/—¨§ 埥n©®™lµ°e–…— a¬˜±°³¥n©¥" ·— ™|¤Mœo¤ æ¹ ¸ ž|Ÿ² ß »ãñ˜œo©™|¤|¥T© Û Š ¥7»ON ë ™l°³¥Tž| ìnü’Û » ÞTí µœo©˜§ ë —¨—¡— íQL ¤\–L ß ¥7´¡œo©Ù%±— ªT–a¤\™lµ…«8–˜—¨°e– Õ ™|žlœo±´%°ŸœošL¤\¹Lž\a§ ¬Áž|¥"  ’²ažl—¡œÅ— ©Ø¤\–L ì(ÝPR ç㗡§L§L´ SN ë ™l°³¥Tž| ìnìŽ » PTí » ¼C!Ÿ­Zš/±°³¤¤\–…œo¤¤\–L®™| ¤\®žl >™|aÄ®¹LŸ©…°Ã®™!œož\1¹L™|±¬Á¹…´ — ©>°ŸœošL¤\¹LžR— ©ªÍ¤\–LÖ¤\¥nš…—¡°Ÿœn´ž|±´¨œr¤R— ¥T©L™ ¸ Ÿ¤×«%®Ÿ©H¤\–L §’¥°³¹˜ ¢®©±¤|™M— ©C¤\–L$™lœn ã°Ÿ´ ¹™|¤|Ÿžlµ ¸ ¹¤ ¤\–…— ™Mœo™|š a°Ã¤ ¥"¬L¤\–L‘L•‰É°Ÿ´ ¹™|¤|Ÿž1— ™P´ a¬Á¤!¬Á¥nžP¬Á¹L¤\¹Lž|â™|¤\¹…§’²n»  0  òôónä 58L;õj5¹9|= aö o 5qpK>J<L;39<Øj=°÷:5 pJ<=/÷<?1L 5 m =>5ùó|=>K Õ ©º¥n¹ž;™|a°Ã¥n©˜§ Ÿ­Zš/Ÿžl—¡ ¢®©±¤lµ,«dÜªnž\¥n¹š/a§¤\–L Ÿ­Z¤\žlœn°Ã¤|±§ L•‰É°Ÿ´ ¹™|¤|Ÿž|™— ©a¤\¥Ü¤\–LÜ¬Á¥"´¡´ ¥l«1— ©LªÜ¤^«d¥ ªnž\¥n¹šL™X ë — íMòXó r òZù s:u Svlÿ òXõþeùRørõ œož\$°Ã¥" ¢š/¥T™|a§¢¥"¬/œož\¤l—¡°Ÿ´ Ÿ™ ¸ ²·¤\–L¢™lœn œo¹¤|–¥nž ë —¨» a»¡µ œo¤¢´ aœr™|¤Ú¥n©¢¥"¬a¤\–L œo¹¤|–¥nž\™$— ™!°Ã¥" · ¢¥T©Ì¬Á¥nž!œn´¡´X¤\–LMœož\¤l—¡°Ÿ´ Ÿ™ í µ ë —¨— íßð„ Rù9ô s:u Svlÿ òXõþeùRørõ œož\q°Ã¥" ¢š/¥T™|a§·¥"¬œož\¤l—¡°Ÿ´ Ÿ™ ¸ ²§L— ä Ÿž|Ÿ©a¤áœo¹¤|–¥nž\™ ë —¨» a»¡µÚ©L¥T©L(¥"¬d¤\–LÌœo¹É ¤\–L¥nž\™P— ™P°Ã¥" · ¢¥T©Å¬Á¥nž!œn´¡´X¤\–LMœož\¤l—¡°Ÿ´ Ÿ™ í » Î:— ªT¹Lž\ Û — ™Ú¤\–Lž|Ÿ™|¹…´ ¤ ¬Á¥nž ®¹¤|Ÿž|™lµÿçãœn— ©˜—¨°e–…—¡µ/œo©˜§ ©a¤R°³É Õ û ºå±»%•:–á´ a¬Á¤.œo©˜§ß¤\–LMžl— ªn–±¤.°Ã¥"´ ¹… ¢©™1™|–L¥R« ¤\–Lqž|±´¨œr¤R— ¥T©L™|–…— š ¸ Ÿ¤×«%®Ÿ©,¤\–Lq¤\®žl Ê§L— ™|¤\žl— ¸ ¹L¤R— ¥n© ™l—¡ ·—¨´¡œržl— ¤×²qœo©˜§¢¤\–L8¤\®žl Ø™|aÄ®¹LŸ©…°Ã1°Ã¥"— ©…°Ÿ—¨§i®©˜°³1¬Á¥nž ¤\–Lá ·— ­Z±§Åœo©˜§q¹L©˜—¨Ä®¹LÖL•‰É°Ÿ´ ¹™|¤|Ÿž|™!ž|Ÿ™|š/±°³¤R— ¦’a´ ²’µ «8–®ž|<±œn°E–êš/¥7— ©a¤Í°Ã¥nž|ž\®™|š/¥T©…§i™›¤\¥¶œà§L— ™|¤R— ©˜°³¤R— ¦T °Ÿ´ ¹™|¤|Ÿžl»(•:–á ·—¡§L§´ Ö°Ã¥"´ ¹… ¢©<™|–L¥R«8™1¤\–L·žlœr¤l— ¥(¥"¬ ¤\–L8¤^«d¥1¤^²Zš/Ÿ™¥"¬a¤\–LDL•‰É°Ÿ´ ¹™|¤|Ÿž|™œoª7œn— ©L™|¤œ!§L— ä Ÿž|É Ÿ©a¤°Ã¥"— ©…°Ÿ—¨§i®©˜°³Œ™l°³¥Tž|±»ŒÎT¥nžž|±¬ Ÿž|Ÿ©…°Ãaµi¤\–L1œ|¦’®žRœoªT ™l°³¥Tž|¥"¬/œ™l— ©Lª7´ ™|®©±¤|Ÿ©…°Ã— ™âœn´ ™|¥P™|–L¥R«8©áœo™¢œP§’¥T¤|É ¤\a§·¦TŸž|¤R—¨°Ÿœn´G´¡— ©aµ˜ ¢¥T¤l— ¦œr¤|±§ ¸ ²<œâ©…œT— ¦T$–LŸ¹LžR— ™|¤R—¡° ¤\–…œo¤âœ«8–¥"´ â™|®©±¤|Ÿ©…°Ã¯— ™¹L©˜´¨— ’a´ ².¤\¥ ¸ ž|Ÿš/±œo¤\a§ ¸ ²Å°e–…œo©˜°³±»·ÎT¥nž¢¤\–L1šL¹ž|š/¥T™|P¥"¬/ž|±œn§œ ¸ —¡´¡— ¤×²lµ˜¥n©˜´ ² œ ¢¥T©a¤\–…å ™™|¤Rœo¤R— ™|¤R—¡°³™lµœå±œo©±¹…œrž|² ì(ݎÝR µ˜— ™™|–L¥R«8©Ì¬Á¥nž ®¹¤|Ÿž|™ ë §’¹L£¤\¥Ø¤\–L´¡œož|ªTÍ™l— Ç® í » Õ ©+œn§§L— ¤R— ¥T©…µ  ·— ­Z±§L•‰É°Ÿ´ ¹™|¤|Ÿž|™â¥"¬§’¥°³¹˜ ¢®©±¤|™ «1— ¤|–ã¤\–L·™lœn  §Lœr¤|«dŸž|.Ÿ­°®´ ¹˜§’a§ ¸ ±°®œr¹L™|«dá¬Á¥n¹©…§ß¤\–…— ™ß°Ÿœo™| °Ã¥n©a¤Rœn— ©L™£ ·œr©a²> ·— ™|™|É—¡§’Ÿ©a¤R— ½ °Ÿœo¤R— ¥T©L™lµ8™|š/±°®— ½ °Ÿœn´¡´ ² «1— ¤|–㜙|®žR— Ÿ™d¥"¬Gœn°Ÿœn§ia ·—¡°š…œrš/Ÿž|™¢°Ã¥nÉ9œo¹L¤\–L¥Tž|±§ ¸ ²  ·œr©a².ž|Ÿ™|aœržl°e–L®ž\™l» é œo™|±§M¥n©Ö¤\–LqÎ:— ªT¹Lž\ Û µ:— ¤ ¸ ±°³¥7 ¢®™°Ÿ´ ±œož¤\–…œo¤ ¤\–L ¤\–Lž|Ÿ¯°Ã¥"´¡´ ±°³¤R— ¥n©™ ¸ Ÿ–…œ\¦T.§L— ä 
Ÿž|Ÿ©a¤R´ ²C— ©›¤\®žl ¢™ ¥"¬·¤\–LÝœo¹¤|–¥nžÌ™|¤\ž|¹…°Ã¤|¹ž|Ï¥"¬·¤\–LÏ¤\®­a¤|™l» ¼>— ¤|– ®¹¤|Ÿž|™lµZ¤\–LM§L— ™|¤R— ©˜°³¤R— ¥n© ¸ Ÿ¤×«%®Ÿ©q¤\–LM ·— ­Z±§qœo©˜§ ¹L©˜—¨Ä®¹LÑL•‰É°Ÿ´ ¹™|¤|Ÿž|™â— ™©L¥T¤d¥ ¸ ¦— ¥n¹™l» Š ¥T©L™l—¡§’Ÿžlœ ¸ ´  ©a¹˜  ¸ Ÿž|™¥"¬:œož\¤l—¡°Ÿ´ Ÿ™¢™|–…œrž|1Ÿ­œn°Ã¤l´ ²›¤\–L1™lœn P™|®©É ¤\®©…°ÃŸ™Ÿ¦T®©ß«8–®©›¤\–La— ž!§’Ÿ™l— ªn©…œr¤|±§$ž|Ÿš/¥Tž|¤|Ÿž|™ œož\ §L— ä Ÿž|Ÿ©a¤R» Š ¥Tž|ž|Ÿ™|š/¥T©…§L— ©Lª7´ ²Rµ¤\–LÅ°e–…œo©ªnŸ™M¥"¬¤\–L ¤^«d¥Mžlœr¤l— ¥(°Ã¹Lž|¦’®™ ¸ ±°³¥7 ¢¢™l´ ¥l«È«1— ¤|– ®¹¤|Ÿž|™l» ˆ ©¤\–LÖ¥n¤\–LŸž.–…œr©…§µ«1— ¤|–‰çãœn— ©˜—¨°e–…—œo©˜§Å©a¤R°³É Õ û ºå±µ¤\–L®ž\¯Ÿ­— ™|¤ ¥n©˜´ ²,œ(¬Á®« L•‰É ·— ­Z±§<°Ÿ´ ¹™|¤|Ÿž|™ 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 T U V WX Y Z [ V Y \] [ Y ^ _ Z Y W Y ` a V Y [ b cedKf gh dKiHj?klOm;npoKq;r sOtKu vw%xOyKz {|{O} 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 ~  € ‚ ‚ ƒ „ … † ‡ ˆ ‰Š € ‹ Œ Š eŽK ‘ ŽK’%“?Ž”O•KŽp•K–K— ”O•K— ˜Ž”O•KŽ ™ “?š ›O–K œ|‘ ™ Ž ™ “?š ›O–K œO˜%— ž ž Ÿ ˜ ™ š ŽK˜Ž¢¡O£?¤ ¥|¥ ¤ 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 ¦ § ¨ ©ª « ¬ ­ ¨ « ®¯ ­ « ° ± ¬ « © « ² ³ ¨ « ­ ´ µe¶K· ¸¹ ¶Kº%»?¼½O¾K¿pÀKÁ; ÃOÄKÅ ÆÇÈOÉKÊ Ë|Ë|Ì © ‹ ¯ ¹º–4‘~–4“Í § ˜ ¦ –4š ¢c£nÑ•4¤8‘~2–4“ ©IÎ ¯ ¹º–4‘~–4“Í •1Î ‹~ž¸–Œ’ôÎ~–ᓎ‹˜8 © • ¯ ¹º–4‘~–4“Íù‘~~˜ϰ‘~–v¢c£nÑ•4¤8‘~2–4“ Ð Ñ Ò ÓÔ Õ Ö× Ò Õ ØÙ × Õ Ú Û Ö Õ Ó Õ Ü Ý Ò Õ × Þ 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 ßeà;á âã à;ä%å=æçOè;éëêKì;í îOï;ð ñò%óOô;õ öO÷|ø 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 ù ú ûü ý ý þ ÿ     û                   !     " " #$    %&' ()$* 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 + , ./ ,01,23,3 45 23 5 6,23 , 78$9 : ; < = > ? @ AB C DE FG H I J G K GL M NO P Q © š ¯ Ê͋1˜™~˜8•1Î~˜¢Íù§ ˜ ¦ –4š ¢c£nÑ•4¤8‘~2–4“ © – ¯ Ê͋1˜™~˜8•1Î~˜¢Íù•1Î ‹~ž¸–Œ’ôÎ~– “Ž‹˜8 © R ¯ Ê͋1˜™~˜8•1Î~˜¢Íù‘~~˜ϰ‘~–v¢c£nÑ•4¤8‘~2–4“ 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 S T U VW T XYTZ[ T[ \] Z[ ] ^TZ[ T _a`ab c d e fg h i jkl mn o p q r s p t pu v wxy z 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 { | }~   €  ‚ ƒ „ … †‡ } ˆ ‰ ‡ Š ‹Œ Ž ‹ ’‘‹“” ‹•” –— “”— ˜‹“” ‹ ™ ‘š ›–Œ œŽ ™ ‹ ™ ‘š ›–Œ œ˜— ž ž Ÿ$˜ ™ š ‹ ˜‹¡ ¢¤£ ¥a¦ £ 0 0.2 0.4 0.6 0.8 1 10 100 1000 10000 § ¨ © ª¬« ¨ ­®¨¯a° ¨°¡± ² ¯°¡² ³¨¯a° ¨ ´aµa¶ · ¸ ¹ º» ¼ ½ ¾¿À ÁÂ Ã Ä Å Æ Ç Ä È ÄÉ Ê ËÌÍ Î © ž ¯ °•4ÑÓÒ4—a¢~É Í°§ ˜ ¦ –4š ¢c£nÑ•4¤8‘~2–4“ © Î ¯ °•4ÑÓÒ4—a¢~É Í¸•1Î ‹~ž¸– ’ôÎ~–ᓎ‹˜8 © ˜ ¯ °•4ÑÓÒ4—a¢~É Í¸‘~~˜ϰ‘~–v¢c£nÑ•4¤8‘~2–4“ Î:— ªT¹Lž\ Û X:橘œn´ ²Z™l— ™¥"¬ L•‰É°Ÿ´ ¹™|¤|Ÿž|™!œo™|™|¥°®—¡œr¤|±§$«1— ¤|–£œož\¤l—¡°Ÿ´ Ÿ™ ¸ ²< 8¹…´ ¤R— š˜´ 1¥nž8¹L©˜—¨Ä®¹L¯œo¹¤|–¥nž\™ «1— ¤|–Í–…— ªT–H°Ã¥"— ©…°Ÿ—¨§i®©˜°³M™l°³¥Tž|Ÿ™l»(æ°Ÿ°³¥Tžl§L— ©Lª7´ ²iµ%¤\–L žlœr¤l— ¥P°Ã¹Lž|¦’®™¬Á¥nž˜¤\–L®™|¢°Ã¥"´¡´ ±°³¤R— ¥n©™ ¸ ±°³¥7 ¢™|¤\®Ÿš/®žR» Õ ©œn§§L— ¤R— ¥T©…µ«1— ¤|–£¤\–L çãœn— ©˜—¨°e–…—°Ã¥nž|š¹L™lµ¤\–L L•‰É ¹L©˜—¨Ä®¹Lã°Ÿ´ ¹™|¤|Ÿž|™›œož\á§L— ¦/—¨§ia§Í— ©a¤\¥Ö¤^«d¥áªnž\¥n¹šL™1¥n© ¤\–LÖªnžRœoš–…» ˆ ©;¬Á¹Lž|¤\–LŸž.Ÿ­œn ·— ©˜œo¤R— ¥T©…µŒ¤\–L,°Ÿ´ ¹™|É ¤\®ž|™— ©¤\–L8¹Lšš/Ÿž|Éežl— ªn–a¤‰ž|Ÿª"— ¥n©·«dŸž|!¬Á¥n¹©…§¢¤\¥¯°Ã¥n©LÉ ¤Rœn— ©§L— ä Ÿž|Ÿ©a¤M´ ¥G°Ÿœn´ ±§L— ¤l— ¥n©™ ë •¥(a²T¥Íœo©˜§ ˆ ™lœœ í ¥"¬n¤\–L™lœn ¥l¦’®ž\™|aœr™d™|¤\¥nžl— ®™%™|®©±¤ ¸ ²$¤\–L8™lœn ž|ŸÉ š/¥Tž|¤\®ž|™l»jЭ°Ã®š¤q¬Á¥nžP¤\–L®™|ßš…œrž|¤R—¨°Ã¹…´¡œožß°Ÿœo™|®™lµ‰¤\–L °Ã¥"— ©…°Ÿ—¨§i®©˜°³1™l°³¥Tž|1¥"¬"¤\–L L•‰ÉE¹©…—¡ÄR¹ß°Ÿ´ ¹™|¤|Ÿž«1— ¤|– ¤\–L çãœn— ©˜—¨°e–…—$°Ã¥"´¡´ ±°³¤R— ¥n©— ™Cž|±´¨œr¤R— ¦’a´ ²+´ ¥l« °Ã¥" ¢É š…œrž|±§C«1— ¤|–Û¤\–LÖ¥n¤\–LŸž¯¤^«d¥Ï°Ã¥"´¡´ ±°³¤R— ¥n©™i»¶ÎT¥nž·¤\–L ©a¤R°³É Õ û ºå±µ7Ÿ­°³ŸšL¤R— ¥n©…œT´˜°Ÿœo™|®™«dŸž|›¬Á¥n¹©…§·«8–®ž|ßœ ™|®žR— 
Ÿ™Œ¥"¬Xš…œrš/Ÿž|™d«dŸž|âšLž\®™|Ÿ©a¤\a§1¥n©M¤\–L¢™lœn §Lœ\² ¸ ¹¤·— ©C¤\–L$©…œT ¢P¥"¬§L— ä Ÿž|Ÿ©a¤Mœo¹¤|–¥nž\™l»Ù%¥R«d®¦’®žRµ ¤\–L®™|1°Ÿœo™|®™œož\8Ÿ­°®´ ¹˜§’a§$¬Áž|¥" H¤\–L ½ ªT¹Lž\aµLœo™–…œ\¦T ¸ Ÿ®©Øœn´ ž|±œn§i²§’Ÿ™l°³žR— ¸ a§µ8œo©˜§Å°Ÿœo©L©¥n¤ ¸ ·™|®Ÿ©È— © ¤\–L ½ ªT¹Lž\a» •:– Ý ¤\–Lž|Ÿ™|–L¥7´¨§Å¦œT´ ¹L<¥"¬8¤\–L ·— ­Z±§ L•‰É °Ÿ´ ¹™|¤|Ÿž|™«¢œr™ Ý(R » ÛàëÞ¸Û"í ¬Á¥nž ®¹¤|Ÿž|™lµ ìP’Û » üãë\ü(ní ¬Á¥nžôçãœn— ©˜—¨°e–…—¡µ/œo©˜§ ìn쎍 » ü!ë\ìní ¬Á¥nž%©a¤R°³É Õ û ºå±µT«8–®ž| ½ ªT¹Lž\®™ß— ©£¤\–LMš…œrž|Ÿ©a¤\–LŸ™i— ™›œož\·¤\–Lá´ ®©ªn¤\–£¥"¬…¤\–L ¤\®žl Â™|aÄ®¹LŸ©…°Ã®™â¥n©<¤\–L ¸ ¥Tžl§i®žl» Š ¥7 ¢š…œrž|a§M«1— ¤|– ¤\–LÌšLž\®¦/— ¥T¹L™Ì°Ÿœo™|Í™|–L¥R«8©Ü— ©•˜œ ¸ ´  ü µ ¤\–LÌ¦œT´ É ¹LŸ™ß¦œrž|²;°Ã¥n©L™l—¡§’Ÿžlœ ¸ ´ ²¶œn°Ãž|¥T™|™q¤\–L,§L— ä Ÿž|Ÿ©a¤<°Ã¥"´ É ´ a°Ã¤l— ¥n©™l»Ë•¥È¬Á¹Lž|¤\–LŸžÍ°Ÿ´¨œržl—¡¬ ²Û¤\–L,§L— ä Ÿž|Ÿ©…°Ãaµ«d œn´ ™|¥àŸ­œn ·— ©a§Ì¤\–L— ©ÐÏL¹®©˜°³C¥"¬8¤\–L<¤R—¨ ¢Û§’Ÿ¦— É œo¤R— ¥n©ê¥n©¤\–L<¤\–Lž|Ÿ™|–L¥7´¨§Å¦œT´ ¹L®™l»Ü•:–˜— ™.¤R—¨ ¢±µ«d ™|a´ ±°³¤\a§¥n©˜´ ²jL•‰É ·— ­Z±§Ö°Ÿ´ ¹™|¤|Ÿž|™«8–¥n™|1¤R—¨ ¢.§’ŸÉ ¦—¡œr¤l— ¥n©ã— ™%ªnž\aœr¤|Ÿžd¤\–…œo©ÒÑoµn«8–®ž|Óщ¦œržl— a§ ¸ Ÿ¤×«%®Ÿ©   ( §Lœ\²Z™lµ7œo©˜§$°Ÿœn´¡°³¹˜´¨œr¤|±§â¤\–LŒ¤\–Lž|Ÿ™|–L¥7´¨§¢¦œT´ ¹L®™ ¬Á¥nž!±œn°E–ê°Ã¥"´¡´ ±°³¤R— ¥n©˜» é œo™|±§q¥n©Å¤\–LMž|Ÿ™|¹…´ ¤1™|–L¥R«8© — ©CÎ:— ªT¹Lž\  µ/— ¤ ¸ ±°³¥7 ¢®™°Ÿ´ ±œož¤\–…œo¤%¤\–L¢¤\®­a¤Úž|Ÿ¹L™| «1— ¤|–w ®¹¤|Ÿž|™·— ™ ¢¥Tž|ž|±´¨œr¤\a§·¤\¥›¤\–Lß§Lœr¤|P¥"¬/¤\–L ™|¤\¥nžl— ®™l» ˆ ©˜´ ²Ø´¡—¨ ·— ¤\a§Å´ ®©ªn¤\–Í¥"¬…¤\®žl ™|aÄ®¹LŸ©…°Ã®™ «dŸž|%ž|ŸšLž|¥§’¹˜°³±§ œn¬Á¤\®ždœo©M— ©a¤\®ž|¦Xœn´®¥"¬®™|®¦’®žRœn´n§Lœ\²Z™l» 100 1000 0 Ô 5 Õ 10 15 20 25 3 Ö 0 ×ÙØÛÚÜ ÝßÞ Ü àßáâàäãÞæå’ØçÞ Ü èéØëê ×ßÝíì îðï ñò ó ô õ õö ÷ ô õ ø ù ô ú öû ü ØþýÞ Øíÿ î ÝþÜ áÜ þåÜ áÞ   Î:— ªT¹Lž\  X Õ ©ÐÏL¹®©˜°³ ¥"¬L¤R—¨ ¢.§’Ÿ¦—¡œo¤R— ¥n© Õ ©1™|¹… · œož|²’µT¤\–Ldž|Ÿ¹L™|dš…œr¤|¤\Ÿž|©1¥"¬l¤\–L¢œo¹¤|–¥nž\a§ ¤\®­a¤|™¢¦œržl— ®™œn°Ãž|¥T™|™M§L— ä Ÿž|Ÿ©a¤· ¢±§L—¡œ.œo©˜§MšL¹ ¸ ´¡—¡°®œrÉ ¤R— ¥T©™|¤^²´ ®™l»,¼>— ¤|– ®¹¤|Ÿž|™lµÚ¤\–LÅœo¹¤|–¥nž\a§C¤\®­a¤|™ œož\·ªnŸ©LŸžlœT´¨´ ²Û ¢¥Tž|ã´¡— ’a´ ²á¤\¥ ¸ Öœo™|™|¥°®—¡œr¤|±§ß«1— ¤|– œ¯™|š/±°®— ½ °!Ÿ¦T®©±¤·¤\–…œo¤P¥G°Ÿ°³¹ž|™1«1— ¤|–˜— ©Èœ¯™|–L¥Tž|¤$š/ŸÉ žl— ¥G§â¥"¬X¤R—¨ ¢±» ˆ ©M¤\–L¥n¤\–LŸžÚ–…œr©…§µ±«1— ¤|–Âçãœn— ©˜—¨°e–…—¡µ ¤\–L8¤\®­a¤|™¢œož\1— ©L™|¤\aœT§M°Ã¥n©L©a°Ã¤|±§â¤\¥.— ©…§L— ¦/—¨§i¹…œT´Lœož\É ¤R—¨°Ÿ´ Ÿ™â¥"¬G¤\–LÖ§Lœ\²nµ«8–˜—¨´ «1— ¤|–C©a¤R°³É Õ û ºå±µ¤\–L®²Íœož\ ™l—¡ ¢š…´ ²ßœo™|™|¥°®—¡œr¤|±§«1— ¤|–(— ©…§L— ¦/—¨§i¹…œT´Lœo¹¤|–¥nž\™¥nž8œo¹É ¤\–L¥nžªnž\¥n¹šL™l»  0  òôónä 58L;õj5¹9|=  ö o 5qpK>J<L;39<Øj=°÷:5ÖL 5¹J<K>5 L”p=;3? 
Õ ©·¥n¹ž ½ ©˜œn´TŸ­Zš/Ÿžl—¡ ¢®©±¤lµ’«d1 ¢±œo™|¹Lž\a§¢¤\–L1§’Ÿªnž\® ¥"¬8ä ž|a°Ã²°Ÿ´¡— ©ª"剫1— ¤|– ®¹¤|Ÿž|™lµvçãœn— ©˜—¨°e–…—¡µœo©˜§á©a¤R°³É Õ û ºå±»±¼C°Ÿœn´¡°³¹˜´¨œr¤|±§!¤\–Lžlœr¤l— ¥P¥"¬a¤\®žl È™|aÄ®¹LŸ©…°Ã®™ ¤\–…œo¤›œošš/±œož|±§à¬Á¥nž( ¢¥Tž|.¤\–…œo©Èœ.™|a°Ã¥n©˜§Ö¤R—¨ ¢£— © ¤\–L °Ã¥"´¡´ ±°³¤R— ¥n©˜µi«1— ¤|–!¤\–La— ž°Ã¥"— ©…°Ÿ—¨§i®©˜°³Œ™l°³¥Tž| ¸ ±— ©Lª ªnž\aœr¤|Ÿž:¤\–…œo©(œÚª"— ¦’®©$¤\–Lž|Ÿ™|–L¥7´¨§ Ö » ë •:–L±— ž ½ ž\™|¤œošÉ š/±œožRœo©…°Ã!«¢œr™©L¥T¤$°Ã¥n¹L©±¤|±§Ö— ©(¤\–L1©a¹˜  ¸ Ÿžl» í •:– ¦œT´ ¹Lá¥"¬ Ö «¢œr™q¦œržl— a§ ì( Ü Ö Ü P(Ž »Î:— ªT¹Lž\ P ™|–L¥R«8™·¤\–Lßž|Ÿ™|¹…´ ¤l»à•:–ßž|Ÿ™|¹…´ ¤$™|–L¥R«8™·¤\–…œo¤P¤\–L ž|Ÿ¹L™|$žlœr¤l— ¥›žlœrš…—¡§L´ ²Ì§’±°³ž\aœr™|®™M¬Á¥nž Ö ™l ·œT´¨´ ®žâ¤\–…œo© ( µLœo©˜§¤\–L®© ¸ ±°³¥7 ¢®™ υœr¤8¬Á¥nž Ö ªnž\aœr¤|Ÿž:¤\–…œo© ì(Ž » ¼C!©L¥T¤|±§P¤\–…œo¤¤\–LM°e–…œo©ªnŸ™$°Ã¥nž|ž\®™|š/¥T©…§›œoššLž\¥l­— É  ·œr¤|±´ ²·¤\¥P¤\–L ¸ ¥Tžl§i®ž|™%¥"¬a¤\–L·— ©L™|¤Rœo©a¤â´ ®­/—¨°Ã¥n©ãœo©˜§ œo¹¤|–¥nž\a§!¤\®­a¤|™ — ©.¤\–LšLž\®¦/— ¥T¹L™ŒŸ­Zš/Ÿžl—¡ ¢®©±¤|™lµ ¸ ¹¤ ¤\–LM§’Ÿ¤lœT—¡´ ™1œož\M´ a¬Á¤!¬Á¥nžP¬Á¹L¤\¹Lž|¯— ©a¦’®™|¤R— ª7œo¤R— ¥T©…» Î:— ©˜œn´¡´ ²nµœn´ ¤\–L¥T¹Lªn–à¤\–LqšL¹ž|š/¥T™|›¥"¬¤\–LqŸ­Zš/Ÿžl— É  ¢Ÿ©a¤— ™©L¥T¤¢¤\¥ã°Ã¥" ¢š…œrž|P¤\–L1ž|Ÿ¹L™|1žlœr¤l— ¥.¥"¬"¤\–L®™| š…œrž|¤R—¨°Ã¹…´¡œož §’¥°³¹˜ ¢®©±¤|™lµZ¤\–L ½ ªT¹Lž\®™8™|–L¥R«ê¤\–…œo¤¤\–L žlœr¤l— ¥Ø— ™¯©L¥T¤·©LŸª"´¡— ª"— ¸ ´ £¬Á¥nž¯™|¤Rœo©…§œožR§Û°Ã¥"´¡´ ±°³¤R— ¥n©™i» •:–Œ¦œT´ ¹L¢°Ã¥n¹…´¡§ ¸ ¢ 8¹…°e–P–…— ªT–L®ž8— ©PŸ©a¦— ž|¥T©… ¢Ÿ©a¤\™ ™|¹…°e–Íœo™8¤\–L ¼C ¸ »         ! " # $ % % & ' () * ( +,./10    +2. 34   4  5 6 0 7 0 7 .05 0 7 .1 0 7 .15 0 7 .2 0 7 .25 0 7 .3 0 7 .35 0 7 .4 0 7 100 200 3 8 00 400 5 9 00 6 : 00 Î:— ªT¹Lž\ P XŒ œo¤R— ¥¥"¬L¤\®­a¤ž|Ÿ¹L™| ; < DE’”Ôd’’DEF Ò æ´ ¤|–¥n¹ªn–1¤\–L¢œo©˜œn´ ²Z™l— ™…¥"¬lž|Ÿ¹L™|±§¤\®­a¤Ú§L— ™l°Ã¹L™|™|a§P— © ¤\–…— ™1š…œrš/Ÿž!¥n©˜´ ²°ŸœošL¤\¹Lž\®™.œš…œrž|¤R—¨°Ã¹…´¡œožßœo™|š a°Ã¤ ¥"¬ ¤\®­a¤§L— ™|™|± ·— ©˜œo¤R— ¥T©…µ7ž|±´¨œr¤\a§·™|¤\¹…§L— Ÿ™·œož\ß¬Á¥n¹©…§á— © ™|®¦’®žRœn´G§L— ä Ÿž|Ÿ©a¤ ½ ±´¡§’™¥"¬…— ©…¬Á¥nžl œo¤R— ¥T©(ž|Ÿ¤|žR— Ÿ¦œT´¨» ë\ìní æ¹¤|–¥nž|™|–˜— šÅ—¡§’Ÿ©a¤R— ½ °®œr¤l— ¥n© •:–®ž|ÏŸ­— ™|¤œêªnž\¥n¹šË¥"¬·™|¤\¹…§L— Ÿ™;°Ã¥n©…°Ã®ž\©…— ©Lª ¤\–L<—¡§’Ÿ©a¤R— ½ °®œr¤l— ¥a©Í¥"¬œo¹¤|–¥nž\™ ë •¥n©±² Û ç㗡°E–˜œo±´¨µ ü(Ž(Tí »;•:–¥n™|Ö™|¤^²´ ¥" ¢Ÿ¤|žR—¨°·™|¤\¹…§L— Ÿ™áÄR¹˜œo©±¤l—¡¬Á²,¤\–L ä ™|¤×²/´ ±åŸ¥"¬Xœ‰š…œrž|¤R—¨°Ã¹…´¡œožŒœo¹¤|–¥nžG¹L™l— ©Lª1œ°Ã¥"  ¸ — ©˜œo¤R— ¥T© ¥"¬"¦œržl— ¥n¹™™|¤Rœo¤R— ™|¤R—¡°®œn´‰ ¢±œo™|¹Lž\®™lµ™|¹…°e–àœo™¢¤\–L1™|®©É ¤\®©…°Ã(´ ®©ªn¤\–L™¢¥nž¢¦T¥°®œ ¸ ¹˜´¨œrž|²ãžl—¡°E–©LŸ™|™l» Õ ©<ž|±°³Ÿ©a¤ ™|¤\¹…§L— Ÿ™lµZ«d¥Tžl§ R ÉEªTžlœn Â— ©…¬Á¥nžl œo¤R— ¥T© ë= ] êì(Tí — ™(°Ã¥" · ¢¥T©…´ ²Í¹L™|±§àœn´ ™|¥7»ÛÙ%¥R«d®¦’®žRµÚš…œr™|¤™|¤\¹…§L— Ÿ™  ¢¥T™|¤l´ ²Å¬Á¥G°Ã¹L™|±§¥n©ÖŸ­Z¤\žlœn°Ã¤l— ©LªC§L— ™l°Ãžl—¡ ·— ©˜œo¤\¥nž|™¬Á¥nž — ©…§L—¡°Ÿœo¤R— ©ªàœo¹¤|–¥nž\™·¥"¬…¥"´¡§i®žlµ8§L— ™|šL¹¤|±§Í´¡— ¤\®žRœo¤\¹Lž| œožR°E–˜— ¦’®™l» Õ ¤¢— ™Ú¥n©˜´ ²·ž|±°³Ÿ©a¤R´ ²¤\–…œo¤Ú¤\–L™|¤^²´ ¥" ¢Ÿ¤|žR—¨° ™l°E–a ¢â–…œr™ ¸ Ÿ®©£œošš…´¡— a§1¤\¥ã—¡§’Ÿ©a¤R—¨¬Á²Cœo©¥n©±² ¢¥T¹L™ œo¹¤|–¥nž\™ ë ¬ ¥TžàŸ­œn ¢š˜´ aµá•:™|¹ ¸ ¥"—Û çãœo¤\™|¹… ¢¥T¤\¥"µ ü(ŽnüTí »,æ´ ¤|–¥n¹ªn–¤\–LqšLž\¥nš ¥n™|a§á™l°E–a ¢Í— ™.ž|±´ ŸÉ ¦œr©a¤¢¤\¥C°Ã¥nša²ažl— ªn–±¤·— ™|™|¹LŸ™lµ7¤\–L$¥ ¸  a°Ã¤l— ¦T›— ™©L¥T¤¢¤\¥ §’Ÿ¤|±°Ã¤ã—¡´¨´ Ÿª"œT´%ž|ŸšLžl— ©±¤l— ©LªH— ©a¤\®©±¤l— ¥n©˜œn´¡´_²Ï§L— ™|ªT¹…— ™|a§ ¸ ²Ì¤\–LÅœo¹¤|–¥nž\™l» Õ ©L™|¤\aœT§Lµd¤\–Lq™l°E–a ¢£°Ã¥n¹…´¡§ ¸  ¸ Ÿ¤|¤\Ÿž¹L™|±§â¤\¥âšLž\®¦’®©±¤Ú¹L©˜— ©±¤|Ÿ©a¤R— ¥n©:œT´i¦— ¥7´¨œr¤l— ¥n©·¥"¬ °Ã¥nša²ažl— ªn–±¤ ¸ 
²<— ©LŸ­Zš/Ÿžl— ®©˜°³±§ßœo¹¤|–¥nž\™l» ë\üní M ¹Lš…´¡—¡°®œr¤|M§’¥°³¹˜ ¢®©±¤!§’Ÿ¤|±°Ã¤l— ¥n© æÍž|±´¨œr¤\a§Œž|Ÿ™|aœržl°e–P¤\¥nš…—¡°%— ™Ú§’¹š…´¡—¡°®œr¤|§’¥°³¹˜ ¢®©±¤ §’Ÿ¤|±°Ã¤l— ¥n©˜»Ú•:–8¤\¥nš…—¡°˜–…œr™ ¸ ±°³¥7 ¢Œ™|š/±°®— ½ °Ÿœn´¡´ ²M—¡ ¢É š/¥Tž|¤Rœo©±¤·— ©ãž|±°³Ÿ©a¤²T±œož\™M§’¹1¤\¥.¤\–L$Ÿ­Zš…´ ¥n™l— ¦’›— ©LÉ °Ãž|aœr™|Ö¥"¬¢§’¥°³¹˜ ¢®©±¤|™M¥n©,¤\–L Õ ©a¤\®ž|©®¤R» Š –¥l«1§iÉ –a¹ž|²ÍŸ¤ãœn´¡» ë\ü((Žní °Ÿœo¤\®ªT¥nžl— Ç®ß¤\–LÅ°Ã¥n©a¦’®©±¤l— ¥n©‰œn´ ¤\®­a¤|É ¸ œr™|a§È§’¹š…´¡—¡°®œr¤|Û§’Ÿ¤|±°Ã¤l— ¥n©È¤\a°e–L©…—¡Ä®¹L®™Í— ©a¤\¥ ¤\–LÌ¬Á¥"´¡´ ¥l«1— ©Lª<¤^«d¥Ì¤^²Zš/Ÿ™X͕:– ½ ž\™|¤ã— ™ õ÷( óTRÿÊ óT þeù~l÷nó@ r òXùRõ «8–®ž|Œ™|®¤\™‰¥"¬Lä ™|–…— ©Lª7´ Ÿ™l娵i¤^²Zš…—¡°Ÿœn´¡´ ²M°Ã¥n©LÉ ¤R— ªT¹L¥T¹L™M¤\®žl ¢™lµâœož\Ì°Ã¥" ¢š…œrž|±§Ø¬Á¥nžã§’¹š…´¡—¡°®œr¤|Í§’ŸÉ ¤\a°Ã¤l— ¥n© ë é ž\¥G§’ŸžâŸ¤l»¡œn´¡»¡µ ì(ݎÝR ç Š –¥l«1§i–a¹Lž\²ZµGŸ¤l»¡œn´¡»¡µ ü(ŽnüTí »º•:–Ì™|a°Ã¥n©˜§— ™ õ ð„ ÿ önø þØð$ù9ö±õòZøÃùw9ö±ÿÊS RòXÿ önþ¥ ï±ó «8–®ž|M¤\–Lß¤\®žl 6§L— ™|¤\žl— ¸ ¹L¤R— ¥n©Í™l—¡ ·—¨´¡œržl— ¤×² — ™¯¹L™|±§ã¤\¥Û§’Ÿ¤|±°Ã¤¯š/¥T¤|Ÿ©a¤R—¡œn´§’¹š…´¡—¡°®œr¤|Ÿ™ ë 篥"´¡— ©˜œnµ Ÿ¤l»¡œn´¡µ ì(ݎÝP çÓLœr©…§i®ž|™|¥T©…µ ì(ݎÝRTí »æ´ ¤|–¥n¹ªn–; ¢¥T™|¤ ™|¤\¹…§L— Ÿ™¢œn´¡´ ¥l«¶ ·— ©L¥Tž%™|²Z©±¤lœT°³¤R—¡°¦œržl—¡œo¤R— ¥n©L™lµ’¤\–L1§’¹É š…´¡—¡°®œr¤l— ¥n©Ë— ™ê§’Ÿ¤|±°Ã¤|±§Ý¬Á¥nžÍŸ©a¤l— ž|;§’¥°³¹˜ ¢®©±¤|™Å¥nž ¼C ¸ ™l— ¤|Ÿ™l» é a°Ÿœo¹™|¯¤\–L·šLž\¥nš ¥n™|a§M™l°E–a ¢ã— ™.¬Á¥nÉ °Ã¹L™|a§8¥n©1š…œrž|¤R—¨œT´X§’¹š…´¡—¡°®œr¤l— ¥n©™lµ— ¤°Ã¥n¹…´¡§ ¸ ¹L™|±§Pœo™ œ.°Ã¥" ¢š…´ ± ¢®©±¤lœŸž|²< ¢±œo™|¹Lž\¤\¥C—¡ ¢šLž|¥R¦T$¤\–L ÏLŸ­— É ¸ —¡´¡— ¤^².¥"¬"¤\–L·§’¹š…´¡—¡°®œr¤l— ¥n©Å°e–La°» ë ní M ¥G°Ã¹… ¢Ÿ©a¤1°Ÿ´ ¹™|¤|Ÿžl— ©Lª •:–®ž|1œn´ ™|¥!Ÿ­— ™|¤™|¤\¹…§L— Ÿ™d¤\–…œo¤%ªnŸ©LŸžlœr¤|!°Ÿ´ ¹™|¤|Ÿž|™ ¸ œr™|a§¥n©ÆšL–žlœr™9Ÿ™C™|–…œrž|±§ ¸ Ÿ¤×«%®Ÿ© §’¥°³¹˜ ¢®©±¤|™l» ’¹h·­à•ž|Ÿ Š ´ ¹L™|¤\®žR— ©ª ë L• Š í µ/šLž\¥nš ¥n™|a§á¬Á¥nžâ¥n©É ¤\–L®Éϲž|Ÿ¥nž\ª"œo©˜— Çaœo¤R— ¥a©¢¥"¬®¤\–L™|aœržl°e–1ž|Ÿ™|¹…´ ¤|™¥n© ¤\–L ¼C ¸ µX— ™Úœo©1Ÿ­œn ¢š˜´ °Ÿ´ ¥T™|%¤\¥¥n¹žŒœoššLž\¥"œn°e– ë ÙGœn ·— ž Û Ð¤|DZ— ¥n©…—¡µ ì(ݎÝ(ÞTí »æ´ ¤|–¥n¹ªn– ¸ ¥T¤|–wL• Š œo©˜§M¥n¹ž  ¢Ÿ¤|–¥G§’™PŸ­Zš…´ ¥"— ¤P™|¹·­<¤\ž|®¯™|¤\ž|¹…°Ã¤|¹ž|ß¤\¥ãž|±œn´¡— Ç® Ö°®— ®©±¤M°Ÿ´ ¹™|¤|Ÿžl— ©Lª"µ ¤\–LÖœn§œoš¤lœr¤l— ¥n©™·œož\·™l´¡— ªT–a¤R´ ² §L— ä Ÿž|Ÿ©a¤R» é a°Ÿœo¹™|ã¤\–Lá¥ ¸  a°Ã¤l— ¦TC¥"¬PL• Š — ™›¤\¥ °Ãž|aœr¤|Å™|a œo©a¤R—¡°®œT´¨´ ²œo™|™|¥°®—¡œr¤|±§§’¥°³¹˜ ¢®©±¤à°Ÿ´ ¹™|É ¤\®ž|™lµX™|¤\a · ·— ©LªÖœo©˜§·™|®©±¤|Ÿ©…°Ã.´ ®¦’a´L™|®ª7 ¢Ÿ©a¤Rœo¤R— ¥T© «dŸž|£œošš…´¡— a§Ìœo¤$¤\–LßšLž\®ÉešLž|¥°³Ÿ™|™l— ©Lªã™|¤Rœoªn±µ¤\®žl  ™|aÄ®¹LŸ©…°Ã®™ã´ ¥n©ªnŸž¯¤\–…œo©Û™l— ­Ì«dŸž|ãš/Ÿ©…œT´¨— ǟa§ã«1— ¤|– ±ÄR¹…œT´L«d±— ªT–a¤\™lµœo©˜§M¤\–L1Ÿ­Z¤\žlœn°Ã¤|±§Öä ¸ œr™|›°Ÿ´ ¹™|¤|Ÿž|™lå œož\á¬Á¹Lž|¤\–LŸž›— ©a¤\®ªTžlœo¤\a§Ì— ©a¤\¥,´¡œož|ªT®žß°Ÿ´ ¹™|¤|Ÿž|™l» é ®É °Ÿœo¹L™|ß¥n¹žÖ¬Á¥G°Ã¹L™q— ™¯¥n©Ì¤\–LqŸ­œn°Ã¤$¤\®žl Ë™|aÄ®¹LŸ©…°Ã  ·œr¤l°e–…µ«dœo©˜œn´ ²ZǟÛ§L— ž|±°³¤R´ ²¤\–Lä ¸ œr™|Û°Ÿ´ ¹™|¤|Ÿž|™lå Ÿ­Z¤\žlœn°Ã¤|±§q¬Áž|¥" ;¤\–L Ÿ©a¤l— ž| ¤\®­a¤P°Ã¥"´¡´ ±°³¤R— ¥n©™i» Î:— ©˜œn´¡´ ²nµ¬Á¹L¤\¹Lž|¯ž|Ÿ™|aœržl°e–ȧL— ž|±°³¤R— ¥n©™›œož\áœo™ß¬Á¥"´ É ´ ¥l«8™l» Î:— ž\™|¤lµâ¤\–LÏ— ™|™|¹LÌ¥"¬·ÄR¹˜œo©±¤l—¡¬Á²— ©LªØ¤\–LÏœo¹É ¤\–L¥nž\™|–…— š£¥"¬Œœo©¥n©±² ¢¥T¹L™¤\®­a¤|™1™|–L¥T¹…´¡§ ¸ á¬Á¹Lž|¤\–LŸž Ÿ­Zš…´ ¥nž\a§µ ¸ ±°®œr¹L™|Û¤\–L— ©a¤\®ž|šž|Ÿ¤lœr¤l—_¥n©  ·œ\²Ü§’ŸÉ š/Ÿ©…§¥n©q¦œržl— ¥n¹™¬ œn°Ã¤|¥Tž|™P— ©…°Ÿ´ ¹˜§L— ©Lª¯¤\–Lß´¡œo©ªn¹…œrªn±µ ¤\–LÏ ¢±§L—¡œnµ1¤\–LÌ±§L— ¤l— ©LªÏš/¥7´¡—¨°Ã²TµP¥nžá¤\–LÌ™|¹ ¸  ±°³¤ ½ ±´¡§L»'•:–ÌšLž\¥nš ¥n™|a§œo©˜œn´ ²Z¤R—¡°®œT´$ ¢Ÿ¤|–¥G§°Ã¥n¹…´¡§ ¸ Ìœ(šLž\¥" 
·— ™l— ©LªÌ¤\¥X¥"´¤\¥àŸ­Zš…´ ¥nž\à§L— ä Ÿž|Ÿ©a¤ß¤^²Zš/Ÿ™ ¥"¬"¤\®­a¤|¹…œT´7ž|Ÿ™|¥n¹žl°³Ÿ™lµ˜— ©…°Ÿ´ ¹˜§L— ©Lª.¼C ¸ §’¥°³¹˜ ¢®©±¤|™lµ ï çèñÉ ¸ œo™|±§1§Lœr¤lœ ¸ œr™|®™lµi¥nž…šLž\¥nªTžlœn Û™|¥n¹žl°Ã °Ã¥G§’Ÿ™l» •:–¢™|a°Ã¥n©˜§$š/¥T¤|Ÿ©a¤R—¡œn´±ž|Ÿ™|aœržl°e–(¤\¥nš…—¡°— ™¤\–L žlœrš…—¡§ §’Ÿ¤|±°Ã¤l— ¥n©ê¥"¬š…œrž|¤R—¨œT´¡´ ²>§’¹š…´¡—¡°®œr¤|±§Í¤\®­a¤|™Ìœo™q«d±´¨´ œo™·¤\–L<œo¹¤|¥7 ·œo¤R—¡°PªnŸ©LŸžlœr¤l— ¥n©,¥"¬:±  ¸ ±§L§’±§C¤\®­a¤ œo©˜°E–¥nž|™P¹L™l— ©LªÖ¤\–LßšLž\¥nš ¥n™|a§<°Ÿ´ ¹™|¤|Ÿžl— ©Lª, ¢Ÿ¤|–¥G§L» •:–Œ¤\–…— žl§— ™|™|¹L °Ã¥n©…°Ã®ž\©L™¤\–LŒŸ­Z¤\žlœn°Ã¤l— ¥n©¥"¬’Ÿ¦T®©±¤|É ™|š/±°®— ½ °Ÿ­ZšLž\®™|™l— ¥n©™¢¤\–…œo¤P°Ÿœo© ¸  ¹L¤R—¡´¨— ǟa§á¬Á¹Lž|¤\–LŸž — ©.™|¹… · œožl— Ça— ©ª¤\–L$°Ã¥n©a¤\®©±¤|™¥"¬a¤\–L·°Ÿ´ ¹™|¤|Ÿžl»•:– ´¡œo™|¤Í— ™|™|¹Lœn´ ™|¥àž|±ÄR¹˜— ž\®™Ö™|¹…°e–Ȥ\a°e–L©…—¡Ä®¹L®™Íœo™qž|ŸÉ ™|®ª7 ¢Ÿ©a¤Rœo¤R— ¥T©èœo©˜§;— ©a¤\®ž|š ¥"´¡œo¤R— ¥n©¶¥"¬¤\®žl ¢™lµ¯œo©˜§ œo¹¤|¥7 ·œo¤R—¡°ê§’Ÿ¤|±°Ã¤l— ¥n©è¥"¬Ö ¢±§L—¡œoÉe™|š/±°®— ½ °ÅŸ­ZšLž\®™|É ™l— ¥n©™i» Õ ©Æœn§§L— ¤R— ¥T©…µâ— ¤C— ™C—¡ ¢š/¥Tž|¤lœr©a¤M¤\¥ê§’Ÿ¦T±´ ¥nš œž| ½ ©a§<´¡œo©ªn¹…œrªnŸÉ ¸ œo™|±§Å ¢Ÿ¤|–¥G§M¥"¬Œ—¡§’Ÿ©a¤R—¨¬Á²— ©Lª œo©˜§Ö°Ÿ´¨œr™|™l—¨¬Á²— ©LªãÄR¹¥n¤\a§ã§’Ÿ™l°³žR— š¤l— ¥n©™¤\–…œo¤Pœošš/±œož — ©(¤\–L ¤\®­a¤|™l» > '@?n'L“(' Ò ”h'L ¹º–4‘~–4“Ž´ ¹º–4‘~–4“ Õ ’“ ± ‘~ŽÄ1A|’¤8‘~§ –w»’Ä ¥ ~ž’¤8˜™2Î ¤!‹~ž’‘ ‹ž’–°Ä »’¼’¼’½ÑÓÀ-¾Ñ1¿ÀȍÍ»’¼’¼’ÃÑÓÀ-¾Ñ1»’¼‡¿À’À’À-´ Š¡’” ‹CB ‹“§í‹ ‹~šüÊ͋1“Ð ›q˜ Î –4“§í‹ ´w£nÒ4—a¢c£ED ¹ Õ ’§ÈÑ ± ¤8–4–°´ƒ»’¼’¼’Æ’´q›q˜™~ž’‘”˜™2˜8•ꊌ‹Ž‹ÈÕ ’~’“˜8‘~§í´ Ê͋1˜™~˜8•1Î~˜ƒÒ,°–4“Ž‹•4˜ ’–°´y»’¼’¼’¼’´Ô»’¼’¼’¾·Ê͋1˜™~˜8•1Î~˜ ŠŒ‹˜8¤8ˇ϶–Fn ÕnŠ¡Ñ1¹HG ÊIA|–4“˜8’ ´ ϶˜8Î~’KJ –4˜ML°‹˜ƒ¢cÎ~˜8 Î ‘” ´ƒ¿À’À-»’´²»’¼’¼’½Ñ1¿À’À’À Ï¶˜8Ð/и–4˜ON”‘~¤8¤8э– ¦  ŠŒ‹Ž‹ Î ‹–°´ Ï ‹˜8’ ‹¤œÕ –4°–4“ R ’“ã¢c•4˜8–4~•4–îÒ, R ’“§í‹˜8’y¢c˰–4§ Ž´­»’¼’¼’¼’´ ϶£îÕqÒ4¹y£ –4­Õ ’¤™¤8–4•4˜™’y»’´ P|‘RQ ˜ Ê͋12‘~§ ’-ÄSD¶Ð°˜™“Ž‹TJ ˜™Ž‹‘~•1Ę,Äv£n‹2‘~UP¶‹§í‹ʙŽ‹’Ä P|’2Î~˜™Ž‹Ð~‹VB¶˜8“Ž‹~-Ä2GÿŽ‹§î‘ Ò,§í‹‘~•1Î~˜!Ä‹~š £ ’§È‹Ð/˜-ÒF ‹Ñ §î‘~“Ž‹’´ÿ»’¼’¼’Ã’´ÿɸ‹ ± ‹~–4–îÊȐ¸“ ± Î~’¤8’ž’˜™•°‹¤WD¶ ‹¤8˰˜8¶¢c˰–4§ Õ Î ‹’¢c–4 Ê͋1°‘ ‹¤!´íÏXD¶Ò4¢Ž£ª£ –4•1Î~~˜™•°‹1¤ ¹º– ± ’“ŽÄºÏXD¶Ò4¢Ž£œÑ Ò4¢cÑӣ¼’ÃÀ’À-Ò´ £ ’°Ë Êȕ ¥ ~–4“Ë ‹~š ÊȘ8•1Î ‹–4¤YG ‹В–4 ´Ì¿À’À’À-´[Z]\_^a`cbcde f `cg hjilknmop^ag qsrtc^agbcoutcopkwvEbcxVh,\_^atc^ gbcoptcy1zp^a{|ybcx}m^ d{Ž´ ˜8 B ‹~š Î /’Ð  R Ï ‹‘~“Ž‹¤È›œ‹~ž’‘ ‹ž’– —“/•4–4˜8~ž-ÄvÊ͋1“•4–4¤ Š¡–4аи–4“ÿÒ,~•°´8ÄùÇÅ-ÇÁ-ǒ½’¿¸´ P|‘~Ž‹Ý£n‘ Î ’˜ ‹~š~P|‘RQ ˜ Ê͋12‘~§ ’-´ ¿À’À-¿’´Z]\_^a`cbcde f `cg h€ilknmop^ag qsrtc^agbco‚ƒbcd…„†m^amd b ‡mopmbc\ fHˆ b r\‰xŠmop^ f ´œ¢cÒ‹ ϶’–4  R Ò, R ’“§í‹˜8’ —“/•4–4˜8~ž€¢c/•4˜8–4!Ëü˜8 ɸ‹ ± ‹ Ä ¢cÒ‹ƒÑό›ùÑ1»Å-¾’Ä»’ÃÁ-¿Å-´ D Î š~‘”“áÕ Î~ Fœš”ΰ‘~“˰ÄWG ± Î~˜8“ŒN”“˜8–4š~–4“ŽÄ*ŠŒ‹; °˜8š€‹ÿ“’2§í‹ Ä‹~š Ê͋1“ËyÕn‹Δ–4“˜™~– Êȕ°Õn‹ Î –°´v¿À’À-¿’´uvEbcyymr^agbcoŽzp^atc^ g f ^agr f ƒbcd‘pt f ^ ˆ \h,y grtc^ m ˆ b r\‰xŠmop^ ˆ m^amr^ gbco”´HDîÕnÊ £ “Ž‹~Ž´ ’ôÒ, R ’“§í‹˜8’ ¢c˰–4§ ŽÄ&¿À © ¿ ¯ Ä&»’Ã’»Á-»’¼’»¸´ D¶~š~“2–4˜,’&´”“ù“/š”–4“ŽÄ³¢c– ¸–4vÕn´|‹ÿ¤!‹2§Í‹1 İÊ͋1“Р¢~´¸Ê͋1 ‹2–°Ä ‹~š•‹ÿ–4”–-“–4˗’nFƒ–4˜™ž´œ»’¼’¼’Ã’´]zp{ op^ tcr^ grSvEy \ f ^ mdgop‡˜bV^ `cm ™—mšc´ —“/•°´¡ R Î~– ¢c˜ ¦ Î Ò,°–4“º‹˜8’ ‹¤áµw’“¤™šÝµ ˜™š~– µw– Î Õ ’ R –4“–4~•4–°Äƒ¼’»Á’Å’À’Å´ Bœ› –4•4’“‹ ‹“• › ˜!‹ÑÊ ’¤8˜™ ‹’Ä ›q‘~˜™ž‹ÿ“Ž‹; ~‹~-Ä͋~š Ï ‹“Ž‹Ë‹ ‹ ¢cÎ~˜ ~‹Ð/‘”§Í‹“Ž´ »’¼’¼’½’´ kczsvEZ†Ÿœ Žsgopkcgop‡ ˆ b r\‰xŠmop^ vEbh,gm f ZŒrdlb ff Ÿœ\‰y ^ g h,ym ˆ tn^atcšt f m f ´æ—“/•°´ÿ R N”’‘~“Î Ò,°–4“º‹˜8’ ‹¤¶Õ ’ R –4“–4~•4–㐒Ô—|‹“Ž‹¤8¤™–4¤á‹~š 
Š¡˜8“˜ Î ‘~–4š Ò, R ’“§í‹˜8’ ¢c˰–4§íÄ&½’¾Á-Ò¼’´ Ê͋1“Ðæ¢~‹~š”–4“’ ´ »’¼’¼’Ã’´ ˆ \h,y grtc^ m ˆ m^amr^ gbco¡go¢^ `cm £¤m\_^amd f vEbcyymr^agbco”´v£ –4•1Î~~˜™•°‹1¤ ¹º– ± ’“­ R Î~–wŠ¡– ± ‹“Ñ § –4°a R Õ ’§ ± ‘~˜™~žÍ¢c•4˜8–4~•4– ‹ÿÎ~–…¥¶~˜ ¸–4“˜8,Ë  R ‹ÿ¤!‹Ñ ž’ Fîĸ£î¹ Ñ1»’¼’¼’ÃÑ1ǒ´ Gÿ“–4¦’&‹§ ˜8“w‹~š¦Gÿ“–4 ¥ ƒL4˜™’~˜!´ »’¼’¼’¾’´§™—mš ˆ b r\‰xŠmop^ vEy \ f ^ mdgop‡ ‚Z¨pmt f gšgy g^a{ ˆ mx}bco f ^adltc^ gbco”´Œ—“/•°´” R ¢cÒ,Ñ ‹ÿÒg¹V©8¼’¾’ĸÅ-½Á-ÇÅ-´
2003
49
Hierarchical Directed Acyclic Graph Kernel: Methods for Structured Natural Language Data Jun Suzuki, Tsutomu Hirao, Yutaka Sasaki, and Eisaku Maeda NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237 Japan jun, hirao, sasaki, maeda  @cslab.kecl.ntt.co.jp Abstract This paper proposes the “Hierarchical Directed Acyclic Graph (HDAG) Kernel” for structured natural language data. The HDAG Kernel directly accepts several levels of both chunks and their relations, and then efficiently computes the weighed sum of the number of common attribute sequences of the HDAGs. We applied the proposed method to question classification and sentence alignment tasks to evaluate its performance as a similarity measure and a kernel function. The results of the experiments demonstrate that the HDAG Kernel is superior to other kernel functions and baseline methods. 1 Introduction As it has become easy to get structured corpora such as annotated texts, many researchers have applied statistical and machine learning techniques to NLP tasks, thus the accuracies of basic NLP tools, such as POS taggers, NP chunkers, named entities taggers and dependency analyzers, have been improved to the point that they can realize practical applications in NLP. The motivation of this paper is to identify and use richer information within texts that will improve the performance of NLP applications; this is in contrast to using feature vectors constructed by a bagof-words (Salton et al., 1975). We now are focusing on the methods that use numerical feature vectors to represent the features of natural language data. In this case, since the original natural language data is symbolic, researchers convert the symbolic data into numeric data. This process, feature extraction, is ad-hoc in nature and differs with each NLP task; there has been no neat formulation for generating feature vectors from the semantic and grammatical structures inside texts. Kernel methods (Vapnik, 1995; Cristianini and Shawe-Taylor, 2000) suitable for NLP have recently been devised. Convolution Kernels (Haussler, 1999) demonstrate how to build kernels over discrete structures such as strings, trees, and graphs. One of the most remarkable properties of this kernel methodology is that it retains the original representation of objects and algorithms manipulate the objects simply by computing kernel functions from the inner products between pairs of objects. This means that we do not have to map texts to the feature vectors by explicitly representing them, as long as an efficient calculation for the inner products between a pair of texts is defined. The kernel method is widely adopted in Machine Learning methods, such as the Support Vector Machine (SVM) (Vapnik, 1995). In addition, kernel function  has been described as a similarity function that satisfies certain properties (Cristianini and ShaweTaylor, 2000). The similarity measure between texts is one of the most important factors for some tasks in the application areas of NLP such as Machine Translation, Text Categorization, Information Retrieval, and Question Answering. This paper proposes the Hierarchical Directed Acyclic Graph (HDAG) Kernel. It can handle several of the structures found within texts and can calculate the similarity with regard to these structures at practical cost and time. The HDAG Kernel can be widely applied to learning, clustering and similarity measures in NLP tasks. 
The following sections define the HDAG Kernel and introduce an algorithm that implements it. The results of applying the HDAG Kernel to the tasks of question classification and sentence alignment are then discussed.

2 Convolution Kernels

Convolution Kernels were proposed as a concept of kernels for a discrete structure. This framework defines a kernel function between input objects by applying convolution "sub-kernels" that are the kernels for the decompositions (parts) of the objects.

Let D be a positive integer and X, X_1, ..., X_D be nonempty, separable metric spaces. This paper focuses on the special case that X, X_1, ..., X_D are countable sets. We start with x ∈ X as a composite structure and x_1, ..., x_D as its "parts", where x_d ∈ X_d. R is defined as a relation on the set X_1 × ... × X_D × X such that R(x_1, ..., x_D, x) is true if x_1, ..., x_D are the "parts" of x. R^{-1}(x) is defined as R^{-1}(x) = {(x_1, ..., x_D) : R(x_1, ..., x_D, x)}.

Suppose x_1, ..., x_D are the parts of x, and y_1, ..., y_D are the parts of y. Then, the similarity K(x, y) between x and y is defined as the following generalized convolution:

    K(x, y) = \sum_{(x_1,...,x_D) \in R^{-1}(x)} \sum_{(y_1,...,y_D) \in R^{-1}(y)} \prod_{d=1}^{D} K_d(x_d, y_d).    (1)

We note that Convolution Kernels are abstract concepts, and that instances of them are determined by the definition of the sub-kernel K_d(x_d, y_d). The Tree Kernel (Collins and Duffy, 2001) and String Subsequence Kernel (SSK) (Lodhi et al., 2002), developed in the NLP field, are examples of Convolution Kernel instances. An explicit definition of both the Tree Kernel and SSK K(x, y) is written as:

    K(x, y) = \phi(x) \cdot \phi(y) = \sum_{i=1}^{m} \phi_i(x) \cdot \phi_i(y).    (2)

Conceptually, we enumerate all sub-structures occurring in x and y, where m represents the total number of possible sub-structures in the objects. \phi, the feature mapping from the sample space to the feature space, is given by \phi(x) = (\phi_1(x), ..., \phi_m(x)).

In the case of the Tree Kernel, x and y are trees. The Tree Kernel computes the number of common subtrees in the two trees x and y; \phi_i(x) is defined as the number of occurrences of the i'th enumerated subtree in tree x. In the case of SSK, the input objects x and y are string sequences, and the kernel function computes the sum of the occurrences of the i'th common subsequence \phi_i(x), weighted according to the length of the subsequence. These two kernels make polynomial-time calculations, based on efficient recursive calculation, possible; see equation (1). Our proposed method uses the framework of Convolution Kernels.

3 HDAG Kernel

3.1 Definition of HDAG

This paper defines HDAG as a Directed Acyclic Graph (DAG) with hierarchical structures. That is, certain nodes contain DAGs within themselves.

In basic NLP tasks, chunking and parsing are used to analyze the text semantically or grammatically. There are several levels of chunks, such as phrases, named entities and sentences, and these are bound by relation structures, such as dependency structure, anaphora, and coreference. HDAG is designed to enable the representation of all of these structures inside texts: hierarchical structures for chunks and DAG structures for the relations of chunks. We believe this richer representation is extremely useful to improve the performance of similarity measures between texts, and moreover of learning and clustering tasks in the application areas of NLP.

Figure 1 shows an example of the text structures that can be handled by HDAG. Figure 2 contains simple examples of HDAG that elucidate the calculation of similarity.
As shown in Figures 1 and 2, the nodes are allowed to have more than zero attributes, because nodes in texts usually have several kinds of attributes. For example, attributes include words, part-of-speech tags, semantic information such as WordNet, and the class of the named entity.

Figure 1: Example of the text structures handled by HDAG (the sentences "Junichi Tsujii is the General Chair of ACL2003. He is one of the most famous researchers in the NLP field." annotated with words, part-of-speech tags, NP chunks, named-entity classes, dependency structure, and a coreference link; nodes and direct links are marked)

Figure 2: Examples of HDAG structure (two small HDAGs, G_1 with nodes p_1 to p_7 and G_2 with nodes q_1 to q_8; the nodes carry attributes such as N, V, a, b, c, d and e, and each graph contains NP chunk nodes that enclose sub-graphs)

3.2 Definition of HDAG Kernel

First of all, we define the sets of nodes in the HDAGs G_1 and G_2 as P and Q, respectively; p and q represent nodes in the graphs, defined as p ∈ {p_i ∈ P | 1 ≤ i ≤ |P|} and q ∈ {q_j ∈ Q | 1 ≤ j ≤ |Q|}, respectively. We use an expression such as p_i → p_j → p_k to represent the path from p_i to p_k through p_j.

We define an "attribute sequence" as a sequence of attributes extracted from the nodes included in a sub-path. An attribute sequence is expressed as 'A-B' or 'A-(C-B)', where ( ) represents a chunk. As a basic example of the extraction of attribute sequences from a sub-path, a sub-path in G_2 of Figure 2 that links a node carrying the attributes 'N' and 'e' to a node carrying the attributes 'V' and 'b' contains the four attribute sequences 'e-b', 'e-V', 'N-b' and 'N-V', which are the combinations of all attributes in the two nodes. Section 3.3 explains in detail the method of extracting attribute sequences from sub-paths.

Next, we define "terminated nodes" as those that do not contain any graph, and "non-terminated nodes" as those that do.

Since HDAGs treat not only exact matching of sub-structures but also approximate matching, we allow node skips according to a decay factor λ (0 < λ ≤ 1) when extracting attribute sequences from the sub-paths. This framework makes similarity evaluation robust: similar sub-structures can contribute to the value of the similarity, in contrast to exact matching, which never evaluates similar sub-structures. Next, we define the parameter n (n = 1, 2, ...) as the number of attributes combined in an attribute sequence. When calculating similarity, we consider only combination lengths of up to n.

Given the above discussion, the feature vector of an HDAG is written as \phi(G) = (\phi_1(G), ..., \phi_m(G)), where \phi represents the explicit feature mapping of the HDAG and m represents the number of all possible n-attribute combinations. The value of \phi_i(G) is the number of occurrences of the i'th attribute sequence in HDAG G; each attribute sequence is weighted according to its node skips. The similarity between HDAGs, which is the definition of the HDAG Kernel, follows equation (2), where the input objects x and y are G_1 and G_2, respectively. According to this approach, the HDAG Kernel calculates the inner product of the common attribute sequences, weighted according to their node skips and their occurrences, between the two HDAGs G_1 and G_2. We note that, in general, if the dimension of the feature space becomes very high or approaches infinity, it becomes computationally infeasible to generate the feature vector \phi(G) explicitly.
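As a small, purely illustrative sketch of the explicit feature-map computation in equation (2), the one that becomes infeasible when the feature space is very large, the following Python fragment takes sparse attribute-sequence counts and returns their inner product; the function name, the use of Counter dictionaries, and the toy feature values are our own assumptions, not part of the proposed method.

from collections import Counter

def explicit_kernel(phi_x, phi_y):
    # Inner product of two explicit (sparse) feature vectors,
    # i.e. K(x, y) = sum_i phi_i(x) * phi_i(y), as in equation (2).
    if len(phi_x) > len(phi_y):
        phi_x, phi_y = phi_y, phi_x   # iterate over the smaller vector
    return sum(v * phi_y.get(k, 0.0) for k, v in phi_x.items())

# Hypothetical usage: keys are attribute sequences, values are their
# node-skip-weighted occurrence counts in each HDAG.
phi_g1 = Counter({"N-b": 1.0, "a": 2.0, "(N*)-(d)": 0.5})
phi_g2 = Counter({"N-b": 1.0, "a": 1.0, "c-d": 1.0})
print(explicit_kernel(phi_g1, phi_g2))   # 1.0*1.0 + 2.0*1.0 = 3.0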
To improve the reader's understanding of what the HDAG Kernel calculates, before we introduce our efficient calculation method, the next section details the attribute sequences that become elements of the feature vector when the calculation is explicit.

3.3 Attribute Sequences: The Elements of the Feature Vector

We describe the details of the attribute sequences that are elements of the feature vector of the HDAG Kernel using G_1 and G_2 in Figure 2.

The framework of node skip
We denote the explicit representation of a node skip by "*". An attribute sequence extracted from a sub-path under a node skip is written as, for example, 'a*-c'. It costs λ to skip a terminated node. The cost of skipping a non-terminated node is the same as skipping all the graphs inside the non-terminated node.

Table 1: Attribute sequences and the values of a non-terminated node of G_1 and one of G_2 (for each node, the sub-paths inside it, the resulting attribute sequences, and their values, grouped by n = 1, 2, 3)

We introduce three decay functions, all based on the decay factor λ. The first returns the cost of a node skip: skipping a single terminated node costs λ, and skipping a node whose sub-path covers two terminated nodes costs λ². The second returns, for a node, the sum of the multiplied costs of the node skips of all the nodes that have a path to it; for instance, when two skipped terminated nodes have a path to the node, it sums their costs λ + λ. The third returns, for a node, the sum of the multiplied costs of the node skips of all the nodes that it has a path to.

Attribute sequences for non-terminated nodes
We define the attributes of a non-terminated node as the combinations of all attribute sequences inside it, including the node skip. Table 1 shows the attribute sequences and values of one non-terminated node from G_1 and one from G_2.

Details of the elements in the feature vector
The node skips are not distinguished in the elements of the feature vector. This means that 'A*-B-C' is the same element as 'A-B-C', and 'A*-*-B-C' and 'A*-B*-C' are also the same element as 'A-B-C'. Considering the hierarchical structure, it is natural to assume that '(N*)-(d)-a' and '(N*)-((*-d)-a)' are different elements. However, in the framework of the node skip and the attributes of the non-terminated node, '(N*)-(*)-a' and '(N*)-((*-*)-a)' are treated as the same element. This framework achieves approximate matching of the structure automatically.

Table 2: Similarity values of G_1 and G_2 in Figure 2 (for each graph, the common attribute sequences and their values, grouped by n = 1, 2, 3)

The HDAG Kernel checks, for every pair of attributes in an attribute sequence, whether the two attributes lie inside the same chunk or not. If all pairs of attributes in two attribute sequences are in the same condition, inside or outside the chunk, then the attribute sequences are judged to be the same element.
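To make the node-skip weighting concrete, the sketch below enumerates the attribute sequences of length n that can be read off a single path of terminated nodes, multiplying in one factor of λ for every node that is skipped between the first and last selected node. It is a simplified illustration under our own assumptions (flat paths only, no chunks or non-terminated nodes, and our own function name and data layout), not the paper's actual procedure.

from itertools import combinations, product
from collections import defaultdict

def path_attribute_sequences(path, n, lam):
    # path: a list of attribute lists, one per terminated node,
    # e.g. [["N", "e"], ["b", "V"], ["c"]].
    # Returns {attribute sequence: accumulated weight}, where every node
    # lying strictly between the first and last selected node but not
    # selected itself counts as one node skip (a factor of lam).
    weights = defaultdict(float)
    for idx in combinations(range(len(path)), n):
        skipped = (idx[-1] - idx[0] + 1) - n
        w = lam ** skipped
        # one attribute from each selected node, in path order
        for attrs in product(*(path[i] for i in idx)):
            weights["-".join(attrs)] += w
    return dict(weights)

# With the path a -> b -> c, n = 2 and lam = 0.5:
# 'a-b' and 'b-c' get weight 1.0, the skipping sequence 'a-c' gets 0.5.
print(path_attribute_sequences([["a"], ["b"], ["c"]], 2, 0.5))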
Table 2 shows the similarity, that is, the value of K_{HDAG}(G_1, G_2), when the feature vectors are explicitly represented. We only show the common elements of each feature vector that appear in both G_1 and G_2, since the number of elements that appear in only G_1 or only G_2 becomes very large. Note that, as shown in Table 2, the attribute sequences of a non-terminated node itself are not addressed by the features of the graph. This is due to the use of the hierarchical structure; the attribute sequences of the non-terminated node come from the combination of the attributes in the terminated nodes. In the case of p_1, the attribute sequence 'N*' comes from 'N' in p_2. If we treated both 'N*' in p_1 and 'N' in p_2 as features, we would evaluate the attribute sequence 'N' in p_2 twice. That is why the similarity values in Table 2 do not contain the 'c*' and '(c*)*' entries listed in Table 1.

3.4 Calculation

First, we determine C_n(p, q), which returns the sum of the common attribute sequences of the n-combinations of attributes between nodes p and q:

    C_n(p, q) = C'_n(p, q) + count(p, q)   if n = 1,
    C_n(p, q) = C'_n(p, q)                 otherwise.    (3)

Equation (4) defines C'_n(p, q) by cases: it is 0 when both p and q are terminated nodes; when exactly one of them is non-terminated, it sums decay-weighted common-attribute counts over the nodes inside that non-terminated node; and when both are non-terminated, it sums decay-weighted values of D_n(p', q') over all pairs of nodes p' inside p and q' inside q.

count(p, q) returns the number of common attributes of nodes p and q, not including the attributes of nodes inside p and q. We define the function in(p) as returning the set of nodes inside a non-terminated node p; in(p) = ∅ means that node p is a terminated node. For example, in(p_1) = {p_2, p_3, p_4} and in(p_2) = ∅.

We define functions D_n(p, q), D'_n(p, q) and D''_n(p, q) to calculate C_n(p, q):

    D_n(p, q) = C_n(p, q) + \sum_{j=1}^{n-1} D'_j(p, q) \cdot C_{n-j}(p, q).    (5)

Equations (6) and (7) define D'_n(p, q) and D''_n(p, q) recursively over the graph structure: D'_n(p, q) accumulates decay-weighted values of D'_n and D''_n over the nodes that have direct links to q, and D''_n(p, q) accumulates decay-weighted values of D''_n and D_n over the nodes that have direct links to p. The boundary conditions (8), (9) and (10) state that D_n(p, q) reduces to a decay-weighted C_n(p, q) when n = 1, that D'_n(p, q) = 0 when no node has a direct link to q, and that D''_n(p, q) = 0 when no node has a direct link to p. The function pred(p) returns the set of nodes that have direct links to node p; pred(p) = ∅ means that no nodes have direct links to p. For example, pred(p_4) = {p_2, p_3} and pred(p_1) = ∅.

Next, we define K_n(p, q) as representing the sum of the common attribute sequences that are the n-combinations of attributes extracted from the sub-paths whose sinks are p and q, respectively:

    K_n(p, q) = count(p, q)                                          if n = 1,
    K_n(p, q) = \sum_{j=1}^{n-1} L'_j(p, q) \cdot C_{n-j}(p, q)       otherwise.    (11)

Functions L_n(p, q), L'_n(p, q) and L''_n(p, q), needed for the recursive calculation of K_n(p, q), are written in the same form as D_n(p, q), D'_n(p, q) and D''_n(p, q) respectively, except for the boundary condition of L_n(p, q), which is written as:

    L_n(p, q) = C_n(p, q)   if n = 1.    (12)

Finally, an efficient similarity calculation formula is written as

    K_{HDAG}(G_1, G_2) = \sum_{n} \sum_{p \in P} \sum_{q \in Q} K_n(p, q).    (13)

According to equation (13), given the recursive definition of K_n(p, q), the similarity between two HDAGs can be calculated in O(n|P||Q|) time.¹

¹ We can easily rewrite the equation to calculate all combinations of attributes, but the order of the calculation time becomes correspondingly larger.

3.5 Efficient Calculation Method

We will now elucidate an efficient processing algorithm.
First, as a pre-process, the nodes are sorted under the following condition: all nodes that have a path to the focused node and are in the graph inside the focused node should be set before the focused node. We can get at least one set of ordered nodes, since we are treating an HDAG. In the case of G_1, one valid ordering places p_2, p_3 and p_4 (the nodes inside the non-terminated node p_1) before p_1 itself, followed by the remaining nodes.

We can rewrite the recursive calculation formula in "for loops" if we follow the sorted order. Figure 3 shows the algorithm of the HDAG Kernel. The dynamic programming technique is used to compute the HDAG Kernel very efficiently, because when following the sorted order, the values that are needed to calculate the focused pair of nodes have already been calculated in the previous steps. We can calculate the table by following the order of the nodes from left to right and top to bottom.

Figure 3: Algorithm of the HDAG Kernel (pseudo-code: nested loops over the sorted node pairs accumulate the dynamic-programming tables for C_n, D_n, D'_n, D''_n, K_n and their primed variants, iterating over the nodes inside and the nodes linking to each node, and return the sum of K_n over all node pairs and all n)

We normalize the computed kernels before their use within the algorithms. The normalization corresponds to the standard unit norm normalization of examples in the feature space corresponding to the kernel space (Lodhi et al., 2002):

    \hat{K}(x, y) = \frac{K(x, y)}{\sqrt{K(x, x) \cdot K(y, y)}}.    (14)
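The unit-norm normalization of equation (14) is simple to apply once raw kernel values are available; the fragment below is a minimal sketch (the function names and the guard against zero self-similarity are our own choices, not taken from the paper).

import math

def normalized_kernel(k, x, y):
    # K_hat(x, y) = K(x, y) / sqrt(K(x, x) * K(y, y)), as in equation (14).
    kxx = k(x, x)
    kyy = k(y, y)
    if kxx == 0.0 or kyy == 0.0:
        return 0.0   # degenerate case: an object with no features
    return k(x, y) / math.sqrt(kxx * kyy)

# Hypothetical usage with the explicit_kernel sketch shown earlier:
# sim = normalized_kernel(explicit_kernel, phi_g1, phi_g2)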
4 Experiments

We evaluated the performance of the proposed method in an actual application of NLP; the data set is written in Japanese. We compared HDAG and DAG (the latter has no hierarchy structure) to the String Subsequence Kernel (SSK) for word sequences, the Dependency Structure Kernel (DSK) (Collins and Duffy, 2001) (a special case of the Tree Kernel), and the Cosine measure for feature vectors consisting of the occurrence of attributes (BOA), together with the same as BOA but using only the attributes of nouns and unknown words (BOA').

We expanded SSK and DSK to improve the total performance of the experiments. We denote them as SSK' and DSK' respectively. The original SSK treats only exact n-string combinations based on the parameter n; we consider string combinations of up to n for SSK'. The original DSK was specifically constructed for parse tree use; we expanded it to be able to treat the n-combinations of nodes and the free order of child node matching. Figure 4 shows some input objects for each evaluated kernel, (a) for HDAG, (b) for DAG and DSK', and (c) for SSK'. Note that, though DAG and DSK' treat the same input objects, their kernel calculation methods differ, as do the return values.

Figure 4: Examples of Input Object Structure: (a) HDAG, (b) DAG and DSK', (c) SSK' (the question "George Bush purchased a small interest in which baseball team ?" shown with its words, part-of-speech tags, chunks, named-entity class and dependency links under each representation)

We used the words and semantic information of "Goi-taikei" (Ikehara et al., 1997), which is similar to WordNet in English, as the attributes of the nodes. The chunks and their relations in the texts were analyzed by cabocha (Kudo and Matsumoto, 2002), and named entities were analyzed by the method of (Isozaki and Kazawa, 2002). We tested each n-combination case, changing the parameter λ from 0.1 through 0.9 in steps of 0.1. Only the best performance achieved under parameter λ is shown in each case.

Table 3: Results of the performance as a similarity measure for question classification
n       1      2      3      4      5      6
HDAG    .580   .583   .580   .579   .573
DAG     .577   .578   .573   .573   .563
DSK'    .547   .469   .441   .436   .436
SSK'    .568   .572   .570   .562   .548
BOA     .556
BOA'    .555

4.1 Performance as a Similarity Measure

Question Classification
We used the 1011 questions of NTCIR-QAC1² and the 2000 questions of the CRL-QA data³. We assigned them to 148 question types based on the CRL-QA data. We evaluated classification performance in the following steps. First, we extracted one question from the data. Second, we calculated the similarity between the extracted question and all the other questions. Third, we ranked the questions in order of descending similarity. Finally, we evaluated performance as a similarity measure by Mean Reciprocal Rank (MRR) (Voorhees and Tice, 1999) based on the question type of the ranked questions. Table 3 shows the results of this experiment.

Sentence Alignment
The data set (Hirao et al., 2003), taken from the "Mainichi Shinbun", was formed into abstract sentences and manually aligned to sentences in the "Yomiuri Shinbun" according to the meaning of the sentence (did they say the same thing). This experiment proceeded as follows. First, we extracted one abstract sentence from the "Mainichi Shinbun" data set. Second, we calculated the similarity between the extracted sentence and the sentences in the "Yomiuri Shinbun" data set. Third, we ranked the sentences in the "Yomiuri Shinbun" in descending order based on the calculated similarity values. Finally, we evaluated performance as a similarity measure using the MRR measure. Table 4 shows the results of this experiment.
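Both evaluations reduce to computing the Mean Reciprocal Rank over the ranked candidate lists; the following sketch shows the measure (the boolean-list input format is our own illustrative assumption).

def mean_reciprocal_rank(ranked_relevance):
    # ranked_relevance: one list per query; each inner list marks, in rank
    # order, whether the item at that rank is a correct answer (e.g. it has
    # the same question type).  MRR averages 1/rank of the first correct
    # item, contributing 0 when no correct item is retrieved.
    if not ranked_relevance:
        return 0.0
    total = 0.0
    for results in ranked_relevance:
        for rank, is_correct in enumerate(results, start=1):
            if is_correct:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

# First correct answers at ranks 1 and 2 -> MRR = (1 + 0.5) / 2 = 0.75
print(mean_reciprocal_rank([[True, False], [False, True]]))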
²http://www.nlp.cs.ritsumei.ac.jp/qac/
³http://www.cs.nyu.edu/˜sekine/PROJECT/CRLQA/

Table 4: Results of the performance as a similarity measure for sentence alignment
n       1      2      3      4      5      6
HDAG    .523   .484   .467   .442   .423
DAG     .503   .478   .461   .439   .420
DSK'    .174   .083   .035   .020   .021
SSK'    .479   .444   .422   .412   .398
BOA     .394
BOA'    .451

Table 5: Results of question classification by SVM with comparison kernel functions
n          1      2      3      4      5      6
HDAG       .862   .865   .866   .864   .865
DAG        .862   .862   .847   .818   .751
DSK'       .731   .595   .473   .412   .390
SSK'       .850   .847   .825   .777   .725
BOA+poly   .810   .823   .800   .753   .692   .625
BOA'+poly  .807   .807   .742   .666   .558   .468

4.2 Performance as a Kernel Function

Question Classification
The comparison methods were evaluated for their performance as a kernel function in a machine learning approach to Question Classification. We chose SVM as a kernel-based learning algorithm that produces state-of-the-art performance in several NLP tasks. We used the same data set as in the previous experiments, with the following difference: if a question type had fewer than ten questions, we moved the entries into the upper question type as defined in the CRL-QA data, to provide enough training samples for each question type. We used one-vs-rest as the multi-class classification method and selected the highest scoring question type. In the case of BOA and BOA', we used the polynomial kernel (Vapnik, 1995) to consider the attribute combinations. Table 5 shows the average accuracy of each question as evaluated by 5-fold cross validation.

5 Discussion

The experiments in this paper were designed to evaluate how well the similarity measure reflects the semantic information of texts. In the task of Question Classification, a given question is classified into a Question Type, which reflects the intention of the question. The Sentence Alignment task evaluates which sentence is the most semantically similar to a given sentence.

The HDAG Kernel showed the best performance in the experiments both as a similarity measure and as a kernel for the learning algorithm. This proves the usefulness of the HDAG Kernel in determining the similarity measure of texts and in providing an SVM kernel for resolving classification problems in NLP tasks. These results indicate that our approach, incorporating richer structures within texts, is well suited to tasks that require evaluation of the semantic similarity between texts. The potential use of the HDAG Kernel is very wide in NLP tasks, and we believe it will be adopted in other practical NLP applications such as Text Categorization and Question Answering.

Our experiments indicate that the optimal parameters of the combination number n and the decay factor λ depend on the task at hand. They can be determined by experiments.

The original DSK requires exact matching of the tree structure, even when expanded (DSK') for flexible matching. This is why DSK' showed the worst performance. Moreover, in the Sentence Alignment task, paraphrasing or different expressions with the same meaning are common, and the structures of the parse trees widely differ in general. Unlike DSK', SSK' and the HDAG Kernel offer approximate matching, which produces better performance.

The structure of HDAG approaches that of DAG if we do not consider the hierarchical structure. In addition, the structure of sequences (strings) is entirely included in that of DAG. Thus, the framework of the HDAG Kernel covers the DAG Kernel and SSK.

6 Conclusion

This paper proposed the HDAG Kernel, which can reflect the richer information present within texts.
Our proposed method is a very generalized framework for handling the structure inside a text. We evaluated the performance of the HDAG Kernel both as a similarity measure and as a kernel function. Our experiments showed that HDAG Kernel offers better performance than SSK, DSK, and the baseline method of the Cosine measure for feature vectors, because HDAG Kernel better utilizes the richer structure present within texts. References M. Collins and N. Duffy. 2001. Parsing with a Single Neuron: Convolution Kernels for Natural Language Problems. In Technical Report UCS-CRL-01-10. UC Santa Cruz. N. Cristianini and J. Shawe-Taylor. 2000. An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press. D. Haussler. 1999. Convolution Kernels on Discrete Structures. In Technical Report UCS-CRL-99-10. UC Santa Cruz. T. Hirao, H. Kazawa, H. Isozaki, E. Maeda, and Y. Matsumoto. 2003. Machine Learning Approach to MultiDocument Summarization. Journal of Natural Language Processing, 10(1):81–108. (in Japanese). S. Ikehara, M. Miyazaki, S. Shirai, A. Yokoo, H. Nakaiwa, K. Ogura, Y. Oyama, and Y. Hayashi, editors. 1997. The Semantic Attribute System, GoiTaikei — A Japanese Lexicon, volume 1. Iwanami Publishing. (in Japanese). H. Isozaki and H. Kazawa. 2002. Efficient Support Vector Classifiers for Named Entity Recognition. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), pages 390–396. T. Kudo and Y. Matsumoto. 2002. Japanese Dependency Analysis using Cascaded Chunking. In Proc. of the 6th Conference on Natural Language Learning (CoNLL 2002), pages 63–69. H. Lodhi, C. Saunders, J. Shawe-Taylor, N. Cristianini, and C. Watkins. 2002. Text Classification Using String Kernel. Journal of Machine Learning Research, 2:419–444. G. Salton, A. Wong, and C. Yang. 1975. A Vector Space Model for Automatic Indexing. Communication of the ACM, 11(18):613–620. V. N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer. E. M. Voorhees and D. M. Tice. 1999. The TREC-8 Question Answering Track Evaluation. Proc. of the 8th Text Retrieval Conference (TREC-8).
2003
5
Unsupervised Learning of Arabic Stemming using a Parallel Corpus
Monica Rogati†, Computer Science Department, Carnegie Mellon University, [email protected]
Scott McCarley, IBM TJ Watson Research Center, [email protected]
Yiming Yang, Language Technologies Institute, Carnegie Mellon University, [email protected]
† Work done while a summer intern at IBM TJ Watson Research Center

Abstract
This paper presents an unsupervised learning approach to building a non-English (Arabic) stemmer. The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. No parallel text is needed after the training phase. Monolingual, unannotated text can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre. Examples and results will be given for Arabic, but the approach is applicable to any language that needs affix removal. Our resource-frugal approach results in 87.5% agreement with a state of the art, proprietary Arabic stemmer built using rules, affix lists, and human annotated text, in addition to an unsupervised component. Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38% in average precision over unstemmed text, and 96% of the performance of the proprietary stemmer above.

1 Introduction

Stemming is the process of normalizing word variations by removing prefixes and suffixes. From an information retrieval point of view, prefixes and suffixes add little or no additional meaning; in most cases, both the efficiency and effectiveness of text processing applications such as information retrieval and machine translation are improved.

Building a rule-based stemmer for a new, arbitrary language is time consuming and requires experts with linguistic knowledge in that particular language. Supervised learning also requires large quantities of labeled data in the target language, and quality declines when using completely unsupervised methods. We would like to reach a compromise by using a few inexpensive and readily available resources in conjunction with unsupervised learning. Our goal is to develop a stemmer generator that is relatively language independent (to the extent that the language accepts stemming) and is trainable using little, inexpensive data.

This paper presents an unsupervised learning approach to non-English stemming. The stemming model is based on statistical machine translation and it uses an English stemmer and a small (10K sentences) parallel corpus as its sole training resources. A parallel corpus is a collection of sentence pairs with the same meaning but in different languages (i.e. United Nations proceedings, bilingual newspapers, the Bible). Table 1 shows an example that uses the Buckwalter transliteration (Buckwalter, 1999). Usually, entire documents are translated by humans, and the sentence pairs are subsequently aligned by automatic means. A small parallel corpus can be available when native speakers and translators are not, which makes building a stemmer out of such a corpus a preferable direction.

Arabic: m$rwE Altqryr
English: Draft report

Arabic: wAkdt mmvlp zAmbyA End ErDhA lltqryr An bldhA y$hd tgyyrAt xTyrp wbEydp Almdy fy AlmydAnyn AlsyAsy wAlAqtSAdy
English: In introducing the report, the representative of Zambia emphasised that her country was undergoing serious and far-reaching changes in the political and economic field.
Table 1: A Tiny Arabic-English Parallel Corpus We describe our approach towards reaching this goal in section 2. Although we are using resources other than monolingual data, the unsupervised nature of our approach is preserved by the fact that no direct information about non-English stemming is present in the training data. Monolingual, unannotated text in the target language is readily available and can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre. This optional step is closer to the traditional unsupervised learning paradigm and is described in section 2.4, and its impact on stemmer quality is described in 3.1.4. Our approach (denoted by UNSUP in the rest of the paper) is evaluated in section 3.1 by comparing it to a proprietary Arabic stemmer (denoted by GOLD). The latter is a state of the art Arabic stemmer, and was built using rules, suffix and prefix lists, and human annotated text. GOLD is an earlier version of the stemmer described in (Lee et al., ). The task-based evaluation section 3.2 compares the two stemmers by using them as a preprocessing step in the TREC Arabic retrieval task. This section also presents the improvement obtained over using unstemmed text. 1.1 Arabic details In this paper, Arabic was the target language but the approach is applicable to any language that needs affix removal. In Arabic, unlike English, both prefixes and suffixes need to be removed for effective stemming. Although Arabic provides the additional challenge of infixes, we did not tackle them because they often substantially change the meaning. Irregular morphology is also beyond the scope of this paper. As a side note for readers with linguistic background (Arabic in particular), we do not claim that the resulting stems are units representing the entire paradigm of a lexical item. The main purpose of stemming as seen in this paper is to conflate the token space used in statistical methods in order to improve their effectiveness. The quality of the resulting tokens as perceived by humans is not as important, since the stemmed output is intended for computer consumption. 1.2 Related Work The problem of unsupervised stemming or morphology has been studied using several different approaches. For Arabic, good results have been obtained for plural detection (Clark, 2001). (Goldsmith, 2001) used a minimum description length paradigm to build Linguistica, a system for which the reported accuracy for European languages is cca. 83%. Note that the results in this section are not directly comparable to ours, since we are focusing on Arabic. A notable contribution was published by Snover (Snover, 2002), who defines an objective function to be optimized and performs a search for the stemmed configuration that optimizes the function over all stemming possibilities of a given text. Rule-based stemming for Arabic is a problem studied by many researchers; an excellent overview is provided by (Larkey et al., ). Morphology is not limited to prefix and suffix removal; it can also be seen as mapping from a word to an arbitrary meaning carrying token. Using an LSI approach, (Schone and Jurafsky, ) obtained 88% accuracy for English. This approach also deals with irregular morphology, which we have not addressed. A parallel corpus has been successfully used before by (Yarowsky et al., 2000) to project part of speech tags, named entity tags, and morphology information from one language to the other. 
For a parallel corpus of comparable size with the one used in our results, the reported accuracy was 93% for French (when the English portion was also available); however, this result only covers 90% of the tokens. Accuracy was later improved using suffix trees. (Diab and Resnik, 2002) used a parallel corpus for word sense disambiguation, exploiting the fact that different meanings of the same word tend to be translated into distinct words. 2 Approach Figure 1: Approach Overview Our approach is based on the availability of the following three resources: • a small parallel corpus • an English stemmer • an optional unannotated Arabic corpus Our goal is to train an Arabic stemmer using these resources. The resulting stemmer will simply stem Arabic without needing its English equivalent. We divide the training into two logical steps: • Step 1: Use the small parallel corpus • Step 2: (optional) Use the monolingual corpus The two steps are described in detail in the following subsections. 2.1 Step 1: Using the Small Parallel Corpus Figure 2: Step 1 Iteration In Step 1, we are trying to exploit the English stemmer by stemming the English half of the parallel corpus and building a translation model that will establish a correspondence between meaning carrying substrings (the stem) in Arabic and the English stems. For our purposes, a translation model is a matrix of translation probabilities p(Arabic stem| English stem) that can be constructed based on the small parallel corpus (see subsection 2.2 for more details). The Arabic portion is stemmed with an initial guess (discussed in subsection 2.1.1) Conceptually, once the translation model is built, we can stem the Arabic portion of the parallel corpus by scoring all possible stems that an Arabic word can have, and choosing the best one. Once the Arabic portion of the parallel corpus is stemmed, we can build a more accurate translation model and repeat the process (see figure 2). However, in practice, instead of using a harsh cutoff and only keeping the best stem, we impose a probability distribution over the candidate stems. The distribution starts out uniform and then converges towards concentrating most of the probability mass in one stem candidate. 2.1.1 The Starting Point The starting point is an inherent problem for unsupervised learning. We would like our stemmer to give good results starting from a very general initial guess (i.e. random). In our case, the starting point is the initial choice of the stem for each individual word. We distinguish several solutions: • No stemming. This is not a desirable starting point, since affix probabilities used by our model would be zero. • Random stemming As mentioned above, this is equivalent to imposing a uniform prior distribution over the candidate stems. This is the most general starting point. • A simple language specific rule - if available If a simple rule is available, it would provide a better than random starting point, at the cost of reduced generality. For Arabic, this simple rule was to use Al as a prefix and p as a suffix. This rule (or at least the first half) is obvious even to non-native speakers looking at transliterated text. It also constitutes a surprisingly high baseline. 2.2 The Translation Model ∗ We adapted Model 1 (Brown et al., 1993) to our purposes. Model 1 uses the concept of alignment between two sentences e and f in a parallel corpus; the alignment is defined as an object indicating for each word ei which word fj generated it. 
To obtain the probability of a foreign sentence f given the English sentence e, Model 1 sums the products of the translation probabilities over all possible alignments:

Pr(f|e) ∼ Σ_{a} Π_{j=1..m} t(f_j | e_{a_j})

The alignment variable a_j controls which English word the foreign word f_j is aligned with. t(f|e) is simply the translation probability, which is refined iteratively using EM. For our purposes, the translation probabilities (in a translation matrix) are the final product of using the parallel corpus to train the translation model. To take into account the weight contributed by each stem, the model's iterative phase was adapted to use the sum of the weights of a word in a sentence instead of the count.

2.3 Candidate Stem Scoring
As previously mentioned, each word has a list of substrings that are possible stems. We reduced the problem to that of placing two separators inside each Arabic word; the "candidate stems" are simply the substrings inside the separators. While this may seem inefficient, in practice words tend to be short, and one- or two-letter stems can be disallowed. An initial, naive approach when scoring the stem would be to simply look up its translation probability, given the English stem that is most likely to be its translation in the parallel sentence (i.e. the English stem aligned with the Arabic stem candidate). Figure 3 presents scoring examples before normalization.
∗ Note that the algorithm to build the translation model is not a "resource" per se, since it is a language-independent algorithm.
Figure 3: Scoring the Stem (English phrase "the advisory committee", Arabic phrase "Alljnp AlAst$Aryp", task: stem AlAst$Aryp; the candidate stems of AlAst$Aryp receive scores such as 0.2, 0.7, 0.8, 0.1 before normalization).
However, this approach has several drawbacks that prevent us from using it on a corpus other than the training corpus. Both of the drawbacks below are brought about by the small size of the parallel corpus:
• Out-of-vocabulary words: many Arabic stems will not be seen in the small corpus
• Unreliable translation probabilities for low-frequency stems.
We can avoid these issues if we adopt an alternate view of stemming a word, by looking at the prefix and the suffix instead. Given the word, the choice of prefix and suffix uniquely determines the stem. Since the number of unique affixes is much smaller by definition, they will not have the two problems above, even when using a small corpus. These probabilities will be considerably more reliable and are a very important part of the information extracted from the parallel corpus. Therefore, the score of a candidate stem should be based on the score of the corresponding prefix and the suffix, in addition to the score of the stem string itself:

score("pas") = f(p) × f(a) × f(s), where a = Arabic stem, p = prefix, s = suffix

When scoring the prefix and the suffix, we could simply use their probabilities from the previous stemming iteration. However, there is additional information available that can be successfully used to condition and refine these probabilities (such as the length of the word, the part of speech tag if given, etc.).
Figure 4: Alternate View: Scoring the Prefix and Suffix (same example as Figure 3; the candidate affix splits of AlAst$Aryp receive scores such as 0.8, 0.7, 0.6, 0.1).
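To make the two-separator formulation and the factored score above concrete, here is a minimal sketch in Python. It is an illustration only, not the authors' implementation: the function names, the dictionaries t_prob (Model 1 translation probabilities) and affix_stats (prefix/suffix statistics from the previous iteration), and the 1e-9 smoothing floor are all assumptions introduced for the example.

```python
from itertools import product

def candidate_splits(word, min_stem_len=2):
    """Enumerate (prefix, stem, suffix) candidates obtained by placing two
    separators inside the word; very short stems can be disallowed."""
    n = len(word)
    for i, j in product(range(n + 1), repeat=2):
        if i <= j and (j - i) >= min_stem_len:
            yield word[:i], word[i:j], word[j:]

def score_candidate(prefix, stem, suffix, t_prob, affix_stats, english_stem):
    """score("pas") = f(p) x f(a) x f(s): the stem factor is the Model 1
    translation probability t(stem | aligned English stem); the affix factors
    are relative frequencies from the previous stemming iteration."""
    f_a = t_prob.get((stem, english_stem), 1e-9)   # t(a|e); the 1e-9 floor is assumed
    f_p = affix_stats["prefix"].get(prefix, 1e-9)
    f_s = affix_stats["suffix"].get(suffix, 1e-9)
    return f_p * f_a * f_s

def stem_distribution(word, t_prob, affix_stats, english_stem):
    """Return candidate splits with normalized scores, i.e. a distribution over
    candidate stems rather than a hard choice, as used during training."""
    splits = list(candidate_splits(word))
    scores = [score_candidate(p, a, s, t_prob, affix_stats, english_stem)
              for p, a, s in splits]
    total = sum(scores) or 1.0
    return sorted(zip((sc / total for sc in scores), splits), reverse=True)
```

In the paper's preferred models the affix factors are further conditioned, for example on the first and last letter of the word, as listed in Table 2 below.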
2.3.1 Scoring Models
We explored several stem scoring models, using different levels of available information. Examples include:
• Use the stem translation probability alone: score = t(a|e), where a = Arabic stem, e = corresponding word in the English sentence
• Also use prefix (p) and suffix (s) conditional probabilities; several examples are given in Table 2.

Probability conditioned on                      Scoring formula
the candidate stem                              t(a|e) × [p(p,s|a) + p(s|a) × p(p|a)] / 2
the length of the unstemmed Arabic word (len)   t(a|e) × [p(p,s|len) + p(s|len) × p(p|len)] / 2
the possible prefixes and/or suffixes           t(a|e) × p(s|S_possible) × p(p|P_possible)
the first and last letter                       t(a|e) × p(s|last) × p(p|first)
Table 2: Example Scoring Models

The first two examples use the joint probability of the prefix and suffix, with a smoothing back-off (the product of the individual probabilities). Scoring models of this form proved to be poor performers from the beginning, and they were abandoned in favor of the last model, which is a fast, good approximation to the third model in Table 2. The last two models successfully solve the problem of the empty prefix and suffix accumulating excessive probability, which would yield a stemmer that never removed any affixes. The results presented in the rest of the paper use the last scoring model.

2.4 Step 2: Using the Unlabeled Monolingual Data
This optional second step can adapt the trained stemmer to the problem at hand. Here, we are moving away from providing the English equivalent, and we are relying on learned prefix, suffix and (to a lesser degree) stem probabilities. In a new domain or corpus, the second step allows the stemmer to learn new stems and update its statistical profile of the previously seen stems. This step can be performed using monolingual Arabic data, with no annotation needed. Even though it is optional, this step is recommended since its sole resource can be the data we would need to stem anyway (see Figure 5).
Figure 5: Step 2 Detail (unstemmed Arabic text is passed through the stemmer to produce stemmed Arabic text).
Step 1 produced a functional stemming model. We can use the corpus statistics gathered in Step 1 to stem the new, monolingual corpus. However, the scoring model needs to be modified, since t(a|e) is no longer available. By removing the conditioning, the first/last letter scoring model we used becomes

score = p(a) × p(s|last) × p(p|first)

The model can be updated if the stem candidate score/probability distribution is sufficiently skewed, and the monolingual text can be stemmed iteratively using the new model. The model is thus adapted to the particular needs of the new corpus; in practice, convergence is quick (less than 10 iterations).

3 Results
3.1 Unsupervised Training and Testing
For unsupervised training in Step 1, we used a small parallel corpus: 10,000 Arabic-English sentences from the United Nations (UN) corpus, where the English part has been stemmed and the Arabic transliterated. For unsupervised training in Step 2, we used a larger, Arabic-only corpus: 80,000 different sentences in the same dataset. The test set consisted of 10,000 different sentences in the UN dataset; this is the testing set used below unless otherwise specified. We also used a larger corpus (a year of Agence France Press (AFP) data, 237K sentences) for Step 2 training and testing, in order to gauge the robustness and adaptation capability of the stemmer. Since the UN corpus contains legal proceedings, and the AFP corpus contains news stories, the two can be seen as coming from different domains.
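Before turning to how accuracy is measured, here is a minimal sketch of the Step 2 adaptation loop described in Section 2.4: the monolingual corpus is re-stemmed iteratively with the unconditioned model score = p(a) × p(s|last) × p(p|first). It is an illustration only: candidate_splits is the helper from the earlier sketch, the counts are assumed to be collections.Counter objects carried over from Step 1, and the add-one smoothing and hard best-split update are simplifications of the paper's soft, distribution-based update.

```python
from collections import Counter

def adapt_on_monolingual(words, candidate_splits, stem_c, pre_c, suf_c, iterations=10):
    """Step 2 sketch: iteratively re-stem a monolingual corpus.
    stem_c, pre_c and suf_c carry counts over from Step 1:
    stem_c[stem], pre_c[(first_letter, prefix)], suf_c[(last_letter, suffix)]."""

    def cond_prob(counter, context, event):
        # p(event | context) with add-one smoothing; the smoothing is an assumption
        total = sum(v for (c, _), v in counter.items() if c == context)
        return (counter[(context, event)] + 1.0) / (total + len(counter) + 1.0)

    segmentation = {}
    for _ in range(iterations):                      # convergence is quick in practice
        new_stem, new_pre, new_suf = Counter(), Counter(), Counter()
        stem_total = float(sum(stem_c.values())) or 1.0
        for w in set(words):
            scored = []
            for p, a, s in candidate_splits(w):
                score = ((stem_c[a] + 1.0) / (stem_total + 1.0)   # p(a)
                         * cond_prob(suf_c, w[-1], s)             # p(s|last)
                         * cond_prob(pre_c, w[0], p))             # p(p|first)
                scored.append((score, (p, a, s)))
            if not scored:                           # word shorter than the minimum stem
                continue
            best = max(scored)[1]
            segmentation[w] = best
            p, a, s = best
            new_stem[a] += 1
            new_pre[(w[0], p)] += 1
            new_suf[(w[-1], s)] += 1
        stem_c, pre_c, suf_c = new_stem, new_pre, new_suf
    return segmentation
```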
3.1.1 Measuring Stemmer Performance
In this subsection the accuracy is defined as agreement with GOLD. GOLD is a state of the art, proprietary Arabic stemmer built using rules, suffix and prefix lists, and human annotated text, in addition to an unsupervised component. GOLD is an earlier version of the stemmer described in (Lee et al., ). Freely available (but less accurate) Arabic light stemmers are also used in practice. When measuring accuracy, all tokens are considered, including those that cannot be stemmed by simple affix removal (irregulars, infixes). Note that our baseline (removing Al and p, leaving everything else unchanged) is higher than simply leaving all tokens unchanged. For a more relevant task-based evaluation, please refer to Subsection 3.2.

3.1.2 The Effect of the Corpus Size: How little parallel data can we use?
We begin by examining the effect that the size of the parallel corpus has on the results after the first step. Here, we trained our stemmer on three different corpus sizes: 50K, 10K, and 2K sentences. The high baseline is obtained by treating Al and p as affixes. The 2K corpus had acceptable results (if this is all the data available). Using 10K was significantly better; however, the improvement obtained when five times as much data (50K) was used was insignificant. Note that different languages might have different corpus size needs. All other results in this paper use 10K sentences.
Figure 6: Results after Step 1: Corpus Size Effect.

3.1.3 The Knowledge-Free Starting Point after Step 1
Figure 7: Results after Step 1: Effect of Knowing the Al+p Rule.
Although severely handicapped at the beginning, the knowledge-free starting point manages to narrow the performance gap after a few iterations. Knowing the Al+p rule still helps at this stage. However, the performance gap is narrowed further in Step 2 (see Figure 8), where the knowledge-free starting point benefited from the monolingual training.

3.1.4 Results after Step 2: Different Corpora Used for Adaptation
Figure 8 shows the results obtained when augmenting the stemmer trained in Step 1. Two different monolingual corpora are used: one from the same domain as the test set (80K UN), and one from a different domain/corpus, but three times larger (237K AFP). The larger dataset seems to be more useful in improving the stemmer, even though the domain was different.
Figure 8: Results after Step 2 (Monolingual Corpus).
The baseline and the accuracy after Step 1 are presented for reference.

3.1.5 Cross-Domain Robustness
Figure 9: Results after Step 2: Using a Different Test Set.
We used an additional test set that consisted of 10K sentences taken from AFP, instead of UN as in the previous experiments shown in Figure 8. Its purpose was to test the cross-domain robustness of the stemmer and to further examine the importance of applying the second step to the data needing to be stemmed. Figure 9 shows that, even though in Step 1 the stemmer was trained on UN proceedings, the results on the cross-domain (AFP) test set are comparable to those from the same domain (UN, Figure 8). However, for this particular test set the baseline was much higher; thus the relative improvement with respect to the baseline is not as high as when the unsupervised training and testing set came from the same collection.

3.2 Task-Based Evaluation: Arabic Information Retrieval
Task Description: Given a set of Arabic documents and an Arabic query, find a list of documents relevant to the query, and rank them by probability of relevance. We used the TREC 2002 documents (several years of AFP data), queries and relevance judgments.
The 50 queries have a shorter, "title" component as well as a longer "description". We stemmed both the queries and the documents using UNSUP and GOLD respectively. For comparison purposes, we also left the documents and queries unstemmed. The UNSUP stemmer was trained with 10K UN sentences in Step 1, and with one year's worth of monolingual AFP data (1995) in Step 2.
Evaluation metric: The evaluation metric used below is mean average precision (the standard IR metric), which is the mean of average precision scores for each query. The average precision of a single query is the mean of the precision scores after each relevant document retrieved. Note that average precision implicitly includes recall information. Precision is defined as the ratio of relevant documents to total documents retrieved up to that point in the ranking.
Results
Figure 10: Arabic Information Retrieval Results.
We looked at the effect of different testing conditions on the mean average precision for the 50 queries. In Figure 10, the first set of bars uses the query titles only, the second set adds the description, and the last set restricts the results to one year (1995), using both the title and description. We tested this last condition because the unsupervised stemmer was refined in Step 2 using 1995 documents. The last group of bars shows a higher relative improvement over the unstemmed baseline; however, this last condition is based on a smaller sample of relevance judgements (restricted to one year) and is therefore not as representative of the IR task as the first two testing conditions.

4 Conclusions and Future Work
This paper presents an unsupervised learning approach to building a non-English (Arabic) stemmer using a small sentence-aligned parallel corpus in which the English part has been stemmed. No parallel text is needed to use the stemmer. Monolingual, unannotated text can be used to further improve the stemmer by allowing it to adapt to a desired domain or genre. The approach is applicable to any language that needs affix removal; for Arabic, our approach results in 87.5% agreement with a proprietary Arabic stemmer built using rules, affix lists, and human annotated text, in addition to an unsupervised component. Task-based evaluation using Arabic information retrieval indicates an improvement of 22-38% in average precision over unstemmed text, and 93-96% of the performance of the state of the art, language-specific stemmer above. We can speculate that, because of the statistical nature of the unsupervised stemmer, it tends to focus on the same kind of meaning units that are significant for IR, whether or not they are linguistically correct. This could explain why the gap between GOLD and UNSUP is narrowed with task-based evaluation and is a desirable effect when the stemmer is to be used for IR tasks. We are planning to experiment with different languages, translation model alternatives, and to extend task-based evaluation to different tasks such as machine translation and cross-lingual topic detection and tracking.

5 Acknowledgements
We would like to thank the reviewers for their helpful observations and for identifying Arabic misspellings. This work was partially supported by the Defense Advanced Research Projects Agency and monitored by SPAWAR under contract No. N66001-99-2-8916. This research is also sponsored in part by the National Science Foundation (NSF) under grants EIA-9873009 and IIS-9982226, and in part by the DoD under award 114008N66001992891808.
However, any opinions, views, conclusions and findings in this paper are those of the authors and do not necessarily reflect the position of policy of the Government and no official endorsement should be inferred. References P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer. 1993. The mathematics of machine translation: Parameter estimation. In Computational Linguistics, pages 263–311. Tim Buckwalter. 1999. Buckwalter transliteration. http://www.cis.upenn.edu/∼cis639/arabic/info/translitchart.html. Alexander Clark. 2001. Learning morphology with pair hidden markov models. In ACL (Companion Volume), pages 55–60. Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 255–262, July. John Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. In Computational Linguistics. Leah Larkey, Lisa Ballesteros, and Margaret Connell. Improving stemming for arabic information retrieval: Light stemming and co-occurrence analysis. In SIGIR 2002, pages 275–282. Young-Suk Lee, Kishore Papineni, Salim Roukos, Ossama Emam, and Hany Hassan. Language model based arabic word segmentation. In To appear in ACL 2003. Patrick Schone and Daniel Jurafsky. Knowledge-free induction of morphology using latent semantic analysis. In 4th Conference on Computational Natural Language Learning, Lisbon, 2000. Matthew Snover. 2002. An unsupervised knowledge free algorithm for the learning of morphology in natural languages. Master’s thesis, Washington University, May. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2000. Inducing multilingual text analysis tools via robust projection across aligned corpora.
2003
50
Language Model Based Arabic Word Segmentation
Young-Suk Lee, Kishore Papineni, Salim Roukos (IBM T. J. Watson Research Center, Yorktown Heights, NY 10598); Ossama Emam, Hany Hassan (IBM Cairo Technology Development Center, P.O. Box 166, El-Ahram, Giza, Egypt)

Abstract
We approximate Arabic's rich morphology by a model in which a word consists of a sequence of morphemes in the pattern prefix*-stem-suffix* (* denotes zero or more occurrences of a morpheme). Our method is seeded by a small manually segmented Arabic corpus and uses it to bootstrap an unsupervised algorithm to build the Arabic word segmenter from a large unsegmented Arabic corpus. The algorithm uses a trigram language model to determine the most probable morpheme sequence for a given input. The language model is initially estimated from a small manually segmented corpus of about 110,000 words. To improve the segmentation accuracy, we use an unsupervised algorithm for automatically acquiring new stems from a 155 million word unsegmented corpus, and re-estimate the model parameters with the expanded vocabulary and training corpus. The resulting Arabic word segmentation system achieves around 97% exact match accuracy on a test corpus containing 28,449 word tokens. We believe this is a state-of-the-art performance and the algorithm can be used for many highly inflected languages provided that one can create a small manually segmented corpus of the language of interest.

1 Introduction
Morphologically rich languages like Arabic present significant challenges to many natural language processing applications because a word often conveys complex meanings decomposable into several morphemes (i.e. prefix, stem, suffix). By segmenting words into morphemes, we can improve the performance of natural language systems including machine translation (Brown et al. 1993) and information retrieval (Franz, M. and McCarley, S. 2002). In this paper, we present a general word segmentation algorithm for handling inflectional morphology capable of segmenting a word into a prefix*-stem-suffix* sequence, using a small manually segmented corpus and a table of prefixes/suffixes of the language. We do not address Arabic infix morphology, where many stems correspond to the same root with various infix variations; we treat all the stems of a common root as separate atomic units. The use of a stem as a morpheme (unit of meaning) is better suited than the use of a root for the applications we are considering in information retrieval and machine translation (e.g. different stems of the same root translate into different English words). Examples of Arabic words and their segmentation into prefix*-stem-suffix* are given in Table 1, where '#' indicates a morpheme being a prefix, and '+' a suffix.1 As shown in Table 1, a word may include multiple prefixes, as in llHSwl (l: for, Al: the), or multiple suffixes, as in HyAth (t: feminine singular, h: his). A word may also consist only of a stem, as in AlY (to/towards). The algorithm implementation involves (i) language model training on a morpheme-segmented corpus, (ii) segmentation of input text into a sequence of morphemes using the language model parameters, and (iii) unsupervised acquisition of new stems from a large unsegmented corpus.
1 Arabic is presented in both native and Buckwalter transliterated Arabic whenever possible. All native Arabic is to be read from right-to-left, and transliterated Arabic is to be read from left-to-right. The convention of marking a prefix with '#' and a suffix with '+' will be adopted throughout the paper.
The only linguistic resources required include a small manually segmented corpus ranging from 20,000 words to 100,000 words, a table of prefixes and suffixes of the language and a large unsegmented corpus. In Section 2, we discuss related work. In Section 3, we describe the segmentation algorithm. In Section 4, we discuss the unsupervised algorithm for new stem acquisition. In Section 5, we present experimental results. In Section 6, we summarize the paper.

2 Related Work
Our work adopts major components of the algorithm from (Luo & Roukos 1996): language model (LM) parameter estimation from a segmented corpus and input segmentation on the basis of LM probabilities. However, our work diverges from their work in two crucial respects: (i) a new technique of computing all possible segmentations of a word into prefix*-stem-suffix* for decoding, and (ii) an unsupervised algorithm for new stem acquisition based on a stem candidate's similarity to stems occurring in the training corpus.
(Darwish 2002) presents a supervised technique which identifies the root of an Arabic word by stripping away the prefix and the suffix of the word on the basis of a manually acquired dictionary of word-root pairs and the likelihood that a prefix and a suffix would occur with the template from which the root is derived. He reports 92.7% segmentation accuracy on a 9,606 word evaluation corpus. His technique pre-supposes at most one prefix and one suffix per stem regardless of the actual number and meanings of prefixes/suffixes associated with the stem.
(Beesley 1996) presents a finite-state morphological analyzer for Arabic, which displays the root, pattern, and prefixes/suffixes. The analyses are based on manually acquired lexicons and rules. Although his analyzer is comprehensive in the types of knowledge it presents, it has been criticized for its extensive development time and lack of robustness, cf. (Darwish 2002).
(Yarowsky and Wicentowsky 2000) presents a minimally supervised morphological analysis with a performance of over 99.2% accuracy for the 3,888 past-tense test cases in English. The core algorithm lies in the estimation of a probabilistic alignment between inflected forms and root forms. The probability estimation is based on the lemma alignment by frequency ratio similarity among different inflectional forms derived from the same lemma, given a table of inflectional parts-of-speech, a list of the canonical suffixes for each part of speech, and a list of the candidate noun, verb and adjective roots of the language. Their algorithm does not handle multiple affixes per word.
(Goldsmith 2000) presents an unsupervised technique based on the expectation-maximization algorithm and minimum description length to segment exactly one suffix per word, resulting in an F-score of 81.8 for suffix identification in English according to (Schone and Jurafsky 2001).
(Schone and Jurafsky 2001) proposes an unsupervised algorithm capable of automatically inducing the morphology of inflectional languages using only text corpora. Their algorithm combines cues from orthography, semantics, and contextual information to induce morphological relationships in German, Dutch, and English, among others. They report F-scores between 85 and 93 for suffix analyses and between 78 and 85 for circumfix analyses in these languages.
Although their algorithm captures prefix-suffix combinations or circumfixes, it does not handle the multiple affixes per word we observe in Arabic.

Word (Translit.)   Prefixes   Stem    Suffixes
AlwlAyAt           Al#        wlAy    +At
HyAth              -          HyA     +t +h
llHSwl             l# Al#     HSwl    -
AlY                -          AlY     -
Table 1: Segmentation of Arabic Words into Prefix*-Stem-Suffix*

3 Morpheme Segmentation
3.1 Trigram Language Model
Given an Arabic sentence, we use a trigram language model on morphemes to segment it into a sequence of morphemes {m1, m2, ..., mn}. The input to the morpheme segmenter is a sequence of Arabic tokens – we use a tokenizer that looks only at white space and other punctuation, e.g. quotation marks, parentheses, period, comma, etc. A sample of a manually segmented corpus is given below, in Buckwalter transliteration.2 Here multiple occurrences of prefixes and suffixes per word are marked with an underline.

w# kAn AyrfAyn Al*y Hl fy Al# mrkz Al# Awl fy jA}z +p Al# nmsA Al# EAm Al# mADy Ely syAr +p fyrAry $Er b# AlAm fy bTn +h ADTr +t +h Aly Al# AnsHAb mn Al# tjArb w# hw s# y# Ewd Aly lndn l# AjrA' Al# fHwS +At Al# Drwry +p Hsb mA A$Ar fryq jAgwAr. w# s# y# Hl sA}q Al# tjArb fy jAgwAr Al# brAzyly lwsyAnw bwrty mkAn AyrfAyn fy Al# sbAq gdA Al# AHd Al*y s# y# kwn Awly xTw +At +h fy EAlm sbAq +At AlfwrmwlA

2 A manually segmented Arabic corpus containing about 140K word tokens has been provided by LDC (http://www.ldc.upenn.edu). We divided this corpus into training and the development test sets as described in Section 5.

Many instances of prefixes and suffixes in Arabic are meaning-bearing and correspond to a word in English such as pronouns and prepositions. Therefore, we choose a segmentation into multiple prefixes and suffixes. Segmentation into one prefix and one suffix per word, cf. (Darwish 2002), is not very useful for applications like statistical machine translation (Brown et al. 1993), for which an accurate word-to-word alignment between the source and the target languages is critical for high quality translations.
The trigram language model probabilities of morpheme sequences, p(mi | mi-1, mi-2), are estimated from the morpheme-segmented corpus. At token boundaries, the morphemes from previous tokens constitute the histories of the current morpheme in the trigram language model. The trigram model is smoothed using deleted interpolation with the bigram and unigram models (Jelinek 1997), as in (1):

(1) p(m3 | m1, m2) = λ3 p(m3 | m1, m2) + λ2 p(m3 | m2) + λ1 p(m3), where λ1 + λ2 + λ3 = 1.

A small morpheme-segmented corpus results in a relatively high out-of-vocabulary rate for the stems. We describe below an unsupervised acquisition of new stems from a large unsegmented Arabic corpus. However, we first describe the segmentation algorithm.
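As a small illustration of equation (1), the sketch below computes the interpolated trigram probability from raw n-gram counts. It is not the authors' implementation: the count dictionaries and the fixed interpolation weights are assumptions made for the example (in practice the lambdas would be estimated, for instance on held-out data as in deleted interpolation), and the handling of unknown morphemes is described in Section 3.2.

```python
def interpolated_trigram_prob(m1, m2, m3, counts, lambdas=(0.2, 0.3, 0.5)):
    """p(m3 | m1, m2) = l3*p(m3|m1,m2) + l2*p(m3|m2) + l1*p(m3)  (equation 1).
    counts = {"uni": {...}, "bi": {...}, "tri": {...}} holds morpheme n-gram
    counts; the weights lambdas = (l1, l2, l3) must sum to 1 and are assumed."""
    l1, l2, l3 = lambdas

    def ratio(num, den):
        return num / den if den > 0 else 0.0

    total = sum(counts["uni"].values())
    p_uni = ratio(counts["uni"].get(m3, 0), total)
    p_bi = ratio(counts["bi"].get((m2, m3), 0), counts["uni"].get(m2, 0))
    p_tri = ratio(counts["tri"].get((m1, m2, m3), 0), counts["bi"].get((m1, m2), 0))
    return l3 * p_tri + l2 * p_bi + l1 * p_uni
```

At token boundaries, m1 and m2 are simply the last morphemes of the previous token, as noted above.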
3.2 Decoder for Morpheme Segmentation
We take the unit of decoding to be a sentence that has been tokenized using white space and punctuation. The task of a decoder is to find the morpheme sequence which maximizes the trigram probability of the input sentence, as in (2):

(2) SEGMENTATION_best = argmax Π_{i=1..N} p(mi | mi-1, mi-2), where N = number of morphemes in the input.

The search algorithm for (2) is informally described for each word token as follows:
Step 1: Compute all possible segmentations of the token (to be elaborated in 3.2.1).
Step 2: Compute the trigram language model score of each segmentation. For some segmentations of a token, the stem may be an out-of-vocabulary item. In that case, we use an "UNKNOWN" class in the trigram language model with the model probability given by p(UNKNOWN | mi-1, mi-2) * UNK_Fraction, where UNK_Fraction is 1e-9 determined on empirical grounds. This allows us to segment new words with a high accuracy even with a relatively high number of unknown stems in the language model vocabulary, cf. experimental results in Tables 5 & 6.
Step 3: Keep the top N highest scored segmentations.

3.2.1 Possible Segmentations of a Word
Possible segmentations of a word token are restricted to those derivable from a table of prefixes and suffixes of the language for decoder speed-up and improved accuracy. Table 2 shows examples of atomic (e.g. Al, At) and multi-component (e.g. wbAl, AthA) prefixes and suffixes, along with their component morphemes (shown here in Buckwalter transliteration).3

Prefixes: Al (= Al#), bAl (= b# Al#), wbAl (= w# b# Al#)
Suffixes: At (= +At), AthA (= +At +hA), wnhm (= +wn +hm)
Table 2: Prefix/Suffix Table

3 We have acquired the prefix/suffix table from a 110K word manually segmented LDC corpus (51 prefixes & 72 suffixes) and from IBM-Egypt (an additional 14 prefixes & 122 suffixes). The performance improvement from the additional prefix/suffix list ranges from 0.07% to 0.54% depending on the manually segmented training corpus size. The smaller the manually segmented corpus, the bigger the performance improvement from adding the additional prefix/suffix list.

Each token is assumed to have the structure prefix*-stem-suffix*, and is compared against the prefix/suffix table for segmentation. Given a word token, (i) identify all of the matching prefixes and suffixes from the table, (ii) further segment each matching prefix/suffix at each character position, and (iii) enumerate all prefix*-stem-suffix* sequences derivable from (i) and (ii). Table 3 shows all possible segmentations of the token wAkrrhA ('and I repeat it'),4 where ∅ indicates the null prefix/suffix and the Seg Score is the language model probability of each segmentation S1 ... S12. For this token, there are two matching prefixes, w# and wA#, from the prefix table, and two matching suffixes, +A and +hA, from the suffix table. S1, S2, & S3 are the segmentations given the null prefix ∅ and suffixes ∅, +A, +hA. S4, S5, & S6 are the segmentations given the prefix w# and suffixes ∅, +A, +hA. S7, S8, & S9 are the segmentations given the prefix wA# and suffixes ∅, +A, +hA. S10, S11, & S12 are the segmentations given the prefix sequence w# A# derived from the prefix wA# and suffixes ∅, +A, +hA. As illustrated by S12, derivation of sub-segmentations of the matching prefixes/suffixes enables the system to identify possible segmentations which would have been missed otherwise. In this case, the segmentation including the derived prefix sequence, w# A# krr +hA, happens to be the correct one.
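The per-token decoding procedure (Steps 1-3 together with the enumeration in 3.2.1) can be sketched as follows. This is an illustration only, under simplifying assumptions: interpolated_trigram_prob is the sketch given earlier, the affix tables are represented as lists of already-expanded morpheme sequences, and affixes are assumed to be in-vocabulary so that only stems fall back to the UNKNOWN class.

```python
UNK_FRACTION = 1e-9  # the empirically determined constant quoted above

def possible_segmentations(token, prefix_seqs, suffix_seqs):
    """Enumerate prefix*-stem-suffix* splits licensed by the affix tables.
    prefix_seqs / suffix_seqs are lists of (surface_string, morpheme_list) pairs,
    e.g. ("wA", ["wA#"]) and its sub-segmentation ("wA", ["w#", "A#"]);
    the null affix is represented as ("", [])."""
    for pre, pre_morphs in prefix_seqs:
        for suf, suf_morphs in suffix_seqs:
            if (len(token) - len(pre) - len(suf) > 0
                    and token.startswith(pre) and token.endswith(suf)):
                stem = token[len(pre):len(token) - len(suf)]
                yield pre_morphs + [stem] + suf_morphs

def score_segmentation(morphemes, history, counts, stems):
    """Step 2: trigram LM score of one segmentation; out-of-vocabulary stems
    are mapped to the UNKNOWN class and penalized by UNK_FRACTION."""
    score, (m1, m2) = 1.0, history
    for m in morphemes:
        known = m in stems or m.endswith("#") or m.startswith("+")
        target = m if known else "UNKNOWN"
        p = interpolated_trigram_prob(m1, m2, target, counts)
        score *= p if known else p * UNK_FRACTION
        m1, m2 = m2, m
    return score

def segment_token(token, history, prefix_seqs, suffix_seqs, counts, stems, top_n=3):
    """Steps 1-3: enumerate, score, and keep the top N segmentations."""
    candidates = [(score_segmentation(m, history, counts, stems), m)
                  for m in possible_segmentations(token, prefix_seqs, suffix_seqs)]
    return sorted(candidates, reverse=True)[:top_n]
```

With prefix entries ("", []), ("w", ["w#"]), ("wA", ["wA#"]) and ("wA", ["w#", "A#"]), and suffix entries ("", []), ("A", ["+A"]) and ("hA", ["+hA"]), the enumeration yields exactly the twelve candidates S1-S12 of Table 3 for wAkrrhA.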
3.2.2 Prefix-Suffix Filter
While the number of possible segmentations is maximized by sub-segmenting matching prefixes and suffixes, some illegitimate sub-segmentations are filtered out on the basis of the knowledge specific to the manually segmented corpus. For instance, sub-segmentation of the suffix hA into +h +A is ruled out because there is no suffix sequence +h +A in the training corpus. Likewise, sub-segmentation of the prefix Al into A# l# is filtered out. Filtering out improbable prefix/suffix sequences improves the segmentation accuracy, as shown in Table 5.

      Prefix   Stem      Suffix   Seg Score
S1    ∅        wAkrrhA   ∅        2.6071e-05
S2    ∅        wAkrrh    +A       1.36561e-06
S3    ∅        wAkrr     +hA      9.45933e-07
S4    w#       AkrrhA    ∅        2.72648e-06
S5    w#       Akrrh     +A       5.64843e-07
S6    w#       Akrr      +hA      4.52229e-05
S7    wA#      krrhA     ∅        7.58256e-10
S8    wA#      krrh      +A       5.09988e-11
S9    wA#      krr       +hA      1.91774e-08
S10   w# A#    krrhA     ∅        7.69038e-07
S11   w# A#    krrh      +A       1.82663e-07
S12   w# A#    krr       +hA      0.000944511
Table 3: Possible Segmentations of wAkrrhA
4 A sentence in which the token occurs is as follows: qlthA wAkrrhA fAlm$klp lyst fy AlfnT AlxAm wAnmA fy Alm$tqAt AlnfTyp.

4 Unsupervised Acquisition of New Stems
Once the seed segmenter is developed on the basis of a manually segmented corpus, the performance may be improved by iteratively expanding the stem vocabulary and retraining the language model on a large automatically segmented Arabic corpus. Given a small manually segmented corpus and a large unsegmented corpus, segmenter development proceeds as follows.
Initialization: Develop the seed segmenter Segmenter_0 trained on the manually segmented corpus Corpus_0, using the language model vocabulary, Vocab_0, acquired from Corpus_0.
Iteration: For i = 1 to N, N = the number of partitions of the unsegmented corpus:
i. Use Segmenter_{i-1} to segment Corpus_i.
ii. Acquire new stems from the newly segmented Corpus_i. Add the new stems to Vocab_{i-1}, creating an expanded vocabulary Vocab_i.
iii. Develop Segmenter_i trained on Corpus_0 through Corpus_i with Vocab_i.
Optimal Performance Identification: Identify the Corpus_i and Vocab_i which result in the best performance, i.e. system training with Corpus_{i+1} and Vocab_{i+1} does not improve the performance any more.
Unsupervised acquisition of new stems from an automatically segmented new corpus is a three-step process:
(i) select new stem candidates on the basis of a frequency threshold,
(ii) filter out new stem candidates containing a sub-string with a high likelihood of being a prefix, suffix, or prefix-suffix; the likelihood of a sub-string being a prefix, suffix, or prefix-suffix of a token is computed as in (5) to (7),
(iii) further filter out new stem candidates on the basis of contextual information, as in (8).

(5) Pscore = number of tokens with prefix P / number of tokens starting with sub-string P
(6) Sscore = number of tokens with suffix S / number of tokens ending with sub-string S
(7) PSscore = number of tokens with prefix P and suffix S / number of tokens starting with sub-string P and ending with sub-string S

Stem candidates containing a sub-string with a high prefix, suffix, or prefix-suffix likelihood are filtered out.
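The likelihood scores in (5)-(7) and the filtering in step (ii) can be sketched as follows. This is an illustration only, under assumptions that are not in the paper: the segmented corpus is represented as (token, prefix_string, stem, suffix_string) tuples, and the default threshold of 0.85 mirrors the value used later for the largest seed corpus (0.5 was used for the smaller ones).

```python
from collections import Counter

def affix_likelihoods(records):
    """records: (token, prefix_str, stem, suffix_str) tuples taken from the
    automatically segmented corpus. Returns dicts giving, for each sub-string,
    Pscore (eq. 5), Sscore (eq. 6) and PSscore (eq. 7)."""
    records = list(records)
    tokens = [r[0] for r in records]
    with_prefix = Counter(r[1] for r in records if r[1])
    with_suffix = Counter(r[3] for r in records if r[3])
    with_both = Counter((r[1], r[3]) for r in records if r[1] and r[3])

    pscores = {p: with_prefix[p] / sum(1 for t in tokens if t.startswith(p))
               for p in with_prefix}
    sscores = {s: with_suffix[s] / sum(1 for t in tokens if t.endswith(s))
               for s in with_suffix}
    psscores = {(p, s): c / sum(1 for t in tokens if t.startswith(p) and t.endswith(s))
                for (p, s), c in with_both.items()}
    return pscores, sscores, psscores

def filter_stem_candidates(candidates, pscores, sscores, psscores, threshold=0.85):
    """Step (ii): drop stem candidates containing a sub-string whose likelihood
    of being an affix (or affix pair) reaches the threshold."""
    kept = []
    for stem in candidates:
        risky = (any(stem.startswith(p) and v >= threshold for p, v in pscores.items())
                 or any(stem.endswith(s) and v >= threshold for s, v in sscores.items())
                 or any(stem.startswith(p) and stem.endswith(s) and v >= threshold
                        for (p, s), v in psscores.items()))
        if not risky:
            kept.append(stem)
    return kept
```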
Example sub-strings with the prefix, suffix, prefix-suffix likelihood 0.85 or higher in a 110K word manually segmented corpus are given in Table 4. If a token starts with the sub-string sn and ends with hA, the sub-string's likelihood of being the prefix-suffix of the token is 1. If a token starts with the sub-string ll, the sub-string's likelihood of being the prefix of the token is 0.945, etc.

Pattern (Translit.)   Score
sn# stem +hA          1.0
Al# stem +p           0.984
ll# stem              0.945
stem +At              0.889
Table 4: Prefix/Suffix Likelihood Score

(8) Contextual Filter:
(i) Filter out stems co-occurring with prefixes/suffixes not present in the training corpus.
(ii) Filter out stems whose prefix/suffix distributions are highly disproportionate to those seen in the training corpus.

According to (8), if a stem is followed by a potential suffix +m, not present in the training corpus, then it is filtered out as an illegitimate stem. In addition, if a stem is preceded by a prefix and/or followed by a suffix with a significantly higher proportion than that observed in the training corpus, it is filtered out. For instance, the probability for the suffix +A to follow a stem is less than 50% in the training corpus regardless of the stem properties, and therefore, if a candidate stem is followed by +A with a probability of over 70%, e.g. mAnyl +A, then it is filtered out as an illegitimate stem.

5 Performance Evaluations
We present experimental results illustrating the impact of three factors on segmentation error rate: (i) the base algorithm, i.e. language model training and decoding, (ii) language model vocabulary and training corpus size, and (iii) manually segmented training corpus size. Segmentation error rate is defined in (9).

(9) (number of incorrectly segmented tokens / total number of tokens) x 100

Evaluations have been performed on a development test corpus containing 28,449 word tokens. The test set is extracted from 20001115_AFP_ARB.0060.xml.txt through 20001115_AFP_ARB.0236.xml.txt of the LDC Arabic Treebank: Part 1 v 2.0 Corpus. Impact of the core algorithm and the unsupervised stem acquisition has been measured on segmenters developed from 4 different sizes of manually segmented seed corpora: 10K, 20K, 40K, and 110K words. The experimental results are shown in Table 5. The baseline performances are obtained by assigning each token the most frequently occurring segmentation in the manually segmented training corpus. The column headed by '3-gram LM' indicates the impact of the segmenter using only trigram language model probabilities for decoding. Regardless of the manually segmented training corpus size, use of trigram language model probabilities reduces the word error rate of the corresponding baseline by approximately 50%. The column headed by '3-gram LM + PS Filter' indicates the impact of the core algorithm plus the Prefix-Suffix Filter discussed in Section 3.2.2. The Prefix-Suffix Filter reduces the word error rate by amounts ranging from 7.4% for the smallest (10K word) manually segmented corpus to 21.8% for the largest (110K word) manually segmented corpus - around a 1% absolute reduction for all segmenters. The column headed by '3-gram LM + PS Filter + New Stems' shows the impact of unsupervised stem acquisition from a 155 million word Arabic corpus. Word error rate reduction due to the unsupervised stem acquisition is 38% for the segmenter developed from the 10K word manually segmented corpus and 32% for the segmenter developed from the 110K word manually segmented corpus. Language model vocabulary size (LM VOC Size) and the unknown stem ratio (OOV ratio) of the various segmenters are given in Table 6.
For unsupervised stem acquisition, we have set the frequency threshold at 10 for every 10-15 million word corpus, i.e. any new morphemes occurring more than 10 times in a 10-15 million word corpus are considered to be new stem candidates. The prefix, suffix, and prefix-suffix likelihood score used to further filter out illegitimate stem candidates was set at 0.5 for the segmenters developed from the 10K, 20K, and 40K manually segmented corpora, whereas it was set at 0.85 for the segmenters developed from the 110K manually segmented corpus. Both the frequency threshold and the optimal prefix, suffix, prefix-suffix likelihood scores were determined on empirical grounds. The Contextual Filter stated in (8) has been applied only to the segmenter developed from the 110K manually segmented training corpus.5 Comparison of Tables 5 and 6 indicates a high correlation between the segmentation error rate and the unknown stem ratio.
5 Without the Contextual Filter, the error rate of the same segmenter is 3.1%.

Manually Segmented Training Corpus Size   Baseline   3-gram LM   3-gram LM + PS Filter   3-gram LM + PS Filter + New Stems
10K Words                                 26.0%      14.7%       13.6%                   8.5%
20K Words                                 19.7%      9.1%        8.0%                    5.9%
40K Words                                 14.3%      7.6%        6.5%                    5.1%
110K Words                                11.0%      5.5%        4.3%                    2.9%
Table 5: Impact of Core Algorithm and LM Vocabulary Size on Segmentation Error Rate

                                          3-gram LM                3-gram LM + PS Filter + New Stems
Manually Segmented Training Corpus Size   LM VOC Size   OOV Ratio   LM VOC Size   OOV Ratio
10K Words                                 2,496         20.4%       22,964        7.8%
20K Words                                 4,111         11.4%       25,237        5.3%
40K Words                                 5,531         9.0%        21,156        4.7%
110K Words                                8,196         5.8%        25,306        1.9%
Table 6: Language Model Vocabulary Size and Out of Vocabulary Ratio

3-gram LM + PS Filter + New Stems
Manually Segmented Training Corpus Size   Unknown Stem    Alywm       Other Errors   Total # of Errors
10K Words                                 1,844 (76.9%)   98 (4.1%)   455 (19.0%)    2,397
20K Words                                 1,174 (71.1%)   82 (5.0%)   395 (23.9%)    1,651
40K Words                                 1,005 (69.9%)   81 (5.6%)   351 (24.4%)    1,437
110K Words                                333 (39.6%)     82 (9.8%)   426 (50.7%)    841
Table 7: Segmentation Error Analyses

Table 7 gives the error analyses of the four segmenters according to three factors: (i) errors due to unknown stems, (ii) errors involving Alywm, and (iii) errors due to other factors. Interestingly, the segmenter developed from the 110K manually segmented corpus has the lowest percentage of "unknown stem" errors at 39.6%, indicating that our unsupervised acquisition of new stems is working well, as well as suggesting the use of a larger unsegmented corpus for unsupervised stem acquisition. Alywm should be segmented differently depending on its part-of-speech to capture the semantic ambiguities. If it is an adverb or a proper noun, it is segmented as Alywm 'today/Al-Youm', whereas if it is a noun, it is segmented as Al# ywm 'the day'. Proper segmentation of Alywm primarily requires its part-of-speech information, and cannot be easily handled by morpheme trigram models alone. Other errors include over-segmentation of foreign words such as bwtyn as b# wtyn and lytr 'litre' as l# y# tr. These errors are attributed to the segmentation ambiguities of these tokens: bwtyn is ambiguous between bwtyn ('Putin') and b# wtyn ('by aorta'); lytr is ambiguous between lytr ('litre') and l# y# tr ('for him to harm'). These errors may also be corrected by incorporating part-of-speech information for disambiguation.
To address the segmentation ambiguity problem, as illustrated by bwtyn ('Putin') vs. b# wtyn ('by aorta'), we have developed a joint model for segmentation and part-of-speech tagging, for which the best segmentation of an input sentence is obtained according to formula (10), where ti is the part-of-speech of morpheme mi, and N is the number of morphemes in the input sentence.

(10) SEGMENTATION_best = argmax Π_{i=1..N} p(mi | mi-1, mi-2) p(ti | ti-1, ti-2) p(mi | ti)

By using the joint model, the segmentation word error rate of the best performing segmenter has been reduced by about 10%, from 2.9% (cf. the last column of Table 5) to 2.6%.

6 Summary and Future Work
We have presented a robust word segmentation algorithm which segments a word into a prefix*-stem-suffix* sequence, along with experimental results. Our Arabic word segmentation system implementing the algorithm achieves around 97% segmentation accuracy on a development test corpus containing 28,449 word tokens. Since the algorithm can identify any number of prefixes and suffixes of a given token, it is generally applicable to various language families including agglutinative languages (Korean, Turkish, Finnish), highly inflected languages (Russian, Czech) as well as Semitic languages (Arabic, Hebrew). Our future work includes (i) application of the current technique to other highly inflected languages, (ii) application of the unsupervised stem acquisition technique to an unsegmented Arabic corpus of about 1 billion words, and (iii) adoption of a novel morphological analysis technique to handle irregular morphology, as realized in Arabic broken plurals, e.g. ktAb 'book' vs. ktb 'books'.

Acknowledgment
This work was partially supported by the Defense Advanced Research Projects Agency and monitored by SPAWAR under contract No. N66001-99-2-8916. The views and findings contained in this material are those of the authors and do not necessarily reflect the position or policy of the Government and no official endorsement should be inferred. We would like to thank Martin Franz for discussions on language model building, and his help with the use of the ViaVoice language model toolkit.

References
Beesley, K. 1996. Arabic Finite-State Morphological Analysis and Generation. Proceedings of COLING-96, pages 89-94.
Brown, P., Della Pietra, S., Della Pietra, V., and Mercer, R. 1993. The mathematics of statistical machine translation: Parameter Estimation. Computational Linguistics, 19(2):263-311.
Darwish, K. 2002. Building a Shallow Arabic Morphological Analyzer in One Day. Proceedings of the Workshop on Computational Approaches to Semitic Languages, pages 47-54.
Franz, M. and McCarley, S. 2002. Arabic Information Retrieval at IBM. Proceedings of TREC 2002, pages 402-405.
Goldsmith, J. 2000. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(1).
Jelinek, F. 1997. Statistical Methods for Speech Recognition. The MIT Press.
Luo, X. and Roukos, S. 1996. An Iterative Algorithm to Build Chinese Language Models. Proceedings of ACL-96, pages 139-143.
Schone, P. and Jurafsky, D. 2001. Knowledge-Free Induction of Inflectional Morphologies. Proceedings of the North American Chapter of the Association for Computational Linguistics.
Yarowsky, D. and Wicentowski, R. 2000. Minimally supervised morphological analysis by multimodal alignment. Proceedings of ACL-2000, pages 207-216.
Yarowsky, D., Ngai, G. and Wicentowski, R. 2001. Inducing Multilingual Text Analysis Tools via Robust Projection across Aligned Corpora. Proceedings of HLT 2001, pages 161-168.
2003
51
    ! " #%$&')( *+#-,.#/0(!12 (3  (546 04879& ;:<>=!#?#>;@A:<'CBD?# EFG IH? JLK%MONQP R"S;TVULTCJWTYX[Z]\_^%NQNa` bdc egf@hNQij^%N;kWhml)hnPhmophqbdc egr>h\_htsuRFS;TCJWv&Z_wpkLZ]x y;z{p|c}~€{_[‚'}„ƒ'…‡†dˆ‰y†mŠ3…€† ‹]Œ'_Œ[އ†mc‘†’”“;•cŠ3…€†mŠ3•’y†mŠ3…€†’n–€bd— bjc˜ ™3šœ›žj™5Ÿ¡ ¢ £j™ ¤m¥‘¦)j§m›”3¨d¢+©[©Qš ªj¢ Ÿm™€« Ÿ©£3¥‰© ¬œ­ V®I£nœ¥‘¦”m§Ÿt£j™6©'®m¢)¡©Q¯5¤¡©'¬€­ TC°%\sj±jhims ²;³+´&µm¶m·¹¸»º0¼[µ‘´½€¾m¿'·ÁÀmÂÄÃjÀmÅÆ¼[µ‘´Ç´&¾€Àm·»¼]ÃÈ ¿·¹µmÀm³ɀÃ]ÊdºË¶œº[¼[µ‘´&ºp½œµ‘½€¾m¸ÁÃjÌ[ͽ€Ì º[ŀ·»¼[¿'·»Êjº ¿º[Α¿º]Àd¿ÏÌ Ðdz'Ðm³'¿'º]´Ë³+ɀÃÊjº6¶œº[¼[µ‘´&ºÃjÀÆ·ÁÀdÈ ¼]ÌѺ]Ã_³'·ÁÀmÂj¸»Ð"·Á´Ç½€µ‘ÌÑ¿ÏÃjÀd¿Ç¿'º[¼[ɀÀmµj¸»µjÂjÐjÒFÓÎdÈ ·»³'¿·ÔÀmÂ!´&º[¿ÏÉmµmŀ³@³'¿·¹¸»¸‡Àmº[º[ÅÕÌ º ÖnÀmº]´Ëº]Àj¿]Í ¿ Émµ‘¾m‘ÉÍÇ×G·¹¿ ÉÌ º[³Ï½œº[¼[¿W¿µØ½œº]Ì ³'µ‘À€Ã_¸»·»Ù]ÃÈ ¿·¹µmÀÍ º[³Ï½œº[¼[·ÔÃu¸¹¸»ÐÉmµ_×!¿'µ.Ã_¼[ڀ¾m·ÁÌ ºÆÊjµ‘¼]Ãd¶dÈ ¾m¸ÁÃjÌÑÐÛÀmµd¿½€Ì º ÈÜÌ º[Âj·»³'¿'º]ÌѺ[Å@·ÁÀ@¿ÏÉmº&³'Ðm³'¿'º]´ ŀ·»¼[¿'·»µ‘À€ÃdÌ ÐjÒgÝaÀÞ¿ÏÉm·»³V½€Ãd½€º]Ì[Í0׺WÌѺ]½€µ‘Ì ¿ µmÀpÃjÀÆÃj¾m¿'µm´ÇÃ_¿'·»¼0´&º[¿ÏÉmµmŇ¿ÏɀÃ_¿%ŀЀÀ€Ãj´&·ßÈ ¼]Ãu¸¹¸»Ðǵm¶m¿ÏÃ_·ÁÀm³%þm³'º]Ì)³ ½€º[¼[·ßÖ3¼0Êjµm¼]Ãj¶€¾m¸ÁÃjÌ Ð àaÌ µ‘´Þ¿ÏÉmº¾m³'º]Ì[áâ³%¾€À€ÃjÀ€Ã_¸»Ð‘Ù[º[Åŀµ‘¼]¾€´&º]Àd¿'³]Ò ã Émº]À@Ǿm³º]Ì0´ÇÃjä_º[³ÃjÀYº]Àd¿ÏÌ ÐjÍ5¿ÏÉmº&³'Б³ÑÈ ¿º]´ ŀЀÀ€Ãj´&·»¼]Ã_¸»¸»Ðåº[Îm¿ÏÌ'Ã_¼[¿³"¿ÏÉmºO¼[µ‘Ì'ÌѺ È ³ ½€µ‘Àmŀ·ÁÀm¼[Ém¾€À€äj³LàaÌ µ‘´ ¿ÏÉmºO¾m³'º]ÌL¿'º[Îm¿ ÃdÀmÅæ³Ï¾mÂdÂjº[³'¿'³‡¿ÏÉmº]´çÃ_¸»µ‘ÀmÂè×·»¿ÏÉÞסµmÌ Å€³ ³ ¾mÂjÂjº[³'¿'º[Å.¶dЇ¿ÏÉmºŀ·¹¼[¿·¹µmÀ€ÃjÌ ÐjÒ ã ·»¿Ïɇµ‘¾€Ì ´Ëº[¿ÏÉmµ‘Å Íé¿'º[Α¿³ê·ÁÀ"ÃY½€ÃjÌ ¿'·»¼]¾m¸ÁÃjÌ&³'¿ Ðm¸»º‡µ‘Ì ¼[µmÀm¼[º]Ì'Àm·ÁÀmÂóϽ€º[¼[·ßÖ3¼ŀµ‘´ÇÃu·ÔÀ¼]ÃjÀ¶œº%º]ÀdÈ ¿º]Ì º[Åľm³·ÔÀmÂÇÃ&½€Ì º[ŀ·»¼[¿'·»Êjº0¿'º[Îm¿º]Àj¿ Ì Ð&³'Б³ÑÈ ¿º]´‡Ò ã º&Êjº]Ì ·ßÖ3º[ÅL¿ÏɀÃ_¿ÃƸÁÃjÌ ÂjºÆÃj´&µ‘¾€Àd¿ µuàסµmÌ Å€³pÀmµj¿pÌѺ[Âj·»³'¿'º]Ì º[ÅF·ÁÀè¿ÏÉmº.ŀ·»¼[¿'·»µ_È À€ÃdÌ Ð&¼]ÃjÀp¶œº+º]Àd¿'º]Ì º[Ň¾m³'·ÁÀmÂ&µ‘¾€Ì%´Ëº[¿ÏÉmµ‘Å Ò ë Z]ì5su±dRí%K%imsjNQR ì î º[¼[º]Àd¿Ã_ŀÊïÃdÀm¼[º[³0·ÔÀY¿'º[¼[ɀÀmµj¸»µjÂj·»º[³Àmµ_×!Ã_¸»¸¹µ_×!Ê_ÃjÌQÈ ·»µ‘¾m³p´&º]ÃdÀm³êµuà‰·ÁÀdàðµ‘Ì'´ÇÃu¿'·»µ‘À@º]Àd¿ÏÌ ÐjÍé³Ï¾m¼ñÉLÃ_³&¼[ɀÃjÌQÈ Ã_¼[¿'º]ÌGµ‘ÌG³Ï½€º[º[¼[ÉCÌ º[¼[µj‘Àm·»¿'·»µ‘ÀÒ0òm¿'·»¸»¸ðͺ]Àj¿ Ì Ð.¾m³'·ÁÀmÂ.à ä_º[Ѐ¶€µ‘ÃdÌ Å@Ì º]´ÇÃ_·ÁÀm³0ŀµ‘´&·ÁÀ€ÃjÀd¿¶€º[¼]Ãd¾m³'ºÇµ_à·¹¿³0º]Ã_³'º µ_à·Á´Ç½m¸»º]´&º]Àj¿ Ã_¿'·»µ‘À.ÃdÀmÅ.·»¿'³+¾m¿'·»¸»·»¿ ÐjÒ ã Ém·»¸¹º;ÊïÃdÌ ·»µ‘¾m³%¿'º[Îm¿º]Àd¿ÏÌ Ðê´Ëº[¿ÏÉmµ‘Å€³ó¼]ÃjÀ&¶€º+¾m³º[Å ×·»¿ÏÉèÃWä_º[Ѐ¶€µmÃjÌ Å Í󵑾€ÌǼ[µmÀm¼[º]Ì'À"·ÁÀè¿ÏÉm·»³Ä½€Ãd½€º]Ì·¹³ ô3õ_ö]÷møaù ú øaûmöÇú ö]üœúËöýtúÜõ_þ Ò2²6ÀdÐW½€Ì º[ŀ·»¼[¿'·»Êjº@´&º[¿ÏÉmµmÅ ÿ    !" 
#"$ %& '(") " * #+(,  $.-/ 0# 0 1 '# 0# (,2$3  4 567 ") "789:9 ; <2=  :9 # >8?@+?,AB '(") "C8?0 0 D"E: * #+(F ,  $ Ã_¸»¸»µ_׳¿'º[Îm¿0¿'µC¶€ºÇº]Àd¿'º]Ì º[Å@¼[µmÀj¿'·ÁÀm¾mµ‘¾m³'¸»Ð@·ÁÀ ¿ÏÉ€Ì º[º ³'¿ÏÃuÂjº[³G H ÒJIGÉmº+¾m³'º]Ì)º]Àd¿'º]Ì ³%ÃjÀLKMON øPRQRSEQUT ¼[ɀÃjÌ'Ãu¼[¿'º]Ì)³'º È Ú€¾mº]Àm¼[º V ÒJIGÉmº¡¿º[Α¿)º]Àj¿ Ì Ð0³'Ðm³'¿'º]´ ¸»µ‘µmäj³5à µmÌ ¼[µ‘ÌÌ º[³Ï½€µmÀmÅmÈ ·ÁÀmÂǼ]ÃdÀmŀ·¹Å3Ãu¿'º[³+·ÁÀ‡Ã&ŀ·»¼[¿'·»µ‘À€ÃdÌ ÐĽ€ÌѺ ÈÜÃ_¿'¿ÏÃ_¼[Émº[Å ¿µ¿ÏÉmº³'µ_àð¿ ׉ÃdÌ ºjÒnÝ ¿”³µ‘Ì ¿'³)¿ÏÉmº%¼]ÃdÀmŀ·¹Å3Ãu¿'º[³×·»¿ÏÉ ÌѺ[‘ÃjÌ Å‡¿'µ&¿ÏÉmº0¼[µmÀj¿'º[Îm¿+ÃjÀmŇŀ·»³Ï½m¸ÁÃÐm³+¿ÏÉmº]´ ¿'µ ¿ Émº¾m³'º]Ì[Ò W ÒJIGÉmºV¾m³'º]̼ñÉmµmµj³'º[³ÆÉm·¹³.½€ÌѺ à º]Ì'ÌѺ[Å×µ‘Ì ÅLàaÌ µ‘´ Ãd´&µ‘ÀmÂ&¿ÏÉmº¼]ÃdÀmŀ·¹Å3Ãu¿'º[³]Ò X µmÌ%º[΀Ãj´Ç½m¸»ºjÍRYGÉm·ÔÀmº[³ºê½m·ÁÀjÐm·ÁÀdÈÜɀÃjÀ(Z ·¼[µ‘ÀjÊdº]Ì ³'·»µ‘À·¹³ Ã&½€Ì º[ŀ·»¼[¿'·»Êjº¿º[Α¿º]Àd¿ÏÌ ÐpɀÃÊm·ÔÀmÂÆÃjÀpÃd´ê¶m·»Â‘¾mµm¾m³³'º È Ú€¾mº]Àm¼[º+¶€º[·ÁÀmÂê½m·ÁÀ&Ðm·ÁÀÇÃjÀmÅ&¿ÏÃjÌÑÂjº[¿ ×µ‘Ì ŀ³×+Ém·»¼[É&ÃjÌ º ɀÃjÀ(Z · ×µ‘Ì ŀ³]Ò [ ·¹³¿'µ‘Ì ·»¼]Ã_¸»¸»ÐjÍdzϾm¼[É!½€Ì º[ŀ·»¼[¿'·»Êjºè¿'º[Α¿Yº]Àj¿ Ì ÐO³'Ðm³ È ¿'º]´&³)ɀÃ]Êdº%¶€º[º]À&½€µ‘½€¾m¸ÁÃjÌ5µ‘Àm¸»Ð0·ÁÀ&Ó%Ã_³'¿²;³'·ÁÃjÀ0¼[µm¾€ÀdÈ ¿ÏÌ ·»º[³ó×+Émµj³'º¸ÁÃjÀmÂm¾€Ã_Âjº[³¡¾m³º+´ÇÃjÀdм[ɀÃjÌ'Ã_¼[¿º]Ì ³]Ò\IÉmº ½€Ì µ‘¶m¸»º]´ ×·»¿ÏÉ¿ÏÉmº[³º¡¸ÁÃjÀmÂm¾€Ã_Âjº[³×Ã_³ ¿ÏɀÃu¿”¿ÏÉmºÀm¾€´È ¶€º]Ì5µ_à3¼[ɀÃjÌ'Ã_¼[¿º]Ì ³¿'µ6¶œº¡É€ÃjÀmŀ¸»º[Å&º[Îm¼[º[º[ŀ³¿ÏÉmº¡Àm¾€´È ¶€º]Ìéµ_àIäuº[Б³óµ‘À&Ãä_º[Ѐ¶€µ‘ÃdÌ Å Ò\IÉmº]Ì º àðµ‘Ì ºjÍj½€Ì º[ŀ·»¼[¿'·»Êjº ¿'º[Îm¿º]Àj¿ Ì Ð0×Ã_³”·ÁÀdÊjº]Àd¿'º[ÅÇ¿'µ0º]À€Ãj¶m¸»º+½€Ìѵ!Z º[¼[¿'·»µ‘Àµ_ànà ×·»Å€ºÌ'ÃjÀmÂdº+µ_à)¼ñɀÃjÌÃ_¼[¿'º]Ì ³¾m³'·ÁÀmÂÇ¿ÏÉmº0¸»·Á´&·»¿'º[ÅVÀm¾€´È ¶€º]ÌGµ_àžäuº[Б³µmÀ.Ã&ä_º[Ѐ¶€µ‘ÃjÌÑÅ Ò IÉm·»³ê½€Ìѵ‘¶m¸»º]´ ɀÃ_³¶€º[¼[µ‘´ËºË´Ëµ‘Ì º·ÁÀd¿'º]Ì'À€Ã_¿·¹µmÀ€Ã_¸ Ã_³³ ´ÇÃ_¸»¸»º]Ì0´ÇÃ_¼[Ém·ÔÀmº[³0ɀÃÊjºê¶œº[º]ÀYŀº[Êjº[¸»µ‘½€º[Å ÒDY¾€ÌQÈ Ì º]Àd¿.ŀº[Êm·»¼[º[³@¼]ÃjÀ ¶€º"Ã_³.³Ï´Ã_¸»¸ËÃu³VÃ"×;Ì ·»³'¿ ׉Ãu¿'¼ñÉ ] Ý0^J_"Í V!`!`aHb Í0³'µF¿ÏÉmºÀm¾€´&¶€º]ÌYµ_àp¼[ɀÃjÌ'Ãu¼[¿'º]Ì ³C·ÁÀ ÃjÀdÐǸÁÃjÀm‘¾€Ã_Âdº+×·»¸»¸º[Α¼[º[º[Ň¿ Émº6Àm¾€´&¶€º]̵_à”¶€¾m¿'¿µ‘Àm³ ÃÊïÃ_·»¸ÁÃj¶m¸»ºà µ‘ÌG·ÁÀ€½€¾m¿]Ò²+³Ã‡Ì º[³ ¾m¸¹¿Í ½€Ì º[ŀ·»¼[¿'·»ÊjºÇ¿'º[Îm¿ º]Àd¿ÏÌ ÐÇɀÃ_³%¶€º[º]À×·»Å€º[¸»Ðpŀ·»³'¼]¾m³³'º[Åp·ÁÀ¿ÏÉmº+Ã_¼]Ãuŀº]´&·»¼ ÃjÀmÅC·ÁÀmÅ3¾m³'¿ÏÌ ·ÁÃ_¸ŀµm´ÇÃ_·ÁÀm³]Ò.IÉmºcIedÆ´&º[¿ÏÉmµmÅVµuà%º]ÀdÈ ¿'º]Ì ·ÁÀmÂCÃÆÅ€·¹Âd·¹¿&³'º[ڀ¾mº]Àm¼[º‡ÃjÀmÅ@½€Ì º[ŀ·»¼[¿'·ÁÀmÂ@×µ‘Ì ŀ³;·¹³ ÃjÀº[΀Ãj´Ç½m¸»ºµ_à3µ‘Àmº×ÃÐ0¿'µ0ŀº]Ã_¸m×·»¿ÏÉ&¿ÏÉm·»³%½€Ì µm¶m¸¹º]´ ] I5º[Âj·»¼jÍ V!`!`!`!b Ò î º[³'º]ÃdÌ ¼ñɇɀÃu³¡º[Êdº]ÀdzÏÉmµ_×+À¿ÏɀÃ_¿ÃjÀ º]Àd¿ÏÌ Ð¼]ÃdÀ궜º+´ÇÃ_ŀºG×·»¿ÏÉ&Ì º]Ã_³'µ‘À€Ãd¶m¸¹ºGº fp¼[·»º]Àm¼[Ðp¾m³ È ·ÁÀmÂ.µ‘Àm¸»ÐCà𵑾€Ì0¶€¾m¿'¿µ‘Àm³0·ßàGÃY½€Ì º[ŀ·»¼[¿'·»ÊjºÆ¿'º[Α¿0º]Àd¿ÏÌ Ð ´&º[¿ÏÉmµmŇ·»³6Ãd½€½m¸¹·»º[Å ] I)ÃjÀ€ÃjäÃÈÜÝð³ÏÉm·»·)º[¿+Ã_¸ðÒÁÍ Vg`!`!V!b Ò IÉmº@´ÇÃhZܵ‘ÌËÅ3Ì'Ã×+¶€Ã_¼[ä@¿'µL¿ÏÉm·»³.½€Ì º[ŀ·»¼[¿'·»ÊjºC¿'º[Îm¿ I)Ãj¶m¸»º H GJIGÉmº î Ã_¿'º0µ_à;À€ä‘Àmµ_×+À ã µmÌ Å€³ ,  E",6+   :(4 0    8 + 0#,?  ,) ;     -U+      * + 3 0#+E" ! U (  " #" $$ %  # 0  &   (' -) #*+& %  #?",,3?,- " .   , 0/ 1 # +   %  # 0  .  "* (' -) #*+&*/ . ". "2 " * ", D" = 34F0&L+  0F $5 " $"  & R  e-/  6 &)+. ($5 . 
$  "* -U+  " $ #" &* º]Àd¿ÏÌ Ð"´&º[¿ÏÉmµmÅ·»³ÄÌѺ[¸ÔÃu¿'º[Å¿'µ ¿ÏÉmº.ŀ·»¼[¿'·»µ‘À€ÃjÌÑоm³'ºjÒ IÉmºp¾m³º]Ì0¼]ÃjÀ€Àmµj¿º]Àj¿º]Ì0×µ‘Ì ŀ³ ý/Súõ_ö P3øT[ú ö]õ_ö]÷ ·ÁÀ ¿ÏÉmº@ŀ·»¼[¿'·»µ‘À€ÃdÌ Ð ] ×+ɀÃu¿‡×¡º@Ì º àðº]̇¿µÃ_³87:9<;=?>A@CB*D E =F;5=FGL×µ‘Ì ŀ³·ÁÀ ¿ÏÉmº0àðµj¸»¸»µï×G·ÔÀm b Ò>IÉm¾m³]Í3ü[µ‘ÀdÊjº]ÀdÈ ¿'·»µ‘À€Ã_¸€½€Ì º[ŀ·»¼[¿'·»Êjºº]Àd¿ÏÌ Ð;³'Б³¿'º]´O¼]ÃdÀ€Àmµj¿”º]Ãu³'·»¸¹Ð&ɀÃjÀdÈ Å€¸»º0¿'º[Îm¿%×+Ì ·»¿'¿º]ÀǾm³'·ÁÀmÂpÃ0³ ½€º[¼[·ÁÃ_¸Êjµm¼]Ãj¶€¾m¸ÁÃjÌ Ðǵ‘Ìé·ÁÀ ÃjÀY¾€À‘¾m³Ï¾€Ãu¸ ³'¿ Б¸»ºHœà µm̺[Î Ãd´Ç½m¸»ºjÍ¿'º[Α¿G×+Ì ·»¿'¿'º]À ·ÔÀYà ½€ÃjÌ ¿·¹¼]¾m¸ÁÃjÌŀ·ÁÃ_¸»º[¼[¿]͵mÌê¾m³·ÔÀmÂCµj¸»ÅL×µ‘Ì ŀ³µmÌ0×µ‘Ì ŀ³ ×·»¿ÏÉ ³Ï½€º[¼[·ßÖ3¼&¿'º[¼[ɀÀm·¹¼]Ãu¸”´&º]ÃjÀm·ÁÀmÂj³]Ò IÉm·»³%½€Ì µ‘¶m¸»º]´Þ·»³”¼]¾€Ì'ÌѺ]Àj¿'¸»ÐɀÃjÀmŀ¸»º[Å&¿ÏÉ€Ì µm¾m‘É;¿ÏÉmº ¼]Ì º]Ã_¿·¹µmÀ"µ_àÃ@¾m³'º]Ìŀ·»¼[¿'·»µ‘À€ÃjÌ ÐjÒIG³'º]Ì ³Ç¼]ÃdÀ"º]Àd¿'º]Ì ¾€À€Ì º[Âj·»³'¿º]Ì º[ÅO×µ‘Ì ŀ³Æ·ÁÀ³µ‘´&ºC×Ã]Ð ] ºjÒ ÂtÒÁͼñɀÃdÌ'Ã_¼ È ¿'º]̶dÐ&¼ñɀÃjÌÃ_¼[¿'º]Ì b ÃjÀmÅÆÌ º[Âj·»³'¿'º]Ì)¿ÏÉmºGסµmÌ Å€³·ÁÀj¿'µ¿ÏÉmº ¾m³'º]ÌÆÅ€·»¼[¿'·»µ‘À€ÃjÌÑÐjÒ!²àð¿'º]Ì¿ÏɀÃ_¿]Í¿ Émº.³'Ðm³'¿'º]´ Ì'ÃjÀ€äd³ ¿ÏÉmº[³'ºGסµmÌ Å€³%Ém·»Â‘Ém¸»ÐÇ×+Émº]À&¿ÏÉmº;¾m³'º]̺]Àd¿'º]Ì ³¿ÏÉmºG¼[µ‘ÌQÈ Ì º[³Ï½œµ‘Àmŀ·ÁÀmÂ@³'º[ڀ¾mº]Àm¼[ºjÒ [ µ_׺[Êjº]Ì[Í)·»¿&·»³ê¿ ÉmºÄ¾m³'º]Ì á ³ Ì º[³Ï½œµ‘Àm³'·Á¶m·»¸»·¹¿ ÐÆ¿'µ0Ì º[Âj·»³'¿'º]Ì)¿ÏÉmºÊjµm¼]Ãj¶€¾m¸ÁÃjÌ Ð&·ÁÀd¿'µ0¿ÏÉmº ¾m³'º]Ìŀ·»¼[¿'·»µ‘À€ÃjÌ ÐLÃjÀmÅW¿ Ém·¹³&µ_àð¿'º]À@¶œº[¼[µ‘´&º[³ÇÃY¼]¾€´È ¶€º]Ìѳ'µ‘´&º0¿ÏÃu³Ïä3Ò I5µêÃ_¸»¸»º[ʑ·ÁÃ_¿º0¿ÏÉm·»³‰½€Ìѵ‘¶m¸»º]´‡Íj³'µm´&º¼[µ‘´Ç½€ÃdÀm·¹º[³µ_à„È àðº]Ì%ŀ·»¼[¿'·»µ‘À€ÃdÌ ·»º[³+µ_à)Êjµ‘¼]Ãd¶€¾m¸ÔÃdÌ ·»º[³àaÌ µ‘´9³Ï½œº[¼[· Ö3¼ŀµ_È ´ÇÃ_·ÁÀm³5Huà µmÌ)º[Î Ãj´½m¸¹ºjÍm¿ÏÉmº¼[ɀÃ_¿ŀ·»¼[¿'·»µ‘À€ÃjÌ Ðµ‘ÌÃ;ŀ·»¼ È ¿'·»µ‘À€ÃjÌÑе_à&ŀ·ÔÃu¸¹º[¼[¿ ]CJ ¾m³'¿ÏòmÐm³'¿'º]´&³]Í V!`g`!V!b Ò [ µ_×È º[Êjº]Ì[Íé¿ÏÉmº‡³'¿ Б¸»ºÆ¿ÏɀÃ_¿ÇÃjÀdÐ@µ‘Àmº.¾m³'º]Ì˺]Àj¿'º]Ìѳê·ÁÀd¿'µ@à ¼[µ‘´Ç½€¾m¿º]ÌÇ·»³‡¸»·Áä_º[¸»ÐO¿'µW¶œºÛ¾€Àm·»Ú€¾mºjÍ0µ‘ÌÆ´&µ‘Ì ºC½€Ì º È ¼[·»³'º[¸»ÐjÍ QRTö]õcT'ô3öù[øLKøaù Ò ã ºC¿ÏÉmº]Ì º àðµ‘Ì º@½€Ì µm½€µj³'º@Ã"¶œº[¿'¿'º]̇´&º[¿ÏÉmµmÅF¿ÏɀÃ_¿ ×µ‘Ì'äd³Ä¶dÐŀЀÀ€Ãj´&·»¼]Ã_¸»¸»Ðæ½€Ì µm¼[º[³'³'·ÁÀmÂÃL³Ï´Ã_¸»¸¾m³'º]Ì ¼[µ‘Ì'½€¾m³ÒJIÉm·»³·¹³;¶€Ã_³'º[ҵ‘ÀÆÃjÀ·ÔÀd¿'º]Ì º[³¿'·ÁÀmÂǵ‘¶m³'º]Ì Ê_ÃÈ ¿'·»µ‘ÀY¿ÏɀÃ_¿0Ǿm³'º]Ì;¿ Ѐ½m·¹¼]Ãu¸¹¸»Ð õ_ö(QRT]ö(T Êjµm¼]Ãj¶€¾m¸ÁÃjÌ Ð@Ã_¿ ÃNM `O Ì'Ãu¿'ºpÃàð¿'º]Ì0º]Àd¿ÏÌ ÐCµ_൑Àm¸»ÐWà ³Ï´ÇÃ_¸»¸Ãj´&µm¾€Àj¿ µ_à ¿'º[Îm¿]ÒB^Ã_³º[Åpµ‘ÀË¿ÏÉm·»³¡½€Ì µ‘½œº]Ì ¿ ÐjÍj׺+ɀÃÊjº¼]ÌѺ]Ã_¿'º[Å ÃY³'·Á´Ç½m¸»ºV½€Ì º[ŀ·»¼[¿'·»ÊjºY¿'º[Îm¿º]Àd¿ÏÌ Ð@³'Ðm³'¿'º]´-¿ÏɀÃu¿êŀÐdÈ À€Ãj´&·»¼]Ã_¸»¸»Ðº[Îm¿ÏÌ'Ãu¼[¿'³ž¾€À€ämÀmµ_×+À0×µ‘ÌÑŀ³à Ìѵ‘´F¿ÏÉmº¾m³'º]Ì ¼[µ‘Ì'½€¾m³Ò%ÝaÀ ¿ÏÉm·»³+½€Ãj½€º]Ì Í€×º0³ÏÉmµ_× Émµ_×O¿ Ém·¹³G³'Б³¿'º]´ ³'µj¸»Êjº[³¿ÏÉmº.½€Ì µ‘¶m¸»º]´ µ_ྀÀ€ÌѺ[Âj·»³'¿'º]Ì º[ÅèסµmÌ Å€³]Ò ã º ³'¿ÏÃdÌ ¿0·ÁÀ@¿ÏÉmºÆÀmº[Α¿³'º[¼[¿'·»µ‘À@¶dÐCº[Î ½m¸ÁÃ_·ÁÀm·ÁÀm Émµ_×!׺ ¼]Ãj´&º‡¿µ@µ‘¶m³'º]Ì Êdº‡¿ÏÉmºV½€Ì µm½€º]Ì ¿ ÐL¿ÏɀÃ_¿Ç¿ÏÉmº‡¿ Ѐ½m·»¼]Ã_¸ Ì º]¾m³'º&Ì'Ã_¿'º;·»³PM `O Ò Q Sp^SR8TDZjR(UVR3±_s[lRXWv&í%N suR5±jNahZY\[]R3^óh?^%NaR ± _(`Ca bdcFE*c ÝaÀ ¿ÏÉm·»³V½€Ãj½œº]Ì[Í;סºCŀ·»³'¼]¾m³'³Yµ‘¾€Ì‡º[΀½€º]ÌѷԴ˺]Àj¿'³CÌ º È Â‘ÃjÌÑŀ·ÔÀmÂFÓ%ÀmÂj¸»·»³ÏÉ ÃjÀmÅ J Ãj½€ÃjÀmº[³ºW¿'º[Îm¿'³]Ò X µmÌÆµ‘¾€Ì Ì º[³'º]ÃdÌ ¼ñÉÍm׺¾m³'º[Ň¿'º[Îm¿'³óàaÌ µ‘´9ŀ·»Êjº]Ì ³'º0ŀµ‘´Ã_·ÁÀm³·ÁÀ ¿ÏÉmº[³'º0¿ ×µ&¸ÁÃjÀm‘¾€Ã_Âjº[³Í Ãu³³ÏÉmµ_×+À·ÔÀ I”Ãd¶m¸¹º H ÒÓ%Ã_¼[É ¿'º[Îm¿+Ì ºe3º[¼[¿'³G·ÔÀmŀ·»Êm·»Å3¾€Ã_¸ðá ³ê½œº]Ì ³'µ‘À€Ã_¸5×+Ì ·»¿'·ÁÀmÂp³¿ÜÐm¸»ºjÒ ã º;ŀº ÖnÀmº6ÃdÀǾ€À€Ì º[Âj·»³'¿'º]ÌѺ[ÅÇ×µ‘Ì ÅpÃ_³%ÃdÀjÐ&Êjµm¼]Ãj¶dÈ ¾m¸ÁÃjÌ Ð@Àmµj¿&Ãj½€½€º]ÃjÌÑ·ÔÀmÂC·ÁÀ ¿ÏÉmº W!` È_@¶dБ¿ºê¼[µmÌ'½€¾m³µ_à ã ò J ÃjÀmÅ _@Ã_·ÁÀm·»¼ñÉm·0Àmº[׳Ͻ€Ãd½€º]Ì[ÒgfGà0¼[µ‘¾€Ì ³'ºjÍó¿ÏÉmº ×µ‘Ì Ňŀ·»¼[¿'·»µ‘À€ÃjÌÑÐV¾m³'º[Å@¶dЇ¿ÏÉmºê½€Ì º[ŀ·»¼[¿'·»Êjº¿'º[Α¿Gº]ÀdÈ ¿ÏÌ Ð¼]ÃjÀ&¶€º¸ÁÃjÌÑÂjº]Ì[Ò [ µ_׺[Êjº]Ì[Í_¿ Ém·¹³×µ‘Àá⿞´ÇÃdä_ºÃjÀjРŀ·ih º]Ì º]Àm¼[ºÄÌѺ[‘ÃjÌ 
Å€·ÁÀmÂ.µm¾€Ì0¶€Ã_³'·»¼pÃjÌ Âm¾€´&º]Àj¿0·ÁÀ ¿ÏÉm·»³ ½€Ãj½œº]Ì[Í ³'·ÁÀm¼[ºÇ¿ Émºp½€Ì µ‘¶m¸»º]´ µuà‰¾€À€Ì º[Âj·»³'¿º]Ì º[Å@×µ‘Ì ŀ³ Ã_¸»×ÃБ³;º[Α·»³'¿³ÀmµÄ´ÇÃu¿'¿'º]ÌG×+ɀÃ_¿;³'·»Ù[ºêµuà%ŀ·¹¼[¿·¹µmÀ€ÃjÌ Ð ×º¾m³'ºjÒ ²;³0µ‘¾€Ì¼[µ‘Àm¼[º]Ì'ÀY×Ã_³¿ ÉmºË¾€À€ÌѺ[Âj·»³'¿'º]Ì º[ÅCÊjµm¼]Ãj¶€¾dÈ ¸ÁÃjÌ ÐjÍ׺0ÖnÌѳ'¿+·ÁÀdÊjº[³'¿'·»Â‘Ãu¿'º[Å.¿ÏÉmºÇ½œº]Ì ¼[º]Àj¿ Ã_Âjºµuࡾ€ÀdÈ Ì º[Âj·»³'¿º]Ì º[Å.×µ‘Ì ŀ³Ò ã º´&º]Ã_³Ï¾€Ì º[Ň¿ ×µÇÌ'Ã_¿'º[³(G j Hkml S?n SoK Q3ýZp€ý/Soq+ý8qDS_õ_÷T l S?n SoKYú;Sú K?r qDS_õï÷T ] Hb j Vskml StnUSuKC÷møLKZKö]õ_ö]ýtú)Q3ýõ_ö P3øT[ú ö]õ_ö]÷dqDSïõ_÷T l S?n SoK@÷møLKZKö]õ_öý3ú\q>S_õ_÷T ] V!b J Ãj½€ÃdÀmº[³'ºæ¿º[Α¿'³F׺]Ì ºæÃu¸¹¸@´ÇÃdÀ‘¾€Ã_¸»¸»Ð ³'º[‘´&º]Àd¿'º[Å ] ¶œº[¼]Ãj¾m³'ºO¿ÏÉmº[Ð9¼[µ‘Àd¿ÏÃ_·ÁÀmº[Å>¾€À€Ì º[Âj·»³'¿'º]Ì º[ŠסµmÌ Å€³ b Í ³'µL³'¿ÏÃ_¿·¹³¿'·»¼[³p׺]Ì ºY¼]Ã_¸»¼]¾m¸ÔÃu¿'º[Åæ¾m³'·ÁÀmÂ"¿ÏÉmºYÖnÌ ³'¿ V!` v ¶dБ¿'º[³Æµ_à0¿'º[Îm¿]Ò&I µL¶€ºC¼[µ‘Àm³'·»³'¿'º]Àd¿]Í׺.¾m³'º[Å V!` v ¶dБ¿'º[³‰à µ‘ÌGÓ%ÀmÂj¸»·»³Ïɇ¿º[Α¿'³̀¿'µmµ3Ò I)Ãj¶m¸»º H ³ÏÉmµ_׳ j H ÃjÀmÅ j V àðµ‘Ì;µ‘¾€Ì¿'º[³'¿0¿'º[Îm¿'³]Í ×·»¿ÏÉ@ÌѺ[³Ï¾m¸»¿'³0·ÁÀC¿ÏÉmºp¾€½€½œº]ÌɀÃ_¸ßà%àðµ‘Ì0ÓóÀmÂj¸»·¹³ É@¿'º[Îm¿'³ ÃjÀmÅL¿ÏÉmµj³'ºÆ·ÁÀL¿ÏÉmºÆ¸¹µ_׺]Ì&ɀÃ_¸ßà+àðµ‘Ì J Ãj½€ÃjÀmº[³'ºjÒ IÉmº Ì'Ã_¿ºµ_à”¾€À€ä‘Àmµ_×+À×µ‘Ì ŀ³óŀ·h º]Ì º[ŇÃ_¼[¼[µ‘ÌÑŀ·ÔÀmÂ&¿'µ¿ÏÉmº ¿ Ð ½œº+µuà”¿'º[Îm¿]ÒÝð¿×Ã_³%º[³Ï½œº[¼[·ÁÃ_¸»¸¹ÐCÉm·»Â‘Éà µmÌ%¿ Émº+¿'º[¼[ÉdÈ Àm·»¼]Ã_¸ð̀¼[µj¸»¸»µ‘Ú€¾m·ÁÃ_¸ðÍ3µmÌ)µj¸»Åp¿'º[Îm¿'³]Ò\IÉm¾m³]Ím¶€µj¿ÏÉÆÌ º[¼[º]Àd¿ ÃjÀmҵj¸»ÅÆ¿'º[Îm¿'³¼]ÃjÀƼ[µ‘Àd¿ÏÃ_·ÁÀpøÁÃjÌ ÂdºÀ‘¾€´&¶€º]Ìóµ_ྀÀdÈ Ì º[Âj·»³'¿º]Ì º[Ň×µ‘Ì ŀ³]Ò)²;³µ‘¾€Ì õ_ö P3øT úÜöõïö÷ ×µ‘Ì ŀ³×º]Ì º º[Îm¿ÏÌ'Ã_¼[¿'º[Åà Ìѵ‘´!Àmº[׳Ͻ€Ãj½œº]Ì ³]Í_׺׵‘¾m¸»ÅêÂm¾mº[³'³¿ÏɀÃ_¿ Àmº[׳Ͻ€Ãd½€º]Ì ³¾m³Ï¾€Ãu¸¹¸»Ð@¾m³'ºpó'¿ÏÃdÀmÅ3ÃjÌ Å€·»Ù[º[Å@Êjµm¼]Ãj¶€¾dÈ ¸ÁÃjÌ Ð"¶€Ã_³'º[Å"µ‘À ×µ‘Ì ŀ³&¿ÏɀÃ_¿ÇɀÃÊjº‡¶€º[º]Àè¾m³'º[Å"¸»µ‘Àm 0 10 20 30 40 50 60 70 80 90 100 0 2 4 6 8 10 12 14 16 18 20 Rate of Reuse (English) Offset of Text(Kbyes) Adventures Of Sherlock Holmes Chat The Merchant Of Venice Patent RFC1459 0 20 40 60 80 100 0 2 4 6 8 10 12 14 16 18 20 Rate of Reuse (Japanese) Offset inside Text (Kbytes) Patent RFC1459J Genji Neko Chat X ·»Â‘¾€Ì º H Gµm¼]Ãj¶€¾m¸ÁÃjÌ Ð î º]¾m³'º î Ãu¿'ºêÃjÀmÅNfVh ³'º[¿ ] Ó%ÀmÂj¸»·»³ÏÉYÃjÀmÅ J Ãj½€ÃjÀmº[³º b º]Àmµ‘¾m‘Éè¿'µL¶€ºC׺[¸¹¸0ämÀmµï×;ÀͶ€¾m¿.Àmµj¿pÐdº[¿&à µmÌ Âjµj¿ È ¿'º]ÀÒ)IɀÃ_¿0¾€À€Ì º[Âj·»³'¿'º]ÌѺ[Å.×µ‘Ì ŀ³µm¼[¼]¾€Ì;´&µ‘Ì º0·ÁÀCÌ º È ¼[º]Àd¿6ÃjÀmÅY¼[µj¸»¸»µ‘Ú€¾m·ÁÃ_¸¿'º[Îm¿'³³ÏÉmµ_׳+Émµ_× ³'º]Ì ·»µ‘¾m³;¿ÏÉmº ½€Ì µ‘¶m¸»º]´ µ_à%¾€À€Ì º[Âj·»³'¿'º]Ì º[ÅY×µ‘Ì ŀ³¼]ÃjÀ.¶œºjÒ _(`_ oc 7 c ;  =,7(B*= I5µ·Á´Ç½€Ì µ_Êjº¡½€ÌѺ[ŀ·¹¼[¿·¹Êdº¡¿º[Α¿ º]Àd¿ÏÌ ÐjÍ׺%Àmº[º[Å¿'µ;½€Ì µ_È Êm·¹Å€ºÆÃjÀ.Ã_¸»¿'º]ÌÀ€Ã_¿'º³µ‘¾€Ì ¼[ºà µmÌÖnÀmŀ·ÁÀmÂV´Ë·¹³³'·ÁÀm‡Êjµ_È ¼]Ãj¶€¾m¸ÁÃjÌ Ð‡³µÇ¿ÏɀÃ_¿¿ Émº³'Ðm³'¿'º]´>´Ã]ÐÆ³Ï¾mÂjÂdº[³'¿×µ‘Ì ŀ³ àaÌ µ‘´!¿ ɀÃ_¿%³'µm¾€Ì ¼[ºjÒòm·ÔÀm¼[º¿ÏÉmºÊjµm¼]Ãj¶€¾m¸ÁÃjÌ Ðŀº]½€º]Àmŀ³ µ‘À ¿ÏÉmº‡¾m³'º]Ì[á ³&¼[µ‘Àd¿'º[Îm¿]Í)·¹¿&·»³pÀ€Ã_¿ ¾€Ì'Ã_¸%¿µV½€Ì º[³Ï¾€´Ëº ¿ÏɀÃ_¿¿ÏÉmº0´&·»³'³'·ÁÀmÂÆÊjµm¼]Ãj¶€¾m¸ÁÃjÌ ÐdzÏÉmµm¾m¸¹ÅYº[Α·»³'¿G×·»¿ÏÉm·ÁÀ ¿ÏÉmº¾m³'º]Ì á ³¿'º[Îm¿]Ò\IÉmº]Ì º àðµ‘Ì ºjÍm׺ÃjÀ€Ã_¸»ÐmÙ[º[ÅVÉmµ_×FÊjµ_È ¼]Ãj¶€¾m¸ÁÃjÌ Ð‡·»³+ÌѺ]¾m³'º[Å.×+Ém·»¸»ºÇÃǾm³'º]̺[ŀ·»¿'³;¿'º[Îm¿]Ò ²¿º[Α¿·»³)àðµ‘Ì'´&º[ÅàaÌ µ‘´ Ã×µ‘Ì Å&³'º[ڀ¾mº]Àm¼[ºjÒ ã Émº]À ÃסµmÌ Å‡·»³Ì'ÃjÀmŀµ‘´&¸»Ð.½m·»¼[ä_º[ÅYà Ì µm´>Ãdzº[Ú ¾mº]Àm¼[ºjÍ׺ ¼]ÃjÀÆÂ‘Ìѵ‘¾€½‡¿ÏÉmº0×µ‘ÌÑŇ·ÔÀd¿'µÇ¿ ×µ&¼]Ã_¿'º[ÂjµmÌ ·»º[³G õ_öQUT]ö]÷ µ‘Ì Q3ý/QUT]ö]÷ Ò ;À‘¾m³º[Å"סµmÌ Å€³ÇÃj½€½€º]ÃjÌêà µ‘Ì¿ÏÉmºÆÖnÌѳ'¿ ¿'·Á´&ºY·ÔÀ2¿ÏÉmº‡³'º[ڀ¾mº]Àm¼[ºjÍ+ÃjÀmÅ ;=X7<B=!GÞ×µ‘Ì ŀ³ÇɀÃÊjº Ã_¸ÁÌ º]Ã_ŀÐÃd½€½€º]ÃjÌ º[Å Òµd¿'º.¿ÏɀÃu¿Ç¿ÏÉm·»³pŀ·ih º]Ì ³ÆàaÌ µ‘´ ¿ÏÉmº&Àmµj¿'·»µ‘ÀYµ_ࡾ€À€Ì º[Âj·»³'¿º]Ì º[ÅCסµmÌ Å€³G¿ÏÉmº]Ì º&ÃjÌ º&¾€ÀdÈ Ì º[Âj·»³'¿º]Ì º[Åp×µ‘ÌÑŀ³%¿ÏɀÃ_¿ó¼]ÃjÀƶ€º;Â‘Ì 
µ‘¾€½œº[ÅÄÃ_³¾m³'º[ÅÆµ‘Ì Ì º]¾m³'º[Å Ò ã ºè·ÔÀdÊjº[³'¿·¹ÂmÃ_¿'º[Å>Émµ_× ¿ÏÉmº ÌѺ]¾m³'º[Å×µ‘Ì ÅDÌ'Ã_¿'º ¼[ɀÃjÀmÂjº[ÅÃ_¼[¼[µ‘ÌÑŀ·ÔÀm ¿'µC¿ÏÉmº‡µoh ³'º[¿&µ_à6ÃC¿º[Α¿]Ò ã º ´ÇÃjÌä_º[Å@¿ÏÉmºÇ¿º[Α¿&Ã_¿¿ÏÉmºÆµoh ³'º[¿0µ_à `g` ¶dБ¿'º[³&ÃjÀmÅ ¼[µ‘¾€Àd¿'º[ÅO¿ÏÉmº@Ì º]¾m³º@×µ‘Ì ÅOÌ'Ã_¿'ºY·ÁÀO¿ÏÉmº H`g`!` ÈܶdБ¿'º ×·ÁÀmŀµ_×Ò IÉmº ÌѺ[³Ï¾m¸»¿'³WÃjÌ º ³ÏÉmµ_×+ÀO·ÁÀ X ·»Â‘¾€Ì º H Í ×+Émº]Ì º&¿ÏÉmº&Émµ‘Ì ·»Ù[µ‘Àd¿ÏÃ_¸Ã_Îm·¹³;³ÏÉmµ_׳;¿ÏÉmºµ hI³º[¿àaÌ µ‘´ ¿ÏÉmº‡Émº]Ã_Å"µ_à¿ÏÉmºÆ¿'º[Îm¿ ] ·ÔÀ v ¶dБ¿'º[³ b Í%ÃjÀmÅL¿ÏÉmºÆÊjº]ÌQÈ ¿'·»¼]Ã_¸%Ã_Îm·»³+³ Émµï×G³0¿ÏÉmºÊjµm¼]Ãj¶€¾m¸ÁÃjÌ ÐCÌ º]¾m³'ºÇÌ'Ã_¿ºjÒ î º È ³Ï¾m¸»¿'³óàðµ‘Ì)¿ÏÉmº0Ó%ÀmÂj¸»·»³ÏÉ¿'º[Α¿³¡ÃjÌ ºG³ÏÉmµ_×+À&µ‘À¿ÏÉmº¸»º àð¿]Í ×+Ém·»¸»º‡¿ÏÉmµj³'ºËà µ‘Ì¿ÏÉmº J Ãj½€ÃjÀmº[³'º‡ÃjÌ º³ÏÉmµ_×+ÀCµ‘À ¿ÏÉmº Ì ·»Â‘Éd¿]Ò ²¿C¿ÏÉmº"¶€º[Âj·ÁÀ€Àm·ÁÀm µ_àÇ¿ÏÉmº `!` ÈܶdБ¿'º ¿'º[Α¿Í0¿ÏÉmº Ì º]¾m³'ºÆÌ'Ã_¿'º׉Ãu³ `O ÃdÀmÅ@¿ÏÉmº]ÀY·»¿·ÁÀm¼]Ì º]Ã_³'º[ÅLÃ_³0¿ÏÉmº ¿'º[Îm¿ê½€Ì µm¼[º[º[ŀº[ÅL¿'µ_׉ÃdÌ Å€³0¿ÏÉmºÇº]ÀmÅ Ò IGÉm·¹³&·ÁÀm¼]Ì º]Ã_³º ¸»º[Êjº[¸»º[ÅWµ h Ã_¿ÃjÌ µm¾€ÀmÅ M `O ¿µ `O Ít¼]¾€Ì ·»µ‘¾m³'¸»ÐjÍ)·ÁÀ ¶€µd¿ÏÉY¸ÔÃdÀm‘¾€Ã_Âjº[³]Ò²+¸»³'µ3Í5׺ê׺]Ì º³Ï¾€Ì'½€Ì ·»³'º[Å ¿'µ‡³'º[º Émµ_×F³'·Á´&·»¸ÔÃdÌ+Àm·ÁÀmº0µ_à”¿ Émº0¿'º[Α¿³¡×º]Ì º;¿'µ&º]Ã_¼[Éǵd¿ÏÉmº]Ì ] ¿ÏÉmº0º[Îm¼[º]½m¿'·»µ‘ÀY¶€º[·ÁÀm‡¿ÏÉmº J Ãj½€ÃdÀmº[³'ºê½€Ã_¿º]Àj¿¿'º[Îm¿ b Ò IÉmº[³'º.ÌѺ[³Ï¾m¸»¿'³p³ ¾mÂjÂjº[³'¿&¿ÏɀÃ_¿ ù?Sïýtú ö]ü€ú ·»³p½€Ì µ_Êm·»Å€º[Å ¶dÐ M ` O ¿'µ `O µuà”¿ÏÉmºÊdµ‘¼]Ãj¶€¾m¸ÁÃjÌÑÐÄÃjÀmÅY¿ÏÉmº0³'¿'µmÌ Ð º[Êjµj¸»Êjº[³G¿ÏÉ€Ì µ‘¾mÂ‘ÉÆ¿ ÉmºÌ º[³'¿]Ò X Ìѵ‘´O¿ Ém·¹³µm¶m³'º]Ì ÊïÃu¿'·»µ‘ÀÍ_׺¡×µ‘¾m¸»Å&º[΀½€º[¼[¿%ÃG¿ÜЀ½dÈ ·»¼]Ã_¸ ¾m³'º]Ìó¿'µêÌѺ]¾m³'º0¿'º[Îm¿%º[ڀ¾m·¹Ê_Ã_¸»º]Àj¿;¿'µ&ÃjÌ µ‘¾€ÀmÅ]M `O ¿'µ `O µ_àž¿ ÉmºÊjµm¼]Ãj¶€¾m¸ÁÃjÌ ÐÆµ‘Àm¸»Ð.Ãàð¿'º]ÌÃjÀƵoh ³'º[¿µ_à ³'º[Êjº]ÌÃ_¸ v ¶jÐm¿'º[³]Ò‡Ý à‰³µ3͔¾€À€Ì º[Âd·¹³¿'º]Ì º[ÅLסµmÌ Å€³ê´&¾m³'¿ Ã_¸»³'µC¶€ºpÌ º]¾m³º[Å ÒLIÉmº]Ì º àðµ‘Ì ºjÍ ×ºÇÀmº[Îm¿³'¿Ï¾mŀ·»º[ÅL¿ÏÉmº Ì º]¾m³'º&Ì'Ã_¿'ºGàðµ‘̾€À€Ì º[Âj·»³'¿º]Ì º[Å.×µ‘Ì ŀ³Ò [ º]ÌѺj̀׺0¼]Ã_¸ßÈ ¼]¾m¸ÁÃ_¿'º[Å"¿ÏÉmº‡ÃÊjº]Ì'Ã_ÂdºËÌѺ]¾m³'º.Ì'Ã_¿'ºËà µ‘̾€À€Ì º[Âj·»³'¿'º]ÌѺ[Å ×µ‘Ì ŀ³µ_à%Ã_¸»¸ ¿'º[Îm¿¿ Ð ½œº[³]Ò X ·»Â‘¾€Ì º V ³ÏÉmµ_׳Yµ‘¾€Ì.ÌѺ[³Ï¾m¸»¿'³‡àðµ‘Ì.ÓóÀmÂj¸»·»³ÏÉ!ÃjÀmÅ J Ãj½€ÃdÀmº[³'ºjÒ%ÝaÀY¶€µj¿ ÉÆ¼]Ã_³'º[³]̀¿ ÉmºÌ'Ã_¿'º;¿'º]Àmŀº[Å.¿µËÌÑ·¹³º ÃjÀmÅ en¾m¼[¿Ï¾€Ã_¿'ºÆ¶€º[¿ ׺[º]À V!` O ¿'µ ` O Òµj¿ºÇ¿ÏɀÃ_¿ ¿ÏÉm·»³ en¾m¼[¿Ï¾€Ã_¿'·»µ‘À ׉Ãu³.¼]Ãj¾m³'º[Å ¶jÐO¿ Émº@³Ï½€ÃjÌ ³º]Àmº[³'³ ÃjÀmÅ.Ã_¸»³'µÆ¶jÐÇ¿ Émº¾€À€Ì º[Âj·»³'¿'º]Ì º[ÅY×µ‘Ì ŇÌ'Ã_¿'º;ŀ·ihIº]ÌÑ·ÔÀm Ã_¼[¼[µ‘ÌÑŀ·ÔÀmÂC¿'µ‡¿ ÉmºÇ¿'º[Îm¿]ÒCIÉmº[³ºÄÌ º[³ ¾m¸¹¿³¼[µ‘ÀdÖnÌ'´Ëº[Å µ‘¾€Ì)º[΀½€º[¼[¿ÏÃu¿'·»µ‘À¿ÏɀÃ_¿¾€À€Ì º[Âj·»³'¿'º]Ì º[ÅסµmÌ Å€³žÃdÌ ºÃ_¸»³'µ Ì º]¾m³'º[Å Ò f0¾€ÌCµ‘¶m³º]Ì ÊïÃ_¿·¹µmÀm³V³ ¾mÂjÂjº[³'¿'º[Å ¿ÏɀÃ_¿C·ßàľ€À€Ì º[Âd·¹³ÑÈ ¿'º]Ì º[Å×µ‘Ì ŀ³ ¼]ÃjÀ¶€ºÃj¾m¿'µ‘´ÇÃu¿'·»¼]Ã_¸»¸¹Ð0º[Îm¿ÏÌ'Ãu¼[¿'º[Å0àaÌ µ‘´ þm³º]Ì%¼[µmÌ'½€¾m³]Ím¿ÏÉmº[ÐǼ]ÃjÀ¶€º0³Ï¾mÂjÂdº[³'¿'º[ÅÇ¿'µ¿ÏÉmº0¾m³'º]Ì ÃjÀmÅY¿ÏÉm·»³+×·»¸»¸½€ÃjÌ ¿'¸»ÐƳ'µj¸»Êjº¿ Émº½€Ì µ‘¶m¸»º]´ µ_à%¾€À€Ì º[Â_È ·»³'¿'º]Ì º[Ň×µ‘ÌÑŀ³]Ò X Ì µ‘´9¿ÏÉmºÀmº[Îm¿³'º[¼[¿·¹µmÀƵ‘ÀÍmסº0º[ÎdÈ ½m¸ÁÃ_·ÁÀ.Émµ_×O׺0ɀÃ]Êdº¶€¾m·¹¸»¿0Ã&³'Ðm³'¿'º]´9¿'µÇÌ º]Ã_¸»·»Ù[º&¿ÏÉm·»³ 0 10 20 30 40 50 60 70 80 0 2 4 6 8 10 12 14 16 18 20 rate of used and unknown text size(kb) Japanese Average English Average X ·»Â‘¾€Ì º V G î º]¾m³'º î Ã_¿º%µ_à,+À€ÌѺ[Âj·»³'¿'º]Ì º[Å ã µ‘Ì ŀ³ÃjÀmÅ fVh ³'º[¿ ] ²;Êjº]Ì'ÃuÂjº+µuàéÓóÀmÂj¸»·¹³ É.ÃjÀmÅ J Ãj½€ÃjÀmº[³'º b ·»Å€º]Ãp¶€Ãu³'º[ҵ‘À‡³ ¾m¼ñɇÃÇÌѺ]¾m³'ºê½€Ì µm½€º]Ì ¿ ÐjÒ  NaM UYLRCv ^ hZYaK%htsjNaR5ì  l%\]sRtM [ º]Ì ºj̀׺+º[΀½m¸ÁÃ_·ÁÀ‡¿ÏÉmº0º[Ê_Ã_¸Á¾€Ã_¿'·»µ‘À ¿'µ‘µd¸3׺¶€¾m·»¸¹¿;¿'µ Êjº]Ì ·ßàðЇ¿ÏÉmººh º[¼[¿'·»Êjº]Àmº[³³0µ_à%µ‘¾€ÌG·¹Å€º]ÜÒ>IÉmº³'Б³¿'º]´ ´ÇÃjäuº[³Ä¾m³'ºYµ_àêà ³Ï´ÇÃ_¸»¸+¾m³'º]ÌÆ¼[µmÌ'½€¾m³pµuàêÃj¶€µ‘¾m¿ V!` v ¶dБ¿'º[³Ëà Ì µm´ 
×+Ém·»¼[É"¿ÏÉmº‡¾€À€Ì º[Âj·»³'¿'º]ÌѺ[Å×µ‘Ì ŀ³&ÃjÌ º º[Îm¿ÏÌ'Ã_¼[¿'º[Å Ò IÉmºC³'Ðm³'¿'º]´ ·»³V¶€Ã_³º[Å µmÀ½€Ì º[ŀ·»¼[¿'·»Êjº ¿'º[Îm¿pº]Àd¿ÏÌ ÐÃdÀmÅ Ãu¸¹¸»µ_׳‡¿'º[Îm¿Æº]Àj¿ÏÌÑÐ ¶dÐÌ º]½€º]Ã_¿·ÔÀm ¿ÏÉmº;àðµj¸»¸»µï×G·ÔÀmÂÆ³'¿ÏÃuÂjº[³]Ò H ÒJIGÉmº¾m³'º]̺]Àd¿'º]Ì ³;ÃjÀ‡Ãj´&¶m·»Â‘¾mµ‘¾m³³'º[ڀ¾mº]Àm¼[ºjÒ V ÒJIGÉmºæ³'Ðm³'¿'º]´ ¸»µmµ‘äj³èàðµ‘Ì"¿ Émºæ¼[µ‘ÌÌ º[³Ï½€µmÀmŀ·ÔÀm ½€ÃdÌ ¿'³ ·ÁÀ0¿ Émº¡¾m³'º]Ì5¼[µ‘Ì'½€¾m³ÃjÀmź[Îm¿ÏÌ'Ãu¼[¿'³ ¼[ɑ¾€À€äd³ ×;Ém·¹¼[É.´Ë·¹ÂmÉj¿+¶œº¿ÏÉmº¾m³º]Ì[á ³¿ÏÃdÌ Âjº[¿]Ò W ÒJIGÉmº³'Ðm³'¿'º]´!³µ‘Ì ¿'³ó¿ÏÉmº[³'º¼[ɑ¾€À€äd³Ã_¼[¼[µ‘Ì ŀ·ÁÀmÂ&¿'µ ÃÆ½€ÃjÌ ¿·¹¼]¾m¸ÁÃjÌ;º[Ê_Ã_¸Á¾€Ã_¿'·»µ‘ÀYàa¾€Àm¼[¿'·»µ‘ÀCÃjÀmÅC³ÏÉmµ_׳ ¿ Émº]´D¿'µÇ¿ÏÉmº¾m³º]Ì[Ò  Ò²G¿+¿ÏÉmº&³ÏÃd´&º¿'·Á´&ºjÍ¿ Émºµ‘Ì ŀ·ÁÀ€ÃjÌÑÐ.¼]ÃjÀmŀ·»Å3Ã_¿'º[³ µm¶m¿ÏÃ_·ÁÀmº[Å9à Ì µm´ç¿ÏÉmº³Б³'¿º]´çŀ·»¼[¿'·»µ‘À€ÃjÌ ÐDÃjÌ º Ãu¸¹³µÇ³ÏÉmµ_×+ÀÆ¿'µÇ¿ Émº¾m³'º]Ì[Ò €ÒJIGÉmº+¾m³'º]Ì)¼[Émµ‘µj³º[³%Ém·¹³ó¿ÏÃjÌ Âdº[¿)à Ì µm´æÃj´Ëµ‘ÀmÂ0¿ÏÉmº ¿ ×µ&¸»·¹³¿'³]Ò Gµj¿'º¿ ɀÃ_¿+³¿ÏÃ_Âjº[³ V ÃdÀmÅ W ÃjÌ º&Ã_ŀŀº[ÅC¿'µÆ¿ÏÉmºÇ½€Ì µ_È ¼[º[Å3¾€Ì º;׺ŀº ÖnÀmº[ÅÄÃu³%½€Ì º[ŀ·»¼[¿'·»Êjº0¿'º[Îm¿º]Àj¿ÏÌÑÐê·ÁÀ¿ÏÉmº  H ÒC²;³à µ‘Ì¿ÏÉmº‡Ãj´&¶m·»Â‘¾mµ‘¾m³&³'º[ڀ¾mº]Àm¼[º[³]ÍóסºÆ¼[Émµj³'º ¿'µ‡¾m³'º¿ Émºê×µ‘ÌÑÅmÈܶ€Ã_³'º[ÅV½€ÌѺ Ö3Î.º]Àd¿ÏÌ ÐÆàaÌ µ‘´ Ãj´&µ‘Àm ÊïÃdÌ ·»µ‘¾m³Ä½€ÌѺ[ŀ·¹¼[¿·¹ÊdºV´&º[¿ÏÉmµmŀ³]Ò IGÉm·¹³Æ×Ã_³Ç¶€º[¼]Ãj¾m³º ×µ‘Ì ÅmÈܶ€Ã_³º[ź]Àd¿ÏÌ Ð0·»³¿ÏÉmº%³·Ô´½m¸¹º[³¿%ÃjÀmÅÇ´&µj³'¿”Ãd½€½m¸¹·ßÈ ¼]Ãj¶m¸»º0×µ‘Ì ¸»Å€×·»Å€ºjÍ3ÃjÀmÅ.¶œº[¼]Ãj¾m³'º0³'µm´&º6½€Ãd½€º]Ì ³ɀÃÊjº Ì º]½œµ‘Ì ¿'º[Å¿ÏɀÃ_¿)º]Àj¿ÏÌÑÐ6¶dн€Ì º Ö3ηÁÀm¼]Ì º]Ãu³'º[³º]Àj¿ Ì Ð0º fÇÈ ¼[·»º]Àm¼[Ð ] I)ÃjÀ€ÃjäÃÈÜÝð³ Ém·¹·º[¿;Ã_¸ðÒÁÍ Vg`!`!`!b Ò%Ì º[¼[·»³'º[¸»ÐjÍàðµ‘Ì Ó%ÀmÂd¸¹·»³ÏÉÍG׺.¼[Émµj³'º.³·ÔÀmÂd¸¹º Èa¿ Ãj½Fº]Àd¿ÏÌ Ð"´&º[¿ÏÉmµmÅàðµ‘Ì ´&µ‘¶m·»¸»ºp½€Émµ‘Àmº&×·»¿ÏÉYÃ_¸Á½€É€Ãj¶€º[¿&Ã_³'³'·»Â‘À€´Ëº]Àj¿;µ_à X ·»Â_È ¾€Ì º W ] ³'·Á´&·»¸ÁÃjÌ;¿'µ Ý'á ³0´&º[¿ÏÉmµmÅ ]  ݄È;YµmÌ'½ÒÁÍ V!`!`!`!bb µ‘À&ŀ·»Âj·»¿'³ÃjÀmÅ&àðµ‘Ì J Ãj½€ÃdÀmº[³'ºjÍmסº¼[Émµj³'º×µ‘ÌÑÅmÈܶ€Ã_³'º[Å ä]ÃdÀ€ÃÈÜä]ÃjÀ(Z ·3¼[µmÀjÊjº]Ìѳ'·»µ‘Àp¶dÐǽ€Ì º Ö3ÎǺ]Àd¿ÏÌ Ð ] ³'·Á´&·»¸ÁÃjÌó¿'µ fGÈ;^óµ_Î.´&º[¿ÏÉmµmÅ ] _@Ã_³Ï¾m·ðÍ H d!d!d b"b Ò²+³;ÃjÀ.ÓóÀmÂj¸»·¹³ É º[΀Ãj´Ç½m¸»ºjÍ   á¼[µ‘ÌÌ º[³Ï½€µmÀmŀ³@¿'µ ú tö Í ú tøT Í ú tö]þ Í ûmøaö5q Í ú ø M ö ×;Ém·¹¼[ÉÃdÌ º‡¿ÏÉmº‡¼]ÃjÀmŀ·»Å3Ã_¿'º[³àaÌ µ‘´-¿ÏÉmº ŀ·»¼[¿'·»µ‘À€ÃjÌ ÐjÒ ã ·¹¿ É"³'¿ÏÃuÂjº[³ V ÃjÀmÅ W Í    á¼]ÃjÀèÃ_¼ È Ú€¾m·ÔÌѺ‡¼]ÃjÀmŀ·»Å3Ã_¿'º[³Ç³Ï¾m¼[ÉLÃ_³  ¿ÏÉmº[ºjá)µ‘Ì ß¿ÏÉmµ‘¾á ·ßà0¿ÏÉmº ¾m³'º]̼[µmÌ'½€¾m³+¼[µmÀj¿ÏÃu·ÔÀm³G¿ÏÉmº]´‡Ò X ·»Â‘¾€Ì º  ³ Émµï×G³ÃjÀCº[΀Ãj´Ç½m¸»ºÇµ_൑¾€Ì;¿'µmµj¸”×;Émº]À  î µm´&º[µÄÃdÀmÅ J ¾m¸»·»º[¿‡·»³Ã_ŀµ‘½m¿'º[Å.Ãu³+¿ÏÉmº&¾m³'º]ÌG¼[µ‘ÌQÈ ½€¾m³]ÒeIÉmº&¾m³'º]Ì·»³;¿ÏÌ Ðm·ÔÀmÂÆ¿'µÇº]Àd¿'º]Ì J ¾m¸»·»º[¿]á ³;àaÃj´&µ‘¾m³ ³Ï½œº[º[¼ñÉ2f î µm´&º[µ3Í î µ‘´&º[µ×+Émº]Ì º àðµ‘Ì ºpÃjÌÑ¿0¿ÏÉmµ‘¾ î µ‘´Ëº[µ3Ò µj¿º¿ÏɀÃ_¿0Ã_¸»¸µ_à%¿ÏÉmº[³ºËÓóÀmÂj¸»·»³ÏÉC×µ‘Ì ŀ³ ÃjÌ ºL¾€À€Ì º[Âj·»³'¿'º]Ì º[Å!×µ‘ÌÑŀ³]Ò ÝaÀÞ¿ÏÉm·»³Vº[΀Ãj´Ç½m¸»ºjÍ2f î µ‘´Ëº[µ3Í î µ‘´&º[µ ×Ã_³pÃ_¸ÁÌ º]ÃuŀÐ"º]Àj¿'º]ÌѺ[Å ] ³ÏÉmµ_×+À ¶€º àðµ‘Ì º ¿ÏÉmº ½€Ìѵ‘´Ç½m¿Wá0á b Í0ÃjÀmÅO¿ÏÉmºL¾m³'º]ÌY׉ÃdÀj¿'º[Å ¿'µèº]Àj¿'º]ÌY¿ÏÉmº"Ì º[³'¿Ò ã Émº]ÀO¿ÏÉmº"¾m³'º]ÌYº]Àd¿'º]Ì ³C¿ÏÉmº Émº]Ã_Å"½€ÃjÌ ¿µ_à¡¿ÏÉmºÆ¿ÏÃjÌÑÂjº[¿0¿'º[Îm¿ê¶dÐVŀ·»Âj·»¿'³ ] d W M W!W Í ×+Ém·»¼[ÉO¼[µ‘ÌÌ º[³Ï½€µmÀmŀ³.¿'µOá ×;Émº]Ì º àïáóµ_à ×+Émº]Ì º àðµ‘Ì ºjá b Í ¿ÏÉmºL³'Ðm³'¿'º]´ º[Îm¿ÏÌ'Ãu¼[¿'³V½œµj³'³'·Á¶m¸»º"¼ñÉm¾€À€äd³.àaÌ µ‘´ ¿ÏÉmº µ‘Ì ·»Âj·ÁÀ€Ã_¸ò€É€Ãjäuº[³Ï½€º]ÃjÌÑ·ÔÃdÀ"¿'º[Îm¿]ÍÃjÀmÅ"¿ÏÉmº]ÀL³Ï¾mÂdÂjº[³'¿'³ ¼]ÃjÀmŀ·»Å3Ã_¿'º[³0¿µÇ¿ÏÉmºê¾m³º]Ì ] µ‘Àm¸»Ð‡¿ÏÉmº¼]ÃdÀmŀ·¹Å3Ãu¿'º[³0µ‘¶dÈ ¿ÏÃ_·ÁÀmº[ÅLàaÌ µ‘´-¿ÏÉmº‡¾m³'º]̼[µ‘Ì'½€¾m³pÃdÌ ºp³ Émµï×;À@·ÁÀ X ·»Â_È ¾€Ì º b Ò IÉmºC¼[µ‘Ì'ÌѺ[³Ï½€µ‘Àmŀ·ÁÀmÂè¼]ÃjÀmŀ·»Å3Ã_¿'º[³.·ÁÀÞ¿ÏÉm·»³ º[΀Ãj´Ç½m¸»º@׺]Ì º q tö]õ_ö5K/S_õ_ö ] ×·»¿ÏÉFÃL³Ï½€Ã_¼[ºWÃàð¿'º]ÌQÈ ×ÃjÌ Å€³ b Í q 
tö]õ_ö5K/S_õ_ö ÍÃjÀmÅ q 3öõ_ö5K/S_õïö(T ÒDIÉmºÇ¾m³'º]Ì ×µ‘¾m¸»ÅW¼[Émµmµj³'ºÆ¿ÏÉmºÇÖnÌѳ'¿µ‘ÀmºjÍóÃjÀmÅW¿ ɑ¾m³¿ÏÉmºpº]Àd¿ÏÌ Ð ×µ‘¾m¸»Å"¼[µ‘Àd¿'·ÁÀ‘¾mºjÒ8f0Àmº.´&·»Â‘Éd¿Ç×µ‘Àmŀº]Ì&Ã_³&¿'µY×+Émµ ×µ‘¾m¸»ÅF׉ÃdÀj¿Æ¿'µ"º]Àd¿'º]Ìpò€É€Ãdä_º[³Ï½€º]ÃdÌ[á ³‡¿'º[Îm¿.¶jÐF¾m³ È ·ÁÀmÂêÃ0´&µm¶m·¹¸»º+½€ÉmµmÀmº+½€Ì º[ŀ·»¼[¿'·»Êjº;¿'º[Α¿º]Àd¿ÏÌ Ð³'Ðm³'¿'º]´‡Ò [ µï׺[Êjº]Ì Í_³'·Á´&·»¸ÔÃd̳'·»¿Ï¾€Ã_¿'·»µ‘Àm³óµ‘¼[¼]¾€Ìó×·»¿ÏÉË¿'º[¼ñɀÀm·»¼]Ã_¸ µ‘ÌG¼[µj¸»¸»µ‘Ú€¾m·ÁÃ_¸¿'º]Ì'´&³]ÒÝaÀCÃ_ŀŀ·»¿'·»µ‘ÀÍ ³ ¾m¼ñÉCŀ· fp¼]¾m¸»¿ Ð µm¼[¼]¾€Ì ³ê¾m³Ï¾€Ãu¸¹¸»ÐL·ÁÀWÓóÃ_³'¿0²;³'·ÁÃjÀ@¼[µm¾€Àj¿ÏÌÑ·¹º[³&¶€º[¼]Ãj¾m³º ¿ÏÉmº[·ÁÌ;´ÇÃ_·ÁÀÆ¿'º[Îm¿º]Àj¿ Ì ÐĴ˺[¿ÏÉmµ‘Ň·»³0½€Ì º[ŀ·»¼[¿'·»Êjº&º[Êjº]À ×+Émº]ÀY¾m³'·ÁÀmÂÇàa¾m¸»¸ Èa³·¹Ù[º&¼[µ‘´½€¾m¿'º]Ìä_º[Ѐ¶€µ‘ÃdÌ Å€³]Ò Gµï×̀׺ɀÃ]Êdº0¿'µÇº[΀½m¸ÔÃu·ÔÀY¿ ׵ǽ€Ì µ‘¼[º[³³'º[³+·ÁÀ ŀº È ¿ÏÃ_·»¸#Gó¼]ÃjÀmŀ·»Å3Ã_¿'ºº[Îm¿ÏÌ'Ãu¼[¿'·»µ‘À.ÃjÀmÅ.ÌÃjÀ€äj·ÁÀmÂ3Ò ã ºŀ·¹³ÑÈ ¼]¾m³'³Y¿ÏÉmº[³'ºW½€Ì µm¼[º[³'³'º[³Y·ÁÀO¿ÏÉmºÛà µj¸»¸»µ_×·ÁÀm¿ ×µ"³'º[¼ È ¿'·»µ‘Àm³]Ò  wphnìóí%NQí%htsR"kWhì%íSYaNQì  `Ca "!(E ; coE @  9 f0¾€Ì&½€Ì µ‘¶m¸»º]´×Ã_³¿'µ.µ‘¶m¿ Ã_·ÁÀWÃY½€ÃjÌ ¿'·»¼]¾m¸ÁÃjÌ&¼[ɑ¾€À€ä µ‘¾m¿µuàéÃ&¿º[Α¿;µ_à%Ãj¶€µ‘¾m¿ V!` v ¶dБ¿'º[³Ò4f0Àmº&½€Ì º[ʑ·»µ‘¾m³ Ãj½€½€Ì µmÃ_¼ñÉ×Ã_³)¿'µ¾m³'º¿'º]Ì'´ º[Îm¿ÏÌ'Ãu¼[¿'·»µ‘À&´&º[¿ÏÉmµmŀ³%Ã_³ X ·»Â‘¾€Ì º W G²6ÀƲ;¸Á½€É€Ãj¶€º[¿0²;³'³'·»Â‘À€´Ëº]Àj¿µmÀ0·¹Âd·¹¿³ ii     (%u  ) N&2"*"2"        6  /+E  0 ) 0  /+E  0  " /+E  0 ;L     (%u  ) S  ! "$#&%'($ )$*&)$($      (%u  ) +/+E  0 ' 2    ($ ,  6   +E"3+E+11$  0   +E"  "  +E"3+E+11$5  - * 0 <     (%u  ) +/+E  0 '  ! "$#/.0($ )$*&)$($      (%u  ) +/+E  0 \" +E"1N.    2+3425$67 6  #* 0  # "   ) ;*  #+  0 #       (%u  ) +/+E  0 \" +E"18:"  ! "$#/90($ )$*&)$($      (%u  ) +/+E  0 \" +E"   )  $   : ; =<>?@ A*B$C6=($ DFE4G!$ :@H6=I,JKL 6  M+ 0       (%u  ) +/+E  0 \" +E"   ) NS  ! "$#&%'($ )$*&)$($      (%u  ) +/+E  0 \" +E"   ) OMO X ·»Â‘¾€Ì º  GžÓóÀj¿º]Ì ·ÁÀm  î µm´&º[µÃjÀmÅ J ¾m¸»·»º[¿]ám×·»¿ÏÉdf0¾€Ì òm·Á´Ç½m¸»ºpÓÊ_Ã_¸Á¾€Ã_¿'·»µ‘ÀCòmÐm³'¿'º]´ ] ÝQÀCÃc_@ÃjÀ€Àmº]̵uà¡Ó%ÀdÈ Âj¸»·»³ÏÉO_Cµ‘¶m·»¸»º %Émµ‘Àmº b ŀ·»³'¼]¾m³'³'º[ÅC·ÁÀ ] ;Ãjä]Ã_ÂmÃ]×Ã&ÃjÀmÅ _VµmÌ ·ðÍ V!`g`!V!b Ò [ µ_×È º[Êjº]Ì[Í_³ ¾m¼ñÉǴ˺[¿ÏÉmµ‘Å€³%ÃdÌ ºŀº[³'·»Â‘Àmº[Åp¿µ+µ‘¶m¿ Ã_·ÁÀÇÃ;¸»·Ô´êÈ ·»¿'º[ÅÀm¾€´ê¶œº]Ì&µ_à+³¿ÏÌ ·»¼[¿'¸»ÐWŀº ÖnÀmº[ÅÃdÀmÅ"³Ï½€º[¼[·ÁÃ_¸»·»Ù[º[Å ¿'º]Ì'´Ë³)à Ì µm´!ÃÉm¾mÂjº¼[µ‘Ì'½€¾m³Ã_³'³Ï¾€´Ë·ÔÀm¿ÏɀÃ_¿¿ÏÃuÂjÂjº]Ì ³ ×µ‘Ì'䯽€Ì µ‘½œº]Ì ¸»ÐjÒ [ µ_׺[Êjº]Ì[Ím·ÁÀ‡µ‘¾€Ì‰¼]Ã_³'ºjÍtסº0³ Émµ‘¾m¸»Å Àmµj¿‡Ã_³'³Ï¾€´Ëº.¿ÏɀÃ_¿ÆÃjÀjÐ"¸ÁÃjÀmÂm¾€Ã_Âjº.¿µ‘µj¸»³Æ×·»¸»¸0סµmÌ'ä ½€Ì µ‘½œº]Ì ¸»Ðj̀¶€º[¼]Ãj¾m³'º;µ‘¾€Ì)¿ÏÃdÌ Âjº[¿·»³%¿ÏÉmº Q3ýõ_ö P3øT[ú ö]õ_ö]÷ qDSïõ_÷T ÒJIGÉmº]Ì º à µmÌ ºj̀³'·ÁÀm¼[º&µ‘¾€Ì¿ÏÃdÌ Âjº[¿·»³+¿'µ·ÔÀm¼[¸Á¾mŀº Àmµ‘ÀdÈa³'º[Âm´&º]Àj¿º[ŸÁÃjÀm‘¾€Ã_Âdº[³]Í]µm¾€Ì”´&º[¿ Émµ‘ųÏÉmµ‘¾m¸»Å궜º ³'¿ÏÌÑ·ÔÀmÂÆ¶€Ã_³'º[Å Ò ÝaÀd¿Ï¾m·»¿'·»Êjº[¸»ÐjÍ ×;ɀÃ_¿+׺0×ÃjÀd¿+Émº]Ì º&·»³àðµ‘Ì¿ Émº¿'º[Îm¿ ¼[ɑ¾€À€äC¿µV¶€º QRTö]÷ K?r q K þTú;S9P3ö[ú 3ö]õ ÒN;Àmŀº]Ì¿ÏÉmº ¼[µ‘Àm³'¿ Ì'Ã_·ÁÀj¿)¿ÏɀÃ_¿)¿ÏÉmº¾m³'º]Ì)¼[µ‘Ì'½€¾m³·»³³Ï´ÇÃu¸¹¸ðÍ_×+ɀÃu¿”׺ ´ÇÃÐC¸»µ‘µ‘ä Ã_¿0·»³ê¿ Émº õ_ö'ôtö[úÜø úÜø0Sïý µ_à³'¿ÏÌ ·ÁÀmÂj³Ò [ µ_×È º[Êjº]Ì[Íd·ßàI׺º[Îm¿ÏÌ'Ãu¼[¿%Ã_¸»¸nÌ º]½œº]Ã_¿'º[ÅÆ³'¿ÏÌ ·ÁÀmÂj³]Íd¿ÏÉmº+Àm¾€´È ¶€º]Ì;µuà%¼]ÃjÀmŀ·»Å3Ã_¿'º[³0º[΀½m¸¹µmŀº[³ÃjÀmÅC¿ÏÉmº&¾m³'º]Ì×G·¹¸»¸%¶œº àðµ‘Ì ¼[º[ÅO¿'µè¸»µ‘µ‘äFÃ_¿Y·ÁÀ€Ã_ŀº[Ú ¾€Ãu¿'ºW¼[Ém¾€À€äj³ ] ¶€º[¼]Ãj¾m³º ×+Émº]À '¿ Émº Ì º]½œº]Ã_¿'³]Í'¿ÏÉ Fµ‘Ì '¿FÃjÌ º"Ã_¸»³'µOÌ º È ½€º]Ãu¿'º[Å b Ò IÉmº]Ì º àðµ‘Ì ºjÍ׺‡Å€º[¼[·»Å€º[ÅF¿'µCº[Α¿ Ì'Ã_¼[¿&¿ÏÉmº MOK ü3ø MOK?r õ_ö'ôtö K úÜö]÷2ô3õ_ö5KøaütöT Ò-²8´ÇÃuΑ·Á´ÇÃ_¸&Ì º È ½€º]Ãu¿'º[ÅV½€Ì º Ö3··»³ÃÇÌѺ]½€º]Ã_¿'º[ÅY³'¿ÏÌ ·ÁÀmÂÆ¿ÏɀÃ_¿ŀµmº[³0Àmµj¿ µm¼[¼]¾€Ì+Ã_³¿ÏÉmº&½€Ì º Ö3ÎÆµ_àéÃdÀmµj¿ÏÉmº]̽€Ì º Ö3·³¿ÏÌ ·ÁÀmÂ3Ò X µm̺[΀Ãj´Ç½m¸»ºjÍj·ßàÏÃd¶€Ì'Ã_¼]Ã_Å3Ãj¶€Ìà 
+·»³ó¿ÏÉmº+¾m³'º]Ìé¼[µ‘ÌQÈ ½€¾m³]Í3ÃdÀmÅ.¿ÏÉmº¾m³'º]̉º]Àj¿'º]ÌѺ[Å Ïà t͑¿ Émº]À Ãd¶€Ì'à ] V!b ͑Ãj¶€Ì ] V!b ̀Ãj¶ ] V!b Í Ã ]  b ÃjÌ ºY¿ÏÉmº@Ì º]½€º]Ãu¿'º[ÅO³'¿ÏÌÑ·ÔÀmÂd³p¿ÏɀÃu¿p¼]ÃjÀF¶œº@¿ÏÉmºC¼]ÃjÀdÈ Å€·»Å3Ã_¿'º[³]Ò ²6´Ëµ‘Àm¿ÏÉmº[³ºjÍ  Ãj¶€Ì 3Í ÏÃj¶ tÍ6ÃdÀmÅ Ïà  ] ¿ ×·»¼[º b µ‘¼[¼]¾€ÌÆÃ_³Æ¿ÏÉmºV½€Ì º Ö3Îèµ_à0¿Ü×µ@µm¼[¼]¾€Ì'Ì º]Àm¼[º[³ µ_à ÏÃj¶€Ì'à 3Ò\IÉmº]Ì º àðµ‘Ì ºjÍm׺0º[¸¹·Á´&·ÁÀ€Ã_¿ºê¿ Émº[³'ºj̀×+Ém·»¼[É ¸»º]ÃÊjº[³G Ãd¶€Ì'à ] V!b ͑à ] Wgb Ò ²0Àmµj¿ÏÉmº]Ì)¿ ×µ Ïà +µm¼[¼]¾€Ì'Ì º[ÅpÃu³ž½€ÃjÌÑ¿µ_à ÏÃd¶€Ì'à 3Í_¶€¾m¿ ·»¿pµm¼[¼]¾€Ì'Ì º[ÅOÃ_³Æ¿ÏÉmº ôRSET[ú#KøQü µuà ÏÃj¶€Ì'à tÍ%³'µtÍ%·»¿Æ·¹³ ³ÏÉmµ_×+À@Ãd´&µ‘Àm‡¿ÏÉmºÆ¼]ÃjÀmŀ·»Å3Ã_¿'º[³]ÒÆ²;³¿ÏÉmº‡´ÇÃ_Îm·Á´ÇÃ_¸ Ì º]½œº]Ã_¿'º[Ň½€Ì º Ö3Αº[³ó¼]ÃjÀǶœº+ڀ¾m·»¼ñäd¸»ÐǺ[Α¿ Ì'Ã_¼[¿'º[Åp¾m³·ÔÀm ¿ÏÉmº³Ï¾gfpÎ.ÃjÌ'ÌÃ]Ð ] _@ÃjÀm¶€º]ÌÃdÀmÅ _CÐjº]Ì ³]Í H d!d W!b Ím¿ÏÉmº ³'Ðm³'¿'º]´ º[Î ½m¸ÁÃ_·ÁÀmº[Å"·ÁÀY¿ÏÉmºÇ½€Ì º[Êm·¹µm¾m³³'º[¼[¿'·»µ‘ÀY¿ÏÌÃjÀm³ È àðµ‘Ì'´&³¿ÏÉmº0¾m³'º]Ìé¼[µ‘Ì'½€¾m³·ÁÀj¿'µ&Ã0³Ï¾gfÆÎËÃdÌ'Ì'ÃÐ×+Émº]À·»¿ ·»³+·ÁÀm·»¿'·ÁÃ_¿'º[Å Ò  `_ c 9!P(@9A> YÃjÀmŀ·»Å3Ã_¿º[³ ÃjÌ ºFŀ·»³Ï½m¸ÁÃÐjº[Å ·ÔÀ Ã!¼[º]Ì ¿ÏÃu·ÔÀ µ‘Ì ŀº]Ì[Ò IÉm·»³µ‘ÌÑŀº]Ì%·»³ŀº[¿º]Ì'´&·ÁÀmº[ÅV¶dÐpÃjÀƺ[Ê_Ã_¸Á¾€Ã_¿'·»µ‘Ààa¾€Àm¼ È ¿'·»µ‘ÀÍ0ÃjÀmÅO׺C¼[Émµj³'ºC¿ÏɀÃ_¿‡·»¿V¶œº@ŀµ‘ÀmºW¾m³·ÔÀmÂè¿ÏÉmº   _ ] ½€ÌѺ[ŀ·¹¼[¿·¹µmÀ"¶jÐW½€ÃdÌ ¿'·ÁÃ_¸´ÇÃ_¿'¼[É b àaÌ'Ãj´Ëº[סµmÌ'ä ] ^óº[¸»¸.º[¿Ã_¸ðÒÁÍ H d!d `!b Ò ã ºFŀº[¼[·»Å€º[Åå¿'µ¾m³ºO¿ÏÉmº   _ ¶€º[¼]Ãj¾m³ºæ¿ÏÉmºÞ¼[µ‘Àj¿º[Α¿F½€Ì µ_ʑ·»Å€º[³¿ ÉmºØ¶œº[³'¿ ·ÁÀdà µmÌ'´ÇÃ_¿'·»µ‘À à µ‘̳'º[¸»º[¼[¿'·ÁÀmÂOÃ_ŀº[ڀ¾€Ã_¿'º@¼]ÃjÀmŀ·»Å3Ã_¿º[³]Ò   _ ·ÁÀj¿º[‘Ì'Ã_¿'º[³Æ³Ï¾m¼[ÉFÃ@¼[µ‘Àm¼[º]½m¿.¶dÐ"·ÁÀj¿º]Ì'½€µj¸ÁÃ_¿ÑÈ ·ÁÀmÂp¿ Émº³'¿ÏÃu¿'·»³'¿'·»¼[³µ‘¶m¿ÏÃu·ÔÀmº[ÅYàaÌ µ‘´9¿ÏÉmº&¾m³'º]̼[µmÌ'½€¾m³ ÃjÀmÅY¿ÏÉmº·ÁÀm·»¿'·ÁÃ_¸¸ÁÃjÀm‘¾€Ã_ÂdºË´Ëµ‘Å€º[¸ µm¶m¿ÏÃ_·ÁÀmº[ÅYà Ì µm´>à Ém¾mÂjºÇ¼[µ‘Ì'½€¾m³Ò \_ ¸ÔÃdÀm‘¾€Ã_Âjºp´Ëµ‘Å€º[¸»³êɀÃÊjºÇ¶œº[º]À ¾m³'º[ÅC·ÁÀ@´&¾m¼ñÉYµ_à%¿ Émºêº]ÃdÌ ¸»·¹º]Ì;×µ‘Ìäj³ ] ã ÃjÌÑÅ.º[¿0Ã_¸ðÒÁÍ V!`!`g`!b ] I)ÃjÀ€ÃjäÃÈÜÝð³ÏÉm·»·º[¿‡Ã_¸ðÒÁÍ Vg`!`!V!b ÍÃjÀmÅFɀÃ_³p¶œº[º]À à𵑾€ÀmÅW¿µ.¶€º‡³Ï¾€½œº]Ì ·»µ‘Ì¿'µ.µj¿ Émº]Ì0¸ÁÃjÀm‘¾€Ã_Âdºp´&µmŀº[¸¹³ ³Ï¾m¼[ÉWÃu³Ö3Îmº[Å"ÀdÈa‘Ì'Ãj´&³&ÃjÀmÅL¼[µ_Èaµ‘¼[¼]¾€ÌÌ º]Àm¼[º Èܶ€Ã_³'º[Å ´&º[¿ÏÉmµmŀ³ ] _@Ãj̾mБÃj´Çú[¿+Ã_¸ðÒÁÍ V!`g` Hb Ò   _ ¼]ÃjÀ@¶œºÇ³'·»¿Ï¾€Ã_¿'º[Å"Ã_³0Ê_ÃjÌ ·ÁÃjÀd¿êÀdÈa‘Ì'Ãd´ ¸ÔÃdÀdÈ Â‘¾€Ã_Âdº‰´Ëµ‘Å€º[¸»³]Ò%Ý ¿·ÁÀj¿'º]̽€µj¸ÁÃ_¿'º[³¿ÏÉmºÀdÈa‘ÌÃj´ ¼[µ‘¾€Àd¿'³ ·ÁÀ‡¿ÏÉmº¾m³º]̼[µ‘Ì'½€¾m³+ÃdÀmÅ.¿ÏÉmº0³'¿ Ã_¿'·»³'¿'·»¼[³·ÁÀ‡¿ Émº¶€Ã_³'º ŀ·»¼[¿'·»µ‘À€ÃjÌ ÐjÒ5IÉmºàðµj¸»¸»µï×G·ÔÀmÂ;àðµ‘Ì'´&¾m¸ÁÃG·¹³¾m³'º[Å&¿'µº[³¿'·ßÈ ´ÇÃ_¿º%Ã+½€Ì µm¶€Ãj¶m·»¸¹·»¿ Ð0àðµ‘Ì5¿ÏÉmº ø ¿ÏÉ;º[¸»º]´&º]Àd¿ qÍ ] qb G  ] qbk     Q    ] qb ] W!b [ º]Ì ºjÍ p Ít¿ÏÉmº&µ‘Ì ŀº]Ì Í3·ÁÀmŀ·»¼]Ã_¿'º[³0¿ÏÉmºÇÀm¾€´&¶€º]ÌGµ_à%º[¸»º È ´&º]Àd¿'³¶€º àðµ‘Ì º q¿ÏɀÃ_¿ÃjÌ º0¾m³'º[Ň·ÁÀÇ¿ÏÉmº;¼]Ã_¸»¼]¾m¸ÁÃ_¿'·»µ‘À µ_à  ] qb Ò X µ‘̺[΀Ãj´Ç½m¸»ºjÍ ] qb ·»³0º[³'¿'·Á´ÇÃ_¿'º[ÅYµ‘À ¿ÏÉmºC¶€Ã_³'·»³‡µ_à0¿ÏÉmºCµ‘¼[¼]¾€Ì'ÌѺ]Àm¼[ºVµuà q  ÃdÀmÅ q Ò p MOK ü ·¹³¿ÏÉmºp¸»º]ÀmÂj¿ ÉLµ_à6ÃÊïÃ_·»¸ÁÃj¶m¸»º.¼[µmÀj¿'º[Îm¿ÇÃjÀmÅL·¹³ ³'º[¿0Ã_¿  Í·ÁÀ.µm¾€Ì;³'¿Ï¾mŀР] ¿ÏÉm¾m³0´ÇÃ_Îm·Ô´Ã_¸»¸¹Ð ÈaÂmÌ'Ãj´&³ ÃjÌ º0¼[µmÀm¼[º]Ì'Àmº[Å b Ò  ] qb ·»³;¼]Ã_¸»¼]¾m¸ÁÃ_¿'º[ÅVÃu³G   ] qbk ù  ] qb   ] b ×+Émº]Ì º  ·¹³¿ÏÉmºà ÌѺ[Ú ¾mº]Àm¼[ÐCµ_à¿ÏÉmº&¼]¾€Ì'Ì º]Àd¿ p º[¸»º È ´&º]Àd¿+¼[µmÀj¿'º[Îm¿]Í ÃjÀmÅ ù  ] qb ·»³0¿ÏÉmºà Ì º[ڀ¾mº]Àm¼[ÐC×·»¿ÏÉ ×+Ém·»¼[É qµ‘¼[¼]¾€Ìѳp·ÁÀè¿ÏɀÃ_¿&¼[µ‘Àd¿'º[Îm¿]Ò  ] qb ŀº È ³'¼]Ì ·Á¶œº[³ÛÃ趀Ã_³'ºL·ÁÀm·¹¿·ÔÃu¸Ë½€Ìѵ‘¶€Ãj¶m·»¸»·¹¿ ÐæÃu³'³Ï¾€´&·ÁÀmÂOÀmµ ¼[µ‘Àd¿'º[Îm¿]Ò!ÝaÀFµ‘¾€ÌƼ]Ãu³'ºjÍ q·»³CÃWÌ º[Âd·¹³¿'º]Ì º[ÅO×µ‘Ì Å µ‘ÌÃjÀW¾€À€ÌѺ[Âj·»³'¿'º]Ì º[Å"¼[ɑ¾€À€ätÒ X µmÌÃjÀL¾€À€Ì º[Âj·»³'¿'º]ÌѺ[Å ¼[ɑ¾€À€ätÍ_¿ÏÉmº%·ÁÀm·»¿'·ÁÃ_¸3½€Ì µm¶€Ãj¶m·»¸¹·»¿ м]ÃdÀ€Àmµj¿ž¶œº%µ‘¶m¿ Ã_·ÁÀmº[Å ¶€º[¼]Ãd¾m³'º¿ÏÉmº×µ‘Ì Å0·»³ Qtýõïö Pœø0T[ú 
ö]õ_ö]÷ ÒBIÉmº]ÌѺ à µ‘ÌѺjÍ]׺ ³'º[¿¿ Émº0·ÔÀm·»¿'·ÁÃ_¸%½€Ìѵ‘¶€Ãj¶m·»¸»·¹¿ Ð.Ã_¿;ü[µmÀm³'¿ÏÃjÀd¿ÊïÃu¸Ô¾mºjÒ X µm̵j¿ Émº]Ì p Í  ] qb ·»³0¼]Ã_¸»¼]¾m¸ÔÃu¿'º[ÅCàaÌ µ‘´D³'¿ Ã_¿'·»³ È ¿'·»¼[³‡µ‘¶m¿ÏÃu·ÔÀmº[Å à Ì µm´ ¿ÏÉmº@¾m³'º]ÌÆ¼[µmÌ'½€¾m³]Ò [ º]Ì ºjÍGº[¸ßÈ º]´&º]Àd¿'³L¼]ÃjÀ€Àmµj¿"¼[µ‘Ì'ÌѺ[³Ï½€µ‘ÀmÅD¿'µ ×µ‘Ì ŀ³]ÍÆ¶€º[¼]Ãj¾m³º ¿ÏÉmºp¾m³º]Ì0¼[µ‘Ì'½€¾m³·»³ê¾€À€ÃjÀ€Ã_¸»ÐmÙ[º[Å Ò IÉmº]Ì º àðµ‘Ì ºjÍ5¼[µ‘ÀdÈ ¿'º[Îm¿Ï¾€Ã_¸@º[¸»º]´&º]Àd¿'³ÃjÌѺ¼[µ‘¾€Àd¿'º[Å ¶jÐ ¼[ɀÃjÌ'Ãu¼[¿'º]Ì ³]Í ×+Émº]Ì º]Ãu³ê¿ Émºp¼]¾€Ì'ÌѺ]Àj¿&º[¸»º]´&º]Àd¿ê·ÁÀLڀ¾mº[³'¿'·»µ‘À ·»³Ç¿ÏÉmº ¾€À€Ì º[Âj·»³'¿º]Ì º[Å@¼[ɑ¾€À€äYµ‘Ì;¿ÏÉmºÆÌ º[Âj·»³'¿'º]Ì º[Šŀ·¹¼[¿·¹µmÀ€ÃjÌ Ð ×µ‘Ì Å Ò X ·ÁÀ€Ã_¸»¸»ÐjÍ Q  ·¹³Çà סº[·»Â‘Éd¿'·ÁÀmÂ@½€Ì µ‘¶€Ãj¶m·»¸»·»¿ ÐW¿ÏɀÃu¿0·¹³ ´&¾m¸»¿'·Á½m¸¹·»º[ŶdÐ  ] qb ¿'µÆµ‘¶m¿ÏÃ_·ÁÀY¿ÏÉmºÖnÀ€Ã_¸ ] qb Ò IÉmº]Ì ºCɀÃÊjºÄ¶œº[º]ÀF´ÇÃjÀdÐW³'¿ ¾mŀ·¹º[³Æµ_à Q  ] I º]ÃdɀÃjÀÍ V!`!`g`!b Í ÃjÀmÅ@׺pɀÃÊjº¼ñÉmµj³º]ÀC¿'µ.¾m³'º  _.Èa² ] ^óº[¸»¸ º[¿%Ã_¸ðÒÁÍ H dgd `!b Í¿ÏÉmº³'·Á´Ç½m¸»º[³'¿àðµ‘Ì'´ Íj¶€º[¼]Ãj¾m³'ºGµ‘¾€Ì½€Ì º È ¸»·Á´&·ÁÀ€ÃjÌ ÐCº[Î ½œº]Ì ·Á´&º]Àd¿'³³ÏÉmµ_׺[ÅVÀmµÆ³'·»Â‘Àm·ßÖ3¼]ÃjÀd¿ŀ·ßà„È àðº]Ì º]Àm¼[º·ÁÀǽ€º]ÌÜà µ‘Ì´ÇÃjÀm¼[ºÃj´&µ‘ÀmÂ0¿ Émº¡Ê_ÃjÌ ·»µ‘¾m³´&º[¿ÏÉdÈ µmŀ³×º0¿ÏÌ ·»º[Å Ò  v ^ hZYaKóhtsuNQR ì (`Ca  = E*E @69Z>AB G³'·ÁÀm‡¿ÏÉmº&¿'µ‘µd¸ ŀº[³'¼]Ì ·Á¶€º[ÅL·ÁÀ  W Í׺º[΀Ãj´&·ÁÀmº[Å@¿'µ ×+ɀÃ_¿5º[Α¿'º]Àd¿ ¿ÏÉmº½€Ì º[ŀ·»¼[¿'·»Êjº¿'º[Îm¿ º]Àj¿ Ì Ð0³'Ðm³'¿'º]´F×Ã_³ ·Á´Ç½€Ì µ_Êjº[Å Ò IÉmº‡º[΀½€º]Ì ·Á´&º]Àd¿Æ×‰Ãu³êŀµmÀmº.¶jÐWÃd¾m¿'µ_È ´ÇÃ_¿·¹¼]Ãu¸¹¸»Ð‡º]Àd¿'º]Ì ·ÁÀmÂÆ¿'º[³'¿¿'º[Îm¿·ÁÀj¿'µµ‘¾€Ì³'Ðm³'¿'º]´‡Ò IÉmºÆ¿'º[³'¿¿'º[Α¿×Ã_³½€Ì º]½€ÃjÌ º[Å"Ã_¼[¼[µ‘ÌÑŀ·ÔÀmÂC¿'µY¿ÏÉmº àðµj¸»¸¹µ_×·ÁÀmÂÆ³'¿ÏÃ_Âdº[³]Ò H Ò²;¸»¸%¿'º[Îm¿'³&¾m³'º[ÅW·ÁÀ  V ׺]Ì ºÇ³º]½€ÃjÌ'Ã_¿'º[Å@Ã_¿¿ÏÉmº ³º]Àj¿'º]Àm¼[º¸»º[Êjº[¸ðÒ V Ò%òmº]Àd¿'º]Àm¼[º[³@׺]Ì ºL³'µ‘ÌÑ¿'º[Åæ·ÁÀd¿'µOÃFÌ'ÃjÀmŀµ‘´ µ‘ÌQÈ Å€º]Ì)¿µ6´&º]Ãu³Ï¾€Ì º¿ÏÉmºÃÊjº]Ì'Ã_Âjºó¼]Ãj½€Ãj¶m·»¸»·»¿ÜÐ&µ_à¿ÏÉmº ´Ëº[¿ÏÉmµ‘Å Ò W Ò X ·ÁÌ ³¿  `!`!` ¶dÐm¿'º[³×º]Ì º¿ÏÃjä_º]À&Ã_³)¿ÏÉmº¿'º[³'¿¿'º[Îm¿]Ò X µ‘Ì J Ãj½€ÃdÀmº[³'ºjÍ)¿ÏÉm·»³¿'º[³'¿¿'º[Îm¿0׉Ãu³Ã_¸»¸¡´ÇÃjÀm¾dÈ Ãu¸¹¸»ÐCÃjÀ€Ã_¸»Ð‘Ù[º[ÅCÃjÀmÅ.¼]¾m¿;·ÁÀj¿µp×µ‘Ì ŀ³ÒJIÉmºêÌѺ[³'¿ µuà¿ÏÉmºC¿'º[Α¿‡×Ã_³‡¾m³'º[ÅOÃ_³‡¿ÏÉmºC¸»º]ÃjÌ'Àm·ÁÀmÂè¼[µ‘ÌQÈ ½€¾m³%¶dÐŀ·ih º]Ì º]Àd¿'·ÁÃ_¿'·ÁÀm¿ Émº¡³·¹Ù[ºGµ_àn¿ÏÉmº¼[µ‘̽€¾m³]Ò IGÉmº¡³ ´ÇÃ_¸»¸»º]Ì)¸¹º]ÃdÌ'Àm·ÁÀm¼[µ‘Ì'½€¾m³)×Ã_³)·ÔÀm¼[¸Á¾mŀº[ÅÆ·ÁÀ ¿ Émº0¸ÔÃdÌ Âjº]̼[µ‘̽€µ‘Ì'ÃœÒ òm¿ÏÃ_Âdº[³ V ÃdÀmÅ W ׺]Ì ºÌ º]½œº]Ã_¿'º[ÅYÖ3Êjº0¿'·Á´&º[³;³'µÆ¿ÏɀÃ_¿ Ö3ÊjºYŀ·h º]Ì º]Àd¿‡³'º[¿'³Æµ_à¿ÏÉmºY¿'º[³'¿‡ÃjÀmÅ¿ ÉmºV¸»º]ÃjÌÀm·ÔÀm ¼[µ‘Ì'½œµ‘Ì'ÃY׺]Ì ºÇµ‘¶m¿ Ã_·ÁÀmº[Å Ò@²;¸¹¸µ_à;¿ÏÉmºpÂmÌ'Ãj½€Ém³&¿ÏɀÃ_¿ àðµj¸»¸¹µ_× ³ÏÉmµ_×D¿ÏÉmºVÃÊjº]Ì'ÃuÂjºpµuà6¿ÏÉmºY¿'º[³'¿pÌѺ[³Ï¾m¸»¿'³Çàðµ‘Ì ¿ÏÉmº[³'ºÖ3Êjº&ŀ·ih º]Ì º]Àd¿0³'º[¿'³ ] Ö3Êjº Èa¿·Ô´Ëº[³0¼]Ì µj³'³;ÊïÃ_¸»·»Å3ÃÈ ¿'·»µ‘À b Ò IÉmºWÃd¾m¿'µ‘´ÇÃ_¿·¹¼C¿'º[Îm¿‡º]Àd¿ÏÌ ÐO½€Ì µm¼[º[º[ŀº[ÅæÃ_³‡àðµj¸ßÈ ¸»µ_׳]Ò X ·ÁÌ ³'¿]̀ÃjÀÇÃd´ê¶m·»Â‘¾mµm¾m³¡³º[Ú ¾mº]Àm¼[º0¼[µ‘ÌÌ º[³Ï½€µmÀmÅmÈ ·ÁÀmÂC¿'µ‡¿ÏÉmº&ÖnÌ ³¿×µ‘Ì ÅC×Ã_³0º]Àd¿'º]Ì º[Å ÒOI×µ‡¸»·¹³¿'³µ_à ¼]ÃjÀmŀ·»Å3Ã_¿'º[³;׺]Ì º0¿ÏÉmº]À ³ÏÉmµ_×+À‡¶jÐÆ¿ÏÉmº0³'Ðm³'¿'º]´ ̀µ‘Àmº ¸»·»³'¿0µ‘¶m¿ÏÃ_·ÁÀmº[ÅYàaÌ µ‘´D¿ÏÉmº&×µ‘Ì Ňŀ·»¼[¿'·»µ‘À€ÃdÌ ÐjÍ ÃjÀmÅC¿ÏÉmº µj¿ÏÉmº]̞à Ìѵ‘´ ¿ÏÉmº+¾m³'º]Ìó¼[µ‘̽€¾m³]Ò\IÉmº+³¿ÏÌ ·ÁÀm¿ÏɀÃu¿¼[µ‘ÌQÈ Ì º[³Ï½œµ‘Àmŀº[Å¿µV¿ ÉmºÄ½€Ì º Ö3ÎLµ_à+¿ Émº‡¿ÏÃjÌ Âjº[¿¿'º[³'¿&¿'º[Îm¿ ×Ã_³¿ÏÉmº]ÀƼ[Émµj³'º]À ] ¿ÏÉmº0¼[µ‘ÌÌ º[¼[¿¼]ÃjÀmŀ·»Å3Ã_¿'º b Ò ã Émº]À ¿ÏÉmº]Ì º סº]ÌѺ۴&¾m¸»¿'·Á½m¸»º"¼[µ‘Ì'Ì º[¼[¿‡¼]ÃdÀmŀ·¹Å3Ãu¿'º[³@Ãj´&µ‘Àm ¿ÏÉmº&¿ ׵Ǹ»·¹³¿'³]Í ¿ ÉmºêÉm·»Â‘Émº[³'¿&Ì'ÃjÀ€ä_º[ÅY¼]ÃjÀmŀ·»Å3Ã_¿'ºÇ×Ã_³ ¼[Émµj³'º]ÀÒ.Ý à+¿ סµY¼[µ‘Ì'Ì º[¼[¿¼]ÃjÀmŀ·»Å3Ã_¿'º[³&׺]Ì ºÆº[Ú ¾€Ã_¸»¸»Ð Ì'ÃjÀ€äuº[ŵ‘À¿ÏÉmº%¿ ×µ¸»·»³'¿'³]Í_¿ÏÉmºó¸»µ‘ÀmÂjº]Ì)¼]ÃjÀmŀ·»Å3Ã_¿'º×Ã_³ ¼[Émµj³'º]ÀÒ (`_ =FBo7 E B X ·ÁÌ 
³'¿]Í_׺¼[µ‘Àm³'·»Å€º]Ìó¿ÏÉmºÌ'Ã_¿º¡µuàI¾€À€Ì º[Âj·»³'¿º]Ì º[ÅÇ×µ‘Ì ŀ³ ¶€º[·ÁÀm ³Ï¾m¼[¼[º[³'³ àa¾m¸»¸»Ð º]Àj¿'º]ÌѺ[Å ×·»¿ÏÉ µ‘¾€ÌF´&º[¿ÏÉmµmÅ Ò X ·»Â‘¾€Ì º Þ³ÏÉmµ_׳@¿ÏÉmºOÌ º[³Ï¾m¸»¿'³Làðµ‘ÌLÓ%ÀmÂj¸»·»³ÏÉ ] ¸»º àð¿ b ÃjÀmÅ J Ãj½€ÃjÀmº[³'º ] ÌÑ·¹ÂmÉj¿ b ̀×+Émº]Ì º¿ÏÉmºêÉmµ‘ÌÑ·¹Ù[µmÀj¿ÏÃu¸žÃ_Îm·»³ ³ÏÉmµ_׳¿ Émºê¾m³'º]ÌG¼[µ‘Ì'½€¾m³³·¹Ù[ºÇÃjÀmÅY¿ÏÉmºÊdº]Ì ¿'·»¼]Ã_¸”Ã_Îm·»³ ³ÏÉmµ_׳ǿ Émº.Ì'Ã_¿'º‡µ_à;³Ï¾m¼[¼[º[³'³ àa¾m¸»¸¹ÐFº]Àd¿'º]Ì º[ž€À€Ì º[Âd·¹³ÑÈ 0 10 20 30 40 50 60 70 80 0 20 40 60 80 100 Rate of SUccessfully Entered Unregistered Words Kbytes Adventure Of Sherlock Homes Chat-E Patent for CreatingCommunity The Merchant of Venice RFC1459 0 10 20 30 40 50 60 70 80 0 20 40 60 80 100 Rate of Successfully Entered Unregistered Words Kbytes Genji Neko Chat RFC1459J Patent X ·»Â‘¾€Ì º  Gò€¾m¼[¼[º[³'³ àa¾m¸»¸¹Ð@ÓóÀj¿'º]ÌѺ[ÅN+À€Ì º[Âd·¹³¿'º]Ì º[Å ã µ‘Ì ŀ³+ÃjÀmÅ Y‰µ‘Ì'½€¾m³+òm·»Ù[º ¿'º]Ì º[ÅÞסµmÌ Å€³  Ò>²0¶€µ_Êjº V!` v ¶dÐm¿'º[³]Í V!` ¿µ M  O µ_à¿ÏÉmºp¾€À€ÌѺ[Âj·»³'¿'º]Ì º[ÅL×µ‘Ì ŀ³0׺]Ì ºÇ³Ï¾m¼[¼[º[³³ à ¾m¸»¸»ÐLº]ÀdÈ ¿'º]Ì º[Å Ò0º]½œº]Àmŀ·ÔÀmÂÆµ‘À&¿ Émº6Ãj´Ëµ‘¾€Àj¿µuàI¿ Émº6¾€À€Ì º[Âd·¹³ÑÈ ¿'º]Ì º[Ň¿º[Α¿]Ím¿ÏÉmº&Ì'Ã_¿'º;×Ã_³+Ã_³Ém·»Â‘ÉYÃ_³+ÃjÌ µm¾€ÀmÅNM  O Ò ²;¸¹¸¾€Àm³Ï¾m¼[¼[º[³'³Ñà ¾m¸ ¼]Ãu³'º[³×º]Ì º0¼]Ãj¾m³'º[Å.¶dÐÆ¸¹µ_×Oµm¼ È ¼]¾€Ì'Ì º]Àm¼[ºLµ_àp¾€À€Ì º[Âj·»³'¿º]Ì º[Åæ×µ‘ÌÑŀ³‡·ÔÀÞ¿ÏÉmº@¸»º]ÃjÌÀmº[Å ¾m³'º]̼[µmÌ'½€¾m³]Ò\_Cµ‘Ì º½€ÌѺ[¼[·¹³º[¸¹ÐjÍ G³'º]ÌÆ¼[µ‘̽€¾m³‡³'·»Ù[º@·»³‡·ÔÀm³ ¾gfp¼[·»º]Àj¿]Ò IÉmºC¿'º[Îm¿ ×G·¹¿ ÉF¸»µï×g¾€À€ÌѺ[Âj·»³'¿'º]Ì º[ÅO×µ‘ÌÑÅ Ì'Ã_¿ºV¿º]Àmŀ³.¿'µ ɀÃÊjº0¿ Ém·¹³½€Ìѵ‘¶m¸»º]´ ] à µm̺[Î Ãd´Ç½m¸»ºjÍ [ µj¸Á´&º[³;µ‘Ì Ãu¿'º]Àj¿·ÁÀ.ÓóÀmÂj¸»·»³ÏÉ b Ò ;À€Ì º[Âj·»³'¿º]Ì º[Å×µ‘Ì ŀ³&µm¼[¼]¾€Ì'Ì º[Å Sïý r þ S_ýnù[ö ·ÁÀ ¿ Émºp¾m³'º]̼[µ‘Ì'½€¾m³]ÒÆ²;³µ‘¾€Ì¼]ÃjÀmŀ·»Å3Ã_¿'ºÆº[Α¿ Ì'Ã_¼ È ¿·¹µmÀ"´&º[¿ÏÉmµmÅ·»³p¶€Ã_³'º[Å赑À"Ì º]½œº]Ã_¿'º[Å"³'¿ÏÌÑ·ÔÀmÂd³]Í ¼]ÃdÀmŀ·¹Å3Ãu¿'º[³µ_൑Àmº0µm¼[¼]¾€Ì'Ì º]Àm¼[º¼]ÃdÀ€Àmµj¿¶€ºº[ÎdÈ ¿ Ì'Ã_¼[¿'º[Å Ò ²;ŀŀ·¹¿·¹µmÀ€Ã_¸»¸¹Ðj͜×+Émº]À&Àmµ0¿ÏÃjÌ Âjº[¿¾€À€Ì º[Âj·»³'¿º]Ì º[ÅÇ×µ‘Ì ŀ³ µm¼[¼]¾€Ì'Ì º[Å&·ÁÀ¿ Émº¡¾m³'º]Ì)¼[µ‘̽€¾m³]ÍjÀmµj¿ÏÉm·ÁÀmÂ;¼]ÃjÀ&¶€ºŀµ‘Àmº ×·»¿ÏÉCµm¾€Ì´&º[¿ÏÉmµmÅ Ò [ µ_׺[Êjº]Ì[Í·ßà+·»¿êÃj½€½œº]ÃjÌ ³µ‘Àm¼[ºjÍ ×ºW´&·»Â‘Éd¿V¶œºWÃj¶m¸»ºW¿µ"µ‘¶m¿ÏÃ_·ÁÀF·»¿C¼[µ‘´Ç½€ÃjÌÑ·ÔÀmÂè¿ÏÉmº ³'¿ÏÌÑ·ÔÀmÂ"×G·¹¿ É¿ Émº.ŀ·»¼[¿'·»µ‘À€ÃjÌ ÐOÃ_¸»¿ÏÉmµm¾m‘Éè¿ÏÉmº]Ì ºC³'¿'·»¸»¸ ¸»·»º[³¿ÏÉmº0½€Ì µ‘¶m¸»º]´Dµ_à”Émµ_×F¿'µº[Ê_Ã_¸Á¾€Ã_¿'º0¿ÏÉmº[·ÁÌó·Á´Ç½€µ‘ÌÜÈ ¿ÏÃjÀm¼[ºjÒ5IÉm·»³6½œµj·ÁÀj¿·»³;µ‘Àmº0µ_൑¾€Ìóàa¾m¿Ï¾€Ì º0×µ‘Ì'ätÒ ²;³¿ÏÉm·»³êÌѺ[³Ï¾m¸»¿×Ã_³0Ã_¼ñÉm·»º[Êjº[ÅL¶jÐC¼[Émµ‘µj³·ÔÀmÂY¼]ÃjÀdÈ Å€·»Å3Ã_¿'º[³µ‘¶m¿ÏÃu·ÔÀmº[ÅÆÅ€Ð€À€Ãj´&·»¼]Ã_¸»¸»ÐÇàaÌ µ‘´ ¿ÏÉmº0¾m³'º]Ìé¼[µ‘ÌQÈ ½€¾m³‡¸»·»³'¿'³]Í;׺V½m¸»µj¿'¿º[Å ¿ ÉmºV¼[Émµj·»¼[ºWÌ'Ãu¿'º‡àaÌ µ‘´ ¿ÏÉmº ¾m³'º]Ìó¼[µ‘Ì'½€¾m³ó¸»·¹³¿ ] X ·»Â‘¾€Ì º b Ãj´&µmÀm¿ ×µ0¸¹·»³'¿³ ] ¾m³'º]Ì ¼[µ‘Ì'½€¾m³͑ÃjÀmÅ&¿ÏÉmºŀ·»¼[¿'·»µ‘À€ÃdÌ Ð b ÒIGÉmº+Émµ‘Ì ·»Ù[µ‘Àd¿ÏÃ_¸3Ã_Îm·»³ Ã_‘Ãu·ÔÀY³ÏÉmµ_׳0¿ ÉmºË¾m³º]Ì;¼[µ‘Ì'½€¾m³0³'·»Ù[ºpÃjÀmÅ ¿ÏÉmº&Êjº]Ì ¿'·ßÈ   +E P/ 1 # D # %R# ) e )) ( ;F # 0 :1/":( D",6  :$L 0# 0 :L 4 0 /+)$.E# 0 "61( * +//"E# :e $ +E )$?)% * :?! 
0 0,$ #  ) 0# :D "## 0 : /":(/"56 + J# ; ¼]Ã_¸‘ÃuΑ·»³)³ÏÉmµ_׳”¿ Émº%Ì'Ã_¿'ºóµ_à ¼]ÃdÀmŀ·¹Å3Ãu¿'º[³¼ñÉmµd³'º]À;àaÌ µ‘´ ¿ÏÉmº¾m³'º]Ì ¼[µmÌ'½€¾m³ ] ·ÔÀm¼[¸Á¾mŀ·ÁÀmÂ&¿ÏÉmº%º]Àd¿ÏÌ Ð;µ_ànÌ º[Âd·¹³¿'º]Ì º[Å ×µ‘Ì ŀ³ b Ҕòm·ÁÀm¼[º×ºGà µ‘¾€ÀmÅÆ¸»·»¿'¿'¸»º0ŀ·ihIº]ÌѺ]Àm¼[ºê¶€º[¿ ׺[º]À ¿ÏÉmº J Ãj½€ÃjÀmº[³'º&ÃjÀmÅ.Ó%ÀmÂd¸¹·»³ÏÉCÌ º[³ ¾m¸¹¿³]̀׺³ÏÉmµ_×Fµ‘Àm¸»Ð ¿ÏÉmº&Ó%ÀmÂj¸»·»³ÏÉCÌ º[³Ï¾m¸»¿'³0Émº]Ì ºjÒÝaÀd¿'º]Ì º[³'¿'·ÁÀmÂj¸»ÐjÍ¿ÏÉm·»³0Ì'Ã_¿'º ×Ã_³+ÃjÌѵ‘¾€ÀmÅ M ` O ¿'µCd ` O Ãàð¿'º]̾m³'·ÁÀmÂ.à V!` È v ¶jÐm¿'º ¼[µ‘Ì'½€¾m³ÒY¸»º]ÃjÌѸ¹Ð‡¿ Ém·¹³;º[΀½€º]Ì ·Á´&º]Àd¿6ÌѺe3º[¼[¿'³+µm¾€Ìµ‘¶dÈ ³'º]Ì Ê_Ã_¿'·»µ‘À ·ÁÀ  V ¿ ɀÃ_¿ê´&µmÌ ºÇ¿ÏɀÃjÀ M `O µ_à¿ÏÉmºÆÊjµ_È ¼]Ãj¶€¾m¸ÁÃjÌ ÐY·»³êÌ º]¾m³'º[ÅYàaÌ µ‘´ ¿ÏÉmºÇ¾m³'º]Ì;¼[µ‘Ì'½€¾m³Ò.IÉm¾m³]Í ¿ÏÉmº+½€Ìѵ‘¶m¸»º]´µuàI¾€À€Ì º[Âj·»³'¿º]Ì º[Åp×µ‘ÌÑŀ³¼]ÃjÀǶ€º0½€ÃjÌ ¿¸¹Ð ³'µj¸»Êjº[Å"¶dÐVµm¾€Ì³Б³'¿º]´g¾€Àmŀº]Ì¿ÏÉmºÆ¼[µ‘Àmŀ·»¿'·»µ‘ÀL¿ÏɀÃ_¿ ¿ÏÉmºp¾m³º]̼]ÃdÀ@½€Ì º]½€ÃjÌ º‡Ã‡³ ´ÇÃ_¸»¸%¿'º[Îm¿¿ ɀÃ_¿êɀÃ_³¿ÏÉmº ³ÏÃj´Ëº0¼[µ‘Àj¿º[Α¿+Ãu³¡¿ Émº¼]¾€Ì'Ì º]Àd¿µ‘ÀmºjÒ f0ÀmºO¼]¾€Ì'ÌѺ]Àj¿èÅ3Ì'Ã×+¶€Ã_¼[ä µ_àVµ‘¾€ÌL·»Å€º]à ·¹³è¿ÏɀÃ_¿ ¿ÏÉmº&À‘¾€´&¶€º]ÌGµ_à¼]ÃjÀmŀ·»Å3Ã_¿'º[³0³Ï¾mÂdÂjº[³'¿'º[ÅY¿'µÇ¿ÏÉmº&¾m³'º]Ì ¿'º]Àmŀ³;¿'µÇ‘Ìѵï× Ã_³¿ÏÉmº¼[µ‘Ì'½€¾m³;³'·»Ù[º·ÁÀm¼]Ì º]Ã_³'º[³Ò X ·»Â_È ¾€Ì º]M‡³ÏÉmµ_׳¿ÏÉmºpÃÊjº]Ì'ÃuÂjºÇÌ'ÃjÀ€äd·ÔÀmÂC¼[ɀÃjÀmÂjºÆµ_à¿ÏÉmº ¼[Émµj³'º]À‡×µ‘ÌÑŀ³6Ã_¼[¼[µmÌ Å€·ÁÀmÂp¿'µÆ¿ÏÉmº&¾m³'º]̼[µmÌ'½€¾m³+³·¹Ù[ºjÒ Ýð¿ en¾m¼[¿Ï¾€Ã_¿º[³%¾€Àj¿'·»¸ V!` v ¶dБ¿º[³]ÍïÃdÀmÅàaÌ µ‘´ V!` v ¶dÐm¿'º[³ µ‘ÀÍ à µm̳'µ‘´&º¿º[Α¿]Í¿ÏÉmºÌ'ÃjÀ€äd·ÔÀmÂ;ŀº[³'¼[º]Àmŀ³×+Ém·»¸»º¿ÏÉmº ¾m³'º]Ì;¿'º[Îm¿;³'·»Ù[ºê·ÁÀm¼]Ì º]Ãu³'º[³]Ò0²;¸»¿ÏÉmµ‘¾m‘ÉY¿ÏÉmº&½€Ì µ‘½€º]ÌÑ¿ÜРŀº[³'¼]Ì ·Á¶œº[Å!·ÔÀ  V ³Ï¾mÂjÂdº[³'¿'³‡¿ÏɀÃ_¿‡¿ ÉmºW¾m³'º]̇¼[µmÌ'½€¾m³ ³'·»Ù[ºÆ·¹³&³Ï¾gfƼ[·¹º]Àd¿ÇÃ_¿êÃd¶€µ‘¾m¿ H` ¿'µ V!` v ¶dÐm¿'º[³]Í)¿ÏÉmº ´&º[¼[ɀÃjÀm·»³Ï´ ¿µ Ì'ÃdÀ€ä"¼]ÃjÀmŀ·»Å3Ã_¿'º[³CÀmº[º[ÅO¿'µ¶œºWÌ º È ÖnÀmº[Å ÒCIGÉm·¹³&×·»¸»¸¡¶œºÇ¿ÏÉmºp´Ëµj³'¿0·Á´Ç½€µmÌ ¿ÏÃjÀd¿½€ÃjÌ ¿0µ_à µ‘¾€Ìóàa¾m¿Ï¾€Ì º0×µ‘Ìä3Ò  wpR ì%iFYQK%\_NaR ì %ÌѺ[ŀ·¹¼[¿·¹Êdº¿'º[Îm¿º]Àd¿ÏÌ ÐÆ·¹³0Àmµ_׿ÃjÀÆ·Á´Ç½œµ‘Ì ¿ÏÃjÀd¿¿'º[¼[ÉdÈ Àmµj¸»µjÂjÐ ¶€º[¼]Ãj¾m³'ºLµ_àÇ¿ÏÉmºL³'¼]Ã_¸»·ÁÀmÂFŀµï×;ÀFµ_àpŀº[Êm·»¼[º ³'·»Ù[ºjÒ8òm·ÁÀm¼[º"¿ÏÉmº½€ÌѺ[ŀ·¹¼[¿·¹Êdº"º]Àj¿ÏÌÑÐæ´&º[¿ÏÉmµmŀ³@ÃjÌ º ¶€Ã_³'º[Å9µ‘À!½€Ì º[ŀ·»¼[¿'·»µ‘ÀD¾m³'·ÁÀmÂ!ÃO×µ‘Ì Šŀ·»¼[¿'·»µ‘À€ÃjÌ ÐjÍ ÃF½€Ì µ‘¶m¸»º]´ ÃjÌÑ·¹³º[³ÛÃu³V¿µ Émµ_×-¿'µFº]Àj¿'º]ÌC¾€À€Ì º[Âd·¹³ÑÈ 0 10 20 30 40 50 60 70 80 90 100 0 20 40 60 80 100 from corpus Kbytes Adventure Of Sherlock Homes Chat-E Patent for CreatingCommunity The Merchant of Venice RFC1459 X ·»Â‘¾€Ì º  GcYÉmµj·»¼[º î Ã_¿'ºÆµ_à>YÃjÀmŀ·»Å3Ã_¿º[³àaÌ µ‘´ ¿ÏÉmº G³'º]ÌeYµ‘̽€¾m³ ] Ó%ÀmÂd¸¹·»³ÏÉ b 1.25 1.3 1.35 1.4 1.45 1.5 1.55 1.6 0 20 40 60 80 100 emergence position Kbytes Adventure Of Sherlock Homes Chat-E Patent for CreatingCommunity The Merchant of Venice RFC1459 X ·»Â‘¾€ÌѺ M G)²;Êjº]Ì'Ã_Âjº î ÃjÀ€äj·ÁÀmÂÆµ_à ã µ‘Ì ŀ³ ] Ó%ÀmÂj¸»·»³ÏÉ b ¿'º]Ì º[ÅL×µ‘Ì ŀ³]ÒCÝaÀL¿ Ém·¹³Æ½€Ãj½€º]Ì[Íéסº‡É€ÃÊjºÆÅ€º[³'¼]Ì ·Á¶€º[Å ÃÆ´&º[¿ÏÉmµmÅ.¿'µ‡¾m³'ºêóϴÇÃu¸¹¸Ãj´&µm¾€Àj¿µuࡾm³'º]̼[µmÌ'½€¾m³ ÃjÀmÅèŀРÀ€Ãj´Ë·¹¼]Ãu¸¹¸»ÐFº[Îm¿ÏÌ'Ã_¼[¿Æ¼ñÉm¾€À€äd³ÄÃ_³Æ¼]ÃjÀmŀ·»Å3Ã_¿'º[³ ¿'µÇ¼[µm´Ç½m¸»º]´&º]Àj¿G¿ÏÉmºê´&·»³'³·ÔÀmÂÆÊjµm¼]Ãj¶€¾m¸ÁÃjÌ ÐjÒ f0¾€ÌÃd½€½€Ì µ‘Ã_¼[ÉC·»³ê¶€Ã_³º[Å@µ‘ÀCÃjÀCµ‘¶m³º]Ì ÊïÃ_¿·¹µmÀ@Ì º È Â‘ÃjÌÑŀ·ÔÀm ¿ÏÉmº‡º[ŀ·»¿'µ‘Ì ·ÁÃ_¸+¶œº]ɀÃʑ·»µ‘Ì&µuà6×+ÌÑ·¹¿º]Ì ³G]M `O µ_à.¿ÏÉmºèÊjµm¼]Ãj¶€¾m¸ÁÃjÌ Ð!·»³Ì º]¾m³'º[ÅDÃà ¿º]Ì@µ‘Àm¸»Ð H` ¿'µ V!` v ¶dÐm¿'º[³Çµ_à0¿'º[Îm¿]Òòm·Á´&·»¸ÁÃjÌ ¸»ÐjÍ;¿ÏÉmº.¾€À€Ì º[Âj·»³'¿'º]ÌѺ[Å ×µ‘Ì ŀ³ó·ÔÀ ¿ÏÉmº;ŀ·¹¼[¿·¹µmÀ€ÃjÌ Ð‡ÃjÌ º0Ã_¸»³'µÇÌ º]¾m³'º[Å.Ã_¿Ã¼[º]ÌQÈ ¿ÏÃ_·ÁÀ‡Ì'Ãu¿'ºjÒ ã ºɀÃÊjº0¶€¾m·»¸¹¿;ó·Ô´½m¸¹º×µ‘Ì ÅmÈܶ€Ã_³'º[Ň½€Ì º[ŀ·»¼[¿'·»Êjº ¿'º[Îm¿Cº]Àj¿ÏÌÑÐO³'Ðm³'¿'º]´‡ÍÇÃjÀmÅÃu¿'¿ÏÃ_¼[Émº[Åæ·»¿@¿µ Ãè¿'µmµj¸ 
¿ÏɀÃ_¿LŀЀÀ€Ãj´&·»¼]Ã_¸»¸»ÐÕº[Îm¿ÏÌ'Ãu¼[¿'³W¾€À€Ì º[Âj·»³'¿º]Ì º[Å>×µ‘Ì ŀ³ ÃjÀmÅ@³ ¾mÂjÂjº[³'¿'³0¿ Émº[³'ºÇ¿'µ‡¿ÏÉmºÆ¾m³'º]Ì[Ò ã º&à µm¾€ÀmÅ@¿ÏɀÃ_¿ Ã"¸ÁÃjÌÑÂjº Ãd´&µ‘¾€Àd¿.µ_àľ€À€ÌѺ[Âj·»³'¿'º]Ì º[Å ×µ‘Ì ŀ³.¼]ÃdÀ涜º Ãj¾m¿'µm´ÇÃ_¿'·»¼]Ã_¸»¸»Ð Ãu¼[Ú ¾m·ÁÌ º[ÅFÃjÀmÅ"º]Àj¿º]Ì º[Å Ò IÉm·»³p×Ã_³ Ã_¼[Ém·»º[Êjº[ÅĶdÐ&¼ñÉmµmµj³'·ÁÀmÂ&´&µ‘Ì º¿ ɀÃjÀ M `O µuàI¿ Émº¼]ÃjÀdÈ Å€·»Å3Ã_¿'º[³µ‘¶m¿ÏÃu·ÔÀmº[ÅÆàaÌ µ‘´!¾m³º]Ì%¼[µmÌ'½€¾m³µ_à V!` v ¶dБ¿º[³5H ¿ÏÉm·»³Ì'Ã_¿'ºG¼[µ‘Ì'Ì º[³Ï½œµ‘Àmŀº[Ň¿'µµ‘¾€Ìó·ÔÀm·»¿'·ÁÃ_¸5µ‘¶m³'º]Ì Ê_Ã_¿'·»µ‘À Ì º[‘ÃdÌ Å€·ÁÀmÂp¾€À€Ì º[Âj·»³'¿'º]ÌѺ[Å.×µ‘Ì ŀ³]Ò ã º+ÃdÌ º+Àmµ_×¶€¾m·»¸¹Å€·ÁÀmÂpÃ6à Ìѵ‘Àj¿ÑÈaº]ÀmÅ꿵‘µj¸€¿ ɀÃ_¿¼]ÃjÀ ¶€ºÃu¿'¿ÏÃ_¼[Émº[ÅÇ¿'µ&¿ÏÉmº0½€Ì º[ŀ·»¼[¿'·»Êjº0¿'º[Îm¿º]Àj¿ÏÌÑÐdz'Ðm³'¿'º]´‡Ò IÉm·»³&¿'µmµj¸%³Ï¾mÂdÂjº[³'¿'³¼]ÃjÀmŀ·»Å3Ã_¿'º[³Ç¿ÏɀÃu¿êÃjÌ ºÆÅ€Ð€À€Ãj´È ·»¼]Ã_¸»¸»Ð浑¶m¿ Ã_·ÁÀmº[ÅFà Ìѵ‘´ ¿ÏÉmº@¼[µ‘̽€¾m³]Í´&º]Ì Âdº[ÅO×·»¿ÏÉ ¼]ÃjÀmŀ·»Å3Ã_¿'º[³GàaÌ µ‘´9¿ÏÉmº³Б³'¿º]´ŀ·»¼[¿'·»µ‘À€ÃdÌ ÐjÒ  R?W2R3± R3ì%iFRt\       ! "# $&%(')' *,+- -/.10243 5 687 9;:=< 2 >4> ? 7/@ 8AB C')%D "E  F ,GH I ./.+/ $JK')D4LMNKO+/ PQ RTS K% UKV  XWYZ [ \]]^ _``/aaa b]cedfb gih/jkb4lKmKjN`/^ cmnoQl]p ` qrjQ`   sQt'vuet)' [wt I ./. IQ R,'Z/xy+iz{LQZ [! |MN}  aaa~bn/pK]NpK€pK]oKj,b4l mfbn/^`‚/]miƒN`‚/]miƒ…„i† a`  ‡ˆ/G‰KV 8K GŠ/ ti~+i- -/‹Œuse!ŽˆK)vitBT  [! C'LZ#W‘Z/~Z/’F % ,t)'%}“t D4L ti1”Q•F–1—™˜ 7 š< @›œ 7ž687 9;:=š 5‘? @ Ÿ  G‰K)s[ ¡T8KNx i’ tL%%K¢G£¤x %D4L% I ./.+/¦¥§ iKQ%(Q}H[!Z W‘Z/x KN¨’Fx ©4% D Z S/ t)%(Z/ tt)' [ªst)%}«AfA8G¬[! ')LZŒB ‰•­ ”N˜®1¯¤°”N2 9 ? @›/< ® 7 5‘2#? @²± ˜ ›4:› @ 2 > 2³QS Z/ (sQ[! + ´z G‰Kt)s%µ+i- -/-OA*Z VNZKŽ¢¢ 4!D % 'H') Ž'²%MsQ' [! C'LZHWYZ #LNKQL ¶JsVQ%(·s%')Z stDCZ [!MQs' )t ¸5¹2ˆ– 6 —º”N» 9;:e7 > ? š9¼7/@X½ >¾2 < • @ 5Y2 <À¿,› Á 2w” 7 5À°  ›/< 2 › @à 0N2 Á ¹ @7œ(74Ÿ »K "ˆ/ÄEx K}ikKQGŠZ/%BI . ./I§RÅt%[wM  ~VsQ'8MNZK ’ W‘s 8Ks')Z [ŠK'%DT' )[! Ž'4KD ')%Z ‰[w 'LZŒ  687 9 ° :=š 5Y2 <C9ÆÇ,Æ@Qà • @ 5‘2 < @› 5‘? 7 @›œkÈw7 <É >¹ 74:‰7 @Ê687 9;:=š ° 5 › 5Y? 7 @›œ 0N2 < 9 ? @7œ(74Ÿ » ±C6,Ë ¯B•®Ì ° È!7/<É >¹ 74: ³ KMK} Ct I -¨Íe‹/P ¡K8KNx i’ tL%%4Îi sQ'tsQx  «G£K8Kx C%(D4L%§I ./. .Q MNKQ t w%(QMs'1t)t' C[¼,%')L²%}/%(')tÍ,K‰KMN t V ~%MsQ'ŒZ Q (T,%'LD Z/tZ/NK')tvÏCТŒ #Ñ š9ˆ›/@ ¯ ›/@ ° Ÿ/š›iŸ 2‰02 Á ¹ @7œ(74Ÿ » 687/@  2 < 2 @Á 2 ÆÒ ÒÓ  ¡K8KNx i’ tL%%4Îi sQ'tsQx  «G£K8Kx C%(D4L%§I ./. IQ ÔB' C%}Õ' Ž'st)%}²žW‘Z sQVs')'Z HQ S%(DC ! ¸5¹2 ÓKÖ 5(¹×• @ 5Y2 < @Q› 5Y? 7 @›œ687 @  2 < 2 @Á 2 7/@Å687 9;:=š 5 › 5‘? 7 @›œ ¯8? @ Ÿ š ?> 5Y? Á >¾QMNK}/ t1- Ø/ØiÍ- -K´ $¢ NÙ C'ÚK JB LNK* I ./. . ABZ/VNKV% %'Û Ct’ '%[ŠK'%Z/ W‘Z/ A8A8G£  ®ÝÜ 6 ”ÞT” 6ß Ö/à  \]]^ _``/aaa b4lp…ba‚gCƒQ‚/]m8b‚QlŒbqá`Õan ]`/^‚ ^o/cNp `âNãäåæNåäfbF^pBbrá  B } %D BI ./. .Q§B } %D8LZ/[! 8MK} / \]]^~_``/aaa b]çfbvl mj  è#K$JK)ŒKR# é K UKD4x~ C (  èˆ  /G‰KD ¡iŒI ./. .Q èKt)L  R|NK'4Ù ')X%') WKD ×st%}¸DCZ '%sZ sQt } Ct'sQ tˆŠ UK}/sNK}/ E[wZ tB J5¹2– 6 —™”N» 9 ° :7 > ? š9Ú7/@Ž > 2 < • @ 5‘2 <U¿,› Á 2#” 7) 5  › < 2 › @à 02 Á ¹ @7œ ° 74Ÿ »K ê Y’ Z M*µI/. . .Qµê%LQZ [! ×MNK}/ ëR1S K% UKVQ ( ÕWYZ [ \]]^ _``ìíîfbììçfb„iïfbì/ð§„` 
2003
52
A Word-Order Database for Testing Computational Models of Language Acquisition
William Gregory Sakas
Department of Computer Science, PhD Programs in Linguistics and Computer Science
Hunter College and The Graduate Center, City University of New York
[email protected]

Abstract

An investment of effort over the last two years has begun to produce a wealth of data concerning computational psycholinguistic models of syntax acquisition. The data is generated by running simulations on a recently completed database of word order patterns from over 3,000 abstract languages. This article presents the design of the database, which contains sentence patterns, grammars and derivations that can be used to test acquisition models from widely divergent paradigms. The domain is generated from grammars that are linguistically motivated by current syntactic theory, and the sentence patterns have been validated as psychologically/developmentally plausible by checking their frequency of occurrence in corpora of child-directed speech. A small case-study simulation is also presented.

1 Introduction

The exact process by which a child acquires the grammar of his or her native language is one of the most beguiling open problems of cognitive science. There has been recent interest in computer simulation of the acquisition process and in the interrelationship between such models and linguistic and psycholinguistic theory. The hope is that through computational study, certain bounds can be established which may be brought to bear on pivotal issues in developmental psycholinguistics. Simulation research is a significant departure from standard learnability models that provide results through formal proof (e.g., Bertolo, 2001; Gold, 1967; Jain et al., 1999; Niyogi, 1998; Niyogi & Berwick, 1996; Pinker, 1979; Wexler & Culicover, 1980, among many others). Although research in learnability theory is valuable and ongoing, there are several disadvantages to formal modeling of language acquisition:

• Certain proofs may involve impractically many steps for large language domains (e.g. those involving Markov methods).
• Certain paradigms are too complex to readily lend themselves to deductive study (e.g. connectionist models).1
• Simulations provide data on intermediate stages, whereas formal proofs typically establish, prior to any specific trials, whether a domain is (or, more often, is not) learnable.
• Proofs generally require simplifying assumptions which are often distant from natural language.

However, simulation studies are not without disadvantages and limitations. Most notable, perhaps, is that out of practicality, simulations are typically carried out on small, severely circumscribed domains – usually just large enough to allow the researcher to hone in on how a particular model (e.g. a connectionist network or a principles & parameters learner) handles a few grammatical features (e.g. long-distance agreement and/or topicalization), often, though not always, in a single language. So although there have been many successful studies that demonstrate how one algorithm or another is able to acquire some aspect of grammatical structure, there is little doubt that the question of what mechanism children actually employ during the acquisition process is still open. This paper reports the development of a large, multilingual database of sentence patterns, grammars and derivations that may be used to test computational models of syntax acquisition from widely divergent paradigms.

1 Although see Niyogi, 1998 for some insight.
The domain is generated from grammars that are linguistically motivated by current syntactic theory, and the sentence patterns have been validated as psychologically/developmentally plausible by checking their frequency of occurrence in corpora of child-directed speech. We report here the structure of the domain, its interface and a case-study that demonstrates how the domain has been used to test the feasibility of several different acquisition strategies. The domain is currently publicly available on the web via http://146.95.2.133 and it is our hope that it will prove to be a valuable resource for investigators interested in computational models of natural language acquisition.

2 The Language Domain Database

The focus of the language domain database (hereafter LDD) is to make readily available the different word order patterns that children are typically exposed to, together with all possible syntactic derivations of each pattern. The patterns and their derivations are generated from a large battery of grammars that incorporate many features from the domain of natural language. At this point the multilingual language domain contains sentence patterns and their derivations generated from 3,072 abstract grammars. The patterns encode sentences in terms of tokens denoting the grammatical roles of words and complex phrases, e.g., subject (S), direct object (O1), indirect object (O2), main verb (V), auxiliary verb (Aux), adverb (Adv), preposition (P), etc. An example pattern is S Aux V O1, which corresponds to the English sentence: The little girl can make a paper airplane. There are also tokens for topic and question markers for use when a grammar specifies overt topicalization or question marking. Declarative sentences, imperative sentences, negations and questions are represented within the LDD, as is prepositional movement/stranding (pied-piping), null subjects, null topics, topicalization and several types of movement. Although more work needs to be done, a first-round study of actual child-directed sentences from the CHILDES corpus (MacWhinney, 1995) indicates that our patterns capture many sentential word orders that children typically encounter in the period from 1-1/2 to 2-1/2 years; the period generally accepted by psycholinguists to be when children establish the correct word order of their native language. For example, although the LDD is currently limited to degree-0 (i.e. no embedding) and does not contain DP-internal structure, after examining by hand several thousand sentences from corpora in the CHILDES database in five languages (English, German, Italian, Japanese and Russian), we found that approximately 85% are degree-0 and approximately 10 out of 11 have no internal DP structure. Adopting the principles and parameters (P&P) hypothesis (Chomsky, 1981) as the underlying framework, we implemented an application that generated patterns and derivations given the following points of variation between languages:

1. Affix Hopping
2. Comp Initial/Final
3. I to C Movement
4. Null Subject
5. Null Topic
6. Obligatory Topic
7. Object Final/Initial
8. Pied Piping
9. Question Inversion
10. Subject Initial/Final
11. Topic Marking
12. V to I Movement
13. Obligatory Wh Movement

The patterns have fully specified X-bar structure, and movement is implemented as HPSG local dependencies. Pattern production is generated top-down via rules applied at each subtree level. Subtree levels include: CP, C', IP, I', NegP, Neg', VP, V' and PP.
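To make this representation concrete, the following is a minimal sketch of our own, not the LDD's actual implementation: a grammar is encoded as a vector of the 13 binary parameters listed above, and a sentence pattern as a left-to-right string of role tokens. The short parameter names and the particular 0/1 values shown are illustrative placeholders only, not claims about any real language.

```python
# Illustrative sketch only: a grammar as 13 binary parameters, a pattern as role tokens.
from dataclasses import dataclass
from typing import Tuple

PARAMETERS = (
    "AffixHopping", "CompInitial", "ItoC", "NullSubject", "NullTopic",
    "ObligatoryTopic", "ObjectFinal", "PiedPiping", "QInversion",
    "SubjectInitial", "TopicMarking", "VtoI", "ObligatoryWh",
)

@dataclass(frozen=True)
class Grammar:
    values: Tuple[int, ...]              # one 0/1 value per parameter, in PARAMETERS order

    def setting(self, name: str) -> int:
        return self.values[PARAMETERS.index(name)]

# A pattern is simply a tuple of role tokens, e.g. the example from the text:
pattern = ("S", "Aux", "V", "O1")        # "The little girl can make a paper airplane."

g = Grammar(values=(1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1))   # arbitrary placeholder settings
print(g.setting("NullSubject"))          # -> 0
```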
After the rules are applied, the subtrees are fully specified in terms of node categories, syntactic feature values and constituent order. The subtrees are then combined by a simple unification process and syntactic features are percolated down. In particular, movement chains are represented as traditional "slash" features which are passed (locally) from parent to daughter; when unification is complete, there is a trace at the bottom of each slash-feature path. Other features include +/-NULL for non-audible tokens (e.g. S[+NULL] represents a null subject pro), +TOPIC to represent a topicalized token, +WH to represent "who", "what", etc. (or "qui", "que" if one prefers), +/-FIN to mark whether a verb is tensed or not, and the illocutionary (ILLOC) features Q, DEC and IMP for questions, declaratives and imperatives respectively. Although further detail is beyond the scope of this paper, those interested may refer to Fodor et al. (2003), which resides on the LDD website.

It is important to note that the domain is suitable for many paradigms beyond the P&P framework. For example, the context-free rules (with local dependencies) could be easily extracted and used to test probabilistic CFG learning in a multilingual domain. Likewise the patterns, without their derivations, could be used as input to statistical/connectionist models which eschew traditional (generative) structure altogether and search for regularity in the left-to-right strings of tokens that make up the learner's input stream. Or, the patterns could help bootstrap the creation of a domain that might be used to test particular types of lexical learning, by using the patterns as templates where tokens may be instantiated with actual words from a lexicon of interest to the investigator. The point is that although a particular grammar formalism was used to generate the patterns, the patterns are valid independently of the formalism that was in play during generation.2 To be sure, similar domains have been constructed. The relationship between the LDD and other artificial domains is summarized in Table 1.

2 If this is the case, one might ask: Why bother with a grammar formalism at all; why not use actual child-directed speech as input instead of artificially generated patterns? Although this approach has proved workable for several types of non-generative acquisition models, a generative (or hybrid) learner is faced with the task of selecting the rules or parameter values that generate the linguistic environment being encountered by the learner. In order to simulate this, there must be some grammatical structure incorporated into the experimental design that serves as the target the learner must acquire. Constructing a viable grammar and a parser with coverage over a multilingual domain of real child-directed speech is a daunting proposition. Even building a parser to parse a single language of child-directed speech turns out to be extremely difficult. See, for example, Sagae, Lavie, & MacWhinney (2001), which discusses an impressive number of practical difficulties encountered while attempting to build a parser that could cope with the EVE corpus, one of the cleanest transcriptions in the CHILDES database. By abstracting away from actual child-directed speech, we were able to build a pattern generator and include the pattern derivations in the database for retrieval during simulation runs, effectively sidestepping the need to build an online multilingual parser.
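The feature annotations described above (e.g. S[+NULL], Adv[+TOPIC]) are easy to inspect or strip mechanically. The helper below is a small illustrative sketch written for this description, not part of the LDD distribution; it assumes only that features are written in square brackets after the category label.

```python
import re

TOKEN_RE = re.compile(r"^([A-Za-z0-9']+)((?:\[[^\]]+\])*)$")

def parse_token(token):
    """Split a pattern token like 'S[+NULL]' into (category, set_of_features)."""
    m = TOKEN_RE.match(token)
    if not m:
        raise ValueError(f"unrecognized token: {token!r}")
    category = m.group(1)
    features = set(re.findall(r"\[([^\]]+)\]", m.group(2)))
    return category, features

def audible(tokens):
    """Drop tokens marked +NULL to recover the surface (audible) word order."""
    return [t for t in tokens if "+NULL" not in parse_token(t)[1]]

print(parse_token("Adv[+TOPIC]"))          # ('Adv', {'+TOPIC'})
print(audible(["S[+NULL]", "V", "O1"]))    # ['V', 'O1']
```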
In designing the LDD, we chose to include syntactic phenomena which:

i) occur in a relatively high proportion of the known natural languages;
ii) are frequently exemplified in speech directed to 2-year-olds;
iii) pose potential learning problems (e.g. cross-language ambiguity) for which theoretical solutions are needed;
iv) have been a focus of linguistic and/or psycholinguistic research;
v) have a syntactic analysis that is broadly agreed on.

As a result the following have been included:

• By criteria (i) and (ii): negation, non-declarative sentences (questions, imperatives).
• By criterion (iv): null subject parameter (Hyams 1986 and since).
• By criterion (iv): affix-hopping (though not widespread in natural languages).
• By criterion (v): no scrambling yet.

There are several phenomena that the LDD does not yet include:

• No verb subcategorization.
• No interface with LF (cf. Briscoe 2000; Villavicencio 2000).
• No discourse contexts to license sentence fragments (e.g., DP or PP fragments).
• No XP-internal structure yet (except PP = P + O3, with piping or stranding).
• No Linear Correspondence Axiom (Kayne 1994).
• No feature checking as implementation of movement parameters (Chomsky 1995).

Table 1: A history of abstract domains for word-order acquisition modeling.

                                # parameters   # languages   Tree structure?       Language properties
Gibson & Wexler (1994)          3              8             Not fully specified   Word order, V2
Bertolo et al. (1997b)          7              64 distinct   Yes                   G&W + V-raising to Agr, T; deg-2
Kohl (1999), based on Bertolo   12             2,304         Partial               Bertolo et al. (1997b) + scrambling
Sakas & Nishimoto (2002)        4              16            Yes                   G&W + null subject/topic
LDD                             13             3,072         Yes                   S&N + wh-movt + imperatives + aux inversion, etc.

The LDD on the web: The two primary purposes of the web-interface are to allow the user to interactively peruse the patterns and the derivations that the LDD contains and to download raw data for the user to work with locally. Users are asked to register before using the LDD online. The user ID is typically an email address, although no validity checking is carried out. The benefit of entering a valid email address is simply to have the ability to recover a forgotten password; otherwise a user can have full access anonymously. The interface has three primary areas: Grammar Selection, Sentence Selection and Data Download. First a user has to specify, on the Grammar Selection page, which settings of the 13 parameters are of interest and save those settings as an available grammar. A user may specify multiple grammars. Then on the Sentence Selection page a user may peruse sentences and their derivations. On this page a user may annotate the patterns and derivations however he or she wishes. All grammar settings and annotations are saved and available the next time the user logs on. Finally, on the Data Download page, users may download data so that they can use the patterns and derivations offline.
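As a usage illustration only: once a tab-delimited export has been downloaded, filtering it locally is straightforward. The column names below ('pattern', 'ObligatoryWh', and the file name) are hypothetical stand-ins; we are not reproducing the actual export schema here.

```python
import csv

def load_patterns(path):
    # Hypothetical schema: one row per sentence pattern, tab-separated,
    # with a 'pattern' column plus one 0/1 column per parameter.
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))

def filter_rows(rows, **settings):
    # e.g. filter_rows(rows, ObligatoryWh="1", NullSubject="0")
    return [r for r in rows if all(r.get(k) == v for k, v in settings.items())]

rows = load_patterns("ldd_export.tsv")          # assumed local file name
wh_rows = filter_rows(rows, ObligatoryWh="1")
print(len(wh_rows), "patterns from obligatory-wh grammars")
```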
The derivations are stored as bracketed strings representing tree structure. These are practically indecipherable by human users. E.g.:

(CP[ILLOC Q][+FIN][+WH] "Adv[+TOPIC]" (Cbar[ILLOC Q][+FIN][+WH][SLASH Adv] (C[ILLOC Q][+FIN] "KA")
  (IP[ILLOC Q][+FIN][+WH][SLASH Adv] "S" (Ibar[ILLOC Q][+FIN][+WH][SLASH Adv] (I[ILLOC Q][+FIN] "Aux[+FIN]")
  (NegP[+WH][SLASH Adv] (NegBar[+WH][SLASH Adv] (Neg "NOT") (VP[+WH][SLASH Adv] (Vbar[+WH][SLASH Adv]
  (V "Verb") "O1" "O2" (PP[+WH] "P" "O3[+WH]") "Adv[+NULL][SLASH Adv]"))))))))

To be readable, the derivations are displayed graphically as tree structures. Towards this end we have utilized a set of publicly available LaTeX macros: QTree (Siskind & Dimitriadis, [online]). A server-side script parses the bracketed structures into the proper QTree/LaTeX format, from which a pdf file is generated and subsequently sent to the user's client application.
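To illustrate the kind of conversion such a script performs, here is an independent sketch (not the LDD's own converter) that reads a bracketed derivation of the form shown above into a nested structure and emits Qtree-style LaTeX. It assumes only that internal nodes are parenthesized with a label followed by children and that leaves are quoted strings.

```python
# Sketch only: bracketed derivation string -> nested (label, children) -> Qtree output.

def tokenize(s):
    # Tokens: '(', ')', quoted leaves "...", and node labels, which may contain
    # spaces inside square brackets (e.g. CP[ILLOC Q][+FIN]).
    tokens, i = [], 0
    while i < len(s):
        c = s[i]
        if c.isspace():
            i += 1
        elif c in "()":
            tokens.append(c)
            i += 1
        elif c == '"':
            j = s.index('"', i + 1)
            tokens.append(("leaf", s[i + 1:j]))
            i = j + 1
        else:
            j, depth = i, 0
            while j < len(s) and (depth > 0 or (s[j] not in '()"' and not s[j].isspace())):
                depth += (s[j] == "[") - (s[j] == "]")
                j += 1
            tokens.append(("label", s[i:j]))
            i = j
    return tokens

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        _, label = tokens.pop(0)            # the node label follows the open parenthesis
        children = []
        while tokens[0] != ")":
            children.append(parse(tokens))
        tokens.pop(0)                       # consume ')'
        return (label, children)
    _, text = tok                           # a quoted leaf token
    return (text, [])

def to_qtree(node):
    label, children = node
    safe = label.replace("[", "{[}").replace("]", "{]}")   # rough escaping for LaTeX
    if not children:
        return safe
    return "[." + safe + " " + " ".join(to_qtree(c) for c in children) + " ]"

example = '(CP[ILLOC Q] "S[+NULL]" (Cbar (C "KA") (VP (V "Verb") "O1")))'
print("\\Tree " + to_qtree(parse(tokenize(example))))
```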
Even with the graphical display, a simple sentence-by-sentence presentation is untenable given the large amount of linguistic data contained in the database. The Sentence Selection area allows users to access the data filtered by sentence type and/or by grammar features (e.g. all sentences that have obligatory-wh movement and contain a prepositional phrase), as well as by the user's defined grammar(s) (all sentences that are "Italian-like"). On the Data Download page, users may filter sentences as on the Sentence Selection page and download sentences in a tab-delimited format. The entire LDD may also be downloaded – approximately 17 MB compressed, 600 MB as a raw ascii file.

3 A Case Study: Evaluating the efficiency of parameter-setting acquisition models

We have recently run experiments of seven parameter-setting (P&P) models of acquisition on the domain. What follows is a brief discussion of the algorithms and the results of the experiments. We note in particular where results stemming from work with the LDD lead to conclusions that differ from those previously reported. We stress that this is not intended as a comprehensive study of parameter-setting algorithms or acquisition algorithms in general. There is a large number of models that are omitted, some of which are targets of current investigation. Rather, we present the study as an example of how the LDD could be effectively utilized. In the discussion that follows we will use the terms "pattern", "sentence" and "input" interchangeably to mean a left-to-right string of tokens drawn from the LDD without its derivation.

3.1 A Measure of Feasibility

As a simple example of a learning strategy and of our simulation approach, consider a domain of 4 binary parameters and a memoryless learner3 which blindly guesses how all 4 parameters should be set upon encountering an input sentence. Since there are 4 parameters, there are 16 possible combinations of parameter settings, i.e., 16 different grammars. Assuming that each of the 16 grammars is equally likely to be guessed, the learner will consume, on average, 16 sentences before achieving the target grammar. This is one measure of a model's efficiency or feasibility. However, when modeling natural language acquisition, since practically all human learners attain the target grammar, the average number of expected inputs is a less informative statistic than the expected number of inputs required for, say, 99% of all simulation trials to succeed. For our blind-guess learner, this number is 72.4 We will use this 99-percentile feasibility measure for most discussion that follows, but also include the average number of inputs for completeness.

3 By "memoryless" we mean that the learner processes inputs one at a time without keeping a history of encountered inputs or past learning events.

4 The average and 99-percentile figures (16 and 72) in this section are easily derived from the fact that input consumption follows a hypergeometric distribution.

3.2 The Simulations

In all experiments:

• The learners are memoryless.
• The language input sample presented to the learner consists of only grammatical sentences generated by the target grammar.
• For each learner, 1000 trials were run for each of the 3,072 target languages in the LDD.
• At any point during the acquisition process, each sentence of the target grammar is equally likely to be presented to the learner.

Subset Avoidance and Other Local Maxima: Depending on the algorithm, it may be the case that a learner will never be motivated to change its current hypothesis (Gcurr), and hence be unable to ultimately achieve the target grammar (Gtarg). For example, most error-driven learners will be trapped if Gcurr generates a language that is a superset of the language generated by Gtarg. There is a wealth of learnability literature that addresses local maxima and their ramifications.5 However, since our study's focus is on feasibility (rather than on whether a domain is learnable given a particular algorithm), we posit a built-in avoidance mechanism, such as the subset principle and/or default values, that precludes local maxima; hence, we set aside trials where a local maximum ensues.

5 Discussion of the problem of subset relationships among languages starts with Gold's (1967) seminal paper and is discussed in Berwick (1985) and Wexler & Manzini (1987). Detailed accounts of the types of local maxima that the learner might encounter in a domain similar to the one we employ are given in Frank & Kapur (1996), Gibson & Wexler (1994), and Niyogi & Berwick (1996).

3.3 The Learners' Strategies

In all cases the learner is error-driven: if Gcurr can parse the current input pattern, retain it.6 The following describes what the learner does when Gcurr fails on the current input.

• Error-driven, blind-guess (EDBG): adopt any grammar from the domain, chosen at random. This is not psychologically plausible; it serves as our baseline.
• TLA (Gibson & Wexler, 1994): change any one parameter value of those that make up Gcurr. Call this new grammar Gnew. If Gnew can parse the current input, adopt it. Otherwise, retain Gcurr.
• Non-Greedy TLA (Niyogi & Berwick, 1996): change any one parameter value of those that make up Gcurr. Adopt it. (I.e., there is no testing of the new grammar against the current input.)
• Non-SVC TLA (Niyogi & Berwick, 1996): try any grammar in the domain. Adopt it only in the event that it can parse the current input.
• Guessing STL (Fodor, 1998a): perform a structural parse of the current input. If a choice point is encountered, choose an alternative based on one of the following and then set parameter values based on the final parse tree:
  – STL Random Choice (RC): randomly pick a parsing alternative.
  – Minimal Chain (MC): pick the choice that obeys the Minimal Chain Principle (De Vincenzi, 1991), i.e., avoid positing movement transformations if possible.
  – Local Attachment/Late Closure (LAC): pick the choice that attaches the new word to the current constituent (Frazier, 1978).

The EDBG learner is our first learner of interest.
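As a quick check on the feasibility figures in Section 3.1 (16 expected inputs, 72 at the 99th percentile), and on how such figures scale with the size of the grammar space, the short computation below assumes each blind guess is drawn independently and uniformly from the pool of grammars, so the wait until the first correct guess is geometrically distributed. This is our own back-of-the-envelope check, not code from the simulations themselves.

```python
import math

def blind_guess_feasibility(n_grammars, quantile=0.99):
    # Assumes each guess is independent and uniform over the grammar pool,
    # so the number of inputs until the first correct guess is geometric.
    p = 1.0 / n_grammars
    mean = 1.0 / p                                                   # expected inputs consumed
    n_q = math.ceil(math.log(1.0 - quantile) / math.log(1.0 - p))    # 99th-percentile inputs
    return mean, n_q

print(blind_guess_feasibility(16))        # (16.0, 72): the figures quoted in Section 3.1
print(blind_guess_feasibility(2 ** 13))   # a 13-parameter space: tens of thousands of inputs
```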
It is easy to show that the average and 99% scores increase exponentially in the number of parameters, and syntactic research has proposed more than 100 (e.g. Cinque, 1999). Clearly, human learners do not employ a strategy that performs as poorly as this. Results will serve as a baseline to compare against other models.

6 We intend for a "can-parse/can't-parse outcome" to be equivalent to the result from a language membership test. If the current input sentence is one of the set of sentences generated by Gcurr, can-parse is engendered; if not, can't-parse.

        99%       Average
EDBG    16,663    3,589
Table 2: EDBG, # of sentences consumed

The TLA: The TLA incorporates two search heuristics: the Single Value Constraint (SVC) and Greediness. In the event that Gcurr cannot parse the current input sentence s, the TLA attempts a second parse with a randomly chosen new grammar, Gnew, that differs from Gcurr by exactly one parameter value (SVC). If Gnew can parse s, Gnew becomes the new Gcurr; otherwise Gnew is rejected as a hypothesis (Greediness). Following Berwick and Niyogi (1996), we also ran simulations on two variants of the TLA – one with the Greediness heuristic but without the SVC (TLA minus SVC, TLA–SVC) and one with the SVC but without Greediness (TLA minus Greediness, TLA–Greed). The TLA has become a seminal model and has been extensively studied (cf. Bertolo, 2001 and references therein; Berwick & Niyogi, 1996; Frank & Kapur, 1996; Sakas, 2000; among others). The results from the TLA variants operating in the LDD are presented in Table 3.

             99%       Average
TLA–SVC      67,896    11,273
TLA–Greed    19,181    4,110
TLA          16,990    961
Table 3: TLA variants, # of sentences consumed

Particularly interesting is that, contrary to results reported by Niyogi & Berwick (1996) and Sakas & Nishimoto (2002), the SVC and Greediness constraints do help the learner achieve the target in the LDD. The previous research was based on simulations run on much smaller 9 and 16 language domains (see Table 1). It would seem that the local hill-climbing search strategies employed by the TLA do improve learning efficiency in the LDD. However, even at best, the TLA performs less well than the blind-guess learner. We conjecture that this fact probably rules out the TLA as a viable model of human language acquisition.

The STL: Fodor's Structural Triggers Learner (STL) makes greater use of the parser than the TLA. A key feature of the model is that parameter values are not simply the standardly presumed 0 or 1, but rather bits of tree structure, or treelets. Thus, a grammar, in the STL sense, is a collection of treelets rather than a collection of 1's and 0's. The STL is error-driven. If Gcurr cannot license s, new treelets will be utilized to achieve a successful parse.7 Treelets are applied in the same way as any "normal" grammar rule, so no unusual parsing activity is necessary. The STL hypothesizes grammars by adding parameter-value treelets to Gcurr when they contribute to a successful parse. The basic algorithm for all STL variants is:

1. If Gcurr can parse the current input sentence, retain the treelets that make up Gcurr.
2. Otherwise, parse the sentence making use of any or all parametric treelets available and adopt those treelets that contribute to a successful parse.

We call this parametric decoding. Because the STL can decode inputs into their parametric signatures, it stands apart from other acquisition models in that it can detect when an input sentence is parametrically ambiguous.
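Before turning to how ambiguity is detected and exploited, the TLA variants compared in Table 3 can be made concrete with a short sketch of the error-driven update step. This is our reading of the published algorithm, not the simulation code used for Table 3; can_parse stands in for the language-membership test, and a grammar is a tuple of 13 binary values.

```python
import random

N_PARAMS = 13

def flip_one(grammar):
    i = random.randrange(N_PARAMS)
    return grammar[:i] + (1 - grammar[i],) + grammar[i + 1:]

def random_grammar():
    return tuple(random.randint(0, 1) for _ in range(N_PARAMS))

def tla_step(g_curr, sentence, can_parse, svc=True, greedy=True):
    """One error-driven update. With svc=True and greedy=True this is the TLA;
    svc=False gives the non-SVC variant, greedy=False the non-greedy variant."""
    if can_parse(g_curr, sentence):          # error-driven: no change on success
        return g_curr
    g_new = flip_one(g_curr) if svc else random_grammar()
    if not greedy:
        return g_new                         # adopt without testing (non-greedy)
    return g_new if can_parse(g_new, sentence) else g_curr
```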
During a parse of s, if more than one treelet could be used by the parser (i.e., a choice point is encountered), then s is parametrically ambiguous. The TLA variants do not have this capacity because they rely only on a can-parse/can’t-parse outcome and do not have access to the on-line operations of the parser. Originally, the ability to detect ambiguity was employed in two variations of the STL: the strong STL (SSTL) and the weak STL. The SSTL executes a full parallel parse of each input sentence and adopts only those treelets (parameter values) that are present in all the generated parse trees. This would seem to make the SSTL an extremely powerful, albeit psychologically implausible, learner.8 However, this is not necessarily the case. The SSTL needs some unambiguity to be present in the structures derived from the sentences of the target language. For example, there may not be a single input generated by Gtarg that when parsed yields an unambiguous treelet for a particular parameter. 7 In addition to the treelets, UG principles are also available for parsing, as they are in the other models discussed above. 8 It is important to note that Fodor (1998a) does not put forth the strong STL as a psychologically plausible model. Rather, it is intended to demonstrate the potential effectiveness of parametric decoding. Unlike the SSTL, the weak STL executes a psychologically plausible left-to-right serial (deterministic) parse. One variant of the weak STL, the waiting STL (WSTL), deals with ambiguous inputs abiding by the heuristic: Don’t learn from sentences that contain a choice point. These sentences are simply discarded for the purposes of learning. This is not to imply that children do not parse ambiguous sentences they hear, but only that they set no parameters if the current evidence is ambiguous. As with the TLA, these STL variants have been studied from a mathematical perspective (Bertolo et al., 1997a; Sakas, 2000). Mathematical analyses point to the fact that the strong and weak STL are extremely efficient learners in conducive domains with some unambiguous inputs but may become paralyzed in domains with high degrees of ambiguity. These mathematical analyses among other considerations spurred a new class of weak STL variants which we informally call the guessing STL family. The basic idea behind the guessing STL models is that there is some information available even in sentences that are ambiguous, and some strategy that can exploit that information. We incorporate three different heuristics into the original STL paradigm, the RC, MC and LAC heuristics described above. Although the MC and LAC heuristics are not stochastic, we regard them as “guessing” heuristics because, unlike the WSTL, a learner cannot be certain that the parametric treelets obtained from a parse guided by MC and LAC are correct for the target. These heuristics are based on wellestablished human parsing strategies. Interestingly, the difference in performance between the three variants is slight. Although we have just begun to look at this data in detail, one reason may be that the typical types of problems these parsing strategies address are not included in the LDD (e.g. relative clause attachment ambiguity). Still, the STL variants perform the most efficiently of the strategies presented in this small study (approximately a 100-fold improvement over the TLA). Certainly this is due to the STL's ability to perform parametric decoding. 
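A corresponding sketch of the weak-STL family, as we understand it from the description above: the parser is abstracted into an assumed decode interface that returns the treelets needed for a parse plus a flag for whether a choice point was encountered. The WSTL skips ambiguous inputs, while the guessing variants commit to whichever alternative their heuristic (RC, MC or LAC) selects during the parse. The parser itself is not shown and the function names are ours.

```python
def stl_step(g_curr, sentence, decode, policy="waiting", choose=None):
    """One weak-STL update, with the parser abstracted away.

    g_curr is the current set of adopted treelets.  decode is an assumed parser
    interface: decode(g_curr, sentence, choose) -> (treelets_used, hit_choice_point),
    where choose is the heuristic consulted at choice points (random choice,
    Minimal Chain, or Local Attachment / Late Closure).
    """
    treelets_used, ambiguous = decode(g_curr, sentence, choose)
    if ambiguous and policy == "waiting":
        return g_curr                         # WSTL: set no parameters on ambiguous input
    return g_curr | set(treelets_used)        # parametric decoding: keep contributing treelets
```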
See Fodor (1998b) and Sakas & Fodor (2001) for detailed discussion about the power of decoding when applied to the acquisition process.

Guessing STL    99%      Average
RC              1,486    166
MC              1,412    160
LAC             1,923    197
Table 4: guessing STL family, # of sentences consumed

4 Conclusion and future work

The thrust of our current research is directed at collecting data for a comprehensive, comparative study of psycho-computational models of syntax acquisition. To support this endeavor, we have developed the Language Domain Database – a publicly available test-bed for studying acquisition models from diverse paradigms. Mathematical analysis has shown that learners are extremely sensitive to various distributions in the input stream (Niyogi & Berwick, 1996; Sakas, 2000, 2003). Approaches that thrive in one domain may dramatically flounder in others. So, whether a particular computational model is successful as a model of natural language acquisition is ultimately an empirical issue and depends on the exact conditions under which the model performs well and the extent to which those favorable conditions are in line with the facts of human language. The LDD is a useful tool that can be used within such an empirical research program.

Future work: Though the LDD has been validated against CHILDES data in certain respects, we intend to extend this work by adding distributions to the LDD that correspond to actual distributions of child-directed speech. For example, what percentage of utterances in child-directed Japanese contain pro-drop? Object-drop? How often in English does the pattern S[+WH] Aux Verb O1 occur, and at what periods of a child's development? We believe that these distributions will shed some light on many of the complex subtleties involved in ambiguity disambiguation and the role of nondeterminism and statistics in the language acquisition process. This is proving to be a formidable, yet surmountable task; one that we are just beginning to tackle.

Acknowledgements

This paper reports work done in part with other members of CUNY-CoLAG (CUNY's Computational Language Acquisition Group) including Janet Dean Fodor, Virginia Teller, Eiji Nishimoto, Aaron Harnley, Yana Melnikova, Erika Troseth, Carrie Crowther, Atsu Inoue, Yukiko Koizumi, Lisa Resig-Ferrazzano, and Tanya Viger. Also thanks to Charles Yang for much useful discussion, and valuable comments from the anonymous reviewers. This research was funded by PSC-CUNY Grant #63387-00-32 and CUNY Collaborative Grant #92902-00-07.

References

Bertolo, S. (Ed.) (2001). Language Acquisition and Learnability. Cambridge, UK: Cambridge University Press.

Bertolo, S., Broihier, K., Gibson, E., & Wexler, K. (1997a). Characterizing learnability conditions for cue-based learners in parametric language systems. Proceedings of the Fifth Meeting on Mathematics of Language.

Bertolo, S., Broihier, K., Gibson, E., & Wexler, K. (1997b). Cue-based learners in parametric language systems: Application of general results to a recently proposed learning algorithm based on unambiguous 'superparsing'. In M. G. Shafto and P. Langley (eds.), the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum Associates.

Berwick, R. C., & Niyogi, P. (1996). Learning from triggers. Linguistic Inquiry, 27(4), 605-622.

Briscoe, T. (2000). Grammatical acquisition: Inductive bias and coevolution of language and the language acquisition device. Language, 76(2), 245-296.

Chomsky, N. (1981). Lectures on Government and Binding. Dordrecht: Foris Publications.

Chomsky, N. (1995). The Minimalist Program. Cambridge, MA: MIT Press.
Cambridge MA: MIT Press. Cinque, G. (1999) Adverbs and Functional Heads. Oxford, UK: Oxford University Press. Fodor, J. D. (1998a) Unambiguous triggers, Linguistic Inquiry 29.1, 1-36. Fodor, J. D. (1998b) Parsing to learn. Journal of Psycholinguistic Research 27.3, 339-374. Fodor, J.D., Melnikova, Y. & Troseth, E. (2002) A structurally defined language domain for testing syntax acquisition models. Technical Report. CUNY Graduate Center. Gibson, E. and Wexler, K. (1994) Triggers. Linguistic Inquiry 25, 407-454. Gold, E. M. (1967) Language identification in the limit. Information and Control 10, 447-474. Hyams, N. (1986) Language Acquisition and the Theory of Parameters. Dordrecht: Reidel. Jain, S., E. Martin, D. Osherson, J. Royer, and A. Sharma. (1991) Systems That Learn. 2nd ed. Cambridge, MA: MIT Press. Kayne, R. S. (1994) The Antisymmetry of Syntax. Cambridge MA: MIT Press. Kohl, K.T. (1999) An Analysis of Finite Parameter Learning in Linguistic Spaces. Master’s Thesis, MIT. MacWhinney, B. (1995) The CHILDES Project: Tools for Analyzing Talk. (2nd ed.) Hillsdale, NJ: Lawrence Erlbaum Associates. Niyogi, P. (1998) The Informational Complexity of Learning: Perspectives on Neural Networks and Generative Grammar. Dordrecht: Kluwer Academic. Pinker, S. (1979) Formal models of language learning, Cognition 7, 217-283. Sagae, K., Lavie, A., MacWhinney, B. (2001) Parsing the CHILDES database: Methodology and lessons learned. In Proceedings of the Seventh International Workshop in Parsing Technologies. Beijing, China. Sakas, W.G. (in prep) Grammar/Language smoothness and the need (or not) of syntactic parameters. Hunter College and The Graduate Center, City University of New York. Sakas, W.G. (2000) Ambiguity and the Computational Feasibility of Syntax Acquisition, Doctoral Dissertation, City University of New York. Sakas, W.G. and Fodor, J.D. (2001). The Structural Triggers Learner. In S. Bertolo (ed.) Language Acquisition and Learnability. Cambridge, UK: Cambridge University Press. Sakas, W.G. and Nishimoto, E. (2002) Search, Structure or Statistics? A Comparative Study of Memoryless Heuristics for Syntax Acquisition, Proceedings of the 24th Annual Conference of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum Associates. Siskind, J.M. & Dimitriadis, A., [Online 5/20/2003] Documentation for qtree, a LaTeX tree package http://www.ling.upenn.edu/advice/latex/qtree/ Villavicencio, A. (2000) The use of default unification in a system of lexical types. Paper presented at the Workshop on Linguistic Theory and Grammar Implementation, Birmingham, UK. Wexler, K. and Culicover, P. (1980) Formal Principles of Language Acquisition. Cambridge MA: MIT Press.
2003
53
Accurate Unlexicalized Parsing Dan Klein Computer Science Department Stanford University Stanford, CA 94305-9040 [email protected] Christopher D. Manning Computer Science Department Stanford University Stanford, CA 94305-9040 [email protected] Abstract We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its performance of 86.36% (LP/LR F1) is better than that of early lexicalized PCFG models, and surprisingly close to the current state-of-theart. This result has potential uses beyond establishing a strong lower bound on the maximum possible accuracy of unlexicalized models: an unlexicalized PCFG is much more compact, easier to replicate, and easier to interpret than more complex lexical models, and the parsing algorithms are simpler, more widely understood, of lower asymptotic complexity, and easier to optimize. In the early 1990s, as probabilistic methods swept NLP, parsing work revived the investigation of probabilistic context-free grammars (PCFGs) (Booth and Thomson, 1973; Baker, 1979). However, early results on the utility of PCFGs for parse disambiguation and language modeling were somewhat disappointing. A conviction arose that lexicalized PCFGs (where head words annotate phrasal nodes) were the key tool for high performance PCFG parsing. This approach was congruent with the great success of word n-gram models in speech recognition, and drew strength from a broader interest in lexicalized grammars, as well as demonstrations that lexical dependencies were a key tool for resolving ambiguities such as PP attachments (Ford et al., 1982; Hindle and Rooth, 1993). In the following decade, great success in terms of parse disambiguation and even language modeling was achieved by various lexicalized PCFG models (Magerman, 1995; Charniak, 1997; Collins, 1999; Charniak, 2000; Charniak, 2001). However, several results have brought into question how large a role lexicalization plays in such parsers. Johnson (1998) showed that the performance of an unlexicalized PCFG over the Penn treebank could be improved enormously simply by annotating each node by its parent category. The Penn treebank covering PCFG is a poor tool for parsing because the context-freedom assumptions it embodies are far too strong, and weakening them in this way makes the model much better. More recently, Gildea (2001) discusses how taking the bilexical probabilities out of a good current lexicalized PCFG parser hurts performance hardly at all: by at most 0.5% for test text from the same domain as the training data, and not at all for test text from a different domain.1 But it is precisely these bilexical dependencies that backed the intuition that lexicalized PCFGs should be very successful, for example in Hindle and Rooth’s demonstration from PP attachment. We take this as a reflection of the fundamental sparseness of the lexical dependency information available in the Penn Treebank. As a speech person would say, one million words of training data just isn’t enough. 
Even for topics central to the treebank’s Wall Street Journal text, such as stocks, many very plausible dependencies occur only once, for example stocks stabilized, while many others occur not at all, for example stocks skyrocketed.2 The best-performing lexicalized PCFGs have increasingly made use of subcategorization3 of the 1There are minor differences, but all the current best-known lexicalized PCFGs employ both monolexical statistics, which describe the phrasal categories of arguments and adjuncts that appear around a head lexical item, and bilexical statistics, or dependencies, which describe the likelihood of a head word taking as a dependent a phrase headed by a certain other word. 2This observation motivates various class- or similaritybased approaches to combating sparseness, and this remains a promising avenue of work, but success in this area has proven somewhat elusive, and, at any rate, current lexicalized PCFGs do simply use exact word matches if available, and interpolate with syntactic category-based estimates when they are not. 3In this paper we use the term subcategorization in the original general sense of Chomsky (1965), for where a syntactic catcategories appearing in the Penn treebank. Charniak (2000) shows the value his parser gains from parentannotation of nodes, suggesting that this information is at least partly complementary to information derivable from lexicalization, and Collins (1999) uses a range of linguistically motivated and carefully hand-engineered subcategorizations to break down wrong context-freedom assumptions of the naive Penn treebank covering PCFG, such as differentiating “base NPs” from noun phrases with phrasal modifiers, and distinguishing sentences with empty subjects from those where there is an overt subject NP. While he gives incomplete experimental results as to their efficacy, we can assume that these features were incorporated because of beneficial effects on parsing that were complementary to lexicalization. In this paper, we show that the parsing performance that can be achieved by an unlexicalized PCFG is far higher than has previously been demonstrated, and is, indeed, much higher than community wisdom has thought possible. We describe several simple, linguistically motivated annotations which do much to close the gap between a vanilla PCFG and state-of-the-art lexicalized models. Specifically, we construct an unlexicalized PCFG which outperforms the lexicalized PCFGs of Magerman (1995) and Collins (1996) (though not more recent models, such as Charniak (1997) or Collins (1999)). One benefit of this result is a much-strengthened lower bound on the capacity of an unlexicalized PCFG. To the extent that no such strong baseline has been provided, the community has tended to greatly overestimate the beneficial effect of lexicalization in probabilistic parsing, rather than looking critically at where lexicalized probabilities are both needed to make the right decision and available in the training data. Secondly, this result affirms the value of linguistic analysis for feature discovery. The result has other uses and advantages: an unlexicalized PCFG is easier to interpret, reason about, and improve than the more complex lexicalized models. The grammar representation is much more compact, no longer requiring large structures that store lexicalized probabilities. 
The parsing algorithms have lower asymptotic complexity4 and have much smaller grammar egory is divided into several subcategories, for example dividing verb phrases into finite and non-finite verb phrases, rather than in the modern restricted usage where the term refers only to the syntactic argument frames of predicators. 4O(n3) vs. O(n5) for a naive implementation, or vs. O(n4) if using the clever approach of Eisner and Satta (1999). constants. An unlexicalized PCFG parser is much simpler to build and optimize, including both standard code optimization techniques and the investigation of methods for search space pruning (Caraballo and Charniak, 1998; Charniak et al., 1998). It is not our goal to argue against the use of lexicalized probabilities in high-performance probabilistic parsing. It has been comprehensively demonstrated that lexical dependencies are useful in resolving major classes of sentence ambiguities, and a parser should make use of such information where possible. We focus here on using unlexicalized, structural context because we feel that this information has been underexploited and underappreciated. We see this investigation as only one part of the foundation for state-of-the-art parsing which employs both lexical and structural conditioning. 1 Experimental Setup To facilitate comparison with previous work, we trained our models on sections 2–21 of the WSJ section of the Penn treebank. We used the first 20 files (393 sentences) of section 22 as a development set (devset). This set is small enough that there is noticeable variance in individual results, but it allowed rapid search for good features via continually reparsing the devset in a partially manual hill-climb. All of section 23 was used as a test set for the final model. For each model, input trees were annotated or transformed in some way, as in Johnson (1998). Given a set of transformed trees, we viewed the local trees as grammar rewrite rules in the standard way, and used (unsmoothed) maximum-likelihood estimates for rule probabilities.5 To parse the grammar, we used a simple array-based Java implementation of a generalized CKY parser, which, for our final best model, was able to exhaustively parse all sentences in section 23 in 1GB of memory, taking approximately 3 sec for average length sentences.6 5The tagging probabilities were smoothed to accommodate unknown words. The quantity P(tag|word) was estimated as follows: words were split into one of several categories wordclass, based on capitalization, suffix, digit, and other character features. For each of these categories, we took the maximum-likelihood estimate of P(tag|wordclass). This distribution was used as a prior against which observed taggings, if any, were taken, giving P(tag|word) = [c(tag, word) + κ P(tag|wordclass)]/[c(word)+κ]. This was then inverted to give P(word|tag). The quality of this tagging model impacts all numbers; for example the raw treebank grammar’s devset F1 is 72.62 with it and 72.09 without it. 6The parser is available for download as open source at: http://nlp.stanford.edu/downloads/lex-parser.shtml VP <VP:[VBZ]. . . PP> <VP:[VBZ]. . . NP> <VP:[VBZ]> VBZ NP PP Figure 1: The v=1, h=1 markovization of VP →VBZ NP PP. 2 Vertical and Horizontal Markovization The traditional starting point for unlexicalized parsing is the raw n-ary treebank grammar read from training trees (after removing functional tags and null elements). This basic grammar is imperfect in two well-known ways. 
First, the category symbols are too coarse to adequately render the expansions independent of the contexts. For example, subject NP expansions are very different from object NP expansions: a subject NP is 8.7 times more likely than an object NP to expand as just a pronoun. Having separate symbols for subject and object NPs allows this variation to be captured and used to improve parse scoring. One way of capturing this kind of external context is to use parent annotation, as presented in Johnson (1998). For example, NPs with S parents (like subjects) will be marked NPˆS, while NPs with VP parents (like objects) will be NPˆVP. The second basic deficiency is that many rule types have been seen only once (and therefore have their probabilities overestimated), and many rules which occur in test sentences will never have been seen in training (and therefore have their probabilities underestimated – see Collins (1999) for analysis). Note that in parsing with the unsplit grammar, not having seen a rule doesn’t mean one gets a parse failure, but rather a possibly very weird parse (Charniak, 1996). One successful method of combating sparsity is to markovize the rules (Collins, 1999). In particular, we follow that work in markovizing out from the head child, despite the grammar being unlexicalized, because this seems the best way to capture the traditional linguistic insight that phrases are organized around a head (Radford, 1988). Both parent annotation (adding context) and RHS markovization (removing it) can be seen as two instances of the same idea. In parsing, every node has a vertical history, including the node itself, parent, grandparent, and so on. A reasonable assumption is that only the past v vertical ancestors matter to the current expansion. Similarly, only the previous h horizontal ancestors matter (we assume that the head Horizontal Markov Order Vertical Order h = 0 h = 1 h ≤2 h = 2 h = ∞ v = 1 No annotation 71.27 72.5 73.46 72.96 72.62 (854) (3119) (3863) (6207) (9657) v ≤2 Sel. Parents 74.75 77.42 77.77 77.50 76.91 (2285) (6564) (7619) (11398) (14247) v = 2 All Parents 74.68 77.42 77.81 77.50 76.81 (2984) (7312) (8367) (12132) (14666) v ≤3 Sel. GParents 76.50 78.59 79.07 78.97 78.54 (4943) (12374) (13627) (19545) (20123) v = 3 All GParents 76.74 79.18 79.74 79.07 78.72 (7797) (15740) (16994) (22886) (22002) Figure 2: Markovizations: F1 and grammar size. child always matters). It is a historical accident that the default notion of a treebank PCFG grammar takes v = 1 (only the current node matters vertically) and h = ∞(rule right hand sides do not decompose at all). On this view, it is unsurprising that increasing v and decreasing h have historically helped. As an example, consider the case of v = 1, h = 1. If we start with the rule VP →VBZ NP PP PP, it will be broken into several stages, each a binary or unary rule, which conceptually represent a head-outward generation of the right hand size, as shown in figure 1. The bottom layer will be a unary over the head declaring the goal: ⟨VP: [VBZ]⟩→ VBZ. The square brackets indicate that the VBZ is the head, while the angle brackets ⟨X⟩indicates that the symbol ⟨X⟩is an intermediate symbol (equivalently, an active or incomplete state). The next layer up will generate the first rightward sibling of the head child: ⟨VP: [VBZ]. . . NP⟩→⟨VP: [VBZ]⟩ NP. Next, the PP is generated: ⟨VP: [VBZ]. . . PP⟩→ ⟨VP: [VBZ]. . . NP⟩PP. We would then branch off left siblings if there were any.7 Finally, we have another unary to finish the VP. 
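To make the decomposition concrete, the following sketch (ours, not the parser's actual Java data structures; the ASCII state names stand in for the angle-bracketed symbols above, and only rightward siblings of the head are generated, as in the example) computes the h = 1 markovization of a single treebank rule:

def markovize_h1(parent, children, head_index):
    """Return the unary/binary rewrites that replace parent -> children.

    For ('VP', ['VBZ', 'NP', 'PP', 'PP'], 0) this yields:
      <VP:[VBZ]>     -> VBZ
      <VP:[VBZ]..NP> -> <VP:[VBZ]> NP
      <VP:[VBZ]..PP> -> <VP:[VBZ]..NP> PP
      <VP:[VBZ]..PP> -> <VP:[VBZ]..PP> PP
      VP             -> <VP:[VBZ]..PP>
    """
    head = children[head_index]
    state = f"<{parent}:[{head}]>"
    rules = [(state, [head])]                          # unary introducing the head
    for sibling in children[head_index + 1:]:          # head-outward, rightward
        new_state = f"<{parent}:[{head}]..{sibling}>"  # h = 1: keep only the last sibling
        rules.append((new_state, [state, sibling]))
        state = new_state
    rules.append((parent, [state]))                    # final unary completing the phrase
    return rules

Because h = 1 remembers only the most recent sibling, the two PP expansions in the example collapse onto the same intermediate symbol; larger h keeps longer suffixes, while vertical markovization (v > 1) is an independent annotation of the category labels themselves.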
Note that while it is convenient to think of this as a head-outward process, these are just PCFG rewrites, and so the actual scores attached to each rule will correspond to a downward generation order. Figure 2 presents a grid of horizontal and vertical markovizations of the grammar. The raw treebank grammar corresponds to v = 1, h = ∞(the upper right corner), while the parent annotation in (Johnson, 1998) corresponds to v = 2, h = ∞, and the second-order model in Collins (1999), is broadly a smoothed version of v = 2, h = 2. In addition to exact nth-order models, we tried variable7In our system, the last few right children carry over as preceding context for the left children, distinct from common practice. We found this wrapped horizon to be beneficial, and it also unifies the infinite order model with the unmarkovized raw rules. Cumulative Indiv. Annotation Size F1 1 F1 1 F1 Baseline (v ≤2, h ≤2) 7619 77.77 – – UNARY-INTERNAL 8065 78.32 0.55 0.55 UNARY-DT 8066 78.48 0.71 0.17 UNARY-RB 8069 78.86 1.09 0.43 TAG-PA 8520 80.62 2.85 2.52 SPLIT-IN 8541 81.19 3.42 2.12 SPLIT-AUX 9034 81.66 3.89 0.57 SPLIT-CC 9190 81.69 3.92 0.12 SPLIT-% 9255 81.81 4.04 0.15 TMP-NP 9594 82.25 4.48 1.07 GAPPED-S 9741 82.28 4.51 0.17 POSS-NP 9820 83.06 5.29 0.28 SPLIT-VP 10499 85.72 7.95 1.36 BASE-NP 11660 86.04 8.27 0.73 DOMINATES-V 14097 86.91 9.14 1.42 RIGHT-REC-NP 15276 87.04 9.27 1.94 Figure 3: Size and devset performance of the cumulatively annotated models, starting with the markovized baseline. The right two columns show the change in F1 from the baseline for each annotation introduced, both cumulatively and for each single annotation applied to the baseline in isolation. history models similar in intent to those described in Ron et al. (1994). For variable horizontal histories, we did not split intermediate states below 10 occurrences of a symbol. For example, if the symbol ⟨VP: [VBZ]. . . PP PP⟩were too rare, we would collapse it to ⟨VP: [VBZ]. . . PP⟩. For vertical histories, we used a cutoff which included both frequency and mutual information between the history and the expansions (this was not appropriate for the horizontal case because MI is unreliable at such low counts). Figure 2 shows parsing accuracies as well as the number of symbols in each markovization. These symbol counts include all the intermediate states which represent partially completed constituents. The general trend is that, in the absence of further annotation, more vertical annotation is better – even exhaustive grandparent annotation. This is not true for horizontal markovization, where the variableorder second-order model was superior. The best entry, v = 3, h ≤2, has an F1 of 79.74, already a substantial improvement over the baseline. In the remaining sections, we discuss other annotations which increasingly split the symbol space. Since we expressly do not smooth the grammar, not all splits are guaranteed to be beneficial, and not all sets of useful splits are guaranteed to co-exist well. In particular, while v = 3, h ≤2 markovization is good on its own, it has a large number of states and does not tolerate further splitting well. Therefore, we base all further exploration on the v ≤2, h ≤2 ROOT SˆROOT NPˆS NN Revenue VPˆS VBD was NPˆVP QP $ $ CD 444.9 CD million , , SˆVP VPˆS VBG including NPˆVP NPˆNP JJ net NN interest , , CONJP RB down RB slightly IN from NPˆNP QP $ $ CD 450.7 CD million . . Figure 4: An error which can be resolved with the UNARYINTERNAL annotation (incorrect baseline parse shown). grammar. 
Although it does not necessarily jump out of the grid at first glance, this point represents the best compromise between a compact grammar and useful markov histories. 3 External vs. Internal Annotation The two major previous annotation strategies, parent annotation and head lexicalization, can be seen as instances of external and internal annotation, respectively. Parent annotation lets us indicate an important feature of the external environment of a node which influences the internal expansion of that node. On the other hand, lexicalization is a (radical) method of marking a distinctive aspect of the otherwise hidden internal contents of a node which influence the external distribution. Both kinds of annotation can be useful. To identify split states, we add suffixes of the form -X to mark internal content features, and ˆX to mark external features. To illustrate the difference, consider unary productions. In the raw grammar, there are many unaries, and once any major category is constructed over a span, most others become constructible as well using unary chains (see Klein and Manning (2001) for discussion). Such chains are rare in real treebank trees: unary rewrites only appear in very specific contexts, for example S complements of verbs where the S has an empty, controlled subject. Figure 4 shows an erroneous output of the parser, using the baseline markovized grammar. Intuitively, there are several reasons this parse should be ruled out, but one is that the lower S slot, which is intended primarily for S complements of communication verbs, is not a unary rewrite position (such complements usually have subjects). It would therefore be natural to annotate the trees so as to confine unary productions to the contexts in which they are actually appropriate. We tried two annotations. First, UNARYINTERNAL marks (with a -U) any nonterminal node which has only one child. In isolation, this resulted in an absolute gain of 0.55% (see figure 3). The same sentence, parsed using only the baseline and UNARY-INTERNAL, is parsed correctly, because the VP rewrite in the incorrect parse ends with an SˆVPU with very low probability.8 Alternately, UNARY-EXTERNAL, marked nodes which had no siblings with ˆU. It was similar to UNARY-INTERNAL in solo benefit (0.01% worse), but provided far less marginal benefit on top of other later features (none at all on top of UNARYINTERNAL for our top models), and was discarded.9 One restricted place where external unary annotation was very useful, however, was at the preterminal level, where internal annotation was meaningless. One distributionally salient tag conflation in the Penn treebank is the identification of demonstratives (that, those) and regular determiners (the, a). Splitting DT tags based on whether they were only children (UNARY-DT) captured this distinction. The same external unary annotation was even more effective when applied to adverbs (UNARY-RB), distinguishing, for example, as well from also). Beyond these cases, unary tag marking was detrimental. The F1 after UNARY-INTERNAL, UNARY-DT, and UNARY-RB was 78.86%. 4 Tag Splitting The idea that part-of-speech tags are not fine-grained enough to abstract away from specific-word behaviour is a cornerstone of lexicalization. The UNARY-DT annotation, for example, showed that the determiners which occur alone are usefully distinguished from those which occur with other nominal material. This marks the DT nodes with a single bit about their immediate external context: whether there are sisters. 
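As a small illustration of this kind of external tag marking (a sketch only: the nested-list tree encoding, the ^U suffix spelling, and the function names are ours, not the parser's actual representation):

def is_preterminal(node):
    # a preterminal is [tag, "word"]: one child, and that child is a bare string
    return isinstance(node, list) and len(node) == 2 and isinstance(node[1], str)

def mark_unary_tags(node, tags=("DT", "RB")):
    # Append ^U to selected preterminal tags that have no sisters
    # (the UNARY-DT and UNARY-RB annotations described above).
    if not isinstance(node, list):
        return node                                  # bare word: nothing to do
    children = node[1:]
    for child in children:
        if len(children) == 1 and is_preterminal(child) and child[0] in tags:
            child[0] += "^U"
        mark_unary_tags(child, tags)
    return node

For example, mark_unary_tags(["NP", ["DT", "that"]]) relabels the lone determiner as DT^U, while the DT in ["NP", ["DT", "the"], ["NN", "dog"]] is left untouched.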
Given the success of parent annotation for nonterminals, it makes sense to parent annotate tags, as well (TAG-PA). In fact, as figure 3 shows, exhaustively marking all preterminals with their parent category was the most effective single annotation we tried. Why should this be useful? Most tags have a canonical category. For example, NNS tags occur under NP nodes (only 234 of 70855 do not, mostly mistakes). However, when a tag 8Note that when we show such trees, we generally only show one annotation on top of the baseline at a time. Moreover, we do not explicitly show the binarization implicit by the horizontal markovization. 9These two are not equivalent even given infinite data. VPˆS TO to VPˆVP VB see PPˆVP IN if NPˆPP NN advertising NNS works VPˆS TOˆVP to VPˆVP VBˆVP see SBARˆVP INˆSBAR if SˆSBAR NPˆS NNˆNP advertising VPˆS VBZˆVP works (a) (b) Figure 5: An error resolved with the TAG-PA annotation (of the IN tag): (a) the incorrect baseline parse and (b) the correct TAGPA parse. SPLIT-IN also resolves this error. somewhat regularly occurs in a non-canonical position, its distribution is usually distinct. For example, the most common adverbs directly under ADVP are also (1599) and now (544). Under VP, they are n’t (3779) and not (922). Under NP, only (215) and just (132), and so on. TAG-PA brought F1 up substantially, to 80.62%. In addition to the adverb case, the Penn tag set conflates various grammatical distinctions that are commonly made in traditional and generative grammar, and from which a parser could hope to get useful information. For example, subordinating conjunctions (while, as, if), complementizers (that, for), and prepositions (of, in, from) all get the tag IN. Many of these distinctions are captured by TAGPA (subordinating conjunctions occur under S and prepositions under PP), but are not (both subordinating conjunctions and complementizers appear under SBAR). Also, there are exclusively nounmodifying prepositions (of), predominantly verbmodifying ones (as), and so on. The annotation SPLIT-IN does a linguistically motivated 6-way split of the IN tag, and brought the total to 81.19%. Figure 5 shows an example error in the baseline which is equally well fixed by either TAG-PA or SPLIT-IN. In this case, the more common nominal use of works is preferred unless the IN tag is annotated to allow if to prefer S complements. We also got value from three other annotations which subcategorized tags for specific lexemes. First we split off auxiliary verbs with the SPLITAUX annotation, which appends ˆBE to all forms of be and ˆHAVE to all forms of have.10 More minorly, SPLIT-CC marked conjunction tags to indicate 10This is an extended uniform version of the partial auxiliary annotation of Charniak (1997), wherein all auxiliaries are marked as AUX and a -G is added to gerund auxiliaries and gerund VPs. whether or not they were the strings [Bb]ut or &, each of which have distinctly different distributions from other conjunctions. Finally, we gave the percent sign (%) its own tag, in line with the dollar sign ($) already having its own. Together these three annotations brought the F1 to 81.81%. 5 What is an Unlexicalized Grammar? Around this point, we must address exactly what we mean by an unlexicalized PCFG. To the extent that we go about subcategorizing POS categories, many of them might come to represent a single word. One might thus feel that the approach of this paper is to walk down a slippery slope, and that we are merely arguing degrees. 
However, we believe that there is a fundamental qualitative distinction, grounded in linguistic practice, between what we see as permitted in an unlexicalized PCFG as against what one finds and hopes to exploit in lexicalized PCFGs. The division rests on the traditional distinction between function words (or closed-class words) and content words (or open class or lexical words). It is standard practice in linguistics, dating back decades, to annotate phrasal nodes with important functionword distinctions, for example to have a CP[for] or a PP[to], whereas content words are not part of grammatical structure, and one would not have special rules or constraints for an NP[stocks], for example. We follow this approach in our model: various closed classes are subcategorized to better represent important distinctions, and important features commonly expressed by function words are annotated onto phrasal nodes (such as whether a VP is finite, or a participle, or an infinitive clause). However, no use is made of lexical class words, to provide either monolexical or bilexical probabilities.11 At any rate, we have kept ourselves honest by estimating our models exclusively by maximum likelihood estimation over our subcategorized grammar, without any form of interpolation or shrinkage to unsubcategorized categories (although we do markovize rules, as explained above). This effec11It should be noted that we started with four tags in the Penn treebank tagset that rewrite as a single word: EX (there), WP$ (whose), # (the pound sign), and TO), and some others such as WP, POS, and some of the punctuation tags, which rewrite as barely more. To the extent that we subcategorize tags, there will be more such cases, but many of them already exist in other tag sets. For instance, many tag sets, such as the Brown and CLAWS (c5) tagsets give a separate sets of tags to each form of the verbal auxiliaries be, do, and have, most of which rewrite as only a single word (and any corresponding contractions). VPˆS TO to VPˆVP VB appear NPˆVP NPˆNP CD three NNS times PPˆNP IN on NPˆPP NNP CNN JJ last NN night VPˆS TO to VPˆVP VB appear NPˆVP NPˆNP CD three NNS times PPˆNP IN on NPˆPP NNP CNN NP-TMPˆVP JJ last NNˆTMP night (a) (b) Figure 6: An error resolved with the TMP-NP annotation: (a) the incorrect baseline parse and (b) the correct TMP-NP parse. tively means that the subcategories that we break off must themselves be very frequent in the language. In such a framework, if we try to annotate categories with any detailed lexical information, many sentences either entirely fail to parse, or have only extremely weird parses. The resulting battle against sparsity means that we can only afford to make a few distinctions which have major distributional impact. Even with the individual-lexeme annotations in this section, the grammar still has only 9255 states compared to the 7619 of the baseline model. 6 Annotations Already in the Treebank At this point, one might wonder as to the wisdom of stripping off all treebank functional tags, only to heuristically add other such markings back in to the grammar. By and large, the treebank out-of-the package tags, such as PP-LOC or ADVP-TMP, have negative utility. Recall that the raw treebank grammar, with no annotation or markovization, had an F1 of 72.62% on our development set. With the functional annotation left in, this drops to 71.49%. The h ≤2, v ≤1 markovization baseline of 77.77% dropped even further, all the way to 72.87%, when these annotations were included. 
Nonetheless, some distinctions present in the raw treebank trees were valuable. For example, an NP with an S parent could be either a temporal NP or a subject. For the annotation TMP-NP, we retained the original -TMP tags on NPs, and, furthermore, propagated the tag down to the tag of the head of the NP. This is illustrated in figure 6, which also shows an example of its utility, clarifying that CNN last night is not a plausible compound and facilitating the otherwise unusual high attachment of the smaller NP. TMP-NP brought the cumulative F1 to 82.25%. Note that this technique of pushing the functional tags down to preterminals might be useful more generally; for example, locative PPs expand roughly the ROOT SˆROOT “ “ NPˆS DT This VPˆS VBZ is VPˆVP VB panic NPˆVP NN buying . ! ” ” ROOT SˆROOT “ “ NPˆS DT This VPˆS-VBF VBZ is NPˆVP NN panic NN buying . ! ” ” (a) (b) Figure 7: An error resolved with the SPLIT-VP annotation: (a) the incorrect baseline parse and (b) the correct SPLIT-VP parse. same way as all other PPs (usually as IN NP), but they do tend to have different prepositions below IN. A second kind of information in the original trees is the presence of empty elements. Following Collins (1999), the annotation GAPPED-S marks S nodes which have an empty subject (i.e., raising and control constructions). This brought F1 to 82.28%. 7 Head Annotation The notion that the head word of a constituent can affect its behavior is a useful one. However, often the head tag is as good (or better) an indicator of how a constituent will behave.12 We found several head annotations to be particularly effective. First, possessive NPs have a very different distribution than other NPs – in particular, NP →NP α rules are only used in the treebank when the leftmost child is possessive (as opposed to other imaginable uses like for New York lawyers, which is left flat). To address this, POSS-NP marked all possessive NPs. This brought the total F1 to 83.06%. Second, the VP symbol is very overloaded in the Penn treebank, most severely in that there is no distinction between finite and infinitival VPs. An example of the damage this conflation can do is given in figure 7, where one needs to capture the fact that present-tense verbs do not generally take bare infinitive VP complements. To allow the finite/non-finite distinction, and other verb type distinctions, SPLIT-VP annotated all VP nodes with their head tag, merging all finite forms to a single tag VBF. In particular, this also accomplished Charniak’s gerund-VP marking. This was extremely useful, bringing the cumulative F1 to 85.72%, 2.66% absolute improvement (more than its solo improvement over the baseline). 12This is part of the explanation of why (Charniak, 2000) finds that early generation of head tags as in (Collins, 1999) is so beneficial. The rest of the benefit is presumably in the availability of the tags for smoothing purposes. 8 Distance Error analysis at this point suggested that many remaining errors were attachment level and conjunction scope. While these kinds of errors are undoubtedly profitable targets for lexical preference, most attachment mistakes were overly high attachments, indicating that the overall right-branching tendency of English was not being captured. Indeed, this tendency is a difficult trend to capture in a PCFG because often the high and low attachments involve the very same rules. Even if not, attachment height is not modeled by a PCFG unless it is somehow explicitly encoded into category labels. 
More complex parsing models have indirectly overcome this by modeling distance (rather than height). Linear distance is difficult to encode in a PCFG – marking nodes with the size of their yields massively multiplies the state space.13 Therefore, we wish to find indirect indicators that distinguish high attachments from low ones. In the case of two PPs following a NP, with the question of whether the second PP is a second modifier of the leftmost NP or should attach lower, inside the first PP, the important distinction is usually that the lower site is a non-recursive base NP. Collins (1999) captures this notion by introducing the notion of a base NP, in which any NP which dominates only preterminals is marked with a -B. Further, if an NP-B does not have a non-base NP parent, it is given one with a unary production. This was helpful, but substantially less effective than marking base NPs without introducing the unary, whose presence actually erased a useful internal indicator – base NPs are more frequent in subject position than object position, for example. In isolation, the Collins method actually hurt the baseline (absolute cost to F1 of 0.37%), while skipping the unary insertion added an absolute 0.73% to the baseline, and brought the cumulative F1 to 86.04%. In the case of attachment of a PP to an NP either above or inside a relative clause, the high NP is distinct from the low one in that the already modified one contains a verb (and the low one may be a base NP as well). This is a partial explanation of the utility of verbal distance in Collins (1999). To 13The inability to encode distance naturally in a naive PCFG is somewhat ironic. In the heart of any PCFG parser, the fundamental table entry or chart item is a label over a span, for example an NP from position 0 to position 5. The concrete use of a grammar rule is to take two adjacent span-marked labels and combine them (for example NP[0,5] and VP[5,12] into S[0,12]). Yet, only the labels are used to score the combination. Length ≤40 LP LR F1 Exact CB 0 CB Magerman (1995) 84.9 84.6 1.26 56.6 Collins (1996) 86.3 85.8 1.14 59.9 this paper 86.9 85.7 86.3 30.9 1.10 60.3 Charniak (1997) 87.4 87.5 1.00 62.1 Collins (1999) 88.7 88.6 0.90 67.1 Length ≤100 LP LR F1 Exact CB 0 CB this paper 86.3 85.1 85.7 28.8 1.31 57.2 Figure 8: Results of the final model on the test set (section 23). capture this, DOMINATES-V marks all nodes which dominate any verbal node (V*, MD) with a -V. This brought the cumulative F1 to 86.91%. We also tried marking nodes which dominated prepositions and/or conjunctions, but these features did not help the cumulative hill-climb. The final distance/depth feature we used was an explicit attempt to model depth, rather than use distance and linear intervention as a proxy. With RIGHT-REC-NP, we marked all NPs which contained another NP on their right periphery (i.e., as a rightmost descendant). This captured some further attachment trends, and brought us to a final development F1 of 87.04%. 9 Final Results We took the final model and used it to parse section 23 of the treebank. Figure 8 shows the results. The test set F1 is 86.32% for ≤40 words, already higher than early lexicalized models, though of course lower than the state-of-the-art parsers. 10 Conclusion The advantages of unlexicalized grammars are clear enough – easy to estimate, easy to parse with, and time- and space-efficient. However, the dismal performance of basic unannotated unlexicalized grammars has generally rendered those advantages irrelevant. 
Here, we have shown that, surprisingly, the maximum-likelihood estimate of a compact unlexicalized PCFG can parse on par with early lexicalized parsers. We do not want to argue that lexical selection is not a worthwhile component of a state-ofthe-art parser – certain attachments, at least, require it – though perhaps its necessity has been overstated. Rather, we have shown ways to improve parsing, some easier than lexicalization, and others of which are orthogonal to it, and could presumably be used to benefit lexicalized parsers as well. Acknowledgements This paper is based on work supported in part by the National Science Foundation under Grant No. IIS0085896, and in part by an IBM Faculty Partnership Award to the second author. References James K. Baker. 1979. Trainable grammars for speech recognition. In D. H. Klatt and J. J. Wolf, editors, Speech Communication Papers for the 97th Meeting of the Acoustical Society of America, pages 547–550. Taylor L. Booth and Richard A. Thomson. 1973. Applying probability measures to abstract languages. IEEE Transactions on Computers, C-22:442–450. Sharon A. Caraballo and Eugene Charniak. 1998. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24:275–298. Eugene Charniak, Sharon Goldwater, and Mark Johnson. 1998. Edge-based best-first chart parsing. In Proceedings of the Sixth Workshop on Very Large Corpora, pages 127–133. Eugene Charniak. 1996. Tree-bank grammars. In Proc. of the 13th National Conference on Artificial Intelligence, pp. 1031–1036. Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the 14th National Conference on Artificial Intelligence, pp. 598–603. Eugene Charniak. 2000. A maximum-entropy-inspired parser. In NAACL 1, pages 132–139. Eugene Charniak. 2001. Immediate-head parsing for language models. In ACL 39. Noam Chomsky. 1965. Aspects of the Theory of Syntax. MIT Press, Cambridge, MA. Michael John Collins. 1996. A new statistical parser based on bigram lexical dependencies. In ACL 34, pages 184–191. M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univ. of Pennsylvania. Jason Eisner and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head-automaton grammars. In ACL 37, pages 457–464. Marilyn Ford, Joan Bresnan, and Ronald M. Kaplan. 1982. A competence-based theory of syntactic closure. In Joan Bresnan, editor, The Mental Representation of Grammatical Relations, pages 727–796. MIT Press, Cambridge, MA. Daniel Gildea. 2001. Corpus variation and parser performance. In 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP). Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103–120. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24:613–632. Dan Klein and Christopher D. Manning. 2001. Parsing with treebank grammars: Empirical bounds, theoretical models, and the structure of the Penn treebank. In ACL 39/EACL 10. David M. Magerman. 1995. Statistical decision-tree models for parsing. In ACL 33, pages 276–283. Andrew Radford. 1988. Transformational Grammar. Cambridge University Press, Cambridge. Dana Ron, Yoram Singer, and Naftali Tishby. 1994. The power of amnesia. Advances in Neural Information Processing Systems, volume 6, pages 176–183. Morgan Kaufmann.
2003
54
Deep Syntactic Processing by Combining Shallow Methods P´eter Dienes and Amit Dubey Department of Computational Linguistics Saarland University PO Box 15 11 50 66041 Saarbr¨ucken, Germany {dienes,adubey}@coli.uni-sb.de Abstract We present a novel approach for finding discontinuities that outperforms previously published results on this task. Rather than using a deeper grammar formalism, our system combines a simple unlexicalized PCFG parser with a shallow pre-processor. This pre-processor, which we call a trace tagger, does surprisingly well on detecting where discontinuities can occur without using phase structure information. 1 Introduction In this paper, we explore a novel approach for finding long-distance dependencies. In particular, we detect such dependencies, or discontinuities, in a two-step process: (i) a conceptually simple shallow tagger looks for sites of discontinuties as a preprocessing step, before parsing; (ii) the parser then finds the dependent constituent (antecedent). Clearly, information about long-distance relationships is vital for semantic interpretation. However, such constructions prove to be difficult for stochastic parsers (Collins et al., 1999) and they either avoid tackling the problem (Charniak, 2000; Bod, 2003) or only deal with a subset of the problematic cases (Collins, 1997). Johnson (2002) proposes an algorithm that is able to find long-distance dependencies, as a postprocessing step, after parsing. Although this algorithm fares well, it faces the problem that stochastic parsers not designed to capture non-local dependencies may get confused when parsing a sentence with discontinuities. However, the approach presented here is not susceptible to this shortcoming as it finds discontinuties before parsing. Overall, we present three primary contributions. First, we extend the mechanism of adding gap variables for nodes dominating a site of discontinuity (Collins, 1997). This approach allows even a context-free parser to reliably recover antecedents, given prior information about where discontinuities occur. Second, we introduce a simple yet novel finite-state tagger that gives exactly this information to the parser. Finally, we show that the combination of the finite-state mechanism, the parser, and our new method for antecedent recovery can competently analyze discontinuities. The overall organization of the paper is as follows. First, Section 2 sketches the material we use for the experiments in the paper. In Section 3, we propose a modification to a simple PCFG parser that allows it to reliably find antecedents if it knows the sites of long-distance dependencies. Then, in Section 4, we develop a finite-state system that gives the parser exactly that information with fairly high accuracy. We combine the models in Section 5 to recover antecedents. Section 6 discusses related work. 2 Annotation of empty elements Different linguistic theories offer various treatments of non-local head–dependent relations (referred to by several other terms such as extraction, discontinuity, movement or long-distance dependencies). The underlying idea, however, is the same: extraction sites are marked in the syntactic structure and this mark is connected (co-indexed) to the controlType Freq. 
Example NP–NP 987 Sam was seen * WH–NP 438 the woman who you saw *T* PRO–NP 426 * to sleep is nice COMP–SBAR 338 Sam said 0 Sasha snores UNIT 332 $ 25 *U* WH–S 228 Sam had to go, Sasha said *T* WH–ADVP 120 Sam told us how he did it *T* CLAUSE 118 Sam had to go, Sasha said 0 COMP–WHNP 98 the woman 0 we saw *T* ALL 3310 Table 1: Most frequent types of EEs in Section 0. ling constituent. The experiments reported here rely on a training corpus annotated with non-local dependencies as well as phrase-structure information. We used the Wall Street Journal (WSJ) part of the Penn Treebank (Marcus et al., 1993), where extraction is represented by co-indexing an empty terminal element (henceforth EE) to its antecedent. Without committing ourselves to any syntactic theory, we adopt this representation. Following the annotation guidelines (Bies et al., 1995), we distinguish seven basic types of EEs: controlled NP-traces (NP), PROs (PRO), traces of A -movement (mostly wh-movement: WH), empty complementizers (COMP), empty units (UNIT), and traces representing pseudo-attachments (shared constituents, discontinuous dependencies, etc.: PSEUDO) and ellipsis (ELLIPSIS). These labels, however, do not identify the EEs uniquely: for instance, the label WH may represent an extracted NP object as well as an adverb moved out of the verb phrase. In order to facilitate antecedent recovery and to disambiguate the EEs, we also annotate them with their parent nodes. Furthermore, to ease straightforward comparison with previous work (Johnson, 2002), a new label CLAUSE is introduced for COMP-SBAR whenever it is followed by a moved clause WH–S. Table 1 summarizes the most frequent types occurring in the development data, Section 0 of the WSJ corpus, and gives an example for each, following Johnson (2002). For the parsing and antecedent recovery experiments, in the case of WH-traces (WH–  ) and SBAR NP who S      NP you VP      V saw NP      *WH-NP* Figure 1: Threading gap+WH-NP. controlled NP-traces (NP–NP), we follow the standard technique of marking nodes dominating the empty element up to but not including the parent of the antecedent as defective (missing an argument) with a gap feature (Gazdar et al., 1985; Collins, 1997).1 Furthermore, to make antecedent co-indexation possible with many types of EEs, we generalize Collins’ approach by enriching the annotation of non-terminals with the type of the EE in question (eg. WH–NP) by using different gap+ features (gap+WH-NP; cf. Figure 1). The original nonterminals augmented with gap+ features serve as new non-terminal labels. In the experiments, Sections 2–21 were used to train the models, Section 0 served as a development set for testing and improving models, whereas we present the results on the standard test set, Section 23. 3 Parsing with empty elements The present section explores whether an unlexicalized PCFG parser can handle non-local dependencies: first, is it able to detect EEs and, second, can it find their antecedents? The answer to the first question turns out to be negative: due to efficiency reasons and the inappropriateness of the model, detecting all types of EEs is not feasible within the parser. Antecedents, however, can be reliably recovered provided a parser has perfect knowledge about EEs occurring in the input. This shows that the main bottleneck is detecting the EEs and not finding their antecedents. 
In the following section, therefore, we explore how we can provide the parser with information about EE sites in the current sentence without 1This technique fails for 82 sentences of the treebank where the antecedent does not c-command the corresponding EE. relying on phrase structure information. 3.1 Method There are three modifications required to allow a parser to detect EEs and resolve antecedents. First, it should be able to insert empty nodes. Second, it must thread the gap+ variables to the parent node of the antecedent. Knowing this node is not enough, though. Since the Penn Treebank grammar is not binary-branching, the final task is to decide which child of this node is the actual antecedent. The first two modifications are not difficult conceptually. A bottom-up parser can be easily modified to insert empty elements (c.f. Dienes and Dubey (2003)). Likewise, the changes required to include gap+ categories are not complicated: we simply add the gap+ features to the nonterminal category labels. The final and perhaps most important concern with developing a gap-threading parser is to ensure it is possible to choose the correct child as the antecedent of an EE. To achieve this task, we employ the algorithm presented in Figure 2. At any node in the tree where the children, all together, have more gap+ features activated than the parent, the algorithm deduces that a gap+ must have an antecedent. It then picks a child as the antecedent and recursively removes the gap+ feature corresponding to its EE from the non-terminal labels. The algorithm has a shortcoming, though: it cannot reliably handle cases when the antecedent does not c-command its EE. This mostly happens with PSEUDOs (pseudo-attachments), where the algorithm gives up and (wrongly) assumes they have no antecedent. Given the perfect trees of the development set, the antecedent recovery algorithm finds the correct antecedent with 95% accuracy, rising to 98% if PSEUDOs are excluded. Most of the remaining mistakes are caused either by annotation errors, or by binding NP-traces (NP–NP) to adjunct NPs, as opposed to subject NPs. The parsing experiments are carried out with an unlexicalized PCFG augmented with the antecedent recovery algorithm. We use an unlexicalized model to emphasize the point that even a simple model detects long distance dependencies successfully. The parser uses beam thresholding (Goodman, 1998) to for a tree T, iterate over nodes bottom-up for a node with rule P C0  Cn N  multiset of EEs in P M  multiset of EEs in C0  Cn foreach EE of type e in M  N pick a j such that e allows Cj as an antecedent pick a k such that k   j and Ck dominates an EE of type e if no such j or k exist, return no antecedent else bind the EE dominated by Ck to the antecedent Cj Figure 2: The antecedent recovery algorithm. ensure efficient parsing. PCFG probabilities are calculated in the standard way (Charniak, 1993). In order to keep the number of independently tunable parameters low, no smoothing is used. The parser is tested under two different conditions. First, to assess the upper bound an EEdetecting unlexicalized PCFG can achieve, the input of the parser contains the empty elements as separate words (PERFECT). Second, we let the parser introduce the EEs itself (INSERT). 3.2 Evaluation We evaluate on all sentences in the test section of the treebank. As our interest lies in trace detection and antecedent recovery, we adopt the evaluation measures introduced by Johnson (2002). 
An EE is correctly detected if our model gives it the correct label as well as the correct position (the words before and after it). When evaluating antecedent recovery, the EEs are regarded as four-tuples, consisting of the type of the EE, its location, the type of its antecedent and the location(s) (beginning and end) of the antecedent. An antecedent is correctly recovered if all four values match the gold standard. The precision, recall, and the combined F-score is presented for each experiment. Missed parses are ignored for evaluation purposes. 3.3 Results The main results for the two conditions are summarized in Table 2. In the INSERT case, the parser detects empty elements with precision 64.7%, recall 40.3% and F-Score 49.7%. It recovers antecedents Condition PERFECT INSERT Empty element detection (F-score) – 49 7% Antecedent recovery (F-score) 91 4% 43 0% Parsing time (sec/sent) 2 5 21 Missed parses 1 6% 44 3% Table 2: EE detection, antecedent recovery, parsing times, and missed parses for the parser with overall precision 55.7%, recall 35.0% and Fscore 43.0%. With a beam width of 1000, about half of the parses were missed, and successful parses take, on average, 21 seconds per sentence and enumerate 1.7 million edges. Increasing the beam size to 40000 decreases the number of missed parses marginally, while parsing time increases to nearly two minutes per sentence, with 2.9 million edges enumerated. In the PERFECT case, when the sites of the empty elements are known before parsing, only about 1.6% of the parses are missed and average parsing time goes down to 2 5 seconds per sentence. More importantly, the overall precision and recall of antecedent recovery is 91.4%. 3.4 Discussion The result of the experiment where the parser is to detect long-distance dependencies is negative. The parser misses too many parses, regardless of the beam size. This cannot be due to the lack of smoothing: the model with perfect information about the EE-sites does not run into the same problem. Hence, the edges necessary to construct the required parse are available but, in the INSERT case, the beam search loses them due to unwanted local edges having a higher probability. Doing an exhaustive search might help in principle, but it is infeasible in practice. Clearly, the problem is with the parsing model: an unlexicalized PCFG parser is not able to detect where EEs can occur, hence necessary edges get low probability and are, thus, filtered out. The most interesting result, though, is the difference in speed and in antecedent recovery accuracy between the parser that inserts traces, and the parser which uses perfect information from the treebank about the sites of EEs. Thus, the question wi  X; wi  1  X; wi  1  X X is a prefix of wi,  X  4 X is a suffix of wi,  X  4 wi contains a number wi contains uppercase character wi contains hyphen li  1  X posi  X; posi  1  X; posi  1  X posi  1posi  XY posi  2posi  1posi  XYZ posiposi  1  XY posiposi  1posi  2  XYZ Table 3: Local features at position i  1. naturally arises: could EEs be detected before parsing? The benefit would be two-fold: EEs might be found more reliably with a different module, and the parser would be fast and accurate in recovering antecedents. In the next section we show that it is indeed possible to detect EEs without explicit knowledge of phrase structure, using a simple finite-state tagger. 4 Detecting empty elements This section shows that EEs can be detected fairly reliably before parsing, i.e. 
without using phrase structure information. Specifically, we develop a finite-state tagger which inserts EEs at the appropriate sites. It is, however, unable to find the antecedents for the EEs; therefore, in the next section, we combine the tagger with the PCFG parser to recover the antecedents. 4.1 Method Detecting empty elements can be regarded as a simple tagging task: we tag words according to the existence and type of empty elements preceding them. For example, the word Sasha in the sentence Sam said COMP–SBAR Sasha snores. will get the tag EE=COMP–SBAR, whereas the word Sam is tagged with EE=* expressing the lack of an EE immediately preceding it. If a word is preceded by more than one EE, such as to in the following example, it is tagged with the concatenation of the two EEs, i.e., EE=COMP–WHNP PRO–NP. It would have been too late COMP–WHNP PRO–NP to think about on Friday. Target Matching regexp Explanation NP–NP BE RB* VBN passive NP–NP PRO-NP  RB* to RB* VB to-infinitive N [,:] RB* VBG gerund COMP–SBAR (V  ,) !that* (MD  V) lookahead for that WH–NP !IN    WP WDT COMP–WHNP    !WH–NP* V lookback for pending WHNPs WH–ADVP WRB !WH–ADVP* V !WH–ADVP* [.,:] lookback for pending WHADVP before a verb UNIT $ CD* $ sign before numbers Table 4: Non-local binary feature templates; the EE-site is indicated by Although this approach is closely related to POStagging, there are certain differences which make this task more difficult. Despite the smaller tagset, the data exhibits extreme sparseness: even though more than 50% of the sentences in the Penn Treebank contain some EEs, the actual number of EEs is very small. In Section 0 of the WSJ corpus, out of the 46451 tokens only 3056 are preceded by one or more EEs, that is, approximately 93.5% of the words are tagged with the EE=* tag. The other main difference is the apparently nonlocal nature of the problem, which motivates our choice of a Maximum Entropy (ME) model for the tagging task (Berger et al., 1996). ME allows the flexible combination of different sources of information, i.e., local and long-distance cues characterizing possible sites for EEs. In the ME framework, linguistic cues are represented by (binary-valued) features (fi), the relative importance (weight, λi) of which is determined by an iterative training algorithm. The weighted linear combination of the features amount to the log-probability of the label (l) given the context (c): p  l c 1 Z  c exp  ∑iλi fi  l c  (1) where Z  c is a context-dependent normalizing factor to ensure that p  l c be a proper probability distribution. We determine weights for the features with a modified version of the Generative Iterative Scaling algorithm (Curran and Clark, 2003). Templates for local features are similar to the ones employed by Ratnaparkhi (1996) for POS-tagging (Table 3), though as our input already includes POStags, we can make use of part-of-speech information as well. Long-distance features are simple handwritten regular expressions matching possible sites for EEs (Table 4). Features and labels occurring less than 10 times in the training corpus are ignored. Since our main aim is to show that finding empty elements can be done fairly accurately without using a parser, the input to the tagger is a POS-tagged corpus, containing no syntactic information. The best label-sequence is approximated by a bigram Viterbi-search algorithm, augmented with variable width beam-search. 4.2 Results The results of the EE-detection experiment are summarized in Table 5. 
4.2 Results

The results of the EE-detection experiment are summarized in Table 5. The overall unlabeled F-score is 85.3%, whereas the labeled F-score is 79.1%, which amounts to 97.9% word-level tagging accuracy. For straightforward comparison with Johnson's results, we must conflate the categories PRO–NP and NP–NP. If the trace detector does not need to differentiate between these two categories, a distinction that is indeed important for semantic analysis, the overall labeled F-score increases to 83.0%, which outperforms Johnson's approach by 4%.

EE            Prec. (here)   Rec. (here)   F-score (here)   F-score (Johnson)
LABELED       86.5%          72.9%         79.1%            –
UNLABELED     93.3%          78.6%         85.3%            –
NP–NP         87.8%          79.6%         83.5%            –
WH–NP         92.5%          75.6%         83.2%            81.0%
PRO–NP        68.7%          70.4%         69.5%            –
COMP–SBAR     93.8%          78.6%         85.5%            88.0%
UNIT          99.1%          92.5%         95.7%            92.0%
WH–S          94.4%          91.3%         92.8%            87.0%
WH–ADVP       81.6%          46.8%         59.5%            56.0%
CLAUSE        80.4%          68.3%         73.8%            70.0%
COMP–WHNP     67.2%          38.3%         48.8%            47.0%

Table 5: EE-detection results on Section 23 and comparison with Johnson (2002) (where applicable).

4.3 Discussion

The success of the trace detector is surprising, especially if compared to Johnson's algorithm, which uses the output of a parser. The tagger can reliably detect extraction sites without explicit knowledge of the phrase structure. This shows that, in English, extraction can only occur at well-defined sites, where local cues are generally strong. Indeed, the strength of the model lies in detecting such sites (empty units, UNIT; NP traces, NP–NP) or where clear-cut long-distance cues exist (WH–S, COMP–SBAR). The accuracy of detecting uncontrolled PROs (PRO–NP) is rather low, since it is a difficult task to tell them apart from NP traces: they are confused in 10–15% of the cases. Furthermore, the model is unable to capture for...to+INF constructions if the noun-phrase is long. The precision of detecting long-distance NP extraction (WH–NP) is also high, but recall is lower: in general, the model finds extracted NPs with overt complementizers. Detection of null WH-complementizers (COMP–WHNP), however, is fairly inaccurate (48.8% F-score), since finding it and the corresponding WH–NP requires information about the transitivity of the verb. The performance of the model is also low (59.5%) in detecting movement sites for extracted WH-adverbs (WH–ADVP), despite the presence of unambiguous cues (where, how, etc. starting the subordinate clause). The difficulty of the task lies in finding the correct verb-phrase, as well as the end of the verb-phrase the constituent is extracted from, without knowing phrase boundaries. One important limitation of the shallow approach described here is its inability to find the antecedents of the EEs, which clearly requires knowledge of phrase structure. In the next section, we show that the shallow trace detector and the unlexicalized PCFG parser can be coupled to efficiently and successfully tackle antecedent recovery.

5 Combining the models

In Section 3, we found that parsing with EEs is only feasible if the parser knows the location of EEs before parsing. In Section 4, we presented a finite-state tagger which detects these sites before parsing takes place. In this section, we validate the two-step approach by applying the parser to the output of the trace tagger, and comparing the antecedent recovery accuracy to Johnson (2002).

Condition                        NOINSERT   INSERT
Antecedent recovery (F-score)    72.6%      69.3%
Parsing time (sec/sent)          2.7        25
Missed parses                    2.4%       5.3%

Table 6: Antecedent recovery, parsing times, and missed parses for the combined model.
5.1 Method

Theoretically, the 'best' way to combine the trace tagger and the parsing algorithm would be to build a unified probabilistic model. However, the natures of the two models are quite different: the finite-state model is conditional, taking the words as given. The parsing model, on the other hand, is generative, treating the words as an unlikely event. There is a reasonable basis for building the probability models in different ways. Most of the tags emitted by the EE tagger are just EE=*, which would defeat generative models by making the 'hidden' state uninformative. Conditional parsing algorithms do exist, but they are difficult to train using large corpora (Johnson, 2001). However, we show that it is quite effective if the parser simply treats the output of the tagger as a certainty. Given this combination method, there still are two interesting variations: we may use only the EEs proposed by the tagger (henceforth the NOINSERT model), or we may allow the parser to insert even more EEs (henceforth the INSERT model). In both cases, EEs output by the tagger are treated as separate words, as in the PERFECT model of Section 3.

5.2 Results

The NOINSERT model did better at antecedent detection (see Table 6) than the INSERT model. The NOINSERT model was also faster, taking on average 2.7 seconds per sentence and enumerating about 160,000 edges, whereas the INSERT model took 25 seconds on average and enumerated 2 million edges. The coverage of the NOINSERT model was higher than that of the INSERT model, missing 2.4% of all parses versus 5.3% for the INSERT model. Comparing our results to Johnson (2002), we find that the NOINSERT model outperforms that of Johnson by 4.6% (see Table 7). The strength of this system lies in its ability to tell unbound PROs and bound NP–NP traces apart.

Type          Prec. (here)   Rec. (here)   F-score (here)   F-score (Johnson)
OVERALL       80.5%          66.0%         72.6%            68.0%
NP–NP         71.2%          62.8%         66.8%            60.0%
WH–NP         91.6%          71.9%         80.6%            80.0%
PRO–NP        68.7%          70.4%         69.5%            50.0%
COMP–SBAR     93.8%          78.6%         85.5%            88.0%
UNIT          99.1%          92.5%         95.7%            92.0%
WH–S          86.7%          83.9%         84.8%            87.0%
WH–ADVP       67.1%          31.3%         42.7%            56.0%
CLAUSE        80.4%          68.3%         73.8%            70.0%
COMP–WHNP     67.2%          38.8%         48.8%            47.0%

Table 7: Antecedent recovery results for the combined NOINSERT model and comparison with Johnson (2002).

5.3 Discussion

Combining the finite-state tagger with the parser seems to be invaluable for EE detection and antecedent recovery. Paradoxically, taking the combination to the extreme by allowing both the parser and the tagger to insert EEs performed worse. While the INSERT model here did have wider coverage than the parser in Section 3, it seems the real benefit of using the combined approach is to let the simple model reduce the search space of the more complicated parsing model. This search space reduction works because the shallow finite-state method takes information about adjacent words into account, whereas the context-free parser does not, since a phrase boundary might separate them.
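A minimal sketch of the NOINSERT-style combination described above: the tagger's EE tags are spliced into the token stream as separate 'words' before parsing, and the parser is trusted not to add further EEs. The tag format and the splicing helper are simplified stand-ins, not the system's actual interfaces.

```python
def insert_ees(tokens, ee_tags):
    """Splice tagger-predicted empty elements into the token stream as separate
    'words', so the parser can treat them like ordinary terminals (NOINSERT-style
    combination: the parser takes the tagger's output as a certainty).

    ee_tags[i] is the tag for tokens[i]: either 'EE=*' (no EE) or 'EE=' followed
    by a space-separated concatenation of EE labels preceding that word.
    """
    augmented = []
    for token, tag in zip(tokens, ee_tags):
        if tag != "EE=*":
            augmented.extend(tag[len("EE="):].split())   # e.g. 'COMP-WHNP PRO-NP'
        augmented.append(token)
    return augmented

# Example from Section 4.1, with the tagger output assumed correct:
tokens = ["Sam", "said", "Sasha", "snores", "."]
tags   = ["EE=*", "EE=*", "EE=COMP-SBAR", "EE=*", "EE=*"]
print(insert_ees(tokens, tags))
# ['Sam', 'said', 'COMP-SBAR', 'Sasha', 'snores', '.']
# The augmented sequence is then handed to the unlexicalized PCFG parser, which
# recovers antecedents but never proposes additional EE sites.
```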
6 Related Work

Excluding Johnson's (2002) pattern-matching algorithm, most recent work on finding head-dependencies with statistical parsers has used statistical versions of deep grammar formalisms, such as CCG (Clark et al., 2002) or LFG (Riezler et al., 2002). While these systems should, in theory, be able to handle discontinuities accurately, there has not yet been a study on how these systems handle such phenomena overall. The tagger presented here is not the first one proposed to recover syntactic information deeper than part-of-speech tags. For example, supertagging (Joshi and Bangalore, 1994) also aims to do more meaningful syntactic pre-processing. Unlike supertagging, our approach only focuses on detecting EEs. The idea of threading EEs to their antecedents in a stochastic parser was proposed by Collins (1997), following the GPSG tradition (Gazdar et al., 1985). However, we extend it to capture all types of EEs.

7 Conclusions

This paper has three main contributions. First, we show that gap+ features, encoding necessary information for antecedent recovery, do not incur any substantial computational overhead. Second, the paper demonstrates that a shallow finite-state model can be successful in detecting sites for discontinuity, a task which is generally understood to require deep syntactic and lexical-semantic knowledge. The results show that, at least in English, local clues for discontinuity are abundant. This opens up the possibility of employing shallow finite-state methods in novel situations to exploit non-apparent local information. Our final contribution, but the one we wish to emphasize the most, is that the combination of two orthogonal shallow models can be successful at solving tasks which are well beyond their individual power. The accent here is on orthogonality: the two models take different sources of information into account. The tagger makes good use of adjacency at the word level, but is unable to handle deeper recursive structures. A context-free grammar is better at finding vertical phrase structure, but cannot exploit linear information when words are separated by phrase boundaries. As a consequence, the finite-state method helps the parser by efficiently and reliably pruning the search-space of the more complicated PCFG model. The benefits are immediate: the parser is not only faster but more accurate in recovering antecedents. The real power of the finite-state model is that it uses information the parser cannot.

Acknowledgements

The authors would like to thank Jason Baldridge, Matthew Crocker, Geert-Jan Kruijff, Miles Osborne and the anonymous reviewers for many helpful comments.

References

Adam L. Berger, Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71.

Ann Bies, Mark Ferguson, Karen Katz, and Robert MacIntyre, 1995. Bracketing Guidelines for Treebank II style Penn Treebank Project. Linguistic Data Consortium.

Rens Bod. 2003. An efficient implementation of a new DOP model. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, Budapest.

Eugene Charniak. 1993. Statistical Language Learning. MIT Press, Cambridge, MA.

Eugene Charniak. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics, Seattle, WA.

Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building deep dependency structures with a wide-coverage CCG parser. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia.

Michael Collins, Jan Hajič, Lance Ramshaw, and Christoph Tillmann. 1999. A statistical parser for Czech. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, University of Maryland, College Park.

Michael Collins. 1997. Three generative, lexicalised models for statistical parsing.
In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and the 8th Conference of the European Chapter of the Association for Computational Linguistics, Madrid.

James R. Curran and Stephen Clark. 2003. Investigating GIS and smoothing for maximum entropy taggers. In Proceedings of the 11th Annual Meeting of the European Chapter of the Association for Computational Linguistics, Budapest, Hungary.

Péter Dienes and Amit Dubey. 2003. Antecedent recovery: Experiments with a trace tagger. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Sapporo, Japan.

Gerald Gazdar, Ewan Klein, Geoffrey Pullum, and Ivan Sag. 1985. Generalized Phrase Structure Grammar. Basil Blackwell, Oxford, England.

Joshua Goodman. 1998. Parsing inside-out. Ph.D. thesis, Harvard University.

Mark Johnson. 2001. Joint and conditional estimation of tagging and parsing models. In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics and the 10th Conference of the European Chapter of the Association for Computational Linguistics, Toulouse.

Mark Johnson. 2002. A simple pattern-matching algorithm for recovering empty nodes and their antecedents. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia.

Aravind K. Joshi and Srinivas Bangalore. 1994. Complexity of descriptives–supertag disambiguation or almost parsing. In Proceedings of the 1994 International Conference on Computational Linguistics (COLING-94), Kyoto, Japan.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.

Adwait Ratnaparkhi. 1996. A Maximum Entropy Part-of-Speech tagger. In Proceedings of the Empirical Methods in Natural Language Processing Conference. University of Pennsylvania.

Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, Philadelphia.
2003
55
[This corpus entry could not be recovered: the paper's text was extracted with a corrupted, embedded-font character encoding and is unreadable. The few legible fragments indicate a 2003 paper on statistical parsing of the Penn Chinese Treebank, comparing it with English Penn Treebank (WSJ) parsing and analyzing the major sources of statistical parse errors for Chinese.]
2003
56
Feedback Cleaning of Machine Translation Rules Using Automatic Evaluation

Kenji Imamura, Eiichiro Sumita, ATR Spoken Language Translation Research Laboratories, Seika-cho, Soraku-gun, Kyoto, Japan, {kenji.imamura,eiichiro.sumita}@atr.co.jp
Yuji Matsumoto, Nara Institute of Science and Technology, Ikoma-shi, Nara, Japan, [email protected]

Abstract

When rules of transfer-based machine translation (MT) are automatically acquired from bilingual corpora, incorrect/redundant rules are generated due to acquisition errors or translation variety in the corpora. As a new countermeasure to this problem, we propose a feedback cleaning method using automatic evaluation of MT quality, which removes incorrect/redundant rules as a way to increase the evaluation score. BLEU is utilized for the automatic evaluation. The hill-climbing algorithm, which involves features of this task, is applied to searching for the optimal combination of rules. Our experiments show that the MT quality improves by 10% in test sentences according to a subjective evaluation. This is a considerable improvement over previous methods.

1 Introduction

Along with the efforts made in accumulating bilingual corpora for many language pairs, quite a few machine translation (MT) systems that automatically acquire their knowledge from corpora have been proposed. However, knowledge for transfer-based MT acquired from corpora contains many incorrect/redundant rules due to acquisition errors or translation variety in the corpora. Such rules conflict with other existing rules and cause implausible MT results or increase ambiguity. If incorrect rules could be avoided, MT quality would necessarily improve. There are two approaches to overcoming incorrect/redundant rules:
• Selecting appropriate rules in a disambiguation process during the translation (on-line processing; Meyers et al., 2000).
• Cleaning incorrect/redundant rules after automatic acquisition (off-line processing; Menezes and Richardson, 2001; Imamura, 2002).
We employ the second approach in this paper. The cutoff by frequency (Menezes and Richardson, 2001) and the hypothesis test (Imamura, 2002) have been applied to clean the rules. The cutoff by frequency can slightly improve MT quality, but the improvement is still insufficient from the viewpoint of the large number of redundant rules. The hypothesis test requires very large corpora in order to obtain a sufficient number of rules that are statistically confident. Another current topic of machine translation is automatic evaluation of MT quality (Papineni et al., 2002; Yasuda et al., 2001; Akiba et al., 2001). These methods aim to replace subjective evaluation in order to speed up the development cycle of MT systems. However, they can be utilized not only as developers' aids but also for automatic tuning of MT systems (Su et al., 1992). We propose feedback cleaning, which utilizes an automatic evaluation for removing incorrect/redundant translation rules as a tuning method (Figure 1).

Figure 1: Structure of Feedback Cleaning (diagram; labeled components: Training Corpus, Automatic Acquisition, Translation Rules, Evaluation Corpus, MT Engine, Automatic Evaluation, MT Results, Rule Selection/Deletion, Feedback Cleaning)
Our method only evaluates MT results and does not consider various conditions of the MT engine, such as parameters, interference in dictionaries, disambiguation methods, and so on. Even if an MT engine avoids incorrect/redundant rules by on-line processing, errors inevitably remain. Our method cleans the rules in advance by only focusing on the remaining errors. Thus, our method complements on-line processing and adapts translation rules to the given conditions of the MT engine. 2 MT System and Problems of Automatic Acquisition 2.1 MT Engine We use the Hierarchical Phrase Alignment-based Translator (HPAT) (Imamura, 2002) as a transferbased MT system. The most important knowledge in HPAT is transfer rules, which define the correspondences between source and target language expressions. An example of English-to-Japanese transfer rules is shown in Figure 2. The transfer rules are regarded as a synchronized context-free grammar. When the system translates an input sentence, the sentence is first parsed by using source patterns of the transfer rules. Next, a tree structure of the target language is generated by mapping the source patterns to the corresponding target patterns. When non-terminal symbols remain in the target tree, target words are inserted by referring to a translation dictionary. Ambiguities, which occur during parsing or mapping, are resolved by selecting the rules that minimize the semantic distance between the input words and source examples (real examples in the training corpus) of the transfer rules (Furuse and Iida, 1994). For instance, when the input phrase “leave at 11 a.m.” is translated into Japanese, Rule 2 in Figure 2 is selected because the semantic distance from the source example (arrive, p.m.) is the shortest to the head words of the input phrase (leave, a.m.). 2.2 Problems of Automatic Acquisition HPAT automatically acquires its transfer rules from parallel corpora by using Hierarchical Phrase Alignment (Imamura, 2001). However, the rule set contains many incorrect/redundant rules. The reasons for this problem are roughly classified as follows. • Errors in automatic rule acquisition • Translation variety in corpora – The acquisition process cannot generalize the rules because bilingual sentences depend on the context or the situation. – Corpora contain multiple (paraphrasable) translations of the same source expression. In the experiment of Imamura (2002), about 92,000 transfer rules were acquired from about 120,000 bilingual sentences 1. Most of these rules are low-frequency. They reported that MT quality slightly improved, even though the low-frequency rules were removed to a level of about 1/9 the previous number. However, since some of them, such as idiomatic rules, are necessary for translation, MT quality cannot be dramatically improved by only removing low-frequency rules. 3 Automatic Evaluation of MT Quality We utilize BLEU (Papineni et al., 2002) for the automatic evaluation of MT quality in this paper. BLEU measures the similarity between MT results and translation results made by humans (called 1In this paper, the number of rules denotes the number of unique pairs of source patterns and target patterns. Rule No. Syn. Cat. Source Pattern Target Pattern Source Example 1 VP XVP at YNP ⇒ Y’ de X’ ((present, conference) ...) 2 VP XVP at YNP ⇒ Y’ ni X’ ((stay, hotel), (arrive, p.m) ...) 3 VP XVP at YNP ⇒ Y’ wo X’ ((look, it) ...) 4 NP XNP at YNP ⇒ Y’ no X’ ((man, front desk) ...) Figure 2: Example of HPAT Transfer Rules references). 
This similarity is measured by N-gram precision scores. Several kinds of N-grams can be used in BLEU. We use from 1-gram to 4-gram in this paper, where a 1-gram precision score indicates the adequacy of word translation and longer N-gram (e.g., 4-gram) precision scores indicate fluency of sentence translation. The BLEU score is calculated from the product of N-gram precision scores, so this measure combines adequacy and fluency. Note that a sizeable set of MT results is necessary in order to calculate an accurate BLEU score. Although it is possible to calculate the BLEU score of a single MT result, it contains errors from the subjective evaluation. BLEU cancels out individual errors by summing the similarities of MT results. Therefore, we need all of the MT results from the evaluation corpus in order to calculate an accurate BLEU score. One feature of BLEU is its use of multiple references for a single source sentence. However, one reference per sentence is used in this paper because an already existing bilingual corpus is applied to the cleaning. 4 Feedback Cleaning In this section, we introduce the proposed method, called feedback cleaning. This method is carried out by selecting or removing translation rules to increase the BLEU score of the evaluation corpus (Figure 1). Thus, this task is regarded as a combinatorial optimization problem of translation rules. The hillclimbing algorithm, which involves the features of this task, is applied to the optimization. The following sections describe the reasons for using this method and its procedure. The hill-climbing algorithm often falls into locally optimal solutions. However, we believe that a locally optimal solution is more effective in improving MT quality than the previous methods. 4.1 Costs of Combinatorial Optimization Most combinatorial optimization methods iterate changes in the combination and the evaluation. In the machine translation task, the evaluation process requires the longest time. For example, in order to calculate the BLEU score of a combination (solution), we have to translate C times, where C denotes the size of the evaluation corpus. Furthermore, in order to find the nearest neighbor solution, we have to calculate all BLEU scores of the neighborhood. If the number of rules is R and the neighborhood is regarded as consisting of combinations made by changing only one rule, we have to translate C × R times to find the nearest neighbor solution. Assume that C = 10, 000 and R = 100, 000, the number of sentence translations (sentences to be translated) becomes one billion. It is infeasible to search for the optimal solution without reducing the number of sentence translations. A feature of this task is that removing rules is easier than adding rules. The rules used for translating a sentence can be identified during the translation. Conversely, the source sentence set S[r], where a rule r is used for the translation, is determined once the evaluation corpus is translated. When r is removed, only the MT results of S[r] will change, so we do not need to re-translate other sentences. Assuming that five rules on average are applied to translate a sentence, the number of sentence translations becomes 5 × C + C = 60, 000 for testing all rules. On the contrary, to add a rule, the entire corpus must be re-translated because it is unknown which MT results will change by adding a rule. 
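To make the scoring concrete before turning to the cleaning procedure, the corpus-level BLEU computation described in Section 3 can be sketched as follows. This is a simplified illustration rather than the reference implementation: it assumes one reference per sentence and uniform n-gram weights, and folds the brevity penalty in at the corpus level.

import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def corpus_bleu(candidates, references, max_n=4):
    # Simplified corpus-level BLEU: clipped n-gram precisions (n = 1..4),
    # geometric mean, brevity penalty; one reference per candidate sentence.
    matches = [0] * max_n
    totals = [0] * max_n
    cand_len = ref_len = 0
    for cand, ref in zip(candidates, references):
        cand_len += len(cand)
        ref_len += len(ref)
        for n in range(1, max_n + 1):
            cand_counts = Counter(ngrams(cand, n))
            ref_counts = Counter(ngrams(ref, n))
            matches[n - 1] += sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
            totals[n - 1] += sum(cand_counts.values())
    log_prec = 0.0
    for m, t in zip(matches, totals):
        if m == 0 or t == 0:
            return 0.0
        log_prec += math.log(m / t) / max_n
    bp = 1.0 if cand_len > ref_len else math.exp(1.0 - ref_len / max(cand_len, 1))
    return bp * math.exp(log_prec)

In the cleaning experiments below, a scorer of this kind is what the hill-climbing search calls after every tentative rule removal.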
4.2 Cleaning Procedure Based on the above discussion, we utilize the hillclimbing algorithm, in which the initial solution contains all rules (called the base rule set) and the search for a combination is done by only removing static: Ceval, an evaluation corpus Rbase, a rule set acquired from the entire training corpus (the base rule set) R, a current rule set, a subset of the base rule set S[r], a source sentence set where the rule r is used for the translation Dociter, an MT result set of the evaluation corpus translated with the current rule set procedure CLEAN-RULESET () R ←Rbase repeat Riter ←R Rremove ←∅ scoreiter ←SET-TRANSLATION() for each r in Riter do if S[r] ̸= ∅then R ←Riter −{r} translate all sentences in S[r], and obtain the MT results T[r] Doc[r] ←the MT result set that T[r] is replaced from Dociter the rule contribution contrib[r] ←scoreiter −BLEU-SCORE(Doc[r]) if contrib[r] < 0 then add r to Rremove end R ←Riter −Rremove until Rremove = ∅ function SET-TRANSLATION () returns a BLEU score of the evaluation corpus translated with R Dociter ←∅ for each r in Rbase do S[r] ←∅end for each s in Ceval do translate s and obtain the MT result t obtain the rule set R[s] that is used for translating s for each r in R[s] do add s to S[r] end add t to Dociter end return BLEU-SCORE(Dociter) Figure 3: Feedback Cleaning Algorithm rules. The algorithm is shown in Figure 3. This algorithm can be summarized as follows. • Translate the evaluation corpus first and then obtain the rules used for the translation and the BLEU score before removing rules. • For each rule one-by-one, calculate the BLEU score after removing the rule and obtain the difference between this score and the score before the rule was removed. This difference is called the rule contribution. • If the rule contribution is negative (i.e., the BLUE score increases after removing the rule), remove the rule. In order to achieve faster convergence, this algorithm removes all rules whose rule contribution is negative in one iteration. This assumes that the removed rules are independent from one another. 5 N-fold Cross-cleaning In general, most evaluation corpora are smaller than training corpora. Therefore, omissions of cleaning Training Corpus Training Evaluation Training Evaluation Training Evaluation Training Base Rule Set Rule Subset 1 Rule Subset 2 Rule Subset 3 Feedback Cleaning Feedback Cleaning Feedback Cleaning Rule Deletion Rule Contributions Cleaned Rule Set Divide Figure 4: Structure of Cross-cleaning (In the case of three-fold cross-cleaning) will remain because not all rules can be tested by the evaluation corpus. In order to avoid this problem, we propose an advanced method called cross-cleaning (Figure 4), which is similar to cross-validation. The procedure of cross-cleaning is as follows. 1. First, create the base rule set from the entire training corpus. 2. Next, divide the training corpus into N pieces uniformly. 3. Leave one piece for the evaluation, acquire rules from the rest (N −1) of the pieces, and repeat them N times. Thus, we obtain N pairs of rule set and evaluation sub-corpus. Each rule set is a subset of the base rule set. 4. Apply the feedback cleaning algorithm to each of the N pairs and record the rule contributions even if the rules are removed. The purpose of this step is to obtain the rule contributions. 5. For each rule in the base rule set, sum up the rule contributions obtained from the rule subsets. If the sum is negative, remove the rule from the base rule set. 
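Step 4 above applies the base procedure of Figure 3 to each of the N rule subsets. That procedure translates fairly directly into code; the sketch below assumes a translate(sentence, rules) function that returns an MT result together with the rules it used, and a corpus-level scorer such as the corpus_bleu sketch above. Neither of these is part of the actual HPAT interface.

def clean_ruleset(eval_corpus, references, base_rules, translate, score):
    # Hill-climbing feedback cleaning after Figure 3: repeatedly remove all
    # rules whose contribution to the corpus-level score is negative.
    rules = set(base_rules)
    while True:
        outputs = {}
        used_in = {r: set() for r in rules}   # S[r]: sentences where rule r fired
        for i, src in enumerate(eval_corpus):
            outputs[i], used_rules = translate(src, rules)   # assumed interface
            for r in used_rules:
                used_in[r].add(i)
        score_iter = score([outputs[i] for i in range(len(eval_corpus))], references)

        removable = set()
        for r in list(rules):
            if not used_in[r]:
                continue                      # rule never fired; nothing to test
            trial_rules = rules - {r}
            trial_outputs = dict(outputs)
            for i in used_in[r]:              # only re-translate affected sentences
                trial_outputs[i], _ = translate(eval_corpus[i], trial_rules)
            trial_score = score([trial_outputs[i] for i in range(len(eval_corpus))],
                                references)
            contribution = score_iter - trial_score
            if contribution < 0:              # score rises when r is removed
                removable.add(r)
        if not removable:
            return rules                      # converged
        rules -= removable

For cross-cleaning, the only change needed is to record each contribution value per rule and, after running the loop over all N folds, remove from the base rule set every rule whose summed contribution is negative (Step 5).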
The major difference of this method from crossvalidation is Step 5. In the case of cross-cleaning, Set Name Feature English Japanese Training # of Sentences 149,882 Corpus # of Words 868,087 984,197 Evaluation # of Sentences 10,145 Corpus # of Words 59,533 67,554 Test # of Sentences 10,150 Corpus # of Words 59,232 67,193 Table 1: Corpus Size the rule subsets cannot be directly merged because some rules have already been removed in Step 4. Therefore, we only obtain the rule contributions from the rule subsets and sum them up. The summed contribution is an approximate value of the rule contribution to the entire training corpus. Crosscleaning removes the rules from the base rule set based on this approximate contribution. Cross-cleaning uses all sentences in the training corpus, so it is nearly equivalent to applying a large evaluation corpus to feedback cleaning, even though it does not require specific evaluation corpora. 6 Evaluation In this section, the effects of feedback cleaning are evaluated by using English-to-Japanese translation. 6.1 Experimental Settings Bilingual Corpora The corpus used in the following experiments is the Basic Travel Expression Corpus (Takezawa et al., 2002). This is a collection of Japanese sentences and their English translations based on expressions that are usually found in phrasebooks for foreign tourists. We divided it into sub-corpora for training, evaluation, and test as shown in Table 1. The number of rules acquired from the training corpus (the base rule set size) was 105,588. Evaluation Methods of MT Quality We used the following two methods to evaluate MT quality. 1. Test Corpus BLEU Score The BLUE score was calculated with the test corpus. The number of references was one for each sentence, in the same way used for the feedback cleaning. 0.22 0.24 0.26 0.28 0.3 0.32 0 1 2 3 4 5 6 7 8 9 80k 90k 100k 110k 120k BLEU Score Number of Rules Number of Iterations Test Corpus BLEU Score Evaluation Corpus BLEU Score Number of Rules Figure 5: Relationship between Number of Iterations and BLEU Scores/Number of Rules 2. Subjective Quality A total of 510 sentences from the test corpus were evaluated by paired comparison. Specifically, the source sentences were translated using the base rule set, and the same sources were translated using the rules after the cleaning. One-by-one, a Japanese native speaker judged which MT result was better or that they were of the same quality. Subjective quality is represented by the following equation, where I denotes the number of improved sentences and D denotes the number of degraded sentences. Subj. Quality = I −D # of test sentences (1) 6.2 Feedback Cleaning Using Evaluation Corpus In order to observe the characteristics of feedback cleaning, cleaning of the base rule set was carried out by using the evaluation corpus. The results are shown in Figure 5. This graph shows changes in the test corpus BLEU score, the evaluation corpus BLEU score, and the number of rules along with the number of iterations. Consequently, the removed rules converged at nine iterations, and 6,220 rules were removed. The evaluation corpus BLEU score was improved by increasing the number of iterations, demonstrating that the combinatorial optimization by the hill-climbing algorithm worked effectively. The test corpus BLEU score reached a peak score of 0.245 at the second iteration and slightly decreased after the third iteration due to overfitting. However, the final score was 0.244, which is almost the same as the peak score. 
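Equation 1 is a simple signed proportion and is easy to check against the figures reported for cross-cleaning in Table 2 below (100 improved and 49 degraded sentences out of 510 judged gives +10.0%):

def subjective_quality(improved, degraded, num_test_sentences):
    # Equation 1: net proportion of test sentences judged better after cleaning.
    return (improved - degraded) / num_test_sentences

# Cross-cleaning column of Table 2: 100 improved, 49 degraded, 510 sentences judged.
print(f"{subjective_quality(100, 49, 510):+.1%}")   # prints +10.0%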
The test corpus BLEU score was lower than the evaluation corpus BLEU score because the rules used in the test corpus were not exhaustively checked by the evaluation corpus. If the evaluation corpus size could be expanded, the test corpus score would improve. About 37,000 sentences were translated on average in each iteration. This means that the time for an iteration is estimated at about ten hours if translation speed is one second per sentence. This is a short enough time for us because our method does not require real-time processing. 2 6.3 MT Quality vs. Cleaning Methods Next, in order to compare the proposed methods with the previous methods, the MT quality achieved by each of the following five methods was measured. 1. Baseline The MT results using the base rule set. 2. Cutoff by Frequency Low-frequency rules that appeared in the training corpus less often than twice were removed from the base rule set. This threshold was experimentally determined by the test corpus BLEU score. 3. χ2 Test The χ2 test was performed in the same manner as in Imamura (2002)’s experiment. We introduced rules with more than 95 percent confidence (χ2 ≥3.841). 4. Simple Feedback Cleaning Feedback cleaning was carried out using the evaluation corpus in Table 1. 5. Cross-cleaning N-fold cross-cleaning was carried out. We applied five-fold cross-cleaning in this experiment. The results are shown in Table 2. This table shows that the test corpus BLEU score and the subjective 2In this experiment, it took about 80 hours until convergence using a Pentium 4 2-GHz computer. Previous Methods Proposed Methods Baseline Cutoff by Freq. χ2 Test Simple FC Cross-cleaning # of Rules 105,588 26,053 1,499 99,368 82,462 Test Corpus BLEU Score 0.232 0.234 0.157 0.244 0.277 Subjective Quality +1.77% -6.67% +6.67% +10.0% # of Improved Sentences 83 115 83 100 # of Same Quality 353 246 378 361 (Same Results) (257) (114) (266) (234) # of Degraded Sentences 74 149 49 49 Table 2: MT Quality vs. Cleaning Methods quality of the proposed methods (simple feedback cleaning and cross-cleaning) are considerably improved over those of the previous methods. Focusing on the subjective quality of the proposed methods, some MT results were degraded from the baseline due to the removal of rules. However, the subjective quality levels were relatively improved because our methods aim to increase the portion of correct MT results. Focusing on the number of the rules, the rule set of the simple feedback cleaning is clearly a locally optimal solution, since the number of rules is more than that of cross-cleaning, although the BLEU score is lower. In comparing the number of rules in cross-cleaning with that in the cutoff by frequency, the former is three times higher than the latter. We assume that the solution of cross-cleaning is also the locally optimal solution. If we could find the globally optimal solution, the MT quality would certainly improve further. 7 Discussion 7.1 Other Automatic Evaluation Methods The idea of feedback cleaning is independent of BLEU. Some automatic evaluation methods of MT quality other than BLEU have been proposed. For example, Su et al. (1992), Yasuda et al. (2001), and Akiba et al. (2001) measure similarity between MT results and the references by DP matching (edit distances) and then output the evaluation scores. These automatic evaluation methods that output scores are applicable to feedback cleaning. 
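As a rough illustration of such a DP-based alternative (not the exact formulations of Su et al., Yasuda et al., or Akiba et al.), a normalized word edit distance can be turned into a corpus score and substituted for BLEU in the cleaning loop:

def edit_distance(hyp, ref):
    # Word-level Levenshtein distance by dynamic programming.
    prev = list(range(len(ref) + 1))
    for i in range(1, len(hyp) + 1):
        curr = [i] + [0] * len(ref)
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution or match
        prev = curr
    return prev[len(ref)]

def dp_similarity(hyps, refs):
    # Corpus score in [0, 1]: average of 1 minus the length-normalized edit distance.
    scores = [max(0.0, 1.0 - edit_distance(h, r) / max(len(r), 1))
              for h, r in zip(hyps, refs)]
    return sum(scores) / len(scores)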
The characteristics common to these methods, including BLEU, is that the similarity to references are measured for each sentence, and the evaluation score of an MT system is calculated by aggregating the similarities. Therefore, MT results of the evaluation corpus are necessary to evaluate the system, and reducing the number of sentence translations is an important technique for all of these methods. The effects of feedback cleaning depend on the characteristics of objective measures. DP-based measures and BLEU have different characteristics (Yasuda et al., 2003). The exploration of several measures for feedback cleaning remains an interesting future work. 7.2 Domain Adaptation When applying corpus-based machine translation to a different domain, bilingual corpora of the new domain are necessary. However, the sizes of the new corpora are generally smaller than that of the original corpus because the collection of bilingual sentences requires a high cost. The feedback cleaning proposed in this paper can be interpreted as adapting the translation rules so that the MT results become similar to the evaluation corpus. Therefore, if we regard the bilingual corpus of the new domain as the evaluation corpus and carry out feedback cleaning, the rule set will be adapted to the new domain. In other words, our method can be applied to adaptation of an MT system by using a smaller corpus of the new domain. 8 Conclusions In this paper, we proposed a feedback cleaning method that utilizes automatic evaluation to remove incorrect/redundant translation rules. BLEU was utilized for the automatic evaluation of MT quality, and the hill-climbing algorithm was applied to searching for the combinatorial optimization. Utilizing features of this task, incorrect/redundant rules were removed from the initial solution, which contains all rules acquired from the training corpus. In addition, we proposed N-fold cross-cleaning to reduce the influence of the evaluation corpus size. Our experiments show that the MT quality was improved by 10% in paired comparison and by 0.045 in the BLEU score. This is considerable improvement over the previous methods. Acknowledgment The research reported here is supported in part by a contract with the Telecommunications Advancement Organization of Japan entitled, “A study of speech dialogue translation technology based on a large corpus.” References Yasuhiro Akiba, Kenji Imamura, and Eiichiro Sumita. 2001. Using multiple edit distances to automatically rank machine translation output. In Proceedings of Machine Translation Summit VIII, pages 15–20. Osamu Furuse and Hitoshi Iida. 1994. Constituent boundary parsing for example-based machine translation. In Proceedings of COLING-94, pages 105–111. Kenji Imamura. 2001. Hierarchical phrase alignment harmonized with parsing. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium (NLPRS 2001), pages 377–384. Kenji Imamura. 2002. Application of translation knowledge acquired by hierarchical phrase alignment for pattern-based MT. In Proceedings of the 9th Conference on Theoretical and Methodological Issues in Machine Translation (TMI-2002), pages 74–84. Arul Menezes and Stephen D. Richardson. 2001. A best first alignment algorithm for automatic extraction of transfer mappings from bilingual corpora. In Proceedings of the ‘Workshop on Example-Based Machine Translation’ in MT Summit VIII, pages 35–42. Adam Meyers, Michiko Kosaka, and Ralph Grishman. 2000. Chart-based translation rule application in machine translation. 
In Proceedings of COLING-2000, pages 537–543. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318. Keh-Yih Su, Ming-Wen Wu, and Jing-Shin Chang. 1992. A new quantitative quality measure for machine translation systems. In Proceedings of COLING-92, pages 433–439. Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. 2002. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proceedings of the Third International Conference on Language Resources and Evaluation (LREC 2002), pages 147–152. Keiji Yasuda, Fumiaki Sugaya, Toshiyuki Takezawa, Seiichi Yamamoto, and Masuzo Yanagida. 2001. An automatic evaluation method of translation quality using translation answer candidates queried from a parallel corpus. In Proceedings of Machine Translation Summit VIII, pages 373–378. Keiji Yasuda, Fumiaki Sugaya, Toshiyuki Takezawa, Seiichi Yamamoto, and Masuzo Yanagida. 2003. Applications of automatic evaluation methods to measuring a capability of speech translation system. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2003), pages 371–378.
2003
57
Exploiting Parallel Texts for Word Sense Disambiguation: An Empirical Study Hwee Tou Ng Bin Wang Yee Seng Chan Department of Computer Science National University of Singapore 3 Science Drive 2, Singapore 117543 {nght, wangbin, chanys}@comp.nus.edu.sg Abstract A central problem of word sense disambiguation (WSD) is the lack of manually sense-tagged data required for supervised learning. In this paper, we evaluate an approach to automatically acquire sensetagged training data from English-Chinese parallel corpora, which are then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task. Our investigation reveals that this method of acquiring sense-tagged data is promising. On a subset of the most difficult SENSEVAL-2 nouns, the accuracy difference between the two approaches is only 14.0%, and the difference could narrow further to 6.5% if we disregard the advantage that manually sense-tagged data have in their sense coverage. Our analysis also highlights the importance of the issue of domain dependence in evaluating WSD programs. 1 Introduction The task of word sense disambiguation (WSD) is to determine the correct meaning, or sense of a word in context. It is a fundamental problem in natural language processing (NLP), and the ability to disambiguate word sense accurately is important for applications like machine translation, information retrieval, etc. Corpus-based, supervised machine learning methods have been used to tackle the WSD task, just like the other NLP tasks. Among the various approaches to WSD, the supervised learning approach is the most successful to date. In this approach, we first collect a corpus in which each occurrence of an ambiguous word w has been manually annotated with the correct sense, according to some existing sense inventory in a dictionary. This annotated corpus then serves as the training material for a learning algorithm. After training, a model is automatically learned and it is used to assign the correct sense to any previously unseen occurrence of w in a new context. While the supervised learning approach has been successful, it has the drawback of requiring manually sense-tagged data. This problem is particular severe for WSD, since sense-tagged data must be collected separately for each word in a language. One source to look for potential training data for WSD is parallel texts, as proposed by Resnik and Yarowsky (1997). Given a word-aligned parallel corpus, the different translations in a target language serve as the “sense-tags” of an ambiguous word in the source language. For example, some possible Chinese translations of the English noun channel are listed in Table 1. To illustrate, if the sense of an occurrence of the noun channel is “a path over which electrical signals can pass”, then this occurrence can be translated as “频道” in Chinese. 
WordNet 1.7 sense id Lumped sense id Chinese translations WordNet 1.7 English sense descriptions 1 1 频道 A path over which electrical signals can pass 2 2 水道 水渠 排水渠 A passage for water 3 3 沟 A long narrow furrow 4 4 海峡 A relatively narrow body of water 5 5 途径 A means of communication or access 6 6 导管 A bodily passage or tube 7 1 频道 A television station and its programs Table 1: WordNet 1.7 English sense descriptions, the actual lumped senses, and Chinese translations of the noun channel used in our implemented approach Parallel corpora Size of English texts (in million words (MB)) Size of Chinese texts (in million characters (MB)) Hong Kong News 5.9 (39.4) 10.7 (22.8) Hong Kong Laws 7.0 (39.8) 14.0 (22.6) Hong Kong Hansards 11.9 (54.2) 18.0 (32.4) English translation of Chinese Treebank 0.1 (0.7) 0.2 (0.4) Xinhua News 3.6 (22.9) 7.0 (17.0) Sinorama 3.2 (19.8) 5.3 (10.2) Total 31.7 (176.8) 55.2 (105.4) Table 2: Size of English-Chinese parallel corpora This approach of getting sense-tagged corpus also addresses two related issues in WSD. Firstly, what constitutes a valid sense distinction carries much subjectivity. Different dictionaries define a different sense inventory. By tying sense distinction to the different translations in a target language, this introduces a “data-oriented” view to sense distinction and serves to add an element of objectivity to sense definition. Secondly, WSD has been criticized as addressing an isolated problem without being grounded to any real application. By defining sense distinction in terms of different target translations, the outcome of word sense disambiguation of a source language word is the selection of a target word, which directly corresponds to word selection in machine translation. While this use of parallel corpus for word sense disambiguation seems appealing, several practical issues arise in its implementation: (i) What is the size of the parallel corpus needed in order for this approach to be able to disambiguate a source language word accurately? (ii) While we can obtain large parallel corpora in the long run, to have them manually wordaligned would be too time-consuming and would defeat the original purpose of getting a sensetagged corpus without manual annotation. However, are current word alignment algorithms accurate enough for our purpose? (iii) Ultimately, using a state-of-the-art supervised WSD program, what is its disambiguation accuracy when it is trained on a “sense-tagged” corpus obtained via parallel text alignment, compared with training on a manually sense-tagged corpus? Much research remains to be done to investigate all of the above issues. The lack of large-scale parallel corpora no doubt has impeded progress in this direction, although attempts have been made to mine parallel corpora from the Web (Resnik, 1999). However, large-scale, good-quality parallel corpora have recently become available. For example, six English-Chinese parallel corpora are now available from Linguistic Data Consortium. These parallel corpora are listed in Table 2, with a combined size of 280 MB. In this paper, we address the above issues and report our findings, exploiting the English-Chinese parallel corpora in Table 2 for word sense disambiguation. We evaluated our approach on all the nouns in the English lexical sample task of SENSEVAL-2 (Edmonds and Cotton, 2001; Kilgarriff 2001), which used the WordNet 1.7 sense inventory (Miller, 1990). 
While our approach has only been tested on English and Chinese, it is completely general and applicable to other language pairs. 2 2.1 2.2 2.3 2.4 Approach Our approach of exploiting parallel texts for word sense disambiguation consists of four steps: (1) parallel text alignment (2) manual selection of target translations (3) training of WSD classifier (4) WSD of words in new contexts. Parallel Text Alignment In this step, parallel texts are first sentence-aligned and then word-aligned. Various alignment algorithms (Melamed 2001; Och and Ney 2000) have been developed in the past. For the six bilingual corpora that we used, they already come with sentences pre-aligned, either manually when the corpora were prepared or automatically by sentencealignment programs. After sentence alignment, the English texts are tokenized so that a punctuation symbol is separated from its preceding word. For the Chinese texts, we performed word segmentation, so that Chinese characters are segmented into words. The resulting parallel texts are then input to the GIZA++ software (Och and Ney 2000) for word alignment. In the output of GIZA++, each English word token is aligned to some Chinese word token. The alignment result contains much noise, especially for words with low frequency counts. Manual Selection of Target Translations In this step, we will decide on the sense classes of an English word w that are relevant to translating w into Chinese. We will illustrate with the noun channel, which is one of the nouns evaluated in the English lexical sample task of SENSEVAL-2. We rely on two sources to decide on the sense classes of w: (i) The sense definitions in WordNet 1.7, which lists seven senses for the noun channel. Two senses are lumped together if they are translated in the same way in Chinese. For example, sense 1 and 7 of channel are both translated as “频道” in Chinese, so these two senses are lumped together. (ii) From the word alignment output of GIZA++, we select those occurrences of the noun channel which have been aligned to one of the Chinese translations chosen (as listed in Table 1). These occurrences of the noun channel in the English side of the parallel texts are considered to have been disambiguated and “sense-tagged” by the appropriate Chinese translations. Each such occurrence of channel together with the 3-sentence context in English surrounding channel then forms a training example for a supervised WSD program in the next step. The average time taken to perform manual selection of target translations for one SENSEVAL-2 English noun is less than 15 minutes. This is a relatively short time, especially when compared to the effort that we would otherwise need to spend to perform manual sense-tagging of training examples. This step could also be potentially automated if we have a suitable bilingual translation lexicon. Training of WSD Classifier Much research has been done on the best supervised learning approach for WSD (Florian and Yarowsky, 2002; Lee and Ng, 2002; Mihalcea and Moldovan, 2001; Yarowsky et al., 2001). In this paper, we used the WSD program reported in (Lee and Ng, 2002). In particular, our method made use of the knowledge sources of part-of-speech, surrounding words, and local collocations. We used naïve Bayes as the learning algorithm. Our previous research demonstrated that such an approach leads to a state-of-the-art WSD program with good performance. 
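Steps (1)-(3) can be sketched as follows for the noun channel. The translation-to-sense table mirrors Table 1 (with WordNet senses 1 and 7 lumped); the word-alignment data structure and the bag-of-words context features are simplifying assumptions, since the actual system reads GIZA++ output and uses the richer feature set of Lee and Ng (2002).

import math
from collections import Counter, defaultdict

# Step 2: manually selected Chinese translations per lumped sense of "channel" (Table 1).
TRANSLATION_TO_SENSE = {"频道": 1, "水道": 2, "水渠": 2, "排水渠": 2,
                        "沟": 3, "海峡": 4, "途径": 5, "导管": 6}

def harvest_examples(aligned_sentences, target="channel"):
    # aligned_sentences: list of (english_tokens, {english_index: aligned_chinese_word});
    # this layout is an assumption about how the alignment output has been post-processed.
    examples = []
    for en_tokens, alignment in aligned_sentences:
        for i, tok in enumerate(en_tokens):
            if tok.lower() == target and alignment.get(i) in TRANSLATION_TO_SENSE:
                sense = TRANSLATION_TO_SENSE[alignment[i]]
                examples.append((en_tokens, sense))   # 3-sentence context in the real setup
    return examples

def train_naive_bayes(examples):
    # Step 3: multinomial naive Bayes over surrounding words, with add-one smoothing.
    sense_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for context, sense in examples:
        sense_counts[sense] += 1
        word_counts[sense].update(w.lower() for w in context)
        vocab.update(w.lower() for w in context)
    total_examples = sum(sense_counts.values())
    def classify(context):
        best, best_lp = None, float("-inf")
        for sense, n in sense_counts.items():
            lp = math.log(n / total_examples)
            denom = sum(word_counts[sense].values()) + len(vocab)
            for w in (w.lower() for w in context):
                lp += math.log((word_counts[sense][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = sense, lp
        return best
    return classify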
WSD of Words in New Contexts Given an occurrence of w in a new context, we then used the naïve Bayes classifier to determine the most probable sense of w. noun No. of senses before lumping No. of senses after lumping M1 P1 P1Baseline M2 M3 P2 P2- Baseline child 4 1 - - - - - - - detention 2 1 - - - - - - - feeling 6 1 - - - - - - - holiday 2 1 - - - - - - - lady 3 1 - - - - - - - material 5 1 - - - - - - - yew 2 1 - - - - - - - bar 13 13 0.619 0.529 0.500 - - - - bum 4 3 0.850 0.850 0.850 - - - - chair 4 4 0.887 0.895 0.887 - - - - day 10 6 0.921 0.907 0.906 - - - - dyke 2 2 0.893 0.893 0.893 - - - - fatigue 4 3 0.875 0.875 0.875 - - - - hearth 3 2 0.906 0.844 0.844 - - - - mouth 8 4 0.877 0.811 0.846 - - - - nation 4 3 0.806 0.806 0.806 - - - - nature 5 3 0.733 0.756 0.522 - - - - post 8 7 0.517 0.431 0.431 - - - - restraint 6 3 0.932 0.864 0.864 - - - - sense 5 4 0.698 0.684 0.453 - - - - stress 5 3 0.921 0.921 0.921 - - - - art 4 3 0.722 0.494 0.424 0.678 0.562 0.504 0.424 authority 7 5 0.879 0.753 0.538 0.802 0.800 0.709 0.538 channel 7 6 0.735 0.487 0.441 0.715 0.715 0.526 0.441 church 3 3 0.758 0.582 0.573 0.691 0.629 0.609 0.572 circuit 6 5 0.792 0.457 0.434 0.683 0.438 0.446 0.438 facility 5 3 0.875 0.764 0.750 0.874 0.893 0.754 0.750 grip 7 7 0.700 0.540 0.560 0.655 0.574 0.546 0.556 spade 3 3 0.806 0.677 0.677 0.790 0.677 0.677 0.677 Table 3: List of 29 SENSEVAL-2 nouns, their number of senses, and various accuracy figures 3 An Empirical Study We evaluated our approach to word sense disambiguation on all the 29 nouns in the English lexical sample task of SENSEVAL-2 (Edmonds and Cotton, 2001; Kilgarriff 2001). The list of 29 nouns is given in Table 3. The second column of Table 3 lists the number of senses of each noun as given in the WordNet 1.7 sense inventory (Miller, 1990). We first lump together two senses s1 and s2 of a noun if s1 and s2 are translated into the same Chinese word. The number of senses of each noun after sense lumping is given in column 3 of Table 3. For the 7 nouns that are lumped into one sense (i.e., they are all translated into one Chinese word), we do not perform WSD on these words. The average number of senses before and after sense lumping is 5.07 and 3.52 respectively. After sense lumping, we trained a WSD classifier for each noun w, by using the lumped senses in the manually sense-tagged training data for w provided by the SENSEVAL-2 organizers. We then tested the WSD classifier on the official SENSEVAL-2 test data (but with lumped senses) for w. The test accuracy (based on fine-grained scoring of SENSEVAL-2) of each noun obtained is listed in the column labeled M1 in Table 3. We then used our approach of parallel text alignment described in the last section to obtain the training examples from the English side of the parallel texts. Due to the memory size limitation of our machine, we were not able to align all six parallel corpora of 280MB in one alignment run of GIZA++. For two of the corpora, Hong Kong Hansards and Xinhua News, we gathered all English sentences containing the 29 SENSEVAL-2 noun occurrences (and their sentence-aligned Chinese sentence counterparts). This subset, together with the complete corpora of Hong Kong News, Hong Kong Laws, English translation of Chinese Treebank, and Sinorama, is then given to GIZA++ to perform one word alignment run. It took about 40 hours on our 2.4 GHz machine with 2 GB memory to perform this alignment. 
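The sense-lumping step used above groups WordNet senses that share a Chinese translation. A minimal sketch, assuming each sense is listed with its set of selected translations as in Table 1 (a full implementation would merge lumps transitively, e.g. with union-find, but this greedy version suffices for the Table 1 case):

def lump_senses(sense_translations):
    # Map each WordNet sense id to a lumped id; senses that share a Chinese
    # translation receive the same lumped id (e.g. channel senses 1 and 7).
    lumped_of, translation_owner, next_id = {}, {}, 1
    for sense_id, translations in sorted(sense_translations.items()):
        shared = {translation_owner[t] for t in translations if t in translation_owner}
        if shared:
            lumped = min(shared)
        else:
            lumped, next_id = next_id, next_id + 1
        lumped_of[sense_id] = lumped
        for t in translations:
            translation_owner.setdefault(t, lumped)
    return lumped_of

# channel, from Table 1: WordNet senses 1 and 7 are both translated as "频道".
channel = {1: {"频道"}, 2: {"水道", "水渠", "排水渠"}, 3: {"沟"},
           4: {"海峡"}, 5: {"途径"}, 6: {"导管"}, 7: {"频道"}}
print(lump_senses(channel))   # senses 1 and 7 map to the same lumped id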
After word alignment, each 3-sentence context in English containing an occurrence of the noun w that is aligned to a selected Chinese translation then forms a training example. For each SENSEVAL-2 noun w, we then collected training examples from the English side of the parallel texts using the same number of training examples for each sense of w that are present in the manually sense-tagged SENSEVAL-2 official training corpus (lumped-sense version). If there are insufficient training examples for some sense of w from the parallel texts, then we just used as many parallel text training examples as we could find for that sense. We chose the same number of training examples for each sense as the official training data so that we can do a fair comparison between the accuracy of the parallel text alignment approach versus the manual sense-tagging approach. After training a WSD classifier for w with such parallel text examples, we then evaluated the WSD classifier on the same official SENSEVAL-2 test set (with lumped senses). The test accuracy of each noun obtained by training on such parallel text training examples (averaged over 10 trials) is listed in the column labeled P1 in Table 3. The baseline accuracy for each noun is also listed in the column labeled “P1-Baseline” in Table 3. The baseline accuracy corresponds to always picking the most frequently occurring sense in the training data. Ideally, we would hope M1 and P1 to be close in value, since this would imply that WSD based on training examples collected from the parallel text alignment approach performs as well as manually sense-tagged training examples. Comparing the M1 and P1 figures, we observed that there is a set of nouns for which they are relatively close. These nouns are: bar, bum, chair, day, dyke, fatigue, hearth, mouth, nation, nature, post, restraint, sense, stress. This set of nouns is relatively easy to disambiguate, since using the mostfrequently-occurring-sense baseline would have done well for most of these nouns. The parallel text alignment approach works well for nature and sense, among these nouns. For nature, the parallel text alignment approach gives better accuracy, and for sense the accuracy difference is only 0.014 (while there is a relatively large difference of 0.231 between P1 and P1-Baseline of sense). This demonstrates that the parallel text alignment approach to acquiring training examples can yield good results. For the remaining nouns (art, authority, channel, church, circuit, facility, grip, spade), the accuracy difference between M1 and P1 is at least 0.10. Henceforth, we shall refer to this set of 8 nouns as “difficult” nouns. We will give an analysis of the reason for the accuracy difference between M1 and P1 in the next section. 4 4.1 Analysis Sense-Tag Accuracy of Parallel Text Training Examples To see why there is still a difference between the accuracy of the two approaches, we first examined the quality of the training examples obtained through parallel text alignment. If the automatically acquired training examples are noisy, then this could account for the lower P1 score. The word alignment output of GIZA++ contains much noise in general (especially for the low frequency words). However, note that in our approach, we only select the English word occurrences that align to our manually selected Chinese translations. Hence, while the complete set of word alignment output contains much noise, the subset of word occurrences chosen may still have high quality sense tags. 
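Two details of this setup are easy to make concrete: the most-frequent-sense baseline, and the sampling of parallel-text examples so that per-sense counts match the official training data, falling back to whatever is available when a sense is under-represented. Both functions below are illustrative sketches rather than the exact experimental scripts.

import random
from collections import Counter, defaultdict

def most_frequent_sense_baseline(train_senses, test_senses):
    # Accuracy of always predicting the sense most frequent in the training data.
    mfs = Counter(train_senses).most_common(1)[0][0]
    return sum(1 for s in test_senses if s == mfs) / len(test_senses)

def sample_matching_distribution(parallel_examples, official_counts, seed=0):
    # Pick parallel-text examples so that each sense has at most as many examples
    # as in the official SENSEVAL-2 training set; use all available if fewer.
    rng = random.Random(seed)
    by_sense = defaultdict(list)
    for context, sense in parallel_examples:
        by_sense[sense].append((context, sense))
    sampled = []
    for sense, wanted in official_counts.items():
        pool = by_sense.get(sense, [])
        rng.shuffle(pool)
        sampled.extend(pool[:wanted])
    return sampled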
Our manual inspection reveals that the annotation errors introduced by parallel text alignment can be attributed to the following sources: (i) Wrong sentence alignment: Due to erroneous sentence segmentation or sentence alignment, the correct Chinese word that an English word w should align to is not present in its Chinese sentence counterpart. In this case, word alignment will align the wrong Chinese word to w. (ii) Presence of multiple Chinese translation candidates: Sometimes, multiple and distinct Chinese translations appear in the aligned Chinese sentence. For example, for an English occurrence channel, both “频道” (sense 1 translation) and “途 径” (sense 5 translation) happen to appear in the aligned Chinese sentence. In this case, word alignment may erroneously align the wrong Chinese translation to channel. (iii) Truly ambiguous word: Sometimes, a word is truly ambiguous in a particular context, and different translators may translate it differently. For example, in the phrase “the church meeting”, church could be the physical building sense (教 堂), or the institution sense (教会). In manual sense tagging done in SENSEVAL-2, it is possible to assign two sense tags to church in this case, but in the parallel text setting, a particular translator will translate it in one of the two ways (教堂 or 教 会), and hence the sense tag found by parallel text alignment is only one of the two sense tags. By manually examining a subset of about 1,000 examples, we estimate that the sense-tag error rate of training examples (tagged with lumped senses) obtained by our parallel text alignment approach is less than 1%, which compares favorably with the quality of manually sense tagged corpus prepared in SENSEVAL-2 (Kilgarriff, 2001). 4.2 Domain Dependence and Insufficient Sense Coverage While it is encouraging to find out that the parallel text sense tags are of high quality, we are still left with the task of explaining the difference between M1 and P1 for the set of difficult nouns. Our further investigation reveals that the accuracy difference between M1 and P1 is due to the following two reasons: domain dependence and insufficient sense coverage. Domain Dependence The accuracy figure of M1 for each noun is obtained by training a WSD classifier on the manually sense-tagged training data (with lumped senses) provided by SENSEVAL-2 organizers, and testing on the corresponding official test data (also with lumped senses), both of which come from similar domains. In contrast, the P1 score of each noun is obtained by training the WSD classifier on a mixture of six parallel corpora, and tested on the official SENSEVAL-2 test set, and hence the training and test data come from dissimilar domains in this case. Moreover, from the “docsrc” field (which records the document id that each training or test example originates) of the official SENSEVAL-2 training and test examples, we realized that there are many cases when some of the examples from a document are used as training examples, while the rest of the examples from the same document are used as test examples. In general, such a practice results in higher test accuracy, since the test examples would look a lot closer to the training examples in this case. To address this issue, we took the official SENSEVAL-2 training and test examples of each noun w and combined them together. We then randomly split the data into a new training and a new test set such that no training and test examples come from the same document. 
The number of training examples in each sense in such a new training set is the same as that in the official training data set of w. A WSD classifier was then trained on this new training set, and tested on this new test set. We conducted 10 random trials, each time splitting into a different training and test set but ensuring that the number of training examples in each sense (and thus the sense distribution) follows the official training set of w. We report the average accuracy of the 10 trials. The accuracy figures for the set of difficult nouns thus obtained are listed in the column labeled M2 in Table 3. We observed that M2 is always lower in value compared to M1 for all difficult nouns. This suggests that the effect of training and test examples coming from the same document has inflated the accuracy figures of SENSEVAL-2 nouns. Next, we randomly selected 10 sets of training examples from the parallel corpora, such that the number of training examples in each sense followed the official training set of w. (When there were insufficient training examples for a sense, we just used as many as we could find from the parallel corpora.) In each trial, after training a WSD classifier on the selected parallel text examples, we tested the classifier on the same test set (from SENSEVAL-2 provided data) used in that trial that generated the M2 score. The accuracy figures thus obtained for all the difficult nouns are listed in the column labeled P2 in Table 3. Insufficient Sense Coverage We observed that there are situations when we have insufficient training examples in the parallel corpora for some of the senses of some nouns. For instance, no occurrences of sense 5 of the noun circuit (racing circuit, a racetrack for automobile races) could be found in the parallel corpora. To ensure a fairer comparison, for each of the 10-trial manually sense-tagged training data that gave rise to the accuracy figure M2 of a noun w, we extracted a new subset of 10-trial (manually sense-tagged) training data by ensuring adherence to the number of training examples found for each sense of w in the corresponding parallel text training set that gave rise to the accuracy figure P2 for w. The accuracy figures thus obtained for the difficult nouns are listed in the column labeled M3 in Table 3. M3 thus gave the accuracy of training on manually sense-tagged data but restricted to the number of training examples found in each sense from parallel corpora. 4.3 5 6 Discussion The difference between the accuracy figures of M2 and P2 averaged over the set of all difficult nouns is 0.140. This is smaller than the difference of 0.189 between the accuracy figures of M1 and P1 averaged over the set of all difficult nouns. This confirms our hypothesis that eliminating the possibility that training and test examples come from the same document would result in a fairer comparison. In addition, the difference between the accuracy figures of M3 and P2 averaged over the set of all difficult nouns is 0.065. That is, eliminating the advantage that manually sense-tagged data have in their sense coverage would reduce the performance gap between the two approaches from 0.140 to 0.065. Notice that this reduction is particularly significant for the noun circuit. For this noun, the parallel corpora do not have enough training examples for sense 4 and sense 5 of circuit, and these two senses constitute approximately 23% in each of the 10-trial test set. 
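One way to implement the re-splitting behind M2, keeping every example of a document on a single side while preserving the official per-sense training counts, is sketched below. The exact assignment strategy used in the paper is not specified, so this is only an approximation; docsrc refers to the document-id field mentioned earlier.

import random
from collections import defaultdict

def document_aware_split(examples, official_counts, seed=0):
    # examples: list of (docsrc, context, sense). Whole documents are assigned to
    # the training side until each sense reaches its official training count;
    # every example from the remaining documents becomes test data.
    rng = random.Random(seed)
    docs = defaultdict(list)
    for doc, context, sense in examples:
        docs[doc].append((context, sense))
    doc_ids = list(docs)
    rng.shuffle(doc_ids)

    need = dict(official_counts)       # training examples still needed per sense
    train, test = [], []
    for doc in doc_ids:
        if any(n > 0 for n in need.values()):
            for context, sense in docs[doc]:
                if need.get(sense, 0) > 0:
                    train.append((context, sense))
                    need[sense] -= 1
                # surplus examples in a training-side document are left unused,
                # so no document contributes to both training and test data
        else:
            test.extend(docs[doc])
    return train, test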
We believe that the remaining difference of 0.065 between the two approaches could be attributed to the fact that the training and test examples of the manually sense-tagged corpus, while not coming from the same document, are however still drawn from the same general domain. To illustrate, we consider the noun channel where the difference between M3 and P2 is the largest. For channel, it turns out that a substantial number of the training and test examples contain the collocation “Channel tunnel” or “Channel Tunnel”. On average, about 9.8 training examples and 6.2 test examples contain this collocation. This alone would have accounted for 0.088 of the accuracy difference between the two approaches. That domain dependence is an important issue affecting the performance of WSD programs has been pointed out by (Escudero et al., 2000). Our work confirms the importance of domain dependence in WSD. As to the problem of insufficient sense coverage, with the steady increase and availability of parallel corpora, we believe that getting sufficient sense coverage from larger parallel corpora should not be a problem in the near future for most of the commonly occurring words in a language. Related Work Brown et al. (1991) is the first to have explored statistical methods in word sense disambiguation in the context of machine translation. However, they only looked at assigning at most two senses to a word, and their method only asked a single question about a single word of context. Li and Li (2002) investigated a bilingual bootstrapping technique, which differs from the method we implemented here. Their method also does not require a parallel corpus. The research of (Chugur et al., 2002) dealt with sense distinctions across multiple languages. Ide et al. (2002) investigated word sense distinctions using parallel corpora. Resnik and Yarowsky (2000) considered word sense disambiguation using multiple languages. Our present work can be similarly extended beyond bilingual corpora to multilingual corpora. The research most similar to ours is the work of Diab and Resnik (2002). However, they used machine translated parallel corpus instead of human translated parallel corpus. In addition, they used an unsupervised method of noun group disambiguation, and evaluated on the English all-words task. Conclusion In this paper, we reported an empirical study to evaluate an approach of automatically acquiring sense-tagged training data from English-Chinese parallel corpora, which were then used for disambiguating the nouns in the SENSEVAL-2 English lexical sample task. Our investigation reveals that this method of acquiring sense-tagged data is promising and provides an alternative to manual sense tagging. Acknowledgements This research is partially supported by a research grant R252-000-125-112 from National University of Singapore Academic Research Fund. References Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1991. Wordsense disambiguation using statistical methods. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 264270. Irina Chugur, Julio Gonzalo, and Felisa Verdejo. 2002. Polysemy and sense proximity in the Senseval-2 test suite. In Proceedings of the ACL SIGLEX Workshop on Word Sense Disambiguation: Recent Successes and Future Directions, pages 32-39. Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. 
In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 255-262. Philip Edmonds and Scott Cotton. 2001. SENSEVAL-2: Overview. In Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL-2), pages 1-5. Gerard Escudero, Lluis Marquez, and German Rigau. 2000. An empirical study of the domain dependence of supervised word sense disambiguation systems. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 172-180. Radu Florian and David Yarowsky. 2002. Modeling consensus: Classifier combination for word sense disambiguation. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 25-32. Nancy Ide, Tomaz Erjavec, and Dan Tufis. 2002. Sense discrimination with parallel corpora. In Proceedings of the ACL SIGLEX Workshop on Word Sense Disambiguation: Recent Successes and Future Directions, pages 54-60. Adam Kilgarriff. 2001. English lexical sample task description. In Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL-2), pages 17-20. Yoong Keok Lee and Hwee Tou Ng. 2002. An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 4148. Cong Li and Hang Li. 2002. Word translation disambiguation using bilingual bootstrapping. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 343-351. I. Dan Melamed. 2001. Empirical Methods for Exploiting Parallel Texts. MIT Press, Cambridge. Rada F. Mihalcea and Dan I. Moldovan. 2001. Pattern learning and active feature selection for word sense disambiguation. In Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL-2), pages 127-130. George A. Miller. (Ed.) 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235-312. Franz Josef Och and Hermann Ney. 2000. Improved statistical alignment models. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 440-447. Philip Resnik. 1999. Mining the Web for bilingual text. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 527534. Philip Resnik and David Yarowsky. 1997. A perspective on word sense disambiguation methods and their evaluation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?, pages 79-86. Philip Resnik and David Yarowsky. 2000. Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation. Natural Language Engineering, 5(2):113-133. David Yarowsky, Silviu Cucerzan, Radu Florian, Charles Schafer, and Richard Wicentowski. 2001. The Johns Hopkins SENSEVAL2 system descriptions. In Proceedings of the Second International Workshop on Evaluating Word Sense Disambiguation Systems (SENSEVAL-2), pages 163-166.
2003
58
Learning the Countability of English Nouns from Corpus Data Timothy Baldwin CSLI Stanford University Stanford, CA, 94305 [email protected] Francis Bond NTT Communication Science Laboratories Nippon Telegraph and Telephone Corporation Kyoto, Japan [email protected] Abstract This paper describes a method for learning the countability preferences of English nouns from raw text corpora. The method maps the corpus-attested lexico-syntactic properties of each noun onto a feature vector, and uses a suite of memory-based classifiers to predict membership in 4 countability classes. We were able to assign countability to English nouns with a precision of 94.6%. 1 Introduction This paper is concerned with the task of knowledgerich lexical acquisition from unannotated corpora, focusing on the case of countability in English. Knowledge-rich lexical acquisition takes unstructured text and extracts out linguistically-precise categorisations of word and expression types. By combining this with a grammar, we can build broad-coverage deep-processing tools with a minimum of human effort. This research is close in spirit to the work of Light (1996) on classifying the semantics of derivational affixes, and Siegel and McKeown (2000) on learning verb aspect. In English, nouns heading noun phrases are typically either countable or uncountable (also called count and mass). Countable nouns can be modified by denumerators, prototypically numbers, and have a morphologically marked plural form: one dog, two dogs. Uncountable nouns cannot be modified by denumerators, but can be modified by unspecific quantifiers such as much, and do not show any number distinction (prototypically being singular): *one equipment, some equipment, *two equipments. Many nouns can be used in countable or uncountable environments, with differences in interpretation. We call the lexical property that determines which uses a noun can have the noun’s countability preference. Knowledge of countability preferences is important both for the analysis and generation of English. In analysis, it helps to constrain the interpretations of parses. In generation, the countability preference determines whether a noun can become plural, and the range of possible determiners. Knowledge of countability is particularly important in machine translation, because the closest translation equivalent may have different countability from the source noun. Many languages, such as Chinese and Japanese, do not mark countability, which means that the choice of countability will be largely the responsibility of the generation component (Bond, 2001). In addition, knowledge of countability obtained from examples of use is an important resource for dictionary construction. In this paper, we learn the countability preferences of English nouns from unannotated corpora. We first annotate them automatically, and then train classifiers using a set of gold standard data, taken from COMLEX (Grishman et al., 1998) and the transfer dictionaries used by the machine translation system ALT-J/E (Ikehara et al., 1991). The classifiers and their training are described in more detail in Baldwin and Bond (2003). These are then run over the corpus to extract nouns as members of four classes — countable: dog; uncountable: furniture; bipartite: [pair of] scissors and plural only: clothes. We first discuss countability in more detail (§ 2). Then we present the lexical resources used in our experiment (§ 3). Next, we describe the learning process (§ 4). 
We then present our results and evaluation (§ 5). Finally, we discuss the theoretical and practical implications (§ 6). 2 Background Grammatical countability is motivated by the semantic distinction between object and substance reference (also known as bounded/non-bounded or individuated/non-individuated). It is a subject of contention among linguists as to how far grammatical countability is semantically motivated and how much it is arbitrary (Wierzbicka, 1988). The prevailing position in the natural language processing community is effectively to treat countability as though it were arbitrary and encode it as a lexical property of nouns. The study of countability is complicated by the fact that most nouns can have their countability changed: either converted by a lexical rule or embedded in another noun phrase. An example of conversion is the so-called universal packager, a rule which takes an uncountable noun with an interpretation as a substance, and returns a countable noun interpreted as a portion of the substance: I would like two beers. An example of embedding is the use of a classifier, e.g. uncountable nouns can be embedded in countable noun phrases as complements of classifiers: one piece of equipment. Bond et al. (1994) suggested a division of countability into five major types, based on Allan (1980)’s noun countability preferences (NCPs). Nouns which rarely undergo conversion are marked as either fully countable, uncountable or plural only. Fully countable nouns have both singular and plural forms, and cannot be used with determiners such as much, little, a little, less and overmuch. Uncountable nouns, such as furniture, have no plural form, and can be used with much. Plural only nouns never head a singular noun phrase: goods, scissors. Nouns that are readily converted are marked as either strongly countable (for countable nouns that can be converted to uncountable, such as cake) or weakly countable (for uncountable nouns that are readily convertible to countable, such as beer). NLP systems must list countability for at least some nouns, because full knowledge of the referent of a noun phrase is not enough to predict countability. There is also a language-specific knowledge requirement. This can be shown most simply by comparing languages: different languages encode the countability of the same referent in different ways. There is nothing about the concept denoted by lightning, e.g., that rules out *a lightning being interpreted as a flash of lightning. Indeed, the German and French translation equivalents are fully countable (ein Blitz and un ´eclair respectively). Even within the same language, the same referent can be encoded countably or uncountably: clothes/clothing, things/stuff, jobs/work. Therefore, we must learn countability classes from usage examples in corpora. There are several impediments to this approach. The first is that words are frequently converted to different countabilities, sometimes in such a way that other native speakers will dispute the validity of the new usage. We do not necessarily wish to learn such rare examples, and may not need to learn more common conversions either, as they can be handled by regular lexical rules (Copestake and Briscoe, 1995). The second problem is that some constructions affect the apparent countability of their head: for example, nouns denoting a role, which are typically countable, can appear without an article in some constructions (e.g. We elected him treasurer). 
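The mapping from the two lexicons' native markings onto these four classes can be written down directly from the description above; the COMLEX feature names follow the text, and the ALT-J/E side uses its five noun countability preferences plus the default classifier.

def classes_from_comlex(features):
    # features: set of COMLEX markings for a noun, e.g. {"COUNTABLE"}.
    classes = set()
    if "COUNTABLE" in features:
        classes.add("countable")
    if "NCOLLECTIVE" in features or ":PLURAL *NONE*" in features:
        classes.add("uncountable")
    if ":SINGULAR *NONE*" in features:
        classes.add("plural only")
    return classes                      # COMLEX has no marking for bipartite nouns

def classes_from_altje(ncp, default_classifier=None):
    # ncp: one of ALT-J/E's five noun countability preferences.
    classes = set()
    if ncp in {"fully countable", "strongly countable", "weakly countable"}:
        classes.add("countable")
    if ncp in {"strongly countable", "weakly countable", "uncountable"}:
        classes.add("uncountable")
    if ncp == "plural only":
        classes.add("bipartite" if default_classifier == "pair" else "plural only")
    return classes

print(classes_from_altje("weakly countable"))   # countable and uncountable
print(classes_from_comlex({"COUNTABLE"}))       # countable only

These class sets are also what the agreement figure discussed next is computed over: for tomato, {countable} against {countable, uncountable} agrees on three of the four classes.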
The third is that different senses of a word may have different countabilities: interest “a sense of concern with and curiosity” is normally countable, whereas interest “fixed charge for borrowing money” is uncountable. There have been at several earlier approaches to the automatic determination of countability. Bond and Vatikiotis-Bateson (2002) determine a noun’s countability preferences from its semantic class, and show that semantics predicts (5-way) countability 78% of the time with their ontology. O’Hara et al. (2003) get better results (89.5%) using the much larger Cyc ontology, although they only distinguish between countable and uncountable. Schwartz (2002) created an automatic countability tagger (ACT) to learn noun countabilities from the British National Corpus. ACT looks at determiner co-occurrence in singular noun chunks, and classifies the noun if and only if it occurs with a determiner which can modify only countable or uncountable nouns. The method has a coverage of around 50%, and agrees with COMLEX for 68% of the nouns marked countable and with the ALT-J/E lexicon for 88%. Agreement was worse for uncountable nouns (6% and 44% respectively). 3 Resources Information about noun countability was obtained from two sources. One was COMLEX 3.0 (Grishman et al., 1998), which has around 22,000 noun entries. Of these, 12,922 are marked as being countable (COUNTABLE) and 4,976 as being uncountable (NCOLLECTIVE or :PLURAL *NONE*). The remainder are unmarked for countability. The other was the common noun part of ALTJ/E’s Japanese-to-English semantic transfer dictionary (Bond, 2001). It contains 71,833 linked Japanese-English pairs, each of which has a value for the noun countability preference of the English noun. Considering only unique English entries with different countability and ignoring all other information gave 56,245 entries. Nouns in the ALT-J/E dictionary are marked with one of the five major countability preference classes described in Section 2. In addition to countability, default values for number and classifier (e.g. blade for grass: blade of grass) are also part of the lexicon. We classify words into four possible classes, with some words belonging to multiple classes. The first class is countable: COMLEX’s COUNTABLE and ALTJ/E’s fully, strongly and weakly countable. The second class is uncountable: COMLEX’s NCOLLECTIVE or :PLURAL *NONE* and ALT-J/E’s strongly and weakly countable and uncountable. The third class is bipartite nouns. These can only be plural when they head a noun phrase (trousers), but singular when used as a modifier (trouser leg). When they are denumerated they use pair: a pair of scissors. COMLEX does not have a feature to mark bipartite nouns; trouser, for example, is listed as countable. Nouns in ALT-J/E marked plural only with a default classifier of pair are classified as bipartite. The last class is plural only nouns: those that only have a plural form, such as goods. They can neither be denumerated nor modified by much. Many of these nouns, such as clothes, use the plural form even as modifiers (a clothes horse). The word clothes cannot be denumerated at all. Nouns marked :SINGULAR *NONE* in COMLEX and nouns in ALTJ/E marked plural only without the default classifier pair are classified as plural only. There was some noise in the ALT-J/E data, so this class was handchecked, giving a total of 104 entries; 84 of these were attested in the training data. 
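For concreteness, the mapping from the two lexicons' annotations onto the four classes just described can be sketched as follows. This is a minimal illustration only: the lexicon codes are treated as plain strings, and the dictionary layout is our own assumption rather than the actual COMLEX or ALT-J/E formats.

    COMLEX_TO_CLASSES = {
        "COUNTABLE": {"countable"},
        "NCOLLECTIVE": {"uncountable"},
        ":PLURAL *NONE*": {"uncountable"},
        ":SINGULAR *NONE*": {"plural only"},
    }

    ALTJE_TO_CLASSES = {
        "fully countable": {"countable"},
        "strongly countable": {"countable", "uncountable"},
        "weakly countable": {"countable", "uncountable"},
        "uncountable": {"uncountable"},
        "plural only": {"plural only"},
    }

    def altje_classes(ncp, default_classifier=None):
        """Map an ALT-J/E noun countability preference (ncp) onto the four
        target classes; 'plural only' nouns whose default classifier is 'pair'
        are treated as bipartite, as described in the text."""
        if ncp == "plural only" and default_classifier == "pair":
            return {"bipartite"}
        return set(ALTJE_TO_CLASSES.get(ncp, set()))

    # e.g. altje_classes("plural only", "pair")   -> {"bipartite"}                (scissors)
    #      altje_classes("weakly countable")      -> {"countable", "uncountable"} (beer)
    #      COMLEX_TO_CLASSES[":SINGULAR *NONE*"]  -> {"plural only"}              (goods)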
Our classification of countability is a subset of ALT-J/E’s, in that we use only the three basic ALT-J/E classes of countable, uncountable and plural only (although we treat bipartite as a separate class, not a subclass). As we derive our countability classifications from corpus evidence, it is possible to reconstruct countability preferences (i.e. fully, strongly, or weakly countable) from the relative token occurrence of the different countabilities for that noun.

In order to get an idea of the intrinsic difficulty of the countability learning task, we tested the agreement between the two resources in the form of classification accuracy. That is, we calculate the average proportion of (both positive and negative) countability classifications over which the two methods agree. E.g., COMLEX lists tomato as being only countable where ALT-J/E lists it as being both countable and uncountable. Agreement for this one noun, therefore, is 3/4, as there is agreement for the classes of countable, plural only and bipartite (with implicit agreement as to negative membership for the latter two classes), but not for uncountable. Averaging over the total set of nouns countability-classified in both lexicons, the mean was 93.8%. Almost half of the disagreements came from words with two countabilities in ALT-J/E but only one in COMLEX.

4 Learning Countability

The basic methodology employed in this research is to identify lexical and/or constructional features associated with the countability classes, and determine the relative corpus occurrence of those features for each noun. We then feed the noun feature vectors into a classifier and make a judgement on the membership of the given noun in each countability class.

In order to extract the feature values from corpus data, we need the basic phrase structure, and particularly noun phrase structure, of the source text. We use three different sources for this phrase structure: part-of-speech tagged data, chunked data and fully-parsed data, as detailed below.

The corpus of choice throughout this paper is the written component of the British National Corpus (BNC version 2, Burnard (2000)), totalling around 90m w-units (POS-tagged items). We chose this because of its good coverage of different usages of English, and thus of different countabilities. The only component of the original annotation we make use of is the sentence tokenisation.

Below, we outline the features used in this research and methods of describing feature interaction, along with the pre-processing tools and extraction techniques, and the classifier architecture. The full range of different classifier architectures tested as part of this research, and the experiments to choose between them, are described in Baldwin and Bond (2003).

4.1 Feature space

For each target noun, we compute a fixed-length feature vector based on a variety of features intended to capture linguistic constraints and/or preferences associated with particular countability classes. The feature space is partitioned up into feature clusters, each of which is conditioned on the occurrence of the target noun in a given construction. Feature clusters take the form of one- or two-dimensional feature matrices, with each dimension describing a lexical or syntactic property of the construction in question. In the case of a one-dimensional feature cluster (e.g. noun occurring in singular or plural form), each component feature feat_s in the cluster is translated into the 3-tuple:

    ⟨ freq(feat_s|word),  freq(feat_s|word) / freq(word),  freq(feat_s|word) / Σ_i freq(feat_i|word) ⟩

In the case of a two-dimensional feature cluster (e.g. subject-position noun number vs. verb number agreement), each component feature feat_{s,t} is translated into the 5-tuple:

    ⟨ freq(feat_{s,t}|word),  freq(feat_{s,t}|word) / freq(word),  freq(feat_{s,t}|word) / Σ_{i,j} freq(feat_{i,j}|word),
      freq(feat_{s,t}|word) / Σ_i freq(feat_{i,t}|word),  freq(feat_{s,t}|word) / Σ_j freq(feat_{s,j}|word) ⟩

See Baldwin and Bond (2003) for further details (a worked sketch of this normalisation is given after the feature-extraction discussion below). The following is a brief description of each feature cluster and its dimensionality (1D or 2D). A summary of the number of base features and prediction of positive feature correlations with countability classes is presented in Table 1.

Table 1: Predicted feature-correlations for each feature cluster (S=singular, P=plural)

    Feature cluster (base feature no.)   Countable             Uncountable      Bipartite        Plural only
    Head number (2)                      S,P                   S                P                P
    Modifier number (2)                  S,P                   S                S                P
    Subj–V agreement (2 × 2)             [S,S],[P,P]           [S,S]            [P,P]            [P,P]
    Coordinate number (2 × 2)            [S,S],[P,S],[P,P]     [S,S],[S,P]      [P,S],[P,P]      [P,S],[P,P]
    N of N (11 × 2)                      [100s,P], ...         [lack,S], ...    [pair,P], ...    [rate,P], ...
    PPs (52 × 2)                         [per,-DET], ...       [in,-DET], ...   —                —
    Pronoun (12 × 2)                     [it,S],[they,P], ...  [it,S], ...      [they,P], ...    [they,P], ...
    Singular determiners (10)            a, each, ...          much, ...        —                —
    Plural determiners (12)              many, few, ...        —                —                many, ...
    Neutral determiners (11 × 2)         [less,P], ...         [BARE,S], ...    [enough,P], ...  [all,P], ...

Head noun number:1D the number of the target noun when it heads an NP (e.g. a shaggy dog = SINGULAR)

Modifier noun number:1D the number of the target noun when a modifier in an NP (e.g. dog food = SINGULAR)

Subject–verb agreement:2D the number of the target noun in subject position vs. number agreement on the governing verb (e.g. the dog barks = ⟨SINGULAR,SINGULAR⟩)

Coordinate noun number:2D the number of the target noun vs. the number of the head nouns of conjuncts (e.g. dogs and mud = ⟨PLURAL,SINGULAR⟩)

N of N constructions:2D the number of the target noun (N) vs. the type of the N′ in an N′ of N construction (e.g. the type of dog = ⟨TYPE,SINGULAR⟩). We have identified a total of 11 N′ types for use in this feature cluster (e.g. COLLECTIVE, LACK, TEMPORAL).

Occurrence in PPs:2D the presence or absence of a determiner (±DET) when the target noun occurs in singular form in a PP (e.g. per dog = ⟨per,−DET⟩). This feature cluster exploits the fact that countable nouns occur determinerless in singular form with only very particular prepositions (e.g. by bus, *on bus, *with bus) whereas with uncountable nouns, there are fewer restrictions on what prepositions a target noun can occur with (e.g. on furniture, with furniture, ?by furniture).

Pronoun co-occurrence:2D what personal and possessive pronouns occur in the same sentence as singular and plural instances of the target noun (e.g. The dog ate its dinner = ⟨its,SINGULAR⟩). This is a proxy for pronoun binding effects, and is determined over a total of 12 third-person pronoun forms (normalised for case, e.g. he, their, itself).

Singular determiners:1D what singular-selecting determiners occur in NPs headed by the target noun in singular form (e.g. a dog = a). All singular-selecting determiners considered are compatible with only countable (e.g. another, each) or uncountable nouns (e.g. much, little). Determiners compatible with either are excluded from the feature cluster (cf. this dog, this information).
Note that the term “determiner” is used loosely here and below to denote an amalgam of simplex determiners (e.g. a), the null determiner, complex determiners (e.g. all the), numeric expressions (e.g. one), and adjectives (e.g. numerous), as relevant to the particular feature cluster. Plural determiners:1D what plural-selecting determiners occur in NPs headed by the target noun in plural form (e.g. few dogs = few). As with singular determiners, we focus on those plural-selecting determiners which are compatible with a proper subset of count, plural only and bipartite nouns. Non-bounded determiners:2D what non-bounded determiners occur in NPs headed by the target noun, and what is the number of the target noun for each (e.g. more dogs = ⟨more,PLURAL⟩). Here again, we restrict our focus to nonbounded determiners that select for singularform uncountable nouns (e.g. sufficient furniture) and plural-form countable, plural only and bipartite nouns (e.g. sufficient dogs). The above feature clusters produce a combined total of 1,284 individual feature values. 4.2 Feature extraction In order to extract the features described above, we need some mechanism for detecting NP and PP boundaries, determining subject–verb agreement and deconstructing NPs in order to recover conjuncts and noun-modifier data. We adopt three approaches. First, we use part-of-speech (POS) tagged data and POS-based templates to extract out the necessary information. Second, we use chunk data to determine NP and PP boundaries, and mediumrecall chunk adjacency templates to recover interphrasal dependency. Third, we fully parse the data and simply read off all necessary data from the dependency output. With the POS extraction method, we first Penntagged the BNC using an fnTBL-based tagger (Ngai and Florian, 2001), training over the Brown and WSJ corpora with some spelling, number and hyphenation normalisation. We then lemmatised this data using a version of morph (Minnen et al., 2001) customised to the Penn POS tagset. Finally, we implemented a range of high-precision, low-recall POS-based templates to extract out the features from the processed data. For example, NPs are in many cases recoverable with the following Perl-style regular expression over Penn POS tags: (PDT)* DT (RB|JJ[RS]?|NNS?)* NNS? [ˆN]. For the chunker, we ran fnTBL over the lemmatised tagged data, training over CoNLL 2000style (Tjong Kim Sang and Buchholz, 2000) chunkconverted versions of the full Brown and WSJ corpora. For the NP-internal features (e.g. determiners, head number), we used the noun chunks directly, or applied POS-based templates locally within noun chunks. For inter-chunk features (e.g. subject–verb agreement), we looked at only adjacent chunk pairs so as to maintain a high level of precision. As the full parser, we used RASP (Briscoe and Carroll, 2002), a robust tag sequence grammarbased parser. RASP’s grammatical relation output function provides the phrase structure in the form of lemmatised dependency tuples, from which it is possible to read off the feature information. RASP has the advantage that recall is high, although precision is potentially lower than chunking or tagging as the parser is forced into resolving phrase attachment ambiguities and committing to a single phrase structure analysis. Although all three systems map onto an identical feature space, the feature vectors generated for a given target noun diverge in content due to the different feature extraction methodologies. 
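As a worked sketch of the normalisation described in Section 4.1 and referred to above, the following Python fragment turns raw per-cluster counts for a target noun into the 3-tuple and 5-tuple feature values. The counts in the usage comment are invented, and the extraction of those counts from the corpus would be done with the POS templates, chunker or parser just described.

    from collections import Counter

    def one_dim_feature_tuple(counts, s, word_freq):
        """Translate a one-dimensional feature cluster into the 3-tuple above:
        raw frequency, frequency relative to the word's corpus frequency, and
        frequency relative to the other features in the same cluster."""
        f = counts[s]
        cluster_total = sum(counts.values())
        return (f,
                f / word_freq if word_freq else 0.0,
                f / cluster_total if cluster_total else 0.0)

    def two_dim_feature_tuple(counts, s, t, word_freq):
        """Translate one cell of a two-dimensional feature cluster into the
        5-tuple above (raw count plus four normalisations)."""
        f = counts[(s, t)]
        total = sum(counts.values())
        fixed_t = sum(v for (i, j), v in counts.items() if j == t)
        fixed_s = sum(v for (i, j), v in counts.items() if i == s)
        safe = lambda num, den: num / den if den else 0.0
        return (f, safe(f, word_freq), safe(f, total), safe(f, fixed_t), safe(f, fixed_s))

    # Hypothetical head-number counts for "dog": 412 singular heads, 233 plural
    # heads, 700 corpus occurrences overall.
    print(one_dim_feature_tuple(Counter({"S": 412, "P": 233}), "S", 700))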
In addition, we only consider nouns that occur at least 10 times as head of an NP, causing slight disparities in the target noun type space for the three systems. There were sufficient instances found by all three systems for 20,530 common nouns (out of 33,050 for which at least one system found sufficient instances).

4.3 Classifier architecture

The classifier design employed in this research is four parallel supervised classifiers, one for each countability class. This allows us to classify a single noun into multiple countability classes, e.g. demand is both countable and uncountable. Thus, rather than classifying a given target noun according to the unique most plausible countability class, we attempt to capture its full range of countabilities. Note that the proposed classifier design is that which was found by Baldwin and Bond (2003) to be optimal for the task, out of a wide range of classifier architectures.

In order to discourage the classifiers from overtraining on negative evidence, we constructed the gold-standard training data from unambiguously negative exemplars and potentially ambiguous positive exemplars. That is, we would like classifiers to judge a target noun as not belonging to a given countability class only in the absence of positive evidence for that class. This was achieved in the case of countable nouns, for instance, by extracting all countable nouns from each of the ALT-J/E and COMLEX lexicons. As positive training exemplars, we then took the intersection of those nouns listed as countable in both lexicons (irrespective of membership in alternate countability classes); negative training exemplars, on the other hand, were those contained in both lexicons but not classified as countable in either.¹ The uncountable gold-standard data was constructed in a similar fashion. We used the ALT-J/E lexicon as our source of plural only and bipartite nouns, using all the instances listed as our positive exemplars. The set of negative exemplars was constructed in each case by taking the intersection of nouns not contained in the given countability class in ALT-J/E, with all annotated nouns with non-identical singular and plural forms in COMLEX. Having extracted the positive and negative exemplar noun lists for each countability class, we filtered out all noun lemmata not occurring in the BNC.

¹ Any nouns not annotated for countability in COMLEX were ignored in this process so as to assure genuinely negative exemplars.

The final make-up of the gold-standard data for each of the countability classes is listed in Table 2, along with a baseline classification accuracy for each class (“Baseline”), based on the relative frequency of the majority class (positive or negative). That is, for bipartite nouns, we achieve a 99.4% classification accuracy by arbitrarily classifying every training instance as negative.

Table 2: Details of the gold-standard data

    Class        Positive data   Negative data   Baseline
    Countable    4,342           1,476           .746
    Uncountable  1,519           5,471           .783
    Bipartite    35              5,639           .994
    Plural only  84              5,639           .985

The supervised classifiers were built using TiMBL version 4.2 (Daelemans et al., 2002), a memory-based classification system based on the k-nearest neighbour algorithm. As a result of extensive parameter optimisation, we settled on the default configuration for TiMBL with k set to 9.²

² We additionally experimented with the kernel-based TinySVM system, but found TiMBL to be superior in all cases.
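The classifier set-up can be sketched as below. TiMBL is the memory-based learner actually used in this work; the sketch substitutes scikit-learn's k-nearest-neighbour classifier purely for illustration (with k = 9, as in the text), and the data-structure and function names are our own assumptions.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier  # stand-in for TiMBL

    CLASSES = ["countable", "uncountable", "bipartite", "plural only"]

    def train_parallel_classifiers(feature_vectors, gold):
        """Train one binary k-NN classifier per countability class.
        `feature_vectors` maps noun -> feature vector; `gold` maps class name ->
        (positive_nouns, negative_nouns), both sets, built from the two lexicons."""
        classifiers = {}
        for cls in CLASSES:
            pos, neg = gold[cls]
            nouns = [n for n in pos | neg if n in feature_vectors]
            X = np.array([feature_vectors[n] for n in nouns])
            y = np.array([n in pos for n in nouns])
            clf = KNeighborsClassifier(n_neighbors=9)
            clf.fit(X, y)
            classifiers[cls] = clf
        return classifiers

    def classify(noun_vector, classifiers):
        """Return the (possibly multiple) classes assigned to a single noun."""
        v = np.array(noun_vector).reshape(1, -1)
        return {cls for cls, clf in classifiers.items() if clf.predict(v)[0]}

Because the four classifiers are run in parallel, a noun such as demand can legitimately be returned as both countable and uncountable.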
5 Results and Evaluation

Evaluation is broken down into two components. First, we determine the optimal classifier configuration for each countability class by way of stratified cross-validation over the gold-standard data. We then run each classifier in optimised configuration over the remaining target nouns for which we have feature vectors.

5.1 Cross-validated results

First, we ran the classifiers over the full feature set for the three feature extraction methods. In each case, we quantify the classifier performance by way of 10-fold stratified cross-validation over the gold-standard data for each countability class. The final classification accuracy and F-score³ are averaged over the 10 iterations.

³ Calculated according to: 2 · precision · recall / (precision + recall)

The cross-validated results for each classifier are presented in Table 3, broken down into the different feature extraction methods. For each, in addition to the F-score and classification accuracy, we present the relative error reduction (e.r.) in classification accuracy over the majority-class baseline for that gold-standard set (see Table 2). For each countability class, we additionally ran the classifier over the concatenated feature vectors for the three basic feature extraction methods, producing a 3,852-value feature space (“Combined”).

Table 3: Cross-validation results

    Class        System     Accuracy (e.r.)   F-score
    Countable    Tagger∗    .928 (.715)       .953
                 Chunker    .933 (.734)       .956
                 RASP∗      .923 (.698)       .950
                 Combined   .939 (.759)       .960
    Uncountable  Tagger     .945 (.746)       .876
                 Chunker∗   .945 (.747)       .876
                 RASP∗      .944 (.743)       .872
                 Combined   .952 (.779)       .892
    Bipartite    Tagger     .997 (.489)       .752
                 Chunker    .997 (.460)       .704
                 RASP       .997 (.488)       .700
                 Combined   .996 (.403)       .722
    Plural only  Tagger     .989 (.275)       .558
                 Chunker    .990 (.299)       .568
                 RASP∗      .989 (.227)       .415
                 Combined   .990 (.323)       .582

Given the high baseline classification accuracies for each gold-standard dataset, the most revealing statistics in Table 3 are the error reduction and F-score values. In all cases other than bipartite, the combined system outperformed the individual systems. The difference in F-score is statistically significant (based on the two-tailed t-test, p < .05) for the asterisked systems in Table 3. For the bipartite class, the difference in F-score is not statistically significant between any system pairing. There is surprisingly little separating the tagger-, chunker- and RASP-based feature extraction methods. This is largely due to the precision/recall tradeoff noted above for the different systems.
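The three figures reported in Table 3 (classification accuracy, relative error reduction over the majority-class baseline, and F-score averaged over 10 stratified folds) can be computed schematically as follows. The k-NN learner again stands in for TiMBL, and the function is our own sketch rather than the evaluation code actually used.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import accuracy_score, f1_score
    from sklearn.neighbors import KNeighborsClassifier

    def cross_validate(X, y, n_splits=10):
        """10-fold stratified cross-validation for one binary countability
        classifier, returning (accuracy, relative error reduction, F-score)."""
        accs, fscores = [], []
        folds = StratifiedKFold(n_splits, shuffle=True, random_state=0)
        for train_idx, test_idx in folds.split(X, y):
            clf = KNeighborsClassifier(n_neighbors=9)
            clf.fit(X[train_idx], y[train_idx])
            pred = clf.predict(X[test_idx])
            accs.append(accuracy_score(y[test_idx], pred))
            fscores.append(f1_score(y[test_idx], pred))
        baseline = max(np.mean(y), 1 - np.mean(y))   # majority-class accuracy
        acc = float(np.mean(accs))
        err_reduction = (acc - baseline) / (1 - baseline)
        return acc, err_reduction, float(np.mean(fscores))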
5.2 Open data results

We next turn to the task of classifying all unseen common nouns using the gold-standard data and the best-performing classifier configurations for each countability class (indicated in bold in Table 3).⁴ Here, the baseline method is to classify every noun as being uniquely countable.

⁴ In each case, the classifier is run over the best 500 features as selected by the method described in Baldwin and Bond (2003) rather than the full feature set, purely in the interests of reducing processing time. Based on cross-validated results over the training data, the resultant difference in performance is not statistically significant.

There were 11,499 feature-mapped common nouns not contained in the union of the gold-standard datasets. Of these, the classifiers were able to classify 10,355 (90.0%): 7,974 (77.0%) as countable (e.g. alchemist), 2,588 (25.0%) as uncountable (e.g. ingenuity), 9 (0.1%) as bipartite (e.g. headphones), and 80 (0.8%) as plural only (e.g. damages). Only 139 nouns were assigned to multiple countability classes.

We evaluated the classifier outputs in two ways. In the first, we compared the classifier output to the combined COMLEX and ALT-J/E lexicons: a lexicon with countability information for 63,581 nouns. The classifiers found a match for 4,982 of the nouns. The predicted countability was judged correct 94.6% of the time. This is marginally above the level of match between ALT-J/E and COMLEX (93.8%) and substantially above the baseline of all-countable at 89.7% (error reduction = 47.6%).

To gain a better understanding of the classifier performance, we analysed the correlation between corpus frequency of a given target noun and its precision/recall for the countable class.⁵ To do this, we listed the 11,499 unannotated nouns in increasing order of corpus occurrence, and worked through the ranking calculating the mean precision and recall over each partition of 500 nouns. This resulted in the precision–recall graph given in Figure 1, from which it is evident that mean recall is proportional and precision inversely proportional to corpus frequency. That is, for lower-frequency nouns, the classifier tends to rampantly classify nouns as countable, while for higher-frequency nouns, the classifier tends to be extremely conservative in positively classifying nouns. One possible explanation for this is that, based on the training data, the frequency of a noun is proportional to the number of countability classes it belongs to. Thus, for the more frequent nouns, evidence for alternate countability classes can cloud the judgement of a given classifier.

⁵ We similarly analysed the uncountable class and found the same basic trend.

Figure 1: Precision–recall curve for countable nouns (mean precision and recall plotted against mean corpus frequency).

In secondary evaluation, the authors used BNC corpus evidence to blind-annotate 100 randomly-selected nouns from the test data, and tested the correlation with the system output. This is intended to test the ability of the system to capture corpus-attested usages of nouns, rather than independent lexicographic intuitions as are described in the COMLEX and ALT-J/E lexicons. Of the 100, 28 were classified by the annotators into two or more groups (mainly countable and uncountable). On this set, the baseline of all-countable was 87.8%, and the classifiers gave an agreement of 92.4% (37.7% e.r.); agreement with the dictionaries was also 92.4%. Again, the main source of errors was the classifier only returning a single countability for each noun. To put this figure in proper perspective, we also hand-annotated 100 randomly-selected nouns from the training data (that is, words in our combined lexicon) according to BNC corpus evidence. Here, we tested the correlation between the manual judgements and the combined ALT-J/E and COMLEX dictionaries. For this dataset, the baseline of all-countable was 80.5%, and agreement with the dictionaries was a modest 86.8% (32.3% e.r.). Based on this limited evaluation, therefore, our automated method is able to capture corpus-attested countabilities with greater precision than a manually-generated static repository of countability data.

6 Discussion

The above results demonstrate the utility of the proposed method in learning noun countability from corpus data.
In the final system configuration, the system accuracy was 94.6%, comparing favourably with the 78% accuracy reported by Bond and Vatikiotis-Bateson (2002), 89.5% of O’Hara et al. (2003), and also the noun token-based results of Schwartz (2002). At the moment we are merely classifying nouns into the four classes. The next step is to store the distribution of countability for each target noun and build a representation of each noun’s countability preferences. We have made initial steps in this direction, by isolating token instances strongly supporting a given countability class analysis for that target noun. We plan to estimate the overall frequency of the different countabilities based on this evidence. This would represent a continuous equivalent of the discrete 5-way scale employed in ALT-J/E, tunable to different corpora/domains. For future work we intend to: investigate further the relation between meaning and countability, and the possibility of using countability information to prune the search space in word sense disambiguation; describe and extract countability-idiosyncratic constructions, such as determinerless PPs and rolenouns; investigate the use of a grammar that distinguishes between countable and uncountable uses of nouns; and in combination with such a grammar, investigate the effect of lexical rules on countability. 7 Conclusion We have proposed a knowledge-rich lexical acquisition technique for multi-classifying a given noun according to four countability classes. The technique operates over a range of feature clusters drawing on pre-processed corpus data, which are then fed into independent classifiers for each of the countability classes. The classifiers were able to selectively classify the countability preference of English nouns with a precision of 94.6%. Acknowledgements This material is based upon work supported by the National Science Foundation under Grant No. BCS-0094638 and also the Research Collaboration between NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation and CSLI, Stanford University. We would like to thank Leonoor van der Beek, Ann Copestake, Ivan Sag and the three anonymous reviewers for their valuable input on this research. References Keith Allan. 1980. Nouns and countability. Language, 56(3):541–67. Timothy Baldwin and Francis Bond. 2003. A plethora of methods for learning English countability. In Proc. of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP 2003), Sapporo, Japan. (to appear). Francis Bond and Caitlin Vatikiotis-Bateson. 2002. Using an ontology to determine English countability. In Proc. of the 19th International Conference on Computational Linguistics (COLING 2002), Taipei, Taiwan. Francis Bond, Kentaro Ogura, and Satoru Ikehara. 1994. Countability and number in Japanese-to-English machine translation. In Proc. of the 15th International Conference on Computational Linguistics (COLING ’94), pages 32–8, Kyoto, Japan. Francis Bond. 2001. Determiners and Number in English, contrasted with Japanese, as exemplified in Machine Translation. Ph.D. thesis, University of Queensland, Brisbane, Australia. Ted Briscoe and John Carroll. 2002. Robust accurate statistical annotation of general text. In Proc. of the 3rd International Conference on Language Resources and Evaluation (LREC 2002), pages 1499–1504, Las Palmas, Canary Islands. Lou Burnard. 2000. User Reference Guide for the British National Corpus. Technical report, Oxford University Computing Services. 
Ann Copestake and Ted Briscoe. 1995. Semi-productive polysemy and sense extension. Journal of Semantics, pages 15– 67. Walter Daelemans, Jakub Zavrel, Ko van der Sloot, and Antal van den Bosch. 2002. TiMBL: Tilburg memory based learner, version 4.2, reference guide. ILK technical report 02-01. Ralph Grishman, Catherine Macleod, and Adam Myers, 1998. COMLEX Syntax Reference Manual. Proteus Project, NYU. (http://nlp.cs.nyu.edu/comlex/refman.ps). Satoru Ikehara, Satoshi Shirai, Akio Yokoo, and Hiromi Nakaiwa. 1991. Toward an MT system without pre-editing – effects of new methods in ALT-J/E–. In Proc. of the Third Machine Translation Summit (MT Summit III), pages 101– 106, Washington DC. Marc Light. 1996. Morphological cues for lexical semantics. In Proc. of the 34th Annual Meeting of the ACL, pages 25– 31, Santa Cruz, USA. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering, 7(3):207–23. Grace Ngai and Radu Florian. 2001. Transformation-based learning in the fast lane. In Proc. of the 2nd Annual Meeting of the North American Chapter of Association for Computational Linguistics (NAACL2001), pages 40–7, Pittsburgh, USA. Tom O’Hara, Nancy Salay, Michael Witbrock, Dave Schneider, Bjoern Aldag, Stefano Bertolo, Kathy Panton, Fritz Lehmann, Matt Smith, David Baxter, Jon Curtis, and Peter Wagner. 2003. Inducing criteria for mass noun lexical mappings using the Cyc KB and its extension to WordNet. In Proc. of the Fifth International Workshop on Computational Semantics (IWCS-5), Tilburg, the Netherlands. Lane O.B. Schwartz. 2002. Corpus-based acquisition of head noun countability features. Master’s thesis, Cambridge University, Cambridge, UK. Eric V. Siegel and Kathleen McKeown. 2000. Learning methods to combine linguistic indicators: Improving aspectual classification and revealing linguistic insights. Computational Linguistics, 26(4):595–627. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. of the 4th Conference on Computational Natural Language Learning (CoNLL-2000), Lisbon, Portugal. Anna Wierzbicka. 1988. The Semantics of Grammar. John Benjamin.
Generalized Algorithms for Constructing Statistical Language Models Cyril Allauzen, Mehryar Mohri, Brian Roark AT&T Labs – Research 180 Park Avenue Florham Park, NJ 07932, USA allauzen,mohri,roark  @research.att.com Abstract Recent text and speech processing applications such as speech mining raise new and more general problems related to the construction of language models. We present and describe in detail several new and efficient algorithms to address these more general problems and report experimental results demonstrating their usefulness. We give an algorithm for computing efficiently the expected counts of any sequence in a word lattice output by a speech recognizer or any arbitrary weighted automaton; describe a new technique for creating exact representations of  -gram language models by weighted automata whose size is practical for offline use even for a vocabulary size of about 500,000 words and an  -gram order  ; and present a simple and more general technique for constructing class-based language models that allows each class to represent an arbitrary weighted automaton. An efficient implementation of our algorithms and techniques has been incorporated in a general software library for language modeling, the GRM Library, that includes many other text and grammar processing functionalities. 1 Motivation Statistical language models are crucial components of many modern natural language processing systems such as speech recognition, information extraction, machine translation, or document classification. In all cases, a language model is used in combination with other information sources to rank alternative hypotheses by assigning them some probabilities. There are classical techniques for constructing language models such as  gram models with various smoothing techniques (see Chen and Goodman (1998) and the references therein for a survey and comparison of these techniques). In some recent text and speech processing applications, several new and more general problems arise that are related to the construction of language models. We present new and efficient algorithms to address these more general problems. Counting. Classical language models are constructed by deriving statistics from large input texts. In speech mining applications or for adaptation purposes, one often needs to construct a language model based on the output of a speech recognition system. But, the output of a recognition system is not just text. Indeed, the word error rate of conversational speech recognition systems is still too high in many tasks to rely only on the one-best output of the recognizer. Thus, the word lattice output by speech recognition systems is used instead because it contains the correct transcription in most cases. A word lattice is a weighted finite automaton (WFA) output by the recognizer for a particular utterance. It contains typically a very large set of alternative transcription sentences for that utterance with the corresponding weights or probabilities. A necessary step for constructing a language model based on a word lattice is to derive the statistics for any given sequence from the lattices or WFAs output by the recognizer. This cannot be done by simply enumerating each path of the lattice and counting the number of occurrences of the sequence considered in each path since the number of paths of even a small automaton may be more than four billion. 
We present a simple and efficient algorithm for computing the expected count of any given sequence in a WFA and report experimental results demonstrating its efficiency. Representation of language models by WFAs. Classical  -gram language models admit a natural representation by WFAs in which each state encodes a left context of width less than  . However, the size of that representation makes it impractical for offline optimizations such as those used in large-vocabulary speech recognition or general information extraction systems. Most offline representations of these models are based instead on an approximation to limit their size. We describe a new technique for creating an exact representation of  -gram language models by WFAs whose size is practical for offline use even in tasks with a vocabulary size of about 500,000 words and for  . Class-based models. In many applications, it is natural and convenient to construct class-based language models, that is models based on classes of words (Brown et al., 1992). Such models are also often more robust since they may include words that belong to a class but that were not found in the corpus. Classical class-based models are based on simple classes such as a list of words. But new clustering algorithms allow one to create more general and more complex classes that may be regular languages. Very large and complex classes can also be defined using regular expressions. We present a simple and more general approach to class-based language models based on general weighted context-dependent rules (Kaplan and Kay, 1994; Mohri and Sproat, 1996). Our approach allows us to deal efficiently with more complex classes such as weighted regular languages. We have fully implemented the algorithms just mentioned and incorporated them in a general software library for language modeling, the GRM Library, that includes many other text and grammar processing functionalities (Allauzen et al., 2003). In the following, we will present in detail these algorithms and briefly describe the corresponding GRM utilities. 2 Preliminaries Definition 1 A system     is a semiring (Kuich and Salomaa, 1986) if:    is a commutative monoid with identity element  ;    is a monoid with identity element  ;  distributes over  ; and  is an annihilator for  : for all   ! " #$  . Thus, a semiring is a ring that may lack negation. Two semirings often used in speech processing are: the log semiring %&' (*),+.-0/1  2 3 45 6 -0 1 (Mohri, 2002) which is isomorphic to the familiar real or probability semiring (879 6 5:; < = via a >@?1A morphism with, for all B CD(E)F+.-0/ : ; 2 3G4 CHID>@?1AJ LKNMPOQ RISPQ6,KTMPOQ UICTU and the convention that: KTMPOQ UI -#   and ID>@?1A 1VW, and the tropical semiring XYY Z([7\) +.-0/1 G]!^@_` 6 - G1 which can be derived from the log semiring using the Viterbi approximation. Definition 2 A weighted finite-state transducer a over a semiring is an 8-tuple abc ed f g! Gh GiH jk lQ UmP where: d is the finite input alphabet of the transducer; f is the finite output alphabet; g is a finite set of states; h0nog the set of initial states; ipnqg the set of final states; jrn0gs:D td)u+v=/wx:y fz)u+wvN/.{:k \:Vg a finite set of transitions; l|<h}~ the initial weight function; and m|i}€ the final weight function mapping i to . A Weighted automaton '‚ ed g! Gh GiH jk lQ UmP is defined in a similar way by simply omitting the output labels. 
We denote by ƒ  „nd … the set of strings accepted by an automaton  and similarly by ƒ Z†‡ the strings described by a regular expression † . Given a transition ˆFj , we denote by ‰Š ˆ=‹ its input label, ŒxŠ ˆ=‹ its origin or previous state and Š ˆ=‹ its destination state or next state, Ž"Š ˆN‹ its weight, PŠ ˆ=‹ its output label (transducer case). Given a state D‘g , we denote by jVŠ ‹ the set of transitions leaving  . A path ’“'ˆ1”x•=•N•Uˆ.– is an element of jk… with consecutive transitions: Š ˆ˜—L™” ‹SšŒxŠ ˆw—Z‹ , ‰›œP =žNžNž= Ÿ . We extend  and Œ to paths by setting: Š ’ ‹V‚Š ˆ – ‹ and Œ¡Š ’ ‹F¢Œ¡Š ˆ.”‹ . A cycle ’ is a path whose origin and destination states coincide: Š ’ ‹;Œ¡Š ’ ‹ . We denote by £ Z¤ .¥¦ the set of paths from  to ¤¥ and by £ Z¤ U§¡ G1¥¨ and £ Z¤ U§¡ U©  G5¥¦ the set of paths from  to ¥ with input label §&&d … and output label © (transducer case). These definitions can be extended to subsets ª$ ªk¥¡n«g , by: £ ª G§Q ª¬¥­‡®)8¯°5±x²t¯U³L°5±`³ £ Z¤ U§¡ G.¥­ . The labeling functions ‰ (and similarly  ) and the weight function Ž can also be extended to paths by defining the label of a path as the concatenation of the labels of its constituent transitions, and the weight of a path as the  -product of the weights of its constituent transitions: ‰Š ’ ‹S‰Š ˆ ” ‹•N•N•U‰Š ˆ – ‹ , Ž"Š ’ ‹´ŽŠ ˆ ” ‹ µ•=•N•0Ž"Š ˆ – ‹ . We also extend Ž to any finite set of paths ¶ by setting: Ž"Š ¶;‹9¸·š¹ °˜º Ž"Š ’ ‹ . The output weight associated by  to each input string §yd… is: Š Š S‹ ‹t Z§B[ » ¹ °5¼{½@¾=² ¿5² ÀÁ lQ Œ¡Š ’ ‹¨xΊ ’ ‹<Ãm  LŠ ’ ‹¨ Š Š S‹ ‹t Z§B is defined to be  when £ Zh U§Q i9k'Ä . Similarly, the output weight associated by a transducer a to a pair of input-output string L§¡ U©< is: Š Š a;‹ ‹Å L§Q G©< » ¹ °5¼8½@¾=² ¿˜² Æw² ÀQÁ l¡ njxŠ ’ ‹L¡,Ž"Š ’ ‹P,mB ZŠ ’ ‹L Š Š aS‹ ‹Å L§¡ U©<  when £ Zh U§¡ U©  Gi9¢Ä . A successful path in a weighted automaton or transducer È is a path from an initial state to a final state. È is unambiguous if for any string §yd… there is at most one successful path labeled with § . Thus, an unambiguous transducer defines a function. For any transducer a , denote by ¶!É1 Za  the automaton obtained by projecting a on its output, that is by omitting its input labels. Note that the second operation of the tropical semiring and the log semiring as well as their identity elements are identical. Thus the weight of a path in an automaton  over the tropical semiring does not change if  is viewed as a weighted automaton over the log semiring or viceversa. 3 Counting This section describes a counting algorithm based on general weighted automata algorithms. Let   gk Gh GiH d Ê. GË  l Gm¤ be an arbitrary weighted automaton over the probability semiring and let † be a regular expression defined over the alphabet d . We are interested in counting the occurrences of the sequences §##ƒ L†‡ in  while taking into account the weight of the paths where they appear. 3.1 Definition When  is deterministic and pushed, or stochastic, it can be viewed as a probability distribution £ over all strings 0 a:ε/1 b:ε/1 1/1 X:X/1 a:ε/1 b:ε/1 Figure 1: Counting weighted transducer a with dÌ +B C/ . The transition weights and the final weight at state  are all equal to  . d;… .1 The weight Š Š S‹ ‹t Z§B associated by  to each string § is then £ Z§B . 
Thus, we define the count of the sequence § in  , Í L§  , as: Í L§B[ÌÎ Ï °5ÐBÑ Ò Ó{Ò ¿ Š Š S‹ ‹Å L§  where Ò Ó{Ò ¿ denotes the number of occurrences of § in the string Ó , i.e., the expected number of occurrences of § given  . More generally, we will define the count of § as above regardless of whether  is stochastic or not. In most speech processing applications,  may be an acyclic automaton called a phone or a word lattice output by a speech recognition system. But our algorithm is general and does not assume  to be acyclic. 3.2 Algorithm We describe our algorithm for computing the expected counts of the sequences §EFƒ L†‡ and give the proof of its correctness. Let Ô be the formal power series (Kuich and Salomaa, 1986) Ô over the probability semiring defined by Ô  Õ …¬:§y: Õ … , where §yDƒ L†‡ . Lemma 1 For all Ö dS… , Ô Ö  Ò Ö Ò ¿ . Proof. By definition of the multiplication of power series in the probability semiring: Ô Ö × Î Ï ¿ ØÙ`Ú Õ … Ó „:E L§Q G§BS: Õ … GÛ¤   Ï ¿ ØÙ`ÚD Ò Ö Ò ¿ This proves the lemma. Ô is a rational power series as a product and closure of the polynomial power series Õ and § (Salomaa and Soittola, 1978; Berstel and Reutenauer, 1988). Similarly, since † is regular, the weighted transduction defined by ed\:‡+=v=/w …˜ Z†W:†‡N ed\:F+vN/.G… is rational. Thus, by the theorem of Sch¨utzenberger (Sch¨utzenberger, 1961), there exists a weighted transducer a defined over the alphabet d and the probability semiring realizing that transduction. Figure 1 shows the transducer a in the particular case of d*µ+B C/ . 1There exist a general weighted determinization and weight pushing algorithms that can be used to create a deterministic and pushed automaton equivalent to an input word or phone lattice (Mohri, 1997). Proposition 1 Let  be a weighted automaton over the probability semiring, then: Š Š ¶ É *Ü[a ŋ ‹Å L§  Í L§B Proof. By definition of a , for any Ö d9… , Š Š a;‹ ‹Å Ö U§  Ô U§B , and by lemma 1, Š Š a;‹ ‹Å Ö G§B‡ Ò Ö Ò ¿ . Thus, by definition of composition: Š Š ¶ É Z‘Ü[a t‹ ‹Å L§BÝ Î ¹ °5¼{½¦¾N² ÀQÁt²ZÚPÙ—ZÞ ¹.ß Š Š ;‹ ‹Å Ö H: Ò Ö Ò ¿  Î ÚB°5ÐBÑ Ò Ö Ò ¿ Š Š ;‹ ‹t Ö à Í L§  This ends the proof of the proposition. The proposition gives a simple algorithm for computing the expected counts of † in a weighted automaton  based on two general algorithms: composition (Mohri et al., 1996) and projection of weighted transducers. It is also based on the transducer a which is easy to construct. The size of a is in á Ò d Ò 6 Ò â Ò  , where â is a finite automaton accepting † . With a lazy implementation of a , only one transition can be used instead of Ò d Ò , thereby reducing the size of the representation of a to á Ò  â Ò  . The weighted automaton ã “¶ É Ü;a  contains v transitions. A general v -removal algorithm can be used to compute an equivalent weighted automaton with no v transition. The computation of Š Š 㠋 ‹Å L§B for a given § is done by composing ã with an automaton representing § and by using a simple shortest-distance algorithm (Mohri, 2002) to compute the sum of the weights of all the paths of the result. For numerical stability, implementations often replace probabilities with ID>¦?5A probabilities. The algorithm just described applies in a similar way by taking ID>@?1A of the weights of a (thus all the weights of a will be zero in that case) and by using the log semiring version of composition and v -removal. 
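The following toy fragment is not the composition-based algorithm just described (which avoids path enumeration); it is a brute-force check of the quantity that algorithm computes, on a hypothetical acyclic lattice. The lattice, its weights and the function names are invented for illustration.

    def expected_count(arcs, start, finals, x):
        """Brute-force expected count of the sequence x in a small acyclic
        weighted automaton over the probability semiring:
            c(x) = sum over accepted strings u of |u|_x * [[A]](u).
        `arcs` maps a state to a list of (label, weight, next_state) triples and
        `finals` maps final states to final weights. Enumerating paths is only
        feasible for toy lattices; the composition-based algorithm above computes
        the same quantity without enumeration."""
        def occurrences(u, x):
            return sum(1 for i in range(len(u) - len(x) + 1) if u[i:i + len(x)] == x)

        total, stack = 0.0, [(start, (), 1.0)]
        while stack:
            state, labels, weight = stack.pop()
            if state in finals:
                total += occurrences(labels, x) * weight * finals[state]
            for label, w, nxt in arcs.get(state, []):
                stack.append((nxt, labels + (label,), weight * w))
        return total

    # Toy word lattice: two alternative transcriptions with probabilities .6/.4.
    lattice = {0: [("hello", 0.6, 1), ("hello", 0.4, 2)],
               1: [("world", 1.0, 3)],
               2: [("word", 1.0, 3)]}
    print(expected_count(lattice, 0, {3: 1.0}, ("hello", "world")))  # -> 0.6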
3.3 GRM Utility and Experimental Results An efficient implementation of the counting algorithm was incorporated in the GRM library (Allauzen et al., 2003). The GRM utility grmcount can be used in particular to generate a compact representation of the expected counts of the  -gram sequences appearing in a word lattice (of which a string encoded as an automaton is a special case), whose order is less or equal to a given integer. As an example, the following command line: grmcount -n3 foo.fsm > count.fsm creates an encoded representation count.fsm of the  gram sequences, ä‘å , which can be used to construct a trigram model. The encoded representation itself is also given as an automaton that we do not describe here. The counting utility of the GRM library is used in a variety of language modeling and training adaptation tasks. Our experiments show that grmcount is quite efficient. We tested this utility with 41,000 weighted automata outputs of our speech recognition system for the same number of speech utterances. The total number of transitions of these automata was =æJž æ M. It took about 1h52m, including I/O, to compute the accumulated expected counts of all  -gram, çäèå , appearing in all these automata on a single processor of a 1GHz Intel Pentium processor Linux cluster with 2GB of memory and 256 KB cache. The time to compute these counts represents just ” éUê th of the total duration of the 41,000 speech utterances used in our experiment. 4 Representation of ë -gram Language Models with WFAs Standard smoothed  -gram models, including backoff (Katz, 1987) and interpolated (Jelinek and Mercer, 1980) models, admit a natural representation by WFAs in which each state encodes a conditioning history of length less than  . The size of that representation is often prohibitive. Indeed, the corresponding automaton may have Ò d Ò ì ™” states and Ò d Ò ì transitions. Thus, even if the vocabulary size is just 1,000, the representation of a classical trigram model may require in the worst case up to one billion transitions. Clearly, this representation is even less adequate for realistic natural language processing applications where the vocabulary size is in the order of several hundred thousand words. In the past, two methods have been used to deal with this problem. One consists of expanding that WFA ondemand. Thus, in some speech recognition systems, the states and transitions of the language model automaton are constructed as needed based on the particular input speech utterances. The disadvantage of that method is that it cannot benefit from offline optimization techniques that can substantially improve the efficiency of a recognizer (Mohri et al., 1998). A similar drawback affects other systems where several information sources are combined such as a complex information extraction system. An alternative method commonly used in many applications consists of constructing instead an approximation of that weighted automaton whose size is practical for offline optimizations. This method is used in many large-vocabulary speech recognition systems. In this section, we present a new method for creating an exact representation of  -gram language models with WFAs whose size is practical even for very largevocabulary tasks and for relatively high  -gram orders. Thus, our representation does not suffer from the disadvantages just pointed out for the two classical methods. We first briefly present the classical definitions of  gram language models and several smoothing techniques commonly used. 
We then describe a natural representation of  -gram language models using failure transitions. This is equivalent to the on-demand construction referred to above but it helps us introduce both the approximate solution commonly used and our solution for an exact offline representation. 4.1 Classical Definitions In an  -gram model, the joint probability of a string Ž ê ž=žNžRŽS– is given as the product of conditional probabilities: íàî ZŽ ê žNž=žRŽ – × – ï —@Ù ê íî LŽ — Ò ð —  (1) where the conditioning history ð — consists of zero or more words immediately preceding Ž — and is dictated by the order of the  -gram model. Let Í ð Ž denote the count of  -gram ð Ž and let ñ íî LŽ Ò ð  be the maximum likelihood probability of Ž given ð , estimated from counts. ñ íî is often adjusted to reserve some probability mass for unseen  -gram sequences. Denote by ò íî ZŽ Ò ð  the adjusted conditional probability. Katz or absolute discounting both lead to an adjusted probability ò íî . For all  -grams ð «Ž ð ¥ where ð ‡d – for some Ÿó  , we refer to ð ¥ as the backoff  -gram of ð . Conditional probabilities in a backoff model are of the form: ôõTö¨÷ ø ùPúcû ürý ôõTö¨÷ ø ùPú þ ÿ  öLù÷„ú  ô`õTö¨÷ ø ù ¨ú   õþ (2) where  is a factor that ensures a normalized model. Conditional probabilities in a deleted interpolation model are of the form: ô`õTö¨÷ø ùPúQûü ö  úô`õTö¨÷ ø ù¤ú   ô`õNö¨÷ø ù ­ú#þ ÿ  öLù÷„ú!"  ô`õNö¨÷ø ù ­ú   õþ (3) where  is the mixing parameter between zero and one. In practice, as mentioned before, for numerical stability, ID>¦?5A probabilities are used. Furthermore, due the Viterbi approximation used in most speech processing applications, the weight associated to a string § by a weighted automaton representing the model is the minimum weight of a path labeled with § . Thus, an  -gram language model is represented by a WFA over the tropical semiring. 4.2 Representation with Failure Transitions Both backoff and interpolated models can be naturally represented using default or failure transitions. A failure transition is labeled with a distinct symbol # . It is the default transition taken at state  when  does not admit an outgoing transition labeled with the word considered. Thus, failure transitions have the semantics of otherwise. w w i-2 i-1 w w i-1 i wi wi-1 φ wi φ wi ε φ wi Figure 2: Representation of a trigram model with failure transitions. The set of states of the WFA representing a backoff or interpolated model is defined by associating a state $ to each sequence of length less than  found in the corpus: gµµ+  | Ò ðxÒ&% ('˜_*) Í ð ,+*</ Its transition set j is defined as the union of the following set of failure transitions: +¤ Z-³Å .#¡ NID>@?1A /0¤ 1³ZH|1-³xDg$/ and the following set of regular transitions: +1 Z2J GŽ9 NID>¦?5AB íî LŽ Ò ð U G32-„|51VDg! Í ð Ž ,+‘P/ where  2is defined by: 4 65 û ü87 65 þÇÿ9;:*ø ù1÷ø<: 4 7  ³ 5 þÇÿ{ø ù÷øû 4 = õxùûE÷> ­ù (4) Figure 2 illustrates this construction for a trigram model. Treating v -transitions as regular symbols, this is a deterministic automaton. Figure 3 shows a complete Katz backoff bigram model built from counts taken from the following toy corpus and using failure transitions: ? s @ b a a a a ? /s @ ? s @ b a a a a ? /s @ ? s @ a ? /s @ where ? s @ denotes the start symbol and ? /s @ the end symbol for each sentence. Note that the start symbol ? s @ does not label any transition, it encodes the history ? s @ . All transitions labeled with the end symbol ? /s @ lead to the single final state of the automaton. 
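The "otherwise" semantics of failure transitions can be illustrated with a small hand-built backoff bigram model, sketched below in Python. The transition costs are invented (they are not the Katz weights of Figure 3); the point is only the lookup procedure: follow the arc labelled with the next word if it exists, otherwise take the failure arc, add its backoff cost, and retry from the lower-order state.

    import math

    def neglog(p):
        return -math.log(p)

    # Hand-built toy backoff bigram model over the toy corpus above. "" is the
    # history-less (unigram) state; "backoff" gives the failure arc's target
    # state and cost. Probabilities are invented for illustration.
    MODEL = {
        "<s>": {"arcs": {"a": neglog(0.33), "b": neglog(0.50)}, "backoff": ("", 0.3)},
        "a":   {"arcs": {"a": neglog(0.64), "</s>": neglog(0.33)}, "backoff": ("", 0.4)},
        "b":   {"arcs": {"a": neglog(0.88)}, "backoff": ("", 0.2)},
        "":    {"arcs": {"a": neglog(0.55), "b": neglog(0.15), "</s>": neglog(0.20)},
                "backoff": None},
    }

    def bigram_cost(word, history):
        """-log P(word | history) with 'otherwise' (failure) semantics."""
        state, cost = history, 0.0
        while True:
            node = MODEL[state]
            if word in node["arcs"]:
                return cost + node["arcs"][word]
            if node["backoff"] is None:
                raise KeyError("unknown word: %r" % word)
            state, backoff_cost = node["backoff"]
            cost += backoff_cost

    def sentence_cost(words):
        cost, history = 0.0, "<s>"
        for w in words:
            cost += bigram_cost(w, history)
            history = w if w in MODEL else ""
        return cost

    # Sentence from the toy corpus: <s> b a a a a </s>
    print(sentence_cost(["b", "a", "a", "a", "a", "</s>"]))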
4.3 Approximate Offline Representation The common method used for an offline representation of an  -gram language model can be easily derived from the representation using failure transitions by simply replacing each # -transition by an v -transition. Thus, a transition that could only be taken in the absence of any other alternative in the exact representation can now be taken regardless of whether there exists an alternative transition. Thus the approximate representation may contain paths whose weight does not correspond to the exact probability of the string labeling that path according to the model. </s> a </s>/1.101 a/0.405 φ/4.856 </s>/1.540 a/0.441 b b/1.945 a/0.287 φ/0.356 <s> a/1.108 φ/0.231 b/0.693 Figure 3: Example of representation of a bigram model with failure transitions. Consider for example the start state in figure 3, labeled with ? s @ . In a failure transition model, there exists only one path from the start state to the state labeled  , with a cost of 1.108, since the # transition cannot be traversed with an input of  . If the # transition is replaced by an v -transition, there is a second path to the state labeled  – taking the v -transition to the history-less state, then the  transition out of the history-less state. This path is not part of the probabilistic model – we shall refer to it as an invalid path. In this case, there is a problem, because the cost of the invalid path to the state – the sum of the two transition costs (0.672) – is lower than the cost of the true path. Hence the WFA with v -transitions gives a lower cost (higher probability) to all strings beginning with the symbol  . Note that the invalid path from the state labeled ? s @ to the state labeled C has a higher cost than the correct path, which is not a problem in the tropical semiring. 4.4 Exact Offline Representation This section presents a method for constructing an exact offline representation of an  -gram language model whose size remains practical for large-vocabulary tasks. The main idea behind our new construction is to modify the topology of the WFA to remove any path containing v -transitions whose cost is lower than the correct cost associated by the model to the string labeling that path. Since, as a result, the low cost path for each string will have the correct cost, this will guarantee the correctness of the representation in the tropical semiring. Our construction admits two parts: the detection of the invalid paths of the WFA, and the modification of the topology by splitting states to remove the invalid paths. To detect invalid paths, we determine first their initial non- v transitions. Let jBA denote the set of v -transitions of the original automaton. Let £ ¯ be the set of all paths ’ˆ˜”xž=žNžUˆ.– jI$j A  – , Ÿ(+z , leading to state  such that for all ‰ , ‰çžNž=žGŸ , Œ¡Š ˆ — ‹ is the destination state of some v -transition. Lemma 2 For an  -gram language model, the number of paths in £ ¯ is less than the  -gram order: Ò £ ¯ Ò&%  . Proof. For all ’`—! £ ¯ , let ’—"q’¥ — ˆw— . By definition, there is some ˆ˜¥ — ‡j A such that Š ˆ.¥ — ‹xzŒxŠ ˆw—¨‹x 2C . By definition of v -transitions in the model, Ò ð — Ò9% yI0 for all ‰ . It follows from the definition of regular transitions that Š ˆw—L‹[  2CD  . Hence, ð —H ð&E  ð , i.e. ˆw—; q’ r’ π’ q e r e’ π Figure 4: The path ˆ’ is invalid if ‰Š ˆ=‹Qv , ‰Š ’ ‹xµ‰Š ’ ¥ ‹ , ’ £0F , and either (i) G1¥àHG and Ž"Š ˆ’ ‹ % Ž"Š ’Q¥@‹ or (ii) ‰Š ˆw¥Â‹ 0v and Ž"Š ˆ’ ‹ % Ž"Š ’¡¥­ˆ¥Â‹ . ˆ E ˆ , for all ’ —G U’ E  £ ¯ . 
Then, £ ¯Sµ+=’ˆ9|5’‡ £ ¯IP/5) +ˆ1/ . The history-less state has no incoming non- v paths, therefore, by recursion, Ò £ ¯ Ò  Ò £ ¯I Ò 60  Ò ð Ž ÒJ%  . We now define transition sets K ¯U¯ ³ (originally empty) following this procedure: for all states Gµg and all ’soˆ˜”¡ž=žNžGˆ.–E £LF , if there exists another path ’¥ and transition ˆ,šj;A such that Š ˆ=‹¬ŒxŠ ’ ‹ , Œ¡Š ’¡¥@‹¬Œ¡Š ˆN‹ , and ‰Š ’¡¥Ç‹ ‰Š ’ ‹ , and either (i) Š ’{¥Â‹`Š ’ ‹ and Ž"Š ˆ’ ‹ % Ž"Š ’¥@‹ or (ii) there exists ˆ˜¥[,j A such that Œ¡Š ˆ.¥Â‹8“Š ’¥Â‹ and Š ˆ.¥Â‹ Š ’ ‹ and Ž"Š ˆ=’ ‹ % Ž"Š ’Q¥¨ˆw¥Â‹ , then we add ˆ ” to the set: KNM Þ ¹wß M Þ ¹ ³ ßO KNM Þ ¹wß M Þ ¹ ³ ß )+ˆ.”w/ . See figure 4 for an illustration of this condition. Using this procedure, we can determine the set: P juŠ ‹`s+ˆ"jŠ ‹Q|QP.¥e Gˆ9RKk¯U¯U³Å/ . This set provides the first non- v transition of each invalid path. Thus, we can use these transitions to eliminate invalid paths. Proposition 2 The cost of the construction of P jVŠ ‹ for all yg is  É Ò d Ò@Ò g Ò , where  is the n-gram order. Proof. For each ,\g and each ’š £ ¯ , there are at most Ò d Ò possible states ˜¥ such that for some ˆ#j A , Œ¡Š ˆ=‹¡µ.¥ and Š ˆN‹¡µ . It is trivial to see from the proof of lemma 2 that the maximum length of ’ is  . Hence, the cost of finding all ’{¥ for a given ’ is  Ò d Ò . Therefore, the total cost is  É Ò d Ò¦Ò g Ò . For all non-empty P jVŠ ‹ , we create a new state P  and for all ˆD P jŠ ‹ we set Œ¡Š ˆ=‹ P  . We create a transition P ¤ v G< ˜ , and for all ˆ‡«j I‘jSA such that Š ˆ=‹Sç , we set Š ˆ=‹à P  . For all ˆ#j A such that Š ˆ=‹r and Ò K ¯ M Þ T ß Ò r , we set Š ˆ=‹ P  . For all ˆu*j A such that Š ˆN‹¡ and Ò K ¯ M Þ T ß Ò +« , we create a new intermediate backoff state U  and set Š ˆ=‹`VU  ; then for all ˆ¤¥yjVŠ P ‹ , if ˆw¥!W RK ¯ M Þ T ß , we add a transition X ˆš U P U‰Š ˆ5¥Ç‹e GŽ"Š ˆw¥@‹t UŠ ˆ¥Â‹L to j . Proposition 3 The WFA over the tropical semiring modified following the procedure just outlined is equivalent to the exact online representation with failure transitions. Proof. Assume that there exists a string Y for which the WFA returns a weight P Ž ZYw less than the correct weight Ž" ZY that would have been assigned to Y by the exact online representation with failure transitions. We will call an v -transition ˆ — within a path ’¸̈ ” žNžNž ˆ – invalid if the next non- v transition ˆ E , [\+o‰ , has the label Ž , and there is a transition ˆ with Œ¡Š ˆ=‹ Œ¡Š ˆ—L‹ and b ε/0.356 a a/0.287a/0.441 ε/0 ε/4.856 a/0.405 </s> </s>/1.101 <s> b/0.693 a/1.108 ε/0.231b/1.945 </s>/1.540 Figure 5: Bigram model encoded exactly with v transitions. ‰Š ˆ=‹9qŽ . Let ’ be a path through the WFA such that ‰Š ’ ‹;VY and Ž"Š ’ ‹; P Ž ZYw , and ’ has the least number of invalid v -transitions of all paths labeled with Y with weight P Ž ZY . Let ˆ˜— be the last invalid v -transition taken in path ’ . Let ’x¥ be the valid path leaving ŒxŠ ˆ¤—¨‹ such that ‰Š ’¥Â‹!W‰Š ˆ —@7¡” žNžNž ˆ – ‹ . Ž"Š ’Q¥Â‹(+¸Ž"Š ˆ — žNž=žGˆ – ‹ , otherwise there would be a path with fewer invalid v -transitions with weight P Ž ZYw . Let G be the first state where paths ’ ¥ and ˆ —@7x” žNž=žGˆ – intersect. Then G"«Š ˆ E ‹ for some [(+0‰ . By definition, ˆ˜—¦7¡”xž=žNžGˆ E  £LF , since intersection will occur before any v -transitions are traversed in ’ . Then it must be the case that ˆ˜—¦7¡”V]K ì Þ T C ß M Þ T C ß , requiring the path to be removed from the WFA. This is a contradiction. 
4.5 GRM Utility and Experimental Results

Note that some of the new intermediate backoff states (q̄) can be fully or partially merged, to reduce the space requirements of the model. Finding the optimal configuration of these states, however, is an NP-hard problem. For our experiments, we used a simple greedy approach to sharing structure, which helped reduce space dramatically. Figure 5 shows our example bigram model, after application of the algorithm. Notice that there are now two history-less states, which correspond to q and q̃ in the algorithm (no q̄ was required). The start state backs off to q, which does not include a transition to the state labeled a, thus eliminating the invalid path.

Table 1 gives the sizes of three models in terms of transitions and states, for both the failure transition and ε-transition encoding of the model. The DARPA North American Business News (NAB) corpus contains 250 million words, with a vocabulary of 463,331 words. The Switchboard training corpus has 3.1 million words, and a vocabulary of 45,643. The number of transitions needed for the exact offline representation in each case was between 2 and 3 times the number of transitions used in the representation with failure transitions, and the number of states was less than twice the original number of states. This shows that our technique is practical even for very large tasks.

Table 1: Size of models (in thousands) built from the NAB and Switchboard corpora, with failure transitions (φ) versus the exact offline representation.

                        φ-representation        exact offline
  Corpus   order        arcs      states        arcs      states
  NAB      3-gram     102752       16838      303686       19033
  SWBD     3-gram       2416         475        5499         573
  SWBD     6-gram      15430        6295       54002       12374

Efficient implementations of model building algorithms have been incorporated into the GRM library. The GRM utility grmmake produces basic backoff models, using Katz or Absolute discounting (Ney et al., 1994) methods, in the topology shown in Figure 3, with ε-transitions in the place of failure transitions. The utility grmshrink removes transitions from the model according to the shrinking methods of Seymore and Rosenfeld (1996) or Stolcke (1998). The utility grmconvert takes a backoff model produced by grmmake or grmshrink and converts it into an exact model using either failure transitions or the algorithm just described. It also converts the model to an interpolated model for use in the tropical semiring. As an example, the following command line:

grmmake -n3 counts.fsm > model.fsm

creates a basic Katz backoff trigram model from the counts produced by the command line example in the earlier section. The command:

grmshrink -c1 model.fsm > m.s1.fsm

shrinks the trigram model using the weighted difference method (Seymore and Rosenfeld, 1996) with a threshold of 1. Finally, the command:

grmconvert -tfail m.s1.fsm > f.s1.fsm

outputs the model represented with failure transitions.

5 General class-based language modeling

Standard class-based or phrase-based language models are based on simple classes often reduced to a short list of words or expressions. New spoken-dialog applications require the use of more sophisticated classes either derived from a series of regular expressions or using general clustering algorithms. Regular expressions can be used to define classes with an infinite number of elements. Such classes can naturally arise, e.g., dates form an infinite set since the year field is unbounded, but they can be easily represented or approximated by a regular expression.
Also, representing a class by an automaton can be much more compact than specifying it as a list, especially when dealing with classes representing phone numbers or lists of names or addresses. This section describes a simple and efficient method for constructing class-based language models where each class may represent an arbitrary (weighted) regular language.

Let C_1, C_2, ..., C_k be a set of k classes and assume that each class C_i corresponds to a stochastic weighted automaton A_i defined over the log semiring. Thus, the weight [[A_i]](w) associated by A_i to a string w can be interpreted as −log of the conditional probability P(w | C_i). Each class C_i defines a weighted transduction:

A_i → C_i

This can be viewed as a specific obligatory weighted context-dependent rewrite rule where the left and right contexts are not restricted (Kaplan and Kay, 1994; Mohri and Sproat, 1996). Thus, the transduction corresponding to the class C_i can be viewed as the application of the following obligatory weighted rewrite rule:

A_i → C_i / ε __ ε

The direction of application of the rule, left-to-right or right-to-left, can be chosen depending on the task.² Thus, these k classes can be viewed as a set of batch rewrite rules (Kaplan and Kay, 1994) which can be compiled into weighted transducers. The utilities of the GRM Library can be used to compile such a batch set of rewrite rules efficiently (Mohri and Sproat, 1996).

Let T be the weighted transducer obtained by compiling the rules corresponding to the classes. The corpus can be represented as a finite automaton C. To apply the rules defining the classes to the input corpus, we just need to compose the automaton C with T and project the result on the output:

Ĉ = π₂(C ∘ T)

Ĉ can be made stochastic using a pushing algorithm (Mohri, 1997). In general, the transducer T may not be unambiguous. Thus, the result of the application of the class rules to the corpus may not be a single text but an automaton representing a set of alternative sequences. However, this is not an issue, since we can use the general counting algorithm previously described to construct a language model based on a weighted automaton. When L = ∪_{i=1}^{k} L(A_i), the language defined by the classes, is a code, the transducer T is unambiguous.

Denote now by M̂ the language model constructed from the new corpus Ĉ. To construct our final class-based language model M, we simply have to compose M̂ with T⁻¹ and project the result on the output side:

M = π₂(M̂ ∘ T⁻¹)

A more general approach would be to have two transducers T_1 and T_2, the first one to be applied to the corpus and the second one to the language model. In a probabilistic interpretation, T_1 should represent the probability distribution P(C_i | w) and T_2 the probability distribution P(w | C_i). By using T_1 = T and T_2 = T⁻¹, we are in fact making the assumption that the classes are equally probable and thus that P(C_i | w) = P(w | C_i) / Σ_{j=1}^{k} P(w | C_j). More generally, the weights of T_1 and T_2 could be the results of an iterative learning process.

² The simultaneous case is equivalent to the left-to-right one here.

[Figure 6: Weighted transducer T obtained from the compilation of the context-dependent rewrite rules.]

[Figure 7: Corpora C and Ĉ.]

Note however that
we are not limited to this probabilistic interpretation and that our approach can still be used if a[” and aÉ do not represent probability distributions, since we can always push X † and normalize ` . Example. We illustrate this construction in the simple case of the following class containing movie titles: % movie + s+ batman GJž   batman returns <ž a/ The compilation of the rewrite rule defined by this class and applied left to right leads to the weighted transducer a given by figure 6. Our corpus simply consists of the sentence “batman returns” and is represented by the automaton † given by figure 7. The corpus X † obtained by composing † with a is given by figure 7. 6 Conclusion We presented several new and efficient algorithms to deal with more general problems related to the construction of language models found in new language processing applications and reported experimental results showing their practicality for constructing very large models. These algorithms and many others related to the construction of weighted grammars have been fully implemented and incorporated in a general grammar software library, the GRM Library (Allauzen et al., 2003). Acknowledgments We thank Michael Riley for discussions and for having implemented an earlier version of the counting utility. References Cyril Allauzen, Mehryar Mohri, and Brian Roark. 2003. GRM Library-Grammar Library. http://www.research.att.com/sw/tools/grm, AT&T Labs - Research. Jean Berstel and Christophe Reutenauer. 1988. Rational Series and Their Languages. Springer-Verlag: Berlin-New York. Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based ngram models of natural language. Computational Linguistics, 18(4):467–479. Stanley Chen and Joshua Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report, TR-10-98, Harvard University. Frederick Jelinek and Robert L. Mercer. 1980. Interpolated estimation of markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, pages 381–397. Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3). Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recogniser. IEEE Transactions on Acoustic, Speech, and Signal Processing, 35(3):400–401. Werner Kuich and Arto Salomaa. 1986. Semirings, Automata, Languages. Number 5 in EATCS Monographs on Theoretical Computer Science. Springer-Verlag, Berlin, Germany. Mehryar Mohri and Richard Sproat. 1996. An Efficient Compiler for Weighted Rewrite Rules. In bc th Meeting of the Association for Computational Linguistics (ACL ’96), Proceedings of the Conference, Santa Cruz, California. ACL. Mehryar Mohri, Fernando C. N. Pereira, and Michael Riley. 1996. Weighted Automata in Text and Speech Processing. In Proceedings of the 12th biennial European Conference on Artificial Intelligence (ECAI-96), Workshop on Extended finite state models of language, Budapest, Hungary. ECAI. Mehryar Mohri, Michael Riley, Don Hindle, Andrej Ljolje, and Fernando C. N. Pereira. 1998. Full expansion of contextdependent networks in large vocabulary speech recognition. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Mehryar Mohri. 1997. Finite-State Transducers in Language and Speech Processing. Computational Linguistics, 23:2. Mehryar Mohri. 2002. 
Semiring Frameworks and Algorithms for Shortest-Distance Problems. Journal of Automata, Languages and Combinatorics, 7(3):321–350. Hermann Ney, Ute Essen, and Reinhard Kneser. 1994. On structuring probabilistic dependences in stochastic language modeling. Computer Speech and Language, 8:1–38. Arto Salomaa and Matti Soittola. 1978. Automata-Theoretic Aspects of Formal Power Series. Springer-Verlag: New York. Marcel Paul Schützenberger. 1961. On the definition of a family of automata. Information and Control, 4. Kristie Seymore and Ronald Rosenfeld. 1996. Scalable backoff language models. In Proceedings of the International Conference on Spoken Language Processing (ICSLP). Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broadcast News Transcription and Understanding Workshop, pages 270–274.
A Syllable Based Word Recognition Model for Korean Noun Extraction Do-Gil Lee and Hae-Chang Rim Dept. of Computer Science & Engineering Korea University 1, 5-ka, Anam-dong, Seongbuk-ku Seoul 136-701, Korea dglee, rim @nlp.korea.ac.kr Heui-Seok Lim Dept. of Information & Communications Chonan University 115 AnSeo-dong CheonAn 330-704, Korea [email protected] Abstract Noun extraction is very important for many NLP applications such as information retrieval, automatic text classification, and information extraction. Most of the previous Korean noun extraction systems use a morphological analyzer or a Partof-Speech (POS) tagger. Therefore, they require much of the linguistic knowledge such as morpheme dictionaries and rules (e.g. morphosyntactic rules and morphological rules). This paper proposes a new noun extraction method that uses the syllable based word recognition model. It finds the most probable syllable-tag sequence of the input sentence by using automatically acquired statistical information from the POS tagged corpus and extracts nouns by detecting word boundaries. Furthermore, it does not require any labor for constructing and maintaining linguistic knowledge. We have performed various experiments with a wide range of variables influencing the performance. The experimental results show that without morphological analysis or POS tagging, the proposed method achieves comparable performance with the previous methods. 1 Introduction Noun extraction is a process to find every noun in a document (Lee et al., 2001). In Korean, Nouns are used as the most important terms (features) that express the document in NLP applications such as information retrieval, document categorization, text summarization, information extraction, and etc. Korean is a highly agglutinative language and nouns are included in Eojeols. An Eojeol is a surface level form consisting of more than one combined morpheme. Therefore, morphological analysis or POS tagging is required to extract Korean nouns. The previous Korean noun extraction methods are classified into two categories: morphological analysis based method (Kim and Seo, 1999; Lee et al., 1999a; An, 1999) and POS tagging based method (Shim et al., 1999; Kwon et al., 1999). The morphological analysis based method tries to generate all possible interpretations for a given Eojeol by implementing a morphological analyzer or a simpler method using lexical dictionaries. It may overgenerate or extract inaccurate nouns due to lexical ambiguity and shows a low precision rate. Although several studies have been proposed to reduce the over-generated results of the morphological analysis by using exclusive information (Lim et al., 1995; Lee et al., 2001), they cannot completely resolve the ambiguity. The POS tagging based method chooses the most probable analysis among the results produced by the morphological analyzer. Due to the resolution of the ambiguities, it can obtain relatively accurate results. But it also suffers from errors not only produced by a POS tagger but also triggered by the preceding morphological analyzer. 
Furthermore, both methods have serious deficien철수는(Cheol-Su-neun) 사람들을(sa-lam-deul-eul) 봤다(bwass-da) 철수(Cheol-Su) 는(neun) 사람들(sa-lam-deul) 을(eul) 봤다(bwass-da) 철수(Cheol-Su) 사람(sa-lam) 들(deul) 을(eul) 보(bo) 았(ass) 다(da) eojeol word morpheme proper noun : person name postposition noun : person noun suffix: plural postposition verb : see prefinal ending ending 는(neun) Figure 1: Constitution of the sentence “               (Cheol-Su saw the persons)” cies in that they require considerable manual labor to construct and maintain linguistic knowledge and suffer from the unknown word problem. If a morphological analyzer fails to recognize an unknown noun in an unknown Eojeol, the POS tagger would never extract the unknown noun. Although the morphological analyzer properly recognizes the unknown noun, it would not be extracted due to the sparse data problem. This paper proposes a new noun extraction method that uses a syllable based word recognition model. The proposed method does not require labor for constructing and maintaining linguistic knowledge and it can also alleviate the unknown word problem or the sparse data problem. It finds the most probable syllable-tag sequence of the input sentence by using statistical information and extracts nouns by detecting the word boundaries. The statistical information is automatically acquired from a POS annotated corpus and the word boundary can be detected by using an additional tag to represent the boundary of a word. This paper is organized as follows. In Section 2, the notion of word is defined. Section 3 presents the syllable based word recognition model. Section 4 describes the method of constructing the training data from existing POS tagged corpora. Section 5 discusses experimental results. Finally, Section 6 concludes the paper. 2 A new definition of word Korean spacing unit is an Eojeol, which is delimited by whitespace, as with word in English. In Korean, an Eojeol is made up of one or more words, and a word is made up of one or more morphemes. Figure 1 represents the relationships among morphemes, words, and Eojeols with an example sentence. Syllables are delimited by a hyphen in the figure. All of the previous noun extraction methods regard a morpheme as a processing unit. In order to extract nouns, nouns in a given Eojeol should be segmented. To do this, the morphological analysis has been used, but it requires complicated processes because of the surface forms caused by various morphological phenomena such as irregular conjugation of verbs, contraction, and elision. Most of the morphological phenomena occur at the inside of a morpheme or the boundaries between morphemes, not a word. We have also observed that a noun belongs to a morpheme as well as a word. Thus, we do not have to do morphological analysis in the noun extraction point of view. In Korean linguistics, a word is defined as a morpheme or a sequence of morphemes that can be used independently. Even though a postposition is not used independently, it is regarded as a word because it is easily segmented from the preceding word. This definition is rather vague for computational processing. If we follow the definition of the word in linguistics, it would be difficult to analyze a word like the morphological analysis. For this reason, we define a different notion of a word. According to our definition of a word, each uninflected morpheme or a sequence of successive inflected morphemes is regarded as an individual word. 
1 By virtue of the new definition of a word, we need not consider mismatches between the surface level form and the lexical level one in recognizing words. The example sentence "철수는 사람들을 봤다 (Cheol-Su saw the persons)" represented in Figure 1 includes six words: "철수 (Cheol-Su)", "는 (neun)", "사람 (sa-lam)", "들 (deul)", "을 (eul)", and "봤다 (bwass-da)". Unlike in Korean linguistics, a noun suffix such as "님 (nim)", "들 (deul)", or "적 (jeog)" is also regarded as a word because it is an uninflected morpheme.

3 Syllable based word recognition model

A Korean syllable consists of an obligatory onset (initial grapheme, consonant), an obligatory peak (nuclear grapheme, vowel), and an optional coda (final grapheme, consonant). In theory, the number of syllables that can be used in Korean is the same as the number of every possible combination of the graphemes.2 Fortunately, only a fixed number of syllables is frequently used in practice.3 The amount of information that a Korean syllable carries is larger than that of an alphabetic character in English. In addition, there are particular characteristics in Korean syllables; the fact that words do not start with certain syllables is one such example. Several attempts have been made to use characteristics of Korean syllables. Kang (1995) used syllable information to reduce the over-generated results in analyzing conjugated forms of verbs. Syllable statistics have also been used for automatic word spacing (Shim, 1996; Kang and Woo, 2001; Lee et al., 2002).

The syllable based word recognition model is represented by the following equations. It finds the most probable syllable-tag sequence T = t_1, ..., t_n for a given sentence S consisting of a sequence of n syllables s_1, ..., s_n:

T̂ = argmax_T P(T | S) = argmax_T P(T) P(S | T)    (1)
  ≈ argmax_T ∏_{i=1}^{n} P(t_i | t_{i-1}) P(s_i | t_i)    (2)

Two Markov assumptions are applied in Equation 2. One is that the probability of the current syllable tag t_i conditionally depends only on the previous syllable tag. The other is that the probability of the current syllable s_i conditionally depends only on the current tag. In order to reflect word spacing information in Equation 2, which is very useful in Korean POS tagging, Equation 2 is changed to Equation 3, which considers the word spacing information by calculating the transition probabilities like the equation used in Kim et al. (1998):

T̂ ≈ argmax_T ∏_{i=1}^{n} P(t_i, sp_{i-1} | t_{i-1}) P(s_i | t_i)    (3)

In the equation, sp_{i-1} becomes zero if the transition from t_{i-1} to t_i occurs in the inside of an Eojeol; otherwise it is one.

Word boundaries can be detected by an additional tag. This method has been used in some tasks such as text chunking and named entity recognition to represent the boundary of an element (e.g. an individual phrase or named entity). There are several possible representation schemes to do this. The simplest one is the BIO representation scheme (Ramshaw and Marcus, 1995), where a "B" denotes the first item of an element and an "I" any non-initial item, and a syllable with tag "O" is not a part of any element.

1 Korean morphemes can be classified into two types: uninflected morphemes having fixed word forms (such as noun, unconjugated adjective, postposition, adverb, interjection, etc.) and inflected morphemes having conjugated word forms (such as a morpheme with declined or conjugated endings, predicative postposition, etc.)
2 11,172 (= 19 × 21 × 28) pure Korean syllables are possible.
3 Actually, only a limited set of syllables is used in the training data, including Korean characters and non-Korean characters (e.g. alphabets, digits, Chinese characters, symbols).
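To make the search concrete, here is a minimal Viterbi sketch over syllable tags in the spirit of Equation 2 (the Eojeol-spacing refinement of Equation 3 is omitted). It is a simplified stand-in for the actual system: the toy tag set, the transition and emission probabilities, the example syllables, and the floor value for unseen events are all invented for illustration and are not the statistics of the trained model.

```python
import math

# Toy probabilities standing in for MLE counts from a POS-tagged corpus (assumed values).
trans = {("B-nc", "I-nc"): 0.6, ("B-nc", "B-jc"): 0.3, ("I-nc", "B-jc"): 0.5,
         ("I-nc", "I-nc"): 0.4, ("B-jc", "B-nc"): 0.7}
emit = {("B-nc", "hak"): 0.2, ("I-nc", "gyo"): 0.3, ("B-jc", "e"): 0.5}
TAGS = ["B-nc", "I-nc", "B-jc"]
FLOOR = 1e-10                      # very low probability assigned to unseen events

def logp(table, key):
    return math.log(table.get(key, FLOOR))

def viterbi(syllables):
    """Most probable syllable-tag sequence under Equation-2-style scoring."""
    # initialise with emission scores only (uniform prior over first tags)
    best = {t: (logp(emit, (t, syllables[0])), [t]) for t in TAGS}
    for s in syllables[1:]:
        new = {}
        for t in TAGS:
            new[t] = max((best[p][0] + logp(trans, (p, t)) + logp(emit, (t, s)),
                          best[p][1] + [t]) for p in TAGS)
        best = new
    return max(best.values())[1]

# hypothetical input "hak-gyo-e"; whitespace handling is assumed to happen upstream
print(viterbi(["hak", "gyo", "e"]))   # e.g. ['B-nc', 'I-nc', 'B-jc']
```

The B/I prefixes on the tags mark word boundaries, so reading the output path directly yields the segmented words and their POS categories, which is exactly how nouns are extracted from the tagged sequence.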
Because every syllable corresponds to one syllable tag, “O” is not used in our task. The representation schemes used in this paper are described in detail in Section 4. The probabilities in Equation 3 are estimated by the maximum likelihood estimator (MLE) using relative frequencies in the training data. 4 The most probable sequence of syllable tags in a sentence (a sequence of syllables) can be efficiently computed by using the Viterbi algorithm. 4Since the MLE suffers from zero probability, to avoid zero probability, we just assign a very low value such as     for an unseen event in the training data. Table 1: Examples of syllable tagging by BI, BIS, IE, and IES representation schemes surface level lexical level BI BIS IE IES (syllable) (morpheme/POS tag)   (yak)     (yak-sok)/nc B-nc B-nc I-nc I-nc   (sok) I-nc I-nc E-nc E-nc   (jang)    (jang-so)/nc B-nc B-nc I-nc I-nc  (so) I-nc I-nc E-nc E-nc  (in)  (i)/co+  (n)/etm B-co etm S-co etm E-co etm S-co etm  (Sin)      (Sin-la-ho-tel)/nc B-nc B-nc I-nc I-nc  (la) I-nc I-nc I-nc I-nc  (ho) I-nc I-nc I-nc I-nc   (tel) I-nc I-nc E-nc E-nc (keo)   (keo-pi-syob)/nc B-nc B-nc I-nc I-nc (pi) I-nc I-nc I-nc I-nc   (syob) I-nc I-nc E-nc E-nc (e) (e)/jc B-jc S-jc E-jc S-jc  (Jai)    (Jai-Ok)/nc B-nc B-nc I-nc I-nc   (Ok) I-nc I-nc E-nc E-nc  (i)  (i)/jc B-jc S-jc E-jc S-jc  (meon)   (meon-jeo)/mag B-mag B-mag I-mag I-mag  (jeo) I-mag I-mag E-mag E-mag  (wa)  (o)/pv+ (a)/ec B-pv ec S-pv ec E-pv ec S-pv ec  (gi)    (gi-da-li)/pv+ (go)/ec B-pv ec B-pv ec I-pv ec I-pv ec  (da) I-pv ec I-pv ec I-pv ec I-pv ec  (li) I-pv ec I-pv ec I-pv ec I-pv ec  (go) I-pv ec I-pv ec E-pv ec E-pv ec   (iss)   (iss)/px+  (eoss)/ep+ (da)/ef B-px ef B-px ef I-px ef I-px ef   (eoss) I-px ef I-px ef I-px ef I-px ef  (da) I-px ef I-px ef E-px ef E-px ef . ./s B-s S-s E-s S-s Given a sequence of syllables and syllable tags, it is straightforward to obtain the corresponding sequence of words and word tags. Among the words recognized through this process, we can extract nouns by just selecting words tagged as nouns. 5 4 Constructing training data Our model is a supervised learning approach, so it requires a training data. Because the existing Korean POS tagged corpora are annotated by a morpheme level, we cannot use them as a training data without converting the data suitable for the word recognition model. The corpus can be modified through the following steps: Step 1 For a given Eojeol, segment word boundaries and assign word tags to each word. Step 2 For each separated word, assign the word tag to each syllable in the word according to one of the representations. 5For the purpose of noun extraction, we only select common nouns here (tagged as “nc” or “NC”) among other kinds of nouns. In step 1, word boundaries are identified by using the information of an uninflected morpheme and a sequence of successive inflected morphemes. An uninflected morpheme becomes one word and its tag is assigned to the morpheme’s tag. Successive inflected morphemes form a word and the combined form of the first and the last morpheme’s tag represents its tag. For example, the morpheme-unit POS tagged form of the Eojeol “     (gass-eoss-da)” is “ (ga)/pv+  (ass)/ep+  (eoss)/ep+  (da)/ef”, and all of them are inflected morphemes. Hence, the Eojeol “     (gass-eoss-da)” becomes one word and its tag is represented as “pv ef” by using the first morpheme’s tag (“pv”) and the last one’s (“ef”). In step 2, a syllable tag is assigned to each of syllables forming a word. 
The syllable tag should express not only POS tag but also the boundary of the word. In order to detect the word boundaries, we use the following four representation schemes: BI representation scheme Assign “B” tag to the first syllable of a word, and “I” tag to the others. BIS representation scheme Assign “S” tag to a syllable which forms a word, and other tags (“B” and “I”) are the same as “BI” representation scheme. IE representation scheme Assign “E” tag to the last syllable of a word, and “I” tag to the others. IES representation scheme Assign “S” tag to a syllable which forms a word, and other tags (“I” and “E”) are the same as “IE” representation scheme. Table 1 shows an example of assigning word tag by syllable unit to the morpheme unit POS tagged corpus. Table 2: Description of Tagset 2 and Tagset 3 Tag Description Tagset 2 Tagset 3 symbol s S foreign word f F common noun nc NC bound noun nb NB pronoun np NP numeral nn NN verb pv V adjective pa A auxiliary predicate px VX copula co CO general adverb mag MA conjunctive adverb maj adnoun mm MM interjection ii IC prefix xp XPN noun-derivational suffix xsn XSN verb-derivational suffix xsv XSV adjective-derivational suffix xsm case particle jc J auxilary particle jx conjunctive particle jj adnominal case particle jm prefinal ending ep EP final ending ef EF conjunctive ending ec EC nominalizing ending etn ETN adnominalizing ending etm ETM 5 Experiments 5.1 Experimental environment We used ETRI POS tagged corpus of 288,269 Eojoels for testing and the 21st Century Sejong Project’s POS tagged corpus (Sejong corpus, for short) for training. The Sejong corpus consists of three different corpora acquired from 1999 to 2001. The Sejong corpus of 1999 consists of 1.5 million Eojeols and other two corpora have 2 million Eojeols respectively. The evaluation measures for the noun extraction task are recall, precision, and Fmeasure. They measure the performance by document and are averaged over all the test documents. This is because noun extractors are usually used in the fields of applications such as information retrieval (IR) and document categorization. We also consider the frequency of nouns; that is, if the noun frequency is not considered, a noun occurring twice or more in a document is treated as other nouns occurring once. From IR point of view, this takes into account of the fact that even if a noun is extracted just once as an index term, the document including the term can also be retrieved. The performance considerably depends on the following factors: the representation schemes for word boundary detection, the tagset, the amount of training data, and the difference between training data and test data. First, we compare four different representation schemes (BI, BIS, IE, IES) in word boundary detection as explained in Section 4. We try to use the following three kinds of tagsets in order to select the most optimal tagset through the experiments: Tagset 1 Simply use two tags (e.g. noun and nonnoun). This is intended to examine the syllable characteristics; that is, which syllables tend to belong to nouns or not. Tagset 2 Use the tagset used in the training data without modification. ETRI tagset used for training is relatively smaller than that of other tagsets. This tagset is changeable according to the POS tagged corpus used in training. Tagset 3 Use a simplified tagset for the purpose of noun extraction. This tagset is simplified by combining postpositions, adverbs, and verbal suffixes into one tag, respectively. 
This tagset is always fixed even in a different training corpus. Tagset 2 used in Section 5.2 and Tagset 3 are represented in Table 2. 5.2 Experimental results with similar data We divided the test data into ten parts. The performances of the model are measured by averaging over Table 3: Experimental results of the ten-fold cross validation without considering frequency with considering frequency Precision Recall F-measure Precision Recall F-measure BI-1 72.37 83.61 77.58 74.61 82.47 78.34 BI-2 85.99 92.30 89.03 88.96 90.42 89.69 BI-3 84.85 91.20 87.90 87.56 89.55 88.54 BIS-1 78.50 83.53 80.93 80.36 83.99 82.13 BIS-2 88.15 92.34 90.19 90.65 91.58 91.11 BIS-3 86.92 91.07 88.94 89.27 90.62 89.94 IE-1 73.21 81.38 77.07 75.11 81.04 77.96 IE-2 85.12 91.54 88.21 88.37 90.34 89.34 IE-3 83.28 89.70 86.37 86.54 88.80 87.65 IES-1 78.07 82.69 80.31 79.54 83.08 81.27 IES-2 87.30 92.18 89.67 90.05 91.48 90.76 IES-3 85.80 90.79 88.22 88.46 90.47 89.45 74.00 76.00 78.00 80.00 82.00 84.00 86.00 88.00 90.00 92.00 BI BIS IE IES F-m easure Tagset 1 Tagset 2 Tagset 3 Figure 2: Changes of F-measure according to tagsets and representation schemes 85.00 85.50 86.00 86.50 87.00 87.50 88.00 88.50 89.00 89.50 99 99-2000 99-2001 training data F-m easure BI-2 BIS-2 IE-2 IES-2 Figure 3: Changes of F-measure according to the size of training data the ten test sets in the 10-fold cross-validation experiment. Table 3 shows experimental results according to each representation scheme and tagset. In the first column, each number denotes the tagset used. When it comes to the issue of frequency, the cases of considering frequency are better for precision but worse for recall, and better for F-measure. The representation schemes using single syllable information (e.g. “BIS”, “IES”) are better than other representation schemes (e.g. “BI”, “IE”). Contrary to our expectation, the results of Tagset 2 consistently outperform other tagsets. The results of Tagset 1 are not as good as other tagsets because of the lack of the syntactic context. Nevertheless, the results reflect the usefulness of the syllable based processing. The changes of the F-measure according to the tagsets and the representation schemes reflecting frequency are shown in Figure 2. 5.3 Experimental results with different data To show the influence of the difference between the training data and the test data, we have performed the experiments on the Sejong corpus as a training data and the entire ETRI corpus as a test data. Table 4 shows the experimental results on all of the three training data. Although more training data are used in this experiment, the results of Table 3 shows better outcomes. Like other POS tagging models, this indicates that our model is dependent on the text domain. 
Table 4: Experimental results of Sejong corpus (from 1999 to 2001) without considering frequency with considering frequency Precision Recall F-measure Precision Recall F-measure BI-1 71.91 83.92 77.45 73.57 82.95 77.98 BI-2 85.38 89.96 87.61 87.19 88.26 87.72 BI-3 83.36 89.17 86.17 85.12 87.39 86.24 BIS-1 76.77 82.60 79.58 78.40 83.16 80.71 BIS-2 87.66 90.41 89.01 88.75 89.75 89.25 BIS-3 86.02 88.89 87.43 87.10 88.41 87.75 IE-1 70.82 79.97 75.12 72.67 79.64 75.99 IE-2 84.18 89.23 86.63 85.99 87.83 86.90 IE-3 82.01 87.67 84.74 83.79 86.57 85.16 IES-1 76.19 81.84 78.91 77.31 82.32 79.74 IES-2 86.41 89.33 87.85 87.66 88.75 88.20 IES-3 84.45 88.28 86.33 85.89 87.96 86.91 Table 5: Performances of other systems without considering frequency with considering frequency Precision Recall F-measure Precision Recall F-measure NE2001 84.08 91.34 87.56 87.02 89.86 88.42 KOMA 60.10 93.12 73.06 58.07 93.67 71.70 HanTag 90.54 88.68 89.60 91.77 88.58 90.15 Figure 3 shows the changes of the F-measure according to the size of the training data. In this figure, “99-2000” means 1999 corpus and 2000 corpus are used, and “99-2001” means all corpora are used as the training data. The more training data are used, the better performance we obtained. However, the improvement is insignificant in considering the amount of increase of the training data. Results reported by Lee et al. (2001) are presented in Table 5. The experiments were performed on the same condition as that of our experiments. NE2001, which is a system designed only to extract nouns, improves efficiency of the general morphological analyzer by using positive and negative information about occurrences of nouns. KOMA (Lee et al., 1999b) is a general-purpose morphological analyzer. HanTag (Kim et al., 1998) is a POS tagger, which takes the result of KOMA as input. According to Table 5, HanTag, which is a POS tagger, is an optimal tool in performing noun extraction in terms of the precision and the F-measure. Although the best performance of our proposed model (BIS-2) is worse than HanTag, it is better than NE2001 and KOMA. 5.4 Limitation As mentioned earlier, we assume that morphological variations do not occur at any inflected words. However, some exceptions might occur in a colloquial text. For example, the lexical level forms of two Eojeols “ (ddai)+ (neun)” and “ (gogai)+  (leul)” are changed into the surface level forms by contractions such as “  (ddain)” and “ (go-gail)”, respectively. Our models alone cannot deal with these cases. Such exceptions, however, are very rare. 6 In these experiments, we do not perform any post-processing step to deal with such exceptions. 6 Conclusion We have presented a word recognition model for extracting nouns. While the previous noun extraction 6Actually, about 0.145% of nouns in the test data belong to these cases. methods require morphological analysis or POS tagging, our noun extraction method only uses the syllable information without using any additional morphological analyzer. This means that our method does not require any dictionary or linguistic knowledge. Therefore, without manual labor to construct and maintain those resources, our method can extract nouns by using only the statistics, which can be automatically extracted from a POS tagged corpus. The previous noun extraction methods take a morpheme as a processing unit, but we take a new notion of word as a processing unit by considering the fact that nouns belong to uninflected morphemes in Korean. 
By virtue of the new definition of a word, we need not consider mismatches between the surface level form and the lexical level one in recognizing words. We have performed various experiments with a wide range of variables influencing the performance such as the representation schemes for the word boundary detection, the tag set, the amount of training data, and the difference between the training data and the test data. Without morphological analysis or POS tagging, the proposed method achieves comparable performance compared with the previous ones. In the future, we plan to extend the context to improve the performance. Although the word recognition model is designed to extract nouns in this paper, the model itself is meaningful and it can be applied to other fields such as language modeling and automatic word spacing. Furthermore, our study make some contributions in the area of POS tagging research. References D.-U. An. 1999. A noun extractor using connectivity information. In Proceedings of the Morphological Analyzer and Tagger Evaluation Contest (MATEC 99), pages 173–178. S.-S. Kang and C.-W. Woo. 2001. Automatic segmentation of words using syllable bigram statistics. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium, pages 729–732. S.-S. Kang. 1995. Morphological analysis of Korean irregular verbs using syllable characteristics. Journal of the Korea Information Science Society, 22(10):1480– 1487. N.-C. Kim and Y.-H. Seo. 1999. A Korean morphological analyzer CBKMA and a index word extractor CBKMA/IX. In Proceedings of the MATEC 99, pages 50–59. J.-D. Kim, H.-S. Lim, S.-Z. Lee, and H.-C. Rim. 1998. Twoply hidden Markov model: A Korean pos tagging model based on morpheme-unit with word-unit context. Computer Processing of Oriental Languages, 11(3):277–290. O.-W. Kwon, M.-Y. Chung, D.-W. Ryu, M.-K. Lee, and J.-H. Lee. 1999. Korean morphological analyzer and part-of-speech tagger based on CYK algorithm using syllable information. In Proceedings of the MATEC 99. J.-Y. Lee, B.-H. Shin, K.-J. Lee, J.-E. Kim, and S.G. Ahn. 1999a. Noun extractor based on a multipurpose Korean morphological engine implemented with COM. In Proceedings of the MATEC 99, pages 167–172. S.-Z. Lee, B.-R. Park, J.-D. Kim, W.-H. Ryu, D.-G. Lee, and H.-C. Rim. 1999b. A predictive morphological analyzer, a part-of-speech tagger based on joint independence model, and a fast noun extractor. In Proceedings of the MATEC 99, pages 145–150. D.-G. Lee, S.-Z. Lee, and H.-C. Rim. 2001. An efficient method for Korean noun extraction using noun occurrence characteristics. In Proceedings of the 6th Natural Language Processing Pacific Rim Symposium, pages 237–244. D.-G. Lee, S.-Z. Lee, H.-C. Rim, and H.-S. Lim. 2002. Automatic word spacing using hidden Markov model for refining Korean text corpora. In Proceedings of the 3rd Workshop on Asian Language Resources and International Standardization, pages 51–57. H.-S. Lim, S.-Z. Lee, and H.-C. Rim. 1995. An efficient Korean mophological analysis using exclusive information. In Proceedings of the 1995 International Conference on Computer Processing of Oriental Languages, pages 225–258. Lance A. Ramshaw and Mitchell P. Marcus. 1995. Text chunking using transformation-basedlearning. In Proceedings of the Third Workshop on Very Large Corpora, pages 82–94. J.-H. Shim, J.-S. Kim, J.-W. Cha, and G.-B. Lee. 1999. Robust part-of-speech tagger using statistical and rulebased approach. In Proceedings of the MATEC 99, pages 60–75. K.-S. Shim. 1996. 
Automated word-segmentation for Korean using mutual information of syllables. Journal of the Korea Information Science Society, 23(9):991– 1000.
Morphological Analysis of a Large Spontaneous Speech Corpus in Japanese Kiyotaka Uchimoto† Chikashi Nobata† Atsushi Yamada† Satoshi Sekine‡ Hitoshi Isahara† †Communications Research Laboratory 3-5, Hikari-dai, Seika-cho, Soraku-gun, Kyoto, 619-0289, Japan {uchimoto,nova,ark,isahara}@crl.go.jp ‡New York University 715 Broadway, 7th floor New York, NY 10003, USA [email protected] Abstract This paper describes two methods for detecting word segments and their morphological information in a Japanese spontaneous speech corpus, and describes how to tag a large spontaneous speech corpus accurately by using the two methods. The first method is used to detect any type of word segments. The second method is used when there are several definitions for word segments and their POS categories, and when one type of word segments includes another type of word segments. In this paper, we show that by using semiautomatic analysis we achieve a precision of better than 99% for detecting and tagging short words and 97% for long words; the two types of words that comprise the corpus. We also show that better accuracy is achieved by using both methods than by using only the first. 1 Introduction The “Spontaneous Speech: Corpus and Processing Technology” project is sponsoring the construction of a large spontaneous Japanese speech corpus, Corpus of Spontaneous Japanese (CSJ) (Maekawa et al., 2000). The CSJ is a collection of monologues and dialogues, the majority being monologues such as academic presentations and simulated public speeches. Simulated public speeches are short speeches presented specifically for the corpus by paid non-professional speakers. The CSJ includes transcriptions of the speeches as well as audio recordings of them. One of the goals of the project is to detect two types of word segments and corresponding morphological information in the transcriptions. The two types of word segments were defined by the members of The National Institute for Japanese Language and are called short word and long word. The term short word approximates a dictionary item found in an ordinary Japanese dictionary, and long word represents various compounds. The length and part-of-speech (POS) of each are different, and every short word is included in a long word, which is shorter than a Japanese phrasal unit, a bunsetsu. If all of the short words in the CSJ were detected, the number of the words would be approximately seven million. That would be the largest spontaneous speech corpus in the world. So far, approximately one tenth of the words have been manually detected, and morphological information such as POS category and inflection type have been assigned to them. Human annotators tagged every morpheme in the one tenth of the CSJ that has been tagged, and other annotators checked them. The human annotators discussed their disagreements and resolved them. The accuracies of the manual tagging of short and long words in the one tenth of the CSJ were greater than 99.8% and 97%, respectively. The accuracies were evaluated by random sampling. As it took over two years to tag one tenth of the CSJ accurately, tagging the remainder with morphological information would take about twenty years. Therefore, the remaining nine tenths of the CSJ must be tagged automatically or semi-automatically. In this paper, we describe methods for detecting the two types of word segments and corresponding morphological information. We also describe how to tag a large spontaneous speech corpus accurately. 
Henceforth, we call the two types of word segments short word and long word respectively, or merely morphemes. We use the term morphological analysis for the process of segmenting a given sentence into a row of morphemes and assigning to each morpheme grammatical attributes such as a POS category. 2 Problems and Their Solutions As we mentioned in Section 1, tagging the whole of the CSJ manually would be difficult. Therefore, we are taking a semi-automatic approach. This section describes major problems in tagging a large spontaneous speech corpus with high precision in a semiautomatic way, and our solutions to those problems. One of the most important problems in morphological analysis is that posed by unknown words, which are words found in neither a dictionary nor a training corpus. Two statistical approaches have been applied to this problem. One is to find unknown words from corpora and put them into a dictionary (e.g., (Mori and Nagao, 1996)), and the other is to estimate a model that can identify unknown words correctly (e.g., (Kashioka et al., 1997; Nagata, 1999)). Uchimoto et al. used both approaches. They proposed a morphological analysis method based on a maximum entropy (ME) model (Uchimoto et al., 2001). Their method uses a model that estimates how likely a string is to be a morpheme as its probability, and thus it has a potential to overcome the unknown word problem. Therefore, we use their method for morphological analysis of the CSJ. However, Uchimoto et al. reported that the accuracy of automatic word segmentation and POS tagging was 94 points in F-measure (Uchimoto et al., 2002). That is much lower than the accuracy obtained by manual tagging. Several problems led to this inaccuracy. In the following, we describe these problems and our solutions to them. • Fillers and disfluencies Fillers and disfluencies are characteristic expressions often used in spoken language, but they are randomly inserted into text, so detecting their segmentation is difficult. In the CSJ, they are tagged manually. Therefore, we first delete fillers and disfluencies and then put them back in their original place after analyzing a text. • Accuracy for unknown words The morpheme model that will be described in Section 3.1 can detect word segments and their POS categories even for unknown words. However, the accuracy for unknown words is lower than that for known words. One of the solutions is to use dictionaries developed for a corpus on another domain to reduce the number of unknown words, but the improvement achieved is slight (Uchimoto et al., 2002). We believe that the reason for this is that definitions of a word segment and its POS category depend on a particular corpus, and the definitions from corpus to corpus differ word by word. Therefore, we need to put only words extracted from the same corpus into a dictionary. We are manually examining words that are detected by the morpheme model but that are not found in a dictionary. We are also manually examining those words that the morpheme model estimated as having low probability. During the process of manual examination, if we find words that are not found in a dictionary, those words are then put into a dictionary. Section 4.2.1 will describe the accuracy of detecting unknown words and show how much those words contribute to improving the morphological analysis accuracy when they are detected and put into a dictionary. 
• Insufficiency of features The model currently used for morphological analysis considers the information of a target morpheme and that of an adjacent morpheme on the left. To improve the model, we need to consider the information of two or more morphemes on the left of the target morpheme. However, too much information often leads to overtraining the model. Using all the information makes training the model difficult when there is too much of it. Therefore, the best way to improve the accuracy of the morphological information in the CSJ within the limited time available to us is to examine and revise the errors of automatic morphological analysis and to improve the model. We assume that the smaller the probability estimated by a model for an output morpheme is, then the greater the likelihood is that the output morpheme is wrong. Therefore, we examine output morphemes in ascending order of their probabilities. The expected improvement of the accuracy of the morphological information in the whole of the CSJ will be described in Section 4.2.1 Another problem concerning unknown words is that the cost of manual examination is high when there are several definitions for word segments and their POS categories. Since there are two types of word definitions in the CSJ, the cost would double. Therefore, to reduce the cost, we propose another method for detecting word segments and their POS categories. The method will be described in Section 3.2, and the advantages of the method will be described in Section 4.2.2 The next problem described here is one that we have to solve to make a language model for automatic speech recognition. • Pronunciation Pronunciation of each word is indispensable for making a language model for automatic speech recognition. In the CSJ, pronunciation is transcribed separately from the basic form written by using kanji and hiragana characters as shown in Fig. 1. Text targeted for morphoBasic form Pronunciation 0017 00051.425-00052.869 L: (F えー) (F エー) 形態素解析 ケータイソカイセキ 0018 00053.073-00054.503 L: について ニツイテ 0019 00054.707-00056.341 L: お話しいたします オハナシイタシマス “Well, I’m going to talk about morphological analysis.” Figure 1: Example of transcription. logical analysis is the basic form of the CSJ and it does not have information on actual pronunciation. The result of morphological analysis, therefore, is a row of morphemes that do not have information on actual pronunciation. To estimate actual pronunciation by using only the basic form and a dictionary is impossible. Therefore, actual pronunciation is assigned to results of morphological analysis by aligning the basic form and pronunciation in the CSJ. First, the results of morphological analysis, namely, the morphemes, are transliterated into katakana characters by using a dictionary, and then they are aligned with pronunciation in the CSJ by using a dynamic programming method. In this paper, we will mainly discuss methods for detecting word segments and their POS categories in the whole of the CSJ. 3 Models and Algorithms This section describes two methods for detecting word segments and their POS categories. The first method uses morpheme models and is used to detect any type of word segment. The second method uses a chunking model and is only used to detect long word segments. 3.1 Morpheme Model Given a tokenized test corpus, namely a set of strings, the problem of Japanese morphological analysis can be reduced to the problem of assigning one of two tags to each string in a sentence. 
A string is tagged with a 1 or a 0 to indicate whether it is a morpheme. When a string is a morpheme, a grammatical attribute is assigned to it. A tag designated as a 1 is thus assigned one of a number, n, of grammatical attributes assigned to morphemes, and the problem becomes to assign an attribute (from 0 to n) to every string in a given sentence. We define a model that estimates the likelihood that a given string is a morpheme and has a grammatical attribute i (1 ≤ i ≤ n) as a morpheme model. We implemented this model within an ME modeling framework (Jaynes, 1957; Jaynes, 1979; Berger et al., 1996). The model is represented by Eq. (1):

p_λ(a|b) = exp( Σ_{i,j} λ_{i,j} g_{i,j}(a, b) ) / Z_λ(b)    (1)

Z_λ(b) = Σ_a exp( Σ_{i,j} λ_{i,j} g_{i,j}(a, b) ),    (2)

where a is one of the categories for classification, and it can be one of (n + 1) tags from 0 to n (this is called a "future"), b is the contextual or conditioning information that enables us to make a decision among the space of futures (this is called a "history"), and Z_λ(b) is a normalizing constant determined by the requirement that Σ_a p_λ(a|b) = 1 for all b. The computation of p_λ(a|b) in any ME model is dependent on a set of "features", which are binary functions of the history and future. For instance, one of our features is

g_{i,j}(a, b) = 1 if has(b, f_j) = 1 and a = a_i, where f_j = "POS(−1)(Major): verb";
g_{i,j}(a, b) = 0 otherwise.    (3)

Here "has(b, f_j)" is a binary function that returns 1 if the history b has feature f_j. The features used in our experiments are described in detail in Section 4.1.1.

[Figure 2: Example of morphological analysis results for the sentence in Figure 1, listing each short word and each corresponding long word with its pronunciation and POS (PPP: post-positional particle, AUX: auxiliary verb, ADF: adverbial form).]

Given a sentence, probabilities of the n tags from 1 to n are estimated for each length of string in that sentence by using the morpheme model. From all possible divisions of the sentence into morphemes, an optimal one is found by using the Viterbi algorithm. Each division is represented as a particular division of morphemes with grammatical attributes in a sentence, and the optimal division is defined as the division that maximizes the product of the probabilities estimated for each morpheme in the division. For example, the sentence "形態素解析についてお話いたします" in basic form, as shown in Fig. 1, is analyzed as shown in Fig. 2. "形態素解析" is analyzed as three morphemes, "形態 (noun)", "素 (suffix)", and "解析 (noun)", for short words, and as one morpheme, "形態素解析 (noun)", for long words. In conventional models (e.g., (Mori and Nagao, 1996; Nagata, 1999)), probabilities were estimated for candidate morphemes that were found in a dictionary or a corpus and for the remaining strings obtained by eliminating the candidate morphemes from a given sentence. Therefore, unknown words were apt to be either concatenated as one word or divided into both a combination of known words and a single word that consisted of more than one character.
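To make Eq. (1)-(3) concrete, the following minimal sketch evaluates the conditional ME probability of a tag for one candidate string: binary features fire on properties of the history b, each feature carries a weight λ, and the scores are exponentiated and normalized. This is not the authors' implementation; the tag ids, feature names, and weights below are invented purely for illustration.

```python
import math

# A "history" b is a dict of observable properties of the candidate string and
# its left context; a "future" a is a tag id (0 = non-morpheme, 1..n = attribute i).
TAGS = [0, 1, 2]          # hypothetical: 0 = non-morpheme, 1 = noun, 2 = verb

# Hypothetical weighted features lambda_{i,j} for g_{i,j}(a, b):
# each entry is ((feature_name, required_value, tag), weight).
WEIGHTS = [
    (("POS(-1)(Major)", "verb", 1), 0.8),   # after a verb, favour the noun tag
    (("POS(-1)(Major)", "verb", 2), -0.3),
    (("Length(0)", 2, 1), 0.5),
]

def p_lambda(a, b):
    """Eq. (1): conditional ME probability of tag a given history b."""
    def score(tag):
        s = 0.0
        for (name, value, t), w in WEIGHTS:
            if t == tag and b.get(name) == value:   # g_{i,j}(a, b) = 1 fires
                s += w
        return math.exp(s)
    z = sum(score(t) for t in TAGS)                 # Eq. (2): Z_lambda(b)
    return score(a) / z

b = {"POS(-1)(Major)": "verb", "Length(0)": 2}
print({a: round(p_lambda(a, b), 3) for a in TAGS})
```

In the full system, one such distribution is computed for every candidate string in the sentence and the Viterbi search then picks the division of the sentence that maximizes the product of the selected probabilities.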
However, this model has the potential to correctly detect any length of unknown words. 3.2 Chunking Model The model described in this section can be applied when several types of words are defined in a corpus and one type of words consists of compounds of other types of words. In the CSJ, every long word consists of one or more short words. Our method uses two models, a morpheme model for short words and a chunking model for long words. After detecting short word segments and their POS categories by using the former model, long word segments and their POS categories are detected by using the latter model. We define four labels, as explained below, and extract long word segments by estimating the appropriate labels for each short word according to an ME model. The four labels are listed below: Ba: Beginning of a long word, and the POS category of the long word agrees with the short word. Ia: Middle or end of a long word, and the POS category of the long word agrees with the short word. B: Beginning of a long word, and the POS category of the long word does not agree with the short word. I: Middle or end of a long word, and the POS category of the long word does not agree with the short word. A label assigned to the leftmost constituent of a long word is “Ba” or “B”. Labels assigned to other constituents of a long word are “Ia”, or “I”. For example, the short words shown in Fig. 2 are labeled as shown in Fig. 3. The labeling is done deterministically from the beginning of a given sentence to its end. The label that has the highest probability as estimated by an ME model is assigned to each short word. The model is represented by Eq. (1). In Eq. (1), a can be one of four labels. The features used in our experiments are described in Section 4.1.2. Short word Long word Word POS Label Word POS 形態 Noun Ba 形態素解析 Noun 素 Suffix I 解析 Noun Ia に PPP Ba について PPP つい Verb I て PPP Ia お Prefix B お話しいたし Verb 話し Verb Ia いたし Verb Ia ます AUX Ba ます AUX PPP : post-positional particle , AUX : auxiliary verb Figure 3: Example of labeling. When a long word that does not include a short word that has been assigned the label “Ba” or “Ia”, this indicates that the word’s POS category differs from all of the short words that constitute the long word. Such a word must be estimated individually. In this case, we estimate the POS category by using transformation rules. The transformation rules are automatically acquired from the training corpus by extracting long words with constituents, namely short words, that are labeled only “B” or “I”. A rule is constructed by using the extracted long word and the adjacent short words on its left and right. For example, the rule shown in Fig. 4 was acquired in our experiments. The middle division of the consequent part represents a long word “てみ” (auxiliary verb), and it consists of two short words “て” (post-positional particle) and “み” (verb). If several different rules have the same antecedent part, only the rule with the highest frequency is chosen. If no rules can be applied to a long word segment, rules are generalized in the following steps. 1. Delete posterior context 2. Delete anterior and posterior contexts 3. Delete anterior and posterior contexts and lexical entries. If no rules can be applied to a long word segment in any step, the POS category noun is assigned to the long word. 4 Experiments and Discussion 4.1 Experimental Conditions In our experiments, we used 744,204 short words and 618,538 long words for training, and 63,037 short words and 51,796 long words for testing. 
Those words were extracted from one tenth of the CSJ that already had been manually tagged. The training corpus consisted of 319 speeches and the test corpus consisted of 19 speeches. Transcription consisted of basic form and pronunciation, as shown in Fig. 1. Speech sounds were faithfully transcribed as pronunciation, and also represented as basic forms by using kanji and hiragana characters. Lines beginning with numerical digits are time stamps and represent the time it took to produce the lines between that time stamp and the next time stamp. Each line other than time stamps represents a bunsetsu. In our experiments, we used only the basic forms. Basic forms were tagged with several types of labels such as fillers, as shown in Table 1. Strings tagged with those labels were handled according to rules as shown in the rightmost columns in Table 1. Since there are no boundaries between sentences in the corpus, we selected the places in the CSJ that Anterior context Target words Posterior context Entry 行っ(it, go) て(te) み(mi, try) たい(tai, want) POS Verb PPP Verb AUX Label Ba B I Ba Antecedent part ⇒ Anterior context Long word Posterior context 行っ(it, go) てみ(temi, try) たい(tai, want) Verb AUX AUX Consequent part Figure 4: Example of transformation rules. Table 1: Type of labels and their handling. Type of Labels Example Rules Fillers (F あの) delete all Disfluencies (D こ) これ、これ(D2 は) が delete all No confidence in transcription (? タオングー) leave a candidate Entirely (?) delete all Several can(? あのー, あんのー) leave the former didates exist candidate Citation on sound or words (M わ) は(M は) と表記 leave a candidate Foreign, archaic, or dialect words (O ザッツファイン) leave a candidate Personal name, discriminating words, and slander ○○研の(R △△) さんが leave a candidate Letters and their pronunciation in katakana strings (A イーユー;EU) leave the former candidate Strings that cannot be written in kanji characters (K い(F んー) ずみ; 泉) leave the latter candidate are automatically detected as pauses of 500 ms or longer and then designated them as sentence boundaries. In addition to these, we also used utterance boundaries as sentence boundaries. These are automatically detected at places where short pauses (shorter than 200 ms but longer than 50 ms) follow the typical sentence-ending forms of predicates such as verbs, adjectives, and copula. 4.1.1 Features Used by Morpheme Models In the CSJ, bunsetsu boundaries, which are phrase boundaries in Japanese, were manually detected. Fillers and disfluencies were marked with the labels (F) and (D). In the experiments, we eliminated fillers and disfluencies but we did use their positional information as features. We also used as features, bunsetsu boundaries and the labels (M), (O), (R), and (A), which were assigned to particular morphemes such as personal names and foreign words. Thus, the input sentences for training and testing were character strings without fillers and disfluencies, and both boundary information and various labels were attached to them. Given a sentence, for every string within a bunsetsu and every string appearing in a dictionary, the probabilities of a in Eq. (1) were estimated by using the morpheme model. The output was a sequence of morphemes with grammatical attributes, as shown in Fig. 2. We used the POS categories in the CSJ as grammatical attributes. We obtained 14 major POS categories for short words and 15 major POS categories for long words. Therefore, a in Eq. 
(1) can be one of 15 tags from 0 to 14 for short words, and it can be one of 16 tags from 0 to 15 for long words. Table 2: Features. Number Feature Type Feature value (Number of value) (Short:Long) 1 String(0) (113,474:117,002) 2 String(-1) (17,064:32,037) 3 Substring(0)(Left1) (2,351:2,375) 4 Substring(0)(Right1) (2,148:2,171) 5 Substring(0)(Left2) (30,684:31,456) 6 Substring(0)(Right2) (25,442:25,541) 7 Substring(-1)(Left1) (2,160:2,088) 8 Substring(-1)(Right1) (1,820:1,675) 9 Substring(-1)(Left2) (11,025:12,875) 10 Substring(-1)(Right2) (10,439:13,364) 11 Dic(0)(Major) Noun, Verb, Adjective, . . . Undefined (15:16) 12 Dic(0)(Minor) Common noun, Topic marker, Basic form. . . (75:71) 13 Dic(0)(Major&Minor) Noun&Common noun, Verb&Basic form, . . . (246:227) 14 Dic(-1)(Minor) Common noun, Topic marker, Basic form. . . (16:16) 15 POS(-1) Noun, Verb, Adjective, . . . (14:15) 16 Length(0) 1, 2, 3, 4, 5, 6 or more (6:6) 17 Length(-1) 1, 2, 3, 4, 5, 6 or more (6:6) 18 TOC(0)(Beginning) Kanji, Hiragana, Number, Katakana, Alphabet (5:5) 19 TOC(0)(End) Kanji, Hiragana, Number, Katakana, Alphabet (5:5) 20 TOC(0)(Transition) Kanji→Hiragana, Number→Kanji, Katakana→Kanji, . . . (25:25) 21 TOC(-1)(End) Kanji, Hiragana, Number, Katakana, Alphabet (5:5) 22 TOC(-1)(Transition) Kanji→Hiragana, Number→Kanji, Katakana→Kanji, . . . (16:15) 23 Boundary Bunsetsu(Beginning), Bunsetsu(End), Label(Beginning), Label(End), (4:4) 24 Comb(1,15) (74,602:59,140) 25 Comb(1,2,15) (141,976:136,334) 26 Comb(1,13,15) (78,821:61,813) 27 Comb(1,2,13,15) (156,187:141,442) 28 Comb(11,15) (209:230) 29 Comb(12,15) (733:682) 30 Comb(13,15) (1,549:1,397) 31 Comb(12,14) (730:675) The features we used with morpheme models in our experiments are listed in Table 2. Each feature consists of a type and a value, which are given in the rows of the table, and it corresponds to j in the function gi,j(a, b) in Eq. (1). The notations “(0)” and “(-1)” used in the feature-type column in Table 2 respectively indicate a target string and the morpheme to the left of it. The terms used in the table are basically as same as those that Uchimoto et al. used (Uchimoto et al., 2002). The main difference is the following one: Boundary: Bunsetsu boundaries and positional information of labels such as fillers. “(Beginning)” and “(End)” in Table 2 respectively indicate whether the left and right side of the target strings are boundaries. We used only those features that were found three or more times in the training corpus. 4.1.2 Features Used by a Chunking Model We used the following information as features on the target word: a word and its POS category, and the same information for the four closest words, the two on the left and the two on the right of the target word. Bigram and trigram words that included a target word plus bigram and trigram POS categories that included the target word’s POS category were used as features. In addition, bunsetsu boundaries as described in Section 4.1.1 were used. For example, when a target word was “に” in Fig. 3, “素”, “解析”, “に”, “つ い”, “て”, “Suffix”, “Noun”, “PPP”, “Verb”, “PPP”, “解析& に”, “に& つい”, “素& 解析& に”, “に & つい& て”, “Noun&PPP”, “PPP&Verb”, “Suffix&Noun&PPP”, “PPP&Verb&PPP”, and “Bunsetsu(Beginning)” were used as features. 4.2 Results and Discussion 4.2.1 Experiments Using Morpheme Models Results of the morphological analysis obtained by using morpheme models are shown in Table 3 and 4. In these tables, OOV indicates Out-of-Vocabulary rates. 
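To make the chunking-model features of Section 4.1.2 concrete, the sketch below generates them for one target position. It is only an approximation of the authors' setup — the padding at sentence edges and the feature naming are our own choices — but it reproduces the feature list given above for the target word "に".

# Sketch of chunking-model feature extraction (Section 4.1.2).
# Input: short words as (surface, pos) plus externally supplied bunsetsu
# boundary tags; output: the feature strings for one target index.

def chunk_features(words, pos_tags, boundary_tags, i):
    feats = []
    pad_w = ["<PAD>"] * 2 + list(words) + ["<PAD>"] * 2      # edge padding (our choice)
    pad_p = ["<PAD>"] * 2 + list(pos_tags) + ["<PAD>"] * 2
    j = i + 2                                                # index into the padded lists
    # The target word/POS and the two words on each side.
    feats += pad_w[j - 2:j + 3]
    feats += pad_p[j - 2:j + 3]
    # Word and POS bigrams/trigrams that contain the target position.
    feats.append("&".join(pad_w[j - 1:j + 1]))               # w-1 & w0
    feats.append("&".join(pad_w[j:j + 2]))                   # w0 & w+1
    feats.append("&".join(pad_w[j - 2:j + 1]))               # w-2 & w-1 & w0
    feats.append("&".join(pad_w[j:j + 3]))                   # w0 & w+1 & w+2
    feats.append("&".join(pad_p[j - 1:j + 1]))
    feats.append("&".join(pad_p[j:j + 2]))
    feats.append("&".join(pad_p[j - 2:j + 1]))
    feats.append("&".join(pad_p[j:j + 3]))
    # Bunsetsu boundary information, taken as given in the corpus.
    if boundary_tags[i]:
        feats.append(boundary_tags[i])
    return feats

words = ["形態", "素", "解析", "に", "つい", "て"]
pos   = ["Noun", "Suffix", "Noun", "PPP", "Verb", "PPP"]
bnd   = [None, None, None, "Bunsetsu(Beginning)", None, None]
print(chunk_features(words, pos, bnd, 3))   # the target word is 'に'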
Shown in Table 3, OOV was calculated as the proportion of words not found in a dictionary to all words in the test corpus. In Table 4, OOV was calculated as the proportion of word and POS category pairs that were not found in a dictionary to all pairs in the test corpus. Recall is the percentage of morphemes in the test corpus for which the segmentation and major POS category were identified correctly. Precision is the percentage of all morphemes identified by the system that were identified correctly. The F-measure is defined by the following equation. F −measure = 2 × Recall × Precision Recall + Precision Table 3: Accuracies of word segmentation. Word Recall Precision F OOV Short 97.47% ( 61,444 63,037 ) 97.62% ( 61,444 62,945 ) 97.54 1.66% 99.23% ( 62,553 63,037 ) 99.11% ( 62,553 63,114 ) 99.17 0% Long 96.72% ( 50,095 51,796 ) 95.70% ( 50,095 52,346 ) 96.21 5.81% 99.05% ( 51,306 51,796 ) 98.58% ( 51,306 52,047 ) 98.81 0% Table 4: Accuracies of word segmentation and POS tagging. Word Recall Precision F OOV Short 95.72% ( 60,341 63,037 ) 95.86% ( 60,341 62,945 ) 95.79 2.64% 97.57% ( 61,505 63,037 ) 97.45% ( 61,505 63,114 ) 97.51 0% Long 94.71% ( 49,058 51,796 ) 93.72% ( 49,058 52,346 ) 94.21 6.93% 97.30% ( 50,396 51,796 ) 96.83% ( 50,396 52,047 ) 97.06 0% Tables 3 and 4 show that accuracies would improve significantly if no words were unknown. This indicates that all morphemes of the CSJ could be analyzed accurately if there were no unknown words. The improvements that we can expect by detecting unknown words and putting them into dictionaries are about 1.5 in F-measure for detecting word segments of short words and 2.5 for long words. For detecting the word segments and their POS categories, for short words we expect an improvement of about 2 in F-measure and for long words 3. Next, we discuss accuracies obtained when unknown words existed. The OOV for long words was 4% higher than that for short words. In general, the higher the OOV is, the more difficult detecting word segments and their POS categories is. However, the difference between accuracies for short and long words was about 1% in recall and 2% in precision, which is not significant when we consider that the difference between OOVs for short and long words was 4%. This result indicates that our morpheme models could detect both known and unknown words accurately, especially long words. Therefore, we investigated the recall of unknown words in the test corpus, and found that 55.7% (928/1,667) of short word segments and 74.1% (2,660/3,590) of long word segments were detected correctly. In addition, regarding unknown words, we also found that 47.5% (791/1,667) of short word segments plus their POS categories and 67.3% (2,415/3,590) of long word segments plus their POS categories were detected correctly. The recall of unknown words was about 20% higher for long words than for short words. We believe that this result mainly depended on the difference between short words and long words in terms of the definitions of compound words. A compound word is defined as one word when it is based on the definition of long words; however it is defined as two or more words when it is based on the definition of short words. Furthermore, based on the definition of short words, a division of compound words depends on its context. More information is needed to precisely detect short words than is required for long words. 
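The scores in Tables 3 and 4 follow directly from the raw counts given in parentheses (correct morphemes, morphemes in the test corpus, morphemes output by the system), so they are easy to recompute; the short sketch below does so for two of the rows as a sanity check.

# Recompute recall, precision and F-measure from raw counts, as in Tables 3 and 4.

def prf(correct, in_corpus, in_output):
    recall = 100.0 * correct / in_corpus       # correct / morphemes in the test corpus
    precision = 100.0 * correct / in_output    # correct / morphemes output by the system
    f = 2 * recall * precision / (recall + precision)
    return recall, precision, f

# Short-word segmentation with unknown words (first row of Table 3).
print("short: R=%.2f P=%.2f F=%.2f" % prf(61444, 63037, 62945))
# Long-word segmentation plus POS tagging with unknown words (Table 4).
print("long:  R=%.2f P=%.2f F=%.2f" % prf(49058, 51796, 52346))
# -> 97.47 / 97.62 / 97.54 and 94.71 / 93.72 / 94.21, matching the tables.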
Next, we extracted words that were detected by the morpheme model but were not found in a dictionary, and investigated the percentage of unknown words that were completely or partially matched to the extracted words by their context. This percentage was 77.6% (1,293/1,667) for short words, and 80.6% (2,892/3,590) for long words. Most of the remaining unknown words that could not be detected by this method are compound words. We expect that these compounds can be detected during the manual examination of those words for which the morpheme model estimated a low probability, as will be shown later.

The recall of unknown words was lower than that of known words, and the accuracy of automatic morphological analysis was lower than that of manual morphological analysis. As previously stated, to improve the accuracy of the whole corpus we take a semi-automatic approach. We assume that the smaller the probability a model assigns to an output morpheme, the more likely that morpheme is wrong, and we therefore examine output morphemes in ascending order of their probabilities. We investigated how much the accuracy of the whole corpus would increase in this way.

Fig. 5 shows the relationship between the percentage of output morphemes whose probabilities exceed a threshold and their precision. In this figure, "short without UKW", "long without UKW", "short with UKW", and "long with UKW" represent the precision for short words detected assuming there were no unknown words, the precision for long words detected assuming there were no unknown words, the precision of short words including unknown words, and the precision of long words including unknown words, respectively.

Figure 5: Partial analysis. (Precision (%) plotted against output rate (%) for "short_without_UKW", "long_without_UKW", "short_with_UKW", and "long_with_UKW"; plot not reproduced here.)

As the output rate on the horizontal axis increases, the number of low-probability morphemes increases. In all graphs, precision decreases monotonically as the output rate increases. This means that tagging errors can be revised effectively when morphemes are examined in ascending order of their probabilities.

Next, we investigated the relationship between the percentage of morphemes examined manually and the precision obtained after detected errors were revised. The result is shown in Fig. 6. Precision here represents the precision of word segmentation and POS tagging. If unknown words were detected and put into a dictionary by the method described in the fourth paragraph of this section, the graph line for short words would be drawn between the graph lines "short without UKW" and "short with UKW", and the graph line for long words would be drawn between the graph lines "long without UKW" and "long with UKW". Based on the test results, we can expect better than 99% precision for short words and better than 97% precision for long words in the whole corpus when we examine 10% of output morphemes in ascending order of their probabilities.

Figure 6: Relationship between the percentage of morphemes examined manually and the precision obtained after revising detected errors (when morphemes with probabilities under the threshold and their adjacent morphemes are examined). (Precision (%) plotted against examined morpheme rate (%); plot not reproduced here.)
Figure 7: Relationship between the percentage of morphemes examined manually and the error rate of the examined morphemes. (Error rate among examined morphemes (%) plotted against examined morpheme rate (%); plot not reproduced here.)

Finally, we investigated the relationship between the percentage of morphemes examined manually and the error rate among all of the examined morphemes. The result is shown in Fig. 7. We found that about 50% of the examined morphemes would be found to be errors at the beginning of the examination, and that about 20% would be found to be errors by the time 10% of the whole corpus had been examined. When unknown words were detected and put into a dictionary, the error rate decreased; even so, over 10% of the examined morphemes would be found to be errors.

4.2.2 Experiments Using Chunking Models

Results of the morphological analysis of long words obtained by using a chunking model are shown in Tables 5 and 6.

Table 5: Accuracies of long word segmentation.
Model          Recall                    Precision                 F
Morph          96.72% (50,095/51,796)    95.70% (50,095/52,346)    96.21
Chunk          97.65% (50,580/51,796)    97.41% (50,580/51,911)    97.54
Chunk          98.84% (51,193/51,796)    98.66% (51,193/51,888)    98.75

Table 6: Accuracies of long word segmentation and POS tagging.
Model          Recall                    Precision                 F
Morph          94.71% (49,058/51,796)    93.72% (49,058/52,346)    94.21
Chunk          95.59% (49,513/51,796)    95.38% (49,513/51,911)    95.49
Chunk          98.56% (51,051/51,796)    98.39% (51,051/51,888)    98.47
Chunk w/o TR   92.61% (47,968/51,796)    92.40% (47,968/51,911)    92.51
TR: transformation rules

The first and second lines show the respective accuracies obtained when the OOVs were 5.81% and 6.93%. The third lines show the accuracies obtained when we assumed that the OOV for short words was 0% and that there were no errors in detecting short word segments and their POS categories. The fourth line in Table 6 shows the accuracy obtained when the chunking model was used without transformation rules.

The accuracy obtained by using the chunking model was one point higher in F-measure than that obtained by using the morpheme model, and it was very close to the accuracy achieved for short words. This result indicates that the errors newly produced by applying the chunking model to the results obtained for short words were slight, or that errors in the results obtained for short words were corrected by applying the chunking model. It also shows that we can achieve good accuracy for long words by applying a chunking model even if we do not detect unknown long words and do not put them into a dictionary. If we could improve the accuracy for short words, the accuracy for long words would improve as well: the third lines in Tables 5 and 6 show that the accuracy would then improve to over 98 points in F-measure. The fourth line in Table 6 shows that the transformation rules contributed significantly to improving the accuracy.

Considering the results obtained in this section and in Section 4.2.1, we are now detecting short and long word segments and their POS categories in the whole corpus by using the following steps (a minimal sketch of this pipeline is given below):
1. Automatically detect, and manually examine, unknown words for short words.
2. Improve the accuracy for short words in the whole corpus by manually examining short words in ascending order of the probabilities estimated by the morpheme model.
3. Apply the chunking model to the short words to detect long word segments and their POS categories.
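The control flow of these three steps can be sketched as follows. The analysis models are stand-ins with hypothetical names (the actual morpheme and chunking models are the ME-based ones described above), and the 10% examination rate is simply the setting discussed in Section 4.2.1.

# Schematic of the three-step corpus annotation procedure above.
# Only the control flow is illustrated; the models are placeholders.

def annotate_corpus(sentences, morpheme_model, chunking_model, examine_rate=0.10):
    # Steps 1-2: analyse short words and collect them with their probabilities.
    analyzed = []                                  # [morpheme, pos, probability]
    for sentence in sentences:
        for morpheme, pos, prob in morpheme_model(sentence):
            analyzed.append([morpheme, pos, prob])

    # Manually examine the least confident fraction, in ascending order of
    # probability (Section 4.2.1 suggests ~10% already gives >99%/97% precision).
    by_confidence = sorted(analyzed, key=lambda m: m[2])
    for item in by_confidence[:int(examine_rate * len(analyzed))]:
        manually_correct(item)                     # human in the loop (stub)

    # Step 3: apply the chunking model to the (corrected) short words, in order.
    short_words = [(m[0], m[1]) for m in analyzed]
    return chunking_model(short_words)

def manually_correct(item):
    """Stub: an annotator would fix the segmentation or POS tag in place."""
    pass

Because the ranked list holds references to the same records as the original list, any manual correction made in the second step is visible when the chunking model is applied in the final step.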
As future work, we are planning to use an active learning method such as that proposed by ArgamonEngelson and Dagan (Argamon-Engelson and Dagan, 1999) to more effectively improve the accuracy of the whole corpus. 5 Conclusion This paper described two methods for detecting word segments and their POS categories in a Japanese spontaneous speech corpus, and describes how to tag a large spontaneous speech corpus accurately by using the two methods. The first method is used to detect any type of word segments. We found that about 80% of unknown words could be semiautomatically detected by using this method. The second method is used when there are several definitions for word segments and their POS categories, and when one type of word segments includes another type of word segments. We found that better accuracy could be achieved by using both methods than by using only the first method alone. Two types of word segments, short words and long words, are found in a large spontaneous speech corpus, CSJ. We found that the accuracy of automatic morphological analysis for the short words was 95.79 in F-measure and for long words, 95.49. Although the OOV for long words was much higher than that for short words, almost the same accuracy was achieved for both types of words by using our proposed methods. We also found that we can expect more than 99% of precision for short words, and 97% for long words found in the whole corpus when we examined 10% of output morphemes in ascending order of their probabilities as estimated by the proposed models. In our experiments, only the information contained in the corpus was used; however, more appropriate linguistic knowledge than that could be used, such as morphemic and syntactic rules. We would like to investigate whether such linguistic knowledge contributes to improved accuracy. References S. Argamon-Engelson and I. Dagan. 1999. Committee-Based Sample Selection For Probabilistic Classifiers. Artificial Intelligence Research, 11:335–360. A. L. Berger, S. A. Della Pietra, and V. J. Della Pietra. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39–71. E. T. Jaynes. 1957. Information Theory and Statistical Mechanics. Physical Review, 106:620–630. E. T. Jaynes. 1979. Where do we Stand on Maximum Entropy? In R. D. Levine and M. Tribus, editors, The Maximum Entropy Formalism, page 15. M. I. T. Press. H. Kashioka, S. G. Eubank, and E. W. Black. 1997. DecisionTree Morphological Analysis Without a Dictionary for Japanese. In Proceedings of NLPRS, pages 541–544. K. Maekawa, H. Koiso, S. Furui, and H. Isahara. 2000. Spontaneous Speech Corpus of Japanese. In Proceedings of LREC, pages 947–952. S. Mori and M. Nagao. 1996. Word Extraction from Corpora and Its Part-of-Speech Estimation Using Distributional Analysis. In Proceedings of COLING, pages 1119–1122. M. Nagata. 1999. A Part of Speech Estimation Method for Japanese Unknown Words Using a Statistical Model of Morphology and Context. In Proceedings of ACL, pages 277– 284. K. Uchimoto, S. Sekine, and H. Isahara. 2001. The Unknown Word Problem: a Morphological Analysis of Japanese Using Maximum Entropy Aided by a Dictionary. In Proceedings of EMNLP, pages 91–99. K. Uchimoto, C. Nobata, A. Yamada, S. Sekine, and H. Isahara. 2002. Morphological Analysis of The Spontaneous Speech Corpus. In Proceedings of COLING, pages 1298–1302.
2003
61
Learning to predict pitch accents and prosodic boundaries in Dutch Erwin Marsi1, Martin Reynaert1, Antal van den Bosch1, Walter Daelemans2, V´eronique Hoste2 1 Tilburg University ILK / Computational Linguistics and AI Tilburg, The Netherlands {e.c.marsi,reynaert, antal.vdnbosch}@uvt.nl 2 University of Antwerp, CNTS Antwerp, Belgium {daelem,hoste}@uia.ua.ac.be Abstract We train a decision tree inducer (CART) and a memory-based classifier (MBL) on predicting prosodic pitch accents and breaks in Dutch text, on the basis of shallow, easy-to-compute features. We train the algorithms on both tasks individually and on the two tasks simultaneously. The parameters of both algorithms and the selection of features are optimized per task with iterative deepening, an efficient wrapper procedure that uses progressive sampling of training data. Results show a consistent significant advantage of MBL over CART, and also indicate that task combination can be done at the cost of little generalization score loss. Tests on cross-validated data and on held-out data yield F-scores of MBL on accent placement of 84 and 87, respectively, and on breaks of 88 and 91, respectively. Accent placement is shown to outperform an informed baseline rule; reliably predicting breaks other than those already indicated by intra-sentential punctuation, however, appears to be more challenging. 1 Introduction Any text-to-speech (TTS) system that aims at producing understandable and natural-sounding output needs to have on-board methods for predicting prosody. Most systems start with generating a prosodic representation at the linguistic or symbolic level, followed by the actual phonetic realization in terms of (primarily) pitch, pauses, and segmental durations. The first step involves placing pitch accents and inserting prosodic boundaries at the right locations (and may involve tune choice as well). Pitch accents correspond roughly to pitch movements that lend emphasis to certain words in an utterance. Prosodic breaks are audible interruptions in the flow of speech, typically realized by a combination of a pause, a boundary-marking pitch movement, and lengthening of the phrase-final segments. Errors at this level may impede the listener in the correct understanding of the spoken utterance (Cutler et al., 1997). Predicting prosody is known to be a hard problem that is thought to require information on syntactic boundaries, syntactic and semantic relations between constituents, discourse-level knowledge, and phonological well-formedness constraints (Hirschberg, 1993). However, producing all this information – using full parsing, including establishing semanto-syntactic relations, and full discourse analysis – is currently infeasible for a realtime system. Resolving this dilemma has been the topic of several studies in pitch accent placement (Hirschberg, 1993; Black, 1995; Pan and McKeown, 1999; Pan and Hirschberg, 2000; Marsi et al., 2002) and in prosodic boundary placement (Wang and Hirschberg, 1997; Taylor and Black, 1998). The commonly adopted solution is to use shallow information sources that approximate full syntactic, semantic and discourse information, such as the words of the text themselves, their part-of-speech tags, or their information content (in general, or in the text at hand), since words with a high (semantic) information content or load tend to receive pitch accents (Ladd, 1996). 
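One of the shallow cues mentioned here, the information content of a word, can be estimated from nothing more than corpus frequencies (it is defined as IC(w) = -log(P(w)) in Section 2.2 below). The sketch that follows is our own illustration: the choice of logarithm base is ours, while the treatment of unseen words as hapax legomena follows the description given later in the paper.

import math
from collections import Counter

def information_content(corpus_tokens):
    """Return a function w -> IC(w), with P(w) estimated as the relative
    frequency of w in a large corpus; unseen words get the hapax value."""
    counts = Counter(corpus_tokens)
    total = float(len(corpus_tokens))
    hapax_ic = -math.log(1.0 / total)
    def ic(w):
        return -math.log(counts[w] / total) if w in counts else hapax_ic
    return ic

# Tiny toy corpus built from words of the example sentence in Table 1.
corpus = "de molen bij de scheepswerf en de bomen rondom de molen".split()
ic = information_content(corpus)
print(round(ic("de"), 2), round(ic("scheepswerf"), 2), round(ic("onbekend"), 2))
# Frequent function words get low IC; rare and unseen words get the highest IC.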
Within this research paradigm, we investigate pitch accent and prosodic boundary placement for Dutch, using an annotated corpus of newspaper text, and machine learning algorithms to produce classifiers for both tasks. We address two questions that have been left open thus far in previous work: 1. Is there an advantage in inducing decision trees for both tasks, or is it better to not abstract from individual instances and use a memory-based k-nearest neighbour classifier? 2. Is there an advantage in inducing classifiers for both tasks individually, or can both tasks be learned together. The first question deals with a key difference between standard decision tree induction and memorybased classification: how to deal with exceptional instances. Decision trees, CART (Classification and Regression Tree) in particular (Breiman et al., 1984), have been among the first successful machine learning algorithms applied to predicting pitch accents and prosodic boundaries for TTS (Hirschberg, 1993; Wang and Hirschberg, 1997). Decision tree induction finds, through heuristics, a minimallysized decision tree that is estimated to generalize well to unseen data. Its minimality strategy makes the algorithm reluctant to remember individual outlier instances that would take long paths in the tree: typically, these are discarded. This may work well when outliers do not reoccur, but as demonstrated by (Daelemans et al., 1999), exceptions do typically reoccur in language data. Hence, machine learning algorithms that retain a memory trace of individual instances, like memory-based learning algorithms based on the k-nearest neighbour classifier, outperform decision tree or rule inducers precisely for this reason. Comparing the performance of machine learning algorithms is not straightforward, and deserves careful methodological consideration. For a fair comparison, both algorithms should be objectively and automatically optimized for the task to be learned. This point is made by (Daelemans and Hoste, 2002), who show that, for tasks such as word-sense disambiguation and part-of-speech tagging, tuning algorithms in terms of feature selection and classifier parameters gives rise to significant improvements in performance. In this paper, therefore, we optimize both CART and MBL individually and per task, using a heuristic optimization method called iterative deepening. The second issue, that of task combination, stems from the intuition that the two tasks have a lot in common. For instance, (Hirschberg, 1993) reports that knowledge of the location of breaks facilitates accent placement. Although pitch accents and breaks do not consistently occur at the same positions, they are to some extent analogous to phrase chunks and head words in parsing: breaks mark boundaries of intonational phrases, in which typically at least one accent is placed. A learner may thus be able to learn both tasks at the same time. Apart from the two issues raised, our work is also practically motivated. Our goal is a good algorithm for real-time TTS. This is reflected in the type of features that we use as input. These can be computed in real-time, and are language independent. We intend to show that this approach goes a long way towards generating high-quality prosody, casting doubt on the need for more expensive sentence and discourse analysis. The remainder of this paper has the following structure. 
In Section 2 we define the task, describe the data, and the feature generation process which involves POS tagging, syntactic chunking, and computing several information-theoretic metrics. Furthermore, a brief overview is given of the algorithms we used (CART and MBL). Section 3 describes the experimental procedure (ten-fold iterative deepening) and the evaluation metrics (F-scores). Section 4 reports the results for predicting accents and major prosodic boundaries with both classifiers. It also reports their performance on held-out data and on two fully independent test sets. The final section offers some discussion and concluding remarks. 2 Task definition, data, and machine learners To explore the generalization abilities of machine learning algorithms trained on placing pitch accents and breaks in Dutch text, we define three classification tasks: Pitch accent placement – given a word form in its sentential context, decide whether it should be accented. This is a binary classification task. Break insertion – given a word form in its sentential context, decide whether it should be followed by a boundary. This is a binary classification task. Combined accent placement and break insertion – given a word form in its sentential context, decide whether it should be accented and whether it should be followed by a break. This is a four-class task: no accent and no break; an accent and no break; no accent and a break; an accent and a break. Finer-grained classifications could be envisioned, e.g. predicting the type of pitch accent, but we assert that finer classification, apart from being arguably harder to annotate, could be deferred to later processing given an adequate level of precision and recall on the present task. In the next subsections we describe which data we selected for annotation and how we annotated it with respect to pitch accents and prosodic breaks. We then describe the implementation of memory-based learning applied to the task. 2.1 Prosodic annotation of the data The data used in our experiments consists of 201 articles from the ILK corpus (a large collection of Dutch newspaper text), totalling 4,493 sentences and 58,097 tokens (excluding punctuation). We set apart 10 articles, containing 2,905 tokens (excluding punctuation) as held-out data for testing purposes. As a preprocessing step, the data was tokenised by a rule-based Dutch tokeniser, splitting punctuation from words, and marking sentence endings. The articles were then prosodically annotated, without overlap, by four different annotators, and were corrected in a second stage, again without overlap, by two corrector-annotators. The annotators’ task was to indicate the locations of accents and/or breaks that they preferred. They used a custom annotation tool which provided feedback in the form of synthesized speech. In total, 23,488 accents were placed, which amounts to roughly one accent in two and a half words. 8627 breaks were marked; 4601 of these were sentence-internal breaks; the remainder consisted of breaks at the end of sentences. 2.2 Generating shallow features The 201 prosodically-annotated articles were subsequently processed through the following 15 feature construction steps, each contributing one feature per word form token. An excerpt of the annotated data with all generated symbolic and numeric1 features is presented in Table 1. Word forms (Wrd) – The word form tokens form the central unit to which other features are added. 
Pre- and post-punctuation – All punctuation marks in the data are transferred to two separate features: a pre-punctuation feature (PreP) for punctuation marks such as quotation marks appearing before the token, and a post-punctuation feature (PostP) for punctuation marks such as periods, commas, and question marks following the token. Part-of-speech (POS) tagging – We used MBT version 1.0 (Daelemans et al., 1996) to develop a memory-based POS tagger trained on the Eindhoven corpus of written Dutch, which does not overlap with our base data. We split up the full POS tags into two features, the first (PosC) containing the main POS category, the second (PosF) the POS subfeatures. Diacritical accent – Some tokens bear an orthographical diacritical accent put there by the author to particularly emphasize the token in question. These accents were stripped off the accented letter, and transferred to a binary feature (DiA). NP and VP chunking (NpC & VpC) – An approximation of the syntactic structure is provided by simple noun phrase and verb phrase chunkers, which take word and POS information as input and are based on a small number of manually written regular expressions. Phrase boundaries are encoded per word using three tags: ‘B’ for chunk-initial words, ‘I’ for chunk-internal words, and ‘O’ for words outside chunks. The NPs are identified according to the base principle of one semantic head per chunk (nonrecursive, base NPs). VPs include only verbs, not the verbal complements. IC – Information content (IC) of a word w is given by IC(w) = −log(P(w)), where P(w) is esti1Numeric features were rounded off to two decimal points, where appropriate. mated by the observed frequency of w in a large disjoint corpus of about 1.7 GB of unannotated Dutch text garnered from various sources. Word forms not in this corpus were given the highest IC score, i.e. the value for hapax legomenae (words that occur once). Bigram IC – IC on bigrams (BIC) was calculated for the bigrams (pairs of words) in the data, according to the same formula and corpus material as for unigram IC. TF*IDF – The TF*IDF metric (Salton, 1989) estimates the relevance of a word in a document. Document frequency counts for all token types were obtained from a subset of the same corpus as used for IC calculations. TF*IDF and IC (previous two features) have been succesfully tested as features for accent prediction by (Pan and McKeown, 1999), who assert that IC is a more powerful predictor than TF*IDF. Phrasometer – The phrasometer feature (PM) is the summed log-likelihood of all n-grams the word form occurs in, with n ranging from 1 to 25, and computed in an iterative growth procedure: loglikelihoods of n + 1-grams were computed by expanding all stored n-grams one word to the left and to the right; only the n + 1-grams with higher log-likelihood than that of the original n-gram are stored. Computations are based on the complete ILK Corpus. Distance to previous occurrence – The distance, counted in the number of tokens, to previous occurrence of a token within the same article (D2P). Unseen words were assigned the arbitrary high default distance of 9999. Distance to sentence boundaries – Distance of the current token to the start of the sentence (D2S) and to the end of the sentence (D2E), both measured as a proportion of the total sentence length measured in tokens. 2.3 CART: Classification and regression trees CART (Breiman et al., 1984) is a statistical method to induce a classification or regression tree from a given set of instances. 
An instance consists of a fixed-length vector of n feature-value pairs, and an information field containing the classification of that particular feature-value vector. Each node in the CART tree contains a binary test on some categorical or numerical feature in the input vector. In the case of classification, the leaves contain the most likely class. The tree building algorithm starts by selecting the feature test that splits the data in such a way that the mean impurity (entropy times the number of instances) of the two partitions is minimal. The algorithm continues to split each partition recursively until some stop criterion is met (e.g. a minimal number of instances in the partition). Alternatively, a small stop value can be used to build a tree that is probably overfitted, but is then pruned back to where it best matches some amount of held-out data. In our experiments, we used the CART implementation that is part of the Edinburgh Speech Tools (Taylor et al., 1999). 2.4 Memory-based learning Memory-based learning (MBL), also known as instance-based, example-based, or lazy learning (Stanfill and Waltz, 1986; Aha et al., 1991), is a supervised inductive learning algorithm for learning classification tasks. Memory-based learning treats a set of training instances as points in a multidimensional feature space, and stores them as such in an instance base in memory (rather than performing some abstraction over them). After the instance base is stored, new (test) instances are classified by matching them to all instances in memory, and by calculating with each match the distance, given by a distance function between the new instance X and the memory instance Y . Cf. (Daelemans et al., 2002) for details. Classification in memorybased learning is performed by the k-NN algorithm (Fix and Hodges, 1951; Cover and Hart, 1967) that searches for the k ‘nearest neighbours’ according to the distance function. The majority class of the k nearest neighbours then determines the class of the new case. In our k-NN implementation2, equidistant neighbours are taken as belonging to the same k, so this implementation is effectively a knearest distance classifier. 3 Optimization by iterative deepening Iterative deepening (ID) is a heuristic search algorithm for the optimization of algorithmic parameter 2All experiments with memory-based learning were performed with TiMBL, version 4.3 (Daelemans et al., 2002). 
Wrd PreP PostP PosC PosF DiA NpC VpC IC BIC Tf*Idf PM D2P D2S D2E A B AB De = = Art bep,zijdofmv,neut 0 B O 2.11 5.78 0.00 4 9999 0.00 0.94 - bomen = = N soort,mv,neut 0 I O 4.37 7.38 0.16 4 17 0.06 0.89 A Arondom = = Prep voor 0 O O 4.58 5.09 0.04 4 17 0.11 0.83 - de = = Art bep,zijdofmv,neut 0 B O 1.31 5.22 0.00 5 20 0.17 0.78 - molen = = N soort,ev,neut 0 I O 5.00 7.50 0.18 5 9 0.22 0.72 A Abij = = Prep voor 0 O O 2.50 3.04 0.00 6 9999 0.28 0.67 - de = = Art bep,zijdofmv,neut 0 B O 1.31 6.04 0.00 6 3 0.33 0.61 - scheepswerf = = N soort,ev,neut 0 I O 5.63 8.02 0.03 4 9999 0.39 0.56 - Verolme = = N eigen,ev,neut 0 I O 6.38 7.59 0.05 0 9999 0.44 0.50 A Amoeten = = V trans,ott,3,ev 0 B O 2.99 6.77 0.01 4 9999 0.61 0.33 - verkassen = , V trans,inf 0 I O 5.75 5.99 0.02 4 9999 0.67 0.28 A B AB vindt = = V trans,ott,3,ev 0 O B 3.51 8.50 0.00 6 9999 0.72 0.22 - molenaar = = N soort,ev,neut 0 B O 5.95 8.50 0.05 0 9999 0.78 0.17 - Wijbrand = = N eigen,ev,neut 0 I O 7.89 8.50 0.11 0 38 0.83 0.11 A ATable 1: Symbolic and numerical features and class for the sentence De bomen rondom de scheepswerf Verolme moeten verkassen, vindt molenaar Wijbrandt. ‘Miller Wijbrand thinks that the trees surrounding the mill near shipyard Verolme have to relocate.’ and feature selection, that combines classifier wrapping (using the training material internally to test experimental variants) (Kohavi and John, 1997) with progressive sampling of training material (Provost et al., 1999). We start with a large pool of experiments, each with a unique combination of input features and algorithmic parameter settings. In the first step, each attempted setting is applied to a small amount of training material and tested on a fixed amount of held-out data (which is a part of the full training set). Only the best settings are kept; all others are removed from the pool of competing settings. In subsequent iterations, this step is repeated, exponentially decreasing the number of settings in the pool, while at the same time exponentially increasing the amount of training material. The idea is that the increasing amount of time required for training is compensated by running fewer experiments, in effect keeping processing time approximately constant across iterations. This process terminates when only the single best experiment is left (or, the n best experiments). This ID procedure can in fact be embedded in a standard 10-fold cross-validation procedure. In such a 10-fold CV ID experiment, the ID procedure is carried out on the 90% training partition, and the resulting optimal setting is tested on the remaining 10% test partition. The average score of the 10 optimized folds can then be considered, as that of a normal 10fold CV experiment, to be a good estimation of the performance of a classifier optimized on the full data set. For current purposes, our specific realization of this general procedure was as follows. We used folds of approximately equal size. Within each ID experiment, the amount of held-out data was approximately 5%; the initial amount of training data was 5% as well. Eight iterations were performed, during which the number of experiments was decreased, and the amount of training data was increased, so that in the end only the 3 best experiments used all available training data (i.e. the remaining 95%). Increasing the training data set was accomplished by random sampling from the total of training data available. 
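Schematically, the iterative deepening procedure looks as follows. This is our reading of the description above rather than the authors' code: the halving/doubling schedule is illustrative, the evaluation function (train a classifier with the given setting on the sample and return its F-score on the held-out part) is left abstract, and all names are ours.

import random

def iterative_deepening(settings_pool, train_data, heldout_data, evaluate,
                        iterations=8, n_final=3):
    """Wrapper search with progressive sampling: prune the pool of candidate
    settings while exponentially growing the training sample."""
    pool = list(settings_pool)
    sample_size = max(1, len(train_data) // 20)      # start from roughly 5% of the data
    for step in range(iterations):
        sample = random.sample(train_data, min(sample_size, len(train_data)))
        # Rank every remaining setting by its score on the fixed held-out part.
        ranked = sorted(pool, key=lambda s: evaluate(s, sample, heldout_data),
                        reverse=True)
        # Exponential schedule: halve the pool, double the sample (illustrative rates).
        pool = ranked[:max(n_final, len(ranked) // 2)]
        sample_size *= 2
    # The final candidates are compared once more on all available training data.
    return max(pool, key=lambda s: evaluate(s, train_data, heldout_data))

# Tiny demonstration with a fake evaluation function whose optimum is setting 7.
demo_settings = list(range(100))
demo_eval = lambda setting, train, heldout: -abs(setting - 7)
print(iterative_deepening(demo_settings, list(range(1000)), None, demo_eval))   # -> 7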
Selection of the best experiments was based on their F-score (van Rijsbergen, 1979) on the target class (accent or break). F-score, the harmonic mean of precision and recall, is chosen since it directly evaluates the tasks (placement of accents or breaks), in contrast with classification accuracy (the percentage of correctly classified test instances) which is biased to the majority class (to place no accent or break). Moreover, accuracy masks relevant differences between certain inappropriate classifiers that do not place accents or breaks, and better classifiers that do place them, but partly erroneously. The initial pool of experiments was created by systematically varying feature selection (the input features to the classifier) and the classifier settings (the parameters of the classifiers). We restricted these selections and settings within reasonable bounds to keep our experiments computationally feasible. In particular, feature selection was limited to varying the size of the window that was used to model the local context of an instance. A uniform window (i.e. the same size for all features) was applied to all features except DiA, D2P, D2S, and D2E. Its size (win) could be 1, 3, 5, 7, or 9, where win = 1 implies no modeling of context, whereas win = 9 means that during classification not only the features of the current instance are taken into account, but also those of the preceding and following four instances. For CART, we varied the following parameter values, resulting in a first ID step with 480 experiments: • the minimum number of examples for leaf nodes (stop): 1, 10, 25, 50, and 100 • the number of partitions to split a float feature range into (frs): 2, 5, 10, and 25 • the percentage of training material held out for pruning (held-out): 0, 5, 10, 15, 20, and 25 (0 implies no pruning) For MBL, we varied the following parameter values, which led to 1184 experiments in the first ID step: • the number of nearest neighbours (k): 1, 4, 7, 10, 13, 16, 19, 22, 25, and 28 • the type of feature weighting: Gain Ratio (GR), and Shared Variance (SV) • the feature value similarity metric: Overlap, or Modified Value Difference Metric (MVDM) with back-off to Overlap at value frequency tresholds 1 (L=1, no back-off), 2, and 10 • the type of distance weighting: None, Inverse Distance, Inverse Linear Distance, and Exponential Decay with α = 1.0 (ED1) and α = 4.0 (ED4) 4 Results 4.1 Tenfold iterative deepening results We first determined two sharp, informed baselines; see Table 2. The informed baseline for accent placement is based on the content versus function word distinction, commonly employed in TTS systems (Taylor and Black, 1998). We refer to this baseline as CF-rule. It is constructed by accenting all content words, while leaving all function words (determiners, prepositions, conjunctions/complementisers and auxiliaries) unaccented. The required word class information is obtained from the POS tags. The baseline for break placement, henceforth PUNC-rule, relies solely on punctuation. 
A break is inserted after any sequence of punctuation symbols containing one Target : Method : Prec : Rec : F : Accent CF-rule 66.7 94.9 78.3 CART 78.6 ±2.8 85.7 ±1.1 82.0 ±1.7 MBL 80.0 ±2.7 86.6 ±1.4 83.6 ±1.6∗ CARTC 78.7 ±3.0 85.6 ±0.8 82.0 ±1.6 MBLC 81.0 ±2.7 86.1 ±1.1 83.4 ±1.5∗ Break PUNC-rule 99.2 75.7 85.9 CART 93.1 ±1.5 82.2 ±3.0 87.3 ±1.5 MBL 95.1 ±1.4 81.9 ±2.8 88.0 ±1.5∗ CARTC 94.5 ±0.8 80.2 ±3.1 86.7 ±1.6 MBLC 95.7 ±1.1 80.7 ±3.1 87.6 ±1.7∗ Table 2: Precision, recall, and F-scores on accent, break and combined prediction by means of CART and MBL, for baselines and for average results over 10 folds of the Iterative Deepening experiment; a ∗indicates a significant difference (p < 0.01) between CART and MBL according to a paired t-test. Superscript C refers to the combined task. or more characters from the set {,!?:;()}. It should be noted that both baselines are simple rule-based algorithms that have been manually optimized for the current training set. They perform well above chance level, and pose a serious challenge to any ML approach. From the results displayed in Table 2, the following can be concluded. First, MBL attains the highest F-scores on accent placement, 83.6, and break placement, 88.0. It does so when trained on the ACCENT and BREAK tasks individually. On these tasks, MBL performs significantly better than CART (paired ttests yield p < 0.01 for both differences). Second, the performances of MBL and CART on the combined task, when split in F-scores on accent and break placement, are rather close to those on the accent and break tasks. For both MBL and CART, the scores on accent placement as part of the combined task versus accent placement in isolation are not significantly different. For break insertion, however, a small but significant drop in performance can be seen with MBL (p < 0.05) and CART (p < 0.01) when it is performed as part of the COMBINED task. As is to be expected, the optimal feature selections and classifier settings obtained by iterative deepening turned out to vary over the ten folds for both MBL and CART. Table 3 lists the settings producing the best F-score on accents or breaks. A window of 7 (i.e. the features of the three preceding and following word form tokens) is used by CART and MBL for accent placement, and also for break insertion by CART, whereas MBL uses a window of Target: Method: Setting: Accent CART win=7, stop=50, frs=5, held-out=5 MBL win=7, MVDM with L=5, k=25, GR, ED4 Break CART win=7, stop=25, frs=2, held-out=5 MBL win=3, MVDM with L=2, k=28, GR, ED4 Table 3: Optimal parameter settings for CART and MBL with respect to accent and break prediction just 3. Both algorithms (stop in CART, and k in MBL) base classifications on minimally around 25 instances. Furthermore, MBL uses the Gain Ratio feature weighting and Exponential Decay distance weighting. Although no pruning was part of the Iterative Deepening experiment, CART prefers to hold out 5% of its training material to prune the decision tree resulting from the remaining 95%. 4.2 External validation We tested our optimized approach on our held-out data of 10 articles (2,905 tokens), and on an independent test corpus (van Herwijnen and Terken, 2001). The latter contains two types of text: 2 newspaper texts (55 sentences, 786 words excluding punctuation), and 17 email messages (70 sentences, 1133 words excluding punctuation). This material was annotated by 10 experts, who were asked to indicate the preferred accents and breaks. 
For the purpose of evaluation, words were assumed to be accented if they received an accent by at least 7 of the annotators. Furthermore, of the original four break levels annotated (i.e. no break, light, medium, or heavy ), only medium and heavy level breaks were considered to be a break in our evaluation. Table 4 lists the precision, recall, and F-scores obtained on the two tasks using the single-best scoring setting from the 10-fold CV ID experiment per task. It can be seen that both CART and MBL outperformed the CF-rule baseline on our own held-out data and on the news and email texts, with similar margins as observed in our 10-fold CV ID experiment. MBL attains an Fscore of 86.6 on accents, and 91.0 on breaks; both are improvements over the cross-validation estimations. On breaks, however, both CART and MBL failed to improve on the PUNC-rule baseline; on the news and email texts they perform even worse. Inspecting MBLs output on these text, it turned out that MBL does emulate the PUNC-rule baseline, but that it places additional breaks at positions not Target : Test set Method : Prec : Rec : F : Accent Held-out CF-rule 73.5 94.8 82.8 CART 84.3 86.1 85.2 MBL 87.0 86.3 86.6 News CF-rule 52.2 92.9 66.9 CART 62.7 92.5 74.6 MBL 66.3 89.2 76.0 Email CF-rule 54.3 91.0 68.0 CART 66.8 88.5 76.1 MBL 71.0 88.5 78.8 Break Held-out PUNC-rule 99.5 83.7 90.9 CART 92.6 88.9 90.7 MBL 95.5 87.0 91.0 News PUNC-rule 98.8 93.1 95.9 CART 80.6 95.4 87.4 MBL 89.3 95.4 92.2 Email PUNC-rule 93.9 87.0 90.3 CART 81.6 90.2 85.7 MBL 83.0 91.1 86.8 Table 4: Precision, recall, and F-scores on accent and break prediction for our held-out corpus and two external corpora of news and email texts, using the best settings for CART and MBL as determined by the ID experiments. marked by punctuation. A considerable portion of these non-punctuation breaks is placed incorrectly – or at least different from what the annotators preferred – resulting in a lower precision that does not outweigh the higher recall. 5 Conclusion With shallow features as input, we trained machine learning algorithms on predicting the placement of pitch accents and prosodic breaks in Dutch text, a desirable function for a TTS system to produce synthetic speech with good prosody. Both algorithms, the memory-based classifier MBL and decision tree inducer CART, were automatically optimized by an Iterative Deepening procedure, a classifier wrapper technique with progressive sampling of training data. It was shown that MBL significantly outperforms CART on both tasks, as well as on the combined task (predicting accents and breaks simultaneously). This again provides an indication that it is advantageous to retain individual instances in memory (MBL) rather than to discard outlier cases as noise (CART). Training on both tasks simultaneously, in one model rather than divided over two, results in generalization accuracies similar to that of the individually-learned models (identical on accent placement, and slightly lower for break placement). This shows that learning one task does not seriously hinder learning the other. From a practical point of view, it means that a TTS developer can resort to one system for both tasks instead of two. Pitch accent placement can be learned from shallow input features with fair accuracy. Break insertion seems a harder task, certainly in view of the informed punctuation baseline PUNC-rule. 
Especially the precision of the insertion of breaks at other points than those already indicated by commas and other ‘pseudo-prosodic’ orthographic mark up is hard. This may be due to the lack of crucial information in the shallow features, to inherent limitations of the ML algorithms, but may as well point to a certain amount of optionality or personal preference, which puts an upper bound on what can be achieved in break prediction (Koehn et al., 2000). We plan to integrate the placement of pitch accents and breaks in a TTS system for Dutch, which will enable the closed-loop annotation of more data using the TTS itself and on-line (active) learning. Moreover, we plan to investigate the perceptual cost of false insertions and deletions of accents and breaks in experiments with human listeners. Acknowledgements Our thanks go out to Olga van Herwijnen and Jacques Terken for the use of their TTS evaluation corpus. All research in this paper was funded by the Flemish-Dutch Committee (VNC) of the National Foundations for Research in the Netherlands (NWO) and Belgium (FWO). References D. W. Aha, D. Kibler, and M. Albert. 1991. Instance-based learning algorithms. Machine Learning, 6:37–66. A.W. Black. 1995. Comparison of algorithms for predicting pitch accent placement in English speech synthesis. In Proceedings of the Spring Meeting of the Acoustical Society of Japan. L. Breiman, J. Friedman, R. Ohlsen, and C. Stone. 1984. Classification and regression trees. Wadsworth International Group, Belmont, CA. C.J. van Rijsbergen. 1979. Information Retrieval. Buttersworth, London. T. M. Cover and P. E. Hart. 1967. Nearest neighbor pattern classification. Institute of Electrical and Electronics Engineers Transactions on Information Theory, 13:21–27. A. Cutler, D. Dahan, and W.A. Van Donselaar. 1997. Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40(2):141–202. W. Daelemans and V. Hoste. 2002. Evaluation of machine learning methods for natural language processing tasks. In Proceedings of LREC-2002, the third International Conference on Language Resources and Evaluation, pages 755– 760. W. Daelemans, J. Zavrel, P. Berck, and S. Gillis. 1996. MBT: A memory-based part of speech tagger generator. In E. Ejerhed and I. Dagan, editors, Proc. of Fourth Workshop on Very Large Corpora, pages 14–27. ACL SIGDAT. W. Daelemans, A. van den Bosch, and J. Zavrel. 1999. Forgetting exceptions is harmful in language learning. Machine Learning, Special issue on Natural Language Learning, 34:11–41. W. Daelemans, J. Zavrel, K. van der Sloot, and A. van den Bosch. 2002. TiMBL: Tilburg Memory Based Learner, version 4.3, reference guide. Technical Report ILK-0210, ILK, Tilburg University. E. Fix and J. L. Hodges. 1951. Discriminatory analysis— nonparametric discrimination; consistency properties. Technical Report Project 21-49-004, Report No. 4, USAF School of Aviation Medicine. J. Hirschberg. 1993. Pitch accent in context: Predicting intonational prominence from text. Artificial Intelligence, 63:305– 340. P. Koehn, S. Abney, J. Hirschberg, and M. Collins. 2000. Improving intonational phrasing with syntactic information. In ICASSP, pages 1289–1290. R. Kohavi and G. John. 1997. Wrappers for feature subset selection. Artificial Intelligence Journal, 97(1–2):273–324. D. R. Ladd. 1996. Intonational phonology. Cambridge University Press. E. Marsi, G.J. Busser, W. Daelemans, V. Hoste, M. Reynaert, and A. van den Bosch. 2002. 
Combining information sources for memory-based pitch accent placement. In Proceedings of the International Conference on Spoken Language Processing, ICSLP-2002, pages 1273–1276. S. Pan and J. Hirschberg. 2000. Modeling local context for pitch accent prediction. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, Hong Kong. S. Pan and K. McKeown. 1999. Word informativeness and automatic pitch accent modeling. In Proceedings of EMNLP/VLC’99, New Brunswick, NJ, USA. ACL. F. Provost, D. Jensen, and T. Oates. 1999. Efficient progressive sampling. In Proceedings of the Fifth International Conference on Knowledge Discovery and Data Mining, pages 23–32. G. Salton. 1989. Automatic text processing: The transformation, analysis, and retrieval of information by computer. Addison–Wesley, Reading, MA, USA. C. Stanfill and D. Waltz. 1986. Toward memory-based reasoning. Communications of the ACM, 29(12):1213–1228, December. P. Taylor and A. Black. 1998. Assigning phrase breaks from part-of-speech sequences. Computer Speech and Language, 12:99–117. P. Taylor, R. Caley, A. W. Black, and S. King, 1999. Edinburgh Speech Tools Library, System Documentation Edition 1.2. CSTR, University of Edinburgh. O. van Herwijnen and J. Terken. 2001. Evaluation of pros-3 for the assignment of prosodic structure, compared to assignment by human experts. In Proceedings Eurospeech 2001 Scandinavia, Vol.1, pages 529–532. M. Q. Wang and J. Hirschberg. 1997. Automatic classification of intonational phrasing boundaries. Computer Speech and Language, 6(2):175–196.
2003
62
Text Chunking by Combining Hand-Crafted Rules and Memory-Based Learning Seong-Bae Park Byoung-Tak Zhang School of Computer Science and Engineering Seoul National University Seoul 151-744, Korea {sbpark,btzhang}@bi.snu.ac.kr Abstract This paper proposes a hybrid of handcrafted rules and a machine learning method for chunking Korean. In the partially free word-order languages such as Korean and Japanese, a small number of rules dominate the performance due to their well-developed postpositions and endings. Thus, the proposed method is primarily based on the rules, and then the residual errors are corrected by adopting a memory-based machine learning method. Since the memory-based learning is an efficient method to handle exceptions in natural language processing, it is good at checking whether the estimates are exceptional cases of the rules and revising them. An evaluation of the method yields the improvement in F-score over the rules or various machine learning methods alone. 1 Introduction Text chunking has been one of the most interesting problems in natural language learning community since the first work of (Ramshaw and Marcus, 1995) using a machine learning method. The main purpose of the machine learning methods applied to this task is to capture the hypothesis that best determine the chunk type of a word, and such methods have shown relatively high performance in English (Kudo and Matsumoto, 2000; Zhang et. al, 2001). In order to do it, various kinds of information, such as lexical information, part-of-speech and grammatical relation, of the neighboring words is used. Since the position of a word plays an important role as a syntactic constraint in English, the methods are successful even with local information. However, these methods are not appropriate for chunking Korean and Japanese, because such languages have a characteristic of partially free wordorder. That is, there is a very weak positional constraint in these languages. Instead of positional constraints, they have overt postpositions that restrict the syntactic relation and composition of phrases. Thus, unless we concentrate on the postpositions, we must enlarge the neighboring window to get a good hypothesis. However, enlarging the window size will cause the curse of dimensionality (Cherkassky and Mulier, 1998), which results in the deficiency in the generalization performance. Especially in Korean, the postpositions and the endings provide important information for noun phrase and verb phrase chunking respectively. With only a few simple rules using such information, the performance of chunking Korean is as good as the rivaling other inference models such as machine learning algorithms and statistics-based methods (Shin, 1999). Though the rules are approximately correct for most cases drawn from the domain on which the rules are based, the knowledge in the rules is not necessarily well-represented for any given set of cases. Since chunking is usually processed in the earlier step of natural language processing, the errors made in this step have a fatal influence on the following steps. Therefore, the exceptions that are ignored by the rules must be comTraining Phase w 1 ... w N (PO S1 ... PO SN) R ule B ased D eterm ination R ule B ase For Each W ord w i C orrectly D eterm ined? Find Error Type N o Finish Yes E rror C ase Library C lassification Phase w 1 ... w N (PO S1 ... PO SN) R ule B ased D eterm ination R ule B ase For Each W ord w i E rror C ase Library M em ory B ased D eterm ination C 1 ... 
C N C om bination Figure 1: The structure of Korean chunking model. This figure describes a sentence-based learning and classification. pensated for by some special treatments of them for higher performance. To solve this problem, we have proposed a combining method of the rules and the k-nearest neighbor (k-NN) algorithm (Park and Zhang, 2001). The problem in this method is that it has redundant kNNs because it maintains a separate k-NN for each kind of errors made by the rules. In addition, because it applies a k-NN and the rules to each examples, it requires more computations than other inference methods. The goal of this paper is to provide a new method for chunking Korean by combining the hand-crafted rules and a machine learning method. The chunk type of a word in question is determined by the rules, and then verified by the machine learning method. The role of the machine learning method is to determine whether the current context is an exception of the rules. Therefore, a memory-based learning (MBL) is used as a machine learning method that can handle exceptions efficiently (Daelemans et. al, 1999). The rest of the paper is organized as follows. Section 2 explains how the proposed method works. Section 3 describes the rule-based method for chunking Korean and Section 4 explains chunking by memory-based learning. Section 5 presents the experimental results. Section 6 introduces the issues for applying the proposed method to other problems. Finally, Section 7 draws conclusions. 2 Chunking Korean Figure 1 shows the structure of the chunking model for Korean. The main idea of this model is to apply rules to determine the chunk type of a word wi in a sentence, and then to refer to a memory based classifier in order to check whether it is an exceptional case of the rules. In the training phase, each sentence is analyzed by the rules and the predicted chunk type is compared with the true chunk type. In case of misprediction, the error type is determined according to the true chunk type and the predicted chunk type. The mispredicted chunks are stored in the error case library with their true chunk types. Since the error case library accumulates only the exceptions of the rules, the number of cases in the library is small if the rules are general enough to represent the instance space well. The classification phase in Figure 1 is expressed as a procedure in Figure 2. It determines the chunk type of a word wi given with the context Ci. First of all, the rules are applied to determine the chunk type. Then, it is checked whether Ci is an exceptional case of the rules. If it is, the chunk type determined by the rules is discarded and is determined again by the memory based reasoning. The condition to make a decision of exceptional case is whether the similarity between Ci and the nearest instance in the error Procedure Combine Input : a word wi, a context Ci, and the threshold t Output : a chunk type c [Step 1] c = Determine the chunk type of wi using rules. [Step 2] e = Get the nearest instance of Ci in error case library. [Step 3] If Similarity(Ci, e) ≥t, then c = Determine chunk type of wi by memorybased learning. Figure 2: The procedure for combining the rules and memory based learning. case library is larger than the threshold t. Since the library contains only the exceptional cases, the more similar is Ci to the nearest instance, the more probable is it an exception of the rules. 
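Figure 2 translates almost directly into code. The sketch below is a simplified rendering with our own names: the rules, the error case library, the memory-based classifier, and the similarity function are passed in as black boxes, the library is assumed to be non-empty, and similarity returns larger values for more similar contexts, as in the text.

# Sketch of the combination procedure in Figure 2: rules first, then a
# memory-based override when the context looks like a known exception.

def combine(word, context, rules, error_case_library, mbl_classifier,
            similarity, threshold):
    # Step 1: the hand-crafted rules propose a chunk type.
    chunk = rules(word, context)
    # Step 2: retrieve the nearest stored exception of the rules.
    nearest = max(error_case_library, key=lambda case: similarity(context, case))
    # Step 3: if the context is close enough to a known exception,
    # trust the memory-based classifier instead of the rules.
    if similarity(context, nearest) >= threshold:
        chunk = mbl_classifier(word, context)
    return chunk

Because the library holds only the exceptions collected in the training phase, the nearest-case search in Step 2 stays small as long as the rules cover the bulk of the data, which is the situation the paper assumes.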
3 Chunking by Rules There are four basic phrases in Korean: noun phrase (NP), verb phrase (VP), adverb phrase (ADVP), and independent phrase (IP). Thus, chunking by rules is divided into largely four components. 3.1 Noun Phrase Chunking When the part-of-speech of wi is one of determiner, noun, and pronoun, there are only seven rules to determine the chunk type of wi due to the welldeveloped postpositions of Korean. 1. If POS(wi−1) = determiner and wi−1 does not have a postposition Then yi = I-NP. 2. Else If POS(wi−1) = pronoun and wi−1 does not have a postposition Then yi = I-NP. 3. Else If POS(wi−1) = noun and wi−1 does not have a postposition Then yi = I-NP. 4. Else If POS(wi−1) = noun and wi−1 has a possessive postposition Then yi = I-NP. 5. Else If POS(wi−1) = noun and wi−1 has a relative postfix Then yi = I-NP. 6. Else If POS(wi−1) = adjective and wi−1 has a relative ending Then yi = I-NP. 7. Else yi = B-NP. Here, POS(wi−1) is the part-of-speech of wi−1. B-NP represents the first word of a noun phrase, while I-NP is given to other words in the noun phrase. Since determiners, nouns and pronouns play the similar syntactic role in Korean, they form a noun phrase when they appear in succession without postposition (Rule 1–3). The words with postpositions become the end of a noun phrase, but there are only two exceptions. When the type of a postposition is possessive, it is still in the mid of noun phrase (Rule 4). The other exception is a relative postfix ‘   (jeok)’ (Rule 5). Rule 6 states that a simple relative clause with no sub-constituent also constitutes a noun phrase. Since the adjectives of Korean have no definitive usage, this rule corresponds to the definitive usage of the adjectives in English. 3.2 Verb Phrase Chunking The verb phrase chunking has been studied for a long time under the name of compound verb processing in Korean and shows relatively high accuracy. Shin used a finite state automaton for verb phrase chunking (Shin, 1999), while K.-C. Kim used knowledge-based rules (Kim et. al, 1995). For the consistency with noun phrase chunking, we use the rules in this paper. The rules used are the ones proposed by (Kim et. al, 1995) and the further explanation on the rules is skipped. The number of the rules used is 29. 3.3 Adverb Phrase Chunking When the adverbs appear in succession, they have a great tendency to form an adverb phrase. Though an adverb sequence is not always one adverb phrase, it usually forms one phrase. Table 1 shows this empirically. The usage of the successive adverbs is investigated from STEP 2000 dataset1 where 270 cases are observed. The 189 cases among them form a phrase whereas the remaining 81 cases form two phrases independently. Thus, it can be said that the possibility that an adverb sequence forms a phrase is far higher than the possibility that it forms two phrases. When the part-of-speech of wi is an adjective, its chunk type is determined by the following rule. 1. If POS(wi−1) = adverb Then yi = I-ADVP. 2. Else yi = B-ADVP. 1This dataset will be explained in Section 5.1. No. of Cases Probability One Phrase 189 0.70 Two Phrases 81 0.30 Table 1: The probability that an adverb sequence forms a chunk. 3.4 Independent Phrase Chunking There is no special rule for independent phrase chunking. It can be done only through knowledge base that stores the cases where independent phrases take place. We designed 12 rules for independent phrases. 
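Because the noun-phrase and adverb-phrase rules above are stated over a handful of morphological properties of the previous word, they can be transcribed almost literally, as in the sketch below. The dictionary keys (`pos`, `postposition`, `relative_postfix`, `relative_ending`) are invented for this sketch and stand in for whatever the morphological analyzer actually outputs; the verb phrase and independent phrase rules are table-driven in the same way and are omitted.

```python
def np_chunk_tag(prev):
    """Rules 1-7 for noun phrase chunking: choose the chunk tag of w_i from
    properties of the previous word w_{i-1}, passed as a dict."""
    pos = prev.get("pos")
    if pos == "determiner" and not prev.get("postposition"):
        return "I-NP"                                             # Rule 1
    if pos == "pronoun" and not prev.get("postposition"):
        return "I-NP"                                             # Rule 2
    if pos == "noun" and not prev.get("postposition"):
        return "I-NP"                                             # Rule 3
    if pos == "noun" and prev.get("postposition") == "possessive":
        return "I-NP"                                             # Rule 4
    if pos == "noun" and prev.get("relative_postfix"):
        return "I-NP"                                             # Rule 5
    if pos == "adjective" and prev.get("relative_ending"):
        return "I-NP"                                             # Rule 6
    return "B-NP"                                                 # Rule 7


def advp_chunk_tag(prev):
    """Adverb phrase rule: successive adverbs are usually chunked into one phrase."""
    return "I-ADVP" if prev.get("pos") == "adverb" else "B-ADVP"
```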
4 Chunking by Memory-Based Learning Memory-based learning is a direct descent of the k-Nearest Neighbor (k-NN) algorithm (Cover and Hart, 1967). Since many natural language processing (NLP) problems have constraints of a large number of examples and many attributes with different relevance, memory-based learning uses more complex data structure and different speedup optimization from the k-NN. It can be viewed with two components: a learning component and a similarity-based performance component. The learning component involves adding training examples to memory, where all examples are assumed to be fixed-length vectors of n attributes. The similarity between an instance x and all examples y in memory is computed using a distance metric, ∆(x, y). The chunk type of x is then determined by assigning the most frequent category within the k most similar examples of x. The distance from x and y, ∆(x, y) is defined to be ∆(x, y) ≡ n  i=1 αiδ(xi, yi), where αi is the weight of i-th attribute and δ(xi, yi) =  0 if xi = yi, 1 if xi ̸= yi. When αi is determined by information gain (Quinlan, 1993), the k-NN algorithm with this metric is called IB1-IG (Daelemans et. al, 2001). All the experiments performed by memory-based learning in this paper are done with IB1-IG. Table 2 shows the attributes of IB1-IG for chunking Korean. To determine the chunk type of a word wi, the lexicons, POS tags, and chunk types of surrounding words are used. For the surrounding words, three words of left context and three words of right context are used for lexicons and POS tags, while two words of left context are used for chunk types. Since chunking is performed sequentially, the chunk types of the words in right context are not known in determining the chunk type of wi. 5 Experiments 5.1 Dataset For the evaluation of the proposed method, all experiments are performed on STEP 2000 Korean Chunking dataset (STEP 2000 dataset)2. This dataset is derived from the parsed corpus, which is a product of STEP 2000 project supported by Korean government. The corpus consists of 12,092 sentences with 111,658 phrases and 321,328 words, and the vocabulary size is 16,808. Table 3 summarizes the information on the dataset. The format of the dataset follows that of CoNLL2000 dataset (CoNLL, 2000). Figure 3 shows an example sentence in the dataset3. Each word in the dataset has two additional tags, which are a part-ofspeech tag and a chunk tag. The part-of-speech tags are based on KAIST tagset (Yoon and Choi, 1999). Each phrase can have two kinds of chunk types: BXP and I-XP. In addition to them, there is O chunk type that is used for words which are not part of any chunk. Since there are four types of phrases and one additional chunk type O, there exist nine chunk types. 5.2 Performance of Chunking by Rules Table 4 shows the chunking performance when only the rules are applied. Using only the rules gives 97.99% of accuracy and 91.87 of F-score. In spite of relatively high accuracy, F-score is somewhat low. Because the important unit of the work in the applications of text chunking is a phrase, F-score is far more important than accuracy. Thus, we have much room to improve in F-score. 2The STEP 2000 Korean Chunking dataset is available in http://bi.snu.ac.kr/∼sbpark/Step2000. 
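For concreteness, the weighted overlap metric and information-gain weighting of IB1-IG described in Section 4 can be sketched as follows. This is a plain textbook rendering with invented function names, not the TiMBL implementation used in the experiments.

```python
import math
from collections import Counter, defaultdict


def information_gain_weights(examples, labels, n_attributes):
    """One information-gain weight per attribute position, as in IB1-IG.

    `examples` is a list of fixed-length attribute vectors and `labels`
    the corresponding chunk tags."""
    n = len(examples)

    def entropy(counts):
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values() if c)

    class_entropy = entropy(Counter(labels))
    weights = []
    for i in range(n_attributes):
        partition = defaultdict(Counter)          # label counts per value of attribute i
        for x, y in zip(examples, labels):
            partition[x[i]][y] += 1
        remainder = sum(sum(c.values()) / n * entropy(c) for c in partition.values())
        weights.append(class_entropy - remainder)
    return weights


def knn_classify(x, examples, labels, weights, k=1):
    """Weighted-overlap k-NN: delta is 1 when attribute values differ, and the
    chunk type is the majority label among the k nearest stored examples."""
    def distance(a, b):
        return sum(w for w, ai, bi in zip(weights, a, b) if ai != bi)

    nearest = sorted(zip(examples, labels), key=lambda ex: distance(x, ex[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]
```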
3The last column of this figure, the English annotation, does Attribute Explanation Attribute Explanation Wi−3 word of wi−3 POSi−3 POS of wi−3 Wi−2 word of wi−2 POSi−2 POS of wi−2 Wi−1 word of wi−1 POSi−1 POS of wi−1 Wi word of wi POSi POS of wi Wi+1 word of wi+1 POSi+1 POS of wi+1 Wi+2 word of wi+2 POSi+2 POS of wi+2 Wi+3 word of wi+3 POSi+3 POS of wi+3 Ci−3 chunk of wi−3 Ci−2 chunk of wi−2 Ci−1 chunk of wi−1 Table 2: The attributes of IB1-IG for chunking Korean. Information Value Vocabulary Size 16,838 Number of total words 321,328 Number of chunk types 9 Number of POS tags 52 Number of sentences 12,092 Number of phrases 112,658 Table 3: The simple statistics on STEP 2000 Korean Chunking dataset.    nq B-NP Korea  Æ jcm I-NP Postposition : POSS     nq I-NP Sejong    ncn I-NP base   jcj I-NP and   mmd I-NP the    ncn I-NP surrounding    ncn I-NP base   jxt I-NP Postposition: TOPIC     ncn B-NP western South Pole    ncn B-NP south      nq I-NP Shetland  Æ jcm I-NP Postposition : POSS      nq I-NP King George Island   jca I-NP Postposition : LOCA  paa B-VP is located   ef I-VP Ending : DECL . sf O Figure 3: An example of STEP 2000 dataset. Type Precision Recall F-score ADVP 98.67% 97.23% 97.94 IP 100.00% 99.63% 99.81 NP 88.96% 88.93% 88.94 VP 92.89% 96.35% 94.59 All 91.28% 92.47% 91.87 Table 4: The experimental results when the rules are only used. Error Type No. of Errors Ratio (%) B-ADVP I-ADVP 89 1.38 B-ADVP I-NP 9 0.14 B-IP B-NP 9 0.14 I-IP I-NP 2 0.03 B-NP I-NP 2,376 36.76 I-NP B-NP 2,376 36.76 B-VP I-VP 3 0.05 I-VP B-VP 1,599 24.74 All 6,463 100.00 Table 5: The error distribution according to the mislabeled chunk type. Table 5 shows the error types by the rules and their distribution. For example, the error type ‘BADVP I-ADVP’ contains the errors whose true label is B-ADVP and that are mislabeled by I-ADVP. There are eight error types, but most errors are related with noun phrases. We found two reasons for this: 1. It is difficult to find the beginning of noun phrases. All nouns appearing successively without postpositions are not a single noun phrase. But, they are always predicted to be single noun phrase by the rules, though they can be more than one noun phrase. 2. The postposition representing a noun coordination, ‘ (wa)’ is very ambiguous. When ‘ (wa)’ is representing the coordination, the chunk types of it and its next word should be “I-NP I-NP”. But, when it is just an adverbial postposition that implies ‘with’ in English, the chunk types should be “I-NP B-NP”. Decision Tree SVM MBL Accuracy 97.95±0.24% 98.15±0.20% 97.79±0.29% Precision 92.29±0.94% 93.63±0.81% 91.41±1.24% Recall 90.45±0.80% 91.48±0.70% 91.43±0.87% F-score 91.36±0.85 92.54±0.72 91.38±1.01 Table 6: The experimental results of various machine learning algorithms. 5.3 Performance of Machine Learning Algorithms Table 6 gives the 10-fold cross validation result of three machine learning algorithms. In each fold, the corpus is divided into three parts: training (80%), held-out (10%), test (10%). Since held-out set is used only to find the best value for the threshold t in the combined model, it is not used in measuring the performance of machine learning algorithms. The machine learning algorithms tested are (i) memory-based learning (MBL), (ii) decision tree, and (iii) support vector machines (SVM). We use C4.5 release 8 (Quinlan, 1993) for decision tree induction and SV Mlight (Joachims, 1998) for support vector machines, while TiMBL (Daelemans et. al, 2001) is adopted for memory-based learning. 
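All precision, recall, and F-score figures reported in these tables are chunk-level measures in the usual CoNLL style. A minimal scoring sketch is given below for reference; it is generic illustration code, not the official evaluation script.

```python
def iob_to_chunks(tags):
    """Convert a list of IOB tags (B-NP, I-NP, O, ...) into (type, start, end)
    spans, with `end` exclusive."""
    chunks, start, ctype = [], None, None
    for i, tag in enumerate(tags + ["O"]):            # sentinel flushes the last chunk
        if tag.startswith("B-") or tag == "O" or (ctype and tag[2:] != ctype):
            if ctype is not None:
                chunks.append((ctype, start, i))
            start, ctype = (i, tag[2:]) if tag != "O" else (None, None)
        elif tag.startswith("I-") and ctype is None:  # I- after O: open a new chunk
            start, ctype = i, tag[2:]
    return chunks


def chunk_fscore(gold_tags, pred_tags):
    """Chunk-level precision, recall, and F-score from two IOB sequences."""
    gold = set(iob_to_chunks(gold_tags))
    pred = set(iob_to_chunks(pred_tags))
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f
```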
Decision trees and SVMs use the same attributes with memory-based learning (see Table 2). Two of the algorithms, memory-based learning and decision tree, show worse performance than the rules. The Fscores of memory-based learning and decision tree are 91.38 and 91.36 respectively, while that of the rules is 91.87 (see Table 4). On the other hand, support vector machines present a slightly better performance than the rules. The F-score of support vector machine is 92.54, so the improvement over the rules is just 0.67. Table 7 shows the weight of attributes when only memory-based learning is used. Each value in this table corresponds to αi in calculating ∆(x, y). The more important is an attribute, the larger is the weight of it. Thus, the most important attribute among 17 attributes is Ci−1, the chunk type of the previous word. On the other hand, the least important attributes are Wi−3 and Ci−3. Because the words make less influence on determining the chunk type of wi in question as they become more distant from wi. That not exist in the dataset. It is given for the explanation. Attribute Weight Attribute Weight Wi−3 0.03 POSi−3 0.04 Wi−2 0.07 POSi−2 0.11 Wi−1 0.17 POSi−1 0.28 Wi 0.22 POSi 0.38 Wi+1 0.14 POSi+1 0.22 Wi+2 0.06 POSi+2 0.09 Wi+3 0.04 POSi+3 0.05 Ci−3 0.03 Ci−2 0.11 Ci−1 0.43 Table 7: The weights of the attributes in IB1-IG. The total sum of the weights is 2.48. fold Precision (%) Recall (%) F-score t 1 94.87 94.12 94.49 1.96 2 93.52 93.85 93.68 1.98 3 95.25 94.72 94.98 1.95 4 95.30 94.32 94.81 1.95 5 92.91 93.54 93.22 1.87 6 94.49 94.50 94.50 1.92 7 95.88 94.35 95.11 1.94 8 94.25 94.18 94.21 1.94 9 92.96 91.97 92.46 1.91 10 95.24 94.02 94.63 1.97 Avg. 94.47±1.04 93.96±0.77 94.21±0.84 1.94 Table 8: The final result of the proposed method by combining the rules and the memory-based learning. The average accuracy is 98.21±0.43. is, the order of important lexical attributes is ⟨Wi, Wi−1, Wi+1, Wi−2, Wi+2, Wi+3, Wi−3⟩. The same phenomenon is found in part-of-speech (POS) and chunk type (C). In comparing the partof-speech information with the lexical information, we find out that the part-of-speech is more important. One possible explanation for this is that the lexical information is too sparse. The best performance on English reported is 94.13 in F-score (Zhang et. al, 2001). The reason why the performance on Korean is lower than that on English is the curse of dimensionality. That is, the wider context is required to compensate for the free order of Korean, but it hurts the performance (Cherkassky and Mulier, 1998). 5.4 Performance of the Hybrid Method Table 8 shows the final result of the proposed method. The F-score is 94.21 on the average which is improvement of 2.34 over the rules only, 1.67 over support vector machines, and 2.83 over memorybased learning. In addition, this result is as high as the performance on English (Zhang et. al, 2001). 80 82 84 86 88 90 92 94 96 98 100 ADVP IP NP VP Phrases F-score Rule Only Hybrid Figure 4: The improvement for each kind of phrases by combining the rules and MBL. The threshold t is set to the value which produces the best performance on the held-out set. The total sum of all weights in Table 7 is 2.48. This implies that when we set t > 2.48, only the rules are applied since there is no exception with this threshold. When t = 0.00, only the memory-based learning is used. Since the memory-based learning determines the chunk type of wi based on the exceptional cases of the rules in this case. the performance is poor with t = 0.00. 
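Since the threshold t is chosen on the held-out set, its selection reduces to a one-dimensional sweep, sketched below. The step size and the callable interface are illustrative assumptions; only the bracketing values (t = 0 for pure memory-based learning, t above the total weight 2.48 for pure rules) come from the discussion above.

```python
def tune_threshold(score_on_held_out, t_max=2.48, step=0.01):
    """Sweep the exception threshold t and keep the best held-out F-score.

    `score_on_held_out(t)` is assumed to run the combined rule/MBL chunker on
    the held-out data with threshold t and return its chunk F-score."""
    candidates = [round(i * step, 10) for i in range(int(round(t_max / step)) + 1)]
    scores = {t: score_on_held_out(t) for t in candidates}
    best_t = max(scores, key=scores.get)
    return best_t, scores[best_t]
```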
The best performance is obtained when t is near 1.94. Figure 4 shows how much F-score is improved for each kind of phrases. The average F-score of noun phrase is 94.54 which is far improved over that of the rules only. This implies that the exceptional cases of the rules for noun phrase are well handled by the memory-based learning. The performance is much improved for noun phrase and verb phrase, while it remains same for adverb phrases and independent phrases. This result can be attributed to the fact that there are too small number of exceptions for adverb phrases and independent phrases. Because the accuracy of the rules for these phrases is already high enough, most cases are covered by the rules. Memory based learning treats only the exceptions of the rules, so the improvement by the proposed method is low for the phrases. 6 Discussion In order to make the proposed method practical and applicable to other NLP problems, the following issues are to be discussed: 1. Why are the rules applied before the memory-based learning? When the rules are efficient and accurate enough to begin with, it is reasonable to apply the rules first (Golding and Rosenbloom, 1996). But, if they were deficient in some way, we should have applied the memory-based learning first. 2. Why don’t we use all data for the machine learning method? In the proposed method, memory-based learning is used not to find a hypothesis for interpreting whole data space but to handle the exceptions of the rules. If we use all data for both the rules and memory-based learning, we have to weight the methods to combine them. But, it is difficult to know the weights of the methods. 3. Why don’t we convert the memory-based learning to the rules? Converting between the rules and the cases in the memory-based learning tends to yield inefficient or unreliable representation of rules. The proposed method can be directly applied to the problems other than chunking Korean if the proper rules are prepared. The proposed method will show better performance than the rules or machine learning methods alone. 7 Conclusion In this paper we have proposed a new method to learn chunking Korean by combining the handcrafted rules and a memory-based learning. Our method is based on the rules, and the estimates on chunks by the rules are verified by a memory-based learning. Since the memory-based learning is an efficient method to handle exceptional cases of the rules, it supports the rules by making decisions only for the exceptions of the rules. That is, the memorybased learning enhances the rules by efficiently handling the exceptional cases of the rules. The experiments on STEP 2000 dataset showed that the proposed method improves the F-score of the rules by 2.34 and of the memory-based learning by 2.83. Even compared with support vector machines, the best machine learning algorithm in text chunking, it achieved the improvement of 1.67. The improvement was made mainly in noun phrases among four kinds of phrases in Korean. This is because the errors of the rules are mostly related with noun phrases. With relatively many instances for noun phrases, the memory-based learning could compensate for the errors of the rules. We also empirically found the threshold value t used to determine when to apply the rules and when to apply memory-based learning. We also discussed some issues in combining a rule-based method and a memory-based learning. 
These issues will help to understand how the method works and to apply the proposed method to other problems in natural language processing. Since the method is general enough, it can be applied to other problems such as POS tagging and PP attachment. The memory-based learning showed good performance in these problems, but did not reach the stateof-the-art. We expect that the performance will be improved by the proposed method. Acknowledgement This research was supported by the Korean Ministry of Education under the BK21-IT program and by the Korean Ministry of Science and Technology under NRL and BrainTech programs. References V. Cherkassky and F. Mulier. 1998. Learning from Data: Concepts, Theory, and Methods, John Wiley & Sons, Inc. CoNLL. 2000. Shared Task for Computational Natural Language Learning (CoNLL), http://lcgwww.uia.ac.be/conll2000/chunking. T. Cover and P. Hart. 1967. Nearest Neighbor Pattern Classification, IEEE Transactions on Information Theory, Vol. 13, pp. 21–27. W. Daelemans, A. Bosch and J. Zavrel. 1999. Forgetting Exceptions is Harmful in Language Learning, Machine Learning, Vol. 34, No. 1, pp. 11–41. W. Daelemans, J. Zavrel, K. Sloot and A. Bosch. 2001. TiMBL: Tilburg Memory Based Learner, version 4.1, Reference Guide, ILK 01-04, Tilburg University. A. Golding and P. Rosenbloom. 1996. Improving Accuracy by Combining Rule-based and Case-based Reasoning, Artificial Intelligence, Vol. 87, pp. 215–254. T. Joachims. 1998. Making Large-Scale SVM Learning Practical, LS8, Universitaet Dortmund. K.-C. Kim, K.-O. Lee, and Y.-S. Lee. 1995. Korean Compound Verbals Processing driven by Morphological Analysis, Journal of KISS, Vol. 22, No. 9, pp. 1384–1393. Taku Kudo and Yuji Matsumoto. 2000. Use of Support Vector Learning for Chunk Identification, In Proceedings of the Fourth Conference on Computational Natural Language Learning, pp. 142–144. S.-B. Park and B.-T. Zhang. 2001. Combining a Rulebased Method and a k-NN for Chunking Korean Text, In Proceedings of the 19th International Conference on Computer Processing of Oriental Languages, pp. 225–230. R. Quinlan. 1993. C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers. L. Ramshaw and M. Marcus. 1995. Text Chunking Using Transformation-Based Learning, In Proceedings of the Third ACL Workshop on Very Large Corpora, pp. 82–94. H.-P. Shin. 1999. Maximally Efficient Syntatic Parsing with Minimal Resources, In Proceedings of the Conference on Hangul and Korean Language Infomration Processing, pp. 242–244. J.-T. Yoon and K.-S. Choi. 1999. Study on KAIST Corpus, CS-TR-99-139, KAIST CS. T. Zhang, F. Damerau and D. Johnson. 2001. Text Chunking Using Regularized Winnow, In Proceedings of the 39th Annual Meeting of the Association for Computational Linguistics, pp. 539–546.
A SNoW based Supertagger with Application to NP Chunking Libin Shen and Aravind K. Joshi Department of Computer and Information Science University of Pennsylvania Philadelphia, PA 19104, USA libin,joshi  @linc.cis.upenn.edu Abstract Supertagging is the tagging process of assigning the correct elementary tree of LTAG, or the correct supertag, to each word of an input sentence1. In this paper we propose to use supertags to expose syntactic dependencies which are unavailable with POS tags. We first propose a novel method of applying Sparse Network of Winnow (SNoW) to sequential models. Then we use it to construct a supertagger that uses long distance syntactical dependencies, and the supertagger achieves an accuracy of  . We apply the supertagger to NP chunking. The use of supertags in NP chunking gives rise to almost absolute increase (from   to  ) in F-score under Transformation Based Learning(TBL) frame. The surpertagger described here provides an effective and efficient way to exploit syntactic information. 1 Introduction In Lexicalized Tree-Adjoining Grammar (LTAG) (Joshi and Schabes, 1997; XTAG-Group, 2001), each word in a sentence is associated with an elementary tree, or a supertag (Joshi and Srinivas, 1994). Supertagging is the process of assigning the correct supertag to each word of an input sentence. The following two facts make supertagging attractive. Firstly supertags encode much more syntactical information than POS tags, which makes supertagging a useful pre-parsing tool, so-called, almost parsing (Srinivas and Joshi, 1999). On the 1By the correct supertag we mean the supertag that an LTAG parser would assign to a word in a sentence. other hand, as the term ’supertagging’ suggests, the time complexity of supertagging is similar to that of POS tagging, which is linear in the length of the input sentence. In this paper, we will focus on the NP chunking task, and use it as an application of supertagging. (Abney, 1991) proposed a two-phase parsing model which includes chunking and attaching. (Ramshaw and Marcus, 1995) approached chucking by using Transformation Based Learning(TBL). Many machine learning techniques have been successfully applied to chunking tasks, such as Regularized Winnow (Zhang et al., 2001), SVMs (Kudo and Matsumoto, 2001), CRFs (Sha and Pereira, 2003), Maximum Entropy Model (Collins, 2002), Memory Based Learning (Sang, 2002) and SNoW (Mu˜noz et al., 1999). The previous best result on chunking in literature was achieved by Regularized Winnow (Zhang et al., 2001), which took some of the parsing results given by an English Slot Grammar-based parser as input to the chunker. The use of parsing results contributed  absolute increase in F-score. However, this approach conflicts with the purpose of chunking. Ideally, a chunker geneates n-best results, and an attacher uses chunking results to construct a parse. The dilemma is that syntactic constraints are useful in the chunking phase, but they are unavailable until the attaching phase. The reason is that POS tags are not a good labeling system to encode enough linguistic knowledge for chunking. However another labeling system, supertagging, can provide a great deal of syntactic information. In an LTAG, each word is associated with a set of possible elementary trees. An LTAG parser assigns the correct elementary tree to each word of a sentence, and uses the elementary trees of all the words to build a parse tree for the sentence. 
Elementary trees, which we call supertags, contain more information than POS tags, and they help to improve the chunking accuracy. Although supertags are able to encode long distance dependence, supertaggers trained with local information in fact do not take full advantage of complex information available in supertags. In order to exploit syntactic dependencies in a larger context, we propose a new model of supertagging based on Sparse Network of Winnow (SNoW) (Roth, 1998). We also propose a novel method of applying SNoW to sequential models in a way analogous to the Projection-base Markov Model (PMM) used in (Punyakanok and Roth, 2000). In contrast to PMM, we construct a SNoW classifier for each POS tag. For each word of an input sentence, its POS tag, instead of the supertag of the previous word, is used to select the corresponding SNoW classifier. This method helps to avoid the sparse data problem and forces SNoW to focus on difficult cases in the context of supertagging task. Since PMM suffers from the label bias problem (Lafferty et al., 2001), we have used two methods to cope with this problem. One method is to skip the local normalization step, and the other is to combine the results of left-to-right scan and right-to-left scan. We test our supertagger on both the hand-coded supertags used in (Chen et al., 1999) as well as the supertags extracted from Penn Treebank(PTB) (Marcus et al., 1994; Xia, 2001). On the dataset used in (Chen et al., 1999), our supertagger achieves an accuracy of  . We then apply our supertagger to NP chunking. The purpose of this paper is to find a better way to exploit syntactic information which is useful in NP chunking, but not the machine learning part. So we just use TBL, a well-known algorithm in the community of text chunking, as the machine learning tool in our research. Using TBL also allows us to easily evaluate the contribution of supertags with respect to Ramshaw and Marcus’s original work, the de facto baseline of NP chunking. The use of supertags with TBL can be easily extended to other machine learning algorithms. We repeat Ramshaw and Marcus’ Transformation Based NP chunking (Ramshaw and Marcus, 1995) algorithm by substituting supertags for POS tags in the dataset. The use of supertags gives rise to almost absolute increase (from   to  ) in Fscore under Transformation Based Learning(TBL) frame. This confirms our claim that using supertagging as a labeling system helps to increase the overall performance of NP Chunking. The supertagger presented in this paper provides an opportunity for advanced machine learning techniques to improve their performance on chunking tasks by exploiting more syntactic information encoded in the supertags. 2 Supertagging and NP Chunking In (Srinivas, 1997) trigram models were used for supertagging, in which Good-Turing discounting technique and Katz’s back-off model were employed. The supertag for a word was determined by the lexical preference of the word, as well as by the contextual preference of the previous two supertags. The model was tested on WSJ section 20 of PTB, and trained on section 0 through 24 except section 20. The accuracy on the test data is   2. In (Srinivas, 1997), supertagging was used for NP chunking and it achieved an F-score of  . (Chen, 2001) reported a similar result with a trigram supertagger. In their approaches, they first supertagged the test data and then uesd heuristic rules to detect NP chunks. 
But it is hard to say whether it is the use of supertags or the heuristic rules that makes their system achieve the good results. As a first attempt, we use fast TBL (Ngai and Florian, 2001), a TBL program, to repeat Ramshaw and Marcus’ experiment on the standard dataset. Then we use Srinivas’ supertagger (Srinivas, 1997) to supertag both the training and test data. We run the fast TBL for the second round by using supertags instead of POS tags in the dataset. With POS tags we achieve an F-score of   , but with supertags we only achieve an F-score of   . This is not surprising becuase Srinivas’ supertag was only trained with a trigram model. Although supertags are able to encode long distance dependence, supertaggers trained with local information in fact do not take full advantage of their strong capability. So we must use long distance dependencies to train supertaggers to take full advantage of the information in supertags. 2This number is based on footnote 1 of (Chen et al., 1999). A few supertags were grouped into equivalence classes for evaluation The trigram model often fails in capturing the cooccurrence dependence between a head word and its dependents. Consider the phrase ”will join the board as a nonexecutive director”. The occurrence of join has influence on the lexical selection of as. But join is outside the window of trigram. (Srinivas, 1997) proposed a head trigram model in which the lexical selection of a word depended on the supertags of the previous two head words , instead of the supertags of the two words immediately leading the word of interest. But the performance of this model was worse than the traditional trigram model because it discarded local information. (Chen et al., 1999) combined the traditional trigram model and head trigram model in their trigram mixed model. In their model, context for the current word was determined by the supertag of the previous word and context for the previous word according to 6 manually defined rules. The mixed model achieved an accuracy of   on the same dataset as that of (Srinivas, 1997). In (Chen et al., 1999), three other models were proposed, but the mixed model achieved the highest accuracy. In addition, they combined all their models with pairwise voting, yielding an accuracy of   . The mixed trigram model achieves better results on supertagging because it can capture both local and long distance dependencies to some extent. However, we think that a better way to find useful context is to use machine learning techniques but not define the rules manually. One approach is to switch to models like PMM, which can not only take advantage of generative models with the Viterbi algorithm, but also utilize the information in a larger contexts through flexible feature sets. This is the basic idea guiding the design of our supertagger. 3 SNoW Sparse Network of Winnow (SNoW) (Roth, 1998) is a learning architecture that is specially tailored for learning in the presence of a very large number of features where the decision for a single sample depends on only a small number of features. Furthermore, SNoW can also be used as a general purpose multi-class classifier. 
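For readers unfamiliar with the learner, the multiplicative update behind a network of Winnow target nodes can be sketched in a few lines. This is a generic sparse multi-class Winnow written only for illustration: the promotion and demotion factors, the initial link weight, and the feature-linking policy are assumptions of the sketch (the experiments below simply use the SNoW package defaults), and none of this code is the SNoW system itself.

```python
class SparseWinnow:
    """A sparse network of Winnow nodes: one target node per class, with a
    weight stored only for (class, feature) pairs linked during training."""

    def __init__(self, classes, theta=1.0, alpha=1.35, beta=0.8, initial=1.0):
        self.theta = theta        # prediction threshold of each target node
        self.alpha = alpha        # promotion factor (> 1)
        self.beta = beta          # demotion factor (< 1)
        self.initial = initial    # weight given to a newly linked feature
        self.weights = {c: {} for c in classes}

    def activation(self, c, active):
        # cost is proportional to the number of active features in the example,
        # not to the total number of features ever observed
        return sum(self.weights[c].get(f, 0.0) for f in active)

    def predict(self, active):
        return max(self.weights, key=lambda c: self.activation(c, active))

    def update(self, active, gold):
        for f in active:                                  # link new features to the gold node
            self.weights[gold].setdefault(f, self.initial)
        for c, w in self.weights.items():
            act = self.activation(c, active)
            if c == gold and act <= self.theta:           # promote: gold node fired too weakly
                for f in active:
                    if f in w:
                        w[f] *= self.alpha
            elif c != gold and act > self.theta:          # demote: a wrong node fired
                for f in active:
                    if f in w:
                        w[f] *= self.beta
```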
It is noted in (Mu˜noz et al., 1999) that one of the important properites of the sparse architecture of SNoW is that the complexity of processing an example depends only on the number of features active in it,  , and is independent of the total number of features,   , observed over the life time of the system and this is important in domains in which the total number of features in very large, but only a small number of them is active in each example. As far as supertagging is concerned, word context forms a very large space. However, for each word in a given sentence, only a small part of features in the space are related to the decision on supertag. Specifically the supertag of a word is determined by the appearances of certain words, POS tags, or supertags in its context. Therefore SNoW is suitable for the supertagging task. Supertagging can be viewed in term of the sequential model, which means that the selection of the supertag for a word is influenced by the decisions made on the previous few words. (Punyakanok and Roth, 2000) proposed three methods of using classifiers in sequential inference, which are HMM, PMM and CSCL. Among these three models, PMM is the most suitable for our task. The basic idea of PMM is as follows. Given an observation sequence ! , we find the most likely state sequence " given ! by maximizing #%$ "'&!)(+* , . 0/21 #%$43 5& 367 88 753 :9 6;7 !<(>= #?6;$436 & @ 6 ( * , . 0/21 #%$43 5& 3 :9 67 @AB(>= #C6A$436 & @ 6 ( (1) In this model, the output of SNoW is used to estimate #%$43 & 3DE7 @( and #C6;$43 & @( , where 3 is the current state, 3D is the previous state, and @ is the current observation. #%$43 & 3 D 7 @( is separated to many subfunctions #GF>HB$43 & @( according to previous state 3 D . In practice, #IF H $43 & @( is estimated in a wider window of the observed sequence, instead of @ only. Then the problem is how to map the SNoW results into probabilities. In (Punyakanok and Roth, 2000), the sigmoid J $ ?KML 9 NO5P4:9Q2R ( is defined as confidence, where S is the threshold for SNoW, TUWV is the dot product of the weight vector and the example vector. The confidence is normalized by summing to 1 and used as the distribution mass #IFXHY$43 & @( . 4 Modeling Supertagging 4.1 A Novel Sequential Model with SNoW Firstly we have to decide how to treat POS tags. One approach is to assign POS tags at the same time that we do supertagging. The other approach is to assign POS tags with a traditional POS tagger first, and then use them as input to the supertagger. Supertagging an unknown word becomes a problem for supertagging due to the huge size of the supertag set, Hence we use the second approach in our paper. We first run the Brill POS tagger (Brill, 1995) on both the training and the test data, and use POS tags as part of the input. Let Z * [ 6 ['1A88[ be the sentence, \ * ] 6 ] 1A88 ] be the POS tags, and S^*_V 6 V`1A88V be the supertags respectively. Given Z 7 \ , we can find the most likely supertag sequence S given Z 7 \ by maximizing #%$ Sa&Z 7 \<(b*c, . d /21 #%$ V d & V 6feee d 9 6g7 Z 7 \<(>= #?6A$ V 6 & [ 67 ] 6 ( Analogous to PMM, we decompose #%$ V d & V 6feee d 9 6h7 Z 7 \)( into sub-classifiers. However, in our model, we divide it with respect to POS tags as follows #i$ V d & V 6feee d 9 6g7 Z 7 \<(?j #lkBmn$ V d & V 6feee d 9 6g7 Z 7 \<( (2) There are several reasons for decomposing #%$ V d & V 6feee d 9 6h7 Z 7 \)( with respect to the POS tag of the current word, instead of the supertag of the previous word. o To avoid sparse-data problem. 
There are 479 supertags in the set of hand-coded supertags, and almost 3000 supertags in the set of supertags extracted from Penn Treebank. o Supertags related to the same POS tag are more difficult to distinguish than supertags related to different POS tags. Thus by defining a classifier on the POS tag of the current word but not the POS tag of the previous word forces the learning algorithm to focus on difficult cases. o Decomposition of the probability estimation can decrease the complexity of the learning algorithm and allows the use of different parameters for different POS tags. For each POS ] , we construct a SNoW classifier pqk to estimate distribution #lk$ V& V D:7 Z 7 \<( according to the previous supertags V D . Following the estimation of distribution function in (Punyakanok and Roth, 2000), we define confidence with a sigmoid r k$ Vh& V D 7 Z 7 \)(Cj bKtsuL 9 NOvxwyNzX{  HE| }~|  R:9 F R 7 (3) where 3 is the threshold of pqk , and s is set to 1. The distribution mass is then defined with normalized confidence #GkA$ V& V D 7 Z 7 \<(Cj r k $ V& V DE7 Z 7 \<( €  r k$ Vh& V D 7 Z 7 \)( (4) 4.2 Label Bias Problem In (Lafferty et al., 2001), it is shown that PMM and other non-generative finite-state models based on next-state classifiers share a weakness which they called the label bias problem: the transitions leaving a given state compete only against each other, rather than against all other transitions in the model. They proposed Conditional Random Fields (CRFs) as solution to this problem. (Collins, 2002) proposed a new algorithm for parameter estimation as an alternate to CRF. The new algorithm was similar to maximum-entropy model except that it skipped the local normalization step. Intuitively, it is the local normalization that makes distribution mass of the transitions leaving a given state incomparable with all other transitions. It is noted in (Mu˜noz et al., 1999) that SNoW’s output provides, in addition to the prediction, a robust confidence level in the prediction, which enables its use in an inference algorithm that combines predictors to produce a coherent inference. In that paper, SNoW’s output is used to estimate the probability of open and close tags. In general, the probability of a tag can be estimated as follows # k $ V& V D 7 Z 7 \<(?j pk$ Vh& V D 7 Z 7 \)(ƒ‚ 3 €  $4pqk$ Vh& V D 7 Z 7 \)(I‚ 3 ( 7 (5) as one of the anonymous reviewers has suggested. However, this makes probabilities comparable only within the transitions of the same history V D . An alternative to this approach is to use the SNoW’s output directly in the prediction combination, which makes transitions of different history comparable, since the SNoW’s output provides a robust confidence level in the prediction. Furthermore, in order to make sure that the confidences are not too sharp, we use the confidence defined in (3). In addition, we use two supertaggers, one scans from left to right and the other scans from right to left. Then we combine the results via pairwise voting as in (van Halteren et al., 1998; Chen et al., 1999) as the final supertag. This approach of voting also helps to cope with the label bias problem. 4.3 Contextual Model #GkA$ V& V D 7 Z 7 \<( is estimated within a 5-word window plus two head supertags before the current word. For each word [ d , the basic features are Z * [ d 9„1 | eee | d… 1 , \ * ] d 9„1 | eee | d… 1 , V D * V d 9„1 | d 9 6 and † ‡ 9„1 | 9 6 , the two head supertags before the current word. 
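Taken together, one step of the left-to-right search can be sketched as follows. The classifier interface (a per-POS network returning supertag activations and exposing its threshold), the `build_features` helper, and the clamped sigmoid are assumptions of the sketch rather than the released code, and local normalization is shown as an option so that it can be switched off as discussed in Section 4.2.

```python
import math


def confidence(activation, theta, s=1.0):
    """Sigmoid confidence of a prediction, in the spirit of (3): activations
    above the node threshold map to confidences close to 1."""
    x = min(s * (theta - activation), 700.0)   # clamp to avoid overflow in exp
    return 1.0 / (1.0 + math.exp(x))


def extend_beam(beam, i, words, pos_tags, classifiers, build_features,
                beam_width=5, normalize=False):
    """Extend every hypothesis in the beam by one word (left-to-right scan).

    `beam` holds (supertag_sequence, score) pairs.  `classifiers[pos]` is the
    network trained for words whose POS tag is `pos`; it is assumed to return
    a {supertag: activation} dict and to expose its threshold as `.theta`.
    `build_features` is an assumed helper assembling the 5-word window, the
    two previous supertags, and the head supertags into active features.
    """
    classifier = classifiers[pos_tags[i]]          # choose the network by current POS
    extended = []
    for tags, score in beam:
        active = build_features(i, words, pos_tags, tags)
        confs = {t: confidence(a, classifier.theta)
                 for t, a in classifier(active).items()}
        if normalize:                              # the locally normalized mass of (4)
            z = sum(confs.values()) or 1.0
            confs = {t: c / z for t, c in confs.items()}
        for tag, c in confs.items():
            extended.append((tags + [tag], score * c))
    extended.sort(key=lambda h: h[1], reverse=True)
    return extended[:beam_width]
```

Skipping the normalization keeps the confidences of transitions with different histories comparable, and running a second, right-to-left scan produces the output that is later combined with the left-to-right scan by pairwise voting.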
Thus #Gk>mn$ V d & V 6feee d 9 6h7 Z 7 \)( * #Gk>mn$ V d & V d 9„1 | d 9 67 [ d 9„1 eee d8… 1 7 ] d 9„1 eee d… 1 7 † ‡ 9„1 | 9 6 ( A basic feature is called active for word [ d if and only if the corresponding word/POS-tag/supertag appears at a specified place around [ d . For our SNoW classifiers we use unigram and bigram of basic features as our feature set. A feature defined as a bigram of two basic features is active if and only if the two basic features are both active. The value of a feature of [ d is set to 1 if this feature is active for [ d , or 0 otherwise. 4.4 Related Work (Chen, 2001) implemented an MEMM model for supertagging which is analogous to the POS tagging model of (Ratnaparkhi, 1996). The feature sets used in the MEMM model were similar to ours. In addition, prefix and suffix features were used to handle rare words. Several MEMM supertaggers were implemented based on distinct feature sets. In (Mu˜noz et al., 1999), SNoW was used for text chunking. The IOB tagging model in that paper was similar to our model for supertagging, but there are some differences. They did not decompose the SNoW classifier with respect to POS tags. They used two-level deterministic ( beam-width=1 ) search, in which the second level IOB classifier takes the IOB output of the first classifier as input features. 5 Experimental Evaluation and Analysis In our experiments, we use the default settings of the SNoW promotion parameter, demotion parameter and the threshold value given by the SNoW system. We train our model on the training data for 2 rounds, only counting the features that appear for at least 5 times. We skip the normalization step in test, and we use beam search with the width of 5. In our first experiment, we use the same dataset as that of (Chen et al., 1999) for our experiments. We use WSJ section 00 through 24 expect section 20 as training data, and use section 20 as test data. Both training and test data are first tagged by Brill’s POS tagger (Brill, 1995). We use the same pairwise voting algorithm as in (Chen et al., 1999). We run supertagging on the training data and use the supertagging result to generate the mapping table used in pairwise voting. The SNoW supertagger scanning from left to right achieves an accuracy of   , and the one scanning from right to left achieves an accuracy of  ˆ . By combining the results of these two supertaggers with pairwise voting, we achieve an accuracy of  , an error reduction of  compared to  , the best supertagging result to date (Chen, 2001). Table 1 shows the comparison with previous work. Our algorithm, which is coded in Java, takes about 10 minutes to supertag the test data with a P3 1.13GHz processor. However, in (Chen, 2001), the accuracy of  was achieved by a Viterbi search program that took about 5 days to supertag the test data. The counterpart of our algorithm in (Chen, 2001) is the beam search on Model 8 with width of 5, which is the same as the beam width in our algorithm. Compared with this program, our algorithm achieves an error reduction of  . (Chen et al., 1999) achieved an accuracy of   by combination of 5 distinct supertaggers. However, our result is achieved by combining outputs of two homogeneous supertaggers, which only differ in scan direction. Our next experiment is with the set of supertags abstracted from PTB with Fei Xia’s LexTract (Xia, 2001). Xia extracted an LTAG-style grammar from PTB, and repeated Srinivas’ experiment (Srinivas, 1997) on her supertag set. 
There are 2920 elemenmodel acc Srinivas(97) trigram 91.37 Chen(99) trigram mix 91.79 Chen(99) voting 92.19 Chen(01) width=5 91.83 Chen(01) Viterbi 92.25 SNoW left-to-right 92.02 SNoW right-to-left 91.43 SNoW 92.41 Table 1: Comparison with previous work. Training data is WSJ section 00 thorough 24 except section 20 of PTB. Test data is WSJ section 20. Size of tag set is 479. acc = percentage of accuracy. The number of Srinivas(97) is based on footnote 1 of (Chen et al., 1999). The number of Chen(01) width=5 is the result of a beam search on Model 8 with the width of 5. model acc (22) acc (23) Xia(01) trigram 83.60 84.41 SNoW left-to-right 86.01 86.27 Table 2: Results on auto-extracted LTAG grammar. Training data is WSJ section 02 thorough 21 of PTB. Test data is WSJ section 22 and 23. Size of supertag set is 2920. acc = percentage of accuracy. tary trees in Xia’s grammar ‰Š1 , so that the supertags are more specialized and hence there is much more ambiguity in supertagging. We have experimented with our model on ‰‹1 and her dataset. We train our left-to-right model on WSJ section 02 through 21 of PTB, and test on section 22 and 23. We achieve an average error reduction of  ˆ . The reason why the accuracy is rather low is that systems using ‰Š1 have to cope with much more ambiguities due the large size of the supertag set. The results are shown in Table 2. We test on both normalized and unnormalized models with both hand coded supertag set and autoextracted supertag set. We use the left-to-right SNoW model in these experiments. The results in Table 3 show that skipping the local normalization improves performance in all the systems. The effect of skipping normalization is more significant on auto-extracted tags. We think this is because sparse tag set size norm? acc (20/22/23) auto 2920 yes NA / 85.77 / 85.98 auto 2920 no NA / 86.01 / 86.27 hand 479 yes 91.98 / NA / NA hand 479 no 92.02 / NA / NA Table 3: Experiments on normalized and unnormalized models using left-to-right SNoW supertagger. size = size of the tag set. norm? = normalized or not. acc = percentage of accuracy on section 20, 22 and 23. auto = auto-extracted tag set. hand = hand coded tag set. data is more vulnerable to the label bias problem. 6 Application to NP Chunking Now we come back to the NP chunking problem. The standard dataset of NP chunking consists of WSJ section 15-18 as train data and section 20 as test data. In our approach, we substitute the supertags for the POS tags in the dataset. The new data look as follows. For B Pnxs O the B Dnx I nine B Dnx I months A NXN I The first field is the word, the second is the supertag of the word, and the last is the IOB tag. We first use the fast TBL (Ngai and Florian, 2001), a Transformation Based Learning algorithm, to repeat Ramshaw and Marcus’ experiment, and then apply the same program to our new dataset. Since section 15-18 and section 20 are in the standard data set of NP chunking, we need to avoid using these sections as training data for our supertagger. We have trained another supertagger that is trained on 776K words in WSJ section 02-14 and 21-24, and it is tuned with 44K words in WSJ section 19. We use this supertagger to supertag section 15-18 and section 20. We train an NP Chunker on section 15-18 with fast TBL, and test it on section 20. There is a small problem with the supertag set that we have been using, as far as NP chunking is concerned. Two words with different POS tags may be tagged with the same supertag. 
For example both determiner (DT) and number (CD) can be tagged with B Dnx. However this will be harmful in the case model A P R F RM95 91.80 92.27 92.03 Brill-POS 97.42 91.83 92.20 92.01 Tri-STAG 97.29 91.60 91.72 91.66 SNoW-STAG 97.66 92.76 92.34 92.55 SNoW-STAG2 97.70 92.86 93.05 92.95 GOLD-POS 97.91 93.17 93.51 93.34 GOLD-STAG 98.48 94.74 95.63 95.18 Table 4: Results on NP Chunking. Training data is WSJ section 15-18 of PTB. Test data is WSJ section 20. A = Accuracy of IOB tagging. P = NP chunk Precision. R = NP chunk Recall. F = F-score. BrillPOS = fast TBL with Brill’s POS tags. Tri-STAG = fast TBL with supertags given by Srinivas’ trigrambased supertagger. SNoW-STAG = fast TBL with supertags given by our SNoW supertagger. SNoWSTAG2 = fast TBL with augmented supertags given by our SNoW supertagger. GOLD-POS = fast TBL with gold standard POS tags. GOLD-STAG = fast TBL with gold standard supertags. of NP Chunking. As a solution, we use augmented supertags that have the POS tag of the lexical item specified. An augmented supertag can also be regarded as concatenation of a supertag and a POS tag. For B Pnxs(IN) O the B Dnx(DT) I nine B Dnx(CD) I months A NXN(NNS) I The results are shown in Table 4. The system using augmented supertags achieves an F-score of  , or an error reduction of  Œ below the baseline of using Brill POS tags. Although these two systems are both trained with the same TBL algorithm, we implicitly employ more linguistic knowledge as the learning bias when we train the learning machine with supertags. Supertags encode more syntactical information than POS tag do. For example, in the sentence Three leading drug companies ..., the POS tag of 4L T ‡ˆŽ 2 is VBG, or present participle. Based on the local context of 4LT ‡Ž 2 , Three can be the subject of leading. However, the supertag of leading is B An, which represents a modifier of a noun. With this extra information, the chunker can easily solve the ambiguity. We find many instances like this in the test data. It is important to note that the accuracy of supertag itself is much lower than that of POS tag while the use of supertags helps to improve the overall performance. On the other hand, since the accuracy of supertagging is rather lower, there is more room left for improving. If we use gold standard POS tags in the previous experiment, we can only achieve an F-score of A . However, if we use gold standard supertags in our previous experiment, the F-score is as high as  Œ . This tells us how much room there is for further improvements. Improvements in supertagging may give rise to further improvements in chunking. 7 Conclusions We have proposed the use of supertags in the NP chunking task in order to use more syntactical dependencies which are unavailable with POS tags. In order to train a supertagger with a larger context, we have proposed a novel method of applying SNoW to the sequential model and have applied it to supertagging. Our algorithm takes advantage of rich feature sets, avoids the sparse-data problem, and forces the learning algorithm to focus on the difficult cases. Being aware of the fact that our algorithm may suffer from the label bias problem, we have used two methods to cope with this problem, and achieved desirable results. We have tested our algorithms on both the handcoded tag set used in (Chen et al., 1999) and supertags extracted for Penn Treebank(PTB). On the same dataset as that of (Chen et al., 1999), our new supertagger achieves an accuracy of  . 
Compared with the supertaggers with the same decoding complexity (Chen, 2001), our algorithm achieves an error reduction of  . We repeat Ramshaw and Marcus’ Transformation Based NP chunking (Ramshaw and Marcus, 1995) test by substituting supertags for POS tags in the dataset. The use of supertags in NP chunking gives rise to almost absolute increase (from   to  ) in F-score under Transformation Based Learning(TBL) frame, or an error reduction of  Œ . The accuracy of  with our individual TBL chunker is close to results of POS-tag-based systems using advanced machine learning algorithms, such as A by voted MBL chunkers (Sang, 2002), Œ by SNoW chunker (Mu˜noz et al., 1999). The benefit of using a supertagger is obvious. The supertagger provides an opportunity for advanced machine learning techniques to improve their performance on chunking tasks by exploiting more syntactic information encoded in the supertags. To sum up, the supertagging algorithm presented here provides an effective and efficient way to employ syntactic information. Acknowledgments We thank Vasin Punyakanok for help on the use of SNoW in sequential inference, John Chen for help on dataset and evaluation methods and comments on the draft. We also thank Srinivas Bangalore and three anonymous reviews for helpful comments. References S. Abney. 1991. Parsing by chunks. In Principle-Based Parsing. Kluwer Academic Publishers. E. Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543–565. J. Chen, B. Srinivas, and K. Vijay-Shanker. 1999. New models for improving supertag disambiguation. In Proceedings of the 9th EACL. J. Chen. 2001. Towards Efficient Statistical Parsing using Lexicalized Grammatical Information. Ph.D. thesis, University of Delaware. M. Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In EMNLP 2002. A. Joshi and Y. Schabes. 1997. Tree-adjoining grammars. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages, volume 3, pages 69 – 124. Springer. A. Joshi and B. Srinivas. 1994. Disambiguation of super parts of speech (or supertags): Almost parsing. In COLING’94. T. Kudo and Y. Matsumoto. 2001. Chunking with support vector machines. In Proceedings of NAACL 2001. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for stgmentation and labeling sequence data. In Proceedings of ICML 2001. M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1994. Building a large annotated corpus of english: the penn treebank. Computational Linguistics, 19(2):313–330. M. Mu˜noz, V. Punyakanok, D. Roth, and D. Zimak. 1999. A learning approach to shallow parsing. In Proceedings of EMNLP-WVLC’99. G. Ngai and R. Florian. 2001. Transformation-based learning in the fast lane. In Proceedings of NAACL2001, pages 40–47. V. Punyakanok and D. Roth. 2000. The use of classifiers in sequential inference. In NIPS’00. L. Ramshaw and M. Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the 3rd WVLC. A. Ratnaparkhi. 1996. A maximum entropy part-ofspeech tagger. In Proceedings of EMNLP 96. D. Roth. 1998. Learning to resolve natural language ambiguities: A unified approach. In AAAI’98. Erik F. Tjong Kim Sang. 2002. Memory-based shallow parsing. Journal of Machine Learning Research, 2:559–594. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. 
In Proceedings of NAACL 2003. B. Srinivas and A. Joshi. 1999. Supertagging: An approach to almost parsing. Computational Linguistics, 25(2). B. Srinivas. 1997. Performance evaluation of supertagging for partial parsing. In IWPT 1997. H. van Halteren, J. Zavrel, and W. Daelmans. 1998. Improving data driven wordclass tagging by system combination. In Proceedings of COLING-ACL 98. F. Xia. 2001. Automatic Grammar Generation From Two Different Perspectives. Ph.D. thesis, University of Pennsylvania. XTAG-Group. 2001. A lexicalized tree adjoining grammar for english. Technical Report 01-03, IRCS, Univ. of Pennsylvania. T. Zhang, F. Damerau, and D. Johnson. 2001. Text chunking using regularized winnow. In Proceedings of ACL 2001.
An Expert Lexicon Approach to Identifying English Phrasal Verbs Wei Li, Xiuhong Zhang, Cheng Niu, Yuankai Jiang, Rohini Srihari Cymfony Inc. 600 Essjay Road Williamsville, NY 14221, USA {wei, xzhang, cniu, yjiang, rohini}@Cymfony.com Abstract Phrasal Verbs are an important feature of the English language. Properly identifying them provides the basis for an English parser to decode the related structures. Phrasal verbs have been a challenge to Natural Language Processing (NLP) because they sit at the borderline between lexicon and syntax. Traditional NLP frameworks that separate the lexicon module from the parser make it difficult to handle this problem properly. This paper presents a finite state approach that integrates a phrasal verb expert lexicon between shallow parsing and deep parsing to handle morpho-syntactic interaction. With precision/recall combined performance benchmarked consistently at 95.8%-97.5%, the Phrasal Verb identification problem has basically been solved with the presented method. 1 Introduction Any natural language processing (NLP) system needs to address the issue of handling multiword expressions, including Phrasal Verbs (PV) [Sag et al. 2002; Breidt et al. 1996]. This paper presents a proven approach to identifying English PVs based on pattern matching using a formalism called Expert Lexicon. Phrasal Verbs are an important feature of the English language since they form about one third of the English verb vocabulary. 1 Properly 1 For the verb vocabulary of our system based on machine-readable dictionaries and two Phrasal Verb dictionaries, phrasal verb entries constitute 33.8% of the entries. recognizing PVs is an important condition for English parsing. Like single-word verbs, each PV has its own lexical features including subcategorization features that determine its structural patterns [Fraser 1976; Bolinger 1971; Pelli 1976; Shaked 1994], e.g., look for has syntactic subcategorization and semantic features similar to those of search; carry…on shares lexical features with continue. Such lexical features can be represented in the PV lexicon in the same way as those for single-word verbs, but a parser can only use them when the PV is identified. Problems like PVs are regarded as ‘a pain in the neck for NLP’ [Sag et al. 2002]. A proper solution to this problem requires tighter interaction between syntax and lexicon than traditionally available [Breidt et al. 1994]. Simple lexical lookup leads to severe degradation in both precision and recall, as our benchmarks show (Section 4). The recall problem is mainly due to separable PVs such as turn…off which allow for syntactic units to be inserted inside the PV compound, e.g., turn it off, turn the radio off. The precision problem is caused by the ambiguous function of the particle. For example, a simple lexical lookup will mistag looked for as a phrasal verb in sentences such as He looked for quite a while but saw nothing. In short, the traditional NLP framework that separates the lexicon module from a parser makes it difficult to handle this problem properly. This paper presents an expert lexicon approach that integrates the lexical module with contextual checking based on shallow parsing results. Extensive blind benchmarking shows that this approach is very effective for identifying phrasal verbs, resulting in the precision/recall combined F-score of about 96%. The remaining text is structured as follows. Section 2 presents the problem and defines the task. 
Section 3 presents the Expert Lexicon formalism and illustrates the use of this formalism in solving this problem. Section 4 shows the benchmarking and analysis, followed by conclusions in Section 5. 2 Phrasal Verb Challenges This section defines the problems we intend to solve, with a checklist of tasks to accomplish. 2.1 Task Definition First, we define the task as the identification of PVs in support of deep parsing, not as the parsing of the structures headed by a PV. These two are separated as two tasks not only because of modularity considerations, but more importantly based on a natural labor division between NLP modules. Essential to the second argument is that these two tasks are of a different linguistic nature: the identification task belongs to (compounding) morphology (although it involves a syntactic interface) while the parsing task belongs to syntax. The naturalness of this division is reflected in the fact that there is no need for a specialized, PV-oriented parser. The same parser, mainly driven by lexical subcategorization features, can handle the structural problems for both phrasal verbs and other verbs. The following active and passive structures involving the PVs look after (corresponding to watch) and carry…on (corresponding to continue) are decoded by our deep parser after PV identification: she is being carefully ‘looked after’ (watched); we should ‘carry on’ (continue) the business for a while. There has been no unified definition of PVs among linguists. Semantic compositionality is often used as a criterion to distinguish a PV from a syntactic combination between a verb and its associated adverb or prepositional phrase [Shaked 1994]. In reality, however, PVs reside in a continuum from opaque to transparent in terms of semantic compositionality [Bolinger 1971]. There exist fuzzy cases such as take something away2 that may be included either as a PV or as a regular syntactic sequence. There is agreement 2 Single-word verbs like ‘take’ are often over-burdened with dozens of senses/uses. Treating marginal cases like ‘take…away’ as independent phrasal verb entries has practical benefits in relieving the burden and the associated noise involving ‘take’. on the vocabulary scope for the majority of PVs, as reflected in the overlapping of PV entries from major English dictionaries. English PVs are generally classified into three major types. Type I usually takes the form of an intransitive verb plus a particle word that originates from a preposition. Hence the resulting compound verb has become transitive, e.g., look for, look after, look forward to, look into, etc. Type II typically takes the form of a transitive verb plus a particle from the set {on, off, up, down}, e.g., turn…on, take…off, wake…up, let…down. Marginal cases of particles may also include {out, in, away} such as take…away, kick …in, pull…out.3 Type III takes the form of an intransitive verb plus an adverb particle, e.g., get by, blow up, burn up, get off, etc. Note that Type II and Type III PVs have considerable overlapping in vocabulary, e.g., The bomb blew up vs. The clown blew up the balloon. The overlapping phenomenon can be handled by assigning both a transitive feature and an intransitive feature to the identified PVs in the same way that we treat the overlapping of single-word verbs. The first issue in handling PVs is inflection. A system for identifying PVs should match the inflected forms, both regular and irregular, of the leading verb. 
The second is the representation of the lexical identity of recognized PVs. This is to establish a PV (a compound word) as a syntactic atomic unit with all its lexical properties determined by the lexicon [Di Sciullo and Williams 1987]. The output of the identification module based on a PV lexicon should support syntactic analysis and further processing. This translates into two sub-tasks: (i) lexical feature assignment, and (ii) canonical form representation. After a PV is identified, its lexical features encoded in the PV lexicon should be assigned for a parser to use. The representation of a canonical form for an identified PV is necessary to allow for individual rules to be associated with identified PVs in further processing and to facilitate verb retrieval in applications. For example, if we use turn_off as the canonical form for the PV turn…off, identified in both he turned off the radio and he 3 These three are arguably in the gray area. Since they do not fundamentally affect the meaning of the leading verb, we do not have to treat them as phrasal verbs. In principle, they can also be treated as adverb complements of verbs. turned the radio off, a search for turn_off will match all and only the mentions of this PV. The fact that PVs are separable hurts recall. In particular, for Type II, a Noun Phrase (NP) object can be inserted inside the compound verb. NP insertion is an intriguing linguistic phenomenon involving the morpho-syntactic interface: a morphological compounding process needs to interact with the formation of a syntactic unit. Type I PVs also have the separability problem, albeit to a lesser degree. The possible inserted units are adverbs in this case, e.g., look everywhere for, look carefully after. What hurts precision is spurious matches of PV negative instances. In a sentence with the structure V+[P+NP], [V+P] may be mistagged as a PV, as seen in the following pairs of examples for Type I and Type II: (1a) She [looked for] you yesterday. (1b) She looked [for quite a while] (but saw nothing). (2a) She [put on] the coat. (2b) She put [on the table] the book she borrowed yesterday. To summarize, the following is a checklist of problems that a PV identification system should handle: (i) verb inflection, (ii) lexical identity representation, (iii) separability, and (iv) negative instances. 2.2 Related Work Two lines of research are reported in addressing the PV problem: (i) the use of a high-level grammar formalism that integrates the identification with parsing, and (ii) the use of a finite state device in identifying PVs as a lexical support for the subsequent parser. Both approaches have their own ways of handling the morpho-syntactic interface. [Sag et al. 2002] and [Villavicencio et al. 2002] present their project LinGO-ERG that handles PV identification and parsing together. LingGO-ERG is based on Head-driven Phrase Structure Grammar (HPSG), a unification-based grammar formalism. HPSG provides a mono-stratal lexicalist framework that facilitates handling intricate morpho-syntactic interaction. PV-related morphological and syntactic structures are accounted for by means of a lexical selection mechanism where the verb morpheme subcategorizes for its syntactic object in addition to its particle morpheme. The LingGO-ERG lexicalist approach is believed to be effective. However, their coverage and testing of the PVs seem preliminary. The LinGO-ERG lexicon contains 295 PV entries, with no report on benchmarks. 
In terms of flexibility and modifiability, the use of high-level grammar formalisms such as HPSG to integrate identification into deep parsing cannot compete with the alternative finite state approach [Breidt et al. 1994]. [Breidt et al. 1994]'s approach is similar to our work. Multiword expressions including idioms, collocations, and compounds as well as PVs are accounted for by using local grammar rules formulated as regular expressions. There is no detailed description of English PV treatment since their work focuses on multilingual, multi-word expressions in general. The authors believe that the local grammar implementation of multiword expressions can work with general syntax, whether implemented in a high-level grammar formalism or as a local grammar, for the required morpho-syntactic interaction; but this interaction is not implemented in an integrated system, and hence it is impossible to properly measure performance benchmarks. There is no report on an implemented solution that covers the entire English PV vocabulary, is fully integrated into an NLP system, and is well tested on sizable real-life corpora, as is presented in this paper. 3 Expert Lexicon Approach This section illustrates the system architecture and presents the underlying Expert Lexicon (EL) formalism, followed by a description of the implementation details. 3.1 System Architecture Figure 1 shows the system architecture that contains the PV Identification Module based on the PV Expert Lexicon. This is a pipeline system mainly based on pattern matching implemented in local grammars and/or expert lexicons [Srihari et al. 2003]. (POS and NE tagging are hybrid systems involving both hand-crafted rules and statistical learning.) English parsing is divided into two tasks: shallow parsing and deep parsing. The shallow parser constructs Verb Groups (VGs) and basic Noun Phrases (NPs), also called BaseNPs [Church 1988]. The deep parser utilizes syntactic subcategorization features and semantic features of a head (e.g., VG) to decode both syntactic and logical dependency relationships such as Verb-Subject, Verb-Object, Head-Modifier, etc.
Figure 1. System Architecture (pipeline modules: lexical lookup with the General Lexicon, Part-of-Speech (POS) Tagging, Named Entity (NE) Tagging, Shallow Parsing, PV Identification with the PV Expert Lexicon, and Deep Parsing).
The general lexicon lookup component involves stemming that transforms regular or irregular inflected verbs into their base forms to facilitate the later phrasal verb matching. This component also performs indexing of the word occurrences in the processed document for subsequent expert lexicons. The PV Identification Module is placed between the Shallow Parser and the Deep Parser. It requires shallow parsing support for the required syntactic interaction, and the PV output provides lexical support for deep parsing. Results after shallow parsing form a proper basis for PV identification. First, the inserted NPs and adverbial time NEs are already constructed by the shallow parser and NE tagger. This makes it easy to write pattern matching rules for identifying separable PVs. Second, the constructed basic units NE, NP and VG provide conditions for constraint-checking in PV identification. For example, to prevent spurious matches in sentences like she put the coat on the table, it is necessary to check that the post-particle unit should NOT be an NP.
The VG chunking also decodes the voice, tense and aspect features that can be used as additional constraints for PV identification. A sample macro rule active_V_Pin that checks the ‘NOT passive’ constraint and the ‘NOT time’, ‘NOT location’ constraints is shown in 3.3. 3.2 Expert Lexicon Formalism The Expert Lexicon used in our system is an index-based formalism that can associate pattern matching rules with lexical entries. It is organized like a lexicon, but has the power of a lexicalized local grammar. All Expert Lexicon entries are indexed, similar to the case for the finite state tool in INTEX [Silberztein 2000]. The pattern matching time is therefore reduced dramatically compared to a sequential finite state device [Srihari et al. 2003].5 The expert lexicon formalism is designed to enhance the lexicalization of our system, in accordance with the general trend of lexicalist approaches to NLP. It is especially beneficial in handling problems like PVs and many individual or idiosyncratic linguistic phenomena that can not be covered by non-lexical approaches. Unlike the extreme lexicalized word expert system in [Small and Rieger 1982] and similar to the IDAREX local grammar formalism [Breidt et al.1994], our EL formalism supports a parameterized macro mechanism that can be used to capture the general rules shared by a set of individual entries. This is a particular useful mechanism that will save time for computational lexicographers in developing expert lexicons, especially for phrasal verbs, as shall be shown in Section 3.3 below. The Expert Lexicon tool provides a flexible interface for coordinating lexicons and syntax: any number of expert lexicons can be placed at any levels, hand-in-hand with other non-lexicalized modules in the pipeline architecture of our system. 5 Some other unique features of our EL formalism include: (i) providing the capability of proximity checking as rule constraints in addition to pattern matching using regular expressions so that the rule writer or lexicographer can exploit the combined advantages of both, and (ii) the propagation functionality of semantic tagging results, to accommodate principles like one sense per discourse. 3.3 Phrasal Verb Expert Lexicon To cover the three major types of PVs, we use the macro mechanism to capture the shared patterns. For example, the NP insertion for Type II PV is handled through a macro called V_NP_P, formulated in pseudo code as follows. V_NP_P($V,$P,$V_P,$F1, $F2,…) := Pattern: $V NP (‘right’|‘back’|‘straight’) $P NOT NP Action: $V: %assign_feature($F1, $F2,…) %assign_canonical_form($V_P) $P: %deactivate This macro represents cases like Take the coat off, please; put it back on, it’s raining now. It consists of two parts: ‘Pattern’ in regular expression form (with parentheses for optionality, a bar for logical OR, a quoted string for checking a word or head word) and ‘Action’ (signified by the prefix %). The parameters used in the macro (marked by the prefix $) include the leading verb $V, particle $P, the canonical form $V_P, and features $Fn. After the defined pattern is matched, a Type II separable verb is identified. The Action part ensures that the lexical identity be represented properly, i.e. the assignment of the lexical features and the canonical form. The deactivate action flags the particle as being part of the phrasal verb. In addition, to prevent a spurious case in (3b), the macro V_NP_P checks the contextual constraints that no NP (i.e. NOT NP) should follow a PV particle. 
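A rough Python rendering of how a macro like V_NP_P could be applied to shallow-parsed chunks is sketched below (the examples in (3a)-(3c) follow). The chunk encoding, the two-entry lexicon, and the feature labels are illustrative assumptions, not the system's internal representation.

```python
# Minimal sketch of applying a V_NP_P-style macro to shallow-parsed chunks.
# The chunk tuples, the toy PV_ENTRIES table, and the feature labels are
# illustrative assumptions, not the actual expert lexicon or its output.

PV_ENTRIES = {
    # leading verb -> (particle, canonical form, features)
    "take": ("off", "take_off", ["V6A"]),
    "put":  ("on",  "put_on",   ["V6A"]),
}

def match_v_np_p(chunks):
    """chunks: list of (label, text) pairs from shallow parsing,
    e.g. ("VG", "put"), ("NP", "the coat"), ("PRT", "on").
    Returns (canonical, features, verb_index, particle_index) for each match."""
    matches = []
    for i, (label, text) in enumerate(chunks):
        if label != "VG" or text not in PV_ENTRIES:
            continue
        particle, canonical, features = PV_ENTRIES[text]
        j = i + 1
        # Pattern: $V  NP  ("right"|"back"|"straight")?  $P  NOT NP
        if j >= len(chunks) or chunks[j][0] != "NP":
            continue                      # this macro requires an inserted NP
        j += 1
        if (j < len(chunks) and chunks[j][0] == "ADV"
                and chunks[j][1] in ("right", "back", "straight")):
            j += 1
        if j < len(chunks) and chunks[j] == ("PRT", particle):
            if j + 1 < len(chunks) and chunks[j + 1][0] == "NP":
                continue                  # NOT NP constraint: reject spurious match
            matches.append((canonical, features, i, j))   # particle is "deactivated"
    return matches

# "Take the coat off, please."  -> identified as take_off
print(match_v_np_p([("VG", "take"), ("NP", "the coat"), ("PRT", "off")]))
# "She put the coat on the table."  -> no match (an NP follows the particle)
print(match_v_np_p([("VG", "put"), ("NP", "the coat"), ("PRT", "on"), ("NP", "the table")]))
```

The rejected second call mirrors the NOT NP constraint that separates a genuine separable PV from the spurious V+[P+NP] reading.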
In our shallow parsing, NP chunking does not include identified time NEs, so it will not block the PV identification in (3c). (3a) She [put the coat on]. (3b) She put the coat [on the table]. (3c) She [put the coat on] yesterday. All three types of PVs when used without NP insertion are handled by the same set of macros, due to the formal patterns they share. We use a set of macros instead of one single macro, depending on the type of particle and the voice of the verb, e.g., look for calls the macro [active_V_Pfor | passive_V_Pfor], fly in calls the macro [active_V_Pin | passive_V_Pin], etc. The distinction between active rules and passive rules lies in the need for different constraints. For example, a passive rule needs to check the post-particle constraint [NOT NP] to block the spurious case in (4b). (4a) He [turned on] the radio. (4b) The world [had been turned] [on its head] again. As for particles, they also require different constraints in order to block spurious matches. For example, active_V_Pin (formulated below) requires the constraints ‘NOT location NOT time’ after the particle while active_V_Pfor only needs to check ‘NOT time’, shown in (5) and (6). (5a) Howard [had flown in] from Atlanta. (5b) The rocket [would fly] [in 1999]. (6a) She was [looking for] California on the map. (6b) She looked [for quite a while]. active_V_Pin($V, in, $V_P,$F1, $F2,…) := Pattern: $V NOT passive (Adv|time) $P NOT location NOT time Action: $V: %assign_feature($F1, $F2, …) %assign_canonical_form($V_P) $P: %deactivate The coding of the few PV macros requires skilled computational grammarians and a representative development corpus for rule debugging. In our case, it was approximately 15 person-days of skilled labor including data analysis, macro formulation and five iterations of debugging against the development corpus. But after the PV macros are defined, lexicographers can quickly develop the PV entries: it only cost one person-day to enter the entire PV vocabulary using the EL formalism and the implemented macros. We used the Cambridge International Dictionary of Phrasal Verbs and Collins Cobuild Dictionary of Phrasal Verbs as the major reference for developing our PV Expert Lexicon. 6 This expert lexicon contains 2,590 entries. The EL-rules are ordered with specific rules placed before more general rules. A sample of the developed PV Expert Lexicon is shown below (the prefix @ denotes a macro call): abide: @V_P_by(abide, by, abide_by, V6A, APPROVING_AGREEING) accede: @V_P_to(accede, to, accede_to, V6A, APPROVING_AGREEING) add: @V_P(add, up, add_up, V2A, MATH_REASONING); @V_NP_P(add, up, add_up, V6A, MATH_REASONING) ………… In the above entries, V6A and V2A are subcategorization features for transitive and intransitive verb respectively, while APPROVING_AGREEING and MATH_REASONING are semantic features. These features provide the lexical basis for the subsequent parser. The PV identification method as described above resolves all the problems in the checklist. The following sample output shows the identification result: NP[That] VG[could slow: slow_down/V6A/MOVING] NP[him] down/deactivated . 4 Benchmarking Blind benchmarking was done by two non-developer testers manually checking the results. In cases of disagreement, a third tester was involved in examining the case to help resolve it. We ran benchmarking on both the formal style and informal style of English text. 
4.1 Corpus Preparation Our development corpus (around 500 KB) consists of the MUC-7 (Message Understanding 6 Some entries that are listed in these dictionaries do not seem to belong to phrasal verb categories, e.g., relieve…of (as used in relieve somebody of something), remind…of (as used in remind somebody of something), etc. It is generally agreed that such cases belong to syntactic patterns in the form of V+NP+P+NP that can be captured by subcategorization. We have excluded these cases. Conference-7) dryrun corpus and an additional collection of news domain articles from TREC (Text Retrieval Conference) data. The PV expert lexicon rules, mainly the macros, were developed and debugged using the development corpus. The first testing corpus (called English-zone corpus) was downloaded from a website that is designed to teach PV usage in Colloquial English (http://www.english-zone.com/phrasals/w-phras als.html). It consists of 357 lines of sample sentences containing 347 PVs. This addresses the sparseness problem for the less frequently used PVs that rarely get benchmarked in running text testing. This is a concentrated corpus involving varieties of PVs from text sources of an informal style, as shown below.7 "Would you care for some dessert? We have ice cream, cookies, or cake." Why are you wrapped up in that blanket? After John's wife died, he had to get through his sadness. After my sister cut her hair by herself, we had to take her to a hairdresser to even her hair out! After the fire, the family had to get by without a house. We have prepared two collections from the running text data to test written English of a more formal style in the general news domain: (i) the MUC-7 formal run corpus (342 KB) consisting of 99 news articles, and (ii) a collection of 23,557 news articles (105MB) from the TREC data. 4.2 Performance Testing There is no available system known to the NLP community that claims a capability for PV treatment and could thus be used for a reasonable performance comparison. Hence, we have devised a bottom-line system and a baseline system for comparison with our EL-driven system. The bottom-line system is defined as a simple lexical lookup procedure enhanced with the ability to match inflected verb forms but with no capability of checking contextual constraints. There is no discussion in the literature on what 7 Proper treatment of PVs is most important in parsing text sources involving Colloquial English, e.g., interviews, speech transcripts, chat room archives. There is an increasing demand for NLP applications in handling this type of data. constitutes a reasonable baseline system for PV. We believe that a baseline system should have the additional, easy-to-implement ability to jump over inserted object case pronouns (e.g., turn it on) and adverbs (e.g., look everywhere for) in PV identification. Both the MUC-7 formal run corpus and the English-zone corpus were fed into the bottom-line and the baseline systems as well as our EL-driven system described in Section 3.3. The benchmarking results are shown in Table 1 and Table 2. The F-score is a combined measure of precision and recall, reflecting the overall performance of a system. Table 1. Running Text Benchmarking 1 Bottom-line Baseline EL Correct 303 334 338 Missing 58 27 23 Spurious 33 34 7 Precision 90.2% 88.4% 98.0% Recall 83.9% 92.5% 93.6% F-score 86.9% 91.6% 95.8% Table 2. 
Sampling Corpus Benchmarking Bottom-line Baseline EL Correct 215 244 324 Missing 132 103 23 Spurious 0 0 0 Precision 100% 100% 100% Recall 62.0% 70.3% 93.4% F-score 76.5% 82.6% 96.6% Compared with the bottom-line performance and the baseline performance, the F-score for the presented method has surged 9-20 percentage points and 4-14 percentage points, respectively. The high precision (100%) in Table 2 is due to the fact that, unlike running text, the sampling corpus contains only positive instances of PV. This weakness, often associated with sampling corpora, is overcome by benchmarking running text corpora (Table 1 and Table 3). To compensate for the limited size of the MUC formal run corpus, we used the testing corpus from the TREC data. For such a large testing corpus (23,557 articles, 105MB), it is impractical for testers to read every article to count mentions of all PVs in benchmarking. Therefore, we selected three representative PVs look for, turn…on and blow…up and used the head verbs (look, turn, blow), including their inflected forms, to retrieve all sentences that contain those verbs. We then ran the retrieved sentences through our system for benchmarking (Table 3). All three of the blind tests show fairly consistent benchmarking results (F-score 95.8%-97.5%), indicating that these benchmarks reflect the true capability of the presented system, which targets the entire PV vocabulary instead of a selected subset. Although there is still some room for further enhancement (to be discussed shortly), the PV identification problem is basically solved. Table 3. Running Text Benchmarking 2 ‘look for’ ‘turn…on’ ‘blow…up’ Correct 1138 128 650 Missing 76 0 33 Spurious 5 9 0 Precision 99.6% 93.4% 100.0% Recall 93.7% 100.0% 95.2% F-score 96.6% 97.5% 97.5% 4.3 Error Analysis There are two major factors that cause errors: (i) the impact of errors from the preceding modules (POS and Shallow Parsing), and (ii) the mistakes caused by the PV Expert Lexicon itself. The POS errors caused more problems than the NP grouping errors because the inserted NP tends to be very short, posing little challenge to the BaseNP shallow parsing. Some verbs mis-tagged as nouns by POS were missed in PV identification. There are two problems that require the fine-tuning of the PV Identification Module. First, the macros need further adjustment in their constraints. Some constraints seem to be too strong or too weak. For example, in the Type I macro, although we expected the possible insertion of an adverb, however, the constraint on allowing for only one optional adverb and not allowing for a time adverbial is still too strong. As a result, the system failed to identify listening…to and meet…with in the following cases: …was not listening very closely on Thursday to American concerns about human tights… and ... meet on Friday with his Chinese... The second type of problems cannot be solved at the macro level. These are individual problems that should be handled by writing specific rules for the related PV. An example is the possible spurious match of the PV have…out in the sentence ...still have our budget analysts out working the numbers. Since have is a verb with numerous usages, we should impose more individual constraints for NP insertion to prevent spurious matches, rather than calling a common macro shared by all Type II verbs. 
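For reference, the precision, recall, and F-score figures reported in Tables 1-3 follow directly from the correct/missing/spurious counts; a minimal sketch of that arithmetic (the helper function is ours, not part of the benchmarking tooling):

```python
# Sketch: precision, recall, and F-score from correct/missing/spurious counts,
# as reported in Tables 1-3. The helper below is illustrative, not the paper's tool.

def prf(correct, missing, spurious):
    precision = correct / (correct + spurious)
    recall = correct / (correct + missing)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score

# EL column of Table 1: 338 correct, 23 missing, 7 spurious
p, r, f = prf(338, 23, 7)
print(f"P={p:.1%}  R={r:.1%}  F={f:.1%}")   # approximately 98.0%, 93.6%, 95.8%
```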
4.4 Efficiency Testing To test the efficiency of the index-based PV Expert Lexicon in comparison with a sequential Finite State Automaton (FSA) in the PV identification task, we conducted the following experiment. The PV Expert Lexicon was compiled as a regular local grammar into a large automaton that contains 97,801 states and 237,302 transitions. For a file of 104 KB (the MUC-7 dryrun corpus of 16,878 words), our sequential FSA runner takes over 10 seconds for processing on the Windows NT platform with a Pentium PC. This processing only requires 0.36 second using the indexed PV Expert Lexicon module. This is about 30 times faster. 5 Conclusion An effective and efficient approach to phrasal verb identification is presented. This approach handles both separable and inseparable phrasal verbs in English. An Expert Lexicon formalism is used to develop the entire phrasal verb lexicon and its associated pattern matching rules and macros. This formalism allows the phrasal verb lexicon to be called between two levels of parsing for the required morpho-syntactic interaction in phrasal verb identification. Benchmarking using both the running text corpus and sampling corpus shows that the presented approach provides a satisfactory solution to this problem. In future research, we plan to extend the successful experiment on phrasal verbs to other types of multi-word expressions and idioms using the same expert lexicon formalism. Acknowledgment This work was partly supported by a grant from the Air Force Research Laboratory’s Information Directorate (AFRL/IF), Rome, NY, under contract F30602-03-C-0044. The authors wish to thank Carrie Pine and Sharon Walter of AFRL for supporting and reviewing this work. Thanks also go to the anonymous reviewers for their constructive comments. References Breidt. E., F. Segond and G. Valetto. 1994. Local Grammars for the Description of Multi-Word Lexemes and Their Automatic Recognition in Text. Proceedings of Comlex-2380 - Papers in Computational Lexicography, Linguistics Institute, HAS, Budapest, 19-28. Breidt, et al. 1996. Formal description of Multi-word Lexemes with the Finite State formalism: IDAREX. Proceedings of COLING 1996, Copenhagen. Bolinger, D. 1971. The Phrasal Verb in English. Cambridge, Mass., Harvard University Press. Church, K. 1988. A stochastic parts program and noun phrase parser for unrestricted text. Proceedings of ANLP 1988. Di Sciullo, A.M. and E. Williams. 1987. On The Definition of Word. The MIT Press, Cambridge, Massachusetts. Fraser, B. 1976. The Verb Particle Combination in English. New York: Academic Press. Pelli, M. G. 1976. Verb Particle Constructions in American English. Zurich: Francke Verlag Bern. Sag, I., T. Baldwin, F. Bond, A. Copestake and D. Flickinger. 2002. Multiword Expressions: A Pain in the Neck for NLP. Proceedings of CICLING 2002, Mexico City, Mexico, 1-15. Shaked, N. 1994. The Treatment of Phrasal Verbs in a Natural Language Processing System, Dissertation, CUNY. Silberztein, M. 2000. INTEX: An FST Toolbox. Theoretical Computer Science, Volume 231(1): 33-46. Small, S. and C. Rieger. 1982. Parsing and comprehending with word experts (a theory and its realisation). W. Lehnert and M. Ringle, editors, Strategies for Natural Language Processing. Lawrence Erlbaum Associates, Hillsdale, NJ. Srihari, R., W. Li, C. Niu and T. Cornell. 2003. InfoXtract: An Information Discovery Engine Supported by New Levels of Information Extraction. 
Proceeding of HLT-NAACL Workshop on Software Engineering and Architecture of Language Technology Systems, Edmonton, Canada. Villavicencio, A. and A. Copestake. 2002. Verb-particle constructions in a computational grammar of English. Proceedings of the Ninth International Conference on Head-Driven Phrase Structure Grammar, Seoul, South Korea.
2003
65
Unsupervised Learning of Dependency Structure for Language Modeling Jianfeng Gao Microsoft Research, Asia 49 Zhichun Road, Haidian District Beijing 100080 China [email protected] Hisami Suzuki Microsoft Research One Microsoft Way Redmond WA 98052 USA [email protected] Abstract This paper presents a dependency language model (DLM) that captures linguistic constraints via a dependency structure, i.e., a set of probabilistic dependencies that express the relations between headwords of each phrase in a sentence by an acyclic, planar, undirected graph. Our contributions are three-fold. First, we incorporate the dependency structure into an n-gram language model to capture long distance word dependency. Second, we present an unsupervised learning method that discovers the dependency structure of a sentence using a bootstrapping procedure. Finally, we evaluate the proposed models on a realistic application (Japanese Kana-Kanji conversion). Experiments show that the best DLM achieves an 11.3% error rate reduction over the word trigram model. 1 Introduction In recent years, many efforts have been made to utilize linguistic structure in language modeling, which for practical reasons is still dominated by trigram-based language models. There are two major obstacles to successfully incorporating linguistic structure into a language model: (1) capturing longer distance word dependencies leads to higher-order n-gram models, where the number of parameters is usually too large to estimate; (2) capturing deeper linguistic relations in a language model requires a large annotated training corpus and a decoder that assigns linguistic structure, which are not always available. This paper presents a new dependency language model (DLM) that captures long distance linguistic constraints between words via a dependency structure, i.e., a set of probabilistic dependencies that capture linguistic relations between headwords of each phrase in a sentence. To deal with the first obstacle mentioned above, we approximate long-distance linguistic dependency by a model that is similar to a skipping bigram model in which the prediction of a word is conditioned on exactly one other linguistically related word that lies arbitrarily far in the past. This dependency model is then interpolated with a headword bigram model and a word trigram model, keeping the number of parameters of the combined model manageable. To overcome the second obstacle, we used an unsupervised learning method that discovers the dependency structure of a given sentence using an Expectation-Maximization (EM)-like procedure. In this method, no manual syntactic annotation is required, thereby opening up the possibility for building a language model that performs well on a wide variety of data and languages. The proposed model is evaluated using Japanese Kana-Kanji conversion, achieving significant error rate reduction over the word trigram model. 2 Motivation A trigram language model predicts the next word based only on two preceding words, blindly discarding any other relevant word that may lie three or more positions to the left. Such a model is likely to be linguistically implausible: consider the English sentence in Figure 1(a), where a trigram model would predict cried from next seat, which does not agree with our intuition. 
In this paper, we define a dependency structure of a sentence as a set of probabilistic dependencies that express linguistic relations between words in a sentence by an acyclic, planar graph, where two related words are connected by an undirected graph edge (i.e., we do not differentiate the modifier and the head in a dependency). The dependency structure for the sentence in Figure 1(a) is as shown; a model that uses this dependency structure would predict cried from baby, in agreement with our intuition. (a) [A baby] [in the next seat] cried [throughout the flight] (b) [/] [/ ] [ / ] [ / ] [ ] [/] Figure 1. Examples of dependency structure. (a) A dependency structure of an English sentence. Square brackets indicate base NPs; underlined words are the headwords. (b) A Japanese equivalent of (a). Slashes demarcate morpheme boundaries; square brackets indicate phrases (bunsetsu). A Japanese sentence is typically divided into non-overlapping phrases called bunsetsu. As shown in Figure 1(b), each bunsetsu consists of one content word, referred to here as the headword H, and several function words F. Words (more precisely, morphemes) within a bunsetsu are tightly bound with each other, which can be adequately captured by a word trigram model. However, headwords across bunsetsu boundaries also have dependency relations with each other, as the diagrams in Figure 1 show. Such long distance dependency relations are expected to provide useful and complementary information with the word trigram model in the task of next word prediction. In constructing language models for realistic applications such as speech recognition and Asian language input, we are faced with two constraints that we would like to satisfy: First, the model must operate in a left-to-right manner, because (1) the search procedures for predicting words that correspond to the input acoustic signal or phonetic string work left to right, and (2) it can be easily combined with a word trigram model in decoding. Second, the model should be computationally feasible both in training and decoding. In the next section, we offer a DLM that satisfies both of these constraints. 3 Dependency Language Model The DLM attempts to generate the dependency structure incrementally while traversing the sentence left to right. It will assign a probability to every word sequence W and its dependency structure D. The probability assignment is based on an encoding of the (W, D) pair described below. Let W be a sentence of length n words to which we have prepended <s> and appended </s> so that w0 = <s>, and wn+1 = </s>. In principle, a language model recovers the probability of a sentence P(W) over all possible D given W by estimating the joint probability P(W, D): P(W) = ∑D P(W, D). In practice, we used the so-called maximum approximation where the sum is approximated by a single term P(W, D*):
P(W) = ∑D P(W, D) ≈ P(W, D*) .    (1)
Here, D* is the most probable dependency structure of the sentence, which is generally discovered by maximizing P(W, D):
D* = argmaxD P(W, D) .    (2)
Below we restrict the discussion to the most probable dependency structure of a given sentence, and simply use D to represent D*. In the remainder of this section, we first present a statistical dependency parser, which estimates the parsing probability at the word level, and generates D incrementally while traversing W left to right. Next, we describe the elements of the DLM that assign probability to each possible W and its most probable D, P(W, D).
Finally, we present an EM-like iterative method for unsupervised learning of dependency structure. 3.1 Dependency parsing The aim of dependency parsing is to find the most probable D of a given W by maximizing the probability P(D|W). Let D be a set of probabilistic dependencies d, i.e., d ∈ D. Assuming that the dependencies are independent of each other, we have
P(D|W) = ∏d∈D P(d|W)    (3)
where P(d|W) is the dependency probability conditioned by a particular sentence. (The model in Equation (3) is not strictly probabilistic because it drops the probabilities of illegal dependencies, e.g., crossing dependencies.) It is impossible to estimate P(d|W) directly because the same sentence is very unlikely to appear in both training and test data. We thus approximated P(d|W) by P(d), and estimated the dependency probability from the training corpus. Let dij = (wi, wj) be the dependency between wi and wj. The maximum likelihood estimation (MLE) of P(dij) is given by
P(dij) = C(wi, wj, R) / C(wi, wj)    (4)
where C(wi, wj, R) is the number of times wi and wj have a dependency relation in a sentence in training data, and C(wi, wj) is the number of times wi and wj are seen in the same sentence. To deal with the data sparseness problem of MLE, we used a backoff estimation strategy similar to the one proposed in Collins (1996), which backs off to estimates that use less conditioning context. More specifically, we used the following three estimates:
E1 = η1 / δ1,   E23 = (η2 + η3) / (δ2 + δ3),   E4 = η4 / δ4    (5)
where η1 = C(wi, wj, R), δ1 = C(wi, wj), η2 = C(wi, *, R), δ2 = C(wi, *), η3 = C(*, wj, R), δ3 = C(*, wj), η4 = C(*, *, R), δ4 = C(*, *), in which * indicates a wild-card matching any word. The final estimate E is given by linearly interpolating these estimates:
E = λ1 E1 + (1 − λ1)(λ2 E23 + (1 − λ2) E4)    (6)
where λ1 and λ2 are smoothing parameters. Given the above parsing model, we used an approximation parsing algorithm that is O(n²). Traditional techniques use an optimal Viterbi-style algorithm (e.g., bottom-up chart parser) that is O(n⁵). (For parsers that use bigram lexical dependencies, Eisner and Satta (1999) present parsing algorithms that are O(n⁴) or O(n³); we thank Joshua Goodman for pointing this out.) Although the approximation algorithm is not guaranteed to find the most probable D, we opted for it because it works in a left-to-right manner, and is very efficient and simple to implement. In our experiments, we found that the algorithm performs reasonably well on average, and its speed and simplicity make it a better choice in DLM training, where we need to parse a large amount of training data iteratively, as described in Section 3.3. The parsing algorithm is a slightly modified version of that proposed in Yuret (1998). It reads a sentence left to right; after reading each new word wj, it tries to link wj to each of its previous words wi, and push the generated dependency dij into a stack. When a dependency crossing or a cycle is detected in the stack, the dependency with the lowest dependency probability in conflict is eliminated. The algorithm is outlined in Figures 2 and 3.
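Before turning to that pseudo-code, the backed-off dependency probability of Equations (4)-(6) can be sketched in Python as follows; the count tables and smoothing weights are made-up placeholders, not the trained model.

```python
# Sketch of the backoff estimate of the dependency probability (Equations 4-6).
# dep_count[(wi, wj)]  ~ C(wi, wj, R): times wi and wj are linked in training data
# cooc_count[(wi, wj)] ~ C(wi, wj):    times wi and wj occur in the same sentence
# The example counts and smoothing weights below are made up for illustration.

from collections import Counter

dep_count = Counter({("turn", "on"): 40, ("look", "for"): 25})
cooc_count = Counter({("turn", "on"): 50, ("look", "for"): 60})

def marginal(counter, wi=None, wj=None):
    """Wild-card counts such as C(wi, *, R) or C(*, wj, R)."""
    return sum(c for (a, b), c in counter.items()
               if (wi is None or a == wi) and (wj is None or b == wj))

def dep_prob(wi, wj, lambda1=0.6, lambda2=0.8):
    e1_num, e1_den = dep_count[(wi, wj)], cooc_count[(wi, wj)]
    e23_num = marginal(dep_count, wi=wi) + marginal(dep_count, wj=wj)
    e23_den = marginal(cooc_count, wi=wi) + marginal(cooc_count, wj=wj)
    e4_num, e4_den = marginal(dep_count), marginal(cooc_count)
    e1 = e1_num / e1_den if e1_den else 0.0
    e23 = e23_num / e23_den if e23_den else 0.0
    e4 = e4_num / e4_den if e4_den else 0.0
    # E = lambda1*E1 + (1 - lambda1)*(lambda2*E23 + (1 - lambda2)*E4)
    return lambda1 * e1 + (1 - lambda1) * (lambda2 * e23 + (1 - lambda2) * e4)

print(dep_prob("turn", "on"))    # backed-off estimate for a pair seen in training
print(dep_prob("turn", "off"))   # unseen pair: falls back to the wild-card estimates
```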
DEPENDENCY-PARSING(W)
1 for j ← 1 to LENGTH(W)
2   for i ← j-1 downto 1
3     PUSH dij = (wi, wj) into the stack Dj
4     if a dependency cycle (CY) is detected in Dj (see Figure 3(a))
5       REMOVE d, where d = argmin d∈CY P(d)
6     while a dependency crossing (CR) is detected in Dj (see Figure 3(b)) do
7       REMOVE d, where d = argmin d∈CR P(d)
8 OUTPUT(D)
Figure 2. Approximation algorithm of dependency parsing
Figure 3. (a) An example of a dependency cycle: given that P(d23) is smaller than P(d12) and P(d13), d23 is removed (represented as a dotted line). (b) An example of a dependency crossing: given that P(d13) is smaller than P(d24), d13 is removed.
Let the dependency probability be the measure of the strength of a dependency, i.e., higher probabilities mean stronger dependencies. Note that when a strong new dependency crosses multiple weak dependencies, the weak dependencies are removed even if the new dependency is weaker than the sum of the old dependencies. (This operation leaves some headwords disconnected; in such a case, we assumed that each disconnected headword has a dependency relation with its preceding headword.) Although this action results in lower total probability, it was implemented because multiple weak dependencies connected to the beginning of the sentence often prevented a strong meaningful dependency from being created. In this manner, the directional bias of the approximation algorithm was partially compensated for. (Theoretically, we should arrive at the same dependency structure whether we parse the sentence left to right or right to left; however, this is not the case with the approximation algorithm. This problem is called directional bias.) 3.2 Language modeling The DLM together with the dependency parser provides an encoding of the (W, D) pair into a sequence of elementary model actions. Each action conceptually consists of two stages. The first stage assigns a probability to the next word given the left context. The second stage updates the dependency structure given the new word using the parsing algorithm in Figure 2. The probability P(W, D) is calculated as:
P(W, D) = ∏j=1..n P(wj | Φ(Wj-1, Dj-1)) P(Dj-1 j | Φ(Wj-1, Dj-1), wj)    (7)
where
P(Dj-1 j | Φ(Wj-1, Dj-1), wj) = ∏i=1..j P(pi j | Wj-1, Dj-1, p1 j, ..., pi-1 j) .    (8)
Here (Wj-1, Dj-1) is the word-parse (j-1)-prefix, such that Dj-1 is a dependency structure containing only those dependencies whose two related words are included in the word (j-1)-prefix, Wj-1. wj is the word to be predicted. Dj-1 j is the incremental dependency structure that generates Dj = Dj-1 || Dj-1 j (|| stands for concatenation) when attached to Dj-1; it is the dependency structure built on top of Dj-1 and the newly predicted word wj (see the for-loop of line 2 in Figure 2). pi j denotes the ith action of the parser at position j in the word string: to generate a new dependency dij, and eliminate dependencies with the lowest dependency probability in conflict (see lines 4-7 in Figure 2). Φ is a function that maps the history (Wj-1, Dj-1) onto equivalence classes. The model in Equation (8) is unfortunately infeasible because it is extremely difficult to estimate the probability of pi j due to the large number of parameters in the conditional part. According to the parsing algorithm in Figure 2, the probability of each action pi j depends on the entire history (e.g.
for detecting a dependency crossing or cycle), so any mapping Φ that limits the equivalence classification to less context suitable for model estimation would be very likely to drop critical conditional information for predicting pi j. In practice, we approximated P(Dj-1 j | Φ(Wj-1, Dj-1), wj) by P(Dj|Wj) of Equation (3), yielding P(Wj, Dj) ≈ P(Wj | Φ(Wj-1, Dj-1)) P(Dj|Wj). This approximation is probabilistically deficient, but our goal is to apply the DLM to a decoder in a realistic application, and the performance gain achieved by this approximation justifies the modeling decision. Now, we describe the way P(wj | Φ(Wj-1, Dj-1)) is estimated. As described in Section 2, headwords and function words play different syntactic and semantic roles capturing different types of dependency relations, so the prediction of them can better be done separately. Assuming that each word token can be uniquely classified as a headword or a function word in Japanese, the DLM can be conceived of as a cluster-based language model with two clusters, headword H and function word F. We can then define the conditional probability of wj based on its history as the product of two factors: the probability of the category given its history, and the probability of wj given its category. Let hj or fj be the actual headword or function word in a sentence, and let Hj or Fj be the category of the word wj. P(wj | Φ(Wj-1, Dj-1)) can then be formulated as:
P(wj | Φ(Wj-1, Dj-1)) = P(Hj | Φ(Wj-1, Dj-1)) × P(wj | Φ(Wj-1, Dj-1), Hj) + P(Fj | Φ(Wj-1, Dj-1)) × P(wj | Φ(Wj-1, Dj-1), Fj) .    (9)
We first describe the estimation of headword probability P(wj | Φ(Wj-1, Dj-1), Hj). Let HWj-1 be the headwords in the (j-1)-prefix, i.e., containing only those headwords that are included in Wj-1. Because HWj-1 is determined by Wj-1, the headword probability can be rewritten as P(wj | Φ(Wj-1, HWj-1, Dj-1), Hj). The problem is to determine the mapping Φ so as to identify the related words in the left context that we would like to condition on. Based on the discussion in Section 2, we chose a mapping function that retains (1) two preceding words wj-1 and wj-2 in Wj-1, (2) one preceding headword hj-1 in HWj-1, and (3) one linguistically related word wi according to Dj-1. wi is determined in two stages: First, the parser updates the dependency structure Dj-1 incrementally to Dj assuming that the next word is wj. Second, when there are multiple words that have dependency relations with wj in Dj, wi is selected using the following decision rule:
wi = argmax wi: (wi, wj) ∈ Dj P(wj | wi, R) ,    (10)
where the probability P(wj | wi, R) of the word wj given its linguistically related word wi is computed using MLE by Equation (11):
P(wj | wi, R) = C(wi, wj, R) / ∑wj C(wi, wj, R) .    (11)
We thus have the mapping function Φ(Wj-1, HWj-1, Dj-1) = (wj-2, wj-1, hj-1, wi). The estimate of headword probability is an interpolation of three probabilities:
P(wj | Φ(Wj-1, Dj-1), Hj) = λ1 (λ2 P(wj | hj-1, Hj) + (1 − λ2) P(wj | wi, R)) + (1 − λ1) P(wj | wj-2, wj-1, Hj) .    (12)
Here P(wj | wj-2, wj-1, Hj) is the word trigram probability given that wj is a headword, P(wj | hj-1, Hj) is the headword bigram probability, and λ1, λ2 ∈ [0,1] are the interpolation weights optimized on held-out data. We now come back to the estimate of the other three probabilities in Equation (9). Following the work in Gao et al.
(2002b), we used the unigram estimate for word category probabilities (i.e., P(Hj | Φ(Wj-1, Dj-1)) ≈ P(Hj) and P(Fj | Φ(Wj-1, Dj-1)) ≈ P(Fj)), and the standard trigram estimate for function word probability (i.e., P(wj | Φ(Wj-1, Dj-1), Fj) ≈ P(wj | wj-2, wj-1, Fj)). Let Cj be the category of wj; we approximated P(Cj) × P(wj | wj-2, wj-1, Cj) by P(wj | wj-2, wj-1). By separating the estimates for the probabilities of headwords and function words, the final estimate is given below:
P(wj | Φ(Wj-1, Dj-1)) =    (13)
    P(Hj) λ1 (λ2 P(wj | hj-1) + (1 − λ2) P(wj | wi, R)) + (1 − λ1) P(wj | wj-2, wj-1)    wj: headword
    P(wj | wj-2, wj-1)    wj: function word
All conditional probabilities in Equation (13) are obtained using MLE on training data. In order to deal with the data sparseness problem, we used a backoff scheme (Katz, 1987) for parameter estimation. This backoff scheme recursively estimates the probability of an unseen n-gram by utilizing (n-1)-gram estimates. In particular, the probability of Equation (11) backs off to the estimate of P(wj | R), which is computed as:
P(wj | R) = C(wj, R) / N ,    (14)
where N is the total number of dependencies in training data, and C(wj, R) is the number of dependencies that contain wj. To keep the model size manageable, we removed all n-grams of count less than 2 from the headword bigram model and the word trigram model, but kept all long-distance dependency bigrams that occurred in the training data. 3.3 Training data creation This section describes two methods that were used to tag a raw text corpus for DLM training: (1) a method for headword detection, and (2) an unsupervised learning method for dependency structure acquisition. In order to classify a word uniquely as H or F, we used a mapping table created in the following way. We first assumed that the mapping from part-of-speech (POS) to word category is unique and fixed (the tag set we used included 1,187 POS tags, of which 102 counted as headwords in our experiments); we then used a POS-tagger to generate a POS-tagged corpus, which is then turned into a category-tagged corpus (since the POS-tagger does not identify phrases (bunsetsu), our implementation identifies multiple headwords in phrases headed by compounds). Based on this corpus, we created a mapping table which maps each word to a unique category: when a word can be mapped to either H or F, we chose the more frequent category in the corpus. This method achieved a 98.5% accuracy of headword detection on the test data we used. Given a headword-tagged corpus, we then used an EM-like iterative method for joint optimization of the parsing model and the dependency structure of training data. This method uses the maximum likelihood principle, which is consistent with language model training. There are three steps in the algorithm: (1) initialize, (2) (re-)parse the training corpus, and (3) re-estimate the parameters of the parsing model. Steps (2) and (3) are iterated until the improvement in the probability of training data is less than a threshold. Initialize: We set a window of size N and assumed that each headword pair within a headword N-gram constitutes an initial dependency. The optimal value of N is 3 in our experiments. That is, given a headword trigram (h1, h2, h3), there are 3 initial dependencies: d12, d13, and d23. From the initial dependencies, we computed an initial dependency parsing model by Equation (4).
(Re-)parse the corpus: Given the parsing model, we used the parsing algorithm in Figure 2 to select the most probable dependency structure for each sentence in the training data. This provides an updated set of dependencies. Re-estimate the parameters of parsing model: We then re-estimated the parsing model parameters based on the updated dependency set. 4 Evaluation Methodology In this study, we evaluated language models on the application of Japanese Kana-Kanji conversion, which is the standard method of inputting Japanese text by converting the text of a syllabary-based Kana string into the appropriate combination of Kanji and Kana. This is a similar problem to speech recognition, except that it does not include acoustic ambiguity. Performance on this task is measured in terms of the character error rate (CER), given by the number of characters wrongly converted from the phonetic string divided by the number of characters in the correct transcript. For our experiments, we used two newspaper corpora, Nikkei and Yomiuri Newspapers, both of which have been pre-word-segmented. We built language models from a 36-million-word subset of the Nikkei Newspaper corpus, performed parameter optimization on a 100,000-word subset of the Yomiuri Newspaper (held-out data), and tested our models on another 100,000-word subset of the Yomiuri Newspaper corpus. The lexicon we used contains 167,107 entries. Our evaluation was done within a framework of so-called “N-best rescoring” method, in which a list of hypotheses is generated by the baseline language model (a word trigram model in this study), which is then rescored using a more sophisticated language model. We use the N-best list of N=100, whose “oracle” CER (i.e., the CER of the hypotheses with the minimum number of errors) is presented in Table 1, indicating the upper bound on performance. We also note in Table 1 that the performance of the conversion using the baseline trigram model is much better than the state-of-the-art performance currently available in the marketplace, presumably due to the large amount of training data we used, and to the similarity between the training and the test data. Baseline Trigram Oracle of 100-best 3.73% 1.51% Table 1. CER results of baseline and 100-best list 5 Results The results of applying our models to the task of Japanese Kana-Kanji conversion are shown in Table 2. The baseline result was obtained by using a conventional word trigram model (WTM).7 HBM stands for headword bigram model, which does not use any dependency structure (i.e. λ2 = 1 in Equation (13)). DLM_1 is the DLM that does not use headword bigram (i.e. λ 2 = 0 in Equation (13)). DLM_2 is the model where the headword probability is estimated by interpolating the word trigram probability, the headword bigram probability, and the probability given one previous linguistically related word in the dependency structure. Although Equation (7) suggests that the word probability P(wj|Φ(Wj-1,Dj-1)) and the parsing model probability can be combined through simple multiplication, some weighting is desirable in practice, especially when our parsing model is estimated using an approximation by the parsing score P(D|W). We therefore introduced a parsing model weight PW: both DLM_1 and DLM_2 models were built with and without PW. In Table 2, the PW- prefix refers to the DLMs with PW = 0.5, and the DLMs without PW- prefix refers to DLMs with PW = 0. 
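The N-best rescoring with a parsing model weight can be pictured as in the following sketch; the hypothesis scores are placeholders standing in for the word-prediction and parsing models, and the log-linear combination is our rendering of the weighting, not the system's exact formula.

```python
# Sketch of N-best rescoring with a parsing model weight PW (cf. the PW-DLM models).
# The scores below are made-up placeholders for log P_word(W) and log P(D|W);
# combining them log-linearly is an illustrative assumption, not the exact system.

WORD_LOGPROB = {"hyp_a": -8.2, "hyp_b": -8.0}    # word-prediction model scores
PARSE_LOGPROB = {"hyp_a": -1.1, "hyp_b": -2.5}   # dependency parsing scores

def rescore(nbest, pw=0.5):
    """Return the hypothesis maximizing log P_word(W) + PW * log P(D | W)."""
    return max(nbest, key=lambda w: WORD_LOGPROB[w] + pw * PARSE_LOGPROB[w])

print(rescore(["hyp_a", "hyp_b"], pw=0.0))   # PW = 0: the word model alone prefers hyp_b
print(rescore(["hyp_a", "hyp_b"], pw=0.5))   # PW = 0.5: the parse score flips it to hyp_a
```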
For both DLM_1 and DLM_2, models with the parsing weight achieve better performance; we 7 For a detailed description of the baseline trigram model, see Gao et al. (2002a). therefore discuss only DLMs with the parsing weight for the rest of this section. Model λ1 λ2 CER CER reduction WTM ---- ---- 3.73% ---- HBM 0.2 1 3.40% 8.8% DLM_1 0.1 0 3.48% 6.7% PW-DLM_1 0.1 0 3.44% 7.8% DLM_2 0.3 0.7 3.33% 10.7% PW-DLM_2 0.3 0.7 3.31% 11.3% Table 2. Comparison of CER results By comparing both HBM and PW-LDM_1 models with the baseline model, we can see that the use of headword dependency contributes greatly to the CER reduction: HBM outperformed the baseline model by 8.8% in CER reduction, and PW-LDM_1 by 7.8%. By combining headword bigram and dependency structure, we obtained the best model PW-DLM_2 that achieves 11.3% CER reduction over the baseline. The improvement achieved by PW-DLM_2 over the HBM is statistically significant according to the t test (P<0.01). These results demonstrate the effectiveness of our parsing technique and the use of dependency structure for language modeling. 6 Discussion In this section, we relate our model to previous research and discuss several factors that we believe to have the most significant impact on the performance of DLM. The discussion includes: (1) the use of DLM as a parser, (2) the definition of the mapping function Φ, and (3) the method of unsupervised dependency structure acquisition. One basic approach to using linguistic structure for language modeling is to extend the conventional language model P(W) to P(W, T), where T is a parse tree of W. The extended model can then be used as a parser to select the most likely parse by T* = argmaxT P(W, T). Many recent studies (e.g., Chelba and Jelinek, 2000; Charniak, 2001; Roark, 2001) adopt this approach. Similarly, dependency-based models (e.g., Collins, 1996; Chelba et al., 1997) use a dependency structure D of W instead of a parse tree T, where D is extracted from syntactic trees. Both of these models can be called grammar-based models, in that they capture the syntactic structure of a sentence, and the model parameters are estimated from syntactically annotated corpora such as the Penn Treebank. DLM, on the other hand, is a non-grammar-based model, because it is not based on any syntactic annotation: the dependency structure used in language modeling was learned directly from data in an unsupervised manner, subject to two weak syntactic constraints (i.e., dependency structure is acyclic and planar).8 This resulted in capturing the dependency relations that are not precisely syntactic in nature within our model. For example, in the conversion of the string below, the word  ban 'evening' was correctly predicted in DLM by using the long-distance bigram ~ asa~ban 'morning~evening', even though these two words are not in any direct syntactic dependency relationship:      'asks for instructions in the morning and submits daily reports in the evening' Though there is no doubt that syntactic dependency relations provide useful information for language modeling, the most linguistically related word in the previous context may come in various linguistic relations with the word being predicted, not limited to syntactic dependency. This opens up new possibilities for exploring the combination of different knowledge sources in language modeling. 
Regarding the function Φ that maps the left context onto equivalence classes, we used a simple approximation that takes into account only one linguistically related word in left context. An alternative is to use the maximum entropy (ME) approach (Rosenfeld, 1994; Chelba et al., 1997). Although ME models provide a nice framework for incorporating arbitrary knowledge sources that can be encoded as a large set of constraints, training and using ME models is extremely computationally expensive. Our working hypothesis is that the information for predicting the new word is dominated by a very limited set of words which can be selected heuristically: in this paper, Φ is defined as a heuristic function that maps D to one word in D that has the strongest linguistic relation with the word being predicted, as in (8). This hypothesis is borne out by 8 In this sense, our model is an extension of a dependency-based model proposed in Yuret (1998). However, this work has not been evaluated as a language model with error rate reduction. an additional experiment we conducted, where we used two words from D that had the strongest relation with the word being predicted; this resulted in a very limited gain in CER reduction of 0.62%, which is not statistically significant (P>0.05 according to the t test). The EM-like method for learning dependency relations described in Section 3.3 has also been applied to other tasks such as hidden Markov model training (Rabiner, 1989), syntactic relation learning (Yuret, 1998), and Chinese word segmentation (Gao et al., 2002a). In applying this method, two factors need to be considered: (1) how to initialize the model (i.e. the value of the window size N), and (2) the number of iterations. We investigated the impact of these two factors empirically on the CER of Japanese Kana-Kanji conversion. We built a series of DLMs using different window size N and different number of iterations. Some sample results are shown in Table 3: the improvement in CER begins to saturate at the second iteration. We also find that a larger N results in a better initial model but makes the following iterations less effective. The possible reason is that a larger N generates more initial dependencies and would lead to a better initial model, but it also introduces noise that prevents the initial model from being improved. All DLMs in Table 2 are initialized with N = 3 and are run for two iterations. Iteration N = 2 N = 3 N = 5 N = 7 N = 10 Init. 3.552% 3.523% 3.540% 3.514 % 3.511% 1 3.531% 3.503% 3.493% 3.509% 3.489% 2 3.527% 3.481% 3.483% 3.492% 3.488% 3 3.526% 3.481% 3.485% 3.490% 3.488% Table 3. CER of DLM_1 models initialized with different window size N, for 0-3 iterations 7 Conclusion We have presented a dependency language model that captures linguistic constraints via a dependency structure – a set of probabilistic dependencies that express the relations between headwords of each phrase in a sentence by an acyclic, planar, undirected graph. Promising results of our experiments suggest that long-distance dependency relations can indeed be successfully exploited for the purpose of language modeling. There are many possibilities for future improvements. In particular, as discussed in Section 6, syntactic dependency structure is believed to capture useful information for informed language modeling, yet further improvements may be possible by incorporating non-syntax-based dependencies. Correlating the accuracy of the dependency parser as a parser vs. 
its utility in CER reduction may suggest a useful direction for further research. Reference Charniak, Eugine. 2001. Immediate-head parsing for language models. In ACL/EACL 2001, pp.124-131. Chelba, Ciprian and Frederick Jelinek. 2000. Structured Language Modeling. Computer Speech and Language, Vol. 14, No. 4. pp 283-332. Chelba, C, D. Engle, F. Jelinek, V. Jimenez, S. Khudanpur, L. Mangu, H. Printz, E. S. Ristad, R. Rosenfeld, A. Stolcke and D. Wu. 1997. Structure and performance of a dependency language model. In Processing of Eurospeech, Vol. 5, pp 2775-2778. Collins, Michael John. 1996. A new statistical parser based on bigram lexical dependencies. In ACL 34:184-191. Eisner, Jason and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In ACL 37: 457-464. Gao, Jianfeng, Joshua Goodman, Mingjing Li and Kai-Fu Lee. 2002a. Toward a unified approach to statistical language modeling for Chinese. ACM Transactions on Asian Language Information Processing, 1-1: 3-33. Gao, Jianfeng, Hisami Suzuki and Yang Wen. 2002b. Exploiting headword dependency and predictive clustering for language modeling. In EMNLP 2002: 248-256. Katz, S. M. 1987. Estimation of probabilities from sparse data for other language component of a speech recognizer. IEEE transactions on Acoustics, Speech and Signal Processing, 35(3): 400-401. Rabiner, Lawrence R. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of IEEE 77:257-286. Roark, Brian. 2001. Probabilistic top-down parsing and language modeling. Computational Linguistics, 17-2: 1-28. Rosenfeld, Ronald. 1994. Adaptive statistical language modeling: a maximum entropy approach. Ph.D. thesis, Carnegie Mellon University. Yuret, Deniz. 1998. Discovery of linguistic relations using lexical attraction. Ph.D. thesis, MIT.
2003
66
Using model-theoretic semantic interpretation to guide statistical parsing and word recognition in a spoken language interface*

William Schuler
Department of Computer and Information Science
University of Pennsylvania
200 S. 33rd Street, Philadelphia, PA 19104
[email protected]

Abstract

This paper describes an extension of the semantic grammars used in conventional statistical spoken language interfaces to allow the probabilities of derived analyses to be conditioned on the meanings or denotations of input utterances in the context of an interface's underlying application environment or world model. Since these denotations will be used to guide disambiguation in interactive applications, they must be efficiently shared among the many possible analyses that may be assigned to an input utterance. This paper therefore presents a formal restriction on the scope of variables in a semantic grammar which guarantees that the denotations of all possible analyses of an input utterance can be calculated in polynomial time, without undue constraints on the expressivity of the derived semantics. Empirical tests show that this model-theoretic interpretation yields a statistically significant improvement on standard measures of parsing accuracy over a baseline grammar not conditioned on denotations.

1 Introduction

The development of speaker-independent mixed-initiative speech interfaces, in which users not only answer questions but also ask questions and give instructions, is currently limited by the performance of language models based largely on word co-occurrences. Even under ideal circumstances, with large application-specific corpora on which to train, conventional language models are not sufficiently predictive to correctly analyze a wide variety of inputs from a wide variety of speakers, such as might be encountered in a general-purpose interface for directing robots, office assistants, or other agents with complex capabilities. Such tasks may involve unlabeled objects that must be precisely described, and a wider range of actions than a standard database interface would require (which also must be precisely described), introducing a great deal of ambiguity into input processing.

This paper therefore explores the use of a statistical model of language conditioned on the meanings or denotations of input utterances in the context of an interface's underlying application environment or world model. This use of model-theoretic interpretation represents an important extension to the 'semantic grammars' used in existing statistical spoken language interfaces, which rely on co-occurrences among lexically-determined semantic classes and slot fillers (Miller et al., 1996), in that the probability of an analysis is now also conditioned on the existence of denoted entities and relations in the world model.

[*] The author would like to thank David Chiang, Karin Kipper, and three anonymous reviewers for particularly helpful comments on this material. This work was supported by NSF grant EIA 0224417.
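A minimal sketch of what such conditioning can look like, under simplifying assumptions that are not taken from the system described in this paper: each constituent label is augmented with a bit recording whether its denotation in the world model is non-empty, and rule probabilities are then estimated by relative frequency over these augmented labels. The tree format and helper names below are illustrative only.

from collections import Counter

def dbit(denotation):
    # '+' if the constituent denotes at least one entity or tuple in the
    # world model, '-' otherwise.
    return "+" if denotation else "-"

def count_rule_events(trees):
    """Count denotation-augmented rule events such as 'NP:- -> NP:+ PP:+'.
    Each tree node is assumed to be a dict with 'label', 'denotation',
    and 'children' fields (an illustrative format)."""
    events, parents = Counter(), Counter()
    def walk(node):
        kids = node.get("children", [])
        if not kids:
            return
        lhs = f"{node['label']}:{dbit(node['denotation'])}"
        rhs = " ".join(f"{k['label']}:{dbit(k['denotation'])}" for k in kids)
        events[(lhs, rhs)] += 1
        parents[lhs] += 1
        for k in kids:
            walk(k)
    for t in trees:
        walk(t)
    return events, parents

def rule_prob(lhs, rhs, events, parents):
    # Relative-frequency estimate of P(rhs | lhs) over augmented labels.
    return events[(lhs, rhs)] / parents[lhs] if parents[lhs] else 0.0

Under such counts, a noun phrase that denotes nothing in the world model (NP:-) can receive a different probability of expanding into a modifier structure than one that does denote something, which is the kind of preference exploited for disambiguation below.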
The adv an tage of the in terpretation-based disam biguation adv anced here is that the probabilit y of generating, for example, the noun phrase `the lemon next to the safe' can b e more reliably estimated from the frequency with whic h noun phrases ha v e non-empt y denotations { giv en the fact that `the lemon next to the safe' do es indeed denote something in the w orld mo del { than it can from the relativ ely sparse co-o ccurrences of frame lab els suc h as lemon and next-to, or of next-to and safe. Since there are exp onen tially man y w ord strings attributable to an y utterance, and an exp onen tial (Catalan-order) n um b er of p ossible parse tree analyses attributable to an y string of w ords, this use of mo del-theoretic in terpretation for disam biguation m ust in v olv e some kind of sharing of partial results b et w een comp eting analyses if in terpretation is to b e p erformed on large n um b ers of p ossible analyses in a practical in teractiv e application. This pap er therefore also presen ts a formal restriction on the scop e of v ariables in a seman tic grammar (without un to w ard constrain ts on the expressivit y of the deriv ed semantics) whic h guaran tees that the denotations of all p ossible analyses of an input utterance can b e calculated in p olynomial time. Empirical tests sho w that this use of mo del-theoretic in terpretation in disambiguation yields a statistically signi can t impro v emen t on standard measures of parsing accuracy o v er a baseline grammar not conditioned on denotations. 2 Mo del-theoretic in terpretation In order to determine whether a user's directions denote en tities and relations that exist in the w orld mo del { and of course, in order to execute those directions once they are disam biguated { it is necessary to precisely represen t the meanings of input utterances. Seman tic grammars of the sort emplo y ed in curren t sp ok en language in terfaces for igh t reserv ation tasks (Miller et al., 1996; Sene et al., 1998) associate fragmen ts of logical { t ypically relational algebra { expressions with recursiv e transition net w orks enco ding lexicalized rules in a con text-free grammar (the indep enden t probabilities of these rules can then b e estimated from a training corpus and m ultiplied together to giv e a probabilit y for an y giv en analysis). In igh t reserv ation systems, these asso ciated semantic expressions usually designate en tities through a xed set of constan t sym b ols used as prop er names (e.g. for cities and n um b ered igh ts); but in applications with unlab eled (p erhaps visually-represen ted) en vironmen ts, en tities m ust b e describ ed b y predicating one or more mo di ers o v er some v ariable, narro wing the set of p oten tial referen ts b y sp ecifying colors, spatial lo cations, etc., un til only the desired en tit y or en tities remain. A seman tic grammar for in teracting with this kind of unlab eled en vironmen t migh t con tain the follo wing rules, using v ariables x 1 ::: x n (o v er en tities in the w orld mo del) in the asso ciated logical expressions: VP ! VP PP : x 1 ::: x n : $1(x 1 ::: x m )^ $2(x 1 ; x m+1 ::: x n ) VP ! hold NP : x 1 : Hold (Ag ent; x 1 ) ^ $2(x 1 ) NP ! a glass : x 1 : Glass (x 1 ) PP ! under NP : x 1 x 2 : Under (x 1 ; x 2 ) ^ $2(x 2 ) NP ! the faucet : x 1 : F auc et (x 1 ) in whic h m and n are in tegers and 0  m  n. 
Eac h lam b da expression x 1 ::: x n :  indicates a function from a tuple of en tities he 1 ::: e n i to a truth v alue de ned b y the remainder of the expression  (subVP ! VP PP x 1 ::: x n=2 : $1(x 1 ::: x m=1 ) ^ $2(x 1 ; x m+1 ::: x n ) fhf 1 ; g 1 i; hf 2 ; g 2 i; : : : g VP ! hold NP x 1 : H (A; x 1 ) ^ $2(x 1 ) fg 1 ; g 2 ; : : : g hold NP ! . . . x 1 : G(x 1 ) fg 1 ; g 2 ; : : : g a glass PP ! under NP x 1 x 2 : U (x 1 ; x 2 ) ^ $2(x 2 ) fhf 1 ; g 1 i; hf 2 ; g 2 i; : : : g under NP ! . . . x 1 : F (x 1 ) ff 1 ; f 2 ; : : : g the faucet Figure 1: Seman tic grammar deriv ation sho wing the asso ciated seman tics and denotation of eac h constituen t. En tire rules are sho wn at eac h step in the deriv ation in order to mak e the seman tic asso ciations explicit. stituting e 1 ::: e n for x 1 ::: x n ), whic h denotes a set of tuples satisfying , dra wn from E n (where E is the set of en tities in the w orld mo del). The pseudo-v ariables $1; $2; : : : in this notation indicate the sites at whic h the seman tic expressions asso ciated with eac h rule's non terminal sym b ols are to comp ose (the n um b ers corresp ond to the relativ e p ositions of the sym b ols on the righ t hand side of eac h rule, n um b ered from left to righ t). Seman tic expressions for complete sen tences are then formed b y comp osing the sub-expressions asso ciated with eac h rule at the appropriate sites. 1 Figure 1 sho ws the ab o v e rules assem bled in a deriv ation of the sen tence `hold a glass under the faucet.' The denotation annotated b eneath eac h constituen t is simply the set of v ariable assignmen ts (for eac h free v ariable) that satisfy the constituen t's seman tics. These denotations exactly capture the meaning (in a giv en w orld mo del) of the assembled seman tic expressions dominated b y eac h constituen t, regardless of ho w man y sub-expressions are subsumed b y that constituen t, and can therefore b e shared among comp eting analyses in lieu of the seman tic expression itself, as a partial result in mo deltheoretic in terpretation. 2.1 V ariable scop e Note, ho w ev er, that the adjunction of the prep ositional phrase mo di er `under the faucet' adds another free v ariable (x 2 ) to the seman tics of the v erb 1 This use of pseudo-v ariables is in tended to resem ble that of the unix program `y acc,' whic h has a similar purp ose (asso ciating syn tax with seman tics in constructing compilers for programming languages). VP ! VP PP x 1 ::: x n=1 : $1(x 1 ::: x m=0 ) ^ $2(x 1 ; x m+1 ::: x n ) VP ! hold NP Q x 1 : H (A; x 1 ) ^ $2(x 1 ) hold NP ! . . . PP ! under NP x 1 : Q x 2 : U (x 1 ; x 2 ) ^ $2(x 2 ) under NP ! . . . Figure 2: Deriv ation with minimal scoping. The v ariable x 1 in the seman tic expression asso ciated with the prep ositional phrase `under the faucet' cannot b e iden ti ed with the v ariable in the v erb phrase. phrase, and therefore another factor of jE j to the cardinalit y of its denotation. Moreo v er, under this kind of global scoping, if additional prep ositional phrases are adjoined, they w ould eac h con tribute y et another free v ariable, increasing the complexit y of the denotation b y an additional factor of jE j, making the shared in terpretation of suc h structures p otentially exp onen tial on the length of the input. This proliferation of free v ariables means that the v ariables in tro duced b y the noun phrases in an utterance, suc h as `hold a glass under the faucet,' cannot all b e giv en global scop e, as in Figure 1. 
On the other hand, the v ariables in tro duced b y quan ti ed noun phrases cannot b e b ound as so on as the noun phrases are comp osed, as in Figure 2, b ecause these v ariables ma y need to b e used in mo di ers comp osed in subsequen t (higher) rule applications. F ortunately , if these non-immediate v ariable scoping arrangemen ts are expressed structurally , as dominance relationships in the elemen tary tree structures of some grammar, then a structural restriction on this grammar can b e enforced that preserv es as man y non-immediate scoping arrangemen ts as p ossible while still prev en ting an un b ounded proliferation of free v ariables. The correct scoping arrangemen ts (e.g. for the sentence `hold a glass under the faucet,' sho wn Figure 3) can b e expressed using ordered sets of parse rules group ed together in suc h a w a y as to allo w other structural material to in terv ene. In this case, a group w ould include a rule for comp osing a v erb and a noun phrase with some asso ciated predicate, and one or more rules for binding eac h of the predicate's v ariables in a quan ti er somewhere ab o v e it (thereb y ensuring that these rules alw a ys o ccur together with the quan ti er rules dominating the predicate rule), while still allo wing rules adjoining prep ositional phrase mo di ers to apply in b et w een them (so that v ariables in their asso ciated predicates can VP ! VP x 2 ::: x n=1 : Q x 1 : $1(x 1 ::: x n ) fhig VP ! VP PP x 1 ::: x n=1 : $1(x 1 ::: x m=1 ) ^ $2(x 1 ; x m+1 ::: x n ) fg 1 ; g 2 ; : : : g VP ! hold NP x 1 : H (A; x 1 ) ^ $2(x 1 ) fg 1 ; g 2 ; : : : g hold NP ! . . . PP ! PP x 2 ::: x n : Q x 1 : $1(x 1 ::: x n ) fg 1 ; g 2 ; : : : g PP ! under NP x 1 x 2 : U (x 2 ; x 1 ) ^ $2(x 1 ) fhf 1 ; g 1 i; hf 2 ; g 2 i; : : : g under NP ! . . . Figure 3: Deriv ation with desired scoping. b e b ound b y the same quan ti ers). These `group ed rules' can b e formalized using a tree-rewriting system whose elemen tary trees can subsume sev eral ordered CF G rule applications (or steps in a con text-free deriv ation), as sho wn in Figure 4. Eac h suc h elemen tary tree con tains a rule (no de) asso ciated with a logical predicate and rules (no des) asso ciated with quan ti ers binding eac h of the predicate's v ariables. These trees are then comp osed b y rewriting op erations (dotted lines), whic h split them up and either insert them b et w een or identify them with (if demarcated with dashed lines) the rules in another elemen tary tree { in this case, the elemen tary tree anc hored b y the w ord `under.' These trees are considered elemen tary in order to exclude the p ossibilit y of generating deriv ations that con tain un b ound v ariables or quan ti ers o v er un used v ariables, whic h w ould ha v e no in tuitiv e meaning. The comp osition op erations will b e presen ted in further detail in Section 2.2. 2.2 Seman tic comp osition as tree-rewriting A general class of rewriting systems can b e de ned using sets of allo w able expansions of some t yp e of ob ject to incorp orate zero or more other instances of the same t yp e of ob ject, eac h of whic h is similarly expandable. Suc h a system can generate arbitrarily complex structure b y recursiv ely expanding or `rewriting' eac h new ob ject, concluding with a set of zero-expansions at the fron tier. 
F or example, a con text-free grammar ma y b e cast as a rewriting system whose ob jects are strings, and whose allo wable expansions are its grammar pro ductions, eac h of whic h expands or rewrites a certain string as a set VP ! VP x 2 ::: x n : Q x 1 : $1(x 1 ::: x n ) VP ! hold NP x 1 ::: x n : $1(x 1 ::: x n ) ^ $2(x 1 ) V ! hold x 1 : Hold (A; x 1 ) NP ! . . . . . . 1 VP ! VP x 2 ::: x n : Q x 1 : $1(x 1 ::: x n ) VP ! VP PP x 1 ::: x n : $1(x 1 ::: x m ) ^ $2(x 1 ; x m+1 ::: x n ) VP 1 PP ! PP x 2 ::: x n : Q x 1 : $1(x 1 ::: x n ) PP ! P NP x 1 ::: x n : $1(x 1 ::: x n ) ^ $2(x 1 ) P ! under x 1 x 2 : Under (x 2 ; x 1 ) NP 2 1 2 PP ! PP x 2 ::: x n : Q x 1 : $1(x 1 ::: x n ) NP ! Q N $2 Q ! a N ! faucet x 1 : F auc et (x 1 ) Figure 4: Complete elemen tary tree for `under' sho wing argumen t insertion sites. of zero or more sub-strings arranged around certain `elemen tary' strings con tributing terminal sym b ols. A class of tr e e-rewriting systems can similarly b e de ned as rewriting systems whose ob jects are trees, and whose allo w able expansions are pro ductions (similar to con text-free pro ductions), eac h of whic h rewrite a tree A as some function f applied to zero or more sub-trees A 1 ; : : : A s ; s  0 arranged around some `elemen tary' tree structure de ned b y f (P ollard, 1984; W eir, 1988): A ! f (A 1 ; : : : A s ) (1) This elemen tary tree structure can b e used to express the dominance relationship b et w een a logical predicate and the quan ti ers that bind its v ariables (whic h m ust b e preserv ed in an y meaningful deriv ed structure); but in order to allo w the same instance of a quan ti er to bind v ariables in more than one predicate, the rewriting pro ductions of suc h a seman tic tree-rewriting system m ust allo w expanded subtrees to iden tify parts of their structure (sp eci cally , the parts con taining quan ti ers) with parts of eac h other's structure, and with that of their host elemen tary tree. In particular, a rewriting pro duction in suc h a system w ould rewrite a tree A as an elemen tary tree 0 with a set of sub-trees A 1 ; : : : A s inserted in to it, eac h of whic h is rst partitioned in to a set of con tiguous comp onen ts (in order to isolate particular quan ti er no des and other kinds of sub-structure) using a tree partition function g at some sequence of split p oin ts h# i1 ,... # ic i i, whic h are no de addresses in A i (the rst of whic h simply sp eci es the ro ot). 2 The resulting sequence of partitioned comp onen ts of eac h expanded 2 The no de addresses enco de a path from the ro ot of sub-tree are then inserted in to 0 at a corresp onding sequence of insertion site addresses h i1 ,...  ic i i de ned b y the rewriting function f : f (A 1 ; : : : A s ) = 0 [h 11 ,...  1c 1 i; g # 11 ,... # 1c 1 (A 1 )] : : : [h s1 ,...  sc s i; g # s1 ,... # sc s (A s )] (2) Since eac h address can only host a single inserted comp onen t, an y comp onen ts from di eren t sub-tree argumen ts of f that are assigned to the same insertion site address are constrained to b e iden tical in order for the pro duction to apply . Additionally , some addresses ma y b e `pre- lled' as part of the elementary structure de ned in f , and therefore ma y also b e iden ti ed with comp onen ts of sub-tree argumen ts of f that are inserted at the same address. Figure 4 sho ws the set of insertion sites (designated with b o xed indices) for eac h argumen t of an elementary tree anc hored b y `under.' 
The sites lab eled 1 , asso ciated with the rst argumen t sub-tree (in this case, the tree anc hored b y `hold'), indicate that it is comp osed b y partitioning it in to three comp onen ts, eac h dominating or dominated b y the others, the lo west of whic h is inserted at the terminal no de lab eled `VP ,' the middle of whic h is iden ti ed with a pre lled comp onen t (delimited b y dashed lines), containing the quan ti er no de lab eled `VP ! VP ,' and the upp ermost of whic h (empt y in the gure) is inserted at the ro ot, while preserving the relativ e dominance relationships among the no des in b oth trees. Similarly , sites lab eled 2 , asso ciated with the second argumen t sub-tree (for the noun phrase complethe tree in whic h ev ery address  i sp eci es the i th c hild of the no de at the end of path  . men t to the prep osition), indicate that it is comp osed b y partitioning it in to t w o comp onen ts { again, one dominating the other { the lo w est of whic h is inserted at the terminal no de lab eled `NP ,' and the upp ermost of whic h is iden ti ed with another pre- lled comp onen t con taining the quan ti er no de lab eled `PP ! PP ,' again preserving the relativ e dominance relationships among the no des in b oth trees. 2.3 Shared in terpretation Recall the problem of un b ounded v ariable proliferation describ ed in Section 2.1. The adv an tage of using a tree-rewriting system to mo del seman tic comp osition is that suc h systems allo w the application of w ell-studied restrictions to limit their recursiv e capacit y to generate structural descriptions (in this case, to limit the un b ounded o v erlapping of quan ti er-v ariable dep endencies that can pro duce unlimited n um b ers of free v ariables at certain steps in a deriv ation), without limiting the m ulti-lev el structure of their elemen tary trees, used here for capturing the w ell-formedness constrain t that a predicate b e dominated b y its v ariables' quan ti ers. One suc h restriction, based on the regular form restriction de ned for tree adjoining grammars (Rogers, 1994), prohibits a grammar from allo wing an y cycle of elemen tary trees, eac h in terv ening inside a spine (a path connecting the insertion sites of an y argumen t) of the next. This restriction is de ned b elo w: De nition 2.1 L et a spine in an elementary tr e e b e the p ath of no des (or obje ct-level rule applic ations) c onne cting al l insertion site addr esses of the same ar gument. De nition 2.2 A gr ammar G is in regular form if a dir e cte d acyclic gr aph hV ; E i c an b e dr awn with vertic es v H ; v A 2 V c orr esp onding to p artitione d elementary tr e es of G (p artitione d as describ e d ab ove), and dir e cte d e dges hv H ; v A i 2 E  V  V fr om e ach vertex v H , c orr esp onding to a p artitione d elementary tr e e that c an host an ar gument, to e ach vertex v A , c orr esp onding to a p artitione d elementary tr e e that c an function as its ar gument, whose p artition interse cts its spine at any plac e other than the top no de in the spine. This restriction ensures that there will b e no unb ounded `pumping' of in terv ening tree structure in an y deriv ation, so there will nev er b e an un b ounded amoun t of unrecognized tree structure to k eep trac k of at an y step in a b ottom-up parse, so the n um b er of p ossible descriptions of eac h sub-span of the input will b e b ounded b y some constan t. 
It is called a `regular form' restriction b ecause it ensures that the set of ro ot-to-leaf paths in an y deriv ed structure will form a regular language. A CKY-st yle parser can no w b e built that recognizes eac h con text-free rule in an elemen tary tree from the b ottom up, storing in order the unrecognized rules that lie ab o v e it in the elemen tary tree (as w ell as an y remaining rules from an y comp osed sub-trees) as a kind of promissory note. The fact that an y regular-form grammar has a regular path set means that only a nite n um b er of states will b e required to k eep trac k of this promised, unrecognized structure in a b ottom-up tra v ersal, so the parser will ha v e the usual O (n 3 ) complexit y (times a constan t equal to the nite n um b er of p ossible unrecognized structures). Moreo v er, since the parser can recognize an y string deriv able b y suc h a grammar, it can create a shared forest represen tation of ev ery p ossible analysis of a giv en input b y annotating ev ery p ossible application of parse rules that could b e used in the deriv ation of eac h constituen t (Billot and Lang, 1989). This p olynomial-sized shared forest represen tation can then b e in terpreted determine whic h constituen ts denote en tities and relations in the w orld mo del, in order to allo w mo del-theoretic seman tic information to guide disam biguation decisions in parsing. Finally , the regular form restriction also has the imp ortan t e ect of ensuring that the n um b er of unrecognized quanti er no des at an y step in a b ottomup analysis { and therefore the n um b er of free v ariables in an y w ord or phrase constituen t of a parse { is also b ounded b y some constan t, whic h limits the size of an y constituen t's denotation to a p olynomial order of E , the n um b er of en tities in the en vironmen t. The in terpretation of an y shared forest deriv ed b y this kind of regular-form tree-rewriting system can therefore b e calculated in w orst-case p olynomial time on E . A denotation-annotated shared forest for the noun phrase `the girl with the hat b ehind the coun ter' is sho wn in Figure 5, using the noun and prep osition trees from Figure 4, with alternativ e applications of parse rules represen ted as circles b elo w eac h deriv ed constituen t. This shared structure subsumes t w o comp eting analyses: one con taining the noun phrase `the girl with the hat', denoting the en tit y g 1 , and the other con taining the noun phrase `the hat b ehind the coun ter', whic h do es not denote an ything in the w orld mo del. Assuming that noun phrases rarely o ccur with empt y denotations in the training data, the parse con taining the phrase `the girl with the hat' will b e preferred, b ecause there is indeed a girl with a hat in the w orld mo del. This formalism has similarities with t w o exNP ! girl x 1 : Girl (x 1 ) fg 1 ; g 2 ; g 3 g P ! with x 1 x 2 : With (x 2 ; x 1 ) fhh 1 ; g 1 i; hh 2 ; b 1 ig NP ! hat x 1 : Hat (x 1 ) fh 1 ; h 2 ; h 3 ; h 4 g P ! b ehind x 1 x 2 : Behind (x 2 ; x 1 ) fhc 1 ; g 1 ig NP ! coun ter x 1 : Counter (x 1 ) fc 1 ; c 2 g PP ! P NP x 1 ::: x n=2 : $1(x 1 ::: x n ) ^ $2(x 1 ) fhh 1 ; g 1 i; hh 2 ; b 1 ig PP ! PP x 2 ::: x n=2 : Q x 1 : $1(x 1 ::: x n ) fg 1 ; b 1 g PP ! P NP x 1 ::: x n=2 : $1(x 1 ::: x n ) ^ $2(x 1 ) fhc 1 ; g 1 ig PP ! PP x 2 ::: x n=2 : Q x 1 : $1(x 1 ::: x n ) fg 1 g NP ! NP PP x 1 ::: x n=1 : $1(x 1 ::: x m=1 ) ^ $2(x 1 ; x m+1 ::: x n ) fg 1 g NP ! NP PP x 1 ::: x n=1 : $1(x 1 ::: x m=1 ) ^ $2(x 1 ; x m+1 ::: x n ) ; PP ! 
P NP x 1 ::: x n=2 : $1(x 1 ::: x n ) ^ $2(x 1 ) ; PP ! PP x 2 ::: x n=2 : Q x 1 : $1(x 1 ::: x n ) ; NP ! NP PP x 1 ::: x n=1 : $1(x 1 ::: x m=1 ) ^ $2(x 1 ; x m+1 ::: x n ) ; or fg 1 g Figure 5: Shared forest for `the girl with the hat b ehind the coun ter.' tensions of tree-adjoining grammar (Joshi, 1985), namely m ulti-comp onen t tree adjoining grammar (Bec k er et al., 1991) and description tree substitution grammar (Ram b o w et al., 1995), and indeed represen ts something of a com bination of the t w o: 1. Lik e description tree substitution grammars, but unlik e m ulti-comp onen t T A Gs, it allo ws trees to b e partitioned in to an y desired set of con tiguous comp onen ts during comp osition, 2. Lik e m ulti-comp onen t T A Gs, but unlik e description tree substitution grammars, it allo ws the sp eci cation of particular insertion sites within elemen tary trees, and 3. Unlik e b oth, it allo w duplication of structure (whic h is used for merging quan ti ers from differen t elemen tary trees). The use of lam b da calculus functions to de ne decomp osable meanings for input sen tences dra ws on traditions of Ch urc h (1940) and Mon tague (1973), but this approac h di ers from the Mon tago vian system b y in tro ducing explicit limits on computational complexit y (in order to allo w tractable disam biguation). This approac h to seman tics is v ery similar to that describ ed b y Shieb er (1994), in whic h syn tactic and seman tic expressions are assem bled sync hronously using paired tree-adjoining grammars with isomorphic deriv ations, except that in this approac h the deriv ed structures are isomorphic as w ell, hence the reduction of sync hronous tree pairs to seman ticallyannotated syn tax trees. This isomorphism restriction on deriv ed trees reduces the n um b er of quan ti er scoping con gurations that can b e assigned to an y giv en input (most of whic h are unlik ely to b e used in a practical application), but its relativ e parsimon y allo ws syn tactically am biguous inputs to b e semantically in terpreted in a shared forest represen tation in w orst-case p olynomial time. The in terlea ving of seman tic ev aluation and parsing for the purp ose of disam biguation also has m uc h in common with that of Do wding et al. (1994), except that in this case, constituen ts are not only seman tically t yp e-c hec k ed, but are also fully in terpreted eac h time they are prop osed. There are also commonalities b et w een the undersp eci ed seman tic represen tation of structurallyam biguous elemen tary tree constituen ts in a shared forest and the undersp eci ed seman tic represen tation of (e.g. quan ti er) scop e am biguit y describ ed b y Reyle (1993). 3 3 Ev aluation The con tribution of this mo del-theoretic seman tic information to w ard disam biguation w as ev aluated on a set of directions to animated agen ts collected in a con trolled but spatially complex 3-D sim ulated environmen t (of c hildren running a lemonade stand). In order to a v oid priming them to w ards particular linguistic constructions, sub jects w ere sho wn unnarrated animations of computer-sim ulated agen ts p erforming di eren t tasks in this en vironmen t (pic king fruit, op erating a juicer, and exc hanging lemonade for money), whic h w ere describ ed only as the `desired b eha vior' of eac h agen t. The sub jects w ere then ask ed to direct the agen ts, using their o wn w ords, to p erform the desired b eha viors as sho wn. 
340 utterances w ere collected and annotated with brac k ets and elemen tary tree no de addresses as describ ed in Section 2.2, for use as training data and as gold standard data in testing. Some sample directions are sho wn in Figure 6. Most elemen tary trees w ere extracted, with some simpli cations for parsing eÆciency , from an existing broad-co v erage grammar resource (XT A G Researc h Group, 1998), but some elemen tary trees for m ulti-w ord expressions had to b e created anew. In all, a complete annotation of this corpus required a grammar of 68 elemen tary trees and a lexicon of 288 lexicalizations (that is, w ords or sets of w ords with indivisible seman tics, forming the anc hors of a giv en elemen tary tree). Eac h lexicalization w as then assigned a seman tic expression describing the in tended geometric relation or class of ob jects in the sim ulated 3-D en vironmen t. 4 The in terface w as tested on the rst 100 collected utterances, and the parsing mo del w as trained on the remaining utterances. The presence or absence of a denotation of eac h constituen t w as added to the lab el of eac h constituen t in the denotation-sensitiv e parsing mo del (for example, statistics w ere collected for the frequency of `NP:! NP:+ PP:+' ev en ts, meaning a noun phrase that do es not denote an y3 Denotation of comp eting applications of parse rules can b e unioned (though this e ectiv ely treats am biguit y as a form of disjunction), or stored separately to some nitie b eam (though some globally preferable but lo cally dispreferred structures w ould b e lost). 4 Here it w as assumed that the in ten tion of the user w as to direct the agen t to p erform the actions sho wn in the `desired b eha vior' animation. Walk towar ds the tr e e wher e you se e a yel low lemon on the gr ound. Pick up the lemon. Plac e the lemon in the p o ol. T ake the dol lar bil l fr om the p erson in fr ont of you. Walk to the left towar ds the big black cub e. Figure 6: Sample utterances from collected corpus. thing in the en vironmen t expands to a noun phrase and a prep ositional phrase that do ha v e a denotation in the en vironmen t), whereas the baseline system used a parsing mo del conditioned on only constituen t lab els (for example, `NP ! NP PP' ev en ts). The en tire w ord lattice output of the sp eec h recognizer w as fed directly in to the parser, so as to allo w the mo del-theoretic seman tic information to b e brough t to b ear on w ord recognition am biguit y as w ell as on structural am biguit y in parsing. Since an y deriv ation of elemen tary trees uniquely de nes a seman tic expression at eac h no de, the task of ev aluating this kind of seman tic analysis is reduced to the familiar task of ev aluating a the accuracy of a lab eled brac k eting (lab eled with elemen tary tree names and no de addresses). Here, the standard measures of lab eled precision and recall are used. Note that there ma y b e m ultiple p ossible brac k etings for eac h gold standard tree in a giv en w ord lattice that di er only in the start and end frames of the comp onen t w ords. Since neither the baseline nor test parsing mo dels are sensitiv e to the start and end frames of the comp onen t w ords, the gold standard brac k eting is simply assumed to use the most lik ely frame segmen tation in the w ord lattice that yields the correct w ord sequence. The results of the exp erimen t are summarized b elo w. 
The en vironmen t-based mo del sho ws a statistically signi can t (p<.05) impro v emen t of 3 p oin ts in lab eled recall, a 12% reduction in error. Most of the impro v emen t can b e attributed to the denotation-sensitiv e parser dispreferring noun phrase constituen ts with mis-attac hed mo di ers, whic h do not denote an ything in the w orld mo del. Mo del LR LP baseline mo del 82% 78% baseline + denotation bit 85% 81% 4 Conclusion This pap er has describ ed an extension of the semantic grammars used in con v en tional sp ok en language in terfaces to allo w the probabilities of deriv ed analyses to b e conditioned on the results of a mo deltheoretic in terpretation. In particular, a formal restriction w as presen ted on the scop e of v ariables in a seman tic grammar whic h guaran tees that the denotations of all p ossible analyses of an input utterance can b e calculated in p olynomial time, without undue constrain ts on the expressivit y of the deriv ed seman tics. Empirical tests sho w that this mo deltheoretic in terpretation yields a statistically significan t impro v emen t on standard measures of parsing accuracy o v er a baseline grammar not conditioned on denotations. References Tilman Bec k er, Ara vind Joshi, and Ow en Ram b o w. 1991. Long distance scram bling and tree adjoining grammars. In Fifth Confer enc e of the Eur op e an Chapter of the Asso ciation for Computational Linguistics (EA CL'91), pages 21{26. Sylvie Billot and Bernard Lang. 1989. The structure of shared forests in am biguous parsing. In Pr o c e e dings of the 27 th A nnual Me eting of the Asso ciation for Computational Linguistics (A CL '89), pages 143{151. Alonzo Ch urc h. 1940. A form ulation of the simple theory of t yp es. Journal of Symb olic L o gic, 5(2):56{68. John Do wding, Rob ert Mo ore, F ran cois Andery , and Douglas Moran. 1994. In terlea ving syn tax and seman tics in an eÆcien t b ottom-up parser. In Pr oc e e dings of the 32nd A nnual Me eting of the Association for Computational Linguistics (A CL'94). Ara vind K. Joshi. 1985. Ho w m uc h con text sensitivit y is necessary for c haracterizing structural descriptions: T ree adjoining grammars. In L. Karttunen D. Do wt y and A. Zwic ky , editors, Natur al language p arsing: Psycholo gic al, c omputational and the or etic al p ersp e ctives, pages 206{250. Cambridge Univ ersit y Press, Cam bridge, U.K. Scott Miller, Da vid Stallard, Rob ert Bobro w, and Ric hard Sc h w artz. 1996. A fully statistical approac h to natural language in terfaces. In Pr oc e e dings of the 34th A nnual Me eting of the Association for Computational Linguistics (A CL'96), pages 55{61. Ric hard Mon tague. 1973. The prop er treatmen t of quan ti cation in ordinary English. In J. Hintikk a, J.M.E. Mora v csik, and P . Supp es, editors, Appr o aches to Natur al L angauge, pages 221{242. D. Riedel, Dordrec h t. Reprin ted in R. H. Thomason ed., F ormal Philosophy, Y ale Univ ersit y Press, 1994. Carl P ollard. 1984. Gener alize d phr ase structur e gr ammars, he ad gr ammars and natur al langauge. Ph.D. thesis, Stanford Univ ersit y . Ow en Ram b o w, Da vid W eir, and K. Vija y-Shank er. 1995. D-tree grammars. In Pr o c e e dings of the 33r d A nnual Me eting of the Asso ciation for Computational Linguistics (A CL '95). Uw e Reyle. 1993. Dealing with am biguities b y undersp eci cation: Construction, represen tation and deduction. Journal of Semantics, 10:123{179. James Rogers. 1994. Capturing CFLs with tree adjoining grammars. 
In Pr o c e e dings of the 32nd A nnual Me eting of the Asso ciation for Computational Linguistics (A CL '94). Stephanie Sene , Ed Hurley , Ra ymond Lau, Christine P ao, Philipp Sc hmid, and Victor Zue. 1998. Galaxy-I I: a reference arc hitecture for con v ersational system dev elopmen t. In Pr o c e e dings of the 5th International Confer enc e on Sp oken L anguage Pr o c essing (ICSLP '98), Sydney , Australia. Stuart M. Shieb er. 1994. Restricting the w eakgenerativ e capabilit y of sync hronous tree adjoining grammars. Computational Intel ligenc e, 10(4). Da vid W eir. 1988. Char acterizing mild ly c ontextsensitive gr ammar formalisms. Ph.D. thesis, Departmen t of Computer and Information Science, Univ ersit y of P ennsylv ania. XT A G Researc h Group. 1998. A lexicalized tree adjoining grammar for english. T ec hnical rep ort, IR CS, Univ ersit y of P ennsylv ania.
2003
67
Towards a Resource for Lexical Semantics: A Large German Corpus with Extensive Semantic Annotation Katrin Erk and Andrea Kowalski and Sebastian Pad´o and Manfred Pinkal Department of Computational Linguistics Saarland University Saarbr¨ucken, Germany {erk, kowalski, pado, pinkal}@coli.uni-sb.de Abstract We describe the ongoing construction of a large, semantically annotated corpus resource as reliable basis for the largescale acquisition of word-semantic information, e.g. the construction of domainindependent lexica. The backbone of the annotation are semantic roles in the frame semantics paradigm. We report experiences and evaluate the annotated data from the first project stage. On this basis, we discuss the problems of vagueness and ambiguity in semantic annotation. 1 Introduction Corpus-based methods for syntactic learning and processing are well-established in computational linguistics. There are comprehensive and carefully worked-out corpus resources available for a number of languages, e.g. the Penn Treebank (Marcus et al., 1994) for English or the NEGRA corpus (Skut et al., 1998) for German. In semantics, the situation is different: Semantic corpus annotation is only in its initial stages, and currently only a few, mostly small, corpora are available. Semantic annotation has predominantly concentrated on word senses, e.g. in the SENSEVAL initiative (Kilgarriff, 2001), a notable exception being the Prague Treebank (Hajiˇcov´a, 1998) . As a consequence, most recent work in corpus-based semantics has taken an unsupervised approach, relying on statistical methods to extract semantic regularities from raw corpora, often using information from ontologies like WordNet (Miller et al., 1990). Meanwhile, the lack of large, domainindependent lexica providing word-semantic information is one of the most serious bottlenecks for language technology. To train tools for the acquisition of semantic information for such lexica, large, extensively annotated resources are necessary. In this paper, we present current work of the SALSA (SAarbr¨ucken Lexical Semantics Annotation and analysis) project, whose aim is to provide such a resource and to investigate efficient methods for its utilisation. In the current project phase, the focus of our research and the backbone of the annotation are semantic role relations. More specifically, our role annotation is based on the Berkeley FrameNet project (Baker et al., 1998; Johnson et al., 2002). In addition, we selectively annotate word senses and anaphoric links. The TIGER corpus (Brants et al., 2002), a 1.5M word German newspaper corpus, serves as sound syntactic basis. Besides the sparse data problem, the most serious problem for corpus-based lexical semantics is the lack of specificity of the data: Word meaning is notoriously ambiguous, vague, and subject to contextual variance. The problem has been recognised and discussed in connection with the SENSEVAL task (Kilgarriff and Rosenzweig, 2000). Annotation of frame semantic roles compounds the problem as it combines word sense assignment with the assignment of semantic roles, a task that introduces vagueness and ambiguity problems of its own. The problem can be alleviated by choosing a suitable resource as annotation basis. 
FrameNet roles, which are local to particular frames (abstract situations), may be better suited for the annotation task than the "classical" thematic roles concept with a small, universal and exhaustive set of roles like agent, patient, theme: the exact extension of the role concepts has never been agreed upon (Fillmore, 1968). Furthermore, the more concrete frame semantic roles may make the annotators' task easier. The FrameNet database itself, however, cannot be taken as evidence that reliable annotation is possible: the aim of the FrameNet project is essentially lexicographic and its annotation not exhaustive; it comprises representative examples for the use of each frame and its frame elements in the BNC. While the vagueness and ambiguity problem may be mitigated by the use of a "good" resource, it will not disappear entirely, and an annotation format is needed that can cope with the inherent vagueness of word sense and semantic role assignment.

Plan of the paper. In Section 2 we briefly introduce FrameNet and the TIGER corpus that we use as a basis for semantic annotation. Section 3 gives an overview of the aims of the SALSA project, and Section 4 describes the annotation with frame semantic roles. Section 5 evaluates the first annotation results and the suitability of FrameNet as an annotation resource, and Section 6 discusses the effects of vagueness and ambiguity on frame semantic role annotation. Although the current amount of annotated data does not allow for definitive judgements, we can discuss tendencies.

2 Resources

SALSA currently extends the TIGER corpus by semantic role annotation, using FrameNet as a resource. In the following, we will give a short overview of both resources.

FrameNet. The FrameNet project (Johnson et al., 2002) is based on Fillmore's Frame Semantics. A frame is a conceptual structure that describes a situation. It is introduced by a target or frame-evoking element (FEE). The roles, called frame elements (FEs), are local to particular frames and are the participants and props of the described situations. The aim of FrameNet is to provide a comprehensive frame-semantic description of the core lexicon of English. A database of frames contains the frames' basic conceptual structure, and names and descriptions for the available frame elements. A lexicon database associates lemmas with the frames they evoke, lists possible syntactic realizations of FEs and provides annotated examples from the BNC. The current on-line version of the frame database (Johnson et al., 2002) consists of almost 400 frames, and covers about 6,900 lexical entries.

Frame: REQUEST
  FE          Example
  SPEAKER     Pat urged me to apply for the job.
  ADDRESSEE   Pat urged me to apply for the job.
  MESSAGE     Pat urged me to apply for the job.
  TOPIC       Kim made a request about changing the title.
  MEDIUM      Kim made a request in her letter.

Frame: COMMERCIAL TRANSACTION (C T)
  BUYER       Jess bought a coat.
  GOODS       Jess bought a coat.
  SELLER      Kim sold the sweater.
  MONEY       Kim paid 14 dollars for the ticket.
  PURPOSE     Kim bought peppers to cook them.
  REASON      Bob bought peppers because he was hungry.

Figure 1: Example frame descriptions.

Figure 1 shows two frames. The frame REQUEST involves an FE SPEAKER who voices the request, an ADDRESSEE who is asked to do something, the MESSAGE, the request that is made, the TOPIC that the request is about, and the MEDIUM that is used to convey the request. Among the FEEs for this frame are the verb ask and the noun request.
In the frame COMMERCIAL TRANSACTION (henceforth C T), a BUYER gives MONEY to a SELLER and receives GOODS in exchange. This frame is evoked e.g. by the verb pay and the noun money. The TIGER Corpus. We are using the TIGER Corpus (Brants et al., 2002), a manually syntactically annotated German corpus, as a basis for our annotation. It is the largest available such corpus (80,000 sentences in its final release compared to 20,000 sentences in its predecessor NEGRA) and uses a rich annotation format. The annotation scheme is surface oriented and comparably theoryneutral. Individual words are labelled with POS information. The syntactic structures of sentences are described by relatively flat trees providing information about grammatical functions (on edge labels), syntactic categories (on node labels), and argument structure of syntactic heads (through the use of dependency-oriented constituent structures, which are close to the syntactic surface). An example for a syntactic structure is given in Figure 2. 3 Project overview The aim of the SALSA project is to construct a large semantically annotated corpus and to provide methods for its utilisation. Corpus construction. In the first phase of the project, we annotate the TIGER corpus in part manFigure 2: A sentence and its syntactic structure. ually, in part semi-automatically, having tools propose tags which are verified by human annotators. In the second phase, we will extend these tools for the weakly supervised annotation of a much larger corpus, using the TIGER corpus as training data. Utilisation. The SALSA corpus is designed to be utilisable for many purposes, like improving statistical parsers, and extending methods for information extraction and access. The focus in the SALSA project itself is on lexical semantics, and our first use of the corpus will be to extract selectional preferences for frame elements. The SALSA corpus will be tagged with the following types of semantic information: FrameNet frames. We tag all FEEs that occur in the corpus with their appropriate frames, and specify their frame elements. Thus, our focus is different from the lexicographic orientation of the FrameNet project mentioned above. As we tag all corpus instances of each FEE, we expect to encounter a wider range of phenomena. which Currently, FrameNet only exists for English and is still under development. We will produce a “light version” of a FrameNet for German as a by-product of the annotation, reusing as many as possible of the semantic frame descriptions from the English FrameNet database. Our first results indicate that the frame structure assumed for the description of the English lexicon can be reused for German, with minor changes and extensions. Word sense. The additional value of word sense disambiguation in a corpus is obvious. However, exhaustive word sense annotation is a highly timeconsuming task. Therefore we decided for a selective annotation policy, annotating only the heads of frame elements. GermaNet, the German WordNet version, will be used as a basis for the annotation. request conversation SPKR FEE ADD MSG FEE FEE TOPIC INTLC_1 Figure 3: Frame annotation. Coreference. Similarly, we will selectively annotate coreference. If a lexical head of a frame element is an anaphor, we specify the antecedent to make the meaning of the frame element accessible. 4 Frame Annotation Annotation schema. To give a first impression of frame annotation, we turn to the sentence in Fig. 2: (1) SPD fordert Koalition zu Gespr¨ach ¨uber Reform auf. 
(SPD requests that coalition talk about reform.) Fig. 3 shows the frame annotation associated with (1). Frames are drawn as flat trees. The root node is labelled with the frame name. The edges are labelled with abbreviated FE names, like SPKR for SPEAKER, plus the tag FEE for the frame-evoking element. The terminal nodes of the frame trees are always nodes of the syntactic tree. Cases where a semantic unit (FE or FEE) does not form one syntactic constituent, like fordert . . . auf in the example, are represented by assignment of the same label to several edges. Sentence (1), a newspaper headline, contains at least two FEEs: auffordern and Gespr¨ach. auffordern belongs to the frame REQUEST (see Fig. 1). In our example the SPEAKER is the subject NP SPD, the ADDRESSEE is the direct object NP Koalition, and the MESSAGE is the complex PP zu Gespr¨ach ¨uber Reform. So far, the frame structure follows the syntactic structure, except for that fact that the FEE, as a separable prefix verb, is realized by two syntactic nodes. However, it is not always the case that frame structure parallels syntactic structure. The second FEE Gespr¨ach introduces the frame CONVERSATION. In this frame two (or more) groups talk to one another and no participant is construed as only a SPEAKER or only an ADDRESSEE. In our example the only NP-internal frame element is the TOPIC (“what the message is about”) ¨uber Reform, whereas the INTERLOCUTOR-1 (“the prominent participant in the conversation”) is realized by the direct object of auffordern. As shown in Fig. 3, frames are annotated as trees of depth one. Although it might seem semantically more adequate to admit deeper frame trees, e.g. to allow the MSG edge of the REQUEST frame in Fig. 3 to be the root node of the CONVERSATION tree, as its “real” semantic argument, the representation of frame structure in terms of flat and independent semantic trees seems to be preferable for a number of practical reasons: It makes the annotation process more modular and flexible – this way, no frame annotation relies on previous frame annotation. The closeness to the syntactic structure makes the annotators’ task easier. Finally, it facilitates statistical evaluation by providing small units of semantic information that are locally related to syntax. Difficult cases. Because frame elements may span more than one sentence, like in the case of direct speech, we cannot restrict ourselves to annotation at sentence level. Also, compound nouns require annotation below word level. For example, the word “Gagenforderung” (demand for wages) consists of “-forderung” (demand), which invokes the frame REQUEST, and a MESSAGE element “Gagen-”. Another interesting point is that one word may introduce more than one frame in cases of coordination and ellipsis. An example is shown in (2). In the elliptical clause only one fifth for daughters, the elided bought introduces a C T frame. So we let the bought in the antecedent introduce two frames, one for the antecedent and one for the ellipsis. (2) Ein Viertel aller Spielwaren w¨urden f¨ur S¨ohne erworben, nur ein F¨unftel f¨ur T¨ochter. (One quarter of all toys are bought for sons, only one fifth for daughters.) Annotation process. Frame annotation proceeds one frame-evoking lemma at a time, using subcorpora containing all instances of the lemma with some surrounding context. Since most FEEs are polysemous, there will usually be several frames relevant to a subcorpus. Annotators first select a frame for an instance of the target lemma. 
Then they assign frame elements. At the moment the annotation uses XML tags on bare text. The syntactic structure of the TIGER sentences can be accessed in a separate viewer. An annotation tool is being implemented that will provide a graphical interface for the annotation. It will display the syntactic structure and allow for a graphical manipulation of semantic frame trees, in a similar way as shown in Fig. 3.

Extending FrameNet. Since FrameNet is far from being complete, there are many word senses not yet covered. For example, the verb fordern, which belongs to the REQUEST frame, additionally has the reading challenge, for which the current version of FrameNet does not supply a frame.

5 Evaluation of Annotated Data

Materials. Compared to the pilot study we previously reported (Erk et al., 2003), in which 3 annotators tagged 440 corpus instances of a single frame, resulting in 1,320 annotation instances, we now dispose of a considerably larger body of data. It consists of 703 corpus instances for the two frames shown in Figure 1, making up a total of 4,653 annotation instances. For the frame REQUEST, we obtained 421 instances with 8-fold and 114 with 7-fold annotation. The annotated lemmas comprise auffordern (to request), fordern, verlangen (to demand), zurückfordern (demand back), the noun Forderung (demand), and compound nouns ending with -forderung. For the frame C T we have 30, 40 and 98 instances with 5-, 3-, and 2-fold annotation respectively. The annotated lemmas are kaufen (to buy), erwerben (to acquire), verbrauchen (to consume), and verkaufen (to sell).

Note that the corpora we are evaluating do not constitute a random sample: at the moment, we cover only two frames, and REQUEST seems to be relatively easy to annotate. Also, the annotation results may not be entirely predictive for larger sample sizes: while the annotation guidelines were being developed, we used REQUEST as a "calibration" frame to be annotated by everybody. As a result, in some cases reliability may be too low because detailed guidelines were not available, and in others it may be too high because controversial instances were discussed in project meetings.

Results. The results in this section refer solely to the assignment of fully specified frames and frame elements. Underspecification is discussed at length in Section 6. Due to the limited space in this paper, we only address the question of inter-annotator agreement or annotation reliability, since a reliable annotation is necessary for all further corpus uses.

frames      average    best      worst
REQUEST     96.83%     100%      90.73%
COMM.       97.11%     98.96%    88.71%

elements    average    best      worst
REQUEST     88.86%     95.69%    66.57%
COMM.       74.25%     90.30%    69.33%

Table 1: Inter-annotator agreement on frames (top) and frame elements (below).

Table 1 shows the inter-annotator agreement on frame assignment and on frame element assignment, computed for pairs of annotators. The "average" column shows the total agreement for all annotation instances, while "best" and "worst" show the figures for the (lemma-specific) subcorpora with highest and lowest agreement, respectively. The upper half of the table shows agreement on the assignment of frames to FEEs, for which we performed 14,410 pairwise comparisons, and the lower half shows agreement on assigned frame elements (29,889 pairwise comparisons). Agreement on frame elements is "exact match": both annotators have to tag exactly the same sequence of words.

In sum, we found that annotators agreed very well on frames. Disagreement on frame elements was higher, in the range of 12-25%. Generally, the numbers indicated considerable differences between the subcorpora. To investigate this matter further, we computed the Alpha statistic (Krippendorff, 1980) for our annotation. Like the widely used Kappa, α is a chance-corrected measure of reliability. It is defined as

α = 1 − (observed disagreement) / (expected disagreement)

We chose Alpha over Kappa because it also indicates unreliabilities due to unequal coder preference for categories. With an α value of 1 signifying total agreement and 0 chance agreement, α values above 0.8 are usually interpreted as reliable annotation.

Figure 4 shows single category reliabilities for the assignment of frame elements. The graphs show that not only did target lemmas vary in their difficulty, but that reliability of frame element assignment was also a matter of high variation. Firstly, frames introduced by nouns (Forderung and -forderung) were more difficult to annotate than verbs. Secondly, frame elements could be assigned to three groups: frame elements which were always annotated reliably, those whose reliability was highly dependent on the FEE, and the third group whose members were impossible to annotate reliably (these are not shown in the graphs).
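Before turning to the individual groups in Figure 4, the pairwise agreement percentages and nominal Alpha values reported here can be sketched as follows, assuming the annotations are available as a mapping from annotation instances to the list of labels assigned by the individual annotators (this data format, like the function names, is assumed purely for illustration):

from collections import Counter
from itertools import combinations

def pairwise_agreement(units):
    """Fraction of agreeing annotator pairs; `units` maps each annotation
    instance to the list of labels its annotators assigned."""
    agree = total = 0
    for labels in units.values():
        for a, b in combinations(labels, 2):
            agree += (a == b)
            total += 1
    return agree / total if total else 0.0

def krippendorff_alpha_nominal(units):
    """Chance-corrected reliability (Krippendorff, 1980) for nominal data,
    computed from the usual coincidence matrix."""
    coincidences = Counter()
    for labels in units.values():
        m = len(labels)
        if m < 2:
            continue
        for i, c in enumerate(labels):
            for j, k in enumerate(labels):
                if i != j:
                    coincidences[(c, k)] += 1.0 / (m - 1)
    n_c = Counter()
    for (c, _k), v in coincidences.items():
        n_c[c] += v
    n = sum(n_c.values())
    d_obs = sum(v for (c, k), v in coincidences.items() if c != k) / n
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k) / (n * (n - 1))
    return 1.0 - d_obs / d_exp if d_exp else 1.0

# Usage: krippendorff_alpha_nominal({"inst1": ["SPKR", "SPKR"], "inst2": ["MEDIUM", "SPKR"]})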
Disagreement on frame elements was higher, in the range of 12-25%. Generally, the numbers indicated considerable differences between the subcorpora. To investigate this matter further, we computed the Alpha statistic (Krippendorff, 1980) for our annotation. Like the widely used Kappa, α is a chancecorrected measure of reliability. It is defined as α = 1 −observed disagreement expected disagreement We chose Alpha over Kappa because it also indicates unreliabilities due to unequal coder preference for categories. With an α value of 1 signifying total agreement and 0 chance agreement, α values above 0.8 are usually interpreted as reliable annotation. Figure 4 shows single category reliabilities for the assignment of frame elements. The graphs shows that not only did target lemmas vary in their difficulty, but that reliability of frame element assignment was also a matter of high variation. Firstly, frames introduced by nouns (Forderung and -forderung) were more difficult to annotate than verbs. Secondly, frame elements could be assigned to three groups: frame elements which were always annotated reliably, those whose reliability was highly dependent on the FEE, and the third group whose members were impossible to annotate reliably (these are not shown in the graphs). In the REQUEST frames, SPEAKER, MESSAGE and ADDRESSEE belong to the first group, at least for verbal FEEs. MEDIUM is a member of the second group, and TOPIC was annotated at chance level (α ≈0). In the COMMERCE frame, only BUYER and GOODS always show high reliability. SELLER can only be reliably annotated for the target verkaufen. PURPOSE and REASON fall into the third group. 5.1 Discussion Interpretation of the data. Inter-annotator agreement on the frames shown in Table 1 is very high. However, the lemmas we considered so far were only moderately ambiguous, and we might see lower figures for frame agreement for highly polysemous FEEs like laufen (to run). For frame elements, inter-annotator agreement is not that high. Can we expect improvement? The Prague Treebank reported a disagreement of about 10% for manual thematic role assignment (ˇZabokrtsk´y, 2000). However, in contrast to our study, they also annotated temporal and local modifiers, which are easier to mark than other roles. One factor that may improve frame element agreement in the future is the display of syntactic structure directly in the annotation tool. Annotators were instructed to assign each frame element to a single syntactic constituent whenever possible, but could only access syntactic structure in a separate viewer. We found that in 35% of pairwise frame element disagreements, one annotator assigned a single syntactic constituent and the other did not. Since a total of 95.6% of frame elements were assigned to single constituents, we expect an increase in agreement when a dedicated annotation tool is available. As to the pronounced differences in reliability between frame elements, we found that while most central frame elements like SPEAKER or BUYER were easy to identify, annotators found it harder to agree on less frequent frame elements like MEDIUM, PURPOSE and REASON. The latter two with their 0.6 0.8 1 auffordern fordern verlangen Forderung -forderung alpha value addressee medium message speaker 0.6 0.8 1 erwerben kaufen verkaufen alpha value buyer seller money goods Figure 4: Alpha values for frame elements. Left: REQUEST. Right: COMMERCIAL TRANSACTION. 
particularly low agreement (α < 0.8) contribute towards the low overall inter-annotator agreement of the C T frame. We suspect that annotators saw too few instances of these elements to build up a reliable intuition. However, the elements may also be inherently difficult to distinguish. How can we interpret the differences in frame element agreement across target lemmas, especially between verb and noun targets? While frame elements for verbal targets are usually easy to identify based on syntactic factors, this is not the case for nouns. Figure 3 shows an example: Should SPD be tagged as INTERLOCUTOR-2 in the CONVERSATION frame? This appears to be a question of pragmatics. Here it seems that clearer annotation guidelines would be desirable. FrameNet as a resource for semantic role annotation. Above, we have asked about the suitability of FrameNet for semantic role annotation, and our data allow a first, though tentative, assessment. Concerning the portability of FrameNet to other languages than English, the English frames worked well for the German lemmas we have seen so far. For C T a number of frame elements seem to be missing, but these are not language-specific, like CREDIT (for on commission and in installments). The FrameNet frame database is not yet complete. How often do annotators encounter missing frames? The frame UNKNOWN was assigned in 6.3% of the instances of REQUEST, and in 17.6% of the C T instances. The last figure is due to the overwhelming number of UNKNOWN cases in verbrauchen, for which the main sense we encountered is “to use up a resource”, which FrameNet does not offer. Is the choice of frame always clear? And can frame elements always be assigned unambiguously? Above we have already seen that frame element assignment is problematic for nouns. In the next section we will discuss problematic cases of frame assignment as well as frame element assignment. 6 Vagueness, Ambiguity and Underspecification Annotation Challenges. It is a well-known problem from word sense annotation that it is often impossible to make a safe choice among the set of possible semantic correlates for a linguistic item. In frame annotation, this problem appears on two levels: The choice of a frame for a target is a choice of word sense. The assignment of frame elements to phrases poses a second disambiguation problem. An example of the first problem is the German verb verlangen, which associates with both the frame REQUEST and the frame C T. We found several cases where both readings seem to be equally present, e.g. sentence (3). Sentences (4) and (5) exemplify the second problem. The italicised phrase in (4) may be either a SPEAKER or a MEDIUM and the one in (5) either a MEDIUM or not a frame element at all. In our exhaustive annotation, these problems are much more virulent than in the FrameNet corpus, which consists mostly of prototypical examples. (3) Gleichwohl versuchen offenbar Assekuranzen, [das Gesetz] zu umgehen, indem sie von Nichtdeutschen mehr Geld verlangen. (Nonetheless insurance companies evidently try to circumvent [the law] by asking/demanding more money from non-Germans.) (4) Die nachhaltigste Korrektur der Programmatik fordert ein Antrag. . . (The most fundamental policy correction is requested by a motion. . . ) (5) Der Parteitag billigte ein Wirtschaftskonzept, in dem der Umbau gefordert wird. (The party congress approved of an economic concept in which a change is demanded.) 
Following Kilgarriff and Rosenzweig (2000), we distinguish three cases where the assignment of a single semantic tag is problematic: (1) cases in which, judging from the available context information, several tags are equally possible for an ambiguous utterance; (2) cases in which more than one tag applies at the same time, because the sense distinction is neutralised in the context; and (3) cases in which the distinction between two tags is systematically vague or unclear. In SALSA, we use the concept of underspecification to handle all three cases: annotators may assign underspecified frame and frame element tags. While the cases have different semantic-pragmatic status, we tag all three of them as underspecified. This is in accordance with the general view on underspecification in semantic theory (Pinkal, 1996). Furthermore, Kilgarriff and Rosenzweig (2000) argue that it is impossible to distinguish those cases.

Allowing underspecified tags has several advantages. First, it avoids (sometimes dubious) decisions for a unique tag during annotation. Second, it is useful to know if annotators systematically found it hard to distinguish between two frames or two frame elements. This diagnostic information can be used for improving the annotation scheme (e.g. by removing vague distinctions). Third, underspecified tags may indicate frame relations beyond an inheritance hierarchy, horizontal rather than vertical connections. In (3), the use of underspecification can indicate that the frames REQUEST and C T are used in the same situation, which in turn can serve to infer relations between their respective frame elements.

Evaluating underspecified annotation. In the previous section, we disregarded annotation cases involving underspecification. In order to evaluate underspecified tags, we present a method of computing inter-annotator agreement in the presence of underspecified annotations. Representing frames and frame elements as predicates that each take a sequence of word indices as their argument, a frame annotation can be seen as a pair (CF, CE) of two formulae, describing the frame and the frame elements, respectively. Without underspecification, CF is a single predicate and CE is a conjunction of predicates. For the CONVERSATION frame of sentence (1), CF has the form CONVERSATION(Gespräch), and CE is INTLC-1(Koalition) ∧ TOPIC(über Reform) (we use words instead of word indices here for readability). Underspecification is expressed by conjuncts that are disjunctions instead of single predicates. Table 2 shows the admissible cases. For example, the CE of (4) contains the conjunct SPKR(ein Antrag) ∨ MEDIUM(ein Antrag). Our annotation scheme guarantees that every FE name appears in at most one conjunct of CE. Exact agreement means that every conjunct of annotator A must correspond to a conjunct by annotator B, and vice versa. For partial agreement, it suffices that for each conjunct of A, one disjunct matches a disjunct in a conjunct of B, and conversely.

Table 2: Types of conjuncts (F is a frame name, E a frame element name, and t and s are sequences of word indices; t is the target, i.e. the FEE).
Frame annotation:
  F(t): single frame, F is assigned to t
  (F1(t) ∨ F2(t)): frame disjunction, F1 or F2 is assigned to t
Frame element annotation:
  E(s): single frame element, E is assigned to s
  (E1(s) ∨ E2(s)): frame element disjunction, E1 or E2 is assigned to s
  (E(s) ∨ NOFE(s)): optional element, E or no frame element is assigned to s
  (E(s) ∨ E(s1 s s2)): underspecified length, frame element E is assigned to s or to the longer sequence s1 s s2, which includes s
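The exact and partial agreement criteria just defined can be phrased as a small matching procedure over this representation. The Python sketch below is an illustration only, assuming each CE is given as a list of conjuncts and each conjunct as a set of (frame element, span) disjuncts; it is not the evaluation code used for the results reported below, and the example annotations are invented (loosely modelled on sentence (4)).

    def exact_agreement(ce_a, ce_b):
        # every conjunct of A must be identical to a conjunct of B, and vice versa
        return set(map(frozenset, ce_a)) == set(map(frozenset, ce_b))

    def partial_agreement(ce_a, ce_b):
        # every conjunct of A shares at least one disjunct with some conjunct
        # of B, and conversely
        def covered(src, tgt):
            return all(any(set(c) & set(d) for d in tgt) for c in src)
        return covered(ce_a, ce_b) and covered(ce_b, ce_a)

    # invented example: annotator A commits to SPKR, annotator B uses an
    # underspecified SPKR-or-MEDIUM tag for the same span
    a = [{("SPKR", "ein Antrag")}, {("MSG", "die Korrektur")}]
    b = [{("SPKR", "ein Antrag"), ("MEDIUM", "ein Antrag")},
         {("MSG", "die Korrektur")}]
    print(exact_agreement(a, b), partial_agreement(a, b))   # False True

On this representation, exact agreement requires identical conjunct sets, while partial agreement only requires overlapping disjuncts in both directions, which is what lets an underspecified tag match a committed one.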
Using this measure of partial agreement, we now evaluate underspecified annotation. The most striking result is that annotators made little use of underspecification. Frame underspecification was used in 0.4% of all frames, and frame element underspecification for 0.9% of all frame elements. The frame element MEDIUM, which was rarely assigned outside underspecification, accounted for roughly half of all underspecification in the REQUEST frame. 63% of the frame element underspecifications are cases of optional elements, the third class in the lower half of Table 2. (Partial) agreement on underspecified tags was considerably lower than on non-underspecified tags, both in the case of frames (86%) and in the case of frame elements (54%). This was to be expected, since the cases with underspecified tags are the more difficult and controversial ones. Since underspecified annotation is so rare, overall frame and frame element agreement including underspecified annotation is virtually the same as in Table 1. It is unfortunate that annotators use underspecification only infrequently, since it can indicate interesting cases of relatedness between different frames and frame elements. However, underspecification may well find its main use during the merging of independent annotations of the same corpus. Not only underspecified annotation but also disagreement between annotators can point out vague and ambiguous cases. If, for example, one annotator has assigned SPEAKER and the other MEDIUM in sentence (4), the best course is probably to use an underspecified tag in the merged corpus.

7 Conclusion

We presented the SALSA project, the aim of which is to construct and utilize a large corpus reliably annotated with semantic information. While the SALSA corpus is designed to be utilizable for many purposes, our focus is on lexical semantics, in order to address one of the most serious bottlenecks for language technology today: the lack of large, domain-independent lexica. In this paper we have focused on the annotation with frame semantic roles. We have presented the annotation scheme, and we have evaluated first annotation results, which show encouraging figures for inter-annotator agreement. We have discussed the problem of vagueness and ambiguity of the data and proposed a representation for underspecified tags, which are to be used both for the annotation and the merging of individual annotations. Important next steps are the design of a tool for semi-automatic annotation, and the extraction of selectional preferences from the annotated data.

Acknowledgments. We would like to thank the following people, who helped us with their suggestions and discussions: Sue Atkins, Collin Baker, Ulrike Baldewein, Hans Boas, Daniel Bobbert, Sabine Brants, Paul Buitelaar, Ann Copestake, Christiane Fellbaum, Charles Fillmore, Gerd Fliedner, Silvia Hansen, Ulrich Heid, Katja Markert and Oliver Plaehn. We are especially indebted to Maria Lapata, whose suggestions have contributed to the current shape of the project in an essential way. Any errors are, of course, entirely our own.

References

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of COLING-ACL, Montreal, Canada.

Sabine Brants, Stefanie Dipper, Silvia Hansen, Wolfgang Lezius, and George Smith. 2002. The TIGER treebank.
In Proceedings of the Workshop on Treebanks and Linguistic Theories, Sozopol, Bulgaria.

Katrin Erk, Andrea Kowalski, and Manfred Pinkal. 2003. A corpus resource for lexical semantics. In Proceedings of IWCS5, pages 106–121, Tilburg, The Netherlands.

Charles J. Fillmore. 1968. The case for case. In Bach and Harms, editors, Universals in Linguistic Theory, pages 1–88. Holt, Rinehart, and Winston, New York.

Eva Hajičová. 1998. Prague Dependency Treebank: From Analytic to Tectogrammatical Annotation. In Proceedings of TSD'98, pages 45–50, Brno, Czech Republic.

C. R. Johnson, C. J. Fillmore, M. R. L. Petruck, C. F. Baker, M. Ellsworth, J. Ruppenhofer, and E. J. Wood. 2002. FrameNet: Theory and Practice. http://www.icsi.berkeley.edu/~framenet/book/book.html.

Adam Kilgarriff and Joseph Rosenzweig. 2000. Framework and results for English Senseval. Computers and the Humanities, 34(1-2).

Adam Kilgarriff, editor. 2001. SENSEVAL-2, Toulouse.

Klaus Krippendorff. 1980. Content Analysis. Sage.

M. Marcus, G. Kim, M. A. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. In Proceedings of the ARPA HLT Workshop.

G. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235–244.

Manfred Pinkal. 1996. Vagueness, ambiguity, and underspecification. In Proceedings of SALT'96, pages 185–201.

Wojciech Skut, Brigitte Krenn, Thorsten Brants, and Hans Uszkoreit. 1998. A linguistically interpreted corpus of German newspaper text. In Proceedings of LREC'98, Granada.

Zdeněk Žabokrtský. 2000. Automatic functor assignment in the Prague Dependency Treebank. In Proceedings of TSD'00, Brno, Czech Republic.
2003
68
Probabilistic Text Structuring: Experiments with Sentence Ordering Mirella Lapata Department of Computer Science University of Sheffield Regent Court, 211 Portobello Street Sheffield S1 4DP, UK [email protected] Abstract Ordering information is a critical task for natural language generation applications. In this paper we propose an approach to information ordering that is particularly suited for text-to-text generation. We describe a model that learns constraints on sentence order from a corpus of domainspecific texts and an algorithm that yields the most likely order among several alternatives. We evaluate the automatically generated orderings against authored texts from our corpus and against human subjects that are asked to mimic the model’s task. We also assess the appropriateness of such a model for multidocument summarization. 1 Introduction Structuring a set of facts into a coherent text is a non-trivial task which has received much attention in the area of concept-to-text generation (see Reiter and Dale 2000 for an overview). The structured text is typically assumed to be a tree (i.e., to have a hierarchical structure) whose leaves express the content being communicated and whose nodes specify how this content is grouped via rhetorical or discourse relations (e.g., contrast, sequence, elaboration). For domains with large numbers of facts and rhetorical relations, there can be more than one possible tree representing the intended content. These different trees will be realized as texts with different sentence orders or even paragraph orders and different levels of coherence. Finding the tree that yields the best possible text is effectively a search problem. One way to address it is by narrowing down the search space either exhaustively or heuristically. Marcu (1997) argues that global coherence can be achieved if constraints on local coherence are satisfied. The latter are operationalized as weights on the ordering and adjacency of facts and are derived from a corpus of naturally occurring texts. A constraint satisfaction algorithm is used to find the tree with maximal weights from the space of all possible trees. Mellish et al. (1998) advocate stochastic search as an alternative to exhaustively examining the search space. Rather than requiring a global optimum to be found, they use a genetic algorithm to select a tree that is coherent enough for people to understand (local optimum). The problem of finding an acceptable ordering does not arise solely in concept-to-text generation but also in the emerging field of text-to-text generation (Barzilay, 2003). Examples of applications that require some form of text structuring, are single- and multidocument summarization as well as question answering. Note that these applications do not typically assume rich semantic knowledge organized in tree-like structures or communicative goals as is often the case in concept-to-text generation. Although in single document summarization the position of a sentence in a document can provide cues with respect to its ordering in the summary, this is not the case in multidocument summarization where sentences are selected from different documents and must be somehow ordered so as to produce a coherent summary (Barzilay et al., 2002). Answering a question may also involve the extraction, potentially summarization, and ordering of information across multiple information sources. Barzilay et al. 
(2002) address the problem of information ordering in multidocument summarization and show that naive ordering algorithms such as majority ordering (selects most frequent orders across input documents) and chronological ordering (orders facts according to publication date) do not always yield coherent summaries, although the latter produces good results when the information is event-based. Barzilay et al. further conduct a study where subjects are asked to produce a coherent text from the output of a multidocument summarizer. Their results reveal that although the generated orders differ from subject to subject, topically related sentences always appear together. Based on the human study they propose an algorithm that first identifies topically related groups of sentences and then orders them according to chronological information.

In this paper we introduce an unsupervised probabilistic model for text structuring that learns ordering constraints from a large corpus. The model operates on sentences rather than facts in a knowledge base and is potentially useful for text-to-text generation applications. For example, it can be used to order the sentences obtained from a multidocument summarizer or a question answering system. Sentences are represented by a set of informative features (e.g., a verb and its subject, a noun and its modifier) that can be automatically extracted from the corpus without recourse to manual annotation. The model learns which sequences of features are likely to co-occur and makes predictions concerning preferred orderings. Local coherence is thus operationalized by sentence proximity in the training corpus. Global coherence is obtained by greedily searching through the space of possible orders. As in the case of Mellish et al. (1998) we construct an acceptable ordering rather than the best possible one. We propose an automatic method of evaluating the orders generated by our model by measuring closeness or distance from the gold standard, a collection of orders produced by humans.

The remainder of this paper is organized as follows. Section 2 introduces our model and an algorithm for producing a possible order. Section 3 describes our corpus and the estimation of the model parameters. Our experiments are detailed in Section 4. We conclude with a discussion in Section 5.

2 Learning to Order

Given a collection of texts from a particular domain, our task is to learn constraints on the ordering of their sentences. In the training phase our model will learn these constraints from adjacent sentences represented by a set of informative features. In the testing phase, given a set of unseen sentences, we will rely on our prior experience of how sentences are usually ordered for choosing the most likely ordering.

2.1 The Model

We express the probability of a text made up of sentences S1 ... Sn as shown in (1). According to (1), the task of predicting a sentence is dependent on all of its preceding sentences.

P(T) = P(S1 ... Sn)
     = P(S1) P(S2|S1) P(S3|S1, S2) ... P(Sn|S1 ... Sn−1)
     = ∏_{i=1}^{n} P(Si|S1 ... Si−1)    (1)

We will simplify (1) by assuming that the probability of any given sentence is determined only by its previous sentence:

P(T) = P(S1) P(S2|S1) P(S3|S2) ... P(Sn|Sn−1)
     = ∏_{i=1}^{n} P(Si|Si−1)    (2)

This is a somewhat simplistic attempt at capturing Marcu's (1997) local coherence constraints as well as Barzilay et al.'s (2002) observations about topical relatedness.
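To make the simplified model in (2) concrete, the sketch below scores a candidate ordering by multiplying sentence-to-sentence conditional probabilities, working in log space to avoid underflow. It is a minimal illustration of the decomposition only; the probability table and sentence labels are invented, and the estimation of P(Si|Si−1) from features is described next.

    import math

    def text_log_prob(order, start_prob, cond_prob):
        # order: candidate sequence of sentence ids, e.g. ["S2", "S3", "S1"]
        # start_prob[s]: P(s starts the text); cond_prob[(s, prev)]: P(s | prev)
        logp = math.log(start_prob[order[0]])
        for prev, cur in zip(order, order[1:]):
            logp += math.log(cond_prob[(cur, prev)])
        return logp

    # invented numbers for a three-sentence text
    start = {"S1": 0.2, "S2": 0.5, "S3": 0.3}
    cond = {("S1", "S2"): 0.1, ("S3", "S2"): 0.6, ("S2", "S1"): 0.4,
            ("S3", "S1"): 0.3, ("S1", "S3"): 0.5, ("S2", "S3"): 0.2}
    print(text_log_prob(["S2", "S3", "S1"], start, cond))

Ranking all permutations of a short text by this score is exactly the search problem that the greedy procedure of Section 2.2 approximates.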
While this is clearly a naive view of text coherence, our model has some notion of the types of sentences that typically go together, even though it is agnostic about the specific rhetorical relations that glue sentences into a coherent text. Also note that the simplification in (2) will make the estimation of the probabilities P(Si|Si−1) more reliable in the face of sparse data. Of course estimating P(Si|Si−1) would be impossible if Si and Si−1 were actual sentences. It is unlikely to find the exact same sentence repeated several times in a corpus. What we can find and count is the number of times a given structure or word appears in the corpus. We will therefore estimate P(Si|Si−1) from features that express its structure and content (these features are described in detail in Section 3):

P(Si|Si−1) = P(⟨a_{i,1}, a_{i,2}, ..., a_{i,n}⟩ | ⟨a_{i−1,1}, a_{i−1,2}, ..., a_{i−1,m}⟩)    (3)

where ⟨a_{i,1}, a_{i,2}, ..., a_{i,n}⟩ are features relevant for sentence Si and ⟨a_{i−1,1}, a_{i−1,2}, ..., a_{i−1,m}⟩ for sentence Si−1. We will assume that these features are independent and that P(Si|Si−1) can be estimated from the pairs in the Cartesian product defined over the features expressing sentences Si and Si−1: (a_{i,j}, a_{i−1,k}) ∈ Si × Si−1. Under these assumptions P(Si|Si−1) can be written as follows:

P(Si|Si−1) = P(a_{i,1}|a_{i−1,1}) ... P(a_{i,n}|a_{i−1,m})
           = ∏_{(a_{i,j}, a_{i−1,k}) ∈ Si × Si−1} P(a_{i,j}|a_{i−1,k})    (4)

Assuming that the features are independent again makes parameter estimation easier. The Cartesian product over the features in Si and Si−1 is an attempt to capture inter-sentential dependencies. Since we don't know a priori what the important feature combinations are, we are considering all possible combinations over two sentences. This will admittedly introduce some noise, given that some dependencies will be spurious, but the model can be easily retrained for different domains for which different feature combinations will be important. The probability P(a_{i,j}|a_{i−1,k}) is estimated as:

P(a_{i,j}|a_{i−1,k}) = f(a_{i,j}, a_{i−1,k}) / Σ_{a_{i,j}} f(a_{i,j}, a_{i−1,k})    (5)

where f(a_{i,j}, a_{i−1,k}) is the number of times feature a_{i,j} is preceded by feature a_{i−1,k} in the corpus. The denominator expresses the number of times a_{i−1,k} is attested in the corpus (preceded by any feature). The probabilities P(a_{i,j}|a_{i−1,k}) will be unreliable when the frequency estimates for f(a_{i,j}, a_{i−1,k}) are small, and undefined in cases where the feature combinations are unattested in the corpus. We therefore smooth the observed frequencies using back-off smoothing (Katz, 1987).

S1: a b c d
S2: e f g
S3: h i
Figure 1: Example of probability estimation

To illustrate with an example consider the text in Figure 1, which has three sentences S1, S2, S3, each represented by their respective features denoted by letters. The probability P(S3|S2) will be calculated by taking the product of P(h|e), P(h|f), P(h|g), P(i|e), P(i|f), and P(i|g). To obtain P(h|e), we need f(h,e) and f(e), which can be estimated in Figure 1 by counting the number of edges connecting e and h and the number of edges starting from e, respectively. So, P(h|e) will be 0.16 given that f(h,e) is one and f(e) is six (see the normalization in (5)).

2.2 Determining an Order

Once we have collected the counts for our features we can determine the order for a new text that we haven't encountered before, since some of the features representing its sentences will be familiar. Given a text with N sentences there are N! possible orders.
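Before turning to the search over orders, the counting scheme in (3)-(5) above can be sketched in a few lines of Python. The code below is a minimal illustration only, assuming sentences have already been mapped to feature sets: it collects pair counts from adjacent sentences of a toy corpus and scores P(S_next|S_prev) as the product in (4), with a crude probability floor standing in for the back-off smoothing actually used; the feature strings are invented placeholders in the spirit of the verb and noun features of Section 3.

    from collections import Counter

    def train_pair_counts(texts):
        # texts: list of texts; each text is a list of per-sentence feature sets
        pair = Counter()        # pair[(prev_feature, next_feature)]
        prev_total = Counter()  # occurrences of a feature as the conditioning element
        for sentences in texts:
            for prev_sent, next_sent in zip(sentences, sentences[1:]):
                for b in prev_sent:
                    for a in next_sent:
                        pair[(b, a)] += 1
                        prev_total[b] += 1
        return pair, prev_total

    def cond_sentence_prob(next_feats, prev_feats, pair, prev_total, floor=1e-6):
        # product over the Cartesian product of features, as in (4)
        p = 1.0
        for b in prev_feats:
            for a in next_feats:
                if prev_total[b]:
                    p *= max(pair[(b, a)] / prev_total[b], floor)
                else:
                    p *= floor   # unseen conditioning feature
        return p

    # invented toy corpus: two three-sentence texts with lemmatized verb/noun features
    corpus = [
        [{"say", "shareholder"}, {"name", "represent"}, {"operator", "interest"}],
        [{"say", "company"}, {"name", "change"}, {"approve", "meeting"}],
    ]
    pair, prev_total = train_pair_counts(corpus)
    print(cond_sentence_prob({"name", "represent"}, {"say", "shareholder"}, pair, prev_total))

In the full model these conditionals are smoothed with Katz back-off, and the per-sentence products are normalized by the number of feature pairs, as discussed at the end of Section 2.2.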
The set of orders can be represented as a complete graph, where the set of vertices V is equal to the set of sentences S and each edge u → v has a weight, the probability P(u|v). Cohen et al. (1999) show that the problem of finding an optimal ordering through a directed weighted graph is NP-complete. Fortunately, they propose a simple greedy algorithm that provides an approximate solution which can be easily modified for our task (see also Barzilay et al. 2002). The algorithm starts by assigning each vertex v ∈ V a probability. Recall that in our case vertices are sentences and their probabilities can be calculated by taking the product of the probabilities of their features. The greedy algorithm then picks the node with the highest probability and orders it ahead of the other nodes. The selected node and its incident edges are deleted from the graph. Each remaining node is now assigned the conditional probability of seeing this node given the previously selected node (see (4)). The node which yields the highest conditional probability is selected and ordered ahead. The process is repeated until the graph is empty.

Figure 2: Finding an order for a three sentence text

As an example consider again a three sentence text. We illustrate the search for a path through the graph in Figure 2. First we calculate which of the three sentences S1, S2, and S3 is most likely to start the text (during training we record which sentences appear in the beginning of each text). Assuming that P(S2|START) is the highest, we will order S2 first, and ignore the nodes headed by S1 and S3. We next compare the probabilities P(S1|S2) and P(S3|S2). Since P(S3|S2) is more likely than P(S1|S2), we order S3 after S2 and stop, returning the order S2, S3, and S1. As can be seen in Figure 2, for each vertex we keep track of the most probable edge that ends in that vertex, thus setting the beam search width to one. Note that equation (4) would assign lower and lower probabilities to sentences with large numbers of features. Since we need to compare sentence pairs with varied numbers of features, we will normalize the conditional probabilities P(Si|Si−1) by the number of feature pairs that form the Cartesian product over Si and Si−1.

1. Laidlaw Transportation Ltd. said shareholders will be asked at its Dec. 7 annual meeting to approve a change of name to Laidlaw Inc.
2. The company said its existing name hasn't represented its businesses since the 1984 sale of its trucking operations.
3. Laidlaw is a waste management and school-bus operator, in which Canadian Pacific Ltd. has a 47% voting interest.
Figure 3: A text from the BLLIP corpus

3 Parameter Estimation

The model in Section 2.1 was trained on the BLLIP corpus (30 M words), a collection of texts from the Wall Street Journal (years 1987-89). The corpus contains 98,732 stories. The average story length is 19.2 sentences. 71.30% of the texts in the corpus are less than 50 sentences long. An example of the texts in this newswire corpus is shown in Figure 3. The corpus is distributed in a Treebank-style machine-parsed version which was produced with Charniak's (2000) parser. The parser is a "maximum-entropy inspired" probabilistic generative model.
It achieves 90.1% average precision/recall for sentences with maximum length 40 and 89.5% for sentences with maximum length 100 when trained and tested on the standard sections of the Wall Street Journal Treebank (Marcus et al., 1993). We also obtained a dependency-style version of the corpus using MINIPAR (Lin, 1998) a broad coverage parser for English which employs a manually constructed grammar and a lexicon derived from WordNet with an additional dictionary of proper names (130,000 entries in total). The grammar is represented as a network of 35 nodes (i.e., grammatical categories) and 59 edges (i.e., types of syntactic (dependency) relations). The output of MINIPAR is a dependency graph which represents the dependency relations between words in a sentence (see Table 1 for an example). Lin (1998) evaluated the parser on the SUSANNE corpus (Sampson, 1996), a domain independent corpus of British English, and achieved a recall of 79% and precision of 89% on the dependency relations. From the two different parsed versions of the BLLIP corpus the following features were extracted: Verbs. Investigations into the interpretation of narrative discourse (Asher and Lascarides, 2003) have shown that specific lexical information (e.g., verbs, adjectives) plays an important role in determining the discourse relations between propositions. Although we don’t have an explicit model of rhetorical relations and their effects on sentence ordering, we capture the lexical inter-dependencies between sentences by focusing on verbs and their precedence relationships in the corpus. From the Treebank parses we extracted the verbs contained in each sentence. We obtained two versions of this feature: (a) a lemmatized version where verbs were reduced to their base forms and (b) a non-lemmatized version which preserved tense-related information; more specifically, verbal complexes (e.g., I will have been going) were identified from the parse trees heuristically by devising a set of 30 patterns that search for sequences of modals, auxiliaries and verbs. This is an attempt at capturing temporal coherence by encoding sequences of events and their morphology which indirectly indicates their tense. To give an example consider the text in Figure 3. For the lemmatized version, sentence (1) will be represented by say, will, be, ask, and approve; for the tensed version, the relevant features will be said, will be asked, and to approve. Nouns. Centering Theory (CT, Grosz et al. 1995) is an entity-based theory of local coherence, which claims that certain entities mentioned in an utterance are more central than others and that this property constrains a speaker’s use of certain referring expressions. The principles underlying CT (e.g., continuity, salience) are of interest to concept-to-text generation as they offer an entity-based model of text and sentence planning which is particularly suited for descriptional genres (Kibble and Power, 2000). We operationalize entity-based coherence for text-to-text generation by simply keeping track of the nouns attested in a sentence without however taking personal pronouns into account. This simplification is reasonable if one has text-to-text generation mind. In multidocument summarization for example, sentences are extracted from different documents; the referents of the pronouns attested in these sentences are typically not known and in some cases identical pronouns may refer to different entities. 
So making use of noun-pronoun or pronoun-pronoun co-occurrences will be uninformative or in fact misleading. We extracted nouns from a lemmatized version of the Treebank-style parsed corpus. In cases of noun compounds, only the compound head (i.e., rightmost noun) was taken into account. A small set of rules was used to identify organizations (e.g., United Laboratories Inc.), person names (e.g., Jose Y. Campos), and locations (e.g., New England) spanning more than one word. These were grouped together and were also given the general categories person, organization, and location. The model backs off to these categories when unknown person names, locations, and organizations are encountered. Dates, years, months and numbers were substituted by the categories date, year, month, and number. In sentence (1) (see Figure 3) we identify the nouns Laidlaw Transportation Ltd., shareholder, Dec 7, meeting, change, name and Laidlaw Inc. In sentence (2) the relevant nouns are company, name, business, 1984, sale, and operation.

Dependencies. Note that the noun and verb features do not capture the structure of the sentences to be ordered. This is important for our domain, as texts seem to be rather formulaic and similar syntactic structures are often used (e.g., direct and indirect speech, restrictive relative clauses, predicative structures). In this domain companies typically say things, and texts often begin with a statement of what a company or an individual has said (see sentence (1) in Figure 3). Furthermore, companies and individuals are described with certain attributes (persons can be presidents or governors, companies are bankrupt or manufacturers, etc.) that can give clues for inferring coherence. The dependencies were obtained from the output of MINIPAR. Some of the dependencies for sentence (2) from Figure 3 are shown in Table 1. The dependencies capture structural as well as lexical information. They are represented as triples, consisting of a head (leftmost element, e.g., say, name), a modifier (rightmost element, e.g., company, its) and a relation (e.g., subject (V:subj:N), object (V:obj:N), modifier (N:mod:A)). For efficiency reasons we focused on triples whose dependency relations (e.g., V:subj:N) were attested in the corpus with frequency larger than one per million. We further looked at how individual types of relations contribute to the ordering task. More specifically we experimented with dependencies relating to verbs (49 types), nouns (52 types), verbs and nouns (101 types) (see Table 1 for examples). We also ran a version of our model with all types of relations, including adjectives, adverbs and prepositions (147 types in total).

Table 1: Dependencies for sentence (2) in Figure 3
  Verb: say V:subj:N company; represent V:subj:N name; represent V:have:have have; represent V:obj:N business
  Noun: name N:gen:N its; name N:mod:A existing; business N:gen:N its; business N:mod:Prep since; company N:det:Det the

Table 2: Example of rankings for a 10 sentence text
           A   B   C   D   E   F   G   H   I   J
  Model 1  1   2   3   4   5   6   7   8   9   10
  Model 2  2   1   5   3   4   6   7   9   8   10
  Model 3  10  2   3   4   5   6   7   8   9   1

4 Experiments

In this section we describe our experiments with the model and the features introduced in the previous sections. We first evaluate the model by attempting to reproduce the structure of unseen texts from the BLLIP corpus, i.e., the corpus the model is trained on. We next obtain an upper bound for the task by conducting a sentence ordering experiment with humans and comparing the model against the human data.
Finally, we assess whether this model can be used for multi-document summarization using data from Barzilay et al. (2002). But before we outline the details of our experiments we discuss our choice of metric for comparing different orders.

4.1 Evaluation Metric

Our task is to produce an ordering for the sentences of a given text. We can think of the sentences as objects for which a ranking must be produced. Table 2 gives an example of a text containing 10 sentences (A–J) and the orders (i.e., rankings) produced by three hypothetical models. A number of metrics can be used to measure the distance between two rankings such as Spearman's correlation coefficient for ranked data, Cayley distance, or Kendall's τ (see Lebanon and Lafferty 2002 for details). Kendall's τ is based on the number of inversions in the rankings and is defined in (6):

τ = 1 − 2(number of inversions) / (N(N−1)/2)    (6)

where N is the number of objects (i.e., sentences) being ranked and inversions are the number of interchanges of consecutive elements necessary to arrange them in their natural order. If we think in terms of permutations, then the number of inversions can be interpreted as the minimum number of adjacent transpositions needed to bring one order to the other. In Table 2 the number of inversions can be calculated by counting the number of intersections of the lines. The metric ranges from −1 (inverse ranks) to 1 (identical ranks). The τ for Model 1 and Model 2 in Table 2 is .822. Kendall's τ seems particularly appropriate for the tasks considered in this paper. The metric is sensitive to the fact that some sentences may always be ordered next to each other even though their absolute orders might differ. It also penalizes inverse rankings. Comparison between Model 1 and Model 3 would give a τ of 0.244 even though the orders between the two models are identical modulo the beginning and the end. This seems appropriate given that flipping the introduction in a document with the conclusions seriously disrupts coherence.

4.2 Experiment 1: Ordering Newswire Texts

The model from Section 2.1 was trained on the BLLIP corpus and tested on 20 held-out randomly selected unseen texts (average length 15.3). We also used 20 randomly chosen texts (disjoint from the test data) for development purposes (average length 16.2). All our results are reported on the test set. The input to the greedy algorithm (see Section 2.2) was a text with a randomized sentence ordering. The ordered output was compared against the original authored text using τ. Table 3 gives the average τ (T) for all 20 test texts when the following features are used: lemmatized verbs (VL), tensed verbs (VT), lemmatized nouns (NL), lemmatized verbs and nouns (VLNL), tensed verbs and lemmatized nouns (VTNL), verb-related dependencies (VD), noun-related dependencies (ND), verb and noun dependencies (VDND), and all available dependencies (AD). For comparison we also report the naive baseline of generating a random order (BR). As can be seen from Table 3 the best performing features are NL and VDND. This is not surprising given that NL encapsulates notions of entity-based coherence, which is relatively important for our domain. A lot of texts are about a particular entity (company or individual) and their properties. The feature VDND subsumes several other features and does expectedly better: it captures entity-based coherence, the interrelations among verbs, the structure of sentences and also preserves information about argument structure (who is doing what to whom).
The distance between the orders produced by the model and the original texts increases when all types of dependencies are taken into account. The feature space becomes too big, there are too many spurious feature pairs, and the model can't distinguish informative from non-informative features. We carried out a one-way Analysis of Variance (ANOVA) to examine the effect of different feature types. The ANOVA revealed a reliable effect of feature type (F(9,171) = 3.31; p < 0.01). We performed Post-hoc Tukey tests to further examine whether there are any significant differences among the different features and between our model and the baseline. We found out that NL, VTNL, VD, and VDND are significantly better than BR (α = 0.01), whereas NL and VDND are not significantly different from each other. However, they are significantly better than all other features (α = 0.05).

Table 3: Comparison between original BLLIP texts and model generated variants
  Feature  T    StdDev  Min  Max
  BR       .35  .09     .17  .47
  VL       .44  .24     .17  .93
  VT       .46  .21     .17  .80
  NL       .54  .16     .18  .76
  VLNL     .46  .12     .18  .61
  VTNL     .49  .17     .21  .86
  VD       .51  .17     .10  .83
  ND       .45  .17     .10  .67
  VDND     .57  .12     .62  .83
  AD       .48  .17     .10  .83

4.3 Experiment 2: Human Evaluation

In this experiment we compare our model's performance against human judges. Twelve texts were randomly selected from the 20 texts in our test data. The texts were presented to subjects with the order of their sentences scrambled. Participants were asked to reorder the sentences so as to produce a coherent text. Each participant saw three texts randomly chosen from the pool of 12 texts. A random order of sentences was generated for every text the participants saw. Sentences were presented verbatim, pronouns and connectives were retained in order to make ordering feasible. Notice that this information is absent from the features the model takes into account. The study was conducted remotely over the Internet using a variant of Barzilay et al.'s (2002) software. Subjects first saw a set of instructions that explained the task, and had to fill in a short questionnaire including basic demographic information. The experiment was completed by 137 volunteers (approximately 33 per text), all native speakers of English. Subjects were recruited via postings to local Usenet newsgroups.

Table 4: Comparison between orderings produced by humans and the model on BLLIP texts
  Feature  T    StdDev  Min  Max
  VL       .45  .16     .10  .90
  VT       .46  .18     .10  .90
  NL       .51  .14     .10  .90
  VLNL     .44  .14     .18  .61
  VTNL     .49  .18     .21  .86
  VD       .47  .14     .10  .93
  ND       .46  .15     .10  .86
  VDND     .55  .15     .10  .90
  AD       .48  .16     .10  .83
  HH       .58  .08     .26  .75

Table 5: Comparison between orderings produced by humans and the model on multidocument summaries
  Features  T    StdDev  Min  Max
  BR        .43  .13     .19  .97
  NL        .48  .16     .21  .86
  VDND      .56  .13     .32  .86
  HH        .60  .17     −1   .98

Table 4 reports pairwise τ averaged over 12 texts for all participants (HH) and the average τ between the model and each of the subjects for all features used in Experiment 1. The average distance in the orderings produced by our subjects is .58. The distance between the humans and the best features is .51 for NL and .55 for VDND. An ANOVA yielded a significant effect of feature type (F(9,99) = 5.213; p < 0.01). Post-hoc Tukey tests revealed that VL, VT, VD, ND, AD, VLNL, and VTNL perform significantly worse than HH (α = 0.01), whereas NL and VDND are not significantly different from HH (α = 0.01). This is in agreement with Experiment 1 and points to the importance of lexical and structural information for the ordering task.
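The τ metric in (6), used throughout Tables 3-5, is easy to reproduce. The short Python sketch below computes it by counting discordant pairs; on the hypothetical rankings of Table 2 it returns roughly .82 for Model 1 versus Model 2 and .24 for Model 1 versus Model 3, matching the values quoted in Section 4.1. It is meant only as an illustration of the metric, not as the evaluation code used in the experiments.

    from itertools import combinations

    def kendall_tau(rank_a, rank_b):
        # rank_a, rank_b: dicts mapping each object to its rank in an ordering
        items = list(rank_a)
        n = len(items)
        discordant = sum(
            1 for x, y in combinations(items, 2)
            if (rank_a[x] - rank_a[y]) * (rank_b[x] - rank_b[y]) < 0
        )
        return 1 - 2 * discordant / (n * (n - 1) / 2)

    sentences = "ABCDEFGHIJ"
    model1 = dict(zip(sentences, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]))
    model2 = dict(zip(sentences, [2, 1, 5, 3, 4, 6, 7, 9, 8, 10]))
    model3 = dict(zip(sentences, [10, 2, 3, 4, 5, 6, 7, 8, 9, 1]))
    print(kendall_tau(model1, model2))   # 0.822...
    print(kendall_tau(model1, model3))   # 0.244...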
4.4 Experiment 3: Summarization

Barzilay et al. (2002) collected a corpus of multiple orderings in order to study what makes an order cohesive. Their goal was to improve the ordering strategy of MULTIGEN (McKeown et al., 1999), a multidocument summarization system that operates on news articles describing the same event. MULTIGEN identifies text units that convey similar information across documents and clusters them into themes. Each theme is next syntactically analysed into predicate argument structures; the structures that are repeated often enough are chosen to be included into the summary. A language generation system outputs a sentence (per theme) from the selected predicate argument structures. Barzilay et al. (2002) collected ten sets of articles each consisting of two to three articles reporting the same event and simulated MULTIGEN by manually selecting the sentences to be included in the final summary. This way they ensured that orderings were not influenced by mistakes their system could have made. Explicit references and connectives were removed from the sentences so as not to reveal clues about the sentence ordering. Ten subjects provided orders for each summary which had an average length of 8.8. We simulated the participants' task by using the model from Section 2.1 to produce an order for each candidate summary (the summaries as well as the human data are available from http://www.cs.columbia.edu/~noemie/ordering/). We then compared the differences in the orderings generated by the model and participants using the best performing features from Experiment 2 (i.e., NL and VDND). Note that the model was trained on the BLLIP corpus, whereas the sentences to be ordered were taken from news articles describing the same event. Not only were the news articles unseen but also their syntactic structure was unfamiliar to the model. The results are shown in Table 5; again, average pairwise τ is reported. We also give the naive baseline of choosing a random order (BR). The average distance in the orderings produced by Barzilay et al.'s (2002) participants is .60. The distance between the humans and NL is .48 whereas the average distance between VDND and the humans is .56. An ANOVA yielded a significant effect of feature type (F(3,27) = 15.25; p < 0.01). Post-hoc Tukey tests showed that VDND was significantly better than BR, but NL wasn't. The difference between VDND and HH was not significant. Although NL performed adequately in Experiments 1 and 2, it failed to outperform the baseline in the summarization task. This may be due to the fact that entity-based coherence is not as important as temporal coherence for the news article summaries. Recall that the summaries describe events across documents. This information is captured more adequately by VDND and not by NL, which only keeps a record of the entities in the sentence.

5 Discussion

In this paper we proposed a data-intensive approach to text coherence where constraints on sentence ordering are learned from a corpus of domain-specific texts. We experimented with different feature encodings and showed that lexical and syntactic information is important for the ordering task. Our results indicate that the model can successfully generate orders for texts taken from the corpus on which it is trained. The model also compares favorably with human performance on a single- and multiple document ordering task.
Our model operates on the surface level rather than the logical form and is therefore suitable for text-to-text generation systems; it acquires ordering constraints automatically, and can be easily ported to different domains and text genres. The model is particularly relevant for multidocument summarization since it could provide an alternative to chronological ordering, especially for documents where publication date information is unavailable or uninformative (e.g., all documents have the same date). We proposed Kendall's τ as an automated method for evaluating the generated orders. There are a number of issues that must be addressed in future work. So far our evaluation metric measures order similarities or dissimilarities. This enables us to assess the importance of particular feature combinations automatically and to evaluate whether the model and the search algorithm generate potentially acceptable orders without having to run comprehension experiments each time. Such experiments, however, are crucial for determining how coherent the generated texts are and whether they convey the same semantic content as the originally authored texts. For multidocument summarization, comparisons between our model and alternative ordering strategies are important if we want to pursue this approach further. Several improvements can take place with respect to the model. An obvious question is whether a trigram model performs better than the model presented here. The greedy algorithm implements a search procedure with a beam of width one. In the future we plan to experiment with larger widths (e.g., two or three) and also take into account features that express semantic similarities across documents either by relying on WordNet or on automatic clustering methods.

Acknowledgments

The author was supported by EPSRC grant number R40036. We are grateful to Regina Barzilay and Noemie Elhadad for making available their software and for providing valuable comments on this work. Thanks also to Stephen Clark, Nikiforos Karamanis, Frank Keller, Alex Lascarides, Katja Markert, and Miles Osborne for helpful comments and suggestions.

References

Asher, Nicholas and Alex Lascarides. 2003. Logics of Conversation. Cambridge University Press.

Barzilay, Regina. 2003. Information Fusion for MultiDocument Summarization: Paraphrasing and Generation. Ph.D. thesis, Columbia University.

Barzilay, Regina, Noemie Elhadad, and Kathleen R. McKeown. 2002. Inferring strategies for sentence ordering in multidocument news summarization. Journal of Artificial Intelligence Research 17:35–55.

Charniak, Eugene. 2000. A maximum-entropy-inspired parser. In Proceedings of the 1st Conference of the North American Chapter of the Association for Computational Linguistics. Seattle, WA, pages 132–139.

Cohen, William W., Robert E. Schapire, and Yoram Singer. 1999. Learning to order things. Journal of Artificial Intelligence Research 10:243–270.

Grosz, Barbara, Aravind Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics 21(2):203–225.

Katz, Slava M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics Speech and Signal Processing 33(3):400–401.

Kibble, Rodger and Richard Power. 2000. An integrated framework for text planning and pronominalisation. In Proceedings of the 1st International Conference on Natural Language Generation. Mitzpe Ramon, Israel, pages 77–84.
Lebanon, Guy and John Lafferty. 2002. Combining rankings using conditional probability models on permutations. In C. Sammut and A. Hoffmann, editors, Proceedings of the 19th International Conference on Machine Learning. Morgan Kaufmann Publishers, San Francisco, CA.

Lin, Dekang. 1998. Dependency-based evaluation of MINIPAR. In Proceedings of the LREC Workshop on the Evaluation of Parsing Systems. Granada, pages 48–56.

Marcu, Daniel. 1997. From local to global coherence: A bottom-up approach to text planning. In Proceedings of the 14th National Conference on Artificial Intelligence. Providence, Rhode Island, pages 629–635.

Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics 19(2):313–330.

McKeown, Kathleen R., Judith L. Klavans, Vasileios Hatzivassiloglou, Regina Barzilay, and Eleazar Eskin. 1999. Towards multidocument summarization by reformulation: Progress and prospects. In Proceedings of the 16th National Conference on Artificial Intelligence. Orlando, FL, pages 453–459.

Mellish, Chris, Alistair Knott, Jon Oberlander, and Mick O'Donnell. 1998. Experiments using stochastic search for text planning. In Proceedings of the 9th International Workshop on Natural Language Generation. Ontario, Canada, pages 98–107.

Reiter, Ehud and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, Cambridge.

Sampson, Geoffrey. 1996. English for the Computer. Oxford University Press.
2003
69
Anna Korhonen and Judita Preiss
University of Cambridge, Computer Laboratory, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK

Abstract
We investigate the change in performance of automatic subcategorization acquisition when a word sense disambiguation (WSD) system is employed to guide the acquisition process. As a subgoal, this involves creating a probabilistic WSD system, which we evaluate on the SENSEVAL-2 English all-words task data. We carry out an evaluation of the enriched subcategorization acquisition system using 29 'difficult' English verbs which shows that WSD helps to improve the acquisition performance.
K›¢¡*£"œŸ¤N¦S£"œ¥°NžÄ ;3¨z›  K›”®"¶B¡G¹›”®"›©^¨z°N¡*›¢ž¦S£Ó®¦Kžºz° ² ÆS¡´z¶1*›¢©s£@£*°g£"¨z› ©s°Nž¡*£*®^¦KœŸž?£~£"¨B¦S£Á£"¨z›”ÂÛ°Z©”©”´z® œŸž—£"¨z›¾Z› ² 凰K®Áº¦S£"¦ œŸž:¦S£jÃ¥›¢¦K¡"£µ£Þ¹°§¶®"°N¦Kº B ›” mœŸžm±Þ¡*£PÂZÃ¥›Û¡*›¢ž¡*›¢¡¢Ä ;3¨z› š7°K®ºRQ›”£~¡*›¢ž¡*›¢¡o°K¯£"¨›¢¡*› K›”®"¶B¡¹ ›”®"› ² ¦S¬B¬)›¢º|£*° B ›” mœŸžj¡*›¢ž¡*›¢¡¢Æm´B¡"œŸžz¤‰¦K¡¦¡*£"¦S®£"œŸžz¤‰¬)°Nœ«ž?£ £"¨z› ² ¦S¬z± ¬BœŸž¤N¡ ¬®"°t ZœŸº›¢º­¶Z @Á°K®¨z°Nžz›¢ž‘¼JÐ:959NÐZÀ ¦Kžº é°Nžm± žœ¥› ¿o°K®"®q¼£"¨›—Ø%BGå3¾§º¦S£"¦S¶»¦K¡*›KڋÀ^Ä* ;3¨z°N¡"›jš7°K®^ºm± Q›”£'¡*›¢ž¡*›¢¡~ž°K£Á©s°˜ K›”®"›¢º§œŸžÛ£"¨z›¢¡*› ² ¦S¬B¬BœŸžz¤N¡o¹›”®"› ² ¦S¬¬›¢º £*° B ›” mœŸžy¡*›¢žB¡*›¢¡‰¼›¢œŸ£"¨z›”®°K®œŸ¤NœŸž¦KðNžz›¢¡°K® ¿o°K®"®¢ÚÐ.—¦KººBœ¥£"œ¥°Nž¦Kà B ›” mœŸžm±Þ¡*£PÂZߛ€¡"›¢ž¡*›¢¡^À ² ¦KžZ´m± ¦KßåÂKÄâ¾Z›¢ž¡*›¢¡  K›”® ß°˜¹ œ«ž ¯X®"›¢¸Z´z›¢ž©s ¦KžºÇ£"¨°N¡*› ¹¨œ«©¨‰©s°N´ߺ'ž°K£Ì¶› ² ¦S¬¬›¢º'£*°g¦Kž? ›sàZ£"¦Kžc£ B ›” mœŸžm± ¡*£PÂZÃ¥›¡*›¢ž¡"›¢¡g¹ ›”®"›Ã¥›”¯£°N´£o°K¯ ©s°Nž¡"œŸºz›”®^¦S£"œ¥°NžÄ ;3¨z› ² ¦Qàzœ ² ´ ² žc´ ² ¶›”®7°K¯ B@›” ZœŸž ¡"›¢ž¡*›¢¡7©s°Nž¡"œ«ºz›”®"›¢º ¬›”® K›”®¶Û¹‡¦K¡ zÄ ;3¨z›¢¡*›‰£ÞÂZ¬BœŸ©”¦Kßß ² ¦S¬—£*°€¡"›” K›”®¦Kà š7°K®ºRQ›”£3¡*›¢ž¡"›¢¡”Æz¦K¡ B ›” mœŸž¦K¡¡"´ ² ›¢¡ ² °K®›g©s°N¦S®^¡*›s± ¤K®¦KœŸž›¢º ¡*›¢ž¡*›'ºBœŸ¡*£"œŸž©s£"œŸ°Nž¡£"¨¦Kž š7°K®ºRQ›”£”Ä + ëìtøùQô"÷Þô^úQô^îJøÓíïî ôd]]ôí!"‹ôú "ïøZ‹òäña-,/.0.0132546487676739;:8<>=0?/@6A9 :B<DCE9F0C0:G4 HJIGKBL6LG=MF/4MN/FMO0IDA0P/Q6L/R S/=0AB,395S/@6A S|°ZºB´Ã¥› ¿o›¢¡"©s®œ¥¬£"œŸ°Nž )*  " š7› ´¡*›µ£"¨›jÅo©”¸c´BœŸÃ¥›sà7£"¦S¤K¤K›”®|¼ßÑ/Ã¥¹°K®"£"¨cÂKÆ .¢×K× cÀ^Æ/¹¨BœŸ©¨‘¬®"°mº´©s›¢¡'¦q¬®"°K¶»¦S¶BœŸÃŸœ¥£P ºœŸ¡*£*®^œ¥¶B´z£"œ¥°Nž€°Nž7åhBÌÅ@š­¾c±?‡£"¦S¤N¡”Äoš7›‰©s° ² ¶BœŸž›'£"¨z›¢¡*›‰£*°j¬®"°mº´©s› ¦ºœŸ¡*£*®^œ¥¶B´z£"œ¥°Nž °Nžyžz°N´ž Æz K›”®"¶ ƦKº *›¢©s£"œ¥ K›'¦Kžº|¦Kºz K›”®"¶ Ä " "1' + ;3¨z›e¯®"›¢¸Z´z›¢ž©sœŸžz¯°K® ² ¦S£"œ¥°NžœŸ¡Ó£"¦SÔK›¢ž¯®"° ² š7°K®ºRQ›”£”Æm¦Kžº³©s°Nžc K›”®"£*›¢ºœ«ž?£*°Á¦~¬B®"°K¶z± ¦S¶BœŸÃŸœŸ£ÞºBœŸ¡*£*®œ¥¶»´z£"œ¥°NžÄ   ) Ó¦S®"£Ó°K¯»¡*¬›”›¢©¨y¼G°c¾À@°K¯)¡"´z®"®"°N´BžºœŸžz¤e¹°K®º¡e¼°Nžz›e¹°K®º‰¶›”¯X°K®›KÆK£Þ¹°~¹ °K®ºB¡Ì¶›”¯°K®"›KÆ ›”£"©SċÀ¬®°Zº´B©s›¢¡e¦¬®"°K¶B¦S¶»œŸÃŸœ¥£PµºœŸ¡*£*®^œ¥¶B´z£"œ¥°Nžµ°Nž|¡*›¢žB¡*›¢¡”Ä  ;3¨z›3¤K®^¦ ²³² ¦S£"œŸ©”¦KÃB®"°NÃ¥›‰¼ß¡´z¶1*›¢©s£”Æcºœ¥®›¢©s£°K¶1*›¢©s£ ¦KžºœŸžºœ¥®›¢©s£°K¶1 ›¢©s£^À/œŸ¡£"¦SÔK›¢ž€œŸžc£*° ¦K©”©s°N´žc£/¯°K®/žz°N´ž¡¢Æ?¦KÃ¥°Nžz¤~¹œŸ£"¨£"¨z›©s°K®"®"›¢¡"¬)°NžBºœŸžz¤o K›”®"¶µ£*°'¬®"°mº´©s›e¦~¬®"°K¶»¦S¶BœŸÃŸœ¥£P ºœŸ¡*£*®^œ¥¶B´z£"œ¥°Nž Ä  "% ž¯X°K® ² ¦S£"œ¥°Nž<¦S¶°N´z££"¨z›€¹°K®º§¶›¢œŸžz¤q¦7¨›¢¦Kº<°K¯o¦¼ßžz°N´žÆ K›”®"¶@Æ›”£"©SċÀ ¬»¨z®¦K¡*›€œŸ¡ £"¦SÔK›¢ž|œŸžc£*°µ¦K©”©s°N´žc££*°µ¬®"°mº´©s›~¦¬®"°K¶»¦S¶BœŸÃŸœ¥£PµºœŸ¡"£*®œ¥¶B´z£"œŸ°NžÄ ) & !0 G°c¾ £*®^œ¥¤K®¦ ² ¡‰¬®"°mº´©s›j¦Û¬®"°K¶B¦S¶Bœ«ÃŸœ¥£ÞÂ7ºœ«¡*£*®œ¥¶B´£"œ¥°Nž7°Nžª¡*›¢ž¡*›¢¡‰°K¯o©s›”®"£"¦KœŸž§¹ °K®^º¡”Ä š7›'´¡*›¢ºy£"¨z›EQ~¾1M¡*°K¯£Þ¹‡¦S®"›'¯°K®3£"¨œ«¡ ² °mº´Ã¥›KÄ  ëìtíïîÌíïî@ôd]]ôí%"ôú "ïø Z•òäña ,6.6.01325404B7676739;C 9;:B<GL 9F0C0:G4 H .01 F0C6F0OGA0F/4BLGAB139;,/.8<>S ;G¦S¶BÃ¥› . 
6/®"°K¶B¦S¶Bœ«ÃŸœŸ¡*£"œŸ© S|°Zº´BÃ¥›¢¡ ;3¨›§£*›¢¡"£  K›”®"¶B¡ ¦KžBºC£"¨z›¢œ¥® ¡*›¢ž¡"›¢¡7¦S®"›Ö¡"¨z°t¹ž œŸž ;̦S¶»Ã¥›—ÐZÄ ;¨z›Û¡"›¢ž¡*›¢¡”ÆoœŸžºœŸ©”¦S£*›¢º:¶ZÂ֞c´ ² ¶›”® ©s°mºz›¢¡¯®"° ² B@›” Zœ«žÚKžº³¿o°K®"®˜Úï¡ ©”ߦK¡¡"œ Ù ©”¦S£"œ¥°NžB¡¦S®"› ߜŸ¡"£*›¢º³œŸž£"¨z›°K®ºz›”® °K¯£"¨z›¢œ¥®¯®"›¢¸Z´z›¢ž©sœŸž¾Z› ² å°K®¢Æ ¡*£"¦S®"£"œ«žz¤Á¯®"° ² £"¨z›3¬B®"›¢ºz° ² œŸžB¦Kž?£¡*›¢žB¡*›'¼ ² ¦S®"ÔK›¢ºj¦K¡ .”¡*£^À^Ä U (,  !$5(& ' (,    " -#" š7›³£*°Z°Kԗ¦ ¡"¦ ² ¬BÃ¥›°K¯‡Ð:9 ² œŸÃŸÃ«œ¥°Nžy¹°K®º¡g°K¯£"¨z› éhQg呩s°K®"¬B´B¡@¦Kžº ›sàZ£*®^¦K©s£*›¢º‰¦KëÃ?¡*›¢žc£*›¢ž©s›¢¡Ó©s°Nž?£"¦Kœ«žm± œŸžz¤q¦Kž?—°K¯e£"¨z›³£*›¢¡"£  K›”®"¶»¡”Äjů£*›”® £"¨z›µ›sàZ£*®¦K©s£"œŸ°Nž ¬®"°m©s›¢¡"¡”Æ@¹ ›®"›”£"¦Kœ«žz›¢º7°Nž ¦˜ K›”®¦S¤K›8.G95959q¡*›¢ž?£*›¢žB©s›¢¡ ¬›”®' K›”®"¶@Ä8;3¨z›¢¡"›¡*›¢žc£*›¢ž©s›¢¡‰¹ ›”®›jºœ«¡"¦ ² ¶»œ¥¤N´¦S£*›¢º ´¡"œ«žz¤7£"¨z›|¬B®"°K¶B¦S¶BœŸÃ«œŸ¡*£"œŸ©jš­¾m¿ ¡*Âm¡*£*› ² ºz›¢¡"©s®œ¥¶›¢º œŸž­¾Z›¢©s£"œ¥°Nž ª¦KžBº:£"¨›¢ž ¬®"°m©s›¢¡"¡*›¢º ¶ZÂÖ£"¨z› ² °mºm± œ Ù ›¢ºª¡"´¶©”¦S£*›”¤K°K®^œ¥·¢¦S£"œ¥°Nž¡*Âm¡*£*› ² °N´z£"뜟žz›¢º§œŸžM¾Z›¢©]± £"œ¥°NžjÐZĐÐZÄP;3¨z›3ߦS£*£*›”®‡©s°Nž¡*£*®´B©s£"¡/¦Kžº³´B¡*›¢¡¯°K®/›¢¦K©^¨ £*›¢¡*£ K›”®"¶€¦KžœŸžBºœ¥ mœŸº´¦KÃZ¡*›”£/°K¯@¶B¦K©"Ôc±ä°Sã|›¢¡*£"œ ² ¦S£*›¢¡”Æ ¶B´œ«Ã¥£g¶c—£"¦SÔmœŸžz¤yœŸžc£*°|¦K©”©s°N´žc£ £"¨›µºœÒã›”®"›¢žc£µ¼JÐt±cÀ ¡*›¢ž¡"›¢¡'°K¯‡£*›¢¡*£‰ K›”®"¶B¡ ¦Kžº‘£"¨›³¯X®›¢¸c´z›¢žB©sÂ7°K¯£"¨›¢¡*› ¡*›¢ž¡"›¢¡jœ«žM£"¨z›—©s°K®"¬B´¡jº¦S£"¦Ç¼ß¦K¡€ºz›”£*›¢©s£*›¢º ¶ZÂM£"¨z› š­¾m¿ ¡*Âm¡*£*› ² À^Ä W ëìtíïî@÷Þôú "ïøùtíïîJö "ô û˜î îJø $QîJø î@ôî ?ø ]¢í%$îJõQú5 "ôîJîJø îÌð ìtøòäø îJõ Þìµôòäøoôe]^ô^í!"ô^ú'"ïø ( ø&'c  \),^»òäø &sô"òÞù'"ïø îJîeñZ@ð ìQøP÷äìQøò‡ð)ø ô^îJîJõ'aeø*ùô $QôòJòäñ*ðyñ^òú˜òäñsô^ù/ "ô^îJî ( ø& -1cd))í!$ñ]õ˜òjaeø÷äìQñ¢ù ZÒñò îJõQú5 *ô"÷äø &]ñòäí3O*ô÷äíïñ$gô NM”õQíïîJíè÷äíïñ$i-*Ìñ^÷äøGô"ïîJñ ÷äìQô÷ñ$QøÓô^ù ` ùtíè÷äíïñ$Qô"îJø $tîJø ðôîõQîJø*ùgð ìtí% Þì ùtñ”ø î $tñ^÷ôöQöKø*ô"òí!$%?ø ]¢í!$ ñ^òUÓñòJòd Q cegG;3¨›‰®›¢¡"´Ã¥£"¡g¹›”®"››” S¦Kß´¦S£*›¢º ¦S¤N¦KœŸžB¡*£Á¦ ² ¦KžZ´¦Kà ¦Kž¦KÃ¥Âm¡"œŸ¡°K¯/£"¨›©s°K®¬B´¡~º¦S£"¦mÄE;¨œŸ¡¹¦K¡~°K¶B£"¦KœŸžz›¢º ¶Z­¦Kž¦KÃ¥Âm¡"œŸžz¤Õ©SÄ :959Õ°m©”©”´z®"®"›¢ž©s›¢¡q¯°K®q›¢¦K©^¨C£*›¢¡*£  K›”®"¶7œŸžq°N´z®~éhQgå½£*›¢¡*£~º¦S£"¦mÄt±PÐ .¤K°Nߺ—¡*£"¦KžBº¦S®º ¾凿ӡj¹›”®"›7¯X°N´BžºÕ¯X°K® ›¢¦K©^¨Ç K›”®"¶ÿ¼ .Ö¾åeæG¡j¬›”®  K›”®"¶y°Nžq¦¢ K›”®¦S¤K›tÀ^Ä š7›©”¦Kß©”´ë¦S£*›¢ºq£PÂc¬›‰¬®"›¢©”œŸ¡œ¥°Nž<¼£"¨z›¬›”®©s›¢žc£"¦S¤K› °K¯ ¾凿%£ÞÂZ¬)›¢¡£"¨¦S£j£"¨z›—¡*ÂZ¡"£*› ² ¬®°K¬)°N¡"›¢¡³¹¨œ«©¨ ¦S®"›'©s°K®®"›¢©s£^À^ÆB£PÂc¬›Á®"›¢©”¦KëÃ/¼£"¨z›Á¬›”®©s›¢žc£"¦S¤K›'°K¯¾凿 £ÞÂZ¬›¢¡3œŸžj£"¨z›~¤K°Nߺ ¡"£"¦Kžº¦S®º€£"¨¦S£e£"¨›g¡"ÂZ¡*£*› ² ¬®"°S± ¬°N¡*›¢¡^À¦Kžº|æ@± ² ›¢¦K¡"´z®"›. H; Ð O /C4 3 7? O 4 ! /4"S3# 7?%$&4'! 
¼ .˜À š7›¦Kß¡*°µ©s° ² ¬B¦S®"›¢ºq£"¨z› ¡"œ ² œŸÃŸ¦S®œŸ£Þµ¶›”£P¹ ›”›¢žq£"¨z› ¦K©”¸Z´œ¥®"›¢º ´ž ٠ߣ*›”®"›¢º UdU‘¦Kžº ¤K°Nߺ ¡*£"¦Kžº¦S®º2¾凿 ºœŸ¡"£*®œ¥¶B´z£"œŸ°Nž¡Ì´B¡"œŸžz¤g S¦S®œ¥°N´¡ ² ›¢¦K¡"´z®"›¢¡°K¯ºœ«¡*£*®œ¥¶B´z± £"œ¥°Nž¦KÃz¡œ ² œŸÃŸ¦S®^œ¥£ÞÂ+ £"¨z›¾Z¬›¢¦S® ² ¦Kž‰®¦KžÔ ©s°K®"®"›¢Ã«¦S£"œ¥°Nž ¼ ?3å3À^Æ @'´ßå¶B¦K©Ô?±YB@›¢œ¥¶BÃ¥›”®ºœ«¡*£"¦Kž©s›<¼ @ BÓÀ^Æ K›¢žB¡*›¢žm± ¾m¨¦KžBžz°Nž ºœ¥ K›”®"¤K›¢ž©s›M¼Z¾À^Æ3©s®"°N¡"¡€›¢žc£*®"°K¬Z ¼äåeÑÀ^Æ ¡*ÔK›”¹ªºœ¥ K›”®"¤K›¢žB©s›g¼ä¾m¿ÁÀ^ÆS¦Kžº'œ«ž?£*›”®¡"›¢©s£"œ¥°Nžj¼ "¾À^Ä ;3¨z› ºz›”£"¦KœŸÃ«¡Á°K¯3£"¨z›¢¡"› ² ›¢¦K¡´z®"›¢¡‰¦Kžº £"¨z›¢œŸ®‰¦S¬¬»ÃŸœŸ©”¦S£"œ¥°Nž £*° ¡´z¶©”¦S£*›”¤K°K®œ¥·¢¦S£"œ¥°Nž¦K©”¸c´œ«¡"œ¥£"œ¥°Nž©”¦Kž¶›e¯X°N´žBº³œŸž @~°K®¨°Nžz›¢ž ¦Kžº @~® ² °Nß°˜¹¡*Ômœ¼JÐ:959NÐcÀ^Ä æÓœŸž¦KëåÂKÆt¹›®"›¢©s°K®ºz›¢º‰£"¨z›£*°K£"¦KÃzžc´ ² ¶›”®G°K¯)¾凿ӡ ² œŸ¡¡"œŸžz¤gœŸž£"¨z›ºBœŸ¡*£*®œ¥¶»´z£"œ¥°Nž¡”ÆQœJÄè›KÄN£"¨z›e£PÂc¬›3°K¯)¯ß¦Kß¡*› WTW Ìñ‡÷äìtòäø îJìtñ"ùoðôîÓôöQö "ïíïø*ùo÷äñ‡òäø aeñe]søÓ÷äìtø $Qñ^íîßû   _mî Z•òäñaª÷äìQøù˜íîß÷Jòäíïútõt÷äíïñ$tî ¢ø $QîJø î »øPòäú c îß÷  $Qù "÷äì ÷äì   \ c G-%c    Q\Y c Q c      c  \ Q cdg     gY             -!c      \ c-!c     \\-!c gg   ! Qg c -%c "   " \Gc-!c \\-!c "  $# %&   -!c RQ  -%c ' gY    (*)  \     -!c +  " c  \     Q-%c ,   QRQ cd # & - Qc Y cd\-!c  "     Q Q .  cd\ \\-!c Q  '  . \-!c   -!c -%c /0  c   Q Q\ 1&*)2     gg 1&  " c      &    g \c  . \ Q Q\ \c Q ceg  -)3   Q cdg    . c Y -!c c-!c   4 cc-!c gg cd 56  g-!c \  -!c \ 56 (  RQ  \ 57 \ g-!c -%c ;̦S¶»Ã¥›'Ð2 ;̛¢¡*£3 K›”®"¶B¡¦Kžº £"¨z›¢œ¥®3¡"›¢ž¡*›¢¡ žz›”¤N¦S£"œ¥ K›¢¡¹¨œŸ©^¨€ºœŸº€žz°K£e›” K›¢žy°m©”©”´z®3œŸž€£"¨›Á´ž Ù ÃÒ± £*›”®"›¢º€ºœŸ¡"£*®œ¥¶B´z£"œŸ°Nž¡”Ä;3¨BœŸ¡¹‡¦K¡£*°‰œŸžc K›¢¡*£"œ¥¤N¦S£*›~¨z°t¹ ¹›¢ÃŸÃ/¦ ² ›”£"¨z°mº‘ºz›¢¦Kß¡'¹œ¥£"¨ ¡"¬B¦S®¡*›jº¦S£"¦mÆœJÄè›KĄ̈z°t¹ ¦K©”©”´z®¦S£*› £"¨z›~¶B¦K©Ô?±ä°S㠛¢¡*£"œ ² ¦S£*›¢¡¦S®"›KÄ æz°K®³©s° ² ¬B¦S®œŸ¡*°Nž ƹ ›€¦Kß¡*°—®"›”¬°K®"£*›¢º<®"›¢¡"´ߣ"¡'¯°K® £"¨z›Ç¶B¦K¡"›¢ÃŸœŸžz› ¡*Âm¡*£*› ² ºz›¢¡"©s®^œ¥¶›¢º œŸž1¾Z›¢©s£"œŸ°Nž ÐZÄ . ¹¨œ«©¨<¶B¦K©"Ôm¡ ±ä°Sã:£*°—£"¨z›j¬®"›¢º° ² œŸž¦Kžc£'¡*›¢ž¡"›KÆ/¦Kžº ¯°K®G¦Kžz°K£"¨›”®G K›”®^¡"œ¥°Nž‰°K¯z£"¨œŸ¡G¡*Âm¡*£*› ² ¹¨œŸ©^¨'¦K¡"¡"´ ² ›¢¡ žz°~¡*›¢ž¡"›3¦S£¦KßüߜJÄè›KÄNžz°g¶B¦K©"Ôc±ä°S㠛¢¡*£"œ ² ¦S£*›¢¡/¦S®"›e› ² ± ¬BÃ¥°tÂK›¢ºy¦Kžºyž°³¡ ² °Z°K£"¨œŸžz¤³œŸ¡eº°Nžz›tÀ^Ä (,     "% G $ -/ ;G¦S¶BÃ¥› ¡"¨z°t¹¡/¦˜ K›”®¦S¤K›~®"›¢¡"´BÃ¥£"¡¯°K®/£"¨z›gÐK×Á K›”®¶B¡ ¹œ¥£"¨£"¨›e£Þ¹°' K›”®¡"œŸ°Nž¡Ó°K¯»£"¨›3¶B¦K¡*›¢ÃŸœ«žz›¡"ÂZ¡*£*› ² ¦Kžº ¯°K®3£"¨z› ² °mºœ Ù ›¢ºj¡"ÂZ¡*£*› ² ¹¨œŸ©^¨ › ² ¬BÃ¥°tÂZ¡3šÇ¾m¿‰Ä š7›Û¡*›”›y£"¨¦S££"¨z›y¬›”®"¯°K® ² ¦Kž©s›yœ ² ¬®"°˜ K›¢¡³¹œ¥£"¨ £"¨z›žZ´ ² ¶›”®—°K¯¡"›¢ž¡*›¢¡7©s°Nž¡"œ«ºz›”®"›¢ºÄC;3¨›§š­¾m¿ Âmœ¥›¢ÃŸº¡ ZÄ Û¶›”£*£*›”®³æ@± ² ›¢¦K¡"´z®"›€£"¨¦Kžª£"¨z›€¬®›¢ºz° ² œÒ± ž¦Kžc£¡*›¢žB¡*›KƇ¹¨œŸ©^¨MœŸž£"´z®^žMÂmœ¥›¢ÃŸº¡ ZÄ%4‘¶›”£*£*›”®æÌ± ² ›¢¦K¡"´®"›Á£"¨¦Kž‘ؐžz°µ¡*›¢žB¡*›KÚ¥Ä ;3¨z›'œ ² ¬®"°t K› ² ›¢ž?£o©”¦Kž ¶›/°K¶B¡"›”®" K›¢º‰°Nž ¦Kßà ² ›¢¦K¡"´®"›¢¡¼£"¨› °NžßÂ~›sàm©s›”¬£"œŸ°Nž¡ 8oø÷äìtñ¢ù 8oø*ô^îJõ˜òäø î Ìñ ˜ø $tîJø zòäø*ùtñaeí!$Sô$”÷ R CU òäø íïîJíñ$ ( Th) g g g   Ìø *ô"!" ( Th) c Y  GG _6`baeø*ô^îJõ˜òäø  Y \<Q  \  Q\  Q\g Q c ý  c<Q Q Q \  9G Q-%c Q Q-!cNQ Q Q þ Gg G RQ XT Qg QY Q Q g ü$tîJø ø $ ' _mî cdg\ cd  ;G¦S¶BÃ¥› 2 /Å K›”®¦S¤K›‰®"›¢¡"´Ã¥£"¡‡¯°K®ÐK× K›”®¶B¡ ¦S®"›¬®"›¢©”œŸ¡"œŸ°Nž ¦Kžº ?3åoÆG¹¨œ«©¨ ¦S®"›j¡"ߜ¥¤N¨c£"å—¹°K®¡*› ¯°K®£"¨z› ¬®"›¢ºz° ² œ«ž¦Kž?£3¡"›¢ž¡*›'£"¨B¦Kž7ؐžz°j¡*›¢ž¡*›KڋÀ^Æ)¶B´z£ ¬B¦S®"£"œ«©”´ߦS®å—°Nž§£"¨z°N¡*›€¹¨BœŸ©¨<›” Q¦Kë´¦S£*›€£"¨z› ©”¦S¬B¦Q± ¶BœŸÃ«œ¥£Þª°K¯'£"¨›—¡"ÂZ¡*£*› ² £*°Mºz›¢¦KÃo¹œŸ£"¨:¡*¬»¦S®¡*›—º¦S£"¦mÄ æz®"° ² £"¨z›j£*°K£"¦KðK¯E. 
0Dq¤K°Nߺ§¡*£"¦Kžº¦S®º¾凿ӡ‰´žm± ¡*›”›¢žyœ«ž€£"¨z›'´ž¡ ² °c°K£"¨›¢º Ã¥›sàmœ«©s°NžÆ .G9i0¦S®"›'´Bž¡*›”›¢ž ¦S¯£*›”®g´B¡"œŸžz¤³£"¨›'¬®"›¢º° ² œŸž¦Kžc£¡*›¢ž¡"› ² ›”£"¨z°mºÆ¦Kžº °Nžå€ÐKЮ"› ² ¦KœŸž ´ž¡*›”›¢ž ¦S¯£*›”®š­¾m¿ œŸ¡‡› ² ¬BÃ¥°˜ÂK›¢º Ä ;3¨›3›sã)›¢©s£°K¯š­¾m¿­œŸ¡/¬B¦S®"£"œŸ©”´ë¦S®å ©”ß›¢¦S® °Nž£"¨z› ² °K®"› ¡"›¢ž¡"œ¥£"œ¥ K› ² ›¢¦K¡"´®"›¢¡ °K¯ ºBœŸ¡*£*®œ¥¶»´z£"œ¥°Nž¦Ká"œ ² ± œŸÃŸ¦S®^œ¥£Þ ¹¨BœŸ©¨2©s°Nž¡"œ«ºz›”®§´Bž ٠å£*›”®"›¢ºâ¼ßžz°NœŸ¡"ÂzÀ§¾凿 ºœŸ¡"£*®œ¥¶B´z£"œŸ°Nž¡µ¦Kžº ¼ß´žߜ¥ÔK›—¬®›¢©”œŸ¡"œ¥°NžÇ¦Kžº ®›¢©”¦KßÃXÀ ›” S¦Kß´¦S£*›€£"¨z›€¦K©s£"´¦KïX®"›¢¸Z´z›¢ž©”œŸ›¢¡=t®¦KžzÔm¡'°K¯g¾凿ӡ”Ä "¾­œŸžBºœŸ©”¦S£*›¢¡y£"¨B¦S£q£"¨z›”®›§œŸ¡Û¦ÕߦS®"¤K›<œ«ž?£*›”®¡"›¢©s£"œ¥°Nž ¶›”£Þ¹›”›¢ž£"¨z› ¦K©”¸Z´œ¥®"›¢ºª¦Kžº§¤K°Nߺ¡*£"¦KžBº¦S®ºM¾凿ӡ ¹¨z›¢žÁšÇ¾m¿œ«¡´¡*›¢ºµ¼ 9mĐ×10cÆN¦K¡@°K¬¬°N¡*›¢º~£*°#9mÄ%4:9¹œ¥£"¨ £"¨z› ¬®›¢ºz° ² œŸž¦Kžc£¡*›¢ž¡*›tÀ^Ä ;¨z›yœ ² ¬®°˜ K› ² ›¢žc£°Nž ?3å œŸ¡~¡ ² ¦Kë囔®¼ 9mÄ!9DcÀ^ÆÓºz› ² °NžB¡*£*®¦S£"œŸžz¤€£"¨¦S£~š­¾m¿ œ ² ¬B®"°˜ K›¢¡e£"¨›Á®¦KžzÔmœŸžz¤°K¯¾凿ӡ3¡"ߜŸ¤N¨?£"Ã¥ÂKÄ æz®° ² £"¨z›—›¢ž?£*®°K¬cÂc±ä¶B¦K¡*›¢º­¡œ ² œŸÃŸ¦S®^œ¥£Þ ² ›¢¦K¡"´®"›¢¡ ¼ @ B/ÆÓå‡Ñ ¦Kžº c¾À^Æ@ B:œ ² ¬®"°t K›¢¡'£"¨z› ² °N¡*£ ¹œ¥£"¨ š­¾m¿ ¼ 9mÄ 10g¯®"° ² £"¨z›e¬®"›¢º° ² œŸž¦Kžc£¦Kžº9mÄ .g¯®"° ² ؐžz°'¡"›¢ž¡*›KڋÀ^Ä6c¾ÆN¹¨œŸ©^¨³œŸ¡©s°NžB¡"œŸºz›”®"›¢º³£"¨z› ² °N¡*£ ®"°S± ¶B´¡"£°K¯ £"¨z›¢¡*› ² ›¢¦K¡"´z®›¢¡”Æ ¡"¨z°t¹¡g¡ ² ¦Kß囔®g¶B´£ožz›” c± ›”®"£"¨z›¢ÃŸ›¢¡"¡3žz°K£"œŸ©s›¢¦S¶»Ã¥›Áœ ² ¬®°˜ K› ² ›¢žc£”Ä ;G¦S¶BÃ¥› ߜŸ¡"£"¡@æÌ± ² ›¢¦K¡"´z®›¦Kžº c¾Á®"›¢¡"´ߣ"¡ ¯°K®Ì›¢¦K©^¨ °K¯ £"¨z›~œŸžBºœ¥ mœŸº´¦KÃZ£*›¢¡*£‡ K›”®"¶B¡¢Äš7›g¡"›”›o£"¨B¦S£”ÆZ¤K›¢ž›”®*± ¦KßåÂKÆ@šÇ¾m¿ ¶›¢žz› Ù £"¡g£"¨› ² °N¡*£~£"¨z°N¡*›µ K›”®"¶B¡g¹¨œ«©¨ ¦S®"›3¨œŸ¤N¨Ã¥ÂÁ¬°NÃ¥Âm¡*› ² °N´¡G¹œŸ£"¨ t± ¡*›¢ž¡*›¢¡¼›KÄè¤zÄ;: L?C5H < = F>6M ?<  J >MA@B>2 J CD< HdF#C:H <FE6C«À€°K®y K›”®"¶B¡ ¹¨z°N¡*›  S¦S®œ¥°N´¡Û¡*›¢ž¡*›¢¡—ºœ¥ã)›”®Û¡"´z¶B¡*£"¦Kžc£"œŸ¦Kßß œ«ž½£*›”® ² ¡q°K¯ ¡"´z¶©”¦S£*›”¤K°K®œŸ·¢¦S£"œ¥°NžÇ¼›KÄè¤zÄ = F:M = L 8:L< = F:M  >M7>LG< LGIIH :HNC = LG<@1H C J JzÀ^Ä æz°K®:›sàm¦ ² ¬BÃ¥›KÆq¦ÿ©”Ã¥›¢¦S®Öœ ² ¬B®"°˜ K› ² ›¢žc£ÖœŸ¡Ö¡*›”›¢ž ¹œ¥£"¨ ² ¦Kž?ÂC°K¯€£"¨z›M K›”®"¶»¡ ¹¨z°N¡"›°Nžz›Ö¡*›¢ž¡"›ÖœŸžm±  K°NÃ¥ K›¢¡ ² ¦KœŸžå Q  =   ¾凿ӡy¼›KÄè¤zÄLK L   G "' _6`baeø*ôîJõtòäø 9 Bøòäú òäø*ù1R CU zòäø*ùiR U    Q Q  Q Q Q QY Q Qg  &  g \e' Q -!ce Q -!cNQ    \ \ Q Q Q Q    * \<Q Q \<Q Q Q -!cNQ Q Q     Y -!c \ Q -!cc Q -!cNQ    &  \<Q Q \<Q Q Q c Q -!ce\   %  Y  \   Q Q Q Q   ! \    ' Q QY Q Q "   " c\ c\ Q Q Q Q "  $# ! g\ g-!c Q Q Q Q ' \e' c\ Q Q Q QY  (*)    \   Q Q Q Q +  " Y Q Y Q Q -!ce\ Q -!c  ,    g  g Q -!ceg Q -!cd # & ( \\ \e' Q Qg Q Q\  "  c\ c\ Q Q\ Q Q .  \<Q Q \<Q Q Q -!c  Q -!cd '  . Y -!c Y -!c Q QY Q QY /0  g g Q Q Q Q 1*)2  \   \  Q Q -!cd Q -!cNQ 1  "   Q Q Q Q &    ge'-!c Q -!cc Q Q  .  g RQ Q Q -!cd Q -!ce  -)3  Q Q   Q -!cd Q -!ce\   . c\  'Q Q -!cd Q -!c   L \e' \e' Q Q Q Q 56  c g Q -!cNQ Q Q 56 (  Y \ Y \ Q -!cd Q -!ceg 57% \   RQ  Q -!cd Q -!ce ;̦S¶Bߛ  /æÌ± ² ›¢¦K¡"´z®"› ¦Kžº>c¾€¯X°K®3£*›¢¡"£3 K›”®"¶B¡ ?C L 1FF:HJFC%C:M  LG< C:M  C L8LGM YLHeLK ?C L = C%C5I :LH F D JL = HeL,>J"À¦KžºÖ¦Kžz°K£"¨z›”®µ°Nž›|œŸžc K°NÃ¥ K›¢¡µ¡*›¢žc£*›¢žc£"œŸ¦Kà ¾凿ӡM¼›KÄè¤zÄ[F'LJ C:M71F:ML 3   ?C%C. 
?C  J E6C J 1F:M L M < C:M+ bJEF  MLEJ À^Ä ¿g´z› £*° ºœ«¦S£"¨z›¢¡"œŸ¡¦KÃ¥£*›”®žB¦S£"œ¥°Nž¡”Æ/¦KžM°Z©”©”´z®®"›¢ž©s› °K¯°Nžz›µ¾å‡æÕœŸ¡gߜ¥ÔK›¢Ã¥Ây£*° ¤NœŸ K›®œŸ¡*›£*°y¦Kžz°K£"¨z›”®˜Æ ®"›s± ߦS£*›¢ºq¾B凿Ä;3¨Z´¡¾凿ӡe£*›¢žºy£*°µ°m©”©”´z®œŸž º¦S£"¦¦K¡ Øï¯ß¦ ² œŸÃŸœ¥›¢¡¢Ú¥Ä¿o›”£*›¢©s£"œ¥°Nž<°K¯e¦  K›”®"¶‘¡"›¢ž¡*›µ©”¦Kž7£"¨›”®"›s± ¯°K®"›g®"›¢¡"´Ã¥£œŸžjºz›”£*›¢©s£"œ¥°Nž€°K¯Ì¦ ¹¨z°Nߛ¯ß¦ ² œŸÃ¥Â°K¯Ìžz›”¹ ¼¤K°Nߺy¡"£"¦Kžº¦S®º»Àe¾B凿ӡ”Ä ç ž›  K›”®"¶ ¡"¨°˜¹¡ ¹°K®¡*›2¬›”®"¯X°K® ² ¦Kž©s› ¹¨z›¢ž š­¾m¿œŸ¡´¡*›¢º' JGLL QÄC¾m´z®¬®œŸ¡"œ«žz¤NÃ¥ÂKÆ£"¨œ«¡³ K›”®"¶ œŸ¡ ¨œ¥¤N¨Bå¬)°NßÂZ¡*› ² °N´¡ ¦KžBºœ¥£"¡‡¡*›¢žB¡*›¢¡‡ºœÒã›”®¡"´¶B¡*£"¦Kžm± £"œŸ¦KßßÂqœŸž7£*›”® ² ¡Á°K¯e¡"´z¶©”¦S£*›”¤K°K®œŸ·¢¦S£"œ¥°NžÄ ž7£"¨›”°K®"ÂKÆ œ¥£~œŸ¡o¬°N¡"¡"œ¥¶»Ã¥›'£"¨¦S£ÁœŸ¯ ¡*›¢ž¡"›¢¡~ºœÒã›”®g¦ ß°K£gœ«žq£*›”® ² ¡ °K¯‡¡"´z¶©”¦S£*›”¤K°K®œ¥·¢¦S£"œ¥°Nž‘¦Kžº—°Nžz›°K¯/£"¨› ² œŸ¡~©”Ã¥›¢¦S®ß ¬®"›¢º° ² œŸž¦S£"œŸž¤œŸž‰£"¨z›eº¦S£"¦mÆc£"¨z›¢ž£"¨z›eºz›”£*›¢©s£"œ¥°Nž°K¯ ¦Kžc³°K¯£"¨z›o°K£"¨z›”®¡"›¢ž¡*›¢¡ ² ¦¢Âµ®"›¢¡"´Ã¥£ œŸžµžz°Nœ«¡*›KÄ ç ´z® ®"›¢¡"´BÃ¥£"¡¡"¨z°˜¹'Æ ¨z°t¹ ›” K›”®¢Æ‡£"¨¦S££"¨BœŸ¡œŸ¡žz°K£³´¡"´¦Këå £"¨z›'©”¦K¡"›KÄ ;3¨›e K›”®"¶B¡¹¨BœŸ©¨³º°Ážz°K£/¡"¨z°t¹ ¼ß©”Ã¥›¢¦S®]Àœ ² ¬®°˜ K›s± ² ›¢žc£~¹œ¥£"¨Ûš­¾m¿ ¼›KÄè¤zÄ = C6FF:JGLG< = F:I J F:JGLG<  M+1> = LG< E6C. = CSÀ7¦S®"›:žz°K£ ¦K¡‘¨œŸ¤N¨å­¬°NÃ¥Âm¡*› ² °N´¡M¼ßœ«žC°N´z® ©s°N¦S®¡*›¤K®¦KœŸžz›¢ºµ¤K°Nߺµ¡"£"¦Kžº¦S®º»À^Æc¦KÃ¥£"¨z°N´z¤N¨¡*° ² ›°K¯ £"¨z›¢œ¥®¡*›¢ž¡*›¢¡³º°‘ºœÒã›”®µ¡"´z¶»¡*£"¦Kž?£"œ«¦KßåÂ<œ«žª£*›”® ² ¡³°K¯ ¡"´z¶©”¦S£*›”¤K°K®œŸ·¢¦S£"œ¥°NžÄ P£ÓœŸ¡Ì¬)°N¡¡"œ¥¶BÃ¥›£"¨B¦S£G£"¨›¢¡*›  K›”®¶B¡ °m©”©”´z®"®"›¢º œŸž~°N´z®Gº¦S£"¦ ² °N¡*£"Ã¥Â~œŸž~£"¨z›¢œŸ®@¬®"›¢º° ² œŸž¦S£ ± œŸžz¤Á¡"›¢ž¡*›¦Kžºµ£"¨z›”®"›”¯°K®"›3š­¾m¿ ² ¦Kºz›ߜ¥£*£"Ã¥›'¼°K®‡žz°cÀ ºœÒã›”®"›¢ž©s›KÄ ;¨œŸ¡~œŸ¡~ºœŸáµ©”´Ã¥£g£*°y›” S¦Kß´¦S£*›³¹œ¥£"¨z°N´z£ ¡*›¢ž¡"›ÁºœŸ¡"¦ ² ¶Bœ¥¤N´¦S£*›¢º ºB¦S£"¦mÄ  ;@6/™EPF/OSHä;Ì6 ç ´z®®"›¢¡"´BÃ¥£"¡/¡"¨°˜¹›¢º£"¨B¦S£¦¡*£"¦S£*›s±ä°K¯X±ä£"¨z›s±Þ¦S®"£3š­¾m¿ ¡*Âm¡*£*› ² ©”¦Kž œ ² ¬®"°˜ K›~£"¨›g¦K©”©”´®¦K©s€°K¯Ó¾凿§¦K©”¸Z´œÒ± ¡"œ¥£"œŸ°Nž¯°K®3ºœ¥á©”´Ã¥£/ K›”®¶B¡”Ä  ž?£*›”®"›¢¡"£"œŸžz¤NÃ¥ÂKÆm£"¨z›”Âj¦Kß¡*° ¡"¨z°t¹ ›¢º£"¨B¦S£ÓœŸ£GœŸ¡Gžz°K£Ó°NžÃ¥Â'£"¨z› iL @1HdLLG°K¯B¬°NÃ¥Âm¡*› ²  ¹¨œ«©¨Mºz›”£*›”® ² œŸžz›¢¡µ£"¨z›|žz›”›¢ºÖ°K¯~šÇ¾m¿ ¼®"›¢œŸ¡¡›”£ ¦KÃJÄ¥ÆzÐ:959NÐNÀ ¶B´z£/¦Kë¡*°‰¨°˜¹ ² ´©^¨µ£"¨›o¡*›¢žB¡*›¢¡ ºBœÒã)›”®œŸž £*›”® ² ¡3°K¯Ó¡´z¶©”¦S£*›”¤K°K®œ¥·¢¦S£"œ¥°NžÄ æ´®"£"¨z›”®o®"›¢¡*›¢¦S®©^¨Ûœ«¡¹‡¦S®"®¦Kžc£*›¢º|£*°€œ ² ¬B®"°˜ K›‰£"¨z› ®"›¢¡"´BÃ¥£"¡µ¯ß´z®"£"¨›”®¢Ä­š7› œŸžc£*›¢žºM£*°œŸžc K›¢¡*£"œ¥¤N¦S£*›—¶›”£ ± £*›”®'¹‡¦¢Âm¡Á°K¯eœŸžc£*›”¤K®¦S£"œŸž¤y£"¨z›³¦K©”¸Z´œ¥®"›¢º ¡"›¢ž¡*›q¼¯®"›s± ¸Z´z›¢ž©sÂÀgœŸžz¯°K® ² ¦S£"œ¥°Nž7œ«ž?£*°y£"¨›j¾凿 ¡*Âm¡*£*› ² ÆÓ¦Kžº ©s°Nžc£"œŸžc´›~®"› Ù žœŸž¤ °N´z® ² ›”£"¨z°Zºy¯X°K®3¡"´¶©”¦S£*›”¤K°K®^œ¥·¢¦Q± £"œ¥°Nž§¦K©”¸c´BœŸ¡"œ¥£"œ¥°Nž Ä8;3œ ² ›µ¹œŸÃ«Ã¦Kß¡*°Û¶›œŸžc K›¢¡*£*›¢ºªœŸž ¦K´z£*° ² ¦S£"œ«©”¦KßåÂq¦K©”¸Z´œ¥®^œŸžz¤€¦ ߦS®"¤K›£*®¦KœŸžœ«žz¤j©s°K®¬B´¡ ¯°K®Á£"¨z›³¬®°K¶B¦S¶BœŸÃŸœ«¡*£"œŸ©Áš­¾m¿ ¡*Âm¡*£*› ² ¼›KÄè¤zÄe¼TSqœŸ¨¦KÃÒ± ©s›¢¦¦KžBº Sy°Nߺ°˜ S¦KžÆ6.¢×K×K×NÀ*À^Æ?¹¨œŸ©^¨~¡"¨z°N´ߺgœŸžB©s®"›¢¦K¡*› £"¨z›'¡"ÂZ¡*£*› ² Úï¡e¬›”®"¯X°K® ² ¦Kž©s›KÄ 57™6/;ÞABD @A3'ÕAB6ÌIKO š7›~¹°N´ߺjߜ¥ÔK›o£*°‰£"¨B¦KžzÔ ;̛¢º é ®^œŸ¡"©s°Z›¯°K®e¨œŸ¡¨z›¢Ã¥¬ œŸž£"¨z››¢¦S®Ã¥Â¡*£"¦S¤K›¢¡/°K¯£"¨œ«¡Ó¹ °K®"Ô)Ä ç ´z®£"¨B¦KžzÔZ¡/¦Kß¡*° ¤K°Á£*° ?¦Kº¦ SqœŸ¨¦Kß©s›¢¦Á¯X°K®/¨›”®›¢ž©s°N´z®¦S¤K› ² ›¢ž?£‡¦Kžº /´z S¦Kà @~®" ² °NÃ¥°t¹¡*Ômœ¯X°K®¨œŸ¡‡£*›¢©¨BžœŸ©”¦Kà ¨z›¢ÃŸ¬ Ä  A A=NAB6 ™mABO ! "$#%'&(#*),+.-/ "$021$345%!7698!:<;=?>@#*"$% A %4B=0C34),1EDF"G) #IH$ "$# AJA #*)! 
#*!%LK "M=34% 1M1M02) =HM0 A 021M0N)  H$O%P"Q#*RRS#*"T3'5+0N)U1MV=1WH$%'RX*DYHMO%LZ@[I\5]*^S_I\a`Pbdc'e f bd[I\g_Ihji[Wkmln[*\ fpo ^q [IhM_*hjiSr\5]*stbCuwv=mlJ[*^qgx f _ f bd[*\_Is Znby\<]x=bCu f bdc4uQz69{ |d}<~4 €698‚€I}<ƒ -Y/ Y"M0C1M3'5%U#*) +„/,†…#"M"$ A2A ‡6 88<;=‰ˆ =HM!RS#IHM0C3U%'B5Š H$"$#!3jHM02)‹DŒ1w 39#IHM%9!"M02Ž9#*HM02)DF"M!R3'"$K,!"$# ’‘p) “h$[9c oQoQ” by\<]*u•[Wk—–GlZ˜–™—Z@“›šœ9zK #%91T{<*žI5{ž!{ -,/  "$021$345%P#) +a/ …#"M"$ A2A Ÿ€*ƒ!ƒ!€=J  1wH#3934 "Q#IH$% 1wH$#*HM0C1WH$0239# A #))*HQ#IH$0N!)*Dn%9)%'"Q# A H$%4B5H9T‘p)‡“h$[9c oQoQ” e by\5]IuS[Wk f v oh ” \ f o hj\_ f bd[*\_*sTln[*\9k o h o \c o [*\‡Z_*\ e ]x _'] oo u'[Ix=hMc o uŸ_I\ ” r‚_*s x _ f bd[I\,z*K #%91J6'}!8!8‚g6 *ƒ}  /,G…#*"$"M ANA zL- Ÿ"$0C1M3'<%!zS#*) + ˆP =#) A 0NK K,, 6 88:  n#"$1M%'"%9&I# A #IH$0N!)ˆ 1w "M&!%'Va#*) +‡#Œ)% K"$K!1$# A  ‘p) “hM[ c o$oQ” by\<]*u [wk f v o \ f o hj\_ f bd[*\_IsnlJ[*\9k o h o \gc o [*\ Z@_I\5]*x _9] oo u'[Ix=hMc o u_*\ ” rI_Is x _ f bd[*\ zK #% 1S}}=;9 }5I}  gŸ…ŸO%9)˜#*),+˜/,<=+=RS#) 698!8ž „ˆ) %'RK02"M0C3'# A 1wHM +VmD†1MR<HMO02)H$%93QO)0< %91TDF" A #*)! #*!%ERL=+5Š % A 02) †‘p) “hM[ c o$oQ” by\<]*uP[wk f v o v5byh f iIe[*x=h f vm–—\ \,x _Is  oQo4f by\<]•[Wk f v o –uQu'[9c'bd_ f bd[I\k4[*h—lJ[I^qx f _ f bd[I\g_Is!Znby\ e ]x=bCu f bdc4ujzK #*!%91›{6 ƒ‚={69:  GA Ÿ!"wH$O<V!Ÿ6 88} —5%91››#* RLЇ% A 3QOm"M%'Š % 1WH$0NRS#*HM02) O% A K„H$#*!%9"$1 ‘p) “›hM[9c oQoQ” by\5]Iu‰[wk f v o! f v ln[*\9k o hje o \c o [I\‰–qqstb o$” ™Z“zK,#*% 1T*{‚*:    ""$021MORS#*)z …—$# #3 A %95+z#*) +.ˆP$#%9V%'"Q19X6 88*}, …Ÿ!R A %4B‹1MV<)<HQ#IB  0 A +=02)# 34!RLK =H$#*HM02) # A•A %4B=0tŠ 3') ‘p)  \ f o hj\_ f bd[*\_IsSlJ[*\9k o h o \gc o [*\ lJ[I^qx f _Ie f bd[I\g_IsZnby\<]x=bCu f bdc4u&%—l' Z  ™(›eWš zK #% 1€*ž:I=€<;*€= ˆPŸE"$O)%9)˜#) +*)LJE"MV5R A + 1-,<0  €ƒƒ<€=/.—)˜HMO % "$ ,1WH$)%91$1*DS%9)!H$"M!K5V!Š  #!1w% + 1M0NR0 A #"M0NHWV R%9#!1w "M% 1 02)Œ%9&‚# A ,#IHM02)Œ*D@1M g3'#IH$%'!"$0NŽ #IHM02)U#!30< 0C1w0NHM02)m1wV=1wŠ H$%'RS1'?‘p) “hM[ c oQoQ” by\5]Iu„[Wk f v o21If v ln[*\9k o h o \c o [*\ ™G_ f x=hM_Is,Z@_I\5]*x _9] o Z o _*hj\ by\5]*z=K #% 1T86j=8<;= ˆPnE!"MO!)%') €*ƒƒ<€=43x65$cQ_ f o ]<[Ihjb87 _ f bd[I\˜–—c94x=bCujb f bd[I\  O G=H$O%91M0219z;:) 0N&!%'"Q1w0NHWVU*D …#*RG"$02+=!% LG>%'% 3QO 698!8!€ 69ƒƒR0 ANA 0N!)< Ÿ!"$+1 D -J) A 0C1wO H$O%G"$0tH$021MO>=#*HM02) # A …Ÿ"$K 19Z@_I\5]*x _9] o?—o u o _IhMcQv5z €: |W6‚~j26jg69{ >%9&<02)‰698!8{Sr\<]s bCuMvA@ o h05al†sN_IuQu o uŒ_I\ ” –—s fpo hj\g_Ie f bd[I\ uQŸ…ŸO 0239#*!:) 0N&!%'"Q1w0NHWVB J"$%91$19 …—C•C# #))02)a#*) +>D•E=3QOGF =H$Ž'%!L6 88!8H[*x=\ ” _ f bd[*\u [wk3 f _ f bCu f bdcQ_*s†™G_ f x=hM_IsnZ_*\<]x _'] o “›hM[9c o uQuQby\5]*I#‘KJ "M% 1M19   #02O # A 3'%9# #) +L•†‘j# A +=I&I#)(698!88 ˆ) # =HMŠ RS#IH$023 R%4HMO 5+PDF"@%') %'"Q#IHM02)1M%'),1w%nHQ#*!%9+ 3'"$K,!"$#  ‘p) “hM[ c oQo$” by\<]*u•[Wk—–—––  eWššz=K #*!%91›}!ž 6j5}!žž    M#02O # A 3'%9#€*ƒ!ƒ!€=? "Q+ 1w%9) 1w%+=0C1M#R•02 ,#IHM02) 1WŠ 02)‰K #IHMHM%'"$) A %9#"M)02)‰#*) +‡#* =H$RS#IH$023PDF%9#IH$ "$%L1M% A %934Š H$0N!)HN=[Ix=hj\_*sY[wk—™•_ f x=h$_IsgZ@_I\5]*x _9] o _*\ ” r\5]*by\ o$o hje by\5]*zK,#*% 1›{*}!{I5{<*: LM#0 ANA %'" z  %93,O T0NHMOz@…—M % A  #* R‰z•E"$!1$1'z#) + P#0 ANA %'" 6988!ƒ ‘p)<HM"$=+= 34HM02)„H$4‡!"$+;=%4H aˆ) !)=Š A 02)% A %'B=0239# A +#*H$# #1M%QN5[*x=hj\_*sY[WkZ o&R bdc$[$]hM_$q v5iIz {,|F}<~4 €{!‚=€I}!}  # S n# A R%9"9zT…—% A2A  #* Raz"g…ŸHwH$)z›>J—% A Dd1'zT#) + D•HJEH—#)  €*ƒƒ<€= ) A 021MO HQ#1-,=1' ˆ A2A Š Ÿ!"$+ 1 #) + &!%'"$ A %'B=0239# A 1M#RLK A %! 
‘p)I "M%9021$1#) +T) #"M+ 1-,5V | J"$%'0C1M1›#*),+B) #"M+ 1-,5Vz€*ƒ!ƒ!€!~jz5K #*!%91T€=64=€*}  JES †% +=%'"Q1w%9) €*ƒ!ƒ!€=U# #!3QO02)% A % #*"$)0N) I T0NHMO A %4B=0C3'# A DF% #IHM "M% 1'HJ›O%U+ A HMO #KK"$!#3QO HMV5%9) 1M%'&I# A Šp€S‘p) "M%9021$1†#*) + ) #*"$+ 1W,5VS|X "M%9021$1†#*) + ) #*"$+ 1-,<V!z€*ƒ!ƒ!€~4z K #% 1—6 {8I6'}5€= /,Y "M%9021$1•#) + ˆ•†E!"MO )%9) €ƒƒ!€ ‘pRK"$I&<02) 1M =Š 39#IHM%9!"M02Ž9#*HM02)Œ#!30< 0C1w0NHM02)Z T0tH$O[ •n‘p) “h$[9c oQoQ” e by\5]Iu[wk f v o]\ [Ih ” 3 o \ u o `•bCu'_I^54bt]x _ f bd[I\ \ [Ih-^IuMv[Qqz K #% 1—6 ƒ!€‚69ƒ!: /,Q "M%9021$1 #) +_•Q) #"M+ 1-,5VzP% +=0tH$"Q1' €*ƒƒ<€= “h$[Ie c oQoQ” by\<]*u [wk>3rn™`3r]@–Znebadc/3 o c$[*\ ”e \ fpo hj\g_ f bd[I\_*s \ [Ih-^IuMv[$q [*\’rI_Istx _ f by\<] \ [Ih ” 3 o \u o `PbCu'_I^Le 5'bt]*x _ f by\5]3i‚u f o ^•uj /,0 J"$%'0C1M19z9ˆ•9E!"MO!)%')z9#*),+ -'/,'"M0C1$345%€ƒƒ!€E5 =Š 39#IHM%9!"M02Ž9#*HM02)—#!30< 0C1w0NHM02) #!1#*) %9&‚# A ,#IHM02)ER%4H$O=+ DF!" [ G‘p) “hM[ c oQo$” by\<]*u‰[WkLZ  rl†zYK #*!%91S6‚6j 6‚*ž  GY T A #) + #*),+e•†/ "$#*Dd1W,5V€*ƒ!ƒ6!fn%9"M„1M%') 1M%Œ#) + &!%'"$G1M 39#IH$%'!"M02Ž9#*HM02)PK "M! #*0 A 0tH$0N% 1'@‘p)!gg5HM%'&!%')=Š 1M)(#*),+U †h#%9" A  z›%9+0tH$"Q1'z  v o Z o-R bdcQ_Is$i_IuQbCu‡[wk 3 o \ f o \c o “h$[9c o u$ujby\5]jc[*hj^_*sk% lJ[I^qx f _ f bd[I\g_Isk%n_I\ ” r R q o hjby^ o \ f _Is  uQuQx o Y…#R•"$02+%:) 0N&!%'"Q1w0NHWVl "M% 1M19z /!)a%')+mw#RL02) 19zˆRS1wHM%'"Q+#R‰J@S#*KK%9#"9 G T A #*) +z;•/! "Q#IDd1-,<V!z<> ;#%') )z6g6E#*O A z-5A +%'" z #) +.…—› T02+ +==3QO €ƒƒƒ _fn%'"$ 1M 39#IH$%934!"M02Ž9#*HM02) DF"$%< %9) 34V„+=08ng%'"$%') 3'%91%4HK %'%')˜ 1M0N) %91$1WŠ )%0 1#) +  # A #) 34% +U34!"MK"Q#@‘p)a–GlZ \ [Ih-^Iuwv [$q„[I\ lJ[I^q_*hje by\5] lJ[Ih q [Ih$_Iz K,#*% 1€*:I5{*}, ˆPE#*"&,‚#"#) +I•o%9RS#*)€*ƒƒ!ƒPˆ =HM!R#*HM0C3L%4B5HM"Q#34Š H$0N!) *D1w 39#IHM%9!"M02Ž9#*HM02)—DF"$#RL% 1gDF"n…ŸŽ'% 3QO‘p)Bp*š f v  \ fpo hj\g_ f bd[I\_*sEln[*\9k o h o \c o [I\ lJ[I^qx f _ f bd[I\g_IsŸZnby\ e ]x=bCu f bdc4ujzK #*!%91›ž8 6j=ž8<;= # q<H$%'&%9) 1M)P#*),+?)Gr(0 A ,=19Y€ƒƒ6!MJ›O%0N)<HM%9"$#!3jH$0N!) *D ,5)+ A %9+=!%G1M "Q34% 10N)I Ÿ!"$+ 1w%9) 1w%+=0C1M#R•02 ,#IHM02) lJ[I^qx f _ f bd[I\g_IsZ†by\<]x=bCu f bdc4ujz,€!;|d{<~j {!€=645{<*ƒ GP) #*"$+ 1W,5V!(€ƒƒ!ƒ/D02%'"Q#*"Q3QO0239# A +=% 340C1w02) A 0C1wH$1DF!" "Q+1w%9) 1w% +021$#*RG0N! #IH$0N!)?ln[*^qgx f o hQu‡_I\ ”„f v o sPx=^_*\ b f b o ujz {},|W6+t*€!~j26‚;I8I6 :ž
Towards a Model of Face-to-Face Grounding Yukiko I. Nakano†/†† Gabe Reinstein† Tom Stocky† Justine Cassell† †MIT Media Laboratory E15-315 20 Ames Street Cambridge, MA 02139 USA {yukiko, gabe, tstocky, justine}@media.mit.edu ††Research Institute of Science and Technology for Society (RISTEX) 2-5-1 Atago Minato-ku, Tokyo 105-6218, Japan [email protected] Abstract We investigate the verbal and nonverbal means for grounding, and propose a design for embodied conversational agents that relies on both kinds of signals to establish common ground in human-computer interaction. We analyzed eye gaze, head nods and attentional focus in the context of a direction-giving task. The distribution of nonverbal behaviors differed depending on the type of dialogue move being grounded, and the overall pattern reflected a monitoring of lack of negative feedback. Based on these results, we present an ECA that uses verbal and nonverbal grounding acts to update dialogue state. 1 Introduction An essential part of conversation is to ensure that the other participants share an understanding of what has been said, and what is meant. The process of ensuring that understanding – adding what has been said to the common ground – is called grounding [1]. In face-to-face interaction, nonverbal signals as well as verbal participate in the grounding process, to indicate that an utterance is grounded, or that further work is needed to ground. Figure 1 shows an example of human face-to-face conversation. Even though no verbal feedback is provided, the speaker (S) continues to add to the directions. Intriguingly, the listener gives no explicit nonverbal feedback – no nods or gaze towards S. S, however, is clearly monitoring the listener’s behavior, as we see by the fact that S looks at her twice (continuous lines above the words). In fact, our analyses show that maintaining focus of attention on the task (dash-dot lines underneath the words) is the listener’s public signal of understanding S’s utterance sufficiently for the task at hand. Because S is manifestly attending to this signal, the signal allows the two jointly to recognize S’s contribution as grounded. This paper provides empirical support for an essential role for nonverbal behaviors in grounding, motivating an architecture for an embodied conversational agent that can establish common ground using eye gaze, head nods, and attentional focus. Although grounding has received significant attention in the literature, previous work has not addressed the following questions: (1) what predictive factors account for how people use nonverbal signals to ground information, (2) how can a model of the face-to-face grounding process be used to adapt dialogue management to face-to-face conversation with an embodied conversational agent. This paper addresses these issues, with the goal of contributing to the literature on discourse phenomena, and of building more advanced conversational humanoids that can engage in human conversational protocols. In the next section, we discuss relevant previous work, report results from our own empirical study and, based on our analysis of conversational data, propose a model of grounding using both verbal and nonverbal information, and present our implementation of that model into an embodied conversational agent. As a preliminary evaluation, we compare a user interacting with the embodied conversational agent with and without grounding. 
Figure 1: Human face-to-face conversation
[580] S: Go to the fourth floor, [590] S: hang a left, [600] S: hang another left.
(speaker's behavior: look at map, gaze at listener, gaze at listener, look at map, look at map; listener's behavior: look at map throughout)

2 Related Work
Conversation can be seen as a collaborative activity to accomplish information-sharing and to pursue joint goals and tasks. Under this view, agreeing on what has been said, and what is meant, is crucial to conversation. The part of what has been said that the interlocutors understand to be mutually shared is called the common ground, and the process of establishing parts of the conversation as shared is called grounding [1]. As [2] point out, participants in a conversation attempt to minimize the effort expended in grounding. Thus, interlocutors do not always convey all the information at their disposal; sometimes it takes less effort to produce an incomplete utterance that can be repaired if needs be.
[3] has proposed a computational approach to grounding where the status of contributions as provisional or shared is part of the dialogue system's representation of the "information state" of the conversation. Conversational actions can trigger updates that register provisional information as shared. These actions achieve grounding. Acknowledgment acts are directly associated with grounding updates while other utterances effect grounding updates indirectly, because they proceed with the task in a way that presupposes that prior utterances are uncontroversial. [4], on the other hand, suggest that actions in conversation give probabilistic evidence of understanding, which is represented on a par with other uncertainties in the dialogue system (e.g., speech recognizer unreliability). The dialogue manager assumes that content is grounded as long as it judges the risk of misunderstanding as acceptable.
[1, 5] mention that eye gaze is the most basic form of positive evidence that the addressee is attending to the speaker, and that head nods have a similar function to verbal acknowledgements. They suggest that nonverbal behaviors mainly contribute to lower levels of grounding, to signify that interlocutors have access to each other's communicative actions, and are attending. With a similar goal of broadening the notion of communicative action beyond the spoken word, [6] examine other kinds of multimodal grounding behaviors, such as posting information on a whiteboard. Although these and other researchers have suggested that nonverbal behaviors undoubtedly play a role in grounding, previous literature does not characterize their precise role with respect to dialogue state.
On the other hand, a number of studies on these particular nonverbal behaviors do exist. An early study, [7], reported that conversation involves eye gaze about 60% of the time. Speakers look up at grammatical pauses for feedback on how utterances are being received, and also look at the task. Listeners look at speakers to follow their direction of gaze. In fact, [8] claimed speakers will pause and restart until they obtain the listener's gaze. [9] found that during conversational difficulties, mutual gaze was held longer at turn boundaries.
Previous work on embodied conversational agents (ECAs) has demonstrated that it is possible to implement face-to-face conversational protocols in human-computer interaction, and that correct relationships among verbal and nonverbal signals enhance the naturalness and effectiveness of embodied dialogue systems [10], [11]. [12] reported that users felt the agent to be more helpful, lifelike, and smooth in its interaction style when it demonstrated nonverbal conversational behaviors.

3 Empirical Study
In order to get an empirical basis for modeling face-to-face grounding, and implementing an ECA, we analyzed conversational data in two conditions.

3.1 Experiment Design
Based on previous direction-giving tasks, students from two different universities gave directions to campus locations to one another. Each pair had a conversation in (1) a Face-to-face condition (F2F), where two subjects sat with a map drawn by the direction-giver sitting between them, and in (2) a Shared Reference condition (SR), where an L-shaped screen between the subjects let them share a map drawn by the direction-giver, but not see the other's face or body. Interactions between the subjects were videorecorded from four different angles, and combined by a video mixer into synchronized video clips.

3.2 Data Coding
10 experiment sessions resulted in 10 dialogues per condition (20 in total), transcribed as follows.
Coding verbal behaviors: As grounding occurs within a turn, which consists of consecutive utterances by a speaker, following [13] we tokenized a turn into utterance units (UU), each corresponding to a single intonational phrase [14]. Each UU was categorized using the DAMSL coding scheme [15]. In the statistical analysis, we concentrated on the following four categories with regular occurrence in our data: Acknowledgement, Answer, Information request (Info-req), and Assertion.
Coding nonverbal behaviors: Based on previous studies, four types of behaviors were coded:
Gaze At Partner (gP): looking at the partner's eyes, eye region, or face.
Gaze At Map (gM): looking at the map.
Gaze Elsewhere (gE): looking away elsewhere.
Head nod (Nod): the head moves up and down in a single continuous movement on a vertical axis, but the eyes do not go above the horizontal axis.
By combining Gaze and Nod, six complex categories (e.g., gP with nod, gP without nod, etc.) are generated. In what follows, however, we analyze only categories with more than 10 instances. In order to analyze dyadic behavior, 16 combinations of the nonverbal behaviors are defined, as shown in Table 1. Thus, gP/gM stands for a combination of speaker gaze at partner and listener gaze at map.

Results
We examine differences between the F2F and SR conditions, correlate verbal and nonverbal behaviors within those conditions, and finally look at correlations between speaker and listener behavior.
Basic Statistics: The analyzed corpus consists of 1088 UUs for F2F, and 1145 UUs for SR. The mean length of conversations in F2F is 3.24 minutes, and in SR is 3.78 minutes (t(7)=-1.667, p<.07, one-tail). The mean length of utterances in F2F (5.26 words per UU) is significantly longer than in SR (4.43 words per UU) (t(7)=3.389, p<.01, one-tail). For the nonverbal behaviors, the number of shifts between the statuses in Table 1 was compared (e.g., a NV status shift from gP/gP to gM/gM is counted as one shift). There were 887 NV status shifts for F2F, and 425 shifts for SR. The number of NV status shifts in SR is less than half of that in F2F (t(7)=3.377, p<.01, one-tail).
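The dyadic statuses and shift counts above come from straightforward bookkeeping over the coded gaze and nod annotations. The following is a minimal sketch of that computation, assuming each participant's coding has already been reduced to one label per time sample; the data layout and function names are ours and purely illustrative, not the coding tools actually used in the study.

```python
# Sketch of deriving dyadic NV statuses and counting status shifts.
# Assumes each participant's gaze (gP/gM/gE) and nod codings are already
# aligned as per-sample labels; names and layout are illustrative only.

def participant_status(gaze: str, nod: bool) -> str:
    """Collapse gaze and nod into one label (gP, gM, gMwN, gE).

    Only 'gaze at map with nod' occurred often enough to keep as a separate
    category, so a nod during other gaze states falls back to the gaze label.
    """
    return "gMwN" if (gaze == "gM" and nod) else gaze


def dyad_status(speaker: str, listener: str) -> str:
    """One of the 16 dyadic NV statuses of Table 1, e.g. 'gP/gM'."""
    return f"{speaker}/{listener}"


def count_shifts(statuses: list) -> int:
    """Number of NV status shifts; e.g. gP/gP -> gM/gM counts as one shift."""
    return sum(1 for prev, cur in zip(statuses, statuses[1:]) if prev != cur)


# Toy example of a sampled dyadic status sequence.
samples = ["gM/gM", "gM/gM", "gP/gM", "gP/gP", "gM/gM"]
print(count_shifts(samples))  # -> 3
```

Per-dialogue counts of this kind, aggregated by condition, are what comparisons such as the 887 (F2F) vs. 425 (SR) shifts above rest on.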
These results indicate that visual access to the interlocutor's body affects the conversation, suggesting that these nonverbal behaviors are used as communicative signals. In SR, where the mean length of a UU is shorter, speakers present information in smaller chunks than in F2F, leading to more chunks and a slightly longer conversation. In F2F, on the other hand, conversational participants convey more information in each UU.
Correlation between verbal and nonverbal behaviors: We analyzed NV status shifts with respect to the type of verbal communicative action and the experimental condition (F2F/SR). To look at the continuity of NV status, we also analyzed the amount of time spent in each NV status. For gaze, transitions and time spent gave similar results; since head nods are so brief, however, we discuss the data in terms of transitions. Table 2 shows the most frequent target NV status (shift to these statuses from others) for each speech act type in F2F. Numbers in parentheses indicate the proportion of the total number of transitions.

Table 1: NV statuses (speaker's behavior / listener's behavior)
                 Listener gP   Listener gM   Listener gMwN   Listener gE
Speaker gP       gP/gP         gP/gM         gP/gMwN         gP/gE
Speaker gM       gM/gP         gM/gM         gM/gMwN         gM/gE
Speaker gMwN     gMwN/gP       gMwN/gM       gMwN/gMwN       gMwN/gE
Speaker gE       gE/gP         gE/gM         gE/gMwN         gE/gE

Table 2: Salient transitions
                   Shift to (within UU)   Shift to (pause)
Acknowledgement    gMwN/gM (0.495)        gM/gM (0.888)
Answer             gP/gP (0.436)          gM/gM (0.667)
Info-req           gP/gM (0.38)           gP/gP (0.5)
Assertion          gP/gM (0.317)          gM/gM (0.418)

<Acknowledgement> Within a UU, the dyad's NV status most frequently shifts to gMwN/gM (e.g., the speaker utters "OK" while nodding, and the listener looks at the map). At pauses, a shift to gM/gM is most frequent. The same results were found in SR, where the listener could not see the speaker's nod. These findings suggest that Acknowledgement is likely to be accompanied by a head nod, and this behavior may function introspectively, as well as communicatively.
<Answer> In F2F, the most frequent shift within a UU is to gP/gP. This suggests that speakers and listeners rely on mutual gaze (gP/gP) to ensure an answer is grounded, whereas they cannot use this strategy in SR. In addition, we found that speakers frequently look away at the beginning of an answer, as they plan their reply [7].
<Info-req> In F2F, the most frequent shift within a UU is to gP/gM, while at pauses between UUs a shift to gP/gP is the most frequent. This suggests that speakers obtain mutual gaze after asking a question to ensure that the question is clear, before the turn is transferred to the listener to reply. In SR, however, rarely is there any NV status shift, and participants continue looking at the map.
<Assertion> In both conditions, listeners look at the map most of the time, and sometimes nod. However, speakers' nonverbal behavior is very different across conditions. In SR, speakers either look at the map or elsewhere. By contrast, in F2F, they frequently look at the listener, so that a shift to gP/gM is the most frequent within a UU. This suggests that, in F2F, speakers check whether the listener is paying attention to the referent mentioned in the Assertion. This implies that not only the listener's gazing at the speaker, but also the listener's attention to a referent, works as positive evidence of understanding in F2F.
In summary, it is already known that eye gaze can signal a turn-taking request [16], but turn-taking cannot account for all our results.
Gaze direction changes within as well as between UUs, and the usage of these nonverbal behaviors differs depending on the type of conversational action. Note that subjects rarely demonstrated communication failures, implying that these nonverbal behaviors represent positive evidence of grounding.
Correlation between speaker and listener behavior: Thus far we have demonstrated a difference in distribution among nonverbal behaviors, with respect to conversational action, and visibility of the interlocutor. But, to uncover the function of these nonverbal signals, we must examine how the listener's nonverbal behavior affects the speaker's following action. Thus, we looked at two consecutive Assertion UUs by a direction-giver, and analyzed the relationship between the NV status of the first UU and the direction-giving strategy in the second UU. The giver's second UU is classified as go-ahead if it gives the next leg of the directions, or as elaboration if it gives additional information about the first UU, as in the following example:
[U1] S: And then, you'll go down this little corridor.
[U2] S: It's not very long.
Results are shown in Figure 2. When the listener begins to gaze at the speaker somewhere within a UU, and maintains gaze until the pause after the UU, the speaker's next UU is an elaboration of the previous UU 73% of the time. On the other hand, when the listener keeps looking at the map during a UU, only 30% of the next UUs are elaborations (z = 3.678, p<.01). Moreover, when a listener keeps looking at the speaker, the speaker's next UU is go-ahead only 27% of the time. In contrast, when a listener keeps looking at the map, the speaker's next UU is go-ahead 52% of the time (z = -2.049, p<.05).¹ These results suggest that speakers interpret listeners' continuous gaze as evidence of not-understanding, and they therefore add more information about the previous UU. Similar findings were reported for a map task by [17], who suggested that, at times of communicative difficulty, interlocutors are more likely to utilize all the channels available to them. In terms of floor management, gazing at the partner is a signal of giving up a turn, and here this indicates that listeners are trying to elicit more information from the speaker. In addition, listeners' continuous attention to the map is interpreted as evidence of understanding, and speakers go ahead to the next leg of the direction.²
¹ The percentage for map does not sum to 100% because some of the UUs are cue phrases or tag questions which are part of the next leg of the direction, but do not convey content.
² We also analyzed two consecutive Answer UUs from a giver, and found that when the listener looks at the speaker at a pause, the speaker elaborates the Answer 78% of the time. When the listener looks at the speaker during the UU and at the map after the UU (positive evidence), the speaker elaborates only 17% of the time.
Figure 2: Relationship between receiver's NV and giver's next verbal behavior (proportion of elaboration and go-ahead responses following listener gaze at the speaker vs. at the map)

3.3 A Model of Face-to-Face Grounding
Analyzing spoken dialogues, [18] reported that grounding behavior is more likely to occur at an intonational boundary, which we use to identify UUs. This implies that multiple grounding behaviors can occur within a turn if it consists of multiple UUs.
However, in previous models, information is grounded only when a listener returns verbal feedback, and acknowledgement marks the smallest scope of grounding. If we apply this model to the example in Figure 1, none of the UUs have been grounded because the listener has not returned any spoken grounding cues. In contrast, our results suggest that considering the role of nonverbal behavior, especially eye gaze, allows a more fine-grained model of grounding, employing the UU as a unit of grounding. Our results also suggest that speakers are actively monitoring positive evidence of understanding, and also the absence of negative evidence of understanding (that is, signs of miscommunication). When listeners continue to gaze at the task, speakers continue on to the next leg of the directions.
Because of the incremental nature of grounding, we implement nonverbal grounding functionality in an embodied conversational agent using a process model that describes the steps for a system to judge whether a user understands the system's contribution:
(1) Preparing for the next UU: according to the speech act type of the next UU, the nonverbal positive or negative evidence that the agent expects to receive is specified.
(2) Monitoring: the agent monitors and checks the user's nonverbal status and signals during the UU. After speaking, the agent continues monitoring until s/he gets enough evidence of understanding or not-understanding, represented by the user's nonverbal status and signals.
(3) Judging: once the agent gets enough evidence, s/he tries to judge groundedness as soon as possible. According to previous studies, the length of a pause between UUs is between 0.4 and 1 sec [18, 19]. Thus, the time-out for judgment is 1 sec after the end of the UU. If the agent does not have evidence by then, the UU remains ungrounded.
This model is based on the information state approach [3], with update rules that revise the state of the conversation based on the inputs the system receives. In our case, however, the inputs are sampled continuously, include the nonverbal state, and only some require updates. Other inputs indicate that the last utterance is still pending, and allow the agent to wait further. In particular, task attention over an interval following the utterance triggers grounding. Gaze in the interval means that the contribution stays provisional, and triggers an obligation to elaborate. Likewise, if the system times out without recognizing any user feedback, the segment remains ungrounded. This process allows the system to keep talking across multiple utterance units without getting verbal feedback from the user. From the user's perspective, explicit acknowledgement is not necessary, and minimal cost is involved in eliciting elaboration.

4 Face-to-face Grounding with ECAs
Based on our empirical results, we propose a dialogue manager that can handle nonverbal input to the grounding process, and we implement the mechanism in an embodied conversational agent.

4.1 System
MACK is an interactive public information ECA kiosk. His current knowledge base concerns the activities of the MIT Media Lab; he can answer questions about the lab's research groups, projects, and demos, and give directions to each. On the input side, MACK recognizes three modalities: (1) speech, using IBM's ViaVoice, (2) pen gesture via a paper map atop a table with an embedded Wacom tablet, and (3) head nod and eye gaze via a stereo-camera-based 6-degree-of-freedom head-pose tracker (based on [20]).
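As a rough illustration of how the continuous head-pose stream from such a tracker can be turned into the discrete nonverbal evidence that grounding relies on, the sketch below discretizes pose samples into timestamped gaze events. The thresholds, field names, and angle conventions are illustrative assumptions on our part, not the tracker's actual interface.

```python
from dataclasses import dataclass

@dataclass
class NVEvent:
    """A timestamped nonverbal observation used as grounding evidence."""
    t: float      # seconds since the start of the session
    kind: str     # 'gaze' or 'nod'
    value: str    # e.g. 'look at MACK', 'look at map', 'look elsewhere'

def classify_gaze(yaw_deg: float, pitch_deg: float) -> str:
    """Map a head orientation to a coarse gaze target (illustrative thresholds)."""
    if abs(yaw_deg) < 10 and pitch_deg > -10:   # roughly facing the kiosk
        return "look at MACK"
    if pitch_deg < -20:                         # looking down at the table map
        return "look at map"
    return "look elsewhere"

def pose_to_events(samples):
    """samples: iterable of (t, yaw_deg, pitch_deg) tuples; emit a gaze event
    only when the inferred gaze target changes."""
    events, last = [], None
    for t, yaw, pitch in samples:
        target = classify_gaze(yaw, pitch)
        if target != last:
            events.append(NVEvent(t, "gaze", target))
            last = target
    return events
```

Nod events, detected from the vertical rotation of the head, would be appended to the same timestamped stream.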
These inputs operate as parallel threads, allowing the Understanding Module (UM) to interpret the multiple modalities both individually and in combination. MACK produces multimodal output as well: (1) speech synthesis using the Microsoft Whistler Text-to-Speech (TTS) API, (2) a graphical figure with synchronized hand and arm gestures, and head and eye movements, and (3) LCD projector highlighting on the paper map, allowing MACK to reference it.
The system architecture is shown in Figure 3. The UM interprets the input modalities and converts them to dialogue moves, which it then passes on to the Dialogue Manager (DM). The DM consists of two primary sub-modules: the Response Planner, which determines MACK's next action(s) and creates a sequence of utterance units, and the Grounding Module (GrM), which updates the Discourse Model and decides when the Response Planner's next UU should be passed on to the Generation Module (GM). The GM converts the UU into speech, gesture, and projector output, sending these synchronized modalities to the TTS engine, Animation Module (AM), and Projector Module.
Figure 3: MACK system architecture
The Discourse Model maintains information about the state and history of the discourse. This includes a list of grounded beliefs and ungrounded UUs; a history of previous UUs with timing information; a history of nonverbal information (divided into gaze states and head nods) organized by timestamp; and information about the state of the dialogue, such as the current UU under consideration, and when it started and ended.

4.2 Nonverbal Inputs
Eye gaze and head nod inputs are recognized by a head tracker, which calculates rotations and translations in three dimensions based on visual and depth information taken from two cameras [20]. The calculated head pose is translated into "look at MACK," "look at map," or "look elsewhere." The rotation of the head is translated into head nods, using a modified version of [21]. Head nod and eye gaze events are timestamped and logged within the nonverbal component of the Discourse History. The Grounding Module can thus look up the appropriate nonverbal information to judge a UU.

4.3 The Dialogue Manager
In a kiosk ECA, the system needs to ensure that the user understands the information provided by the agent. For this reason, we concentrated on implementing a grounding mechanism for Assertion, when the agent gives the user directions, and Answer, when the agent answers the user's questions.
Generating the Response
The first job of the DM is to plan the response to a user's query. When a user asks for directions, the DM receives an event from the UM stating this intention. The Response Planner in the DM, recognizing the user's direction request, calculates the directions, broken up into segments. These segments are added to the DM's Agenda, the stack of UUs to be processed. At this point, the GrM sends the first UU (a direction segment) on the Agenda to the GM to be processed. The GM converts the UU into speech and animation commands. For MACK's own nonverbal grounding acts, the GM determines MACK's gaze behavior according to the type of UU. For example, when MACK generates a direction segment (an Assertion), 66% of the time he keeps looking at the map. When elaborating a previous UU, 47% of the time he gazes at the user. When the GM begins to process the UU, it logs the start time in the Discourse Model, and when it finishes processing (as it sends the final command to the animation module), it logs the end time.
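The Discourse Model described above is essentially a small record of grounded and pending contributions plus the timestamped nonverbal history. The sketch below shows one way such a structure might look; the class and field names are ours, intended only to make the bookkeeping concrete, and the actual implementation surely differs in detail.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class UU:
    """One utterance unit with the timing logged by the Generation Module."""
    text: str
    act_type: str                   # 'Assertion', 'Answer', ...
    start: Optional[float] = None   # set when generation of the UU begins
    end: Optional[float] = None     # set when the final command is sent

@dataclass
class DiscourseModel:
    grounded: list = field(default_factory=list)    # beliefs from grounded UUs
    pending: list = field(default_factory=list)     # ungrounded UUs
    nv_history: list = field(default_factory=list)  # timestamped gaze/nod events
    current_uu: Optional[UU] = None

    def nv_between(self, t0: float, t1: float):
        """Nonverbal events co-occurring with an interval (a UU or the pause
        that follows it); each event is assumed to carry a timestamp .t."""
        return [e for e in self.nv_history if t0 <= e.t <= t1]
```

A query such as nv_between(uu.start, uu.end) is what lets the Grounding Module line up the user's gaze and nods with the utterance it is trying to judge.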
The GrM waits for this speech and animation to end (by polling the Discourse Model until the end time is available), at which point it retrieves the timing data for the UU, in the form of timestamps for the UU start and finish. This timing data is used to look up the nonverbal behavior co-occurring with the utterance in order to judge whether or not the UU was grounded.
Judgment of grounding
When MACK finishes uttering a UU, the Grounding Module judges whether or not the UU is grounded, based on the user's verbal and nonverbal behaviors during and after the UU.
Using verbal evidence: If the user returns an acknowledgement, such as "OK", the GrM judges the UU grounded. If the user explicitly reports failure in perceiving MACK's speech (e.g., "What?"), or not-understanding (e.g., "I don't understand"), the UU remains ungrounded. Note that, for the moment, verbal evidence is considered stronger than nonverbal evidence.
Using nonverbal evidence: The GrM looks up the nonverbal behavior occurring during the utterance, and compares it to the model shown in Table 3. For each type of speech act, this model specifies the nonverbal behaviors that signal positive or explicit negative evidence. First, the GrM compares the within-UU nonverbal behavior to the model. Then, it looks at the first nonverbal behavior occurring during the pause after the UU. If these two behaviors ("within" and "pause") match a pattern that signals positive evidence, the UU is grounded. If they match a pattern for negative evidence, the UU is not grounded. If no pattern has yet been matched, the GrM waits for a tenth of a second and checks again. If the required behavior has occurred during this time, the UU is judged. If not, the GrM continues looping in this manner until the UU is either grounded or ungrounded explicitly, or a 1 second threshold has been reached. If the threshold is reached without a decision, the GrM times out and judges the UU ungrounded.

Table 3: Grounding Model for MACK
Target UU type   Evidence type   NV pattern                     Judgment     Suggested next action
Assertion        positive        within: map; pause: map/nod    grounded     go-ahead: 0.70, elaboration: 0.30
Assertion        negative        within: gaze; pause: gaze      ungrounded   go-ahead: 0.27, elaboration: 0.73
Answer           positive        within: gaze; pause: map       grounded     go-ahead: 0.83, elaboration: 0.17
Answer           negative        pause: gaze                    ungrounded   go-ahead: 0.22, elaboration: 0.78

Updating the Dialogue State
After judging grounding, the GrM updates the Discourse Model. The Discourse State maintained in the Discourse Model is similar to the TRINDI kit [3], except that we store nonverbal information. There are three key fields: (1) a list of grounded UUs, (2) a list of pending (ungrounded) UUs, and (3) the current UU. If the current UU is judged grounded, its belief is added to (1). If ungrounded, the UU is stored in (2). If a UU has subsequent contributions such as elaborations, these are stored in a single discourse unit, and grounded together when the last UU is grounded.
Determining the Next Action
After judging the UU's grounding, the GrM decides what MACK does next. (1) MACK can continue giving the directions as normal, by sending on the next segment in the Agenda to the GM. As shown in Table 3, this happens 70% of the time when the UU is grounded, and only 27% of the time when it is not grounded. Note that this happens 100% of the time if a verbal acknowledgement (e.g., "Uh huh") is received for the UU. (2) MACK can elaborate on the most recent stage of the directions. Elaborations are generated 73% of the time when an Assertion is judged ungrounded, and 78% of the time for an ungrounded Answer. MACK elaborates by describing the most recent landmark in more detail. For example, if the directions were "Go down the hall and make a right at the door," he might elaborate by saying "The big blue door." In this case, the GrM asks the Response Planner (RP) to provide an elaboration for the current UU; the RP generates this elaboration (looking up the landmark in the database) and adds it to the front of the Agenda; and the GrM sends this new UU on to the GM.
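Taken together, the judgment and next-action steps amount to repeatedly checking the logged nonverbal behavior against the per-act patterns in Table 3 until a match or a one-second timeout, and then choosing between going ahead and elaborating with the probabilities above. The sketch below is our schematic reading of that loop: the pattern table is transcribed from Table 3, but the function names, polling helper, and random choice are illustrative assumptions, and verbal evidence, which overrides the nonverbal judgment, is assumed to be handled before this point.

```python
import random
import time

# Positive / negative evidence patterns per speech act (after Table 3),
# as (within-UU behavior, pause behavior) pairs; None matches anything.
PATTERNS = {
    "Assertion": {"positive": [("map", "map"), ("map", "nod")],
                  "negative": [("gaze", "gaze")]},
    "Answer":    {"positive": [("gaze", "map")],
                  "negative": [(None, "gaze")]},
}

# Probability of going ahead (vs. elaborating) per act type and judgment.
P_GO_AHEAD = {
    ("Assertion", True): 0.70, ("Assertion", False): 0.27,
    ("Answer", True): 0.83,    ("Answer", False): 0.22,
}

def judge(act_type, within, observe_pause, timeout=1.0, step=0.1):
    """Return True (grounded) or False (ungrounded).

    `within` is the behavior seen during the UU; `observe_pause()` reports the
    first behavior seen so far in the pause, or None if nothing yet.
    """
    waited = 0.0
    while waited <= timeout:
        pause = observe_pause()
        for polarity, verdict in (("positive", True), ("negative", False)):
            for w, p in PATTERNS[act_type][polarity]:
                if (w is None or w == within) and p == pause:
                    return verdict
        time.sleep(step)     # poll again a tenth of a second later
        waited += step
    return False             # time-out: treat the UU as ungrounded

def next_action(act_type, grounded):
    return "go-ahead" if random.random() < P_GO_AHEAD[(act_type, grounded)] else "elaborate"
```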
Finally, if the user gives MACK explicit verbal evidence of not understanding, MACK will simply repeat the last thing he said, by sending the UU back to the GM.

4.4 Example
Figure 4 shows an example of a user's interaction with MACK. The user asks MACK for directions, and MACK replies using speech and pointing (using a projector) to the shared map.
Figure 4: Example of user (U) interacting with MACK (M). User gives negative evidence of grounding in [3], so MACK elaborates [4].
[1] U: How do I get to Room 309?
[2] M: To get to Room 309, go to that door and make a right.
[3] M: Walk down the hall and make a left at the door.
[4] M: It's the glass door with red couches right outside.
[5] M: And that's Room 309.
(user's behavior: looks at the map throughout, except during [3], when the user gazes at MACK; the user nods during [4])
When the GrM sends the first segment in the Agenda to the GM, the starting time of the UU is noted and it is sent to the AM to be spoken and animated. During this time, the user's nonverbal signals are logged in the Discourse Model. When the UU has finished, the GrM evaluates the log of the UU and of the very beginning of the pause (by waiting a tenth of a second and then checking the nonverbal history). In this case, MACK noted that the user looked at the map during the UU, and continued to do so just afterwards. This pattern matches the model for Assertion. The UU is judged as grounded, and the grounded belief is added to the Discourse Model. MACK then utters the second segment as before, but this time the GrM finds that the user was looking up at MACK during most of the UU as well as after it, which signals that the UU is not grounded. Therefore, the RP generates an elaboration (line 4). This utterance is judged to be grounded both because the user continues looking at the map, and because the user nods, and so the final stage of the directions is spoken. This is also grounded, leaving MACK ready for a new inquiry.

5 Preliminary Evaluation
Although we have shown an empirical basis for our implementation, it is important to ensure both that human users interact with MACK as we expect, and that their interaction is more effective than without nonverbal grounding. The issue of effectiveness merits a full-scale study, and thus we have chosen to concentrate here on whether MACK elicits the same behaviors from users as does interaction with other humans.
Two subjects were therefore each assigned to one of the following two conditions, both of which were run as Wizard of Oz studies (that is, "speech recognition" was carried out by an experimenter):
(a) MACK-with-grounding: MACK recognized the user's nonverbal signals for grounding, and displayed his own nonverbal signals as a speaker.
(b) MACK-without-grounding: MACK paid no attention to the user's nonverbal behavior, and did not display nonverbal signals as a speaker. He gave the directions in one single turn.
Subjects were instructed to ask for directions to two places, and were told that they would have to lead the experimenters to those locations to test their comprehension. We analyzed the second direction-giving interaction, after subjects had become accustomed to the system.

Results: In neither condition did users return verbal feedback during MACK's direction giving. As shown in Table 4, in MACK-with-grounding 7 nonverbal status transitions were observed during his direction giving, which consisted of 5 Assertion UUs, one of them an elaboration. The transition patterns between MACK and the user when MACK used nonverbal grounding are strikingly similar to those in our empirical study of human-to-human communication. There were three transitions to gM/gM (both look at the map), which is a normal status in map task conversation, and two transitions to gP/gM (MACK looks at the user, and the user looks at the map), which is the most frequent transition in Assertion as reported in Section 3. Moreover, in MACK's third UU, the user began looking at MACK in the middle of the UU and kept looking at him after the UU ended. This behavior successfully elicited MACK's elaboration in the next UU. On the other hand, in the MACK-without-grounding condition, the user never looked at MACK, and nodded only once, early on. As shown in Table 4, only three transitions were observed (a shift to gMgM at the beginning of the interaction, a shift to gMgMwN, then back to gMgM). While a larger-scale evaluation with quantitative data is one of the most important issues for future work, the results of this preliminary study strongly support our model, and show MACK's potential for interacting with a human user using human-human conversational protocols.

Table 4: Preliminary evaluation
                   with-grounding   w/o-grounding
num of UUs               5                4
shifts to gMgM           3                2
shifts to gPgM           2                0
shifts to gMgP           1                0
shifts to gPgP           1                0
shifts to gMgMwN         0                1
total shifts             7                3

Figure 5: MACK with user

6 Discussion and Future Work

We have reported how people use nonverbal signals in the process of grounding. We found that the nonverbal signals that are recognized as positive evidence of understanding differ depending on the type of speech act. We also found that maintaining gaze on the speaker is interpreted as evidence of not-understanding, evoking an additional explanation from the speaker. Based on these empirical results, we proposed a model of nonverbal grounding and implemented it in an embodied conversational agent. One of the most important future directions is to establish a more comprehensive model of face-to-face grounding. Our study focused on eye gaze and head nods, which directly contribute to grounding. It is also important to analyze other types of nonverbal behaviors and investigate how they interact with eye gaze and head nods to achieve common ground, as well as contradictions between verbal and nonverbal evidence (e.g., an interlocutor says "OK" but looks at the partner).
Finally, the implementation proposed here is a simple one, and it is clear that a more sophisticated dialogue management strategy is warranted, and will allow us to deal with back-grounding, and other aspects of miscommunication. For example, it would be useful to distinguish different levels of miscommunication: a sound that may or may not be speech, an out-of-grammar utterance, or an utterance whose meaning is ambiguous. In order to deal with such uncertainty in grounding, incorporating a probabilistic approach [4] into our model of face-to-face grounding is an elegant possibility. Acknowledgement Thanks to Candy Sidner, Matthew Stone, and 3 anonymous reviewers for comments that improved the paper. Thanks to Prof. Nishida at Univ. of Tokyo for his support of the research. References 1.Clark, H.H. and E.F. Schaefer, Contributing to discourse. Cognitive Science, 1989. 13,: p. 259-294. 2.Clark, H.H. and D. Wilkes-Gibbs, Referring as a collaborative process. Cognition, 1986. 22: p. 1-39. 3.Matheson, C., M. Poesio, and D. Traum. Modelling Grounding and Discourse Obligations Using Update Rules. in 1st Annual Meeting of the North American Association for Computational Linguistics (NAACL2000). 2000. 4.Paek, T. and E. Horvitz, Uncertainty, Utility, and Misunderstanding, in Working Papers of the AAAI Fall Symposium on Psychological Models of Communication in Collaborative Systems, S.E. Brennan, A. Giboin, and D. Traum, Editors. 1999, AAAI: Menlo Park, California. p. 85-92. 5.Clark, H.H., Using Language. 1996, Cambridge: Cambridge University Press. 6.Traum, D.R. and P. Dillenbourg. Miscommunication in Multimodal Collaboration. in AAAI Workshop on Detecting, Repairing, and Preventing Human-Machine Miscommunication. 1996. Portland, OR. 7.Argyle, M. and M. Cook, Gaze and Mutual Gaze. 1976, Cambridge: Cambridge University Press. 8.Goodwin, C., Achieving Mutual Orientation at Turn Beginning, in Conversational Organization: Interaction between speakers and hearers. 1981, Academic Press: New York. p. 55-89. 9.Novick, D.G., B. Hansen, and K. Ward. Coordinating turn-taking with gaze. in ICSLP-96. 1996. Philadelphia, PA. 10.Cassell, J., et al. More Than Just a Pretty Face: Affordances of Embodiment. in IUI 2000. 2000. New Orleans, Louisiana. 11.Traum, D. and J. Rickel. Embodied Agents for Multiparty Dialogue in Immersive Virtual Worlds. in Autonomous Agents and Multi-Agent Systems. 2002. 12.Cassell, J. and K.R. Thorisson, The Power of a Nod and a Glance: Envelope vs. Emotional Feedback in Animated Conversational Agents. Applied Artificial Intelligence, 1999. 13: p. 519-538. 13.Nakatani, C. and D. Traum, Coding discourse structure in dialogue (version 1.0). 1999, University of Maryland. 14.Pierrehumbert, J.B., The phonology and phonetics of english intonation. 1980, Massachusetts Institute of Technology. 15.Allen, J. and M. Core, Draft of DMSL: Dialogue Act Markup in Several Layers. 1997, http://www.cs.rochester.edu/research/cisd/resources/da msl/RevisedManual/RevisedManual.html. 16.Duncan, S., On the structure of speaker-auditor interaction during speaking turns. Language in Society, 1974. 3: p. 161-180. 17.Boyle, E., A. Anderson, and A. Newlands, The Effects of Visibility in a Cooperative Problem Solving Task. Language and Speech, 1994. 37(1): p. 1-20. 18.Traum, D. and P. Heeman. Utterance Units and Grounding in Spoken Dialogue. in ICSLP. 1996. 19.Nakajima, S.y. and J.F. Allen. Prosody as a cue for discourse structure. in ICSLP. 1992. 20.Morency, L.P., A. Rahimi, and T. Darrell. 
A View-Based Appearance Model for 6 DOF Tracking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2003. Madison, Wisconsin. 21. Kapoor, A. and R.W. Picard. A Real-Time Head Nod and Shake Detector. In Workshop on Perceptive User Interfaces. 2001. Orlando, FL.
Discourse Segmentation of Multi-Party Conversation Michel Galley Kathleen McKeown Columbia University Computer Science Department 1214 Amsterdam Avenue New York, NY 10027, USA {galley,kathy}@cs.columbia.edu Eric Fosler-Lussier Columbia University Electrical Engineering Department 500 West 120th Street New York, NY 10027, USA [email protected] Hongyan Jing IBM T.J. Watson Research Center Yorktown Heights, NY 10598, USA [email protected] Abstract We present a domain-independent topic segmentation algorithm for multi-party speech. Our feature-based algorithm combines knowledge about content using a text-based algorithm as a feature and about form using linguistic and acoustic cues about topic shifts extracted from speech. This segmentation algorithm uses automatically induced decision rules to combine the different features. The embedded text-based algorithm builds on lexical cohesion and has performance comparable to state-of-the-art algorithms based on lexical information. A significant error reduction is obtained by combining the two knowledge sources. 1 Introduction Topic segmentation aims to automatically divide text documents, audio recordings, or video segments, into topically related units. While extensive research has targeted the problem of topic segmentation of written texts and spoken monologues, few have studied the problem of segmenting conversations with many participants (e.g., meetings). In this paper, we present an algorithm for segmenting meeting transcripts. This study uses recorded meetings of typically six to eight participants, in which the informal style includes ungrammatical sentences and overlapping speakers. These meetings generally do not have pre-set agendas, and the topics discussed in the same meeting may or may not related. The meeting segmenter comprises two components: one that capitalizes on word distribution to identify homogeneous units that are topically cohesive, and a second component that analyzes conversational features of meeting transcripts that are indicative of topic shifts, like silences, overlaps, and speaker changes. We show that integrating features from both components with a probabilistic classifier (induced with c4.5rules) is very effective in improving performance. In Section 2, we review previous approaches to the segmentation problem applied to spoken and written documents. In Section 3, we describe the corpus of recorded meetings intended to be segmented, and the annotation of its discourse structure. In Section 4, we present our text-based segmentation component. This component mainly relies on lexical cohesion, particularly term repetition, to detect topic boundaries. We evaluated this segmentation against other lexical cohesion segmentation programs and show that the performance is state-of-theart. In the subsequent section, we describe conversational features, such as silences, speaker change, and other features like cue phrases. We present a machine learning approach for integrating these conversational features with the text-based segmentation module. Experimental results show a marked improvement in meeting segmentation with the incorporation of both sets of features. We close with discussions and conclusions. 2 Related Work Existing approaches to textual segmentation can be broadly divided into two categories. On the one hand, many algorithms exploit the fact that topic segments tend to be lexically cohesive. 
Embodiments of this idea include semantic similarity (Morris and Hirst, 1991; Kozima, 1993), cosine similarity in word vector space (Hearst, 1994), inter-sentence similarity matrix (Reynar, 1994; Choi, 2000), entity repetition (Kan et al., 1998), word frequency models (Reynar, 1999), or adaptive language models (Beeferman et al., 1999). Other algorithms exploit a variety of linguistic features that may mark topic boundaries, such as referential noun phrases (Passonneau and Litman, 1997). In work on segmentation of spoken documents, intonational, prosodic, and acoustic indicators are used to detect topic boundaries (Grosz and Hirschberg, 1992; Nakatani et al., 1995; Hirschberg and Nakatani, 1996; Passonneau and Litman, 1997; Hirschberg and Nakatani, 1998; Beeferman et al., 1999; T¨ur et al., 2001). Such indicators include long pauses, shifts in speaking rate, great range in F0 and intensity, and higher maximum accent peak. These approaches use different learning mechanisms to combine features, including decision trees (Grosz and Hirschberg, 1992; Passonneau and Litman, 1997; T¨ur et al., 2001) exponential models (Beeferman et al., 1999) or other probabilistic models (Hajime et al., 1998; Reynar, 1999). 3 The ICSI Meeting Corpus We have evaluated our segmenter on the ICSI Meeting corpus (Janin et al., 2003). This corpus is one of a growing number of corpora with human-to-human multi-party conversations. In this corpus, recordings of meetings ranged primarily over three different recurring meeting types, all of which concerned speech or language research.1 The average duration is 60 minutes, with an average of 6.5 participants. They were transcribed, and each conversation turn was marked with the speaker, start time, end time, and word content. From the corpus, we selected 25 meetings to be segmented, each by at least three subjects. We opted for a linear representation of discourse, since finer-grained discourse structures (e.g. (Grosz and Sidner, 1986)) are generally considered to be difficult to mark reliably. Subjects were asked to mark each speaker change (potential boundary) as either boundary or non-boundary. In the resulting annotation, the agreed segmentation based on majority 1While it would be desirable to have a broader variety of meetings, we hope that experiments on this corpus will still carry some generality. opinion contained 7.5 segments per meeting on average, while the average number of potential boundaries is 770. We used Cochran’s Q (1950) to evaluate the agreement among annotators. Cochran’s test evaluates the null hypothesis that the number of subjects assigning a boundary at any position is randomly distributed. The test shows that the interjudge reliability is significant to the 0.05 level for 19 of the meetings, which seems to indicate that segment identification is a feasible task.2 4 Segmentation based on Lexical Cohesion Previous work on discourse segmentation of written texts indicates that lexical cohesion is a strong indicator of discourse structure. Lexical cohesion is a linguistic property that pertains to speech as well, and is a linguistic phenomenon that can also be exploited in our case: while our data does not have the same kind of syntactic and rhetorical structure as written text, we nonetheless expect that information from the written transcription alone should provide indications about topic boundaries. 
[Footnote 2: Four other meetings fell short of the significance test, while there was little agreement on the last two (p > 0.1).]

In this section, we describe our work on LCseg, a topic segmenter based on lexical cohesion that can handle both speech and text, but that is especially designed to generate the lexical cohesion feature used in the feature-based segmentation described in Section 5.

4.1 Algorithm Description

LCseg computes lexical chains, which are thought to mirror the discourse structure of the underlying text (Morris and Hirst, 1991). We ignore synonymy and other semantic relations, building a restricted model of lexical chains consisting of simple term repetitions, hypothesizing that major topic shifts are likely to occur where strong term repetitions start and end. While other relations between lexical items also work as cohesive factors (e.g., between a term and its super-ordinate), the work on linear topic segmentation reporting the most promising results accounts for term repetitions alone (Choi, 2000; Utiyama and Isahara, 2001).

The preprocessing steps of LCseg are common to many segmentation algorithms. The input document is first tokenized, non-content words are removed, and the remaining words are stemmed using an extension of Porter's stemming algorithm (Xu and Croft, 1998) that conflates stems using corpus statistics. Stemming allows our algorithm to more accurately relate terms that are semantically close.

The core algorithm of LCseg has two main parts: a method to identify and weight strong term repetitions using lexical chains, and a method to hypothesize topic boundaries given the knowledge of multiple, simultaneous chains of term repetitions. A term is any stemmed content word within the text. A lexical chain is constructed to consist of all repetitions ranging from the first to the last appearance of the term in the text. The chain is divided into subchains when there is a long hiatus of h consecutive sentences with no occurrence of the term, where h is determined experimentally. For each hiatus, a new division is made; we thus avoid creating weakly linked chains.

For all chains that have been identified, we use a weighting scheme that we believe is appropriate to the task of inducing the topical or sub-topical structure of text. The weighting scheme depends on two factors:
Frequency: chains containing more repeated terms receive a higher score.
Compactness: shorter chains receive a higher weight than longer ones. If two chains of different lengths contain the same number of terms, we assign a higher score to the shorter one. Our assumption is that the shorter one, being more compact, is a better indicator of lexical cohesion. [Footnote 3: The latter parameter might seem controversial at first, and one might assume that longer chains should receive a higher score. However, we point out that in a linear model of discourse, chains that almost span the entire text are barely indicative of any structure (assuming boundaries are only hypothesized where chains start and end).]

We apply a variant of a metric commonly used in information retrieval, TF.IDF (Salton and Buckley, 1988), to score term repetitions. If R_1 ... R_n is the set of all term repetitions collected in the text, t_1 ... t_n the corresponding terms, L_1 ... L_n their respective lengths, and L the length of the text, the adapted metric is expressed as follows, combining the frequency freq(t_i) of a term t_i and the compactness of its underlying chain:

score(R_i) = freq(t_i) · log(L / L_i)
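The chain-building and weighting steps can be sketched in a few lines of Python. This is our illustration of the scheme just described (with sentences given as lists of stemmed content terms), not the LCseg implementation itself; for brevity, single occurrences are treated as length-one chains and freq counts the number of sentences containing the term.

```python
import math
from collections import defaultdict

def build_chains(sentences, h=11):
    """Collect term repetitions and split them into subchains at hiatuses of h sentences.
    `sentences` is a list of lists of stemmed content terms."""
    positions = defaultdict(list)
    for idx, sent in enumerate(sentences):
        for term in set(sent):
            positions[term].append(idx)
    chains = []                                  # (term, first_sentence, last_sentence, freq)
    for term, occurrences in positions.items():
        start = prev = occurrences[0]
        freq = 1
        for idx in occurrences[1:]:
            if idx - prev > h:                   # long hiatus: close the current subchain
                chains.append((term, start, prev, freq))
                start, freq = idx, 0
            prev = idx
            freq += 1
        chains.append((term, start, prev, freq))
    return chains

def score_chain(chain, text_length):
    """score(R_i) = freq(t_i) * log(L / L_i): frequent, compact chains score highest;
    a chain spanning the whole text scores 0."""
    _term, start, end, freq = chain
    return freq * math.log(text_length / (end - start + 1))
```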
[Footnote 4: All lengths are expressed in number of sentences.]

In the second part of the algorithm, we combine information from all term repetitions to compute a lexical cohesion score at each sentence break (or, in the case of spoken conversations, each speaker turn break). This step of our algorithm is very similar in spirit to TextTiling (Hearst, 1994). The idea is to work with two adjacent analysis windows, each of fixed size k. For each sentence break, we determine a lexical cohesion function by computing the cosine similarity at the transition between the two windows. Instead of using word counts to compute similarity, we analyze the lexical chains that overlap with the two windows. The similarity between windows A and B is computed as

cosine(A, B) = Σ_i w_{i,A} · w_{i,B} / sqrt( (Σ_i w_{i,A}^2) · (Σ_i w_{i,B}^2) )

where w_{i,Γ} = score(R_i) if R_i overlaps window Γ ∈ {A, B}, and w_{i,Γ} = 0 otherwise. [Footnote 5: Normalizing anything in these windows has little effect, since the cosine similarity is scale invariant, that is, cosine(α·x_a, x_b) = cosine(x_a, x_b) for α > 0.]

The similarity computed at each sentence break produces a plot that shows how lexical cohesion changes over time; an example is shown in Figure 1.

Figure 1: Application of the LCseg algorithm on the concatenation of 16 WSJ stories. Numbers on the x-axis represent sentence indices, and the y-axis represents the lexical cohesion function. The representative example presented here is segmented by LCseg with an error of Pk = 15.79, while the average performance of the algorithm is Pk = 15.31 on the WSJ test corpus (unknown number of segments).

The lexical cohesion function is then smoothed using a moving average filter, and minima become potential segment boundaries. Then, in a manner quite similar to (Hearst, 1994), the algorithm determines for every local minimum m_i how sharp a change there is in the lexical cohesion function. The algorithm looks on each side of m_i for maxima of cohesion, and once it finds one on each side (l and r), it computes the hypothesized segmentation probability

p(m_i) = 1/2 · [LCF(l) + LCF(r) − 2 · LCF(m_i)]

where LCF(x) is the value of the lexical cohesion function at x. This score is intended to capture the sharpness of the change in lexical cohesion, and to give probabilities close to 1 for breaks like sentence 179 in Figure 1.

Finally, the algorithm selects the hypothesized boundaries with the highest computed probabilities. If the number of reference boundaries is unknown, the algorithm has to make a guess. It computes the mean and the variance of the hypothesized probabilities of all potential boundaries (local minima). As we can see in Figure 1, there are many local minima that do not correspond to actual boundaries. Thus, we ignore all potential boundaries with a probability lower than p_limit. For the remaining points, we compute a threshold using the average (µ) and standard deviation (σ) of the p(m_i) values, and each potential boundary m_i above the threshold µ − α·σ is hypothesized as a real boundary.

4.2 Evaluation

We evaluate LCseg against two state-of-the-art segmentation algorithms based on lexical cohesion (Choi, 2000; Utiyama and Isahara, 2001). We use the error metric Pk proposed by Beeferman et al. (1999) to evaluate segmentation accuracy. It computes the probability that two positions k units (e.g., sentences) apart are incorrectly determined as being either in different segments or in the same one.
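For concreteness, here is a straightforward implementation of Pk as we read Beeferman et al.'s definition; the evaluation code actually used in these experiments may differ in details such as how k is chosen.

```python
def p_k(reference, hypothesis, k=None):
    """Pk: the probability that two positions k units apart are classified
    inconsistently (same segment vs. different segments) by the hypothesis.
    Segmentations are given as one segment label per sentence, e.g. [0, 0, 1, 1, 2]."""
    n = len(reference)
    if k is None:
        # a common choice: half the average reference segment length
        k = max(1, round(n / (2 * len(set(reference)))))
    disagreements = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        disagreements += (same_ref != same_hyp)
    return disagreements / (n - k)

# e.g. p_k([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]) == 0.5 (k defaults to 2 here)
```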
Since it has been argued in (Pevzner and Hearst, 2002) that Pk has some weaknesses, we also include results according to the WindowDiff (WD) metric (which is described in the same work). A test corpus of concatenated6 texts extracted from the Brown corpus was built by Choi (2000) to evaluate several domain-independent segmentation algorithms. We reuse the same test corpus for our evaluation, in addition to two other test corpora we constructed to test how segmenters scale across genres and how they perform with texts with various 6Concatenated documents correspond to reference segments. number of segments.7 We designed two test corpora, each of 500 documents, using concatenated texts extracted from the TDT and WSJ corpora, ranging from 4 to 22 in number of segments. LCseg depends on several parameters. Parameter tuning was performed on three tuning corpora of one thousand texts each.8 We performed searches for the optimal settings of the four tunable parameters introduced above; the best performance was achieved with h = 11 (hiatus length for dividing a chain into parts), k = 2 (analysis window size), plimit = 0.1 and α = 1 2 (thresholding limits for the hypothesized boundaries). As shown in Table 1, our algorithm is significantly better than (Choi, 2000) (labeled C99) on all three test corpora, according to a one-sided ttest of the null hypothesis of equal mean at the 0.01 level. It is not clear whether our algorithm is better than (Utiyama and Isahara, 2001) (U00). When the number of segments is provided to the algorithms, our algorithm is significantly better than Utiyama’s on WSJ, better on Brown (but not significant), and significantly worse on TDT. When the number of boundaries is unknown, our algorithm is insignificantly worse on Brown, but significantly better on WSJ and TDT – the two corpora designed to have a varying number of segments per document. In the case of the Meeting corpus, none of the algorithms are significantly different than the others, due to the 7All texts in Choi’s test corpus have exactly 10 segments. We are concerned that the adjustments of any algorithm parameters might overfit this predefined number of segments. 8These texts are different from the ones used for evaluation. Brown corpus known unknown Pk WD Pk WD C99 11.19% 13.86% 12.07% 14.57% U00 8.77% 9.44% 9.76% 10.32% LCseg 8.69% 9.42% 10.49% 11.37% p-val. 0.42 0.48 0.027 0.0037 TDT corpus C99 9.37% 11.91% 10.18% 12.72% U00 4.70% 6.29% 8.70% 11.12% LCseg 6.15% 8.41% 6.95% 9.09% p-val. 1.1e-05 2.8e-07 4.5e-05 2.8e-05 WSJ corpus C99 19.61% 26.42% 22.32% 29.81% U00 15.18% 21.54% 17.71% 24.06% LCseg 12.21% 18.25% 15.31% 22.14% p-val. 1.4e-08 1.7e-08 2.6e-04 0.0063 Meeting corpus C99 33.79% 37.25% 47.42% 58.08% U00 31.99% 34.49% 37.39% 40.43% LCseg 26.37% 29.40% 31.91% 35.88% p-val. 0.026 0.14 0.14 0.23 Table 1: Comparison C99 and U00. The p-values in the table are the results of significance tests between U00 and LCseg. Bold-faced values are scores that are statistically significant. small test set size. In conclusion, LCseg has a performance comparable to state-of-the-art text segmentation algorithms, with the added advantage of computing a segmentation probability at each potential boundary. This information can be effectively used in the featurebased segmenter to account for lexical cohesion, as described in the next section. 
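Pulling together the scoring and thresholding steps of Section 4.1, the boundary-selection stage can be sketched as follows. This is our reconstruction under the tuned settings reported above (p_limit = 0.1, α = 1/2), not the released LCseg code.

```python
def hypothesize_boundaries(lcf, p_limit=0.1, alpha=0.5):
    """Score each local minimum of the smoothed lexical cohesion function lcf by its
    sharpness p(m) = 0.5 * (LCF(l) + LCF(r) - 2 * LCF(m)), then keep minima whose
    score exceeds the threshold mu - alpha * sigma (after discarding scores < p_limit)."""
    candidates = []
    for m in range(1, len(lcf) - 1):
        if lcf[m - 1] > lcf[m] <= lcf[m + 1]:                  # local minimum
            l = m
            while l > 0 and lcf[l - 1] >= lcf[l]:              # climb to the maximum on the left
                l -= 1
            r = m
            while r < len(lcf) - 1 and lcf[r + 1] >= lcf[r]:   # ... and on the right
                r += 1
            p = 0.5 * (lcf[l] + lcf[r] - 2 * lcf[m])
            if p >= p_limit:
                candidates.append((m, p))
    if not candidates:
        return []
    scores = [p for _, p in candidates]
    mu = sum(scores) / len(scores)
    sigma = (sum((p - mu) ** 2 for p in scores) / len(scores)) ** 0.5
    return [m for m, p in candidates if p > mu - alpha * sigma]
```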
5 Feature-based Segmentation In the previous section, we have concentrated exclusively on the consideration of content (through lexical cohesion) to determine the structure of texts, neglecting any influence of form. In this section, we explore formal devices that are indicative of topic shifts, and explain how we use these cues to build a segmenter targeting conversational speech. 5.1 Probabilistic Classifiers Topic segmentation is reduced here to a classification problem, where each utterance break Bi is either considered a topic boundary or not. We use statistical modeling techniques to build a classifier that uses local features (e.g. cue phrases, pauses) to determine if an utterance break corresponds to a topic boundary. We chose C4.5 and C4.5rules (Quinlan, 1993), two programs to induce classification rules in the form of decision trees and production rules (respectively). C4.5 generates an unpruned decision tree, which is then analyzed by C4.5rules to generate a set of pruned production rules (it tries to find the most useful subset of them). The advantage of pruned rules over decision trees is that they are easier to analyze, and allow combination of features in the same rule (feature interactions are explicit). The greedy nature of decision rule learning algorithms implies that a large set of features can lead to bad performance and generalization capability. It is desirable to remove redundant and irrelevant features, especially in our case since we have little data labeled with topic shifts; with a large set of features, we would risk overfitting the data. We tried to restrict ourselves to features whose inclusion is motivated by previous work (pauses, speech rate) and added features that are specific to multi-speaker speech (overlap, changes in speaker activity). 5.2 Features Cue phrases: previous work on segmentation has found that discourse particles like now, well provide valuable information about the structure of texts (Grosz and Sidner, 1986; Hirschberg and Litman, 1994; Passonneau and Litman, 1997). We analyzed the correlation between words in the meeting corpus and labeled topic boundaries, and automatically extracted utterance-initial cue phrases9 that are statistically correlated with boundaries. For every word in the meeting corpus, we counted the number of its occurrences near any topic boundary, and its number of appearances overall. Then, we performed χ2 significance tests (e.g. figure 2 for okay) under the null hypothesis that no correlation exists. We selected terms whose χ2 value rejected the hypothesis under a 0.01-level confidence (the rejection criterion is χ2 ≥6.635). Finally, induced cue phrases whose usage has never been described in other work were removed (marked with ∗in Table 3). Indeed, there is a risk that the automatically derived list of cue phrases could be too specific to the word usage in 9As in (Litman and Passonneau, 1995), we restrict ourselves to the first lexical item of any utterance, plus the second one if the first item is also a cue word. Near boundary Distant okay 64 740 Other 657 25896 Table 2: okay (χ2 = 89.11, df = 1, p < 0.01). okay 93.05 but 13.57 shall ∗ 27.34 so 11.65 anyway 23.95 and 10.99 we’re ∗ 17.67 should ∗ 10.21 alright 16.09 good ∗ 7.70 let’s ∗ 14.54 Table 3: Automatically selected cue phrases. these meetings. Silences: previous work has found that major shifts in topic typically show longer silences (Passonneau and Litman, 1993; Hirschberg and Nakatani, 1996). 
We investigated the presence of silences in meetings and their correlation with topic boundaries, and found it necessary to make a distinction between pauses and gaps (Levinson, 1983). A pause is a silence that is attributable to a given party, for example in the middle of an adjacency pair, or when a speaker pauses in the middle of her speech. Gaps are silences not attributable to any party, and last until a speaker takes the initiative of continuing the discussion. As an approximation of this distinction, we classified a silence that follows a question or in the middle of somebody’s speech as a pause, and any other silences as a gap. While the correlation between long silences and discourse boundaries seem to be less pervasive in meetings than in other speech corpora, we have noticed that some topic boundaries are preceded (within some window) by numerous gaps. However, we found little correlation between pauses and topic boundaries. Overlaps: we also analyzed the distribution of overlapping speech by counting the average overlap rate within some window. We noticed that, many times, the beginning of segments are characterized by having little overlapping speech. Speaker change: we sometimes noticed a correlation between topic boundaries and sudden changes in speaker activity. For example, in Figure 2, it is clear that the contribution of individual speakers to the discussion can greatly change from one discourse unit to the next. We try to capture significant changes in speakership by measuring the dissimilarity between two analysis windows. For each potential boundary, we count for each speaker i the number of words that are uttered before (Li) and after (Ri) the potential boundary (we limit our analysis to a window of fixed size). The two distributions are normalized to form two probability distributions l and r, and significant changes of speakership are detected by computing their Jensen-Shannon divergence: JS(l, r) = 1 2[D(l||avgl,r) + D(r||avgl,r)] where D(l||r) is the KL-divergence between the two distributions. Lexical cohesion: we also incorporated the lexical cohesion function computed by LCseg as a feature of the multi-source segmenter in a manner similar to the knowledge source combination performed by (Beeferman et al., 1999) and (T¨ur et al., 2001). Note that we use both the posterior estimate computed by LCseg and the raw lexical cohesion function as features of the system. 5.3 Features: Selection and Combination For every potential boundary Bi, the classifier analyzes features in a window surrounding Bi to decide whether it is a topic boundary or not. It is generally unclear what is the optimal window size and how features should be analyzed. Windows of various sizes can lead to different levels of prediction, and in some cases, it might be more appropriate to only extract features preceding or following Bi. We avoided making arbitrary choices of parameters; instead, for any feature F and a set F1, . . . , Fn of possible ways to measure the feature (different window sizes, different directions), we picked the Fi that is in isolation the best predictor of topic boundaries (among F1, . . . , Fn). Table 4 presents for each feature the analysis mode that is the most useful on the training data. 5.4 Evaluation We performed 25-fold cross-validation for evaluating the induced probabilistic classifier, computing the average of Pk and WD on the held-out meetings. Feature selection and decision rule learning 0 10 20 30 Figure 2: speaker activity in a meeting. 
Each row represent the speech activity of one speaker, utterance of words being represented as black. Vertical lines represent topic shifts. The x-axis represents time. Feature Tag Size (sec.) Side Cue phrases CUE 5 both Silence (gaps) SIL 30 left Overlap† OVR 30 right Speaker activity ACT 5 both Lexical cohesion LC 30 both †: the size of the window that was used to compute the JS-divergence was also determined automatically. Table 4: Parameters for feature analysis. is always performed on sets of 24 meetings, while the held-out data is used for testing. Table 5 gives some examples of the type of rules that are learned. The first rule states that if the value for the lexical cohesion (LC) function is low at the current sentence break, there is at least one CUE phrase, there is less than three seconds of silence to the left of the break,10 and a single speaker holds the floor for a longer period of time than usual to the right of the break, then we have a topic break. In general, we found that the derived rules show that lexical cohesion plays a stronger role than most other features in determining topic breaks. Nonetheless, the quantitative results summarized in table 6, which correspond to the average performance on the held-out sets, show that the integration of conversational features with the text-based segmenter outperforms either alone. 6 Conclusions We presented a domain-independent segmentation algorithm for multi-party conversation that integrates features based on content with features based on form. The learned combination of features results in a significant increase in accuracy over previous 10Note that rules are not always meaningful in isolation and it is likely that a subordinate rule in the tree to this one would do further tests on silence to determine if a topic boundary exists. Condition Decision Conf. LC ≤0.67, CUE ≥1, OVR ≤1.20, SIL ≤3.42 yes 94.1 LC ≤0.35, SIL > 3.42, OVR ≤4.55 yes 92.2 CUE ≥1, ACT > 0.1768, OVR ≤1.20, LC ≤0.67 yes 91.6 . . . default no Table 5: A selection of the most useful rules learned by C4.5rules along with their confidence levels. Times for OVR and SIL are expressed in seconds. Pk WD feature-based 23.00% 25.47% LCseg 31.91% 35.88% U00 37.39% 40.43% p-value 2.14e-04 3.30e-04 Table 6: Performance of the feature-based segmenter on the test data. approaches to segmentation when applied to meetings. Features based on form that are likely to indicate topic shifts are automatically extracted from speech. Content based features are computed by a segmentation algorithm that utilizes a metric of lexical cohesion and that performs as well as state-ofthe-art text-based segmentation techniques. It works both with written and spoken texts. The text-based segmentation approach alone, when applied to meetings, outperforms all other segmenters, although the difference is not statistically significant. In future work, we would like to investigate the effects of adding prosodic features, such as pitch ranges, to our segmenter, as well as the effect of using errorful speech recognition transcripts as opposed to manually transcribed utterances. An implementation of our lexical cohesion segmenter is freely available for educational or research purposes.11 Acknowledgments We are grateful to Julia Hirschberg, Dan Ellis, Elizabeth Shriberg, and Mari Ostendorf for their helpful advice. We thank our ICSI project partners for granting us access to the meeting corpus and for useful discussions. This work was funded under the NSF project Mapping Meetings (IIS-012196). 
References D. Beeferman, A. Berger, and J. Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1–3):177–210. F. Choi. 2000. Advances in domain independent linear text segmentation. In Proc. of NAACL’00. W. Cochran. 1950. The comparison of percentages in matched samples. Biometrika, 37:256–266. B. Grosz and J. Hirschberg. 1992. Some intonational characteristics of discourse structure. In Proc. of ICSLP-92, pages 429–432. B. Grosz and C. Sidner. 1986. Attention, intentions and the structure of discourse. Computational Linguistics, 12(3). M. Hajime, H. Takeo, and O. Manabu. 1998. Text segmentation with multiple surface linguistic cues. In COLING-ACL, pages 881–885. M. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proc. of the ACL. J. Hirschberg and D. Litman. 1994. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501–530. J. Hirschberg and C. Nakatani. 1996. A prosodic analysis of discourse segments in direction-giving monologues. In Proc. of the ACL. J. Hirschberg and C. Nakatani. 1998. Acoustic indicators of topic segmentation. In Proc. of ICSLP. A. Janin, D. Baron, J. Edwards, D. Ellis, D. Gelbart, N. Morgan, B. Peskin, T. Pfau, E. Shriberg, A. Stolcke, and C. Wooters. 2003. The ICSI meeting corpus. In Proc. of ICASSP-03, Hong Kong (to appear). 11http://www.cs.columbia.edu/˜galley/research.html M.-Y. Kan, J. Klavans, and K. McKeown. 1998. Linear segmentation and segment significance. In Proc. 6th Workshop on Very Large Corpora (WVLC-98). H. Kozima. 1993. Text segmentation based on similarity between words. In Proc. of the ACL. S. Levinson. 1983. Pragmatics. Cambridge University Press. D. Litman and R. Passonneau. 1995. Combining multiple knowledge sources for discourse segmentation. In Proc. of the ACL. J. Morris and G. Hirst. 1991. Lexcial cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21–48. C. Nakatani, J. Hirschberg, and B. Grosz. 1995. Discourse structure in spoken language: Studies on speech corpora. In AAAI-95 Symposium on Empirical Methods in Discourse Interpretation. R. Passonneau and D. Litman. 1993. Intention-based segmentation: Human reliability and correlation with linguistic cues. In Proc. of the ACL. R. Passonneau and D. Litman. 1997. Discourse segmentation by human and automated means. Computational Linguistics, 23(1):103–139. L. Pevzner and M. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28 (1):19–36. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Machine Learning. Morgan Kaufmann. J. Reynar. 1994. An automatic method of finding topic boundaries. In Proc. of the ACL. J. Reynar. 1999. Statistical models for topic segmentation. In Proc. of the ACL. G. Salton and C. Buckley. 1988. Term weighting approaches in automatic text retrieval. Information Processing and Management, 24(5):513–523. G. T¨ur, D. Hakkani-T¨ur, A. Stolcke, and E. Shriberg. 2001. Integrating prosodic and lexical cues for automatic topic segmentation. Computational Linguistics, 27(1):31–57. M. Utiyama and H. Isahara. 2001. A statistical model for domain-independent text segmentation. In Proc. of the ACL. J. Xu and B. Croft. 1998. Corpus-based stemming using cooccurrence of word variants. ACM Transactions on Information Systems, 16(1):61–81.
Syntactic Features and Word Similarity for Supervised Metonymy Resolution Malvina Nissim ICCS, School of Informatics University of Edinburgh [email protected] Katja Markert ICCS, School of Informatics University of Edinburgh and School of Computing University of Leeds [email protected] Abstract We present a supervised machine learning algorithm for metonymy resolution, which exploits the similarity between examples of conventional metonymy. We show that syntactic head-modifier relations are a high precision feature for metonymy recognition but suffer from data sparseness. We partially overcome this problem by integrating a thesaurus and introducing simpler grammatical features, thereby preserving precision and increasing recall. Our algorithm generalises over two levels of contextual similarity. Resulting inferences exceed the complexity of inferences undertaken in word sense disambiguation. We also compare automatic and manual methods for syntactic feature extraction. 1 Introduction Metonymy is a figure of speech, in which one expression is used to refer to the standard referent of a related one (Lakoff and Johnson, 1980). In (1),1 “seat 19” refers to the person occupying seat 19. (1) Ask seat 19 whether he wants to swap The importance of resolving metonymies has been shown for a variety of NLP tasks, e.g., machine translation (Kamei and Wakao, 1992), question answering (Stallard, 1993) and anaphora resolution (Harabagiu, 1998; Markert and Hahn, 2002). 1(1) was actually uttered by a flight attendant on a plane. In order to recognise and interpret the metonymy in (1), a large amount of knowledge and contextual inference is necessary (e.g. seats cannot be questioned, people occupy seats, people can be questioned). Metonymic readings are also potentially open-ended (Nunberg, 1978), so that developing a machine learning algorithm based on previous examples does not seem feasible. However, it has long been recognised that many metonymic readings are actually quite regular (Lakoff and Johnson, 1980; Nunberg, 1995).2 In (2), “Pakistan”, the name of a location, refers to one of its national sports teams.3 (2) Pakistan had won the World Cup Similar examples can be regularly found for many other location names (see (3) and (4)). (3) England won the World Cup (4) Scotland lost in the semi-final In contrast to (1), the regularity of these examples can be exploited by a supervised machine learning algorithm, although this method is not pursued in standard approaches to regular polysemy and metonymy (with the exception of our own previous work in (Markert and Nissim, 2002a)). Such an algorithm needs to infer from examples like (2) (when labelled as a metonymy) that “England” and “Scotland” in (3) and (4) are also metonymic. In order to 2Due to its regularity, conventional metonymy is also known as regular polysemy (Copestake and Briscoe, 1995). We use the term “metonymy” to encompass both conventional and unconventional readings. 3All following examples are from the British National Corpus (BNC, http://info.ox.ac.uk/bnc). Scotland subj-of subj-of win lose context reduction Pakistan Scotland-subj-of-lose Pakistan-subj-of-win similarity semantic class head similarity role similarity Pakistan had won the World Cup lost in the semi-final Scotland Figure 1: Context reduction and similarity levels draw this inference, two levels of similarity need to be taken into account. One concerns the similarity of the words to be recognised as metonymic or literal (Possibly Metonymic Words, PMWs). 
In the above examples, the PMWs are “Pakistan”, “England” and “Scotland”. The other level pertains to the similarity between the PMW’s contexts (“<subject> (had) won the World Cup” and “<subject> lost in the semi-final”). In this paper, we show how a machine learning algorithm can exploit both similarities. Our corpus study on the semantic class of locations confirms that regular metonymic patterns, e.g., using a place name for any of its sports teams, cover most metonymies, whereas unconventional metonymies like (1) are very rare (Section 2). Thus, we can recast metonymy resolution as a classification task operating on semantic classes (Section 3). In Section 4, we restrict the classifier’s features to head-modifier relations involving the PMW. In both (2) and (3), the context is reduced to subj-of-win. This allows the inference from (2) to (3), as they have the same feature value. Although the remaining context is discarded, this feature achieves high precision. In Section 5, we generalize context similarity to draw inferences from (2) or (3) to (4). We exploit both the similarity of the heads in the grammatical relation (e.g., “win” and “lose”) and that of the grammatical role (e.g. subject). Figure 1 illustrates context reduction and similarity levels. We evaluate the impact of automatic extraction of head-modifier relations in Section 6. Finally, we discuss related work and our contributions. 2 Corpus Study We summarize (Markert and Nissim, 2002b)’s annotation scheme for location names and present an annotated corpus of occurrences of country names. 2.1 Annotation Scheme for Location Names We identify literal, metonymic, and mixed readings. The literal reading comprises a locative (5) and a political entity interpretation (6). (5) coral coast of Papua New Guinea (6) Britain’s current account deficit We distinguish the following metonymic patterns (see also (Lakoff and Johnson, 1980; Fass, 1997; Stern, 1931)). In a place-for-people pattern, a place stands for any persons/organisations associated with it, e.g., for sports teams in (2), (3), and (4), and for the government in (7).4 (7) a cardinal element in Iran’s strategy when Iranian naval craft [...] bombarded [...] In a place-for-event pattern, a location name refers to an event that occurred there (e.g., using the word Vietnam for the Vietnam war). In a place-for-product pattern a place stands for a product manufactured there (e.g., the word Bordeaux referring to the local wine). The category othermet covers unconventional metonymies, as (1), and is only used if none of the other categories fits (Markert and Nissim, 2002b). We also found examples where two predicates are involved, each triggering a different reading. (8) they arrived in Nigeria, hitherto a leading critic of the South African regime In (8), both a literal (triggered by “arriving in”) and a place-for-people reading (triggered by “leading critic”) are invoked. We introduced the category mixed to deal with these cases. 2.2 Annotation Results Using Gsearch (Corley et al., 2001), we randomly extracted 1000 occurrences of country names from the BNC, allowing any country name and its variants listed in the CIA factbook5 or WordNet (Fellbaum, 4As the explicit referent is often underspecified, we introduce place-for-people as a supertype category and we evaluate our system on supertype classification in this paper. In the annotation, we further specify the different groups of people referred to, whenever possible (Markert and Nissim, 2002b). 
1998) to occur. [Footnote 5: http://www.cia.gov/cia/publications/factbook/] Each country name is surrounded by three sentences of context. The 1000 examples of our corpus have been independently annotated by two computational linguists, who are the authors of this paper. The annotation can be considered reliable (Krippendorff, 1980) with 95% agreement and a kappa (Carletta, 1996) of .88. Our corpus for testing and training the algorithm includes only the examples which both annotators could agree on and which were not marked as noise (e.g., homonyms, as "Professor Greenland"), for a total of 925. Table 1 reports the reading distribution.

Table 1: Distribution of readings in our corpus
reading             freq      %
literal              737   79.7
place-for-people     161   17.4
place-for-event        3     .3
place-for-product      0     .0
mixed                 15    1.6
othermet               9    1.0
total non-literal    188   20.3
total                925  100.0

3 Metonymy Resolution as a Classification Task

The corpus distribution confirms that metonymies that do not follow established metonymic patterns (othermet) are very rare. This seems to be the case for other kinds of metonymies, too (Verspoor, 1997). We can therefore reformulate metonymy resolution as a classification task between the literal reading and a fixed set of metonymic patterns that can be identified in advance for particular semantic classes. This approach makes the task comparable to classic word sense disambiguation (WSD), which is also concerned with distinguishing between possible word senses/interpretations. However, whereas a classic (supervised) WSD algorithm is trained on a set of labelled instances of one particular word and assigns word senses to new test instances of the same word, (supervised) metonymy recognition can be trained on a set of labelled instances of different words of one semantic class and assign literal readings and metonymic patterns to new test instances of possibly different words of the same semantic class. This class-based approach enables one to, for example, infer the reading of (3) from that of (2).

We use a decision list (DL) classifier. All features encountered in the training data are ranked in the DL (best evidence first) according to the following log-likelihood ratio (Yarowsky, 1995):

log( Pr(reading_i | feature_k) / Σ_{j≠i} Pr(reading_j | feature_k) )

We estimated probabilities via maximum likelihood, adopting a simple smoothing method (Martinez and Agirre, 2000): 0.1 is added to both the denominator and numerator. The target readings to be distinguished are literal, place-for-people, place-for-event, place-for-product, othermet and mixed. All our algorithms are tested on our annotated corpus, employing 10-fold cross-validation. We evaluate accuracy and coverage:

Acc = (# correct decisions made) / (# decisions made)
Cov = (# decisions made) / (# test data)

We also use a backing-off strategy to the most frequent reading (literal) for the cases where no decision can be made. We report the results as accuracy backoff (Accb); coverage backoff is always 1. We are also interested in the algorithm's performance in recognising non-literal readings. Therefore, we compute precision (P), recall (R), and F-measure (F), where A is the number of non-literal readings correctly identified as non-literal (true positives) and B the number of literal readings that are incorrectly identified as non-literal (false positives):

P = A / (A + B)
R = A / (# non-literal examples in the test data)
F = 2PR / (R + P)

The baseline used for comparison is the assignment of the most frequent reading literal.
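A minimal sketch of this decision-list training and application, assuming each training example is reduced to a (feature value, reading) pair; the exact bookkeeping of the smoothing in the original system may differ.

```python
import math
from collections import defaultdict

def train_decision_list(examples, smoothing=0.1):
    """Rank (feature, reading) rules, best evidence first, by the smoothed
    log-likelihood ratio log(Pr(reading|feature) / Pr(other readings|feature));
    0.1 is added to numerator and denominator as in the text."""
    counts = defaultdict(lambda: defaultdict(int))       # feature -> reading -> count
    for feature, reading in examples:                    # e.g. ("subj-of-win", "place-for-people")
        counts[feature][reading] += 1
    rules = []
    for feature, by_reading in counts.items():
        total = sum(by_reading.values())
        for reading, c in by_reading.items():
            p = c / total
            llr = math.log((p + smoothing) / ((1.0 - p) + smoothing))
            rules.append((llr, feature, reading))
    return sorted(rules, reverse=True)

def classify(decision_list, feature, default="literal"):
    """Apply the best-ranked rule whose feature matches; otherwise back off to the
    most frequent reading (literal)."""
    for _llr, f, reading in decision_list:
        if f == feature:
            return reading
    return default
```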
4 Context Reduction We show that reducing the context to head-modifier relations involving the Possibly Metonymic Word achieves high precision metonymy recognition.6 6In (Markert and Nissim, 2002a), we also considered local and topical cooccurrences as contextual features. They constantly achieved lower precision than grammatical features. Table 2: Example feature values for role-of-head role-of-head (r-of-h) example subj-of-win England won the World Cup (place-for-people) subjp-of-govern Britain has been governed by . . . (literal) dobj-of-visit the Apostle had visited Spain (literal) gen-of-strategy in Iran’s strategy . . . (place-for-people) premod-of-veteran a Vietnam veteran from Rhode Island (place-for-event) ppmod-of-with its border with Hungary (literal) Table 3: Role distribution role freq #non-lit subj 92 65 subjp 6 4 dobj 28 12 gen 93 20 premod 94 13 ppmod 522 57 other 90 17 total 925 188 We represent each example in our corpus by a single feature role-of-head, expressing the grammatical role of the PMW (limited to (active) subject, passive subject, direct object, modifier in a prenominal genitive, other nominal premodifier, dependent in a prepositional phrase) and its lemmatised lexical head within a dependency grammar framework.7 Table 2 shows example values and Table 3 the role distribution in our corpus. We trained and tested our algorithm with this feature (hmr).8 Results for hmr are reported in the first line of Table 5. The reasonably high precision (74.5%) and accuracy (90.2%) indicate that reducing the context to a head-modifier feature does not cause loss of crucial information in most cases. Low recall is mainly due to low coverage (see Problem 2 below). We identified two main problems. Problem 1. The feature can be too simplistic, so that decisions based on the head-modifier relation can assign the wrong reading in the following cases: • “Bad” heads: Some lexical heads are semantically empty, thus failing to provide strong evidence for any reading and lowering both recall and precision. Bad predictors are the verbs “to have” and “to be” and some prepositions such as “with”, which can be used with metonymic (talk with Hungary) and literal (border with Hungary) readings. This problem is more serious for function than for content word heads: precision on the set of subjects and objects is 81.8%, but only 73.3% on PPs. • “Bad” relations: The premod relation suffers from noun-noun compound ambiguity. US op7We consider only one link per PMW, although cases like (8) would benefit from including all links the PMW participates in. 8The feature values were manually annotated for the following experiments, adapting the guidelines in (Poesio, 2000). The effect of automatic feature extraction is described in Section 6. eration can refer to an operation in the US (literal) or by the US (metonymic). • Other cases: Very rarely neglecting the remaining context leads to errors, even for “good” lexical heads and relations. Inferring from the metonymy in (4) that “Germany” in “Germany lost a fifth of its territory” is also metonymic, e.g., is wrong and lowers precision. However, wrong assignments (based on headmodifier relations) do not constitute a major problem as accuracy is very high (90.2%). Problem 2. The algorithm is often unable to make any decision that is based on the head-modifier relation. This is by far the more frequent problem, which we adress in the remainder of the paper. 
The feature role-of-head accounts for the similarity between (2) and (3) only, as classification of a test instance with a particular feature value relies on having seen exactly the same feature value in the training data. Therefore, we have not tackled the inference from (2) or (3) to (4). This problem manifests itself in data sparseness and low recall and coverage, as many heads are encountered only once in the corpus. As hmr’s coverage is only 63.1%, backoff to a literal reading is required in 36.9% of the cases. 5 Generalising Context Similarity In order to draw the more complex inference from (2) or (3) to (4) we need to generalise context similarity. We relax the identity constraint of the original algorithm (the same role-of-head value of the test instance must be found in the DL), exploiting two similarity levels. Firstly, we allow to draw inferences over similar values of lexical heads (e.g. from subj-of-win to subj-of-lose), rather than over identical ones only. Secondly, we allow to discard the Table 4: Example thesaurus entries lose[V]: win1 0.216, gain2 0.209, have3 0.207, ... attitude[N]:stance1 0.181, behavior2 0.18, ..., strategy17 0.128 lexical head and generalise over the PMW’s grammatical role (e.g. subject). These generalisations allow us to double recall without sacrificing precision or increasing the size of the training set. 5.1 Relaxing Lexical Heads We regard two feature values r-of-h and r-of-h ′ as similar if h and h ′ are similar. In order to capture the similarity between h and h ′ we integrate a thesaurus (Lin, 1998) in our algorithm’s testing phase. In Lin’s thesaurus, similarity between words is determined by their distribution in dependency relations in a newswire corpus. For a content word h (e.g., “lose”) of a specific part-of-speech a set of similar words Σh of the same part-of-speech is given. The set members are ranked in decreasing order by a similarity score. Table 4 reports example entries.9 Our modified algorithm (relax I) is as follows: 1. train DL with role-of-head as in hmr; for each test instance observe the following procedure (r-of-h indicates the feature value of the test instance); 2. if r-of-h is found in the DL, apply the corresponding rule and stop; 2′ otherwise choose a number n ≥1 and set i = 1; (a) extract the ith most similar word hi to h from the thesaurus; (b) if i > n or the similarity score of hi < 0.10, assign no reading and stop; (b’) otherwise: if r-of-hi is found in the DL, apply corresponding rule and stop; if r-of-hi is not found in the DL, increase i by 1 and go to (a); The examples already covered by hmr are classified in exactly the same way by relax I (see Step 2). Let us therefore assume we encounter the test instance (4), its feature value subj-of-lose has not been seen in the training data (so that Step 2 fails and Step 2 ′ has to be applied) and subj-of-win is in the DL. For all n ≥1, relax I will use the rule for subj-of-win to assign a reading to “Scotland” in (4) as “win” is the most similar word to “lose” in the thesaurus (see Table 4). In this case (2b’) is only 9In the original thesaurus, each Σh is subdivided into clusters. We do not take these divisions into account. 0 10 20 30 40 50 Thesaurus Iterations (n) 0.1 0.1 0.2 0.2 0.3 0.3 0.4 0.4 0.5 0.5 0.6 0.6 0.7 0.7 0.8 0.8 0.9 0.9 Results Precision Recall F-Measure Figure 2: Results for relax I applied once as already the first iteration over the thesaurus finds a word h1 with r-of-h1 in the DL. 
The classification of “Turkey” with feature value gen-of-attitude in (9) required 17 iterations to find a word h17 (“strategy”; see Example (7)) similar to “attitude”, with r-of-h17 (gen-of-strategy) in the DL. (9) To say that this sums up Turkey’s attitude as a whole would nevertheless be untrue Precision, recall and F-measure for n ∈ {1, ..., 10, 15, 20, 25, 30, 40, 50} are visualised in Figure 2. Both precision and recall increase with n. Recall more than doubles from 18.6% in hmr to 41% and precision increases from 74.5% in hmr to 80.2%, yielding an increase in F-measure from 29.8% to 54.2% (n = 50). Coverage rises to 78.9% and accuracy backoff to 85.1% (Table 5). Whereas the increase in coverage and recall is quite intuitive, the high precision achieved by relax I requires further explanation. Let S be the set of examples that relax I covers. It consists of two subsets: S1 is the subset already covered by hmr and its treatment does not change in relax I, yielding the same precision. S2 is the set of examples that relax I covers in addition to hmr. The examples in S2 consist of cases with highly predictive content word heads as (a) function words are not included in the thesaurus and (b) unpredictive content word heads like “have” or “be” are very frequent and normally already covered by hmr (they are therefore members of S1). Precision on S2 is very high (84%) and raises the overall precision on the set S. Cases that relax I does not cover are mainly due to (a) missing thesaurus entries (e.g., many proper Table 5: Results summary for manual annotation. For relax I and combination we report best results (50 thesaurus iterations). algorithm Acc Cov Accb P R F hmr .902 .631 .817 .745 .186 .298 relax I .877 .789 .851 .802 .410 .542 relax II .865 .903 .859 .813 .441 .572 combination .894 .797 .870 .814 .510 .627 baseline .797 1.00 .797 n/a .000 n/a names or alternative spelling), (b) the small number of training instances for some grammatical roles (e.g. dobj), so that even after 50 thesaurus iterations no similar role-of-head value could be found that is covered in the DL, or (c) grammatical roles that are not covered (other in Table 3). 5.2 Discarding Lexical Heads Another way of capturing the similarity between (3) and (4), or (7) and (9) is to ignore lexical heads and generalise over the grammatical role (role) of the PMW (with the feature values as in Table 3: subj, subjp, dobj, gen, premod, ppmod). We therefore developed the algorithm relax II. 1. train decision lists: (a) DL1 with role-of-head as in hmr (b) DL2 with role; for each test instance observe the following procedure (rof-h and r are the feature values of the test instance); 2. if r-of-h is found in the DL1, apply the corresponding rule and stop; 2’ otherwise, if r is found in DL2, apply the corresponding rule. Let us assume we encounter the test instance (4), subj-of-lose is not in DL1 (so that Step 2 fails and Step 2 ′ has to be applied) and subj is in DL2. The algorithm relax II will assign a place-forpeople reading to “Scotland”, as most subjects in our corpus are metonymic (see Table 3). Generalising over the grammatical role outperforms hmr, achieving 81.3% precision, 44.1% recall, and 57.2% F-measure (see Table 5). The algorithm relax II also yields fewer false negatives than relax I (and therefore higher recall) since all subjects not covered in DL1 are assigned a metonymic reading, which is not true for relax I. 5.3 Combining Generalisations There are several ways of combining the algorithms we introduced. 
In our experiments, the most successful one exploits the facts that relax II performs better than relax I on subjects and that relax I performs better on the other roles. Therefore the algorithm combination uses relax II if the test instance is a subject, and relax I otherwise. This yields the best results so far, with 87% accuracy backoff and 62.7% F-measure (Table 5). 6 Influence of Parsing The results obtained by training and testing our classifier with manually annotated grammatical relations are the upper bound of what can be achieved by using these features. To evaluate the influence parsing has on the results, we used the RASP toolkit (Briscoe and Carroll, 2002) that includes a pipeline of tokenisation, tagging and state-of-the-art statistical parsing, allowing multiple word tags. The toolkit also maps parse trees to representations of grammatical relations, which we in turn could map in a straightforward way to our role categories. RASP produces at least partial parses for 96% of our examples. However, some of these parses do not assign any role of our roleset to the PMW — only 76.9% of the PMWs are assigned such a role by RASP (in contrast to 90.2% in the manual annotation; see Table 3). RASP recognises PMW subjects with 79% precision and 81% recall. For PMW direct objects, precision is 60% and recall 86%.10 We reproduced all experiments using the automatically extracted relations. Although the relative performance of the algorithms remains mostly unchanged, most of the resulting F-measures are more than 10% lower than for hand annotated roles (Table 6). This is in line with results in (Gildea and Palmer, 2002), who compare the effect of manual and automatic parsing on semantic predicateargument recognition. 7 Related Work Previous Approaches to Metonymy Recognition. Our approach is the first machine learning algorithm to metonymy recognition, building on our previous 10We did not evaluate RASP’s performance on relations that do not involve the PMW. Table 6: Results summary for the different algorithms using RASP. For relax I and combination we report best results (50 thesaurus iterations). algorithm Acc Cov Accb P R F hmr .884 .514 .812 .674 .154 .251 relax I .841 .666 .821 .619 .319 .421 relax II .820 .769 .823 .621 .340 .439 combination .850 .672 .830 .640 .388 .483 baseline .797 1.00 .797 n/a .000 n/a work (Markert and Nissim, 2002a). The current approach expands on it by including a larger number of grammatical relations, thesaurus integration, and an assessment of the influence of parsing. Best Fmeasure for manual annotated roles increased from 46.7% to 62.7% on the same dataset. Most other traditional approaches rely on handcrafted knowledge bases or lexica and use violations of hand-modelled selectional restrictions (plus sometimes syntactic violations) for metonymy recognition (Pustejovsky, 1995; Hobbs et al., 1993; Fass, 1997; Copestake and Briscoe, 1995; Stallard, 1993).11 In these approaches, selectional restrictions (SRs) are not seen as preferences but as absolute constraints. If and only if such an absolute constraint is violated, a non-literal reading is proposed. Our system, instead, does not have any a priori knowledge of semantic predicate-argument restrictions. Rather, it refers to previously seen training examples in head-modifier relations and their labelled senses and computes the likelihood of each sense using this distribution. This is an advantage as our algorithm also resolved metonymies without SR violations in our experiments. 
An empirical comparison between our approach in (Markert and Nissim, 2002a)12 and an SRs violation approach showed that our approach performed better. In contrast to previous approaches (Fass, 1997; Hobbs et al., 1993; Copestake and Briscoe, 1995; Pustejovsky, 1995; Verspoor, 1996; Markert and Hahn, 2002; Harabagiu, 1998; Stallard, 1993), we use a corpus reliably annotated for metonymy for evaluation, moving the field towards more objective 11(Markert and Hahn, 2002) and (Harabagiu, 1998) enhance this with anaphoric information. (Briscoe and Copestake, 1999) propose using frequency information besides syntactic/semantic restrictions, but use only a priori sense frequencies without contextual features. 12Note that our current approach even outperforms (Markert and Nissim, 2002a). evaluation procedures. Word Sense Disambiguation. We compared our approach to supervised WSD in Section 3, stressing word-to-word vs. class-to-class inference. This allows for a level of abstraction not present in standard supervised WSD. We can infer readings for words that have not been seen in the training data before, allow an easy treatment of rare words that undergo regular sense alternations and do not have to annotate and train separately for every individual word to treat regular sense distinctions.13 By exploiting additional similarity levels and integrating a thesaurus we further generalise the kind of inferences we can make and limit the size of annotated training data: as our sampling frame contains 553 different names, an annotated data set of 925 samples is quite small. These generalisations over context and collocates are also applicable to standard WSD and can supplement those achieved e.g., by subcategorisation frames (Martinez et al., 2002). Our approach to word similarity to overcome data sparseness is perhaps most similar to (Karov and Edelman, 1998). However, they mainly focus on the computation of similarity measures from the training data. We instead use an off-the-shelf resource without adding much computational complexity and achieve a considerable improvement in our results. 8 Conclusions We presented a supervised classification algorithm for metonymy recognition, which exploits the similarity between examples of conventional metonymy, operates on semantic classes and thereby enables complex inferences from training to test examples. We showed that syntactic head-modifier relations are a high precision feature for metonymy recognition. However, basing inferences only on the lexical heads seen in the training data leads to data sparseness due to the large number of different lexical heads encountered in natural language texts. In order to overcome this problem we have integrated a thesaurus that allows us to draw inferences be13Incorporating knowledge about particular PMWs (e.g., as a prior) will probably improve performance, as word idiosyncracies — which can still exist even when treating regular sense distinctions — could be accounted for. In addition, knowledge about the individual word is necessary to assign its original semantic class. tween examples with similar but not identical lexical heads. We also explored the use of simpler grammatical role features that allow further generalisations. The results show a substantial increase in precision, recall and F-measure. In the future, we will experiment with combining grammatical features and local/topical cooccurrences. 
The use of semantic classes and lexical head similarity generalises over two levels of contextual similarity, which exceeds the complexity of inferences undertaken in standard supervised word sense disambiguation. Acknowledgements. The research reported in this paper was supported by ESRC Grant R000239444. Katja Markert is funded by an Emmy Noether Fellowship of the Deutsche Forschungsgemeinschaft (DFG). We thank three anonymous reviewers for their comments and suggestions. References E. Briscoe and J. Carroll. 2002. Robust accurate statistical annotation of general text. In Proc. of LREC, 2002, pages 1499–1504. T. Briscoe and A. Copestake. 1999. Lexical rules in constraint-based grammar. Computational Linguistics, 25(4):487–526. J. Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249–254. A. Copestake and T. Briscoe. 1995. Semi-productive polysemy and sense extension. Journal of Semantics, 12:15–67. S. Corley, M. Corley, F. Keller, M. Crocker, and S. Trewin. 2001. Finding syntactic structure in unparsed corpora: The Gsearch corpus query system. Computers and the Humanities, 35(2):81–94. D. Fass. 1997. Processing Metaphor and Metonymy. Ablex, Stanford, CA. C. Fellbaum, ed. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, Mass. D. Gildea and M. Palmer. 2002. The necessity of parsing for predicate argument recognition. In Proc. of ACL, 2002, pages 239–246. S. Harabagiu. 1998. Deriving metonymic coercions from WordNet. In Workshop on the Usage of WordNet in Natural Language Processing Systems, COLINGACL, 1998, pages 142–148. J. R. Hobbs, M. E. Stickel, D. E. Appelt, and P. Martin. 1993. Interpretation as abduction. Artificial Intelligence, 63:69–142. S. Kamei and T. Wakao. 1992. Metonymy: Reassessment, survey of acceptability and its treatment in machine translation systems. In Proc. of ACL, 1992, pages 309–311. Y. Karov and S. Edelman. 1998. Similarity-based word sense disambiguation. Computational Linguistics, 24(1):41-59. K. Krippendorff. 1980. Content Analysis: An Introduction to Its Methodology. Sage Publications. G. Lakoff and M. Johnson. 1980. Metaphors We Live By. Chicago University Press, Chicago, Ill. D. Lin. 1998. An information-theoretic definition of similarity. In Proc. of International Conference on Machine Learning, Madison, Wisconsin. K. Markert and U. Hahn. 2002. Understanding metonymies in discourse. Artificial Intelligence, 135(1/2):145–198. K. Markert and M. Nissim. 2002a. Metonymy resolution as a classification task. In Proc. of EMNLP, 2002, pages 204–213. Katja Markert and Malvina Nissim. 2002b. Towards a corpus annotated for metonymies: the case of location names. In Proc. of LREC, 2002, pages 1385–1392. D. Martinez and E. Agirre. 2000. One sense per collocation and genre/topic variations. In Proc. of EMNLP, 2000. D. Martinez, E. Agirre, and L. Marquez. 2002. Syntactic features for high precision word sense disambiguation. In Proc. of COLING, 2002. G. Nunberg. 1978. The Pragmatics of Reference. Ph.D. thesis, City University of New York, New York. G. Nunberg. 1995. Transfers of meaning. Journal of Semantics, 12:109–132. M. Poesio, 2000. The GNOME Annotation Scheme Manual. University of Edinburgh, 4th version. Available from http://www.hcrc.ed.ac.uk/˜gnome. J. Pustejovsky. 1995. The Generative Lexicon. MIT Press, Cambridge, Mass. D. Stallard. 1993. Two kinds of metonymy. In Proc. of ACL, 1993, pages 87–94. G. Stern. 1931. Meaning and Change of Meaning. 
Göteborg: Wettergren & Kerbers Förlag. C. Verspoor. 1996. Lexical limits on the influence of context. In Proc. of CogSci, 1996, pages 116–120. C. Verspoor. 1997. Conventionality-governed logical metonymy. In H. Bunt et al., editors, Proc. of IWCS-2, 1997, pages 300–312. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of ACL, 1995, pages 189–196.
2003
8
Clustering Polysemic Subcategorization Frame Distributions Semantically Anna Korhonen∗ Computer Laboratory University of Cambridge 15 JJ Thomson Avenue Cambridge CB3 0FD, UK [email protected] Yuval Krymolowski Division of Informatics University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW Scotland, UK [email protected] Zvika Marx Interdisciplinary Center for Neural Computation, The Hebrew University Jerusalem, Israel [email protected] Abstract Previous research has demonstrated the utility of clustering in inducing semantic verb classes from undisambiguated corpus data. We describe a new approach which involves clustering subcategorization frame (SCF) distributions using the Information Bottleneck and nearest neighbour methods. In contrast to previous work, we particularly focus on clustering polysemic verbs. A novel evaluation scheme is proposed which accounts for the effect of polysemy on the clusters, offering us a good insight into the potential and limitations of semantically classifying undisambiguated SCF data. 1 Introduction Classifications which aim to capture the close relation between the syntax and semantics of verbs have attracted a considerable research interest in both linguistics and computational linguistics (e.g. (Jackendoff, 1990; Levin, 1993; Pinker, 1989; Dang et al., 1998; Dorr, 1997; Merlo and Stevenson, 2001)). While such classifications may not provide a means for full semantic inferencing, they can capture generalizations over a range of linguistic properties, and can therefore be used as a means of reducing redundancy in the lexicon and for filling gaps in lexical knowledge. ∗This work was partly supported by UK EPSRC project GR/N36462/93: ‘Robust Accurate Statistical Parsing (RASP)’. Verb classifications have, in fact, been used to support many natural language processing (NLP) tasks, such as language generation, machine translation (Dorr, 1997), document classification (Klavans and Kan, 1998), word sense disambiguation (Dorr and Jones, 1996) and subcategorization acquisition (Korhonen, 2002). One attractive property of these classifications is that they make it possible, to a certain extent, to infer the semantics of a verb on the basis of its syntactic behaviour. In recent years several attempts have been made to automatically induce semantic verb classes from (mainly) syntactic information in corpus data (Joanis, 2002; Merlo et al., 2002; Schulte im Walde and Brew, 2002). In this paper, we focus on the particular task of classifying subcategorization frame (SCF) distributions in a semantically motivated manner. Previous research has demonstrated that clustering can be useful in inferring Levin-style semantic classes (Levin, 1993) from both English and German verb subcategorization information (Brew and Schulte im Walde, 2002; Schulte im Walde, 2000; Schulte im Walde and Brew, 2002). We propose a novel approach, which involves: (i) obtaining SCF frequency information from a lexicon extracted automatically using the comprehensive system of Briscoe and Carroll (1997) and (ii) applying a clustering mechanism to this information. We use clustering methods that process raw distributional data directly, avoiding complex preprocessing steps required by many advanced methods (e.g. Brew and Schulte im Walde (2002)). In contrast to earlier work, we give special emphasis to polysemy. Earlier work has largely ignored this issue by assuming a single gold standard class for each verb (whether polysemic or not). 
The relatively good clustering results obtained suggest that many polysemic verbs do have some predominating sense in corpus data. However, this sense can vary across corpora (Roland et al., 2000), and assuming a single sense is inadequate for an important group of medium and high frequency verbs whose distribution of senses in balanced corpus data is flat rather than zipfian (Preiss and Korhonen, 2002). To allow for sense variation, we introduce a new evaluation scheme against a polysemic gold standard. This helps to explain the results and offers a better insight into the potential and limitations of clustering undisambiguated SCF data semantically. We discuss our gold standards and the choice of test verbs in section 2. Section 3 describes the method for subcategorization acquisition and section 4 presents the approach to clustering. Details of the experimental evaluation are supplied in section 5. Section 6 concludes with directions for future work. 2 Semantic Verb Classes and Test Verbs Levin’s taxonomy of verbs and their classes (Levin, 1993) is the largest syntactic-semantic verb classification in English, employed widely in evaluation of automatic classifications. It provides a classification of 3,024 verbs (4,186 senses) into 48 broad / 192 fine grained classes. Although it is quite extensive, it is not exhaustive. As it primarily concentrates on verbs taking NP and PP complements and does not provide a comprehensive set of senses for verbs, it is not suitable for evaluation of polysemic classifications. We employed as a gold standard a substantially extended version of Levin’s classification constructed by Korhonen (2003). This incorporates Levin’s classes, 26 additional classes by Dorr (1997)1, and 57 new classes for verb types not covered comprehensively by Levin or Dorr. 110 test verbs were chosen from this gold standard, 78 polysemic and 32 monosemous ones. Some low frequency verbs were included to investigate the 1These classes are incorporated in the ’LCS database’ (http://www.umiacs.umd.edu/∼bonnie/verbs-English.lcs). effect of sparse data on clustering performance. To ensure that our gold standard covers all (or most) senses of these verbs, we looked into WordNet (Miller, 1990) and assigned all the WordNet senses of the verbs to gold standard classes.2 Two versions of the gold standard were created: monosemous and polysemic. The monosemous one lists only a single sense for each test verb, that corresponding to its predominant (most frequent) sense in WordNet. The polysemic one provides a comprehensive list of senses for each verb. The test verbs and their classes are shown in table 1. The classes are indicated by number codes from the classifications of Levin, Dorr (the classes starting with 0) and Korhonen (the classes starting with A).3 The predominant sense is indicated by bold font. 3 Subcategorization Information We obtain our SCF data using the subcategorization acquisition system of Briscoe and Carroll (1997). We expect the use of this system to be beneficial: it employs a robust statistical parser (Briscoe and Carroll, 2002) which yields complete though shallow parses, and a comprehensive SCF classifier, which incorporates 163 SCF distinctions, a superset of those found in the ANLT (Boguraev et al., 1987) and COMLEX (Grishman et al., 1994) dictionaries. 
The SCFs abstract over specific lexicallygoverned particles and prepositions and specific predicate selectional preferences but include some derived semi-predictable bounded dependency constructions, such as particle and dative movement. 78 of these ‘coarse-grained’ SCFs appeared in our data. In addition, a set of 160 fine grained frames were employed. These were obtained by parameterizing two high frequency SCFs for prepositions: the simple PP and NP + PP frames. The scope was restricted to these two frames to prevent sparse data problems in clustering. A SCF lexicon was acquired using this system from the British National Corpus (Leech, 1992, BNC) so that the maximum of 7000 citations were 2As WordNet incorporates particularly fine grained sense distinctions, some senses were found which did not appear in our gold standard. As many of them appeared marginal and/or low in frequency, we did not consider these additional senses in our experiment. 3The gold standard assumes Levin’s broad classes (e.g. class 10) instead of possible fine-grained ones (e.g. class 10.1). TEST GOLD STANDARD TEST GOLD STANDARD TEST GOLD STANDARD TEST GOLD STANDARD VERB CLASSES VERB CLASSES VERB CLASSES VERB CLASSES place 9 dye 24, 21, 41 focus 31, 45 stare 30 lay 9 build 26, 45 force 002, 11 glow 43 drop 9, 45, 004, 47, bake 26, 45 persuade 002 sparkle 43 51, A54, A30 pour 9, 43, 26, 57, 13, 31 invent 26, 27 urge 002, 37 dry 45 load 9 publish 26, 25 want 002, 005, 29, 32 shut 45 settle 9, 46, A16, 36, 55 cause 27, 002 need 002, 005, 29, 32 hang 47, 9, 42, 40 fill 9, 45, 47 generate 27, 13, 26 grasp 30, 15 sit 47, 9 remove 10, 11, 42 induce 27, 002, 26 understand 30 disappear 48 withdraw 10, A30 acknowledge 29, A25, A35 conceive 30, 29, A56 vanish 48 wipe 10, 9 proclaim 29, 37, A25 consider 30, 29 march 51 brush 10, 9, 41, 18 remember 29, 30 perceive 30 walk 51 filter 10 imagine 29, 30 analyse 34, 35 travel 51 send 11, A55 specify 29 evaluate 34, 35 hurry 53, 51 ship 11, A58 establish 29, A56 explore 35, 34 rush 53, 51 transport 11, 31 suppose 29, 37 investigate 35, 34 begin 55 carry 11, 54 assume 29, A35, A57 agree 36, 22, A42 continue 55, 47, 51 drag 11, 35, 51, 002 think 29, 005 communicate 36, 11 snow 57, 002 push 11, 12, 23, 9, 002 confirm 29 shout 37 rain 57 pull 11, 12, 13, 23, 40, 016 believe 29, 31, 33 whisper 37 sin 003 give 13 admit 29, 024, 045, 37 talk 37 rebel 003 lend 13 allow 29, 024, 13, 002 speak 37 risk 008, A7 study 14, 30, 34, 35 act 29 say 37, 002 gamble 008, 009 hit 18, 17, 47, A56, 31, 42 behave 29 mention 37 beg 015, 32 bang 18, 43, 9, 47, 36 feel 30, 31, 35, 29 eat 39 pray 015, 32 carve 21, 25, 26 see 30, 29 drink 39 seem 020 add 22, 37, A56 hear 30, A32 laugh 40, 37 appear 020, 48, 29 mix 22, 26, 36 notice 30, A32 smile 40, 37 colour 24, 31, 45 concentrate 31, 45 look 30, 35 Table 1: Test verbs and their monosemous/polysemic gold standard senses used per test verb. The lexicon was evaluated against manually analysed corpus data after an empirically defined threshold of 0.025 was set on relative frequencies of SCFs to remove noisy SCFs. The method yielded 71.8% precision and 34.5% recall. When we removed the filtering threshold, and evaluated the noisy distribution, F-measure4 dropped from 44.9 to 38.51.5 4 Clustering Method Data clustering is a process which aims to partition a given set into subsets (clusters) of elements that are similar to one another, while ensuring that elements that are not similar are assigned to different clusters. 
We use clustering for partitioning a set of verbs. Our hypothesis is that information about SCFs and their associated frequencies is relevant for identifying semantically related verbs. Hence, we use SCFs as relevance features to guide the clustering process.6 4F = 2·precision·recall precision+recall 5These figures are not particularly impressive because our evaluation is exceptionally hard. We use 1) highly polysemic test verbs, 2) a high number of SCFs and 3) evaluate against manually analysed data rather than dictionaries (the latter have high precision but low recall). 6The relevance of the features to the task is evident when comparing the probability of a randomly chosen pair of verbs verbi and verbj to share the same predominant sense (4.5%) with the probability obtained when verbj is the JS-divergence We chose two clustering methods which do not involve task-oriented tuning (such as pre-fixed thresholds or restricted cluster sizes) and which approach data straightforwardly, in its distributional form: (i) a simple hard method that collects the nearest neighbours (NN) of each verb (figure 1), and (ii) the Information Bottleneck (IB), an iterative soft method (Tishby et al., 1999) based on information-theoretic grounds. The NN method is very simple, but it has some disadvantages. It outputs only one clustering configuration, and therefore does not allow examination of different cluster granularities. It is also highly sensitive to noise. Few exceptional neighbourhood relations contradicting the typical trends in the data are enough to cause the formation of a single cluster which encompasses all elements. Therefore we employed the more sophisticated IB method as well. The IB quantifies the relevance information of a SCF distribution with respect to output clusters, through their mutual information I(Clusters; SCFs). The relevance information is maximized, while the compression information I(Clusters; V erbs) is minimized. This ensures optimal compression of data through clusters. The tradeoff between the two constraints is realized nearest neighbour of verbi (36%). NN Clustering: 1. For each verb v: 2. Calculate the JS divergence between the SCF distributions of v and all other verbs: JS(p, q) = 1 2  D  p p+q 2  + D  q p+q 2  3. Connect v with the most similar verb; 4. Find all the connected components Figure 1: Connected components nearest neighbour (NN) clustering. D is the Kullback-Leibler distance. through minimizing the cost term: L = I(Clusters; V erbs) −βI(Clusters; SCFs) , where β is a parameter that balances the constraints. The IB iterative algorithm finds a local minimum of the above cost term. It takes three inputs: (i) SCFverb distributions, (ii) the desired number of clusters K, and (iii) the value of β. Starting from a random configuration, the algorithm repeatedly calculates, for each cluster K, verb V and SCF S, the following probabilities: (i) the marginal proportion of the cluster p(K); (ii) the probability p(S|K) for a SCF to occur with members of the cluster; and (iii) the probability p(K|V ) for a verb to be assigned to the cluster. These probabilities are used, each in its turn, for calculating the other probabilities (figure 2). The collection of all p(S|K)’s for a fixed cluster K can be regarded as a probabilistic center (centroid) of that cluster in the SCF space. 
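A minimal sketch of the NN procedure in Figure 1, assuming each verb's SCF distribution is a dictionary mapping SCF identifiers to relative frequencies; the connected components are found with a simple union-find, and all names here are illustrative rather than taken from the original system.

```python
import math
from collections import defaultdict

def kl(p, q):
    """Kullback-Leibler distance D(p || q) over the keys of p (q must cover them)."""
    return sum(p[s] * math.log(p[s] / q[s]) for s in p if p[s] > 0)

def js(p, q):
    """Jensen-Shannon divergence between two SCF distributions (dicts SCF -> prob)."""
    keys = set(p) | set(q)
    m = {s: 0.5 * (p.get(s, 0.0) + q.get(s, 0.0)) for s in keys}
    return 0.5 * (kl({s: p.get(s, 0.0) for s in keys}, m)
                  + kl({s: q.get(s, 0.0) for s in keys}, m))

def nn_clusters(scf):
    """scf: dict verb -> SCF distribution (needs at least two verbs).
    Returns the connected-component clusters of the nearest-neighbour graph."""
    verbs = list(scf)
    # steps 1-3: connect each verb with its most similar verb under JS divergence
    edges = [(v, min((u for u in verbs if u != v), key=lambda u: js(scf[v], scf[u])))
             for v in verbs]
    # step 4: find the connected components (simple union-find)
    parent = {v: v for v in verbs}
    def find(v):
        while parent[v] != v:
            v = parent[v]
        return v
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = defaultdict(set)
    for v in verbs:
        groups[find(v)].add(v)
    return list(groups.values())
```

Note that the JS divergence is symmetric and always finite, so the "most similar verb" step remains well defined even when two SCF distributions have disjoint support.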
The IB method gives an indication of the most informative values of K.7 Intensifying the weight β attached to the relevance information I(Clusters; SCFs) allows us to increase the number K of distinct clusters being produced (while too small β would cause some of the output clusters to be identical to one another). Hence, the relevance information grows with K. Accordingly, we consider as the most informative output configurations those for which the relevance information increases more sharply between K −1 and K clusters than between K and K + 1. 7Most works on clustering ignore this issue and refer to an arbitrarily chosen number of clusters, or to the number of gold standard classes, which cannot be assumed in realistic applications. IB Clustering (fixed β): Perform till convergence, for each time step t = 1, 2, . . . : 1. zt(K, V ) = pt−1(K) e−βD[p(S|V )∥pt−1(S|K)] (When t = 1, initialize zt(K, V ) arbitrarily) 2. pt(K|V ) = zt(K,V ) P K′ zt(K′,V ) 3. pt(K) = P V p(V )pt(K|V ) 4. pt(S|K) = P V p(S|V )pt(V |K) Figure 2: Information Bottleneck (IB) iterative clustering. D is the Kullback-Leibler distance. When the weight of relevance grows, the assignment to clusters is more constrained and p(K|V ) becomes more similar to hard clustering. Let K(V ) = argmax K p(K|V ) denote the most probable cluster of a verb V . For K ≥30, more than 85% of the verbs have p(K(V )|V ) > 90% which makes the output clustering approximately hard. For this reason, we decided to use only K(V ) as output and defer a further exploration of the soft output to future work. 5 Experimental Evaluation 5.1 Data The input data to clustering was obtained from the automatically acquired SCF lexicon for our 110 test verbs (section 2). The counts were extracted from unfiltered (noisy) SCF distributions in this lexicon.8 The NN algorithm produced 24 clusters on this input. From the IB algorithm, we requested K = 2 to 60 clusters. The upper limit was chosen so as to slightly exceed the case when the average cluster size 110/K = 2. We chose for evaluation the IB results for K = 25, 35 and 42. For these values, the SCF relevance satisfies our criterion for a notable improvement in cluster quality (section 4). The value K=35 is very close to the actual number (34) of predominant senses in the gold standard. In this way, the IB yields structural information beyond clustering. 8This yielded better results, which might indicate that the unfiltered “noisy” SCFs contain information which is valuable for the task. 5.2 Method A number of different strategies have been proposed for evaluation of clustering. We concentrate here on those which deliver a numerical value which is easy to interpret, and do not introduce biases towards specific numbers of classes or class sizes. As we currently assign a single sense to each polysemic verb (sec. 5.4) the measures we use are also applicable for evaluation against a polysemous gold standard. Our first measure, the adjusted pairwise precision (APP), evaluates clusters in terms of verb pairs (Schulte im Walde and Brew, 2002) 9: APP = 1 K K P i=1 num. of correct pairs in ki num. of pairs in ki · |ki|−1 |ki|+1 . APP is the average proportion of all within-cluster pairs that are correctly co-assigned. It is multiplied by a factor that increases with cluster size. This factor compensates for a bias towards small clusters. Our second measure is derived from purity, a global measure which evaluates the mean precision of the clusters, weighted according to the cluster size (Stevenson and Joanis, 2003). 
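The update cycle of Figure 2 can be written out directly. The sketch below is an illustration under our own conventions, not the authors' code: p_SV is assumed to be a V-by-S matrix whose rows are the p(S|V) distributions, p_V a vector of verb priors, and the cycle starts from the centroid computations, which is equivalent to Figure 2 up to a shift in the update order.

```python
import numpy as np

def ib_cluster(p_SV, p_V, K, beta, iters=200, seed=0):
    """Iterative IB updates of Figure 2; returns the soft assignment p(K|V)."""
    rng = np.random.default_rng(seed)
    pK_V = rng.dirichlet(np.ones(K), size=p_SV.shape[0])   # p(K|V), random initialisation
    eps = 1e-12
    for _ in range(iters):
        pK = p_V @ pK_V                                     # 3. p(K) = sum_V p(V) p(K|V)
        pV_K = (pK_V * p_V[:, None]) / (pK[None, :] + eps)  # Bayes: p(V|K)
        pS_K = pV_K.T @ p_SV                                # 4. p(S|K) = sum_V p(S|V) p(V|K)
        # 1. z(K,V) = p(K) * exp(-beta * D[p(S|V) || p(S|K)])
        kl = np.sum(p_SV[:, None, :] *
                    (np.log(p_SV[:, None, :] + eps) - np.log(pS_K[None, :, :] + eps)),
                    axis=2)
        z = pK[None, :] * np.exp(-beta * kl)
        pK_V = z / (z.sum(axis=1, keepdims=True) + eps)     # 2. renormalise to get p(K|V)
    return pK_V
```

The hard output used in the experiments is then simply the most probable cluster per verb, e.g. `pK_V.argmax(axis=1)` for K(V).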
We associate with each cluster its most prevalent semantic class, and denote the number of verbs in a cluster K that take its prevalent class by nprevalent(K). Verbs that do not take this class are considered as errors. Given our task, we are only interested in classes which contain two or more verbs. We therefore disregard those clusters where nprevalent(K) = 1. This leads us to define modified purity: mPUR = P nprevalent(ki)≥2 nprevalent(ki) number of verbs . The modification we introduce to purity removes the bias towards the trivial configuration comprised of only singletons. 5.3 Evaluation Against the Predominant Sense We first evaluated the clusters against the predominant sense, i.e. using the monosemous gold standard. The results, shown in Table 2, demonstrate that both clustering methods perform significantly 9Our definition differs by a factor of 2 from that of Schulte im Walde and Brew (2002). Alg. K +PP –PP +PP –PP APP: mPUR: NN (24) 21% 19% 48% 45% 25 12% 9% 39% 32% IB 35 14% 9% 48% 38% 42 15% 9% 50% 39% RAND 25 3% 15% Table 2: Clustering performance on the predominant senses, with and without prepositions. The last entry presents the performance of random clustering with K = 25, which yielded the best results among the three values K=25, 35 and 42. better on the task than our random clustering baseline. Both methods show clearly better performance with fine-grained SCFs (with prepositions, +PP) than with coarse-grained ones (-PP). Surprisingly, the simple NN method performs very similarly to the more sophisticated IB. Being based on pairwise similarities, it shows better performance than IB on the pairwise measure. The IB is, however, slightly better according to the global measure (2% with K = 42). The fact that the NN method performs better than the IB with similar K values (NN K = 24 vs. IB K = 25) seems to suggest that the JS divergence provides a better model for the predominant class than the compression model of the IB. However, it is likely that the IB performance suffered due to our choice of test data. As the method is global, it performs better when the target classes are represented by a high number of verbs. In our experiment, many semantic classes were represented by two verbs only (section 2). Nevertheless, the IB method has the clear advantage that it allows for more clusters to be produced. At best it classified half of the verbs correctly according to their predominant sense (mPUR = 50%). Although this leaves room for improvement, the result compares favourably to previously published results10. We argue, however, that evaluation against a monosemous gold standard reveals only part of the picture. 10Due to differences in task definition and experimental setup, a direct comparison with earlier results is impossible. For example, Stevenson and Joanis (2003) report an accuracy of 29% (which implies mPUR ≤29%), but their task involves classifying 841 verbs to 14 classes based on differences in the predicate-argument structure. K Pred. Multiple Pred. Multiple sense senses sense senses APP: mPUR: NN: (24) 21% 29% (23% + 5σ) 48% 60% (46%+ 2σ) IB: 25 12% 18% (14% + 5σ) 39% 48% (43%+ 3σ) 35 14% 20% (16% + 6σ) 47% 59% (50%+ 4σ) 42 15% 19% (16% + 3σ) 50% 59% (54%+ 2σ) Table 3: Evaluation against the monosemous (Pred.) and polysemous (Multiple) gold standards. The figures in parentheses are results of evaluation on randomly polysemous data + significance of the actual figure. Results were obtained with finegrained SCFs (including prepositions). 
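Both evaluation measures are straightforward to compute. The sketch below assumes `clusters` is a list of verb sets and `gold` maps each verb to a single gold-standard class; treating singleton clusters as contributing zero to APP (their scaling factor is zero) is our reading of the formula, not something stated explicitly in the text.

```python
from itertools import combinations

def app(clusters, gold):
    """Adjusted pairwise precision: mean over clusters of within-cluster pair precision,
    scaled by (|k|-1)/(|k|+1); singleton clusters contribute zero."""
    total = 0.0
    for c in clusters:
        pairs = list(combinations(c, 2))
        if not pairs:
            continue
        correct = sum(gold[a] == gold[b] for a, b in pairs)
        total += (correct / len(pairs)) * (len(c) - 1) / (len(c) + 1)
    return total / len(clusters)

def mpur(clusters, gold):
    """Modified purity: a cluster counts only if its prevalent class covers >= 2 verbs."""
    n_verbs = sum(len(c) for c in clusters)
    prevalent = 0
    for c in clusters:
        counts = {}
        for v in c:
            counts[gold[v]] = counts.get(gold[v], 0) + 1
        best = max(counts.values())
        if best >= 2:
            prevalent += best
    return prevalent / n_verbs
```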
5.4 Evaluation Against Multiple Senses In evaluation against the polysemic gold standard, we assume that a verb which is polysemous in our corpus data may appear in a cluster with verbs that share any of its senses. In order to evaluate the clusters against polysemous data, we assigned each polysemic verb V a single sense: the one it shares with the highest number of verbs in the cluster K(V ). Table 3 shows the results against polysemic and monosemous gold standards. The former are noticeably better than the latter (e.g. IB with K = 42 is 9% better). Clearly, allowing for multiple gold standard classes makes it easier to obtain better results with evaluation. In order to show that polysemy makes a nontrivial contribution in shaping the clusters, we measured the improvement that can be due to pure chance by creating randomly polysemous gold standards. We constructed 100 sets of random gold standards. In each iteration, the verbs kept their original predominant senses, but the set of additional senses was taken entirely from another verb - chosen at random. By doing so, we preserved the dominant sense of each verb, the total frequency of all senses and the correlations between the additional senses. The results included in table 3 indicate, with 99.5% confidence (3σ and above), that the improvement obtained with the polysemous gold standard is not artificial (except in two cases with 95% confidence). 5.5 Qualitative Analysis of Polysemy We performed qualitative analysis to further investigate the effect of polysemy on clustering perforDifferent Pairs Fraction Senses in cluster 0 39 51% 1 85 10% 2 625 7% 3 1284 3% 4 1437 3% Table 4: The fraction of verb pairs clustered together, as a function of the number of different senses between pair members (results of the NN algorithm) Common one irregular no irregular Senses Pairs in cluster Pairs in cluster 0 2180 3% 3018 3% 1 388 9% 331 12% 2 44 20% 31 35% Table 5: The fraction of verb pairs clustered together, as a function of the number of shared senses (results of the NN algorithm) mance. The results in table 4 demonstrate that the more two verbs differ in their senses, the lower their chance of ending up in the same cluster. From the figures in table 5 we see that the probability of two verbs to appear in the same cluster increases with the number of senses they share. Interestingly, it is not only the degree of polysemy which influences the results, but also the type. For verb pairs where at least one of the members displays ‘irregular’ polysemy (i.e. it does not share its full set of senses with any other verb), the probability of co-occurrence in the same cluster is far lower than for verbs which are polysemic in a ‘regular’ manner (Table 5). Manual cluster analysis against the polysemic gold standard revealed a yet more comprehensive picture. Consider the following clusters (the IB output with K = 42): A1: talk (37), speak (37) A2: look (30, 35), stare (30) A3: focus (31, 45), concentrate (31, 45) A4: add (22, 37, A56) We identified a close relation between the clustering performance and the following patterns of semantic behaviour: 1) Monosemy: We had 32 monosemous test verbs. 10 gold standard classes included 2 or more or these. 7 classes were correctly acquired using clustering (e.g. A1), indicating that clustering monosemous verbs is fairly ‘easy’. 2) Predominant sense: 10 clusters were examined by hand whose members got correctly classified together, despite one of them being polysemous (e.g. A2). 
In 8 cases there was a clear indication in the data (when examining SCFs and the selectional preferences on argument heads) that the polysemous verb indeed had its predominant sense in the relevant class and that the co-occurrence was not due to noise. 3) Regular Polysemy: Several clusters were produced which represent linguistically plausible intersective classes (e.g. A3) (Dang et al., 1998) rather than single classes. 4) Irregular Polysemy: Verbs with irregular polysemy11 were frequently assigned to singleton clusters. For example, add (A4) has a ‘combining and attaching’ sense in class 22 which involves NP and PP SCFs and another ‘communication’ sense in 37 which takes sentential SCFs. Irregular polysemy was not a marginal phenomenon: it explains 5 of the 10 singletons in our data. These observations confirm that evaluation against a polysemic gold standard is necessary in order to fully explain the results from clustering. 5.6 Qualitative Analysis of Errors Finally, to provide feedback for further development of our verb classification approach, we performed a qualitative analysis of errors not resulting from polysemy. Consider the following clusters (the IB output for K = 42): B1: place (9), build (26, 45), publish (26, 25), carve (21, 25, 26) B2: sin (003), rain (57), snow (57, 002) B3: agree (36, 22, A42), appear (020, 48, 29), begin (55), continue (55, 47, 51) B4: beg (015, 32) Three main error types were identified: 1) Syntactic idiosyncracy: This was the most frequent error type, exemplified in B1, where place is incorrectly clustered with build, publish and carve merely because it takes similar prepositions to these verbs (e.g. in, on, into). 2) Sparse data: Many of the low frequency verbs (we had 12 with frequency less than 300) performed 11Recall our definition of irregular polysemy, section 5.4. poorly. In B2, sin (which had 53 occurrences) is classified with rain and snow because it does not occur in our data with the preposition against the ‘hallmark’ of its gold standard class (’Conspire Verbs’). 3) Problems in SCF acquisition: These were not numerous but occurred e.g. when the system could not distinguish between different control (e.g. subject/object equi/raising) constructions (B3). 6 Discussion and Conclusions This paper has presented a novel approach to automatic semantic classification of verbs. This involved applying the NN and IB methods to cluster polysemic SCF distributions extracted from corpus data using Briscoe and Carroll’s (1997) system. A principled evaluation scheme was introduced which enabled us to investigate the effect of polysemy on the resulting classification. Our investigation revealed that polysemy has a considerable impact on the clusters formed: polysemic verbs with a clear predominant sense and those with similar regular polysemy are frequently classified together. Homonymic verbs or verbs with strong irregular polysemy tend to resist any classification. While it is clear that evaluation should account for these cases rather than ignore them, the issue of polysemy is related to another, bigger issue: the potential and limitations of clustering in inducing semantic information from polysemic SCF data. Our results show that it is unrealistic to expect that the ‘important’ (high frequency) verbs in language fall into classes corresponding to single senses. 
However, they also suggest that clustering can be used for novel, previously unexplored purposes: to detect from corpus data general patterns of semantic behaviour (monosemy, predominant sense, regular/irregular polysemy). In the future, we plan to investigate the use of soft clustering (without hardening the output) and develop methods for evaluating the soft output against polysemous gold standards. We also plan to work on improving the accuracy of subcategorization acquisition, investigating the role of noise (irregular / regular) in clustering, examining whether different syntactic/semantic verb types require different approaches in clustering, developing our gold standard classification further, and extending our experiments to a larger number of verbs and verb classes. References B. Boguraev, E. J. Briscoe, J. Carroll, D. Carter, and C. Grover. 1987. The derivation of a grammaticallyindexed lexicon from the longman dictionary of contemporary english. In Proc. of the 25th ACL, pages 193–200, Stanford, CA. C. Brew and S. Schulte im Walde. 2002. Spectral clustering for german verbs. In Conference on Empirical Methods in Natural Language Processing, Philadelphia, USA. E. J. Briscoe and J. Carroll. 1997. Automatic extraction of subcategorization from corpora. In 5th ACL Conference on Applied Natural Language Processing, pages 356–363, Washington DC. E. J. Briscoe and J. Carroll. 2002. Robust accurate statistical annotation of general text. In 3rd International Conference on Language Resources and Evaluation, pages 1499–1504, Las Palmas, Gran Canaria. H. T. Dang, K. Kipper, M. Palmer, and J. Rosenzweig. 1998. Investigating regular sense extensions based on intersective Levin classes. In Proc. of COLING/ACL, pages 293–299, Montreal, Canada. B. Dorr and D. Jones. 1996. Role of word sense disambiguation in lexical acquisition: predicting semantics from syntactic cues. In 16th International Conference on Computational Linguistics, pages 322–333, Copenhagen, Denmark. B. Dorr. 1997. Large-scale dictionary construction for foreign language tutoring and interlingual machine translation. Machine Translation, 12(4):271–325. R. Grishman, C. Macleod, and A. Meyers. 1994. Comlex syntax: building a computational lexicon. In International Conference on Computational Linguistics, pages 268–272, Kyoto, Japan. R. Jackendoff. 1990. Semantic Structures. MIT Press, Cambridge, Massachusetts. E. Joanis. 2002. Automatic verb classification using a general feature space. Master’s thesis, University of Toronto. J. L. Klavans and M. Kan. 1998. Role of verbs in document analysis. In Proc. of COLING/ACL, pages 680– 686, Montreal, Canada. A. Korhonen. 2002. Subcategorization Acquisition. Ph.D. thesis, University of Cambridge, UK. A. Korhonen. 2003. Extending Levin’s Classification with New Verb Classes. Unpublished manuscript, University of Cambridge Computer Laboratory. G. Leech. 1992. 100 million words of english: the british national corpus. Language Research, 28(1):1–13. B. Levin. 1993. English Verb Classes and Alternations. Chicago University Press, Chicago. P. Merlo and S. Stevenson. 2001. Automatic verb classification based on statistical distributions of argument structure. Computational Linguistics, 27(3):373–408. P. Merlo, S. Stevenson, V. Tsang, and G. Allaria. 2002. A multilingual paradigm for automatic verb classification. In Proc. of the 40th ACL, Pennsylvania, USA. G. A. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235–312. S. Pinker. 1989. 
Learnability and Cognition: The Acquisition of Argument Structure. MIT Press, Cambridge, Massachusetts. J. Preiss and A. Korhonen. 2002. Improving subcategorization acquisition with WSD. In ACL Workshop on Word Sense Disambiguation: Recent Successes and Future Directions, Philadelphia, USA. D. Roland, D. Jurafsky, L. Menn, S. Gahl, E. Elder, and C. Riddoch. 2000. Verb subcatecorization frequency differences between business-news and balanced corpora. In ACL Workshop on Comparing Corpora, pages 28–34. S. Schulte im Walde and C. Brew. 2002. Inducing german semantic verb classes from purely syntactic subcategorisation information. In Proc. of the 40th ACL, Philadephia, USA. S. Schulte im Walde. 2000. Clustering verbs semantically according to their alternation behaviour. In Proc. of COLING-2000, pages 747–753, Saarbr¨ucken, Germany. S. Stevenson and E. Joanis. 2003. Semi-supervised verb-class discovery using noisy features. In Proc. of CoNLL-2003, Edmonton, Canada. N. Tishby, F. C. Pereira, and W. Bialek. 1999. The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368–377.
2003
9
Optimization in Multimodal Interpretation Joyce Y. Chai* Pengyu Hong+ Michelle X. Zhou‡ Zahar Prasov* *Computer Science and Engineering Michigan State University East Lansing, MI 48824 {[email protected], [email protected]} +Department of Statistics Harvard University Cambridge, MA 02138 [email protected] ‡Intelligent Multimedia Interaction IBM T. J. Watson Research Ctr. Hawthorne, NY 10532 [email protected] Abstract In a multimodal conversation, the way users communicate with a system depends on the available interaction channels and the situated context (e.g., conversation focus, visual feedback). These dependencies form a rich set of constraints from various perspectives such as temporal alignments between different modalities, coherence of conversation, and the domain semantics. There is strong evidence that competition and ranking of these constraints is important to achieve an optimal interpretation. Thus, we have developed an optimization approach for multimodal interpretation, particularly for interpreting multimodal references. A preliminary evaluation indicates the effectiveness of this approach, especially for complex user inputs that involve multiple referring expressions in a speech utterance and multiple gestures. 1 Introduction Multimodal systems provide a natural and effective way for users to interact with computers through multiple modalities such as speech, gesture, and gaze (Oviatt 1996). Since the first appearance of “Put-That-There” system (Bolt 1980), a variety of multimodal systems have emerged, from early systems that combine speech, pointing (Neal et al., 1991), and gaze (Koons et al, 1993), to systems that integrate speech with pen inputs (e.g., drawn graphics) (Cohen et al., 1996; Wahlster 1998; Wu et al., 1999), and systems that engage users in intelligent conversation (Cassell et al., 1999; Stent et al., 1999; Gustafson et al., 2000; Chai et al., 2002; Johnston et al., 2002). One important aspect of building multimodal systems is multimodal interpretation, which is a process that identifies the meanings of user inputs. In a multimodal conversation, the way users communicate with a system depends on the available interaction channels and the situated context (e.g., conversation focus, visual feedback). These dependencies form a rich set of constraints from various aspects (e.g., semantic, temporal, and contextual). A correct interpretation can only be attained by simultaneously considering these constraints. In this process, two issues are important: first, a mechanism to combine information from various sources to form an overall interpretation given a set of constraints; and second, a mechanism that achieves the best interpretation among all the possible alternatives given a set of constraints. The first issue focuses on the fusion aspect, which has been well studied in earlier work, for example, through unificationbased approaches (Johnston 1998) or finite state approaches (Johnston and Bangalore, 2000). This paper focuses on the second issue of optimization. As in natural language interpretation, there is strong evidence that competition and ranking of constraints is important to achieve an optimal interpretation for multimodal language processing. We have developed a graph-based optimization approach for interpreting multimodal references. This approach achieves an optimal interpretation by simultaneously applying semantic, temporal, and contextual constraints. 
A preliminary evaluation indicates the effectiveness of this approach, particularly for complex user inputs that involve multiple referring expressions in a speech utterance and multiple gestures. In this paper, we first describe the necessities for optimization in multimodal interpretation, then present our graphbased optimization approach and discuss how our approach addresses key principles in Optimality Theory used for natural language interpretation (Prince and Smolensky 1993). 2 Necessities for Optimization in Multimodal Interpretation In a multimodal conversation, the way a user interacts with a system is dependent not only on the available input channels (e.g., speech and gesture), but also upon his/her conversation goals, the state of the conversation, and the multimedia feedback from the system. In other words, there is a rich context that involves dependencies from many different aspects established during the interaction. Interpreting user inputs can only be situated in this rich context. For example, the temporal relations between speech and gesture are important criteria that determine how the information from these two modalities can be combined. The focus of attention from the prior conversation shapes how users refer to those objects, and thus, influences the interpretation of referring expressions. Therefore, we need to simultaneously consider the temporal relations between the referring expressions and the gestures, the semantic constraints specified by the referring expressions, and the contextual constraints from the prior conversation. It is important to have a mechanism that supports competition and ranking among these constraints to achieve an optimal interpretation, in particular, a mechanism to allow constraint violation and support soft constraints. We use temporal constraints as an example to illustrate this viewpoint1. The temporal constraints specify whether multiple modalities can be combined based on their temporal alignment. In earlier work, the temporal constraints are empirically determined based on user studies (Oviatt 1996). For example, in the unificationbased approach (Johnston 1998), one temporal constraint indicates that speech and gesture can be combined only when the speech either overlaps with gesture or follows the gesture within a certain time frame. This is a hard constraint that has to be satisfied in order for the unification to take place. If a given input does not satisfy these hard constraints, the unification fails. In our user studies, we found that, although the majority of user temporal alignment behavior may satisfy pre-defined temporal constraints, there are 1 We implemented a system using real estate as an application domain. The user can interact with a map using both speech and gestures to retrieve information. All the user studies mentioned in this paper were conducted using this system. some exceptions. Table 1 shows the percentage of different temporal relations collected from our user studies. The rows indicate whether there is an overlap between speech referring expressions and their accompanied gestures. The columns indicate whether the speech (more precisely, the referring expressions) or the gesture occurred first. Consistent with the previous findings (Oviatt et al, 1997), in most cases (85% of time), gestures occurred before the referring expressions were uttered. However, in 15% of the cases the speech referring expressions were uttered before the gesture occurred. 
Among those cases, 8% had an overlap between the referring expressions and the gesture and 7% had no overlap. Furthermore, as shown in (Oviatt et al., 2003), although multimodal behaviors such as sequential (i.e., non-overlap) or simultaneous (e.g., overlap) integration are quite consistent during the course of interaction, there are still some exceptions. Figure 1 shows the temporal alignments from seven individual users in our study. User 2 and User 6 maintained a consistent behavior in that User 2’s speech referring expressions always overlapped with gestures and User 6’s gesture always occurred ahead of the speech expressions. The other five users exhibited varied temporal alignment between speech and gesture during the interaction. It will be difficult for a system using pre-defined temporal constraints to anticipate and accommodate all these different behaviors. Therefore, it is desirable to have a mechanism that 0 0.2 0.4 0.6 0.8 1 1 2 3 4 5 6 7 User Percentage Non-overlap Speech First Non-overlap Gesture First Overlap Speech First Overlap Gesture First Figure 1: Temporal relations between speech and gesture for individual users 100% 85% 15% Total 48% 40% 8% Overlap 52% 45% 7% Non-overlap Total Gesture First Speech First 100% 85% 15% Total 48% 40% 8% Overlap 52% 45% 7% Non-overlap Total Gesture First Speech First Table 1: Overall temporal relations between speech and gesture allows violation of these constraints and support soft or graded constraints. 3 A Graph-based Optimization Approach To address the necessities described above, we developed an optimization approach for interpreting multimodal references using graph matching. The graph representation captures both salient entities and their inter-relations. The graph matching is an optimization process that finds the best matching between two graphs based on constraints modeled as links or nodes in these graphs. This type of structure and process is especially useful for interpreting multimodal references. One graph can represent all the referring expressions and their inter-relations, and the other graph can represent all the potential referents. The question is how to match them together to achieve a maximum compatibility given a particular context. 3.1 Overview Graph-based Representation Attribute Relation Graph (ARG) (Tsai and Fu, 1979) is used to represent information in our approach. An ARG consists of a set of nodes that are connected by a set of edges. Each node represents an entity, which in our case is either a referring expression to be resolved or a potential referent. Each node encodes the properties of the corresponding entity including: • Semantic information that indicates the semantic type, the number of potential referents, and the specific attributes related to the corresponding entity (e.g., extracted from the referring expressions). • Temporal information that indicates the time when the corresponding entity is introduced into the discourse (e.g., uttered or gestured). Each edge represents a set of relations between two entities. Currently we capture temporal relations and semantic type relations. A temporal relation indicates the temporal order between two related entities during an interaction, which may have one of the following values: • Precede: Node A precedes Node B if the entity represented by Node A is introduced into the discourse before the entity represented by Node B. • Concurrent: Node A is concurrent with Node B if the entities represented by them are referred to or mentioned simultaneously. 
• Non-concurrent: Node A is non-concurrent with Node B if their corresponding objects/references cannot be referred/mentioned simultaneously. • Unknown: The temporal order between two entities is unknown. It may take the value of any of the above. A semantic type relation indicates whether two related entities share the same semantic type. It currently takes the following discrete values: Same, Different, and Unknown. It could be beneficial in the future to consider a continuous function measuring the rate of compatibility instead. Specially, two graphs are generated. One graph, called the referring graph, captures referring expressions from speech utterances. For example, suppose a user says Compare this house, the green house, and the brown one. Figure 2 show a referring graph that represents three referring expressions from this speech input. Each node captures the semantic information such as the semantic type (i.e., Semantic Type), the attribute (Color), the number (Number) of the potential referents, as well as the temporal information about when this referring expression is uttered (BeginTime and EndTime). Each edge captures the semantic (e.g., SemanticTypeRelation) and temporal relations (e.g., TemporalRelation) between the referring expressions. In this case, since the green house is uttered before the brown one, there is a temporal Precede relationship between these two expressions. Furthermore, according to our heuristic that objects-to-be-compared should share the same semantic type, therefore, the SemanticTypeRelation between two nodes is set to Same. Node 1 this house Node 2 the green house Node 3 the brown one SemanticType: House Number.: 1 Attribute: Color = $Green BeginTime: 32244242ms EndTime: … … … SemanticTypeRelation: Same TemporalRelation: Precede Direction: Node 2 -> Node 3 Speech: Compare this house, the green house and the brown one Figure 2: An example of a referring graph Similarly, the second graph, called the referent graph, represents all potential referents from multiple sources (e.g., from the last conversation, gestured by the user, etc). Each node captures the semantic and temporal information about a potential referent (e.g., the time when the potential referent is selected by a gesture). Each edge captures the semantic and temporal relations between two potential referents. For instance, suppose the user points to one position and then points to another position. The corresponding referent graph is shown in Figure 3. The objects inside the first dashed rectangle correspond to the potential referents selected by the first pointing gesture and those inside the second dashed rectangle correspond to the second pointing gesture. Each node also contains a probability that indicates the likelihood of its corresponding object being selected by the gesture. Furthermore, the salient objects from the prior conversation are also included in the referent graph since they could also be the potential referents (e.g., the rightmost dashed rectangle in Figure 32). To create these graphs, we apply a grammarbased natural language parser to process speech inputs and a gesture recognition component to process gestures. The details are described in (Chai et al. 2004a). 2 Each node from the conversation context is linked to every node corresponding to the first pointing and the second pointing. Graph-matching Process Given these graph representations, interpreting multimodal references becomes a graph-matching problem. 
Graph-matching Process. Given these graph representations, interpreting multimodal references becomes a graph-matching problem. The goal is to find the best match between a referring graph (Gs) and a referent graph (Gr). Suppose
• A referring graph Gs = 〈{αm}, {γmn}〉, where {αm} are nodes and {γmn} are edges connecting nodes αm and αn. Nodes in Gs are named referring nodes.
• A referent graph Gr = 〈{ax}, {rxy}〉, where {ax} are nodes and {rxy} are edges connecting nodes ax and ay. Nodes in Gr are named referent nodes.
The following equation finds a match that achieves the maximum compatibility between Gr and Gs:

  Q(Gr, Gs) = Σx Σm P(ax, αm) NodeSim(ax, αm)
            + Σx Σy Σm Σn P(ax, αm) P(ay, αn) EdgeSim(rxy, γmn)        (1)

In Equation (1), Q(Gr, Gs) measures the degree of the overall match between the referent graph and the referring graph. P(ax, αm) is the matching probability between a node ax in the referent graph and a node αm in the referring graph. The overall compatibility depends on the similarities between nodes (NodeSim) and the similarities between edges (EdgeSim). The function NodeSim(ax, αm) measures the similarity between a referent node ax and a referring node αm by combining semantic constraints and temporal constraints. The function EdgeSim(rxy, γmn) measures the similarity between rxy and γmn, which depends on the semantic and temporal constraints of the corresponding edges. These functions are described in detail in the next section. We use the graduated assignment algorithm (Gold and Rangarajan, 1996) to maximize Q(Gr, Gs) in Equation (1). The algorithm first initializes P(ax, αm) and then iteratively updates the values of P(ax, αm) until it converges. When the algorithm converges, P(ax, αm) gives the matching probabilities between the referent node ax and the referring node αm that maximizes the overall compatibility function. Given this probability matrix, the system is able to assign the most probable referent(s) to each referring expression.
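The optimization step can be sketched as an iterative soft-assignment loop of the kind used by graduated assignment: compute the gradient of Q with respect to P, exponentiate, and alternately normalize rows and columns while slowly "hardening" the assignment. The sketch below (with assumed parameter values and a simplified normalization) only illustrates the general shape of such an update; it is not the algorithm of Gold and Rangarajan (1996) in full, nor the system's actual implementation.

```python
import numpy as np

def match_graphs(node_sim, edge_sim_term, n_outer=60, beta0=0.5, beta_growth=1.075):
    """Estimate P[x, m], the probability that referent node a_x matches
    referring node alpha_m, so as to (approximately) maximize Q in Eq. (1).

    node_sim:      (X, M) array of NodeSim(a_x, alpha_m) values.
    edge_sim_term: function P -> (X, M) array whose (x, m) entry is
                   sum over (y, n) of P[y, n] * EdgeSim(r_xy, gamma_mn).
    """
    n_referents, n_expressions = node_sim.shape
    P = np.full((n_referents, n_expressions), 1.0 / n_expressions)
    beta = beta0
    for _ in range(n_outer):
        grad = node_sim + edge_sim_term(P)       # partial derivative of Q w.r.t. P[x, m]
        P = np.exp(beta * grad)
        for _ in range(10):                      # alternate row/column normalization
            P = P / P.sum(axis=1, keepdims=True)
            P = P / P.sum(axis=0, keepdims=True)
        beta *= beta_growth                      # gradually sharpen the assignment
    return P

def assign_referents(P, keep=0.8):
    """For each referring expression (column m), keep the referent(s) whose
    matching probability is within a fraction of the column maximum."""
    return {m: np.flatnonzero(P[:, m] >= keep * P[:, m].max()).tolist()
            for m in range(P.shape[1])}
```

In this setup, edge_sim_term is assembled from the EdgeSim values of every pair of edges in the two graphs, which is what makes the matching a global decision rather than a per-expression one.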
3.2 Similarity Functions

As shown in Equation (1), the overall compatibility between a referring graph and a referent graph depends on the node similarity function and the edge similarity function. Next we give a detailed account of how we defined these functions. Our focus here is not on the actual definitions of those functions (since they may vary for different applications), but rather on a mechanism that leads to competition and ranking of constraints.

[Figure 3: An example of a referent graph for the gesture input "point to one position and point to another position." The nodes represent candidate objects selected by the first pointing gesture, by the second pointing gesture, and from the conversation context; each node records semantic and temporal information (e.g., Object ID: MLS2365478, SemanticType: House, Attribute: Color = $Brown, BeginTime: 32244292 ms, SelectionProb: 0.65), and each edge records semantic type and temporal relations (e.g., Semantic Type Relation: Diff, Temporal Relation: Same).]

Node Similarity Function. Given a referring expression (represented as αm in the referring graph) and a potential referent (represented as ax in the referent graph), the node similarity function is defined based on the semantic and temporal information captured in ax and αm through a set of individual compatibility functions:

  NodeSim(ax, αm) = Id(ax, αm) SemType(ax, αm) Πk Attrk(ax, αm) Temp(ax, αm)

Currently, in our system, the specific return values for these functions are empirically determined through iterative regression tests.
Id(ax, αm) captures the constraint of the compatibility between the identifiers specified in ax and αm. It indicates that the identifier of the potential referent, as expressed in a referring expression, should match the identifier of the true referent. This is particularly useful for resolving proper nouns. For example, if the referring expression is house number eight, then the correct referent should have the identifier number eight. We currently define this constraint as follows: Id(ax, αm) = 0 if the object identities of ax and αm are different; Id(ax, αm) = 100 if they are the same; and Id(ax, αm) = 1 if at least one of the identities of ax and αm is unknown. The different return values enforce that a large reward is given to the case where the identifiers from the referring expressions match the identifiers from the potential referents.
SemType(ax, αm) captures the constraint of semantic type compatibility between ax and αm. It indicates that the semantic type of a potential referent as expressed in the referring expression should match the semantic type of the correct referent. We define the following: SemType(ax, αm) = 0 if the semantic types of ax and αm are different; SemType(ax, αm) = 1 if they are the same; and SemType(ax, αm) = 0.5 if at least one of the semantic types of ax and αm is unknown. Note that the return value given to the case where semantic types are the same (i.e., "1") is much lower than that given to the case where identifiers are the same (i.e., "100"). This was designed to support constraint ranking. Our assumption is that the constraint on identifiers is more important than the constraint on semantic types. Because identifiers are usually unique, the corresponding constraint is a greater indicator of node matching if the identifier expressed in a referring expression matches the identifier of a potential referent.
Attrk(ax, αm) captures the domain-specific constraint concerning a particular semantic feature (indicated by the subscript k). This constraint indicates that the expected features of a potential referent as expressed in a referring expression should be compatible with the features associated with the true referent. For example, in the referring expression the Victorian house, the style feature is Victorian. Therefore, an object can only be a possible referent if the style of that object is Victorian. Thus, we define the following: Attrk(ax, αm) = 1 if both ax and αm share the kth feature with the same value; Attrk(ax, αm) = 0 if both ax and αm have the feature k and the values of the feature k are not equal. Otherwise, when the kth feature is not present in either ax or αm, Attrk(ax, αm) = 0.1. Note that these feature constraints are dependent on the specific domain model for a particular application.
Temp(ax, αm) captures the temporal constraint between a referring expression αm and a potential referent ax. As discussed in Section 2, a hard constraint concerning temporal relations between referring expressions and gestures would be incapable of handling the flexibility of users' temporal alignment behavior. Thus the temporal constraint in our approach is a graded constraint, defined as follows:

  Temp(ax, αm) = exp( − |BeginTime(ax) − BeginTime(αm)| / 2000 )

This constraint indicates that the closer a referring expression and a potential referent are in terms of their temporal alignment (regardless of the absolute precedence relationship), the more compatible they are.
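Written out directly, the node-level compatibility functions above are just a handful of lookups. The sketch below reuses the illustrative ArgNode structure from the earlier sketch and hard-codes the return values quoted in the text (100/1/0 for identifiers, 1/0.5/0 for semantic types, 1/0.1/0 for attribute features, and the exponential decay over BeginTime differences); the feature list and the idea of storing the object identifier in the attributes dictionary are assumptions of ours, not details from the paper.

```python
import math

def id_compat(a, alpha):
    id_a, id_m = a.attributes.get("id"), alpha.attributes.get("id")
    if id_a is None or id_m is None:
        return 1.0                      # at least one identity unknown
    return 100.0 if id_a == id_m else 0.0

def sem_type_compat(a, alpha):
    if a.semantic_type is None or alpha.semantic_type is None:
        return 0.5                      # at least one semantic type unknown
    return 1.0 if a.semantic_type == alpha.semantic_type else 0.0

def attr_compat(a, alpha, feature):
    v_a, v_m = a.attributes.get(feature), alpha.attributes.get(feature)
    if v_a is None or v_m is None:
        return 0.1                      # feature not present in one of the nodes
    return 1.0 if v_a == v_m else 0.0

def temp_compat(a, alpha, scale=2000.0):
    # graded constraint: decays with temporal distance in either direction
    return math.exp(-abs(a.begin_time - alpha.begin_time) / scale)

def node_sim(a, alpha, features=("color", "style")):   # assumed domain features
    sim = id_compat(a, alpha) * sem_type_compat(a, alpha) * temp_compat(a, alpha)
    for k in features:
        sim *= attr_compat(a, alpha, k)
    return sim
```

Because identifier matches return 100 while semantic type matches return 1, an identifier match dominates the product, which is how the ranking of constraints is realized numerically.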
Edge Similarity Function. The edge similarity function measures the compatibility of relations held between referring expressions (i.e., an edge γmn in the referring graph) and relations between the potential referents (i.e., an edge rxy in the referent graph). It is defined by two individual compatibility functions as follows:

  EdgeSim(rxy, γmn) = SemType(rxy, γmn) Temp(rxy, γmn)

SemType(rxy, γmn) encodes the semantic type compatibility between an edge in the referring graph and an edge in the referent graph. It is defined in Table 2. This constraint indicates that the relation held between referring expressions should be compatible with the relation held between the two correct referents. For example, consider the utterance How much is this green house and this blue house. This utterance indicates that the referent to the first expression this green house should share the same semantic type as the referent to the second expression this blue house. As shown in Table 2, if the semantic type relations of rxy and γmn are the same, SemType(rxy, γmn) returns 1. If they are different, SemType(rxy, γmn) returns zero. If either rxy or γmn is unknown, then it returns 0.5.

Table 2: Definition of SemType(rxy, γmn)

                      γmn = Same   γmn = Different   γmn = Unknown
  rxy = Same               1              0                0.5
  rxy = Different          0              1                0.5
  rxy = Unknown           0.5            0.5               0.5

Temp(rxy, γmn) captures the temporal compatibility between an edge in the referring graph and an edge in the referent graph. It is defined in Table 3. This constraint indicates that the temporal relationship between two referring expressions (in one utterance) should be compatible with the relations of their corresponding referents as they are introduced into the context (e.g., through gesture). The temporal relation between referring expressions (i.e., γmn) is either Precede or Concurrent. If the temporal relations of rxy and γmn are the same, then Temp(rxy, γmn) returns 1. Because potential referents could come from the prior conversation, even if rxy and γmn are not the same, the function does not return zero when γmn is Precede.

Table 3: Definition of Temp(rxy, γmn)

                      rxy = Preceding   rxy = Concurrent   rxy = Non-concurrent   rxy = Unknown
  γmn = Precede              1                0.5                  0.7                 0.5
  γmn = Concurrent           0                 1                    0                  0.5
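Tables 2 and 3 translate directly into lookup tables. The dictionaries below mirror the values in those tables, using the illustrative ArgEdge fields from the earlier sketches ("precede" stands for the Preceding/Precede relation on both sides); as before, this is a sketch rather than the system's code.

```python
# Table 2: semantic type compatibility between a referent-graph edge r_xy
# and a referring-graph edge gamma_mn (keys are (r_xy, gamma_mn) values).
SEM_TYPE_EDGE = {
    ("same", "same"): 1.0,      ("same", "different"): 0.0,      ("same", "unknown"): 0.5,
    ("different", "same"): 0.0, ("different", "different"): 1.0, ("different", "unknown"): 0.5,
    ("unknown", "same"): 0.5,   ("unknown", "different"): 0.5,   ("unknown", "unknown"): 0.5,
}

# Table 3: temporal compatibility; gamma_mn is either "precede" or "concurrent".
TEMP_EDGE = {
    ("precede", "precede"): 1.0,        ("precede", "concurrent"): 0.0,
    ("concurrent", "precede"): 0.5,     ("concurrent", "concurrent"): 1.0,
    ("non-concurrent", "precede"): 0.7, ("non-concurrent", "concurrent"): 0.0,
    ("unknown", "precede"): 0.5,        ("unknown", "concurrent"): 0.5,
}

def edge_sim(r_xy, gamma_mn):
    """EdgeSim(r_xy, gamma_mn) = SemType(r_xy, gamma_mn) * Temp(r_xy, gamma_mn)."""
    sem = SEM_TYPE_EDGE[(r_xy.same_sem_type, gamma_mn.same_sem_type)]
    temp = TEMP_EDGE[(r_xy.temporal, gamma_mn.temporal)]
    return sem * temp
```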
Next, we discuss how these definitions and the process of graph matching address optimization, in particular with respect to key principles of Optimality Theory for natural language interpretation.

3.3 Optimality Theory

Optimality Theory (OT) is a theory of language and grammar, developed by Alan Prince and Paul Smolensky (Prince and Smolensky, 1993). In Optimality Theory, a grammar consists of a set of well-formed constraints. These constraints are applied simultaneously to identify linguistic structures. Optimality Theory does not restrict the content of the constraints (Eisner 1997). An innovation of Optimality Theory is the conception of these constraints as soft, which means they are violable and may conflict. The interpretation that arises for an utterance within a certain context maximizes the degree of constraint satisfaction and is consequently the best alternative (hence, the optimal interpretation) among the set of possible interpretations. The key principles or components of Optimality Theory can be summarized as the following three components (Blutner 1998):
1) Given a set of inputs, Generator creates a set of possible outputs for each input.
2) From the set of candidate outputs, Evaluator selects the optimal output for that input.
3) There is a strict dominance in terms of the ranking of constraints. Constraints are absolute and the ranking of the constraints is strict in the sense that outputs that have at least one violation of a higher ranked constraint outrank outputs that have arbitrarily many violations of lower ranked constraints.
Although Optimality Theory is a grammar-based framework for natural language processing, its key principles can be applied to other representations. At a surface level, our approach addresses these main principles. First, in our approach, the matching matrix P(ax, αm) captures the probabilities of all the possible matches between a referring node αm and a referent node ax. The matching process updates these probabilities iteratively. This process corresponds to the Generator component in Optimality Theory. Second, in our approach, the satisfaction or violation of constraints is implemented via return values of compatibility functions. These constraints can be violated during the matching process. For example, the functions Id(ax, αm), SemType(ax, αm), and Attrk(ax, αm) return zero if the corresponding intended constraints are violated. In this case, the overall similarity function will return zero. However, because of the iterative updating nature of the matching algorithm, the system will still find the most optimal match as a result of the matching process even if some constraints are violated. Furthermore, a function that never returns zero, such as Temp(ax, αm) in the node similarity function, implements a gradient constraint in Optimality Theory. Given these compatibility functions, the graph-matching algorithm provides an optimization process to find the best match between two graphs. This process corresponds to the Evaluator component of Optimality Theory. Third, in our approach, different compatibility functions return different values to address the Constraint Ranking component in Optimality Theory. For example, as discussed earlier, once ax and αm share the same identifier, Id(ax, αm) returns 100. If ax and αm share the same semantic type, SemType(ax, αm) returns 1. Here, we consider the compatibility between identifiers to be more important than the compatibility between semantic types. However, currently we have not yet addressed the strict dominance aspect of Optimality Theory.

3.4 Evaluation

We conducted several user studies to evaluate the performance of this approach. Users could interact with our system using both speech and deictic gestures. Each subject was asked to complete five tasks. For example, one task was to find the cheapest house in the most populated town. Data from eleven subjects was collected and analyzed. Table 4 shows the evaluation results for 219 inputs. These inputs were categorized in terms of the number of referring expressions in the speech input and the number of gestures in the gesture input. Out of the total 219 inputs, 137 inputs had their referents correctly interpreted. For the remaining 82 inputs in which the referents were not correctly identified, the problem did not come from the approach itself, but rather from other sources such as speech recognition and language understanding errors. These were the two major error sources, which accounted for 55% and 20% of the total errors respectively (Chai et al. 2004b). In our studies, the majority of user references were simple in that they involved only one referring expression and one gesture, as in earlier findings (Kehler 2000).
It is trivial for our approach to handle these simple inputs since the size of the graph is usually very small and there is only one node in the referring graph. However, we did find that 23% of the inputs were complex (the row S3 and the column G3 in Table 4), involving multiple referring expressions from speech utterances and/or multiple gestures. Our optimization approach is particularly effective for interpreting these complex inputs by simultaneously considering semantic, temporal, and contextual constraints.

Table 4: Evaluation Results

                                       G1: No Gesture   G2: One Gesture   G3: Multiple Gestures   Total
  S1: No referring expression          1(1), 0(0)       3(1), 0(0)        0                       4(2), 0(0)
  S2: One referring expression         6(4), 5(2)       96(89), 58(21)    8(7), 11(2)             110(90), 74(25)
  S3: Multiple referring expressions   0(0), 1(0)       3(1), 7(1)        12(8), 8(0)             15(9), 16(1)
  Total                                7(5), 6(2)       102(91), 65(22)   20(15), 19(2)           129(111), 90(26)

In each entry of the form "a(b), c(d)", "a" indicates the number of inputs in which the referring expressions were correctly recognized by the speech recognizer; "b" indicates the number of inputs in which the referring expressions were correctly recognized and were correctly resolved; "c" indicates the number of inputs in which the referring expressions were not correctly recognized; and "d" indicates the number of inputs in which the referring expressions were not correctly recognized but were nevertheless correctly resolved. The sum of "a" and "c" gives the total number of inputs with a particular combination of speech and gesture.

4 Conclusion

As in natural language interpretation addressed by Optimality Theory, the idea of optimizing constraints is beneficial, and there is evidence in favor of competition and constraint ranking in multimodal language interpretation. We developed a graph-based approach to address optimization for multimodal interpretation, in particular for interpreting multimodal references. Our approach applies temporal, semantic, and contextual constraints simultaneously and achieves the best interpretation among all alternatives. Although currently the referent graph corresponds to gesture input and conversation context, it can be easily extended to incorporate other modalities such as gaze input. We have only taken an initial step to investigate optimization for multimodal language processing. Although preliminary studies have shown the effectiveness of the optimization approach based on graph matching, this approach also has its limitations. The graph-matching problem is NP-complete and can become intractable as the size of the graph increases. However, we did not experience delays in system responses during our real-time user studies. This is because most user inputs were relatively concise (they contained no more than four referring expressions). This brevity limited the size of the graphs and thus provided an opportunity for such an approach to be effective. Our future work will address how to extend this approach to optimize the overall interpretation of user multimodal inputs.
Acknowledgements This work was partially supported by grant IIS0347548 from the National Science Foundation and grant IRGP-03-42111 from Michigan State University. The authors would like to thank John Hale and anonymous reviewers for their helpful comments and suggestions. References Bolt, R.A. 1980. Put that there: Voice and Gesture at the Graphics Interface. Computer Graphics, 14(3): 262-270. Blutner, R., 1998. Some Aspects of Optimality In Natural Language Interpretation. Journal of Semantics, 17, 189-216. Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjalmsson, H. and Yan, H. 1999. Embodiment in Conversational Interfaces: Rea. In Proceedings of the CHI'99 Conference, 520-527. Chai, J., Prasov, Z, and Hong, P. 2004b. Performance Evaluation and Error Analysis for Multimodal Reference Resolution in a Conversational System. Proceedings of HLTNAACL 2004 (Companion Volumn). Chai, J. Y., Hong, P., and Zhou, M. X. 2004a. A Probabilistic Approach to Reference Resolution in Multimodal User Interfaces, Proceedings of 9th International Conference on Intelligent User Interfaces (IUI): 70-77. Chai, J., Pan, S., Zhou, M., and Houck, K. 2002. Contextbased Multimodal Interpretation in Conversational Systems. Fourth International Conference on Multimodal Interfaces. Cohen, P., Johnston, M., McGee, D., Oviatt, S., Pittman, J., Smith, I., Chen, L., and Clow, J. 1996. Quickset: Multimodal Interaction for Distributed Applications. Proceedings of ACM Multimedia. Eisner, Jason. 1997. Efficient Generation in Primitive Optimality Theory. Proceedings of ACL’97. Gold, S. and Rangarajan, A. 1996. A Graduated Assignment Algorithm for Graph-matching. IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 4. Gustafson, J., Bell, L., Beskow, J., Boye J., Carlson, R., Edlund, J., Granstrom, B., House D., and Wiren, M. 2000. AdApt – a Multimodal Conversational Dialogue System in an Apartment Domain. Proceedings of 6th International Conference on Spoken Language Processing (ICSLP). Johnston, M, Cohen, P., McGee, D., Oviatt, S., Pittman, J. and Smith, I. 1997. Unification-based Multimodal Integration, Proceedings of ACL’97. Johnston, M. 1998. Unification-based Multimodal Parsing, Proceedings of COLING-ACL’98. Johnston, M. and Bangalore, S. 2000. Finite-state Multimodal Parsing and Understanding. Proceedings of COLING’00. Johnston, M., Bangalore, S., Visireddy G., Stent, A., Ehlen, P., Walker, M., Whittaker, S., and Maloor, P. 2002. MATCH: An Architecture for Multimodal Dialog Systems, Proceedings of ACL’02, Philadelphia, 376-383. Kehler, A. 2000. Cognitive Status and Form of Reference in Multimodal Human-Computer Interaction, Proceedings of AAAI’01, 685-689. Koons, D. B., Sparrell, C. J. and Thorisson, K. R. 1993. Integrating Simultaneous Input from Speech, Gaze, and Hand Gestures. In Intelligent Multimedia Interfaces, M. Maybury, Ed. MIT Press: Menlo Park, CA. Neal, J. G., and Shapiro, S. C. 1991. Intelligent Multimedia Interface Technology. In Intelligent User Interfaces, J. Sullivan & S. Tyler, Eds. ACM: New York. Oviatt, S. L. 1996. Multimodal Interfaces for Dynamic Interactive Maps. In Proceedings of Conference on Human Factors in Computing Systems: CHI '96, 95-102. Oviatt, S., DeAngeli, A., and Kuhn, K., 1997. Integration and Synchronization of Input Modes during Multimodal Human-Computer Interaction, In Proceedings of Conference on Human Factors in Computing Systems: CHI '97. Oviatt, S., Coulston, R., Tomko, S., Xiao, B., Bunsford, R. Wesson, M., and Carmichael, L. 2003. 
Toward a Theory of Organized Multimodal Integration Patterns during HumanComputer Interaction. In Proceedings of Fifth International Conference on Multimodal Interfaces, 44-51. Prince, A. and Smolensky, P. 1993. Optimality Theory. Constraint Interaction in Generative Grammar. ROA 537. http://roa.rutgers.edu/view.php3?id=845. Stent, A., J. Dowding, J. M. Gawron, E. O. Bratt, and R. Moore. 1999. The Commandtalk Spoken Dialog System. Proceedings of ACL’99, 183–190. Tsai, W.H. and Fu, K.S. 1979. Error-correcting Isomorphism of Attributed Relational Graphs for Pattern Analysis. IEEE Transactions on Systems, Man and Cybernetics., vol. 9. Wahlster, W., 1998. User and Discourse Models for Multimodal Communication. Intelligent User Interfaces, M. Maybury and W. Wahlster (eds.), 359-370. Wu, L., Oviatt, S., and Cohen, P. 1999. Multimodal Integration – A Statistical View, IEEE Transactions on Multimedia, Vol. 1, No. 4, 334-341.
2004
1
Data-Driven Strategies for an Automated Dialogue System Hilda HARDY, Tomek STRZALKOWSKI, Min WU ILS Institute University at Albany, SUNY 1400 Washington Ave., SS262 Albany, NY 12222 USA hhardy|tomek|minwu@ cs.albany.edu Cristian URSU, Nick WEBB Department of Computer Science University of Sheffield Regent Court, 211 Portobello St. Sheffield S1 4DP UK [email protected], [email protected] Alan BIERMANN, R. Bryce INOUYE, Ashley MCKENZIE Department of Computer Science Duke University P.O. Box 90129, Levine Science Research Center, D101 Durham, NC 27708 USA awb|rbi|[email protected] Abstract We present a prototype natural-language problem-solving application for a financial services call center, developed as part of the Amitiés multilingual human-computer dialogue project. Our automated dialogue system, based on empirical evidence from real call-center conversations, features a datadriven approach that allows for mixed system/customer initiative and spontaneous conversation. Preliminary evaluation results indicate efficient dialogues and high user satisfaction, with performance comparable to or better than that of current conversational travel information systems. 1 Introduction Recently there has been a great deal of interest in improving natural-language human-computer conversation. Automatic speech recognition continues to improve, and dialogue management techniques have progressed beyond menu-driven prompts and restricted customer responses. Yet few researchers have made use of a large body of human-human telephone calls, on which to form the basis of a data-driven automated system. The Amitiés project seeks to develop novel technologies for building empirically induced dialogue processors to support multilingual human-computer interaction, and to integrate these technologies into systems for accessing information and services (http://www.dcs.shef.ac. uk/nlp/amities). Sponsored jointly by the European Commission and the US Defense Advanced Research Projects Agency, the Amitiés Consortium includes partners in both the EU and the US, as well as financial call centers in the UK and France. A large corpus of recorded, transcribed telephone conversations between real agents and customers gives us a unique opportunity to analyze and incorporate features of human-human dialogues into our automated system. (Generic names and numbers were substituted for all personal details in the transcriptions.) This corpus spans two different application areas: software support and (a much smaller size) customer banking. The banking corpus of several hundred calls has been collected first and it forms the basis of our initial multilingual triaging application, implemented for English, French and German (Hardy et al., 2003a); as well as our prototype automatic financial services system, presented in this paper, which completes a variety of tasks in English. The much larger software support corpus (10,000 calls in English and French) is still being collected and processed and will be used to develop the next Amitiés prototype. We observe that for interactions with structured data – whether these data consist of flight information, spare parts, or customer account information – domain knowledge need not be built ahead of time. Rather, methods for handling the data can arise from the way the data are organized. 
Once we know the basic data structures, the transactions, and the protocol to be followed (e.g., establish caller’s identity before exchanging sensitive information); we need only build dialogue models for handling various conversational situations, in order to implement a dialogue system. For our corpus, we have used a modified DAMSL tag set (Allen and Core, 1997) to capture the functional layer of the dialogues, and a frame-based semantic scheme to record the semantic layer (Hardy et al., 2003b). The “frames” or transactions in our domain are common customer-service tasks: VerifyId, ChangeAddress, InquireBalance, Lost/StolenCard and Make Payment. (In this context “task” and “transaction” are synonymous.) Each frame is associated with attributes or slots that must be filled with values in no particular order during the course of the dialogue; for example, account number, name, payment amount, etc. 2 Related Work Relevant human-computer dialogue research efforts include the TRAINS project and the DARPA Communicator program. The classic TRAINS natural-language dialogue project (Allen et al., 1995) is a plan-based system which requires a detailed model of the domain and therefore cannot be used for a wide-ranging application such as financial services. The US DARPA Communicator program has been instrumental in bringing about practical implementations of spoken dialogue systems. Systems developed under this program include CMU’s script-based dialogue manager, in which the travel itinerary is a hierarchical composition of frames (Xu and Rudnicky, 2000). The AT&T mixed-initiative system uses a sequential decision process model, based on concepts of dialog state and dialog actions (Levin et al., 2000). MIT’s Mercury flight reservation system uses a dialogue control strategy based on a set of ordered rules as a mechanism to manage complex interactions (Seneff and Polifroni, 2000). CU’s dialogue manager is event-driven, using a set of hierarchical forms with prompts associated with fields in the forms. Decisions are based not on scripts but on current context (Ward and Pellom, 1999). Our data-driven strategy is similar in spirit to that of CU. We take a statistical approach, in which a large body of transcribed, annotated conversations forms the basis for task identification, dialogue act recognition, and form filling for task completion. 3 System Architecture and Components The Amitiés system uses the Galaxy Communicator Software Infrastructure (Seneff et al., 1998). Galaxy is a distributed, message-based, hub-and-spoke infrastructure, optimized for spoken dialogue systems. Figure 1. Amitiés System Architecture Components in the Amitiés system (Figure 1) include a telephony server, automatic speech recognizer, natural language understanding unit, dialogue manager, database interface server, response generator, and text-to-speech conversion. 3.1 Audio Components Audio components for the Amitiés system are provided by LIMSI. Because acoustic models have not yet been trained, the current demonstrator system uses a Nuance ASR engine and TTS Vocalizer. To enhance ASR performance, we integrated static GSL (Grammar Specification Language) grammar classes provided by Nuance for recognizing several high-frequency items: numbers, dates, money amounts, names and yes-no statements. Training data for the recognizer were collected both from our corpus of human-human dialogues and from dialogues gathered using a text-based version of the human-computer system. 
Using this version we collected around 100 dialogues and annotated important domain-specific information, as in this example: "Hi my name is [fname ; David] [lname ; Oconnor] and my account number is [account ; 278 one nine five]." Next we replaced these annotated entities with grammar classes. We also utilized utterances from the Amitiés banking corpus (Hardy et al., 2002) in which the customer specifies his/her desired task, as well as utterances which constitute common, domain-independent speech acts such as acceptances, rejections, and indications of non-understanding. These were also used for training the task identifier and the dialogue act classifier (Section 3.3.2). The training corpus for the recognizer consists of 1744 utterances totaling around 10,000 words. Using tools supplied by Nuance for building recognition packages, we created two speech recognition components: a British model in the UK and an American model at two US sites. For the text-to-speech synthesizer we used Nuance's Vocalizer 3.0, which supports multiple languages and accents. We integrated the Vocalizer and the ASR using Nuance's speech and telephony API into a Galaxy-compliant server accessible over a telephone line.

3.2 Natural Language Understanding

The goal of the language understanding component is to take the word string output of the ASR module, and identify key semantic concepts relating to the target domain. This is a specialized kind of information extraction application, and as such, we have adapted existing IE technology to this task. We have used a modified version of the ANNIE engine (A Nearly-New IE system; Cunningham et al., 2002; Maynard, 2003). ANNIE is distributed as the default built-in IE component of the GATE framework (Cunningham et al., 2002). GATE is a pure Java-based architecture developed over the past eight years in the University of Sheffield Natural Language Processing group. ANNIE has been used for many language processing applications, in a number of languages both European and non-European. This versatility makes it an attractive proposition for use in a multilingual speech processing project. ANNIE includes customizable components necessary to complete the IE task – tokenizer, gazetteer, sentence splitter, part of speech tagger and a named entity recognizer based on a powerful engine named JAPE (Java Annotation Pattern Engine; Cunningham et al., 2000). Given an utterance from the user, the NLU unit produces both a list of tokens for detecting dialogue acts, an important research goal inside this project, and a frame with the possible named entities specified by our application. We are interested particularly in account numbers, credit card numbers, person names, dates, amounts of money, locations, addresses and telephone numbers. In order to recognize these, we have updated the gazetteer, which works by explicit look-up tables of potential candidates, and modified the rules of the transducer engine, which attempts to match new instances of named entities based on local grammatical context. There are some significant differences between the kind of prose text more typically associated with information extraction, and the kind of text we are expecting to encounter.
Current models of IE rely heavily on punctuation as well as certain orthographic information, such as capitalized words indicating the presence of a name, company or location. We have access to neither of these in the output of the ASR engine, and so had to retune our processors to data which reflected that. In addition, we created new processing resources, such as those required to spot number units and translate them into textual representations of numerical values; for example, to take "twenty thousand one hundred and fourteen pounds", and produce "£20,114". The ability to do this is of course vital for the performance of the system. If none of the main entities can be identified from the token string, we create a list of possible fallback entities, in the hope that partial matching would help narrow the search space. For instance, if a six-digit account number is not identified, then the incomplete number recognized in the utterance is used as a fallback entity and sent to the database server for partial matching. Our robust IE techniques have proved invaluable to the efficiency and spontaneity of our data-driven dialogue system. In a single utterance the user is free to supply several values for attributes, prompted or unprompted, allowing tasks to be completed with fewer dialogue turns.

3.3 Dialogue Manager

The dialogue manager identifies the goals of the conversation and performs interactions to achieve those goals. Several "Frame Agents", implemented within the dialogue manager, handle tasks such as verifying the customer's identity, identifying the customer's desired transaction, and executing those transactions. These range from a simple balance inquiry to the more complex change of address and debit-card payment. The structure of the dialogue manager is illustrated in Figure 2. Rather than depending on a script for the progression of the dialogue, the dialogue manager takes a data-driven approach, allowing the caller to take the initiative. Completing a task depends on identifying that task and filling values in frames, but this may be done in a variety of ways: one at a time, or several at once, and in any order. For example, if the customer identifies himself or herself before stating the transaction, or even if he or she provides several pieces of information in one utterance—transaction, name, account number, payment amount—the dialogue manager is flexible enough to move ahead after these variations. Prompts for attributes, if needed, are not restricted to one at a time, but they are usually combined in the way human agents request them; for example, city and county, expiration date and issue number, birthdate and telephone number.

[Figure 2. Amitiés Dialogue Manager: input from the NLU (token string, language id, named entities) arrives via the hub and is handled by the Dialogue Act Classifier, Task ID, Verify-Caller and Task Execution Frame Agents, which draw on the dialogue history, external domain-specific task files, and the customer database (through the database server) to reach a response decision.]

If the system fails to obtain the necessary values from the user, reprompts are used, but no more than once for any single attribute. For the customer verification task, different attributes may be requested. If the system fails even after reprompts, it will gracefully give up with an explanation such as, "I'm sorry, we have not been able to obtain the information necessary to update your address in our records.
Please hold while I transfer you to a customer service representative."

3.3.1 Task ID Frame Agent

For task identification, the Amitiés team has made use of the data collected in over 500 conversations from a British call center, recorded, transcribed, and annotated. Adapting a vector-based approach reported by Chu-Carroll and Carpenter (1999), the Task ID Frame Agent is domain-independent and automatically trained. Tasks are represented as vectors of terms, built from the utterances requesting them. Some examples of labeled utterances are: "Erm I'd like to cancel the account cover premium that's on my, appeared on my statement" [CancelInsurance] and "Erm just to report a lost card please" [Lost/StolenCard]. The training process proceeds as follows (see the sketch below):
1. Begin with the corpus of transcribed, annotated calls.
2. Document creation: For each transaction, collect the raw text of callers' queries. Yield: one "document" for each transaction (about 14 of these in our corpus).
3. Text processing: Remove stopwords, stem content words, weight terms by frequency. Yield: one "document vector" for each task.
4. Compare queries and documents: Create "query vectors." Obtain a cosine similarity score for each query/document pair. Yield: cosine scores/routing values for each query/document pair.
5. Obtain coefficients for scoring: Use binary logistic regression. Yield: a set of coefficients for each task.
Next, the Task ID Frame Agent is tested on unseen utterances or queries:
1. Begin with one or more user queries.
2. Text processing: Remove stopwords, stem content words, weight terms (constant weights). Yield: "query vectors".
3. Compare each query with each document. Yield: cosine similarity scores.
4. Compute confidence scores (use training coefficients). Yield: confidence scores, representing the system's confidence that the queries indicate the user's choice of a particular transaction.
Tests performed over the entire corpus, 80% of which was used for training and 20% for testing, resulted in a classification accuracy rate of 85% (the correct task is one of the system's top 2 choices). The accuracy rate rises to 93% when we eliminate confusing or lengthy utterances, such as requests for information about payments, statements, and general questions about a customer's account. These can be difficult even for human annotators to classify.
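The training and test procedures above amount to standard vector-space routing followed by a calibration step. The sketch below is an illustrative reimplementation rather than the project's code: the stoplist, stemmer, and exact term weighting are not specified in the text, so scikit-learn's TF-IDF weighting and built-in English stoplist stand in for them, and a per-task binary logistic regression over the cosine scores plays the role of the "coefficients for scoring".

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def build_task_router(calls_by_task):
    """calls_by_task: dict mapping a task label (e.g. 'Lost/StolenCard') to the
    list of caller utterances annotated with that task."""
    tasks = sorted(calls_by_task)
    docs = [" ".join(calls_by_task[t]) for t in tasks]     # one "document" per transaction
    vectorizer = TfidfVectorizer(stop_words="english")     # stand-in stoplist/weighting
    doc_vectors = vectorizer.fit_transform(docs)
    return tasks, vectorizer, doc_vectors

def cosine_scores(query, vectorizer, doc_vectors):
    q = vectorizer.transform([query])
    # rows are L2-normalised by TfidfVectorizer, so the dot product with each
    # task document is the cosine similarity
    return (doc_vectors @ q.T).toarray().ravel()

def train_confidence_models(train_queries, train_labels, tasks, vectorizer, doc_vectors):
    """One binary logistic-regression model per task, mapping the vector of
    cosine scores for a query to the probability that the query requests
    that task."""
    X = np.vstack([cosine_scores(q, vectorizer, doc_vectors) for q in train_queries])
    models = {}
    for task in tasks:
        y = np.array([lab == task for lab in train_labels], dtype=int)
        models[task] = LogisticRegression(max_iter=1000).fit(X, y)
    return models

def route(query, tasks, vectorizer, doc_vectors, models, top_n=2):
    x = cosine_scores(query, vectorizer, doc_vectors).reshape(1, -1)
    conf = {t: models[t].predict_proba(x)[0, 1] for t in tasks}
    return sorted(conf.items(), key=lambda kv: -kv[1])[:top_n]
```

At run time, route() returns the top candidate tasks with their confidences, mirroring the top-2 evaluation criterion quoted above.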
3.3.2 Dialogue Act Classifier

The purpose of the DA Classifier Frame Agent is to identify a caller's utterance as one or more domain-independent dialogue acts. These include Accept, Reject, Non-understanding, Opening, Closing, Backchannel, and Expression. Clearly, it is useful for a dialogue system to be able to identify accurately the various ways a person may say "yes", "no", or "what did you say?" As with the task identifier, we have trained the DA classifier on our corpus of transcribed, labeled human-human calls, and we have used vector-based classification techniques. Two differences from the task identifier are 1) an utterance may have multiple correct classifications, and 2) a different stoplist is necessary. Here we can filter out the usual stops, including speech dysfluencies, proper names, number words, and words with digits; but we need to include words such as yeah, uh-huh, hi, ok, thanks, pardon and sorry. Some examples of DA classification results are shown in Figure 3. For sure, ok, the classifier returns the categories Backchannel, Expression and Accept. If the dialogue manager is looking for either Accept or Reject, it can ignore Backchannel and Expression in order to detect the correct classification. In the case of certainly not, the first word has a strong tendency toward Accept, though both together constitute a Reject act.

[Figure 3. DA Classification examples. For the input "sure, okay" the categories returned are Backchannel, Expression and Accept; for "certainly not" the categories returned are Reject and Accept. The charts show the top four cosine scores and the corresponding confidence scores for each input.]

Our classifier performs well if the utterance is short and falls into one of the selected categories (86% accuracy on the British data); and it has the advantages of automatic training, domain independence, and the ability to capture a great variety of expressions. However, it can be inaccurate when applied to longer utterances, and it is not yet equipped to handle domain-specific assertions, questions, or queries about a transaction.

3.4 Database Manager

Our system identifies users by matching information provided by the caller against a database of user information. It assumes that the speech recognizer will make errors when the caller attempts to identify himself. Therefore perfect matches with the database entries will be rare. Consequently, for each record in the database, we attach a measure of the probability that the record is the target record. Initially, these measures are estimates of the probability that this individual will call. When additional identifying information arrives, the system updates these probabilities using Bayes' rule. Thus, the system might begin with a uniform probability estimate across all database records. If the user identifies herself with a name recognized by the machine as "Smith", the system will appropriately increment the probabilities of all entries with the name "Smith" and all entries that are known to be confused with "Smith" in proportion to their observed rate of substitution. Of course, all records not observed to be so confusable would similarly have their probabilities decreased by Bayes' rule. When enough information has come in to raise the probability for some record above a threshold (in our system 0.99 probability), the system assumes that the caller has been correctly identified. The designer may choose to include a verification dialog, but our decision was to minimize such interactions to shorten the calls. Our error-correcting database system receives tokens with an identification of what field each token should represent. The system processes the tokens serially. Each represents an observation made by the speech recognizer. To process a token, the system examines each record in the database and updates the probability that the record is the target record using Bayes' rule:

  P(rec | obs) = P(obs | rec) × P(rec) / P(obs)

where rec is the event that the record under consideration is the target record. As is common in Bayes' rule calculations, the denominator P(obs) is treated as a scaling factor, and is not calculated explicitly. All probabilities are renormalized at the end of the update of all of the records. P(rec) is the previous estimate of the probability that the record is the target record.
P(obs|rec) is the probability that the recognizer returned the observation that it did given that the target record is the current record under examination. For some of the fields, such as the account number and telephone number, the user responses consist of digits. We collected data on the probability that the speech recognition system we are using mistook one digit for another and calculated the values for P(obs|rec) from the data. For fields involving place names and personal names, the probabilities were estimated. Once a record has been selected (by virtue of its probability being greater than the threshold) the system compares the individual fields of the record with values obtained by the speech recognizer. If the values differ greatly, as measured by their Levenshtein distance, the system returns the field name to the dialogue manager as a candidate for additional verification. If no record meets the threshold probability criterion, the system returns the most probable record to the dialogue manager, along with the fields which have the greatest Levenshtein distance between the recognized and actual values, as candidates for reprompting. Our database contains 100 entries for the system tests described in this paper. We describe the system in a more demanding environment with one million records in Inouye et al. (2004). In that project, we required all information to be entered by spelling the items out so that the vocabulary was limited to the alphabet plus the ten digits. In the current project, with fewer names to deal with, we allowed the complete vocabulary of the domain: names, streets, counties, and so forth.
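A compact sketch of the record-scoring loop and the Levenshtein-based reprompt selection is given below. The field names and the p_obs_given_rec confusion model are placeholders (the paper derives P(obs|rec) from digit-confusion data for numeric fields and estimates it for name fields); only the update, threshold, and edit-distance logic mirror the description above.

```python
def update_record_probs(records, probs, field, observed, p_obs_given_rec, threshold=0.99):
    """One Bayes-rule update of the per-record probabilities after the recognizer
    reports `observed` for `field`.

    records: list of dicts, e.g. {"surname": "Smith", "account": "316714", ...}
    probs:   list of current P(rec) values, same order as records
    p_obs_given_rec(observed, true_value): estimated recognizer confusion
        probability P(obs | rec), e.g. from digit-confusion statistics.
    Returns (new_probs, identified_index_or_None).
    """
    new = [p * p_obs_given_rec(observed, rec[field]) for rec, p in zip(records, probs)]
    total = sum(new) or 1.0          # P(obs) acts only as a normalising constant
    new = [p / total for p in new]
    best = max(range(len(new)), key=new.__getitem__)
    return new, (best if new[best] >= threshold else None)

def reprompt_candidates(record, recognized, max_fields=2):
    """Fields whose recognized value is farthest (by edit distance) from the
    selected record's value are candidates for verification or reprompting."""
    def edit_distance(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]
    dists = {f: edit_distance(str(v), str(recognized.get(f, ""))) for f, v in record.items()}
    return sorted(dists, key=dists.get, reverse=True)[:max_fields]
```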
3.5 Response Generator

Our current English-only system preserves the language-independent features of our original trilingual generator, storing all language- and domain-specific information in separate text files. It is a template-based system, easily modified and extended. The generator constructs utterances according to the dialogue manager's specification of one or more speech acts (prompt, request, confirm, respond, inform, backchannel, accept, reject), repetition numbers, and optional lists of attributes, values, and/or the person's name. As far as possible, we modeled utterances after the human-human dialogues. For a more natural-sounding system, we collected variations of the utterances, which the generator selects at random. Requests, for example, may take one of twelve possible forms:
Request, part 1 of 2: Can you just confirm | Can I have | Can I take | What is | What's | May I have
Request, part 2 of 2: [list of attributes], [person name]? | [list of attributes], please?
Offers to close or continue the dialogue are similarly varied:
Closing offer, part 1 of 2: Is there anything else | Anything else | Is there anything else at all
Closing offer, part 2 of 2: I can do for you today? | I can help you with today? | I can do for you? | I can help you with? | you need today? | you need?

4 Preliminary Evaluation

Ten native speakers of English, 6 female and 4 male, were asked to participate in a preliminary in-lab system evaluation (half in the UK and half in the US). The Amitiés system developers were not among these volunteers. Each made 9 phone calls to the system from behind a closed door, according to scenarios designed to test various customer identities as well as single or multiple tasks. After each call, participants filled out a questionnaire to register their degree of satisfaction with aspects of the interaction. Overall call success was 70%, with 98% successful completions for the VerifyId and 96% for the CheckBalance subtasks (Figure 4). "Failures" were not system crashes but simulated transfers to a human agent. There were 5 user terminations. Average word error rates were 17% for calls that were successfully completed, and 22% for failed calls. Word error rate by user ranged from 11% to 26%.

[Figure 4. Task Completion Rates: Call Success 0.70, VerifyId 0.98, CheckBalance 0.96, LostCard 0.88, MakePayment 0.90, ChangeAddress 0.57, FinishDialogue 0.85.]

Call duration was found to reflect the complexity of each scenario, where complexity is defined as the number of "concepts" needed to complete each task. The following items are judged to be concepts: task identification; values such as first name, last name, house number, street and phone number; and positive or negative responses such as whether a new card is desired. Figures 5 and 6 illustrate the relationship between length of call and task complexity. It should be noted that customer verification, a task performed in every dialogue, requires a minimum of 3 personal details to be verified against a database record, but may require more in the case of recognition errors. The overall average number of turns per dialogue was 18.28. The user spoke an average of 6.89 words per turn and the system 11.42. User satisfaction for each call was assessed by way of a questionnaire containing five statements. These covered the clarity of the instructions, ease of doing the task, how well the system understands the caller, how well the system works, and the caller's enjoyment of the system. Participants rated each on a five-point Likert scale. Summed results showed an average score of 20.45 over all users (range 5–25; higher = stronger agreement).

[Figures 5 and 6. Scenario Complexity (top) and Average Call Duration by Scenario (in seconds). Average call durations for scenarios 1–9 were 85.0, 135.8, 119.3, 88.3, 106.7, 114.0, 48.4, 109.3 and 127.1 seconds respectively.]

Although user satisfaction was high, we were more interested in identifying the major problems for the callers. Users were often frustrated by recognition failures and/or unsuccessful attempts to capture values such as a new street address, county, or phone number. Sometimes the system failed to determine that the user was finished. Because the system is designed to be non-restrictive in accepting users' input, misidentifications were common. One user spoke for many by saying, "There were times when the system would ask me to repeat information I had not yet given to it." We plan to revise our strategy so that we are a little more cautious in our reprompts. Occasionally, the system misidentified the user's desired task and had difficulty backing off gracefully and starting the correct task. We are working on improving our recovery strategies for these cases. The following transcription of an interaction with our system illustrates the efficiency made possible when the data supplied by the caller drives the conversation: AMITIÉS: Hello, this is Am-ih-tee-eh! What can I do for you? USER: yes this is Stella Lang. My card number is three one six seven one four. Can you tell me the balance on my account please? AMITIÉS: Can you just confirm the city and county?
USER: uh yes it’s London, Middlesex AMITIÉS: I’m sorry, I didn’t understand what you said. Can you repeat, please? USER: London, Middlesex AMITIÉS: All right, your balance is 1753 pounds and 20 pence. Anything else I can do for you? USER: yes I'd like to make a payment of fifty three pounds with my debit card please AMITIÉS: Can I have the debit card number, Miss Lang? USER: sure, it's two eight seven four six six four nine five eight nine five three eight nine one, and it expires October two thousand six AMITIÉS: What's the issue number, please? USER: one AMITIÉS: Ok, Miss Lang, I'll send in the payment amount of 53 pounds. Did I get that right? USER: yes, what's my new balance please? AMITIÉS: Ok, that's gone through. Your payment will be credited overnight and will reflect on your next statement. Your balance is 1700 pounds and 20 pence. Anything else I can do for you today? USER: no, that's it thanks AMITIÉS: Thanks very much for your call, goodbye. USER: bye 5 Discussion, Future Work The preliminary evaluation reported here indicates promise for an automated dialogue system such as ours, which incorporates robust techniques for information extraction, record matching, task identification, dialogue act classification, and an overall data-driven strategy. Task duration and number of turns per dialogue both appear to indicate greater efficiency and corresponding user satisfaction than many other similar systems. In the DARPA Communicator evaluation, for example, between 60 and 79 calls were made to each of 8 participating sites (Walker, et al., 2001, 2002). A sample scenario for a domestic round-trip flight contained 8 concepts (airline, departure city, state, date, etc.). The average duration for such a call was over 300 seconds; whereas our overall average was 104 seconds. ASR accuracy rates in 2001 were about 60% and 75%, for airline itineraries not completed and completed; and task completion rates were 56%. Our average number of user words per turn, 6.89, is also higher than that reported for Communicator systems. This number seems to reflect lengthier responses to open prompts, responses to system requests for multiple attributes, and greater user initiative. We plan to port the system to a new domain: from telephone banking to information-technology support. As part of this effort we are again collecting data from real human-human calls. For advanced speech recognition, we hope to train our ASR on new acoustic data. We also plan to expand our dialogue act classification so that the system can recognize more types of acts, and to improve our classification reliability. 6 Acknowledgements This paper is based on work supported in part by the European Commission under the 5th Framework IST/HLT Programme, and by the US Defense Advanced Research Projects Agency. References J. Allen and M. Core. 1997. Draft of DAMSL: Dialog Act Markup in Several Layers. http://www.cs.rochester.edu/research/cisd/resour ces/damsl/. J. Allen, L. K. Schubert, G. Ferguson, P. Heeman, Ch. L. Hwang, T. Kato, M. Light, N. G. Martin, B. W. Miller, M. Poesio, and D. R. Traum. 1995. The TRAINS Project: A Case Study in Building a Conversational Planning Agent. Journal of Experimental and Theoretical AI, 7 (1995), 7–48. Amitiés, http://www.dcs.shef.ac.uk/nlp/amities. J. Chu-Carroll and B. Carpenter. 1999. VectorBased Natural Language Call Routing. Computational Linguistics, 25 (3): 361–388. H. Cunningham, D. Maynard, K. Bontcheva, V. Tablan. 2002. 
GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications. Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL'02), Philadelphia, Pennsylvania. H. Cunningham and D. Maynard and V. Tablan. 2000. JAPE: a Java Annotation Patterns Engine (Second Edition). Technical report CS--00--10, University of Sheffield, Department of Computer Science. DARPA, http://www.darpa.mil/iao/Communicator.htm. H. Hardy, K. Baker, L. Devillers, L. Lamel, S. Rosset, T. Strzalkowski, C. Ursu and N. Webb. 2002. Multi-Layer Dialogue Annotation for Automated Multilingual Customer Service. Proceedings of the ISLE Workshop on Dialogue Tagging for Multi-Modal Human Computer Interaction, Edinburgh, Scotland. H. Hardy, T. Strzalkowski and M. Wu. 2003a. Dialogue Management for an Automated Multilingual Call Center. Research Directions in Dialogue Processing, Proceedings of the HLTNAACL 2003 Workshop, Edmonton, Alberta, Canada. H. Hardy, K. Baker, H. Bonneau-Maynard, L. Devillers, S. Rosset and T. Strzalkowski. 2003b. Semantic and Dialogic Annotation for Automated Multilingual Customer Service. Eurospeech 2003, Geneva, Switzerland. R. B. Inouye, A. Biermann and A. Mckenzie. 2004. Caller Identification from Spelled-Out Personal Data Using a Database for Error Correction. Duke University Internal Report. E. Levin, S. Narayanan, R. Pieraccini, K. Biatov, E. Bocchieri, G. Di Fabbrizio, W. Eckert, S. Lee, A. Pokrovsky, M. Rahim, P. Ruscitti, and M. Walker. 2000. The AT&T-DARPA Communicator Mixed-Initiative Spoken Dialog System. ICSLP 2000. D. Maynard. 2003. Multi-Source and Multilingual Information Extraction. Expert Update. S. Seneff, E. Hurley, R. Lau, C. Pao, P. Schmid, and V. Zue. 1998. Galaxy-II: A Reference Architecture for Conversational System Development. ICSLP 98, Sydney, Australia. S. Seneff and J. Polifroni. 2000. Dialogue Management in the Mercury Flight Reservation System. Satellite Dialogue Workshop, ANLPNAACL, Seattle, Washington. M. Walker, J. Aberdeen, J. Boland, E. Bratt, J. Garofolo, L. Hirschman, A. Le, S. Lee, S. Narayanan, K. Papineni, B. Pellom, J. Polifroni, A. Potamianos, P. Prabhu, A. Rudnicky, G. Sanders, S. Seneff, D. Stallard and S. Whittaker. 2001. DARPA Communicator Dialog Travel Planning Systems: The June 2000 Data Collection. Eurospeech 2001. M. Walker, A. Rudnicky, J. Aberdeen, E. Bratt, J. Garofolo, H. Hastie, A. Le, B. Pellom, A. Potamianos, R. Passonneau, R. Prasad, S. Roukos, G. Sanders, S. Seneff and D. Stallard. 2002. DARPA Communicator Evaluation: Progress from 2000 to 2001. ICSLP 2002. W. Ward and B. Pellom. 1999. The CU Communicator System. IEEE ASRU, pp. 341– 344. W. Xu and A. Rudnicky. 2000. Task-based Dialog Management Using an Agenda. ANLP/NAACL Workshop on Conversational Systems, pp. 42– 47.
2004
10
Trainable Sentence Planning for Complex Information Presentation in Spoken Dialog Systems Amanda Stent Stony Brook University Stony Brook, NY 11794 U.S.A. [email protected] Rashmi Prasad University of Pennsylvania Philadelphia, PA 19104 U.S.A. [email protected] Marilyn Walker University of Sheffield Sheffield S1 4DP U.K. M.A.Walker@sheffield.ac.uk Abstract A challenging problem for spoken dialog systems is the design of utterance generation modules that are fast, flexible and general, yet produce high quality output in particular domains. A promising approach is trainable generation, which uses general-purpose linguistic knowledge automatically adapted to the application domain. This paper presents a trainable sentence planner for the MATCH dialog system. We show that trainable sentence planning can produce output comparable to that of MATCH’s template-based generator even for quite complex information presentations. 1 Introduction One very challenging problem for spoken dialog systems is the design of the utterance generation module. This challenge arises partly from the need for the generator to adapt to many features of the dialog domain, user population, and dialog context. There are three possible approaches to generating system utterances. The first is templatebased generation, used in most dialog systems today. Template-based generation enables a programmer without linguistic training to program a generator that can efficiently produce high quality output specific to different dialog situations. Its drawbacks include the need to (1) create templates anew by hand for each application; (2) design and maintain a set of templates that work well together in many dialog contexts; and (3) repeatedly encode linguistic constraints such as subject-verb agreement. The second approach is natural language generation (NLG), which divides generation into: (1) text (or content) planning, (2) sentence planning, and (3) surface realization. NLG promises portability across domains and dialog contexts by using general rules for each generation module. However, the quality of the output for a particular domain, or a particular dialog context, may be inferior to that of a templatebased system unless domain-specific rules are developed or general rules are tuned for the particular domain. Furthermore, full NLG may be too slow for use in dialog systems. A third, more recent, approach is trainable generation: techniques for automatically training NLG modules, or hybrid techniques that adapt NLG modules to particular domains or user groups, e.g. (Langkilde, 2000; Mellish, 1998; Walker, Rambow and Rogati, 2002). Open questions about the trainable approach include (1) whether the output quality is high enough, and (2) whether the techniques work well across domains. For example, the training method used in SPoT (Sentence Planner Trainable), as described in (Walker, Rambow and Rogati, 2002), was only shown to work in the travel domain, for the information gathering phase of the dialog, and with simple content plans involving no rhetorical relations. This paper describes trainable sentence planning for information presentation in the MATCH (Multimodal Access To City Help) dialog system (Johnston et al., 2002). 
We provide evidence that the trainable approach is feasible by showing (1) that the training technique used for SPoT can be extended to a new domain (restaurant information); (2) that this technique, previously used for information-gathering utterances, can be used for information presentations, namely recommendations and comparisons; and (3) that the quality of the output is comparable to that of a template-based generator previously developed and experimentally evaluated with MATCH users (Walker et al., 2002; Stent et al., 2002). Section 2 describes SPaRKy (Sentence Planning with Rhetorical Knowledge), an extension of SPoT that uses rhetorical relations. SPaRKy consists of a randomized sentence plan generator (SPG) and a trainable sentence plan ranker (SPR); these are described in Sections 3 and 4. Section 5 presents the results of two experiments. The first experiment shows that given a content plan such as that in Figure 1, SPaRKy can select sentence plans that communicate the desired rhetorical relations, are significantly better than a randomly selected sentence plan, and are on average less than 10% worse than a sentence plan ranked highest by human judges. The second experiment shows that the quality of SPaRKy's output is comparable to that of MATCH's template-based generator. We sum up in Section 6.

Figure 1: A content plan for a recommendation for a restaurant in midtown Manhattan
strategy: recommend
items: Chanpen Thai
relations: justify(nuc:1;sat:2); justify(nuc:1;sat:3); justify(nuc:1;sat:4)
content:
1. assert(best(Chanpen Thai))
2. assert(has-att(Chanpen Thai, decor(decent)))
3. assert(has-att(Chanpen Thai, service(good)))
4. assert(has-att(Chanpen Thai, cuisine(Thai)))

Figure 2: A content plan for a comparison between restaurants in midtown Manhattan
strategy: compare3
items: Above, Carmine's
relations: elaboration(1;2); elaboration(1;3); elaboration(1;4); elaboration(1;5); elaboration(1;6); elaboration(1;7); contrast(2;3); contrast(4;5); contrast(6;7)
content:
1. assert(exceptional(Above, Carmine's))
2. assert(has-att(Above, decor(good)))
3. assert(has-att(Carmine's, decor(decent)))
4. assert(has-att(Above, service(good)))
5. assert(has-att(Carmine's, service(good)))
6. assert(has-att(Above, cuisine(New American)))
7. assert(has-att(Carmine's, cuisine(italian)))

2 SPaRKy Architecture Information presentation in the MATCH system focuses on user-tailored recommendations and comparisons of restaurants (Walker et al., 2002). Following the bottom-up approach to text-planning described in (Marcu, 1997; Mellish, 1998), each presentation consists of a set of assertions about a set of restaurants and a specification of the rhetorical relations that hold between them. Example content plans are shown in Figures 1 and 2. The job of the sentence planner is to choose linguistic resources to realize a content plan and then rank the resulting alternative realizations. Figures 3 and 4 show alternative realizations for the content plans in Figures 1 and 2.

Figure 3: Some alternative sentence plan realizations for the recommendation in Figure 1. H = Humans' score. SPR = SPR's score.
Alt 2 (H 3, SPR .28): Chanpen Thai, which is a Thai restaurant, has decent decor. It has good service. It has the best overall quality among the selected restaurants.
Alt 5 (H 2.5, SPR .14): Since Chanpen Thai is a Thai restaurant, with good service, and it has decent decor, it has the best overall quality among the selected restaurants.
Alt 6 (H 4, SPR .70): Chanpen Thai, which is a Thai restaurant, with decent decor and good service, has the best overall quality among the selected restaurants.
Figure 4: Some of the alternative sentence plan realizations for the comparison in Figure 2. H = Humans' score. SPR = SPR's score. NR = Not generated or ranked.
Alt 11 (H 2, SPR .73): Above and Carmine's offer exceptional value among the selected restaurants. Above, which is a New American restaurant, with good decor, has good service. Carmine's, which is an Italian restaurant, with good service, has decent decor.
Alt 12 (H 2.5, SPR .50): Above and Carmine's offer exceptional value among the selected restaurants. Above has good decor, and Carmine's has decent decor. Above and Carmine's have good service. Above is a New American restaurant. On the other hand, Carmine's is an Italian restaurant.
Alt 13 (H 3, SPR .67): Above and Carmine's offer exceptional value among the selected restaurants. Above is a New American restaurant. It has good decor. It has good service. Carmine's, which is an Italian restaurant, has decent decor and good service.
Alt 20 (H 2.5, SPR .49): Above and Carmine's offer exceptional value among the selected restaurants. Carmine's has decent decor but Above has good decor, and Carmine's and Above have good service. Carmine's is an Italian restaurant. Above, however, is a New American restaurant.
Alt 25 (H NR, SPR NR): Above and Carmine's offer exceptional value among the selected restaurants. Above has good decor. Carmine's is an Italian restaurant. Above has good service. Carmine's has decent decor. Above is a New American restaurant. Carmine's has good service.

The architecture of the spoken language generation module in MATCH is shown in Figure 5. The dialog manager sends a high-level communicative goal to the SPUR text planner, which selects the content to be communicated using a user model and brevity constraints (see (Walker et al., 2002)).

Figure 5: A dialog system with a spoken language generator (dialogue manager with communicative goals, SPUR text planner deciding "what to say", sentence planner, surface realizer, prosody assigner and speech synthesizer producing the system utterance, i.e. "how to say it").

The output is a content plan for a recommendation or comparison such as those in Figures 1 and 2. SPaRKy, the sentence planner, gets the content plan, and then a sentence plan generator (SPG) generates one or more sentence plans (Figure 7) and a sentence plan ranker (SPR) ranks the generated plans. In order for the SPG to avoid generating sentence plans that are clearly bad, a content-structuring module first finds one or more ways to linearly order the input content plan using principles of entity-based coherence based on rhetorical relations (Knott et al., 2001). It outputs a set of text plan trees (tp-trees), consisting of a set of speech acts to be communicated and the rhetorical relations that hold between them. For example, the two tp-trees in Figure 6 are generated for the content plan in Figure 2. Sentence plans such as alternative 25 in Figure 4 are avoided; it is clearly worse than alternatives 12, 13 and 20 since it neither combines information based on a restaurant entity (e.g. Babbo) nor on an attribute (e.g. decor). The top ranked sentence plan output by the SPR is input to the RealPro surface realizer which produces a surface linguistic utterance (Lavoie and Rambow, 1997). A prosody assignment module uses the prior levels of linguistic representation to determine the appropriate prosody for the utterance, and passes a marked-up string to the text-to-speech module.
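The content plans in Figures 1 and 2 are simple enough that a small sketch can make their structure concrete. The sketch below is ours, not SPaRKy code; all class and field names are illustrative. It represents a plan as numbered assertions plus rhetorical relations over assertion indices, and transcribes the recommendation of Figure 1.

# A minimal sketch (not the SPaRKy implementation) of a content plan: numbered
# assertions plus rhetorical relations over assertion indices.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Assertion:
    idx: int                  # 1-based index used by the relation list
    pred: str                 # e.g. "best", "has-att", "exceptional"
    args: Tuple[str, ...]     # e.g. ("Chanpen Thai", "decor(decent)")

@dataclass
class ContentPlan:
    strategy: str                                   # "recommend", "compare2", "compare3"
    items: List[str]                                # the restaurants being talked about
    content: List[Assertion] = field(default_factory=list)
    relations: List[Tuple[str, int, int]] = field(default_factory=list)  # (relation, nucleus, satellite)

# The recommendation of Figure 1, transcribed into this representation.
recommend_plan = ContentPlan(
    strategy="recommend",
    items=["Chanpen Thai"],
    content=[
        Assertion(1, "best", ("Chanpen Thai",)),
        Assertion(2, "has-att", ("Chanpen Thai", "decor(decent)")),
        Assertion(3, "has-att", ("Chanpen Thai", "service(good)")),
        Assertion(4, "has-att", ("Chanpen Thai", "cuisine(Thai)")),
    ],
    relations=[("justify", 1, 2), ("justify", 1, 3), ("justify", 1, 4)],
)
print(recommend_plan.strategy, len(recommend_plan.content), "assertions")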
3 Sentence Plan Generation As in SPoT, the basis of the SPG is a set of clause-combining operations that operate on tp-trees and incrementally transform the elementary predicate-argument lexico-structural representations (called DSyntS (Melcuk, 1988)) associated with the speech-acts on the leaves of the tree. The operations are applied in a bottom-up left-to-right fashion and the resulting representation may contain one or more sentences. The application of the operations yields two parallel structures: (1) a sentence plan tree (sp-tree), a binary tree with leaves labeled by the assertions from the input tp-tree, and interior nodes labeled with clause-combining operations; and (2) one or more DSyntS trees (d-trees) which reflect the parallel operations on the predicate-argument representations. We generate a random sample of possible sentence plans for each tp-tree, up to a prespecified number of sentence plans, by randomly selecting among the operations according to a probability distribution that favors preferred operations.[1] The choice of operation is further constrained by the rhetorical relation that relates the assertions to be combined, as in other work e.g. (Scott and de Souza, 1990). In the current work, three RST rhetorical relations (Mann and Thompson, 1987) are used in the content planning phase to express the relations between assertions: the justify relation for recommendations, and the contrast and elaboration relations for comparisons. We added another relation to be used during the content-structuring phase, called infer, which holds for combinations of speech acts for which there is no rhetorical relation expressed in the content plan, as in (Marcu, 1997). By explicitly representing the discourse structure of the information presentation, we can generate information presentations with considerably more internal complexity than those generated in (Walker, Rambow and Rogati, 2002) and eliminate those that violate certain coherence principles, as described in Section 2. The clause-combining operations are general operations similar to aggregation operations used in other research (Rambow and Korelsky, 1992; Danlos, 2000). The operations and the constraints on their use are described below.

[1] Although the probability distribution here is handcrafted based on assumed preferences for operations such as merge, relative-clause and with-reduction, it might also be possible to learn this probability distribution from the data by training in two phases.

Figure 6: Two tp-trees for alternative 13 in Figure 4 (one orders the assertions as attribute-wise contrast pairs elaborating the exceptional-value assertion; the other groups them by restaurant entity with infer relations).

merge applies to two clauses with identical matrix verbs and all but one identical arguments. The clauses are combined and the nonidentical arguments coordinated. For example, merge(Above has good service; Carmine's has good service) yields Above and Carmine's have good service. merge applies only for the relations infer and contrast.
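As a rough illustration of the merge test just described, the sketch below (our own simplification, not the SPaRKy implementation) operates on clauses reduced to a matrix verb plus an ordered argument list rather than full DSyntS trees; subject-verb agreement is assumed to be handled downstream by the surface realizer.

from typing import List, Optional

def merge(verb1: str, args1: List[str], verb2: str, args2: List[str],
          relation: str) -> Optional[str]:
    # merge is licensed only for the infer and contrast relations.
    if relation not in ("infer", "contrast"):
        return None
    # The matrix verbs must be identical and all but one argument must match.
    if verb1 != verb2 or len(args1) != len(args2):
        return None
    diffs = [i for i, (a, b) in enumerate(zip(args1, args2)) if a != b]
    if len(diffs) != 1:
        return None
    coordinated = list(args1)
    coordinated[diffs[0]] = f"{args1[diffs[0]]} and {args2[diffs[0]]}"
    # Arguments are assumed ordered as [subject, complements...]; agreement
    # ("has" vs "have") is left to the surface realizer in the real pipeline.
    return " ".join([coordinated[0], verb1] + coordinated[1:])

print(merge("has", ["Above", "good service"],
            "has", ["Carmine's", "good service"], "infer"))
# -> "Above and Carmine's has good service" (agreement fixed downstream)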
with-reduction is treated as a kind of "verbless" participial clause formation in which the participial clause is interpreted with the subject of the unreduced clause. For example, with-reduction(Above is a New American restaurant; Above has good decor) yields Above is a New American restaurant, with good decor. with-reduction uses two syntactic constraints: (a) the subjects of the clauses must be identical, and (b) the clause that undergoes the participial formation must have a have-possession predicate. In the example above, for instance, the Above is a New American restaurant clause cannot undergo participial formation since the predicate is not one of have-possession. with-reduction applies only for the relations infer and justify. relative-clause combines two clauses with identical subjects, using the second clause to relativize the first clause's subject. For example, relative-clause(Chanpen Thai is a Thai restaurant, with decent decor and good service; Chanpen Thai has the best overall quality among the selected restaurants) yields Chanpen Thai, which is a Thai restaurant, with decent decor and good service, has the best overall quality among the selected restaurants. relative-clause also applies only for the relations infer and justify. cue-word inserts a discourse connective (one of since, however, while, and, but, and on the other hand) between the two clauses to be combined. cue-word conjunction combines two distinct clauses into a single sentence with a coordinating or subordinating conjunction (e.g. Above has decent decor BUT Carmine's has good decor), while cue-word insertion inserts a cue word at the start of the second clause, producing two separate sentences (e.g. Carmine's is an Italian restaurant. HOWEVER, Above is a New American restaurant). The choice of cue word is dependent on the rhetorical relation holding between the clauses. Finally, period applies to two clauses to be treated as two independent sentences. Note that a tp-tree can have very different realizations, depending on the operations of the SPG. For example, the second tp-tree in Figure 6 yields both Alt 11 and Alt 13 in Figure 4. However, Alt 13 is more highly rated than Alt 11. The sp-tree and d-tree produced by the SPG for Alt 13 are shown in Figures 7 and 8.

Figure 7: Sentence plan tree (sp-tree) for alternative 13 in Figure 4 (interior nodes carry clause-combining operation labels such as PERIOD_elaboration, PERIOD_contrast, RELATIVE_CLAUSE_infer, PERIOD_infer and MERGE_infer; leaves are the assertions).

Figure 8: Dependency tree (d-tree) for alternative 13 in Figure 4.

The composite labels on the interior nodes of the sp-tree indicate the clause-combining relation selected to communicate the specified rhetorical relation. The d-tree for Alt 13 in Figure 8 shows that the SPG treats the period operation as part of the lexico-structural representation for the d-tree. After sentence planning, the d-tree is split into multiple d-trees at period nodes; these are sent to the RealPro surface realizer.
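The licensing constraints spelled out above (merge for infer and contrast; with-reduction and relative-clause for infer and justify) and the preference-weighted random sampling of footnote 1 could be encoded along the following lines. This is a hedged sketch: the entries for cue-word and period, and all of the weights, are our assumptions rather than published SPaRKy settings.

import random

# Which relations license which operation. The constraints for merge,
# with-reduction and relative-clause are stated in the text; treating cue-word
# and period as generally available is our assumption.
ALLOWED = {
    "merge":           {"infer", "contrast"},
    "with-reduction":  {"infer", "justify"},
    "relative-clause": {"infer", "justify"},
    "cue-word":        {"infer", "justify", "contrast", "elaboration"},
    "period":          {"infer", "justify", "contrast", "elaboration"},
}

# Hand-set preference weights favouring merge, relative-clause and with-reduction
# (cf. footnote 1); the numbers are invented for illustration.
PREFERENCE = {"merge": 4, "with-reduction": 3, "relative-clause": 3,
              "cue-word": 2, "period": 1}

def sample_operation(relation: str) -> str:
    candidates = [op for op, rels in ALLOWED.items() if relation in rels]
    weights = [PREFERENCE[op] for op in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

# Two assertions related by contrast can be combined by merge, cue-word or
# period, but never by with-reduction or relative-clause.
print(sample_operation("contrast"))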
Separately, the SPG also handles referring expression generation by converting proper names to pronouns when they appear in the previous utterance. The rules are applied locally, across adjacent sequences of utterances (Brennan et al., 1987). Referring expressions are manipulated in the d-trees, either intrasententially during the creation of the sp-tree, or intersententially, if the full sp-tree contains any period operations. The third and fourth sentences for Alt 13 in Figure 4 show the conversion of a named restaurant (Carmine's) to a pronoun. 4 Training the Sentence Plan Ranker The SPR takes as input a set of sp-trees generated by the SPG and ranks them. The SPR's rules for ranking sp-trees are learned from a labeled set of sentence-plan training examples using the RankBoost algorithm (Schapire, 1999). Examples and Feedback: To apply RankBoost, a set of human-rated sp-trees are encoded in terms of a set of features. We started with a set of 30 representative content plans for each strategy. The SPG produced as many as 20 distinct sp-trees for each content plan. The sentences, realized by RealPro from these sp-trees, were then rated by two expert judges on a scale from 1 to 5, and the ratings averaged. Each sp-tree was an example input for RankBoost, with each corresponding rating its feedback. Features used by RankBoost: RankBoost requires each example to be encoded as a set of real-valued features (binary features have values 0 and 1). A strength of RankBoost is that the set of features can be very large. We used 7024 features for training the SPR. These features count the number of occurrences of certain structural configurations in the sp-trees and the d-trees, in order to declaratively capture decisions made by the randomized SPG, as in (Walker, Rambow and Rogati, 2002). The features were automatically generated using feature templates. For this experiment, we use two classes of feature: (1) Rule-features: These features are derived from the sp-trees and represent the ways in which merge, infer and cue-word operations are applied to the tp-trees. These feature names start with "rule". (2) Sent-features: These features are derived from the DSyntSs, and describe the deep-syntactic structure of the utterance, including the chosen lexemes. As a result, some may be domain specific. These feature names are prefixed with "sent". We now describe the feature templates used in the discovery process. Three templates were used for both sp-tree and d-tree features; two were used only for sp-tree features. Local feature templates record structural configurations local to a particular node (its ancestors, daughters etc.). Global feature templates, which are used only for sp-tree features, record properties of the entire sp-tree. We discard features that occur fewer than 10 times to avoid those specific to particular text plans. There are four types of local feature template: traversal features, sister features, ancestor features and leaf features. Local feature templates are applied to all nodes in an sp-tree or d-tree (except that the leaf feature is not used for d-trees); the value of the resulting feature is the number of occurrences of the described configuration in the tree.
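A minimal sketch of how such training examples might be assembled is given below: each sp-tree is encoded as a sparse map of feature counts and paired with the judges' averaged rating as its RankBoost feedback. The configuration counting here is a simplified stand-in for the traversal, sister, ancestor, leaf and global templates detailed in the next paragraphs; all names are ours.

from collections import Counter
from typing import Dict, List, Tuple

class Node:
    def __init__(self, label: str, children: List["Node"] = None):
        self.label = label
        self.children = children or []

def count_local_configurations(root: Node, prefix: str) -> Counter:
    """Count simple node and node->daughters configurations, standing in for the
    full traversal/sister/ancestor/leaf templates described below."""
    counts: Counter = Counter()
    stack = [root]
    while stack:
        node = stack.pop()
        counts[f"{prefix} node {node.label}"] += 1
        if node.children:
            daughters = " ".join(c.label for c in node.children)
            counts[f"{prefix} daughters {node.label} -> {daughters}"] += 1
        stack.extend(node.children)
    return counts

def make_example(sp_tree: Node, ratings: List[float]) -> Tuple[Dict[str, float], float]:
    features = dict(count_local_configurations(sp_tree, "rule"))
    feedback = sum(ratings) / len(ratings)      # average of the two judges' ratings
    return features, feedback

# A toy sp-tree for illustration (labels follow Figure 7's conventions).
toy = Node("PERIOD_elaboration", [
    Node("assert-com-list_exceptional"),
    Node("PERIOD_contrast", [Node("MERGE_infer"), Node("RELATIVE_CLAUSE_infer")]),
])
print(make_example(toy, [3.0, 3.0]))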
For each node in the tree, traversal features record the preorder traversal of the subtree rooted at that node, for all subtrees of all depths. An example is the feature "rule traversal assert-com-list exceptional" (with value 1) of the tree in Figure 7. Sister features record all consecutive sister nodes. An example is the feature "rule sisters PERIOD infer RELATIVE CLAUSE infer" (with value 1) of the tree in Figure 7. For each node in the tree, ancestor features record all the initial subpaths of the path from that node to the root. An example is the feature "rule ancestor PERIOD contrast*PERIOD infer" (with value 1) of the tree in Figure 7. Finally, leaf features record all initial substrings of the frontier of the sp-tree. For example, the sp-tree of Figure 7 has value 1 for the feature "leaf #assert-com-list exceptional#assert-com-cuisine". Global features apply only to the sp-tree. They record, for each sp-tree and for each clause-combining operation labeling a non-frontier node, (1) the minimal number of leaves dominated by a node labeled with that operation in that tree (MIN); (2) the maximal number of leaves dominated by a node labeled with that operation (MAX); and (3) the average number of leaves dominated by a node labeled with that operation (AVG). For example, the sp-tree in Figure 7 has value 3 for "PERIOD infer max", value 2 for "PERIOD infer min" and value 2.5 for "PERIOD infer avg". 5 Experimental Results We report two sets of experiments. The first experiment tests the ability of the SPR to select a high quality sentence plan from a population of sentence plans randomly generated by the SPG. Because the discriminatory power of the SPR is best tested by the largest possible population of sentence plans, we use 2-fold cross validation for this experiment. The second experiment compares SPaRKy to template-based generation. Cross Validation Experiment: We repeatedly tested SPaRKy on the half of the corpus of 1756 sp-trees held out as test data for each fold. The evaluation metric is the human-assigned score for the variant that was rated highest by SPaRKy for each text plan for each task/user combination. We evaluated SPaRKy on the test sets by comparing three data points for each text plan: HUMAN (the score of the top-ranked sentence plan); SPARKY (the score of the SPR's selected sentence); and RANDOM (the score of a sentence plan randomly selected from the alternate sentence plans). We report results separately for comparisons between two entities and among three or more entities. These two types of comparison are generated using different strategies in the SPG, and can produce text that is very different both in terms of length and structure.

Table 1: Summary of Recommend, Compare2 and Compare3 results (N = 180)
Strategy    System   Min  Max  Mean  S.D.
Recommend   SPaRKy   2.0  5.0  3.6   .71
            HUMAN    2.5  5.0  3.9   .55
            RANDOM   1.5  5.0  2.9   .88
Compare2    SPaRKy   2.5  5.0  3.9   .71
            HUMAN    2.5  5.0  4.4   .54
            RANDOM   1.0  5.0  2.9   1.3
Compare3    SPaRKy   1.5  4.5  3.4   .63
            HUMAN    3.0  5.0  4.0   .49
            RANDOM   1.0  4.5  2.7   1.0

Table 1 summarizes the difference between SPaRKy, HUMAN and RANDOM for recommendations, comparisons between two entities and comparisons between three or more entities. For all three presentation types, paired t-tests comparing SPaRKy to HUMAN and to RANDOM showed that SPaRKy was significantly better than RANDOM (df = 59, p < .001) and significantly worse than HUMAN (df = 59, p < .001). This demonstrates that the use of a trainable sentence planner can lead to sentence plans that are significantly better than baseline (RANDOM), with less human effort than programming templates.
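The comparison of the HUMAN, SPARKY and RANDOM data points can be pictured with the following sketch (ours; the data are invented and scipy is assumed to be available for the paired t-test): for each text plan, HUMAN takes the rating of the best human-rated variant, SPARKY the rating of the variant the SPR ranks first, and RANDOM the rating of a randomly chosen variant.

import random
from scipy import stats

def evaluate(plans):
    """plans: for each text plan, a list of (human_score, spr_score) variants."""
    human, sparky, rand = [], [], []
    for variants in plans:
        human.append(max(h for h, _ in variants))                 # HUMAN
        sparky.append(max(variants, key=lambda v: v[1])[0])       # SPARKY
        rand.append(random.choice(variants)[0])                   # RANDOM
    return human, sparky, rand

# Invented toy ratings for three text plans.
plans = [[(3.0, .28), (2.5, .14), (4.0, .70)],
         [(2.0, .73), (2.5, .50), (3.0, .67), (2.5, .49)],
         [(3.5, .20), (2.0, .60), (4.5, .90)]]
human, sparky, rand = evaluate(plans)
t, p = stats.ttest_rel(sparky, rand)       # paired t-test, as in Table 1's analysis
print(human, sparky, rand, t, p)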
Comparison with template generation: For each content plan input to SPaRKy, the judges also rated the output of a template-based generator for MATCH. This template-based generator performs text planning and sentence planning (the focus of the current paper), including some discourse cue insertion, clause combining and referring expression generation; the templates themselves are described in (Walker et al., 2002). Because the templates are highly tailored to this domain, this generator can be expected to perform well. Example template-based and SPaRKy outputs for a comparison between three or more items are shown in Figure 9.

Figure 9: Comparisons between 3 or more items. H = Humans' score.
Template (H 4.5): Among the selected restaurants, the following offer exceptional overall value. Uguale's price is 33 dollars. It has good decor and very good service. It's a French, Italian restaurant. Da Andrea's price is 28 dollars. It has good decor and very good service. It's an Italian restaurant. John's Pizzeria's price is 20 dollars. It has mediocre decor and decent service. It's an Italian, Pizza restaurant.
SPaRKy (H 4): Da Andrea, Uguale, and John's Pizzeria offer exceptional value among the selected restaurants. Da Andrea is an Italian restaurant, with very good service, it has good decor, and its price is 28 dollars. John's Pizzeria is an Italian, Pizza restaurant. It has decent service. It has mediocre decor. Its price is 20 dollars. Uguale is a French, Italian restaurant, with very good service. It has good decor, and its price is 33 dollars.

Table 2: Summary of template-based generation results (N = 180)
Strategy    System    Min  Max   Mean   S.D.
Recommend   Template  2.5  5.0   4.22   0.74
            SPaRKy    2.5  4.5   3.57   0.59
            HUMAN     4.0  5.0   4.37   0.37
Compare2    Template  2.0  5.0   3.62   0.75
            SPaRKy    2.5  4.75  3.87   0.52
            HUMAN     4.0  5.0   4.62   0.39
Compare3    Template  1.0  5.0   4.08   1.23
            SPaRKy    2.5  4.25  3.375  0.38
            HUMAN     4.0  5.0   4.63   0.35

Table 2 shows the mean HUMAN scores for the template-based sentence planning. A paired t-test comparing HUMAN and template-based scores showed that HUMAN was significantly better than template-based sentence planning only for compare2 (df = 29, t = 6.2, p < .001). The judges evidently did not like the template for comparisons between two items. A paired t-test comparing SPaRKy and template-based sentence planning showed that template-based sentence planning was significantly better than SPaRKy only for recommendations (df = 29, t = 3.55, p < .01). These results demonstrate that trainable sentence planning shows promise for producing output comparable to that of a template-based generator, with less programming effort and more flexibility. The standard deviation for all three template-based strategies was wider than for HUMAN or SPaRKy, indicating that there may be content-specific aspects to the sentence planning done by SPaRKy that contribute to output variation. The data show this to be correct; SPaRKy learned content-specific preferences about clause combining and discourse cue insertion that a template-based generator cannot easily model, but that a trainable sentence planner can. For example, Table 3 shows the nine rules generated on the first test fold which have the largest negative impact and the largest positive impact on the final RankBoost score, for comparisons between three or more entities.

Table 3: The nine rules generated on the first test fold which have the largest negative impact (rules 1-7) and the largest positive impact (rules 8-9) on the final RankBoost score, for Compare3. αs represents the increment or decrement associated with satisfying the condition.
N  Condition                                                                              αs
1  sent anc PROPERNOUN RESTAURANT*HAVE1 ≥ 16.5                                            -0.859
2  sent anc II Upper East Side*ATTR IN1*locate ≥ 4.5                                      -0.852
3  sent anc PERIOD infer*PERIOD infer*PERIOD elaboration ≥ -∞                             -0.542
4  rule anc assert-com-service*MERGE infer ≥ 1.5                                          -0.356
5  sent tvl depth 0 BE3 ≥ 4.5                                                             -0.346
6  rule anc PERIOD infer*PERIOD infer*PERIOD elaboration ≥ -∞                             -0.345
7  rule anc assert-com-decor*PERIOD infer*PERIOD infer*PERIOD contrast*PERIOD elaboration ≥ -∞  -0.342
8  rule anc assert-com-food quality*MERGE infer ≥ 1.5                                     0.398
9  rule anc assert-com-price*CW CONJUNCTION infer*PERIOD justify ≥ -∞                     0.527

The rule with the largest positive impact shows that SPaRKy learned to prefer that justifications involving price be merged with other information using a conjunction. These rules are also specific to presentation type.
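At ranking time, rules like those in Table 3 contribute their αs to a sentence plan's score when the named feature meets the threshold. The sketch below is our simplified reading of that step, not the SPR implementation; in particular, features absent from a plan are treated as 0 here, so a ≥ -∞ threshold always fires, which may differ from the original system's handling.

import math
from typing import Dict, List, Tuple

Rule = Tuple[str, float, float]     # (feature name, threshold, alpha_s)

RULES: List[Rule] = [
    ("rule anc assert-com-service*MERGE infer", 1.5, -0.356),                              # rule 4
    ("rule anc assert-com-food quality*MERGE infer", 1.5, 0.398),                          # rule 8
    ("rule anc assert-com-price*CW CONJUNCTION infer*PERIOD justify", -math.inf, 0.527),   # rule 9
]

def rankboost_score(features: Dict[str, float], rules: List[Rule]) -> float:
    score = 0.0
    for name, threshold, alpha in rules:
        if features.get(name, 0.0) >= threshold:    # rule fires -> add its increment
            score += alpha
    return score

plan_features = {"rule anc assert-com-service*MERGE infer": 2.0}
# Rule 4 fires (2.0 >= 1.5, contributing -0.356) and rule 9's -inf threshold always
# fires under this simplified treatment (+0.527), giving 0.171.
print(rankboost_score(plan_features, RULES))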
Averaging over both folds of the experiment, the number of unique features appearing in rules is 708, of which 66 appear in the rule sets for two presentation types and 9 appear in the rule sets for all three presentation types. There are on average 214 rule features, 428 sentence features and 26 leaf features. The majority of the features are ancestor features (319) followed by traversal features (264) and sister features (60). The remainder of the features (67) are for specific lexemes. To sum up, this experiment shows that the ability to model the interactions between domain content, task and presentation type is a strength of the trainable approach to sentence planning. 6 Conclusions This paper shows that the training technique used in SPoT can be easily extended to a new domain and used for information presentation as well as information gathering. Previous work on SPoT also compared trainable sentence planning to a template-based generator that had previously been developed for the same application (Rambow et al., 2001). The evaluation results for SPaRKy (1) support the results for SPoT, by showing that trainable sentence generation can produce output comparable to template-based generation, even for complex information presentations such as extended comparisons; (2) show that trainable sentence generation is sensitive to variations in domain application, presentation type, and even human preferences about the arrangement of particular types of information. 7 Acknowledgments We thank AT&T for supporting this research, and the anonymous reviewers for their helpful comments on this paper. References I. Langkilde. Forest-based statistical sentence generation. In Proc. NAACL 2000, 2000. S. E. Brennan, M. Walker Friedman, and C. J. Pollard. A centering approach to pronouns. In Proc. 25th Annual Meeting of the ACL, Stanford, pages 155–162, 1987. L. Danlos. 2000. G-TAG: A lexicalized formalism for text generation inspired by tree adjoining grammar. In Tree Adjoining Grammars: Formalisms, Linguistic Analysis, and Processing. CSLI Publications. M. Johnston, S. Bangalore, G. Vasireddy, A. Stent, P. Ehlen, M. Walker, S. Whittaker, and P. Maloor. MATCH: An architecture for multimodal dialogue systems. In Annual Meeting of the ACL, 2002. A. Knott, J. Oberlander, M. O'Donnell and C. Mellish. Beyond Elaboration: the interaction of relations and focus in coherent text. In Text Representation: linguistic and psycholinguistic aspects, pages 181-196, 2001. B. Lavoie and O. Rambow. A fast and portable realizer for text generation systems. In Proc.
of the 3rd Conference on Applied Natural Language Processing, ANLP97, pages 265–268, 1997. W.C. Mann and S.A. Thompson. Rhetorical structure theory: A framework for the analysis of texts. Technical Report RS-87-190, USC/Information Sciences Institute, 1987. D. Marcu. From local to global coherence: a bottom-up approach to text planning. In Proceedings of the National Conference on Artificial Intelligence (AAAI’97), 1997. C. Mellish, A. Knott, J. Oberlander, and M. O’Donnell. Experiments using stochastic search for text planning. In Proceedings of INLG-98. 1998. I. A. Melˇcuk. Dependency Syntax: Theory and Practice. SUNY, Albany, New York, 1988. O. Rambow and T. Korelsky. Applied text generation. In Proceedings of the Third Conference on Applied Natural Language Processing, ANLP92, pages 40–47, 1992. O. Rambow, M. Rogati and M. A. Walker. Evaluating a Trainable Sentence Planner for a Spoken Dialogue Travel System In Meeting of the ACL, 2001. R. E. Schapire. A brief introduction to boosting. In Proc. of the 16th IJCAI, 1999. D. R. Scott and C. Sieckenius de Souza. Getting the message across in RST-based text generation. In Current Research in Natural Language Generation, pages 47–73, 1990. A. Stent, M. Walker, S. Whittaker, and P. Maloor. User-tailored generation for spoken dialogue: An experiment. In Proceedings of ICSLP 2002., 2002. M. A. Walker, S. J. Whittaker, A. Stent, P. Maloor, J. D. Moore, M. Johnston, and G. Vasireddy. Speech-Plans: Generating evaluative responses in spoken dialogue. In Proceedings of INLG-02., 2002. M. Walker, O. Rambow, and M. Rogati. Training a sentence planner for spoken dialogue using boosting. Computer Speech and Language: Special Issue on Spoken Language Generation, 2002.
User Expertise Modelling and Adaptivity in a Speech-based E-mail System Kristiina JOKINEN University of Helsinki and University of Art and Design Helsinki Hämeentie 135C 00560 Helsinki [email protected] Kari KANTO University of Art and Design Helsinki Hämeentie 135C 00560 Helsinki [email protected] Abstract This paper describes the user expertise model in AthosMail, a mobile, speech-based e-mail system. The model encodes the system’s assumptions about the user expertise, and gives recommendations on how the system should respond depending on the assumed competence levels of the user. The recommendations are realized as three types of explicitness in the system responses. The system monitors the user’s competence with the help of parameters that describe e.g. the success of the user’s interaction with the system. The model consists of an online and an offline version, the former taking care of the expertise level changes during the same session, the latter modelling the overall user expertise as a function of time and repeated interactions. 1 Introduction Adaptive functionality in spoken dialogue systems is usually geared towards dealing with communication disfluencies and facilitating more natural interaction (e.g. Danieli and Gerbino, 1995; Litman and Pan, 1999; Krahmer et al, 1999; Walker et al, 2000). In the AthosMail system (Turunen et al., 2004), the focus has been on adaptivity that addresses the user’s expertise levels with respect to a dialogue system’s functionality, and allows adaptation to take place both online and between the sessions. The main idea is that while novice users need guidance, it would be inefficient and annoying for experienced users to be forced to listen to the same instructions every time they use the system. For instance, already (Smith, 1993) observed that it is safer for beginners to be closely guided by the system, while experienced users like to take the initiative which results in more efficient dialogues in terms of decreased average completion time and a decreased average number of utterances. However, being able to decide when to switch from guiding a novice to facilitating an expert requires the system to be able to keep track of the user's expertise level. Depending on the system, the migration from one end of the expertise scale to the other may take anything from one session to an extended period of time. In some systems (e.g. Chu-Carroll, 2000), user inexperience is countered with initiative shifts towards the system, so that in the extreme case, the system leads the user from one task state to the next. This is a natural direction if the application includes tasks that can be pictured as a sequence of choices, like choosing turns from a road map when navigating towards a particular place. Examples of such a task structure include travel reservation systems, where the requested information can be given when all the relevant parameters have been collected. If, on the other hand, the task structure is flat, system initiative may not be very useful, since nothing is gained by leading the user along paths that are only one or two steps long. Yankelovich (1996) points out that speech applications are like command line interfaces: the available commands and the limitations of the system are not readily visible, which presents an additional burden to the user trying to familiarize herself with the system. 
There are essentially four ways the user can learn to use a system: 1) by unaided trial and error, 2) by having a pre-use tutorial, 3) by trying to use the system and then asking for help when in trouble, or 4) by relying on advice the system gives when concluding the user is in trouble. Kamm, Litman & Walker (1998) experimented with a pre-session tutorial for a spoken dialogue e-mail system and found it efficient in teaching the users what they can do; apparently this approach could be enhanced by adding items 3 and 4. However, users often lack enthusiasm towards tutorials and want to proceed straight to using the system. Yankelovich (1996) regards the system prompt design at the heart of the effective interface design which helps users to produce well-formed spoken input and simultaneously to become familiar with the functionality that is available. She introduced various prompt design techniques, e.g. tapering which means that the system shortens the prompts for users as they gain experience with the system, and incremental prompts, which means that when a prompt is met with silence (or a timeout occurs in a graphical interface), the repeated prompt will be incorporated with helpful hints or instructions. The system utterances are thus adapted online to mirror the perceived user expertise. The user model that keeps track of the perceived user expertise may be session-specific, but it could also store the information between sessions, depending on the application. A call service providing bus timetables may harmlessly assume that the user is always new to the system, but an email system is personal and the user could presumably benefit from personalized adaptations. If the system stores user modelling information between sessions, there are two paths for adaptation: the adaptations take place between sessions on the basis of observations made during earlier sessions, or the system adapts online and the resulting parameters are then passed from one session to another by means of the user model information storage. A combination of the two is also possible, and this is the chosen path for AthosMail as disclosed in section 3. User expertise has long been the subject of user modelling in the related fields of text generation, question answering and tutorial systems. For example, Paris (1988) describes methods for taking the user's expertise level into account when designing how to tailor descriptions to the novice and expert users. Although the applications are somewhat different, we expect a fair amount of further inspiration to be forthcoming from this direction also. In this paper, we describe the AthosMail user expertise model, the Cooperativity Model, and discuss its effect on the system behaviour. The paper is organised as follows. In Section 2 we will first briefly introduce the AthosMail functionality which the user needs to familiarise herself with. Section 3 describes the user expertise model in more detail. We define the three expertise levels and the concept of DASEX (dialogue act specific explicitness), and present the parameters that are used to calculate the online, session-specific DASEX values as well as offline, between-thesessions DASEX values. We also list some of the system responses that correspond to the system's assumptions about the user expertise. In Section 4, we report on the evaluation of the system’s adaptive responses and user errors. In Section 5, we provide conclusions and future work. 
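The prompt-design techniques discussed above, tapering and incremental prompts, can be pictured with a small sketch. The prompt texts and function below are invented for illustration and are not AthosMail's actual prompts.

PROMPT_LEVELS = [
    "Which message?",
    "Which message? You can say, for example, 'first message' or 'read the third message'.",
    "Which message? You can say 'first message', 'read the third message', or 'what now' "
    "to hear examples of what you can say.",
]

def next_prompt(timeouts_in_a_row: int, user_is_experienced: bool) -> str:
    # Experienced users get the tapered (shortest) prompt; each timeout in a row
    # escalates to a more instructive reprompt (incremental prompting).
    base = 0 if user_is_experienced else 1
    level = min(base + timeouts_in_a_row, len(PROMPT_LEVELS) - 1)
    return PROMPT_LEVELS[level]

print(next_prompt(0, user_is_experienced=True))    # -> "Which message?"
print(next_prompt(1, user_is_experienced=False))   # -> the fullest prompt, with hints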
2 System functionality AthosMail is an interactive speech-based e-mail system being developed for mobile telephone use in the project DUMAS (Jokinen and Gambäck, 2004). The research goal is to investigate adaptivity in spoken dialogue systems in order to enable users to interact with the speech-based systems in a more flexible and natural way. The practical goal of AthosMail is to give an option for visually impaired users to check their email by voice commands, and for sighted users to access their email using a mobile phone. The functionality of the test prototype is rather simple, comprising of three main functions: navigation in the mailbox, reading of messages, and deletion of messages. For ease of navigation, AthosMail makes use of automatic classification of messages by sender, subject, topic, or other relevant criteria, which is initially chosen by the system. The classification provides different "views" to the mailbox contents, and the user can move from one view to the next, e.g. from Paul's messages to Maria's messages, with commands like "next", "previous" or "first view", and so on. Within a particular view, the user may navigate from one message to another in a similar fashion, saying "next", "fourth message" or "last message", and so on. Reading messages is straightforward, the user may say "read (the message)", when the message in question has been selected, or refer to another message by saying, for example, "read the third message". Deletion is handled in the same way, with some room for referring expressions. The user has the option of asking the system to repeat its previous utterance. The system asks for a confirmation when the user's command entails something that has more potential consequences than just wasting time (by e.g. reading the wrong message), namely, quitting and the deletion of messages. AthosMail may also ask for clarifications, if the speech recognition is deemed unreliable, but otherwise the user has the initiative. The purpose of the AthosMail user model is to provide flexibility and variation in the system utterances. The system monitors the user’s actions in general, and especially on each possible system act. Since the user may master some part of the system functionality, while not be familiar with all commands, the system can thus provide responses tailored with respect to the user’s familiarity with individual acts. The user model produces recommendations for the dialogue manager on how the system should respond depending on the assumed competence levels of the user. The user model consists of different subcomponents, such as Message Prioritizing, Message Categorization and User Preference components (Jokinen et al, 2004). The Cooperativity Model utilizes two parameters, explicitness and dialogue control (i.e. initiative), and the combination of their values then guides utterance generation. The former is an estimate of the user’s competence level, and is described in the following sections. 3 User expertise modelling in AthosMail AthosMail uses a three-level user expertise scale to encode varied skill levels of the users. The common assumption of only two classes, experts and novices, seems too simple a model which does not take into account the fact that the user's expertise level increases gradually, and many users consider themselves neither novices nor experts but something in between. 
Moreover, the users may be experienced with the system selectively: they may use some commands more often than others, and thus their skill levels are not uniform across the system functionality. A more fine-grained description of competence and expertise can also be presented. For instance, Dreyfus and Dreyfus (1986) in their studies about whether it is possible to build systems that could behave in the way of a human expert, distinguish five levels in skill acquisition: Novice, Advanced beginner, Competent, Proficient, and Expert. In practical dialogue systems, however, it is difficult to maintain subtle user models, and it is also difficult to define such observable facts that would allow fine-grained competence levels to be distinguished in rather simple application tasks. We have thus ended up with a compromise, and designed three levels of user expertise in our model: novice, competent, and expert. These levels are reflected in the system responses, which can vary from explicit to concise utterances depending on how much extra information the system is to give to the user in one go. As mentioned above, one of the goals of the Cooperativity model is to facilitate more natural interaction by allowing the system to adapt its utterances according to the perceived expertise level. On the other hand, we also want to validate and assess the usability of the three-level model of user expertise. While not entering into discussions about the limits of rule-based thinking (e.g. in order to model intuitive decision making of the experts according to the Dreyfus model), we want to study if the designed system responses, adapted according to the assumed user skill levels, can provide useful assistance to the user in interactive situations where she is still uncertain about how to use the system. Since the user can always ask for help explicitly, our main goal is not to study the decrease in the user's help requests when she becomes more used to the system, but rather, to design the system responses so that they would reflect the different skill levels that the system assumes the user is on, and to get a better understanding whether the expertise levels and their reflection in the system responses is valid or not, so as to provide the best assistance for the user. 3.1 Dialogue act specific explicitness The user expertise model utilized in AthosMail is a collection of parameters aimed at observing telltale signals of the user's skill level and a set of second-order parameters (dialogue act specific explicitness DASEX, and dialogue control CTL) that reflect what has been concluded from the firstorder parameters. Most first-order parameters are tuned to spot incoherence between new information and the current user model (see below). If there's evidence that the user is actually more experienced than previously thought, the user expertise model is updated to reflect this. The process can naturally proceed in the other direction as well, if the user model has been too fast in concluding that the user has advanced to a higher level of expertise. The second-order parameters affect the system behaviour directly. There is a separate experience value for each system function, which enables the system to behave appropriately even if the user is very experienced in using one function but has never used another. The higher the value, the less experienced the user; the less experienced the user, the more explicit the manner of expression and the more additional advice is incorporated in the system utterances. 
The values are called DASEX, short for Dialogue Act Specific Explicitness, and their value range corresponds to the user expertise as follows: 1 = expert, 2 = competent, 3 = novice. The model comprises an online component and an offline component. The former is responsible for observing runtime events and calculating DASEX recommendations on the fly, whereas the latter makes long-time observations and, based on these, calculates default DASEX values to be used at the beginning of the next session. The offline component is, so to speak, rather conservative; it operates on statistical event distributions instead of individual parameter values and tends to round off the extremes, trying to catch the overall learning curve behind the local variations. The components work separately. In the beginning of a new session, the current offline model of the user's skill level is copied onto the online component and used as the basis for producing the DASEX recommendations, while at the end of each session, the offline component calculates the new default level on the basis of the occurred events. Figure 1 provides an illustration of the relationships between the parameters. In the next section we describe them in detail.

Figure 1: The functional relationships between the offline and online parameters used to calculate the DASEX values.

3.1.1 Online parameter descriptions The online component can be seen as an extension of the ideas proposed by Yankelovich (1996) and Chu-Carroll (2000). The relative weights of the parameters are those used in our user tests, partly based on those of (Krahmer et al, 1999). They will be fine-tuned according to our results.

DASEX (dialogue act specific explicitness): The value is modified during sessions. Value: DDASEX (see offline parameters) modified by SDAI, HLP, TIM, and INT as specified in the respective parameter definitions.

SDAI (system dialogue act invoked): A set of parameters (one for each system dialogue act) that tracks whether a particular dialogue act has been invoked during the previous round. If SDAI = 'yes', then DASEX -1. This means that when a particular system dialogue move has been instantiated, its explicitness value is decreased and the act will therefore be presented in a less explicit form the next time it is instantiated during the same session.

HLP (the occurrence of a help request by the user): The system incorporates a separate help function; this parameter is only used to notify the offline side about the frequency of help requests.

TIM (the occurrence of a timeout on the user's turn): If TIM = 'yes', then DASEX +1. This refers to speech recognizer timeouts.

INT (occurrence of a user interruption during the system turn): Can be either a barge-in or an interruption by telephone keys. If INT = 'yes', then DASEX = 1.

3.1.2 Offline parameter descriptions

DDASEX (default dialogue act specific explicitness): Every system dialogue act has its own default explicitness value invoked at the beginning of a session. Value: (DASE + GEX) / 2.

GEX (general expertise): A general indicator of user expertise. Value: (NSES + OHLP + OTIM) / 3.

DASE (dialogue act specific experience): This value is based on the number of sessions during which the system dialogue act has been invoked. There is a separate DASE value for every system dialogue act.
number of sessions: 0-2 -> DASE 3; 3-6 -> DASE 2; more than 7 -> DASE 1

NSES (number of sessions): Based on the total number of sessions the user has used the system.
number of sessions: 0-2 -> NSES 3; 3-6 -> NSES 2; more than 7 -> NSES 1

OHLP (occurrence of help requests): This parameter tracks whether the user has requested system help during the last 1 or 3 sessions. The HLP parameter is logged by the online component.
HLP occurred during the last session -> OHLP 3; during the last 3 sessions -> OHLP 2; if not -> OHLP 1

OTIM (occurrence of timeouts): This parameter tracks whether a timeout has occurred during the last 1 or 3 sessions. The TIM parameter is logged by the online component.
TIM occurred during the last session -> OTIM 3; during the last 3 sessions -> OTIM 2; if not -> OTIM 1
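To make the parameter definitions above concrete, the sketch below combines the online update rules (SDAI, TIM, INT) with the offline formulas for DDASEX and GEX and the threshold tables. Two details are our assumptions rather than statements from the paper: the averaged values are rounded to the nearest integer in the 1-3 range, and a session count of exactly 7 is grouped with the "more than 7" bucket.

def bucket_sessions(n: int) -> int:
    """Threshold table shared by DASE and NSES: 0-2 -> 3, 3-6 -> 2, otherwise 1."""
    return 3 if n <= 2 else 2 if n <= 6 else 1

def bucket_recency(last_session: bool, last_three: bool) -> int:
    """Threshold table shared by OHLP and OTIM."""
    return 3 if last_session else 2 if last_three else 1

def default_dasex(act_sessions: int, total_sessions: int,
                  help_last: bool, help_last3: bool,
                  timeout_last: bool, timeout_last3: bool) -> int:
    """Offline side: DDASEX = (DASE + GEX) / 2, GEX = (NSES + OHLP + OTIM) / 3."""
    dase = bucket_sessions(act_sessions)
    gex = (bucket_sessions(total_sessions)
           + bucket_recency(help_last, help_last3)
           + bucket_recency(timeout_last, timeout_last3)) / 3
    return max(1, min(3, round((dase + gex) / 2)))

def update_dasex(dasex: int, sdai: bool, tim: bool, intr: bool) -> int:
    """Online side: SDAI lowers explicitness by one, TIM raises it by one,
    INT (barge-in or key press) marks the user as an expert for this act."""
    if sdai:
        dasex -= 1
    if tim:
        dasex += 1
    if intr:
        dasex = 1
    return max(1, min(3, dasex))

# A user in their fourth session who asked for help recently but has caused no
# timeouts starts this dialogue act at DASEX 2 ("competent") ...
level = default_dasex(act_sessions=1, total_sessions=4,
                      help_last=False, help_last3=True,
                      timeout_last=False, timeout_last3=False)
# ... and drops to 1 ("expert") for this act once they have used it and barged in.
level = update_dasex(level, sdai=True, tim=False, intr=True)
print(level)   # the value then selects one of the three surface forms (Section 3.2)

In AthosMail the offline values are recomputed between sessions from the logged event distributions; the sketch collapses that bookkeeping into the two threshold functions above.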
3.2 DASEX-dependent surface forms Each system utterance type has three different surface realizations corresponding to the three DASEX values. The explicitness of a system utterance can thus range between [1 = taciturn, 2 = normal, 3 = explicit]; the higher the value, the more additional information the surface realization will include (cf. Jokinen and Wilcock, 2001). The value is used for choosing between the surface realizations which are generated by the presentation components as natural language utterances. The following two examples have been translated from their original Finnish forms.

Example 1: A speech recognition error (the ASR score has been too low).
DASEX = 1: I'm sorry, I didn't understand.
DASEX = 2: I'm sorry, I didn't understand. Please speak clearly, but do not over-articulate, and speak only after the beep.
DASEX = 3: I'm sorry, I didn't understand. Please speak clearly, but do not over-articulate, and speak only after the beep. To hear examples of what you can say to the system, say 'what now'.

Example 2: Basic information about a message that the user has chosen from a listing of messages from a particular sender.
DASEX = 1: First message, about "reply: sample file".
DASEX = 2: First message, about "reply: sample file". Say 'tell me more', if you want more details.
DASEX = 3: First message, about "reply: sample file". Say 'read', if you want to hear the messages, or 'tell me more', if you want to hear a summary and the send date and length of the message.

These examples show the basic idea behind the DASEX effect on surface generation. In the first example, the novice user is given additional information about how to try and avoid ASR problems, while the expert user is only given the error message. In the second example, the expert user gets the basic information about the message only, whereas the novice user is also provided with some possible commands how to continue. A full interaction with AthosMail is given in Appendix 1. 4 Evaluation of AthosMail Within the DUMAS project, we are in the process of conducting exhaustive user studies with the prototype AthosMail system that incorporates the user expertise model described above. We have already conducted a preliminary qualitative expert evaluation, the goal of which was to provide insights into the design of system utterances so as to appropriately reflect the three user expertise levels, and the first set of user evaluations where a set of four tasks was carried out during two consecutive days. 4.1 Adaptation and system utterances For the expert evaluation, we interviewed 5 interactive systems experts (two women and three men). They all had earlier experience in interactive systems and interface design, but were unfamiliar with the current system and with interactive email systems in general. Each interview included three walkthroughs of the system, one for a novice, one for a competent, and one for an expert user.
The experts were asked to comment on the naturalness and appropriateness of each system utterance, as well as provide any other comments that they may have on adaptation and adaptive systems. All interviewees agreed on one major theme, namely that the system should be as friendly and reassuring as possible towards novices. Dialogue systems can be intimidating to new users, and many people are so afraid of making mistakes that they give up after the first communication failure, regardless of what caused it. Graphical user interfaces differ from speech interfaces in this respect, because there is always something salient to observe as long as the system is running at all. Four of the five experts agreed that in an error situation the system should always signal the user that the machine is to blame, but there are things that the user can do in case she wants to help the system in the task. The system should acknowledge its shortcomings "humbly" and make sure that the user doesn't get feelings of guilt – all problems are due to imperfect design. E.g., the responses in Example 1 were viewed as accusing the user of not being able to act in the correct way. We have since moved towards forms like "I may have misheard", where the system appears responsible for the miscommunication. This can pave the way when the user is taking the first wary steps in getting acquainted with the system. Novice users also need error messages that do not bother the user with technical matters that concern only the designers. For instance, a novice user doesn't need information about error codes or characteristics of the speech recognizer; when ASR errors occur, the system can simply talk about not hearing correctly; a reference to a piece of equipment that does the job – namely, the speech recognizer – is unnecessary and the user should not be burdened with it. Experienced users, on the other hand, wish to hear only the essentials. All our interviewees agreed that at the highest skill level, the system prompts should be as terse as possible, to the point of being blunt. Politeness words like "I'm sorry" are not necessary at this level, because the expert's attitude towards the system is pragmatic: they see it as a tool, know its limitations, and "rudeness" on the part of the system doesn't scare or annoy them anymore. However, it is not clear how the change in politeness when migrating from novice to expert levels actually affects the user’s perception of the system; the transition should at least be gradual and not too fast. There may also be cultural differences regarding certain politeness rules. The virtues of adaptivity are still a matter of debate. One of the experts expressed serious doubt over the usability of any kind of automatic adaptivity and maintained that the user should decide whether she wants the system to adapt at a given moment or not. In the related field of tutoring systems, Kay (2001) has argued for giving the user the control over adaptation. Whatever the case, it is clear that badly designed adaptivity is confusing to the user, and especially a novice user may feel disoriented if faced with prompts where nothing seems to stay the same. It is essential that the system is consistent in its use of concepts, and manner of speech. In AthosMail, the expert level (DASEX=1 for all dialogue acts) acts as the core around which the other two expertise levels are built. While the core remains essentially unchanged, further information elements are added after it. 
In practise, when the perceived user expertise rises, the system simply removes information elements that have become unnecessary from the end of the utterance, without touching the core. This should contribute to a feeling of consistency and dependability. On the other hand, Paris (1988) argued that the user’s expertise level does not affect only the amount but the kind of information given to the user. It will prove interesting to reconcile these views in a more general kind of user expertise modeling. 4.2 Adaptation and user errors The user evaluation of AthosMail consisted of four tasks that were performed on two consecutive days. The 26 test users, aged 20-62, thus produced four separate dialogues each and a total of 104 dialogues. They had no previous experience with speech-based dialogue systems, and to familiarize themselves to synthesized speech and speech recognizers, they had a short training session with another speech application in the beginning of the first test session. An outline of AthosMail functionality was presented to the users, and they were allowed to keep it when interacting with the system. At the end of each of the four tests, the users were asked to assess how familiar they were with the system functionality and how confident they felt about using it. Also, they were asked to assess whether the system gave too little information about its functionality, too much, or the right amount. The results are reported in (Jokinen et al, 2004). We also identified four error types, as a point of comparison for the user expertise model. 5 Conclusions Previous studies concerning user modelling in various interactive applications have shown the importance of the user model in making the interaction with the system more enjoyable. We have introduced the three-level user expertise model, implemented in our speech-based e-mail system, AthosMail, and argued for its effect on the behaviour of the overall system. Future work will focus on analyzing the data collected through the evaluations of the complete AthosMail system with real users. Preliminary expert evaluation revealed that it is important to make sure the novice user is not intimidated and feels comfortable with the system, but also that the experienced users should not be forced to listen to the same advice every time they use the system. The hand-tagged error classification shows a slight downward tendency in user errors, suggesting accumulation of user experience. This will act as a point of comparison for the user expertise model assembled automatically by the system. Another future research topic is to apply machine-learning and statistical techniques in the implementation of the user expertise model. Through the user studies we will also collect data which we plan to use in re-implementing the DASEX decision mechanism as a Bayesian network. 6 Acknowledgements This research was carried out within the EU’s Information Society Technologies project DUMAS (Dynamic Universal Mobility for Adaptive Speech Interfaces), IST-2000-29452. We thank all project participants from KTH and SICS, Sweden; UMIST, UK; ETeX Sprachsynthese AG, Germany; U. of Tampere, U. of Art and Design, Connexor Oy, and Timehouse Oy, Finland. References Jennifer Chu-Carroll. 2000. MIMIC: An Adaptive Mixed Initiative Spoken Dialogue System for Information Queries. In Procs of ANLP 6, 2000, pp. 97-104. Morena Danieli and Elisabetta Gerbino. 1995. Metrics for Evaluating Dialogue Strategies in a Spoken Language System. 
Working Notes, AAAI Spring Symposium Series, Stanford University. Hubert L. Dreyfus and Stuart E. Dreyfus. 1986. Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. New York: The Free Press. Kristiina Jokinen and Björn Gambäck. 2004. DUMAS Adaptation and Robust Information Processing for Mobile Speech Interfaces. Procs of The 1st Baltic Conference “Human Language Technologies – The Baltic Perspective”, Riga, Latvia, 115-120. Kristiina Jokinen, Kari Kanto, Antti Kerminen and Jyrki Rissanen. 2004. Evaluation of Adaptivity and User Expertise in a Speech-based E-mail System. Procs of the COLING Satellite Workshop Robust and Adaptive Information Processing for Mobile Speech Interfaces, Geneva, Switzerland. Kristiina Jokinen and Graham Wilcock. 2001. Adaptivity and Response Generation in a Spoken Dialogue System. In van Kuppevelt, J. and R. W. Smith (eds.) Current and New Directions in Discourse and Dialogue. Kluwer Academic Publishers. pp. 213-234. Candace Kamm, Diane Litman, and Marilyn Walker. 1998. From novice to expert: the effect of tutorials on user expertise with spoken dialogue systems. Procs of the International Conference on Spoken Language Processing (ICSLP98). Judy Kay. 2001. Learner control. User Modeling and User-Adapted Interaction 11: 111-127. Emiel Krahmer, Marc Swerts, Mariet Theune and Mieke Weegels. 1999. Problem Spotting in HumanMachine Interaction. In Procs of Eurospeech '99. Vol. 3, 1423-1426. Budapest, Hungary. Diane J. Litman and Shimei Pan. 2002. Designing and Evaluating an Adaptive Spoken Dialogue System. User Modeling and User-Adapted Interaction. Vol 12(2/3):111-137. Cécile Paris. 1988. Tailoring Descriptions to a User's Level of Expertise. Journal of Computational Linguistics, 14 (3): 64-78. Ronnie W. Smith. 1993. Effective Spoken Natural Language Dialog Requires Variable Initiative Behavior: An Empirical Study. Procs of the AAAI Fall Symposium on Human-Computer Collaboration: Reconciling Theory, Synthesizing Practice. M. Turunen, E-P. Salonen, M. Hartikainen, J. Hakulinen, W.J. Black, A: Ramsay, A. Funk, A. Conroy, P. Thompson, M. Stairmand, K. Jokinen, J. Rissanen, K. Kanto, A. Kerminen, B. Gambäck, M. Cheadle, F. Olsson, M. Sahlgren. 2004. AthosMail – a multilingual Adaptive Spoken Dialogue System for E-mail Domain. Procs of the COLING Workshop Robust and Adaptive Information Processing for Mobile Speech Interfaces, Geneva, Switzerland. Marilyn A. Walker, Irene Langkilde, Jerry Wright, Allen Gorin, Diane J. Litman. 2000. Learning to Predict Problematic Situations in a Spoken Dialogue System: Experiments with How May I Help You? Procs of NAACL’00, Seattle, US, 210-217. Nicole Yankelovich. 1996. How do users know what to say? Interactions 3(6): 32-43. 7 Appendix 1. Sample dialogue with a novice user System: Welcome to AthosMail. Please dial your user code. User: [dials the four-digit code] S: Please dial your secret passcode. U: [dials the passcode] System: Please wait, I'll contact your mail server. (...) Hello Jukka, you have twelve unread messages. They have been divided into three groups. In the first group, there are the messages about Re: Comments about the draft. In the second group, the messages from Timothy. In the third group, the rest of the messages. You can select a group by saying for example "first group" or "second group". Say help if you want further instructions. User: First group. S: You are in the first group, with three messages about Re: Comments about the draft. 
The first message is from Matti Hahtomaa, the second from Jaana Jaakkola, the third from Sakari Rantala. If you want to select a message, say for example "second message" or "Eddie Everyman's message". U: First message. S: First message, from Matti Hahtomaa, about Re: Comments about the draft. If you want to hear the message, say "read". You can have additional information about the message by saying "tell me more". U: Second message. S: Second message, from Jaana Jaakkola, about Re: Comments about the draft. You can have additional information about the message by saying "tell me more". U: Tell me more. S: Message sent on February 3rd, 2004. Message length 24 lines. U: Read Sakari Rantala's message. S: Third message, from Sakari Rantala, about Re: Comments about the draft. [reads the message] Message read. You are in the first message group. U: Quit. S: Are you sure you want to quit? Say yes or no. U: Yes. S: Bye bye.
2004
12
Discriminative Training of a Neural Network Statistical Parser James HENDERSON School of Informatics, University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW United Kingdom [email protected] Abstract Discriminative methods have shown significant improvements over traditional generative methods in many machine learning applications, but there has been difficulty in extending them to natural language parsing. One problem is that much of the work on discriminative methods conflates changes to the learning method with changes to the parameterization of the problem. We show how a parser can be trained with a discriminative learning method while still parameterizing the problem according to a generative probability model. We present three methods for training a neural network to estimate the probabilities for a statistical parser, one generative, one discriminative, and one where the probability model is generative but the training criteria is discriminative. The latter model outperforms the previous two, achieving state-ofthe-art levels of performance (90.1% F-measure on constituents). 1 Introduction Much recent work has investigated the application of discriminative methods to NLP tasks, with mixed results. Klein and Manning (2002) argue that these results show a pattern where discriminative probability models are inferior to generative probability models, but that improvements can be achieved by keeping a generative probability model and training according to a discriminative optimization criteria. We show how this approach can be applied to broad coverage natural language parsing. Our estimation and training methods successfully balance the conflicting requirements that the training method be both computationally tractable for large datasets and a good approximation to the theoretically optimal method. The parser which uses this approach outperforms both a generative model and a discriminative model, achieving state-of-the-art levels of performance (90.1% F-measure on constituents). To compare these different approaches, we use a neural network architecture called Simple Synchrony Networks (SSNs) (Lane and Henderson, 2001) to estimate the parameters of the probability models. SSNs have the advantage that they avoid the need to impose hand-crafted independence assumptions on the learning process. Training an SSN simultaneously trains a finite representations of the unbounded parse history and a mapping from this history representation to the parameter estimates. The history representations are automatically tuned to optimize the parameter estimates. This avoids the problem that any choice of hand-crafted independence assumptions may bias our results towards one approach or another. The independence assumptions would have to be different for the generative and discriminative probability models, and even for the parsers which use the generative probability model, the same set of independence assumptions may be more appropriate for maximizing one training criteria over another. By inducing the history representations specifically to fit the chosen model and training criteria, we avoid having to choose independence assumptions which might bias our results. Each complete parsing system we propose consists of three components, a probability model for sequences of parser decisions, a Simple Synchrony Network which estimates the parameters of the probability model, and a procedure which searches for the most probable parse given these parameter estimates. 
This paper outlines each of these components, but more details can be found in (Henderson, 2003b), and, for the discriminative model, in (Henderson, 2003a). We also present the training methods, and experiments on the proposed parsing models. 2 Two History-Based Probability Models As with many previous statistical parsers (Ratnaparkhi, 1999; Collins, 1999; Charniak, 2000), we use a history-based model of parsing. Designing a history-based model of parsing involves two steps, first choosing a mapping from the set of phrase structure trees to the set of parses, and then choosing a probability model in which the probability of each parser decision is conditioned on the history of previous decisions in the parse. We use the same mapping for both our probability models, but we use two different ways of conditioning the probabilities, one generative and one discriminative. As we will show in section 6, these two different ways of parameterizing the probability model have a big impact on the ease with which the parameters can be estimated. To define the mapping from phrase structure trees to parses, we use a form of left-corner parsing strategy (Rosenkrantz and Lewis, 1970). In a left-corner parse, each node is introduced after the subtree rooted at the node’s first child has been fully parsed. Then the subtrees for the node’s remaining children are parsed in their left-to-right order. Parsing a constituent starts by pushing the leftmost word w of the constituent onto the stack with a shift(w) action. Parsing a constituent ends by either introducing the constituent’s parent nonterminal (labeled Y ) with a project(Y) action, or attaching to the parent with an attach action.1 A complete parse consists of a sequence of these actions, d1,..., dm, such that performing d1,..., dm results in a complete phrase structure tree. Because this mapping from phrase structure trees to sequences of decisions about parser actions is one-to-one, finding the most probable phrase structure tree is equivalent to finding the parse d1,..., dm which maximizes P(d1,..., dm|w1,..., wn). This probability is only nonzero if yield(d1,..., dm) = w1,..., wn, so we can restrict attention to only those parses which actually yield the given sentence. With this restriction, it is equivalent to maximize P(d1,..., dm), as is done with our first probability model. The first probability model is generative, because it specifies the joint probability of the input sentence and the output tree. This joint probability is simply P(d1,..., dm), since the 1More details on the mapping to parses can be found in (Henderson, 2003b). probability of the input sentence is included in the probabilities for the shift(wi) decisions included in d1,..., dm. The probability model is then defined by using the chain rule for conditional probabilities to derive the probability of a parse as the multiplication of the probabilities of each decision di conditioned on that decision’s prior parse history d1,..., di−1. P(d1,..., dm) = ΠiP(di|d1,..., di−1) The parameters of this probability model are the P(di|d1,..., di−1). Generative models are the standard way to transform a parsing strategy into a probability model, but note that we are not assuming any bound on the amount of information from the parse history which might be relevant to each parameter. The second probability model is discriminative, because it specifies the conditional probability of the output tree given the input sentence. 
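To make the generative parameterization concrete, the following minimal sketch scores a left-corner action sequence as a product of per-decision probabilities conditioned on the full history. The Action encoding and the prob_next estimator interface are illustrative assumptions, standing in for whatever supplies P(di|d1,..., di-1).

import math
from typing import Callable, Sequence, Tuple

# One parser decision: ("shift", word), ("project", category) or ("attach", "").
Action = Tuple[str, str]

def log_prob_parse(actions: Sequence[Action],
                   prob_next: Callable[[Sequence[Action], Action], float]) -> float:
    """Chain-rule score of a complete parse d1,..., dm under the generative model:
    log P(d1,..., dm) = sum_i log P(di | d1,..., di-1)."""
    total = 0.0
    for i, decision in enumerate(actions):
        total += math.log(prob_next(actions[:i], decision))  # condition on the unbounded history
    return total

Because the shift(wi) decisions generate the words themselves, this quantity is a joint score of the sentence and the tree, which is what separates the generative model from the discriminative one.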
More generally, discriminative models try to maximize this conditional probability, but often do not actually calculate the probability, as with Support Vector Machines (Vapnik, 1995). We take the approach of actually calculating an estimate of the conditional probability because it differs minimally from the generative probability model. In this form, the distinction between our two models is sometimes referred to as “joint versus conditional” (Johnson, 2001; Klein and Manning, 2002) rather than “generative versus discriminative” (Ng and Jordan, 2002). As with the generative model, we use the chain rule to decompose the entire conditional probability into a sequence of probabilities for individual parser decisions, where yield(dj,..., dk) is the sequence of words wi from the shift(wi) actions in dj,..., dk. P(d1,..., dm|yield(d1,..., dm)) = ΠiP(di|d1,..., di−1, yield(di,..., dm)) Note that d1,..., di−1 specifies yield(d1,..., di−1), so it is sufficient to only add yield(di,..., dm) to the conditional in order for the entire input sentence to be included in the conditional. We will refer to the string yield(di,..., dm) as the lookahead string, because it represents all those words which have not yet been reached by the parse at the time when decision di is chosen. The parameters of this model differ from those of the generative model only in that they include the lookahead string in the conditional. Although maximizing the joint probability is the same as maximizing the conditional probability, the fact that they have different parameters means that estimating one can be much harder than estimating the other. In general we would expect that estimating the joint probability would be harder than estimating the conditional probability, because the joint probability contains more information than the conditional probability. In particular, the probability distribution over sentences can be derived from the joint probability distribution, but not from the conditional one. However, the unbounded nature of the parsing problem means that the individual parameters of the discriminative model are much harder to estimate than those of the generative model. The parameters of the discriminative model include an unbounded lookahead string in the conditional. Because these words have not yet been reached by the parse, we cannot assign them any structure, and thus the estimation process has no way of knowing what words in this string will end up being relevant to the next decision it needs to make. The estimation process has to guess about the future role of an unbounded number of words, which makes the estimate quite difficult. In contrast, the parameters of the generative model only include words which are either already incorporated into the structure, or are the immediate next word to be incorporated. Thus it is relatively easy to determine the significance of each word. 3 Estimating the Parameters with a Neural Network The most challenging problem in estimating P(di|d1,..., di−1, yield(di,..., dm)) and P(di|d1,..., di−1) is that the conditionals include an unbounded amount of information. Both the parse history d1,..., di−1 and the lookahead string yield(di,..., dm) grow with the length of the sentence. In order to apply standard probability estimation methods, we use neural networks to induce finite representations of both these sequences, which we will denote h(d1,..., di−1) and l(yield(di,..., dm)), respectively. 
The neural network training methods we use try to find representations which preserve all the information about the sequences which are relevant to estimating the desired probabilities. P(di|d1,..., di−1) ≈P(di|h(d1,..., di−1)) P(di|d1,..., di−1, yield(di,..., dm)) ≈ P(di|h(d1,..., di−1), l(yield(di,..., dm))) Of the previous work on using neural networks for parsing natural language, by far the most empirically successful has been the work using Simple Synchrony Networks. Like other recurrent network architectures, SSNs compute a representation of an unbounded sequence by incrementally computing a representation of each prefix of the sequence. At each position i, representations from earlier in the sequence are combined with features of the new position i to produce a vector of real valued features which represent the prefix ending at i. This representation is called a hidden representation. It is analogous to the hidden state of a Hidden Markov Model. As long as the hidden representation for position i −1 is always used to compute the hidden representation for position i, any information about the entire sequence could be passed from hidden representation to hidden representation and be included in the hidden representation of that sequence. When these representations are then used to estimate probabilities, this property means that we are not making any a priori hard independence assumptions (although some independence may be learned from the data). The difference between SSNs and most other recurrent neural network architectures is that SSNs are specifically designed for processing structures. When computing the history representation h(d1,..., di−1), the SSN uses not only the previous history representation h(d1,..., di−2), but also uses history representations for earlier positions which are particularly relevant to choosing the next parser decision di. This relevance is determined by first assigning each position to a node in the parse tree, namely the node which is on the top of the parser’s stack when that decision is made. Then the relevant earlier positions are chosen based on the structural locality of the current decision’s node to the earlier decisions’ nodes. In this way, the number of representations which information needs to pass through in order to flow from history representation i to history representation j is determined by the structural distance between i’s node and j’s node, and not just the distance between i and j in the parse sequence. This provides the neural network with a linguistically appropriate inductive bias when it learns the history representations, as explained in more detail in (Henderson, 2003b). When computing the lookahead representation l(yield(di,..., dm)), there is no structural information available to tell us which positions are most relevant to choosing the decision di. Proximity in the string is our only indication of relevance. Therefore we compute l(yield(di,..., dm)) by running a recurrent neural network backward over the string, so that the most recent input is the first word in the lookahead string, as discussed in more detail in (Henderson, 2003a). Once it has computed h(d1,..., di−1) and (for the discriminative model) l(yield(di,..., dm)), the SSN uses standard methods (Bishop, 1995) to estimate a probability distribution over the set of possible next decisions di given these representations. 
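The recurrences just described can be pictured with the following minimal sketch; the use of a single "most relevant" earlier position, the matrix names, and the plain tanh units are simplifying assumptions rather than the actual SSN equations.

import numpy as np

def history_reps(decision_feats, relevant, W, U, V):
    """h(d1,..., di-1): each step combines the previous representation, one
    structurally relevant earlier representation (chosen via the node on top of
    the parser's stack), and the features of the current position."""
    h = [np.zeros(W.shape[0])]
    for i, x in enumerate(decision_feats):
        h.append(np.tanh(W @ x + U @ h[-1] + V @ h[relevant[i]]))
    return h[1:]

def lookahead_reps(word_feats, W, U):
    """l(yield(di,..., dm)): a recurrent net run backward over the remaining words,
    so the most recent input is the first word of the lookahead string."""
    state, reps = np.zeros(W.shape[0]), []
    for x in reversed(word_feats):
        state = np.tanh(W @ x + U @ state)
        reps.append(state)
    return list(reversed(reps))

How the next-decision distribution is estimated from these representations is described next.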
This involves further decomposing the distribution over all possible next parser actions into a small hierarchy of conditional probabilities, and then using log-linear models to estimate each of these conditional probability distributions. The input features for these loglinear models are the real-valued vectors computed by h(d1,..., di−1) and l(yield(di,..., dm)), as explained in more detail in (Henderson, 2003b). Thus the full neural network consists of a recurrent hidden layer for h(d1,..., di−1), (for the discriminative model) a recurrent hidden layer for l(yield(di,..., dm)), and an output layer for the log-linear model. Training is applied to this full neural network, as described in the next section. 4 Three Optimization Criteria and their Training Methods As with many other machine learning methods, training a Simple Synchrony Network involves first defining an appropriate learning criteria and then performing some form of gradient descent learning to search for the optimum values of the network’s parameters according to this criteria. In all the parsing models investigated here, we use the on-line version of Backpropagation to perform the gradient descent. This learning simultaneously tries to optimize the parameters of the output computation and the parameters of the mappings h(d1,..., di−1) and l(yield(di,..., dm)). With multi-layered networks such as SSNs, this training is not guaranteed to converge to a global optimum, but in practice a network whose criteria value is close to the optimum can be found. The three parsing models differ in the criteria the neural networks are trained to optimize. Two of the neural networks are trained using the standard maximum likelihood approach of optimizing the same probability which they are estimating, one generative and one discriminative. For the generative model, this means maximizing the total joint probability of the parses and the sentences in the training corpus. For the discriminative model, this means maximizing the conditional probability of the parses in the training corpus given the sentences in the training corpus. To make the computations easier, we actually minimize the negative log of these probabilities, which is called cross-entropy error. Minimizing this error ensures that training will converge to a neural network whose outputs are estimates of the desired probabilities.2 For each parse in the training corpus, Backpropagation training involves first computing the probability which the current network assigns to that parse, then computing the first derivative of (the negative log of) this probability with respect to each of the network’s parameters, and then updating the parameters proportionately to this derivative.3 The third neural network combines the advantages of the generative probability model with the advantages of the discriminative optimization criteria. The structure of the network and the set of outputs which it computes are exactly the same as the above network for the generative model. But the training procedure is designed to maximize the conditional probability of the parses in the training corpus given the sentences in the training corpus. The conditional probability for a sentence can be computed from the joint probability of the generative model by normalizing over the set of all parses d′ 1,..., d′ m′ for the sentence. 
P(d1,..., dm|w1,..., wn) = P (d1,...,dm) P d′ 1,...,d′ m′ P (d′ 1,...,d′ m′) So, with this approach, we need to maximize this normalized probability, and not the probability computed by the network. The difficulty with this approach is that there are exponentially many parses for the sentence, so it is not computationally feasible to compute them all. We address this problem by only computing a small set of the most probable parses. The remainder of the sum is estimated using a combination of the probabilities from the best parses and the probabilities 2Cross-entropy error ensures that the minimum of the error function converges to the desired probabilities as the amount of training data increases (Bishop, 1995), so the minimum for any given dataset is considered an estimate of the true probabilities. 3A number of additional training techniques, such as regularization, are added to this basic procedure, as will be specified in section 6. from the partial parses which were pruned when searching for the best parses. The probabilities of pruned parses are estimated in such a way as to minimize their effect on the training process. For each decision which is part of some unpruned parses, we calculate the average probability of generating the remainder of the sentence by these un-pruned parses, and use this as the estimate for generating the remainder of the sentence by the pruned parses. With this estimate we can calculate the sum of the probabilities for all the pruned parses which originate from that decision. This approach gives us a slight overestimate of the total sum, but because this total sum acts simply as a weighting factor, it has little effect on learning. What is important is that this estimate minimizes the effect of the pruned parses’ probabilities on the part of the training process which occurs after the probabilities of the best parses have been calculated. After estimating P(d1,..., dm|w1,..., wn), training requires that we estimate the first derivative of (the negative log of) this probability with respect to each of the network’s parameters. The contribution to this derivative of the numerator in the above equation is the same as in the generative case, just scaled by the denominator. The difference between the two learning methods is that we also need to account for the contribution to this derivative of the denominator. Here again we are faced with the problem that there are an exponential number of derivations in the denominator, so here again we approximate this calculation using the most probable parses. To increase the conditional probability of the correct parse, we want to decrease the total joint probabilities of the incorrect parses. Probability mass is only lost from the sum over all parses because shift(wi) actions are only allowed for the correct wi. Thus we can decrease the total joint probability of the incorrect parses by making these parses be worse predictors of the words in the sentence.4 The combination of training the correct parses to be good predictors of the words and training the incorrect parses to be bad predictors of the words results in prediction prob4Non-prediction probability estimates for incorrect parses can make a small contribution to the derivative, but because pruning makes the calculation of this contribution inaccurate, we treat this contribution as zero when training. This means that non-prediction outputs are trained to maximize the same criteria as in the generative case. 
abilities which are not accurate estimates, but which are good at discriminating correct parses from incorrect parses. It is this feature which gives discriminative training an advantage over generative training. The network does not need to learn an accurate model of the distribution of words. The network only needs to learn an accurate model of how words disambiguate previous parsing decisions. When we apply discriminative training only to the most probable incorrect parses, we train the network to discriminate between the correct parse and those incorrect parses which are the most likely to be mistaken for the correct parse. In this sense our approximate training method results in optimizing the decision boundary between correct and incorrect parses, rather than optimizing the match to the conditional probability. Modifying the training method to systematically optimize the decision boundary (as in large margin methods such as Support Vector Machines) is an area of future research. 5 Searching for the most probable parse The complete parsing system uses the probability estimates computed by the SSN to search for the most probable parse. The search incrementally constructs partial parses d1,..., di by taking a parse it has already constructed d1,..., di−1 and using the SSN to estimate a probability distribution P(di|d1,..., di−1, ...) over possible next decisions di. These probabilities are then used to compute the probabilities for d1,..., di. In general, the partial parse with the highest probability is chosen as the next one to be extended, but to perform the search efficiently it is necessary to prune the search space. The main pruning is that only a fixed number of the most probable derivations are allowed to continue past the shifting of each word. Setting this post-word beam width to 5 achieves fast parsing with reasonable performance in all models. For the parsers with generative probability models, maximum accuracy is achieved with a post-word beam width of 100. 6 The Experiments We used the Penn Treebank (Marcus et al., 1993) to perform empirical experiments on the proposed parsing models. In each case the input to the network is a sequence of tag-word pairs.5 5We used a publicly available tagger (Ratnaparkhi, 1996) to provide the tags. For each tag, there is an We report results for three different vocabulary sizes, varying in the frequency with which tagword pairs must occur in the training set in order to be included explicitly in the vocabulary. A frequency threshold of 200 resulted in a vocabulary of 508 tag-word pairs, a threshold of 20 resulted in 4215 tag-word pairs, and a threshold of 5 resulted in 11,993 tag-word pairs For the generative model we trained networks for the 508 (“GSSN-Freq≥200”) and 4215 (“GSSN-Freq≥20”) word vocabularies. The need to calculate word predictions makes training times for the 11,993 word vocabulary very long, and as of this writing no such network training has been completed. The discriminative model does not need to calculate word predictions, so it was feasible to train networks for the 11,993 word vocabulary (“DSSN-Freq≥5”). Previous results (Henderson, 2003a) indicate that this vocabulary size performs better than the smaller ones, as would be expected. For the networks trained with the discriminative optimization criteria and the generative probability model, we trained networks for the 508 (“DGSSN-Freq≥200”) and 4215 (“DGSSNFreq≥20”) word vocabularies. 
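Returning to the search procedure of Section 5, a minimal sketch of the post-word beam is given below; the extend_through_next_shift function and the log_prob attribute are assumptions about the surrounding parser code, not part of the actual implementation.

import heapq

def beam_parse(initial, extend_through_next_shift, words, beam_width=5):
    """Schematic search with a post-word beam: extend_through_next_shift(p) is
    assumed to return every continuation of partial derivation p up to and
    including its next shift action; after each word is shifted, only the
    beam_width most probable partial derivations are allowed to continue."""
    beam = [initial]
    for _ in words:
        candidates = [q for p in beam for q in extend_through_next_shift(p)]
        beam = heapq.nlargest(beam_width, candidates, key=lambda p: p.log_prob)
    return max(beam, key=lambda p: p.log_prob)

A width of 5 gives the fast setting described above, while the parsers with generative probability models reach maximum accuracy with a width of 100. Training the DGSSN networks with the discriminative criteria requires one further ingredient, described next.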
For this training, we need to select a small set of the most probable incorrect parses. When we tried using only the network being trained to choose these top parses, training times were very long and the resulting networks did not outperform their generative counterparts. In the experiments reported here, we provided the training with a list of the top 20 parses found by a network of the same type which had been trained with the generative criteria. The network being trained was then used to choose its top 10 parses from this list, and training was performed on these 10 parses and the correct parse.6 This reduced the time necessary to choose the top parses during training, and helped focus the early stages of training on learning relevant discriminations. Once the training of these networks was complete, we tested both their ability to parse on their own and their ability to re-rank the top unknown-word vocabulary item which is used for all those words which are not sufficiently frequent with that tag to be included individually in the vocabulary (as well as other words if the unknown-word case itself does not have at least 5 instances). We did no morphological analysis of unknown words. 6The 20 candidate parses and the 10 training parses were found with post-word beam widths of 20 and 10, respectively, so these are only approximations to the top parses. 20 parses of their associated generative model (“DGSSN-. . ., rerank”). We determined appropriate training parameters and network size based on intermediate validation results and our previous experience.7 We trained several networks for each of the GSSN models and chose the best ones based on their validation performance. We then trained one network for each of the DGSSN models and for the DSSN model. The best post-word beam width was determined on the validation set, which was 5 for the DSSN model and 100 for the other models. To avoid repeated testing on the standard testing set, we first compare the different models with their performance on the validation set. Standard measures of accuracy are shown in table 1.8 The largest accuracy difference is between the parser with the discriminative probability model (DSSN-Freq≥5) and those with the generative probability model, despite the larger vocabulary of the former. This demonstrates the difficulty of estimating the parameters of a discriminative probability model. There is also a clear effect of vocabulary size, but there is a slightly larger effect of training method. When tested in the same way as they were trained (for reranking), the parsers which were trained with a discriminative criteria achieve a 7% and 8% reduction in error rate over their respective parsers with the same generative probability model. When tested alone, these DGSSN parsers perform only slightly better than their respective GSSN parsers. Initial experiments on giving these networks exposure to parses outside the top 20 parses of the GSSN parsers at the very end of training did not result in any improvement on this task. This suggests that at least some of the advantage of the DSSN models is due to the fact that re-ranking is a simpler task than parsing from scratch. But additional experimental work would be necessary to make any definite conclusions about this issue. 7All the best networks had 80 hidden units for the history representation (and 80 hidden units in the lookahead representation). Weight decay regularization was applied at the beginning of training but reduced to near 0 by the end of training. 
Training was stopped when maximum performance was reached on the validation set, using a post-word beam width of 5. 8All our results are computed with the evalb program following the standard criteria in (Collins, 1999), and using the standard training (sections 2–22, 39,832 sentences, 910,196 words), validation (section 24, 1346 sentence, 31507 words), and testing (section 23, 2416 sentences, 54268 words) sets (Collins, 1999). LR LP Fβ=1∗ DSSN-Freq≥5 84.9 86.0 85.5 GSSN-Freq≥200 87.6 88.9 88.2 DGSSN-Freq≥200 87.8 88.8 88.3 GSSN-Freq≥20 88.2 89.3 88.8 DGSSN-Freq≥200, rerank 88.5 89.6 89.0 DGSSN-Freq≥20 88.5 89.7 89.1 DGSSN-Freq≥20, rerank 89.0 90.3 89.6 Table 1: Percentage labeled constituent recall (LR), precision (LP), and a combination of both (Fβ=1) on validation set sentences of length at most 100. LR LP Fβ=1∗ Ratnaparkhi99 86.3 87.5 86.9 Collins99 88.1 88.3 88.2 Collins&Duffy02 88.6 88.9 88.7 Charniak00 89.6 89.5 89.5 Collins00 89.6 89.9 89.7 DGSSN-Freq≥20, rerank 89.8 90.4 90.1 Bod03 90.7 90.8 90.7 * Fβ=1 for previous models may have rounding errors. Table 2: Percentage labeled constituent recall (LR), precision (LP), and a combination of both (Fβ=1) on the entire testing set. For comparison to previous results, table 2 lists the results for our best model (DGSSNFreq≥20, rerank)9 and several other statistical parsers (Ratnaparkhi, 1999; Collins, 1999; Collins and Duffy, 2002; Charniak, 2000; Collins, 2000; Bod, 2003) on the entire testing set. Our best performing model is more accurate than all these previous models except (Bod, 2003). This DGSSN parser achieves this result using much less lexical knowledge than other approaches, which mostly use at least the words which occur at least 5 times, plus morphological features of the remaining words. However, the fact that the DGSSN uses a large-vocabulary tagger (Ratnaparkhi, 1996) as a preprocessing stage may compensate for its smaller vocabulary. Also, the main reason for using a smaller vocabulary is the computational complexity of computing probabilities for the shift(wi) actions on-line, which other models do not require. 9On sentences of length at most 40, the DGSSNFreq≥20-rerank model gets 90.1% recall and 90.7% precision. 7 Related Work Johnson (2001) investigated similar issues for parsing and tagging. His maximal conditional likelihood estimate for a PCFG takes the same approach as our generative model trained with a discriminative criteria. While he shows a non-significant increase in performance over the standard maximal joint likelihood estimate on a small dataset, because he did not have a computationally efficient way to train this model, he was not able to test it on the standard datasets. The other models he investigates conflate changes in the probability models with changes in the training criteria, and the discriminative probability models do worse. In the context of part-of-speech tagging, Klein and Manning (2002) argue for the same distinctions made here between discriminative models and discriminative training criteria, and come to the same conclusions. However, their arguments are made in terms of independence assumptions. Our results show that these generalizations also apply to methods which do not rely on independence assumptions. 
While both (Johnson, 2001) and (Klein and Manning, 2002) propose models which use the parameters of the generative model but train to optimize a discriminative criteria, neither proposes training algorithms which are computationally tractable enough to be used for broad coverage parsing. Our proposed training method succeeds in being both tractable and effective, demonstrating both a significant improvement over the equivalent generative model and state-of-the-art accuracy. Collins (2000) and Collins and Duffy (2002) also succeed in finding algorithms for training discriminative models which balance tractability with effectiveness, showing improvements over a generative model. Both these methods are limited to reranking the output of another parser, while our trained parser can be used alone. Neither of these methods use the parameters of a generative probability model, which might explain our better performance (see table 2). 8 Conclusions This article has investigated the application of discriminative methods to broad coverage natural language parsing. We distinguish between two different ways to apply discriminative methods, one where the probability model is changed to a discriminative one, and one where the probability model remains generative but the training method optimizes a discriminative criteria. We find that the discriminative probability model is much worse than the generative one, but that training to optimize the discriminative criteria results in improved performance. Performance of the latter model on the standard test set achieves 90.1% F-measure on constituents, which is the second best current accuracy level, and only 0.6% below the current best (Bod, 2003). This paper has also proposed a neural network training method which optimizes a discriminative criteria even when the parameters being estimated are those of a generative probability model. This training method successfully satisfies the conflicting constraints that it be computationally tractable and that it be a good approximation to the theoretically optimal method. This approach contrasts with previous approaches to scaling up discriminative methods to broad coverage natural language parsing, which have parameterizations which depart substantially from the successful previous generative models of parsing. References Christopher M. Bishop. 1995. Neural Networks for Pattern Recognition. Oxford University Press, Oxford, UK. Rens Bod. 2003. An efficient implementation of a new DOP model. In Proc. 10th Conf. of European Chapter of the Association for Computational Linguistics, Budapest, Hungary. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proc. 1st Meeting of North American Chapter of Association for Computational Linguistics, pages 132–139, Seattle, Washington. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures and the voted perceptron. In Proc. 35th Meeting of Association for Computational Linguistics, pages 263–270. Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proc. 17th Int. Conf. on Machine Learning, pages 175–182, Stanford, CA. James Henderson. 2003a. Generative versus discriminative models for statistical leftcorner parsing. In Proc. 8th Int. Workshop on Parsing Technologies, pages 115–126, Nancy, France. James Henderson. 2003b. 
Inducing history representations for broad coverage statistical parsing. In Proc. joint meeting of North American Chapter of the Association for Computational Linguistics and the Human Language Technology Conf., pages 103–110, Edmonton, Canada. Mark Johnson. 2001. Joint and conditional estimation of tagging and parsing models. In Proc. 39th Meeting of Association for Computational Linguistics, pages 314–321, Toulouse, France. Dan Klein and Christopher D. Manning. 2002. Conditional structure versus conditional estimation in NLP models. In Proc. Conf. on Empirical Methods in Natural Language Processing, pages 9–16, Univ. of Pennsylvania, PA. Peter Lane and James Henderson. 2001. Incremental syntactic parsing of natural language corpora with Simple Synchrony Networks. IEEE Transactions on Knowledge and Data Engineering, 13(2):219–231. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. A. Y. Ng and M. I. Jordan. 2002. On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA. MIT Press. Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. Conf. on Empirical Methods in Natural Language Processing, pages 133–142, Univ. of Pennsylvania, PA. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34:151–175. D.J. Rosenkrantz and P.M. Lewis. 1970. Deterministic left corner parsing. In Proc. 11th Symposium on Switching and Automata Theory, pages 139–152. Vladimir N. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, New York.
2004
13
Parsing the WSJ using CCG and Log-Linear Models Stephen Clark School of Informatics University of Edinburgh 2 Buccleuch Place, Edinburgh, UK [email protected] James R. Curran School of Information Technologies University of Sydney NSW 2006, Australia [email protected] Abstract This paper describes and evaluates log-linear parsing models for Combinatory Categorial Grammar (CCG). A parallel implementation of the L-BFGS optimisation algorithm is described, which runs on a Beowulf cluster allowing the complete Penn Treebank to be used for estimation. We also develop a new efficient parsing algorithm for CCG which maximises expected recall of dependencies. We compare models which use all CCG derivations, including nonstandard derivations, with normal-form models. The performances of the two models are comparable and the results are competitive with existing wide-coverage CCG parsers. 1 Introduction A number of statistical parsing models have recently been developed for Combinatory Categorial Grammar (CCG; Steedman, 2000) and used in parsers applied to the WSJ Penn Treebank (Clark et al., 2002; Hockenmaier and Steedman, 2002; Hockenmaier, 2003b). In Clark and Curran (2003) we argued for the use of log-linear parsing models for CCG. However, estimating a log-linear model for a widecoverage CCG grammar is very computationally expensive. Following Miyao and Tsujii (2002), we showed how the estimation can be performed efficiently by applying the inside-outside algorithm to a packed chart. We also showed how the complete WSJ Penn Treebank can be used for training by developing a parallel version of Generalised Iterative Scaling (GIS) to perform the estimation. This paper significantly extends our earlier work in a number of ways. First, we evaluate a number of log-linear models, obtaining results which are competitive with the state-of-the-art for CCG parsing. We also compare log-linear models which use all CCG derivations, including non-standard derivations, with normal-form models. Second, we find that GIS is unsuitable for estimating a model of the size being considered, and develop a parallel version of the L-BFGS algorithm (Nocedal and Wright, 1999). And finally, we show that the parsing algorithm described in Clark and Curran (2003) is extremely slow in some cases, and suggest an efficient alternative based on Goodman (1996). The development of parsing and estimation algorithms for models which use all derivations extends existing CCG parsing techniques, and allows us to test whether there is useful information in the additional derivations. However, we find that the performance of the normal-form model is at least as good as the all-derivations model, in our experiments todate. The normal-form approach allows the use of additional constraints on rule applications, leading to a smaller model, reducing the computational resources required for estimation, and resulting in an extremely efficient parser. This paper assumes a basic understanding of CCG; see Steedman (2000) for an introduction, and Clark et al. (2002) and Hockenmaier (2003a) for an introduction to statistical parsing with CCG. 2 Parsing Models for CCG CCG is unusual among grammar formalisms in that, for each derived structure for a sentence, there can be many derivations leading to that structure. The presence of such ambiguity, sometimes referred to as spurious ambiguity, enables CCG to produce elegant analyses of coordination and extraction phenomena (Steedman, 2000). 
However, the introduction of extra derivations increases the complexity of the modelling and parsing problem. Clark et al. (2002) handle the additional derivations by modelling the derived structure, in their case dependency structures. They use a conditional model, based on Collins (1996), which, as the authors acknowledge, has a number of theoretical deficiencies; thus the results of Clark et al. provide a useful baseline for the new models presented here. Hockenmaier (2003a) uses a model which favours only one of the derivations leading to a derived structure, namely the normal-form derivation (Eisner, 1996). In this paper we compare the normal-form approach with a dependency model. For the dependency model, we define the probability of a dependency structure as follows: P(π|S ) = X d∈∆(π) P(d, π|S ) (1) where π is a dependency structure, S is a sentence and ∆(π) is the set of derivations which lead to π. This extends the approach of Clark et al. (2002) who modelled the dependency structures directly, not using any information from the derivations. In contrast to the dependency model, the normal-form model simply defines a distribution over normalform derivations. The dependency structures considered in this paper are described in detail in Clark et al. (2002) and Clark and Curran (2003). Each argument slot in a CCG lexical category represents a dependency relation, and a dependency is defined as a 5-tuple ⟨hf , f, s, ha, l⟩, where h f is the head word of the lexical category, f is the lexical category, s is the argument slot, ha is the head word of the argument, and l indicates whether the dependency is long-range. For example, the long-range dependency encoding company as the extracted object of bought (as in the company that IBM bought) is represented as the following 5-tuple: ⟨bought, (S[dcl]\NP1)/NP2, 2, company, ∗⟩ where ∗is the category (NP\NP)/(S[dcl]/NP) assigned to the relative pronoun. For local dependencies l is assigned a null value. A dependency structure is a multiset of these dependencies. 3 Log-Linear Parsing Models Log-linear models (also known as Maximum Entropy models) are popular in NLP because of the ease with which discriminating features can be included in the model. Log-linear models have been applied to the parsing problem across a range of grammar formalisms, e.g. Riezler et al. (2002) and Toutanova et al. (2002). One motivation for using a log-linear model is that long-range dependencies which CCG was designed to handle can easily be encoded as features. A conditional log-linear model of a parse ω ∈Ω, given a sentence S , is defined as follows: P(ω|S ) = 1 ZS eλ. f(ω) (2) where λ. f(ω) = P i λi fi(ω). The function fi is a feature of the parse which can be any real-valued function over the space of parses Ω. Each feature fi has an associated weight λi which is a parameter of the model to be estimated. ZS is a normalising constant which ensures that P(ω|S ) is a probability distribution: ZS = X ω′∈ρ(S ) eλ. f(ω′) (3) where ρ(S ) is the set of possible parses for S . For the dependency model a parse, ω, is a ⟨d, π⟩ pair (as given in (1)). A feature is a count of the number of times some configuration occurs in d or the number of times some dependency occurs in π. Section 6 gives examples of features. 3.1 The Dependency Model We follow Riezler et al. (2002) in using a discriminative estimation method by maximising the conditional likelihood of the model given the data. For the dependency model, the data consists of sentences S 1, . . . 
, S m, together with gold standard dependency structures, π1, . . . , πm. The gold standard structures are multisets of dependencies, as described earlier. Section 6 explains how the gold standard structures are obtained. The objective function of a model Λ is the conditional log-likelihood, L(Λ), minus a Gaussian prior term, G(Λ), used to reduce overfitting (Chen and Rosenfeld, 1999). Hence, given the definition of the probability of a dependency structure (1), the objective function is as follows:

L'(\Lambda) = L(\Lambda) - G(\Lambda)
            = \log \prod_{j=1}^{m} P_\Lambda(\pi_j \mid S_j) - \sum_{i=1}^{n} \frac{\lambda_i^2}{2\sigma_i^2}
            = \sum_{j=1}^{m} \log \frac{\sum_{d \in \Delta(\pi_j)} e^{\lambda \cdot f(d,\pi_j)}}{\sum_{\omega \in \rho(S_j)} e^{\lambda \cdot f(\omega)}} - \sum_{i=1}^{n} \frac{\lambda_i^2}{2\sigma_i^2}
            = \sum_{j=1}^{m} \log \sum_{d \in \Delta(\pi_j)} e^{\lambda \cdot f(d,\pi_j)} - \sum_{j=1}^{m} \log \sum_{\omega \in \rho(S_j)} e^{\lambda \cdot f(\omega)} - \sum_{i=1}^{n} \frac{\lambda_i^2}{2\sigma_i^2}    (4)

where n is the number of features. Rather than have a different smoothing parameter σi for each feature, we use a single parameter σ. We use a technique from the numerical optimisation literature, the L-BFGS algorithm (Nocedal and Wright, 1999), to optimise the objective function. L-BFGS is an iterative algorithm which requires the gradient of the objective function to be computed at each iteration. The components of the gradient vector are as follows:

\frac{\partial L'(\Lambda)}{\partial \lambda_i} = \sum_{j=1}^{m} \frac{\sum_{d \in \Delta(\pi_j)} e^{\lambda \cdot f(d,\pi_j)} f_i(d,\pi_j)}{\sum_{d \in \Delta(\pi_j)} e^{\lambda \cdot f(d,\pi_j)}} - \sum_{j=1}^{m} \frac{\sum_{\omega \in \rho(S_j)} e^{\lambda \cdot f(\omega)} f_i(\omega)}{\sum_{\omega \in \rho(S_j)} e^{\lambda \cdot f(\omega)}} - \frac{\lambda_i}{\sigma_i^2}    (5)

The first two terms in (5) are expectations of feature fi: the first expectation is over all derivations leading to each gold standard dependency structure; the second is over all derivations for each sentence in the training data. Setting the gradient to zero yields the usual maximum entropy constraints (Berger et al., 1996), except that in this case the empirical values are themselves expectations (over all derivations leading to each gold standard dependency structure). The estimation process attempts to make the expectations equal, by putting as much mass as possible on the derivations leading to the gold standard structures.1 The Gaussian prior term penalises any model whose weights get too large in absolute value.

1See Riezler et al. (2002) for a similar description in the context of LFG parsing.

Calculation of the feature expectations requires summing over all derivations for a sentence, and summing over all derivations leading to a gold standard dependency structure. In both cases there can be exponentially many derivations, and so enumerating all derivations is not possible (at least for wide-coverage automatically extracted grammars). Clark and Curran (2003) show how the sum over the complete derivation space can be performed efficiently using a packed chart and a variant of the inside-outside algorithm. Section 5 shows how the same technique can also be applied to all derivations leading to a gold standard dependency structure. 3.2 The Normal-Form Model The objective function and gradient vector for the normal-form model are as follows:

L'(\Lambda) = L(\Lambda) - G(\Lambda) = \log \prod_{j=1}^{m} P_\Lambda(d_j \mid S_j) - \sum_{i=1}^{n} \frac{\lambda_i^2}{2\sigma_i^2}    (6)

\frac{\partial L'(\Lambda)}{\partial \lambda_i} = \sum_{j=1}^{m} f_i(d_j) - \sum_{j=1}^{m} \frac{\sum_{d \in \theta(S_j)} e^{\lambda \cdot f(d)} f_i(d)}{\sum_{d \in \theta(S_j)} e^{\lambda \cdot f(d)}} - \frac{\lambda_i}{\sigma_i^2}    (7)

where dj is the gold standard derivation for sentence Sj and θ(Sj) is the set of possible derivations for Sj. Note that the empirical expectation in (7) is simply a count of the number of times the feature appears in the gold-standard derivations.
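Once these expectations are available, each component of the gradient is a simple difference. The sketch below assumes the two expectation vectors have already been computed (and summed over the training sentences) from the packed charts described next; the array-based interface is ours rather than the actual implementation.

import numpy as np

def objective_gradient(e_gold, e_all, weights, sigma):
    """Gradient components as in (5)/(7): the feature expectation over derivations
    leading to the gold-standard structures (or the gold-derivation feature counts
    for the normal-form model), minus the expectation over all derivations, minus
    the Gaussian prior term lambda_i / sigma^2."""
    return np.asarray(e_gold) - np.asarray(e_all) - np.asarray(weights) / sigma**2

# A generic L-BFGS routine (e.g. scipy.optimize.minimize with method="L-BFGS-B")
# minimises its objective, so in practice it would be handed the negated
# log-likelihood and the negation of the vector returned above.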
4 Packed Charts The packed charts perform a number of roles: they are a compact representation of a very large number of CCG derivations; they allow recovery of the highest scoring parse or dependency structure without enumerating all derivations; and they represent an instance of what Miyao and Tsujii (2002) call a feature forest, which is used to efficiently estimate a log-linear model. The idea behind a packed chart is simple: equivalent chart entries of the same type, in the same cell, are grouped together, and back pointers to the daughters indicate how an individual entry was created. Equivalent entries form the same structures in any subsequent parsing. Since the packed charts are used for model estimation and recovery of the highest scoring parse or dependency structure, the features in the model partly determine which entries can be grouped together. In this paper we use features from the dependency structure, and features defined on the local rule instantiations.2 Hence, any two entries with identical category type, identical head, and identical unfilled dependencies are equivalent. Note that not all features are local to a rule instantiation; for example, features encoding long-range dependencies may involve words which are a long way apart in the sentence. For the purposes of estimation and finding the highest scoring parse or dependency structure, only entries which are part of a derivation spanning the whole sentence are relevant. These entries can be easily found by traversing the chart top-down, starting with the entries which span the sentence. The entries within spanning derivations form a feature forest (Miyao and Tsujii, 2002). A feature forest Φ is a tuple ⟨C, D, R, γ, δ⟩where: C is a set of conjunctive nodes; D is a set of disjunctive nodes; R ⊆D is a set of root disjunctive nodes; γ : D →2C is a conjunctive daughter function; δ : C →2D is a disjunctive daughter function. The individual entries in a cell are conjunctive nodes, and the equivalence classes of entries are dis2By rule instantiation we mean the local tree arising from the application of a CCG combinatory rule. ⟨C, D, R, γ, δ⟩is a packed chart / feature forest G is a set of gold standard dependencies Let c be a conjunctive node Let d be a disjunctive node deps(c) is the set of dependencies on node c cdeps(c) = ( −1 if, for some τ ∈deps(c), τ < G |deps(c)| otherwise dmax(c) =  −1 if cdeps(c) = −1 −1 if dmax(d) = −1 for some d ∈δ(c) P d∈δ(c) dmax(d) + cdeps(c) otherwise dmax(d) = max{dmax(c) | c ∈γ(d)} mark(d): mark d as a correct node foreach c ∈γ(d) if dmax(c) = dmax(d) mark c as a correct node foreach d′ ∈δ(c) mark(d′) foreach dr ∈R such that dmax . (dr) = |G| mark(dr) Figure 1: Finding nodes in correct derivations junctive nodes. The roots of the CCG derivations represent the root disjunctive nodes.3 5 Efficient Estimation The L-BFGS algorithm requires the following values at each iteration: the expected value, and the empirical expected value, of each feature (to calculate the gradient); and the value of the likelihood function. For the normal-form model, the empirical expected values and the likelihood can easily be obtained, since these only involve the single goldstandard derivation for each sentence. The expected values can be calculated using the method in Clark and Curran (2003). 
For the dependency model, the computations of the empirical expected values (5) and the likelihood function (4) are more complex, since these require sums over just those derivations leading to the gold standard dependency structure. We will refer to such derivations as correct derivations. Figure 1 gives an algorithm for finding nodes in a packed chart which appear in correct derivations. cdeps(c) is the number of correct dependencies on conjunctive node c, and takes the value −1 if there are any incorrect dependencies on c. dmax(c) is 3A more complete description of CCG feature forests is given in Clark and Curran (2003). the maximum number of correct dependencies produced by any sub-derivation headed by c, and takes the value −1 if there are no sub-derivations producing only correct dependencies. dmax(d) is the same value but for disjunctive node d. Recursive definitions for calculating these values are given in Figure 1; the base case occurs when conjunctive nodes have no disjunctive daughters. The algorithm identifies all those root nodes heading derivations which produce just the correct dependencies, and traverses the chart top-down marking the nodes in those derivations. The insight behind the algorithm is that, for two conjunctive nodes in the same equivalence class, if one node heads a sub-derivation producing more correct dependencies than the other node (and each sub-derivation only produces correct dependencies), then the node with less correct dependencies cannot be part of a correct derivation. The conjunctive and disjunctive nodes appearing in correct derivations form a new correct feature forest. The correct forest, and the complete forest containing all derivations spanning the sentence, can be used to estimate the required likelihood value and feature expectations. Let EΦ Λ fi be the expected value of fi over the forest Φ for model Λ; then the values in (5) can be obtained by calculating EΦj Λ fi for the complete forest Φ j for each sentence S j in the training data (the second sum in (5)), and also EΨj Λ fi for each forest Ψ j of correct derivations (the first sum in (5)): ∂L(Λ) ∂λi = m X j=1 (EΨj Λ fi −EΦj Λ fi) (8) The likelihood in (4) can be calculated as follows: L(Λ) = m X j=1 (log ZΨj −log ZΦj) (9) where log ZΦ is the normalisation constant for Φ. 6 Estimation in Practice The gold standard dependency structures are produced by running our CCG parser over the normal-form derivations in CCGbank (Hockenmaier, 2003a). Not all rule instantiations in CCGbank are instances of combinatory rules, and not all can be produced by the parser, and so gold standard structures were created for 85.5% of the sentences in sections 2-21 (33,777 sentences). The same parser is used to produce the packed charts. The parser uses a maximum entropy supertagger (Clark and Curran, 2004) to assign lexical categories to the words in a sentence, and applies the CKY chart parsing algorithm described in Steedman (2000). For parsing the training data, we ensure that the correct category is a member of the set assigned to each word. The average number of categories assigned to each word is determined by a parameter in the supertagger. For the first set of experiments, we used a setting which assigns 1.7 categories on average per word. The feature set for the dependency model consists of the following types of features: dependency features (with and without distance measures), rule instantiation features (with and without a lexical head), lexical category features, and root category features. 
Dependency features are the 5-tuples defined in Section 1. There are also three additional dependency feature types which have an extra distance field (and only include the head of the lexical category, and not the head of the argument); these count the number of words (0, 1, 2 or more), punctuation marks (0, 1, 2 or more), and verbs (0, 1 or more) between head and dependent. Lexical category features are word–category pairs at the leaf nodes, and root features are headword–category pairs at the root nodes. Rule instantiation features simply encode the combining categories together with the result category. There is an additional rule feature type which also encodes the lexical head of the resulting category. Additional generalised features for each feature type are formed by replacing words with their POS tags. The feature set for the normal-form model is the same except that, following Hockenmaier and Steedman (2002), the dependency features are defined in terms of the local rule instantiations, by adding the heads of the combining categories to the rule instantiation features. Again there are 3 additional distance feature types, as above, which only include the head of the resulting category. We had hoped that by modelling the predicate-argument dependencies produced by the parser, rather than local rule dependencies, we would improve performance. However, using the predicate-argument dependencies in the normal-form model instead of, or in addition to, the local rule dependencies, has not led to an improvement in parsing accuracy. Only features which occurred more than once in the training data were included, except that, for the dependency model, the cutoff for the rule features was 9 and the counting was performed across all derivations, not just the gold-standard derivation. The normal-form model has 482,007 features and the dependency model has 984,522 features. We used 45 machines of a 64-node Beowulf cluster to estimate the dependency model, with an average memory usage of approximately 550 MB for each machine. For the normal-form model we were able to reduce the size of the charts considerably by applying two types of restriction to the parser: first, categories can only combine if they appear together in a rule instantiation in sections 2–21 of CCGbank; and second, we apply the normal-form restrictions described in Eisner (1996). (See Clark and Curran (2004) for a description of the Eisner constraints.) The normal-form model requires only 5 machines for estimation, with an average memory usage of 730 MB for each machine. Initially we tried the parallel version of GIS described in Clark and Curran (2003) to perform the estimation, running over the Beowulf cluster. However, we found that GIS converged extremely slowly; this is in line with other recent results in the literature applying GIS to globally optimised models such as conditional random fields, e.g. Sha and Pereira (2003). As an alternative to GIS, we have implemented a parallel version of our L-BFGS code using the Message Passing Interface (MPI) standard. L-BFGS over forests can be parallelised, using the method described in Clark and Curran (2003) to calculate the feature expectations. The L-BFGS algorithm, run to convergence on the cluster, takes 479 iterations and 2 hours for the normal-form model, and 1,550 iterations and roughly 17 hours for the dependency model. 7 Parsing Algorithm For the normal-form model, the Viterbi algorithm is used to find the most probable derivation. 
For the dependency model, the highest scoring dependency structure is required. Clark and Curran (2003) outlines an algorithm for finding the most probable dependency structure, which keeps track of the highest scoring set of dependencies for each node in the chart. For a set of equivalent entries in the chart (a disjunctive node), this involves summing over all conjunctive node daughters which head sub-derivations leading to the same set of high scoring dependencies. In practice large numbers of such conjunctive nodes lead to very long parse times.

As an alternative to finding the most probable dependency structure, we have developed an algorithm which maximises the expected labelled recall over dependencies. Our algorithm is based on Goodman’s (1996) labelled recall algorithm for the phrase-structure PARSEVAL measures. Let L_π be the number of correct dependencies in π with respect to a gold standard dependency structure G; then the dependency structure, π_max, which maximises the expected recall rate is:

    π_max = arg max_π E( L_π / |G| )    (10)
          = arg max_π Σ_{π_i} P(π_i | S) |π ∩ π_i|

where S is the sentence for gold standard dependency structure G and π_i ranges over the dependency structures for S. This expression can be expanded further:

    π_max = arg max_π Σ_{π_i} P(π_i | S) Σ_{τ ∈ π} 1[τ ∈ π_i]
          = arg max_π Σ_{τ ∈ π} Σ_{π′ | τ ∈ π′} P(π′ | S)
          = arg max_π Σ_{τ ∈ π} Σ_{d ∈ ∆(π′) | τ ∈ π′} P(d | S)    (11)

The final score for a dependency structure π is a sum of the scores for each dependency τ in π; and the score for a dependency τ is the sum of the probabilities of those derivations producing τ. This latter sum can be calculated efficiently using inside and outside scores:

    π_max = arg max_π Σ_{τ ∈ π} (1/Z_S) Σ_{c ∈ C : τ ∈ deps(c)} φ_c ψ_c    (12)

where φ_c is the inside score and ψ_c is the outside score for node c (see Clark and Curran (2003)); C is the set of conjunctive nodes in the packed chart for sentence S and deps(c) is the set of dependencies on conjunctive node c. The intuition behind the expected recall score is that a dependency structure scores highly if it has dependencies produced by high scoring derivations.4 The algorithm which finds π_max is a simple variant on the Viterbi algorithm, efficiently finding a derivation which produces the highest scoring set of dependencies.

Footnote 4: Coordinate constructions can create multiple dependencies for a single argument slot; in this case the score for the multiple dependencies is the average of the individual scores.
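A sketch of the per-dependency scores used in equation (12), reusing the conjunctive-node representation from the earlier sketch and assuming inside and outside scores have already been computed for every conjunctive node; in a real implementation the products would normally be accumulated in log space:

    from collections import defaultdict

    def dependency_scores(conjunctive_nodes, inside, outside, z):
        """Equation (12): score(tau) = (1/Z_S) * sum of inside(c) * outside(c)
        over conjunctive nodes c with tau in deps(c)."""
        scores = defaultdict(float)
        for c in conjunctive_nodes:
            w = inside[id(c)] * outside[id(c)] / z
            for tau in c.deps:
                scores[tau] += w
        return scores

A Viterbi-style pass over the chart can then select, at each disjunctive node, the daughter whose sub-derivation yields the highest total of these scores.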
8 Experiments

Gold standard dependency structures were derived from section 00 (for development) and section 23 (for testing) by running the parser over the derivations in CCGbank, some of which the parser could not process. In order to increase the number of test sentences, and to allow a fair comparison with other CCG parsers, extra rules were encoded in the parser (but we emphasise these were only used to obtain the section 23 test data; they were not used to parse unseen data as part of the testing). This resulted in 2,365 dependency structures for section 23 (98.5% of the full section), and 1,825 (95.5%) dependency structures for section 00.

The first stage in parsing the test data is to apply the supertagger. We use the novel strategy developed in Clark and Curran (2004): first assign a small number of categories (approximately 1.4) on average to each word, and increase the number of categories if the parser fails to find an analysis. We were able to parse 98.9% of section 23 using this strategy. Clark and Curran (2004) shows that this supertagging method results in a highly efficient parser. For the normal-form model we returned the dependency structure for the most probable derivation, applying the two types of normal-form constraints described in Section 6. For the dependency model we returned the dependency structure with the highest expected labelled recall score.

Following Clark et al. (2002), evaluation is by precision and recall over dependencies. For a labelled dependency to be correct, the first 4 elements of the dependency tuple must match exactly. For an unlabelled dependency to be correct, the heads of the functor and argument must appear together in some relation in the gold standard (in any order).

The results on section 00, using the feature sets described earlier, are given in Table 1, with similar results overall for the normal-form model and the dependency model. Since experimentation is easier with the normal-form model than the dependency model, we present additional results for the normal-form model.

Table 1: Results on development set; labelled and unlabelled precision and recall, and lexical category accuracy

                  LP     LR     UP     UR     cat
    Dep model     86.7   85.6   92.6   91.5   93.5
    N-form model  86.4   86.2   92.4   92.2   93.6

Table 2 gives the results for the normal-form model for various feature sets. The results show that each additional feature type increases performance. Hockenmaier also found the dependencies to be very beneficial — in contrast to recent results from the lexicalised PCFG parsing literature (Gildea, 2001) — but did not gain from the use of distance measures. One of the advantages of a log-linear model is that it is easy to include additional information, such as distance, as features. The FINAL result in Table 2 is obtained by using a larger derivation space for training, created using more categories per word from the supertagger, 2.9, and hence using charts containing more derivations. (15 machines were used to estimate this model.) More investigation is needed to find the optimal chart size for estimation, but the results show a gain in accuracy.

Table 2: Results on development set for the normal-form models

    Features     LP     LR     UP     UR     cat
    RULES        82.6   82.0   89.7   89.1   92.4
    +HEADS       83.6   83.3   90.2   90.0   92.8
    +DEPS        85.5   85.3   91.6   91.3   93.5
    +DISTANCE    86.4   86.2   92.4   92.2   93.6
    FINAL        87.0   86.8   92.7   92.5   93.9

Table 3 gives the results of the best performing normal-form model on the test set. The results of Clark et al. (2002) and Hockenmaier (2003a) are shown for comparison. The dependency set used by Hockenmaier contains some minor differences to the set used here, but “evaluating” our test set against Hockenmaier’s gives an F-score of over 97%, showing the test sets to be very similar. The results show that our parser is performing significantly better than that of Clark et al., demonstrating the benefit of derivation features and the use of a sound statistical model.

Table 3: Results on the test set

                         LP     LR     UP     UR     cat
    Clark et al. 2002    81.9   81.8   90.1   89.9   90.3
    Hockenmaier 2003     84.3   84.6   91.8   92.2   92.2
    Log-linear           86.6   86.3   92.5   92.1   93.6
    Hockenmaier (POS)    83.1   83.5   91.1   91.5   91.5
    Log-linear (POS)     84.8   84.5   91.4   91.0   92.5

The results given so far have all used gold standard POS tags from CCGbank. Table 3 also gives the results if automatically assigned POS tags are used in the training and testing phases, using the C&C POS tagger (Curran and Clark, 2003). The performance reduction is expected given that the supertagger relies heavily on POS tags as features.
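The dependency evaluation just described can be sketched as follows; the assumption that the head word is the first element of each dependency tuple and the argument word the fourth reflects the 5-tuple format referred to earlier, and the helper is ours rather than the authors' evaluation script:

    def prf(matched, test_total, gold_total):
        # precision, recall and F-measure from raw counts
        p = matched / test_total if test_total else 0.0
        r = matched / gold_total if gold_total else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f

    def evaluate_dependencies(test_deps, gold_deps):
        """Labelled: the first 4 elements of a dependency tuple must match exactly.
        Unlabelled: the head and argument words must appear together in some gold
        relation, in either order (tuple slot positions are assumed here)."""
        gold_labelled = {d[:4] for d in gold_deps}
        gold_pairs = {frozenset((d[0], d[3])) for d in gold_deps}
        labelled = sum(1 for d in test_deps if d[:4] in gold_labelled)
        unlabelled = sum(1 for d in test_deps if frozenset((d[0], d[3])) in gold_pairs)
        return {
            "labelled": prf(labelled, len(test_deps), len(gold_deps)),
            "unlabelled": prf(unlabelled, len(test_deps), len(gold_deps)),
        }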
More investigation is needed to properly compare our parser and Hockenmaier’s, since there are a number of differences in addition to the models used: Hockenmaier effectively reads a lexicalised PCFG off CCGbank, and is able to use all of the available training data; Hockenmaier does not use a supertagger, but does use a beam search. Parsing the 2,401 sentences in section 23 takes 1.6 minutes using the normal-form model, and 10.5 minutes using the dependency model. The difference is due largely to the normal-form constraints used by the normal-form parser. Clark and Curran (2004) shows that the normal-form constraints significantly increase parsing speed and, in combination with adaptive supertagging, result in a highly efficient wide-coverage parser. As a final oracle experiment we parsed the sentences in section 00 using the correct lexical categories from CCGbank. Since the parser uses only a subset of the lexical categories in CCGbank, 7% of the sentences could not be parsed; however, the labelled F-score for the parsed sentences was almost 98%. This very high score demonstrates the large amount of information in lexical categories. 9 Conclusion A major contribution of this paper has been the development of a parsing model for CCG which uses all derivations, including non-standard derivations. Non-standard derivations are an integral part of the CCG formalism, and it is an interesting question whether efficient estimation and parsing algorithms can be defined for models which use all derivations. We have answered this question, and in doing so developed a new parsing algorithm for CCG which maximises expected recall of dependencies. We would like to extend the dependency model, by including the local-rule dependencies which are used by the normal-form model, for example. However, one of the disadvantages of the dependency model is that the estimation process is already using a large proportion of our existing resources, and extending the feature set will further increase the execution time and memory requirement of the estimation algorithm. We have also shown that a normal-form model performs as well as the dependency model. There are a number of advantages to the normal-form model: it requires less space and time resources for estimation and it produces a faster parser. Our normal-form parser significantly outperforms the parser of Clark et al. (2002) and produces results at least as good as the current state-of-the-art for CCG parsing. The use of adaptive supertagging and the normal-form constraints result in a very efficient wide-coverage parser. Our system demonstrates that accurate and efficient wide-coverage CCG parsing is feasible. Future work will investigate extending the feature sets used by the log-linear models with the aim of further increasing parsing accuracy. Finally, the oracle results suggest that further experimentation with the supertagger will significantly improve parsing accuracy, efficiency and robustness. Acknowledgements We would like to thank Julia Hockenmaier for the use of CCGbank and helpful comments, and Mark Steedman for guidance and advice. Jason Baldridge, Frank Keller, Yuval Krymolowski and Miles Osborne provided useful feedback. This work was supported by EPSRC grant GR/M96889, and a Commonwealth scholarship and a Sydney University Travelling scholarship to the second author. References Adam Berger, Stephen Della Pietra, and Vincent Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39–71. 
Stanley Chen and Ronald Rosenfeld. 1999. A Gaussian prior for smoothing maximum entropy models. Technical report, Carnegie Mellon University, Pittsburgh, PA. Stephen Clark and James R. Curran. 2003. Log-linear models for wide-coverage CCG parsing. In Proceedings of the EMNLP Conference, pages 97–104, Sapporo, Japan. Stephen Clark and James R. Curran. 2004. The importance of supertagging for wide-coverage CCG parsing. In Proceedings of COLING-04, Geneva, Switzerland. Stephen Clark, Julia Hockenmaier, and Mark Steedman. 2002. Building deep dependency structures with a wide-coverage CCG parser. In Proceedings of the 40th Meeting of the ACL, pages 327–334, Philadelphia, PA. Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Meeting of the ACL, pages 184–191, Santa Cruz, CA. James R. Curran and Stephen Clark. 2003. Investigating GIS and smoothing for maximum entropy taggers. In Proceedings of the 10th Meeting of the EACL, pages 91–98, Budapest, Hungary. Jason Eisner. 1996. Efficient normal-form parsing for Combinatory Categorial Grammar. In Proceedings of the 34th Meeting of the ACL, pages 79–86, Santa Cruz, CA. Daniel Gildea. 2001. Corpus variation and parser performance. In Proceedings of the EMNLP Conference, pages 167–202, Pittsburgh, PA. Joshua Goodman. 1996. Parsing algorithms and metrics. In Proceedings of the 34th Meeting of the ACL, pages 177–183, Santa Cruz, CA. Julia Hockenmaier and Mark Steedman. 2002. Generative models for statistical parsing with Combinatory Categorial Grammar. In Proceedings of the 40th Meeting of the ACL, pages 335–342, Philadelphia, PA. Julia Hockenmaier. 2003a. Data and Models for Statistical Parsing with Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh. Julia Hockenmaier. 2003b. Parsing with generative models of predicate-argument structure. In Proceedings of the 41st Meeting of the ACL, pages 359–366, Sapporo, Japan. Yusuke Miyao and Jun’ichi Tsujii. 2002. Maximum entropy estimation for feature forests. In Proceedings of the Human Language Technology Conference, San Diego, CA. Jorge Nocedal and Stephen J. Wright. 1999. Numerical Optimization. Springer, New York, USA. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the Wall Street Journal using a Lexical-Functional Grammar and discriminative estimation techniques. In Proceedings of the 40th Meeting of the ACL, pages 271–278, Philadelphia, PA. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of the HLT/NAACL Conference, pages 213–220, Edmonton, Canada. Mark Steedman. 2000. The Syntactic Process. The MIT Press, Cambridge, MA. Kristina Toutanova, Christopher Manning, Stuart Shieber, Dan Flickinger, and Stephan Oepen. 2002. Parse disambiguation for a rich HPSG grammar. In Proceedings of the First Workshop on Treebanks and Linguistic Theories, pages 253–263, Sozopol, Bulgaria.
2004
14
Incremental Parsing with the Perceptron Algorithm
Michael Collins, MIT CSAIL, [email protected]
Brian Roark, AT&T Labs - Research, [email protected]

Abstract

This paper describes an incremental parsing approach where parameters are estimated using a variant of the perceptron algorithm. A beam-search algorithm is used during both training and decoding phases of the method. The perceptron approach was implemented with the same feature set as that of an existing generative model (Roark, 2001a), and experimental results show that it gives competitive performance to the generative model on parsing the Penn treebank. We demonstrate that training a perceptron model to combine with the generative model during search provides a 2.1 percent F-measure improvement over the generative model alone, to 88.8 percent.

1 Introduction

In statistical approaches to NLP problems such as tagging or parsing, it seems clear that the representation used as input to a learning algorithm is central to the accuracy of an approach. In an ideal world, the designer of a parser or tagger would be free to choose any features which might be useful in discriminating good from bad structures, without concerns about how the features interact with the problems of training (parameter estimation) or decoding (search for the most plausible candidate under the model). To this end, a number of recently proposed methods allow a model to incorporate “arbitrary” global features of candidate analyses or parses. Examples of such techniques are Markov Random Fields (Ratnaparkhi et al., 1994; Abney, 1997; Della Pietra et al., 1997; Johnson et al., 1999), and boosting or perceptron approaches to reranking (Freund et al., 1998; Collins, 2000; Collins and Duffy, 2002).

A drawback of these approaches is that in the general case, they can require exhaustive enumeration of the set of candidates for each input sentence in both the training and decoding phases.1 For example, Johnson et al. (1999) and Riezler et al. (2002) use all parses generated by an LFG parser as input to an MRF approach – given the level of ambiguity in natural language, this set can presumably become extremely large. Collins (2000) and Collins and Duffy (2002) rerank the top N parses from an existing generative parser, but this kind of approach presupposes that there is an existing baseline model with reasonable performance. Many of these baseline models are themselves used with heuristic search techniques, so that the potential gain through the use of discriminative re-ranking techniques is further dependent on effective search.

Footnote 1: Dynamic programming methods (Geman and Johnson, 2002; Lafferty et al., 2001) can sometimes be used for both training and decoding, but this requires fairly strong restrictions on the features in the model.

This paper explores an alternative approach to parsing, based on the perceptron training algorithm introduced in Collins (2002). In this approach the training and decoding problems are very closely related – the training method decodes training examples in sequence, and makes simple corrective updates to the parameters when errors are made. Thus the main complexity of the method is isolated to the decoding problem. We describe an approach that uses an incremental, left-to-right parser, with beam search, to find the highest scoring analysis under the model. The same search method is used in both training and decoding.
We implemented the perceptron approach with the same feature set as that of an existing generative model (Roark, 2001a), and show that the perceptron model gives performance competitive to that of the generative model on parsing the Penn treebank, thus demonstrating that an unnormalized discriminative parsing model can be applied with heuristic search. We also describe several refinements to the training algorithm, and demonstrate their impact on convergence properties of the method. Finally, we describe training the perceptron model with the negative log probability given by the generative model as another feature. This provides the perceptron algorithm with a better starting point, leading to large improvements over using either the generative model or the perceptron algorithm in isolation (the hybrid model achieves 88.8% f-measure on the WSJ treebank, compared to figures of 86.7% and 86.6% for the separate generative and perceptron models). The approach is an extremely simple method for integrating new features into the generative model: essentially all that is needed is a definition of feature-vector representations of entire parse trees, and then the existing parsing algorithms can be used for both training and decoding with the models. 2 The General Framework In this section we describe a general framework – linear models for NLP – that could be applied to a diverse range of tasks, including parsing and tagging. We then describe a particular method for parameter estimation, which is a generalization of the perceptron algorithm. Finally, we give an abstract description of an incremental parser, and describe how it can be used with the perceptron algorithm. 2.1 Linear Models for NLP We follow the framework outlined in Collins (2002; 2004). The task is to learn a mapping from inputs x ∈X to outputs y ∈Y. For example, X might be a set of sentences, with Y being a set of possible parse trees. We assume: . Training examples (xi, yi) for i = 1 . . . n. . A function GEN which enumerates a set of candidates GEN(x) for an input x. . A representation Φ mapping each (x, y) ∈X × Y to a feature vector Φ(x, y) ∈Rd. . A parameter vector ¯α ∈Rd. The components GEN, Φ and ¯α define a mapping from an input x to an output F(x) through F(x) = arg max y∈GEN(x) Φ(x, y) · ¯α (1) where Φ(x, y) · ¯α is the inner product P s αsΦs(x, y). The learning task is to set the parameter values ¯α using the training examples as evidence. The decoding algorithm is a method for searching for the arg max in Eq. 1. This framework is general enough to encompass several tasks in NLP. In this paper we are interested in parsing, where (xi, yi), GEN, and Φ can be defined as follows: • Each training example (xi, yi) is a pair where xi is a sentence, and yi is the gold-standard parse for that sentence. • Given an input sentence x, GEN(x) is a set of possible parses for that sentence. For example, GEN(x) could be defined as the set of possible parses for x under some context-free grammar, perhaps a context-free grammar induced from the training examples. • The representation Φ(x, y) could track arbitrary features of parse trees. As one example, suppose that there are m rules in a context-free grammar (CFG) that defines GEN(x). Then we could define the i’th component of the representation, Φi(x, y), to be the number of times the i’th context-free rule appears in the parse tree (x, y). This is implicitly the representation used in probabilistic or weighted CFGs. Note that the difficulty of finding the arg max in Eq. 
1 is dependent on the interaction of GEN and Φ. In many cases GEN(x) could grow exponentially with the size of x, making brute force enumeration of the members of GEN(x) intractable. For example, a context-free grammar could easily produce an exponentially growing number of analyses with sentence length. For some representations, such as the “rule-based” representation described above, the arg max in the set enumerated by the CFG can be found efficiently, using dynamic programming algorithms, without having to explicitly enumerate all members of GEN(x). However in many cases we may be interested in representations which do not allow efficient dynamic programming solutions. One way around this problem is to adopt a two-pass approach, where GEN(x) is the top N analyses under some initial model, as in the reranking approach of Collins (2000). In the current paper we explore alternatives to reranking approaches, namely heuristic methods for finding the arg max, specifically incremental beam-search strategies related to the parsers of Roark (2001a) and Ratnaparkhi (1999).

2.2 The Perceptron Algorithm for Parameter Estimation

We now consider the problem of setting the parameters, ᾱ, given training examples (xi, yi). We will briefly review the perceptron algorithm, and its convergence properties – see Collins (2002) for a full description. The algorithm and theorems are based on the approach to classification problems described in Freund and Schapire (1999).

Figure 1 shows the algorithm. Note that the most complex step of the method is finding zi = arg max_{z ∈ GEN(xi)} Φ(xi, z) · ᾱ – and this is precisely the decoding problem. Thus the training algorithm is in principle a simple part of the parser: any system will need a decoding method, and once the decoding algorithm is implemented the training algorithm is relatively straightforward.

Figure 1: A variant of the perceptron algorithm.

    Inputs: Training examples (xi, yi)
    Initialization: Set ᾱ = 0
    Algorithm:
        For t = 1 . . . T, i = 1 . . . n
            Calculate zi = arg max_{z ∈ GEN(xi)} Φ(xi, z) · ᾱ
            If (zi ≠ yi) then ᾱ = ᾱ + Φ(xi, yi) − Φ(xi, zi)
    Output: Parameters ᾱ

We will now give a first theorem regarding the convergence of this algorithm. First, we need the following definition:

Definition 1 Let GEN̄(xi) = GEN(xi) − {yi}. In other words GEN̄(xi) is the set of incorrect candidates for an example xi. We will say that a training sequence (xi, yi) for i = 1 . . . n is separable with margin δ > 0 if there exists some vector U with ||U|| = 1 such that

    ∀i, ∀z ∈ GEN̄(xi),  U · Φ(xi, yi) − U · Φ(xi, z) ≥ δ    (2)

(||U|| is the 2-norm of U, i.e., ||U|| = √(Σ_s U_s²).)

Next, define Ne to be the number of times an error is made by the algorithm in figure 1 – that is, the number of times that zi ≠ yi for some (t, i) pair. We can then state the following theorem (see (Collins, 2002) for a proof):

Theorem 1 For any training sequence (xi, yi) that is separable with margin δ, for any value of T, then for the perceptron algorithm in figure 1

    Ne ≤ R² / δ²

where R is a constant such that ∀i, ∀z ∈ GEN̄(xi), ||Φ(xi, yi) − Φ(xi, z)|| ≤ R.

This theorem implies that if there is a parameter vector U which makes zero errors on the training set, then after a finite number of iterations the training algorithm will converge to parameter values with zero training error. A crucial point is that the number of mistakes is independent of the number of candidates for each example (i.e. the size of GEN(xi) for each i), depending only on the separation of the training data, where separation is defined above. This is important because in many NLP problems GEN(x) can be exponential in the size of the inputs. All of the convergence and generalization results in Collins (2002) depend on notions of separability rather than the size of GEN.
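A direct transcription of the update loop in Figure 1, with sparse feature dictionaries standing in for the vectors Φ(x, y) and ᾱ; this is an illustrative sketch rather than the authors' implementation:

    from collections import defaultdict

    def dot(features, alpha):
        # inner product of a sparse feature dict with the parameter vector
        return sum(v * alpha[f] for f, v in features.items())

    def perceptron_train(examples, gen, phi, T):
        """examples: list of (x, y) pairs; gen(x): the candidate set GEN(x);
        phi(x, y): sparse feature dict for a candidate; T: number of passes."""
        alpha = defaultdict(float)
        for _ in range(T):
            for x, y in examples:
                z = max(gen(x), key=lambda c: dot(phi(x, c), alpha))
                if z != y:
                    for f, v in phi(x, y).items():
                        alpha[f] += v
                    for f, v in phi(x, z).items():
                        alpha[f] -= v
        return alpha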
Two questions come to mind. First, are there guarantees for the algorithm if the training data is not separable? Second, performance on a training sample is all very well, but what does this guarantee about how well the algorithm generalizes to newly drawn test examples? Freund and Schapire (1999) discuss how the theory for classification problems can be extended to deal with both of these questions; Collins (2002) describes how these results apply to NLP problems.

As a final note, following Collins (2002), we used the averaged parameters from the training algorithm in decoding test examples in our experiments. Say ᾱ_i^t is the parameter vector after the i’th example is processed on the t’th pass through the data in the algorithm in figure 1. Then the averaged parameters ᾱ_AVG are defined as ᾱ_AVG = Σ_{i,t} ᾱ_i^t / (NT). Freund and Schapire (1999) originally proposed the averaged parameter method; it was shown to give substantial improvements in accuracy for tagging tasks in Collins (2002).

2.3 An Abstract Description of Incremental Parsing

This section gives a description of the basic incremental parsing approach. The input to the parser is a sentence x with length n. A hypothesis is a triple ⟨x, t, i⟩ such that x is the sentence being parsed, t is a partial or full analysis of that sentence, and i is an integer specifying the number of words of the sentence which have been processed. Each full parse for a sentence will have the form ⟨x, t, n⟩. The initial state is ⟨x, ∅, 0⟩ where ∅ is a “null” or empty analysis.

We assume an “advance” function ADV which takes a hypothesis triple as input, and returns a set of new hypotheses as output. The advance function will absorb another word in the sentence: this means that if the input to ADV is ⟨x, t, i⟩, then each member of ADV(⟨x, t, i⟩) will have the form ⟨x, t′, i+1⟩. Each new analysis t′ will be formed by somehow incorporating the i+1’th word into the previous analysis t. With these definitions in place, we can iteratively define the full set of partial analyses Hi for the first i words of the sentence as

    H0(x) = {⟨x, ∅, 0⟩},  and  Hi(x) = ∪_{h′ ∈ Hi−1(x)} ADV(h′)  for i = 1 . . . n.

The full set of parses for a sentence x is then GEN(x) = Hn(x) where n is the length of x. Under this definition GEN(x) can include a huge number of parses, and searching for the highest scoring parse, arg max_{h ∈ Hn(x)} Φ(h) · ᾱ, will be intractable. For this reason we introduce one additional function, FILTER(H), which takes a set of hypotheses H, and returns a much smaller set of “filtered” hypotheses. Typically, FILTER will calculate the score Φ(h) · ᾱ for each h ∈ H, and then eliminate partial analyses which have low scores under this criterion. For example, a simple version of FILTER would take the top N highest scoring members of H for some constant N. We can then redefine the set of partial analyses as follows (we use Fi(x) to denote the set of filtered partial analyses for the first i words of the sentence):

    F0(x) = {⟨x, ∅, 0⟩}
    Fi(x) = FILTER( ∪_{h′ ∈ Fi−1(x)} ADV(h′) )  for i = 1 . . . n

The parsing algorithm returns arg max_{h ∈ Fn} Φ(h) · ᾱ. Note that this is a heuristic, in that there is no guarantee that this procedure will find the highest scoring parse, arg max_{h ∈ Hn} Φ(h) · ᾱ.
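The recurrences for Fi and the averaged-parameter computation can be sketched as follows; ADV, FILTER and the scoring function are passed in as functions, and the hypothesis representation is left abstract (these choices are ours):

    from collections import defaultdict

    def averaged_parameters(history, N, T):
        """history: the saved parameter dicts, one per (example, pass) step;
        returns alpha_AVG = the sum over i, t of alpha_i^t, divided by N*T."""
        avg = defaultdict(float)
        for alpha in history:
            for f, v in alpha.items():
                avg[f] += v
        return {f: v / (N * T) for f, v in avg.items()}

    def parse_incremental(x, n, adv, filt, score):
        """Compute F_0 ... F_n and return the heuristic arg max over F_n.
        adv(h) extends a hypothesis by one word; filt(H) prunes a hypothesis set;
        score(h) = Phi(h) . alpha."""
        beam = [(x, None, 0)]                 # F_0 = {<x, empty analysis, 0>}
        for _ in range(n):
            beam = filt([h2 for h in beam for h2 in adv(h)])
        return max(beam, key=score)

In practice the sum of parameter vectors is usually maintained as a running accumulator rather than by storing every ᾱ_i^t explicitly.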
Search errors, where arg max_{h ∈ Fn} Φ(h) · ᾱ ≠ arg max_{h ∈ Hn} Φ(h) · ᾱ, will create errors in decoding test sentences, and also errors in implementing the perceptron training algorithm in Figure 1. In this paper we give empirical results that suggest that FILTER can be chosen in such a way as to give efficient parsing performance together with high parsing accuracy. The exact implementation of the parser will depend on the definition of partial analyses, of ADV and FILTER, and of the representation Φ. The next section describes our instantiation of these choices.

3 A full description of the parsing approach

The parser is an incremental beam-search parser very similar to the sort described in Roark (2001a; 2004), with some changes in the search strategy to accommodate the perceptron feature weights. We first describe the parsing algorithm, and then move on to the baseline feature set for the perceptron model.

3.1 Parser control

The input to the parser is a string w_0^n, a grammar G, a mapping Φ from derivations to feature vectors, and a parameter vector ᾱ. The grammar G = (V, T, S†, S̄, C, B) consists of a set of non-terminal symbols V, a set of terminal symbols T, a start symbol S† ∈ V, an end-of-constituent symbol S̄ ∈ V, a set of “allowable chains” C, and a set of “allowable triples” B. S̄ is a special empty non-terminal that marks the end of a constituent. Each chain is a sequence of non-terminals followed by a terminal symbol, for example ⟨S† → S → NP → NN → Trash⟩. Each “allowable triple” is a tuple ⟨X, Y, Z⟩ where X, Y, Z ∈ V. The triples specify which non-terminals Z are allowed to follow a non-terminal Y under a parent X. For example, the triple ⟨S,NP,VP⟩ specifies that a VP can follow an NP under an S. The triple ⟨NP,NN,S̄⟩ would specify that the S̄ symbol can follow an NN under an NP – i.e., that the symbol NN is allowed to be the final child of a rule with parent NP.

Figure 2: Left child chains and connection paths. Dotted lines represent potential attachments. [The figure shows a partial analysis S† → S → NP → NN → Trash, together with three candidate left-child chains for the next word “can” (⟨NN → can⟩, ⟨VP → MD → can⟩ and ⟨VP → VP → MD → can⟩) and their potential attachment sites.]

The initial state of the parser is the input string alone, w_0^n. In absorbing the first word, we add all chains of the form S† . . . → w0. For example, in figure 2 the chain ⟨S† → S → NP → NN → Trash⟩ is used to construct an analysis for the first word alone. Other chains which start with S† and end with Trash would give competing analyses for the first word of the string.

Figure 2 shows an example of how the next word in a sentence can be incorporated into a partial analysis for the previous words. For any partial analysis there will be a set of potential attachment sites: in the example, the attachment sites are under the NP or the S. There will also be a set of possible chains terminating in the next word – there are three in the example. Each chain could potentially be attached at each attachment site, giving 6 ways of incorporating the next word in the example. For illustration, assume that the set B is {⟨S,NP,VP⟩, ⟨NP,NN,NN⟩, ⟨NP,NN,S̄⟩, ⟨S,NP,VP⟩}. Then some of the 6 possible attachments may be disallowed because they create triples that are not in the set B. For example, in figure 2 attaching either of the VP chains under the NP is disallowed because the triple ⟨NP,NN,VP⟩ is not in B. Similarly, attaching the NN chain under the S will be disallowed if the triple ⟨S,NP,NN⟩ is not in B. In contrast, adjoining ⟨NN → can⟩ under the NP creates a single triple, ⟨NP,NN,NN⟩, which is allowed. Adjoining either of the VP chains under the S creates two triples, ⟨S,NP,VP⟩ and ⟨NP,NN,S̄⟩, which are both in the set B.
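A small sketch of the attachment check just described: an attachment is licensed only if every ⟨parent, previous child, new child⟩ triple it creates is in B. The tuple representation and helper below are illustrative, not the authors' code:

    def attachment_allowed(created_triples, allowed_triples):
        """An attachment is licensed only if every <parent, previous-child, new-child>
        triple it creates (including the <X, Y, end-of-constituent> triples for any
        constituents it closes) is in the set B of allowable triples."""
        return all(t in allowed_triples for t in created_triples)

    # The example from the text, writing S_BAR for the end-of-constituent symbol:
    S_BAR = "S-bar"
    B = {("S", "NP", "VP"), ("NP", "NN", "NN"), ("NP", "NN", S_BAR)}
    # adjoining <NN -> can> under the NP creates one triple:
    print(attachment_allowed([("NP", "NN", "NN")], B))                      # True
    # adjoining a VP chain under the S closes the NP and creates two triples:
    print(attachment_allowed([("S", "NP", "VP"), ("NP", "NN", S_BAR)], B))  # True
    # adjoining a VP chain under the NP creates a disallowed triple:
    print(attachment_allowed([("NP", "NN", "VP")], B))                      # False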
Note that the “allowable chains” in our grammar are what Costa et al. (2001) call “connection paths” from the partial parse to the next word. It can be shown that the method is equivalent to parsing with a transformed context-free grammar (a first-order “Markov” grammar) – for brevity we omit the details here.

In this way, given a set of candidates Fi(x) for the first i words of the string, we can generate a set of candidates for the first i + 1 words, ∪_{h′ ∈ Fi(x)} ADV(h′), where the ADV function uses the grammar as described above. We then calculate Φ(h) · ᾱ for all of these partial hypotheses, and rank the set from best to worst. A FILTER function is then applied to this ranked set to give Fi+1. Let hk be the kth ranked hypothesis in Hi+1(x). Then hk ∈ Fi+1 if and only if Φ(hk) · ᾱ ≥ θk. In our case, we parameterize the calculation of θk with γ as follows:

    θk = Φ(h0) · ᾱ − γ / k³    (3)

The problem with using left-child chains is limiting them in number. With a left-recursive grammar, of course, the set of all possible left-child chains is infinite. We use two techniques to reduce the number of left-child chains: first, we remove some (but not all) of the recursion from the grammar through a tree transform; next, we limit the left-child chains consisting of more than two non-terminal categories to those actually observed in the training data more than once. Left-child chains of length less than or equal to two are all those observed in training data. As a practical matter, the set of left-child chains for a terminal x is taken to be the union of the sets of left-child chains for all pre-terminal part-of-speech (POS) tags T for x.

Before inducing the left-child chains and allowable triples from the treebank, the trees are transformed with a selective left-corner transformation (Johnson and Roark, 2000) that has been flattened as presented in Roark (2001b). This transform is only applied to left-recursive productions, i.e. productions of the form A → Aγ. The transformed trees look as in figure 3. The transform has the benefit of dramatically reducing the number of left-child chains, without unduly disrupting the immediate dominance relationships that provide features for the model. The parse trees that are returned by the parser are then de-transformed to the original form of the grammar for evaluation.2

Footnote 2: See Johnson (1998) for a presentation of the transform/detransform paradigm in parsing.

Figure 3: Three representations of NP modifications: (a) the original treebank representation; (b) Selective left-corner representation; and (c) a flat structure that is unambiguously equivalent to (b). [The figure shows three tree renderings of the NP “Jim ’s dog with . . .”.]

Table 1 presents the number of left-child chains of length greater than 2 in sections 2-21 and 24 of the Penn Wall St. Journal Treebank, both with and without the flattened selective left-corner transformation (FSLC), for gold-standard part-of-speech (POS) tags and automatically tagged POS tags. When the FSLC has been applied and the set is restricted to those occurring more than once in the training corpus, we can reduce the total number of left-child chains of length greater than 2 by half, while leaving the number of words in the held-out corpus with an unobserved left-child chain (out-of-vocabulary rate – OOV) to just one in every thousand words.

Table 1: Left-child chain type counts (of length > 2) for sections of the Wall St. Journal Treebank, and out-of-vocabulary (OOV) rate on the held-out corpus.

    Tree       POS      f24     f2-21            f2-21, # > 1
    transform  tags     Type    Type     OOV     Type    OOV
    None       Gold     386     1680     0.1%    1013    0.1%
    None       Tagged   401     1776     0.1%    1043    0.2%
    FSLC       Gold     289     1214     0.1%    746     0.1%
    FSLC       Tagged   300     1294     0.1%    781     0.1%

3.2 Features

For this paper, we wanted to compare the results of a perceptron model with a generative model for a comparable feature set. Unlike in Roark (2001a; 2004), there is no look-ahead statistic, so we modified the feature set from those papers to explicitly include the lexical item and POS tag of the next word. Otherwise the features are basically the same as in those papers. We then built a generative model with this feature set and the same tree transform, for use with the beam-search parser from Roark (2004) to compare against our baseline perceptron model.

To concisely present the baseline feature set, let us establish a notation. Features will fire whenever a new node is built in the tree. The features are labels from the left-context, i.e. the already built part of the tree. All of the labels that we will include in our feature sets are i levels above the current node in the tree, and j nodes to the left, which we will denote Lij. Hence, L00 is the node label itself; L10 is the label of the parent of the current node; L01 is the label of the sibling of the node, immediately to its left; L11 is the label of the sibling of the parent node, etc. We also include: the lexical head of the current constituent (CL); the c-commanding lexical head (CC) and its POS (CCP); and the look-ahead word (LK) and its POS (LKP). All of these features are discussed at more length in the citations above. Table 2 presents the baseline feature set.

Table 2: Baseline feature set. Features F0−F10 fire at non-terminal nodes. Features F0, F11−F15 fire at terminal nodes.

    F0 = {L00, L10}     F4 = F3 ∪ {L03}     F8 = F7 ∪ {L21}      F12 = F11 ∪ {L11}
    F1 = F0 ∪ {LKP}     F5 = F4 ∪ {L20}     F9 = F8 ∪ {CL}       F13 = F12 ∪ {L30}
    F2 = F1 ∪ {L01}     F6 = F5 ∪ {L11}     F10 = F9 ∪ {LK}      F14 = F13 ∪ {CCP}
    F3 = F2 ∪ {L02}     F7 = F6 ∪ {L30}     F11 = F0 ∪ {L20}     F15 = F14 ∪ {CC}

In addition to the baseline feature set, we will also present results using features that would be more difficult to embed in a generative model. We included some punctuation-oriented features, which included (i) a Boolean feature indicating whether the final punctuation is a question mark or not; (ii) the POS label of the word after the current look-ahead, if the current look-ahead is punctuation or a coordinating conjunction; and (iii) a Boolean feature indicating whether the look-ahead is punctuation or not, that fires when the category immediately to the left of the current position is immediately preceded by punctuation.
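Returning to the beam defined by equation (3) in Section 3.1, one way to realise the FILTER step is sketched below; whether ranks are counted from 0 or 1 in the threshold is our reading of the text rather than something it pins down, so the top hypothesis is simply always kept:

    def beam_filter(hypotheses, score, gamma):
        """FILTER of equation (3): keep the k-th ranked hypothesis h_k only if
        score(h_k) >= score(h_0) - gamma / k**3; the top hypothesis h_0 is always kept."""
        ranked = sorted(hypotheses, key=score, reverse=True)
        if not ranked:
            return []
        best = score(ranked[0])
        kept = [ranked[0]]
        for k, h in enumerate(ranked[1:], start=1):
            if score(h) >= best - gamma / k ** 3:
                kept.append(h)
        return kept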
4 Refinements to the Training Algorithm

This section describes two modifications to the “basic” training algorithm in figure 1.

4.1 Making Repeated Use of Hypotheses

Figure 4 shows a modified algorithm for parameter estimation. The input to the function is a gold standard parse, together with a set of candidates F generated by the incremental parser. There are two steps. First, the model is updated as usual with the current example, which is then added to a cache of examples. Second, the method repeatedly iterates over the cache, updating the model at each cached example if the gold standard parse is not the best scoring parse from among the stored candidates for that example. In our experiments, the cache was restricted to contain the parses from up to N previously processed sentences, where N was set to be the size of the training set.

Figure 4: The refined parameter update method makes repeated use of hypotheses

    Input:  A gold-standard parse g for sentence k of N.
            A set of candidate parses F. Current parameters ᾱ.
            A Cache of triples ⟨gj, Fj, cj⟩ for j = 1 . . . N, where each gj is a previously
            generated gold standard parse, Fj is a previously generated set of candidate
            parses, and cj is a counter of the number of times that ᾱ has been updated
            due to this particular triple.
            Parameters T1 and T2 controlling the number of iterations below.
            In our experiments, T1 = 5 and T2 = 50.
            Initialize the Cache to include, for j = 1 . . . N, ⟨gj, ∅, T2⟩.

    Step 1: Calculate z = arg max_{t ∈ F} Φ(t) · ᾱ
            If (z ≠ g) then ᾱ = ᾱ + Φ(g) − Φ(z)
            Set the kth triple in the Cache to ⟨g, F, 0⟩

    Step 2: For t = 1 . . . T1, j = 1 . . . N
                If cj < T2 then
                    Calculate z = arg max_{t ∈ Fj} Φ(t) · ᾱ
                    If (z ≠ gj) then
                        ᾱ = ᾱ + Φ(gj) − Φ(z)
                        cj = cj + 1

The motivation for these changes is primarily efficiency. One way to think about the algorithms in this paper is as methods for finding parameter values that satisfy a set of linear constraints – one constraint for each incorrect parse in training data. The incremental parser is a method for dynamically generating constraints (i.e. incorrect parses) which are violated, or close to being violated, under the current parameter settings. The basic algorithm in Figure 1 is extremely wasteful with the generated constraints, in that it only looks at one constraint on each sentence (the arg max), and it ignores constraints implied by previously parsed sentences. This is inefficient because the generation of constraints (i.e., parsing an input sentence) is computationally quite demanding.

More formally, it can be shown that the algorithm in figure 4 also has the upper bound in theorem 1 on the number of parameter updates performed. If the cost of steps 1 and 2 of the method is negligible compared to the cost of parsing a sentence, then the refined algorithm will certainly converge no more slowly than the basic algorithm, and may well converge more quickly. As a final note, we used the parameters T1 and T2 to limit the number of passes over examples, the aim being to prevent repeated updates based on outlier examples which are not separable.

4.2 Early Update During Training

As before, define yi to be the gold standard parse for the i’th sentence, and also define y_i^j to be the partial analysis under the gold-standard parse for the first j words of the i’th sentence. Then if y_i^j ∉ Fj(xi) a search error has been made, and there is no possibility of the gold standard parse yi being in the final set of parses, Fn(xi). We call the following modification to the parsing algorithm during training “early update”: if y_i^j ∉ Fj(xi), exit the parsing process, pass y_i^j, Fj(xi) to the parameter estimation method, and move on to the next string in the training set. Intuitively, the motivation behind this is clear. It makes sense to make a correction to the parameter values at the point that a search error has been made, rather than allowing the parser to continue to the end of the sentence. This is likely to lead to less noisy input to the parameter estimation algorithm; and early update will also improve efficiency, as at the early stages of training the parser will frequently give up after a small proportion of each sentence is processed. It is more difficult to justify from a formal point of view; we leave this to future work.
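Early update can be folded into the beam parser's main loop roughly as follows; the gold_prefix helper and the return convention (the beam so far, the gold partial analysis, and a flag) are our own framing of the procedure described above:

    def parse_with_early_update(x, n, adv, filt, gold_prefix):
        """Run the beam parser over sentence x of length n. If the gold partial
        analysis y_i^j drops out of the beam F_j, stop and hand F_j and y_i^j back
        to the parameter update; otherwise parse to the end of the sentence."""
        beam = [(x, None, 0)]                    # F_0
        for j in range(1, n + 1):
            beam = filt([h2 for h in beam for h2 in adv(h)])   # F_j
            gold = gold_prefix(j)                # gold-standard partial analysis for j words
            if gold not in beam:                 # search error: the gold parse can no longer win
                return beam, gold, True          # early update
        return beam, gold_prefix(n), False       # full sentence processed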
Figure 5 shows the convergence of the training algorithm with neither of the two refinements presented; with just early update; and with both. Early update makes an enormous difference in the quality of the resulting model; repeated use of examples gives a small improvement, mainly in recall.

Figure 5: Performance on development data (section f24) after each pass over the training data, with and without repeated use of examples and early update. [Plot of F-measure parsing accuracy (82–88) against the number of passes over the training data (1–6), with curves for: no early update, no repeated use of examples; early update, no repeated use of examples; early update, repeated use of examples.]

5 Empirical results

The parsing models were trained and tested on treebanks from the Penn Wall St. Journal Treebank: sections 2-21 were kept training data; section 24 was held-out development data; and section 23 was for evaluation. After each pass over the training data, the averaged perceptron model was scored on the development data, and the best performing model was used for test evaluation. For this paper, we used POS tags that were provided either by the Treebank itself (gold standard tags) or by the perceptron POS tagger3 presented in Collins (2002). The former gives us an upper bound on the improvement that we might expect if we integrated the POS tagging with the parsing.

Footnote 3: For trials when the generative or perceptron parser was given POS tagger output, the models were trained on POS tagged sections 2-21, which in both cases helped performance slightly.

Table 3 shows results on section 23, when either gold-standard or POS-tagger tags are provided to the parser.4 With the base features, the generative model outperforms the perceptron parser by between a half and one point, but with the additional punctuation features, the perceptron model matches the generative model performance.

Footnote 4: When POS tagging is integrated directly into the generative parsing process, the baseline performance is 87.0. For comparison with the perceptron model, results are shown with pre-tagged input.

Table 3: Parsing results, section 23, all sentences, including labeled precision (LP), labeled recall (LR), and F-measure

                                            Gold-standard tags        POS-tagger tags
    Model                                   LP     LR     F           LP     LR     F
    Generative                              88.1   87.6   87.8        86.8   86.5   86.7
    Perceptron (baseline)                   87.5   86.9   87.2        86.2   85.5   85.8
    Perceptron (w/ punctuation features)    88.1   87.6   87.8        87.0   86.3   86.6

Of course, using the generative model and using the perceptron algorithm are not necessarily mutually exclusive. Another training scenario would be to include the generative model score as another feature, with some weight in the linear model learned by the perceptron algorithm. This sort of scenario was used in Roark et al. (2004) for training an n-gram language model using the perceptron algorithm. We follow that paper in fixing the weight of the generative model, rather than learning the weight along with the weights of the other perceptron features. The value of the weight was empirically optimized on the held-out set by performing trials with several values. Our optimal value was 10.
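Combining the two models amounts to adding one extra, fixed-weight term to the linear score. The sketch below uses the weight of 10 chosen on held-out data; the sign convention (adding the weighted log probability so that more probable parses score higher) and the function decomposition are our assumptions, not a description of the authors' code:

    GENERATIVE_WEIGHT = 10.0   # fixed weight tuned on held-out data, not learned

    def combined_score(hypothesis, phi, alpha, generative_logprob):
        """Perceptron score plus a fixed-weight generative-model term.
        phi(h) -> sparse feature dict; generative_logprob(h) -> log P_gen(h).
        Orienting the extra feature as +w * log P_gen is an assumption."""
        perceptron_score = sum(v * alpha.get(f, 0.0) for f, v in phi(hypothesis).items())
        return perceptron_score + GENERATIVE_WEIGHT * generative_logprob(hypothesis)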
In order to train this model, we had to provide generative model scores for strings in the training set. Of course, to be similar to the testing conditions, we cannot use the standard generative model trained on every sentence, since then the generative score would be from a model that had already seen that string in the training data. To control for this, we built ten generative models, each trained on 90 percent of the training data, and used each of the ten to score the remaining 10 percent that was not seen in that training set. For the held-out and testing conditions, we used the generative model trained on all of sections 2-21.

In table 4 we present the results of including the generative model score along with the other perceptron features, just for the run with POS-tagger tags. The generative model score (negative log probability) effectively provides a much better initial starting point for the perceptron algorithm. The resulting F-measure on section 23 is 2.1 percent higher than either the generative model or perceptron-trained model used in isolation.

Table 4: Parsing results, section 23, all sentences, including labeled precision (LP), labeled recall (LR), and F-measure

                                            POS-tagger tags
    Model                                   LP     LR     F
    Generative baseline                     86.8   86.5   86.7
    Perceptron (w/ punctuation features)    87.0   86.3   86.6
    Generative + Perceptron (w/ punct)      89.1   88.4   88.8

6 Conclusions

In this paper we have presented a discriminative training approach, based on the perceptron algorithm with a couple of effective refinements, that provides a model capable of effective heuristic search over a very difficult search space. In such an approach, the unnormalized discriminative parsing model can be applied without either an external model to present it with candidates, or potentially expensive dynamic programming. When the training algorithm is provided the generative model scores as an additional feature, the resulting parser is quite competitive on this task. The improvement that was derived from the additional punctuation features demonstrates the flexibility of the approach in incorporating novel features in the model.

Future research will look in two directions. First, we will look to include more useful features that are difficult for a generative model to include. This paper was intended to compare search with the generative model and the perceptron model with roughly similar feature sets. Much improvement could potentially be had by looking for other features that could improve the models. Secondly, combining with the generative model can be done in several ways. Some of the constraints on the search technique that were required in the absence of the generative model can be relaxed if the generative model score is included as another feature. In the current paper, the generative score was simply added as another feature. Another approach might be to use the generative model to produce candidates at a word, then assign perceptron features for those candidates. Such variants deserve investigation. Overall, these results show much promise in the use of discriminative learning techniques such as the perceptron algorithm to help perform heuristic search in difficult domains such as statistical parsing.

Acknowledgements

The work by Michael Collins was supported by the National Science Foundation under Grant No. 0347631.

References

Steven Abney. 1997. Stochastic attribute-value grammars. Computational Linguistics, 23(4):597–617.
Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures and the voted perceptron. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 263–270. Michael Collins. 2000. Discriminative reranking for natural language parsing. In The Proceedings of the 17th International Conference on Machine Learning. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1–8. Michael Collins. 2004. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In Harry Bunt, John Carroll, and Giorgio Satta, editors, New Developments in Parsing Technology. Kluwer. Fabrizio Costa, Vincenzo Lombardo, Paolo Frasconi, and Giovanni Soda. 2001. Wide coverage incremental parsing by learning attachment preferences. In Conference of the Italian Association for Artificial Intelligence (AIIA), pages 297–307. Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19:380–393. Yoav Freund and Robert Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 3(37):277–296. Yoav Freund, Raj Iyer, Robert Schapire, and Yoram Singer. 1998. An efficient boosting algorithm for combining preferences. In Proc. of the 15th Intl. Conference on Machine Learning. Stuart Geman and Mark Johnson. 2002. Dynamic programming for parsing and estimation of stochastic unification-based grammars. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 279–286. Mark Johnson and Brian Roark. 2000. Compact nonleft-recursive grammars using the selective left-corner transform and factoring. In Proceedings of the 18th International Conference on Computational Linguistics (COLING), pages 355–361. Mark Johnson, Stuart Geman, Steven Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic “unification-based” grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 535–541. Mark Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24(4):617–636. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning, pages 282–289. Adwait Ratnaparkhi, Salim Roukos, and R. Todd Ward. 1994. A maximum entropy model for parsing. In Proceedings of the International Conference on Spoken Language Processing (ICSLP), pages 803–806. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning, 34:151–175. Stefan Riezler, Tracy King, Ronald M. Kaplan, Richard Crouch, John T. Maxwell III, and Mark Johnson. 2002. Parsing the wall street journal using a lexicalfunctional grammar and discriminative estimation techniques. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 271–278. Brian Roark, Murat Saraclar, and Michael Collins. 2004. Corrective language modeling for large vocabulary ASR with the perceptron algorithm. 
In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 749–752. Brian Roark. 2001a. Probabilistic top-down parsing and language modeling. Computational Linguistics, 27(2):249–276. Brian Roark. 2001b. Robust Probabilistic Predictive Syntactic Processing. Ph.D. thesis, Brown University. http://arXiv.org/abs/cs/0105019. Brian Roark. 2004. Robust garden path parsing. Natural Language Engineering, 10(1):1–24.
2004
15